Causing Human Actions
A Bradford Book
The MIT Press
Cambridge, Massachusetts
London, England
© 2010 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form by any
electronic or mechanical means (including photocopying, recording, or information
storage and retrieval) without permission in writing from the publisher.
MIT Press books may be purchased at special quantity discounts for business or sales
promotional use. For information, please email [email protected] or
write to Special Sales Department, The MIT Press, 55 Hayward Street, Cambridge,
MA 02142.
This book was set in Stone Sans and Stone Serif by Toppan Best-set Premidia Limited.
Printed and bound in the United States of America.
Causing human actions : new perspectives on the causal theory of action / edited
by Jesús H. Aguilar and Andrei A. Buckareff.
p. cm.
“A Bradford Book.”
Includes bibliographical references and index.
ISBN 978-0-262-01456-4 (hardcover : alk. paper)—ISBN 978-0-262-51476-7 (pbk. :
alk. paper)
1. Act (Philosophy). 2. Action theory. 3. Intentionality (Philosophy).
I. Aguilar, Jesús H. (Jesús Humberto), 1962–. II. Buckareff, Andrei A., 1971–.
B105.A35C38 2010
128′.4—dc22
2010003187
10 9 8 7 6 5 4 3 2 1
Contents
Preface vii
References 297
Contributors 323
Index 325
Preface
This volume brings together essays by some of the leading figures working
in action theory today. What unifies all of the essays is that they either
directly engage in debates over some aspect of the causal theory of action
(CTA) or they indirectly engage with the CTA by focusing on issues that
have significant consequences for the shape of a working CTA or the ten-
ability of any version of the CTA. Some of the authors defend this theory,
while others criticize it. What they all agree on is that the CTA occupies a
central place in the philosophy of action and philosophy of mind as the
“standard story of action.” Two of the essays in this volume have appeared
elsewhere recently. (Chapters 8 and 9 by Carolina Sartorio and Randolph
Clarke, respectively, previously appeared in Noûs. They appear with the
permission of Wiley-Blackwell, and have been lightly edited for consis-
tency.) The remaining essays appear in this volume for the first time.
Editing this volume, though not an easy task, has been a labor of love
for us. We are convinced that foundational issues in the philosophy of
action, such as the issues explored in this volume, deserve greater attention.
It is our hope that the publication of this collection will raise the
profile of these debates in future research on human action and agency.
This volume, then, is in part
an effort to promote exploration of foundational issues in action theory
and especially to encourage further work on the CTA by defenders and
critics alike.
Work on this volume would have been far more difficult, if not impossible,
without the support of a number of people and institutions. First, Philip
Laughlin, Marc Lowenthal, and Thomas Stone from MIT Press are owed a
special debt of gratitude for supporting this project. Also from MIT Press,
we would like to thank Judy Feldmann for her fantastic editorial work.
Thanks to Wiley-Blackwell for granting us permission to publish the
essays by Carolina Sartorio and Randolph Clarke, which originally appeared
in Noûs, and to Ernest Sosa for providing some much-needed help with
acquiring permission to include these essays in our book. Second, we would
like to thank the authors who contributed to this volume. This volume
would not exist were it not for their efforts. Third, we would like to thank
Joshua Knobe for his help in the editing process by reviewing the essays
by Thomas Nadelhoffer, Josef Perner, and Johannes Roessler. His expertise
in experimental philosophy and psychology far outstrips ours. His philo-
sophical acumen with respect to all things action theoretic made him an
obvious person to go to for help in reviewing these essays. Fourth, some
of the work for this volume was carried out while Andrei Buckareff was a
participant in the 2009 National Endowment for the Humanities Seminar
on Metaphysics and Mind led by John Heil. Andrei wishes to thank the
NEH for the financial support and John Heil for creating a seminar envi-
ronment that afforded him the opportunity to complete some of the work
on this and other projects. Fifth, thanks are due to the institutions we work
at, Marist College and Rochester Institute of Technology, for their support
of our work on this and other research projects. Andrei is especially thank-
ful to Martin Shaffer, Dean of the School of Liberal Arts at Marist, and
Thomas Wermuth, Vice President for Academic Affairs at Marist, for the
course releases that gave him extra time to work on his research, including
editing this volume. Jesús was awarded the Paul A. and Francena L. Miller
Faculty Fellowship from the Rochester Institute of Technology to support
part of the work involved in this volume, also in the form of course
releases, something for which he is very grateful. Finally, extra special
thanks are due to our families and friends for their tolerance and their
support as we worked on this project. Andrei would especially like to thank
his spouse, Lara Kasper-Buckareff, for her encouragement and patience
with him, especially in the final weeks of working on this project. Likewise,
Jesús is full of gratitude to Amy Wolf for her constant support during his
work on this volume.
Jesús H. Aguilar
Rochester, New York
Andrei A. Buckareff
Poughkeepsie, New York
1 The Causal Theory of Action: Origins and Issues
Since ought implies can, writings about morality presuppose much about human
action. Yet although conclusions about action can defensibly be drawn from estab-
lished moral theory, no moral theory can become established unless its presupposi-
tions about action can be defended independently. (Donagan 1987, viii)3
1.1 Aristotle
Although Aristotle was not the first major philosopher to write about
action—Plato wrote about action before him (see, e.g., the Phaedo
98c–99a)—to our knowledge he was the first one to think seriously about
the springs of action. Furthermore, the story he told about the role of the
mental in the production and explanation of action was a causal story
much along the lines of the above schema of the CTA. Such a proposal
may be seen as anachronistic. After all, the CTA as a proper theory of action
has only been identified under that title since the 1960s.4 Nonetheless, to
the extent that Aristotle had a theory of action, his theory is clearly a
progenitor of the CTA.5
Aristotle’s commitment to a proto-CTA theory of action can be pieced
together from portions of his De anima (DA), De motu animalium (Mot.),
and Nicomachean Ethics (NE). The origin of action lies in the agent accord-
ing to Aristotle (NE, Bk. III, ch. 1.20, 1111a, 23–24). Specifically, the springs
of action are what we now identify as pro-attitudes.6 For instance, he writes
that “the proximate reason for movement is desire [orexeos]”7 (Mot.
701a35; cf. DA 433a10–434a20). In NE, Aristotle distinguishes between
various types of desires. Of these types of desires two are of special interest
in understanding his account of the springs of action. They are the intrinsic
desires for what are deemed worthwhile ends (boulêsis)8 and the instrumen-
tal proximal action-triggering desires for the means to achieving the ends
(prohairesis).9 A simple statement of the etiology of action is found in
chapter 2 of Book VI of NE:
The origin of an action—the efficient cause, not the final cause—is prohairesis. The
origin of prohairesis is another desire [orexis] and goal-directed reason [eneka tinos].
(NE, 1139a31–34)10
1.2 Hobbes
The early modern period is very significant in the history of the CTA.
Several accounts of action emerged that brought the new explanatory
framework of the Scientific Revolution to bear on the study of human
beings. In particular, an analysis of the internal mental causes that lead to
an action was developed within such a framework. Some accounts placed
the mental causes of action squarely in the natural realm, while others
placed them completely outside the realm of nature.
Among the naturalistic-oriented thinkers of this period the most influ-
ential figure is Thomas Hobbes, who as much as Aristotle is responsible for
what would be christened “the causal theory of action” in the twentieth
century. Although later thinkers such as John Locke, David Hume, Jeremy
Bentham, John Austin, and J. S. Mill articulated accounts of human action
and agency that reflect central tenets present in contemporary versions of
the CTA, Hobbes’s work on action captures so much of the fundamental
tenets of the CTA that nowadays this theory is sometimes referred to as
the “Hobbesian picture of agency” (Pink 1997; Schroeter 2004).
Despite its fundamental contribution to the CTA, Hobbes’s theory of
action receives very little treatment in the secondary literature on his
philosophy, often being mentioned only as a side note in discussions of his
psychology and theory of free will.13 Although this is not the place to
mend such an omission, given the centrality of Hobbes’s contribution to
the development of the CTA, we want at least to introduce some of its key
features, in particular
those that still serve to identify the CTA as a distinctive theory vis-à-vis its
present-day competitors.
I conceive that when it comes into a man’s mind to do or not to do some certain
action, if he have no time to deliberate, the doing it or abstaining necessarily follows
the present thought he has of the good or evil consequence thereof to himself. . . .
Also when a man has time to deliberate but deliberates not, because never anything
appeared that could make him doubt of the consequence, the action follows his
opinion of the goodness or harm of it. These actions I call voluntary. . . . (Ibid.)
1.3 Davidson
By the middle of the twentieth century and after the anticausalist hiatus
motivated by the work of philosophers such as Ludwig Wittgenstein and
Gilbert Ryle, the CTA reached its full maturity and emerged as the recog-
nized “standard story of action” among contemporary action theorists. The
locus classicus for the presentation and defense of this contemporary
version of the CTA is found in Donald Davidson’s work on action and
agency, particularly, in his groundbreaking essay “Actions, Reasons, and
Causes” (Davidson 1963/1980). We will focus on this essay, given its cen-
trality in discussions of contemporary versions of the CTA.
Davidson makes two claims about reasons-explanations of action in
“Actions, Reasons, and Causes.” The first is a claim about what constitutes
an agent’s primary reason for action, and the second claim is about the
causal role of an agent’s primary reason for action. Regarding the first
claim, Davidson echoes both Aristotle and Hobbes in affording pro-attitudes
a central role in his account of practical reasons. His view is
particularly Aristotelian in also assigning beliefs an important role:
C1. R is a primary reason why an agent performed the action A under the descrip-
tion d only if R consists of a pro attitude of the agent towards actions with a certain
property, and a belief that A, under the description d, has that property. (Davidson
1963/1980, 5)
According to this view, a reason for action is a belief and pro-attitude pair,
and explanations in terms of reasons will mention one or both of these
items.
Regarding the explanatory role of reasons, Davidson argues that the
relationship between reasons and actions displays the same pattern we
discern in causal explanations. Unless the relationship is causal, we are at
a loss to distinguish those cases in which an agent has a reason for acting
that fails to explain why she acts as she does, from cases where the agent’s
reason for acting explains why she acts as she does. Davidson notes that if
we dispense with a reason’s causal role:
Something essential has certainly been left out, for a person can have a reason for
an action, and perform the action, and yet this reason not be the reason why he
did it. Central to the relation between a reason and an action it explains is the idea
that the agent performed the action because he had the reason. (Ibid., 9)
So, to borrow an example from Alfred Mele, Sebastian may have a pair of
reasons for mowing his lawn this afternoon. He wants to mow the lawn
when the grass is dry and he also wants his spouse, Fred, to see him
mowing the lawn when Fred gets home from work in order to impress
him. It turns out that Sebastian only acts for one of these reasons. Mele
asks, “In virtue of what is it true that he mowed his lawn for this reason
and not for the other, if not that this reason (or his having it), and not
the other, played a suitable causal role in his mowing the lawn?” (Mele
1997, 240).
This explanatory problem for the noncausalist has been christened
“Davidson’s Challenge” in the action-theoretic literature (Ginet 2002; Mele
2003), and it is a direct challenge to anyone who rejects causalism about
reasons-explanations. Mele notes that the challenge is to “provide an
account of the reasons for which we act that does not treat . . . [them] as
figuring in the causation of the relevant behavior” (Mele 2003, 39).
Thus, Davidson’s second claim about reasons for action and their role
in the explanation of action has been the main source of contention
between causalists and noncausalists about reasons-explanations:
C2. A primary reason for an action is its cause. (Davidson 1963/1980, 12)
From the foregoing, we hope it is evident that there are at least two desid-
erata a satisfactory theory of action should satisfy. The first is a metaphysi-
cal desideratum requiring that a theory of action should provide us with
a way to distinguish behavior that is actional from behavior that is not
actional. This desideratum includes the need for a theory of action to tell
us a story about the role of agents in controlling their actions. The second
desideratum is epistemological and requires that a theory of action should
provide us with the resources to explain the occurrence of an action.
Addressing the metaphysical desideratum, all versions of the CTA
propose that a behavior is an action only if it is caused in the right way by
some appropriate nonactional mental items. Correspondingly, an agent
exercises her causal powers through the occurrence of nonactional mental
events and states, that is, agent causation is reducible to causation by non-
actional mental events and states. So the story of agency on the CTA is a
story about some part(s) of an agent causing some intentional behavior.
With respect to the epistemological desideratum, all versions of the CTA
propose that an action A’s occurrence can be explained by the reasons the
agent has for A-ing, and the reasons explain an agent’s A-ing only if the
reasons played a causal role in the etiology of the agent’s A-ing. Opponents
of the CTA have questioned just how well the CTA satisfies the two desid-
erata. In this section of the introduction we present the main metaphysical
and epistemological challenges for the CTA arising from the two
desiderata.
A climber might want to rid himself of the weight and danger of holding another
man on a rope, and he might know that by loosening his hold on the rope he could
rid himself of the weight and danger. This belief and want might so unnerve him
as to cause him to loosen his hold. (Davidson 1973/1980, 79)
they agree that basic causal deviance poses a serious challenge to the CTA
and deserves the attention of proponents of the CTA.
the price paid by the reductive strategy embraced by the CTA is literally
the abandoning of anything resembling an account of agency understood
as the capacity that subjects possess to give rise to and control an action.27
Defenders of the CTA have offered different ways of addressing the
problem of the absent agent. The two most prominent strategies involve
either embellishing the standard story given by the CTA or exploiting the
available resources of our best current versions of the CTA without making
any substantive additions. These two strategies can be labeled “embellish-
ment” and “revisionist” strategies, respectively.
Typically, versions of the embellishment strategy involve hierarchical
or “real-self” theories of agency inspired by the work of Harry Frankfurt.
Although Frankfurt is no defender of the CTA,28 his influence on the embel-
lishment strategy can be traced back to his work on autonomous agency.
The initial statement of this view of autonomy by Frankfurt (1971/1988)
was in his paper “Freedom of the Will and the Concept of a Person” and
was further developed in later papers of his.29 On this view, an agent acts
autonomously when he acts from a pro-attitude that the agent endorses
or identifies with. As Frankfurt notes, the sort of endorsement of or iden-
tification with a motivational state X that is indicative of autonomous
agency requires that the agent wants X to be the motivational state that
“moves him effectively to act” (Frankfurt 1971/1988, 10).
Michael Bratman (2000, 2001) and J. David Velleman (1992/2000) are
two prominent CTA advocates of the embellishment strategy grounded on
versions of hierarchical theories of agency. In the case of Bratman, the
emphasis is on the inclusion of self-governing policies “that say which
desires to treat in one’s effective deliberation as associated with justifying
reasons for action” (Bratman 2001, 309). For Velleman, the corresponding
emphasis consists in explaining the main functional role played by the
agent in the production of an action, namely, “that of a single party pre-
pared to reflect on, and take sides with, potential determinants of behavior
at any level in the hierarchy of attitudes” (Velleman 1992/2000, 139).
According to Velleman, this functional role is found in a propositional
attitude consisting in the desire to act in accordance with reasons.30
The differences notwithstanding, the embellishment strategies end up
suggesting that what is missing in the CTA’s reductionist approach to
agency is some species of reflective endorsement by an agent of the mental
causes of her action. That is, what the embellishment strategy offers is a
version of the CTA that is more complicated in terms of the number of
items involved in the production of actions expressing true agency. But
the ontological upshot remains essentially the same. Surely the CTA’s
picture of agency has become more complex; but complexity does not
imply ontological novelty. No new powers or unique phenomena that
affect the executive control of agents are conferred on them with this
extended metaphysical framework. Once we strip away the complex frame-
work added to embellish this theory in the interest of giving an account
of full-blooded agency, we are left with a “causal structure involving events,
states, and processes of a sort we might appeal to within a broadly natu-
ralistic psychology” (Bratman 2001, 312).31
It is important to note that the CTA’s commitment to a metaphysically
minimalist view of the items involved in the production of an action
contrasts starkly not only with the aforementioned effort by defenders of the
ACTA to introduce a different type of causal relationship involving agents
who directly cause events, but also with the CTA’s other serious competi-
tor, namely, the volitionist theory of action (VTA). According to the VTA,
the way to correctly characterize action and fully incorporate the agent in
the picture requires the introduction of a unique type of irreducible mental
event that is essentially actional, typically a willing or a trying.32 Because
this type of actional mental event engages the agent directly, it may seem
that, if anything, the VTA is ontologically more economical than the
CTA, particularly in light of the modifications suggested by the
embellishment strategy. And yet, the positing of an event that is intrinsically
actional, unanalyzable, and whose single distinguishing feature is the
“actish” phenomenological feeling that comes with it33 introduces a
different and unique type of event that is supposed to do the hard
agential work. From the perspective of the CTA, the introduction by the
VTA of such a unique event amounts to an unnecessary ontological
enlargement, one identified by mere stipulation as the locus where agency
takes place.
Nevertheless, some defenders of the CTA recognize that at least with
respect to the commonsensical conceptual framework associated with
agency, the ACTA and the VTA appear to be responding to basic assump-
tions involved in the adoption of reactive attitudes toward agents, particu-
larly when agents are seen from a moral perspective. For unless the agent
herself is an active participant in the production and control of the actions
for which she is going to be held morally responsible and which serve as
the legitimate objects of reactive attitudes, the very possibility of these
fundamental practices is undermined. From this perspective, the problem
of the absent agent is the challenge of making sense, using only the sparse
ontology assumed by the CTA, of an agent who occupies the central role
in the story of morality and responsibility. This type of
challenge motivates a second CTA strategy to deal with the problem of the
absent agent grounded on the recognition that the commonsensical con-
ceptual framework associated with agency needs to be accommodated
inside a reductive naturalist metaphysics of agency.34 The reason this
second strategy is revisionist is that it continues to exploit the available
metaphysical resources of the CTA while promising to deliver the goods
requested by the commonsensical conceptual framework.
Donald Davidson inaugurated the revisionist effort within the CTA by
accepting the conceptual irreducibility of agency while maintaining his
well-known metaphysical commitment to monism, in this case taking the
form of the basic ontological items proposed by this theory of action. A
more recent and fully developed effort along these revisionist lines is found
in the work of John Bishop, particularly in his Natural Agency (1989). In
this book, Bishop tries to reconcile what he considers to be a fundamental
tension between two competing perspectives:
To its defenders, one of the main theoretical virtues of the CTA consists
in the way it captures the motivating contribution of reasons by identify-
ing them as what cause an agent to act. An agent may have lots of reasons
for acting, but the reason why she does it is the one that causes her to do
it. This simple idea is at the center of the most serious epistemological
debate between causalists and noncausalists about the explanatory role of
reasons concerning action. Perhaps the best way to make sense of this
debate is in terms of Davidson’s Challenge, mentioned above, that is, the
challenge to provide an account of the reasons for which someone acts
that does not treat such reasons as among the causes of the action. Conse-
quently, anticausalists often see it as part of their theoretical job to show
that Davidson’s Challenge can be effectively answered, thereby seriously
undermining theories like the CTA.37
One philosopher whose work stands out as representative of this type
of attack on the CTA, grounded on an effort to answer Davidson’s Chal-
lenge, is George Wilson.38 The following schema articulates Wilson’s non-
causalist teleological view (Wilson 1989, 172):
An agent S performs some action A with the conscious purpose of satisfying a desire D
to achieve some goal E, if and only if, S A’s because he desired to achieve E and S’s
A-ing occurred because S thereby intended to satisfy D.39
In that event, it is false that Norm climbed the rest of the way up the ladder. Although
his body continued to move up the ladder as it had been, and although he intended
of his movements that they “promote the satisfaction of [his] desire,” Norm was no
longer the agent of the movements. (Mele 2003, 46)
The problem for Wilson’s teleological account is that Norm meets Wilson’s
conditions for his reasons to explain his movements. Mele argues that
“[The thought experiment] lays bare a point that might otherwise
be hidden. Our bodily motions might coincide with our desires or inten-
tions, and even result in our getting what we want or what we intend to
get . . . , without those motions being explained by the desires or inten-
tions” (ibid., 46). Thus, it appears that one can meet all of the conditions
Wilson advances for acting for a particular reason and yet the reasons can
fail to explain one’s action (ibid., 45–51). The upshot, if Mele is correct,
is that a noncausal teleological account such as Wilson’s cannot meet
Davidson’s Challenge.
Some of the authors in this volume are defenders of the CTA and others
are critics. Some inform their proposals by reaching into the past and
drawing from historical sources while others concentrate solely on the
present state of the art and its future possibilities. Lastly, some authors rely
mainly on the tools of standard analytic philosophy of action to support
their claims while others base their conclusions on the latest empirical
research on human action. What all the authors of this volume have in
common is the recognition that, given the centrality of the CTA, this is a
theory that, for better or for worse, one cannot ignore.
The first eleven chapters of the volume focus on metaphysical issues
about the CTA. Chapters 12 and 13 address the debate over reasons-
explanations of action. The final four chapters focus on assorted new
directions for thinking about the CTA.
Michael S. Moore’s essay, “Renewed Questions about the Causal Theory
of Action,” opens the set of essays that focus on metaphysical issues.
Moore not only responds to some challenges to his own formulation
of the CTA, but also addresses some more general metaphysical challenges for
the CTA, including, among other problems, the absent agent problem, how
the CTA can account for mental actions and omissions, and the nature
of the causal relation between mental items and events that count as
actions. Moore goes straight to the heart of many of the debates over the
CTA, defending the ontological credentials of the CTA while questioning
the tenability of the ontological commitments of some alternatives.
Michael Smith’s essay, “The Standard Story of Action: An Exchange (1),”
is the first part of an exchange between Smith and Jennifer Hornsby.
Hornsby’s essay, “The Standard Story of Action: An Exchange (2),” consti-
tutes the second half of the exchange. Smith defends the CTA. His paper
is a response to some of the arguments leveled against the CTA by Hornsby
(2004), particularly her arguments related to the role of agents in the etiol-
ogy of action on the CTA. Hornsby responds to Smith and argues that no
version of the CTA should be accepted.
John Bishop argues that action theorists must keep their theorizing
about action and agency in contact with the motivations for such work.
Bishop discusses the motivations for his own work on action, particularly
his desire to reconcile the ethical perspective of ourselves as moral agents
Notes
1. The locus classicus for a contemporary presentation and defense of the CTA is
Davidson 1963/1980. More recent attempts at defending the CTA have offered a
more fine-tuned account of the role of mental items in the causation of intentional
behavior. For more recent book-length defenses of versions of the CTA, see Brand
1984; Bishop 1989; Audi 1993; Enç 2003; and Mele 1992, 2003. Prominent alterna-
tives to the CTA include various noncausal theories of action (including versions of
volitionism) and agent-causal theories of action. For noncausal theories of action,
see Ginet 1990; Goetz 1988; and McCann 1998. For defenses of agent-causal theories
of action, see Chisholm 1966; Taylor 1966; and Alvarez and Hyman 1998.
2. Although we believe this schema captures the essential features of the CTA, this
is by no means a settled view, particularly with respect to the additional epistemic
clause to what otherwise is a metaphysical view on actions. In fact, one of the
contributors to this volume has argued precisely against such a formulation, propos-
ing instead that the components of the traditional schema be divided into two
different theories: a causal theory of action (CTA) that captures the metaphysical
part of the schema and a causal theory of action explanation (CTAE) that captures
the epistemological part of the schema. See Ruben 2003, 90.
4. John Ladd’s paper, “The Ethical Dimensions of the Concept of Action” (1965),
may be the first place where the term “the causal theory of action” is used in the
action-theoretic literature. But Ladd does not use “the causal theory of action” just
to refer to what we would now identify as the CTA. Rather, any theory of action
that “attempts to analyze ‘action’ in terms of the category of cause and effect” counts
as a causal theory of action for Ladd (1965, 640). This differs from current usage
since “causal theory of action” is used to refer to an event-causal theory of action.
Ladd’s usage allows for agent-causal theories of action to be causal theories of action.
In the same year, Daniel Bennett identifies “the causal thesis,” which is just what
we would now call the causal theory of action (Bennett 1965, 90).
5. The claim that Aristotle even had a theory of action is controversial. For instance,
John L. Ackrill argued that although Aristotle says much about what interests action
theorists (e.g., about the etiology of action and our responsibility for action), “he
does not direct his gaze steadily upon the question ‘What is an
action?’” (Ackrill 1978, 601). For a response to Ackrill, see Freeland 1985. For
a partial defense of the claim that Aristotle provides a causal theory of action, see
Mele 1981.
7. The translation is Martha Nussbaum’s (Aristotle 1978, 44). Also, in what follows
in our discussion of Aristotle, we will simply use “desire” to denote a conative state
generally. We will specify what type of desire when relevant.
8. Boulêsis is often rendered as “wish.” One could just as easily refer to intrinsic
conative states aimed at ends. “Wish” is nice shorthand, since wishes are pro-atti-
tudes. But for ease we stick to using “desire.” Nothing much hangs on this since
Aristotle’s story of motivation still has it that conative states are central as the
springs of action.
actions (treating efforts of will as somehow necessary conditions for action). Admit-
tedly, “instrumental action-triggering desire” is infelicitous for reasons we cannot
explore here. But such a translation seems warranted given the functional role of
prohairesis in the execution of the means to satisfy an agent’s ends.
10. The translation is based on Irwin’s (1999, 87), with some significant changes
made based on consulting the Greek text.
12. In this volume two chapters that explicitly exhibit the influence of Aristotle’s
ideas are the ones by Stout (chapter 7) and Juarrero (chapter 16).
13. Perhaps the only notable exception is Van Mill’s (2001) monograph Liberty,
Rationality, and Agency in Hobbes’s “Leviathan,” though Van Mill nonetheless avoids
drawing the important connection between Hobbes’s theory of action and
modern versions of the CTA.
Other significant discussions of Hobbes’s theory of action, though still embedded
within a larger account of his philosophy in general, are Peters 1967; Sorell 1986;
Tuck 1989; and Martinich 2005.
15. See Pink 2004 for a discussion of the relationship of Hobbes’s work on action
to that of the Scholastics, particularly Suarez.
16. The text is from Leviathan, Part I.53. It turns out that all voluntary actions are
free actions on his view. He writes in The Questions Concerning Liberty, Necessity, and
Chance that “I do indeed take all voluntary acts to be free, and all free acts to be
voluntary” (1656/1999, 82–83). See Of Liberty and Necessity (1654/1999, §28).
17. To give some context, in the sentence in which the definition appears, he writes
that “I conceive that in all deliberations, that is to say, in all alternate succession
of contrary appetites, the last is that which we call the will, and is immediately next
before the doing of the action, or next before the doing of it become impossible”
(Hobbes 1654/1999, 37).
24 J. H. Aguilar and A. A. Buckareff
18. In the next section of this introduction we discuss the problem of the absent
agent.
19. In addressing the problem of basic causal deviance in his later paper, “Problems
in the Explanation of Action,” Davidson admits that he is “convinced that the
concepts of event, cause, and intention are inadequate to account for intentional
action” (Davidson 2004, 106). John Bishop, in his essay in this volume, notes that
Davidson is here making a conceptual claim. Because Davidson identifies actions
with events, Bishop claims that for Davidson the problem of basic causal deviance
does not pose a problem for the ontology of action.
21. See Bishop 1989 and Mele 1992, 2003 for explicit statements of the causal role
of reasons and intentions that requires that actions be sensitive to the contents of
reasons for action and intentions.
22. See Bishop 1989 for extended discussion of this example. In turn, Aguilar’s
chapter in this volume discusses Bishop’s proposal to deal with such cases.
23. There is widespread agreement on the inadequacy of the old ballistic model of
mental causation in the etiology of action. The differences can be quite fine-grained
at times. Some strategies include, among others, those that emphasize causation by
self-referential intentions (Harman 1976/1997; Searle 1983), those that highlight the
immediacy of causal relations between action-triggering intentions—taking actions
to start in the brain (Brand 1984; Adams and Mele 1992; Mele 1992), and those that
afford feedback loops a prominent role in making sense of the guiding function of
intentions (Bishop 1989).
24. The most extensive analysis of this version of the absent agent problem and its
consequences for the CTA is by Velleman (1992/2000).
25. We can express the core tenets of the ACTA as follows: Any behavioral event A
of an agent S is an action if and only if (1) S causes A (either directly or by causing
some event); and (2) S’s causation of A is not ontologically reducible to causation
by some mental event or state of S.
26. Such a view is at least as old as Thomas Reid’s (1788/1983) theory of action.
27. In chapter 4 of this volume, Hornsby takes up this line of attack again, while Michael
Smith in chapter 3 offers a reply. Another recent criticism of the CTA along these
lines, although based on a revised understanding of the role of consciousness, is
found in Schroeter 2004; for a CTA response, see Aguilar and Buckareff 2009.
28. For an illustration of his misgivings about the CTA, see Frankfurt 1978/1988,
70–72.
The CTA: Origins and Issues 25
30. “What really produces the bodily movements that you are said to produce, then,
is a part of you that performs the characteristic functions of agency. That part, I
claim, is your desire to act in accordance with reasons, a desire that produces behav-
ior, in your name, by adding its motivational force to that of whichever motives
appear to provide the strongest reasons for acting, just as you are said to throw your
weight behind them" (Velleman 1992/2000, 141).
31. Precisely for this reason some critics of the CTA argue that the embellishment
strategy amounts to a sort of window-dressing, failing to address the source of their
concerns; see Hornsby 2004 and Schroeter 2004.
32. So, for instance, where H. A. Prichard proposed that “to act is really to will
something” (Prichard 1945, 190), Jennifer Hornsby has defended the view that
“every action is an event of trying or attempting to act, and every attempt that is
an action precedes and causes a contraction of muscles and a movement of the
body” (Hornsby 1980, 33). We can express the core tenets of the VTA as follows:
Any behavioral event A of an agent S is an action if and only if S’s A-ing is either
identical with or is the consequence of an instrumental mental action M (a trying,
willing, volition).
34. Velleman’s embellished CTA is also seen by him as directly addressing the com-
monsensical conceptual framework presupposed in ordinary attributions of agency,
which goes to show that the two CTA strategies to deal with the problem of the
absent agent discussed here are not exclusive but potentially complementary.
35. Writing about the motivation for any theory of action, this is what Bishop has
to say about the role of common sense as the basis for generating philosophical
theories and distinctions: “For one thing, the fact that common sense draws a dis-
tinction does not automatically warrant the need for a philosophical theory of its
basis: To provide motivation for such a theory we need to find a source of genuine
philosophical puzzlement in the wider context in which the distinction functions”
(Bishop 1989, 11).
36. It is worth noting that in this respect Bishop’s theory of action is like J. J. C.
Smart’s (1959) formulation of the identity theory of mind. Smart’s identity theory
is an ontological theory and not a theory of the meaning of mental terms. Just as
the intension of “pain” need not be the same as its extension, so also the intension
of “action” need not be the same as its extension. See also Davidson’s effort to draw
a similar distinction concerning agency: “If we can say, as I am urging, that a person
does, as agent, whatever he does intentionally under some description, then,
although the criterion of agency is, in the semantic sense, intentional, the expres-
sion of agency is itself purely extensional. The relation that holds between a person
and an event, when the event is an action performed by the person, holds regardless
of how the terms are described” (Davidson 1971/1980, 46–47).
37. For simplicity we are only focusing on the debate over whether reasons or
mental events associated with them are triggering causes of action, explaining action
in virtue of their triggering role. Dretske (1988) argues that reasons are structuring
causes of action. This is an interesting and provocative theory that is compatible
with a standard causalist account of reasons as triggering causes. A fully developed
account of the etiology and explanation of human action should have something
to say about the different sorts of causal roles played by mental items, and close
attention to Dretske's work should be part of that project.
38. In a recent essay, Wilson (2009) indicates that his interest in action theory
comes from his bafflement with the widespread acceptance of Davidson’s causalism
about reasons-explanations of action. This interest led Wilson to offer a full-blown
teleological theory of action in his book The Intentionality of Human Action (1989),
where among other things he tries to take on Davidson’s Challenge.
39. For other recent defenses of teleological noncausalism about reasons explana-
tions, see Ginet 1990, 2002; McCann 1998; Schueler 2003; and Sehon 2005.
40. Mele 1992 and 2003 both include responses to significant challenges to causal-
ism and defenses of noncausal accounts of reasons-explanations more generally.
43. See, e.g., the experimental work of Bertram Malle (2004) that indicates that the
folk practice is more in line with psychologism and also supports causalism. For a
discussion of Malle’s work in making the case against the antipsychologism of Julia
Tanney (2005), see Buckareff and Zhu 2009.
44. E. J. Lowe (2008) defends an antipsychologistic theory in his most recent book
on the metaphysics of mind and action.
2 Renewed Questions about the Causal Theory of Action
Michael S. Moore
1 Introduction
The causal theory of action (CTA) has long been the standard account of
human action in both philosophy and jurisprudence. The CTA essentially
asserts that human actions are particulars of a certain kind, namely, events.
Within the genus of events, human actions are differentiated by three
essential properties: (1) such actions are (at least partially) identical to
movements of the human body; (2) those movements are done in response
to certain representational states of belief, desire, intention, volition,
willing, choice, decision, deliberation, or the like; and (3) the “doing” in
(2) is analyzed in causal terms. Put simply, according to the CTA, human
actions are bodily movements caused by representational mental states
such as beliefs, desires, and so on.
The CTA is sometimes put as a conceptual thesis—this is what we mean
by the verbs of action; sometimes as an ontological thesis—human actions
just are certain mental states causing bodily movements; and sometimes
as a mere extensional equivalence regardless of any claims of synonymy
or identity—wherever there is a human action, there is bodily movement
caused by a certain mental state, and vice versa. Sometimes the CTA is
advanced as a combination of two or more of these claims. But in any case,
these are, broadly speaking, descriptive theses asserted by the CTA.
The CTA also is widely thought to have normative implications. This is
because human action is intimately linked to the agency that makes us
both morally responsible and legally liable for certain states of affairs. The
CTA thus becomes an analysis of the nature of one of the central condi-
tions of responsibility.
In an earlier work I described and defended a version of the CTA (Moore
1993). My theses were recognizable versions of the three theses distinctive
of the CTA in general: (1) each human action is partially identical to some
and states of affairs involving such objects.3 Talk of one such object (human
agents) standing in such relationships where the talk is explicitly not
elliptical—that is too mysterious to be taken seriously.
I now wish to examine the first of three more particular skepticisms about
the CTA. This is the skepticism denying that bodily movements are neces-
sary for human actions. Initially, we should distinguish the CTA thesis
discussed here from a different thesis, namely, the thesis that when there
are bodily movements involved in some human action, it is to those move-
ments that reference is made when using verbs of action. The latter thesis
could well be true while the thesis of interest here—that action essentially
involves bodily movements—was false.
My own critics, and those more generally critical of the CTA, often
proceed by way of counterexample. Surely, the argument goes, something
(x) is an action, and yet just as surely x involves no bodily movements. It
is useful to group the counterexamples according to the different values proposed
for the "x" in this formula. I shall consider four sorts of such
counterexamples. First, there are willed stillnesses by some agent. These include
the guardsman at Buckingham Palace trying (successfully) to remain per-
fectly still (Fletcher 1994, 1445), or the bird-watcher prone to twitching
who keeps still to avoid alarming the bird she is watching (Duff 2004, 83),
or the chocolate lover who is tempted to reach for the chocolates next to
her but stays still, resisting the temptation (Hornsby 2004, 5), or the hero
standing steadfast on the burning deck of some ship when every fiber of
his being urges flight (Annas 1978). These are surely actions, the objection
continues, yet they involve no movement of the bodies of the actors
involved.
A second set of counterexamples is provided by the very recent con-
struction of “mind–brain interface machines.”4 I imagined one of these in
a recent article (Moore 2009b) (before I knew they actually existed): Suppose
the American patriot who desires to warn Dawes and Revere whether the
British are coming by land or by sea is so wounded that he can’t move;
yet he is hooked up to a device on his scalp that measures the readiness
potential in the supplementary motor area of his brain when he is about
to perform a voluntary motor movement, and that device in turn is hooked
up to the light in the tower of the Old North Church in Boston. He wills
one movement of his finger. This causes not the finger to move, but the
device to register his attempt, and the light to shine but once; Revere and
Dawes get the message, alert their fellow rebels, and so on. Surely the para-
lyzed patriot has performed the action of alerting the rebels even though
he didn’t (because he couldn’t) move his body.
It is not very plausible to deny that there are actions in each of these
first two sorts of examples. More plausible is to deny that there are no
bodily movements in such cases. Consider, in descending order of obvious-
ness, a series of cases where this might plausibly be asserted.
morality, on the one hand, and our less stringent positive duties (plus
nonobligatory acts of super- or suberogation), on the other.11 We have a
stringent moral duty not to kill strangers, for example, but only a much
less stringent (or nonexistent, depending on the circumstances) duty to
prevent their death. A distinction between things done/things not done
does not recommend itself as serving to mark these moral differences. For
all there would be in the “things not done” category would be noninten-
tional omissions; intentional omissions would be on the “things done”
side of the conceptual divide, despite these being breaches of less stringent
positive duties (if breaches of obligation at all). We need a distinction that
tracks the positive/negative duty distinction in morality, which the act/omission
distinction does well and which the things done/things not done distinction
does poorly.
Second, there is an important metaphysical distinction between items
that can serve as the relata of singular causal relations and items that
cannot. Willed bodily movements that are human actions (or on some
views of relata,12 the states of affairs constituted by such actions having a
certain property) can serve as such relata; the absence of such actions (i.e.,
omissions) cannot. The cause/failure to prevent line is not happily marked
by the things done/things not done line, for omissions are failures to
prevent, no matter whether intentional, negligent, or merely nonnegli-
gently inadvertent. Some of the items in the things done category can thus
stand in singular causal relations, but others (viz., intentional omissions)
cannot.
We thus have two good reasons to prefer an act/omission line to the
broader conception of agency implicit in the idea of things done. And by
the narrower notion of agency contained in the concept of a human
action, omissions and mental acts remain on the nonactional side of the
distinction. They thus are not counterexamples to the most desirable
concept of human action, that of a willed bodily movement.
I turn now to the second of the theses constitutive of the CTA. This is the
thesis that it is mental states like intention that are the distinctive source
of human agency and action. It might well be, though, that the least
troublesome subthesis of the CTA is the mental-causation thesis. For the
dominant tradition, denying that mental states like intention are the right
sort of things to be causes of physical events like bodily movements is the
brain states depends on there being token mental states, but that such
commitments commit us a priori to the view that folk psychology will be
borne out by neurophysiology, an a priori commitment she finds “absurd”
(Steward 1997, 134). She urges that representational states are (almost)
always individuated by their contents and not by the subjects who hold
them or the time at which they are held; so we can make do with talk of
representational states in the abstract, with no commitment to there being
tokens (ibid., 131–133).17 She argues specifically against there being tropes
(or “abstract particulars,” or “property instances”), recognizing (correctly
enough) that these are distinct from states of affairs (the having of a prop-
erty by a particular at a time).
These and other arguments raise very broad issues of general ontology,
of causal relata, and of mind–brain relations. We are not going to settle
such large-scale issues here. My own long-term, naturalistic commitments,
argued for elsewhere, are different. Unlike Steward (ibid., 38), I think that
there is a right answer, in general, as to what causal relata are, and they
do not include whole objects or persons; that there are token states of
intention and belief; that these can stand in causal relationships; that such
mental states are identical with certain brain states, even if it turns out
there are no universal type identities here but only a disjunction of local
ones; and that, accordingly, mental states of volition cause the bodily
movements that are their objects (i.e., I’m committed to the CTA). Having
argued for most of these basic commitments elsewhere,18 I shall return to
more manageable (because more specific) objections to the CTA.
The final worry about the CTA focuses on the causal relation between
intentions and the bodily movements that are the objects of such inten-
tions. Such worry can concede, arguendo at least, that human actions are
(at least partially) identical to bodily movements and that there are token
mental states of desire, intention, and belief. The worry here specifically is
that human action cannot be analyzed in terms of causation.
There is a raft of old worries here that should be mentioned if only to
put them aside. These were the worries of the post-Rylean ordinary language
philosophers that preceded (and largely motivated) the consensus
in contemporary philosophy about the CTA. These included the claims
that: analytically, an event could not be a human action if it was caused
it, despite the lack of causation of the relevant bodily movements by the
intention to do them.
Revert to my earlier imagined patriot trying to alert his fellow rebels
that the British are coming by land. Suppose he knows that, because of his
wounds, he cannot move his finger. But suppose he also knows enough
contemporary neuroscience to know that if he tries to move his finger, 300
milliseconds prior to such effort there will be activated in his brain pro-
cesses that will cause a “readiness potential” to begin, detectable in his
scalp over the supplementary motor area of his brain. Luckily for him, a
patriot neuroscientist has hooked him up to a “mind–brain interface”
machine, and this machine will read the slow negative shift in his readi-
ness potential, which reading will cause the light to go off in the tower of
the Old North Church in Boston. Knowing all this, the patriot tries to move
his finger but once, the light goes off but once, Revere and Dawes begin
their famous ride to alert the citizens of Lexington and Concord that the
British are coming by land, and the rest is history. I take it that the patriot
is morally responsible for alerting his fellow rebels and, if caught, is fairly
hung for treason by the British. For he has performed the actions of alert-
ing the rebels, sending the signal, lighting the light—even though his
intent to do any of these things caused only an intention to try to move
his fingers, and that intention, ex hypothesi, did not cause any of these
things. Rather, the intention to try to move his finger was the co-effect of
those brain events that also caused the change in scalp readings, which
caused the light to be lit, which caused the rebels to be alerted.
These conclusions are troublesome for the CTA. I put aside two easy
responses to them. One is to “outsmart”32 the argument by denying that
there would be either action or responsibility in such cases. As Gideon
Yaffe (2009) put it in response to this argument against the CTA, “if I have
to choose between the CTA and the conclusions (of action and responsibil-
ity) in such cases, I choose to adhere to the CTA.” The intuition that the
patriot performed the action of alerting the rebels, and that he is respon-
sible for doing so, is too strong for this willing of belief, is it not?
The second response is to rely on the earlier identification of intention
with the unconscious brain events measured by the shift in readiness
potential. Then in the actual world we live in—as opposed to a merely
possible world incompatible with the laws of this world—there are no
stories like the one I just told. In the actual world, the imagined patriot
would indeed perform the action of alerting the rebels, but he would do
so in virtue of his intentions (to alert, to light, to try to move) causing the
bodily change that caused the rebels to be alerted. In the actual world, the
world in which we live, there is no epiphenomenal problem.
Notes
1. For criticism of my own version of the CTA, see the twelve articles collected in
a symposium on my Act and Crime book (Moore 1993). See also Mathis 2003 and
Duff 2004.
3. As I (and many before me) have argued. See Moore 2009a, 333–334.
5. See Moore 1993, 87–88. The concept is Bruce Vermazen’s; see Vermazen 1985.
10. Morality is not indifferent to how we feel and think, however; the virtues in
part concern themselves with such matters. See Williams 1973.
13. See, e.g., Moore 2009a; Dowe 2000; and Hall 2004.
14. See, e.g., Mackie 1974; Mellor 1995; and Bennett 1988.
15. “A fact exists, in my usage, when any proposition is true—there is no more than
this to the existence of facts” (Steward 1997, 104).
16. The apt phrase used to describe ordinary language philosophy’s attempt to
preserve the specialness of the mental without committing to metaphysical dualism.
See Landesman 1965.
17. One place Steward might have looked for a commitment to there being inten-
tion-tokens is in the law. In contract law, for example, there is no contract unless
the contracting parties have “the same intention,” i.e., two intention-tokens having
the same content but held by different persons. In criminal law, to take another
example, when we blame an accused for intentionally hitting one or more persons
that he in fact did hit but did so while intending to hit someone else (the “trans-
ferred intent doctrine”), we count the intention-tokens held by the accused to see
when they are “exhausted” such that no more can be transferred.
18. On functionalism, see Moore 1988; on causal relata, see Moore 2009a, chapters
14–15; on intentions specifically, see Moore 1993, chapter 6; 1997; and 2010.
28. This version of the much older epiphenomenal worry surfaced initially in the
work of Benjamin Libet. See Libet et al. 1983 and Libet 1985. Libet's work, and his
skeptical conclusions from it, have been carried on by Haggard and Elmer 1999,
Wegner 2002, and most recently, Haynes et al. 2007, and Soon et al. 2008.
29. See Moore 2006, forthcoming; and Mele 2006, 2009. There is an extensive early
symposium on Libet in Behavioral and Brain Sciences 8 (1985).
31. Mele (2003), for example, identifies these brain events with urges to move rather
than intentions.
32. The verb defined (out of Jack Smart’s name) in Dennett and Lambert 1978.
33. An example of the limited ambitions of this kind of claim is provided by the
theory of causation advanced in Dowe 2000.
3 The Standard Story of Action: An Exchange (1)
Michael Smith
Suppose an agent acts in some way. What makes it the case that he acted,
as distinct from his having been involved in some mere happening or
other? What makes him an agent, rather than a patient? According to the
standard story of action that gets told by philosophers, the answer lies in
the causal etiology of what happened (Hume 1777/1975; Hempel 1961;
Davidson 1963).
We begin by identifying some putative action that the agent performed
by tracing its effects back to some bodily movement. This bodily move-
ment has to be one that the agent knows how to perform, and it further
has to be the case that his knowledge how to perform it isn’t explained by
his knowledge how to do something else: in other words, it must be one
that could be a basic action (Danto 1963; Davidson 1971). We then estab-
lish whether the agent acted by seeing whether this bodily movement was
caused and rationalized in the right kind of way by some desire the agent
had that things be a certain way and a belief he had that something he
can just do, namely, move his body in the relevant way, has some suitable
chance of making things the way he desired them to be. If so, then that
bodily movement is an action; if not, then it is not.
It is easy to imagine someone objecting to this standard story right from
the outset: “If the standard story of action says that I act only if I move
my body, then it entails that there are no actions like those performed by
children who stand absolutely motionless when given the direction to do
so in a game of Freeze, or actions like sitting still in a chair or lying on a
bed. In these and a host of similar cases we plainly act, but we do so
without moving our bodies.” Despite its rhetorical force, however, this
objection rests on an uncharitable interpretation of what the standard view
has in mind when it talks about bodily movements.
When a defender of the standard view says that actions are bodily move-
ments, this has to be interpreted so that any orientation of the body counts
Cases like this underscore how very loose the connection is between
doings and actions. For though it would be perfectly acceptable to describe
John as having done something in this case—he did flick the switch, after
all—his doing that isn’t an action. Moreover, even though John’s finger
did indeed bend backward, bending his finger backward isn’t something
that he did, and nor did his finger bend backward as a result of anything
else he did. So, even though John did indeed flick the switch, he didn’t
act when he did that, and he didn’t do it by doing anything else, either.
When you dwell on it, it can seem puzzling that we are permitted to
describe things in these terms. But I doubt that it is worth dwelling on for
too long. Given how loose the connection is between doings and actions,
not much in the way of philosophical illumination about action is going
to be gained by attending to those occasions on which we do and don’t
describe people as doing things.
Let’s return to our original example. Given that the bodily movement
that John performs is one that he knows how to perform without perform-
ing some other action—his moving his finger, in our original example—the
standard story tells us that we can determine whether it was an action by
investigating its causal antecedents. Was that movement caused and ratio-
nalized by a desire John had that things be a certain way and a belief he
had that his moving his finger had some suitable chance of making things
the way he desired them to be? Did he (say) desire the illumination of the
room and believe that he could illuminate the room by moving his finger
against the switch? If so, did his desire and belief cause his finger move-
ment in the right way? If so, then that finger movement is an action; if
not, then we once again have to conclude that John was involved in a
mere happening in which he wasn’t an agent.
Standard though this story of action is, it is not universally accepted.
Jennifer Hornsby, for example, has recently provided several arguments
aimed at demonstrating the story’s crucial flaws (Hornsby 2004). Defenders
of the standard story should welcome such objections, as one of the best
ways they have available to them to demonstrate the plausibility of their
view is to work through and respond to objections. In what follows I will
consider four of the objections Hornsby puts forward. To anticipate, three
of her four objections seem to me to be simply misplaced. The fourth
objection is more worrying, but it turns out that it isn’t so much an objec-
tion to the standard story itself as an expression of dissatisfaction with a
certain way of developing that story. Once the dust settles, the standard
story of action thus seems to me to remain pretty much intact, notwith-
standing Hornsby’s four objections.
someone can do something intentionally without there being any action that is
their doing that thing. Consider A who decides she shouldn’t take a chocolate, and
refrains from moving her arm towards the box; or B who doesn’t want to be dis-
turbed by answering calls, and lets the telephone carry on ringing; or C who, being
irritated by someone, pays that person no attention. Imagining that each of these
things is intentionally done ensures that we have examples of agency. . . . But since
in these cases, A, B and C don’t move their bodies, we have examples which the
standard story doesn’t speak to. (Hornsby 2004, 5)
But is Hornsby right that the standard story does not speak to what
happens in these cases?
Hornsby is surely right that we ordinarily distinguish between actions
and omissions. The question, however, is whether the standard story of
action is intended to be a story about actions, in the sense of “action” in
which actions are distinct from omissions. Since I agree with her that the
standard story aims to explain agency quite generally, I doubt that this is
so. It seems to me much more plausible to suppose that it is intended to
be a story about actions in a quite general sense in which the distinction
between actions and omissions is invisible. From here on I will assume that
this is so. It might be thought that this just makes Hornsby’s criticism even
more acute. For we have already seen that the standard story identifies
actions with bodily movements, so isn’t it going to be impossible for it to
be a story about the agency involved in omissions, given that omissions
involve failures of bodily movement? I do not think so.
Focus on the case of A, who decides that she shouldn’t take a chocolate
and so refrains from moving her arm toward the chocolate box. Is it true
that A doesn’t move her body? Put like that, the question is likely to
mislead. When A refrains from moving her arm toward the chocolate box,
there is no bodily movement of that kind: she does not move her arm
toward the chocolate box. But it hardly follows from the fact that there is
no bodily movement of that kind that A doesn’t move her body at all.
Remember what was said earlier. The standard story tells us that whenever
an agent acts, her action can be identified with some bodily movement or
other, a bodily movement that she knows how to perform, where her
knowledge how to perform that bodily movement is not explained by her
One may wonder . . . why causal claims like (SS) [Her desire . . . caused an event
which was her bodily movement], which are part of the standard story, should ever
have been made. For even where there is an event of the agent’s doing something,
its occurrence is surely not what gets explained. An action-explanation tells one
about the agent: one learns something about her that makes it understandable that
she should have done what she did. We don’t want to know (for example) why
there was an event of X’s offering aspirins to Y, nor why there was the actual event
of X’s offering aspirins to Y that there was. What we want to know is why X did
the thing she did—offer aspirins to Y, or whatever. When we are told that she did
it because she wanted to help in relieving Y’s headache, we learn what we wanted
to know. (Hornsby 2004, 8)
Bratman and Smith, when they raised questions about what it is for an agent of a certain sort to be at work, turned these into questions about what sort of psychological cause is in operation. Like others who tell the standard story, they suppose
that citing states and events that cause a bodily movement carries the explanatory
force that might have been carried by mentioning the agent. But unless there is an
agent, who causes whatever it is that her action causes, questions about action-explanation do not even arise. An agent's place in the story is apparent even before
anyone enquires into the history of the occasion. (Hornsby 2004, 19)
But what we just said in response to the third objection shows why
Hornsby is wrong to suppose that this is so.
Consider again my wife’s kicking me. Her place in this story as doer is
of course apparent before we inquire into the history of the occasion. But
this doesn’t suffice to secure her place in the story as the agent of anything
because, depending on how the story gets filled out, what she did may or
may not be an action. Moreover, the crucial information that fixes whether
or not what my wife did is an action is historical information. What
prompted her kicking me? This is a question about causation. If she was
asleep and her kicking me was caused by a bodily spasm, then she wasn’t
the agent of anything: she did something, but she didn’t act. But if her
kicking me was prompted not by some bodily spasm, but by a suitable
desire and belief, then she may well have been an agent. We simply cannot
ignore the history of what an agent did in determining whether she was
an agent. Her status as an agent is constituted by historical facts.
[W]hen an account of a causal transaction in the case of agency is given in the claim
that a person’s believing something and a person’s desiring something causes that
person’s doing something, it is assumed that the whole of the causal story is told
in an action-explanation. The fact that the person exercised a capacity to bring
something about is then suppressed. It is forgotten that the agent’s causal part is
taken for granted as soon as she is said to have done something. The species of
causality that belongs with the relevant idea of a person’s exercising her capacities
is concealed. (Hornsby 2004, 22)
Hornsby begins with the charge that the standard story is incomplete, but
then ends with the charge that when you add to the standard story what
needs to be added to it to make it complete, you discover that the causality
required to make sense of agency—the agent’s exercise of her capacity to
do things—is different from anything that’s on offer in the standard story.
Let’s begin with the charge of incompleteness. As I understand it, the
standard story of action is offered as an account of necessary conditions
for agency, not an account of necessary and sufficient conditions. Indeed,
it is well known that defenders of the standard story have a very difficult
time of it when they attempt to say not just what’s necessary, but also
what is sufficient for agency (Davidson 1973; Peacocke 1979a,b; Sehon
2005). The following example is typical of those that give rise to the
problem. Imagine a piano player who wants to appear extremely nervous
when he plays the piano and who believes that he can do so by hitting a
C# when he should hit a C at a certain point in a performance. However,
when he gets to that part of the performance, the fact that he has that
desire and belief so unnerves him that he is overcome and involuntarily
hits a C#. In this case the piano player has a suitable desire and belief, and
these do indeed cause his hitting a C#, but his doing so is not an action.
The piano player is a patient, not an agent.
Defenders of the standard story who wish to provide necessary and sufficient conditions for agency thus need to rule out the possibility of such internal wayward causal chains, and this turns out to be no easy task. But of course, defenders of the standard story aren't obliged to provide necessary and sufficient conditions for agency. They can rest content with the
more modest project of providing necessary conditions—or anyway, they
can do so provided those necessary conditions illuminate what it is to be
an agent. When the standard story is interpreted in this more modest
way, the charge of incompleteness as such just doesn’t seem to be an
objection.
So what is Hornsby’s objection? Her view seems to be that, if we are to
have any illumination of what it is to be an agent, we have to add to the
standard story, so understood, the idea that agents who act exercise their
capacity to do things, an idea that cannot be made sense of in the standard
story’s own terms. There is, however, a certain irony in her offering this
as an objection to the standard story, because defenders of the story dis-
agree among themselves about the need to add an agent’s exercise of her
rational capacities as a distinct causal factor in an action explanation.
Hempel thought it is absolutely crucial that we mention the agent's exercise of her capacities; Davidson thought that Hempel was wrong and that
it is completely unnecessary (Hempel 1961; Davidson 1976). We must
therefore ask which of these two views is correct, and, if the correct view
is Hempel’s, we must ask whether we can make sense of the idea of an
agent’s exercise of her capacities within the resources available to a defender
of the standard story.
In this connection, it is instructive to consider how defenders of the
standard story attempt to rule out internal wayward causal chains. What
do they themselves think they need to add to it in order to provide a more
complete account of agency (note that they still needn’t think that it is
sufficient)? Davidson was pessimistic that internal wayward causal chains
could be ruled out in anything other than a completely uninformative
way. The best that we could say to rule them out, he thought, was that
the attitudes in question must cause actions in the right way (Davidson
1973). If this were the best we could say, then I would have sympathy with
the view that the standard story doesn’t shed too much illumination on
what it is to be an agent. For what Davidson thinks of as the right kind of
causation is presumably simply whatever it takes to underwrite agency.
On the other hand, others think it is clear what is required. They think
that the problem in cases of internal wayward causal chains is that the
match between what the agent does and the content of her desires and
beliefs is entirely fluky. It is, for example, entirely fluky that the piano
player wanted to hit just the note on the piano that his nerves subsequently caused him to hit. For a doing to be an action, they suggest, what
the agent does must be differentially sensitive to the contents of his desires
and beliefs (Peacocke 1979). The movement of an agent's body is an action only if, in addition to having been caused by a suitable belief-desire pair, the following counterfactual holds: had the agent had a range of desires and beliefs that differed ever so slightly in their content from those he actually has, he would still have acted appropriately.
In order to see how this differential sensitivity condition is supposed to
rule out internal wayward causal chains, consider once again the example
of the piano player. Suppose he had desired to play the piano as if
Conclusion
Jennifer Hornsby
Smith discerns four different objections that I made to the standard story.
I’ll make brief responses to Smith’s replies to the first two of these as a way
of introducing the main business. (Sections 2 and 3 are in effect responses
to his replies to the third and fourth objections.)
(1) When I said that the standard story fails to accommodate omissions
of certain sorts, I was thinking of the story, as Smith thinks of it, as purporting to give an account of what it is for A to do something intentionally
by saying what conditions a movement of A’s body must satisfy if it is to be
A’s Φ-intentionally doing some particular thing. My point was that
someone may do something intentionally although no movement of her
body occurs. Now Smith tells us that the standard story makes use of a
notion of moving the body according to which “any orientation of the body
counts as a bodily movement,” and “move the body” has application when
someone refrains from moving. I am sympathetic to an idea of Smith’s which
is in play here—namely, that there can be an exercise of a piece of bodily
agential know-how even when there is no overt movement. So I allow that
there can be a rationale for a capacious notion of moving the body, a bit
like Smith’s. Still, it isn’t clear that the know-how that is exercised by an
agent who intentionally does something can always be brought within the
scope of the bodily, or that an agent’s doing what she does always belongs
in the category of event. One of the examples I gave was of C, who deliberately paid someone no attention. Suppose that there was a period of an
hour during which C paid no attention to X (it might be a period during
which X was in the same room as C). Smith, it seems, will have to say that
throughout the period, there was an exercise of a piece of bodily agential
know-how affecting the way the agent’s body was oriented.2 That doesn’t
sound quite right to me.
However this may be, I had a particular reason for introducing examples
in which an agent’s intentionally doing something appears not to be a
movement of her body. I wanted to show the impossibility of accommo-
The Standard Story of Action (2) 59
2.1
On the standard approach, which gives rise to the standard story,3 actions
are taken to be events and treated then as having equal standing with
anything else that belongs in the event-causal order, in which causation
relates things in the category of events and states. On this approach, the
task of a philosophical account of bodily agency is to uncover the conditions that bodily movements satisfy if and only if they are actions. Inasmuch as these conditions are thought to be a matter of how the movements are caused, "what makes it the case that" or "fixes" it that an
event is “an action rather than a mere happening in which a person is
involved” is treated as “historical information.” I think that the standard
approach is based in a false assumption. And I hope to make it clear
why I think the standard story should be abandoned by locating that
assumption now. (Smith points out that some who tell the standard story
are content if they can carry out only a more modest task. I shall come to
this. The present point is that I reject the assumption that governs the
approach.)
60 J. Hornsby
When Smith says that “the crucial information that fixes whether or
not [a bodily movement] is an action is historical information,” he is
objecting to something I had said. I had said that "unless there is an agent, who causes whatever it is that her action causes, questions about action-explanation do not even arise." What I intended to convey was that historical causal information cannot fix whether a movement is an action.
When causal information can be given in an action explanation, it is then,
as it were, already fixed that there is an action; whereas when a movement
of someone's body is their involvement in some mere happening, the movement is then not so much as a candidate for an action-explanation, and nothing could fix that it was an action.
Smith’s counter to this is his example of his wife’s kicking him. The
example is meant to show how historical information fixes things. Accord-
ing to Smith, a particular event of his wife’s kicking him may, or may not,
be an action, depending on its causal history. Smith is surely right in thinking that if he asks why she kicked him, not knowing whether her kick was
an action or not, he remains neutral on the question of whether it was.
But in order to see why this is not definitive in settling that an event may,
or may not, be an action, one can consider a case where Smith assumes
that his wife meant to kick him, and gets it wrong. So imagine that Smith
asks “Why did you kick me?”—thinking that his wife’s answer will give
her reason. He expects her to say “I wanted to ——” or “In order to ——.”
Instead she says “Oh, I didn’t mean to kick you: my kick must have been
a result of some sort of bodily spasm.” In this case, we may think that what
Smith was apt to treat as a candidate for an action-explanation wasn’t
really such a candidate; Smith’s mistake was to suppose that the event of
kicking was of a different kind from what it was. (It was of the generic kind
“bodily movement.” But if it was actually not an action, then at least it
was of some kind that he assumed it wasn’t.) So although there can be an
example of the sort Smith describes, in which he fails to know whether or
not his wife meant to kick him, this can hardly show that the event about
which he is then ignorant might itself equally well be an action or not be
an action.4 If it is not an action, then of course it fails to satisfy such conditions as actions do, and Smith may come to know that it was not an
action by learning that such conditions weren’t satisfied. But it can still be
true that the event could not really satisfy such conditions—that nothing
could make it an action. And it can also be true that if his wife had meant
to kick him—had kicked him for a reason—then the kick that there would
then have been could not itself have been the product of a spasm. If these
things are indeed true, then actions and movements that aren’t actions are
of fundamentally different kinds.5
On the standard approach, it is assumed that actions and other bodily
movements are of the same fundamental kind, differing from one another
in their relational properties. Taking the standard approach, one thinks it
possible to find in the event causal order a movement that might or might
not be an action, and then, by making use of a notion of causation that
relates things in the categories of event and state, to say how the movement differs, in respect of its relational properties, from other things in
that order if it is an action. But if actions and other movements are of
fundamentally different kinds, then an action in its nature is not an event
about which it is genuinely an open question whether it is an action or a
movement of some other sort. (Remember that ignorance can ensure that
there seems to be an open question, even if, in the nature of things, one
answer is actually ruled out.)
2.2
We can see now that one may find fault with the standard approach to
action even if it leads only to the relatively unambitious account that
Davidson gave. Davidson asked “What events in the life of a person reveal
agency; what are his deeds and his doings in contrast to mere happenings
in his history; what is the mark that distinguishes his action?” (1971/1980,
43). And it is no wonder that the standard story is widely credited to
Davidson. He always thought that the key to understanding agency was
to take actions to be events, caused by states and other events and states,
and described in terms of states and further events that they in turn cause.
Although Davidson despaired of giving a set of conditions sufficient for an
event’s revealing agency, he never doubted that some events are actions
only because some of the bodily movements that belong in chains in the
event-causal order have a causal history of a certain sort.6
What Davidson despaired of doing was solving “the problem of wayward
causal chains” (see Smith’s chapter in this volume for the details)—a
problem that needs to be solved by someone who takes the standard
approach and aspires to give sufficient conditions for an event’s being an
action. Smith for his part thinks that he has a solution to the problem. I
shall come to this (section 3). But what I would draw attention to now is
an aspect of Smith’s own argument which might be thought to suggest
that there really need be no problem here in need of a solution. When
Smith speaks of the answer to the question of why his wife kicked him as
Smith for his part thinks that the standard story in Davidson’s version
“doesn’t shed too much illumination on what it is to be an agent.” One
can agree with that, and think that further illumination can be shed—that
there is more to be said. What we are now in a position to see is that one
may have one or another of two very different reasons for thinking that
there is more to be said. (A) One may tell the standard story, and think, as Smith does, that it fails of sufficiency because it lacks a specifiable necessary condition of a case of agency. (B) One may reject the standard
approach, as I do, but not stop at giving the mark of actions, thinking that
there is more to be said about the sorts of causal notions that are in play
when there are actions. I suspect that Smith does not countenance alternative (B) because he cannot credit that someone might put the standard
approach into question. (This would explain why he should have thought
that I must have been charging the standard story with incompleteness
when I claimed that something goes missing when it is told.)
3 Exercises of Capacities
But then, once again, it would appear to be the sort of capacity whose
exercise just is her acting intentionally.
Suppose that this is right. And suppose that Smith, persisting with the
standard approach, says that the idea of exercising a capacity possessed by
a rational agent belongs in the standard story. He will be taken round in
an evident circle when he tells the story. For if possession of a rational
capacity is required for there to be an action, then exercises of rational
capacities will be the actions of rational beings. But then something’s being
an exercise of such a capacity will be a matter of its being an action. Making
use of the idea of such a capacity could not enable one to give a necessary
condition that combines with the old standard story’s other conditions to
say what else is causally required for there to be an action.
I hope that this starts to explain why I should think that the game is
up for the standard story when capacities are introduced. Just as actions
and action-explanations go hand in hand, so it would seem that actions
and exercises of certain capacities go hand in hand. An idea of an agential
capacity, like the idea of a certain sort of explanation, is presupposed to
the idea of an action. Being the exercise of a capacity cannot then be one
among a set of conditions of a movement’s being an action. It cannot
“make” an event that might not have been an action into an action.
Those who take the standard approach in philosophy of action seek conditions of a movement's being an action. They think that it is proper to ask,
about a movement that, according to them, may or may not be an action:
“What makes it an action?” Being aware that actions are susceptible to a
certain sort of explanation, and that this is causal explanation, they treat
causation by rationalizing beliefs and desires as one condition of a movement's being an action. But then they face the problem of wayward causal
chains. They react either by settling for speaking of such causation as
occurring in the right way (as Davidson did), or by seeking a further condition of an event's being an action (as Smith does). What I have suggested
is that if one disallows their assumption that the question "What fixes it that an event is an action?" is in good order, then one will say that when there is an action, there is an event the fact of whose occurrence is explicable in a certain way and does not just so happen to be explicable in that
way. Actions are not then thought of as movements that count as actions
by virtue of having relational characteristics. Actions may be thought to
involve intentions essentially.
The question "What makes it the case [or fixes it] that an event is someone's acting and not her involvement in some mere happening?" has become very familiar. And the fact that it is possible to say what distinguishes actions from other events can help it to seem to be a perfectly good question. (We saw that there is a perfectly good question about what distinguishes actions, which makes use of one idea of a mark.) It can then
become hard to believe that the question intended by those who tell the
standard story is misguided. But it is not outlandish to suggest that philosophers have sometimes asked the wrong questions.
I’m aware that I have not done much more than quarrel with Smith’s
interpretation of Hornsby 2004. But my aim in this part of our exchange
has been not so much to argue for an alternative to the standard story, as
to demonstrate that there can be one. So I have done little here in the way
of showing why one should accept an alternative. To persuade someone
of an alternative of the sort I favor would require much more work. One
needs to demonstrate the counterintuitive character of the conclusions
reached if one follows Hume on the subject of causation. (That was a task
I did attempt in Hornsby 2004.) One needs also to say something positive
about the kind of causation by agents that is irreducible to causation found
in the event causal order. Inasmuch as such causation may be causation
by agents other than rational agents, one must also say something about
what is distinctive of the capacities that rational beings possess. The fact
that these tasks can be undertaken helps to show that even if one has no
truck with the project set by those who take the standard approach, still
one has plenty to do to try to cast light on the phenomenon of human
agency.
Acknowledgments
I thank Naomi Goulder for her helpful comments on a draft of this chapter.
Notes
2. Smith might suggest that in the example of C, we find, as well as the bodily
actions that the standard story purports to provide an account of, mental actions;
and then that there can be intervals during the period when C pays no attention
to X at which even the orientation of C’s body is not to the point. My response
would be to say that although there may be a recognizable category of mental
actions, I doubt that we can plausibly decompose intuitively physical actions into
the mental and the bodily in the manner that such a suggestion would seem to
require. Consider, say, drinking a cup of coffee, or speaking, which admit of pauses
between bits of bodily activity.
3. I don't know who coined the term "standard story": perhaps it was David Velleman (who uses it in Velleman 1992). "Standard approach" is used in Anscombe
1989/2005 (a paper she wrote in 1974), and I want to mean by it what I think
Anscombe meant. At any rate, I think it safe to assume that the standard story, as
it is now commonly understood, is a product of the approach that Anscombe called
standard (Anscombe 2005, 111), and which she thought was hopeless.
4. I realize that I could seem to have denied the possibility of such examples in
Hornsby 2004. Following the sentence above which I’ve followed Smith in quoting,
and whose force I am now attempting to explain, came another sentence, which
Smith also quotes: “An agent’s place in the story is apparent even before anyone
enquires into the history of the occasion." This was careless. I should not have suggested that the agent's place must always be apparent, at least if what is apparent
is known. Nevertheless, I take it that the agent’s place very often is apparent—that
very often someone is evidently doing something intentionally, even if one knows
not exactly what—and then a request for explanation takes for granted what is then
apparent.
an analysis of action is requested (see further note 8). So I think that the critics of
Anscombe who take the standard approach interpret her rather as Smith has interpreted me—as if I thought that the standard story were told in response to a good
question.
7. I say “facts of whose occurrence can be explained” here because if I were to say
simply “can be explained” it might give the impression that the explanandum of a
reasons-explanation is an event—a false impression, but one that will be welcome
to those who take the standard approach (see para. (2) in section 1 above). Still, at
various points, I forsake accuracy in order to avoid prolixity, and sometimes speak
of actions as being explicable in a certain way. Treat this as a sort of shorthand.
John Bishop
One such motivation is concern with the problem of natural agency. Wittgenstein's question may indeed be a focal one: what is it for behavior to
count as action, rather than just “mere” behavior? But this question already
70 J. Bishop
deploys a technical notion of action—a technical notion that has its source
in our ethical perspective on ourselves. For we hold an agent morally
responsible for a given outcome only if it came about or persisted through
that agent’s own action. Even if an agent’s behavior contributed to a certain
outcome, the agent would not be morally responsible for it unless the
behavior involved or constituted the agent’s own action.2,3 Now, if our
ethical perspective applies to the world (if agents really are sometimes
morally responsible for outcomes) then actions must be a feature of the
world. But how is it possible for actions in this sense to be part of the
natural world as our natural scientific worldview conceives it? Actions
either are or necessarily involve physical events—in particular, bodily
movements—and those events are open to natural scientific explanation.
So there is room for skepticism about how the very same outcome can
both result from morally responsible agency and also be in principle explicable scientifically. In previous work, I put the problem of natural agency
thus: “We seem committed to two perspectives on human behavior—the
ethical and the natural—yet the two can be put in tension with one
another—so seriously in tension, in fact, as to convince some philosophers
either that the acting person is not part of the natural order open to scientific inquiry or that morally responsible natural agency is an illusion" (Bishop 1989, 15). One important motivation, then, for seeking to understand what it is for something to be an action is to seek to resolve the problem of natural agency. Does the concept of action that we need for our ethical perspective have features that require going beyond the confines of our prevailing natural scientific metaphysics, or may actions be
wholly accommodated within a naturalist ontology?
I shall call the claim that actions can be fitted into a naturalist ontology
reconciliatory naturalism. One way to defend reconciliatory naturalism is to
advance a causal theory of action. I suspect that this is the only route to
reconciliatory naturalism—but will here claim just that a certain sort of
causal theory of action, if successful, overcomes skepticism about natural
agency. To appreciate why a successful causal theory of action would
achieve this, consider how best to state the nature of the tension between
our ethical and natural scientific perspectives. That tension has often been
expressed as an apparent incompatibility between free will and determinism—but the problem is not really about determinism, as Gary Watson,
for one, makes very clear:
reconciliatory naturalism to be true than for it to turn out either that our
belief in [the reality of] agency is mistaken, or that, as agents, we belong
mysteriously beyond the natural universe that is open to scientific inquiry”
(Bishop 1989, 5). That it would be good for reconciliatory naturalism to
be true, while not, of course, counting as any kind of evidence that it is
true, might nevertheless justify our taking it to be true—though only if our
best assessment of the relevant arguments and evidence leaves its truth
open.7 But the fact that agency apparently involves a special kind of
agent-causation not admissible in fundamental naturalist ontology suggests that its truth is not left open—unless a CTA can be successfully
defended, that is.
But why might one think it good that reconciliatory naturalism be
true—and so have a motivation for defending a CTA against agent-causationist skeptics about natural agency? One would need to be committed
both to the reality of human agency and to the idea that human existence
belongs wholly within the natural order. Such commitment might itself
be just intrinsic to a naturalist worldview. It might, however, be derived,
as in my own case, from the theistic religious traditions that emphasize
human freedom and responsibility but at the same time affirm human
creatureliness. We may indeed be creatures in the image of God, but we
remain creatures nonetheless. There is thus a tension between our crea-
turely dependency and our power to act, which I think is properly resolved
by reconciliatory naturalism (by contrast with the philosophical libertarianism to which many theists are attracted, under which, it seems to me,
it is hard to avoid the view that the human self acts from outside the
natural causal order).8 So I actually have a religious motive for defending
a reconciliatory, or “compatibilist,” naturalism, and, therefore, a suitably
formulated causal theory of action. But such a nonevidential motivation
is subject to philosophical correction, and would need to be set aside if a
reconciling account of natural agency ran into insuperable difficulty.
Donald Davidson is the major figure in the defense of a CTA, and he effec-
tively endorses the view that a successful CTA resolves the problem of
natural agency. In his essay “Intending,” he remarks that “the ontological
reduction [implied by CTA], if it succeeds, is enough to answer many
puzzles about the relation between the mind and the body, and to explain
the possibility of autonomous action in a world of causality” (Davidson
1978/1980, 88, my emphasis). Indeed, this ontological reduction is just
Skepticism about Natural Agency and the CTA 73
causal deviance. An agent’s behavior can have mental causes that make it
reasonable and yet not count as the relevant intentional action—or,
indeed, as any kind of intentional action at all. Davidson’s nervous climber
illustrates the point: “A climber might want to rid himself of the weight
and danger of holding another man on a rope, and he might know that
by loosening his hold on the rope he could rid himself of the weight and
danger. This belief and want might so unnerve him as to cause him to
loosen his hold, and yet it might be the case that he never chose to loosen
his hold, nor did he do it intentionally” (Davidson 1973/1980, 79). For
intentional action, behavior has to be caused in the right sort of way by
mental states and events that make it reasonable, yet, in “Freedom to Act,”
Davidson reports that he “despair[s] of spelling out” what that right sort
of way is (ibid.). But surely, not being able to spell this out will be embarrassing for a CTA defender, if the CTA is to secure the possibility of natural
agency. Roderick Chisholm was right, I think, to see in the possibility of
deviant counterexamples a major challenge to a CTA-based analysis of
action, and hence a potential argument in favor of agent-causationism
(Chisholm 1966). No wonder, the agent-causationists will say, that you
run into problems in trying to reduce an agent’s exercise of control to
causation by that agent’s mental states. For, arguably, all that can secure
“nondeviant” causation is to bring the agent back into the picture: the
climber would have acted intentionally only if he had brought it about that
he relaxed his grip, and that has to be understood as a relation between
him, as agent, and the relevant bodily movement.
When he returned to consider problems in the explanation of action in
an essay with that title first published in 1987, Davidson had this to say
on the problem of deviance, and I know of no evidence that he ever
changed his mind: “Several clever philosophers [he mentions Armstrong
and Peacocke in a footnote] have tried to show how to eliminate the
deviant causal chains, but I remain convinced that the concepts of event,
cause and intention are inadequate to account for intentional action”
(Davidson 1987/2004, 106).10 Yet the CTA, surely, is precisely the thesis
that “the concepts of event, cause and intention” are adequate “to account
for intentional action”? Or, at least—and this is a vital point—the CTA is
the thesis that the concepts of event, cause, and intention are adequate to
provide a suitably naturalistic ontological account of what constitutes inten-
tional action even if they do not provide a conceptual definition of what it
is to act with an intention. Did Davidson in effect concede, then, that the
ontological reduction offered by the CTA could not ultimately be defended
against the challenge posed by the argument from the possibility of causal
deviance? No. I do not think so. It looks rather as if Davidson thought
that the ontological reduction would be secure, even though the deviant
cases were enough to put paid to any hope of conceptual, definitional,
analysis.11
But is that correct? Agent-causationists will, of course, agree that the
deviant cases show the impossibility of defining an agent’s acting inten-
tionally as her being caused to behave reasonably by her own relevant
mental states, but surely they may argue that the deviant cases show
further that event-causation by her own mental states could not even
constitute the agent’s intentional action.
So there is something of a puzzle as to why Davidson was not more
concerned about resolving the problem of causal deviance and appears to
have continued to believe in the CTA’s ontological reduction (and its force
in resolving the problem of natural agency) even though he himself
believed that, despite sophisticated efforts by some philosophers, the devi-
ance problem could not be resolved. It seems as if he thought that this
problem was somehow peripheral, and some philosophers have apparently
agreed. Berent Enç, for instance, speaks of the “somewhat technical
problem” posed by causal deviance (Enç 2003, 3). Enç thinks there is a
deeper difficulty with the CTA, namely, the concern that its account of
action as constituted wholly by event-causal relations seems to leave the
active agent out of the picture.12 While there is indeed such a concern, the
problem of excluding deviance is not an independent technical issue but
is, rather, expressive of that very concern, since the claim that the CTA
cannot exclude deviance is a way of drawing attention to its alleged failure
to account for the agent’s own activity. Any proponent of the CTA who
accepts Enç’s deeper concern, then, ought to care about resolving the
problem of causal deviance.
There is, I think, a simple explanation for Davidson’s unconcern over
what he thought was the irresolvability of the deviance problem. In “Prob-
lems in the Explanation of Action,” he says: “Let me begin by answering
Wittgenstein’s famous question: what must be added to my arm going up
to make it my raising my arm? The answer is, I think, nothing. In those
cases where I do raise my arm and my arm therefore goes up, nothing has
been added to the event of my arm going up that makes it a case of my
raising my arm” (Davidson 1987/2004, 101). This is, at first, very surpris-
ing. One would expect a proponent of the CTA to say that what must be
added to my arm’s going up to make it a case of my raising my arm is just
the right kind of causal history. Davidson clarifies his answer thus:
What Davidson is relying on here is something that was clear from the
start in “Actions, Reasons, and Causes”: his ontology of actions is an
ontology of events. On Davidson’s view, actions are a subclass of events:
any particular action is identical with some particular event. Events
are happenings, occurrences. So actions, doings, would appear to be a
species or subclass of happenings, occurrences. To identify any particular
event as an action we do indeed need to satisfy conditions that relate to
the event’s causal context: but, though that yields a description of the
event as an action of a certain type, it does not add anything to the event
itself. Given this, the problem of natural agency just reduces to the problem
of explaining how mental states and events can belong to the natural
causal order. For, if actions are events, to be identified as such by having
the right sort of psychological causal history, then, once one has admitted
mental states and events to one’s ontology, there is no further problem
about admitting actions. The problem of causal deviance—even if it proves
irresolvable—can have no impact on this: actions are securely within any
naturalist ontology that admits psychological states and events, even if we
cannot complete in ultimate detail the conditions required for events to
count as actions.
constructed from the passivity of what happens” (Ruben 2008, 238). That
objection does have some force. But, in any case, as I have argued, identify-
ing an action with its intrinsic event (so that, as Davidson puts it, nothing
must be added to my arm’s going up to make it my raising my arm) leaves
the skeptic about natural agency unsatisfied, and the question begged
against the agent-causationist.
If actions are neither causes of, nor identical with, their intrinsic events,
how else could they be related to them? Are actions perhaps some kind of
complex that includes their intrinsic events as proper parts? Ruben thinks
that none of the available possibilities is attractive, and he makes the bold
move of denying the widespread assumption that actions have intrinsic
events—or, at least, of denying this assumption for the case of basic physi-
cal actions such as arm-raisings. So Ruben’s answer to Wittgenstein’s ques-
tion—what is left over if I subtract the fact that my arm goes up from the
fact that I raise my arm?—must be that typically there can be no such
subtraction. On Ruben’s view, there is no commonality among the follow-
ing three cases: (1) the mere event of an arm going up; (2) the event of its
going up where it is intrinsic to a (nonbasic) action (e.g., where I use a
pulley with my right arm in order to raise my immobilized left arm); and
(3) the basic action of my raising my arm to which no event is intrinsic.
Nevertheless, in a broad generic sense of “event” all these do count as
events. But we need a “disjunctive” theory of events in this broad sense,
under which there is no essential feature that mere event, intrinsic event,
and basic action “event” have in common. (This is to be compared with
disjunctive theories of perception, under which veridical perception and
hallucination do not have any single kind of “appearing” in common.)
This suggestion will strike many as implausible. Judgments of plausibility
are hardly decisive, however—besides, what is salient here is that Ruben’s
account, if correct, contributes nothing toward resolving the problem of
natural agency, for it is, in one respect, in the same position as the voli-
tionism he rejects, since it requires including in our ontology a sui generis
class of items that are essentially actions.
Nevertheless, Ruben has done the signal service of showing that, if we
are to avoid his own (perhaps somewhat desperate) move, we have to clean
up our account of how actions are related to their intrinsic events. And, if
we want to resolve the problem of natural agency—which is not a goal
that Ruben takes on, by the way, but it is a goal that provides sound moti-
vation for these inquiries—we will need to give an account of actions and
their intrinsic events that shows that it is reasonable to accept that they
are realized wholly within an ontology of the kind consistent with prevail-
Acknowledgments
Notes
1. Note that I do not wish to imply that Wittgenstein himself intended to pose this
question as the foundational one for the philosophy of action.
2. This is, of course, only a necessary condition for moral responsibility: further
conditions are required, relating to behavers’ capacities to understand the moral
significance of their behavior and to offer explanations of it in terms of their own
reasons for acting. Infants and nonhuman animals may arguably perform actions,
yet without being morally responsible for related outcomes.
4. This problem would disappear if a scientific revolution took place that presup-
posed a metaphysics of irreducible substance-causation. I shall argue that the pros-
pects for defending reconciliatory naturalism do not depend on having to expect
so apparently unlikely a development.
6. A CTA can resolve the problem of natural agency only if there is a generally sat-
isfactory naturalist solution to the mind–body problem: but that does not make the
82 J. Bishop
CTA otiose, of course, since skepticism about natural agency might coherently
persist even if physicalism about mental states and events is accepted.
8. It is important to note that some libertarians think they can resist this conclu-
sion. Robert Kane (1996) seeks to defend a naturalist libertarianism without com-
mitment to agent-causation, and Timothy O’Connor (2002) affirms the ontological
irreducibility of agent-causation yet nevertheless hopes to save naturalism by appeal
to emergentism. I have discussed these issues more fully in Bishop 2003.
10. The references are to Armstrong 1973, 1975, and Peacocke 1979a.
11. If this account [of acting with an intention as caused in the right way by attitudes and beliefs
that rationalize it] is correct, then acting with an intention does not require that there be any
mysterious act of the will or special attitude or episode of willing. For the account needs only
desires (or other pro attitudes), beliefs, and the actions themselves. There is indeed the relation
between these, causal or otherwise, to be analysed, but it is not an embarrassing entity that has
to be added to the world’s furniture. We would not, it is true, have shown how to define the
concept of acting with an intention: the reduction is not definitional but ontological. But the
ontological reduction, if it succeeds, is enough to answer many puzzles about the relation
between the mind and the body, and to explain the possibility of autonomous action in a world
of causality. (Ibid., 87–88)
12. Enç quotes J. David Velleman’s claim that the CTA cannot capture what it is
for an agent to be active: “reasons cause an intention, and an intention causes bodily
13. It might well be empirically true that arm-risings intrinsic to arm-raisings have
distinctive empirical features that distinguish them from all physically feasible
“mere” arm-risings (such as might occur in a nervous tic or when a paralyzed arm
is lifted, etc.). The possibility that an arm-rising of the very same highly specified
type could have occurred without an arm-raising might then be merely logical.
Nevertheless, the issues about to be canvassed about how actions are related to their
intrinsic events will still need to be dealt with, if we are to be clear about what an
action is.
14. Ruben attributes a view of this general kind to H. A. Prichard in “Acting, Willing,
and Desiring” in Prichard 1949; to Jennifer Hornsby (1980); and to Paul Pietroski
(2002).
15. See Bishop 1989, chapter 5, where I draw on Peacocke 1979b and Lewis 1980.
In Lewis’s paper the relevant elaboration of causal nondeviance is applied to the
case of perception.
6 Agential Systems, Causal Deviance, and Reliability
Jesús H. Aguilar
ceptable about a causal chain of events that goes through someone else’s
action, produces an event that satisfies the conditions to count as an
action, and yet fails to be an action.5 At stake is nothing less than the viabil-
ity of the CTA.
Similar considerations have motivated defenders of the CTA to confront
the challenge arising from prosthetic agency and in doing so reap the
benefits of a plausible answer in the form of extra conditions that identify
an intentional action. The most ambitious and promising of all these
efforts by defenders of the CTA is due to John Bishop. Not only does Bishop
offer a causalist answer to the challenge arising from prosthetic agency,
but contained in his answer he also offers a set of necessary and sufficient
causal conditions for a basic intentional action. Not surprisingly he calls
this a “final breakthrough” in the search for the elusive and much-sought-
for causal conditions for an intentional action.6 Furthermore, Bishop’s
necessary and sufficient conditions are the result of an analysis of inten-
tional action that comes from a rich systemic perspective in which the
agent and her contributions to the world of events are at the center of
attention. All these reasons justify our taking a careful look at Bishop’s
proposal and assessing his claim to have found the necessary and sufficient
conditions for a basic intentional action. In the rest of this chapter I first
examine Bishop’s proposal, stressing the way in which he uses the notion
of agential control to deal with deviance arising from prosthetic agency.
Then, I raise some problems for the specific way in which feedback is
supposed to enter into this systemic picture, and I end by suggesting a move
in the direction of reliability to complement Bishop’s otherwise attractive
strategy to tackle basic deviance.
There is a servo-system functioning to match the agent’s intention all right; but
given its detailed architecture, it can hardly count as realizing the agent’s controlled
regulation of his or her bodily movements since the feedback information about
orientation and muscular states does not get carried back to the agent’s central
processing system. (Bishop 1989, 170)
agent’s action. This in itself does not yet distinguish this causal chain from
a nondeviant one, for it is only in the process of receiving information
from the bodily movement that the desired distinction supposedly arises.
Bishop then asks whose brain is getting the feedback information: if it is
the prosthetic agent, then we are dealing with a deviant case; if it is the
main agent, then this is not a deviant case.
The problem with this suggestion is that nothing prevents enriching
the neurophysiologist scenario with the possibility of a further link going
this time from the neurophysiologist back to the subject’s brain—in other
words, an extra link that sends the information received from the subject’s
bodily movement back to the subject’s brain with the help of the neuro-
physiologist. If causal transitivity is sufficient to go in one direction, causal
transitivity should be sufficient to permit the flow of information to go in
the other direction. If this occurs, then both brains receive feedback infor-
mation, and hence this sole feature cannot be what distinguishes cases that
are deviant from cases that are not deviant.
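The transitivity point here can be made vivid with a deliberately toy model (my own construction; the node names and graph structure are hypothetical stand-ins, not anything in Bishop's or Aguilar's texts): represent the causal links as a directed graph and ask which "brains" the feedback signal from the bodily movement can reach.

```python
from collections import deque

def reachable(graph, start):
    """Return every node reachable from `start` along directed links (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Original scenario: feedback from the bodily movement is carried
# only to the neurophysiologist's brain.
original = {"movement": ["neuro_brain"]}

# Enriched scenario: one extra link relays the information from the
# neurophysiologist back to the subject's brain.
enriched = {"movement": ["neuro_brain"], "neuro_brain": ["subject_brain"]}

feedback_targets = reachable(enriched, "movement") - {"movement"}
# In the enriched scenario both brains receive the feedback information,
# so this feature alone cannot distinguish deviant from nondeviant cases.
```

Once the return link is added, reachability from "movement" includes both brains, which is exactly the objection: the feedback criterion no longer discriminates.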
There are different ways in which Bishop could reply to this objection.
One way is to stipulate that it is unacceptable to have a second agent who
sends back the information to the first agent in the way suggested, much
like Peacocke’s stipulation rejecting the possibility of prosthetic agency.
But this answer is clearly at odds with Bishop’s acceptance of nondeviant
cases like that of the assistant. If with Bishop we can conceive that the
assistant is capable of bridging the subject’s brain with the subject’s behav-
ior, then we can enrich this thought experiment and conceive that the
assistant is capable of bridging the subject’s behavior with the subject’s
brain. This of course will not pose a problem for Bishop’s proposal.
However, if with Bishop we can also conceive that the neurophysiologist
is capable of bridging the subject’s brain with the subject’s behavior, then
nothing stops us from similarly enriching this thought experiment and
conceiving that the neurophysiologist is capable of bridging the subject’s
behavior with the subject’s brain. This does create a serious problem for
Bishop’s proposal.
Alternatively, Bishop can answer this objection by accepting that if the
feedback information that allows the first agent to exercise his control
reaches his brain in the suggested way, then strictly speaking we do not
have a case of deviance. The enriched neurophysiologist scenario would
involve a rather strange and circuitous causal path, but nonetheless a
nondeviant one insofar as the relevant information is reaching the subject.
In fact, this alternative possibility, where the neurophysiologist acts as a
functional bridge of feedback information, is consistent with a systemic
intrusion is seen as undermining the control of the first agent over the
movements of his body. However, against Bishop’s diagnosis, the lack of
control arises not because some feedback information is misdirected or
unavailable to the first agent, but rather because typically the intervention
of an agent involves the breaking of the relevant causal chain, bringing
with it causal unreliability.
In fact, this is the type of scenario that essentially troubles Peacocke
with respect to cases involving a prosthetic agent. For instance, in the case
of the neurophysiologist, he suggests that:
When we say that an event is, under a given description, intentional of a person,
we normally imply that that person was the originator of that event. It is not clear
whether there is such a person as the originator of the bodily movement in our
example, but if there is, it is certainly not the person whose brain the neurophysi-
ologist is inspecting. (Peacocke 1979b, 88)
x’s being ϕ differentially explains y’s being ψ iff x’s being ϕ is a non-redundant part
of the explanation of y’s being ψ, and according to the principles of explanation
(laws) invoked in this explanation, there are functions . . . specified in these laws
such that y’s being ψ is fixed by these functions from x’s being ϕ. (Peacocke 1979b,
66)
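Peacocke's definition can be set out schematically. The rendering below is my own rough formalization: the predicate NonRed and the function symbol f are my labels for Peacocke's "non-redundant part" and the law-specified functions, not his notation.

```latex
% My own schematic of Peacocke's definition of differential explanation;
% NonRed and f are stand-in labels, not Peacocke's notation.
\[
  \mathrm{DiffExp}(\varphi x,\ \psi y) \;\equiv\;
  \mathrm{NonRed}(\varphi x,\ \psi y)
  \;\wedge\;
  \exists f \bigl[\, f \text{ is specified in the covering laws}
  \ \wedge\ \psi y \text{ is fixed by } f \text{ from } \varphi x \,\bigr]
\]
```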
we know that this cannot be the reason for considering this case deviant,
as the assistant case shows. Rather, what is crucial is the level of reliability
accompanying the intervention of the neurophysiologist. In particular,
what is crucial is the role played in this intervention by that central agen-
tial feature which can easily increase the unreliability of any system pos-
sessing it, namely, the “up-to-ness” implicit in autonomous agency. Thus,
if the neurophysiologist intervenes in such a way that it is up to her to
decide to materialize the subject’s intention, then the chances that this
involves a reliable connection diminish and the chances of its being
deviant increase, for it is easy to imagine all sorts of considerations that
may lead her not to satisfy the subject’s intention. However, if her inter-
vention is very similar to the assistant’s, namely, essentially blind to the
content of the subject’s action triggering intention, then the chances that
this involves a reliable connection increase and the chances it involves
deviance diminish.
Furthermore, a second point concerning deviant cases involving prosthetic
agents is that making use of reliability shows that there is an inevitably
misleading oversimplification in the neurophysiologist case as it is
normally presented in the literature. For if indeed this case can be
differentially explained, then this is the best proof that it is reliable, and hence,
that it is not deviant. What is misleading, of course, is that we are asked
to consider a single successful case obviating the statistical evidence that
would show it to be a reliable one. Presumably, this is a fair move when
trying to pull the intuitions associated with the obstacle of an originator.
But as soon as we recognize that there are nondeviant cases involving
prosthetic agents this move loses much ground. Thus, as it stands, we
strictly speaking lack the relevant information that would settle the issue
as to whether indeed the neurophysiologist case is deviant or not. However,
this does not amount to a proof that cases involving prosthetic agents are
a real source of deviance. All it shows is that one can, with a little imagina-
tion, construct cases where the reliability of a normal causal connection
diminishes, and, hence, where the presence of deviance proportionally
increases. But that is all.
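The statistical point, that a single successful case cannot establish reliability, can be illustrated with a toy simulation (entirely my own construction; the function name and probabilities are hypothetical):

```python
import random

def estimate_reliability(success_prob, n_trials, seed=0):
    """Toy model: on each trial an intention either produces the intended
    movement (with probability `success_prob`) or fails to. The observed
    success frequency over the trials estimates the chain's reliability."""
    rng = random.Random(seed)
    successes = sum(rng.random() < success_prob for _ in range(n_trials))
    return successes / n_trials

# A single trial can "succeed" even on a highly unreliable chain,
# so one successful case is no evidence of reliability ...
one_shot = estimate_reliability(success_prob=0.05, n_trials=1, seed=7)

# ... only a run of trials reveals whether the connection is reliable.
unreliable = estimate_reliability(success_prob=0.05, n_trials=10_000)
reliable = estimate_reliability(success_prob=0.95, n_trials=10_000)
```

The single-trial estimate is always 0 or 1 and so tells us nothing about the underlying connection; only the long-run frequencies separate the reliable from the unreliable chain.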
Nevertheless, it is important to note that although appealing to reli-
ability seems to take care of the sources of deviance involving prosthetic
agents, the larger question remains as to whether indeed reliability is an
objective feature of causal chains of events that separates those that are
not deviant from those that are. My view on this is that reliability together
with sensitivity and perhaps some condition involving feedback à la Bishop
provides the CTA with the conditions to identify a basic action, and hence
Even assuming that it is possible to deal with deviance in cases that involve
prosthetic agents by appealing to the presence or absence of reliable causal
chains, we still need to know why an unreliable causal chain undermines
agential control and provides the basis for deviance. This question is even
more pressing when it is not hard to imagine cases where an agent per-
forms an intentional bodily movement despite the unreliability of the
causal chain involved in the production of such movement. For this seems
to show that agential control can be preserved even when the relevant
causal chain is not reliable.
For example, it is conceivable that, after trying many times, a subject
whose arm is paralyzed succeeds only once in moving her arm. Although
the movement is generated through an unreliable causal chain, there is no
obvious reason why in this case it is not intentional. It appears, then,
that the reliability or unreliability of the relevant causal chain is indepen-
dent of the intentional status of a bodily movement. But, if this is the case,
then it is unclear how reliability can be seen as a necessary element in the
production of an action.
Nevertheless, a more careful analysis of, say, the case of the single action
performed by the paralyzed subject reveals that reliability does play a con-
stitutive role in the recognition of this subject as an agent and the accep-
tance of her single fortuitous arm movement as an action. Mutatis mutandis,
the same considerations apply to every case involving an intentional
action that results from an apparently unreliable causal chain.
As has been suggested earlier, the paralyzed subject would count as an
agential system insofar as she is capable of producing specific types of
bodily movements that correspond to the content of some specific type of
internal states capable of causing such bodily movements. However, in
order to make sense of these different types of events and states into which
an agential system can enter, more than a causal relationship among par-
ticular events and states is required. This extra requirement is that the
relevant causal connections among the particular events and states of the
system are reliable enough to establish a distinctive type of event or state
that is exclusively related to another distinctive type of event or state. That
is, the particular events and states of the agential system are grouped into
relevant types of events and states insofar as they are captured by reliable
connections. Only then do we have the required types of events and states
presupposed in our description of an agential system and its functions.
If this is correct, then the very types of bodily movements available to
an agential system turn out to be a function of the reliable connections
that link such types of intended events with their corresponding internal
types of states, typically, with types of intentions. In fact, the production
of such types of intentions is grounded on some further cognitive state of
the system that conveys the information that indeed the intended type of
bodily movement is potentially executable, again, because a reliable con-
nection is assumed to exist between the state of having a specific type of
intention and the production of a specific type of bodily movement.15
Therefore, and despite appearances, the single successful movement of
the paralyzed subject is an action insofar as it is an instantiation of a type
of behavior that is reliably connected to a specific type of intention. Her
agential effort is to be understood as producing an intention to move her
arm, hoping that it will in turn give rise to what it normally produces,
namely, the movement of her arm. It just happens that in her abnormal
situation the reliable connection will not likely be instantiated because of
her physiological problem. But her intention remains and it is the rational
one to have if indeed she wants to move her arm, since under normal
conditions this is the most reliable way to give rise to such movement.
Note how the very basis of our rationalizing not just her single bodily
movement but her forming the intention to produce it against all odds is
precisely the assumption that under normal conditions that is what it takes
to move one’s arm. The main point here is that normalcy can only be
cashed out in terms of reliability.
Acknowledgments
Notes
1. Peacocke 1979b, Lewis 1980, and Bishop 1989, although differing with respect
to the details of their respective accounts of the sensitivity condition (in the case
of Lewis, actually dealing with perception), are the key sources for an analysis of
this condition.
4. The most common cases where an agent transitively causes another agent’s
action are so-called interpersonal interactions (Hart and Honoré 1959). They involve
a first agent causing a second agent’s action and hence producing a causal chain of
events that, as in the present cases, also goes through an agent’s mental events.
However, the main difference between the present cases involving prosthetic agents
and those of interpersonal interaction is that in cases involving prosthetic agents
the relevant chain produces a basic action with the help of someone else’s action,
whereas in interpersonal interactions a causal chain starts with an agent’s basic
action and ends with another agent’s basic action. Nonetheless, these two cases
exhibit the versatility of the CTA with respect to its use of causal transitivity. I have
explored some of the features of such interpersonal interactions in relation to the
exercise and attribution of agency in Aguilar 2007.
5. At this juncture one may be willing to bite the bullet and propose that cases like
the neurophysiologist’s are not deviant. That is, one might propose that the causal
chain satisfies in the relevant way the content of the motivating internal events of
the first agent and that despite its going through a second agent’s action and delib-
eration it nonetheless counts as one of the first agent’s actions. An enriched version
of this move will be considered later and shown to be a rather unappealing
strategy.
9. However, as we will shortly see, internal feedback is problematic. Note also that
feedback is essentially a cognitive feature of a system. Hence, this opens the door
to an epistemic analysis of deviance and reliability that apparently has an impact
on this type of issue in action theory.
10. An interesting question here has to do with the specific mental states associated
with the “mental processes” that Bishop speaks about that are fed by the bodily
movement. Are they the same mental states that started the causing of the bodily
movement, or are they different? If they are the same mental states, then Bishop
needs to defend the idea that such mental states are sustained. If they are not the
same mental states, then he needs to defend the idea that there is some way in
which different mental states (say, different intentions) are able to work in tandem
and respond to feedback information. It seems that the most plausible proposal is
the first one. However, let us note that questions arise as to how a state is capable
of having the required sustaining nature to count as “the same” intention and how
this intention is supposed to monitor the ensuing behavior. These questions need
to be answered if one is to have a complete account of the elements involved in the
production of a basic intentional action.
11. Note that this move effectively undermines the agential control that the neu-
rophysiologist has, for now she is seen as a neutral satisfier of whatever is intended
by the patient, that is, the “up-to-ness” of her intervention has been reduced to a
bare minimum.
12. Peacocke himself disagrees with this connection between differential explana-
tion and reliability, thinking that reliability is not a way to complement differential
explanation but rather is a rival theory. See, e.g., Peacocke 1979b, 91–95.
13. Hence, Peacocke, when clarifying the nature of the “fixing” relationship
involved in differential explanation, states that “‘fixed’ adverts only to the unique-
ness of determination by the function (as a matter of mathematics in numerical
cases): it does not imply that the laws are not statistical” (Peacocke 1979b, 67). The
laws that Peacocke has in mind are of this general form: (∀x)(∀n)(∀t)((Fxt & Gxnt)
⊃ Hxk(n)(t + δt)), where n ranges over numbers, t over times, and k is a numerical
functor. This does seem to take care of the main concern raised by Sehon (1997),
who concentrates his criticism of Peacocke’s proposal on the possibility of satisfying
particular causal chains of events as opposed to statistical types of causal chains of
events.
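For legibility, the law schema quoted in this note can also be set out in display form (the same formula, merely retypeset in LaTeX):

```latex
% Peacocke's general law form (Peacocke 1979b, 67), retypeset:
% n ranges over numbers, t over times, and k is a numerical functor.
\[
  (\forall x)(\forall n)(\forall t)\,
  \bigl( (Fxt \mathbin{\&} Gxnt) \supset Hx\,k(n)\,(t + \delta t) \bigr)
\]
```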
14. Here is Peacocke again alluding to this feature: “There need to be many cases
in which conditions producing sensitive chains actually obtain and produce a bodily
movement believed to be a ϕ-ing . . . in order for us to be able to discern an under-
lying pattern of beliefs and desires” (Peacocke 1979b, 109).
15. This picture does not preclude accepting that agential systems can be much
more complex than this simple view, which seems correct for most intentional
bodily movements like walking, eating, or moving an arm. Things get more
complicated when the type of intended action and hence the accompanying inten-
tion require from the agential system things like guidance or wholehearted
commitment.
7 What Are You Causing in Acting?
Rowland Stout
My target for attack in this essay is the fairly widespread view in the phi-
losophy of action that what an agent is doing in acting in a certain kind
of way is causing an event of some corresponding type. On this view
agency is characterized by the agent’s causing of events. To pick one of
many manifestations of this view, here are Maria Alvarez and John Hyman:
We can describe an agent as something or someone that makes things happen. And
we can add that to make something happen is to cause an event of some kind.
(Alvarez and Hyman 1998, 221)
In raising your arm you are causing the event of your arm’s rising.
Such claims about the causal nature of action are sometimes presented as
conceptual claims: claims about when it is correct to describe someone as
performing such an action. But I am interested here in the possibility of
making a constitutive claim: a claim about what such an action is. Actions
seem to be causings in some sense yet to be worked out. The causal theories
that I am questioning take someone’s action of raising their arm to consist
in that person (or perhaps some of his or her mental states or events)
causing the event of his or her arm’s rising.
Despite its widespread philosophical currency, there is something puz-
zling about the idea of causing an event. The relation of causing, like the
property of acting, is not a timeless relation. By this I mean that when we
attribute this relation or property to things we must specify or presume a
time for the attribution. Gavrilo Princip was assassinating the Archduke
Ferdinand at one time but not a year earlier or a year later. But events are
things that are usually predicated timelessly in that sense; standardly,
when philosophers of causation talk about a causal relation between
events, they take it to be a timeless relation. Saying that the assassination
of the Archduke Ferdinand caused the First World War, even though we
employ the past tense of the verb “to cause,” is to attribute a relation time-
lessly. It is not that it caused it then and continues to cause it now. Rather,
we can attribute this relation between the events without having to specify
a time for that attribution.
So a standard approach to the philosophy of causation focuses on the
timeless relation of causality holding between events. And that is why it
seems appropriate to think of this in terms of the relation of counterfactual
dependence, for example, which holds timelessly in the same way. But
when we address the constitutive question about actions we are concerned
with processes that happen at one time and not at others. And if what
Princip was doing was causing something, then he was doing it then but
not at other times.
Given this, how are we to understand the claim that he was causing at
some particular time an event—the event of the death of the Archduke or
perhaps the event of the First World War? One way to understand his
causing these events is in terms of his initiating processes, the completion
of which constituted these events.1 We can say that as Princip was squeez-
ing the trigger he was initiating a process in the gun, giving momentum
to the bullet. This process in turn initiated the process of the bullet moving
under its own momentum, which initiated a process of the bullet causing
a perforation of the jugular vein of the Archduke, and then the process of
the Archduke dying as a result of this damage. Perhaps this initiated an
international relations process leading to war being declared and pursued.
Princip set the ball rolling, as it were, by squeezing his finger. Once he had
done this, various mechanisms outside of him took over one after the
other, resulting eventually in the Archduke’s being dead. In this way we
can say that in squeezing the trigger he was causing the event that was the
death of the Archduke. He was initiating these things at one time and not
at others. And his initiating the dying of the Archduke was his action of
killing the Archduke.
The idea under attack in this essay is that all actions are like this.
The target idea is that in acting I am causing an event by initiating (or
perhaps sustaining) a process whose completion is that event. Given this
idea, my role as agent is separate from the process that is initiated. Even
if we start off by identifying my action with me and some event in a causal
relation, the bit that is really associated with my agency does not include
that event itself. We are forced to accept the model of action in which I
do my stuff and then as a result the world does its—a model that forces
agency inward.
But the example of killing someone, which leads to this idea, may have
peculiarities that mean that its treatment cannot be generalized to all
actions. Philosophy of action has an unhealthy obsession with murder. It
also needs to have something to say about phoning someone up, saying
something, going for a walk, eating a healthy lunch, writing a paper,
buying a train ticket, and so on. It is not at all clear that what we should
say about killing people will generalize to these other cases. In particular,
it is a peculiarity of Princip’s action of killing the Archduke that it is an
initiation of a series of processes.
On the face of it, this aspect of doing something and then waiting to
let nature take its course is not shared by all actions. My writing a paper,
buying a train ticket, going for a walk, or saying something are not obvi-
ously cases of initiating processes; in these cases I do not do my bit and
then sit back and let nature take its course. In none of these cases is there
a plausible candidate for being the event that is caused by me as I act. For
example, the event of the paper being written is not the completion of
some process initiated by me as I exercise my agency. It is the completion
of the process of my exercising my agency. It is my action, not some further
event caused as I act. Indeed, even in the assassination case, Princip did
not sit back and let nature take its course. For in reality he did not just
take a shot at the Archduke. If he had missed he would have shot at him
again. Seeing that he had hit him, he went on to shoot the Archduke’s
wife instead.
So on the face of it, many actions are not causings of events. But
this initial rejection of the target idea would be too quick if you thought
that every action is really a moving of parts of one’s body and that
every such moving is a causing of the event of those parts of one’s body
moving. Although there is no plausible candidate for being the event
caused in writing a paper, there seems to be a plausible candidate for being
the event caused when I move my body—namely, the event of my body
moving.
Donald Davidson famously argued for the conclusion that “we never
do more than move our bodies: the rest is up to nature” (Davidson 1980,
59). His argument has two premises. The first premise is that whatever we
do, we do by moving our bodies. The second premise (following Anscombe
1963) identifies our actions with the things by which we do them. So, if
the Queen killed the King by emptying the vial into his ear, her action of
killing the King is the same event as her action of emptying the vial into
his ear. And if she did that by moving her hand in a particular way, then
it is the same event as that movement of her hand.
One might deny either premise. In particular, it is not clear that what-
ever we do, we do by moving our body. Think of the action of checking
whether the baby is asleep. There may be some moving of bodies involved
in this. But there is also plenty of watching and listening. And watching
and listening are not done by moving your body. Arguably all action
involves some perceptual feedback of this sort. Or think of the action of
walking. Only when you are relearning to walk after a major injury do you
do it by moving your legs in certain ways. And even then what you do is
more than just change the relative positions of bits of your body; you have
to employ the friction of the surface you are walking on to propel the
weight of your body forward. This is not just putting one foot in front of
the other. Equally, it seems to be the wrong answer to the question, “How
do you write a philosophy paper?” to say “You do it by moving your fingers
in a certain very complicated way.”
But even if we accepted that every action is a moving of parts of one’s
body, to get to the target idea under attack in this essay we would also
have to accept the claim that moving part of one’s body is causing the
event of that body part’s moving. And I want to reject that too. In particu-
lar I want to reject the following claim:
In raising your arm you are causing the event of your arm’s rising.
If such a claim is to stand a chance, the event of your arm's rising had better
be distinct from your action of raising your arm. An action cannot be
identical with the causing of itself. In the final section of this essay I will
challenge the idea that the event of your arm’s rising is usually distinct
from that of your raising your arm. Although there may be odd examples
where we can identify a distinct event of your arm’s rising that is caused
by you as you raise your arm, I will argue that this is not the normal case.
But first I want to question the approach to causality and processes that
drives one to this sort of theory.
It does seem clear that in raising your arm you do cause your arm to
rise. But we can resist the further step to saying that in raising your arm
you cause the event of your arm’s rising. The phrase “your arm to rise” is
not really a noun phrase at all and certainly does not encode some implicit
reference to an entity that is the event of your arm’s rising.
To echo Zeno Vendler’s useful treatment of results and effects, results
are fact-like rather than event-like. Vendler gives the example of a pro-
longed frost in which the water under the pavement turned to ice, which
caused the ground to swell, which caused the pavement to crack (Vendler
1962, 13). The phrase “the ground to swell” can be nominalized to “the
swelling of the ground.” And once nominalized in this way we can describe
it as a result of the water turning to ice. But here we can talk interchange-
ably of the result of the water turning to ice being the fact that the ground
swelled and its being the swelling of the ground. Other ways of understand-
ing the phrase “the swelling of the ground” are more event-like, however.
For example, if we say that the swelling of the ground was gradual, it is
clearly the event, or perhaps process, rather than the fact that is being
described as gradual. With this distinction in mind, the result of your
raising your arm looks like it must be taken to be the fact that your arm
rises, not a particular event or process of rising.
The need to locate an event as the result of a causal process characterizes
what we might think of as a Humean approach to the relationship between
causation and particular happenings in nature. According to this approach,
causation is not to be found in real things but between them. The cause
and effect are taken to be real things, and are usually described as events
in modern Humeanism. But the causing is not taken to be another thing.
In the Humean model, a basic happening is usually understood very
simply as something being in one state at one time and then in a different
state at a subsequent time—so-called Cambridge change. Happenings are
sequences of states. This model perhaps reached its classic formulation in
Russell’s conception of motion. Russell wrote: “Motion is the occupation
by one entity of a continuous series of places at a continuous series of
times” (Russell 1903, section 442). This claim can be extended to processes
generally, so that we have the claim that a process is a series of states of
affairs. For each kind of process there is a characteristic type of series of
states. The obtaining of a succession of such states is the Russellian concep-
tion of a process.
In this model causality is a relation external to happenings—not itself
something that happens but lying instead between those things that do
happen. Causality can only be part of a happening in this Humean model
if the happening consists of a sequence of lesser happenings linked by the
causal relation. But on this model, if what happens is taken to be the sum
of the component happenings, then causality is not really part of what
happens; hence Humean skepticism about causation.
Opposed to the Humean model of causality as a relation between real
things is an Aristotelian approach, which allows causings to be real things.
Causings—or causal processes—are basic constituents of our dynamic
world; causality is internal to happenings. And unlike the Humean model,
since this model takes the causing itself to be an identifiable particular in
the world, there is no need to take the result to be one as well.
Applying this to the case of human agency, the answer to the question
of what you cause when you act need not be that you cause some constitu-
ent of the dynamic world—some event or process. What you cause when
you raise your arm is not the process or event of your arm’s rising. What
you cause is your arm to rise, and that need not be taken to be an entity
itself.
If causings or causal processes are identifiable elements of nature, then
they are things that we can identify at one time but which have implica-
tions, conditional on nothing interfering, for what will happen at a later
time. So these things incorporate natural necessity of a sort; when the
causal process is happening, what is present is the conditional necessity
for certain results. This means that there are two aspects to its nature: what
makes it identifiable at the time, and what its existence at that time
requires to be the case afterward.
If you identify one of these dual natures in an object, O, you can see
that O has the property that results, R, will follow if nothing interferes.
You are identifying a conditional necessity in O for R. To put it another
way, you are identifying the actualization of a potentiality in O for R. The
reason for calling it the actualization of a potentiality is that there is often
a need to distinguish between more or less stable intrinsic properties of
the object that contribute to this dual nature and those features that can
be introduced from outside but which also contribute to this dual nature.
We can call O’s having these relatively stable intrinsic features O’s having
a potentiality for R, and O’s having all these features the actualization of
O’s potentiality for R. A car has the potentiality to accelerate under pressure
to the gas pedal, even when the engine is not switched on. That potential-
ity is actualized only if the engine is also running, the gears are engaged,
and so on. But since I am not concerned here with unactualized potentiali-
ties I do not need to pursue this distinction now.
We do not have to limit results to single end-points of causal processes.
A causal process typically has a characteristic sequence of stages as its
result. For example, we might identify the process of an object moving in
a straight line at a certain velocity with the process of that object’s momen-
tum causing it to continue traveling in that line at that velocity. It would
not make sense to identify what is caused here with the process of the
object traveling under its own momentum, since that is the causal process
itself and so cannot also be its result. But nor should we just identify what
is caused with the state of the object being at a certain end-point. The
result is that it continues traveling in a certain direction at a certain veloc-
ity, and this requires that it be not just at the end-point at a later time but
be happening is not just that that structure of stages obtains, but that there
is present a potentiality for such a structure. When a potentiality is fully
present (or actualized), the Russellian process that it is a potentiality for is
not yet complete. The Aristotelian process is fully present; the conditions
for the potentiality are fully realized. But what it is a potentiality for is still
incomplete.2
So a potentiality has two sets of conditions. It has underlying conditions
whose satisfaction means that the potentiality is actualized. And it has the
conditions that characterize its results, whose satisfaction follows from its
being actualized. This essentially dual nature of Aristotelian processes can
be easily missed with certain readings of the idea of the actualization of a
potentiality. If we take a potentiality to be merely a possibility and its
actualization to be nothing more than the thing the possibility is a pos-
sibility of, then this dual nature is lost. The possibility of there being a cup
on the table in front of me is now actualized; there is a cup on the table
in front of me. But nothing is happening.
In the same way if we think of the actualization of a potentiality as the
exercise of a power, then the notion is trivialized. My power to be frighten-
ing is exercised as I reveal my most hideous facial expression; but again
nothing is happening. My exercising my power to be frightening is not
separate from my being frightening. There is nothing gained by describing
my acting as my exercising my power to act. What is crucial for the Aris-
totelian idea to yield a proper notion of a process is that the actualization
of the potentiality be distinct from whatever that potentiality is a poten-
tiality for.
Why has the Aristotelian conception of processes been so unpopular?
Empiricists like Hume assumed that one could not experience potentiali-
ties. The perceivable qualities that were the building blocks of experience
for these early modern empiricists were supposed to be present in indi-
vidual flashes of experience. Potentialities, if they are perceivable at all,
can only be perceived through a process of engagement with the thing
that has the potentialities. A purely passive model of perception, in which
a quality in an object transfers itself to the mind of the perceiver and
imprints itself on that mind, can make no sense of the perception of poten-
tialities. And if it follows that we can have no idea of a potentiality, then
we can have no more idea of the realization of a potentiality.
But if that is the objection to the Aristotelian model it should be
dropped, since the passive model of perception as a kind of imprinting
from world to mind no longer has any currency. You perceive the world
by engaging with it, exploring, interrogating, experimenting, tracking.
It is a necessary condition of the truth of “a φTs b” that a cause b to φI. In that case
movementsT of the body are events that cause body movementsI. (Hornsby 1980, 13)
Alvarez and Hyman (1998) argue that I am the cause of the event of my
arm’s rising. Since my action of raising my arm is identified with my
causing that event, it cannot be identified with that event itself. They go
on to say that it is not an event at all. In this respect they hold on to part
of the Humean conception of causal processes: events are the effects of
causal processes and sometimes the causes, but they are not the bits in the
middle—the causings. If causings are not allowed any metaphysical identity,
we are faced with the choice between denying that actions are causings and
denying that actions have any metaphysical identity.
If the event of my arm’s rising is distinct from the event of my raising
of my arm, then it might appear to be a good candidate for being the thing
that is caused when I raise my arm. But as Ursula Coope (2007) has recently
argued, my arm rising, if it is taken to be an Aristotelian process, should
be taken to be the very same process as the process of my raising my arm.
And although this claim may seem to fly in the face of Hornsby’s account
of action, we can see that it is very close to something she has been arguing
for too. For Hornsby rejects the idea that the event of an arm rising is
something we could describe as physical rather than as mental. The event
of the arm rising is, then, to be identified not with a series of changes in
position of the arm, but as something that may essentially involve the
agent.
This is expressed clearly in the postscript to “Bodily Movements, Actions,
and Epistemology” (Hornsby 1997, 102 ff.), where she considers a disjunc-
tive approach to bodily movements. She raises the question of whether a
bodily movement that is just a reflex or the result of some external manipu-
lation could have been the sort of movement that is associated with action,
and she answers that it is fairly evident that it could not (ibid., 103).
Despite the fact that one might be ignorant as to whether a movement is
or is not associated with the action of an agent, whether it is or is not
associated with an action is essential to its identity.
Hornsby cannot here be identifying an arm rising with a series of posi-
tions of the arm through the air. She is not thinking of the Russellian
conception of a process of arm-rising. For there is nothing essential to the
series of states that an arm is in when it rises that links the rising with an
action. The natural alternative is that an arm rising is being considered as
an Aristotelian process; it is the realization of an arm-rising potentiality.
It is a different potentiality that is realized when an arm rises as a result
of the agency of the owner of that arm from the potentiality that is real-
ized when an arm rises as a reflex or because of external manipulation.
And it seems reasonable to say that the realizations of these different
potentialities are also different.
It does not follow that there is no process in common between these
processes. There might be a highest common factor between an arm rising
as a result of external manipulation and an arm rising as a result of the
arm owner’s agency. If there were, then this might count as a neutral arm-
rising. Where might this potentiality be? Do one’s muscles have the poten-
tiality to raise one’s arm? No. They have the potentiality to shorten the
distance between the points at each end of the muscles, but that is some-
thing else.
One might try to argue that the system of muscles in the arm and
shoulder has the potentiality to raise the arm inasmuch as under certain
circumstances of electrical nerve inputs to these muscles the arm will rise.
But in fact this is not the case. Those nerve inputs only result in the arm
going up if the arm is oriented in exactly one way at the start of the
process—both with respect to the body and with respect to gravity—and
has the precise weight it has, the muscles have precisely the responsiveness
they have, and so on. If any of these factors is different then that set of
nerve inputs applied to the muscle mechanism will result in something
quite different from the arm going up.
series of stages of my arm’s position in space. So, why not identify it with
my action of raising my arm?5
It might be thought that the process of my arm rising must be different
from the process of my raising my arm since they have different agents.
One is the process of me doing something; the other is a process of my
arm doing something. But my arm is the patient rather than the agent in
this process. It is not raising itself; it is being raised by me. My arm rising
under my agency is the same process as my raising my arm, just as the
butter melting under the sun’s agency is the same process as the sun
melting the butter. This appears to be Aristotle’s view (Physics, Book 3,
chapter 3; and see Coope 2007, 123–124). So the process of my arm rising
is the process of my arm rising under some agency. There are not two
processes occurring here; there are not two potentialities being realized or
two mechanisms working.
If this is right then what is caused when I raise my arm is not normally
the process of my arm’s rising. Although I cause my arm to rise I do not
normally cause the process of my arm’s rising. And if my raising my arm
is correctly construed as the realization of some potentiality in me, then
this potentiality is not the potentiality for the process of my arm rising,
also construed as the realization of some potentiality.
So, what is the potentiality whose realization is my raising my arm (or
my arm rising) a potentiality for? Coope (2007, 114) argues that it is not
the potentiality for another process, but the potentiality for a state to
obtain. In particular, it is the potentiality for my arm to be up. But this is
too simple. For it is essential to my raising my arm that my arm pass
through all the appropriate stages of a rising. The complete realization of
the potentiality cannot be characterized just by an end-state. Suppose that
I get my arm to be up by pulling it in close to my body and then shooting
it out again at a higher angle. I haven't in this case raised my arm, though
I have done something that results in it being up.
What seems to be the natural candidate for the job of being the thing
that my arm-raising potentiality is a potentiality for is the Russellian con-
ception of the process of my arm rising. It is not just the end-state of my
arm being up but a particular kind of structure of stages between my arm
being down and my arm being up. There is a structure of stages character-
istic of an arm-rising, and my raising my arm is the realization of a poten-
tiality for a series of states that match that characteristic structure.
So I propose to extend Coope’s account by saying that the process of
my raising my arm or my arm rising is the realization of a potentiality for
the arm to be in a series of states characteristic of arm-rising, rather than
just being the realization of a potentiality for the arm to be up. In raising
my arm and realizing that potentiality, I am causing my arm to be in that
characteristic series of stages. But I am not initiating a separate process of
my arm’s rising, nor am I causing the event of my arm’s rising.
Notes
2. This might make sense of Aristotle’s claim that processes are incomplete actual-
izations of potentialities (1983, 201b31–33). What is incomplete is not the degree
to which the potentiality is actualized but the structure of stages that characterizes
what the potentiality is for.
4. Even if there are deliberate bodily movements that do just consist in instructing
subpersonal mechanisms to do their stuff, it would be absurd to generalize this to
all bodily movements, including controlled movements like raising one’s arm. I may
be able to make my arm move, though not in a very controlled way, by initiating
some process of arm-moving. But this, like the example of Princip’s assassination
of the Archduke, would be a rather special case.
5. Adrian Haddock (2005) also argues in this way and recommends that Hornsby
adopt the stronger disjunctive approach to body movements.
8 Omissions and Causalism
Carolina Sartorio
1 Introduction
Third, a causalist can claim that there are two (or maybe more) concepts
of causation, and that omissions and other absences can only be causes
and effects in the sense captured by only one (or some) of those concepts.
For example, it could be argued that there is a “productive” concept of
cause and a “counterfactual” concept of cause (as in Hall 2004), and that
omissions can be causes and effects in the counterfactual sense but not in
the productive sense. Still, to the extent that both concepts are genuine
concepts of causation, it is open to the causalist to say that an agent behaves
intentionally when his moving in a certain way, or his not moving in a
certain way, is caused by the agent’s intentions in the normal way.
On any of these views, then, what makes an omission intentional is
similar to what makes an action (a “positive” action) intentional: the fact
that a relevant piece of behavior (positive or negative) is caused by the
agent’s intentions in the normal way. For example, my failure to jump
into the water in Drowning Child is an intentional omission because I
formed the relevant intention not to jump in and such intention caused
my not jumping in, in the normal way. This is parallel to the way in which,
if I had intentionally jumped into the water to save the child, my forming
the opposite intention (the intention to jump in) would have caused the
bodily movement consisting in my jumping in, in the normal way. As we
have seen, there are different ways in which a causalist can resolve the
issue of how omissions can be causes and effects. But, to the extent that
omissions can be causes and effects, it might seem that causalism has the
resources to account for intentional omissions in basically the same way
it accounts for intentional (positive) actions.8
In what follows I argue that omissions pose a recalcitrant problem for
causalism, that is to say, a problem that persists even under the assumption
that omissions can be causes and effects in any of the ways outlined above.
Interestingly, it is a problem that bears some similarities to what can be
construed as a different challenge to the view: the challenge of the causal
exclusion of the mental by the physical (Kim 1993). This is because the
recalcitrant problem of omissions can be seen as an exclusion problem.9
Briefly, the exclusion problem for the mental and the physical is this.
According to nonreductive physicalism, a widely held view in the philoso-
phy of mind, mental states are realized by, but not identical to, physical
states. For any piece of behavior that a mental state allegedly causes, there
is an alternative explanation that appeals only to the underlying physical
state. We want to say that the physical world is “causally closed,” and thus
that the physical state is a cause of the behavior. Hence, it is tempting to
conclude that the mental states don’t really do any causal work. And, if
so, causalism doesn’t seem to get off the ground. Many people think that
this problem is not intractable.10 But what I will suggest is that the problem
that omissions pose for causalism is an exclusion problem of its own: one
that threatens not to show that mental states in general are causally inef-
ficacious, but only that, in the specific case of omissions, the relevant
mental states (in particular, intentions) cannot do the causal work that the
causalist would want them to do. For there is an alternative, and arguably
better, explanation that doesn’t appeal to those mental states, even if
mental states in general are causally efficacious, and even if omissions in
general are causes and effects.
I say “roughly” because many causalists would reject the idea that inten-
tionally φ-ing requires forming an intention to φ. Still, the consensus is that
a closely related intention is required.11 For simplicity, I will assume that
the intention in question is the intention not to jump in.
At first sight (again, assuming that there is no problem with absences
being causes and effects, or with mental events and states in general being
causes and effects), Claim 1 seems very plausible: it seems natural to say
that I didn’t jump into the water because I formed the intention not to do
so. On the face of it, intentions (and other mental events or states) can
cause people not to do things just as they can cause them to do things.
For instance, it seems that my abstaining from voting in an election can
be the result of a careful process of deliberation ending in my forming the
intention not to vote, just like my voting for a certain candidate can be
the result of a careful process of deliberation ending in my forming the
intention to vote for that particular candidate. Thus it might seem that,
once we resolve the issue of how omissions can be causes and effects, and
the issue of how mental events and states can be causes and effects, the
claim that causalism can account for intentional omissions in the same
way it accounts for (positive) actions is very plausible. I will argue, however,
that this view is misguided and that Claim 1 should be rejected.
I said that I would bypass the question of whether omissions should be
regarded as actions in their own right, on a par with “positive” actions. By
Whereas Claim 1 says that the cause of O2 is what I did (mentally), Claim
2 says that it is what I omitted to do (mentally). Which one is more likely
to be true? Or can both of them be true simultaneously? In the next section
I argue for the truth of Claim 2 and for the idea that Claim 2’s truth
threatens to undermine Claim 1’s truth. I will call this thesis the thesis of
“causal exclusion for omissions” (CEO).
(P1) Claim 4 is true and its truth undermines the truth of Claim 3.
(P2) If P1 is true, then Claim 2 is true and its truth undermines the truth
of Claim 1.
In other words, the argument suggests that the best way of conceiving
my relationship to the outcome of the child’s death is as a negative rela-
tionship throughout the causal chain. This includes my mental behavior:
the child died because of what I omitted to do, including what I omitted
to intend to do. Even if I also formed a positive intention not to be involved
in certain ways, the fact that I formed that intention seems causally irrel-
evant; all that was causally relevant is the fact that I omitted to intend to
be involved in certain ways. The argument relies heavily on an analogy
between bodily acts and mental acts. The main claim is that, if what
accounts for the outcome of the child’s death is what I didn’t do “extra-
mentally,” then what accounts for what I didn’t do extramentally is, in
turn, what I didn’t do—this time, mentally.
An important clarification is in order. I don’t mean to suggest that omis-
sions can only have other omissions as causes—or, in general, that absences
can only be caused by other absences. All I want to suggest is that this is
true of the type of situation that is our focus here. It is certainly possible
for omissions—and for absences in general—to have positive occurrences
as causes. Imagine that, besides not jumping in myself, I talked the life-
guard into thinking that it is not worth risking one’s own life to save other
people’s lives, and, as a result, the lifeguard also failed to jump in. In this
case my talking to the lifeguard (an action) caused his omission. Or imagine
Omissions and Causalism 123
are excluded by other items. Those other items are better suited to play
the relevant causal role than the candidates identified by the causalist.
Crucially, the problem for omissions doesn’t rest on a general “exclusion
principle” according to which no phenomenon can have more than one
sufficient cause, or on the claim that there is no widespread overdetermina-
tion, or on any other claim in the vicinity. In this sense the exclusion
problem for omissions is very much unlike the traditional exclusion
problem for the mental and the physical, as it is typically laid out in the
literature.17
What does the argument for CEO rely on, if not a general exclusion
principle? As I pointed out, it relies on an important analogy between
bodily and mental items. The claim is that, given what we want to say
about the causal powers of the bodily items, we should say something
similar about the causal powers of the mental items. In particular, given
that my eating ice cream isn’t a cause of the death (my failing to jump in
is), my intending not to jump in also isn’t a cause of my omitting to jump
in (my omitting to intend to jump in is). This is so even if, at first sight,
the claim that the intention had those causal powers seemed plausible.
What justifies the claim about the causal powers of the bodily items, to
begin with? That is, what justifies the claim that my eating ice cream didn’t
cause the child’s death, but, instead, my failure to jump in did? There are
several things one could say to answer this question. But, on the face of
it, it seems enough to point out that, on the assumption that omissions
can be causes, the view that my failure to jump in is a cause of the death
and my eating ice cream isn’t is very intuitively plausible (as suggested
above). Again, on the face of it, there are certain things that I cause in
virtue of eating ice cream and there are other things that I cause in virtue
of not jumping into the water. Perhaps there are also other things that I
cause in virtue of both eating ice cream and failing to jump in (maybe my
remaining above my ideal weight, if I would have weighed less by dieting
or exercising?). But certainly not everything I cause in virtue of eating ice
cream is something that I cause in virtue of failing to jump in, or vice
versa. In particular, just as it seems that I cause myself to feel sick to my
stomach by eating ice cream, and not by failing to jump in, conversely, it
seems that I cause the child to die by failing to jump in, and not by eating
ice cream. Again, this is not motivated by a general exclusion principle of
any sort: it’s just a claim that seems very plausible on its own.18 (More on
the causal powers of bodily actions and omissions in section 5.)
This concludes my discussion of the argument for CEO. How could the
causalist try to respond to the argument? In the following sections I discuss
First, the causalist might want to reply in the following way. An event
consisting in my arm moving is not an action if it was the result of
someone else’s grabbing my arm and making it move in a certain way; in
that case it is a “nonactional” event, a mere bodily movement (something
that merely “happens” to the agent, as opposed to something that the
agent does). To borrow an analogy by Mele,19 an intrinsic duplicate of a
US dollar bill fails to be a genuine bill if it is not the output of a certain
causal process involving the US Treasury Department (e.g., if it is counter-
feit); similarly, an event fails to be an action if it is not the output of a
causal process involving mental items of a particular kind. In particular,
the causalist would want to say, it is not an action unless it is the output
of a causal process involving intentions of the relevant kind. And the same
goes for (intentional) omissions, the causalist might claim: my failing to
jump into the water in Drowning Child would not be intentional unless
it were caused by a relevant intention in the relevant way. Imagine that I
didn’t jump in because someone restrained me when I was about to do so.
In that case, the causalist would say, I didn’t intentionally fail to jump in.
Although it is true that I didn’t jump in, my not jumping in isn’t an inten-
tional omission but a nonactional state (a mere “bodily state,” something
that “happens” to me, but not something I intentionally omit to do).
In other words, the objection is that the analogy on which the argument
for CEO rests breaks down: although we don’t have reason to believe that
my eating ice cream causes the child’s death (all the work is plausibly done
by my failing to jump in), we do have reason to believe that an intention
with a relevant content causes my failure to jump in. For this failure is not
any failure: it is an intentional failure, and it would not have been inten-
tional unless it was caused by a relevant intention in the relevant way.
However, this objection fails. I agree that my not jumping in wouldn’t
have been intentional if someone had been restraining me the whole time,
just like I wouldn’t have intentionally raised my arm if someone had forced
my arm upward. But this isn’t enough to show that I wouldn’t have inten-
tionally failed to jump in unless A1 (my forming the intention not to jump
in), or my forming a similar intention, had caused it. Why not? Because
it is very plausible to think that my failure to jump would be intentional
Alternatively, the causalist might want to object to the claim about bodily
acts on which the argument for CEO rests: the claim that my eating ice
cream (A2) isn’t a cause of the child’s death. One way in which the causal-
ist could try to make this reply is this. As I have suggested, the child died
because I didn’t jump in to save him. However, I didn’t jump in to save
him, in turn, because I was eating ice cream on the shore (since, given that
I was eating ice cream on the shore, I couldn’t have been jumping in).
Therefore, by transitivity, the child died because I was eating ice cream on
the shore.
The main problem with this suggestion is that, even if all of this were
right, it still wouldn’t follow that A2 caused the child’s death. For consider
the claim that I didn’t jump in to save the child (at t) because I was eating
ice cream on the shore (at t). If this claim is true, there is an explanatory
connection between A2 and O2.23 But this explanatory connection is non-
causal. (For one thing, A2 and O2 obtain simultaneously, whereas it is
generally thought that causes precede their effects.) So, even if it were true
that the child died because I was eating ice cream on the shore, it still
wouldn’t follow that A2 caused the child’s death.
Alternatively, the causalist might want to suggest that A2 caused the
child’s death, although it did so “directly” (i.e., not by way of causing O2).
However, I find this reply unmotivated. Anscombe dismissed a similar view
in a two-sentence paper.24 But I am going to try to do (a bit) more to con-
vince you that this view is not very plausible.
Why would anyone be tempted by this view? One might think that
there is some intuitive support for it. Imagine that Jim spent the night
previous to the exam partying instead of studying, and then he flunked
the exam on the following day. We are tempted to say: “Jim’s partying the
night before the exam caused him to flunk it” (instead of, in my view, the
more appropriate claim: “His failing to study the night before the exam caused
him to flunk it”). But, should we take this literally? Should we think, on
this basis, that Jim’s partying was also a cause of his flunking the exam?
Or should we think that we are speaking loosely in claiming that it was?
Here is an argument that we should think the latter. As I am imagining
the example, to the extent that we judge that Jim’s partying caused his
flunking the exam, it’s because he was partying instead of studying (not
because, say, too much partying impaired his writing or thinking capaci-
ties, which were a necessary requirement for doing well on the exam). But
then, by the same token, anything else that he could have done instead
of studying would be a cause too, in the corresponding scenario. In par-
ticular, had Jim been caring for convalescent Grandma all night long
instead of partying, his caring for Grandma would have caused him to
flunk the exam. Also, had he been reading a book on how to pass exams,
his reading such a book would have caused him to flunk the exam. And
so on. But these results are implausible (again, unless the book’s advice
was really bad!). Instead, it seems preferable to hold that it wasn’t really
Jim’s partying, but what that entailed (namely, the fact that he didn’t touch
the books) that caused him to flunk the exam.
Why does it seem so appealing, then, to mention Jim’s partying in con-
nection with his flunking the exam? Presumably, because it’s a vivid way
of implicating that he didn’t study for the exam, when he should have
been studying for the exam. We mention his partying because it is a more
colorful way to describe what happened, not because the partying is a cause
of the flunking of the exam per se. Again, unless there was something
about the partying itself that accounts for Jim’s doing badly on the exam,
it seems that he flunked because he didn’t study, not because of what he
did instead of studying.25
Finally, the causalist might want to argue that, although O2 was the
“main” cause of the child’s death in Drowning Child, A2 still played a
causal role in some “secondary” or “derivative” sense. Consider an example
by Yablo (1992): a pigeon, Sophie, is conditioned to peck at (all and only)
red objects; one day she is presented with a scarlet triangle and she pecks.
According to Yablo, although the triangle’s being red plays the major
causal role (it plays the role of being the cause of Sophie’s pecking, in
Yablo’s terminology), the triangle’s being scarlet (a determinate of the
determinable red) is still causally relevant to Sophie’s pecking. The idea, I
take it, is this: something’s being scarlet is a way of being red; thus the
triangle is red, on this occasion, by being scarlet. So on this occasion the
triangle has the causal powers that it has, in some sense, thanks to its being
scarlet. This role is “derivative” or “secondary” in that being scarlet only
gets to play that role in virtue of the causal powers that being red has;
however, one might argue that it still is an important role. Similarly, the
causalist could say, although O2 plays the major causal role in the drown-
ing child case, A2 is still causally relevant to the child’s death. For my
eating ice cream on the shore is, also, a way of failing to jump in (I fail to
jump in, on this occasion, by eating ice cream).
Now, imagine that this were right, that is, imagine that it were right to
say that A2 played a derivative causal role with respect to the child’s death.
Then the causalist could say that A1 plays a similar derivative role: one
that depends on the role played by O1. Would this help the causalist? I
don’t think so. For presumably, the causalist wants to say that mental items
like intentions play a primary role in giving rise to intentional acts, not
one that is parasitic on the role that something else plays. At least, this is
what the causalist wants to say about intentional actions. So, if intentions
played a primary role in the case of actions but not omissions, this would
still make for an important asymmetry between actions and omissions,
and thus it would present a problem for causalism as a general theory of
intentional behavior.26
6 Conclusion
I conclude that omissions pose a serious problem for causalism. Briefly, the
problem is that, whereas omissions can be intentional, causalism cannot
account for them in the same way that it accounts for intentional actions.
This is not so because omissions cannot be causes and effects, for it is quite
plausible to think that they can. The problem is, rather, that omissions are
not caused (at least ordinarily) by those mental items that the causalist
identifies as causes in the case of actions. As a result, causalism, conceived
as a theory of what it is for agents to behave intentionally, threatens to be
an either incomplete or highly disjunctive theory.
Acknowledgments
Notes
1. For example, omissions were responsible for Lewis’s claiming that causation is
not a relation (Lewis 2004), and for Thomson’s and McGrath’s claiming that it is a
normative notion (Thomson 2003; McGrath 2005). My focus here is also on omissions
and causation, in particular on the question of whether omissions can be
accommodated by causal theories of agency.
2. On this point, see Vermazen 1985, 104, and also Mele 2003, 151.
3. Different philosophers have different views of intentions: some believe that they
are reducible to belief-desire pairs, others believe that they are irreducible mental
states. But causalists seem to agree about the key role that intentions play in the
etiology of intentional action.
5. In Davidson’s original work, there are only two brief references to omissions:
Davidson 1963, n. 2, and Davidson 1971, 49. In those places Davidson seems to
want to make room for omissions, but he is not very explicit about how.
6. See, e.g., Dowe 2000 and Beebee 2004. Davidson’s own view of causation in
Davidson 1967 appears to be of this kind (although he seems to take it back in his
discussion of Vermazen’s proposal, which I discuss below).
8. There are several questions that I’ll bypass here. For example, if we think that
there are two concepts of causation, what makes them both concepts of causation,
as opposed to concepts of something else? The two-concepts proposal only helps
the causalist to the extent that the nonproductive concept is genuinely a concept
of causation. Also, about Vermazen’s proposal: it’s unclear that the proposal explains
why my failure in Drowning Child is intentional. Imagine that, had I not formed
the intention not to jump in, I would have remained undecided. In that case it’s
not true that, had I not formed the intention not to jump in, I would have formed
the opposite intention, which would have caused my jumping into the water. So,
then, in what sense did my forming the intention not to jump in cause my not
jumping in?
9. However, as I will note in due course, there are also very important differences
between the two challenges. Notably, the strongest formulation of the problem of
omissions doesn’t appeal to a general exclusion principle. To my mind, this makes
the problem of omissions much more powerful than the traditional exclusion
problem (more on this later).
10. There are two main options: to insist that mental states are still causally effica-
cious, or to restate causalism as the claim that the physical realizers of mental states
are the causes of actions.
11. See Mele 1992 and Mele and Moser 1994. For arguments that intentionally φ-ing
doesn’t require an intention to φ, see Harman 1976 and Bratman 1984.
that exist are “primitive” or “basic” actions, or mere bodily movements (the actions
that take place “inside the agent’s skin”). Now, the sense in which I flip the switch
by moving my finger is not the same sense in which I fail to jump in by eating ice
cream. I flip the switch by moving my finger because the moving of my finger causes
the switch to be flipped; by contrast, I don’t fail to jump in by eating ice cream in
this sense: the eating of my ice cream doesn’t cause my not being in the water (more
on this later). The class of omissions that is of interest to us is that of primitive
bodily nonmovements (see Vermazen 1985, 102–103). Davidson acknowledges this
in his reply to Vermazen (Davidson 1985).
13. Or consider Ginet’s example (in Ginet 2004, 105): S intentionally did not mow
the grass in her backyard this summer because she wanted it to revert to a wild state.
As Ginet claims, it would be very implausible to suggest that there is something S
intended to do this summer in virtue of which she intentionally did not mow the
grass. For related arguments, see Weinryb 1980, Higginbotham 2000, and Vihvelin
and Tomkow 2005.
14. Note that this assumption is consistent with different views of omissions. In
particular, it’s consistent with views according to which some, but not all, omissions
are identical with actions.
16. By calling these cases “ordinary” and “paradigmatic” I do not mean to suggest
that there aren’t many cases of intentional omission of a different sort, say, cases
where the agent has to take active measures to counteract an existing trend or habit.
All I mean to imply is that the cases that are my focus here are the ones with the
simplest structure, given that the nonmovement simply flows from another omis-
sion. Thanks to Richard Holton for discussion of this point.
17. Kim famously grounded his exclusion argument in a general exclusion princi-
ple. For discussion of this principle, see Kim 1989.
18. In particular, note that this claim is consistent with the existence of cases
where both an agent’s action and an omission by the same agent are sufficient causes
of an outcome. Imagine that a sick patient will die at t unless his doctor gives
him a certain drug before that time. Imagine that, besides not giving him the drug,
the doctor injects him with a poisonous drug that takes effect at t. In that case, arguably,
both the doctor’s failure to inject the patient with the medicine and his poisoning
him cause the patient’s death. Now, I think it is clear that the Drowning Child
case doesn’t have a relevantly similar structure: whereas here there is a good
reason to think that both the action and the omission are causes, there isn’t such
20. Zimmerman (1981), Ginet (2004), and Clarke (forthcoming) believe this. But
what if I had remained undecided about what to do until the child died? In that
case, you’d still want to blame me for not jumping in; I was aware of the presence
of the child in the water, I knew that I could save him, etc. Could one argue that
my omission is still intentional in this case, even if I don’t form an intention one
way or the other? I think that the causalist can plausibly argue that my omission
isn’t intentional in this case. Maybe it’s not unintentional either. But even if it’s not
unintentional, some philosophers see a middle ground between intentional and
unintentional behavior (see, e.g., Mele and Moser 1994), and it is plausible to suggest
that my omission in this case falls in that middle ground. Another potential coun-
terexample to the claim that intentionally failing to jump in requires an intention
with the relevant content is this: a neuroscientist has been closely monitoring my
brain; he lets me fail to intend to jump in (which I do intentionally), but he prevents
me from forming the intention not to jump in (or any other intention with a similar
content). Is this scenario possible? I don’t know; fortunately, we don’t need to decide
this issue here.
22. Another potential challenge that I have chosen to set aside is the challenge that
negative intentions are impossible. According to some views of intentions, forming
an intention requires settling on a plan of action (Bratman 1984; Mele 1992; Enç
2003). This view creates some pressure to reject negative intentions. For it’s hard to
say what the plan might be in the case of omissions (for an argument that omissions
don’t involve “plans,” or “methods,” see Thomson 1996).
23. Although, is it really true that I didn’t jump in because I was eating ice cream?
Let’s assume that, if I was eating ice cream on the shore, then I couldn’t have been
jumping into the water at the same time, maybe in the sense that it was physically
impossible for me to do both at once. Does this mean that A2 explains O2? Compare:
I couldn’t have been a professional philosopher and a professional basketball player.
Does my being a philosopher explain my not being a basketball player? Or is this
explained by my lacking the relevant qualities for being a basketball player?
24. That’s right: a two-sentence paper (in Analysis). Here is the full text of the paper:
“The nerve of Mr. Bennett’s argument is that if A results from your not doing B,
then A results from whatever you do instead of B. While there may be much to be
said for this view, still it does not seem right on the face of it” (Anscombe 1966).
25. It might be argued that our judgments whether Jim’s partying caused his flunk-
ing the exam depend on the contrast class with respect to which we are making the
assertion: whereas it’s not the case that his partying rather than his caring for
Grandma caused him to flunk, his partying rather than studying did cause him to
flunk. (For a recent defense of a contrastive view of causation, see Schaffer 2005.) If
causation were a contrastive relation instead of a two-place relation, maybe the
causalist could make a similar claim about the intention not to jump in: whereas
it’s not the case that my intending not to jump in rather than my merely omitting
to intend to jump in caused my omitting to jump in, my intending not to jump in
rather than my intending to jump in did cause my omitting to jump in. I cannot
do full justice to this view here. But let me just note two things. First, causalism
would have to be revised accordingly, as the claim that intentions of a certain type
rather than intentions of another type cause the relevant bodily states in the relevant
way. Second, whereas the claim that explanation is not a two-place relation (but a
three-place relation, or even a four-place relation) bears some initial plausibility, the
corresponding claim about causation is very counterintuitive.
26. On similar grounds, Kim argues that the nonreductive physicalist shouldn’t
settle for the claim that the mental is causally efficacious but the causal powers of
the mental are parasitic on the causal powers of the physical (Kim 1998, 45).
9 Intentional Omissions
Randolph Clarke
Often when one omits to do a certain thing, one’s omission is due to one’s
simply not having considered, or one’s having forgotten, to do that thing.
When this is so, one does not intentionally omit to do that thing. But
sometimes one intentionally omits to do something. For example, Ann
was asked by Bob to pick him up at the airport at 2:30 AM, after his arrival
at 2:00. Feeling tired and knowing that Bob can take a taxi, Ann decides
at midnight not to pick him up at 2:30, and she intentionally omits to do
so. Other examples of intentional omissions include instances of abstain-
ing, boycotting, and fasting.1
Intentional omissions would seem to have much in common with
intentional actions. But the extent of the similarity is not immediately
obvious. Intentional omission has been recognized as a problem for theo-
ries of agency, but it is one on which, especially lately, little effort has been
expended. My aim here is to advance a conception of intentional omission,
address a number of claims that have been made about it, and examine
the extent to which an account of it should parallel an account of inten-
tional action. I’ll argue that although there might indeed be interesting
differences, there are nevertheless important similarities, and similarities
that support a causal approach to agency.
Although much of our interest in omissions concerns responsibility for
omitting, my focus is on the metaphysical and mental dimensions of inten-
tional omission. What sort of thing (if it is a thing at all) is an omission?
What, if any, mental states or events must figure in cases of intentional
omission, and how must they figure? Answers to these questions have some
bearing on the moral issue, but the questions are interesting in their own
right. And they stand in some degree of mutual independence from the
moral issue, as there can be intentional omissions for which no one is
responsible, and (on the assumption that we can be responsible for any-
thing at all) we can be responsible for omissions that aren’t intentional.2
Let us call actions of a familiar sort, such as raising one’s arm, walking, or
speaking, “positive actions.” It is on positive actions that action theory
has, understandably, largely focused. How are omissions related to positive
actions? For one thing, when one intentionally omits to A, is one’s omis-
sion identical with some intentional positive action that one then
performs?
Perhaps sometimes it is. Imagine a child crouching behind a chair and
holding still for several minutes while playing hide and seek.5 The child’s
holding still is arguably an intentional action; it requires the sending of a
pattern of motor signals to certain muscles, perhaps the inhibition of other
motor signals, the maintenance of balance, with fine adjustments made in
response to feedback, at least much of which arguably results from the
child’s intending to hold still. The child’s not moving is an intentional
omission. And perhaps the child’s not moving in this case is just her
holding still.6
It might be objected that the child might have not moved even if she
hadn’t intentionally held still—she could have been frozen stiff. But we
may grant this possibility without accepting that the child’s holding still
(that particular event) is distinct from her not moving, just as we may
grant that on some occasion when I walked slowly, I might have walked
without walking slowly, without thereby committing ourselves to the
implausible view that I performed two acts of walking when I walked
slowly.
On a minimizing view of act individuation, when one flips the switch,
turns on the light, illuminates the room, and startles the burglar, one
might perform only one action, which might be intentional under some
an intention a “mental act,” but we should not lose sight of the difference
between such an occurrence and an intentional action.
We might decide to call intentional omissions “acts of omission” simply
because they’re intentional, or on the grounds that (we think) they express
intentions. (Whether intentionally omitting requires having a pertinent
intention is a question I’ll address below.) There is warrant for this choice
of terminology, as what is done intentionally is, in some sense, a manifes-
tation of agency. (I’ll return to this point in section 6.)
Still, we ought not assume that intentional positive actions and such
negative acts are thoroughly alike. The question of whether there are in
fact significant differences arises at several points in the discussion to
follow. Since it will be convenient to have an economical way to refer
specifically to positive actions, henceforth when I use “action” or “act”
without qualification, I intend positive action. Using the terms this way
is, of course, meant to be consistent with what I observed in the preceding
paragraph.
2 Absences
Ann doesn’t pick up Bob, her omission is such an absence, and nothing
more, even if more is required for it to be an intentional omission.
It can seem puzzling just when and where omissions occur. Does Ann
omit to pick up Bob at midnight, when she decides not to pick him up,
or at 2:30, when she isn’t at the airport to pick him up, or during some
portion, or all, of the interval from that earlier time to the later one? Does
her omission take place at her house, where Ann is located throughout
that interval, or at the airport, or along the route that Ann would have
taken had she gone to pick up Bob? If omissions (those that aren’t actions)
are absences, and absences aren’t things, then (these) omissions don’t
occur anytime or anywhere. There isn’t an action by Ann at 2:30 of picking
up Bob at the airport. The time and place in question are some pertinent
time and place at which there isn’t such an action. That there isn’t such
an action at that time and place is what it is for there to be such an absence.
Which absences of actions are omissions? Some philosophers (e.g.,
Fischer 1985–1986, 264–265) take it that there is an omission anytime an
agent does not perform a certain action. Somewhat less generously, others
(e.g., Zimmerman 1981, 545) hold that there is an omission whenever (and
only when) an agent is able to perform some action A and does not A.9
Whether omitting to A requires that one be able to A is a complicated
matter, for there are several different sorts of thing each of which may
fairly be called an ability to act. Arguably, some type of ability to do other
than what one actually does is ruled out if determinism is true. Some other
types of unmanifested abilities, such as talents or skills, general capacities,
or powers to do certain things, are plainly compatible with determinism.
Similarly, it seems that in cases of preemptive overdetermination, in which
an agent does a certain thing on her own, but would have been made to
do it anyway had she not done it on her own, some type of ability to do
otherwise is precluded, while the agent might nonetheless retain a capacity
or power to act that, it is ensured, she won’t exercise.10
It hardly seems to follow from the truth of determinism that no one
ever omits to send holiday greetings, wear their seat belts, and so forth, or
that we never abstain, boycott, or fast. It doesn’t seem credible that omit-
ting to A, or that intentionally omitting to A, requires that one have any
sort of ability to A that would be ruled out by determinism. And agents in
cases of preemptive overdetermination might omit to do things that, in
some sense, they’re unable to do.11
On the other hand, lacking an ability of another sort can seem to pre-
clude one’s intentionally omitting to do a certain thing. We might plau-
sibly judge that an agent who intended not to get out of bed, and who
didn’t so act, didn’t intentionally omit to get out of bed if, unbeknownst
to her, she was paralyzed and wouldn’t have risen from bed even if she
had tried (Ginet 2004, 108).12 I suspect that it would be a delicate matter
to say exactly what type of ability to act is required for omission, or for
intentional omission, and I’ll not attempt that project here.
Although for some purposes we might wish to say that there is an omis-
sion whenever an agent doesn’t perform a certain action that she is, in a
relevant sense, able to perform, we don’t commonly use the term so
broadly. Setting aside cases of intentional omission and those in which it
is intentional of some agent that she doesn’t do a certain thing, in ordinary
contexts we tend to take “omission” to be applicable only when an action
isn’t performed despite being recommended or required by some norm
(not necessarily a moral norm; cf. Feinberg 1984, 161; Smith 1990; Wil-
liams 1995, 337). We may sensibly count as omissions only those absences
of actions that satisfy some such restriction (as well as whatever ability
requirement is appropriate).
In any case, since the focus here is on intentional omissions, the
absences that count will be restricted in a different way. Only absences of
actions in cases in which the agents have certain mental states are inten-
tional omissions.
3 Intentions
At least generally, in cases of intentional action, the agent has some inten-
tion with relevant content. Typically, when I intentionally walk, I intend
to walk. However, arguably, even if intentionally A-ing requires having an
intention, it doesn’t require intending to A (or having an intention to A).
While walking, I might intentionally take a certain step, without intending
specifically to take that step. It might suffice that while taking that step I
intend to walk then, I’m a competent walker fully capable at the moment
of exercising that competence, there’s no obstacle requiring any special
adjustment of my walking, and my taking that step results in a normal
way from my intending to walk then. (On this type of case, see Mele 1997a,
242–243; I’ll describe below some further cases in which, apparently, one
can intentionally A without intending to A.)
Does intentionally omitting to A require having some intention with
relevant content? If so, what content must the intention have? And when
must one have the intention?
Suppose that one intentionally omits to A during t. Must one (as Ginet
2004 maintains) intend throughout the interval t not to A?13
No. Some actions require preparation, and preparatory steps must some-
times be taken by a certain time prior to the action in question. If I am to
attend a meeting in a distant city on Monday afternoon, I must earlier
book a flight, get to the airport, and so forth. Suppose that having decided
not to attend, I intentionally don’t perform such preparatory actions.
Having forgone the preparations, I have no further need of the intention
not to attend, and with other things on my mind, I may dispense with it.
(The claim isn’t that the intention couldn’t be retained, only that it need
not be.) Nevertheless, when I don’t show up at the meeting, I might inten-
tionally omit to do so.14
Does intentionally omitting to A during t require having, at some rel-
evant time, an intention not to A? Several writers (e.g., Ginet 2004 and
Zimmerman 1981) have claimed that it does, but again the claim appears
mistaken. An intention with some other content might do.
Suppose that Charles wants to abstain from smoking for a week, but he
thinks it unlikely that he’ll succeed. Cautious fellow that he is, Charles
forms only an intention to try not to smoke. He plans to spend time with
friends who don’t smoke, to chew gum to diminish his desire to smoke,
and so forth, which he hopes will enable him to resist the temptation.
Suppose that Charles then makes the effort and succeeds, and there’s
nothing magical or fluky about his success: his plan works just as he hoped
it would. Charles omits to smoke because he tries not to smoke. He inten-
tionally omits to smoke for a week, even though he didn’t have an inten-
tion not to smoke for a week.15
The case parallels one of action in which, though thinking success
unlikely, one intends to try to A, one makes the effort, and one unexpect-
edly succeeds. If the success isn’t a fluke, one might then have intention-
ally A-ed without having intended to A (cf. Mele 1992, 131–133).
It might be objected that Charles has it as his aim or goal that he not
smoke, and to take something as an aim or goal is to intend that thing.16
But one can have something as a hoped-for goal without intending that
thing. Arguably, there is a negative belief constraint on rationally intend-
ing, such that it isn’t rational to intend to A while believing that one
probably won’t succeed.17 Given his expectations, Charles might take
abstaining from smoking for a week only as a hoped-for goal, intending
no more than to try his best.
Some different cases also suggest that one might intentionally omit to
A without intending to omit to A. Suppose that while walking in the
countryside you come to a fork in the path. You’re aware that the path on
the left is more pleasant, and you realize that should you take the path on
the right your walk will be less enjoyable. Suppose that you nevertheless
decide to take the path on the right (perhaps believing that path shorter),
and you then do so, aware that in so doing you aren’t taking the left path.
It seems that you needn’t intend not to take the left path in order for it
to be the case that you intentionally don’t take (omit to take) that path.
As several theorists see it, one can carry out an intention to A and be
aware that by A-ing one will do something B, without then intending to
B, and yet intentionally B. This might be so when one is aware of a reason
not to B and one decides to A despite that consideration. For example, I
might intentionally start my car in the morning despite being aware that
(since my car is very noisy) by so doing I’ll disturb my neighbors’ sleep. I
might then intentionally disturb them without having intended to do so
(Ginet 1990, 76; cf. Harman 1976/1997, 151–152).18 Examples such as that
in the preceding paragraph make an equally strong case for the view that
one can carry out an intention to A and be aware that, in intentionally
A-ing, one will not do something B, without then intending not to B, and
yet intentionally omit to B.19
Must any intention with relevant content figure in the history of an
intentional omission? Suppose that I see a child struggling in a pond but
intentionally omit to jump into the water to save the child. I don’t intend
to jump in. Might it suffice for my omitting to jump in to be intentional
that this omission results from my intentionally omitting to intend to
jump in (as Sartorio 2009, 523 suggests)?20
What would make it the case that my not intending to jump in is itself
an intentional omission? One might say: “I voluntarily failed to form that
intention, after deliberating about whether to do so, after considering
reasons for and against doing so, etc.” (Sartorio 2009, 523). But the claim
that my failure to intend was voluntary seems to presuppose that, rather
than explain how, it was intentional. And that my not intending to jump
in comes after deliberation about whether to do so, and after consideration
of reasons for and against jumping in, does not suffice to make my not so
intending intentional. I might have simply failed to make up my mind.
Suppose the case unfolds this way. I deliberate about whether to jump
in or not, never making up my mind. As I continue deliberating, I realize
that the child is drowning and I’m doing nothing to save her. In this
version of the case, do I intentionally omit to jump in to save the child
despite having no intention with pertinent content?
Deliberating is itself activity, and typically it is intentional activity. (We
can intentionally try to think of relevant considerations, intentionally turn
attention to one thing or another, intentionally try to make up our minds.)
is no need to see the attitudinal mode as any different from that of intend-
ing to act.
Sometimes, in intentionally omitting something, one performs some
action as a means to that omission (e.g., chewing gum so that one won’t
smoke). An intention to omit to A can include a more or less elaborate
plan for so omitting. But there are cases in which one’s intention not to
A need not include any plan at all about how not to A. One need not
always, in order to intentionally omit to A, perform or intend to perform
any action at all as a means to not A-ing.21
Moreover (contrary to Wilson 1989, 137–142), no intention that one
has in a case of intentional omission need refer to anything that the agent
in fact does; the agent need not intend of some positive behavior in which
she engages that it not be (or not include, or be done instead of, or be
allowed not to be, or not allow her to perform) the omitted action (cf.
Ginet 2004, 104–106). Ann intends not to pick up Bob at the airport. She
need not also intend of her piano playing, or of her going to bed, or of
anything else, any of the suggested things. (I might boycott veal for the
remainder of my life, without intending of any particular thing I do—and
certainly without intending of all of it—that it not be the purchasing of
veal.)
At least generally, when one intentionally omits to A, for some relevant
time period, one does not intend to A. Certainly one might have earlier
intended to A and have since changed one’s mind. And one might cease
to intend to A and come to intend not to A without ever revoking one’s
earlier intention. One might forget that one so intends, cease to so intend
because one so forgets, and not remember the earlier intention when one
later acquires the intention not to A. Finally, just as one can have a nonoc-
current intention to act of which one is unaware and intentionally do
something contrary to that intention, so one can have a nonoccurrent
intention to A and yet intentionally omit to A. Intentionally omitting to
A does not strictly require (for any time period) not intending to A.
4 Causes
mental states (or by mental events involving those agents), either because
absences can’t be caused, or because absences of mental states or events
cause these omissions. Still, I contend, the agents’ mental states (or mental
events involving those agents) play a causal role in such cases, one that
parallels, in interesting respects, the role they are required by causal theo-
ries of action to play in cases of intentional action. Relevant mental states
(or events) must cause the agent’s subsequent thought or action, even if
they needn’t cause the absence of some action.
If Diana’s omission to jump into the water is caused by her not intend-
ing to jump in, the omission’s being so-caused evidently isn’t what renders
it an intentional omission.25 There might be other folks on the shore who
also don’t intend to jump in, who also don’t jump in, whose not intending
to jump in is as good a candidate for a cause of their omission as Diana’s
is of hers, but who nevertheless don’t intentionally omit to jump in.
Perhaps the lifeguard fails to notice that the child is in trouble; perhaps
someone else, though noticing the trouble, doesn’t think of jumping in to
help.
We’ve seen reason to think that in order to intentionally omit to A, one
must have an intention with relevant content. As the story goes, Diana
has such an intention: in deciding not to jump into the water, she forms
an intention not to jump in.
Is it enough for Diana’s omission to be intentional simply that she have
this intention? Need the intention do anything at all? Suppose that, upon
deciding not to jump in, Diana immediately wonders what to do instead,
forms an intention to walk over to get a better view of the impending
tragedy, and then does so (eating her ice cream all the while). In the
normal case, we would take it that the intention not to jump in is among
the causes of this subsequent sequence of thought and action.26 Moreover,
this causal role seems more than incidental; if we suppose that the inten-
tion causes no such things, then it no longer seems that we have a case of
intentional omission.
Suppose that Diana’s intention not to jump in, whatever it is—some
distributed state of her brain, perhaps—comes to exist with the usual causal
powers of such a state but is from its start prevented from causally influ-
encing anything—prevented, that is, from manifesting its powers. Just after
deciding not to jump in, Diana happens to wonder what to do instead,
decides to walk over for a better view, then does so. Something causes this
stream of thought and action—perhaps a chip implanted earlier in Diana’s
brain by a team of neuroscientists, who just happen to have picked the
present moment to test their device—but her intention not to jump in
isn’t a cause of any of what Diana does. In this case, does she intentionally
omit to jump into the water?
It seems clear that she does not. Her not jumping in is intended—and
she’s guilty of so intending—but she doesn’t intentionally omit to jump
in, because her intention doesn’t in any way influence her subsequent
thought or action. It’s pure happenstance that what she does accords in
any way with her intention. For all her intention has to do with things,
what she was caused to do might just as well have been to jump into the
water and save the child.
The case is analogous to one in which an agent intends to perform a
certain action, the appropriate bodily movement occurs, but the agent
doesn’t perform the intended action, because his intention is ineffective.
Unaware that my arm has become paralyzed, I might intend to raise it
now. Unaware that I currently so intend, my doctor might test the new
motor-control device that she implanted in me during recent surgery. It’s
sheer coincidence that the movement caused by the doctor accords with
my intention, and I don’t intentionally raise my arm.
Lest one think that Diana’s not jumping in fails to be intentional
because, it might now seem, her omission doesn’t counterfactually depend
on her not intending to jump in,27 suppose that the implanted chip has
an unforeseen flaw: it will remain inert if, when the activation signal is
sent to it, the agent in whose brain it’s implanted has just decided to jump
into water. In any case, an omission’s being intentional doesn’t require
that the omission depend counterfactually on the absence of an intention
to perform the action in question. Diana might intentionally omit to jump
into the water even if, had she acquired an intention to jump in, she might
have changed her mind.
In some cases, the role played by an intention not to act might be less
pronounced. Suppose that having decided not to jump in to help, Diana
simply stays where she is and continues eating her ice cream. Arguably,
an intention she already had—to eat the ice cream—causes her continuing
activity. Nevertheless, there would seem also to be a causal role played by
her newly formed intention not to jump in. That intention might play a
sustaining role, contributing to her continuing to intend to stand there
eating the ice cream, and thus to her continuing activity. As intentions
usually do, this one could be expected to inhibit further consideration of
the question that it settles—the question of whether to help the child. It
thereby causally influences Diana’s flow of thought. And it might play a
causal role that isn’t just a matter of what it in fact causes, that of standing
one’s omission, there is causal work for the reasons for which one omits
to A.
Still, if for either of the reasons identified at the start of section 4, inten-
tional omissions that are absences aren’t caused by the agents’ intentions,
does this fact constitute trouble for “causalism as an attempt to explain
what it is for an agent to behave intentionally” (Sartorio 2009, 513)? It
presents no problem for causal theories of action. Such theories are not
theories of omissions that aren’t actions, and they strictly imply nothing
about such things.
We might sensibly construe agency more broadly as encompassing all
that is done intentionally, and so as including intentional omissions. At
least typically, things done intentionally fulfill intentions and are done for
reasons. Such things may fairly be said to be manifestations of our agency.
We might then wonder about the prospects for a causal theory of this
broader phenomenon.
But with agency so construed, it should not be expected that it must
be possible to construct a uniform theory of it, for the phenomenon
itself lacks uniformity, including, as it does, actions as well as things that
aren’t actions. It will be no fault of any theory of intentional action if it
does not apply, in a straightforward way, to all of what is then counted
as intentional agency. And a comprehensive theory of agency (if any
such thing is possible) might play out one way in the case of action and
another in the case of omission. If such an account has to have a disjunc-
tive character, that need might accurately reflect the diversity of its subject
matter.
We might nevertheless expect that the right account of intentional
omission will resemble in important respects the right account of inten-
tional action. If what I’ve said here is correct, the resemblance to causal
theories of action is significant. But to the extent that omissions that aren’t
actions are unlike actions, it should not be surprising that the similarity is
imperfect.
Acknowledgments
For helpful comments on earlier versions of this essay, I wish to thank Carl
Ginet, Alison McIntyre, Al Mele, Carolina Sartorio, Kadri Vihvelin, Michael
Zimmerman, and an anonymous referee for Noûs.
Notes
1. The last two of these, and more, are mentioned by McIntyre (1985).
2. Some theorists (e.g., Bennett 2008, 49) hold that one is responsible for something
only if one is either blameworthy or praiseworthy for that thing. Plainly on this
view there can be intentional omissions—those that are morally neutral or
indifferent—for which no one is responsible. But there can be such omissions even
if there can be moral responsibility for morally neutral things. Some agents (e.g.,
young children, or people suffering from certain mental illnesses) lacking some of
the capacities required for responsibility nevertheless engage in intentional action
and intentionally omit to do certain things. And while many theorists hold that
responsibility for unintentional omissions must stem from something done inten-
tionally, others (e.g., Smith 2005) deny this claim. In sum, the relation between
responsibility and what is done intentionally is both complex and contested. This
fact constitutes one reason for taking a direct approach to the topic of intentional
omission.
3. Cf. Mele 2003, 152, and McIntyre 1985, 47–48. McIntyre draws the distinction
as one between intentionally omitting and its being intentional on one’s part that
one omits. Note that we might say that Ulysses intentionally prevented himself
from jumping into the sea. We should then recognize that one can intentionally
prevent oneself from doing a certain thing and yet not intentionally omit to do that
thing. Finally, one might take this case to support the view that intentionally omit-
ting to A requires that one be able to A. I briefly discuss such a requirement in
section 2 below.
5. The example is from Mele (1997a, 232), though he employs it for a different
purpose.
6. Note that the child doesn’t simply prevent herself from moving; she both inten-
tionally holds still and intentionally omits to move. Unlike Ulysses, at the time in
question she isn’t trying to do what she does not do.
7. This kind of case was suggested in conversation by Al Mele. Note that one can
omit to intentionally omit to A, and yet not A. I might plan not to A, forget my
plan, but also not think to A (and thus not A).
8. But can’t we think about nonexistent things? Sure, but the intentionality or
directedness of a thought isn’t a genuine relation (Brentano 1995, 271–274; cf.
Molnar 2003, 62).
9. Fischer and Zimmerman make these claims with respect to a “broad” conception
of omission; both recognize that there are narrower conceptions.
10. Cases of this sort are offered by Frankfurt (1969) to rebut the thesis that one
can be responsible for what one has done only if one could have done otherwise.
11. There is a sizable literature addressing the question of whether agents in Frank-
furt-type cases might be responsible for omitting to do certain things even though
they’re unable to do those things. See, e.g., Byrd 2007; Clarke 1994; Fischer 1985–
1986; Fischer and Ravizza 1998, ch. 5; McIntyre 1994; and Sartorio 2005. Partici-
pants in this debate evidently take there to be some type of ability to A that isn’t
required for omitting to A.
12. Ginet (2004) takes the case to support his claim that intentionally not A-ing at
t requires that one could have A-ed at t (or at least could have done something by
which one might have A-ed at t). I’ve suggested that whether the requirement holds
depends on which type of ability to act is being invoked.
13. To be precise, it is “intentionally not doing” that is the target of Ginet’s analysis.
However, his examples are cases of what I’m calling intentional omissions.
14. McIntyre (1985, 79–80) discusses a similar case, though for a different purpose.
17. For discussion of such a requirement, see Mele 1992, ch. 8. Bratman’s video-
game case (1987, 113–115) supports a different line of argument for the claim that
one can take something as a hoped-for goal, and try to achieve it, without intending
it. The case also supports the view that one can intentionally A without intending
to A.
18. Some writers distinguish between “direct intention” and “oblique intention,”
holding that although one might lack a direct intention to bring about certain
consequences, one has an oblique intention to bring them about if one’s so doing
is foreseen (or considered likely or certain). (The distinction stems from Bentham
1996, 86.) An oblique intention is said to be “a kind of knowledge or realisation”
(Williams 1987, 421).
Such a state differs importantly from what action theorists commonly call inten-
tion. The latter is typically distinguished by its characteristic functional role. Having
an intention to perform a certain action at some point in the nonimmediate future
(a future-directed intention) tends to inhibit subsequent deliberation about whether
to do that thing (though having such an intention doesn’t altogether preclude
reconsideration); it tends to inhibit consideration of actions obviously incompatible
with what is intended; and it tends to promote further reasoning about means to
what is intended. When one becomes aware that the time for action has arrived, a
future-directed intention to act tends to cause (or become) an intention to act
straightaway. Such a present-directed intention tends to cause an attempt to perform
the intended action. When carried out, a present-directed intention typically
triggers, sustains, and guides action, often in response to feedback. (Bratman 1987
and Mele 1992, part 2, develop this conception of intention.) So-called oblique
intentions don’t play such a role in practical reasoning or action.
One might suggest that actions that are only obliquely intended can only be
obliquely intentional. It is hard to see more to the claim than an acknowledgment
that an action can be intentional even though the agent lacked an intention (of the
sort just characterized) to perform it.
19. I’ve described several cases (that of an individual step taken while walking, that
of trying to do something when one expects not to succeed, and that of foreseen
consequences that one has reason to avoid) that have been taken to undermine a
certain thesis about intentional action, viz., that intentionally A-ing requires intend-
ing to A. As a referee for Noûs observed, one might seek to defend that thesis by
appealing to a distinction between an action’s being intentional (under some
description or other) and its being intentional under a certain specified description.
One might claim, for example, that while what I do when I take the step is inten-
tional under some description (such as “walking”), it isn’t intentional under the
description “taking that step.” I don’t myself find this claim convincing. However,
it isn’t my aim here to refute the indicated thesis. What I’ve aimed to do is present
some forceful considerations that have been brought against it and show that there
is an equally strong, largely parallel case to be made against the view that intention-
ally omitting to A requires intending not to A.
20. Sartorio is mainly concerned to argue that no intention need cause an inten-
tional omission. Though she seems doubtful that any relevant intention need even
be possessed, she doesn’t commit herself on this latter question.
21. Sartorio (2009, 528, n. 22) suggests that “negative intentions” might have to be
rejected, for “it’s hard to say what the plan might be in the case of omissions.” In
some cases, one’s plan for not A-ing might be just not-to-A.
22. Some causalists (e.g., Mele 1992, ch. 2), in response to the problem of mental
causation, hold that (roughly) it is enough for intentional action if the neural real-
izers of the agent’s mental states play the appropriate causal role, provided that what
the agent does counterfactually depends, in a certain way, on her mental states. I’ll
henceforth simplify the causalist view by omitting this variation.
23. Weinryb (1980) argues that omissions have no causal effects. Beebee (2004)
argues that absences aren’t causes or effects.
24. For one thing, the argument relies heavily on explanatory claims. But true
explanatory claims don’t always cite causes, even when the explanations are causal.
On this point, see Beebee 2004.
25. As discussed earlier, Sartorio takes the omission to be intentional because, she
says, it is caused by the agent’s intentionally omitting to intend to jump into the
water. I’ve raised questions about how it can be made out that the agent’s not so
intending is itself intentional.
26. Might an argument like the one sketched earlier in this section show that, in
the normal case, it would be the absence of an intention to jump in, and not the
intention not to jump in, that causes not just the omission but also the subsequent
thought and action? Diana’s decision and her subsequent thinking might be brain
occurrences, and (in our original version of the case) the production of the latter
by the former might consist in just the sort of transfer of energy that we have in
standard cases of direct causal production. Moreover, her decision seems as plainly
a cause of her positive behavior in this version of the case as our intentions to act
seem to be causes of our intentional actions. One might try pushing a standard
argument against mental causation at this juncture, but the special argument
focused on omissions seems inapplicable.
28. Bennett (1995, ch. 6) makes a similar point, albeit about what I take to be a
different distinction, that between making happen and allowing to happen.
29. For discussion of the problem of causal deviance, and for some proposed solu-
tions, see Bishop 1989, ch. 5; Brand 1984, 17–30; Davidson 1980, 79; Enç 2003, ch.
4; Goldman 1970, 61–63; and Mele 2003, 51–63.
30. It’s an interesting question, and one that I can’t answer here, whether the
required nondeviance in the case of intentional omission is susceptible to concep-
tual analysis. Given how little causal work might be required of one’s intention in
such a case, there’s reason to doubt that causation “in the right way” is analyzable
here in the same way that it is in the case of intentional action. Whether the impos-
sibility of conceptual analysis would be fatal to a causal theory of intentional omis-
sion depends on what is to be expected of such a theory. (I’m grateful to a referee
for raising this issue.)
31. One might (as a referee suggested) take Diana’s not jumping in to be a causal
consequence of her intending not to jump in on the following grounds: her so
intending causes her standing on the shore, and (one might hold) her not jumping
in is a logical consequence of her standing on the shore. It would seem that, barring
time travel, Diana’s jumping into the water at t is logically incompossible with her
standing on the shore at t. However, it is far from obvious that an event causes the
absence of whatever is logically incompossible with what that event causes.
10 Comments on Clarke’s “Intentional Omissions”
Carolina Sartorio
Clarke argues for two main claims in his essay. The first is:
(ii) The causal role the relevant intention plays in each case is that (or
includes the fact that) it causes the agent’s subsequent thought and action.
This comment is partly a request for clarification, since I’m not sure I
understand exactly what Clarke’s argument for (i) is. Clarke doesn’t seem
to have objections to my claim that, when an agent intentionally omits
to A, the agent’s omission to intend to A causes his omitting to A. But,
presumably, he thinks that this isn’t enough to explain why the agent’s
omitting to A is intentional. For, he seems to think, such a fact would only
explain why the omission to A is intentional if it were clear that the omis-
sion to intend to A is also intentional; however, it is hard to say what it is
for an agent’s omitting to intend to act to be intentional (Clarke, this vol.,
chap. 9, 143–144 and n. 24).1
Clarke argues that, when Diana decides not to save the drowning child
and to continue to eat her ice cream on the shore, her intention not to
jump in causes her subsequent action of continuing to eat ice cream on
the shore. This is an interesting proposal that would (if true) preserve the
causal efficacy of negative intentions, and would also bring actions and
omissions a bit closer together, as Clarke points out. I will argue, however,
that some plausible assumptions (including assumptions that Clarke
explicitly accepts) suggest that Diana’s negative intention doesn’t actually
play such a causal role.
For Clarke, a main reason not to identify omissions with actions (at least
generally) is that actions and omissions tend to have different causal
powers. As an example of this, he gives the following case (137). Ann
promised to pick up Bob at the airport late at night. Feeling lazy, and
thinking that he can take a cab, she decides to stay home playing piano
instead. This keeps her neighbor awake. Clarke believes that, whereas Ann’s
playing piano causes her neighbor to stay awake, her omitting to pick up
Bob doesn’t. Again, this is a reason not to identify Ann’s omission to pick
up Bob with her action of playing piano. I agree.
Now consider the intentions that Ann forms in this case. Presumably,
Clarke would say that we should distinguish a “positive” intention—the
intention to play piano—from a “negative” intention—the intention not
to pick up Bob at the airport. Which of these intentions causes her to play
piano? Surely, her intention to play piano causes her to play piano. But,
does her intention not to pick up Bob also cause her to play piano? Given
what Clarke wants to say about the Diana case, it seems that he would
have to say that it does: given that Ann’s omitting to pick up Bob was
intentional, her intention not to pick up Bob must have caused her sub-
sequent thought and action, including her playing piano. But this is an
odd result, given Clarke’s initial assumption about the causal powers of
actions and omissions. Why think that bodily positive and negative acts
have different causal powers but their mental counterparts (positive and
negative intentions) don’t? If there is good reason to think that Ann’s
omitting to pick up Bob doesn’t cause what her playing piano causes, isn’t
there also good reason to think that her intending to omit to pick up Bob
doesn’t cause what her intending to play piano causes?
I believe that there is, in fact, good reason to think this. Presumably,
the reason to think that Ann’s omitting to pick up Bob doesn’t cause her
neighbor to stay awake is that her neighbor stays awake, intuitively, not
because of what she doesn’t do that night, but because of what she does.
Similarly, it also seems that Ann plays piano that night, not because
of what she intends not to do that night, but because of what she intends
to do.
One could try to object to this by saying: there is a sense in which she
plays piano because of what she intends not to do. Namely: in the circum-
stances, were it not for the fact that she intended not to pick up Bob, she
couldn’t have intended to stay at home playing piano, and then she
wouldn’t have played piano. But the same is true of her omitting to pick
up Bob and the outcome of her neighbor’s staying awake: were it not for
the fact that she omitted to pick up Bob, she couldn’t have stayed at home
playing piano, and then her neighbor wouldn’t have stayed awake. So it
seems that whatever reasons we have for thinking that Ann’s omitting to
pick up Bob doesn’t cause her neighbor to stay awake are also reasons for
thinking that her intending not to pick up Bob doesn’t cause her playing
piano.
In support of his claim that the agent’s negative intention must cause
the subsequent positive behavior in order for his omission to be inten-
tional, Clarke imagines a case where the agent (Diana) forms a negative
intention but then the intention gets to play no causal role at all (it is
preempted by a chip implanted by neuroscientists, which causes Diana to
continue eating her ice cream on the shore). In this case, Clarke argues
that Diana’s omitting to jump in isn’t intentional. But, even if this is true,
this doesn’t show that, for Diana’s omission to be intentional, her intend-
ing not to jump in must cause her continuing to eat ice cream. Perhaps
she must intentionally omit to intend to jump in and such omission must
cause her omitting to jump in (as I propose). Or perhaps I am wrong and
her intending not to jump in must cause her omitting to jump in. Or,
finally, perhaps three things have to happen: (i) she must intend not to
jump in; (ii) she must intend to perform a different act incompatible with
jumping in (e.g., eating ice cream on the shore at the time); and (iii) such
an intention must cause that action. There could be other possibilities.
Presumably, none of these sets of conditions is met in the neuroscientist
scenario. The chip, and only the chip, accounts for Diana’s subsequent
behavior in that case. Hence the neuroscientist case doesn’t support
Clarke’s claim that, for an agent’s omission to be intentional, the agent’s
negative intention must cause her subsequent thought and action.
Acknowledgment
Notes
2. A natural thing to say, at least in this case, is that what makes Diana’s omission
to intend to jump in intentional is the fact that she considered possible reasons to
jump in but she took those reasons to be outweighed by reasons to continue eating
her ice cream on the shore.
11 Reply to Sartorio
Randolph Clarke
After drinking with his buddies one evening, Tom was tired. While they
vowed to carry on all night—and did—he went home and slept. Tom
intentionally omitted to join them in toasting the sunrise. Was he engaged
in some kind of behavior at dawn? Something dormitive, perhaps, but
nothing of the kind that action theory aims to characterize.
We can perfectly well use the term “behavior” in a broader sense, to
cover all things done intentionally, including Tom’s omitting to drink till
dawn. But if we do, we should see that “behavior” is a disparate category,
including actions and things that aren’t actions. And we should then not
expect a highly uniform theory of this broader phenomenon.
If, then, the right account of intentional omission doesn’t precisely
parallel a causal theory of action, does this fact make trouble for “causalism
as an attempt to explain what it is for an agent to behave intentionally”
(as Sartorio, this vol., chap. 8, 115, claims)?1 That depends on just how the
correct account goes and to what exactly “causalism” is committed.
Sartorio alleges that a proponent of causalism will hold that when one
intentionally omits to A, one’s omission is caused by one’s forming an
intention not to A (or an intention with some other relevant content). But
a causal theory of action doesn’t commit one to this view, since it isn’t a
theory of things that aren’t actions; and one can be a causalist about
intentional behavior, broadly construed, without holding this view.
Indeed, Sartorio’s account of intentional omission is causalist while reject-
ing the indicated view, for she holds that a paradigmatic intentional omis-
sion is caused by one’s omission to intend.
Still, as her account is spelled out, something that a standard causal
theory requires for intentional action—an important causal role for mental
states or events—is not said to be required for intentional omission. If there
is no such requirement, that is an important fact, even if it doesn’t force
the rejection altogether of a causal approach to intentional behavior.
immediately intend not to take another step forward. I refrain from walking
further.) If one’s omitting to intend to A (e.g., to continue walking) might
nevertheless be intentional, it remains to be explained how this can be so.
Indeed, strictly speaking, lacking an intention to A isn’t necessary for
intentionally omitting to A. One can have intentions of which one is
unaware, just as one can have beliefs and desires of which one is unaware.
Unaware that one intends not to A, one might A, and do so intentionally—
meaning then to A (and A-ing attentively, carefully, expertly). Similarly,
unaware that one intends to A, one might refrain from A-ing, meaning
then not to A.
Just how far Sartorio’s view of intentional omission diverges from what
causal theorists require for intentional action depends on what she thinks
about omitting for reasons, something that she doesn’t discuss. Things
done intentionally are typically done for reasons. If one’s omitting to
intend on some occasion is intentional, and done for reasons, we might
ask in virtue of what the latter is so. Must certain of one’s reason-states—
one’s beliefs, desires, affections, aversions, and the like—be causes of one’s
omitting to intend? If Sartorio accepts such a requirement, that will render
her account more thoroughly causalist, even if it still denies intentions any
necessary causal role. If she denies the requirement, then we might fairly
request a sketch of some alternative view.
I claimed in my essay that in order to intentionally omit to A, one’s
intention not to A (or some other pertinent intention) must cause some
of one’s subsequent thought and action. It must play a causal role with
respect to what does happen, even if it need not cause any absences. Sar-
torio asks what my argument for this claim is. I observed that in standard
cases of intentional omission, such an intention does in fact cause such
things. And when I considered a case in which this was not so, it seemed
to me that the omission was then not intentional, and that it failed to be
intentional because the intention in question didn’t play the indicated
causal role.
In the imagined case (this vol., chap. 9, 147–148), Diana decided not
to jump into a pond to save a drowning child. She then wondered what
to do instead, formed an intention to walk over to get a better view of the
impending tragedy, and did so. But her intention not to jump in caused
none of these things; they were all caused by a chip that had been implanted
in Diana’s brain earlier by a team of neuroscientists, who (unaware of
Diana’s decision) just happened to have picked this moment to test their
device. They got lucky, for the chip, I noted, had an unforeseen flaw: it
would have remained inert if, when the activation signal was sent, the
agent in whose brain it was implanted had just decided to jump into
water.
Sartorio finds no support here for the causal requirement I proposed.
There are, she says, other possible explanations of why Diana’s not jumping
in isn’t intentional. She suggests, first, that Diana’s omission might fail to
be intentional because her omitting to intend to jump in doesn’t cause her
not jumping in. But is that so? The chip causes Diana’s thought and action,
but, as Sartorio recognizes, it’s a further question what causes Diana’s omit-
ting to jump in. As the case is imagined, if Diana had intended to jump
in, she would have; her not so intending made a difference to what she
did. If omissions to intend ever cause omissions to act, it isn’t clear why
this one doesn’t.
Second, Sartorio suggests that Diana’s omitting to jump in might fail to
be intentional because her intending not to jump in doesn’t cause that
omission. But if that is correct, it suggests that intentions not to act must
play an even more robust causal role in intentional omissions than what
I argued, and the right account of intentional omission will, after all,
closely parallel a causal theory of intentional action.
Finally, Sartorio suggests that in order for Diana to have intentionally
omitted to jump in, she must have intended to perform some act incom-
patible with her jumping in, and that intention must have caused that
action. But, as I think the paint case from my essay (this vol., 138) shows,
intentionally omitting to A doesn’t require performing any action that one
takes to be incompatible with A-ing; it doesn’t require, either, intending
to perform any such action.
There might, in fact, be more than one correct explanation of why
Diana’s omitting to jump in isn’t intentional, as there will be if several
necessary conditions are unsatisfied. Though the case seems to me sup-
portive, perhaps no single example will decisively show my proposal
correct. The proposal can, however, be undermined if there is a case in
which someone intentionally omits something and yet no relevant inten-
tion plays the indicated causal role.
I said that in standard cases of intentional omission, one’s intention
not to act (or other relevant intention) is in fact a cause of one’s subsequent
thought and action. Sartorio disputes this claim. In my case involving Ann
(135), she maintains, it is Ann’s intention to play piano, not her intention
not to pick up Bob, that causes her piano playing. But the two intentions
aren’t competitors; they are, respectively, later and earlier members of a
causal sequence leading to Ann’s playing piano. The intention not to pick
up Bob causes Ann to consider what to do instead, which causes her to
decide to play piano, which causes her piano playing. One need not hold
that causation is necessarily transitive to find plausible the claim that Ann’s
intention not to pick up Bob indirectly causes her piano playing.
Comparing Ann’s intention not to pick up Bob with her omitting to
pick him up, Sartorio asks, “Why think that bodily positive and negative
acts have different causal powers but their mental counterparts (positive
and negative intentions) don’t?” (chap. 10, 159). But Ann’s intention not
to pick up Bob is an intention, whereas her omitting to pick him up is not
an action. Were the case real, the former would be an actually existing
being, negative only in its content (just as is my belief that Santa Claus
doesn’t exist). In contrast, the latter (I’m inclined to think) would be an
absence of being—the absence of an act by Ann of picking up Bob at the
airport. A causal impotence of the latter is no reflection on the former.
Although I don’t, in my essay, dispute Sartorio’s claim that omissions
to intend cause intentional omissions, I’m in fact doubtful about this. At
bottom, I find it doubtful that an absence of being can cause something.
Sartorio writes of omissions having causal powers; but I don’t see how
nonbeings can have any such powers.
I’ve advanced considerations favoring an account of intentional omis-
sion that makes no appeal to causation by absences. The proffered view
accords a causal role to mental states, including intentions, and sees inten-
tional omissions as resulting—even if noncausally—from such states. If an
account along these lines is correct, omissions make trouble neither for
causal theories of action nor for causal approaches to intentional behavior,
broadly construed.
Note
1. Unless otherwise noted, all page numbers refer to Sartorio’s chapters in this
volume.
12 Causal and Deliberative Strength of Reasons for Action:
The Case of Con-Reasons
David-Hillel Ruben
(2) “if reasons are causes, it is natural to suppose that the strongest reasons
are the strongest causes” (Davidson 2001a, xvi).
Any causal view is going to have to address the question: how are rational
and causal strength related? It is this question that gives rise to the problem
that the causalist faces with weakness of the will. In a case of weakness of
will, the rationally stronger reason is not the reason that causes or moti-
vates the agent to act, if reasons do in fact cause actions. The agent acts
on a rationally weaker reason, but one that is causally strong enough,
where the rationally stronger reason is not causally strong enough. “Caus-
ally strong enough or not so” just means: the rationally weaker reason
causes the action it supports, and the rationally stronger reason does not
cause the action it supports.
I do not know whether the causalist can really successfully deal with
the phenomenon of weakness of the will. But I want, in this essay, to
address a different issue, unconnected to weakness of the will. Let’s start
by trying to trace out the causal chains that lead from the con-reasons, for
on the causalist view con-reasons must have some effects, whatever they
might be. The thought that there are events, con-reasons, which have no
effects at all, is not one likely to appeal to the causalist. To be part of the
causal order is surely to have both causes and effects.5
There are two importantly different cases that I want to distinguish
(from the causalist point of view). In cases of Type I, the pro-reason and
the con-reason jointly cause the same effect; in cases of Type II, they have
separate and causally independent effects.
Type I: These cases are the ones that will most naturally spring to mind
on a causalist view, but I believe that such cases are more limited than one
in many cases, it need not be. That is, on the causalist program, there could
be other cases (Type II) in which the presence of the con-reason has some
effect, but no effect of any sort on the actual action taken; it instead
has some effect on something else. I do not know how to prove that there
must be such cases for the causalist, but it seems to me intuitively clear
that there could be.
What would the causalist have to hold in order to deny this claim? In
cases of joint causation of a single effect by multiple part causes, the
effect would have been different, or probably would have been different,
had any one of the part causes been different (or altogether absent). So
the causalist who wishes to deny the possibility of cases of Type II
would have to say that:
(3) In every case of action for which the agent has both pro- and con-
reasons that figure into his deliberations, had the agent not had such a
con-reason, the action he took would have been different or altered, or
probably would have been different or altered in some intrinsic way, or occurred
at a different time.6
I just don’t think that (3) could be true. I can envisage many cases in which,
were the con-reason absent, the action taken could be qualitatively the
same (in all nontrivial respects) as the action that was actually taken. We
can say of such cases: “the agent had, and acknowledged that he had, a
less weighty reason not to do something, which figured into his delibera-
tive activity, but that less weighty reason did not at all causally influence
his eventual choice to do what he did, in any way.” Perhaps such a case
might be one in which the agent has, and acknowledges that he has, a
weak moral reason that he does consider in his deliberations, but the weak
moral reason has in the end no actual effect on his eventual choice or
behavior. Or a case in which the ass considers both hay piles in its delib-
eration but is so determined to get to hay pile A that he would make exactly
the same choice regardless of what he acknowledges to be the lesser but
not negligible attractiveness of hay pile B, and so the ass would make the
identical choice—a choice qualitatively identical in character, timing, and
so on—had hay pile B not been available at all. That is, had the agent not
had the con-reason, his actual choice would have been (or probably
would have been) qualitatively the same in all relevant respects. Cases of
Type II already presume that reasons and causation even on the causalist
program can part company to this extent: a con-reason must cause some-
thing, but the con-reason might not be a part-cause of the same effect that
the pro-reason causes or part-causes.
chain leading to the action taken, and the con-reason must initiate a dif-
ferent, independent causal chain that leads to something else. One thing
that the con-reason can certainly not cause is the action it favors, since
that action never happened and therefore nothing could cause it. To be
sure, that something does not occur can have a cause, but what does not
occur can have no cause since it does not exist.
If we assume that the con-reason does not also contribute to the causa-
tion of the action it disfavors, but rather would have to cause something
else, there is any number of possible candidates for the effects of such
con-reasons available to the causalist. Perhaps a person’s con-reason directly
causes regret (Williams 1981, 27ff.), or causes some other change in his
mental landscape (his dispositions to act, for example), or causes some
psychological illness in him. He does the action favored by the pro-reason,
but since he had reasons against it, his con-reason ends in him regretting
what he did, or some such. Or perhaps the effect of the con-reason is not
even at the personal level at all. Might its effect not be some physiological
or brain event of which the actor is perhaps ignorant or unaware?7 (Or,
“some further physiological or brain event,” if the having of a con-reason
is such a physical event too.)
The important feature of all these candidate effects for cases of Type II
is that they require a second causal chain, in addition to the one that goes
from the stronger pro-reason to the action taken. If so, there would be one
causal chain leading from his having a pro-reason to his subsequent action.
There would be another quite independent causal chain leading from his
con-reason to his subsequent regret, or illness, or to some (further) physi-
ological or similar event. The causal chains would not converge causally
on the final choice or action, as they would if both pro- and con-reasons
causally contributed to the same action taken, as we sketched above in
cases of Type I.
On this rather simple picture, the pro-reason initiates a causal chain
leading to the action; the con-reason initiates a wholly independent,
second causal chain, leading to the regret or brain state or whatever. One
thing to note about this view is that it might not permit us to capture
causally the idea that both pro- and con-reason are rationally or delibera-
tively relevant to the same token final choice or action. The con-reason
might not be a reason against acting in a certain way in virtue of whatever
causal role it plays. A con-reason could not be the con-reason it is (a reason
not to do what was done) in virtue of its causing something else other
than that action. At the level of reasons for choice and action, the two
reasons bear differently on (one favors and the other disfavors) the same
choice or action, but the causal story might not mirror this in any way.
There would be just two distinct causal chains, each of which leads to a
different result; one leads to an action, the other to some psychological or
neurophysiological or dispositional state. But perhaps a causal model of
how pro- and con-reasons work in choice situations need not capture
within the causal model this fact about the rational significance that both
types of reasons have to the same action or choice, one in favor of it and
one against it, so I don’t take this as a decisive objection to the suggestion
under discussion.
My argument now focuses only on cases of Type II, such that pro- and
con-reason initiate independent causal chains. I do not deny that there
can be many cases of Type I, but these are not the ones I wish to consider.
In the cases on which I now focus, the pro-reason initiates one causal chain
and the con-reason initiates another, whatever it might be and to wherever
it leads.
In Type II examples, consider the actual situation, c. In c, the pro-reason
to A is rationally weightier for the agent than the con-reason to B. Caus-
ally, assuming (2), if there is no weakness of will, it is the pro-reason that
causes the agent to A, rather than the con-reason causing the agent to B
(so the con-reason causes something else, whatever that might be). But
now consider a counterfactual situation, c*.
c* is just like c, save in one feature, and whatever is a causal consequence
of that one feature: in c*, although the pro-reason retains the same delib-
erative weight that it has in c, the con-reason becomes much weightier.
This sort of scenario is very common. At a later time, an agent can assess
a reason as having more “gravitas” than he earlier imagined it had. It might
weigh more with him than it did before. So in c*, the con-reason counts
more for the agent. The agent does not judge that the reason to A has
become less strong than it was; it is just that the reason to B has become
deliberatively stronger, and so stronger than the reason to A.
The reason to B now rationally outweighs the reason to A in the agent’s
deliberations, so the agent now Bs rather than As. In c*, the reason to B
has become the pro-reason and the reason to A has become the con-reason.
Something about the reason to B has changed, and consequently the
ordinal information about relative strength of reasons has changed. But
nothing about the reason to A need have changed, other than certain
relational, ordinal truths about its deliberative strength.
At the level of decision, choice, and reason, this is all straightforward.
But how should we represent the allegedly underlying causal facts of the
matter in c* (in order to obtain a coherent causalist story)? In c*, the reason
If so, then the reason to A should have the same causal strength in c*
as it had in c (even though it is now rationally outweighed by the reason
to B), and since the reason to A caused the agent to A in c, then the reason
to A should cause the agent to A in c* as well (with one possible exception,
described below). If the reason to A in c was strong enough to cause the
agent to A, then it should still have the same causal strength in c*, and
therefore should be strong enough to cause in c* whatever it caused in c,
given that there are no causal nonrelational differences between c and c*
as far as the reason to A is concerned. If the reason to A has the same causal
strength or power in both, then its effects should be the same in both
circumstances. What it is strong enough to cause in one, it should be
strong enough to cause in the other. The relational difference that in c*
the agent’s reason to A is outweighed rationally by his reason to B can’t
make a difference to what the former is causally strong enough to do, since
its causal strength is intrinsic.
So why doesn’t the agent do A in c*, just as he did in c? If the reason
to A is able in c to cause the agent to A, and if it has the same causal
properties in c* that it had in c, then it should still cause the agent to A in
c*. True, the reason to B gains in deliberative strength in c* (and so the
relational facts about the relative strength of both reasons will change from
c to c*). Given the causalist’s (2), what the reason to B causes, what its
causal strength is, must have changed from c to c*, a causal change on
which its new deliberative strength supervenes. So the reason to B should
also cause the agent to B in c*. There should be, in c*, as far as we can tell,
a standoff: the agent should be caused both to do A and to do B.
To be sure, the agent can’t do both A and B; by assumption, the agent
is not able to do both on a single occasion. But in the counterfactual situ-
ation c*, causally speaking, there should be no grounds for thinking that
the con-reason will now win out “over” the pro-reason. The con-reason is
now rationally and hence causally strong enough to cause the agent to B,
but the pro-reason remains at the same intrinsic causal strength and hence,
on the causalist view, should still be strong enough to cause the agent to
A. So why should we expect the agent to do one or the other? Why doesn’t
the agent do A rather than B, even in the counterfactual situation, since
his reason to do A remained in principle strong enough to cause him to
do A, or why doesn’t he do nothing at all, as in a true Buridan’s ass
example, since the two causes might cancel each other out?
I mentioned one possible exception, above, to the claim that “since the
reason to A was strong enough to cause A in c, then the reason to A should
be strong enough to cause the agent to A in c* as well.” We need to take
note of this qualification. Perhaps the causal chains initiated by the reason
to A and the reason to B are independent in the actual circumstances, c
(they are not joint causes of a single effect or joint effects of a single cause),
but they might not remain independent in the counterfactual situation c*.
If the reason to B gains deliberative strength in c*, this relational change
might supervene on some changed intrinsic causal fact about it. Suppose
that in c* the reason to B, in addition to causing the agent to B, is now
able to interrupt the causal chain that would otherwise lead from the
reason to A to the agent’s A-ing, and that explains why the agent does not,
after all, do A in c*. Let’s consider this possible rejoinder to the difficulty
we have detected.
There would be some flexibility in deciding just where, in c*, the req-
uisite inhibitor blocked or stopped the chain commencing with the reason
to A from leading to its “natural” conclusion, A, as long as the chain did
not get all the way to that action. For the sake of argumentative simplicity,
let us suppose that the reason to B inhibited the very next link on the
chain. On such a chain, let m be the node that would have followed imme-
diately after the reason to A. So let us say that, in the counterfactual situ-
ation, what happened is that the reason to B inhibited or prevented m
from occurring, prevented or inhibited the reason to A from causing m,
and hence prevented the action A. That is why the agent does B instead
of A in the counterfactual situation, and why his reason to A does not lead
to his A-ing in c*, an explanation entirely consistent with the causalist
position.8 (In fact, the same result would be achieved if something else
other than his reason to B was the blocker or inhibitor, but the reason to
B is going to prove the most likely candidate for that role.)
The problem with this solution is simply that it is not true to the phe-
nomenological facts of the case. What this purported solution does is to
try to construe an agent’s not acting on a causally strong enough reason
that he has as a case of having that reason blocked or impeded by a con-
flicting reason that he also has. The identification doesn’t succeed.
Even apart from cases of weakness of the will, there is an indefinitely
large number of ways in which an agent’s wishes, wants, desires, and so
on can be thwarted. Bad luck affects us all. A typical sign of this happening
is agent frustration. In weak-willed cases, according to causalism, the
agent’s rationally strongest reason does not commence a causal chain
leading to an action because it is not causally strong enough; a rationally
weaker but causally strong enough reason does.
On the other hand, in the rather different case we are now considering,
the causalist rejoinder has it that the agent’s otherwise-causally-strong-
to do so), because the frustration arises from the causal failure, not from
a rational failure. At a rational level, the agent would be happy that the
reason to A did not lead to his A-ing, since he had more reason to B than
to A. But at a causal level, he was primed to do A just as much as to do B.
But if he does not do what he is causally primed to do, he will feel
frustrated. It would indeed be like not being able to scratch an itch, when
the desire to scratch was as causally strong as any reason not to do so was.
How much weight should we put on these sorts of phenomenological
facts in deciding metaphysical matters? It is, I think, too easy to be a skeptic
about this. What we are deciding are not just metaphysical matters gener-
ally, but specifically issues in action theory. If some view in action theory
attributes to agents various kinds of mental states, or has the consequence
that they have those states, what better check is there than introspection?
I think that many views in action theory can be judged in this way, for
example, ones that attribute various second-order mental states to agents,
or ones that require of the agent almost a limitless stock of beliefs (Ruben
2003, chap. 4). One might dispute my argument that the causalist view
does imply that the agent would experience the kind of frustration that I
claim. But if the argument about this implication is sound, then introspec-
tion is the only way I know to test the claim.
What the causalist was trying to do was to give a causal model for the
case in which the agent Bs, because his reason to B has become delibera-
tively weightier than his reason to A, even though his reason to A has
retained its original nonrelational, causal strength. In this case, surely the
truth of the matter is that nothing needs to be thwarted and the agent
need feel no frustration. He gladly “surrenders” his reason to A, at least in
the circumstances, to his now-superior-because-weightier reason to B. It is
not true that his reason to B prevents or blocks him from acting on his
otherwise causally strong enough reason to A. In the case at hand, he
chooses not to do A, because he takes his reason to do A as relatively of
less importance or weight than his reason to do B, and in the case as we
have constructed it, I do not see how this fact can be modeled causally.
There is a perfectly clear deliberative story about what goes on in this case,
but it is a story for which the causalist can provide no convincing causal
counterpart.
There is, I submit, no fully convincing way causally to model decision
making that includes con-reasons, at least for cases of Type II. It is the
element of relational deliberative weight, comparative strength, which
cannot be captured causally, at least in those cases in which the con-reason
does not contribute causally to the action taken. What matters in delibera-
tion is the comparative or relative strength of reasons. If reasons were
causes, there would be nonrelational truths about the causal strength of
reasons. Because of these nonrelational causal truths, the two scenarios,
the causal/motivational and the rational/normative, won’t mesh. As long
as one thinks only about pro-reasons for action causing the actions they
favor, the point is not salient. But once con-reasons are introduced, it
becomes clearer that there is no plausible causal modeling for all the ways
in which con-reasons work in our deliberation scheme.
Acknowledgments
Notes
2. Although Jonathan Dancy (2000, 4) notes their existence: “but still I will nor-
mally speak as if all the reasons that do motivate all pull in the same direction.”
3. A con-reason is also a pro-reason in its own right for the action not taken, and
is a con-reason only in the sense that it counts against the action that was taken.
Similarly, a pro-reason is only a pro-reason for the action taken and is itself also a
con-reason for the action not taken. In what follows, to simplify terminology, I will
only use the idea of a pro-reason to be the reason that counts for the action one
takes, and the con-reason to be the reason that counts for the action one does not
take, the reason that gets outweighed. In the light of this, it would be wrong to
think of pro-reasons and con-reasons as two different sorts of reasons. I was careful
above only to say that reasons can function in these two different ways, depending
on context.
182 D.-H. Ruben
5. I have often wondered why the principle “Every event has an effect” does not
have quite the same intuitive appeal as “Every event has a cause.” It might seem
obvious that they should stand or fall together.
6. Of course, the choice to A that he would have made or the A-ing he would have
performed had he not had a reason to B must differ from the choice to A that he
did actually make or the A-ing he actually did do in at least one way, simply in
virtue of the fact that it would have been a choice made in the absence of having
a conflicting reason to B. The qualification “in some intrinsic way” is meant to
exclude such trivial differences.
7. I do not think that one should underestimate the importance of the shift from
the personal to the subpersonal level, in order to maintain (1) and (2), broadened
to include con-reasons. It is a major concession on the part of the causalist. I do
not intend to develop the point here, but certainly the hope that lay behind the
causalist program for reasons for action was that reasons could be construed as
causes, yet doing so was compatible with understanding reasons and actions in their
own terms, sometimes called “the space of reasons.” This program was not neces-
sarily committed to construing reasons and actions as “really” about brain states
and gross behavior (even if they turn out to be identical to brain states and gross
behavior). The language of psychology and action was meant to have an internal
coherence and integrity all its own. To that extent, this option can easily take the
causalist program somewhere it had not intended to go.
8. Note that this example is not one of preemption, as some have suggested to me.
If it were a case of preemption, one would have two reasons both favoring the same
line of action, the first of which causes the action and the other of which did not
cause the action but would have caused the same action, had one not had the first
reason. In causal preemption, the inhibition or prevention is by the preempting
cause of some node on the chain that would have led from the preempted cause to
the effect. This is certainly not the case we are considering. But, arguably, all cases
of preemption involve some sort of causal inhibition or prevention, as does the case
we are considering.
13 Teleological Explanations of Actions: Anticausalism
versus Causalism
Alfred R. Mele
[1] The man, wondering where his hat is, sees it on the roof, fetches the ladder, and
immediately begins his climb. [2] Moreover, the man is aware of performing these
movements up the ladder and knows, at least roughly, at each stage what he is about
to do next. [3] Also, in performing these movements, he is prepared to adjust or
modulate his behavior were it to appear to him that the location of his hat has
changed. [4] Again, at each stage of his activity, were the question to arise, the man
would judge that he was performing those movements as a means of retrieving his
hat. (Wilson 1989, 290)
A while ago, Norm started climbing a ladder to fetch his hat. After he
climbed a few rungs, the Martians took over. Although they controlled
Norm’s next several movements while preventing him from trying to do
anything, they would have relinquished control to him if his plan had
changed (e.g., in light of a belief that the location of his hat had changed).
Return to facts 1 through 4. Fact 1 obtains in this case. What about fact
2? It is no less true that Norm performs his next several movements than
that the man who clutches the live electric wire performs convulsive move-
ments. And the awareness of performing movements mentioned in fact 2
is no problem. The wire clutcher can be aware of bodily “performances”
of his that are caused by the electrical current, and Norm can be aware of
bodily “performances” of his that are caused by M-rays. Norm also satisfies
a “knowledge” condition of the sort I identified. If Wilson is right in think-
ing that an ordinary ladder climber knows, in some sense, that he is about
to perform a movement of his left hand onto the next rung, Norm can
know this too. What he does not know is whether he will perform the
movement on his own or in the alternative way. But that gives him no
weaker grounds for knowledge than the ordinary agent has, given that the
subject matter is the performance of movements in Wilson’s broad sense
and given what Norm knows about the Martians’ expertise. Fact 3 also
obtains. Norm is prepared to adjust or modulate his behavior, and one
may even suppose that he is able to do so. Although the Martians in fact
initiated and controlled Norm’s next several movements up the ladder
while preventing him from trying to do anything, they would not have
done so if his plans had changed. Fact 4 obtains too. In Wilson’s sense of
“perform a movement,” Norm believes that he is performing his move-
ments “as a means of retrieving his hat.” (He does not believe that the
Martians are controlling his behavior; after all, he realizes that they very
rarely do so.)
Even though these facts obtain, Norm does not sentiently direct his next
several movements up the ladder at getting his hat because he is not sen-
tiently directing these movements at all. Wilson maintains that sentiently
directing a bodily movement that one performs entails exercising one’s
“mechanisms of . . . bodily control” in performing that movement (Wilson
1989, 146). However, Norm did not exercise these mechanisms in his
performance of the movements at issue. Indeed, he did not make even a
minimal effort to perform these movements; owing to the Martian inter-
vention, he made no effort at all—that is, did not try—to do anything at
the time. And it is a platitude that one who did not try to do anything at
all during a time t did not sentiently direct his bodily motions during t.
It might be suggested that although Norm did not directly move his
body during the time at issue, he sentiently directed his bodily motions
in something like the way his sister Norma sentiently directed motions of
her body when she vocally guided blindfolded colleagues who were carry-
ing her across an obstacle-filled room as part of a race staged by her law
firm to promote teamwork. If Norma succeeded, she may be said to have
brought it about that she got across the room, and her bringing this about
is an action.8 Notice, however, that there is something that she was trying
to do at the time. For example, she was trying to guide her teammates. By
hypothesis, there is nothing that Norm was trying to do at the relevant
time, for the Martians blocked brain activity required for trying. And this
is a crucial difference between the two cases. The claim that Norma sen-
tiently directed motions of her body at some goal at the time is consistent
with T3; the comparable claim about Norm is not.9
Wilson proposed sufficient conditions for its being true that a person’s
movements were sentiently directed by him at promoting his getting back
his hat. Norm satisfies those conditions even though it is false that the
“movements” at issue were sentiently directed by him. So those conditions
are not in fact sufficient.
Can Wilson’s proposal be rescued simply by augmenting it with an
anti-intervention condition? No. If the addition of such a condition does
contribute to conceptually sufficient conditions for a person’s sentiently
directing his movements at a goal, it may do so because the excluded
kinds of intervention prevent, for example, the obtaining of normal causal
connections between mental items or their neural realizers and bodily
motions. An anticausalist who augments Wilson’s proposal with an anti-
intervention condition also needs to produce an argument that the condi-
tion does not do its work in this way.
2 Sehon on Norm
movements in his thin sense, I doubt that he would count a man whose
limb motions are caused by Martians pulling and pushing on his limbs as
performing movements with those limbs. Naturally, I thought the M-rays
were just fine for my purposes, but instead I could have portrayed the
Martians as moving Norm’s paralyzed body with just the right sorts of
electrical jolts to muscles and joints. Call this E-manipulation.
Sehon wonders why my Martians interfere with Norm. “What’s in it for
the Martians?” he asks (Sehon 2005, 168). In Mele 2003, I neglected to
mention that the Martians had read page 290 of Wilson’s (1989) book and
wanted to provide a living counterexample to his proposal. In any case,
Sehon’s reflection on his question leads him to the following claim:
Mele stipulates that the Martians are going to make Norm’s body do exactly what
Norm planned to do anyway. If this were an ironclad promise from the Martians,
or better yet, something that followed necessarily from their good nature, then
. . . I have little problem saying that Norm is still acting, despite the fact that the
causal chain involved is an unusual one. If he commits a murder under these cir-
cumstances, we will definitely not let him off. (Sehon 2005, 169)
to him.) I do not see how the claim that, in this scenario, Norm is climbing
the ladder can be regarded as anything but utterly preposterous. Yet, unless
Sehon can identify a crucial difference between the use of M-rays and this
alternative mode of Martian body manipulation, he is committed to having
“little problem saying” that Norm is climbing it.
Sehon is willing to grant that when my Martians are at work rather than
his, Norm is not acting (Sehon 2005, 168). He contends that, in my story,
“since Norm fails . . . to satisfy” the following condition, “his behavior
does not count as goal directed” on his “account of the epistemology of
teleology” (ibid., 169): (R1) “Agents act in ways that are appropriate for
achieving their goals, given the agent’s circumstances, epistemic situation,
and intentional states” (ibid., 155). If I am right, Norm is not acting at all,
in which case invoking R1 is overkill. And if Norm is not acting, as Sehon
is willing to grant, then Wilson’s proposal about sufficient conditions for
its being true that a person’s movements were sentiently directed by him
at promoting his getting back his hat is false, which is what I set out to
show with the Martian example in Mele 2003.
Some readers may feel that they have lost the plot. The following obser-
vation will help. One thing that Sehon would like to show is that a pro-
ponent of AT can “accommodate our intuition that Norm is not acting”
in my story (Sehon 2005, 170). He argues that “Norm’s motion is not that
of an agent, because in a range of nearby counterfactual situations his
behavior is not appropriate to his goals. Specifically, in all those situations
in which the Martians simply change their mind about what they want to
have Norm’s body do, Norm’s body will do something quite different”
(ibid.).
Sehon’s explanation of why Norm is not acting is seriously problematic.
Imagine a case in which the Martians consider interfering with Norm but
decide against doing that. Norm walks to the kitchen for a beer without
any interference from the Martians. There are indefinitely many variants
of this case in which the Martians change their minds about not interfering
and make Norm’s body do something else entirely. So “in a range of nearby
counterfactual situations his behavior is not appropriate to his goals”
(ibid.). But this certainly does not warrant the judgment that Norm is not
acting in the actual scenario. Obviously, he is acting in that scenario: he
is walking to the kitchen for a beer. If Sehon is thinking that his counter-
factual test for whether an agent is acting is to be applied in scenarios in
which the Martians interfere with Norm but not in scenarios in which they
do not interfere with him, he does not say why this should be so.
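The overgeneralization worry can be made vivid with a small sketch (our construction, not Sehon's or Mele's): applied uniformly, the counterfactual test returns the wrong verdict on the ordinary beer case.

```python
# Toy rendering (our construction) of the counterfactual test at issue:
# "the agent is acting only if, in nearby scenarios, his behavior still
# fits his goals."

GOAL = "walk to the kitchen for a beer"

def behavior(martians_interfere, martian_whim):
    # If the Martians interfere, the body does whatever they fancy;
    # otherwise Norm simply pursues his goal.
    return martian_whim if martians_interfere else GOAL

# Actual scenario: the Martians considered interfering but decided not to.
actual = behavior(False, None)

# Nearby counterfactuals: the Martians change their minds.
nearby = [behavior(True, whim) for whim in ("wave arms", "stand still")]

# Applied uniformly, the test says Norm is not acting ...
test_says_acting = all(b == GOAL for b in nearby)

# ... yet in the actual scenario he plainly is: he walks to the kitchen.
assert actual == GOAL and test_says_acting is False
```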
One could alter the example by making Sally’s neurological disorder much more
general, such that she rarely does what she intends; but with that revision, my own
intuitions about the case grow flimsy. I’m not sure what to say about her agency in
such a case, and I’m not too troubled by the conclusion that she is not exhibiting
genuine goal-directed behavior at any particular moment. (Ibid.)
Two conclusions may now be drawn. First, Sehon has not shown that
my objection to Wilson’s proposal is unsuccessful. In fact, insofar as he
concedes that Norm is not acting in my story, he apparently concedes that
the objection is successful. (He nowhere claims that Norm does not satisfy
Wilson’s proposed conditions.) It is perhaps worth mentioning in this
connection that Sehon might have misled not only his readers but also
himself by treating my objection to Wilson’s proposal as though it were
an argument for the claim that no version of AT can “accommodate our
intuition that Norm is not acting” in my story (ibid., 170). Other anticau-
salists about action explanation—for example, Carl Ginet (1990) and R.
Jay Wallace (1999)—have proposed other sufficient conditions for a human
being’s performing an action or acting in pursuit of a particular goal, and
my objections to Ginet’s and Wallace’s proposals in Mele 2003 were very
different from my objection to Wilson’s proposal.11 Naturally, the objec-
tions I offered were designed to apply to the details of the specific propos-
als. Second, Sehon’s attempt to produce a version of AT that distinguishes
cases of action from cases like Norm’s is unsuccessful.
whatever the best account of causation is, their view about what actions
are or how they are to be explained is the correct one to take, and they
may want to avoid hitching their wagon to a specific theory of causation.
The same goes for causal explanation.
David Lewis defends the thesis that “to explain an event is to provide
some information about its causal history” (Lewis 1986, 217). Presumably,
any anticausalist about action explanation who takes even some actions
to be events would reject this thesis. But suppose it were agreed on all sides
that a sufficient condition for an explanation of an event being a causal
explanation is that the explanation explains the event at least partly by
providing some information about its causal history. With this agreement
in place, one can ask, for example, whether every acceptable teleological
explanation of an action is a causal explanation of the action. Are there
acceptable teleological explanations of actions that do not explain the
actions even partly by providing some information about their causal
history? (Teleological explanations of actions, again, are explanations in
terms of aims, goals, or purposes of the agents of those actions.)
Sehon asserts that “Teleological explanations simply do not purport to
be identifying the cause of a behavior” (Sehon 2005, 218). But, as Lewis
observes, speaking in terms of “the cause of something” can easily generate
confusion (Lewis 1986, 215). Lewis adds: “If someone says that the bald
tire was the cause of the crash, another says that the driver’s drunkenness
was the cause, and still another says that the cause was the bad upbringing
which made him so reckless, I do not think any of them disagree with me
when I say that the causal history includes all three” (ibid.). In any case,
causalists like me do not purport to be identifying the cause of an action
when we offer causal explanations of actions in terms of agents’ aims,
goals, or purposes. The basic idea—oversimplifying a bit—is that a putative
teleological explanation of an action in terms of a goal, aim, or purpose G
does not explain the action unless the agent’s wanting or intending to (try
to) achieve G has a relevant effect on what he does. Obviously, the notion
of having an effect is a causal notion; and the assertion, for example, that
an agent’s intending to achieve G had an effect on what he did places his
intending to do that in the causal history of what he did.
At this point, some causalists part company with others. Some causalists
posit token mental states such as intentions, desires, and beliefs and attri-
bute causal roles to these states or to their neural realizers in the production
of actions.13 Other causalists are wary of postulating such token states of
mind.14 I take no stand on this issue here.
for which of the two reasons he mowed his lawn this morning and tell her
how you figured it out. You decide to follow Sehon’s lead and to consider
various counterfactual scenarios. You know that Al dislikes mowing his
lawn in even a light rain, and you start by asking yourself what he would
have done this morning if there had been a light rain. You think that if
he would have mowed his lawn anyway, “that is good evidence that in
the actual circumstances [he] was directing [his] behavior” (Sehon 2005,
158) at getting revenge, because the rain, for Al, would outweigh schedule-
related convenience. “Would he have mowed it anyway?” you ask yourself.
And you find that you are stumped. You realize that if you had substantial
grounds for believing that Al mowed his lawn to get revenge, you could
use those grounds to support the claim that he would have mowed it even
in a light rain; and you realize that if you had substantial grounds for
believing that Al mowed his lawn only for reasons of convenience, you
could use them to support the claim that he would not have mowed it if
it had been raining. It dawns on you that the strategy of trying to identify
the reason for which Al actually acted by trying to figure out what he
would have done in the counterfactual scenario I mentioned and other
such scenarios puts the cart before the horse. Asking your counterfactual
question about the rain scenario is nothing more than a heuristic device—
and not a very useful one. The truth about what Al would have done in a
light rain is grounded partly in the truth about the reason for which he
actually acted.
As I pointed out in Mele 2003, 51, in response to an earlier proposal by
Sehon that featured counterfactuals, the truth of true counterfactuals is
grounded in facts about the actual world; and if, for example, relevant
counterfactuals about Al are true for the reasons one expects them to be,
their truth is grounded partly in Al’s acting for the reason for which he
acted. As far as Davidson’s challenge is concerned, we are back to square
one. Certain counterfactuals about Al are true partly because he acted for
a certain reason. But in virtue of what is it true that he acted for that
reason? Sehon’s proposal about counterfactuals leaves this question
unanswered.
For a critical examination of several leading anticausalist treatments of
action-explanation, see chapter 2 of Mele 2003. In the present essay I have
focused on some elements of Sehon’s recent defense of AT. If I am right,
Sehon has not undermined my objection to Wilson’s proposal, has not
produced a version of AT that distinguishes cases of action from cases like
Norm’s, and has not offered an adequate reply to Davidson’s challenge.
This is bad news for at least one version of AT.
Acknowledgments
Notes
1. See Bishop 1989; Brand 1984; Davidson 1980; Goldman 1970; Mele 1992, 2003;
Thalberg 1977; and Thomson 1977.
2. See, e.g., Sehon 1994, 1997, 2005; Taylor 1966; and Wilson 1989, 1997.
4. This paragraph and the next eight are borrowed, with some minor modifications,
from Mele 2003, 48–50.
5. See Adams and Mele 1992, 325; Armstrong 1980, 71; McCann 1975, 425–427;
and McGinn 1982, 86–87.
6. See James 1981, 1101–1103. For discussion of a case of this kind, see Adams and
Mele 1992, 324–331.
action-individuation. The same goes for the expressions that take the place of “A”
in concrete examples.
8. Readers who regard the claim that Norma brought it about that she got across
the room as an exaggeration may be happy to grant that she helped to bring that
about. Her helping to do that is an action.
9. On a case that may seem to be problematic for T3, see Mele 2003, 64–65, n. 22.
10. In my original story, the Martians interfere with Norm only “on rare occasions”
(Mele 2003, 49). So, seemingly, they may interfere with him much less often than
Sally has her finger problem. The variant of Norm’s case just sketched renders specu-
lation about this comparative issue otiose.
11. For my discussion of Ginet 1990 and Wallace 1999, see Mele 2003, 39–45.
12. Incidentally, I have never offered an analysis of action nor of acting in pursuit
of a particular goal, although I have defended causalism in both connections (Mele
1992, 2003). Paul Moser and I (Mele and Moser 1994) have offered an analysis of
what it is for an action to be an intentional action.
13. See, e.g., Brand 1984; Davidson 1980, ch. 1; and Mele 1992, 2003.
15. There is considerable disagreement about what reasons for action are, and I have
spun my story in a way that is neutral on this issue. For example, the story does
not identify the reasons mentioned in it with belief-desire pairs, as Davidson (1980,
ch. 1) does; nor does it deny that this identity holds.
14 Teleology and Causal Understanding in Children’s
Theory of Mind
1 The Puzzle
What evidence is there to suggest that young children have some under-
standing of intentional action? Some psychologists maintain that even
toward the end of the first year, as infants begin to engage in joint atten-
tion interactions with others, they perceive and understand others’ actions
as goal-directed (Tomasello 1999). Others have argued that such under-
standing manifests itself in 18-month-olds’ more sophisticated capacities
for imitation of intended actions (Meltzoff 1995). Here we will focus on
evidence provided by (slightly older) children’s performance on classical
false-belief tasks. The question put to children in such tasks is what the
protagonist in some story will do next. For example: Suppose Maxi’s mother
transfers the chocolate Maxi put into the green kitchen cupboard to the
blue cupboard while he is out playing. Maxi, feeling peckish, returns to
the house. Where will he look for the chocolate? Three-year-olds’ performance
on this task is poor, but far from random. They reliably predict that Maxi
will look in the blue cupboard. They don’t suggest that he will look under
the kitchen table or in the playground or in the loft. What explains this
and
2 Desire Psychology
There is, in the developmental literature, an influential view that may seem
to offer a simple solution to our puzzle. The basic idea is that while think-
ing of someone as acting intentionally certainly requires understanding
something about the mental states causally responsible for intentional
actions, such understanding may be more or less comprehensive. A fully
developed conception of intentional action will require a large and complex
set of psychological notions, including, of course, the notion of belief. But,
the claim is, children may have a rudimentary grasp of intentional action
in virtue of understanding something about the explanatory role of desires,
without yet appreciating how desires tend to interact with other states,
such as beliefs (Bartsch and Wellman 1995).
Straight off, though, it is not obvious how this suggestion speaks to our
puzzle. To understand Maxi’s behavior as intentional, you have to put two
things together: the purpose of the action and the means by which Maxi
seeks to accomplish his purpose. A simple “desire psychology” may enable
you to identify Maxi’s purpose (his purpose is to get hold of his chocolate),
202 J. Perner and J. Roessler
3 Objective Reasons
to retrieve the object as giving us a reason to look there. From the perspec-
tive of deliberation, only true propositions—facts—can provide genuine
reasons. This point is not inconsistent with BD. The claim is not that BD
is false. It is only that BD does not exhaust the commonsense psychology
of practical reasons. In this observation, we suggest, lies the solution to
our “conceptual difficulty.” Young children find intentional actions intel-
ligible in terms of “objective” practical reasons.
Let’s clarify the basic idea with the help of a relatively uncontroversial
example from Bernard Williams. Suppose you believe of the content of a
certain bottle that “this stuff is gin,” when in fact the bottle contains
petrol. You feel like a gin and tonic. Should we say that you have a reason
to mix the stuff with tonic and drink it? Williams suggests that you do not
have such a reason, although you think you do, and although it would
certainly be rational for you to drink the stuff (Williams 1981b, 102). One
might object that to say that it would be rational for you to drink the stuff
just is to say that you have a reason to drink it—a reason that might be
appealed to, for example, in offering a “reason-giving” explanation of your
action. But Williams is surely right in drawing our attention to the fact
that you are mistaken in thinking you have a reason to drink the stuff. Your
putative reason can be set out as follows: “I need a gin and tonic. This stuff
is gin. So I should mix it with tonic and drink it.” Certainly the inference
reveals your action to be rational from your point of view. Correlatively,
appeal to the inference may figure in a reason-giving explanation of your
action. But the fact remains that the inference is unsound. Given that the
second premise is incorrect, the inference fails to establish the truth of its
conclusion: you are mistaken in taking the premises to establish that you
should drink the stuff. In this sense, you are mistaken in thinking you
have a reason for drinking it. This is perfectly consistent with acknowledg-
ing that there is a sense in which you do have such a reason. The point is
sometimes put by saying that you lack a justifying or guiding reason, but
have an explanatory reason to drink the stuff (Raz 1978). But this can be
misleading, given that an explanatory reason too may be said to justify, at
least in the “anaemic” sense (Davidson 1963/1980) of revealing the action
to be justified or rational from your perspective. Marking the distinction as
one between justifying and explanatory reasons would, in the current
context, be awkward in another way. For the suggestion we want to pursue
would now (confusingly) have to be put by saying that young children
explain intentional actions in terms of justifying (rather than explanatory)
reasons. So we’ll simply distinguish between subjective and objective reasons:
you have a subjective reason to drink the stuff, but you lack an objective
reason to do so. Note that by itself the distinction does not imply that we
are talking about different sorts of things here. Subjective reasons need not
be taken to be mental states. Instead they might be taken to be propositions
forming the contents of mental states. The distinction turns on whether
or not a reason statement is to be understood as relativized to the agent’s
current perspective. To say that you have a subjective reason to drink the
stuff is to say that, from your perspective, it looks as if you have an (objec-
tive) reason to do so.
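The relativization at work here can be put schematically (a toy sketch of ours, not the authors'): one and the same putative reason statement is evaluated either against the agent's perspective or against the facts.

```python
# Toy sketch (ours, not Perner and Roessler's) of the gin/petrol case:
# one and the same putative reason, evaluated from two standpoints.

facts = {"bottle_contains": "petrol"}
beliefs = {"bottle_contains": "gin"}  # how things look from the agent's perspective

def reason_to_mix_and_drink(standpoint):
    # The putative reason: "This stuff is gin, so mix it with tonic and drink it."
    return standpoint["bottle_contains"] == "gin"

subjective = reason_to_mix_and_drink(beliefs)  # relativized to the agent's perspective
objective = reason_to_mix_and_drink(facts)     # grounded in how things actually are

# You have a subjective reason to drink the stuff, but no objective one.
assert subjective is True and objective is False
```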
It might be said that even objective reasons have to be relativized in
one respect: they must be relativized to the agent’s set of desires or objec-
tives or projects. But on reflection it is not clear that this is so. As is often
pointed out by critics of the Humean theory of motivation5 (and as is
acknowledged by some of its defenders6), practical deliberation does not
always or even typically start from reflection about one’s current desires.
Practical inferences are often premised on evaluative propositions, to the
effect that some action or some state of affairs is important or desirable;
or, more specifically, on propositions involving “thick” evaluative con-
cepts (e.g., promise, treachery, brutality, courage), apparently embodying
“a union of fact and value” (Williams 1985, 129); or again, as in the
example above, on propositions to the effect that someone needs, or needs
to do, a certain thing, where claims of need are also best understood as a
species of evaluative propositions, not to be confused with, or reduced to,
ascriptions of desire (Wiggins 1987).7 This suggests that there may after all
be such a thing as a fully objective reason, relativized neither to the sub-
ject’s instrumental beliefs nor to her set of desires and projects. A possible
illustration might be the suggestion that the subject in Williams’s example
not only lacks an (objective) reason to drink the stuff, but actually has an
(objective) reason not to drink it—a reason provided by the fact that drink-
ing petrol is bad for your health.8
stands in a causal relation to the event of his opening the green cupboard.
In contrast, the hybrid view may seem to provide for causes, conceived as
particulars. True, even on the hybrid view, the explanatory force of early
reason explanations cannot be exhaustively explained in terms of causal
relations between “mental items” and actions. External facts also play a
vital explanatory role. But defenders of the hybrid view might argue that
such facts should be seen as “standing conditions,” whose explanatory role
essentially depends on that of causally efficacious “items.” Their role may
be not unlike that of the dryness of the ground in an explanation of a
forest fire caused by someone’s dropping a cigarette. In brief, the Davidso-
nian challenge might seem to provide materials for an a priori argument
in favor of the hybrid view.
We want to suggest that this argument rests on an implausible premise.
The Davidsonian challenge consists of two central claims. One is that
action-explanation must be a species of causal explanation. This is usually
motivated by arguing, convincingly, that action-explanations are explana-
tions of the occurrence of events, and that it’s hard to see how the occur-
rence of an event can be made intelligible other than in causal terms. The
second claim is that causal explanation has to appeal to causes, conceived
as particulars. The correct teleological response to the a priori argument
for the hybrid view, we suggest, is to accept the first but reject the second
claim. The key question here is: What does it mean for an explanation to
be causal? Adapting Bernard Williams’s remark about truth, perhaps the
right thing to say here is that causal explanation in itself isn’t much. It’s
not explanation in terms of laws of nature; it’s not explanation in terms
of event causation; it’s not explanation in terms of causal processes or
causal mechanisms. The basic idea of a more minimalist account is that
causal explanations advert to facts that “make a difference,” where this is
to be spelled out in terms of patterns of counterfactual dependence.13 Our
suggestion, then, is that teleological explanation is a species of causal
explanation. So we agree with the idea underpinning the Davidsonian
challenge to teleology, that the development of the commonsense concep-
tion of intentional action is inextricably entwined with the development
of causal understanding. But we deny that causal understanding in this
area takes the form of understanding causal relations between “mental
items” and actions.
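The difference-making idea can be pictured with a toy structural model of the forest-fire example above. This is only an illustrative sketch under Woodward-style interventionism, not anything in the text itself; the variable names and the structural equation are invented for illustration.

```python
# Toy structural model: the fire counterfactually depends both on the
# dropped cigarette (the salient "cause") and on the dryness of the
# ground (the "standing condition") -- both "make a difference."

def fire(cigarette_dropped, ground_dry):
    # Structural equation (invented for illustration): fire occurs
    # only if a lit cigarette is dropped AND the ground is dry.
    return cigarette_dropped and ground_dry

def makes_a_difference(model, which, actual_values):
    """Check counterfactual dependence: does intervening to flip one
    variable, holding the rest at their actual values, change the outcome?"""
    actual_outcome = model(**actual_values)
    intervened = dict(actual_values)
    intervened[which] = not intervened[which]  # the "intervention"
    return model(**intervened) != actual_outcome
```

On this picture, citing the dryness of the ground is as much a causal explanation as citing the dropped cigarette: each variable makes a difference to the outcome, without any appeal to causes conceived as particulars.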
Specifically, the version of the difference-making approach we propose
to draw on is the so-called interventionist approach to causation and causal
explanation (Woodward 2003). The central idea of interventionism is this.
To say that there is a causal relation between two variables X and Y is to
Teleology in Children’s Theory of Mind 209
6 The Evidence
is an alternative reading. The child may make the man’s action intelligible
by stating the intention with which it was performed: his intention in
putting the car up was to fix it. The example reveals the child’s understand-
ing that people act on the basis of good reasons, and provides a partial
reconstruction of the reason operative in this case, identifying the purpose
informing the action. Note that on this reading, it’s possible that Ross takes
the reason to be provided by the objective desirability of the man’s fixing
the car, rather than by the man’s desire to fix it. Thus, the example pro-
vides no evidence against the teleological account.
hider hides the penny in one hand or the other, and then invites a guess.
This is repeated over a series of trials, after which the participants change
roles. In the role of hider “the child was judged to be competitive if he
expressed displeasure on any of the trials in which E found the marble, or
if any of the following events occurred: 1. when E selected the marble-
holding hand, S refused to show the marble or made an attempt to transfer
it to his other hand; 2. S extended an empty hand for E to guess from. A
child was judged to be non-competitive if none of the above events occurred.
The non-competitive children frequently told E where the marble would
be hidden, after a trial in which E had failed to guess correctly; and many
of them extended the marble in an open hand on all trials” (Gratch 1964,
53–54).
The proportion of children who displayed competitive spirit increased
steadily from 5 percent below the age of 3 years to 58 percent at 4.5 years
to practically 95 percent at around 6 years. This fits very well the typical
developmental trend on false-belief tests (see Perner, Zauner, and Sprung
2005, figure 4, for a graphic display). Gratch’s study is particularly helpful
insofar as he analyzed indicators of competitive spirit separately from
indicators of the ability to deceive, fool, or conceal information from the
opponent. Since deception requires an understanding of false belief, a
developmental link between understanding the deceptive aspects of the
game and understanding false belief would not be terribly interesting and
would provide no support for the teleological theory developed here. The shortcoming of
Gratch’s data for present purposes is, of course, that there is no direct
comparison of how many of the children in his sample would have passed
the false-belief test. Ideally one would also want a control task with
similar cognitive demands, minus the competitiveness, on which children
of all ages can succeed.
The good news is that several more recent studies have included the penny-
guessing game and false-belief tasks (Baron-Cohen 1992; Chasiotis et
al. 2006; Hughes and Dunn 1997, 1998). The bad news is that, without
exception, these studies only analyzed the hand-guessing behavior for
indicators of deceptive abilities (or a mix of combative spirit and decep-
tion). Hence the reported correlations with false belief understanding
provide no convincing evidence against the hybrid theory and, therefore,
also no support for teleology.
We should also mention two other pieces of evidence that go well with
teleology. Although children seem to have problems understanding the
point of competition, they are quite concerned about obeying the rules of
a game (Rakoczy, Warneken, and Tomasello 2008) even at the age of 2
216 J. Perner and J. Roessler
years when they just start to use “desire” terms for other people (Bartsch
and Wellman 1995; Imbens-Bailey, Prost, and Fabricius 1997). At 3 years
(36 months) this concern becomes almost obsessive. Clearly they expect
people to act a certain way because it is the right, conventional way, and
they seem to have little understanding of idiosyncratic deviation.
Following the pioneering work by Shultz and Shamash (1981), several
studies reported that children have difficulty distinguishing intentions
from desires (see reviews by Astington 1999, 2001). The latest study by
Schult (2002) included children as young as 3 years. They had to toss bean
bags into three different colored buckets, some of which contained a ticket
for a prize. For each toss they had to indicate which bucket they intended
to hit. On some trials they hit the intended bucket, on others they missed
it; on some they won a prize, on others they didn’t, resulting in four dif-
ferent combinations. The 4- and 5-year-olds were remarkably accurate in
answering all types of questions. The 3-year-olds, on the other hand, had
serious problems with questions about their intentions, in particular when
satisfaction of their intention contrasted with satisfaction of their desire,
as shown in table 14.1.
This pattern of results follows from our assumption that children remain
basic teleologists until about 4 years, when they can understand differences
of perspective. They have no problem knowing what they want, i.e. the
desirable goal of the action (winning the prize), and whether they got it
or not. They also understand intentions to hit a particular bucket, though
only insofar as there are objective reasons for such intentions. Consider now
a case of fortuitous success, where children accidentally get the prize after
hitting a bucket they didn’t intend to hit. To understand that they didn’t
intentionally hit the bucket, children have to understand that they had
Table 14.1
Data from Schult (2002): number of children giving correct answers to the satisfac-
tion questions, “Did you do what you were trying to do?” (intention), and “Did you
get what you wanted?” (desire).
no reason for hitting that particular bucket, despite the fact that doing so
turned out to be conducive to reaching their goal. Or consider a case of
bad luck, where they hit a certain bucket without getting a prize. To under-
stand that they hit the bucket intentionally, children have to understand
that they did have a reason for hitting that bucket, despite the fact that
doing so turned out not to be conducive to reaching their goal. Under the
teleological interpretation, it is unsurprising that young children have
problems under these kinds of circumstances. Correct judgment of these
cases only becomes possible when one understands that one acted on the
assumption that the prize might be in the bucket one was aiming for. Since
in the critical cases this assumption turns out to be false, the intentionality
of the intended action can only be understood if one can understand it in
terms of the perspective of that assumption.
In sum, there is some suggestive evidence against the hybrid theory and
some support for the teleological account in Gratch’s finding that children
show little competitive spirit before the age at which they are able to
understand false belief as a motivating reason. More importantly (for the
developmental psychologists), our account also provides us with a clearer
analysis of what young children should find difficult about competition
and incompatibility of goals. In appendix 2, we discuss further evidence
that is prima facie relevant to the debate between teleology and the hybrid
view (evidence concerning children’s understanding of emotional reac-
tions to the satisfaction or frustration of desires). In appendix 3, we present
an outline of a new experimental paradigm to test for children’s ability to
understand competitive actions.
7 Teleology in Perspective
disabling conditions. In its subtler (adult) form, the idea is this: if someone
doesn’t know that action x causes event y, it’s unsurprising that she won’t
perform x, despite having reason to cause y—for she lacks a subjective
reason to perform x. Correlatively, her performing x can be intelligible in
terms of her knowledge that x causes y. Note that explanations in terms
of knowledge are simultaneously teleological (they accord an explanatory
role to reason-giving facts, knowledge being a factive state) and psychologi-
cal (knowledge being a psychological state).
Second, mature interpreters are also able to explain actions in terms of
considerations the agent takes to provide her with a reason, without
endorsing those considerations or the claim that they constitute reasons. It
is this dimension of the “adult theory” that enables us to figure out why
someone is adding petrol to her tonic or to understand competition. And
it is here that we find the rationale for BD: adopting a detached, relatively
noncommittal stance toward others’ intentional activities requires finding
them intelligible in terms of reasons provided by their propositional
attitudes.
The teleological genealogy of the adult theory has important implica-
tions for the sorts of psychological properties invoked in the mature con-
ception of intentional action. It should lead us to question the relentless
focus on just two kinds of mental states, beliefs and desires, encouraged
by the “belief-desire model” of action-explanation. The teleological geneal-
ogy highlights the explanatory role given in the adult theory to knowl-
edge.19 It also suggests that the notion of desire in BD should be interpreted
not in the narrow Humean sense but along the lines of Davidson’s “pro-
attitudes,”20 as subsuming the immense variety of attitudes that constitute
agents’ perspectives on the purposes for which they act. Reflection on the
teleological origin of commonsense psychology also helps to shed light on
the sort of explanation required for understanding intentional action. It is
not enough to think of certain mental states as the causes of bodily move-
ments. What matters is the ability to see how some of the agent’s psycho-
logical properties provide her with considerations that from her point of
view can be seen to amount to a practical reason. Understanding the sub-
jective reason informing someone’s intentional action requires delineating
what, from her perspective, presents itself as an objective reason.
Figure 14.1
Proportion of children who use different categories of mental verbs. [Figure: cumulative proportion of children (y-axis, 0 to 0.9) plotted against age in months (x-axis, 18 to 42).]
Starting with Clements and Perner (1994), research has recently focused
on using indirect tests of knowledge: Children are not asked any question
about the protagonist’s belief or action, but one of three other measures
is used. (1) Looking time: The duration of looking at an erroneous action
is compared to looking at a successful action. Longer looking at the suc-
cessful action than at the erroneous action in cases of false belief is inter-
preted as children being surprised about a successful action when the actor
has a false belief. These data indicate sensitivity to the protagonist’s belief
as early as 14 or 15 months (e.g., Onishi and Baillargeon 2005; Surian,
Caldi, and Sperber 2007). One problem with these data is that the looking-
time differences are multiply interpretable and not a clear indicator of
expectation (Perner and Ruffman 2005; Sirois and Jackson 2007). (2)
Looking in expectation: A better measure in this respect is children’s direc-
tion of eye gaze as an indication of where they expect the protagonist to
reappear (Clements and Perner 1994). This method has recently indicated
As we mentioned in the text, there is more than one way to interpret the
claim that children understand the “subjectivity of desires.” One reading
is: they understand subjective preferences. A second reading is: they under-
stand that a desire provides an agent with a subjective reason, a reason
that makes it rational for the agent (but for no one else—unless they wish
to cooperate) to act in a certain way. We argued that evidence regarding
subjective preference provides no support for the claim that children
understand subjective reasons. Some of the experimental work in this area,
however, is concerned with a third claim. This is the claim that young
children are able to understand and predict someone’s emotional reaction
to the satisfaction or frustration of some desire even when they don’t share
the desire. For example, Yuill (1984) addressed the question of whether
children can predict that an agent will take pleasure in the satisfaction of
a wicked desire (such as a desire to hit someone). Similarly, Perner, Zauner,
and Sprung (2005) investigated children’s ability to attribute emotional
reaction to the satisfaction/frustration of desires in the case of two pro-
tagonists with mutually incompatible desires.
Does evidence from these paradigms help to settle the issue between
teleology and the hybrid view? There are two reasons for skepticism. One
is simply that the experimental state of play regarding the third claim is
currently inconclusive. Crudely: Yuill’s (1984) and Perner, Zauner, and
Sprung’s (2005) findings suggest that until they pass the false-belief task,
children have great difficulty attributing emotional reactions to the satis-
faction of goals they don’t take to be objectively desirable. But more
recently, Rakoczy, Warneken, and Tomasello (2007) and Rakoczy,
Warneken, and Tomasello (2008) reported some evidence that children
can attribute appropriate emotions in competitive situations before they
pass the false-belief test.
Appendix 3: Sabotage
Figure 14.2
Test question: Which board will Abe use to get to the coconut? [Figure depicts three conditions involving a far and a near island. Conflict: Abe wants the coconut; Bea, too, wants the coconut. Collaborative: Abe wants the coconut; Bea wants to help him bring it home. Neutral: no Bea. Base manipulation: the tree is either on the far or the near island.]
Notes
2. It is tempting to think the solution to our puzzle may be provided by some recent
experimental findings, widely taken to show that under certain conditions even very
young children have some grasp of the causal role of false beliefs. In Appendix 1
we explain why we think the temptation should be resisted.
4. See Schueler 2009 for illuminating discussion of the importance, and the theoreti-
cal implications, of the “putting-together” point.
5. See e.g., Parfit 1997; Scanlon 1998; Schueler 2003, 2009; Hornsby 2008.
7. Crudely, claims of need have implications to do with matters such as harm and
flourishing, but they have no immediate implications regarding the agent’s desires.
(That you need some vitally important medicine does not imply that you want to
take it.)
9. Although this is not the way Bartsch and Wellman put things, some of their
suggestions fit well with the hybrid account; they stress the importance of desire
psychologists’ drawing on their own knowledge of the world in predicting how
someone will go about satisfying his or her desires (see Bartsch and Wellman 1995,
155).
12. Of course, there is a (to adults, natural) way of taking these latter propositions
that would make them conflicting without being inconsistent, along the lines of
“there is something to be said for A’s winning, and there is something to be said
for B’s winning.” But construed in this way, the propositions could not do the work
a teleologist expects them to do. To provide teleological explanations, the relevant
evaluative propositions have to license conclusions as to what the agent should do
or has most reason to do; prima facie reasons, on their own, do not license such
conclusions. That there is something to be said for A’s winning, and that A can win
by doing x, only gives A a reason to do x if there are no other more important con-
siderations in favor, e.g., of letting B win. In brief, teleology, as we conceive it here,
has to appeal to “all-out” practical reasons.
13. See Woodward forthcoming for a suggestive discussion of the contrast between
“difference-making” and “causal process” theories of causation. See also Steward
1997 and Hornsby 1993.
14. See Woodward 2003; see also Campbell 2007 for helpful discussion.
15. The case of imitation illustrates that teleology has the resources to conceive of
reasons as “agent-specific.” Although teleology appeals to worldly facts rather than
the individual agent’s mental states, this does not mean that two agents cannot
have reason to perform different actions. One way in which different agents may
be seen to have reasons to do different things is in virtue of their distinctive roles.
Having just banged my toy a couple of times, I, currently performing the part of
the model, have no reason to keep banging; but you, performing the part of the
imitator, have every reason to bang your toy. Again, the very same objective purpose
or value can be seen to yield different sorts of practical reasons, depending on the
agent’s skills and circumstances.
16. Of course, it is not obvious whether imitation is, or when it begins to be, tied
up with explanation. Certainly to begin with, infant imitation may well be a more
primitive phenomenon than the ability to make rational sense of others’ behavior.
All we are saying here is that imitation provides compelling materials for teleological
explanations. Acquiring a conception of intentional action may partly be a matter
of learning to exploit such materials.
17. See Doherty 2009 for helpful discussion of the notion of engagement.
18. For a “grown-up” version of this point, compare John McDowell: “Finding an
action or propositional attitude intelligible, after initial difficulty, may not only
involve managing to articulate for oneself some hitherto merely implicit aspect of
one’s conception of rationality, but actually involve becoming convinced that one’s
conception of rationality needed correcting, so as to make room for this novel way
of being intelligible” (McDowell 1998, 332).
19. See Williamson 2000, and especially Hornsby 2008, on the explanatory role of
knowledge.
Fred Adams
1 Introduction
There are theories of action that are not causal theories, but I will not
discuss them here because few (maybe no) sparks fly when combining
them with claims of embodied cognition. Potentially, sparks fly at the
intersection of causal theories of action and embodied theories of cogni-
tion, so in the spirit of generating light, we shall explore there.
Causal theories of bodily action2 maintain that what makes a mere
bodily movement an action is its causal history. This is so regardless of
whether one considers the action the bodily movement (a product of the
right kinds of mental causes) that is brought about in the right way or
considers the action the very causing of the movement by those right kinds
of mental causes. On the product view (Mele 1992), my clenched fist opens
and is an action because I intended to open it and my intention produces
my open fist in the right way. On the causing view (Dretske 1988), my
action is my opening of my fist, where the opening is a causing of the
230 F. Adams
(2) Information about S’s ongoing behavior B is fed back into the system
as input and is compared with R;
All goal-directed behavior fits this model.3 I (Adams 1986a, 1997) think4
all intentional action fits a similar model that has come to be known as
the “Simple View.” So on my account, S does A intentionally only if: (1)
S intends to A; (2) S does A; and (3) S’s A-ing causally depends on S’s inten-
tion to A (in the right way)—the intention guides and sustains S’s A-ing
via goal-directed feedback control systems. Here is how Mele and I (1989)
described intentions in their functional roles as motivators, initiators,
and sustainers of action.
Intentions are functionally specifiable mental states. The control model identifies
the essence of the functional role of intentions. They must: (i) set the goal or plan
of the action; (ii) be involved in causally initiating and informationally updating
progress toward the action; (iii) provide a standard for determining error and cor-
rection or damage control when the plan goes awry; (iv) provide the criterion for
goal-success (help determine when the intended action has been completed, disen-
gage the plan’s implementation, and so on); (v) play a crucial role in the counter-
factual dependency of output behavior (bodily movement) upon intention and
information input (the perception of present state as compared with one’s intended
goal-state). (Adams and Mele 1989, 514)
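The comparator structure described in this passage can be pictured as a simple feedback loop. The sketch below is an illustrative reconstruction, not Adams and Mele's own formalism; the function names, the error tolerance, and the proportional correction are all hypothetical.

```python
# Illustrative sketch of the control model of intention: the intention
# sets the goal (i), initiates and informationally updates progress (ii),
# detects and corrects error (iii), supplies the criterion for
# goal-success (iv), and makes output behavior counterfactually
# dependent on perceptual input (v).

def run_intention(goal, perceive, act, tolerance=0.01, max_steps=100):
    """Drive behavior toward `goal` under feedback control.

    perceive() returns the currently perceived state; act(error)
    produces a corrective bodily movement given the current error.
    """
    for _ in range(max_steps):
        state = perceive()            # feedback: present state as input (ii)
        error = goal - state          # compared with the intended goal-state
        if abs(error) <= tolerance:   # criterion for goal-success (iv)
            return True               # disengage the plan's implementation
        act(error)                    # error correction guides output (iii, v)
    return False                      # plan abandoned; the trying fails
```

On this toy picture, the "trying" is nothing over and above the loop running: the intention at work, continuously guiding and sustaining the movement rather than launching it ballistically.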
Mele and I (Adams and Mele 1992) later extended our discussion to include
trying. We said that tryings were simply intentions at work. They do not
involve special effort. They are not willings. Tryings exist in every case
where an intention to A issues in an intentional A-ing. Trying is the agent’s
contribution to the action. Tryings are not mediators between proximal
intentions (an intention to A here and now) and actions, in the sense that
intentions cause tryings and then tryings cause actions. Instead, trying is
a continuous unfolding of the intention’s doing its work. Intentions are
not ballistic. We put it this way:
For us, tryings are effects of the normal functioning of appropriate intentions.
Roughly, trying to A is an event or process that has A-ing as a goal and is initiated
and (normally) sustained by a pertinent intention. Successful tryings to A, rather
than causing A-ings, are A-ings. . . . On the view to be defended, tryings begin in
the brain. Their initiation is the immediate effect of the formation or acquisition of
a proximal intention. Action begins where trying begins—in the brain. (Adams and
Mele 1992, 326)
idea that the mind and cognition are for action and consequently cognitive
processing has its roots and grounding (Pecher and Zwaan 2005) in sensory
(Barsalou 1999) and motor processing (Jeannerod 2006). The sensory and
motor systems are not just input–output systems for cognition, contin-
gently causally connected to cognition, but are constitutive of cognition
(just how varies among the proponents). Hence, all concepts (their content
and how they drive thought and action) can be understood properly only
in relation to their sensory and motor origins. That’s the positive side of
the program. On the negative side is an urging away from theories of
cognition that see concepts as arbitrary abstract symbols understandable
(and functioning) independently of their contingent connections to per-
ception or action (Turing, Fodor, Chomsky).
Much of the excitement is due to new empirical findings that link cog-
nitive activity and behavior to sensory and motor priming, such that one
unexpectedly finds faster cognitive reaction times when subjects are cog-
nitively tasked with experimental paradigms that involve sensory or motor
priming. In most cases, the conclusion drawn (Glenberg and Kaschak 2002)
is that the best explanation for the empirical results is that cognition is
not only grounded but happens in the sensory and motor systems. Some
of the excitement is also due to the claims made that a paradigm shift to
embodied cognition will even help us solve the symbol-grounding problem
(Searle 1980; Harnad 1990).
Among the specific claims being made by proponents of embodied
cognition (Wilson 2002), I will look mainly at those that have the most
direct relevance for models of human action. Take the claim that cognition
is situated. Here is how Wilson unpacks “situated”:
Simply put, situated cognition is cognition that takes place in the context of task-
relevant inputs and outputs. That is, while a cognitive process is being carried out,
perceptual information continues to come in that affects processing, and motor
activity is executed that affects the environment in task-relevant ways. Driving,
holding a conversation, and moving around a room while trying to imagine where
the furniture should go, are all cognitive activities that are situated in this sense.
(Wilson 2002, 4)
situated cognition is nevertheless the bedrock of human cognition due to our evo-
lutionary history . . . invoking a picture of our ancestors relying almost entirely on
situated skills . . . obtaining food, avoiding predators. Thus, situated cognition may
represent our fundamental cognitive architecture, even if this is not always reflected
in the artificial activities of our modern world. (Wilson 2002, 5)
To her credit, Wilson finds such appeals strained. As she notes, it is likely
that even our ancestors engaged in counterfactual reasoning about food
and predators, while hunting and gathering, or, while parenting, warning
children about possible dangers to avoid. However, for our purposes, what
matters is not whether all cognition is situated. The clear implication of
the embodied cognition movement is that cognitive abilities arise as situ-
ated and never lose their connection to their sensory and motor origins.
Furthermore, we will be interested mainly in action where the cognitive
states involved in action may be seen as situated. The best case for this is
in ongoing activity—where current cognitive states are causally involved
in guiding and sustaining action. How much more situated can it get? We
will be interested in what embodied cognition tells us about cognitive
states driving situated action.
Closely associated with cognition’s being situated is its being time-
pressured. As Wilson puts it, “agents must deal with the constraints of ‘real
time’ or ‘runtime’” (Wilson 2002, 6). She continues:
others are more familiar from everyday experience with situated and time-
constrained action. In the shower, I accidentally dislodge the soap from
the soap tray. In a blink of an eye I bump it up against the wall of the
shower, visually and tactilely track its trajectory, and catch it before it
hits the shower floor. Catching it was lucky, but not accidental or unin-
tentional. If it was purposive behavior, then action theory (AT, along the
lines of Adams and Mele 1992) is committed to there being intentional
and representational mental states active in the motivation, initiation,
production, guiding, sustaining, and terminating of this bodily behavior.
Mark Rowlands (2006, 102–104) uses the term “deed” to designate activ-
ity that is both situated and time compressed. Two of his examples of deeds
include catching a cricket ball, where “typically you will have less than
half a second before the ball, which may be traveling in excess of 100 mph,
reaches you.” Another of his examples is playing Chopin’s Fantaisie-
Impromptu in C# Minor, where “your fingers have to traverse the keys in
the sort of bewildering display necessary to successfully negotiate this
notoriously difficult piece.” Rowlands coins the term “deed” to cover these
because he maintains that “they do not fit the strict conception of action.”
Specifically, he maintains that the direct antecedents of these deeds are
not intentional or representational states, and he maintains that a general
antecedent intention (to catch the ball or to play Chopin) is not sufficient
for the relevant doings to be individuated properly as actions. What makes
an action the action that it is cannot, thus, be the intentional content of
a prior intention, because any number of deeds (online corrections) can
be involved in satisfying an antecedent intention. He appeals to both
phenomenology (we are not consciously aware of all the moves we make
while carrying out these deeds) and science (research on dorsal vs. ventral
stream processing supports the first point) to support his claim that the
direct antecedents of deeds are not representational or intentional in the
normal sense.
One might think, as I do, that motor-intentions can come to the rescue
in just such cases. Rowlands does not, and he develops a view of deeds
that gives these movements themselves the representational properties
that the received view of action reserves for the intentional states that
cause the bodily movements. Here is what Rowlands says about his rejec-
tion of motor-intentions:
We might explain the status of a deed in terms of its connection to a general ante-
cedent intention, but then individuate the deed in terms of the motor representa-
tion that causally produces it. However, if this line is to be convincing, two questions
Action Theory Meets Embodied Cognition 237
One important problem is that a majority of cognitive scientists continue to use the
R-word and do so in ways that are often not clear. In the case of action it is nothing
more than a handy, but often confused and misleading term, a bad piece of heuris-
tics, an awkward place-holder for an explanation that needs to be cast in dynamical
terms of an embodied, environmentally embedded, and enactive model. . . . It may
take more energy to define and distinguish any legitimate sense of representation
. . . than it would to explain the phenomenon in non-representationalist terms. And
if one can explain the phenomenon in non-representationalist terms, then the
concept of representation is at best redundant. (Gallagher 2008, 365–366)
I think the dynamic systems view cannot do without something that plays
the role of representations, and I will explain why. I will also do my best
to give an account of how motor-intentions get their representational
content and what that content might be. For now, let’s see why Gallagher
thinks that no account of representation is likely to fit the bill for the kinds
of situated, time-pressured actions we are considering.
To be sure we are talking about the same thing, paraphrasing Michael
Wheeler (2005, 197) on “action-oriented representations” (AORs), here is
what Gallagher says about these “minimal representations” (what we are
calling motor-intentions). They are “temporary egocentric motor maps of
the environment . . . fully determined by the situation-specific action
required” (Gallagher 2008, 353). They don’t represent a preexisting world
via a type of internal image or neural pattern (a visual field from ventral
stream processing, say). Instead, “how the world is is itself encoded in
terms of possibilities for action” (ibid.). Sound familiar? This is very similar
to the remarks of Rowlands, and reflects the situated, time-pressured type
of situation that embodied cognition emphasizes, applied to action-oriented representations. Gallagher
adds that according to Wheeler, what is represented (in an AOR) is not
knowledge that the environment is x, but knowledge of how to negotiate
the environment. “AORs are action-specific, egocentric relative to the
agent, and context-dependent” (ibid., 354). So, we are indeed talking about
the same thing as Pacherie’s M-intentions.
Now let’s consider Gallagher’s reasons for thinking that AORs or
M-intentions cannot be representations. The first is decoupleability.11 The
argument is that representations are necessarily decoupleable and AORs
aren’t, so AORs can’t be representations:
that normally closes the hand. Signals received generate an actish experi-
ential state in the subject. We might call it a hallucination of acting (not
unlike a hallucination of you that I might have as a sensory state, when
you are not actually there). If they don’t actually cause a bodily movement,
does this mean that these are not genuine representations? I don’t see why
that would be true of motor-intentions any more than it would be true of
a hallucination as of you, if you were not there. Hallucinations are repre-
sentations, if anything is.
True, one would still have to explain where these AORs (M-intentions)
get their representational contents and what those contents are, if they are
representations. But the fact that they can decouple doesn’t seem to mean
they can’t be representations any more than the fact that I can suffer hal-
lucinations of you means that when I actually see you, I am not in a state
that represents you. Gallagher12 is right that off-line representations are
different from online ones. One type is veridical (seeing you) or a genuine
trying (I’m trying to catch the soap) precisely because it is online. Yet, this
does not mean that the same types of representations cannot in principle
decouple. Indeed, they can even still be tryings. When Penfield told his
subject to try to close his fist (while his efferent path to the muscles was
blocked) it would be false to say the subject didn’t try because the subject
didn’t succeed. He did try. His brain sent the signal. Penfield just blocked
it. The subject’s motor-intention was decoupled, but it still was sent “to
the hand” and was telling the hand “to make a fist.”
Today one can perform the same type of experiment with subjects using
transcranial magnetic stimulation (TMS) to stop the motor signals from
reaching the muscles. Subjects may be tapping with their fingers (to the
beat of a sound they are shadowing). When TMS interrupts their tapping,
the subjects report that they were trying to continue tapping but were
unable to produce the movement. This is a type of decoupling of motor
command from implementing a movement in the muscles. It is not exactly
“off-line.” The subjects are not just “imagining” making the movement.
They were making the movements before the TMS activation and continued making them after it was turned off, and the subjects didn’t do anything different in between (from their perspective).13
Okay, what’s next? Gallagher mentions two other features14 of represen-
tation that he thinks motor-intentions (or AORs) won’t have. Discussing
why he thinks a dynamic system approach is preferable, he says: “Nothing
in this dynamically dissipating process amounts to a representation, if we
take representation to involve: an internal image or symbol or sign, a
The visual percept . . . has a rich informational content about the object, but it has
no conceptual content: it remains non-conscious and is ignored by the perceiver.
If visual processing were to stop at this stage, as may occur in pathological condi-
tions (Jacob and Jeannerod, 2003), the object could not be categorized, recognized
or named. It is only at the later stage of the processing that conceptualization occurs.
The representation of a goal-directed action operates the other way around. The
conceptual content, when it exists (i.e. when an explicit desire to perform an action
is formed), is present first. Then, at the time of execution, a different mechanism
comes into play where the representation loses its explicit character and runs auto-
matically to reach the desired goal. (Jeannerod 2006, 4–5)
This consideration thus joins those concerning the distributed nature of the opera-
tional level. One can easily imagine that this level consists of different sensorimotor
channels, each independent, each characterized by the type of sensory information
it is capable of processing and by the type of movement it is capable of producing.
The sensorimotor channel which processes “book thickness” information produces
a movement that is adapted to this information, namely, contraction of the muscles
which form a pincer grasp; the channel which processes the “distance with respect
to the body” information produces movements which extend the arm, etc. In order
to produce the correct movement to grasp a book, the different channels involved
must be activated simultaneously. Simple observation reveals, in fact, the pincer
grasp is forming at the same time as the arm is being extended, and that the fingers
do not wait until they are close to the book before adopting the correct posture.
The channels therefore function in a parallel fashion . . . because of their respective
independence—an advantageous arrangement when a large quantity of information
must be processed in a minimum amount of time, as is the case here. (Jeannerod
1985, 125–126)
sensorimotor channels, and still satisfy the general nature of being a rep-
resentation. I will now say more about why these motor-intentions are
representations.
Earlier I said that the dynamic systems account that Gallagher prefers
probably cannot succeed without representations. When I’m trying to
catch the soap, something is still directing my reach for the soap, and this
cannot simply be due to the causal influence of the soap itself and its
structural properties impacting my sensory systems. Why not? Because
what I do and how I shape my hand is dependent on my larger background
goals (don’t waste time in the shower, don’t knock things from their proper
places without trying to catch them) and so on. And as in reaching for a
book, my goals are guiding the direction of my reach and the nature of
my grip. The fact that these are not random but are being purposively
coordinated by my motor system tells us that the elements for repre-
sentation are there—nomic coordination and direction of a purposive
movement.
There are two more matters to address, if motor-intentions are indeed
representations. The first (raised earlier by Rowlands) is where they acquire their representational content. The second is the format
of their representational content. Let’s consider the first matter. Motor-
intentions function largely in service of proximal (P-intentions) or more
distal future intentions (F-intentions). As noted earlier, my future and
proximal intentions are normally conscious and constrained by consider-
ations of rationality. As I plan to interact in the world and leave my mark,
I consider all the possibilities of interacting with various goal objects and
the array of goal states that I may desire to bring about. With a range of
possibilities identified, I must go from the future intention (or what some
call a distal intention, D-intention) to the more proximal intention, and
then this must all be translated into the motor-intentions. Pacherie (2008)
describes beautifully the cascade of content and dependency between these
various intentions:
In this Pacherie agrees with Jeannerod (1997, 2006) and MacKay (1981)
that there is a hierarchy of motor representations such that the goals and
parameters of acts coded for at the higher levels act as constraints on the
lower-level representations.17 The lower-level representations that drive the
movement represent both the body in motion (a generator of forces) and
a goal of the action encoded in a “pragmatic” mode, distinct from the
“semantic” mode. For one, she cites his and other studies that suggest the
“amount” of force needed for the movement is encoded. (On the subjective
side there is a “sensation of effort” that accompanies this representation.)
The representation of a goal represents both an object and a final state.
Jeannerod suggests that these representations “fall between” a sensory
function and a motor function. Pragmatic representations activate prede-
termined motor functions related to the specific objects and their affor-
dances (Gibson 1979). Motor representations are relational, representing
neither states of the body nor states of the environment, but states of the
relations between the two: motor patterns that objects afford the agent.
Pacherie adds that motor-intentions are at least partially modularly encap-
sulated and only moderately cognitively penetrable. However, some cogni-
tive penetration is possible; how an object is grasped is not just a function
of its size, but what we intend to do with it. So the systems and levels of
intention have some degree of cross-talk. And the environment “affords”
much more than the motor system responds to. So the response is deter-
mined by our other cognitive states (and thus there are limits to the
encapsulation of M-intentions).
Subjects are not even always aware of what bodily movements their
motor-intentions are producing. The role of F-intentions (or distal, D-inten-
tions) is not lost, once an action begins. There must still be a causal depen-
dency on the goal state represented in that D-intention. Pacherie puts it
246 F. Adams
this way: “the relation between the three levels is not one of mere co-
existence. They form an intentional cascade,18 with D-intentions causally
generating P-intentions and P-intentions causally generating in turn
M-intentions” (Pacherie 2008, 188).
M-intentions inherit their representational content from distal and
proximal intentions. As the agent forms goals (whether long-term or short-
term) and plans how to achieve these goals, the other levels of intentions
chain together and form an intentional cascade and a dependency of
content working its way up and down the cascade. This helps explain why
motor-intentions are context sensitive (e.g., changing grip posture to a
precision grip or a power grip in time-pressured reaching depends on the
goal of the use of the object being gripped). Hence there is a mechanism
of dependency and inheritance of content that runs through the three
types of intentions described by Pacherie.19
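Schematically, the downward inheritance of content might be pictured as follows. This sketch is entirely illustrative (the class names, fields, and values are mine, not Pacherie’s formalism): each level causally generates the next and passes its goal content down while adding situation-specific detail.

```python
from dataclasses import dataclass

@dataclass
class Intention:
    level: str
    content: dict

def cascade(goal):
    # D-intention: the distal goal, fixed by practical reasoning.
    d = Intention("D-intention", {"goal": goal})
    # P-intention: inherits the distal goal and anchors it to the here and now.
    p = Intention("P-intention", {**d.content, "when": "now", "means": "reach and grasp"})
    # M-intention: inherits again, adding motor-level parameters.
    m = Intention("M-intention", {**p.content, "grip": "precision", "force": "low"})
    return [d, p, m]

for intent in cascade("pick up the pen"):
    print(intent.level, intent.content)
```

The point of the sketch is only the dependency structure: the goal set at the top survives, verbatim, in the content of the motor-level intention at the bottom.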
Now let’s consider further the format of the representations that are
motor-intentions. Consider the following types of acts. One might pick
up: a needle, a pencil, a baby shoe, a bronzed baby shoe, a book, a copy
of the OED, a bar of gold. Additionally, one might catch: a feather, a ping
pong ball, a golf ball, a baseball, a bowling ball, an egg, a water balloon.
fMRI studies show two things. If one is imagining doing these acts, the
motor cortex is activated. What is more, just reading about them activates
the sensorimotor areas that would be activated if one were to perform the
acts. There is a first-person phenomenology of “what it’s like” to do these
acts that one can almost feel in just reading about or imagining them. And
this neural imagining is just the sort of activity in the brain that one can
use in brain–machine interfaces to learn to manipulate external robot arms
and computer devices just by imagining acting.
Earlier I talked about “directional tuning” and other sorts of fine-tuning
in the cortex that allow the brain to form correlations between firings of
populations of neurons and bodily movements (whether muscle-specific
or direction-specific types of movements). These types of movements are
not innate. Yes, there are innate20 types of motor routines, such as infant
tongue protrusion and the ability to imitate mouth shapes (Meltzoff and Moore 1977), but the types of action listed above are not among them.
These require a history of fine-tuning. That is, the agent has to make these
movements, witness their outcomes, and then select for doing them again
in order to set up a repertoire of abilities to make them. The movement
types are learned, skilled.
What, then, is the format of the content of a motor signal that is sent?
It is in the form of an imperative (Mandik 2005). It is in the form of a type
is available via the sensory system about the type of movement the
motor signal produced. So there are two systems of representation with
different directions of fit (mind–world, world–mind) harnessed and
working closely together—and both serving the purposes of the goal-
directed system. Hence, motor-intentions (and their motor signals) become
recruited by the brain for the types of movements they are able to
make, based on past experience and “fine-tuning.” As Mandik (2005) says,
instead of being selected for their ability to indicate the way the world is
(Dretske 1988) on the sensory side, these representations are being selected
for the ability to perform various types of manipulations on the body and
world.
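The pairing of the two directions of fit can be pictured as a minimal feedback loop. This is only a hedged sketch of the idea (the plant model and gain are invented for illustration, not drawn from the chapter): one channel issues imperative commands to change the world toward the goal, while the other reports back how the world actually is.

```python
# Illustrative sketch of two directions of fit harnessed in one
# goal-directed system. The dynamics and gain value are assumptions
# made for the example, not claims from the text.

def run_controller(goal, position=0.0, gain=0.5, steps=30):
    history = []
    for _ in range(steps):
        sensed = position                 # world-to-mind: indicative report
        command = gain * (goal - sensed)  # mind-to-world: imperative "move!"
        position += command               # the body executes the command
        history.append(position)
    return history

trace = run_controller(goal=10.0)
print(round(trace[-1], 3))  # converges on the goal: 10.0
```

Neither channel does the job alone: the imperative signal without the sensory report cannot correct error, and the report without the command changes nothing.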
5 Conclusion
Acknowledgments
Notes
1. Ken Aizawa says I’m understating the current influence of the movement.
2. Mental actions are interesting in their own right, but I won’t discuss them here.
3. Mele and I (Adams and Mele 1989) realized at the time that there were many
types of feedback that would be involved in these processes (Adams 1986b). Today
Grush’s (2004) work on emulators has done an excellent job of bringing things up
to date. Our view does not require that things do not go “ballistic” at some point.
Indeed, probably all bodily actions become ballistic at some point, but to be goal-directed they must be products of systems that have these goal-directed mechanisms, and the actions must originate in such systems.
4. Mele (1992) and I part company here, though his view, the “single phenomenon”
view, is mainly a departure due to Bratman’s (1987) attack on the Simple View. I
have elsewhere explained why I don’t find Bratman’s attack persuasive, and Mele
and I have skirmished in various journals over related points from time to time, but
our views remain very close and we both accept the control model overall. For these
differences with Mele, see Adams 1997, 1994a,b.
6. Of course, though they can come apart, Goodale (2004, 1170) also says that “both
systems are required for purposive behavior—one system to select the goal object
from the visual array, the other to carry out the required computations for goal-
directed action.” It is in the latter instance that the motor system is involved and
in which I will argue below that representations are involved.
8. I’ll say more below about how future intentions, proximal intentions, and motor-
intentions are related on Pacherie’s views and will try to answer this question about
the content of motor-intentions. I’ll also say why Searle’s “intentions in action,”
though relevant, cannot give us the content of motor-intentions.
9. I won’t here attempt to counter the theory Rowlands develops. Let me just say
that if I can answer the questions he asks about motor representations, then I will
have undercut some of the motivation for replacing the received view with this new
view of “deeds.”
10. The feature of being beneath the level of consciousness means that motor-
intentions are not equivalent to Searle’s (1983) “intentions in action.” For Searle
says a person correctly expresses the content of his intentions in action to raise his
arm “quite precisely when he says ‘I am raising my arm.’” (Searle 1983, 106ff). He
adds that “if one wants to carve off the intentional content from its satisfaction he
can say, ‘I am trying to raise my arm’” (ibid., 107). All of this clearly takes place at
the conscious level and is a type of processing that one may find in the ventral
visual stream (not dorsal).
11. Gallagher (2008, 358) notes that Wheeler (2005, 219) gives up that AORs must
be decoupleable, but Gallagher continues to press the point.
12. Gallagher considers but rejects that emulators (Clark and Grush 1999) might be
the way to go. He rejects this because he thinks that once the representation and
what it represents come apart, the game is over. The representation cannot then be
guiding the action: “it ceases to be part of a forward motor control mechanism”
(Gallagher 2008, 358). Actually this is not so. That is the beauty of emulators. They
are part of the forward control. They anticipate what is happening before it does,
so the system does not have to delay for feedback from the muscles and world. But
even if Gallagher were right, the same is true for sensory states. Once you aren’t
there I’m not seeing you. The question is whether these representation types can
decouple (in principle). It is not whether they can decouple while doing their cogni-
tive work guiding an action or directly perceiving the world. No one thinks that for
veridical perceptual states. So why hold motor-intentions to those standards? Curi-
ously, Gallagher seems to agree that decoupled, emulator states and others may be
representations when not guiding action. But he questions why they must be rep-
resentations when coupled and guiding action. However, I think it would be far
more curious if a representational state lost its representational status just because
it became coupled to and began guiding and sustaining a purposive bodily
movement.
13. There is a two-second window where what is to be sent is already “in the pipe-
line.” So for two seconds after the TMS is activated, subjects continue to tap nor-
mally. Then, after the two-second delay, their tapping is interrupted. What is more,
research on brain–machine interfaces (neural prostheses) shows that subjects can
learn to send the proper motor signals to a machine to make a robotic arm move a
device and serve a purpose. This shows that one can learn to transform a decoupled motor-intention from the motor cortex (before one learns to make the robot obey) into a coupled one (once one learns to drive a robotic device successfully).
14. Actually, Gallagher gives six features that he thinks disqualify motor-intentions from being representations, though not all of them concern EC: (1) they are not internal but extended into the environment, (2) they are not discrete and enduring, (3) they are not passive but enactive, (4) they are not decoupleable, (5) they are not strongly instructional, and (6) they do not involve homuncular interpretation. I have dealt with extended
mind issues elsewhere (Adams and Aizawa 2008). I will be addressing their being
enactive in addressing their content. The whole point of a motor-intention is to be
enactive. I won’t here discuss the claim that they are not “strongly instructional.”
Gallagher doesn’t either. He merely says that only EC can solve the frame problem
and says representations would have to contain something like Searle’s (1983)
“background” to do so. I am working indirectly on this in thinking about closure
and knowledge. If knowledge is not closed, it is precisely because there are channel
conditions necessary for knowledge representations that contain information not
contained in things they enable the subject to know (Adams, Barker, and Figurelli
manuscript). And I won’t discuss the homuncular objection because no one in the
naturalized semantics camp thinks there is anything to “interpretation” of a repre-
sentation that can’t be reduced to purely natural causal networking. Gallagher
doesn’t argue for these so much as refer to others who have. Since I’ve addressed
these other items elsewhere, I won’t address them here.
16. Though I think there is nothing like a Frege puzzle in the motor system, Pete
Mandik reminds me that there may be something similar to failure of existential
generalization. For instance, when one TMS’s the motor system (causing decoupling)
and a signal is sent to the muscles “to move,” the muscles may not move. Of course,
this is not exactly a failure of existential generalization because these are impera-
tives, not indicatives. But there will be a failure of compliance conditions being met.
If this counts as an intensional phenomenon, then there is intentionality even in
the context of the content of a motor-intention. However, as soon as one backtracks
up the intentional cascade to the proximal or future intentions, then one finds
intensionality with a vengeance.
17. Al Mele points out to me that MacKay (1981, 630) also believes that motor
schemas involved in handwriting extend to lower components of movements
18. Of course, Pacherie admits that some actions are performed “on the fly” and
involve only the P-intentions and M-intentions (Pacherie 2008, 189).
19. Some may think that it is not necessary for the proximal intentions to cascade
down to the level of the motor signals. For instance, Mele (1992, 222) suggests that
the proximal intention to button one’s shirt in the normal way may be sufficient
to trigger the motor routines of buttoning, saying, “In normal cases, the plan is, by
default, the agent’s normal plan for that activity.” But there is no normal plan for
catching the bar of soap in the shower or deciding to catch the falling pen with a
precision or power grip. Some of this is done on the fly and in compensation for
changes in environmental contingencies. This tells me that there must be a cascade
of information and representation downward through the layers of the control
systems, as described by Pacherie, Jeannerod, and MacKay.
20. Gallagher (2005) thinks these might even be conscious, but does not argue for
it. Both Pacherie (personal communication) and I think they are purposive and
under control of feedback-controlled mechanisms. They are likely not consciously
intentional. It turns out that this ability shows up in neonates, then diminishes,
and then later returns. This is likely because the basal ganglia are inhibitory. In the
interim between when this imitative activity starts and stops, the basal ganglia
develop. When developed they only allow motor impulses that achieve a certain
threshold to get through and cause these activities once again. At this later point,
it is likely that the infants are indeed protruding their tongues voluntarily and
perhaps fully intentionally.
Alicia Juarrero
1 Background
theories of action succeeds. But then neither do the attempts that offer
behaviorist or identity-theory reductions of the purposiveness of action.
The impediment, I argued in those earlier works, is their inability to frame
a scientifically acceptable way of understanding mereological (especially
top-down) causality. I claim that concepts borrowed from complex dynam-
ical systems theory can provide just such a theory-constitutive metaphor
that makes intentional causation tractable.
In classical times it was left to Aristotle to provide a thorough analysis
of the difference between voluntary, involuntary, and nonvoluntary
behavior. As is well known, Aristotle accounts for all phenomena, not just
intentional behavior, in terms of four causes. Suppose I intend to write a
book. While my arm and hand movements serve as the efficient cause of
the actual writing, the goal of producing a book functions as its final or
purposive cause. Its material cause, the stuff from which it is physically
constructed, includes the ink, paper, and so on. What makes the behavior
a case of writing a book—instead of something else—is the formal defining
or essential cause that sustains the behavior along that essential path and
guides it to completion.
Less well known, however, is the role that another of Aristotle’s prin-
ciples plays in the history of action theory: the principle that there exists
no circular or recursive causality; no self-cause. The concepts of potentiality
and actuality lead to the conclusion that whatever happens is caused to
occur because of something other than itself. Since nothing can be both
potential and actual at the same time with respect to the same phenom-
enon, mover (actual) and moved (potential) cannot be identical. Even in
the case of the (apparent) self-motion of organisms, one aspect of the
organism qua active principle changes a second aspect from passive to
active; this aspect, in turn, now qua active principle can move a third . . .
and so on, until the animal moves.
By the time the intellectual giants of the seventeenth century and their
followers got through changing the way philosophy and science are done,
Aristotle’s formal and final causes had been discarded as superstitious
nonsense. With material cause left to one side, only efficient cause—the
instantaneous, billiard-ball-collision-type causality of mechanistic sci-
ence—qualified as cause. While discarding three of the four causes, however,
modern science retained the Aristotelian thesis that nothing causes itself.
When combined with the reductionist belief that wholes are no different
from aggregates, the claim that anything that is caused must be caused by
something other than itself—and only as a consequent of an efficient
Intentions as Complex Dynamical Attractors 255
into the dynamics of a Bénard cell, that is, its behavior is constrained by
its role in the overall rolling process. If one is willing to broaden one’s
understanding of causality and also to take causal potency so understood
as the mark of the real, then even in such elementary phenomena as these, and
despite the claims of mechanistic science, dynamical processes provide
empirical evidence that wholes can be more than just epiphenomenal
aggregates reducible to the sum of their component parts. The newly orga-
nized arrangement shows emergent macroscopic characteristics that cannot
be derived from laws and theories pertaining to the microphysical level;
they also represent eddies of local order that exert active power on their
constituents, top-down. Moreover, the integrity, identity, and characteris-
tics of the overall pattern that constitutes a Bénard cell are decoupled from
its material basis in the sense that different types of viscous fluids can
present the same form of organization, which is no longer identified by
the microarrangement of its constituents. Theories of individual identity
based on token-identifications lose all relevance in these cases.
As intriguing as Bénard cells are, it is nevertheless undeniable that the
conditions within which such dissipative structures appear are set from
without: the rolling hexagonal cells form only because we’ve placed the
fluid in a container of a certain size; because we’ve cranked up the heat
that created the gradient that took the system far from equilibrium and
precipitated the discontinuous transformation from conduction to convec-
tion. Other dissipative structures such as hurricanes and dust devils also
form only because of the external meteorological and atmospheric conditions. Clearly, in each of these cases, if the boundary conditions are
removed the structure will disintegrate. Because the constraints within
which the self-organization takes place are set externally, the type of emer-
gence on display in these examples is only a weak form of emergence.
Although such structures can therefore only be said to self-maintain—and
not self-create—they nonetheless offer a “theory-constitutive metaphor” for
rethinking autonomy and top-down causality in a scientifically respectable
fashion.
Things get even more interesting at the chemical level, as Peirce, Mill,
and other precursors of complexity theory recognized (see Juarrero and
Rubino 2008). In the case of the B-Z reaction, the fourth step of the process,
a positive feedback loop in which the product of the process is necessary
for the process itself, represents precisely the type of circular causality
forbidden since the time of Aristotle. As the fourth autocatalytic step iter-
ates, the reinforced, accelerating hypercycle drives the system further and
further from equilibrium until a threshold of instability is reached, at
258 A. Juarrero
3 Constraints
those of patients suffering from deep dyslexia: If the word presented was
bed, the erroneous output might be cot; if the word presented was orchestra,
the erroneous output might be band. The authors conclude that these
remarkable results can be explained only by postulating that as a result of
the circular loops, the network self-organizes a semantic attractor, that is,
a high-dimensional dynamic pattern whose emergent properties embody
semantic relationships. Circular causality is real, and it is responsible for a
creative evolutionary spiral. Instead of representing meaning in a symbol
structure, however, a dynamical neurological organization embodies
meaning in the topographical configurations—the set of self-organized
context-dependent constraints—of its phase space.
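The idea that a neural dynamics embodies meaning in its attractor landscape has a classic toy model: the Hopfield network. The sketch below is my illustration, not the authors’ simulation (the pattern labels merely echo the chapter’s word examples; the vectors are arbitrary): it stores two patterns and shows a corrupted input settling into the basin of the nearest stored pattern rather than being looked up symbolically.

```python
# Minimal Hopfield-style attractor network (illustrative sketch only).

def train(patterns):
    # Hebbian weights: correlated units reinforce one another.
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def settle(w, state, sweeps=10):
    # Repeatedly update each unit toward its local field: the network
    # relaxes into the nearest stored attractor.
    n = len(state)
    state = list(state)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(w[i][j] * state[j] for j in range(n))
            state[i] = 1 if h >= 0 else -1
    return state

bed = [1, 1, 1, 1, -1, -1, -1, -1]
band = [-1, -1, -1, -1, 1, 1, 1, 1]
w = train([bed, band])
noisy = [1, 1, -1, 1, -1, -1, 1, -1]  # "bed" with two bits flipped
print(settle(w, noisy) == bed)  # prints True: the input falls into "bed"'s basin
```

Nothing in the trained weights is a symbol for either word; the “representation” just is the shape of the basins the dynamics carve out.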
The trajectories of complex dynamical processes are characterized by
so-called strange or complex attractors, patterns of behavior so intricate that
it is difficult to identify an overarching order amid the variations they
allow. Strange attractors describe ordered global patterns with such a high
degree of local fluctuation that individual trajectories never quite repeat
exactly, the way a regular pendulum does. Complex attractors are therefore
said to be “thick” because they allow individual trajectories to diverge so
widely that even though they are located within the attractor’s basin, they
are uniquely individuated. The width and convolution of the trajectories described by strange attractors imply that the overall pathways are multiply realizable. The butterfly-shaped Lorenz attractor is by now a well-
known complex attractor.
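To make the “thickness” of such attractors concrete, here is an illustrative numerical sketch of the Lorenz system (with the standard parameter values; the integration details are my own): two trajectories that start almost identically diverge widely, yet both remain confined to the same bounded attractor region.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit-Euler step of the Lorenz equations.
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def trajectory(start, n_steps=20000):
    state = start
    points = [state]
    for _ in range(n_steps):
        state = lorenz_step(state)
        points.append(state)
    return points

def dist(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

a = trajectory((1.0, 1.0, 1.0))
b = trajectory((1.0, 1.0, 1.000001))  # perturbed by one part in a million

early = dist(a[100], b[100])                  # still minuscule
peak = max(dist(p, q) for p, q in zip(a, b))  # grows to the attractor's own scale
bounded = all(abs(c) < 200 for p in a + b for c in p)
print(early, peak, bounded)
```

The individual paths never exactly repeat and soon bear no pointwise resemblance to one another, yet the global butterfly pattern they trace is one and the same: the pathway, not any particular trajectory, is what the attractor fixes.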
Context-sensitive constraints are important not only for information
transmission or in connectionist networks; they are also responsible for
the creation of natural complexity. Feedback loops and chemical catalysts,
as we saw in the B-Z reaction, are natural first-order context-sensitive
constraints. As natural embodiments of first-order context-dependent
constraints, catalysts are one example of how natural dynamics create
complexity by interrelating and correlating what were heretofore indepen-
dent particles. Reentrant loops in the nervous system are another. Once
the catalytic loop achieves closure and a new level of complexity emerges,
the lower-level constituents are henceforth characterized by conditional
probability distributions different from those embodied in their prior probabilities. As distributed process wholes, complex structures such as Bénard
cells and B-Z chemical waves thus impose what one might call second-order
contextual constraints on their components by restricting their degrees of
freedom. Particles behave differently once caught up in the higher-level
organization than they would have as independent isolated particles.
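The change in probability distributions can be illustrated with a toy simulation of my own construction: isolated components vary independently under their priors, whereas “entrained” components obey a conditional distribution that restricts their degrees of freedom.

```python
# Illustrative sketch of a second-order context-sensitive constraint:
# once entrained, a component's behavior follows a conditional
# distribution that differs from its prior. Parameter values are
# assumptions made for the example.

import random

random.seed(1)

def isolated(n):
    # Independent particles: each "up" with prior probability 0.5.
    return [(random.random() < 0.5, random.random() < 0.5) for _ in range(n)]

def entrained(n, coupling=0.9):
    # Constrained particles: the second now tends to match the first.
    out = []
    for _ in range(n):
        a = random.random() < 0.5
        b = a if random.random() < coupling else not a
        out.append((a, b))
    return out

def agreement(pairs):
    return sum(a == b for a, b in pairs) / len(pairs)

print(agreement(isolated(10000)))   # ≈ 0.5: no correlation under the priors
print(agreement(entrained(10000)))  # ≈ 0.9: degrees of freedom restricted
```

Each component’s marginal behavior is unchanged (each is still “up” half the time); what the higher-level organization alters is the conditional structure relating them.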
Whereas first-order contextual constraints are enabling and generative
6 Intentional Action
being entrained into) complex neural states with emergent mental properties.
That is, brain states can have causal efficacy in virtue of the mental content
they carry. The same brain states, token-identified, may have different (or
no) causal effects depending on whether or not they are entrained into a
mental attractor at all, or depending on whether or not they are entrained
in the same mental attractor, type-identified. On this view the micro-
physical configuration of the nervous system’s dynamical network exer-
cises its causal power and produces a particular output in virtue of
embodying the top-down context-sensitive constraints of emergent mental
properties.
Does this way of looking at the mind–brain problem resolve concerns
over causal overdetermination and conservation laws? Paul Humphreys
maintains that unlike aggregates, whose individual components retain
their identities, emergence happens only when microstates fuse (Hum-
phreys 1997). When they fuse, particles comprising unified wholes “no
longer exist as separate entities and therefore do not have all their indi-
vidual causal powers available for use at the global level.” Because compo-
nents “go out of existence” when they fuse, Humphreys maintains, worries
concerning causal overdetermination and the causal closure of the physical
are avoided and top-down causality is possible. Humphreys warns, however,
that fusion and multiple realizability are incompatible: any claim that
mental properties can be variously instantiated in components that do not
“go out of existence” reintroduces the threat of overdetermination, he
insists.
Complex systems have taught us that dynamic self-organization pro-
duces high-level emergents capable of top-down causation without thereby
violating conservation laws. Phase transitions, symmetry breaking, and
other forms of dynamic transformations entrain components into higher-
level wholes without thereby fusing the particles. Instead, the global pat-
terns that emerge as a result of these qualitative changes are embodied as
the conditional probability distributions of the components. The operation
of fusion, a static notion that implies that once fused, there is no going
back, is unlike the operation of integration, despite the thermodynamically
irreversible nature of the latter.4 Fusion is like the operation of context-free
constraints, which, as we saw, close off possibilities in a bottleneck that
prevents open-ended evolution. In contrast, bottom-up context-sensitive
constraints represent interactions that are Goldilocks-like—not too tight,
not too loose—and allow the same microarrangement token-identified to
take part in different global dynamics, type-identified, both synchronically
and diachronically. If the disruptive perturbation or fluctuation is strong
Intentions as Complex Dynamical Attractors 267
enough the global structure disintegrates, but while the constraints hold,
the complex dynamics remain coherent over time (Ulanowicz 2005). The
emergence of dynamical integration, I proposed earlier, is nothing but the
effects of second-order context-sensitive constraints, embodied as a set of
conditional probabilities that are invariant over time and that modulate
and direct the behavior of particular—but now no longer independent—
microphysical constituents in such a way that the mental content carried
by the overall dynamics cascades into the action performed. In graph-
theoretic terms, this unique balance between integration and differentia-
tion can be measured in terms of a network’s causal density (the fraction
of interactions among nodes in a network that are causally significant).
Analysis shows that high causal density is consistent with a high dynami-
cal balance between differentiation and integration, and therefore with
high complexity (Seth 2005, 2006; Seth and Edelman 2007). The robust-
ness characteristic of complex systems is thus due to dynamics that are
globally coordinated while component details remain distinct; compo-
nents do not fuse, and yet the overall system displays a remarkable resil-
ience and metastability in its functional powers despite radical differences
in the arrangements of its component parts.
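Causal density, as Seth operationalizes it, reduces to simple bookkeeping once the pairwise tests (Granger tests, in Seth's work) have been run: count the significant directed links and divide by the n(n − 1) possible ones. A minimal sketch of that final step; the 4-node significance matrix below is invented for illustration and does not come from the studies cited:

```python
def causal_density(sig):
    """Fraction of the n*(n-1) possible directed interactions that are
    causally significant. sig[i][j] is True when node i was found to
    significantly influence node j; diagonal entries are ignored."""
    n = len(sig)
    links = sum(bool(sig[i][j]) for i in range(n) for j in range(n) if i != j)
    return links / (n * (n - 1))

# Hypothetical 4-node network in which 5 of the 12 possible directed
# interactions came out significant.
sig = [
    [False, True,  False, True ],
    [False, False, True,  False],
    [True,  False, False, False],
    [False, True,  False, False],
]
print(causal_density(sig))  # prints 0.4166666666666667
```

A density near 0 marks a network of causally isolated parts; a density near 1, one so tightly coupled that differentiation is lost. Intermediate values are what the text associates with the balance of integration and differentiation.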
We are at last in a position to understand how top-down causality of
the sort described above makes intention-caused actions possible. Accord-
ing to this framework, intentions are similarly high-dimensional, neurologi-
cally embodied long-range attractors with emergent properties. No doubt
the billion-plus-neuron human brain possesses an indefinite number of
imbricated dynamics. Intentions, by definition, involve motor attractors,
as well as others embodying high-level properties of meaning (the semantic
content of the intention), emotional valence, and so on. Aesthetic, ethical,
or moral value judgments, I suppose, are embodied in even higher-level
attractors, which self-organize later in both individual development and
neurological evolution.
When intentions strongly entrain motor neurons such that they pre-
cipitate a certain type of behavior with probability near 1, they constitute
proximate intentions; when they merely prime the likelihood of future
motor activity they embody prior intentions (Bratman 1987). Prior inten-
tions restructure a multidimensional neural state into a new organization
characterized by a new set of coordinates and a new dynamics; the context-
sensitive constraints that partition a prior intention’s contrast space carry
emergent properties of meaning, emotional valence, and so on. On this
account logical and syntactical relationships are also embodied in the
higher-level relationships between the various attractors.
268 A. Juarrero
If we take into account the fact that feedback loops embodied in those constraints extend outward
into the environment and back in time, then the following alternative
interpretation opens up: If the subject’s intention is formulated in vague
and general terms as suggested earlier—“I will press the button every so
often”—the agent can then let the environment carry out the detailed
movements without thereby obviating the higher level’s control on the
lower. Rodney Brooks calls the process “letting the world serve as its own
model” (Brooks 1991). If I decide merely to “drive home” (as opposed to
the more precise and specific “drive home along route X”), I can let the
lay of the land—the traffic pattern or road conditions—determine for me
whether to turn right on Oak Street or left on Poplar Street. A well-known
example is the following: after driving home from work along our daily
commuter route all of us have experienced the peculiar sensation of real-
izing that we can’t even remember if the traffic light at Elm and Maple
was red or green that day. Following Libet’s reasoning, it is obvious that
in such situations the actions of depressing the accelerator or brake bypass
conscious decision making. But are they therefore unconstrained by
awareness? I don’t think so. Had a rabbit or a deer suddenly jumped into
our path the automaticity would have been quickly replaced by con-
sciously aware decision making.
The problem with the Libet-like experiments, on my view, is their reli-
ance on a traditional understanding of causality as series of discrete events
acting as efficient causes. In contrast, the thesis put forth in this essay has
been that functional, informational, symbolic, and representational pro-
cesses operate in the brain as top-down, second-order context-sensitive
constraints. Unlike efficient causes, which have traditionally been under-
stood to be instantaneous, atomistic events, the dynamics of higher
informational regulatory systems—the genetic and neural systems, for
example—often operate at a slower and longer time scale than that of their
constituents; that these slower and longer dynamics can nonetheless con-
strain the lower-level dynamics that constitute them is an illustration of
what Haken calls “slaving.” Phase separations between levels of organiza-
tion are often speed and time differences between levels of organization.
Phases are clearly demarcated wherever crisp differences in time scales exist
between the higher-level emergent dynamics and processes at the lower
level; the Gaillard research mentioned earlier is just one of the most recent
to show that dynamic phase separations are significant. Salthe (1985, 2002)
notes that higher-level laws and principles can apply to dynamics that are
slower and longer than those of the lower level; but the higher level is not
always slower than the lower: the regulatory genetic system operates faster
than the metabolic system, and the regulatory neural system works
faster than the metabolic system. My point here is simply that
“there’s more to heaven and earth than is dreamt of in your [mechanistic]
philosophy, Libet,” and so empirical research designed from a mechanistic
framework may be chasing red herrings. Research on the role of timing in
brain processing is still in its early stages (Carey 2008), but I am confident
that future research into dynamic phase separation (such as that between
alpha and beta patterns in the brain) will shed light on causal propagation
and the process of top-down modulation, control, and regulation. In light
of the discussion Libet’s work has occasioned, further work on this feature
of brain dynamics is warranted, especially phase differences between levels
of neural organization, such as the regulatory informational system and
the lower-level motor networks under the former’s control.
Elsewhere I argued that the information-theoretic concepts of equivoca-
tion and ambiguity might help track the effectiveness of intentional con-
straints over the course of the behavior’s trajectory. Newly developed
techniques designed to capture the interdependencies between multiple
time series should prove even more useful in determining the flow of
information from one part of the brain to another. Granger causality,
mentioned earlier in connection with the research on the distributed char-
acter of conscious processes in the brain, is one such technique that com-
pares streams of data known as time series, such as fluctuations in neural
firing patterns. Granger causality helps determine whether correlations are
mere coincidences or reflect one process influencing another process. In
one recent study,
Researchers gave volunteers a cue that a visual stimulus would be appearing soon
in a portion of a computer display screen, and asked them to report when the
stimulus appeared and what they saw. Corbetta’s group previously revealed that this
task activated two brain areas: the frontoparietal cortex, which is involved in the
direction of the attention, and the visual cortex, which became more active in the
area where volunteers were cued to expect the stimulus to appear.
Scientists believed the frontoparietal cortex was influencing the visual cortex,
but the brain scanning approach they were using, functional magnetic resonance
imaging (fMRI), can only complete scans about once every two seconds, which was
much too slow to catch that influence in action. When researchers applied Granger
causality, though, they were able to show conclusively that as volunteers waited for
the stimulus to appear, the frontoparietal cortex was influencing the visual cortex,
not the reverse. (Washington University School of Medicine 2008)
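Granger's test asks whether the past of one series improves prediction of another beyond what the latter's own past provides. The Corbetta group applied it to fMRI time series; the sketch below is only a generic, single-lag version run on synthetic data, implemented from the textbook definition (the 0.8 coupling strength and the series are invented for illustration):

```python
import random

def ols_rss(X, y):
    """Residual sum of squares from an ordinary least squares fit,
    solving the normal equations by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return sum((yi - sum(bi * xi for bi, xi in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def granger_f(x, y):
    """F-statistic for 'x Granger-causes y' with a single lag: does adding
    x's past value improve an autoregressive model of y?"""
    rows = range(1, len(y))
    restricted = [[1.0, y[t - 1]] for t in rows]      # y_t ~ const + y_{t-1}
    full = [[1.0, y[t - 1], x[t - 1]] for t in rows]  # ... + x_{t-1}
    target = [y[t] for t in rows]
    rss_r = ols_rss(restricted, target)
    rss_f = ols_rss(full, target)
    n = len(target)
    return (rss_r - rss_f) / (rss_f / (n - 3))        # one extra parameter

# Synthetic example: x drives y with a one-step delay, so the x -> y
# statistic should dwarf the y -> x statistic.
random.seed(0)
x = [random.gauss(0, 1) for _ in range(300)]
y = [0.0] + [0.8 * x[t - 1] + 0.3 * random.gauss(0, 1) for t in range(1, 300)]
print(granger_f(x, y) > granger_f(y, x))  # prints True
```

The asymmetry of the two statistics is the point: correlation alone cannot distinguish the frontoparietal-to-visual direction from its reverse, but the nested-model comparison can.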
7 Autonomy
I would like to close with some thoughts concerning the implications that
complex adaptive systems have with respect to the philosophically thorny
subject of free will. I propose that these dynamical processes allow one to
rethink autonomy and self-direction in a scientifically respectable way and
with enough of a payoff to warrant the appellation “a kind of free will
worth wanting.”
We saw earlier that the change from physical Bénard cells to chemical
autocatalytic cycles occurs when the boundary constraints within which
the organization takes place are created by the process itself—when, in
other words, the production of the order parameter is brought inside the
system itself, dynamically speaking. When the endogenous dynamics
themselves control the overall process, the slack or decoupling between type
and token that turns regulatory control and direction over to type-identified
criteria provides a measure of self-direction and autonomy to chemical
processes that is absent in merely physical ones.
Increased structural complexity produced through chemical catalysis,
however, in the end becomes brittle and reaches a bottleneck; chemistry
soon reaches its limit, that is, a choke point that prevents further evolution.
As we saw earlier, because it relies solely on context-free constraints,
mitotic biological reproduction quickly does too. Lila Gatlin, whom I cited
earlier, argues that the evolutionary bottleneck was breached and truly
open-ended evolution made possible only after vertebrates discovered a
way to keep context-free constraints stable while simultaneously allowing
context-sensitive constraints to expand. The
research team of Alvaro Moreno, Juli Peretó, and Kepa Ruiz-Mirazo at the
University of the Basque Country (Spain) published a series of papers (Ruiz-
Mirazo and Moreno 1998, 2000, 2006; Ruiz-Mirazo, Peretó, and Moreno
2004; Moreno 2008) individually and severally in which they expand on
Gatlin’s insight: open-ended evolution necessitated a way of preserving
earlier evolutionary advances while at the same time continuing the
Notes
2. See Wheeler and Clark 1999 for additional examples of new properties created
by the causal spread of context-sensitive constraints.
5. Like DNA, however, and despite the claims of Foucault and other postmodernists,
the top-down constraints of language are not completely rigid; instead they allow
for the creation of new meaning accomplished through play and exploration.
17 The Causal Theory of Action and the Still Puzzling
Knobe Effect
Thomas Nadelhoffer
One can test attempted philosophical analyses of intentional action partly by ascer-
taining whether what these analyses entail about particular actions is in line with
what the majority of non-specialists would say about these actions. . . . [I]f there is
a widely shared concept of intentional action, such judgments provide evidence
about what the concept is, and a philosophical analysis of intentional action that
is wholly unconstrained by that concept runs the risk of having nothing more than
a philosophical fiction as its subject matter.
—Alfred Mele (2001, 27)
1 Introduction
Oliver Wendell Holmes once famously remarked that “even a dog distin-
guishes between being stumbled over and being kicked” (Holmes 1963,
Belief    Desire
    ↘    ↙
   Intention

Intention    Skill    Awareness
       ↘       ↓       ↙
        Intentionality
In a series of recent studies, Knobe set out to determine whether folk intu-
itions about the intentionality of foreseeable yet undesired side effects are
influenced by moral considerations (Knobe 2003a,b). Each of the 78 par-
ticipants in the first of these side-effect experiments was presented with a
vignette involving either a “harm condition” or a “help condition.” Those
who received the harm condition read the following vignette:
The vice-president of a company went to the chairman of the board and said, “We
are thinking of starting a new program. It will help us increase profits, but it will
also harm the environment.” The chairman of the board answered, “I don’t care
at all about harming the environment. I just want to make as much profit as I can.
Let’s start the new program.” They started the new program. Sure enough, the
environment was harmed. (Knobe 2003a, 191)
They were then asked to judge how much blame the chairman deserved
for harming the environment (on a scale from 0 to 6) and to say whether
they thought the chairman harmed the environment intentionally. Of the
participants, 82 percent claimed that the chairman harmed the environ-
ment intentionally. Participants in the help condition, on the other hand,
read the same scenario except that the word “harm” was replaced by the
word “help.” They were then asked to judge how much praise the chair-
man deserved for helping the environment (on a scale from 0 to 6) and to
say whether they thought the chairman helped the environment inten-
tionally. Only 23 percent of the participants claimed that the chairman
intentionally helped the environment (ibid., 192).
In another side-effect experiment, Knobe got similar results. This time
each of the 42 participants received one of the following two vignettes:
Harm Condition
A lieutenant was talking with a sergeant. The lieutenant gave the order:
“Send your squad to the top of Thompson Hill.” The sergeant said: “But
if I send my squad to the top of Thompson Hill, we’ll be moving the men
into the enemy’s line of fire. Some of them will surely be killed!”
The lieutenant answered: “Look, I know that they’ll be in the line of fire,
and I know that some of them will be killed. But I don’t care at all about
what happens to our soldiers. All I care about is taking control of
Thompson Hill.” The squad was sent to the top of Thompson Hill. As
expected, the soldiers were moved into the enemy’s line of fire, and some
of them were killed (ibid.).
Help Condition
A lieutenant was talking with a sergeant. The lieutenant gave the order:
“Send your squad to the top of Thompson Hill.” The sergeant said: “But
if I send my squad to the top of Thompson Hill, we’ll be taking them out
of the enemy’s line of fire. They’ll be rescued!” The lieutenant answered:
“Look, I know that we’ll be taking them out of the line of fire, and I know
that some of them would have been killed otherwise. But I don’t care at
all about what happens to our soldiers. All I care about is taking control
of Thompson Hill.” The squad was sent to the top of Thompson Hill. As
expected, the soldiers were moved out of the enemy’s line of fire, and some
of them were saved (ibid.).
In a pair of recent papers, Guglielmo and Malle set out to put pressure on
Knobe’s two central findings—namely, (a) the “side-effect findings,” and
(b) the “skill findings.” The former—which potentially undermine the
As Guglielmo and Malle once again correctly point out, if the bidirectional
model of intentionality were correct, one would expect participants to
select the first and second action descriptions as the most accurate.
However, as was the case in the earlier study, only a minority of partici-
pants found these to be the “most accurate descriptions.” In fact, the first
two descriptions were selected as either the most or second most accurate
by only 16 percent of the participants. Conversely, 83 percent of the par-
ticipants deemed the fourth action description to be either the most or the
second most accurate, while 20 percent selected the third description. In
short, the results of this study provide evidence that “when people were
asked to indicate which action the CEO performed intentionally, they
showed striking agreement: He intentionally adopted a profit-raising
program that he knew would harm the environment; he did not intention-
ally harm the environment” (ibid.). Guglielmo and Malle believe these
findings lay to rest the bidirectional model of intentionality favored by
Knobe and others. On their view, participants in earlier studies only judged
that the CEO harmed the environment intentionally because they were
forced to give a dichotomous intentionality judgment. However, once
participants are given more choices, their intuitions no longer seem to
provide any evidence for the bidirectional model. On the surface, these
new findings admittedly pose a serious challenge to the Knobe effect.
However, I don’t think things are as straightforward as Guglielmo and
Malle have assumed. By my lights, more work needs to be done before they
“disconfirm” the claim that moral considerations sometimes influence folk
ascriptions of intentional action.
Perhaps the biggest shortcoming of Guglielmo and Malle’s new studies
is that they don’t actually directly test the Knobe effect. To see why, keep
in mind that Knobe’s key finding was that participants’ intuitions were
different in the harm condition than they were in the help condition—that
is, participants judged bad side effects to be more intentional than good
side effects. In order to fully explain away these findings, Guglielmo and
Malle would have needed to include both harm and help conditions—
something they failed to do. After all, even if their participants were admit-
tedly less inclined than Knobe’s participants to judge that the CEO harmed
the environment intentionally, it is still possible that had they been pre-
sented with a help condition, their ratings of intentionality would have
been even lower still. In short, Knobe’s findings were relational. It wasn’t
just that participants in the harm condition overwhelmingly judged
that the CEO intentionally harmed the environment. It was also that those
in the help condition overwhelmingly judged that the CEO did not
If forced to select one of these as the most accurate, I suspect some people
might judge that it sounds most accurate to say that she knowingly or
willingly played volleyball. After all, both are presumably necessary condi-
tions of intentionally or purposely playing volleyball. But this would not
mean that they do not also think it is accurate to say that Paige played
volleyball intentionally or purposely. Conversely, even if the majority of
people selected “intentionally” (or “purposely”) as the most accurate
description of Paige’s behavior, it wouldn’t follow that they think it is
inaccurate to say that she played volleyball willingly and knowingly. By
my lights, if one truly wanted to get at the salient intuitions in this case,
one would need to provide participants with conjunctive choices as well,
such as:
Since they failed to do so, they cannot rule out the possibility that
participants judged that the CEO harmed the environment intentionally.
Of course, this criticism admittedly only applies to the first aforemen-
tioned study from Guglielmo and Malle. Their second study, on the other
hand, is immune to this worry since it provides participants with precisely
the kind of more fine-grained action descriptions I was lobbying for above.
However, a closer look at the actual descriptions they used raises additional
worries. To see why, consider once again the choices that were available
to participants:
Accordingly, I ran two new studies that were designed to put these worries
to the test.
The vice-president of a company went to the chairman of the board and said, “We
are thinking of starting a new program. It will help us increase profits, but it will
also harm [help] the environment.” The chairman of the board answered, “I don’t
care at all about harming [helping] the environment. I just want to make as much
profit as I can. Let’s start the new program.” They started the new program. Sure
enough, the environment was harmed [helped].
This situation has certain ambiguities and leaves some questions open. How can
we best describe what the CEO did? Please circle the number of the most accurate
description:
the environment and he intentionally adopted the program, the CEO did not inten-
tionally harm [help] the environment.
In the harm condition, 77 percent of the participants judged that the CEO
intentionally harmed the environment whereas only 23 percent judged
that he did not intentionally harm the environment. In the help condi-
tion, on the other hand, only 19 percent of the participants judged that
the CEO intentionally helped the environment whereas 81 percent judged
that he did not intentionally help the environment. These results—which
are in line with Knobe’s earlier results—are statistically significant (p < 0.01,
Fisher’s exact test [FET]).
On the surface, these findings put pressure on the explanation of
Knobe’s studies put forward by Guglielmo and Malle. After all, on their
view, people find it most natural to say that the CEO harmed the environ-
ment knowingly but not intentionally. However, at least with respect to
this first study, participants were far more likely to judge that the CEO
knowingly and intentionally harmed the environment than they were to
say that he knowingly but did not intentionally harm it. Conversely, par-
ticipants were far more likely to say that the CEO knowingly but did not
intentionally help the environment than they were to say that he inten-
tionally helped it. These results are precisely what one would expect based
on the earlier side-effect studies run by Knobe and others.
Of course, one might worry that I may have unwittingly stacked the
cards in favor of the Knobe effect given the specific wording of the two
choices. For instance, in the harm condition, the first answer actually
explains why one might deem the side effect intentional—namely, the CEO
intentionally adopted a program that he knew would harm the environ-
ment. The second answer, on the other hand, did not provide participants
with the reason why one might judge that the CEO did not intentionally
harm the environment—namely, that he did not intend to do so. Instead,
the second option once again provided participants with the same reason
for deeming the side effect intentional that the first option provided. By my
lights, this is a legitimate concern. Consequently, I ran another study that
was designed to correct for it.
In the second study, participants were 130 people who completed an
online questionnaire.12 Each was randomly assigned to one of two condi-
tions—a harm condition (59 participants) or a help condition (71 partici-
pants). Once again, the wording of the two respective vignettes—which
were based on the original Knobe CEO studies—was the same but for the
moral valence of the outcome. The two conditions were as follows:
The vice-president of a company went to the chairman of the board and said, “We
are thinking of starting a new program. It will help us increase profits, but it will
also harm [help] the environment.” The chairman of the board answered, “I don’t
care at all about harming [helping] the environment. I just want to make as much
profit as I can. Let’s start the new program.” They started the new program. Sure
enough, the environment was harmed [helped].
This situation has certain ambiguities and leaves some questions open. How can
we best describe what the CEO did? Please circle the number of the most accurate
description:
In the harm condition, 88 percent of the participants judged that the CEO
intentionally harmed the environment whereas only 12 percent of the
participants judged that the CEO did not intentionally harm the environ-
ment. In the help condition, on the other hand, only 25 percent judged
that the CEO intentionally helped the environment whereas 75 percent
judged that the CEO did not intentionally help the environment. Once
again these results—which are statistically significant (p < 0.01, FET)—are
perfectly in line with Knobe’s earlier findings. Moreover, they put further
pressure on the explanation of the Knobe effect that was put forward by
Guglielmo and Malle.
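The reported FET values can be checked directly: Fisher's exact test on a 2×2 table is just a hypergeometric tail sum, computable with the standard library. A sketch, with the caveat that the cell counts below are rounded back out of the chapter's percentages (59 harm-condition participants at 88 percent, 71 help-condition participants at 25 percent) rather than taken from it:

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    with the margins held fixed, the probability of a count in cell
    (0, 0) at least as large as the one observed."""
    row1 = a + b                  # size of the first group
    col1 = a + c                  # total "yes" answers across both groups
    total = a + b + c + d
    return sum(comb(col1, k) * comb(total - col1, row1 - k)
               for k in range(a, min(row1, col1) + 1)) / comb(total, row1)

# Reconstructed counts for the second study: harm condition roughly
# 52 "intentionally" vs. 7 "not"; help condition roughly 18 vs. 53.
# These are my approximations, not figures printed in the chapter.
p = fisher_one_sided(52, 7, 18, 53)
print(p < 0.01)  # prints True, consistent with the reported p < 0.01
```

Even under one-sided counting the probability of so lopsided a split arising by chance is vanishingly small, which is why the harm–help asymmetry survives the change of response format.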
Keep in mind that on their view, when participants are provided with
the option of saying that the CEO knowingly harmed the environment
but neither intended nor intentionally harmed it, they will not judge that
the CEO harmed the environment intentionally. However, in my second
study, participants were provided with both options but they nevertheless
overwhelmingly preferred to say that the CEO both knowingly and inten-
tionally harmed the environment. As such, it is unclear that Guglielmo
and Malle have accomplished their goal of shielding the standard unidi-
rectional view from Knobe’s side-effect findings. Indeed, in both of my
latest studies, participants’ responses provide new support for the bidirec-
tional view while further challenging both the simple view and the causal
theory of action. In light of these latest findings, I minimally believe that
I have shown that more work still needs to be done before either side in
this ongoing debate can claim a decisive victory. As it stands, I believe that
the puzzling nature of the Knobe effect remains something that needs to
be either further explained or further explained away.
That being said, it is worth mentioning that there are several lingering
shortcomings with my two latest studies. First, as Guglielmo and Malle
point out, nearly all of the studies on the folk concept of intentionality
that have been run thus far share the limitation of using a vignette design.
Mine are no different in this respect. I find their suggestion of using mock
jury designs and visual stimuli very intriguing. The results of these kinds
of studies would obviously shed important new light on the nature of folk
intentionality judgments. Second, I did not collect data on the cognitive
timing of my participants’ responses to the vignettes. Guglielmo and Malle
are apparently already running some reaction time studies, and I very
much look forward to seeing the fruits of their labors. Finally, and most
importantly, my studies admittedly did not address the “pro-attitude”
hypothesis put forward by Guglielmo and Malle to partly explain the
harm–help asymmetry. On their view, not caring about harming the envi-
ronment is not on par with not caring about helping it. I entirely agree.
Indeed, I voiced the same worry about Knobe’s original vignettes in Nadel-
hoffer 2004b. However, for present purposes, my main goal was to test
Guglielmo and Malle’s “knowingly but not intentionally” explanation of
the Knobe effect. Further testing their “pro-attitude” hypothesis is a task
for another day. Hopefully, psychologists and philosophers will continue
to work together on these issues in an effort to better understand the
nature of our intuitions and beliefs concerning intentional action.
6 Conclusion
In this chapter, I set out to provide an overview of one of the more hotly
contested recent debates in the philosophy of action. More specifically, I
wanted to show that Guglielmo and Malle’s recent attempt to lay the
Knobe effect to rest falls short even if they have admittedly taught us many
important lessons along the way. By my lights, it is clear that they have
thrown down the gauntlet in defense of the standard model of intentional-
ity (and the causal theory of action more generally). Their studies are both
more sophisticated and powerful than the previous research that has been
done on this front. As such, any subsequent work on the folk concept of
intentional action must carefully take Guglielmo and Malle’s insightful
findings into account. Unfortunately, I was admittedly only able to scratch
the surface of their research in this chapter. In the future, I hope to give
their work the further attention it deserves. For now, I have merely tried
to take a few small steps toward better understanding the still puzzling
nature of the folk concept of intentionality. Whether people think I pushed
the debate forward or backward remains to be seen.
Notes
4. Two points of clarification are in order at this point. First, just because I think
that data concerning folk intuitions are relevant to some philosophical debates—e.g.,
free will—it does not follow that I believe that these data are relevant to all philo-
sophical debates—e.g., mereology. Second, just because I think that folk intuitions
are relevant to some philosophical problems, it does not follow that I believe they
solve these problems. To my knowledge, no experimental philosopher tries to move
from “the folk think that x is the case” to “x is the case.” Instead, to the extent that
the folk intuitions are philosophically relevant, they serve as starting points
and constraints on philosophical investigation, not final arbiters of philo-
sophical truth.
6. See, e.g., Adams and Steadman 2004a,b; Cushman and Mele 2008; Feltz and
Cokely 2007; Hindriks 2008; Knobe 2003a,b, 2004a,b, 2005a,b; Knobe and Burra
2006a,b; Knobe and Mendlow 2004; Leslie, Knobe, and Cohen 2006; Machery 2008;
Malle 2001, 2006; Malle and Knobe 1997; Mallon 2008; McCann 2006; Meeks 2004;
Nadelhoffer 2004a,b,c, 2005, 2006a,b,c; Nado forthcoming; Nanay forthcoming;
Nichols and Ulatowski 2007; Phelan and Sarkissian 2008, 2009; Turner 2004; Wiland
2007; Wright and Bengson 2009; Young et al. 2006.
7. Indeed, McCann has actually run some studies of his own on the folk concept
of intentional action (McCann 2006). For my response to his interpretation of the
data he collected, see Nadelhoffer 2006c.
8. See, e.g., Mele 1992; Mele and Moser 1994; Mele and Sverdlik 1996; and Malle
and Knobe 1997.
9. For two recent overviews of the action theory literature in experimental philoso-
phy, see Feltz 2008 and Nado forthcoming.
10. Proponents of this view defend it on a number of grounds. First and foremost,
the simple view purportedly captures our pretheoretical intuitions and coheres with
our ordinary usage of the concepts of intending and intentional action (McCann
1998, 210). After all, in ordinary contexts it would admittedly sound strange for me
to say that I dialed my friend’s phone number intentionally even though I did not
intend to do so. Second, given that the SV is the seemingly uncontroversial claim
that intending to x is necessary for intentionally x-ing, the view has the virtue of being,
well, simple or “uncluttered” (Adams 1986a, 284). Third, it “gives us reason to
believe that our intentions causally guide our actions in virtue of their content”
(ibid.)—thereby supporting our ordinary view of ourselves whereby the contents of
our intentions to x play an important role in our intentionally x-ing.
Adams, F. 1986a. Intention and intentional action: The simple view. Mind and Lan-
guage 1:281–301.
Adams, F. 1994b. Trying, desire, and desiring to try. Canadian Journal of Philosophy
24:613–626.
Adams, F. 2003a. Thoughts and their contents: Naturalized semantics. In The Black-
well Guide to the Philosophy of Mind, ed. T. Warfield and S. Stich. Oxford: Blackwell.
Adams, F. 2007. Trying with the hope. In Rationality and the Good, ed. M. Timmons,
J. Greco, and A. Mele. Oxford: Oxford University Press.
Adams, F., and K. Aizawa. 2008. The Bounds of Cognition. Oxford: Blackwell.
Adams, F., J. Barker, and J. Figurelli. Manuscript. Towards closure on closure.
Adams, F., and A. Mele. 1989. The role of intention in intentional action. Canadian
Journal of Philosophy 19:511–532.
Adams, F., and A. Mele. 1992. The intention/volition debate. Canadian Journal of
Philosophy 22:323–338.
298 References
Adams, F., and A. Steadman. 2004a. Intentional action in ordinary language: Core concept or pragmatic understanding? Analysis 64:173–181.
Adams, F., and A. Steadman. 2004b. Intentional actions and moral considerations: Still pragmatic. Analysis 64:264–267.
Aguilar, J., and A. Buckareff. 2009. Agency, consciousness, and executive control.
Philosophia 37:21–30.
Alvarez, M., and J. Hyman. 1998. Agents and their actions. Philosophy 73:219–245.
Annas, J. 1978. How basic are basic actions? Proceedings of the Aristotelian Society
78:195–213.
Apperly, I. A., and S. A. Butterfill. 2008. Do humans have two systems to track beliefs
and belief-like states? Unpublished manuscript, Department of Psychology, Univer-
sity of Birmingham, UK.
Aristotle. 1983. Physics, Books III and IV. Trans. E. Hussey. Oxford: Oxford University
Press.
Armstrong, D. 1980. The Nature of Mind. Ithaca, N.Y.: Cornell University Press.
Astington, J. W. 1999. The language of intention: Three ways of doing it. In Develop-
ing Theories of Intention: Social Understanding and Self-Control, ed. P. D. Zelazo, J. W.
Astington, and D. R. Olson. Mahwah, N.J.: Erlbaum.
Audi, R. 1993. Intending. In his Action, Intention, and Reason. Ithaca, N.Y.: Cornell
University Press.
Bartsch, K., M. D. Campbell, and G. L. Troseth. 2007. Why else does Jenny run?
Young children’s extended psychological explanations. Journal of Cognition and
Development 8:33–61.
Bartsch, K., and H. M. Wellman. 1995. Children Talk about the Mind. Oxford: Oxford
University Press.
Bedau, M., and P. Humphreys, eds. 2008. Emergence: Contemporary Readings in the
Philosophy of Science. Cambridge, Mass.: MIT Press.
Bennett, J. 2008. Accountability (II). In Free Will and Reactive Attitudes: Perspectives
on P. F. Strawson’s “Freedom and Resentment,” ed. M. McKenna and P. Russell.
Farnham: Ashgate Press.
Berthoz, A., and J. L. Petit. 2006. Phénoménologie et physiologie de l’action. Paris: Odile
Jacob.
Bishop, J. 1989. Natural Agency: An Essay on the Causal Theory of Action. Cambridge:
Cambridge University Press.
Bishop, J. 2007. Believing by Faith: An Essay in the Epistemology and Ethics of Religious
Belief. Oxford: Clarendon Press.
Bittner, R. 2001. Doing Things for Reasons. New York: Oxford University Press.
Borghi, A. 2005. Object concepts and action. In Grounding Cognition, ed. D. Pecher
and R. Zwaan. Cambridge: Cambridge University Press.
Brand, M. 1984. Intending and Acting: Toward a Naturalized Action Theory. Cambridge,
Mass.: MIT Press.
Bratman, M. 1987. Intention, Plans, and Practical Reason. Cambridge, Mass.: Harvard
University Press.
Bratman, M. 2001. Two problems about human agency. Proceedings of the Aristotelian
Society 101:309–332.
Buccino, G., L. Riggio, G. Melli, F. Binkofski, V. Gallese, and G. Rizzolatti. 2005. Listening to action-related sentences modulates the activity of the motor system: A combined TMS and behavioral study. Brain Research: Cognitive Brain Research 24:355–363.
Buckareff, A., and J. Zhu. 2009. The primacy of the mental in the explanation of
human action. Disputatio 3(26):73–88.
Care, N. S., and C. Landesman, eds. 1968. Readings in the Theory of Action. Bloom-
ington, Ind.: Indiana University Press.
Carey, B. 2008. Anticipating the future to “see” the present. New York Times. June
10.
Carpenter, M., J. Call, and M. Tomasello. 2002. A new false belief test for 36-month-
olds. British Journal of Developmental Psychology 20:393–420.
Chasiotis, A., F. Kiessling, J. Hofer, and D. Campos. 2006. Theory of mind and
inhibitory control in three cultures: Conflict inhibition predicts false belief under-
standing in Germany, Costa Rica, and Cameroon. International Journal of Behavioral
Development 30:249–260.
Child, W. 1994. Causality, Interpretation, and the Mind. Oxford: Clarendon Press.
Chisholm, R. 1966. Freedom and action. In Freedom and Determinism, ed. K. Lehrer.
New York: Random House.
Clark, A. 1997. Being There: Putting Brain, Body, and World Together Again. Cambridge,
Mass.: MIT Press.
Clark, A., and R. Grush. 1999. Towards a cognitive robotics. Adaptive Behavior
7:5–16.
Collins, J., N. Hall, and L. A. Paul, eds. 2004. Causation and Counterfactuals. Cam-
bridge, Mass.: MIT Press.
Coope, U. 2004. Aristotle’s account of agency in Physics III.3. Boston Area Colloquium
in Ancient Philosophy 20:201–221.
Coope, U. 2007. Aristotle on action. Proceedings of the Aristotelian Society (suppl. vol.)
81:109–138.
Csibra, G., and G. Gergely. 1998. The teleological origins of mentalistic action
explanations: A developmental hypothesis. Developmental Science 1:255–259.
Cushman, F., and A. Mele. 2008. Intentional action: Two-and-a-half folk concepts?
In Experimental Philosophy, ed. J. Knobe and S. Nichols. New York: Oxford University
Press.
Damasio, A. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. London:
Penguin.
Darley, J., K. Carlsmith, and P. Robinson. 2000. Incapacitation and just deserts as
motives for punishment. Law and Human Behavior 24:659–683.
Davidson, D. 1971. Agency. In Agent, Action, and Reason, ed. R. Binkley et al. Toronto:
University of Toronto Press. Reprinted in Davidson 1980.
Davidson, D. 1973. Freedom to act. In Essays on Freedom of Action, ed. Ted Hond-
erich. London: Routledge & Kegan Paul. Reprinted in Davidson 1980.
Davidson, D. 1978. Intending. In Philosophy of History and Action, ed. Y. Yovel. Dor-
drecht: D. Reidel. Reprinted in Davidson 1980.
Davidson, D. 1980. Essays on Actions and Events. Oxford: Oxford University Press.
Davidson, D. 2001a. Essays on Actions and Events, 2nd ed. Oxford: Oxford University
Press.
Dennett, D. C., and K. Lambert. 1978. The Philosophical Lexicon. Privately printed.
Doherty, M. J. 2009. Theory of Mind: How Children Understand Others’ Thoughts and
Feelings. Hove: Psychology Press.
Dretske, F. 1981. Knowledge and the Flow of Information. Cambridge, Mass.: MIT Press.
Duff, A. 2004. Action, the act requirement, and criminal liability. In Agency and
Action, ed. J. Hyman and H. Steward. Cambridge: Cambridge University Press.
Enç, B. 2003. How We Act: Causes, Reasons, and Intentions. Oxford: Oxford University
Press.
Enç, B., and F. Adams. 1992. Functions and goal-directedness. Philosophy of Science
59:635–654.
Farahany, N. 2009. The interface between freedom and agency. Stanford Technology
Review. http://www.stlr.stanford.edu.
Feinberg, J. 1984. Harm to Others: The Moral Limits of the Criminal Law, vol. 1. New
York: Oxford University Press.
Feltz, A. 2008. The Knobe effect: A brief overview. Journal of Mind and Behavior
28:265–278.
Feltz, A., and E. Cokely. 2007. An anomaly in intentional action ascription: More evidence of folk diversity. In Proceedings of the 29th Annual Meeting of the Cognitive Science Society, ed. D. S. McNamara and G. Trafton. Mahwah, N.J.: Lawrence Erlbaum.
Fischer, J. M., and M. Ravizza. 1998. Responsibility and Control: A Theory of Moral
Responsibility. Cambridge: Cambridge University Press.
Frankfurt, H. 1988. The Importance of What We Care About. New York: Cambridge
University Press.
Frankfurt, H. 1998. Necessity, Volition, and Love. New York: Cambridge University
Press.
Gallagher, S. 2005. How the Body Shapes the Mind. Oxford: Oxford University Press.
Gallese, V., L. Fadiga, L. Fogassi, and G. Rizzolatti. 1996. Action recognition in premotor cortex. Brain 119:593–609.
Gatlin, L. 1972. Information and the Living System. New York: Columbia University
Press.
Ginet, C. 2004. Intentionally doing and intentionally not doing. Philosophical Topics
32:95–110.
Goodale, M. 2004. Perceiving the world and grasping it: Dissociations between conscious and unconscious visual processing. In The Cognitive Neurosciences III, ed. M. Gazzaniga. Cambridge, Mass.: MIT Press.
Goodale, M. A., and A. D. Milner. 1992. Separate visual pathways for perception and action. Trends in Neurosciences 15:20–25.
Gopnik, A., and A. N. Meltzoff. 1997. Words, Thoughts, and Theories. Cambridge, Mass.: MIT Press.
Greene, J. 2003. From neural “is” to moral “ought”: What are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience 4:847–850.
Greene, J. 2007. The secret joke of Kant’s soul. In Moral Psychology, vol. 3: The Neu-
roscience of Morality: Emotion, Disease, and Development, ed. W. Sinnott-Armstrong.
Cambridge, Mass.: MIT Press.
Grush, R. 1997. Yet another design for a brain? Review of Port and van Gelder (eds.),
Mind as Motion. Philosophical Psychology 10:233–242.
Guglielmo, S., and B. F. Malle. n.d.b. Enough skill to kill: Intentional control and the judgment of immoral actions. Unpublished manuscript, University of Oregon.
Haddock, A. 2005. At one with our actions, but at two with our bodies: Hornsby’s
account of action. Philosophical Explorations 8:157–172.
Haggard, P., and M. Eimer. 1999. On the relations between brain potentials and the
awareness of voluntary movements. Experimental Brain Research 126:128–133.
Haidt, J. 2001. The emotional dog and its rational tail: A social intuitionist approach
to moral judgment. Psychological Review 108:814–834.
Haidt, J. 2003. The emotional dog does learn new tricks: A reply to Pizarro and
Bloom. Psychological Review 110:197–198.
Hall, L., and P. Johansson. 2009. Choice blindness: You don’t know what you want.
New Scientist 18:26–27.
Happé, F., and E. Loth. 2002. “Theory of mind” and tracking speakers’ intentions.
Mind and Language 17:24–36.
Hart, H. L. A., and A. Honoré. 1959. Causation in the Law. Oxford: Oxford University Press.
Haynes, J., K. Sakai, G. Rees, S. Gilbert, C. Frith, and R. E. Passingham. 2007. Reading
hidden intentions in the human brain. Current Biology 17:323–328.
Hobbes, T. [1654] 1999. Hobbes’s Treatise Of Liberty and Necessity. In Hobbes and
Bramhall on Liberty and Necessity, ed. V. Chappell. New York: Cambridge University
Press.
Hobbes, T. [1656] 1999. Selections from Hobbes, The Questions Concerning Liberty,
Necessity, and Chance. In Hobbes and Bramhall on Liberty and Necessity, ed. V. Chap-
pell. New York: Cambridge University Press.
Holmes, O. W., Jr. 1963. The Common Law. Boston: Little, Brown.
Hornsby, J. 1993. Agency and causal explanation. In Mental Causation, ed. J. Heil
and A. Mele. Oxford: Clarendon Press.
Hornsby, J. 2004. Agency and actions. In Agency and Action, ed. J. Hyman and H.
Steward, 1–23. Cambridge: Cambridge University Press.
Hughes, C., and J. Dunn. 1997. Pretend you didn’t know: Preschoolers’ talk about
mental states in pretend play. Cognitive Development 12:381–403.
Hughes, C., and J. Dunn. 1998. Understanding mind and emotion: Longitudinal
associations with mental-state talk between young friends. Developmental Psychology
34:1026–1037.
Hume, D. [1777] 1975. Enquiries Concerning Human Understanding and Concerning the
Principles of Morals. Oxford: Clarendon Press.
Jacob, P., and M. Jeannerod. 2003. Ways of Seeing: The Scope and Limits of Visual
Cognition. Oxford: Oxford University Press.
James, W. [1890] 1981. The Principles of Psychology, vol. 2. Cambridge, Mass.: Harvard
University Press.
James, W. 1956. The will to believe. In The Will to Believe and Other Essays in Popular
Philosophy, and Human Immortality. New York: Dover.
Jeannerod, M. 1985. The Brain Machine. Cambridge, Mass.: Harvard University Press.
Johansson, P., L. Hall, S. Sikström, and A. Olsson. 2005. Failure to detect mismatches
between intention and outcome in a simple decision task. Science 310:116–119.
Juarrero, A., and C. Rubino, eds. 2008. Emergence, Complexity, and Self-Organization:
Precursors and Prototypes. Mansfield, Mass.: ISCE Publishing.
Kamm, F. 1994. Action, omission, and the stringency of duties. University of Penn-
sylvania Law Review 142:1492–1512.
Kane, R. 1996. The Significance of Free Will. Oxford: Oxford University Press.
Kim, J. 1993. The non-reductivist’s troubles with mental causation. In Mental Causa-
tion, ed. J. Heil and A. Mele. New York: Oxford University Press.
Knobe, J. 2004a. Folk psychology and folk morality: Response to critics. Journal of
Theoretical and Philosophical Psychology 24:270–279.
Knobe, J. 2005a. Theory of mind and moral cognition: Exploring the connections.
Trends in Cognitive Sciences 9:357–359.
Knobe, J. 2005b. Cognitive processes shaped by the impulse to blame. Brooklyn Law
Review 71:929–937.
Knobe, J., and A. Burra. 2006a. The folk concept of intention and intentional action:
A cross-cultural study. Journal of Cognition and Culture 6:113–132.
Knobe, J., and A. Burra. 2006b. Experimental philosophy and folk concepts: Meth-
odological considerations. Journal of Cognition and Culture 6:331–342.
Knobe, J., and G. Mendlow. 2004. The good, the bad, and the blameworthy: Under-
standing the role of evaluative reasoning in folk psychology. Journal of Theoretical
and Philosophical Psychology 24:252–258.
Ladd, J. 1965. The ethical dimension of the concept of action. Journal of Philosophy
62:633–645.
Landesman, C. 1965. The new dualism in the philosophy of mind. Review of Meta-
physics 19:329–345.
Leslie, A., J. Knobe, and A. Cohen. 2006. Acting intentionally and the side-effect
effect: “Theory of mind” and moral judgment. Psychological Science 17:421–427.
Lewis, D. 1986. Causal explanation. In Philosophical Papers, vol. 2. New York: Oxford
University Press.
Lewis, D. 2004. Void and object. In Causation and Counterfactuals, ed. J. Collins, N.
Hall, and L. A. Paul. Cambridge, Mass.: MIT Press.
Libet, B. 1985. Unconscious cerebral initiative and the role of conscious will in
voluntary action. Behavioral and Brain Sciences 8:529–539.
Libet, B. 2004. Mind Time: The Temporal Factor in Consciousness. Cambridge, Mass.:
Harvard University Press.
Lowe, E. J. 2008. Personal Agency: The Metaphysics of Mind and Action. New York:
Oxford University Press.
Machery, E. 2008. The folk concept of intentional action: Philosophical and experi-
mental issues. Mind and Language 23:165–189.
MacKay, D. 1981. Behavioral plasticity, serial order, and the motor program. Behav-
ioral and Brain Sciences 4:630–631.
Mackie, J. L. 1974. The Cement of the Universe: A Study of Causation. Oxford: Claren-
don Press.
Malle, B. 2001. Folk explanations and intentional action. In Intentions and Intention-
ality: Foundations of Social Cognition, ed. L. Moses, B. Malle, and D. Baldwin. Cam-
bridge, Mass.: MIT Press.
Malle, B. 2004. How the Mind Explains Behavior: Folk Explanations, Meaning, and Social
Interaction. Cambridge, Mass.: MIT Press.
Malle, B., and J. Knobe. 1997. The folk concept of intentional action. Journal of
Experimental Social Psychology 33:101–121.
Mallon, R. 2008. Knobe vs. Machery: Testing the trade-off hypothesis. Mind and
Language 23:247–255.
McCann, H. 1998. The Works of Agency: On Human Action, Will and Freedom. Ithaca:
Cornell University Press.
McCann, H. 2006. Intentional action and intending: Recent empirical studies. Philo-
sophical Psychology 18:737–748.
McDowell, J. 1998. Functionalism and anomalous monism. In his Mind, Value, and
Reality. Oxford: Oxford University Press.
Meeks, R. 2004. Unintentionally biasing the data: Reply to Knobe. Journal of Theoreti-
cal and Philosophical Psychology 24:220–223.
Mele, A. 1981. The practical syllogism and deliberation in Aristotle’s causal theory
of action. New Scholasticism 55:281–316.
Mele, A. 1992. Springs of Action: Understanding Intentional Behavior. New York: Oxford
University Press.
Mele, A., ed. 1997b. The Philosophy of Action. Oxford: Oxford University Press.
Mele, A. 2001. Acting intentionally: Probing folk notions. In Intentions and Inten-
tionality: Foundations of Social Cognition, ed. B. F. Malle, L. J. Moses, and D. A.
Baldwin. Cambridge, Mass.: MIT Press.
Mele, A. 2003. Motivation and Agency. New York: Oxford University Press.
Mele, A. 2006. Free Will and Luck. Oxford: Oxford University Press.
Mele, A. 2009. Effective Intentions: The Power of Conscious Will. Oxford: Oxford Uni-
versity Press.
Mele, A., and S. Sverdlik. 1996. Intention, intentional action, and moral responsibil-
ity. Philosophical Studies 82:265–287.
Meltzoff, A., and M. Moore. 1977. Imitation of facial and manual gestures by human
neonates. Science 198:75–78.
Moll, H., and M. Tomasello. 2007. How 14- and 18-month-olds know what others
have experienced. Developmental Psychology 43:309–317.
Moore, M. 1988. Mind, brain, and the unconscious. In Mind, Science and Psycho-
analysis, ed. P. Clark and C. Wright. Oxford: Blackwell.
Moore, M. 1993. Act and Crime: The Philosophy of Action and Its Implications for
Criminal Law. Oxford: Clarendon Press.
Moore, M. 1997. Intentions and mens rea. In M. Moore, Placing Blame: A General
Theory of the Criminal Law. Oxford: Oxford University Press.
Moore, M. 2009a. Causation and Responsibility: An Essay in Law, Morals, and Metaphys-
ics. Oxford: Oxford University Press.
Nadelhoffer, T. 2004b. Praise, side effects, and intentional action. Journal of Theoreti-
cal and Philosophical Psychology 24:196–213.
Nadelhoffer, T. 2004c. Blame, badness, and intentional action: A reply to Knobe and
Mendlow. Journal of Theoretical and Philosophical Psychology 24:259–269.
Nadelhoffer, T. 2005. Skill, luck, control, and intentional action. Philosophical Psy-
chology 18:343–354.
Nadelhoffer, T. 2006a. Bad acts, blameworthy agents, and intentional actions: Some
problems for jury impartiality. Philosophical Explorations 9:203–220.
Nadelhoffer, T. 2006c. On trying to save the simple view. Mind and Language
21:565–586.
Nadelhoffer, T., and E. Nahmias. 2007. The past and future of experimental philoso-
phy. Philosophical Explorations 10:123–149.
Nichols, S., and J. Ulatowski. 2007. Intuitions and individual differences: The Knobe
effect revisited. Mind and Language 22:346–365.
Nisbett, R. E. 2003. The Geography of Thought: How Asians and Westerners Think Dif-
ferently . . . and Why. New York: Free Press.
Nisbett, R. E., and L. Ross. 1980. Human Inference: Strategies and Shortcomings of Social
Judgment. Englewood Cliffs, N.J.: Prentice-Hall.
O’Connor, T. 2002. Persons and Causes: The Metaphysics of Free Will. Oxford: Oxford
University Press.
Parfit, D. 1997. Reasons and motivation. Proceedings of the Aristotelian Society (suppl.
vol.) 71:99–129.
Pecher, D., and R. Zwaan, eds. 2005. Grounding Cognition. Cambridge: Cambridge
University Press.
Perner, J. 2004. Wann verstehen Kinder Handlungen als rational? In Der Mensch—ein
“animal rationale”? Vernunft—Kognition—Intelligenz, ed. H. Schmidinger and C.
Sedmak. Darmstadt: Wissenschaftliche Buchgemeinschaft.
Perner, J., B. Lang, and D. Kloo. 2002. Theory of mind and self-control: More than
a common problem of inhibition. Child Development 73:752–767.
Perner, J., B. Rendl, and A. Garnham. 2007. Objects of desire, thought, and reality:
Problems of anchoring discourse referents in development. Mind and Language
22:475–517.
Perner, J., and T. Ruffman. 2005. Infants’ insight into the mind: How deep? Science
308:214–216.
Perner, J., P. Zauner, and M. Sprung. 2005. What does “that” have to do with point
of view? The case of conflicting desires and “want” in German. In Why Language
Matters for Theory of Mind, ed. J. W. Astington and J. Baird. New York: Oxford Uni-
versity Press.
Phelan, M., and H. Sarkissian. 2008. The folk strike back: Or, Why you didn’t do it
intentionally, though it was bad and you knew it. Philosophical Studies
138:291–298.
Phelan, M., and H. Sarkissian. 2009. Is the “trade-off hypothesis” worth trading for?
Mind and Language 24:164–180.
Pink, T. 2004. Suarez, Hobbes, and the Scholastic tradition in action theory. In The
Will and Human Action: From Antiquity to the Present Day, ed. T. Pink and M. Stone.
London: Routledge.
Povinelli, D. J., and S. deBlois. 1992. Young children’s (Homo sapiens) understanding
of knowledge formation in themselves and others. Journal of Comparative Psychology
106:228–238.
Price, A. 2004. Aristotle, the Stoics, and the will. In The Will and Human Action: From
Antiquity to the Present Day, ed. T. Pink and M. Stone. London: Routledge.
Rakoczy, H., F. Warneken, and M. Tomasello. 2007. “This way!,” “No! that way!”—
3-year-olds know that two people can have mutually incompatible desires. Cognitive
Development 22:47–68.
Raz, J. 1978. Introduction. In Practical Reasoning, ed. J. Raz. Oxford: Oxford Univer-
sity Press.
Reid, T. [1788] 1983. Essays on the Active Powers of Man. In The Works of
Thomas Reid, D.D., ed. W. Hamilton. Hildesheim: G. Olms Verlagsbuchhandlung.
Reingold, E. M., and P. M. Merikle. 1993. Theory and measurement in the study of
unconscious processes. In Consciousness, ed. M. Davies and G. W. Humphreys.
Oxford: Blackwell.
Repacholi, B. M., and A. Gopnik. 1997. Early reasoning about desires: Evidence from
14- and 18-month-olds. Developmental Psychology 33:12–21.
Rizzolatti, G., L. Fadiga, V. Gallese, and L. Fogassi. 1996. Premotor cortex and the
recognition of motor actions. Brain Research: Cognitive Brain Research 3:131–141.
Ruben, D. 1985. The Metaphysics of the Social World. London: Routledge & Kegan
Paul.
Ruben, D. 2003. Action and Its Explanation. Oxford: Oxford University Press.
Ruffman, T., W. Garnham, A. Import, and D. Connolly. 2001. Does eye gaze indicate
implicit knowledge of false belief? Charting transitions in knowledge. Journal of
Experimental Child Psychology 80:201–224.
Ruiz-Mirazo, K., and A. Moreno. 1998. Autonomy and emergence: How systems
become agents through the generation of functional constraints. In Emergence,
Complexity, Hierarchy, Organization, ed. G. L. Farre and T. Oksala. Acta Polytechnica
Scandinavica. Espoo-Helsinki: The Finnish Academy of Technology.
Ruiz-Mirazo, K., and A. Moreno. 2000. Searching for the roots of autonomy: The
natural and artificial paradigms revisited. Communication and Cognition—Artificial
Intelligence (CC-AI): The Journal for the Integrated Study of Artificial Intelligence, Cognitive
Science, and Applied Epistemology 17:209–228.
Ruiz-Mirazo, K., and A. Moreno. 2006. The maintenance and open-ended growth
of complexity in nature: Information as a decoupling mechanism in the origins of
life. In Rethinking Complexity: Perspectives from North and South, ed. F. Capra, P. Soto-
longo, A. Juarrero, and J. van Uden. Mansfield, Mass.: ISCE Publishing.
Sandis, C., ed. 2009. New Essays on the Explanation of Action. Basingstoke: Palgrave
Macmillan.
Scanlon, T. 1998. What We Owe to Each Other. Cambridge, Mass.: Harvard University
Press.
Schueler, G. F. 2003. Reasons and Purposes: Human Rationality and the Teleological
Explanation of Action. Oxford: Clarendon Press.
Searle, J. 1980. Minds, brains, and programs. Behavioral and Brain Sciences
3:417–457.
Sehon, S. 1994. Teleology and the nature of mental states. American Philosophical
Quarterly 31:63–72.
Sehon, S. 1997. Deviant causal chains and the irreducibility of teleological expla-
nation. Pacific Philosophical Quarterly 78:195–213.
Sehon, S. 2005. Teleological Realism: Mind, Agency, and Explanation. Cambridge, Mass.:
MIT Press.
Sellars, W. 1963. Philosophy and the scientific image of man. In Science, Perception,
and Reality. New York: Routledge & Kegan Paul.
Seth, A. K. 2006. Causal networks in neural systems: From water mazes to conscious-
ness. In Proceedings of the 2006 Meeting on Brain Inspired Cognitive Systems, ed. I.
Aleksander et al. Millet, Alberta: ICSC Interdisciplinary Research.
Shultz, T. R., and F. Shamash. 1981. The child’s conception of intending act and
consequence. Canadian Journal of Behavioural Science 13:368–372.
Simons, D. J., and C. F. Chabris. 1999. Gorillas in our midst: Sustained inattentional
blindness for dynamic events. Perception 28:1059–1074.
Sirois, S., and I. Jackson. 2007. Social cognition in infancy: A critical review of
research on higher order abilities. European Journal of Developmental Psychology
4:46–64.
Smith, A. 2005. Responsibility for attitudes: Activity and passivity in mental life.
Ethics 115:236–271.
Smith, M. 2004. Ethics and the A Priori: Selected Essays on Moral Psychology and Meta-
Ethics. New York: Cambridge University Press.
Soon, C. S., M. Brass, H.-J. Heinze, and J. Haynes. 2008. Unconscious determinants
of free decisions in the human brain. Nature Neuroscience 11:543–545.
Sorabji, R. 1979. Body and soul in Aristotle. In Articles on Aristotle, vol. 4: Psychology and Aesthetics, ed. J. Barnes, M. Schofield, and R. Sorabji. London: Duckworth.
Sorabji, R. 2004. The concept of the will from Plato to Maximus the Confessor. In The Will and Human Action: From Antiquity to the Present Day, ed. T. Pink and M. Stone. London: Routledge.
30th Anniversary of Premack and Woodruff’s Seminal Paper, “Does the Chimpanzee
Have a Theory of Mind?” (BBS 1978). Organized by A. Hamilton, I. Apperly, and D.
Samson, University of Nottingham, September 11–12, 2008.
Southgate, V., A. Senju, and G. Csibra. 2007. Action anticipation through attribution
of false belief by 2-year-olds. Psychological Science 18:586–592.
Symposium on M. Moore’s Act and Crime. 1994. University of Pennsylvania Law Review 142:1443–1748.
Thalberg, I. 1977. Perception, Emotion, and Action. New Haven: Yale University Press.
Thomson, J. 1977. Acts and Other Events. Ithaca: Cornell University Press.
Thomson, J. 1996. Critical study on Jonathan Bennett’s The Act Itself. Noûs
30:545–557.
Tomasello, M., and K. Haberl. 2003. Understanding attention: 12- and 18-month-
olds know what is new for other persons. Developmental Psychology 39:906–912.
Turner, J. 2004. Folk intuitions, asymmetry, and intentional side effects. Journal of
Theoretical and Philosophical Psychology 24:214–219.
Ulanowicz, R. 1997. Ecology: The Ascendent Perspective. New York: Columbia Univer-
sity Press.
Van Mill, D. 2001. Liberty, Rationality, and Agency in Hobbes’s “Leviathan.” Albany:
SUNY Press.
Van Orden, G. C., H. Kloos, and S. Wallot. 2009. Living in the pink: Intentionality,
wellbeing, and complexity. In Handbook of the Philosophy of Science, vol. 10: Philoso-
phy of Complex Systems, ed. C. Hooker. General editors D. M. Gabbay, P. Thagard,
and J. Woods. Amsterdam: Elsevier BV.
Velleman, J. D. 2000. The Possibility of Practical Reason. New York: Oxford University
Press.
Vihvelin, K., and T. Tomkow. 2005. The dif. Journal of Philosophy 102:183–205.
Wallace, R. J. 1999. Three conceptions of rational agency. Ethical Theory and Moral
Practice 2:217–242.
Watson, G., ed. 1982. Free Will. Oxford: Oxford University Press.
Wegner, D. 2002. The Illusion of Conscious Will. Cambridge, Mass.: MIT Press.
Wheeler, M. 2005. Reconstructing the Cognitive World: The Next Step. Cambridge,
Mass.: MIT Press.
Wheeler, M., and A. Clark. 1999. Genic representation: Reconciling content and
causal complexity. British Journal for the Philosophy of Science 50:103–135.
Wiggins, D. 1987. Claims of need. In his Needs, Values, Truth. Oxford: Blackwell.
Wiland, E. 2007. Intentional action and “in order to.” Journal of Theoretical and
Philosophical Psychology 27:113–118.
Williams, B. 1973. Morality and the emotions. In his Problems of the Self. Cambridge:
Cambridge University Press.
Williams, B. 1981a. Internal and external reasons. In his Moral Luck. Cambridge:
Cambridge University Press.
Williams, B. 1995a. Acts and omissions, doing and not doing. In Virtues and Reasons:
Philippa Foot and Moral Theory, ed. R. Hursthouse, G. Lawrence, and W. Quinn.
Oxford: Clarendon Press.
Williams, B. 1995b. Internal reasons and the obscurity of blame. In his Making Sense
of Humanity. Cambridge: Cambridge University Press.
Williamson, T. 2000. Knowledge and Its Limits. Oxford: Oxford University Press.
Wilson, G. 1989. The Intentionality of Human Action, 2nd ed. Stanford: Stanford
University Press.
Wilson, G. 1997. Reasons as causes for action. In Contemporary Action Theory, vol. 1, ed. G. Holmström-Hintikka and R. Tuomela. Dordrecht: Kluwer.
Wilson, M. 2002. Six views of embodied cognition. Psychonomic Bulletin and Review
9:625–636.
Wimmer, H., and H. Mayringer. 1998. False belief understanding in young children:
Explanations do not develop before predictions. International Journal of Behavioral
Development 22:403–422.
Young, L., F. Cushman, R. Adolphs, D. Tranel, and M. Hauser. 2006. Does emotion
mediate the relationship between an action’s moral status and its intentional status?
Neuropsychological evidence. Journal of Cognition and Culture 6:291–304.
Zimmerman, M. 1981. Taking some of the mystery out of omissions. Southern Journal
of Philosophy 19:541–554.
Contributors
Index

Deliberation, 5–6, 13, 27, 119, 143, 169, 172–173, 175, 181, 203–204, 226
Desire, 3–6, 8, 10, 13, 16–17, 27–28, 30, 36–38, 45–47, 49–54, 57, 62, 65, 87, 116, 144, 163, 178, 183, 194, 199–202, 204–207, 211–214, 216–223, 230, 241–242, 244, 277–278, 281–282, 284
Determinism, 70, 140
Developmental psychology, 202. See also Psychology
Differential explanation, 95
Dispositions, 30, 174–175. See also Causal power
Doing, 27–29, 34, 41, 46–50, 52–53, 55, 58, 61, 63–64, 76, 101–103, 111–112, 115, 120–121, 126, 138, 143–144, 184–185, 188, 199, 207, 235–236, 246, 284, 288
Donagan, Alan, 2–3
Duns Scotus, 5
Embodied cognition, 229, 231–235
Emotion, 73, 212, 219, 222–223, 267, 274
Endorsement, 13
Fischer, John Martin, 139–140
Frankfurt, Harry, 13
Free action, 8, 73, 279. See also Free will
Free will, 4, 38, 69–70, 262, 269, 272, 275, 280
Gallagher, Shaun, 238–241, 244, 247–248
Ginet, Carl, 7, 115–116, 141–143, 145, 183, 192–193
Goldman, Alvin, 38–39, 137
Guidance, 92, 234, 245
Hobbes, Thomas, 2, 4–6, 9
Hornsby, Jennifer, 12, 29, 31, 34, 47–55, 109–111
Hume, David, 4, 45, 66, 105, 108–109, 204, 218, 255
Identification, 13
Intending, 116, 124, 136, 141–148, 159–160, 164, 194, 284
Intention, 9–11, 17, 27–28, 30, 35–41, 65, 73–74, 80, 85–93, 96–98, 116–129, 137–151, 157–165, 167, 183, 194, 207, 213–214, 216, 224, 229–231, 234–248, 253, 255, 267–270, 274–275, 277, 279, 281
Intentional action, 1, 3, 8–9, 11, 16–17, 73–75, 80, 87, 90, 92–95, 97, 115–116, 118, 129, 135–139, 141, 145, 147, 149–151, 161, 199–203, 205, 207–209, 213–214, 217, 218, 230–231, 265, 277–281, 284, 287, 293
Intentional behavior, 9, 86, 129, 161, 165, 254
Intentionality, 217, 248, 255, 278–279, 281–282, 284, 286–287, 293–294
Kant, Immanuel, 255–256
Knobe, Joshua, 277–278, 281–285, 287–293
Knowledge how, 45–46, 48–49, 64
Lewis, David, 30, 80, 194
Libet, Benjamin, 43, 269–271
Malle, Bertram, 26, 278, 281, 284–293
McCann, Hugh, 280
Mele, Alfred R., 7, 17–18, 20, 69, 125, 230–231
Mental action, 14, 19, 33–35, 77, 121–123, 125
Mental causation, 1, 3–4, 8–14, 28–29, 35–41, 69, 71, 73–77, 80, 85, 116–119, 123–125, 129, 145, 147, 150, 161, 165, 183, 187, 194, 229–230, 265–267
Moore, Michael S., 11, 19, 27, 31, 36
Motivation, 5, 13, 168, 170, 176, 181
Nadelhoffer, Thomas, 21
Neuroscience, 38–40
Omissions, 19–20, 34–35, 48–49, 55, 58, 115–129, 135–151, 157–160, 161–165
Pacherie, Elisabeth, 230, 235, 237, 244–246, 248
Peacocke, Christopher, 10–11, 52–53, 80, 85–86, 89–93
Practical reason(ing), 6, 80, 154, 202–204, 206, 209–211, 217–218, 253. See also Reasons for action
Psychology, 14, 18, 199–202
  developmental psychology, 20–21, 199–202, 208, 210, 214–215, 217
  social psychology, 18, 295
Rational(ity), 53–55, 63–66, 73, 142, 168–170, 174–181, 203, 210, 212–214, 222–223
Reasons-explanation, 1, 5–11, 13, 16–18, 20–21, 29–30, 38, 49–53, 59–60, 62–65, 69–71, 115, 118–119, 150–151, 167–168, 178, 183–196, 200, 202, 203, 205, 207–213, 218–219, 277
  anti-psychologistic (objective) theory of, 18, 200, 205, 207–213, 218–219
  causal theory of, 16–18, 20, 38, 49–53, 59, 62–65, 115, 150–151, 167–168, 178–180, 183–196
  teleological theory of, 16–18, 20, 183–196, 202, 207–213, 218–219, 227–228
Reasons for action, 1, 5–11, 16–18, 20–21, 38, 71, 116, 137, 150–151, 167–181, 195–196
  con-reasons vs. pro-reasons, 20, 167, 169–170, 172–175, 180–181
Reliability, 20, 93–98, 258
Responsibility, 14, 27, 33, 39–40, 69, 72, 92
Rowlands, Mark, 236–238, 241, 244
Ruben, David-Hillel, 20
Ryle, Gilbert, 6
Sartorio, Carolina, 20, 143, 146, 151, 154–155, 161–165
Searle, John, 239, 247
Sehon, Scott, 20, 187–196
Simple View of intentional action, 231, 284–285, 292, 295. See also Intentional action
Smith, Michael, 11, 18, 19, 57–66
Standard story of action, 1, 6, 9, 13, 19, 27–28, 38, 45–55, 57–66, 115–116, 149, 253
Taylor, Richard, 12, 30
Top-down causation, 254, 256–257, 260, 26–275. See also Causation
Trying, 14, 29, 31–32, 40, 77, 136, 231, 235, 240, 244
Velleman, J. David, 13
Volition, 27–28, 30, 37. See also Trying
Volitionist theory of action, 14, 77
Wants, 7, 13, 17, 144. See also Desire
Weakness of will (akrasia), 2, 73, 170, 175–176, 178–179
Will(ing), 5–6, 14, 27, 31–32, 34–35, 77. See also Volition
Wilson, George, 16–17, 20, 145, 183–187
Wittgenstein, Ludwig, 2, 6, 69, 75, 77–78