Philosophy and Neuroscience: The Problems
Peter Machamer
operative mechanisms for all learning and memory phenomena. This is a questionable belief, given that the cognitive revolution of the 1960s challenged the adequacy of exactly these paradigms as sufficient to account for human cognition.
2. Systems and Mechanisms
Most scientists explain most phenomena by discovering, elaborating, and studying mechanisms. In an earlier paper, Lindley Darden, Carl Craver and I put forward a tentative characterizing definition.4 We said:
“Mechanisms are entities and activities organized such that they are productive of
regular changes5 from start or set-up to finish or termination conditions…
Mechanisms are composed of both entities (with their properties) and activities.
Activities are the producers of change… Entities are the things that engage in
activities.”6
Some years later, there is one error in this that needs to be corrected. The word “regular” should be deleted: it is too law-like, and it is used to make a claim that is just false. There may be, and most certainly are, mechanisms that operate only once. And since we will not allow counterfactuals as part of the analysis, we cannot hold that regularity is necessary. We do not allow counterfactuals because their truth conditions are unclear or non-existent. This is not the case, by the way, for many conditional (if…then) statements, and we do allow conditional reasoning (who wouldn’t?). But conditionals cannot be part of the analysis of how the mechanism works.
Carl Craver wrote in a recent draft: “The boundaries of what counts as a mechanism (what is inside and what is outside) are fixed by reference to the phenomenon to be explained.”7 But “fixed” is most often too strong. There are many ways to skin a cat; and the same protein may be made in so many different ways that it often becomes impossible to form a generalization (or law) that there is a single “gene” responsible, or that the mechanism used in a particular case is always the mechanism for making that protein.8 In almost any case in neuroscience, the termination conditions allow for many alternative paths that would bring them into being. (Carl Hempel recognized this when talking about functional explanations, noting that they lack necessity because of alternative mechanisms for achieving the same end.)9
A realistic example will help specify this, and further allow us to bring out some more points. Patricia Churchland and Terrence Sejnowski report some work that was done on an avoidance response of the leech.10
4 Cf. Machamer, P. K., Darden, L. and Craver, C. F., “Thinking About Mechanisms,” pp. 1-25.
5 I think “regular” should be dropped from the definition. Jim Bogen has argued forcefully that there might be mechanisms that operate only once in a while, or even one that worked only once.
6 Machamer, P. K., Darden, L. and Craver, C. F., “Thinking About Mechanisms,” p. 3.
7 Craver, C. F., Explaining the Brain: What a Science of the Mind-Brain Could Be, Oxford University Press, Oxford, forthcoming. He cites Bechtel, W. and Richardson, R., Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research, Princeton University Press, Princeton, 1992, as well as Glennan, S., “Mechanisms and the Nature of Causation,” Erkenntnis, v. 44, (1996), pp. 49-71.
8 Cf. Griffiths, P. and Stotz, K., “Representing Genes,” www.pitt.edu/~kstotz/genes/genes.html, and more generally Mitchell, S. D., Biological Complexity and Integrative Pluralism (Cambridge Studies in Philosophy and Biology), Cambridge University Press, Cambridge, 2003.
9 Cf. Hempel, C. G., “The Logic of Functional Analysis,” in Gross, Ll. (ed.), Symposium on Sociological Theory, P. Row, Evanston, IL, 1959, pp. 271-307; reprinted in Hempel, C. G., Aspects of Scientific Explanation and other Essays in the Philosophy of Science, Free Press, N. York, 1965, pp. 297-330.
The leech is a model organism, they say, because of the simplicity of its central nervous system, the ease of experimenting on it, and the availability of the organism, an interesting mix of kinds of criteria. But our focus is this: there is a basically straightforward decomposition strategy used in examining how the leech performs this aversive reaction of bending when prodded. We break large goal-oriented tasks into subtasks performed by subsystems, and the whole is then the mechanism of the overall system.11 This strategy and the assumptions about the leech’s functioning are teleological at every stage. The teleological tone is set by the very first description of the case: “The local bending reflex is the behavior of the leech… is a very simple escape behavior wherein potential harm is recognized and avoidance action taken. Basically the leech withdraws from an irritating stimulus by making a kind of large kink in the length of its hoselike body.”12
The leech, they report, is a “segmented animal, and within each segment a ganglion of neurons coordinates behaviors.”13 Nine types of interneurons have been identified interposed between the sensory and motor neurons; these mediate dorsal bending by receiving excitatory inputs (from P cells) and sending output to excitatory motor neurons. They comment, “There are no functional connections between the interneurons.”14 This latter statement means, I presume, that we must treat each interneuron as an independent part of the mechanism. There is, however, a seeming paradox that needs to be explained away. Each interneuron has multiple inputs, and some of these inputs excite motor neurons that activate ventral bending (i.e., bending towards the stimulus, rather than avoidance away from it). They try explaining this untoward (non-teleological) activity in terms of its usefulness to other behaviors that the leech performs, e.g., swimming. What this implies is that some interneurons are part of a number of different systems having different goal states, all of which are functioning together at any given time.
Similarly, every human sensory path carries multimodal information, part of which we neglect when we study the mechanisms of the visual system. This means that what counts as the mechanism for a given end state is in part a function of the purposes of the investigator, which is what I have called perspectival teleology. But only in part; the other constraints come from the world and, more specifically, from our background knowledge about the world, which constrains where the investigator looks, what she studies and tries to isolate and identify, what can be discovered, and, most importantly, what end or goal state is chosen to be investigated.
This description is a pastiche of the procedures for discovering mechanisms that Lindley Darden and Carl Craver have begun to work out.15 Notice, though, that the goal condition, aversive behavior, is taken as an unproblematic natural response, but it needs to be specified in more detail.
10 Cf. Churchland, P. and Sejnowski, T., The Computational Brain, Bradford Book, The MIT Press,
Cambridge, MA, 1992, pp. 336f.
11 For decomposition see Bechtel, W. and Richardson, R., Discovering Complexity: Decomposition and
Localization as Strategies in Scientific Research, Chapter 2.
12 Churchland, P. and Sejnowski, T., The Computational Brain, p. 341.
13 The Computational Brain, p. 342.
14 Churchland, P. and Sejnowski, T., The Computational Brain, pp. 343-344.
15 See Craver, C. F. and Darden, L., “Discovering Mechanisms in Neurobiology: The Case of Spatial
Memory,” in Machamer, P., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the
Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001, pp. 112-137; and Craver, C. F.,
“Interlevel Experiments and Multilevel Mechanisms in the Neuroscience of Memory,” Philosophy of
Science, v. 69, (2002), pp. S83-S97.
Aversive behavior was selected by the researcher because of its seemingly typical importance to organisms of many kinds in avoiding pain or harm. Yet there is no incompatibility in this case between being natural and being socially or personally chosen. It is both perspectival teleology and natural teleology.
The researcher selects the perspectival goal of the mechanism, although it is a
naturally occurring outcome of a production in nature. It has a natural telos. The
researcher also has to make many decisions as to what to include in the mechanism
that produces the goal state, and these decisions too are constrained by what’s there,
what’s happening, and what techniques she has available for investigation. One way
to think about the need for such choices in naturalistic contexts is that there are no
closed systems in nature, and so the researcher must put limits on them (close them)
for explanatory and investigatory purposes. This echoes the rationale given for
controlled experiments. Another dimension of the Churchland and Sejnowski leech case is that the neurobiological information we have also constrains the building of a computational model. “The most basic lesson from LeechNet I and LeechNet II is that
neural circuits could be constructed for performing the desired behavior, with
components whose properties are consistent with the limited [anatomical and
neurobiological] data available.”16 There seem to be some limits. We noted above the lack of functional connection among the interneurons. This provides us with an anatomical constraint on what counts as the mechanism. More philosophically, we generally do not countenance mechanism explanations for “non-natural,” Cambridge complex properties (e.g., the object or event formed by my mixing a martini today at 6 p.m. and Giovanni’s turning off his Puccini recording yesterday). We do not think we could
ever find any productive connection among these activities. We have no useable
criteria or background knowledge that places these events into a unified system. Our
presumed knowledge at a time is used to specify what counts as a system as well as
what system will be studied. If we wish to study some visual mechanism, our knowledge (born, as Tom Kuhn said, of our training) of what constitutes the visual system puts boundaries on where and what kinds of mechanisms, entities and activities we may look for. Yet we may decide to question that knowledge. What counts as a proper phenomenon for explanation by mechanisms depends, most usually, on the criteria of importance that are in place in the discipline at a time. This means there is always an evaluative component operative in the selection of goal states to be explained.
Yet, there are physical systems with equilibrium states, where being in equilibrium or
in homeostasis is the natural goal, and we seek to discover the mechanism by which
such equilibrium is achieved. Here we might be tempted to say that this is a natural
teleological system, the goal given by nature, and we just become interested in it. But
if somehow we established there were no true equilibrium systems in nature, or that what we call an aversive response is really a myriad of different systems operating in different ways, then it is we who would have made the mistake in identifying the phenomenon of our research and treating it as unitary. There is another kind of
mistake possible, which takes us back to the reasons Churchland and Sejnowski gave for picking the leech as a model organism. Their second criterion was ease of experimentation. This surely is a reasonable pragmatic constraint. But it can lead to error when the experimental paradigms we use are chosen primarily because we know they may be applied cleanly, rather than because they allow us to explore the phenomenon of interest.17
16 Churchland, P. and Sejnowski, T., The Computational Brain, p. 352.
I am raising here questions about the validity of some quite reliable data. There is no time here to go into the problem in any depth, but consider Churchland and Sejnowski’s characterization of the leech as “recognizing potential danger” when prodded. The behavior we see is that the leech bends away from the stimulus; we call it aversive, and then gloss “aversive” as “response to potential danger.” This description makes the behavior intelligible to us. Such a gloss is probably harmless enough in this case. But consider the experimental paradigms of learning used in neurobiology. They come in three types: repeated stimulation, classical conditioning and reward conditioning. There are problems internal to each of these paradigms that could lead one to question their use and, especially, the interpretation they provide for the data that are supposed to exhibit learning. But let me raise a bigger question. At the time of the cognitive revolution (during the 60s, though one can find harbingers of this before, e.g. Bartlett 1932),18 a major reason for shifting the paradigm away from behaviorism was the inadequacy of just these three paradigms for learning. Could it be that neurobiologists use these paradigms because they can be applied, despite the fact that much of the learning we are interested in, say in higher primates, cannot be explained using only these limited kinds?
3. Reduction
All the above topics are related to the old philosophical problem of reduction. In perhaps its classic form, Schaffner argued that uni-level reduction is the hallmark of a fully “clarified science,”19 and that its realization requires the satisfaction of two specific conditions. First, causal generalizations contained in the explanans of a given theory must be completely reduced to terminology referring to the processes occurring at one specific level of aggregation. Secondly, the explanandum must be uni-level in so far as it contains a generalization statement situated at a single level of aggregation, whether the same as or different from that of the explanans.
Scientists’ use of the term “reduction” differs from the philosophers’ use. Scientists most often just mean that at least part of the explanation of a phenomenon is provided by exhibiting the lower-level mechanisms that show how that phenomenon is produced. The mechanisms are operating at different levels and, in some cases, are constitutive of the entities and activities at higher levels.
In fact, it seems that all or almost all mechanisms are multi-level. If this is so, the
way philosophers see reduction, as reducing “higher” levels to a single lower level, is
impossible. Some philosophers —Darden, Wimsatt, and Schaffner— have recognized
this.20
17 Aspects of this problem have been discussed in Weber, M., “Under the Lamp Post: Comments on Schaffner,” in Machamer, P. K., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001, pp. 231-249.
18 Cf. Bartlett, F. C., Remembering: A Study in Experimental and Social Psychology, Cambridge University Press, Cambridge, 1932.
19 Cf. Schaffner, K., Discovery and Explanations in Biology and Medicine, The University of Chicago Press, Chicago, 1993.
20 Cf. Darden, L., “Discovering Mechanisms in Molecular Biology: Finding and Fixing Incompleteness and Incorrectness,” in Schickore, J. and Steinle, F. (eds.), Revisiting Discovery and Justification, Max Planck Institute for History of Science, Berlin, 2002, pp. 143-154; Wimsatt, W. C., “Reductionism and its Heuristics: Making Methodological Reductionism Honest,” paper for a conference on Reductionism, Institute Jean Nicod, Paris, November 2003, proceedings forthcoming in Synthese; and Schaffner, K., Discovery and Explanations in Biology and Medicine, pp. 296-322.
5. Information
A ubiquitous concept, information, is found throughout the neuroscience literature. Its use in science dates back to Claude E. Shannon and Warren Weaver,25 who developed a mathematical concept of information, where information is simply a calculated probability that some bit of a structure at one end of a channel (the sender) will end up at the other end (the receiver). This type of information cannot be used to talk about content, and it has quite rigorous criteria for being applied to any communication system. Almost all of the time, when the term “information” is used in neuroscience it is not used in this mathematical sense.
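To fix ideas, Shannon’s measure can be stated compactly; this is the standard textbook form, given here for reference rather than taken from Shannon and Weaver’s own notation. The information a received signal $Y$ carries about a transmitted signal $X$ is the mutual information

$$I(X;Y) \;=\; \sum_{x,\,y} p(x,y)\,\log_2 \frac{p(x,y)}{p(x)\,p(y)},$$

a quantity defined entirely over the probabilities of what is sent and what is received. Nothing in the formula refers to what a signal is about, which is why this mathematical sense of information cannot be used to talk about content.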
25 Cf. Shannon, C. and Weaver, W., The Mathematical Theory of Communication, University of Illinois Press, Urbana, 1949; republished in paperback, 1963.
can be (is) made in a number of different ways. Similar remarks may be made about synaptic transmission or about the visual system. There is nothing, no structure, that seems to remain intact and flow along a channel. It’s always much more complicated.
We may identify a structure (or content) in the blueprint or, perhaps, in a string of codons, in that this originating state, somehow to be explained in each case, represents the goal state or product and may be actively used by an agent, as instructions, in order to produce the goal state, though much more information and many more activities are needed to achieve the goal. The goal state is specified only partially by the information that is presented in the blueprint, i.e., in a quite limited way the blueprint presents aspects of what the house will look like when built. It is a limited 2-dimensional representation of what is to become a 3-dimensional object, and limited in that many details are missing, many skills needed to execute the blueprint are not described, etc.
blueprint are not described, etc. Conversely, because of the specificity of structures
one can compare the house to the blueprint and see, in some respects, how well the
builders followed their plan. But only in some respects; in just those respects where
the information available in the blue print maybe compared with the information
available from looking at the finished house. Similarly, information in DNA is
information for the mRNA and it carries information about which amino acids, bases,
nucleotides and other chemical structures may collaborate in the construction of a
protein. But this is just to say we have a mechanism. The idea of information is not
really doing any work.
So here is one moral: Information is a noun that describes “things” that provide information for some use by someone (or something). The major uses of information are as evidence for an inference, often inductive; as a basis for a prediction; as a basis for an expectation; or as a basis that directs the subsequent behavior of a mechanism. The information in the blueprint is used to direct persons’ (builders’) actions, i.e., those actions that are the tasks of construction necessary to build a house that looks like the one in the blueprint. The blueprint portrays how to connect those things, but may not say what type of nails to use. Information is always information for an activity that will use it, and “use” means the information directs and controls the activity(ies) or production to some extent. Maybe we could usefully say that information constrains the activities involved in the production.
Here is another moral: Things use information for some purpose. So information systems are always teleological in this sense. And so information is always information about something; information “points towards some end.”
We can distinguish two basic types of purposes: information for rational use (as in inferences or as a basis for interpretation), and information for mechanical use (for control or for production).
The information in the blueprint is then information about the house. And this is true despite the fact that there is no message in the blueprint (so whatever information is, it is not like a letter or a telegram). The encoding need not send signals, but the information is for the active producer and is used by that producer in production. And we can check the end product against the information in the starting conditions to see if it was successfully produced. This means information is always information about an end stage, and so is a relation between originating conditions and this termination stage, where the intervening activities are directed by the originating state towards the final product. So information is teleological.
Natural mechanisms may follow such selective and productive patterns also. Information, in this sense, is what directs a mechanism’s activities of production such that they result in a selected end, where the end bears an appropriate, specified relation to the beginning. So, as said above, it is helpful to think of information as constraining the possible outcomes by controlling the means and materials of production. In perception, complex features of the environment are actively picked up by a perceiver and used by that perceiver to guide subsequent behavior. The information in a DNA segment is about which amino acids are to belong to the protein to be synthesized, and how they are to be arranged. But what makes it so is not the transmission of a signal. Just what information the DNA segment contains depends on the arrangements of the bases in its codons. DNA and mRNA segments feature different bases. As protein synthesis proceeds, the patterns in which they are arranged are replaced by new patterns. Even if we could make sense out of the claim that the DNA contains information about the protein to be synthesized, no signal is transmitted from the DNA to the structures involved in the later stages of the synthesis.29
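The dependence of this information on the arrangement of the bases can be shown with a toy example. The sketch below uses a handful of assignments from the standard genetic code, but the particular sequences, and the idea of translating them with a simple lookup table, are illustrative assumptions of mine, not a model of the actual synthesis machinery.

```python
# Toy illustration: what a DNA segment "says" is fixed by the arrangement
# of bases in its codons, not by any signal transmitted down a channel.
CODON_TABLE = {  # a few entries from the standard genetic code (DNA codons)
    "ATG": "Met", "GCA": "Ala", "AAA": "Lys",
    "TTT": "Phe", "TGG": "Trp", "GGT": "Gly", "GAT": "Asp",
}

def translate(dna):
    """Read a DNA segment codon by codon into amino acids."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

print(translate("ATGGCAAAA"))  # ['Met', 'Ala', 'Lys']
print(translate("AAAGCAATG"))  # same bases, rearranged: ['Lys', 'Ala', 'Met']
```

The two segments contain exactly the same bases; only their arrangement differs, and with it the protein specified. Nothing flows from one place to another: the difference lies wholly in the structure.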
From this, we might say that the structure of the start-up conditions selects the subsequent activities and entities that are responsible for bringing into existence the termination conditions (which are our phenomenon of interest). One might be misled by this way of speaking into thinking that intentional animism has crept in again, with the concepts of responsibility and control and the need to attribute these to the proper entities and activities that we take to comprise the mechanism. However, this is a question of what is controlling the production of the end state and what relation that end state bears to the originating state. Often in such conditions, we are tempted to go outside of the mechanism and seek larger causes for, and a greater specification of, why the termination conditions are a preferred state. But this is true for any selective process.30 Noting this feature helps explain the desire to bring in evolution as a selection principle everywhere we find a mechanism, because we think we then have a “big” reason for the goal. This is the job that God used to do. Unfortunately, in many cases evolution works no better than God did.
6. Computational Models
Computational models are often used to bridge the “gap” between neurobiology and gross physiology and behavior. Models present highly abstract sets of relations that assume that neurons in the brain may be described as functioning items in digital mechanisms or networks, which, if compounded and manipulated in the proper ways, may show how possibly a brain can give rise to behavior. We referred to one such model above, Churchland and Sejnowski’s LeechNet I & II,31 which was a computer model of the leech’s dorsal bending.
Such models obtain their explanatory power because they may be used to mediate between behavior and the known or assumed neurobiological data that serve as constraints on the model. Further constraints may come from location data and cognitive tasks. Such models may be used to direct research into the mechanisms implicated in some cognitive task.
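To make the kind of model at issue concrete, here is a minimal sketch in the spirit of, but not reproducing, LeechNet: a hypothetical feedforward circuit in which sensory (P-cell) units drive nine mutually unconnected interneurons, which in turn excite motor units. The unit counts other than the nine interneurons, the random weights, and all names are illustrative assumptions; in a serious model the weights would be fit under the anatomical and neurobiological constraints just described.

```python
# Hypothetical sketch of a LeechNet-style circuit; not the published model.
# Sensory (P-cell) inputs -> nine interneurons -> motor units.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_sensory, n_inter, n_motor = 4, 9, 2          # nine interneurons, per the leech case
W_si = rng.normal(size=(n_inter, n_sensory))   # sensory -> interneuron weights (stand-ins)
W_im = rng.normal(size=(n_motor, n_inter))     # interneuron -> motor weights (stand-ins)
# There is deliberately no interneuron-to-interneuron weight matrix,
# reflecting the reported absence of functional connections among them.

def respond(stimulus):
    """Map a pattern of P-cell activation to motor-unit activations."""
    inter = sigmoid(W_si @ stimulus)   # each interneuron integrates its inputs independently
    return sigmoid(W_im @ inter)       # excitatory drive onto the motor units

# A prod at one location: one P cell active; read off the motor response.
print(respond(np.array([1.0, 0.0, 0.0, 0.0])))
```

Even this toy version exhibits the two constraints noted earlier: each interneuron receives multiple inputs, and there are no connections among the interneurons themselves.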
29 For details, see Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K. and Walter, P., The Molecular Biology of the Cell, 4th ed., Garland, New York, 2002, Chapter 7.
30 Cf. Machamer, P. K., “Teleology and Selective Processes,” in Colodny, R. (ed.), Logic, Laws, and Life: Some Philosophical Complications, University of Pittsburgh Press, Pittsburgh, 1977, pp. 129-142.
31 On Churchland and Sejnowski’s LeechNet I & II, cf. Churchland, P. S. and Sejnowski, T., The Computational Brain, passim.
this is to note that truth conditions, at least in practice, are use conditions, and even those “norms set by the world” (empirical constraints) structure human practices concerning correct or acceptable use, e.g., descriptions of the use of instruments or even the right ways to fix a car. Human practices are social practices, and the way we humans use truth as a criterion is determined not only by the world, but also by those traditions we have of inquiring about the world, of assessing the legitimacy of descriptions and claims made about the world, and of evaluating actions performed in the world.
As extrinsic epistemologists have stressed, there is a need to refocus philosophical concern so as to concentrate on reliability (rather than truth). From our discussion above we may infer that the reliability of memory entails answers about the reliability of knowledge.
One way to begin to think about reliability and its connection to truth is to ask: what are the conditions for “reliable” or appropriate assertions (or other kinds of speech acts)? That is, what conditions must obtain, or what presuppositions must be fulfilled, in order to make a speech act reliable or appropriate? What are the criteria of success for such speech acts? And how are such criteria established? Validated? And changed? More generally, what evaluative criteria apply to actions? Speech acts are actions. How do actions show you, who perform them, that they are effective? Appropriate? These are different questions. You may effectively insult someone, but it may be highly inappropriate. On what grounds are they judged to be appropriate by other people?
References

Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K. and Walter, P., The Molecular Biology of the Cell, 4th ed., Garland, New York, 2002.
Anscombe, G. E. M., “Causality and Determination,” Inaugural lecture at the University of
Cambridge, 1971; reprinted in Anscombe, G. E. M., Metaphysics and the Philosophy of Mind, The
Collected Philosophical Papers, 1981, vol. 2., University of Minnesota Press, Minneapolis, pp. 133-147.
Aristotle, Nicomachean Ethics, edited with translation by H. Rackham, Harvard University Press, Cambridge, 1934.
Glennan, S., “Mechanisms and the Nature of Causation,” Erkenntnis, v. 44, (1996), pp. 49-71.
Glennan, S., “Rethinking Mechanical Explanation,” Philosophy of Science, v. 69, (2002), pp.
S342-S353.
Gibson, J. J., The Senses Considered as Perceptual Systems, Houghton Mifflin, Boston, 1966.
Gibson, J. J., The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, 1979
(reprinted in Lawrence Erlbaum Associates, 1987).
Griffiths, P. and Stotz, K., “Representing Genes,” www.pitt.edu/~kstotz/genes/genes.html.
Hempel, C. G., “The Logic of Functional Analysis,” in Gross, Ll. (ed.), Symposium on Sociological
Theory, P. Row, Evanston, IL, 1959, pp. 271-307; reprinted in Hempel, C. G., Aspects of Scientific
Explanation and other Essays in the Philosophy of Science, Free Press, N. York, 1965, pp. 297-330.
Kim, J., Mind in a Physical World, Cambridge University Press, Cambridge, 1998.
Machamer, P. K., “Gibson and the Conditions of Perception,” in Machamer, P. K. and
Turnbull, R. G. (eds.), Perception: Historical and Philosophical Studies, Ohio State University Press,
Columbus, OH, 1975, pp. 435-466.
Machamer, P. K., “Teleology and Selective Processes,” in Colodny, R. (ed.), Logic, Laws, and
Life: Some Philosophical Complications, University of Pittsburgh Press, Pittsburgh, 1977, pp. 129-142.
Machamer, P. K., Darden, L. and Craver, C. F., “Thinking About Mechanisms,” Philosophy of Science, v. 67, n. 1, (2000), pp. 1-25.
Machamer, P. K., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the
Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001.
Machamer, P. K. and Osbeck, L., “The Social in the Epistemic,” in Machamer, P. K. and
Wolters, G. (eds.), Science, Values and Objectivity, University of Pittsburgh Press, Pittsburgh, 2004, pp.
78-89.
Millikan, R., Varieties of Meaning, The MIT Press, Cambridge, MA, 2004.
Mitchell, S. D., Biological Complexity and Integrative Pluralism (Cambridge Studies in Philosophy
and Biology), Cambridge University Press, Cambridge, 2003.
Nagel, T., “What Is It Like to Be a Bat?,” The Philosophical Review, v. 83, n. 4, (1974), pp. 435-450.
Neisser, U., Cognitive Psychology, Appleton Century Crofts, N. York, 1967.
Richardson, R., “Discussion: How Not to Reduce a Functional Psychology,” Philosophy of
Science, v. 49, (1982), pp. 125-137.
Richardson, R., “Cognitive Science and Neuroscience: New-Wave Reductionism,” Philosophical
Psychology, v. 12, n. 3, (1999), pp. 297-307.
Robinson, H., “Dualism,” in Stich, S. and Warfield, T. (eds.), The Blackwell Guide to Philosophy
of Mind, Blackwell, Oxford, 2003, pp. 85-101.
Robinson, H., “Dualism,” The Stanford Encyclopedia of Philosophy (Fall 2003 Edition), Zalta, E.
N. (ed.), URL =.
Sarkar, S., “Models of Reduction and Categories of Reductionism,” Synthese, v. 91, n. 3, (1992),
pp. 167-194.
Schaffner, K., “Approaches to Reductionism,” Philosophy of Science, v. 34, (1967), pp. 137-147.
Schaffner, K., “Theory Structure, Reduction and Disciplinary Integration in Biology,” Biology
and Philosophy, v. 8, n. 3, (1993), pp. 319-347.
Schaffner, K., Discovery and Explanations in Biology and Medicine, The University of Chicago
Press, Chicago, 1993.
Schaffner, K., “Interactions Among Theory, Experiment, and Technology in Molecular Biology,”
Proceedings of the Biennial Meetings of the Philosophy of Science Association, v. 2, (1994), pp. 192-205.
Schaffner, K., Reductionism and Determinism in Human Genetics: Lessons from Simple Organisms, University of Notre Dame Press, Notre Dame, IN, 1995.
Schouten, M. and Looren de Jong, H., “Reduction, Elimination, and Levels: The Case of the LTP-Learning Link,” Philosophical Psychology, v. 12, n. 3, (1999), pp. 237-262.
Shannon, C. and Weaver, W., The Mathematical Theory of Communication, University of Illinois
Press, Urbana, 1949, republished in paperback, 1963.
Smart, J. J. C., “Sensations and Brain Processes,” Philosophical Review, v. 68, (1959), pp. 141-
156.
Weber, M., “Under the Lamp Post: Comments on Schaffner,” in Machamer, P. K., Grush, R.,
and McLaughlin, P. (eds.), Theory and Method in the Neurosciences, University of Pittsburgh Press,
Pittsburgh, 2001, pp. 231-249.
Wimsatt, W., “The Ontology of Complex Systems: Levels of Organization, Perspectives and Causal Thickets,” Canadian Journal of Philosophy, Supplementary Volume 20, (1994), pp. 207-274.
Wimsatt, W. C., “Reductionism and its Heuristics: Making Methodological Reductionism
Honest,” paper for a conference on Reductionism, Institute Jean Nicod, Paris, November 2003.
Proceedings forthcoming in Synthese.
Woodward, J., “Data and Phenomena,” Synthese, v. 79, (1989), pp. 393-472.