
Philosophy and Neuroscience: The Problems

Peter Machamer

1. Philosophy and Neuroscience: The Issues


If one does not hold the view that philosophy is a totally sui generis enterprise
and if one does not hold that the domain of philosophical inquiry is somehow limited
to the apriori, abstract objects or some other collection of “phenomena” that is
hermetically sealed off from the domains treated by the sciences, then it ought to be
that studies in neuroscience have relevance to issues of mind/brain, the nature of
knowledge and knowledge acquisition and use, and more general issues in
epistemology and metaphysics.
It is also the case that neuroscience, in whatever subdiscipline (cognitive
neuroscience, neurobiology, etc.), being a science, should raise all the problems
about explanation, evidence, the role of experiment, the discovery of mechanisms, etc. that all
sciences have. That is, the basic issues in philosophy of science may be raised
specifically about neuroscience.
If the above is true, then this means philosophy of neuroscience has two types of
problems:
(1) The old philosophical kinds: subjectivity, consciousness, experience, belief and
knowledge; and
(2) Problems arising from treating neuroscience as a branch of science, the
traditional philosophy of science questions.
There are methodological schools of psychology that hold, just like the
behaviorists of old, that psychology need not attend to the “hardware” or “wetware” of
what goes in the brain. Any epistemic and cognitive theories may be advanced and
tested independently of any neuro-data. Sometimes a computer analogy, specifically
the distinction between program level descriptions and logic circuits or even
hardware, has been invoked (by Fodor and Dennett) to support such claims.1 A more
empiricist, old positivistic, yet once again contemporary view holds that all the theory
that one needs or is entitled to have will come from analyzing data in the proper way.
Using Bayes nets is a favorite here, though other forms of computational modeling
subscribe to similar principles and goals. I shall deal with this radical empiricism
only in passing.
Let us first consider the philosophy of science problems. First, what is the nature
of neuro-scientific explanations? I think it is correct to claim that most neuroscientists
seek to explain the phenomena they are interested in by discovering and elaborating
mechanisms,2 and further that this is how they characterize their own work. However,
before we delve into this problem, certain preliminary distinctions need to be made
clear.
1 Cf. Fodor, J., “Special Sciences (Or: The Disunity Of Science As A Working Hypothesis),” Synthese,
v. 28, (1974), pp. 97-115, and Dennett, D., The Intentional Stance, The MIT Press, Cambridge, MA, 1987.
2 Cf. Machamer, P. K., Darden, L. and Craver, C. F., “Thinking About Mechanisms,” Philosophy of
Science v. 67, n. 1, (2000), pp. 1-25.
183
Contemporary Perspectives in Philosophy and Methodology of Science

In discussing the sciences of the brain, it will prove convenient to distinguish
among neurobiology, cognitive neuroscience and computational neuroscience. We
could also add the medical field of neurology, and a number of other research areas.
Philosophically, the most important distinction is between phenomena and data.3
Phenomena are what are being explained, conceived or observed, often in a "pre-
scientific" way. "Pre-scientific" means we have somehow isolated a type of event that
we deem to be important, and which we then will try to investigate scientifically
through experiment, observation and other means. For example, one wants to explain
human memory loss during old age, or what kinds of things are remembered the
longest, or how perceptual recognition occurs, or word learning, etc. Biologically, one
may wish to find out, e.g., under what conditions and what it is in neurons that
changes such that they fire more frequently to types of stimuli.
Within the field of neuroscience it is common to distinguish three areas of
research: cognitive neuroscience, neurobiology and computational neuroscience.
Cognitive neuroscience attempts to discover the neural or brain mechanisms that
cause (most often they say, are implicated in) behavior, actions and cognitive
functions. Oftentimes researchers use imaging devices (fMRI, nMRI or PET scans) combined with
operationalized experimental tasks to attempt to localize where in the brain these
cognitive functions are occurring. Sometimes this area is called neuropsychology.
Occasionally a daring researcher will purport to give us information about
consciousness, subjectivity, or the self.
Neurobiology investigates the electro-chemical processes that occur at the
cellular or neuronal level. Examples are inducing LTP (long term potentiation),
examining patterns of dendritic growth, causing neuronal learning (firing to a
stimulus type), and finding examples of neuronal plasticity where one set of neurons
takes on a function that before had been carried on by another set or another system.
Data, as opposed to phenomena, are the outcomes of the experimental or
measuring procedures used by scientists during their research. It is most often
presumed that the data collected bear an obvious and straightforward relationship
to the phenomena being investigated. This is not always the case. This relation
between data and phenomena is most often talked about as the problem of validity (as
opposed to reliability). We may have quite reliable experimental procedures that are
inter-subjectively usable, and that yield data that accurately reflect the outcomes of
the experiments, giving us a coherent, self-consistent data set, yet they may have no
validity vis-à-vis the phenomena of interest. They are measurements, but do not
measure what we are really interested in.
In neurobiology the types of data collected usually are measures of the frequency
of some behavior or output deemed relevant to a cell or neuron’s functioning or
resulting from the operation of some mechanism. Very often they are measures of the
frequencies of cell and neuronal firing discharges, in the presence of some controlled
stimulus or input.
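As a minimal illustration of what such a measure amounts to (the spike times and time windows below are invented for the example, not drawn from any study cited here), firing-frequency data reduce to counting discharges per unit time before and after a controlled stimulus:

```python
def firing_rate(spike_times, window_start, window_end):
    """Mean firing rate (spikes per second) within a time window,
    given a list of spike timestamps in seconds."""
    n_spikes = sum(window_start <= t < window_end for t in spike_times)
    return n_spikes / (window_end - window_start)

# Hypothetical recording: a stimulus is presented at t = 1.0 s.
spikes = [0.13, 0.52, 0.91, 1.02, 1.07, 1.11, 1.19, 1.26, 1.33]
baseline = firing_rate(spikes, 0.0, 1.0)  # 3 spikes in 1 s -> 3.0 Hz
evoked = firing_rate(spikes, 1.0, 2.0)    # 6 spikes in 1 s -> 6.0 Hz
```

A change from the baseline rate to the evoked rate is the kind of datum that is then taken to be relevant to the neuron's functioning; whether it validly reflects the phenomenon of interest is exactly the question raised above.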
The major experimental paradigms from which data are collected in cognitive
neuroscience are repeated stimuli, classical and operant conditioning, priming, and
recall and recollection experiments. The theoretical assumptions presupposed by
most experimental paradigms are that repeated stimulation and/or classical or
operant conditioning are the

3 Cf. Bogen, J. and Woodward, J., “Saving the Phenomena,” The Philosophical Review, v. 97, (1988),
pp. 303-352.
operative mechanisms for all learning and memory phenomena. This is a questionable
belief, given that the cognitive revolution of the 60s challenged the adequacy of
exactly these paradigms as sufficient to account for human cognition.
2. Systems and mechanisms
Most scientists explain most phenomena by discovering, elaborating, and studying
mechanisms. In an earlier paper Lindley Darden, Carl Craver and I put forward
a tentative characterizing definition.4 We said:
“Mechanisms are entities and activities organized such that they are productive of
regular changes5 from start or set-up to finish or termination conditions…
Mechanisms are composed of both entities (with their properties) and activities.
Activities are the producers of change… Entities are the things that engage in
activities.”6
Some years on, there is one error in this that needs to be corrected: the word
"regular" should be deleted. It is too law-like, and it is used to make a claim that is just false. There
may be, and most certainly are, mechanisms that operate only once. And since we will
not allow counterfactuals as part of the analysis, we cannot hold that regularity is
necessary. We do not allow counterfactuals because their truth conditions are
unclear or non-existent. By the way, this is not the case for many conditional (if…then)
statements, and we do allow conditional reasoning (who wouldn't?). But conditionals
cannot be part of the analysis of how the mechanism works.
Carl Craver wrote in a recent draft: "The boundaries of what counts as a mechanism
(what is inside and what is outside) are fixed by reference to the phenomenon to be
explained."7 But "fixed" is most often too strong. There are many ways to skin a cat;
and the same protein may be made in so many different ways that it often becomes
impossible to form a generalization (or law) that there is a single "gene" responsible,
or that the mechanism used in a particular case is always the mechanism for making that
protein.8 In most any case in neuroscience, the termination conditions allow for many
alternative paths that would bring them into being. (Carl Hempel recognized this
when talking about functional explanations, when he noted that they lack necessity
because of alternative mechanisms for achieving the same end.)9
A realistic example will help specify this, and further allow us to bring out some
more points. Patricia Churchland and Terrence Sejnowski report some work that was
done on an
4 Cf. Machamer, P. K., Darden, L. and Craver, C. F., "Thinking About Mechanisms," pp. 1-25.
5 I think "regular" should be dropped from the definition. Jim Bogen has argued forcefully that there might be mechanisms that operate only once in a while, or even one that worked only once.
6 Machamer, P. K., Darden, L. and Craver, C. F., "Thinking About Mechanisms," p. 3.
7 Craver, C. F., Explaining the Brain: What a Science of the Mind-Brain Could Be, Oxford University Press, Oxford, forthcoming. He cites Bechtel, W. and Richardson, R., Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research, Princeton University Press, Princeton, 1992, as well as Glennan, S., "Mechanisms and the Nature of Causation," Erkenntnis, v. 44, (1996), pp. 49-71.
8 Cf. Griffiths, P. and Stotz, K., "Representing Genes," ww.pitt.edu/~kstotz/genes/genes.html, and more generally Mitchell, S. D., Biological Complexity and Integrative Pluralism (Cambridge Studies in Philosophy and Biology), Cambridge University Press, Cambridge, 2003.
9 Cf. Hempel, C. G., "The Logic of Functional Analysis," in Gross, Ll. (ed.), Symposium on Sociological Theory, Row, Peterson, Evanston, IL, 1959, pp. 271-307; reprinted in Hempel, C. G., Aspects of Scientific Explanation and other Essays in the Philosophy of Science, Free Press, New York, 1965, pp. 297-330.


avoidance response of the leech.10 The leech is a model organism, they say, because of
the simplicity of its central nervous system, the ease of experimenting on it, and the
availability of the organism (an interesting mix of kinds of criteria). But our focus is
this: there is a basically straightforward decomposition strategy used in examining
how the leech performs this aversive reaction of bending when prodded. We break
large goal-oriented tasks into subtasks performed by subsystems, and the whole is
then the mechanism of the overall system.11 This strategy and the assumptions about
the leech's functioning are teleological at every stage. The teleological tone is set by
the very first description of the case: "The local bending reflex is the behavior of the
leech… is a very simple escape behavior wherein potential harm is recognized and
avoidance action taken. Basically the leech withdraws from an irritating stimulus by
making a kind of large kink in the length of its hoselike body."12
The leech, they report, is a "segmented animal, and within each segment a
ganglion of neurons coordinates behaviors."13 Nine types of interneurons have been
identified, interposed between the sensory and motor neurons, which mediate dorsal
bending by receiving excitatory inputs (from P cells) and sending output to excitatory motor
neurons. They comment, "There are no functional connections between the
interneurons."14 This latter statement means, I presume, that we must treat each
interneuron as an independent part of the mechanism. There is, however, a seeming
paradox that needs to be explained away. Each interneuron has multiple inputs, and
some of these inputs excite motor neurons that activate ventral bending (i.e., bending
towards the stimulus, rather than avoidance away from it). They try to explain this
untoward (non-teleological) activity in terms of its usefulness to other behaviors that
the leech performs, e.g., swimming. What this implies is that some interneurons are
part of a number of different systems having different goal states, which are all
functioning together at any given time.
Similarly, every human sensory path carries multimodal information, part of
which we neglect when we study the mechanisms of the visual system. This means
that what counts as the mechanism for a given end state is partly a function of the
purposes of the investigator, which is what I have called perspectival teleology. But
only partly; the other constraints come from the world, and more
specifically from our background knowledge about the world, which constrains where the
investigator looks, what she studies and tries to isolate and identify, what can be
discovered, and, most importantly, what end or goal state is chosen to be
investigated.
This description is a pastiche of the mechanism-discovery procedures that
Lindley Darden and Carl Craver have begun to work out.15 Notice, though, that the
goal condition, aversive behavior, is taken as an unproblematic natural response, but
it needs to be specified

10 Cf. Churchland, P. and Sejnowski, T., The Computational Brain, Bradford Book, The MIT Press,
Cambridge, MA, 1992, pp. 336f.
11 For decomposition see Bechtel, W. and Richardson, R., Discovering Complexity: Decomposition and
Localization as Strategies in Scientific Research, Chapter 2.
12 Churchland, P. and Sejnowski, T., The Computational Brain, p. 341.
13 The Computational Brain, p. 342.
14 Churchland, P. and Sejnowski, T., The Computational Brain, pp. 343-344.
15 See Craver, C. F. and Darden, L., “Discovering Mechanisms in Neurobiology: The Case of Spatial
Memory,” in Machamer, P., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the
Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001, pp. 112-137; and Craver, C. F.,
“Interlevel Experiments and Multilevel Mechanisms in the Neuroscience of Memory,” Philosophy of
Science, v. 69, (2002), pp. S83-S97.

in more detail. Aversive behavior was selected by the researcher because of its
seemingly typical importance to organisms of many kinds in avoiding pain or
harm. Yet there is no incompatibility in this case between being natural and being
socially or personally chosen. It is both perspectival teleology and natural teleology.
The researcher selects the perspectival goal of the mechanism, although it is a
naturally occurring outcome of a production in nature. It has a natural telos. The
researcher also has to make many decisions as to what to include in the mechanism
that produces the goal state, and these decisions too are constrained by what's there,
what's happening, and what techniques she has available for investigation. One way
to think about the need for such choices in naturalistic contexts is that there are no
closed systems in nature, and so the researcher must put limits on them (close them)
for explanatory and investigatory purposes. This echoes the rationale given for
controlled experiments.

Another dimension of the Churchland and Sejnowski leech case is that the
neurobiological information we have also constrains the building of a
computational model. "The most basic lesson from LeechNet I and LeechNet II is that
neural circuits could be constructed for performing the desired behavior, with
components whose properties are consistent with the limited [anatomical and
neurobiological] data available."16 There seem to be some limits. We noted above the lack of
functional connections among the interneurons. This provides us with an anatomical
constraint on what counts as the mechanism.

More philosophically, we generally do not countenance mechanism explanations for "non-natural," Cambridge complex
properties (e.g., the object or event formed by my mixing a martini today at 6 p.m. and
Giovanni's turning off his Puccini recording yesterday). We do not think we could
ever find any productive connection among these activities. We have no usable
criteria or background knowledge that places these events into a unified system. Our
presumed knowledge at a time is used to specify what counts as a system as well as
which system will be studied. If we wish to study some visual mechanism, our
knowledge (born, as Tom Kuhn said, of our training) of what constitutes the
visual system puts boundaries on where and for what kinds of mechanisms, entities and
activities we may look. Yet we may decide to question that knowledge.

What counts as a proper phenomenon for explanation by mechanisms depends, most usually, on the
criteria of importance that are in place in the discipline at a time. This means there is
always an evaluative component operative in the selection of goal states to be explained.
Yet there are physical systems with equilibrium states, where being in equilibrium or
in homeostasis is the natural goal, and we seek to discover the mechanism by which
such equilibrium is achieved. Here we might be tempted to say that this is a natural
teleological system, the goal given by nature, and we just become interested in it. But
if somehow we established that there were no true equilibrium systems in nature, or that
what we call an aversive response is really a myriad of different systems operating in
different ways, then it is we who would have made the mistake in identifying the
phenomenon of our research and treating it as unitary.

There is another kind of mistake possible, which takes us back to the reasons Churchland and Sejnowski gave for
picking the leech as a model organism. Their second criterion was ease of
experimentation. This surely is a reasonable pragmatic constraint. But it can lead to
error when the experimental paradigms we use are chosen primarily because we
know they may be applied cleanly, rather than because they allow us to explore the
phenomenon

16 Churchland, P. and Sejnowski, T., The Computational Brain, p. 352.


of interest.17 I am raising here questions about the validity of some quite reliable data.
There is no time here to go into the problem in any depth, but consider Churchland
and Sejnowski's characterization of the leech as "recognizing potential danger" when
prodded. The behavior we see is that the leech bends away from the stimulus, and we
call it aversive, and then gloss "aversive" as "response to potential danger." This
description makes the behavior intelligible to us. Such a gloss is probably harmless
enough in this case. But consider the experimental paradigms of learning used in
neurobiology. They come in three types: repeated stimulation, classical conditioning
and reward conditioning. There are problems internal to each of these paradigms that
could lead one to question their use and, especially, the interpretation they provide
for the data that are supposed to exhibit learning. But let me raise a bigger question. At the
time of the cognitive revolution (during the 60s, though one can find harbingers of
this before, e.g. Bartlett 1932),18 a major reason for shifting the paradigm away from
behaviorism was the inadequacy of just these three paradigms for learning. Could it
be that neurobiologists use these paradigms because they can be applied, despite the
fact that much of the learning we are interested in, say in higher primates, cannot be
explained using only these limited kinds?
3. Reduction
All the above topics are related to the old philosophical problem of reduction. In
perhaps its classic form, Schaffner argued that uni-level reduction is the hallmark of a
fully "clarified science,"19 and that its realization requires the satisfaction of two
specific conditions. First, the causal generalizations contained in the explanans of a given
theory must be completely reduced to terminology referring to the processes
occurring at one specific level of aggregation. Secondly, the explanandum must be uni-
level in so far as it contains a generalization statement situated at the same or a
different level of aggregation from the explanans.
Scientists' use of the term "reduction" differs from philosophers' use. Scientists
most often just mean that at least part of the explanation of a phenomenon is provided
by exhibiting the lower-level mechanisms that show how that phenomenon is produced.
The mechanisms operate at different levels and, in some cases, are constitutive
of the entities and activities at higher levels.
In fact, it seems that all or almost all mechanisms are multi-level. If this is so, the
way philosophers see reduction, as reducing “higher” levels to a single lower level, is
impossible. Some philosophers —Darden, Wimsatt, and Schaffner— have recognized
this.20

17 Aspects of this problem have been discussed in Weber, M., "Under the Lamp Post: Comments on Schaffner," in Machamer, P. K., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001, pp. 231-249.
18 Cf. Bartlett, F. C., Remembering: A Study in Experimental and Social Psychology, Cambridge University Press, Cambridge, 1932.
19 Cf. Schaffner, K., Discovery and Explanations in Biology and Medicine, The University of Chicago Press, Chicago, 1993.
20 Cf. Darden, L., "Discovering Mechanisms in Molecular Biology: Finding and Fixing Incompleteness and Incorrectness," in Schickore, J. and Steinle, F. (eds.), Revisiting Discovery and Justification, Max Planck Institute for History of Science, Berlin, 2002, pp. 143-154; Wimsatt, W. C., "Reductionism and its Heuristics: Making Methodological Reductionism Honest," paper for a conference on Reductionism, Institute Jean Nicod, Paris, November 2003, proceedings forthcoming in Synthese; and Schaffner, K., Discovery and Explanations in Biology and Medicine, pp. 296-322.


4. Brain, Mind and Reduction


Since neuroscience is the science of the brain (loosely speaking), most people, it
seems, are interested in what it has to tell us about the mind/body, or perhaps better the
mind/brain, problem. The variations of the answers to this question are well known
in outline to all who have studied philosophy. The major contrast-class answers are,
of course, materialism (or, nowadays, physicalism) and dualism. Materialism basically
holds that mental phenomena do not exist, while dualism claims the ontologically
independent existence of the two domains.
In contemporary times, the mind/brain conflict is usually stated in reductivist
terms. Physicalists, in the extreme, claim that all mental locutions or entities can be
eliminated or reduced to the purely physical. Paul and Patricia Churchland are
qualified holders of this view, while John Bickle is its strongest and most ardent
supporter.21 It is hard to find true philosophical dualists these days, though when the
issue of the nature of consciousness, or sometimes of the qualia of mental states, is raised,
there appear some who argue that consciousness or qualia cannot be reduced to
physical states.22
As I shall discuss more fully below, much of the plausibility of these claims
depends upon what is meant by "reduction" and/or "independence." Further, since
Spinoza there have been a variety of positions seeking to establish a tertium modum
between these two extreme positions. The most favored forms today include
supervenience, whereby a mental-like state is supposed to be a quasi-causal
consequence of a physical state, and so is not truly independent, yet is held to
carry some sort of autonomy.23 A more mystifying version is found in John Searle, who
basically holds that everything is physical and causal, including the mind, but that we
need to talk about the mind independently since we cannot explain anything mental by
anything physical.
I shall not here go into anything more about these BIG questions (for one
reason, see Bechtel).24 I shall have something to say, towards the end of this talk,
about consciousness, and I shall discuss reduction. I shall not say anything further
about the mind/brain-body problem.

5. Information
A ubiquitous concept, information, is found throughout the neuroscience
literature. Its use in science dates back to Claude E. Shannon and Warren Weaver,25
who developed a mathematical concept of information, where information is simply a
calculated probability that some bit of structure at one end of a channel (the sender)
will end up at the other end, at a receiver. This type of information cannot be used to talk
about content, and has quite rigorous criteria for being applied to any communication
system. Almost all of the time, when the term "information" is used in neuroscience it
is not used in this mathematical sense.
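Shannon and Weaver's quantity can be made concrete. The sketch below (a minimal illustration, not drawn from the paper) computes the mutual information between the input and output of a binary symmetric channel: the number of bits about the sender's "structure" that actually arrives at the receiver, with no reference to what the bits are about.

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum p*log2(p), in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

def mutual_information(p_x1, flip):
    """Mutual information I(X;Y) for a binary symmetric channel:
    input X has P(X=1) = p_x1, and each transmitted bit is flipped
    with probability `flip`.  I(X;Y) = H(Y) - H(Y|X)."""
    # P(Y=1): a 1 arrives intact, or a 0 is flipped into a 1
    p_y1 = p_x1 * (1 - flip) + (1 - p_x1) * flip
    h_y = entropy([p_y1, 1 - p_y1])
    h_y_given_x = entropy([flip, 1 - flip])  # same for either input bit
    return h_y - h_y_given_x

# A noiseless channel transmits the full 1 bit of a fair-coin input...
print(mutual_information(0.5, 0.0))  # prints 1.0
# ...while a channel that flips half its bits transmits nothing.
print(mutual_information(0.5, 0.5))  # prints 0.0
```

Note that nothing in this computation mentions content: the same numbers result whether the bits encode warnings of danger or noise, which is exactly the point made above.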

21 Cf. Bickle, J., Psychoneural Reduction, The MIT Press, Cambridge, MA, 1998; and Bickle, J., Philosophy and Neuroscience: A Ruthlessly Reductive Account, Kluwer, Dordrecht-Boston, 2003.
22 Cf. Nagel, T., "What is it like to be a Bat," The Philosophical Review, v. 83, n. 4, (1974), pp. 435-450.
23 Cf. Kim, J., Mind in a Physical World, Cambridge University Press, Cambridge, 1998.
24 Bechtel, W., "Cognitive Neuroscience," in Machamer, P., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001, pp. 81-111.
25 Cf. Shannon, C. and Weaver, W., The Mathematical Theory of Communication, University of Illinois Press, Urbana, 1949; republished in paperback, 1963.


An attempt was made by Fred Dretske to develop a contentful concept of
information.26 Information was supposed to carry meaningful content. But it is
universally agreed that it failed. Most recently, Ruth Millikan has published a new
philosophical account of information,27 but it has not been evaluated as yet.
"Information" has a bad philosophical reputation now. As my former colleague
Paul Griffiths said, information is a metaphor in search of a theory. I think maybe it does not
need a theory.
In neuroscience, mechanisms are said to carry information about various
things. Neurotransmitters are said to carry information across synaptic clefts. More
generally, perceptual systems are said to carry and process information about the
environment. The "cognitivist" J. J. Gibson was one of the pioneers in this attempt to
use information in a non-mathematical, useful way.28
Consider an ancient example: How does an eye become a seeing eye? Answer:
by taking on the form of the object seen. The eye becomes informed by the object. This
is a perfectly good concept of information, and though the theory of the seeing eye and
the knowing mind has changed in many ways, the explanation for vision still proceeds
in somewhat the same way. The eye [somehow to be specified in some degree of
detail] picks up information about the world from an eye-world relation. So perhaps
we ought to say about "information" what Tom Kuhn said about "paradigm," that it
was a perfectly good word until he got hold of it. So too "information" was a perfectly
good, appropriately descriptive word before Shannon and Weaver mathematically
defined it. There is nothing wrong with their probabilistic mathematical concept of
information transmission. It just does not carry the semantic content that most people
assume when using the concept.
Let us look at a house builder for an example of information, and see how its
relation to mechanisms and productive activities might help establish our intuitions.
Let us say the builder starts with a blueprint, on which is recorded the information
about what the owners have approved for their house. Therefore, in some clear sense
the blueprint contains information insofar as it represents the desired house, i.e., the
house actually built as desired is the goal. The blueprint serves the builder as a plan
that, if used properly, can produce the goal. The builder uses the information in the
blueprint to build. The blueprint also serves as part of the normative criteria of
adequacy as to whether the plan has been executed properly.
In this use of information there is no signal nor other pattern passed on
that remains intact from blueprint to actual house during the period of construction.
There is no channel along which anything flows. The information in the blueprint is
used to build the house, but it is not that the blueprint (as information) is somehow
transduced and retains some structural aspect throughout the productive activities
that bring the house into being. (Though, in some cases, there may be such
transductions.) Maybe a similar story could be told about the biological case of
DNA producing RNA, which then collects together specific strings of amino acids
which make specific proteins on specific occasions, but here again the story gets
complicated by the fact that any protein seemingly

26 Cf. Dretske, F., Knowledge and the Flow of Information, The MIT Press, Cambridge, MA, 1981.
27 Cf. Millikan, R., Varieties of Meaning, The MIT Press, Cambridge, MA, 2004.
28 Cf. Gibson, J. J., The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, 1979 (reprinted by Lawrence Erlbaum Associates, 1987).


can be (is) made in a number of different ways. Similar remarks may be made about
synaptic transmission or about the visual system. There is nothing, no structure, that
seems to remain intact and flow along a channel. It is always much more complicated.
We may identify a structure (or content) in the blueprint or, perhaps, in a string
of codons, in that this originating state, somehow to be explained in each case,
represents the goal state or product, and may be actively used by an agent, as
instructions, in order to produce the goal state, though much more information and
many more activities are needed to achieve the goal. The goal state is specified only
partially by the information that is presented in the blueprint, i.e., in a quite limited
way the blueprint presents aspects of what the house will look like when built. It is a
limited 2-dimensional representation of what is to become a 3-dimensional object,
and limited in that many details are missing, many skills needed to execute the
blueprint are not described, etc. Conversely, because of the specificity of structures,
one can compare the house to the blueprint and see, in some respects, how well the
builders followed their plan. But only in some respects; in just those respects where
the information available in the blueprint may be compared with the information
available from looking at the finished house. Similarly, information in DNA is
information for the mRNA, and it carries information about which amino acids, bases,
nucleotides and other chemical structures may collaborate in the construction of a
protein. But this is just to say we have a mechanism. The idea of information is not
really doing any work.
So here is one moral: Information is a noun that describes "things" that provide
information for some use by someone (or something). The major uses of information
are as evidence for an inference, often inductive; a basis for a prediction; a basis for an
expectation; or a basis that directs the subsequent behavior of a mechanism. The
information in the blueprint serves to direct persons' (builders') actions, i.e., those actions
that are the tasks of construction necessary to build a house that looks like the one in the
blueprint. The blueprint portrays how to connect those things, but may not say what
type of nails to use. Information is always information for an activity that will use it,
and "use" means the information directs and controls the activity(ies) or production
to some extent. Maybe we could usefully say that information constrains the activities
involved in the production.
Here is another moral: Things use information for some purpose. So
information systems are always teleological in this sense. So information is always
information about something: information “points towards some end.”
We can distinguish two basic types of purposes: Information for rational use
(as in inferences or as basis for interpretation), and information for mechanical use
(for control or for production).
The information in the blueprint is then information about the house. And this
is true despite the fact that there is no message in the blueprint (so whatever
information is, it is not like a letter or a telegram). The encoding need not send signals,
but the information is for the active producer and is used to produce by the producer.
And we can check the end product, against the information in the starting conditions
to see if it was successfully produced. This means information is always information
about an end stage, and so is a relation between originating conditions and this
termination stage, and the intervening activities are directed by the originating state
towards the final product. So information is teleological.

Contemporary Perspectives in Philosophy and Methodology of Science

Natural mechanisms may follow such selective and productive patterns also.
Information, in this sense, is what directs a mechanism’s activities of production such
that they result in a selected end, where the end bears an appropriate specified
relation to the beginning. So, as said above, it is helpful to think of information as
constraining the possible outcomes by controlling the means and materials of
production. In perception complex features of the environment are actively picked up
by a perceiver and used by that perceiver to guide subsequent behavior. The
information in a DNA segment is about which amino acids are to belong to the protein to be
synthesized, and how those amino acids are to be arranged. But what makes it so is not the
transmission of a signal. Just what information the DNA segment contains depends on
the arrangements of the bases in its codons. DNA and mRNA segments feature
different bases. As protein synthesis proceeds the patterns in which they are arranged
are replaced by new patterns. Even if we could make sense out of the claim that the
DNA contains information about the protein to be synthesized, no signal is
transmitted from the DNA to the structures involved in the later stages of the
synthesis.29
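The codon-dependence of this point can be made concrete with a toy sketch. The genetic-code table below is truncated to the few (real) codon assignments actually used, and the DNA sequence itself is invented; nothing here models the actual machinery of synthesis, only the sense in which the arrangement of bases fixes what the segment is information for:

```python
# Toy illustration: the "information" a DNA segment carries for protein
# synthesis is fixed by the arrangement of bases in its codons, and is
# preserved through transcription even though DNA and mRNA feature
# different bases (T vs. U). The codon table is truncated to the codons
# used below; the template sequence is made up.

DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

# Standard codon assignments for the codons appearing below.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(template_strand):
    """DNA template strand -> mRNA (complementary bases, U in place of T)."""
    return "".join(DNA_TO_MRNA[base] for base in template_strand)

def translate(mrna):
    """Read the mRNA three bases at a time; each codon selects an amino acid."""
    protein = []
    for i in range(0, len(mrna), 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

mrna = transcribe("TACAAACCGATT")
protein = translate(mrna)   # the same bases, differently arranged,
                            # would select a different protein
```

Rearranging the same bases into different codons would select a different protein, which is the sense in which the information “depends on the arrangements of the bases,” with no signal transmitted.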
From this, we might say that the structure of the start-up conditions selects the
subsequent activities and entities that are responsible for bringing into existence
termination conditions (which is our phenomenon of interest). One might be misled
by this way of speaking into thinking that intentional animism has crept in again, with
the concept of responsibility and control and the need to attribute these to the proper
entities and activities that we take to comprise the mechanism. However, this is a
question of what is controlling the production of the end state and what relation that
end state bears to the originating state. Often in such conditions, we are tempted to go
outside of the mechanism and seek larger causes for and a greater specification of why
the termination conditions are a preferred state. But this is true for any selective
processes.30 Noting this feature helps explain the desire to bring in evolution as a
selection principle everywhere we find a mechanism because we think then we have a
“big” reason for the goal. This is the job that God used to do. Unfortunately, in many
cases evolution works no better than God did.
6. Computational Models
Computational models are often used to bridge the “gap” between
neurobiology and gross physiology and behavior. Models present highly abstract sets
of relations that assume that neurons in the brain may be described as functioning
items in digital mechanisms or networks, which, if compounded and manipulated in
the proper ways, may show how possibly a brain can give rise to behavior. We
referred to one such model above, Churchland and Sejnowski’s Leechnet I & II,31
which was a computer model for the leech’s dorsal bending.
Such models obtain their explanatory power because they mediate between
neurobiology and behavior: known or assumed neurobiological data serve as constraints
for the model. Further constraints may come from location data and cognitive tasks.
Such models may be used to direct research into the mechanisms implicated in some
cognitive task.
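The bridging role such a model plays can be sketched in miniature. The following toy network is not Leechnet: the architecture (four sensory units, three interneurons, one motor unit), the behavioral task, and all numbers are illustrative assumptions. It shows only the general strategy of fitting a network’s weights to behavioral data and then treating the fitted weights as a how-possibly hypothesis about the circuitry:

```python
import math
import random

random.seed(0)

# Toy "how-possibly" network: 4 sensory units -> 3 interneurons -> 1 motor
# unit. The architecture and the data are invented for illustration.
N_IN, N_HID = 4, 3

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Random initial synaptic weights.
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w_out = [random.uniform(-1, 1) for _ in range(N_HID)]

def forward(stimulus):
    hidden = [sigmoid(sum(w * s for w, s in zip(row, stimulus))) for row in w_hid]
    output = sigmoid(sum(w * h for w, h in zip(w_out, hidden)))
    return output, hidden

def train(data, lr=0.5, epochs=5000):
    # Gradient descent on squared error: fit the weights to behavioral data.
    for _ in range(epochs):
        for stimulus, target in data:
            out, hidden = forward(stimulus)
            err = (out - target) * out * (1 - out)
            for j in range(N_HID):
                grad_h = err * w_out[j] * hidden[j] * (1 - hidden[j])
                w_out[j] -= lr * err * hidden[j]
                for i in range(N_IN):
                    w_hid[j][i] -= lr * grad_h * stimulus[i]

# Hypothetical behavioral constraint: touch at either end of the body
# (first or last sensory unit) evokes bending (output 1); mid-body does not.
data = [([1, 0, 0, 0], 1), ([0, 1, 0, 0], 0),
        ([0, 0, 1, 0], 0), ([0, 0, 0, 1], 1)]
train(data)
predictions = [round(forward(s)[0]) for s, _ in data]
```

In a real modeling exercise, constraints from neurobiological data (known connectivity, measured response properties) would enter as fixed architecture and weight choices rather than free parameters, which is what gives such models their directive role in research.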

29 For details, see Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K. and Walter, P., The Molecular Biology of the Cell, 4th ed., Garland, New York, 2002, Chapter 7.
30 Cf. Machamer, P. K., “Teleology and Selective Processes,” in Colodny, R. (ed.), Logic, Laws, and Life: Some Philosophical Complications, University of Pittsburgh Press, Pittsburgh, 1977, pp. 129-142.
31 On Churchland and Sejnowski’s Leechnet I & II, cf. Churchland, P. S. and Sejnowski, T., The Computational Brain, passim.


7. Knowledge, representation and memory


An organism is said to have knowledge when it acquires and maintains
information from its environment or some other source (including itself). I would add
the additional condition, that it must be able to use this information in a normatively
appropriate way. In any discourse on knowledge there is a set of topics that both
philosophers and scientists must address. Basically these boil down to (1) acquisition
or learning, (2) maintenance or memory systems, (3) use and (4) normative
constraints that obtain for what is used to count as knowledge. 32
Many studies in neuroscience focus on how learning occurs and how memory
is maintained. It makes little sense to speak of acquisition or learning if what is
acquired or learned is not represented somehow in some of the organism’s
systems. So, any knowledge, the product of learning mechanisms, must be
“represented” in one or more of the memory systems, whether sitting calmly as a
trace, as a somewhat permanent systemic modification of some kind (e.g., perhaps as
a set of long term potentiations at a micro-level or as a revalued cognitive or neural
network at a higher systemic or computational level) or in motor systems
representing a tendency towards or an ability to perform certain actions. This
memorial maintenance requirement does not entail an identity between knowledge
and the representation (or the content of the representation), for there is much
evidence that, at least some kinds of, knowledge as expressed under certain task
conditions comes from constructing or reconstructing by adding to some stored
(represented) elements. The most usual situations of this kind are recall tasks. Think
about the windows on the ground floor of your house. Count them. The
phenomenology of this task suggests strongly that you are constructing this visual
image as you are proceeding through the house, counting as you go. Yet by performing
this counting task, you become aware that you know the number of windows.
It is fairly traditional in today’s neuroscience to identify kinds of knowledge
with kinds of representations in different kinds of memory systems. The classic
cognitive taxonomy for the different systems was: iconic storage; short term
memory; and long term memory.33
In neuroscience the first representations (formerly iconic) are present in the
various subsystems of sensory systems. In perception, information is extracted from
the environment, and processed through various hierarchies of the perceptual
systems. These have outputs that, in cases where long term, declarative memories will
occur, are passed into the working memory system (formerly short term memory).
Working memory is assumed to be at least partially located in the hippocampus.
Working memory is where the binding together of various inputs from the different
sensory systems occurs. From there, perhaps when some sort of threshold has been
attained, the information goes into one of three or four different types of long term
systems: (1) Declarative (long term, encoded in “bits,” or concepts, categories); (2)
Semantic (linguistic; words, propositions); (3) Episodic (autobiographical, relation to
“I”; personal experiences); and (4) Spatial (spatial relations specifying where; a
point-of-view or location system). The shift from working memory into a long term system
calls for memory consolidation.
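As a crude schematic (not a model of any actual neural system), the flow just described might be sketched as follows; the threshold value, the activation counter, and the store names are placeholders for whatever mechanism actually gates consolidation:

```python
# Schematic of the described memory flow: inputs are bound in working
# memory and, once some threshold of activation is reached, are
# consolidated into one of the long-term systems. All values here are
# illustrative placeholders, not empirical claims.

LONG_TERM = {"declarative": [], "semantic": [], "episodic": [], "spatial": []}
CONSOLIDATION_THRESHOLD = 3

working_memory = {}  # bound item -> activation count

def perceive(item):
    """A bound input enters working memory; repetition raises its activation."""
    working_memory[item] = working_memory.get(item, 0) + 1

def consolidate(kind):
    """Items whose activation crosses the threshold move to a long-term system."""
    for item, strength in list(working_memory.items()):
        if strength >= CONSOLIDATION_THRESHOLD:
            LONG_TERM[kind].append(item)
            del working_memory[item]

for _ in range(3):
    perceive("the cat is on the mat")
perceive("fleeting noise")  # encountered once: stays in working memory only

consolidate("declarative")
```

The point of the sketch is only the architecture: working memory is a bottleneck and binding site, and consolidation is a distinct, threshold-gated transition into a long-term store.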
32 A version of part of this section looking at social aspects of learning, memory and knowledge was published in Machamer, P. K. and Osbeck, L., “The Social in the Epistemic,” in Machamer, P. K. and Wolters, G. (eds.), Science, Values and Objectivity, University of Pittsburgh Press, Pittsburgh, 2004, pp. 78-89.
33 Cf. Neisser, U., Cognitive Psychology, Appleton Century Crofts, N. York, 1967.


There is also, most importantly, another memory system labeled (5)
Procedural Memory. There are probably a number of different systems grouped under
this rubric. Procedural memories are formed from having learned how to perform
different sorts of actions or skills. They too must be represented in the bodily system,
but do not at least usually seem to go through the hippocampal system of working
memory. There are also (many) (6) emotional systems. Finally, some sorts of (7)
attention systems have to be considered. Exactly where all these systems are located
in the brain is under some debate. Also there are questions as to whether the types of
systems I have just enumerated are the “right” ones.
Very often network connections in long term memory are modeled as networks
of information, taken as categorical structures or schemata, into which inputs fit and
which change in interconnective relational weight as learning occurs. The connections
in the network then represent inference possibilities (lines of inference) and/or
expectations that may follow from a particular categorization. Also often the memory
schema relates various appropriate actions or motor skills. There are also “off-line”
networks, which are representation systems (perhaps like imagination) that allow us
to process information and draw conclusions without actually acting.
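A minimal sketch of this weighted-network picture, under obvious simplifying assumptions: the items and pairings are invented, and the update rule is a simple saturating Hebbian one, chosen as one common way such interconnective relational weights are modeled.

```python
# Sketch of long-term memory as a network whose connection weights change
# with learning, and whose connections then support expectations. Items,
# pairings, and the learning rate are invented for illustration.

from collections import defaultdict

weights = defaultdict(float)  # association strength between two items

def learn(item_a, item_b, rate=0.1):
    """Co-occurrence strengthens the connection (Hebbian-style update)."""
    key = frozenset((item_a, item_b))
    weights[key] += rate * (1.0 - weights[key])  # saturates toward 1.0

def expectation(cue, candidates):
    """Given a cue, rank candidates by connection weight: the network's
    'lines of inference' from one categorization to others."""
    return max(candidates, key=lambda c: weights[frozenset((cue, c))])

# Repeated experience of thunder following lightning...
for _ in range(20):
    learn("lightning", "thunder")
learn("lightning", "rain")  # a weaker, one-off association

# ...so "lightning" now most strongly raises the expectation of "thunder".
best = expectation("lightning", ["thunder", "rain"])  # → "thunder"
```

Running the same network “off-line,” i.e., querying `expectation` without acting on its output, is one way to picture drawing conclusions without actually acting.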
A few philosophical points need to be drawn from the above descriptions of
memory or knowledge systems. First, from the detail of these different systems it
should be clear that the philosopher’s distinction between knowing how and knowing
that is quite inadequate. Philosophers need to think of many more kinds of knowledge
than they have heretofore considered.
Second, while not directly the concern of most neuroscientists, it is of
philosophical note that in order for anything procedurally or declaratively (or otherwise)
“encoded” in memory to count as knowledge, it must be presupposed that there are
shared social norms of appropriate or correct application of what has been acquired.
These application norms apply to the uses of what has been internalized and function
as criteria as to whether one has acquired knowledge at all. To count as knowledge,
whatever is learned and stored must be able to be used by the knower. It is
epistemologically insufficient to theorize knowledge only in terms of acquisition and
representation. One must also consider how people exhibit that learning has taken
place through appropriate or correct action (including correct uses of language). For
this reason one major research interest in neuroscience is the inextricable connection
between what used to be called afferent and efferent systems, or perception, memory
and motor systems. But this very terminology fails to take into account that in some
sense in perception, memory and knowledge, there is a larger system whose
mechanisms need to become the focus of more research.
Knowledge, in part, is constituted by action, broadly defined, e.g. action here
includes appropriate judgment or use of words. And it is inseparably so constituted.
That is, knowledge always has a constitutive practical part. We might say that every
bit of theoretical knowledge is tied inextricably with practical reason, a thesis that
dates back to Aristotle.34 Importantly, these knowledge acts occur in a social space
wherein their application is publicly judged as appropriate or adequate (or even
correct). One way to see this is to note that truth conditions, at least in practice, are
use conditions, and even those “norms set by the world” (empirical constraints)
structure human practices concerning correct or acceptable use, e.g., of descriptions,
of uses of instruments, or even of the right ways to fix a car. Human practices are
social practices, and the way we humans use truth as a criterion is determined not
only by the world, but also by those traditions we have of inquiring about the world,
of assessing the legitimacy of descriptions and claims made about the world, and of
evaluating actions performed in the world.

34 Cf. Aristotle, Nicomachean Ethics, edited with translation by H. Rackham, Harvard University Press, Cambridge, 1934, book VI, 7, 1141b13ff.
As externalist epistemologists have stressed, there is a need to refocus
philosophical concern to concentrate on reliability (rather than truth). From our
discussion above we may infer that answers about the reliability of memory entail
answers about the reliability of knowledge.
One way to begin to think about reliability and its connection to truth is, what
are the conditions for “reliable” or appropriate assertions (or other kinds of speech
acts)? That is, what conditions must obtain or what presuppositions must be fulfilled
in order to make a speech act reliable or appropriate? What are the criteria of success
for such speech acts? And how are such criteria established? Validated? And changed?
More generally, what evaluative criteria apply to actions? Speech acts are actions. How
do actions show you, who perform them, that they are effective? Appropriate? These
are different questions: you may effectively insult someone, but the insult may be
highly inappropriate. On what grounds are actions judged to be appropriate by other
people?

8. Consciousness: What is the problem?


The problem of consciousness is that it is not one problem. Many things have
been discussed under the idea of consciousness. We have little time remaining, so I
shall be merely provocatively assertive here. The big question about consciousness
can be put: why are certain forms of information used by people displayed in the
qualitative form of conscious experience? Put slightly differently, what can a
person do if perceptual systems display information in these modally specific,
qualitative ways, and how can this information be used in ways that it could not be if it
were made available in some other form? One general answer to this question is
called the conscious workspace hypothesis. Yet another question that gets raised is
what evolutionary advantage such a system might have had, such that it would have been
selected for. Finally, there are some neuroscientists who have been working on trying
to establish the mechanisms that produce this form of information display. But these
issues take us into new realms, and cannot be explored further here.

9. Selected Bibliography (and for further reading)

Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K. and Walter, P., The Molecular Biology
of the Cell, 4th ed., Garland, New York, 2002.
Anscombe, G. E. M., “Causality and Determination,” Inaugural lecture at the University of
Cambridge, 1971; reprinted in Anscombe, G. E. M., Metaphysics and the Philosophy of Mind, The
Collected Philosophical Papers, 1981, vol. 2., University of Minnesota Press, Minneapolis, pp. 133-147.
Aristotle, Nicomachean Ethics, edited with translation by H. Rackham, Harvard University
Press, Cambridge, 1934.


Bartlett, F. C., Remembering: A Study in Experimental and Social Psychology, Cambridge
University Press, Cambridge, 1932.
Bechtel, W., Philosophy of Science. An Overview for Cognitive Science, Lawrence Erlbaum
Associates, Hillsdale, NJ, 1988.
Bechtel, W. and Richardson, R., Discovering Complexity: Decomposition and Localization as
Strategies in Scientific Research, Princeton University Press, Princeton, 1992.
Bechtel, W., “Cognitive Neuroscience,” in Machamer, P., Grush, R. and McLaughlin, P. (eds.),
Theory and Method in the Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001, pp. 81-111.
Bickle, J., Psychoneural Reduction, The MIT Press, Cambridge, MA, 1998.
Bickle, J., Philosophy and Neuroscience: A Ruthlessly Reductive Account, Kluwer, Dordrecht-
Boston, 2003.
Bogen, J. and Woodward, J., “Saving the Phenomena,” The Philosophical Review, v. 97, (1988),
pp. 303-352.
Chalmers, D., The Conscious Mind, Oxford University Press, New York, 1996.
Churchland, P. S., Neurophilosophy: Towards a Unified Science of the Mind-Brain, The MIT
Press, Cambridge, MA, 1986.
Churchland, P. M. and Churchland, P. S., “Intertheoretic Reduction: A Neuroscientist’s Field
Guide,” Seminars in the Neurosciences, v. 2, (1991), pp. 249-256.
Churchland, P. S. and Sejnowski, T., The Computational Brain, Bradford Book, The MIT Press,
Cambridge, MA, 1992.
Craver, C. F., Neural Mechanisms: On the Structure, Function, and Development of Theories in
Neurobiology, Ph.D. Dissertation, University of Pittsburgh, Pittsburgh, PA, 1998.
Craver, C. F., “Role, Mechanisms, and Hierarchy,” Philosophy of Science, v. 68, (2001), pp. 53-74.
Craver, C. F. and Darden, L., “Discovering Mechanisms in Neurobiology: The Case of Spatial
Memory,” in Machamer, P., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the
Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001, pp. 112-137.
Craver, C. F., “Interlevel Experiments and Multilevel Mechanisms in the Neuroscience of
Memory,” Philosophy of Science, v. 69, (2002), pp. S83-S97.
Craver, C. F., Explaining the Brain: What a Science of the Mind-Brain Could Be, Oxford University
Press, Oxford, forthcoming.
Darden, L., “Discovering Mechanisms in Molecular Biology: Finding and Fixing Incompleteness
and Incorrectness,” in Schickore, J. and Steinle, F. (eds.), Revisiting Discovery and Justification, Max
Planck Institute for History of Science, Berlin, 2002, pp. 143-154.
Dennett, D., The Intentional Stance, The MIT Press, Cambridge, MA, 1987.
Dretske, F., Knowledge and the Flow of Information, The MIT Press, Cambridge, MA, 1981.
Fodor, J., “Special Sciences (Or: The Disunity Of Science As A Working Hypothesis),” Synthese, v.
28, (1974), pp. 97-115.
Fodor, J., “Special Sciences: Still Autonomous After All These Years,” in Toberlin, J. (ed.),
Philosophical Perspectives 11: Mind, Causation, and World, Blackwell, Boston, 1997, pp. 149-163.

Glennan, S., “Mechanisms and the Nature of Causation,” Erkenntnis, v. 44, (1996), pp. 49-71.
Glennan, S., “Rethinking Mechanical Explanation,” Philosophy of Science, v. 69, (2002), pp.
S342-S353.
Gibson, J. J., The Senses Considered as Perceptual Systems, Houghton Mifflin, Boston, 1966.
Gibson, J. J., The Ecological Approach to Visual Perception, Houghton Mifflin, Boston, 1979
(reprinted by Lawrence Erlbaum Associates, 1987).
Griffiths, P. and Stotz, K., “Representing Genes,” www.pitt.edu/~kstotz/genes/genes.html.
Hempel, C. G., “The Logic of Functional Analysis,” in Gross, Ll. (ed.), Symposium on Sociological
Theory, P. Row, Evanston, IL, 1959, pp. 271-307; reprinted in Hempel, C. G., Aspects of Scientific
Explanation and other Essays in the Philosophy of Science, Free Press, N. York, 1965, pp. 297-330.
Kim, J., Mind in a Physical World, Cambridge University Press, Cambridge, 1998.
Machamer, P. K., “Gibson and the Conditions of Perception,” in Machamer, P. K. and
Turnbull, R. G. (eds.), Perception: Historical and Philosophical Studies, Ohio State University Press,
Columbus, OH, 1975, pp. 435-466.
Machamer, P. K., “Teleology and Selective Processes,” in Colodny, R. (ed.), Logic, Laws, and
Life: Some Philosophical Complications, University of Pittsburgh Press, Pittsburgh, 1977, pp. 129-142.
Machamer, P. K., Darden, L. and Craver, C. F., “Thinking About Mechanisms,” Philosophy of
Science v. 67, n. 1, (2000), pp. 1-25.
Machamer, P. K., Grush, R. and McLaughlin, P. (eds.), Theory and Method in the
Neurosciences, University of Pittsburgh Press, Pittsburgh, 2001.
Machamer, P. K. and Osbeck, L., “The Social in the Epistemic,” in Machamer, P. K. and
Wolters, G. (eds.), Science, Values and Objectivity, University of Pittsburgh Press, Pittsburgh, 2004, pp.
78-89.
Millikan, R., Varieties of Meaning, The MIT Press, Cambridge, MA, 2004.
Mitchell, S. D., Biological Complexity and Integrative Pluralism (Cambridge Studies in Philosophy
and Biology), Cambridge University Press, Cambridge, 2003.
Nagel, T., “What Is It Like to Be a Bat?,” The Philosophical Review, v. 83, n. 4, (1974), pp. 435-450.
Neisser, U., Cognitive Psychology, Appleton Century Crofts, N. York, 1967.
Richardson, R., “Discussion: How Not to Reduce a Functional Psychology,” Philosophy of
Science, v. 49, (1982), pp. 125-137.
Richardson, R., “Cognitive Science and Neuroscience: New-Wave Reductionism,” Philosophical
Psychology, v. 12, n. 3, (1999), pp. 297-307.
Robinson, H., “Dualism,” in Stich, S. and Warfield, T. (eds.), The Blackwell Guide to Philosophy
of Mind, Blackwell, Oxford, 2003, pp. 85-101.
Robinson, H., “Dualism,” The Stanford Encyclopedia of Philosophy (Fall 2003 Edition), Zalta, E.
N. (ed.), URL =.

Sarkar, S., “Models of Reduction and Categories of Reductionism,” Synthese, v. 91, n. 3, (1992),
pp. 167-194.
Schaffner, K., “Approaches to Reductionism,” Philosophy of Science, v. 34, (1967), pp. 137-147.
Schaffner, K., “Theory Structure, Reduction and Disciplinary Integration in Biology,” Biology
and Philosophy, v. 8, n. 3, (1993), pp. 319-347.
Schaffner, K., Discovery and Explanations in Biology and Medicine, The University of Chicago
Press, Chicago, 1993.
Schaffner, K., “Interactions Among Theory, Experiment, and Technology in Molecular Biology,”
Proceedings of the Biennial Meetings of the Philosophy of Science Association, v. 2, (1994), pp. 192-205.
Schaffner, K., Reductionism and Determinism in Human Genetics: Lessons from Simple
Organisms, University of Notre Dame Press, Notre Dame, IN, 1995.
Schouten, M. and Looren de Jong, H., “Reduction, Elimination, and Levels: The Case of the
LTP-Learning Link,” Philosophical Psychology, v. 12, n. 3, (1999), pp. 237-262.
Shannon, C. and Weaver, W., The Mathematical Theory of Communication, University of Illinois
Press, Urbana, 1949, republished in paperback, 1963.
Smart, J. J. C., “Sensations and Brain Processes,” Philosophical Review, v. 68, (1959), pp. 141-
156.
Weber, M., “Under the Lamp Post: Comments on Schaffner,” in Machamer, P. K., Grush, R.,
and McLaughlin, P. (eds.), Theory and Method in the Neurosciences, University of Pittsburgh Press,
Pittsburgh, 2001, pp. 231-249.
Wimsatt, W., “The Ontology of Complex Systems: Levels of Organization, Perspectives and
Causal Thickets,” Canadian Journal of Philosophy, Supplementary Volume 20, (1994), pp. 207-
274.
Wimsatt, W. C., “Reductionism and its Heuristics: Making Methodological Reductionism
Honest,” paper for a conference on Reductionism, Institute Jean Nicod, Paris, November 2003.
Proceedings forthcoming in Synthese.
Woodward, J., “Data and Phenomena,” Synthese, v. 79, (1989), pp. 393-472.