Consciousness Notes
Explaining the nature of consciousness is one of the most important and perplexing
areas of philosophy, but the concept is notoriously ambiguous. The abstract noun
“consciousness” is not frequently used by itself in the contemporary literature; the
term itself derives from the Latin con (with) and scire (to know). Perhaps the
most commonly used contemporary notion of a conscious mental state is captured
by Thomas Nagel’s famous “what it is like” sense (Nagel 1974). When I am in a
conscious mental state, there is something it is like for me to be in that state from
the subjective or first-person point of view. But how are we to understand this? For
instance, how is the conscious mental state related to the body? Can consciousness
be explained in terms of brain activity? What makes a mental state a conscious
mental state? The problem of consciousness is arguably the most central issue in
current philosophy of mind and is also importantly related to major traditional
topics in metaphysics, such as the possibility of immortality and the belief in free
will. This article focuses on Western theories and conceptions of consciousness,
especially as found in contemporary analytic philosophy of mind.
The two broad, traditional and competing theories of mind are dualism and
materialism (or physicalism). While there are many versions of each, the former
generally holds that the conscious mind or a conscious mental state is non-physical
in some sense, whereas the latter holds that, to put it crudely, the mind is the brain,
or is caused by neural activity. It is against this general backdrop that many
answers to the above questions are formulated and developed. There are also many
familiar objections to both materialism and dualism. For example, it is often said
that materialism cannot truly explain just how or why some brain states are
conscious, and that there is an important “explanatory gap” between mind and
matter. On the other hand, dualism faces the problem of explaining how a non-
physical substance or mental state can causally interact with the physical body.
Interest in the nature of conscious experience has no doubt been around for as long
as there have been reflective humans. It would be impossible here to survey the
entire history, but a few highlights are in order. In the history of Western
philosophy, which is the focus of this entry, important writings on human nature
and the soul and mind go back to ancient philosophers, such as Plato. More
sophisticated work on the nature of consciousness and perception can be found in
the work of Plato’s most famous student Aristotle (see Caston 2002), and then
throughout the later Medieval period. It is, however, with the work of René
Descartes (1596-1650) and his successors in the early modern period of philosophy
that consciousness and the relationship between the mind and body took center
stage. As we shall see, Descartes argued that the mind is a non-physical substance
distinct from the body. He also did not believe in the existence of unconscious
mental states, a view certainly not widely held today. Descartes defined “thinking”
very broadly to include virtually every kind of mental state and urged that
consciousness is essential to thought. Our mental states are, according to
Descartes, infallibly transparent to introspection. John Locke (1689/1975) held a
similar position regarding the connection between mentality and consciousness,
but was far less committed on the exact metaphysical nature of the mind.
Perhaps the most important philosopher of the period explicitly to endorse the
existence of unconscious mental states was G.W. Leibniz (1686/1991, 1720/1925).
Although Leibniz also believed in the immaterial nature of mental substances
(which he called “monads”), he recognized the existence of what he called “petites
perceptions,” which are basically unconscious perceptions. He also importantly
distinguished between perception and apperception, roughly the difference
between outer-directed consciousness and self-consciousness (see Gennaro 1999
for some discussion). The most important detailed theory of mind in the early
modern period was developed by Immanuel Kant. His main work Critique of Pure
Reason (1781/1965) is as dense as it is important, and cannot easily be
summarized in this context. Although he owes a great debt to his immediate
predecessors, Kant is arguably the most important philosopher since Plato and
Aristotle and is highly relevant today. Kant basically thought that an adequate
account of phenomenal consciousness involved far more than any of his
predecessors had considered. There are important mental structures which are
“presupposed” in conscious experience, and Kant presented an elaborate theory as
to what those structures are, which, in turn, had other important implications. He,
like Leibniz, also saw the need to postulate the existence of unconscious mental
states and mechanisms in order to provide an adequate theory of mind (Kitcher
1990 and Brook 1994 are two excellent books on Kant’s theory of mind.)
Over the past one hundred years or so, however, research on consciousness has
taken off in many important directions. In psychology, with the notable exception
of the virtual banishment of consciousness by behaviorist psychologists (e.g.,
Skinner 1953), there were also those deeply interested in consciousness and
various introspective (or “first-person”) methods of investigating the mind. The
writings of such figures as Wilhelm Wundt (1897), William James (1890) and
Edward Titchener (1901) are good examples of this approach. Franz Brentano
(1874/1973) also had a profound effect on some contemporary theories of
consciousness. Similar introspectionist approaches were used by those in the so-
called “phenomenological” tradition in philosophy, such as in the writings of
Edmund Husserl (1913/1931, 1929/1960) and Martin Heidegger (1927/1962). The
work of Sigmund Freud was very important, at minimum, in bringing about the
near universal acceptance of the existence of unconscious mental states and
processes.
It must, however, be kept in mind that none of the above had very much scientific
knowledge about the detailed workings of the brain. The relatively recent
development of neurophysiology is, in part, also responsible for the unprecedented
interdisciplinary research interest in consciousness, particularly since the 1980s.
There are now several important journals devoted entirely to the study of
consciousness: Consciousness and Cognition, Journal of Consciousness Studies,
and Psyche. There are also major annual conferences sponsored by worldwide
professional organizations, such as the Association for the Scientific Study of
Consciousness, and an entire book series called “Advances in Consciousness
Research” published by John Benjamins. (For a small sample of introductory texts
and important anthologies, see Kim 1996, Gennaro 1996b, Block et al. 1997,
Seager 1999, Chalmers 2002, Baars et al. 2003, Blackmore 2004, Campbell 2005,
Velmans and Schneider 2007, Zelazo et al. 2007, Revonsuo 2010.)
3. The Metaphysics of Consciousness: Materialism vs. Dualism
Historically, there is also the clear link between dualism and a belief in
immortality, and hence a more theistic perspective than one tends to find among
materialists. Indeed, belief in dualism is often explicitly theologically motivated. If
the conscious mind is not physical, it seems more plausible to believe in the
possibility of life after bodily death. On the other hand, if conscious mental activity
is identical with brain activity, then it would seem that when all brain activity
ceases, so do all conscious experiences and thus no immortality. After all, what do
many people believe continues after bodily death? Presumably, one’s own
conscious thoughts, memories, experiences, beliefs, and so on. There is perhaps a
similar historical connection to a belief in free will, which is of course a major
topic in its own right. For our purposes, it suffices to say that, on some definitions
of what it is to act freely, such ability seems almost “supernatural” in the sense that
one’s conscious decisions can alter the otherwise deterministic sequence of events
in nature. To put it another way: If we are entirely physical beings as the materialist
holds, then mustn’t all of the brain activity and behavior in question be determined
by the laws of nature? Although materialism may not logically rule out immortality
or free will, materialists will likely often reply that such traditional, perhaps even
outdated or pre-scientific beliefs simply ought to be rejected to the extent that they
conflict with materialism. After all, if the weight of the evidence points toward
materialism and away from dualism, then so much the worse for those related
views.
One might wonder, “Even if the mind is physical, what about the soul?” Maybe it’s
the soul, not the mind, which is non-physical as one might be told in many
religious traditions. While it is true that the term “soul” (or “spirit”) is often used
instead of “mind” in such religious contexts, the problem is that it is unclear just
how the soul is supposed to differ from the mind. The terms are often even used
interchangeably in many historical texts and by many philosophers because it is
unclear what else the soul could be other than “the mental substance.” It is difficult
to describe the soul in any way that doesn’t make it sound like what we mean by
the mind. After all, that’s what many believe goes on after bodily death; namely,
conscious mental activity. Granted, the term “soul” carries a more theological
connotation, but it doesn’t follow that the words “soul” and “mind” refer to entirely
different things. Somewhat related to the issue of immortality, the existence of near
death experiences is also used as some evidence for dualism and immortality. Such
patients experience a peaceful movement toward a light through a tunnel-like
structure, or are able to see doctors working on their bodies while hovering over
them in an emergency room (sometimes akin to what is called an “out of body
experience”). In response, materialists will point out that such experiences can be
artificially induced in various experimental situations, and that starving the brain of
oxygen is known to cause hallucinations.
Three serious objections are briefly worth noting here. The first is simply the
question of just how such radically different substances could causally interact:
how does something non-physical causally interact with something physical, such
as the brain? No such explanation is forthcoming or is perhaps even possible, according
to materialists. Moreover, if causation involves a transfer of energy from cause to
effect, then how is that possible if the mind is really non-physical? Gilbert Ryle
(1949) mockingly calls the Cartesian view about the nature of mind, a belief in the
“ghost in the machine.” Secondly, assuming that some such energy transfer makes
any sense at all, it is also then often alleged that interactionism is inconsistent with
the scientifically well-established Conservation of Energy principle, which says
that the total amount of energy in the universe, or any controlled part of it, remains
constant. So any loss of energy in the cause must be passed along as a
corresponding gain of energy in the effect, as in standard billiard ball examples.
But if interactionism is true, then when mental events cause physical events,
energy would literally come into the physical world. On the other hand, when
bodily events cause mental events, energy would literally go out of the physical
world. At the least, there is a very peculiar and unique notion of energy involved,
unless one wished, even more radically, to deny the conservation principle itself.
Third, some materialists might also use the well-known fact that brain damage
(even to very specific areas of the brain) causes mental defects as a serious
objection to interactionism (and thus as support for materialism). This has of
course been known for many centuries, but the level of detailed knowledge has
increased dramatically in recent years. Now a dualist might reply that such
phenomena do not absolutely refute her metaphysical position since it could be
replied that damage to the brain simply causes corresponding damage to the mind.
However, this raises a host of other questions: Why not opt for the simpler
explanation, i.e., that brain damage causes mental damage because mental
processes simply are brain processes? If the non-physical mind is damaged when
brain damage occurs, how does that leave one’s mind according to the dualist’s
conception of an afterlife? Will the severe amnesic at the end of life on Earth retain
such a deficit in the afterlife? If proper mental functioning still depends on proper
brain functioning, then is dualism really in no better position to offer hope for
immortality?
It should be noted that there is also another less popular form of substance dualism
called parallelism, which denies the causal interaction between the non-physical
mental and physical bodily realms. It seems fair to say that it encounters even more
serious objections than interactionism.
Two other views worth mentioning are epiphenomenalism and panpsychism. The
latter is the somewhat eccentric view that all things in physical reality, even down
to micro-particles, have some mental properties. All substances have a mental
aspect, though it is not always clear exactly how to characterize or test such a
claim. Epiphenomenalism holds that mental events are caused by brain events but
those mental events are mere “epiphenomena” which do not, in turn, cause
anything physical at all, despite appearances to the contrary (for a recent defense,
see Robinson 2004).
Finally, although not a form of dualism, idealism holds that there are only
immaterial mental substances, a view more common in the Eastern tradition. The
most prominent Western proponent of idealism was the 18th-century empiricist George
Berkeley. The idealist agrees with the substance dualist, however, that minds are
non-physical, but then denies the existence of mind-independent physical
substances altogether. Such a view faces a number of serious objections, and it also
requires a belief in the existence of God.
Some form of materialism is probably much more widely held today than in
centuries past. No doubt part of the reason for this has to do with the explosion in
scientific knowledge about the workings of the brain and its intimate connection
with consciousness, including the close connection between brain damage and
various states of consciousness. Brain death is now the main criterion for when
someone dies. Stimulation to specific areas of the brain results in modality-specific
conscious experiences. Indeed, materialism often seems to be a working
assumption in neurophysiology. Imagine saying to a neuroscientist “you are not
really studying the conscious mind itself” when she is examining the workings of
the brain during an fMRI. The idea is that science is showing us that conscious
mental states, such as visual perceptions, are simply identical with certain neuro-
chemical brain processes; much like the science of chemistry taught us that water
just is H2O.
There are also theoretical factors on the side of materialism, such as adherence to
the so-called “principle of simplicity” which says that if two theories can equally
explain a given phenomenon, then we should accept the one which posits fewer
objects or forces. In this case, even if dualism could equally explain consciousness
(which would of course be disputed by materialists), materialism is clearly the
simpler theory in so far as it does not posit any objects or processes over and above
physical ones. Materialists will wonder why there is a need to believe in the
existence of such mysterious non-physical entities. Moreover, in the aftermath of
the Darwinian revolution, it would seem that materialism is on even stronger
ground provided that one accepts basic evolutionary theory and the notion that
most animals are conscious. Given the similarities between the more primitive
parts of the human brain and the brains of other animals, it seems most natural to
conclude that, through evolution, increasing layers of brain areas correspond to
increased mental abilities. For example, having a well-developed prefrontal cortex
allows humans to reason and plan in ways not available to dogs and cats. It also
seems fairly uncontroversial to hold that we should be materialists about the minds
of animals. If so, then it would be odd indeed to hold that non-physical conscious
states suddenly appear on the scene with humans.
There are still, however, a number of much discussed and important objections to
materialism, most of which question the notion that materialism can adequately
explain conscious experience.
Joseph Levine (1983) coined the expression “the explanatory gap” to express a
difficulty for any materialistic attempt to explain consciousness. Although not
concerned to reject the metaphysics of materialism, Levine gives eloquent
expression to the idea that there is a key gap in our ability to explain the
connection between phenomenal properties and brain properties (see also Levine
1993, 2001). The basic problem is that it is, at least at present, very difficult for us
to understand the relationship between brain properties and phenomenal properties
in any explanatorily satisfying way, especially given the fact that it seems possible
for one to be present without the other. There is an odd kind of arbitrariness
involved: Why or how does some particular brain process produce that particular
taste or visual sensation? It is difficult to see any real explanatory connection
between specific conscious states and brain states in a way that explains just how
or why the former are identical with the latter. There is therefore an explanatory
gap between the physical and mental. Levine argues that this difficulty in
explaining consciousness is unique; that is, we do not have similar worries about
other scientific identities, such as that “water is H2O” or that “heat is mean
molecular kinetic energy.” There is “an important sense in which we can’t really
understand how [materialism] could be true.” (2001: 68)
David Chalmers (1995) has articulated a similar worry by using the catchy phrase
“the hard problem of consciousness,” which basically refers to the difficulty of
explaining just how physical processes in the brain give rise to subjective
conscious experiences. “The really hard problem is the problem of experience…
How can we explain why there is something it is like to entertain a mental image,
or to experience an emotion?” (1995: 201) Others have made similar points, as
Chalmers acknowledges, but reference to the phrase “the hard problem” has now
become commonplace in the literature. Unlike Levine, however, Chalmers is much
more inclined to draw anti-materialist metaphysical conclusions from these and
other considerations. Chalmers usefully distinguishes the hard problem of
consciousness from what he calls the (relatively) “easy problems” of
consciousness, such as the ability to discriminate and categorize stimuli, the ability
of a cognitive system to access its own internal states, and the difference between
wakefulness and sleep. The easy problems generally have more to do with the
functions of consciousness, but Chalmers urges that solving them does not touch
the hard problem of phenomenal consciousness. Most philosophers, according to
Chalmers, are really only addressing the easy problems, perhaps merely with
something like Block’s “access consciousness” in mind. Their theories ignore
phenomenal consciousness.
There are many responses by materialists to the above charges, but it is worth
emphasizing that Levine, at least, does not reject the metaphysics of materialism.
Instead, he sees the “explanatory gap [as] primarily an epistemological problem”
(2001: 10). That is, it is primarily a problem having to do with knowledge or
understanding. This concession is still important at least to the extent that one is
concerned with the larger related metaphysical issues discussed in section 3a, such
as the possibility of immortality.
Perhaps most important for the materialist, however, is recognition of the fact that
different concepts can pick out the same property or object in the world (Loar
1990, 1997). Out in the world there is only the one “stuff,” which we can
conceptualize either as “water” or as “H2O.” The traditional distinction, made
most notably by Gottlob Frege in the late 19th century, between “meaning” (or
“sense”) and “reference” is also relevant here. Two or more concepts, which can
have different meanings, can refer to the same property or object, much like
“Venus” and “The Morning Star.” Materialists, then, explain that it is essential to
distinguish between mental properties and our concepts of those properties. By
analogy, there are so-called “phenomenal concepts” which use a phenomenal or
“first-person” property to refer to some conscious mental state, such as a sensation
of red (Alter and Walter 2007). In contrast, we can also use various concepts
couched in physical or neurophysiological terms to refer to that same mental state
from the third-person point of view. There is thus but one conscious mental state
which can be conceptualized in two different ways: either by employing first-
person experiential phenomenal concepts or by employing third-person
neurophysiological concepts. It may then just be a “brute fact” about the world that
there are such identities and the appearance of arbitrariness between brain
properties and mental properties is just that – an apparent problem leading many to
wonder about the alleged explanatory gap. Qualia would then still be identical to
physical properties. Moreover, this response provides a diagnosis for why there
even seems to be such a gap; namely, that we use very different concepts to pick
out the same property. Science will be able, in principle, to close the gap and solve
the hard problem of consciousness in a way analogous to how we came to understand
why “water is H2O” or “heat is mean molecular kinetic energy,” an understanding
that was lacking centuries ago. Maybe the hard problem isn’t so hard after
all – it will just take some more time. After all, the science of chemistry didn’t
develop overnight and we are relatively early in the history of neurophysiology and
our understanding of phenomenal consciousness. (See Shear 1997 for many more
specific responses to the hard problem, but also for Chalmers’ counter-replies.)
Nagel imagines a future where we know everything physical there is to know about
some other conscious creature’s mind, such as a bat. However, it seems clear that
we would still not know something crucial; namely, “what it is like to be a bat.” It
will not do to imagine what it is like for us to be a bat. We would still not know
what it is like to be a bat from the bat’s subjective or first-person point of view. The
idea, then, is that if we accept the hypothesis that we know all of the physical facts
about bat minds, and yet some knowledge about bat minds is left out, then
materialism is inherently flawed when it comes to explaining consciousness. Even
in an ideal future in which everything physical is known by us, something would
still be left out. Jackson’s somewhat similar, but no less influential, argument
begins by asking us to imagine a future where a person, Mary, is kept in a black
and white room from birth during which time she becomes a brilliant
neuroscientist and an expert on color perception. Mary never sees red for example,
but she learns all of the physical facts and everything neurophysiologically about
human color vision. Eventually she is released from the room and sees red for the
first time. Jackson argues that it is clear that Mary comes to learn something new;
namely, to use Nagel’s famous phrase, what it is like to experience red. This is a
new piece of knowledge and hence she must have come to know some non-
physical fact (since, by hypothesis, she already knew all of the physical facts).
Thus, not all knowledge about the conscious mind is physical knowledge.
The influence of these ideas, and the quantity of work they have generated, can
hardly be overstated. Numerous materialist responses to Nagel’s argument have been
presented (such as Van Gulick 1985), and there is now a very useful anthology
devoted entirely to Jackson’s knowledge argument (Ludlow et al. 2004). Some
materialists have wondered if we should concede up front that Mary wouldn’t be
able to imagine the color red even before leaving the room, so that maybe she
wouldn’t even be surprised upon seeing red for the first time. Various suspicions
about the nature and effectiveness of such thought experiments also usually
accompany this response. More commonly, however, materialists reply by arguing
that Mary does not learn a new fact when seeing red for the first time, but rather
learns the same fact in a different way. Recalling the distinction made in section
3b.i between concepts and objects or properties, the materialist will urge that there
is only the one physical fact about color vision, but there are two ways to come to
know it: either by employing neurophysiological concepts or by actually
undergoing the relevant experience and so by employing phenomenal concepts. We
might say that Mary, upon leaving the black and white room, becomes acquainted
with the same neural property as before, but only now from the first-person point
of view. The property itself isn’t new; only the perspective, or what philosophers
sometimes call the “mode of presentation,” is different. In short, coming to learn or
know something new does not entail learning some new fact about the world.
Analogies are again given in other less controversial areas, for example, one can
come to know about some historical fact or event by reading a (reliable) third-
person historical account or by having observed that event oneself. But there is still
only the one objective fact under two different descriptions. Finally, it is crucial to
remember that, according to most, the metaphysics of materialism remains
unaffected. Drawing a metaphysical conclusion from such purely epistemological
premises is always a questionable practice. Nagel’s argument doesn’t show that bat
mental states are not identical with bat brain states. Indeed, a materialist might
even expect the conclusion that Nagel draws; after all, given that our brains are so
different from bat brains, it almost seems natural for there to be certain aspects of
bat experience that we could never fully comprehend. Only the bat actually
undergoes the relevant brain processes. Similarly, Jackson’s argument doesn’t
show that Mary’s color experience is distinct from her brain processes.
McGinn does not entirely rest his argument on past failed attempts at explaining
consciousness in materialist terms; instead, he presents another argument for his
admittedly pessimistic conclusion. McGinn observes that we do not have a mental
faculty that can access both consciousness and the brain. We access consciousness
through introspection or the first-person perspective, but our access to the brain is
through the use of outer spatial senses (e.g., vision) or a more third-person
perspective. Thus we have no way to access both the brain and consciousness
together, and therefore any explanatory link between them is forever beyond our
reach.
Materialist responses are numerous. First, one might wonder why we can’t
combine the two perspectives within certain experimental contexts. Both first-
person and third-person scientific data about the brain and consciousness can be
acquired and used to solve the hard problem. Even if a single person cannot grasp
consciousness from both perspectives at the same time, why can’t a plausible
physicalist theory emerge from such a combined approach? Presumably, McGinn
would say that we are not capable of putting such a theory together in any
appropriate way. Second, despite McGinn’s protests to the contrary, many will
view the problem of explaining consciousness as a merely temporary limit of our
theorizing, and not something which is unsolvable in principle (Dennett 1991).
Third, it may be that McGinn expects too much; namely, grasping some causal link
between the brain and consciousness. After all, if conscious mental states are
simply identical to brain states, then there may simply be a “brute fact” that really
does not need any further explaining. Indeed, this is sometimes also said in
response to the explanatory gap and the hard problem, as we saw earlier. It may
even be that some form of dualism is presupposed in McGinn’s argument, to the
extent that brain states are said to “cause” or “give rise to” consciousness, instead
of using the language of identity. Fourth, McGinn’s analogy to lower animals and
mathematics is not quite accurate. Rats, for example, have no concept whatsoever
of calculus. It is not as if they can grasp it to some extent but just haven’t figured
out the answer to some particular problem within mathematics. Rats are just
completely oblivious to calculus problems. On the other hand, we humans
obviously do have some grasp on consciousness and on the workings of the brain
— just see the references at the end of this entry! It is not clear, then, why we
should accept the extremely pessimistic and universally negative conclusion that
we can never discover the answer to the problem of consciousness, or, more
specifically, why we could never understand the link between consciousness and
the brain.
Unlike many of the above objections to materialism, the appeal to the possibility of
zombies is often taken as both a problem for materialism and as a more positive
argument for some form of dualism, such as property dualism. The philosophical
notion of a “zombie” basically refers to conceivable creatures which are physically
indistinguishable from us but lack consciousness entirely (Chalmers 1996). It
certainly seems logically possible for there to be such creatures: “the conceivability
of zombies seems…obvious to me…While this possibility is probably empirically
impossible, it certainly seems that a coherent situation is described; I can discern
no contradiction in the description” (Chalmers 1996: 96). Philosophers often
contrast what is logically possible (in the sense of “that which is not self-
contradictory”) with what is empirically possible given the actual laws of nature.
Thus, it is logically possible for me to jump fifty feet in the air, but not empirically
possible. Philosophers often use the notion of “possible worlds,” i.e., different
ways that the world might have been, in describing such non-actual situations or
possibilities. The objection, then, typically proceeds from such a possibility to the
conclusion that materialism is false because materialism would seem to rule out
that possibility. It has been fairly widely accepted (since Kripke 1972) that all
identity statements are necessarily true (that is, true in all possible worlds), and the
same should therefore go for mind-brain identity claims. Since the possibility of
zombies shows that mind-brain identity claims are not necessarily true, we should
conclude that materialism is false.
(See Identity Theory.)
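The zombie argument just sketched is often given a schematic modal-logical rendering. The following is a common simplified reconstruction (not Chalmers’ exact formulation), where P abbreviates the conjunction of all physical truths about the world and Q some phenomenal truth:

```latex
% Schematic reconstruction of the zombie argument (simplified)
% P = the conjunction of all physical truths about the world
% Q = some phenomenal truth (e.g., "someone is conscious")
\begin{enumerate}
  \item It is conceivable that $P \land \neg Q$ (zombies are conceivable).
  \item If it is conceivable that $P \land \neg Q$, then it is
        metaphysically possible that $P \land \neg Q$, i.e.,
        $\Diamond(P \land \neg Q)$.
  \item If $\Diamond(P \land \neg Q)$, then materialism is false.
  \item Therefore, materialism is false.
\end{enumerate}
```

The move from conceivability to metaphysical possibility in premise (2) is the step that most materialist replies target.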
It is impossible to do justice to all of the subtleties here. The literature in response
to zombie, and related “conceivability,” arguments is enormous (see, for example,
Hill 1997, Hill and McLaughlin 1999, Papineau 1998, 2002, Balog 1999, Block
and Stalnaker 1999, Loar 1999, Yablo 1999, Perry 2001, Botterell 2001, Kirk
2005). A few lines of reply are as follows: First, it is sometimes objected that the
conceivability of something does not really entail its possibility. Perhaps we can
also conceive of water not being H2O, since there seems to be no logical
contradiction in doing so, but, according to received wisdom from Kripke, that is
really impossible. Perhaps, then, some things just seem possible but really aren’t.
Much of the debate centers on various alleged similarities or dissimilarities
between the mind-brain and water-H2O cases (or other such scientific identities).
Indeed, the entire issue of the exact relationship between “conceivability” and
“possibility” is the subject of an important recently published anthology (Gendler
and Hawthorne 2002). Second, even if zombies are conceivable in the sense of
logically possible, how can we draw a substantial metaphysical conclusion about
the actual world? There is often suspicion on the part of materialists about what, if
anything, such philosophers’ “thought experiments” can teach us about the nature
of our minds. It seems that one could take virtually any philosophical or scientific
theory about almost anything, conceive that it is possibly false, and then conclude
that it is actually false. Something, perhaps, is generally wrong with this way of
reasoning. Third, as we saw earlier (3b.i), there may be a very good reason why
such zombie scenarios seem possible; namely, that we do not (at least, not yet) see
what the necessary connection is between neural events and conscious mental
events. On the one side, we are dealing with scientific third-person concepts and,
on the other, we are employing phenomenal concepts. We are, perhaps, simply
currently not in a position to understand completely such a necessary connection.
v. Varieties of Materialism
Taking the notion of multiple realizability very seriously has also led many to
embrace functionalism, which is the view that conscious mental states should
really only be identified with the functional role they play within an organism. For
example, conscious pains are defined more in terms of input and output, such as
causing bodily damage and avoidance behavior, as well as in terms of their
relationship to other mental states. It is normally viewed as a form of materialism
since virtually all functionalists also believe, like the token-token theorist, that
something physical ultimately realizes that functional state in the organism, but
functionalism does not, by itself, entail that materialism is true. Critics of
functionalism, however, have long argued that such purely functional accounts
cannot adequately explain the essential “feel” of conscious states, or that it seems
possible to have two functionally equivalent creatures, one of whom lacks qualia
entirely (Block 1980a, 1980b, Chalmers 1996; see also Shoemaker 1975, 1981).
Some materialists even deny the existence of mind and mental states
altogether, at least in the sense that the very concept of consciousness is muddled
(Wilkes 1984, 1988) or that the mentalistic notions found in folk psychology, such
as desires and beliefs, will eventually be eliminated and replaced by physicalistic
terms as neurophysiology matures (Churchland 1983). This is meant to be
analogous to past eliminations based on deeper scientific understanding; for
example, we no longer need to speak of “ether” or “phlogiston.” Other
eliminativists, more modestly, argue that there is no such thing as qualia when they
are defined in certain problematic ways (Dennett 1988).
Finally, it should also be noted that not all materialists believe that conscious
mentality can be explained in terms of the physical, at least in the sense that the
former cannot be “reduced” to the latter. Materialism is true as an ontological or
metaphysical doctrine, but facts about the mind cannot be deduced from facts
about the physical world (Boyd 1980, Van Gulick 1992). In some ways, this might
be viewed as a relatively harmless variation on materialist themes, but others
object to the very coherence of this form of materialism (Kim 1987, 1998). Indeed,
the line between such “non-reductive materialism” and property dualism is not
always so easy to draw; partly because the entire notion of “reduction” is
ambiguous and a very complex topic in its own right. On a related front, some
materialists are happy enough to talk about a somewhat weaker “supervenience”
relation between mind and matter. Although “supervenience” is a highly technical
notion with many variations, the idea is basically one of dependence (instead of
identity); for example, that the mental depends on the physical in the sense that any
mental change must be accompanied by some physical change (see Kim 1993).
4. Specific Theories of Consciousness
a. Neural Theories
The more direct reductionist approach can be seen in various, more specific, neural
theories of consciousness. Perhaps best known is the theory offered by Francis
Crick and Christof Koch 1990 (see also Crick 1994, Koch 2004). The basic idea is
that mental states become conscious when large numbers of neurons fire in
synchrony and all have oscillations within the 35-75 hertz range (that is, 35-75
cycles per second). However, many philosophers and scientists have put forth other
candidates for what, specifically, to identify in the brain with consciousness. This
vast enterprise has come to be known as the search for the “neural correlates of
consciousness” or NCCs (see section 5b below for more). The overall idea is to
show how one or more specific kinds of neuro-chemical activity can underlie and
explain conscious mental activity (Metzinger 2000). Of course, mere “correlation”
is not enough for a fully adequate neural theory, and explaining just what counts as
an NCC turns out to be more difficult than one might think (Chalmers 2000). Even
Crick and Koch have acknowledged that they, at best, provide a necessary
condition for consciousness, and that such firing patterns are not automatically
sufficient for having conscious experience.
i. First-Order Representationalism
Whatever the merits and exact nature of the argument from transparency (see Kind
2003), it is clear, of course, that not all mental representations are conscious, so the
key question eventually becomes: What exactly distinguishes conscious from
unconscious mental states (or representations)? What makes a mental state a
conscious mental state? Here Tye defends what he calls “PANIC theory.” The
acronym “PANIC” stands for poised, abstract, non-conceptual, intentional content.
Without probing into every aspect of PANIC theory, Tye holds that at least some of
the representational content in question is non-conceptual (N), which is to say that
the subject can lack the concept for the properties represented by the experience in
question, such as an experience of a certain shade of red that one has never seen
before. Actually, the exact nature or even existence of non-conceptual content of
experience is itself a highly debated and difficult issue in philosophy of mind
(Gunther 2003). Gennaro (2012), for example, defends conceptualism and
connects it in various ways to the higher-order thought theory of consciousness
(see section 4b.ii). Conscious states clearly must also have “intentional content”
(IC) for any representationalist. Tye also asserts that such content is “abstract” (A)
and not necessarily about particular concrete objects. This condition is needed to
handle cases of hallucinations, where there are no concrete objects at all or cases
where different objects look phenomenally alike. Perhaps most important for
mental states to be conscious, however, is that such content must be “poised” (P),
which is an importantly functional notion. The “key idea is that experiences and
feelings…stand ready and available to make a direct impact on beliefs and/or
desires. For example…feeling hungry… has an immediate cognitive effect,
namely, the desire to eat….States with nonconceptual content that are not so poised
lack phenomenal character [because]…they arise too early, as it were, in the
information processing” (Tye 2000: 62).
One objection to Tye’s theory is that it does not really address the hard problem of
phenomenal consciousness (see section 3b.i). This is partly because what really
seems to be doing most of the work on Tye’s PANIC account is the very functional
sounding “poised” notion, which is perhaps closer to Block’s access consciousness
(see section 1) and is therefore not necessarily able to explain phenomenal
consciousness (see Kriegel 2002). In short, it is difficult to see just how Tye’s
PANIC account might not equally apply to unconscious representations and thus
how it really explains phenomenal consciousness.
Other standard objections to Tye’s theory as well as to other FOR accounts include
the concern that it does not cover all kinds of conscious states. Some conscious
states seem not to be “about” anything, such as pains, anxiety, or after-images, and
so would be non-representational conscious states. If so, then conscious experience
cannot generally be explained in terms of representational properties (Block 1996).
Tye responds that pains, itches, and the like do represent, in the sense that they
represent parts of the body. And after-images, hallucinations, and the like either
misrepresent (which is still a kind of representation) or the conscious subject still
takes them to have representational properties from the first-person point of view.
Indeed, Tye (2000) admirably goes to great lengths and argues convincingly in
response to a whole host of alleged counter-examples to representationalism.
Historically among them are various hypothetical cases of inverted qualia (see
Shoemaker 1982), the mere possibility of which is sometimes taken as devastating
to representationalism. These are cases where behaviorally indistinguishable
individuals have inverted color perceptions of objects, such that person A visually
experiences a lemon the way that person B experiences a ripe tomato with respect
to color, and so on for all yellow and red objects. Isn’t it possible that there are
two individuals whose color experiences are inverted with respect to the objects of
perception? (For more on the importance of color in philosophy, see Hardin 1986.)
There are various kinds of HO theory with the most common division between
higher-order thought (HOT) theories and higher-order perception (HOP) theories.
HOT theorists, such as David M. Rosenthal, think it is better to understand the
HOR as a thought of some kind. HOTs are treated as cognitive states involving
some kind of conceptual component. HOP theorists urge that the HOR is a
perceptual or experiential state of some kind (Lycan 1996) which does not require
the kind of conceptual content invoked by HOT theorists. Partly due to Kant
(1781/1965), HOP theory is sometimes referred to as “inner sense theory” as a way
of emphasizing its sensory or perceptual aspect. Although HOT and HOP theorists
agree on the need for a HOR theory of consciousness, they do sometimes argue for
the superiority of their respective positions (such as in Rosenthal 2004, Lycan
2004, and Gennaro 2012). Some philosophers, however, have argued that the
difference between these theories is perhaps not as important or as clear as some
think it is (Güzeldere 1995, Gennaro 1996a, Van Gulick 2000).
A common initial objection to HOR theories is that they are circular and lead to an
infinite regress. It might seem that the HOT theory results in circularity by defining
consciousness in terms of HOTs. It also might seem that an infinite regress results
because a conscious mental state must be accompanied by a HOT, which, in turn,
must be accompanied by another HOT ad infinitum. However, the standard reply is
that when a conscious mental state is a first-order world-directed state the higher-
order thought (HOT) is not itself conscious; otherwise, circularity and an infinite
regress would follow. When the HOT is itself conscious, there is a yet higher-order
(or third-order) thought directed at the second-order state. In this case, we have
introspection which involves a conscious HOT directed at an inner mental state.
When one introspects, one’s attention is directed back into one’s mind. For
example, what makes my desire to write a good entry a conscious first-order desire
is that there is a (non-conscious) HOT directed at the desire. In this case, my
conscious focus is directed at the entry and my computer screen, so I am not
consciously aware of having the HOT from the first-person point of view. When I
introspect that desire, however, I then have a conscious HOT (accompanied by a
yet higher, third-order, HOT) directed at the desire itself (see Rosenthal 1986).
A second objection has been referred to as the “problem of the rock” (Stubenberg
1998) and the “generality problem” (Van Gulick 2000, 2004), but it is originally
due to Alvin Goldman (Goldman 1993). When I have a thought about a rock, it is
certainly not true that the rock becomes conscious. So why should I suppose that a
mental state becomes conscious when I think about it? This is puzzling to many
and the objection forces HO theorists to explain just how adding the HO state
changes an unconscious state into a conscious one. There have been, however, a
number of responses to this kind of objection (Rosenthal 1997, Lycan, 1996, Van
Gulick 2000, 2004, Gennaro 2005, 2012, chapter four). A common theme is that
there is a principled difference in the objects of the HO states in question. Rocks
and the like are not mental states in the first place, and so HO theorists are first and
foremost trying to explain how a mental state becomes conscious. The objects of
the HO states must be “in the head.”
A related and increasingly popular version of representational theory holds that the
meta-psychological state in question should be understood as intrinsic to (or part
of) an overall complex conscious state. This stands in contrast to the standard view
that the HO state is extrinsic to (that is, entirely distinct from) its target mental
state. The assumption, made by Rosenthal for example, about the extrinsic nature
of the meta-thought has increasingly come under attack, and thus various hybrid
representational theories can be found in the literature. One motivation for this
movement is growing dissatisfaction with standard HO theory’s ability to handle
some of the objections addressed in the previous section. Another reason is
renewed interest in a view somewhat closer to the one held by Franz Brentano
(1874/1973) and various other followers, normally associated with the
phenomenological tradition (Husserl 1913/1931, 1929/1960; Sartre 1956; see also
Smith 1986, 2004). To varying degrees, these views have in common the idea that
conscious mental states, in some sense, represent themselves, which then still
involves having a thought about a mental state, just not a distinct or separate state.
Thus, when one has a conscious desire for a cold glass of water, one is also aware
that one is in that very state. The conscious desire both represents the glass of
water and itself. It is this “self-representing” which makes the state conscious.
These theories can go by various names, which sometimes seem in conflict, and
have added significantly in recent years to the acronyms which abound in the
literature. For example, Gennaro (1996a, 2002, 2004, 2006, 2012) has argued that,
when one has a first-order conscious state, the HOT is better viewed as intrinsic to
the target state, so that we have a complex conscious state with parts. Gennaro calls
this the “wide intrinsicality view” (WIV) and he also argues that Jean-Paul Sartre’s
theory of consciousness can be understood in this way (Gennaro 2002). Gennaro
holds that conscious mental states should be understood (as Kant might have put it
today) as global brain states which are combinations of passively received
perceptual input and presupposed higher-order conceptual activity directed at that
input. Higher-order concepts in the meta-psychological thoughts are presupposed
in having first-order conscious states. Robert Van Gulick (2000, 2004, 2006) has
also explored the alternative that the HO state is part of an overall global conscious
state. He calls such states “HOGS” (Higher-Order Global States) whereby a lower-
order unconscious state is “recruited” into a larger state, which becomes conscious
partly due to the implicit self-awareness that one is in the lower-order state. Both
Gennaro and Van Gulick have suggested that conscious states can be understood
materialistically as global states of the brain, and it would be better to treat the
first-order state as part of the larger complex brain state. This general approach is
also forcefully advocated by Uriah Kriegel (Kriegel 2003a, 2003b, 2005, 2006,
2009) and is even the subject of an entire anthology debating its merits (Kriegel
and Williford 2006). Kriegel has used several different names for his “neo-
Brentanian theory,” such as the SOMT (Same-Order Monitoring Theory) and,
more recently, the “self-representational theory of consciousness.” To be sure, the
notion of a mental state representing itself or a mental state with one part
representing another part is in need of further development and is perhaps
somewhat mysterious. Nonetheless, there is agreement among these authors that
conscious mental states are, in some important sense, reflexive or self-directed.
And, once again, there is keen interest in developing this model in a way that
coheres with the latest neurophysiological research on consciousness. A point of
emphasis is on the concept of global meta-representation within a complex brain
state, and attempts are underway to identify just how such an account can be
realized in the brain.
It is worth mentioning that this idea was also briefly explored by Thomas
Metzinger who focused on the fact that consciousness “is something that unifies or
synthesizes experience” (Metzinger 1995: 454). Metzinger calls this the process of
“higher-order binding” and thus uses the acronym HOB. Others who hold some
form of the self-representational view include Kobes (1995), Caston (2002),
Williford (2006), Brook and Raymont (2006), and even Carruthers’ (2000) theory
can be viewed in this light since he contends that conscious states have two
representational contents. Thomas Natsoulas also has a series of papers defending a
similar view, beginning with Natsoulas 1996. Some authors (such as Gennaro
2012) view this hybrid position as a modified version of HOT theory; indeed,
Rosenthal (2004) has called it “intrinsic higher-order theory.” Van Gulick also
clearly wishes to preserve the HO element in his HOGS. Others, such as Kriegel,
are not inclined to call their views “higher-order” at all, preferring labels such as
the “same-order monitoring” or “self-representational” theory of consciousness. To some
extent, this is a terminological dispute, but, despite important similarities, there are
also key subtle differences between these hybrid alternatives. Like HO theorists,
however, those who advocate this general approach all take very seriously the
notion that a conscious mental state M is a state that subject S is (non-inferentially)
aware that S is in. By contrast, one is obviously not aware of one’s unconscious
mental states. Thus, there are various attempts to make sense of and elaborate upon
this key intuition in a way that is, as it were, “in-between” standard FO and HO
theory. (See also Lurz 2003 and 2004 for yet another interesting hybrid account.)
Daniel Dennett (1991, 2005) has put forth what he calls the Multiple Drafts Model
(MDM) of consciousness. Although similar in some ways to representationalism,
Dennett is most concerned that materialists avoid falling prey to what he calls the
“myth of the Cartesian theater,” the notion that there is some privileged place in the
brain where everything comes together to produce conscious experience. Instead,
the MDM holds that all kinds of mental activity occur in the brain by parallel
processes of interpretation, all of which are under frequent revision. The MDM
rejects the idea of some “self” as an inner observer; rather, the self is the product or
construction of a narrative which emerges over time. Dennett is also well known
for rejecting the very assumption that there is a clear line to be drawn between
conscious and unconscious mental states in terms of the problematic notion of
“qualia.” He influentially rejects strong emphasis on any phenomenological or
first-person approach to investigating consciousness, advocating instead what he
calls “heterophenomenology” according to which we should follow a more neutral
path “leading from objective physical science and its insistence on the third person
point of view, to a method of phenomenological description that can (in principle)
do justice to the most private and ineffable subjective experiences.” (1991: 72)
Objections to these cognitive theories include the charge that they do not really
address the hard problem of consciousness (as described in section 3b.i), but only
the “easy” problems. Dennett is also often accused of explaining away
consciousness rather than really explaining it. It is also interesting to think about
Baars’ GWT in light of Block’s distinction between access and phenomenal
consciousness (see section 1). Does Baars’ theory only address access
consciousness instead of the more difficult to explain phenomenal consciousness?
(Two other psychological cognitive theories worth noting are the ones proposed by
George Mandler 1975 and Tim Shallice 1988.)
d. Quantum Approaches
Finally, there are those who look deep beneath the neural level to the field of
quantum mechanics, basically the study of sub-atomic particles, to find the key to
unlocking the mysteries of consciousness. The bizarre world of quantum physics is
quite different from the deterministic world of classical physics, and a major area
of research in its own right. Such authors place the locus of consciousness at a very
fundamental physical level. This somewhat radical, though exciting, option is
explored most notably by physicist Roger Penrose (1989, 1994) and
anesthesiologist Stuart Hameroff (1998). The basic idea is that consciousness
arises through quantum effects which occur in subcellular neural structures known
as microtubules, protein structures that form part of the cell’s cytoskeleton. There are also other
quantum approaches which aim to explain the coherence of consciousness
(Marshall and Zohar 1990) or use the “holistic” nature of quantum mechanics to
explain consciousness (Silberstein 1998, 2001). It is difficult to assess these
somewhat exotic approaches at present. Given the puzzling and often very
counterintuitive nature of quantum physics, it is unclear whether such approaches
will prove genuinely valuable in explaining consciousness.
One concern is simply that these authors are trying to explain one puzzling
phenomenon (consciousness) in terms of another mysterious natural phenomenon
(quantum effects). Thus, the thinking seems to go, perhaps the two are essentially
related somehow and other physicalistic accounts are looking in the wrong place,
such as at the neuro-chemical level. Although many attempts to explain
consciousness rely on conjecture or speculation, quantum approaches may be the
most speculative of all. Of course, this doesn’t mean that some such
theory isn’t correct. One exciting aspect of this approach is the resulting
interdisciplinary interest it has generated among physicists and other scientists in
the problem of consciousness.
As was seen earlier in discussing neural theories of consciousness (section 4a), the
search for the so-called “neural correlates of consciousness” (NCCs) is a major
preoccupation of philosophers and scientists alike (Metzinger 2000). Narrowing
down the precise brain property responsible for consciousness is a different and far
more difficult enterprise than merely holding a generic belief in some form of
materialism. One leading candidate is offered by Francis Crick and Christof Koch
1990 (see also Crick 1994, Koch 2004). The basic idea is that mental states become
conscious when large numbers of neurons all fire in synchrony with one another
(oscillations within the 35-75 hertz range or 35-75 cycles per second). Currently,
one method used is simply to study some aspect of neural functioning with
sophisticated detecting equipment (such as MRIs and PET scans) and then
correlate it with first-person reports of conscious experience. Another method is to
study the difference in brain activity between those under anesthesia and those not
under any such influence. A detailed survey would be impossible to give here, but
a number of other candidates for the NCC have emerged over the past two decades,
including reentrant cortical feedback loops in the neural circuitry throughout the
brain (Edelman 1989, Edelman and Tononi 2000), NMDA-mediated transient
neural assemblies (Flohr 1995), and emotive somatosensory homeostatic processes
in the frontal lobe (Damasio 1999). To elaborate briefly on Flohr’s theory, the idea
is that anesthetics destroy conscious mental activity because they interfere with the
functioning of NMDA synapses between neurons, that is, those dependent on
N-methyl-D-aspartate receptors. These and other NCCs are explored
at length in Metzinger (2000). Ongoing investigation along these lines remains an
important aspect of current scientific research in the field.
One problem with some of the above candidates is determining exactly how they
are related to consciousness. For example, although a case can be made that some
of them are necessary for conscious mentality, it is unclear that they are sufficient.
That is, some of the above seem to occur unconsciously as well. And pinning down
a narrow enough necessary condition is not as easy as it might seem. Another
general worry is with the very use of the term “correlate.” As any philosopher,
scientist, and even undergraduate student should know, saying that “A is correlated
with B” is rather weak (though it is an important first step), especially if one
wishes to establish the stronger identity claim between consciousness and neural
activity. Even if such a correlation can be established, we cannot automatically
conclude that there is an identity relation. Perhaps A causes B or B causes A, and
that’s why we find the correlation. Even most dualists can accept such
interpretations. Maybe there is some other neural process C which causes both A
and B. “Correlation” is not even the same as “cause,” let alone enough to establish
“identity.” Finally, some NCCs are not even necessarily put forth as candidates for
all conscious states, but rather for certain specific kinds of consciousness (e.g.,
visual).
c. Philosophical Psychopathology
Philosophers have long been intrigued by disorders of the mind and consciousness.
Part of the interest is presumably that if we can understand how consciousness
goes wrong, then that can help us to theorize about the normally functioning mind.
Going back at least as far as John Locke (1689/1975), there has been some
discussion about the philosophical implications of multiple personality disorder
(MPD) which is now called “dissociative identity disorder” (DID). Questions
abound: Could there be two centers of consciousness in one body? What makes a
person the same person over time? What makes a person a person at any given
time? These questions are closely linked to the traditional philosophical problem of
personal identity, which is also importantly related to some aspects of
consciousness research. Much the same can be said for memory disorders, such as
various forms of amnesia (see Gennaro 1996a, chapter 9). Does consciousness
require some kind of autobiographical memory or psychological continuity? On a
related front, there is significant interest in experimental results from patients who
have undergone a commissurotomy, which is usually performed to relieve
symptoms of severe epilepsy when all else fails. During this procedure, the nerve
fibers connecting the two brain hemispheres are cut, resulting in so-called “split-
brain” patients (Bayne 2010).
Philosophical interest is so high that there is now a book series called Philosophical
Psychopathology published by MIT Press. Another rich source of information
comes from the provocative and accessible writings of neurologists on a whole
host of psychopathologies, most notably Oliver Sacks (starting with his 1987 book)
and, more recently, V. S. Ramachandran (2004; see also Ramachandran and
Blakeslee 1998). Another launching point came from the discovery of the
phenomenon known as “blindsight” (Weiskrantz 1986), which is very frequently
discussed in the philosophical literature regarding its implications for
consciousness. Blindsight patients are blind in a well-defined part of the visual
field (due to cortical damage) but, when forced to guess, can identify, with a higher
than expected degree of accuracy, the location or orientation of an object in the
blind field.
There is also philosophical interest in many other disorders, such as phantom limb
pain (where one feels pain in a missing or amputated limb), various agnosias (such
as visual agnosia where one is not capable of visually recognizing everyday
objects), and anosognosia (which is denial of illness, such as when one claims that
a paralyzed limb is still functioning, or when one denies that one is blind). These
phenomena raise a number of important philosophical questions and have forced
philosophers to rethink some very basic assumptions about the nature of mind and
consciousness. Much has also recently been learned about autism and various
forms of schizophrenia. A common view is that these disorders involve some kind
of deficit in self-consciousness or in one’s ability to use certain self-concepts. (For
a nice review article, see Graham 2002.) Synesthesia is also a fascinating abnormal
phenomenon, although not really a “pathological” condition as such (Cytowic
2003). Those with synesthesia literally have taste sensations when seeing certain
shapes or have color sensations when hearing certain sounds. It is thus an often
bizarre mixing of incoming sensory input via different modalities.
One of the exciting results of this relatively new sub-field is the important
interdisciplinary interest that it has generated among philosophers, psychologists,
and scientists (such as in Graham 2010, Hirstein 2005, and Radden 2004).
Two final areas of interest involve animal and machine consciousness. In the
former case it is clear that we have come a long way from the Cartesian view that
animals are mere “automata” and that they do not even have conscious experience
(perhaps partly because they do not have immortal souls). In addition to the
obviously significant behavioral similarities between humans and many animals,
much more is known today about other physiological similarities, such as brain and
DNA structures. To be sure, there are important differences as well and there are,
no doubt, some genuinely difficult “grey areas” where one might have legitimate
doubts about the consciousness of some animals or organisms, such as small
rodents, some birds and fish, and especially various insects. Nonetheless, it seems fair to say that
most philosophers today readily accept the fact that a significant portion of the
animal kingdom is capable of having conscious mental states, though there are still
notable exceptions to that rule (Carruthers 2000, 2005). Of course, this is not to say
that various animals can have all of the same kinds of sophisticated conscious
states enjoyed by human beings, such as reflecting on philosophical and
mathematical problems, enjoying artworks, thinking about the vast universe or the
distant past, and so on. However, it still seems reasonable to believe that animals
can have at least some conscious states from rudimentary pains to various
perceptual states and perhaps even to some level of self-consciousness. A number
of key areas are under continuing investigation. For example, to what extent can
animals recognize themselves, such as in a mirror, in order to demonstrate some
level of self-awareness? To what extent can animals deceive or empathize with
other animals, either of which would indicate awareness of the minds of others?
These and other important questions are at the center of much current theorizing
about animal cognition. (See Keenan et al. 2003 and Bekoff et al. 2002.) In some
ways, the problem of knowing about animal minds is an interesting sub-area of the
traditional epistemological “problem of other minds”: How do we even know that
other humans have conscious minds? What justifies such a belief?
The possibility of machine (or robot) consciousness has intrigued philosophers and
non-philosophers alike for decades. Could a machine really think or be conscious?
Could a robot really subjectively experience the smelling of a rose or the feeling of
pain? One important early launching point was a well-known paper by the
mathematician Alan Turing (1950) which proposed what has come to be known as
the “Turing test” for machine intelligence and thought (and perhaps consciousness
as well). The basic idea is that if a machine could fool an interrogator (who could
not see the machine) into thinking that it was human, then we should say it thinks
or, at least, has intelligence. However, Turing was probably overly optimistic about
whether anything even today can pass the Turing Test, as most programs are
specialized and have very narrow uses. One cannot ask the machine about virtually
anything, as Turing had envisioned. Moreover, even if a machine or robot could
pass the Turing Test, many remain skeptical as to whether this
demonstrates genuine machine thinking, let alone consciousness. For one thing,
many philosophers would not take such purely behavioral (e.g., linguistic)
evidence to support the conclusion that machines are capable of having
phenomenal first person experiences. Merely using words like “red” doesn’t ensure
that there is the corresponding sensation of red or real grasp of the meaning of
“red.” Turing himself considered numerous objections and offered his own replies,
many of which are still debated today.
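The imitation-game setup Turing describes can be sketched as a simple question-and-answer exchange. The canned responses and the `interrogate` helper below are hypothetical illustrations, not details from Turing's paper; the narrowness of the answer table is meant to show why most actual programs, being specialized, fall short of the open-ended conversation Turing envisioned.

```python
# Minimal sketch of a Turing-test-style exchange (hypothetical details):
# an interrogator poses questions to a hidden respondent and must judge,
# from the text alone, whether the answers come from a human or a machine.

def machine_respondent(question):
    # A narrowly specialized program with canned answers -- illustrating
    # why an open-ended interrogation is so hard for real programs to pass.
    canned = {
        "What is 2 + 2?": "4",
        "Do you like poetry?": "I enjoy a good sonnet.",
    }
    return canned.get(question, "I'm not sure what you mean.")

def interrogate(respondent, questions):
    # The interrogator sees only the text of the answers, never the source.
    return [(q, respondent(q)) for q in questions]

transcript = interrogate(machine_respondent,
                         ["What is 2 + 2?", "Why is the sky blue?"])
for q, a in transcript:
    print(f"Q: {q}\nA: {a}")
```

Any question outside the table exposes the program at once, which is Turing's envisioned test working as intended.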
Another much discussed argument is John Searle’s (1980) famous Chinese Room
Argument, which has spawned an enormous amount of literature since its original
publication (see also Searle 1984; Preston and Bishop 2002). Searle is concerned to
reject what he calls “strong AI” which is the view that suitably programmed
computers literally have a mind, that is, they really understand language and
actually have other mental capacities similar to humans. This is contrasted with
“weak AI” which is the view that computers are merely useful tools for studying
the mind. The gist of Searle’s argument is that he imagines himself running a
program for using Chinese and then shows that he does not understand Chinese;
therefore, strong AI is false; that is, running the program does not result in any real
understanding (or thought or consciousness, by implication). Searle supports his
argument against strong AI by utilizing a thought experiment whereby he is in a
room and follows English instructions for manipulating Chinese symbols in order
to produce appropriate answers to questions in Chinese. Searle argues that, despite
the appearance of understanding Chinese (say, from outside the room), he does not
understand Chinese at all. He does not thereby know Chinese, but is merely
manipulating symbols on the basis of syntax alone. Since this is what computers
do, no computer, merely by following a program, genuinely understands anything.
Searle replies to numerous possible criticisms in his original paper (which also
comes with extensive peer commentary), but suffice it to say that not everyone is
satisfied with his responses. For example, it might be argued that, if we take
Searle’s analogy and thought experiment on its own terms, the entire room or
“system” understands Chinese. No individual part of the room understands Chinese
(including Searle himself), but the entire system, which includes the instructions
and so on, does.
Searle’s larger argument, however, is that one cannot get semantics (meaning) from
syntax (formal symbol manipulation).
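The purely syntactic rule-following Searle describes can be pictured as a lookup table: a “rulebook” pairs incoming symbol strings with outgoing ones, and the operator applies it with no grasp of what any symbol means. The symbols and rules below are invented placeholders, not actual Chinese or Searle's own example.

```python
# Sketch of Searle's room: symbol strings go in, a rulebook pairs them with
# symbol strings that come out. The entries are invented placeholders
# standing in for the English instructions; nothing in the process
# involves meaning -- only the "shape" of the input is consulted.

RULEBOOK = {
    # "If you receive this squiggle-string, send back that squiggle-string."
    "symbols-A": "symbols-B",
    "symbols-C": "symbols-D",
}

def room(input_symbols):
    # Pure syntax: match the form of the input, return the paired output.
    # Neither this function nor its operator "understands" the symbols.
    return RULEBOOK.get(input_symbols, "symbols-default")

print(room("symbols-A"))  # an outside observer sees an apt "answer"
```

From outside, the room's outputs may look like competent answers; Searle's point is that everything inside reduces to this kind of form-matching, with semantics nowhere in the process.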
Despite heavy criticism of the argument, two central issues are raised by Searle
which continue to be of deep interest. First, how and when does one distinguish
mere “simulation” of some mental activity from genuine “duplication”? Searle’s
view is that computers are, at best, merely simulating understanding and thought,
not really duplicating it. Much like we might say that a computerized hurricane
simulation does not duplicate a real hurricane, Searle insists the same goes for any
alleged computer “mental” activity. We do after all distinguish between real
diamonds or leather and mere simulations which are just not the real thing. Second,
and perhaps even more important, when considering just why computers really
can’t think or be conscious, Searle interestingly reverts back to a biologically based
argument. In essence, he says that computers or robots are just not made of the
right stuff with the right kind of “causal powers” to produce genuine thought or
consciousness. After all, even a materialist does not have to allow that any kind of
physical stuff can produce consciousness any more than any type of physical
substance can, say, conduct electricity. Of course, this raises a whole host of other
questions which go to the heart of the metaphysics of consciousness. To what
extent must an organism or system be physiologically like us in order to be
conscious? Why is having a certain biological or chemical make up necessary for
consciousness? Why exactly couldn’t an appropriately built robot be capable of
having conscious mental states? How could we even know either way? However
one answers these questions, it seems that building a truly conscious Commander
Data is, at best, still just science fiction.
In any case, the growing areas of cognitive science and artificial intelligence are
major fields within philosophy of mind and can importantly bear on philosophical
questions of consciousness. Much of current research focuses on how to program a
computer to model the workings of the human brain, such as with so-called “neural
(or connectionist) networks.”
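As a concrete illustration of what a connectionist model looks like at the smallest scale, here is a single forward pass through a tiny feedforward network. The layout and the hand-chosen weights are purely illustrative (real connectionist models learn their weights from data), and nothing here is offered as a model of any actual brain circuit.

```python
import math

# A minimal connectionist ("neural network") sketch: a 2-2-1 feedforward
# net with hand-chosen weights that computes XOR. Each unit sums weighted
# inputs plus a bias and passes the result through a nonlinear activation.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2):
    # Hidden layer: two units with fixed, illustrative weights.
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # roughly "x1 OR x2"
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # roughly "NOT (x1 AND x2)"
    # Output unit combines the hidden activations (roughly "h1 AND h2").
    return sigmoid(20 * h1 + 20 * h2 - 30)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward(a, b)))
```

The point of such models is that the interesting behavior (here, a function no single unit computes) emerges from many simple, brain-inspired units acting together, which is why they interest researchers modeling cognition.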