Philosophy and Computing
Essays in Epistemology, Philosophy of Mind, Logic, and Ethics
Philosophical Studies Series
Volume 128
Editors-in-Chief
Luciano Floridi, University of Oxford, Oxford Internet Institute, United Kingdom
Mariarosaria Taddeo, University of Oxford, Oxford Internet Institute, United Kingdom
Editor
Thomas M. Powers
Department of Philosophy
University of Delaware
Newark, DE, USA
Contents
11 Big Data, Digital Traces and the Metaphysics of the Self (Soraj Hongladarom)
12 Why Big Data Needs the Virtues (Frances S. Grodzinsky)
Index
Chapter 1
Introduction: Intersecting Traditions
in the Philosophy of Computing
Thomas M. Powers
This volume consists of selected papers from the first-ever joint meeting of the
Computer Ethics-Philosophical Enquiry conference series of the International
Society for Ethics and Information Technology and the International Association
for Computing and Philosophy, held at the University of Delaware, June 22–25,
2015. The organizing themes of
the conference are well represented in the volume. They include theoretical topics
at the intersection of computing and philosophy, with essays that explore current
issues in epistemology, philosophy of mind, logic, and philosophy of science, as
well as normative topics on matters of ethical, social, economic, and political
import. All of the essays provide views of their subject matter through the lens of
computation.
Two general types of question motivate the foregoing essays. First, how has
computation changed philosophical inquiry into the nature of mind and cognition?
Second, how can we come to terms with the ethical and political aspects of the
computer and information-technology revolution? It is worth noting that these
questions—though they have lingered on the surface and beneath many philosoph-
ical discussions for decades (and in some cases, for centuries)—are given a new
treatment by the authors precisely because of recent developments in the science
and technology of computation. Almost all philosophers know the general landscape
of these questions well—What is the nature of explanation? What is thought?
How does language represent the world? What is it to act in such a way as to be
responsible? Formerly, answers to these questions placed humans at the center of
such inquiries; philosophy was concerned with explanations given by humans and
for humans. We considered—as though it were tautological—only the possibility of
human (natural) languages, learning, thought, agency, and the like. But philosophy
cannot ignore the fact that computational machines are capable of joining us at the
center of this inquiry, and thus we can now ponder machine language, learning,
thought, agency, and other quite revolutionary concepts.
The impetus for the new treatments in these essays comes from startling
developments in contemporary technologies which allow pervasive information
gathering and analyses far beyond human capacities—the era of Big Data—and new
scientific insights into our own cognition. We are also beginning to see sophisticated,
autonomous, and deadly functionality in machines, and there is even talk of the
possibility of a “post-human” society due to the convergence of genetics and
genomics with cognitive neuroscience and information technologies. So indeed, we
are entering uncharted territory for science, ethics, politics, and civilization.
That philosophy would once again be forced to reconstitute itself because of
developments in science and technology would have been well appreciated by
Descartes, through his study of microscopy, and by Kant, whose grounding of a
critical metaphysics acknowledged the contributions of Newton's mechanics. Indeed,
the essays in this volume also fall in the tradition of philosophy reconstituting itself,
as described in Floridi’s “fourth revolution.”
The virtue of a collection that ranges over philosophical questions, as this
one does, lies in the prospects for a more integrated understanding of issues. These
are early days in the partnership between philosophy and information technology,
and many foundational issues are still being sorted out. It is to be expected that many
of the tools of philosophy will have to be deployed to establish this foundation, and
this volume admirably showcases those tools in the hands of scholars. Here briefly
is what the reader can expect.
In his essay "Levels of Computational Explanation," Michael Rescorla analyzes
three levels in describing computing systems (representational, syntactic, and
physical) and argues that syntactic description is key because it mediates between
the representational description and physical construction of artificial systems.
However, while this three-level view works well for artificial computing systems,
it is inadequate for natural computing systems (e.g., humans) on his view. By
focusing on explanatory practice in cognitive science, he brings to the foreground
the pragmatic advantages of syntactic description for natural computing systems.
In “On the Relation of Computing to the World” William J. Rapaport surveys a
multitude of ways of construing said relation. One would be hard-pressed to find
a stone unturned here: questions of semantic and syntactic computation (and even
syntactic semantics), the nature and teleology of algorithms, program verification
and the limits of computation, symbols and meanings in computation, and inputs
and outputs in a Turing machine are all considered.
Paul Schweizer’s “Cognitive Computation sans Representation” addresses what
he considers to be the deep incompatibility between the computational and the
representational theories of mind (CTM vs. RTM). Attempts to join them, he argues,
are destined to fail because the unique representational content claimed by RTM is
superfluous to the formal procedures of CTM. In a computing mechanism, after
syntactic encoding “it’s pure syntax all the way down to the level of physical
implementation.”
In “Software Error as a Limit to Inquiry for Finite Agents: Challenges for the
Post-human Scientist” John F. Symons and Jack K. Horner revisit C.S. Peirce’s
intuition about truth being the output of inquiry in the limit by considering the
limits for the future of software-driven science. They use their previously-developed
concept of a finite post-human agent, constrained “only by mathematics and the
temporal finiteness of the universe,” to argue that greater use of software or other
rule-governed procedures will lead to decreased ability to control for error in
scientific inquiry.
Concerns about a post-human world are also at the center of Selmer and
Alexander Bringsjord’s “The Singularity Business: Toward a Realistic, Fine-grained
Economics for an AI-Infused World.” Here they are interested literally in business
and economic questions that will arise in a future with “high-but-sub-human-level
artificial intelligence.” They investigate the implications of an artificial intelligence
that is impressive and useful, from the standpoint of contemporary science and
engineering, but falls well short of the awareness or self-consciousness promised
by The Singularity. They argue that even this “minimal” artificial intelligence will
have tremendous labor market and social implications, arising from the tasks to
which it can be set.
In “Artificial Moral Cognition: Moral Functionalism and Autonomous Moral
Agency” by Don Howard and Ioan Muntean, we find a model of an Artificial
Autonomous Moral Agent (AAMA) that will engage in moral cognition by learning
moral-behavioral patterns from data. Unlike a rule-following autonomous agent,
their AAMA will be based on “soft computing” methods of neural networks and
evolutionary computation. They conceive of the resulting agent as an example of
virtue ethics for machines, having a certain level of autonomy and complexity, and
being capable of particularistic moral judgments. They apply the concept of the
AAMA to Hardin’s metaphor of the “lifeboat” in the ethics of choice with limited
resources.
Similarly, Shannon Vallor takes a virtue ethics approach to questions of machine
and robotic ethics in her “AI and the Automation of Wisdom.” Like the Bringsjords,
Vallor is concerned with economic, political, and technological implications—here,
primarily, for the future development of human wisdom in a world that may seem
to eschew it. Knowledge and skills in the coming workforce will change to meet
societal needs, as they always have, but more troubling is the threat posed by an
“algorithmic displacement of human wisdom.” Drawing on sources from Aristotle
to the present, she forewarns of a weakened ability to rise to our environmental,
civic, and political responsibilities.
Mario Verdicchio presents “An Analysis of Machine Ethics from the Perspective
of Autonomy” in order to urge a return to what he calls classic ethics to guide
researchers in machine ethics. He rejects the call for a new discipline of ethics for
machines—one that would focus on embedding ethical principles into programs. He
argues instead that industry-driven standards for robotic safety are sufficient, and
that nothing in robotics presents a fundamental challenge to the ethics of design;
rather, new machine capabilities show us why we ought to focus on the traditional
(classic) ethics that guide human choices. While he acknowledges that cutting-edge
robots may have a higher degree of autonomy than those in the past, he does not
think that such autonomy is sufficient to require a new ethics discipline.
We turn to questions of research ethics in the era of Big Data with “Beyond
Informed Consent: Investigating Ethical Justifications for Disclosing, Donating or
Sharing Personal Data in Research” by Markus Christen, Josep Domingo-Ferrer,
Dominik Herrmann, and Jeroen van den Hoven. In this essay the authors consider
how the modern digital research ecosystem challenges notions of informed consent,
control of personal data, and protection of research subjects from unintended effects
of research in a rapidly changing social and scientific landscape. They develop
arguments around three core values—autonomy, fairness and responsibility—to
show how an active community of research participants can be educated through
and involved in research over time. Such a community would enable user-centric
management of personal data, including generation, publication, control, exploita-
tion, and self-protection.
Soraj Hongladarom turns to ontological concerns in “Big Data, Digital Traces
and the Metaphysics of the Self.” This investigation begins with a conception of
self and identity that depends on one’s Internet activity. Identity is constituted by
“how one leaves her traces digitally on the Internet.” This view borrows from
the well-known “extended mind” thesis of Chalmers and Clark and issues in
a conception of the (digitally) extended self. Hongladarom suggests that these
traces—the distributed parts of the ontologically-whole self—nonetheless belong
to the owner. Thus, they deserve protection and generate strong claims of privacy
and respect for individuals.
In the final essay of this volume, “Why Big Data Needs the Virtues” by Frances
S. Grodzinsky, we encounter an analysis of Big Data and its value, with an argument
on how it can be harnessed for good. Grodzinsky starts with an account of Big Data’s
volume, velocity, variety, and veracity. She goes on to critique claims that statistical
correlations in Big Data are free of theory, ready to be gleaned from data sets. Turning
to the ethics of the “Big Data Scientist,” she sketches a virtuous epistemic agent
who incorporates both virtue ethics and virtue epistemology. Such an agent will be
well placed, she believes, to act responsibly in using Big Data for socially beneficial
ends.
Through analyses of the foregoing issues, the philosophical work in these
chapters promises to clarify important questions and help develop new lines of
research. It is hoped that readers will find much of value in these intersecting
traditions in philosophy and computing.
Chapter 2
Levels of Computational Explanation
Michael Rescorla
Department of Philosophy, University of California, Los Angeles, CA, USA
Abstract It is widely agreed that one can fruitfully describe a computing system
at various levels. Discussion typically centers on three levels: the representational
level, the syntactic level, and the hardware level. I will argue that the three-
level picture works well for artificial computing systems (i.e. computing systems
designed and built by intelligent agents) but less well for natural computing systems
(i.e. computing systems that arise in nature without design or construction by
intelligent agents). Philosophers and cognitive scientists have been too hasty to
extrapolate lessons drawn from artificial computation to the much different case
of natural computation.
It is widely agreed that one can fruitfully describe a computing system at various
levels. Discussion typically centers on three levels that I will call the represen-
tational level, the syntactic level, and the hardware level. To illustrate, consider
a computer programmed to perform elementary arithmetical operations such as
addition, multiplication, and division:
– At the representational level, we individuate computational states through their
representational properties. For instance, we might say that our computer divides
the number 2 into the number 5 to yield remainder 1. This description implicitly
presupposes that the computer’s states represent specific numbers.
– At the syntactic level, we individuate computational states non-representationally.
We describe our computer as manipulating numerals, rather than performing
arithmetical operations over numbers. For example, we might say that the
computer performs certain syntactic operations over the numerals “2” and “5”
and then outputs the numeral “1.” When offering this description, we do not
presuppose that the computer’s states represent numbers.
– At the hardware level, we describe the physical realization of computational
states. We specify our computer’s components, how those components are
assembled, and how the computer’s physical state evolves according to well-
understood physical laws.
A three-level picture along these lines figures prominently in many philosophical
and scientific discussions (Chalmers 2011, 2012; Fodor 1981, 1987, 1994, 2008;
Haugeland 1985; Pylyshyn 1984).
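To make the three levels concrete, here is a minimal Python sketch (my own illustration, not an example from the chapter). Described syntactically, the routine merely rewrites strings of strokes; described representationally, with "11111" denoting 5 and "11" denoting 2, it divides 2 into 5 to yield remainder 1; a hardware description would specify the physical circuitry executing it.

def divide_unary(dividend: str, divisor: str) -> tuple[str, str]:
    # Syntactic level: pure string rewriting; nothing here "knows"
    # that the strings denote numbers.
    quotient = ""
    while len(dividend) >= len(divisor):
        dividend = dividend[len(divisor):]  # delete one divisor-sized block
        quotient += "1"                     # add one stroke to the quotient
    return quotient, dividend               # leftover strokes are the remainder

# Representational level: interpret "11111" as 5 and "11" as 2, so the
# routine divides 2 into 5, yielding quotient "11" (2) and remainder "1" (1).
assert divide_unary("1" * 5, "1" * 2) == ("11", "1")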
I will argue that the three-level picture works well for artificial computing
systems (i.e. computing systems designed and built by intelligent agents) but
less well for natural computing systems (i.e. computing systems that arise in
nature without design or construction by intelligent agents). Philosophers and
cognitive scientists have been too hasty to extrapolate lessons drawn from artificial
computation to the much different case of natural computation. I discuss artificial
computation in Sects. 2 and 3 and natural computation in Sect. 4. I compare the two
cases in Sect. 5.
Researchers across philosophy, computer science (CS), and cognitive science use
the phrase “representation” in various ways. Following common philosophical
usage (e.g. Burge 2010, p. 9), I tie representation to veridicality-conditions. To
illustrate:
– Beliefs are evaluable as true or false. My belief that Barack Obama is president
is true if Barack Obama is president, false if he is not.
– Declarative sentences (e.g. “Barack Obama is president”) as uttered in specific
conversational contexts are likewise evaluable as true or false.
– Perceptual states are evaluable as accurate or inaccurate. A perceptual state that
represents presence of a red sphere is accurate only if a red sphere is before me.
– Intentions are evaluable as fulfilled or thwarted. My intention to eat chocolate is
fulfilled if I eat chocolate, thwarted if I do not eat chocolate.
Truth-conditions, accuracy-conditions, and fulfillment-conditions are species of
veridicality-conditions. Complex representations decompose into parts whose rep-
resentational properties contribute to veridicality-conditions. For example, the
truth-condition of “John loves Mary” is determined by the denotation of “John,” the
denotation of “Mary,” and the satisfaction-condition of “loves.” Representational
description invokes veridicality-conditions or representational properties that con-
tribute to veridicality-conditions.
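As a toy illustration of this compositional determination (a sketch of my own, using a hypothetical one-relation language rather than anything from the text), the truth-value of a subject-verb-object sentence can be computed from the denotations of its parts:

# Denotations of the names, plus the satisfaction-condition of "loves",
# jointly fix each sentence's truth-condition in this toy world.
denotation = {"John": "john", "Mary": "mary"}
loves = {("john", "mary")}  # the pairs that satisfy "loves"

def sentence_true(subject: str, object_: str) -> bool:
    # "<subject> loves <object>" is true iff the pair of denotations
    # satisfies the relation.
    return (denotation[subject], denotation[object_]) in loves

assert sentence_true("John", "Mary")        # "John loves Mary" is true
assert not sentence_true("Mary", "John")    # "Mary loves John" is false here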
I distinguish two ways that a system may come to have representational
properties: it may have representational properties at least partly by virtue of its
own activity; or it may have representational properties entirely because some
other system has imposed those properties upon it. For example, the human mind
has representational properties at least partly due to its own activity. In contrast,
words in a book represent entirely by virtue of their connection to our linguistic
conventions. The book does not contribute to representational properties of its
component words.
Philosophers commonly invoke this distinction using the labels original ver-
sus derived intentionality (Haugeland 1985) or intrinsic versus observer relative
meanings (Searle 1980). To my ear, these labels suggest that the main con-
trast concerns whether a system is solely responsible for generating its own
representational properties. Yet Burge (1982) and Putnam (1975) have argued
convincingly that the external physical and social environment plays a large role
in determining representational properties of mental states, so that not even the
mind is solely responsible for generating its own representational properties. I
prefer the labels indigenous versus inherited, which seem to me to carry fewer
misleading connotations. Representational properties of human mental states are
indigenous, because human mental activity plays at least some role in generating
representational properties of mental states. Representational properties of words
in a book are inherited, because the book plays no role in generating those
properties.
Are representational properties of artificial computing systems inherited or
indigenous? For the artificial computing systems employed in our own society, the
answer is usually “inherited.” For example, a simple pocket calculator represents
numbers entirely by virtue of our linguistic conventions regarding numerals. A sim-
ilar diagnosis applies to many far more sophisticated systems. Some philosophers
maintain that artificial computing machines in principle cannot have indigenous
representational properties (Searle 1980). I think that this position is implausible
and that existing arguments for it are flawed. I see no reason why a sufficiently
sophisticated robot could not confer representational properties upon its own inter-
nal states. We could equip the robot with sensors or motor organs, so that it causally
interacts with the external world in a suitably sophisticated way. So equipped, I
see no reason why the robot could not achieve indigenous representation of its
external environment. Whether any actually existing artificial computing systems have
indigenous representational properties is a trickier question that I set aside.
By a “natural system,” I mean one that arises in nature without design or oversight
by intelligent agents. Whether a system counts as “natural” is a matter of its etiology,
not its material constitution. A computing system constructed by humans from DNA
or other biochemical material is not “natural,” because it is an artifact. A silicon-
based creature that evolved through natural selection on another planet counts as
“natural,” even though it is not constructed from terrestrial biochemical materials.
According to the computational theory of mind (CTM), the mind is a computing
system. Classical CTM holds that the mind executes computations similar to those
executed by Turing machines (Fodor 1975, 1987, 1994, 2008; Gallistel and King
2009; Putnam 1975; Pylyshyn 1984). Connectionist CTM models mental activity
using neural networks (Horgan and Tienson 1996; Ramsey 2007; Rumelhart et al.
1986). Both classical and connectionist CTM trace back to seminal work of McCul-
loch and Pitts (1943). In (Rescorla 2015b), I surveyed classical, connectionist,
and other versions of CTM. For present purposes, I do not assume any particular
version of CTM. I simply assume that the mind in some sense computes. Under
that assumption, it makes sense to talk about “natural computing systems.” We may
therefore ask how Sect. 1’s levels of description apply to natural computation—
specifically, mental computation. [2]
Hardware description is vital to the study of mental computation. Ultimately, we
want to know how neural tissue physically realizes mental computations. Everyone
agrees that a complete cognitive science will include detailed hardware descriptions
that characterize how neural processes implement mental activity. Unfortunately,
satisfactory hardware descriptions are not yet available. Although we know quite a lot about the brain, we still do not know how exactly neural processing physically realizes basic mental activities such as perception, motor control, navigation, reasoning, decision-making, and so on.

[1] When representational properties are inherited rather than indigenous, syntactic description offers further advantages over representational description. I argue in (Rescorla 2014b) that inherited representational properties of computational states are causally irrelevant: one can freely vary inherited representational properties without altering the underlying syntactic manipulations, so representational properties do not make a difference to the computation. Representational description does not furnish genuinely causal explanations of a system whose representational properties are all inherited. No such analysis applies to a computational system whose representational properties are indigenous. In that case, I claim, representational properties can be causally relevant (Rescorla 2014b).

[2] For purposes of this paper, "mental computation" indicates computation by a natural system with a mind. I leave open the possibility that an artificial system (such as a sophisticated robot) might also have a mind.
What about representational and syntactic description? Will these also figure in
any complete cognitive science? I discuss representation in Sect. 4.1 and syntax in
Sect. 4.2.
[3] Piccinini (2015) assigns a central role to non-representational, multiply realizable descriptions
of artificial and natural computation, including mental computation. He declines to call these
descriptions “syntactic.” Nevertheless, the worries developed below regarding syntactic description
of mental computation also apply to Piccinini’s approach. For further discussion, see (Rescorla
2016b).
Fodor (1981, 2008) maintains that cognitive science already prioritizes syntactic
description of mental activity. I disagree. Contrary to what Fodor suggests, formal
syntactic description does not figure in current scientific theorizing about numer-
ous mental phenomena, including perception, motor control, deductive inference,
decision-making, and so on (Rescorla 2012, 2014b, 2017). Bayesian percep-
tual psychology describes perceptual inference in representational terms rather
than formal syntactic terms (Rescorla 2015a). Bayesian sensorimotor psychology
describes motor control in representational terms rather than formal syntactic
terms (Rescorla 2016a). There may be some areas where cognitive science offers
syntactic explanations. For example, certain computational models of low-level
insect navigation look both non-neural and non-representational (Rescorla 2013b).
But formal syntactic description is entirely absent from many core areas of cognitive
science.
Plausibly, one always can describe mental activity in syntactic terms. The ques-
tion is whether one thereby gains any explanatory benefits. There are innumerable
possible ways of taxonomizing mental states. Most taxonomic schemes offer no
explanatory value. For instance, we can introduce a predicate true of precisely those
individuals who believe that snow is white or who want to drink water. However, it
seems unlikely that this disjunctive predicate will play any significant explanatory
role within a finished cognitive science. Why expect that syntactic taxonomization
will play any more significant a role?
To focus the discussion, consider Chalmers’s functionalist conception of syntax.
Given a true representational or neurophysiological theory of a mental process,
we can abstract away from representational and neural details to specify a causal
topology instantiated by the process. But why suspect that we thereby gain any
explanatory benefits? We can abstract away from a true scientific theory of any
phenomenon to specify a causal topology instantiated by the phenomenon. In most
cases, we do not thereby improve our explanations of the target phenomenon.
Here is a non-psychological example. The Lotka-Volterra equations are first-
order nonlinear differential equations used in ecology to model simple predator-prey
systems (Nowak 2006):
dx/dt = x(a - by),   dy/dt = y(dx - c)   (LV)
Reinterpreting x and y as, say, chemical concentrations or economic variables, the
same equations describe certain chemical or economic systems. These diverse systems
instantiate the same causal topology. We can specify their shared causal topology
more explicitly by taking LV's Ramsey sentence, thereby suppressing all ecological
details.
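In rough form (my gloss, in standard notation rather than anything given in the text), Ramseyfication existentially generalizes over the interpreted variables:

\exists X \,\exists Y \left[ \frac{dX}{dt} = X(a - bY) \;\wedge\; \frac{dY}{dt} = Y(dX - c) \right]

Any pair of interacting quantities standing in these causal relations satisfies the sentence, whatever the quantities represent.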
Ecologists explain predator/prey population levels by using LV where x and
y are interpreted as prey and predator population levels. We do not improve
ecological explanation by noting that LV describes some chemical or economic
system when x and y are reinterpreted as chemical or economic variables, or
by supplementing LV with LV's Ramsey sentence. [4] What matters for ecological
explanation are the ecological interactions described by LV, not the causal topology
obtained by suppressing ecological details. That some ecological system shares
a causal topology with certain chemical or economic systems is an interesting
coincidence, not an explanatorily significant fact that illuminates population levels.
The causal topology determined by LV is not itself explanatory. It is just a byproduct
of underlying ecological interactions described by LV when x and y are interpreted
as prey and predator population levels.
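To see concretely how the causal topology floats free of interpretation, here is a minimal Python sketch (my own, with arbitrary parameter values) that integrates LV by Euler steps. The update rule is identical whether x and y are read as prey and predator levels, chemical concentrations, or economic variables; only the gloss changes.

def lv_step(x: float, y: float, a: float, b: float, c: float, d: float,
            dt: float = 0.001) -> tuple[float, float]:
    # One Euler step of LV: dx/dt = x(a - by), dy/dt = y(dx - c).
    # Nothing in the update rule fixes what x and y represent.
    return x + dt * x * (a - b * y), y + dt * y * (d * x - c)

x, y = 10.0, 5.0  # read as prey/predator levels, or reinterpret freely
for _ in range(10_000):
    x, y = lv_step(x, y, a=1.0, b=0.1, c=1.5, d=0.075)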
Cognitive science describes causal interactions among representational mental
states. By suppressing representational and neural properties, we can specify a
causal topology instantiated by mental computation. But this causal topology looks
like a mere byproduct of causal interactions among representational mental states.
In itself, it does not seem explanatory. Certainly, actual cognitive science practice
does not assign an explanatorily significant role to abstract descriptions of the
causal topology instantiated by perception, motor control, or numerous other mental
phenomena.
Philosophers have offered various arguments why cognitive science requires
syntactic description of mental activity. I will quickly address a few prominent
arguments. I critique these and other arguments more thoroughly in (Rescorla 2017).
Argument from Computational Formalism (Fodor 1981, p. 241; Gallistel
and King 2009, p. 107; Haugeland 1985, p. 106) Standard computational for-
malisms found in computability theory operate at the syntactic level. We can model
the mind as a computational system only if we postulate formal syntactic items
manipulated during mental computation.
Reply The argument misdescribes standard computational formalisms. Contrary to
what the argument maintains, many standard formalisms are couched at an abstract
level that remains neutral regarding the existence of formal syntactic items. We
can construe many standard computational models as defined over states that are
individuated representationally rather than syntactically. Computational modeling
per se does not require syntactic description. My previous writings have expounded
this viewpoint as applied to Turing machines (Rescorla 2017), the lambda calculus
(Rescorla 2012), and register machines (Rescorla 2013a).
[4] See (Morrison 2000) for further examples along similar lines.
Argument from Causation (Egan 2003; Haugeland 1985, pp. 39–44) Repre-
sentational properties are causally irrelevant to mental activity. Thus, intentional
psychology cannot furnish causal explanations. We should replace or supplement
intentional psychology with suitable non-representational descriptions, thereby
attaining genuinely causal explanations of mental and behavioral outcomes.
Reply The argument assumes that representational properties are causally irrele-
vant to mental activity. This assumption conflicts with pre-theoretic intuition and
with scientific psychology (Burge 2007, pp. 344–362), which both assign represen-
tational aspects of mentality a crucial causal role in mental activity. We have no
good reason to doubt that representational properties are causally relevant to mental
activity. In (Rescorla 2014a), I argue that indigenous representational properties
of a computing system can be causally relevant to the system’s computations.
Since mental states have indigenous representational properties, it follows that
representational properties can be causally relevant to mental computation.
Argument from Implementation Mechanisms (Chalmers 2012; Fodor 1987,
pp. 18–19) We would like to describe in non-representational terms how the
mind reliably transits between representational mental states. In other words, we
would like to isolate non-intentional implementation mechanisms for intentional
psychology. We should delineate a syntactic theory of mental computation, thereby
specifying non-intentional mechanisms that implement transitions among represen-
tational mental states.
Reply I agree that we should isolate non-intentional implementation mechanisms
for intentional psychology. However, we can take the implementation mechanisms
to be neural rather than syntactic (Rescorla 2017). We can correlate representational
mental states with neural states, and we can describe how transitions among neural
states track transitions among representationally-specified states. As indicated
above, we do not yet know how to do this. We do not yet know the precise
neural mechanisms that implement intentional mental activity. In principle, though,
we should be able to isolate those mechanisms. Indeed, discovering the neural
mechanisms of cognition is widely considered a holy grail for cognitive science.
What value would mental syntax add to an eventual neural theory of implementation
mechanisms?
Argument from Explanatory Generality (Chalmers 2012) Syntactic description
prescinds from both representational and neural properties. Thus, it offers a degree
of generality distinct from intentional psychology and neuroscience. This distinctive
generality provides us with reason to employ syntactic description. In particular,
a syntactic theory of implementation mechanisms offers advantages over a neural
theory of implementation mechanisms by supplying a different degree of generality.
Reply The argument relies on a crucial premise: that generality is always an
explanatory virtue. One can disambiguate this premise in various ways, using
different notions of “generality.” I doubt that any disambiguation of the premise
will prove compelling. As Potochnik (2010, p. 66) notes, “Generality may be of
explanatory worth, but explanations can be too general or general in the wrong
way.” One can boast generality through disjunctive or gerrymandered descriptions
that add no explanatory value to one’s theorizing (Rescorla 2017; Williamson 2000).
To illustrate, suppose we want to explain why John failed the test. We might note
that
John did not study all semester.
Alternatively, we might note that
John did not study all semester or John was seriously ill.
There is a clear sense in which the second explanation is more general than
the first. Nevertheless, it does not seem superior. One might try to disbar such
counterexamples by saying that generality is a virtue when achieved in a non-
disjunctive or non-gerrymandered way. [5] But then one would need to show that
syntactic description is itself non-disjunctive and non-gerrymandered, which carries
us back to the question whether syntactic description is explanatorily valuable.
Thus, I doubt that generic methodological appeals to explanatory generality support
syntactic modeling of the mind. [6]
Overall, philosophical discussion of mental computation has vastly overem-
phasized formal mental syntax. Certain areas of cognitive science may posit
formal syntactic mental items, but there is no clear reason to believe that mental
computation in general is fruitfully described in syntactic terms.
[5] Strevens (2008) offers a detailed theory of explanation based on the core idea that good
explanation abstracts away from as many details as possible. However, his finished theory
significantly compromises that core idea, precisely so as to impugn disjunctive explanations.
Strevens seeks to eliminate disjunctive explanations through a causal contiguity condition on good
explanation (2008, pp. 101–109): when we explain some phenomenon through a causal model,
all the model’s realizers should form a “contiguous set” in “causal similarity space.” He says
that we should pursue greater abstraction only to the extent that we preserve cohesion. He says
that overly disjunctive explanantia violate cohesion, because they have non-cohesive realizers.
Strevens’s causal contiguity condition has dramatic consequences for scientific psychology.
Psychological properties are multiply realizable, so psychological explanations are apparently
realized by processes that form a “non-contiguous set” in “causal similarity space.” Hence, as
Strevens admits (pp. 155–165, p. 167), the cohesion requirement prohibits causal models from
citing psychological properties. This prohibition applies just as readily to syntactic description as
to representational description. So Strevens’s treatment does not provide any support for syntactic
explanation of mental activity. He castigates both syntactic explanation and intentional explanation
as non-cohesive.
[6] Potochnik (2010) argues that generality is an explanatory virtue only when it advances the
research program to which an explanation contributes. Theoretical context heavily shapes whether
it is explanatorily beneficial to abstract away from certain details. On this conception, one cannot
motivate syntactic description through blanket appeal to the virtues of explanatory generality. One
would instead need to cite specific details of psychological inquiry, arguing that the generality
afforded by syntactic description promotes psychology’s goals. I doubt that any such argument
will prove compelling.
To illustrate the themes of this section, let us consider mammalian cognitive maps.
These have veridicality-conditions. For example, a cognitive map that represents
a landmark as present at some physical location is veridical only if the landmark
is indeed present at that location. Detailed, empirically fruitful theories describe
how mammalian cognitive maps interface with sensory inputs, motor commands,
and self-motion cues. The theories describe computations through which mam-
mals form, update, and deploy cognitive maps. In describing the computations,
researchers cite representational properties that contribute to veridicality-conditions
— e.g. they cite the physical location that a cognitive map attributes to a landmark.
Thus, representational description plays a central role within current theories of
mammalian navigation (Rescorla in press).
Neurophysiological description also plays a central role. In comparison with
other areas of cognitive science, we know a fair amount about the neural under-
pinnings of map-based navigation. For example:
– The rat hippocampus contains place cells, each responding selectively to a
specific spatial location.
– The rat entorhinal cortex contains grid cells, each responding selectively to
multiple spatial locations in the available environment. They are called “grid
cells” because the locations where a given cell fires form a periodic grid that
covers the environment.
Neuroscientists have developed mathematical models describing how place cells,
grid cells, and other such cells support mammalian navigation (Evans et al. 2016;
Giacomo et al. 2011). The models aim to illuminate the neurophysiological mecha-
nisms that underlie formation, updating, and deployment of cognitive maps. To be
sure, we are still a long way from completely understanding those mechanisms.
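For a sense of what these models involve, here is a minimal, hypothetical Python sketch of one common idealization of a grid cell's spatial tuning: the rectified sum of three cosine plane waves whose directions differ by 60 degrees, which yields the periodic triangular grid of firing fields that gives the cells their name. The functional form is a standard idealization from the modeling literature, and the parameter values are arbitrary, not drawn from the studies cited above.

import math

def grid_rate(x: float, y: float, spacing: float = 0.5) -> float:
    # Idealized grid-cell tuning: superpose three plane waves at
    # 60-degree offsets; the peaks form a hexagonal (triangular) lattice.
    k = 4 * math.pi / (math.sqrt(3) * spacing)  # wave number for this grid spacing
    angles = (0.0, math.pi / 3, 2 * math.pi / 3)
    s = sum(math.cos(k * (x * math.cos(t) + y * math.sin(t))) for t in angles)
    return max(0.0, s / 3.0)  # rectify and normalize to a rate in [0, 1]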
Conspicuously lacking from current scientific research into mammalian naviga-
tion: anything resembling syntactic description. The science describes navigational
computations in representational terms, and it explores the neural mechanisms that
implement those representationally-described computations. It does not describe the
mechanisms in multiply realizable, non-representational terms. It does not abstract
away from neural details of the mechanisms. On the contrary, neural details are
precisely what researchers want to illuminate. Of course, one might propose that
we supplement representational and neurophysiological description of mammalian
navigation with syntactic description. For example, one might articulate a causal
topology that prescinds from representational and neural details. But we have yet
to identify any clear rationale for the proposed supplementation. Certainly, current
scientific practice provides no such rationale. Taking current science as our guide,
syntactic description of mammalian navigation looks like an explanatorily idle
abstraction from genuinely explanatory representational and neural descriptions.
I have drawn a sharp distinction between artificial and natural computing systems.
Syntactic description plays a vital role in mediating between representational
description and physical construction of artificial computing systems. In contrast,
many mental computations are usefully described in representational terms rather
than syntactic terms. Why the disparity? Why is syntactic description so much more
important for artificial computation than natural computation?
Sect. 3 emphasized the crucial pragmatic role that syntax plays within computing
practice. By abstracting away from representational properties, syntactic description
offers a workable blueprint for a physical machine. By abstracting away from
physical properties, syntactic description ignores hardware details that are irrelevant
for many purposes. These are practical advantages that immeasurably advance a
practical task: design and construction of physical machines.
Admittedly, we can imagine a computing practice that eschews syntactic descrip-
tion. However, our own reliance on syntactic description secures important advan-
tages over any such hypothetical practice. To illustrate, suppose an agent designs and
builds a machine to execute the Euclidean algorithm. Suppose the agent describes
his machine in representational terms and hardware terms but not syntactic terms.
Now consider a second machine that has very different hardware but instantiates the
same causal topology. Both duplicates satisfy a common abstract causal blueprint.
This commonality is notable even if the agent does not register it. The agent could
have achieved his computing goals by building the second machine rather than
the first. In eschewing talk about syntax, the agent forgoes valuable descriptions that
promote his own computing ends. He does not employ syntactic descriptions, but
he should.
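A minimal sketch (mine, not the chapter's) of the shared blueprint in question: the Euclidean algorithm's state-transition structure, which both machines realize however different their hardware.

def gcd(a: int, b: int) -> int:
    # The Euclidean algorithm as an abstract causal blueprint: any machine
    # whose state transitions mirror this loop instantiates the same causal
    # topology, whatever its physical realization.
    while b != 0:
        a, b = b, a % b
    return a

assert gcd(252, 105) == 21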
Thus, norms of good computing design ensure a key role for syntactic description
of artificial computing systems. Syntactic description enables pragmatically fruitful
suppression of representational and hardware properties.
No such rationale applies to the scientific study of mental computation. Psy-
chology is not a practical enterprise. Cognitive scientists are not trying to build a
computing system. Instead, they seek to explain activity in pre-given computing
systems. Constructing an artificial computing system is a very different enterprise
than understanding a pre-given computing system. That formal syntactic description
advances the practical task of designing and constructing artificial computers
does not establish that it advances the explanatory task of understanding a pre-
given computational system. We have seen no reason to think that suppressing
representational and hardware properties of natural computing systems advances
our study of those systems. We have seen no reason to think that formal syntactic
description adds explanatory value to representational and neural description of
mental computation.
Any artificial computing machine was designed by intelligent agents. Good
design practice dictates that those agents sometimes adopt a syntactic viewpoint