Modality and Structure in Signed and Spoken Languages
The realization that signed languages are true languages is one of the great
discoveries of the last thirty years of linguistic research. The work of many
sign language researchers has revealed deep similarities between signed and
spoken languages in their structure, acquisition, and processing, as well as
differences arising from the differing articulatory and perceptual constraints
under which signed languages are used and learned. This book provides a
crosslinguistic examination of the properties of many signed languages, in-
cluding detailed case studies of American, Hong Kong, British, Mexican, and
German Sign Languages. The contributions to this volume, by some of the
most prominent researchers in the field, focus on a single question: to what
extent is linguistic structure influenced by the modality of language? Their
answers offer particular insights into the factors that shape the nature of lan-
guage and contribute to our understanding of why languages are organized as
they are.
kearsy cormier is a lecturer in Deaf Studies in the Centre for Deaf Studies
at the University of Bristol. She earned her doctorate in linguistics at the
University of Texas at Austin. Her dissertation explores phonetic properties
of verb agreement in American Sign Language.
edited by
Richard P. Meier, Kearsy Cormier,
and David Quinto-Pozos
http://www.cambridge.org
karen emmorey is a Senior Staff Scientist at the Salk Institute for Biolog-
ical Studies, La Jolla, CA. She studies the processes involved in how Deaf
people produce and comprehend sign language and how these processes are
represented in the brain. Her most recent book is titled Language, cognition,
and the brain: Insights from sign language research (2002).
Few readers will be surprised to learn that this volume is the fruit of a confer-
ence. That conference – one of an annual series sponsored by the Texas Linguis-
tics Society – was held at the University of Texas at Austin on February 25–27,
2000; the topic was “The effects of modality on language and linguistic theory.”
It was, we believe, a very successful meeting, one marked by the high quality of
the papers and of the ensuing discussions. There are many people and organiza-
tions to whom we are indebted for their financial support of the conference and
for their hard work toward its realization. Here there are two sets of friends and
colleagues whom we especially want to thank: Adrianne Cheek, Heather Knapp,
and Christian Rathmann were our co-organizers of the conference. We owe a
particular debt to the interpreters who enabled effective conversation between
the Deaf and hearing conferees. The skill and dedication of these interpreters –
Kristen Schwall-Hoyt, Katie LaSalle, and Shirley Gerhardt – were a foundation
of the conference’s success.
This book brings together many of the papers from that conference. All are
now much updated and much revised. The quality of the revisions is due not
only to the hard work of the authors but also to the peer-review process. To every
extent possible, we obtained two reviews for each chapter, one from a scholar
who works on signed languages and one from a scholar who, while expert in
linguistics or psycholinguistics, works primarily on spoken languages. There
were two reasons for this: first, we sought to make sure that the chapters would
be of the highest possible quality. And, second, we sought to ensure that the
chapters would be accessible to the widest possible audience of researchers in
linguistics and related fields.
To obtain these reviews, we abused many of our colleagues here at the
University of Texas at Austin, including Ralph Blight, Megan Crowhurst,
Lisa Green, Scott Myers, Carlota Smith, Steve Wechsler, and Tony Woodbury
from the Department of Linguistics and Randy Diehl, Cathy Echols, and Peter
MacNeilage from the Department of Psychology. We, and our authors, also
benefited from the substantive and insightful reviews provided by Diane Brentari
(Purdue University, West Lafayette, IN), Karen Emmorey (The Salk Insti-
tute, La Jolla, CA), Elisabeth Engberg-Pedersen (University of Copenhagen,
Denmark), Susan Fischer (National Technical Institute for the Deaf, Rochester,
NY), Harry van der Hulst (University of Connecticut), Manfred Krifka
(Humboldt University, Berlin, Germany), Cecile McKee (University of Arizona),
David McKee (Victoria University of Wellington, New Zealand), Irit Meir
(University of Haifa, Israel), Jill Morford (University of New Mexico), Carol
Neidle (Boston University), Carol Padden (University of California, San Diego),
Karen Petronio (Eastern Kentucky University), Claire Ramsey (University of
Nebraska), Wendy Sandler (University of Haifa, Israel), and Sherman Wilcox
(University of New Mexico). We thank all of these colleagues for the time that
they gave to this volume.
Christine Bartels, who at the outset was our acquisitions editor at Cambridge
University Press, shaped our thinking about how to put this book together.
We are greatly indebted to her. The Children’s Research Laboratory of the
Department of Psychology of the University of Texas at Austin provided the
physical infrastructure for our work on this book. During the preparation of this
book, David Quinto-Pozos was supported by a predoctoral fellowship from the
National Institutes of Health (F31 DC00352). Last – but certainly not least –
we thank the friends and spouses who have seen us through this process, in par-
ticular Madeline Sutherland-Meier and Mannie Quinto-Pozos. Their patience
and support have been unstinting.
Richard P. Meier
1 Why different, why the same? Explaining effects and non-effects
of modality upon linguistic structure in sign and speech
Richard P. Meier

1.1 Introduction
This is a book primarily about signed languages, but it is not a book targeted just
at the community of linguists and psycholinguists who specialize in research
on signed languages. It is instead a book in which data from signed languages
are recruited in pursuit of the goal of answering a fundamental question about
the nature of human language: what are the effects and non-effects of modality
upon linguistic structure? By modality, I and the other authors represented in
this book mean the mode – the means – by which language is produced and
perceived. As anyone familiar with recent linguistic research – or even with
popular culture – must know, there are at least two language modalities, the
auditory–vocal modality of spoken languages and the visual–gestural modality
of signed languages. Here I seek to provide a historical perspective on the issue
of language and modality, as well as to provide background for those who are
not especially familiar with the sign literature. I also suggest some sources of
modality effects and their potential consequences for the structure of language.
Why Bloomfield was so certain that speech was the source of any and all
complexity in these gesture languages is unclear. Perhaps he was merely echoing
Edward Sapir (1921:21) or other linguists who had articulated much the same
views.
Later, Hockett (1960) enumerated a set of design features by which we can
distinguish human language from the communication systems of other animals
and from our own nonlinguistic communication systems. The first of those 13
design features – the one that he felt was “perhaps the most obvious” (p.89) –
is the vocal-auditory channel. Language, Hockett argued, is a phenomenon
restricted to speech and hearing. Thus, the early conclusion of linguistic research
was that there are profound differences between the oral–aural modality of
spoken languages and the visual–gestural modality of Bloomfield’s “gesture
languages.” On this view, those differences were such that human language
was only possible in the oral–aural modality.
However, the last 40 years of research – research that was started by William
Stokoe (1960; Stokoe, Casterline, and Croneberg 1965) and that was thrown
into high gear by Ursula Bellugi and Edward Klima (most notably, Klima and
Bellugi 1979) – has demonstrated that there are two modalities in which human
language may be produced. We now know that signed and spoken languages
share many properties. From this, we can safely identify many non-effects of
the modality in which language happens to be produced; see Table 1.1. Signed
and spoken languages share the property of having conventional vocabularies
in which there are learned pairings of form and meaning. Just as each speech
community has its own idiosyncratic pairings of sound form and meaning, so
does each sign community. In sign as in speech, meaningful units of form
being that when arguments are signaled morphologically ASL exhibits “null
arguments,” that is, phonologically empty subjects and objects (Lillo-Martin
1991). As Diane Lillo-Martin reviews in her chapter, Brazilian Sign Language –
unlike ASL, perhaps – allows a further tradeoff, such that agreeing verbs sanc-
tion preverbal objects, whereas only SVO (subject – verb – object) order is
permitted with non-agreeing verbs (Quadros 1999).
Studies of the acquisition of ASL and other signed languages have revealed
strong evidence that signed languages are acquired on essentially the same
schedule as spoken languages (Newport and Meier 1985; Meier 1991; Petitto
and Marentette 1991). There is evidence of an optimal maturational period – a
critical period – for the acquisition of signed languages, just as there is for the
acquisition of spoken languages (Mayberry and Fischer 1989; Newport 1990).
In the processing of signed languages, as in the processing of spoken languages,
there is a crucial role for the left hemisphere (Poizner, Klima, and Bellugi 1987)
although there is ongoing controversy about whether there might be greater
right hemisphere involvement in the processing of signed languages than there
is in spoken languages (e.g., Neville, Bavelier, Corina, Rauschecker, Karni,
Lalwani, Braun, Clark, Jezzard, and Turner 1998; and for discussion of these
results, Corina, Neville, and Bavelier 1998; Hickok, Bellugi, and Klima 1998).
On the basis of results such as those outlined above, there were two conclu-
sions that many of us might have drawn in the early 1980s. One conclusion is
unassailable, but the other is more problematic:
Conclusion 1: The human language capacity is plastic: there are at least two modalities –
that is, transmission channels – available to it. This is true despite the fact that every
known community of hearing individuals has a spoken language as its primary language.
It is also true despite plausible claims that humans have evolved – at least in the form
of the human vocal tract – specifically to enable production of speech.
The finding that sign and speech are both vehicles for language is one of the
most crucial empirical discoveries of the last decades of research in any area of
linguistics. It is crucial because it alters our very definition of what language
is. No longer can we equate language with speech. We now know that funda-
mental design features of language – such as duality of patterning, discreteness,
and productivity – are not properties of a particular language modality. Instead
these design features are properties of human language in general: properties
presumably of whatever linguistic or cognitive capacities underlie human lan-
guage. Indeed, we would expect the same properties to be encountered in a
third modality – e.g. a tactile–gestural modality – should natural languages be
identified there.2
Conclusion 2: There are few or no structural differences between signed and spoken
languages. Sure, the phonetic features are different in sign and speech: speech does
not have handshapes and sign does not have a contrast between voiced and nonvoiced
segments, but otherwise everything is pretty much the same in the two major language
modalities. Except for those rules that refer specifically to articulatory features – or to
auditory or visual features – any rule of a signed language is also a possible rule of a
spoken language, and vice versa.

2 In his contribution to this volume, David Quinto-Pozos discusses how deaf-blind signers use
ASL in the tactile–gestural modality.
It is this second conclusion that warrants re-examination. The hypothesis that
there are few or no structural differences between sign and speech is the subject
of the remainder of this chapter. The fact that we know so much more now
about signed languages than we did when William Stokoe began this enterprise
in 1960 means that we can be secure in the understanding that discussion of
modality differences does not threaten the fundamental conclusion that signed
languages are indeed languages. The last 40 years of research have demon-
strated conclusively that there are two major types of naturally-evolved human
languages: signed and spoken.
Why should we be interested in whether specific aspects of linguistic structure
might be attributable to the particular properties of the transmission channel?
Exploration of modality differences holds out the hope that we may achieve a
kind of explanation that is rare in linguistics. Specifically, we may be able to
explore hypotheses that this or that property of signed or spoken language is
attributable to the particular constraints that affect that modality.
ASL, but also DGS, Australian Sign Language, and Japanese Sign Language
(Nihon Syuwa or NS). Gary Morgan and his colleagues discuss how Christo-
pher – a hearing language savant – learned aspects of British Sign Language
(BSL). Research on signed languages other than ASL means that discussion of
modality differences is not confounded by the possibility that our knowledge
of signed languages is largely limited to one language that might have many
idiosyncratic properties. Just as we would not want to make strong conclusions
about the nature of the human language capacity on the basis of analyses that
are restricted to English, we would not want to characterize all signed languages
just on the basis of ASL.
mandible, lips, and velum surely comes as no surprise to anyone.3 Table 1.3
lists a number of ways in which the oral and manual articulators differ. The
oral articulators are small and largely hidden within the oral cavity; the fact
that only some of their movements are visible to the addressee accounts for
the failure of lipreading as a means of understanding speech. In contrast, the
manual articulators are relatively large. Moreover, the sign articulators are
paired; the production of many signs entails the co-ordinated action of the
two arms and hands. Yet despite the impressive differences between the oral
and manual articulators, their consequences for linguistic structure are far from
obvious. For example, consider the fact that the sound source for speech is
internal to the speaker, whereas the light source for the reflected light that
carries information about the signer’s message is external to that signer.4
3 The articulators in speech or sign seem so different that, when we find common properties of
sign and speech, we are tempted to think that they must be due to general, high-level proper-
ties of the human language capacity or perhaps to high-level properties of human cognition.
But a cautionary note is in order: there are commonalities in motoric organization across the
two modalities that mean that some similar properties of the form of sign and speech may be
attributable to shared properties of the very disparate looking motor systems by which speech
and sign are articulated (Meier 2000b). Here are two examples: (1) in infancy, repetitive, non-
linguistic movements of the hands and arms emerge at the same time as vocal babbling (Thelen
1979). This motoric factor may contribute to the apparent coincidence in timing of vocal and
manual babbling (Petitto and Marentette 1991; Meier and Willerman 1995). More generally, all
children appear to show some bias toward repetitive movement patterns. This may account for
certain facts of manual babbling, vocal babbling, early word formation, and early sign formation
(Meier, McGarvin, Zakia, and Willerman 1997; Meier, Mauk, Mirus, and Conlin 1998). (2) The
sign stream, like the speech stream, cannot be thought of as a series of beads on a string. Instead,
in both modalities, phonological units are subject to coarticulation, perhaps as a consequence
of principles such as economy of effort to which all human motor performance – linguistic or
not – is subject. Instrumented analyses of handshape production reveal extensive coarticulation
in the form of ASL handshapes, even in very simple sign strings (Cheek 2001; in press).
4 There are communication systems – both biological and artificial – in which the light source is
internal: the most familiar biological example is the lightning bug.
This fact may limit the use of signed languages on moonless nights along
country roads, but may have no consequence for how signed languages are
structured.5
To date, the articulatory factor that has received the most attention in the
sign literature involves the relative size of the articulators in sign and speech.
In contrast to the oral articulators, the manual articulators are massive. Large
muscle groups are required to overcome inertia and to move the hands through
space, much larger muscles than those required to move the tongue tip. Not
surprisingly, the rate at which ASL signs are produced appears to be slower
than the rate at which English words are produced, although the rate at which
propositions are produced appears to be the same (Bellugi and Fischer 1972;
Klima and Bellugi 1979). How can this seeming paradox be resolved? Klima
and Bellugi (1979; see also Bellugi and Fischer 1972) argued that the slow
rate of sign production encourages the simultaneous layering of information
within the morphology of ASL; conversely, the slow rate of sign production
discourages the sequential affixation that is so prevalent in spoken languages.6
Consistent with this suggestion, when Deaf signers who were highly experi-
enced users of both ASL and Signing Exact English (SEE) were asked to sign
a story, the rate at which propositions were produced in SEE was much slower
than in ASL (a mean of 1.5 seconds per proposition in ASL, vs. 2.8 seconds
per proposition in SEE). In SEE, there are separate signs for the morphology of
English (including separate signs for English inflections, function words, and
derivational morphemes). In this instance an articulatory constraint may push
natural signed languages, such as ASL, in a particular typological direction,
that is, toward nonconcatenative morphology. The slow rate at which propositions
are expressed in sign systems such as SEE that mirror the typological organization
of English may account for the fact that such systems have not been widely adopted
in the Deaf community.

5 Similarly, the use of spoken languages is limited in environments in which there are very high
levels of ambient noise, and in such environments – for example, sawmills – sign systems may
develop (Meissner and Philpott 1975).

6 Measurements of word/sign length are, of course, not direct measurements of the speed of the
oral or manual articulators; nor are they measures of the duration of movement excursions. Some
years ago, at the urging of Ursula Bellugi, I compared the rate of word production in English and
Navaho. The hypothesis was that the rate of word production (words/minute) would be lower
in Navaho than in English, consistent with the fact that Navaho is a polysynthetic language
with an elaborate set of verbal prefixes. The results were consistent with this hypothesis. Wilbur
and Nolen (1986) attempted a measure of syllable duration in ASL. They equated movement
excursions with syllables, such that, in bidirectional signs and in reduplicated forms, syllable
boundaries were associated with changes in movement direction. On this computation, syllable
durations in sign, at roughly 250 ms, were comparable to measures of English syllable duration
that Wilbur and Nolen pulled from the phonetics literature. Note, however, that there is little
phonological contrast – and indeed little articulatory change – across many of the successive
“syllables” within signs; in a reduplicated or bidirectional form, the only change from one
syllable to the next would be in the direction of path movement. See Rachel Channon’s
contribution to this volume (Chapter 3) for a discussion of repetition in signs.
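Expressed as rates rather than durations – a back-of-the-envelope conversion of the
Bellugi and Fischer figures quoted in the main text above (the arithmetic here is mine,
not theirs) – the comparison is:

\[
r_{\text{ASL}} = \frac{1\ \text{proposition}}{1.5\ \text{s}} \approx 0.67\ \text{propositions/s},
\qquad
r_{\text{SEE}} = \frac{1\ \text{proposition}}{2.8\ \text{s}} \approx 0.36\ \text{propositions/s}.
\]

That is, the same story content took nearly twice as long (2.8/1.5 ≈ 1.9) to express in
SEE as in ASL.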
The two language modalities may also differ in whether they make a single
predominant oscillator available for the production of language, as I discussed in
an earlier paper (Meier 2000b). Oscillatory movements underlie human action,
whether walking, chewing, breathing, talking, or signing. Although there are
several relatively independent oral articulators (e.g. the lips, the tongue tip, the
tongue dorsum, the velum, and the mandible), MacNeilage and Davis (1993;
also MacNeilage 1998) ascribe a unique status to one of those articulators.
They argue that oscillation of the mandible provides a “frame” around which
syllable production is organized. Repeated cycles of raising and lowering the
mandible yield a regular alternation between a relatively closed and relatively
open vocal tract. This articulatory cycle is perceived as an alternation between
consonants and vowels. Mandibular oscillation may also be developmentally
primary: MacNeilage and Davis argue that, except for the mandible, children
have little independent control over the speech articulators; cycles of raising and
lowering the mandible account for the simple consonant–vowel (CV) syllables
of vocal babbling.
When we observe individual ASL signs we see actions – sometimes repeated,
sometimes not – of many different articulators of the arm and hand. ASL signs
can have movement that is largely or completely restricted to virtually any joint
on the arm: The sign ANIMAL requires repeated in-and-out movements of
the shoulder. Production of the sign DAY entails the rotation of the arm at the
shoulder. The arm rotates toward the midline along its longitudinal axis. The
signs GOOD and GIVE (citation form) are articulated through the extension of
the arm at the elbow, whereas TREE involves the rotation of the forearm at the
radioulnar joint. YES involves the repeated flexion and extension of the wrist.
The movement of still other signs is localized at particular articulators within the
hand (e.g. TURTLE: repeated internal bending of the thumb; BIRD: repeated
bending of the first finger at the first knuckle; COLOR: repeated extension and
flexion of the four fingers at the first knuckle; BUG: repeated bending at the
second knuckle). Still other signs involve articulation at more than one joint;
for example, one form of GRANDMOTHER overlays repeated rotation of the
forearm on top of an outward movement excursion executed by extension of
the arm at the elbow. Facts such as these suggest that it will be hard to identify
a single, predominant oscillator in sign that is comparable to the mandibular
oscillation of speech. This further suggests that analysts of syllable structure
in sign may not be able to develop a simple articulatory model of syllable
production comparable to the one that appears possible for speech. On the view
suggested by MacNeilage and Davis’s model, speech production – but not sign
production – is constrained to fit within the frame imposed by a single articulator.
Table 1.4 Some properties of the sensory and perceptual systems subserving
sign vs. speech
7 In an earlier article that addressed some of the same issues as discussed here (Meier 1993), I
listed categorical perception as a modality feature that may distinguish the perception of signed
and spoken languages. The results of early studies, in particular Newport (1982), suggested that
handshape and place distinctions in ASL were not categorically perceived, a result that indicated
to Newport that categorical perception might be a property of audition. Very recent studies
raise again the possibility that distinctions of handshape and of linguistic and nonlinguistic
facial expression may be categorically perceived (Campbell, Woll, Benson, and Wallace 1999;
McCullough, Emmorey, and Brentari 2000).
The basic tools of a coding scheme using such a channel are an inventory of
distinguishable symbols and their concatenation. Thus, grammars for spoken
languages must map propositional structures onto a serial channel . . .” In her
chapter, Susan McBurney makes an interesting distinction between the modality
and the medium of a human language. For her, modality is the biological or phys-
ical system that subserves a given language; thus, for signed languages it is the
manual and visual systems that together make up the visual–gestural modality.
Crucially, she defines the medium “as the channel (or channels) through which
a language is conveyed. More specifically, channel refers to the dimensions of
space and time that are available to a given language.” Like Pinker and Bloom,
she considers the medium for speech to be fundamentally one-dimensional;
speech plays out over time. But sign languages are conveyed through a mul-
tidimensional medium: the articulatory and perceptual characteristics of the
visual–gestural modality give signed languages access to four dimensions of
space and time. The question then becomes: to what extent do signed languages
utilize space and what consequences does the use of space have for the nature
of linguistic structure in sign?
speech became the predominant medium of human language not because it is so well
suited to the segmented and combinatorial requirements of symbolic communication (the
manual modality is equally suited to the job), but rather because it is not particularly
good at capturing the mimetic components of human communication (a task at which
the manual modality excels).
1.4.4 The youth of sign languages and their roots in nonlinguistic gesture
As best we can tell, signed languages are young languages, with histories that
hardly extend beyond the mid-eighteenth century. With some effort we can trace
the history of ASL to seventeenth century Martha’s Vineyard (Groce 1985).
The youngest known signed language – Nicaraguan Sign Language – has a
history that extends back only to the late 1970s (Kegl, Senghas, and Coppola
1999; Polich 2000). We also know of one class of young spoken languages –
specifically, the creole languages – and, importantly, these languages tend to be
very uniform in structure (Bickerton 1984).
The demographics of Deaf communities mean that children may have been,
and may continue to be, key contributors to the structure of signed languages.
Few deaf children have native signing models. Only third-generation deaf
children – in other words, those with a deaf grandparent – have at least one
native-signing parent. The fact that most deaf children do not have native-
signing models in the home – indeed the preponderance of deaf children (specif-
ically, the 90 percent of deaf children who are born to hearing parents) do not
even have fluent models in the home – may mean that deaf children have freer
rein to use linguistic forms that reflect their own biases, as opposed to the con-
ventions of an established linguistic community. The biases of different deaf
children are likely to have much in common. That deaf children can create
linguistic structure has been shown in a variety of situations:
• in the innovated syntax of the “home sign” systems developed by deaf children
born to nonsigning, hearing parents (Goldin-Meadow and Feldman 1977;
Goldin-Meadow and Mylander 1990);
• in the acquisition of ASL by a deaf child who had input only from deaf
parents who were late – and quite imperfect – learners of ASL (Singleton and
Newport, in press);
• in the innovated use of spatial modification of verbs (“verb agreement”) by
deaf children exposed only to Signing Exact English with its thoroughly
nonspatial syntax (Supalla 1991); and
• in the apparent creation of Nicaraguan Sign Language since the late 1970s
(Kegl et al. 1999).
Young spoken and signed languages need not be structured identically, given
the differing “substrates” and “superstrates” that contributed to them and the
differing constraints upon the oral–aural and visual–gestural modalities. For
young spoken languages – that is, for creole languages – the preponderance
of the vocabulary derived from the vocabulary of whatever the dominant (or
“superstrate”) language was in the society in which the creole arose; so, French
Creoles such as Haitian drew largely from the vocabulary of French. But signed
languages could draw from rather different resources: one source may have
been the gestures that deaf children and their families sometimes innovate in
the creation of home sign systems. Other contributors to the vocabularies of
signed languages may have been the gestures that are in general use among the
deaf and hearing populations; in their chapter, Terry Janzen and Barbara Shaffer
trace the etymology of certain modal signs in ASL and in French Sign Language
(Langue des Signes Française or LSF) back to nonlinguistic gesture. Because
many gestures – whether they be the gestures of young deaf home signers or the
gestures of hearing adults – are somehow motivated in their form, these gestures
may exhibit some internal form–meaning associations. It seems possible that
such latent regularities may be codified and systematized by children, yielding
elaborate sign-internal morphology of a sort that we would not expect within
the words of a spoken creole (Meier 1984).
1. Not much: Signed and spoken languages share the same linguistic properties. Obviously the
distinctive features of sign and speech are very different, but there are no interesting structural
differences.
2. Statistical tendencies: One modality has more instances of some linguistic feature than the
other modality.
3. Preferred typological properties differ between the modalities.
4. Rules or typological patterns that are unique to a particular modality.
5. Relative uniformity of signed languages vs. relative diversity of spoken languages.
Note that this statistical difference between sign and speech in the frequency
of iconic lexical items may indeed be a consequence of differences in the oral–
aural and visual–gestural modalities. Yet this difference may have few or no
consequences for the grammar of signed and spoken languages. And, thus,
linguists could quite reasonably continue to believe a variant of Outcome 1:
specifically, that, with regard to grammar, not much differs across the two
modalities. Even so, there could be consequences for
acquisition, but I do not think that there are (for reviews, see Newport and
Meier 1985; Meier 1991). Or there could be consequences for the creation of
new languages. And, indeed, there may be. For example, the greater resources
for iconic representation in the visual–gestural modality allow deaf children of
hearing parents to innovate gestures – “home signs” – that can be understood
by their parents or other interlocutors (Goldin-Meadow and Feldman 1977).
This may jump-start the creation of new signed languages.8
8 Having said this, there is at least anecdotal evidence (discussed in Meier 1982) that deaf children
of hearing parents are not limited by the iconicity of their home signs. For example, Feldman
(1975) reports that one deaf child’s home sign for ice cream resembled the action of licking an
ice cream cone. Early on, the gesture was used only in contexts that matched this image. But,
with development, the child extended the gesture to other contexts. So, this same gesture was
used to refer to ice cream that was eaten from a bowl.
Cecile McKee in their contribution to this volume (Chapter 6). Many deaf chil-
dren in the USA are exposed to some form of Manually Coded English (MCE) as
part of their school curriculum. Supalla (1991) examined the signing of a group
of children who had been exposed to Signing Exact English (SEE 2), one of the
MCE systems currently in use in the schools. This artificial sign system follows
the grammar of English. Accordingly, SEE 2 does not use the spatial devices
characteristic of ASL and other natural signed languages, but does have separate
signs for each of the inflectional affixes of English. Thus, in SEE 2, verb agree-
ment is signaled by a semi-independent sign that employs the S handshape (i.e. a
fist) and that has the distribution of the third-person singular suffix of spoken
English. Supalla’s subjects were deaf fourth- and fifth-graders (ages 9–11), all
of whom came from hearing families and none of whom had any ASL exposure.
The SEE 2 exposed children neglected to use the affixal agreement sign that had
been modeled in their classrooms; instead they innovated the use of directional
modifications of verbs, despite the fact that their input contained little such mod-
ification.9 Through such directional modifications, many verbs in conventional
sign languages such as ASL – and in the innovative uses of the SEE 2 exposed
children – move from a location in space associated with the subject to a loca-
tion associated with the object. No affixes mark subject and object agreement;
instead an overall change in the movement path of the verb signals agreement.10
What about rules or patterns that are unique to signed languages? Such rules
or patterns are perhaps most likely to be found in pronominal/agreement systems
and in spatial descriptions where the resources available to signed languages
are very different than in speech. Here are three candidates:
• The signed languages examined to date distinguish first and nonfirst person –
and ASL has lexical first-person plural signs WE and OUR – but may have no
grammatical distinction between second and third person, whereas all spoken
languages distinguish first, second, and third persons (Meier 1990). Spatial
distinctions – not person ones – allow reference to addressees to be distin-
guished from reference to non-addressed participants. This characterization of
the pronominal system of ASL has gained wide acceptance (see, for example,
Neidle et al. 2000, as well as the chapters in this volume by Diane Lillo-Martin
[Chapter 10] and by Christian Rathmann and Gaurav Mathur [Chapter 14])
and has also been adopted in the analysis of signed languages other than ASL:
for example, Danish Sign Language (Engberg-Pedersen 1993) and Taiwanese
Sign Language (Smith 1990). However, this is a negative claim about signed
languages: specifically that signed languages lack a grammatical distinction
that is ubiquitous in spoken languages.11
• Signed languages favor object agreement over subject agreement, unlike spo-
ken languages. For verbs that show agreement, object agreement is obliga-
tory, whereas subject agreement is optional.12 Acceptance of this apparent
difference between signed and spoken languages depends on resolution of
the now raging debate as to the status of verb agreement in signed languages.
Is it properly viewed as a strictly gestural system (Liddell 2000), or is it a
linguistically-constrained system, as argued in the chapters in this volume
by Diane Lillo-Martin (Chapter 10) and by Christian Rathmann and Gaurav
Mathur (Chapter 14; see also Meier 2002)?
• Instead of the kinds of spatial markers that are familiar in spoken languages
(e.g. prepositions such as in, on, or under in English), signed languages always
seem to use the signing space to represent the space being described. This is
the topic of Karen Emmorey’s contribution to this volume (Chapter 15).
11 Acceptance of the first–nonfirst analysis of person in ASL and other signed languages is by no
means universal. Liddell (2000) and McBurney (this volume, Chapter 13) have each argued for
an analysis of sign pronominal systems that makes no person distinctions.
12 However, Engberg-Pedersen (1993) cites the work of Edward Keenan to the effect that there are
a couple of known spoken languages that show object but not subject agreement.
(Quadros 1999), and German (Rathmann 2000). Signed languages also vary in
their predominant word order; some like ASL are predominately SVO, whereas
others – including Japanese Sign Language – are SOV (subject – object – verb).
And, as Roland Pfau demonstrates in his chapter (Chapter 11), the grammar of
negation varies across signed languages.
However, as Newport and Supalla (2000) have observed, the variation that
we encounter in signed languages seems much more limited than the variation
found in spoken languages. Spoken languages may be tonal, or not. Spoken lan-
guages may be nominative/accusative languages or they may be ergative. They
may have very limited word-internal morphology or they may have the elabo-
rate morphology of a polysynthetic language. And some spoken languages have
elaborate systems of case morphology that permit great freedom of word order,
whereas others have little or no such morphology. Why is variation apparently
so limited in signed languages? The distinctive properties of the visual–gestural
modality may be a contributor. But, as mentioned before, the limited variation
in signed languages could be less a product of the visual–gestural modality, than
of the youth of the languages that are produced and perceived in that modality.
1.6 Conclusion
What I have sketched here is basically a classification of potential causes and
potential effects. It is not a theory by any means. The chapters that follow
allow us to jettison this meager start in favor of something much meatier: rich
empirical results placed within much richer theoretical frameworks.
But even from this brief review, we have seen, for example, that recent
research on a range of signed languages has led to the surprising suggestion that
signed and spoken languages exhibit distinct patterns of variation (Newport and
Supalla 2000). Although signed languages differ in their vocabularies, in word
order, in the presence of auxiliary-like elements, and in other ways, they seem on
the whole to be much less diverse typologically than are spoken languages. The
relative uniformity of signed languages, in contrast to the typological diversity
of spoken languages, may be due to the differing resources available to sign
and speech and the differing perceptual and articulatory constraints imposed
by the visual–gestural and oral–aural modalities. The apparent fact that signed
languages are young languages may also contribute to their uniformity.
The suggestion that signed languages are less diverse than spoken ones is
a fundamental hypothesis about the factors that determine what structures
are available to individual human languages. Yet this hypothesis has hardly
been tested. Doing so will demand that we examine a large sample of signed
languages. But just like many spoken languages, the very existence of some
signed languages is threatened (Meier 2000a). The pressures of educational
policy, of more prestigious spoken and signed languages, and of the ease of
Acknowledgments
In thinking and writing about the issues discussed here, I owe a particular debt to
Elissa Newport and Ted Supalla’s recent chapter (Newport and Supalla 2000) in
which they raise many of the issues discussed here. I also thank Wendy Sandler,
Gaurav Mathur, and Christian Rathmann for their thoughtful comments on a
draft of this chapter.
1.7 References
Aronoff, Mark, Irit Meir, and Wendy Sandler. 2000. Universal and particular aspects
of sign language morphology. Unpublished manuscript, State University of New
York at Stony Brook, NY.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bellugi, Ursula and Susan Fischer. 1972. A comparison of sign language and spoken
language. Cognition 1:173–200.
Bickerton, Derek. 1984. The language bioprogram hypothesis. Behavioral and Brain
Sciences 7:173–221.
Bloomfield, Leonard. 1933. Language. New York: Holt, Rinehart and Winston.
Bos, Heleen F. 1990. Person and location marking in SLN: Some implications of a
spatially expressed syntactic system. In Current trends in European sign language
research, ed. Siegmund Prillwitz and Tomas Vollhaber, 231–248. Hamburg: Signum.
Brentari, Diane, ed. 2001. Foreign vocabulary in sign languages: A cross-linguistic
investigation of word formation. Mahwah, NJ: Lawrence Erlbaum Associates.
Campbell, Ruth, Bencie Woll, Philip J. Benson, and Simon B. Wallace. 1999. Categorical
perception of face actions: Their role in sign language and in communicative facial
displays. Quarterly Journal of Experimental Psychology 52:67–95.
Cheek, Adrianne. 2001. The phonetics and phonology of handshape in American Sign
Language. Doctoral dissertation, University of Texas at Austin, TX.
Cheek, Adrianne. In press. Synchronic handshape variation in ASL: Evidence of coar-
ticulation. Northeastern Conference on Linguistics (NELS) 31 Proceedings.
Corina, David P., Helen J. Neville, and Daphne Bavelier. 1998. Response from Corina,
Neville and Bavelier. Trends in Cognitive Sciences 2:468–470.
Supalla, Samuel J. 1991. Manually coded English: The modality question in signed
language development. In Theoretical issues in sign language research, Vol. 2:
Psychology, ed. Patricia Siple and Susan D. Fischer, 85–109. Chicago, IL:
University of Chicago Press.
Supalla, Ted and Elissa L. Newport. 1978. How many seats in a chair? The derivation of
nouns and verbs in American Sign Language. In Understanding language through
sign language research, ed. Patricia Siple, 91–133. New York: Academic Press.
Sutton-Spence, Rachel and Bencie Woll. 1998. The linguistics of British Sign Language:
An introduction. Cambridge: Cambridge University Press.
Thelen, Esther. 1979. Rhythmical stereotypies in normal human infants. Animal
Behaviour 27:699–715.
Wilbur, Ronnie B. and Susan B. Nolen. 1986. The duration of syllables in American
Sign Language. Language and Speech 29:263–280.
Woodward, James. 1982. Single finger extension: Toward a theory of naturalness in
sign language phonology. Sign Language Studies 37:289–304.
Woodward, James. 2000. Sign languages and sign language families in Thailand and
Viet Nam. In The signs of language revisited, ed. Karen Emmorey and Harlan Lane,
23–47. Mahwah, NJ: Lawrence Erlbaum Associates.
Part I Phonological structure in signed languages
salience of the sign parameters, and on sign movement in particular, deaf sub-
jects’ judgments reflected both perceptual salience and linguistic relevance.
The results of Corina and Hildebrandt’s phonological similarity study corrob-
orate prior suggestions that movement is the most salient element within the
sign (see also Sandler 1993). Sign language researchers suggest that sign lan-
guage movement – whether as large as a path movement or as small as a hand
configuration change – forms the nucleus of the sign syllable (Wilbur 1987;
Perlmutter 1993; Corina 1996; Brentari 1998).
Curious as to whether the greater perceptibility of vowels is due to physi-
cal properties of the signal or to their syllabic status, Corina and Hildebrandt
observed that hand configurations in sign have a dual status. Depending on
whether the posture of the hands remains constant throughout a sign – in which
case the dynamic portion of the sign comes from path movement – or whether
the posture of the hands changes while the other parameters are held constant,
hand configurations can be thought of as non-nuclear (C) or nuclear (V) in
a sign syllable. Accordingly, Corina and Hildebrandt conducted a phoneme-
monitoring experiment in which subjects were asked to quickly identify the
presence of specific handshapes in signs. Each handshape was presented in a
static and a dynamic context. Their finding – that this context has little effect
on the speed with which handshapes are identified – leads them to suggest that
differences found in natural language perception of nuclear and non-nuclear seg-
ments rest on physical differences in the signal, not on the syllabic status of the
segment.
Generally, Corina and Hildebrandt find that the influence of phonological
form on sign perception is less robust than one might expect. They discuss the
possibility that the differential transparency of the articulatory structures of sign
and speech may have important consequences for language perception. They
hypothesize that, compared to speech, in sign there is greater transparency be-
tween the physical signal directly observed by an addressee and the addressee’s
internal representation of the signs being produced. In the perception of signed
languages the addressee observes the language articulators directly. In contrast,
in speech the listener perceives the acoustic effects of the actions of the speaker’s
articulators.
Similarly, Diane Brentari, who in this volume (Chapter 2) utilizes her Prosodic
Model of sign language phonology as a theoretical framework by which to
evaluate the influence of modality on phonology, attributes much of the dif-
ference between signed and spoken phonological representations to phonetics.
Specifically, she invokes the realm of phonetics whereby a physical signal is
transformed into a linguistic one. Rather than appealing to greater transparency,
Brentari argues that representational differences between sign and speech can-
not be separated from the visual system’s capacity for vertical processing. The
advantage of the visual–gestural modality for vertical processing, or processing
items presented at the same time, stands in contrast to the advantage that the
auditory–vocal modality has with respect to horizontal processing, or the abil-
ity to process temporally discrete items. This distinction allows – and in fact
requires, says Brentari – a different organization of the phonological units of
signed languages.
Brentari’s model of sign language phonology is not a simple transformation
of spoken language models. She accords movement features, which she labels
prosodic (PF), an entirely different theoretical status from handshape and place
of articulation features, which she labels inherent (IF). This distinction was
developed to account for sign languages in particular, but it succeeds in capturing
some more general aspects of phonology. Syllables in sign have visual salience,
which is analogous to acoustic sonority, and prosodic features are vowel-like
while inherent features are consonant-like. Brentari argues that properties such
as PFs (or Vs) and IFs (or Cs), along with other properties common to both sign
and speech, are likely candidates for UG. For example, both language modalities
exhibit structures that can be divided into two parts. One part “carries most of the
paradigmatic contrasts,” and the other part “comprises the medium by which
the signal is carried over long distances” (the salient movement features or
vowels). These observations suggest that UG requires both highly contrastive
and highly salient phonological elements.
Rachel Channon’s chapter (Chapter 3) also supports the idea that signed and
spoken languages have different phonological representations. She observes
that the different phonological structures of the languages in each modality
lead to different patterns of repeated elements. Spoken words, for example,
are composed of contrastive segments that may repeat in an irregular fashion.
Simple signs, however, are composed of a bundle of features articulated si-
multaneously; in such signs, elements repeat in a regular “rhythmic” fashion.
In her statistical analysis of the types of repetition found within a sign and
within a word, Channon develops two models of repetition. One model pre-
dicts speech data well, but not sign data. The other predicts sign data well, but
is a poor predictor of repetition in speech. She concludes that differences in
the occurrence of repetition in sign and in speech are systematic, and that the
systematicity is a function of different phonological representations of the two
modalities.
Not only do the chapters in this volume advance ideas about differences in the
phonological representations of sign and speech, they also highlight possible
differences between sign and speech that may have little phonological import,
but that are of real psycholinguistic interest. For example, in Hohenberger et al.’s
comparison of the DGS slips data to slips in spoken German, they report that
errors are detected and repaired earlier in sign languages (typically within a
sign) compared to spoken languages (almost always after the word). A possible
explanation for this difference can be found in the relative duration of signs
vs. words. Specifically, an individual sign takes about twice as long as a word
to be articulated, so errors in sign are more likely to be caught before the sign
is completed (however, for evidence that the rates at which propositions are
expressed in sign and speech are not different, see Klima and Bellugi 1979).
This difference in how signers or speakers repair their language is a modality
effect that does not reach into the phonology of the language.
The last chapter in this section – that by Samuel J. Supalla and Cecile McKee
(Chapter 6) – argues that there are also educational consequences of properties
of word formation that may be specific to signed languages. Various systems for
encoding English in sign have attempted to graft the morphological structure of
English onto the signs of ASL. According to Supalla and McKee the unintended
result of these well-intentioned efforts has been to create systems of Manually
Coded English (MCE) that violate constraints on sign complexity first noticed
by Battison (1978). These constraints limit the number of distinct handshapes
or distinct places of articulation within signs. Even children with little or no
exposure to ASL seem to expect sign systems to fit within these constraints.
Supalla and McKee suggest that these constraints have their origins in perceptual
strategies by which children segment the sign stream. Supalla and McKee’s
chapter (Chapter 6) reminds us that linguistic and psycholinguistic work on
the structure of signed languages can have very immediate consequences for
educational practice in deaf education.
The research reported in this part of the volume speaks to some possible
causes and effects of an organizational difference between spoken and sign
language phonology. Each author discusses similarities and differences be-
tween sign and speech, bringing us closer to understanding how and why the
phonological representations of signs could be different from those of spoken
words. The three chapters by Hohenberger et al., Corina and Hildebrandt, and
Supalla and McKee report on behavioral data, interpreting their results with an
eye toward this modality issue, whereas Channon and Brentari approach the
problem from a predominantly theoretical perspective, offering insights into
the phonological representation for sign language. The convergence of behav-
ioral, statistical, and theoretical research methods on a central problem – the
extent to which the sensory and articulatory modality of a language shapes
its phonological structure – yields a surprisingly consistent picture. What we
learn is that many elements of the linguistic representations of signs are guided
by their paradigmatic nature, and that this organization is likely associated
with the ability of our visual systems to process a large number of linguis-
tic features simultaneously. Perceptual and production consequences naturally
follow.
References
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bellugi, Ursula and Patricia Siple. 1974. Remembering with and without words. In
Current problems in psycholinguistics, ed. François Bresson. Paris: Centre National
de la Recherche Scientifique.
Bellugi, Ursula, Edward S. Klima, and Patricia Siple. 1975. Remembering in signs.
Cognition 3:93–125.
Brentari, Diane. 1990. Licensing in ASL handshape. In Sign language research: Theo-
retical issues, ed. Ceil Lucas, 57–68. Washington, DC: Gallaudet University Press.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Corina, David P. 1996. ASL syllables and prosodic constraints. Lingua 98:73–102.
Corina, David P., Howard Poizner, Ursula Bellugi, Tod Feinberg, Dorothy Dowd, and
Lucinda O’Grady-Batch. 1992. Dissociation between linguistic and non-linguistic
gestural systems: A case for compositionality. Brain and Language 43:414–447.
Corina, David P. and Wendy Sandler. 1993. On the nature of phonological structure in
sign language. Phonology 10:165–207.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liberman, Alvin M. and Michael Studdert-Kennedy. 1977. Phonetic perception. In
Handbook of sensory physiology, ed. R. Held, H. Leibowitz, and H.-L. Teuber.
Heidelberg: Springer-Verlag.
Liddell, Scott K. 1984. THINK and BELIEVE: Sequentiality in American Sign Lan-
guage. Language 60:372–392.
Liddell, Scott K. and Robert E. Johnson. 1986. American Sign Language compound
formation processes, lexicalization, and phonological remnants. Natural Language
and Linguistic Theory 4:445–513.
Liddell, Scott K. and Robert E. Johnson. 1989. American Sign Language: the phono-
logical base. Sign Language Studies 64:197–278.
Perlmutter, David M. 1993. Sonority and syllable structure in American Sign Language.
In Phonetics and phonology: Current issues in ASL Phonology, ed. Geoffrey R.
Coulter, 227–261. New York: Academic Press.
Sandler, Wendy. 1993. Sign language and modularity. Lingua 89:315–351.
Stokoe, William C. 1960. Sign language structure. Studies in linguistics occasional
papers 8. Buffalo: University of Buffalo Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965. A dictionary
of American Sign Language. Washington, DC: Gallaudet College Press.
Wilbur, Ronnie B. 1987. American Sign Language: Linguistic and applied dimensions,
2nd edition. Boston, MA: College-Hill Press.
2 Modality differences in sign language
phonology and morphophonemics
Diane Brentari
2.1 Introduction
In this chapter it is taken as given that phonology is the level of grammatical
analysis where primitive structural units without meaning are combined to
create an infinite number of meaningful utterances. It is the level of grammar
that has a direct link with the articulatory and perceptual phonetic systems,
either visual–gestural or auditory–vocal. There has been work on sign language
phonology for about 40 years now, and at the beginning of just about every piece
on the topic there is some statement like the following:
The goal is, then, to propose a model of ASL [American Sign Language] grammar at
a level that is clearly constrained by both the physiology and by the grammatical rules.
To the extent that this enterprise is successful, it will enable us to closely compare the
structures of spoken and signed languages and begin to address the broader questions
of language universals . . . (Sandler 1989: vi)
The goal of this chapter is to articulate some of the differences between the
phonology of signed and spoken languages that have been brought to light
in the last 40 years and to illuminate the role that the physiological bases
have in defining abstract units, such as the segment, syllable, and word. There
are some who hold a view that sign languages are just like spoken languages
except for the substance of the features (Perlmutter 1992). I disagree with this
position, claiming instead that the visual–gestural or auditory–vocal mode of
communication infiltrates the abstract phonological system, causing differences
in the frequency of a phenomenon’s occurrence, as well as differences due to
the signal, articulatory, or perceptual properties of signed and spoken languages
(see Meier, this volume).
I argue that these types of differences should lead to differences in the phono-
logical representation. That is, if the goal of a phonological grammar is to ex-
press generalizations as efficiently and as simply as possible – and ultimately
to give an explanatory account of these generalizations – then the frequency
with which a given phenomenon occurs should influence its representation. A
grammar should cover as many forms as possible with the fewest number of
exceptions, and frequent operations should be easy to express, while infrequent
Table 2.1 Differences between vision and audition (body of table not reproduced)
There are a number of differences between the visual and auditory systems due to signal transmission and peripheral processing, and a few of these are listed in Table 2.1.
In general, the advantage in vertical processing tasks goes to vision, while
the advantage in horizontal processing tasks goes to audition. For example, the
time required for a subject to detect temporally discrete stimuli is a horizontal
processing task. Hirsh and Sherrick (1961) show that the time required for the higher-order task of recognition, or labeling of a stimulus – called the “threshold of identification” and involving more cortical processing – is roughly the same in both vision and audition, i.e. approximately 20 ms. The time required for the more peripheral task of detection – called the “threshold of flicker fusion” in vision (Chase and Jenner 1993) and the “threshold of temporal resolution” in audition (Kohlrausch et al. 1992) – is quite different. Humans can temporally resolve auditory stimuli separated by an interval of only 2 ms (Green 1971; Kohlrausch et al. 1992), while the visual system requires at least
a 20 ms interstimulus interval to resolve visual stimuli presented sequentially
(Chase and Jenner 1993). The advantage here is with audition. Meier (1993)
also discusses the ability to judge duration and rate of stimulus presentation;
both of these tasks also give the advantage to audition.
Comparing vertical processing tasks in audition and vision – e.g. pattern
recognition, localization of objects – is inherently more difficult because of the
nature of sound and light transmission. To take just two examples, vision has no analogue to harmonics, and the difference between the speed of transmission of light waves vs. sound waves is enormous: approximately 299,792 km/s for light vs. 331 m/s for sound. As a result of these major differences, I could find no
tasks with exactly the same experimental design or control factors; however,
we can address vertical processing in a more general way. One effect of the
speed of light transmission on the perception of objects is that vision can take advantage of light waves reflected not only from the target object but also from other objects in the environment, thereby making use of “echo” waves, i.e. those reflected by the target object onto other objects. These echo waves are
available simultaneously with the waves reflected from the target object to the
retina (Bregman 1990). This same echo phenomenon in audition is available to
the listener much more slowly. Only after the sound waves produced by the
target object have already struck the ear will echo waves from other objects
in the environment do the same. The result of this effect is that a more three-
dimensional image is available more quickly in vision due, in part, to the speed
at which light travels. Moreover, the localization of visual stimuli is registered
at the most peripheral stage of the visual system, at the retina and lens, while the
spatial arrangement of auditory stimuli can only be inferred by temporal and
intensity differences of the signal between the two ears (Bregman 1990). Meier
(1993; this volume) also discusses the transmission property of bandwidth,
which is larger in vision, and spatial acuity, which is the ability to pinpoint
accurately an object in space (Welch and Warren 1986); both of these properties
also give the advantage to vision.
In sum, the auditory system has an advantage in horizontal processing, while
the visual system has an advantage in vertical processing. An expected result
would be that phonological representations in signed and spoken languages re-
flect these differences. This would not present a problem for a theory of universal
grammar (UG), but it may well have an effect on proposals about the principles
and properties contained in the part of UG concerned with phonology. At the
end of the chapter, a few such principles for modality-independent phonology
are proposed. These principles can exploit either type of language signal.
The “vocal mechanism” in speech includes the tongue, lips, and larynx as
the primary active articulators, and the teeth, palate, and pharyngeal area as
target places of articulation (i.e. the passive articulators). Although there are
exceptions to this – since the lips and glottis can be either active or passive
articulators – other articulators have a fixed role. The tongue is always active
and the palate always passive in producing speech, so to some extent structures
have either an active or a passive role in the articulatory event. This is not the case
in sign languages. Each part of the body involved in the “signing mechanism” –
the face, hands, arms, torso – can be active or passive. For example, the hand
can be an active articulator in the sign THINK, a passive articulator in the
sign TOUCH, and a source of movement in the sign UNDERSTAND; this
is shown in Figure 2.1. The lips and eyes are the articulators in the bound morpheme CAREFUL(LY), but the face is the place of articulation in the sign BEAUTIFUL. This is one reason why, in models of sign language phonology, features must be grouped by phonological role rather than by articulatory structure; however, just as in spoken languages, articulatory considerations play an important secondary role in these groupings.
Within the Prosodic Model features are divided into mutually exclusive sets
of inherent features (IF) and prosodic features (PF). Movement features are
grouped together as prosodic features, based on the use of the term by Jakobson
et al. (1951), who stated that prosodic features are “defined only with reference
to a time series.” The inherent features are the articulator and place of articulation
features. The articulator refers to features of the active articulator, and place of
articulation (POA) refers to features of the passive articulator. The relation of
the articulator with the POA is the orientation relation.
There are several arguments for organizing features in the representation this
way, rather than according to articulatory structure. A few are given here; for
more details and for additional arguments see Brentari (1998). When consider-
ing their role in the phonological grammar, not only the number of distinctive
features, but also the complexity and the type of constraints on each of the IF and
PF feature trees must be considered.

Figure 2.2 ASL signs showing different timing patterns of handshape and path movement: 2.2a INFORM shows the handshape and path movement in a cotemporal pattern; 2.2b DESTROY shows the handshape change happening only during the second part of the bidirectional movement; 2.2c BACKGROUND shows a handshape change occurring during a transitional movement between two parts of a repeated movement

The number of IFs is slightly larger (24)
than the number of PFs (22). The structure of the IF branch is also more complex and yields more potential surface contrasts than the PF branch.
In addition, the constraints on outputs of the PF tree are much more restrictive
than those on the outputs of the IF tree. A specific PF branch constraint sets a minimum of one and a maximum of three movement components in any lexical item, and another PF branch constraint limits the number of features from each class node to one in stems. PFs are also subject to principles of Alignment, which ensure
that a sign with movements involving both handshape and arm movements
will have the correct surface pattern; examples of such signs are INFORM,
DESTROY, and BACKGROUND, shown in Figure 2.2. The IF branch is subject
to fewer and more general constraints, and IFs are generally realized across the
entire prosodic word domain.
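To make the restrictiveness of the PF branch concrete, the two constraints just described can be encoded as a simple well-formedness check. The sketch below (in Python) is illustrative only: the encoding of a PF branch as a mapping from class nodes to feature lists, and every name in it, are mine, not part of the Prosodic Model's formalism.

    # Illustrative encoding: a PF branch as a mapping from class-node
    # names to the movement features specified under each node.
    def pf_branch_ok(pf, is_stem=True):
        # Constraint 1: between one and three movement components
        # in any lexical item.
        n_components = sum(len(feats) for feats in pf.values())
        if not 1 <= n_components <= 3:
            return False
        # Constraint 2: in stems, at most one feature per class node.
        if is_stem and any(len(feats) > 1 for feats in pf.values()):
            return False
        return True

    # One path feature plus one aperture change: well formed.
    print(pf_branch_ok({"path": ["direction"], "aperture": ["open"]}))  # True
    # Four movement components: ruled out by the first constraint.
    print(pf_branch_ok({"path": ["direction", "tracing"],
                        "aperture": ["open"], "wrist": ["nod"]}))       # False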
PFs also have the ability to undergo “movement migration,” while IFs do
not. A joint of the arm or even the torso can realize movements specified as
handshape movements (i.e. aperture changes) or wrist movements. Some of the
reasons for movement migration that have been documented in the literature include:
lexical emphasis (Wilbur 1999; 2000), linguistic maturation (Meier et al. 1998;
Holzrichter and Meier 2000; Meier 2000), loudness (Crasborn 2001), and motor
impairment due to Parkinson’s Disease (Brentari and Poizner 1994). Finally,
PFs participate in the operation of “segment generation,” while IFs do not. This
is explained further in Section 2.4 below.
Within the Prosodic Model, the following units of analysis are used, and they
are defined as follows:
(3) Units of phonological analysis
a. prosodic word (p-words): the phonological domain consisting of a
stem + affixes;
4 I am considering only forms from different morphological paradigms, so FLY and FLY-THERE
would not be a minimal pair. Perlmutter (1992) refers to some signs as having geminate Positions,
but the two Positions in such cases have different values, so they are not, strictly speaking,
geminates.
The complete set of reasons for an analogy between the IFs and consonants
in spoken languages is summarized in (7); they are further explained in the
following paragraph.
These facts about IFs and PFs have already been mentioned in Section 2.2.2,
but here they have new relevance because they are being used to make the
consonant:IF and vowel:PF analogy. In summary, if sign language Cs are prop-
erties of the IF tree and sign language Vs are properties of the PF tree, the
major difference between sign and spoken languages in this regard is that in
sign languages IFs and PFs are realized at the same time.
(8) Simple movement: one branching class node in the PF tree (timing-slot diagrams not reproduced)

(9) Complex movement: two or more branching class nodes in the PF tree (PF trees for INFORM, STEAL, FALL, and ACCOMPLISH-EASILY not reproduced)
ASL grammar exhibits sensitivity to the distinction between simple and complex movements in two types of nominalization – reduplicative nominalization and the formation of activity verbs (i.e. gerunds) – and in word order preferences. The generalization about this sensitivity is given in (10).
(10) Movement-internal sensitivity: ASL grammar is sensitive to the
complexity of movements, expressed as the number of movement
components.
With regard to nominalization, only simple movements – shown in (8) – undergo either type of nominalization. The first work on noun–verb pairs in ASL (Supalla and Newport 1978) describes reduplicative nominalization; the input forms are selected by two criteria: (a) they contain a verb that expresses the activity performed with or on the object named by the noun, and (b) the noun and verb are related in meaning. The structural restrictions for reduplicative nominalization are given in (11); typical forms that undergo this operation are given in (12a–b). All of the forms in the Supalla and Newport (1978) study that undergo reduplication are simple movement forms.6 There are also a few reduplicative nouns that do not follow the semantic conditions of Supalla and Newport (1978), but these also obey the structural condition of being simple movements (12c–d); a typical form that undergoes reduplication is shown in Figure 2.3.
(11) Reduplicative nominalization input conditions
a. They contain a verb that expresses the activity performed with or on the object named by the noun.
b. The noun and verb are related in meaning.
6 The movements of both syllables are also produced in a restrained manner. I am referring here
only to the nominalization use of reduplication. Complex movements can undergo reduplication
in other contexts, e.g. in various temporal aspect forms.
Figure 2.3 (a)–(b) A typical form undergoing reduplicative nominalization (illustrations not reproduced)
7 This definition of “trilled movement” is based on Liddell (1990). Miller (1996) argues that the
number of these movements is, at least in part, predictable due to the position of such movements
in the prosodic structure.
8 If a [trilled] movement feature does co-occur with a stem having a complex movement, it is
predicted that the more proximal of the movement components will delete (e.g. LEARNING,
BEGGING).
b. signs that undergo this operation and contain more than one branching PF node: INFORM, ACCOMPLISH-EASILY, FASCINATED, RUN-OUT-OF, FALL-ASLEEP, FINALLY
Because the segment, as defined above, is needed to capture these lengthening phenomena, we have evidence that it is a necessary unit in the phonology of sign languages.
12 Space does not permit me to give a more detailed set of arguments against these alternatives
here.
(23)–(24). A schema for the root-feature-segment relation for both spoken and
signed languages is given in (25a–b).13
(25) a–b. Schemas for the root–feature–segment relation in spoken and signed languages (tree diagrams not reproduced). The spoken schemas show each melody element of /dat/ and /dud/ linked to its own root and timing slot (x); the signed schemas show IF and PF trees – with features such as path [tracing], [direction: >1], [repeat: 180°], and aperture [open] – dominating a root whose timing slots are generated from the PF branch.
13 These are surface representations; for example, in English (23b) the /u/ in /dud/ is lengthened
before a voiced coda consonant resulting in an output [du:d]. Also, in ASL (24b) DESTROY,
the input form generates four segments due to the two straight path shapes located at the highest
node of the PF tree; however, since the second and third segments are identical in bidirectional
movements (indicated by the [repeat: 180°] feature), one is deleted to satisfy the Obligatory Contour Principle (OCP) (Brentari 1998: Chapter 5).
language: segments are necessary – but predictable – and the canonical rela-
tionship between roots and segments is 1:2, rather than 1:1.
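The derivation sketched in footnote 13 for DESTROY – four generated segments, one of which deletes because the bidirectional movement makes the two medial segments identical – amounts to a one-pass OCP reduction over adjacent timing slots. A minimal sketch (the list-of-labels encoding is mine, not Brentari's notation):

    def ocp_reduce(segments):
        # Delete one of any two adjacent identical segments (OCP).
        reduced = []
        for seg in segments:
            if not reduced or reduced[-1] != seg:
                reduced.append(seg)
        return reduced

    # DESTROY: two straight path shapes generate four timing slots;
    # [repeat: 180°] makes slots 2 and 3 identical, so one deletes.
    print(ocp_reduce(["x1", "x2", "x2", "x3"]))  # ['x1', 'x2', 'x3']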
Except for relatively rare morphemic changes such as ablaut marking the preterit in English (present sing vs. preterit sang; present ring vs. preterit rang), or person marking in Hua (Haiman 1979), which is indicated by the [±back] feature on the vowel, spoken languages tend to create polymorphemic words by adding sequential material in the form of segments or syllables. Even in Semitic languages, which utilize non-concatenative morphology, lexical roots and grammatical vocalisms alternate with one another in time; they are not layered onto the same segments used for the root as they are in sign languages.
This difference is a remarkable one; in this regard, sign languages constitute
a typological class unto themselves. No spoken language has been found that
is both as polysynthetic as sign languages and yet makes the morphological
distinctions primarily in monosyllabic forms. An example of a typical poly-
morphemic, monosyllabic structure in a sign language is given in Figure 2.6.14
Figure 2.6 A polymorphemic, monosyllabic form: handshape feature trees specifying fingers, thumb [unopposed], quantity ([one], [all]), and point of reference [ulnar] (diagrams not reproduced)
Unlike the other sections of this chapter, the central point of this section is to
show that minimal pairs in signed and spoken language are not fundamentally
different, but that a different structure is required for sign languages if we are
to see this similarity. If features dominate segments, as I have described is the
case for sign languages, this similarity is quite clear; if segments dominate
features, as is the case for spoken languages, the notion of the minimal pair in
sign language becomes difficult to capture.
The reason for this is as follows. If the handshapes for AIRPLANE and
MOCK are minimally different, then all things in other structures being equal,
the signed words in which they occur should also be minimally different. This is
the intuition of native signers, and this is the basis upon which Stokoe (1960) and
Klima and Bellugi (1979) established minimal pairs. In the Hold–Movement
Phonological Model proposed by Liddell and Johnson (1983; 1989) – which is
a model where segments dominate features – such signs are not minimal pairs,
because MOCK and AIRPLANE are signs where differences exist in more
than one segment. MOCK and AIRPLANE each have four segments, and the
handshape is the same for all of the segments. In the Prosodic Model, barring
exceptional circumstances, IFs spread to all segments.
(Prosodic Model representations of AIRPLANE and MOCK, not reproduced: the two signs are identical – path [direction: >1], [repeat], four timing slots (x) – except for their handshape features, hsa vs. hsb.)
I have suppressed the details of the representations that are not relevant here. The
important point is that the handshape features are represented once in the Prosodic Model, but once per segment in the Hold–Movement Model. The Prosodic
This chapter has shown that all of the divergent properties in (30) are due to
greater sensitivity to paradigmatic structure. This sensitivity can be traced to
the advantage of the visual system for vertical processing. Certain structural rearrangement and elaboration is necessary to represent sign languages efficiently, well beyond simply renaming features. The common properties in (29) are not
nearly as homogeneous in nature as the divergent ones, since they are not
attributable to physiology; these are likely candidates for UG.
Acknowledgments
I am grateful to Arnold Davidson, Morris Halle, Michael Kenstowicz, Richard
Meier, Mary Niepokuj, Cheryl Zoll, and two anonymous reviewers for their
helpful discussion and comments on a previous version of this chapter.
2.7 References
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bregman, Albert S. 1990. Auditory scene analysis. Cambridge, MA: MIT Press.
Brentari, Diane. 1990a. Theoretical foundations of American Sign Language phonol-
ogy. Doctoral dissertation, University of Chicago. (Published 1993, University of
Chicago Occasional Papers in Linguistics, Chicago, IL.)
Brentari, Diane. 1990b. Licensing in ASL handshape. In Sign language research: Theo-
retical issues, ed. Ceil Lucas, 57–68. Washington, DC: Gallaudet University Press.
Brentari, Diane. 1995. Sign language phonology: ASL. In A handbook of phonological
theory, ed. John Goldsmith, 615–639. New York: Basil Blackwell.
Brentari, Diane. 1996. Trilled movement: Phonetic realization and formal representation.
Lingua 98:43–71.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Brentari, Diane and Howard Poizner. 1994. A phonological analysis of a deaf Parkinso-
nian signer. Language and Cognitive Processes 9: 69–99.
Chao, Yuen Ren. 1968. A grammar of spoken Chinese. Berkeley: University of California
Press.
Chase, C. and A. R. Jenner. 1993. Magnocellular visual deficits affect temporal process-
ing of dyslexics. Annals of the New York Academy of Sciences 682:326–329.
Chomsky, Noam and Morris Halle. 1968. The sound pattern of English. New York:
Harper and Row.
Clements, George N. 1985. The geometry of phonological features. Phonology Yearbook
2:225–252.
Clements, George N. and Elizabeth V. Hume. 1995. The internal organization of speech
sounds. In A handbook of phonological theory, ed. John Goldsmith, 245–306.
New York: Basil Blackwell.
Corina, David, and Wendy Sandler. 1993. On the nature of phonological structure in
sign language. Phonology 10:165–207.
Coulter, Geoffrey. 1982. On the nature of ASL as a monosyllabic language. Paper pre-
sented at the Annual Meeting of the Linguistic Society of America, San Diego, CA.
Coulter, Geoffrey, ed. 1993. Phonetics and phonology, Vol. 3: Current issues in ASL
phonology. San Diego, CA: Academic Press.
Coulter, Geoffrey and Stephen Anderson. 1993. Introduction. In Coulter, ed. (1993),
1–17.
Crasborn, Onno. 2001. Phonetic implementation of phonological categories in Sign
Language of the Netherlands. Doctoral dissertation, HIL, Leiden University.
Dixon, R. M. W. 1977. A grammar of Yidiny. Cambridge/New York: Cambridge Uni-
versity Press.
Emmorey, Karen and Harlan Lane. 2000. The signs of language revisited: Festschrift for
Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum Associates.
Fischer, Susan and Wynn Janis. 1990. Verb sandwiches in American Sign Language. In
Current trends in European sign language research, ed. Siegmund Prillwitz and
Tomas Vollhaber, 279–294. Hamburg, Germany: Signum Press.
Fortescue, Michael D. 1984. West Greenlandic. London: Croom Helm.
Frost, Ram and Shlomo Bentin. 1992. Reading consonants and guessing vowels: Visual
word recognition in Hebrew orthography. In Orthography, phonology, morphol-
ogy and meaning, ed. Ram Frost and Leonard Katz, 27–44. Amsterdam: Elsevier
(North-Holland).
Goldsmith, John. 1976. Autosegmental phonology. Doctoral dissertation, MIT, Cam-
bridge, MA. (Published 1979, New York: Garland Press.)
Goldsmith, John. 1992. Tone and accent in Llogoori. In The joy of syntax: A festschrift in
honor of James D. McCawley, ed. D. Brentari, G. Larson, and L. MacLeod, 73–94.
Amsterdam: John Benjamins.
Goldsmith, John. 1995. A handbook of phonological theory. Oxford/Cambridge, MA:
Basil Blackwell.
Green, David M. 1971. Temporal auditory acuity. Psychological Review 78:540–551.
Haiman, John. 1979. Hua: A Papuan language of New Guinea. In Languages and their
status, ed. Timothy Shopen, 35–90. Cambridge, MA: Winthrop.
Hirsh, Ira J., and Carl E. Sherrick. 1961. Perceived order in different sense modalities.
Journal of Experimental Psychology 62:423–432.
Holzrichter, Amanda S. and Richard P. Meier. 2000. Child-directed signing in ASL. In
Language acquisition by eye, ed. Charlene Chamberlain, Jill P. Morford and Rachel
Mayberry, 25–40. Mahwah, NJ: Lawrence Erlbaum Associates.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Doctoral dissertation, Univer-
sity of Massachusetts, Amherst. (Published 1989, New York: Garland Press.)
Jakobson, Roman, Gunnar Fant, and Morris Halle. 1951, reprinted 1972. Preliminaries
to speech analysis. Cambridge, MA: MIT Press.
Jenkins, J., W. Strange, and M. Salvatore. 1994. Vowel identification in mixed-speaker
silent-center syllables. The Journal of the Acoustical Society of America 95:1030–
1035.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Kohlrausch, A., D. Püschel, and H. Alphei. 1992. Temporal resolution and modulation
analysis in models of the auditory system. In The Auditory processes of speech:
Stack, Kelly. 1988. Tiers and syllable structure: Evidence from phonotactics. M.A. thesis,
University of California, Los Angeles.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American Deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965. A dictionary
of American Sign Language on linguistic principles. Silver Spring, MD: Linstok
Press.
Strange, Winifred. 1987. Information for vowels in formant transitions. Journal of Mem-
ory and Language 26:550–557.
Supalla, Ted and Elissa Newport. 1978. How many seats in a chair? The derivation of
nouns and verbs in American Sign Language. In Understanding language through
sign language research, ed. Patricia Siple, 91–133. New York: Academic Press.
Uyechi, Linda. 1995. The geometry of visual phonology. Doctoral dissertation, Stanford
University, Stanford, CA. Published 1996, CSLI, Stanford, California.
van der Hulst, Harry. 1993. Units in the analysis of signs. Phonology 10:209–241.
van der Hulst, Harry. 1995. The composition of handshapes. University of Trondheim,
Working Papers in Linguistics, 1–17. Dragvoll, Norway: University of Trondheim.
van der Hulst, Harry. 2000. Modularity and modality in phonology. In Phonological
knowledge: Conceptual and empirical issues, ed. Noel Burton-Roberts, Philip Carr,
and Gerard J. Docherty. Oxford: Oxford University Press.
van der Hulst, Harry and Anne Mills. 1996. Issues in sign linguistics: Phonetics, phonol-
ogy and morpho-syntax. Lingua 98:3–17.
Welch, R. B. and D. H. Warren. 1986. Intersensory interactions. In Handbook of percep-
tion and human performance, Volume 1: Sensory processes and perception, ed. by
Kenneth R. Boff, Lloyd Kaufman, and James P. Thomas, 25–36. New York: Wiley.
Wilbur, Ronnie. 1987. American Sign Language: Linguistic and applied dimensions,
2nd edition. Boston, MA: Little, Brown.
Wilbur, Ronnie B. 1999. Stress in ASL: Empirical evidence and linguistic issues. Lan-
guage and Speech 42:229–250.
Wilbur, Ronnie B. 2000. Phonological and prosodic layering of non-manuals in Amer-
ican Sign Language. In Emmorey and Lane, 213–241.
Zec, Draga and Sharon Inkelas. 1990. Prosodically constrained syntax. In The
phonology-syntax connection, ed. Sharon Inkelas and Draga Zec, 365–378.
Chicago, IL: University of Chicago Press.
3 Beads on a string?
Representations of repetition in spoken
and signed languages
Rachel Channon
3.1 Introduction
Someone idly thumbing through an English dictionary might observe two char-
acteristics of repetition in words. First, segments can vary in the number of times
they repeat. In no, Nancy, unintended, and unintentional, /n/ occurs one, two,
three, and four times respectively. In the minimal triplet odder, dodder, and
doddered, /d/ occurs one, two, and three times.
A second characteristic is that words repeat rhythmically or irregularly:
(1) Rhythmic repetition: All the segments of a word can be temporally
sliced to form at least two identical subunits, with patterns like aa,
abab, and ababab. Examples: tutu (abab), murmur (abcabc).
(2) Irregular repetition: Any other segment repetition, such as abba, aabb,
abca, etc. Examples: tint (abca), murmuring (abcabcde).
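Read procedurally, these definitions classify any segment string: a string is rhythmic if the whole of it consists of two or more copies of a single subunit, and irregular if some segment repeats in any other arrangement. A minimal sketch of the classification (the function is mine; orthographic strings stand in for segment sequences):

    def classify_repetition(segments):
        # Rhythmic: the whole string is two or more copies of one
        # subunit (aa, abab, ababab, ...).
        n = len(segments)
        for size in range(1, n // 2 + 1):
            if n % size == 0 and segments == segments[:size] * (n // size):
                return "rhythmic"
        # Irregular: any other repetition of a segment (abba, abca, ...).
        if len(set(segments)) < n:
            return "irregular"
        return "none"

    print(classify_repetition("tutu"))    # rhythmic (abab)
    print(classify_repetition("murmur"))  # rhythmic (abcabc)
    print(classify_repetition("tint"))    # irregular (abca)
    print(classify_repetition("no"))      # none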
If asked to comment on these two characteristics, a phonologist might shrug
and quote from a phonology textbook:
[An] efficient system would stipulate a small number of basic atoms and some simple
method for combining them to produce structured wholes. For example, two iterations
of a concatenation operation on an inventory of 10 elements . . . will distinguish 10³ items . . . As a first approximation, it can be said that every language organizes its lexicon
in this basic fashion. A certain set of speech sounds is stipulated as raw material. Distinct
lexical items are constructed by chaining these elements together like beads on a string.
(Kenstowicz 1994:13)
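The arithmetic in the quotation is easily made concrete: two iterations of concatenation over a 10-element inventory yield strings of three atoms, and hence 10³ distinct items. A toy rendering in Python (the inventory is arbitrary and hypothetical):

    from itertools import product

    atoms = list("ptkbdgaiue")  # a hypothetical 10-element inventory
    forms = {"".join(beads) for beads in product(atoms, repeat=3)}
    print(len(forms))           # 1000 = 10**3 distinct "beads on a string"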
Repetition, the phonologist might say, is the meaningless result of the fact that
words have temporal sequences constructed from segments. Because languages
have only a limited segment set, then by chance some sequences include a
varying number of repeating segments. This explains Nancy and the others.
As for rhythmic and irregular repetition, by chance some segment sets repeat
exactly and some do not. And that would be the end of the story.
These data, however, are interesting when compared to sign language data.
Sign phonologists agree that signs have the same function in sign language
that words have in spoken language.1 Because their function is the same, one
might reasonably expect that the representations are similar and that signs, like
words, have segment sequences. If so, then segment repetition should behave
similarly. Irregular repetition should be common, and contrasts should occur
between one, two, three, or more repetitions within a sign.
But this is not what happens. The following seem to be correct generaliza-
tions for signs (with certain exceptions and explications to be given along the
way):
(3) Two, three, or more repetitions are not contrastive in signs.
(4) Simple signs have only rhythmic repetition.
(5) Compound signs have only irregular repetition.
The simple sign TEACH shows that number of repetitions is not contrastive;
the sign illustrates rhythmic repetition patterns. Both hands are raised to face
level. The hands move away from the face, parallel to the ground, then back
several times: out–in–out–in. The out–in motion may occur two times (abab),
three times (ababab), or more times without altering the sign’s meaning.
The compound TEACH-PERSON ‘teacher’ illustrates irregular repetition.
TEACH is made as described. Then the hands move in parallel lines down the
sides of the body (PERSON). The outward–inward repeated direction change
precedes a completely different motion down the body that does not repeat, for
an irregular repetition pattern of out–in–out–in–down (ababc).
This chapter explores the possibility that repetition is different in words and
signs because their phonological structures are different. Words have contrastive
repeating temporal sequences that must be represented with repeating segments.
Simple signs do not have contrastive repetition sequences, and a single segment
can, and should, represent them. They function like spoken words, but their
representations are like spoken segments.
Section 2 discusses contrast in the number of repetitions, and Section 3,
rhythmic and irregular repetition. Signs and words are shown to be dissimilar
for both characteristics. Section 4 presents two representations: Multiseg with
multiple segments, and Oneseg with one segment for simple signs and two for
compounds. Most examples are American Sign Language (ASL) from Costello
(1983) and use her glosses. Only citation forms of signs, or dictionary entries,
are included here, so inflected verbs, classifier predicates, and fingerspelling
are not considered.
1 Signs are not morphemes, because signs can have multiple morphemes (PARENTS has mor-
phemes for mother and father). Translation shows that signs are similar to words. Because most
sign translations are one word/one sign, it seems reasonable to assume that signs are not phrases,
but words, and attribute the occasional phrasal translation to lexical gaps on one side or the other.
While a sign may properly be called a “word” or a “sign word”, here “word” refers to spoken
words only, to keep the distinction between sign and speech unambiguous.
The Abbé’s signs for the perfect and pluperfect are impossible because they
use a contrastive number of repetitions. In an ordinary, non-emphatic utterance,
two or three repetitions are usual, but variation in number of repetitions cannot
produce a minimal pair.
Repetition in speech can be defined in terms of segments, but because the
question raised here is what representation a sign has, notions of segments
within a sign cannot be relied on to define the repetition unit. A pre-theoretic
notion of motion in any one direction is used instead. TEACH is an example
of repeated outward and inward motion.2 If the action is circular, and the hand
returns to the beginning point and continues around the same circle, this also
counts as repetition. In ALWAYS, the index finger points up and circles several
times. In arcing signs, each arc counts as a motion, as in GRANDMOTHER,
with repeating arcs moving outward from the chin, where the hand does not
return to the starting point to repeat, but instead each arc begins at the last arc’s
ending point. Repeated handshape changes, such as opening and closing are
2 For some signs with a back and forth motion, such as PAPER and RESEARCH, the action
returning the hands to the original starting point is epenthetic (Supalla and Newport 1978;
Perlmutter 1990; Newkirk 1998), and the underlying repetition pattern is a–a. For other signs,
such as TEACH or DANCE, the underlying repetition pattern is ab–ab. This difference does
not affect the arguments presented here (both types have noncontrastive number of repetitions,
and are rhythmic) so abab is used for both.
Table 3.2 Number and percentage of signs with irregular and rhythmic repetition (n = signs examined; percentages are of repeating signs)

                           Irregular        Rhythmic
                      n      n      %      n       %
Simple signs
  ASL              1135      0    0.0    527   100.0
  IPSL              282      0    0.0     87   100.0
  ISL              1490      0    0.0    648   100.0
  NS                370      0    0.0    124   100.0
  MSL               114      0    0.0     31   100.0
  Total            3391      0    0.0   1417   100.0
Compound signs
  ASL                75     19  100.0      0     0.0
  IPSL                6      2  100.0      0     0.0
  ISL               124     85  100.0      0     0.0
  NS                 95     47  100.0      0     0.0
  MSL                35     10  100.0      0     0.0
  Total             335    163  100.0      0     0.0
All signs
  ASL              1210     19    3.5    527    96.5
  IPSL              288      2    2.2     87    97.8
  ISL              1614     85   11.6    648    88.4
  NS                465     47   27.5    124    72.5
  MSL               149     10   24.4     31    75.6
  Total            3725    163   10.3   1413    89.7
Table 3.2 shows the number and percentage of signs with rhythmic and irreg-
ular repetition. All signs from the following sources were examined: Costello
1983 for ASL, Savir 1992 for ISL, Japanese Federation of the Deaf 1991
for Japanese Sign Language (Nihon Syuwa or NS), the appendix of Zeshan
2000 for IndoPakistan Sign Language (IPSL), the appendix to Shuman and
only 64 different types; the Korean example has 60 tokens and 48 different types. All tokens
are counted (because of the time-consuming nature of determining which words are duplicates).
This does not seriously affect the percentages of rhythmic and irregularly repeating words
because the percentage of types and tokens should be approximately the same, since there is
no reason that tokens should include more examples of repetition or nonrepetition than types
do. In the Korean example, 63 percent of the types, and 67 percent of the tokens have irregular
repetition. In the American English example, 13 percent of the types and 10 percent of the tokens
have irregular repetition. (Neither example has rhythmically repeating words.) There are only
three rhythmically repeating words in the entire IPA sample, none of which have more than one
token, so the major point that irregular repetition is overwhelmingly preferred to rhythmic is
true regardless of whether tokens or types are counted.
Cherry-Shuman 1981 for a Yucatec Mayan sign language used in the village of
Nohya (MSL).7
ASL is one of the oldest sign languages and may have the largest population of
native and nonnative signers of any sign language. MSL is at the other extreme.
Nohya’s population of about 300 included about 12 deaf people. The oldest
deaf person seems to have been the first deaf person in Nohya, and claims to
have invented the language (Shuman 1980:145), so the language is less than
100 years old. As the table shows, language age and number of users do not
significantly affect repetition patterns.
Compounds, the concatenation of two signs to make a single sign, fall along
a continuum from productive to lexical, terms loosely borrowed from Liddell
and Johnson (1986). Productive compounds strongly resemble the two signs
they are formed from. Any signer can identify the two parts. Examples are
TEACH-PERSON ‘teacher’, SHOWER-BATHE ‘shower’, SLEEP-CLOTHES
‘pajamas’ (Costello 1983), and FLOWER-GROW ‘plant’ (Klima and Bellugi
1979:205). These signs can have irregular repetition.8
7 Occasionally, a dictionary lists the same sign on different pages. Because there is no simple
way to ensure that each sign is only counted once, each token is counted. This should not
affect the reported percentages. The ISL dictionary has one ambiguous symbol: a circle with
an arrowhead, specified in the key as “full circle movement.” It is impossible to tell whether
those cycles occur once or more than once. In some signs, such as (ocean) WAVE or BICYCLE,
iconicity strongly suggests that the circular motion repeats. Furthermore, in ASL most circular
signs repeat. Therefore, all 105 signs with this symbol count as rhythmically repeating. Email
to Zeshan resolved several symbols for IPSL. A closed circle with an arrow counts as repeating,
an open circle as nonrepeating. Hands opening and closing count as repeating. In NS, repetition
may be undercounted, because pictures and descriptions do not always show repetition, though
iconicity suggests it. For example, HELP is described as “pat the thumb as if encouraging the
person.” But pat is not specified as repeated, and the picture does not show repetition, so it is
not counted as repeating. MSL signs that use aids other than the signer’s own body (pulling the
bark off a tree for BARK) are omitted.
8 Only productive compounds are systematically distinguished in the dictionaries. The ASL, NS,
and MSL dictionaries have written descriptions as well as pictures, and the compilers usually
indicate which signs they consider compounds. The ASL dictionary indicates compounds by
a “hint” (“X plus X”) and/or with two pictures labeled 1 and 2; the IPSL and NS have two
pictures labeled 1 and 2. The MSL dictionary usually indicates which are compounds, but
some judgments are needed. For example, BIRD, an upward point, followed by arm flapping,
is coded as a compound. The ISL dictionary does not explicitly recognize compounds, and
has no descriptions, but productive compounds have two pictures, instead of the normal one.
(Many signs can be identified as compounds, because the two parts can be found as separate signs
elsewhere.) However, some signs that clearly are not compounds also have two pictures; usually,
signs with handshape changes. Those ISL signs with two pictures where the difference is only
one handshape, place, or orientation change are therefore counted as simple signs. Including
these two-picture signs as simple signs decreases the count of rhythmically repeating compound
signs. The irregular repetition count is not affected, because none of these signs has irregular
repetition. Because the other dictionaries have almost no rhythmically repeating compounds,
and these signs do not look like compounds, the choice seems justified. Klima and Bellugi
(1979) list tests for whether two signs are compounds or phrases, but these cannot be used in a
dictionary search. So some signs counted as compounds are probably phrases. Including all but
the most obvious (such as how are you) was preferred to omitting signs nonsystematically.
Lexical compounds have been so changed from their two-sign origin that in
almost every respect they look like noncompound signs. Often, only historical
evidence identifies a compound origin. An example is ASL DAUGHTER, where
the flat hand touches the jaw and then the elbow crook, from GIRL (the fist
hand draws the thumb forward along the jaw line) and BABY (the two arms
cradle an imaginary baby and rock back and forth). The sources do not identify
these lexical compounds as compounds. They cannot be identified as such
without knowing the language, and so are counted as simple signs. From here
forward, noncompounds and lexical compounds are called simple signs, and
the productive compounds, compounds.
Table 3.2 supports the generalizations for repetition shown in (8), (9), and
(10).
(8) Simple signs repeat rhythmically, not irregularly.
(9) Compound signs repeat irregularly, not rhythmically.
(10) Rhythmic repetition is common in signs instead of rare as in speech.
Other researchers have observed generalizations (8) and (9). Uyechi (1996:118)
notes that “a well formed repeated gesture is articulated with two identical ges-
tures.” Supalla (1990:14–15) observes that simple signs can only have redupli-
cation (i.e. rhythmic repetition):
ASL signs show restricted types of syllabic structure . . . Coulter (1982) has argued that
simple signs are basically monosyllabic. He has suggested that the multi-syllabic forms
that exist in ASL are all either reduplicated forms or compounds (excluding the category
of fingerspelled loans). (Note that a compound is not a good example of a simple sign
since it consists of two signs.) Among the simple signs, Wilbur (1987) pointed out that
there is no ASL sign with two different syllables, as in the English verb “permit.” The
only multisyllabic signs other than compounds are with reduplicated syllables.
Table 3.2 shows that simple signs only repeat rhythmically,9 confirming gener-
alization (8), and that compound signs only repeat irregularly, confirming gen-
eralization (9). Rhythmic repetition in repeating words occurs about 1 percent of
9 Four irregularly repeating signs are excluded from the simple sign counts. Costello does not
recognize SQUIRREL as a compound. Nevertheless, this sign seems analyzable as a compound
in which the contact with the nose or chin is a reference to a squirrel’s pointed face and the
second part a reference to the characteristic action of a squirrel holding its paws near the upper
chest. ISL SICK, SMART, and NAG each have only one picture, implying they are simple signs.
However, SICK and NAG both have two places – one at the forehead and one on the hand – and
are probably compounds. The drawing for SMART, which may be misleading, has an unusual
motion that seems to be a straight motion followed by wiggling. Including these as simple
signs would change the percentage of simple signs with rhythmic repetition from 100 percent
to 99.7 percent. Costello recognizes three rhythmically repeating signs as compounds, PENNY,
NICKEL, and QUARTER. For each, Costello uses the wording “x plus x” which is one of the
guides to whether she considers a sign a compound. However, these three signs are clearly highly
assimilated lexical compounds, with a single place (forehead), and a single motion (outward,
with simultaneous finger wiggling for QUARTER). They are therefore counted as simple signs.
10 Note that repetition often deletes: TEACH-PERSON can be pronounced as a single outward
motion followed by a downward motion. This does not affect the point here, which is that if
compounds repeat on a part, they can only have three possible patterns. A fourth pattern is
nonrepeating followed by nonrepeating, which produces a nonrepeating compound, not of in-
terest here. Note that two rhythmically repeating signs will not usually produce a rhythmically
repeating compound. By definition a rhythmically repeating sign must have two or more iden-
tical subunits, so two signs have at least four subunits ab, ab, cd, and cd. Unless ab and cd
are very similar, the concatenated compound cannot be rhythmically repeating. Over time, of
course, these productive compounds do alter to become lexical compounds that are rhythmically
repeating or nonrepeating.
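The combinatorics in this footnote can be enumerated directly: if each of a compound's two parts either repeats rhythmically or does not, exactly three repeating surface patterns arise, plus one nonrepeating one. A toy enumeration (notation mine):

    from itertools import product

    def part(subunit, repeats):
        # A rhythmically repeating part doubles its subunit (ab -> abab).
        return subunit * 2 if repeats else subunit

    for rep1, rep2 in product([True, False], repeat=2):
        print(rep1, rep2, part("ab", rep1) + part("cd", rep2))
    # True  True  -> ababcdcd  (repeating + repeating)
    # True  False -> ababcd    (repeating + nonrepeating)
    # False True  -> abcdcd    (nonrepeating + repeating)
    # False False -> abcd      (a nonrepeating compound)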
11 A one-segment string cannot repeat; a two-segment string can only repeat rhythmically. Irregular
repetition can only occur in strings of three and more segments. Note the difference from
repetition as a segment feature, where one-segment signs can repeat rhythmically and two-
segment signs can repeat irregularly.
12 These constraints are likely to operate against both irregular and rhythmic strings, so they would
be unlikely to explain why rhythmic is so dispreferred. For example, a constraint that restricts
codas to nasals would eliminate many possible irregularly repeating words like tat or district,
but it would also eliminate rhythmically repeating words like murmur.
Why are simple signs and compound signs so different? Why do compound
signs have only three irregular repetition patterns?
I propose that all the repetition facts for signs can be economically explained
if simple signs are not segment strings, but single segments, and if repetition
is not caused by string permutations, but instead by a feature [repeat]. To show
this, I compare the characteristics of a multisegmental and single segment rep-
resentation, here abbreviated to Multiseg and Oneseg.
Before comparing the two representations, first consider an unworkable, but
instructive, minimal representation, which allows only one occurrence of any
feature in one unordered segment. A word like da can be represented, because
d and a have enough disjoint features to identify them. Example (15) shows
a partial representation. (Order problems are ignored here, but a CV syllable
structure can be assumed which allows da but not ad. See also footnote 15
below.) Although this representation can handle a few words, it cannot handle
repetition, either rhythmic as in dada or irregular as in daa or dad, because each
feature can only occur once.
(15) da as one segment
[da]
Multiseg can generate any irregular repetition pattern. While this is correct for
words, it is too powerful for signs, which only have three irregular patterns.
Multiseg systematically overgenerates non-occurring signs with a variety of
irregular patterns, as shown in (18).
(18) A non-occurring sign overgenerated by Multiseg: a five-segment place sequence, near eye – near ear – near nose – near ear – near mouth (feature-bundle diagram not reproduced)
an equally serious problem. Finally, Multiseg cannot explain the sharp con-
trast between simple and compound signs in terms of rhythmic and irregular
repetition, since it makes no special distinction between simple and compound
forms.
Oneseg is a much less powerful solution, which is nevertheless a better fit
for simple signs and compounds. Oneseg has one segment for simple signs,
two segments for compounds and adds a feature [repeat]. Brentari (1998),13
Perlmutter (1990), and others have proposed similar features for repetition.
The default value for [repeat] is that whatever changes in the sign repeats. A
sign with handshape change repeats the handshape change: SHOWER opens
and closes the hand repeatedly. If the hand contacts a body part, the contact
is repeated: MOTHER contacts the chin repeatedly. If the hand moves along a
path, as in TEACHER, the path repeats.14
Like other features, [repeat] cannot occur more than once in a segment. The
representation for da (15) does not change, but Oneseg can represent words that
repeat as a whole (rhythmic repetition): (22) shows dada.
(22) Oneseg: dada/dadada as one segment: [da] plus [repeat] (feature-bundle diagram not reproduced)
13 Her [repeat] feature, however, has more details than needed here.
14 A few signs have more than one change, and some constraint hierarchy probably controls this.
For example, in signs with both a handshape change and a path motion – as in DREAM – it
may be generally true that only the handshape change repeats. This issue may have more than
one solution, however, and further details of [repeat], and its place in a possible hierarchical
structure, are left for future research.
compound signs, and any irregular repetition pattern. It represents many sets of
signs that are systematically non-occurring, and produces multiple representa-
tions for existing signs.
Oneseg allows only rhythmic repetition in simple forms, only three irreg-
ularly repeating patterns in compounds, and only a noncontrastive number of
repetitions. These are the characteristics seen in simple and compound signs.
While Oneseg cannot possibly represent all possible words, it can represent the
repetition facts in sign. An important goal in linguistics is to use the least pow-
erful representation, i.e. the representation that allows all and only the attested
language forms. Undergeneration is bad, but so is overgeneration. If Oneseg
can represent all signs, and predict systematic gaps in the lexicon that Multiseg
cannot, it is preferable to Multiseg for signs.
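The logic of this comparison can be fixed with a minimal data-structure rendering of the two representations. Everything in the sketch below – the class names, the feature encoding, the use of a Boolean for [repeat] – is my illustration, not Channon's formalism:

    from dataclasses import dataclass

    # Multiseg: a form is an ordered string of segments, so it can
    # encode contrastive repetition counts and any irregular pattern.
    MultisegForm = tuple  # e.g. ("a", "b", "a", "b", "c")

    @dataclass(frozen=True)
    class OnesegSegment:
        features: frozenset   # one unordered feature bundle
        repeat: bool = False  # [repeat] occurs at most once, so two
                              # vs. three repetitions is not encodable

    @dataclass(frozen=True)
    class OnesegSign:
        segments: tuple       # length 1 (simple sign) or 2 (compound)

    # TEACH: one segment whose movement repeats (abab, ababab, ...).
    TEACH = OnesegSign((OnesegSegment(frozenset({"out-in path"}), True),))

    # TEACH-PERSON: two segments; only the first repeats (ababc).
    TEACH_PERSON = OnesegSign((
        OnesegSegment(frozenset({"out-in path"}), True),
        OnesegSegment(frozenset({"down path"})),
    ))

On this rendering the systematic gaps fall out by construction: there is no OnesegSign for an irregularly repeating simple sign such as abca, because a single segment has no internal ordering and carries at most one value of [repeat].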
Two important points need to be mentioned. The first is that a sign unit
does not have to be a “segment”. What is essential is some unit of time, which
could be segmental, syllabic or other. One can disagree, as Wilbur (1993) does,
with the statement that any phonological representation of a sign or word must
have at least one segment. But it should be noncontroversial to claim that any
phonological representation for a sign or word must have at least one unit of
time. Multiseg and Oneseg are intended to look at the two logical possibilities of
one timing unit or more than one timing unit with as little additional apparatus
as possible. What must happen if signs or words have multiple timing units, or
only one timing unit? Because the phonological representations assumed here
have little detail, “timing unit” could be substituted for every occurrence of
“segment” (when referring to sign languages), because here segment is only
one possible instantiation of a single timing unit. This use of segment, however,
implies that segments are structurally unordered, or have no internal timing
units. Not every phonologist has so understood segment. For example, van der
Hulst’s (2000) single segment sign representation is an ordered segment with
multiple timing units, and therefore an example of a Multiseg representation,
not Oneseg.
If the two representations are understood as generic multiple or single timing
unit representations, then Multiseg is the basis for all commonly accepted rep-
resentations for speech, as well as almost all representations proposed for signs,
including the multisegmental representations of Liddell and Johnson (1989),
Sandler (1989), and Perlmutter (1993), and those using other units such as
syllable, cell, or ordered segment (Wilbur 1993; Uyechi 1996; Brentari 1998;
Osugi 1998; van der Hulst 2000).
Probably the closest to Oneseg is Stokoe’s (1960) conception of a sign as
a simultaneous unit. Oneseg however does not claim that everything about a
sign is simultaneous, but only that sequence does not imply sequential structure
(multiple segments/timing units), and that features can handle the simple and
constrained sequences that actually occur in signs. Repetition is one example
of sequence for which a featural solution exists, as described above. Two other
examples are handshape and place sequences, which features or structure could
handle (Corina 1993; Crain 1996).
A second point is that whether a multisegmental or single segment repre-
sentation is correct should be considered and decided before, and separately
from, questions of hierarchy, tiers, and other details within the segment. Au-
tosegmental characteristics are irrelevant. The models discussed here are neither
hierarchical nor nonhierarchical, and representation details are deliberately left
vague. Multiseg and Oneseg are not two possible representations among many.
At this level of detail, there are only two possible representations: either a rep-
resentation has one segment/timing unit or it has more than one segment/timing
unit. When this question has been answered, then more detail within a repre-
sentation can be considered, such as timing unit type and hierarchical structure.
KNOW: [forehead], B handshape (feature-bundle diagram not reproduced)
Since the hand moves toward the forehead, contact must be final. IDEA (27)
can be represented with a feature [out].16
(27) IDEA: [out] (feature-bundle diagram not reproduced)
Since the hand moves out from the body, contact must be initial. Thus, while
there are variations in when the hand contacts the body, these variations do not
require structural sequence.
A final challenge is the apparent temporal contrasts of some inflected signs.
As already mentioned, the proposal made here does not apply to all signs, but
only to simple signs and compounds: the kind of signs found in an ordinary
sign language dictionary. Inflected signs (both inflected verbs and classifier
16 Note that in KNOW, the hand may approach the forehead from any phonetically convenient
direction, but in IDEA, the hand must move in a specific direction, namely out. Note also
that Multiseg must represent most signs with phonologically significant beginning and ending
places; Oneseg represents most signs as having one underlying place.
predicates) are excluded. The verb KNOCK is one example of the problems of
inflected forms for a single segment representation. It can be iconically inflected
to have a contrastive number of repetitions (knocking a few times vs. knock-
ing for a long time). A second example is the Delayed Completive (Brentari
1998), an inflection with a prolonged initial length that appears to contrast
with the shorter initial length of the uninflected sign, and which iconically
represents a prolonged initial period of inaction (Taub 1998). These types of
contrasts occur only within the inflected verbs and classifier predicate domain,
and Oneseg cannot represent them. I argue that they are predictably iconic, and
this iconicity affects their representation, so that some elements of inflected
signs have no phonological representation.17 Channon (2001) and Channon
(2002) explain these exclusions in more detail. If it were the case that inflected
signs could not be explained as proposed, then one might invoke a solution
similar to Padden’s (1998) proposal that ASL has vocabulary groups within
which different rules apply. Regardless of the outcome for inflected verbs, it
remains true that Oneseg represents simple signs and compounds better than
Multiseg.
To turn the tables, (28) offers some examples of physically possible, but
non-occurring, simple signs as a challenge to Multiseg.
(28) a. Contact the ear, nose, mouth, chest in that order and no other.
b. Open the fist hand to flat spread, then to an extended index.
c. Contact the ear, brush the forehead, then draw a continuous line
down the nose.
d. Contact the ear for a prolonged time, then contact the chin for a
normal time.
e. Wiggle the hand while moving from ear to forehead, then move
without wiggling to mouth (see Sandler 1989:55; Perlmutter 1993).
The existence of such signs would be convincing evidence that signs require
Multiseg; the absence of such signs is a serious theoretical challenge for
Multiseg, which predicts that signs like these are systematically possible. Be-
cause these are systematic gaps, a representation must explain why these signs
do not exist. In English, the absence of blick is an accidental gap in the lexicon
that has no phonological explanation, but the absence of bnick is systematic,
and has a phonological explanation: bn is not an acceptable English consonant
cluster. Proponents of Multiseg must likewise explain the systematic gaps illus-
trated above. Note that Oneseg explains these gaps easily: it cannot represent
them.
17 This proposal also excludes fingerspelling. Its dependence on the sequence of English letters
means that it has repetition patterns more like speech than the signs examined here. This must
somehow be encoded in the representation, but is left for future research.
3.5 Conclusion
This chapter has shown that there are interesting and even surprising differ-
ences in repetition characteristics in the two language modalities. Number of
repetitions is contrastive in words, but not signs. Only a few repeating words
are rhythmic, but all repeating simple signs are rhythmic. Words and com-
pound signs have similar high rates of irregular repetition, but words allow any
irregular repetition pattern, while compound signs allow only three.
The models Multiseg for words and Oneseg for signs economically explain
these differences. Multiseg must represent different numbers of repetitions
contrastively; Oneseg cannot represent number of repetitions contrastively.
Multiseg can represent both rhythmic and irregular repetition. Possible seg-
ment string permutations suggest that irregular repetition should be common
and rhythmic repetition rare. Oneseg can only represent rhythmic repetition
in simple signs, but allows irregular repetition in two segment compounds.
Multiseg allows any irregular repetition pattern, but Oneseg allows only three.
Multiseg correctly represents the repetition data for words, but overgenerates
for signs; Oneseg undergenerates for words, and correctly represents the data
for signs. A single segment for simple signs, and two segments for compounds,
plus a [repeat] feature, is therefore a plausible representation.
Acknowledgments
I thank Linda Lombardi for her help. She has been an exceptionally consci-
entious, insightful and intelligent advisor. I thank Richard Meier and the two
anonymous reviewers for their helpful comments, Thomas Janulewicz for his
work as ASL consultant, Ceil Lucas for discussion of some of the issues raised
here, and the audiences at the Student Conference in Linguistics at the Uni-
versity of Arizona at Tucson, the Texas Linguistic Society conference at the
University of Texas at Austin, the student conference at the University of Texas
at Arlington, and the North American Phonology Conference in Montreal.
3.6 References
Ameka, Felix K. 1999. The typology and semantics of complex nominal duplication in
Ewe. Anthropological Linguistics 41:75–106.
Anderson, Stephen R. 1993. Linguistic expression and its relation to modality. In Coulter,
ed. (1993), 273–290.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Channon, Rachel. 2001. The protracted inceptive verb inflection and phonological
Lane, Harlan. 1984. When the mind hears: A history of the deaf. New York: Vintage
Books.
Liddell, Scott K. and Robert E. Johnson. 1986. American Sign Language compound
formation processes, lexicalization, and phonological remnants. Natural Language
and Linguistic Theory 4:445–513.
Liddell, Scott K. and Robert E. Johnson. 1989. American Sign Language: The phono-
logical base. Sign Language Studies 64:195–278.
Miller, Christopher Ray. 1996. Phonologie de la langue des signes québécoise: Structure
simultanée et axe temporel. Doctoral dissertation, Université du Québec à Montréal.
Newkirk, Don. 1998. On the temporal segmentation of movement in American Sign
Language. Sign Language and Linguistics 1:173–211.
Osugi, Yutaka. 1998. In search of the phonological representation in American Sign
Language. Doctoral dissertation, University of Rochester, NY.
Padden, Carol. 1998. The ASL Lexicon. Sign Language and Linguistics 1:39–60.
Penn, Claire, Dale Ogilvy-Foreman, David Simmons, and Meribeth Anderson-Forbes.
1994. Dictionary of Southern African signs for communicating with the deaf.
Pretoria, South Africa: Joint Project of the Human Sciences Research Council
and the South African National Council for the Deaf.
Perlmutter, David M. 1990. On the segmental representation of transitional and bidi-
rectional movements in American Sign Language phonology. In Fischer and Siple,
67–80.
Perlmutter, David M. 1993. Sonority and syllable structure in American Sign Language.
In Coulter, ed. (1993), 227–261.
Kamus sistem isyarat bahasa Indonesia (Dictionary of Indonesian sign language). 1994.
Edisi pertama (first edition). Jakarta: Departemen Pendidikan dan Kebudayaan.
Riekehof, Lottie L. 1985. The joy of signing. Springfield, MO: Gospel Publishing House.
Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and nonlin-
earity in American Sign Language. Dordrecht: Foris.
Savir, Hava. 1992. Gateway to Israeli Sign Language. Tel Aviv: The Association of the
Deaf in Israel.
Shroyer, Edgar H. and Susan P. Shroyer. 1984. Signs across America. Washington, DC:
Gallaudet College Press.
Shuman, Malcolm K. 1980. The sound of silence in Nohya: A preliminary account of
sign language use by the deaf in a Maya community in Yucatan, Mexico. Language
Sciences 2:144–173.
Shuman, Malcolm K. and Mary Margaret Cherry-Shuman. 1981. A brief annotated sign
list of Yucatec Maya sign language. Language Sciences 3:124–185.
Son, Won-Jae. 1988. Su wha eui kil jap i (Korean Sign Language for the guide). Seoul:
Jeon-Yong Choi.
Sternberg, Martin L. A. 1981. American Sign Language: A comprehensive dictionary.
New York: Harper and Row.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965, reprinted 1976.
A dictionary of American Sign Language on linguistic principles. Silver Spring,
MD: Linstok Press.
4 Psycholinguistic investigations of phonological structure in ASL
David P. Corina and Ursula C. Hildebrandt
4.1 Introduction
Linguistic categories (e.g. segment, syllable, etc.) have long enabled cogent
descriptions of the systematic patterns apparent in spoken languages. Begin-
ning with the seminal work of William Stokoe (1960; 1965), research on the
structure of American Sign Language (ASL) has demonstrated that linguistic
categories are useful in capturing extant patterns found in a signed language. For
example, recognition of a syllable unit permits accounts of morphophonologi-
cal processes and places constraints on sign forms (Brentari 1990; Perlmutter
1993; Sandler 1993; Corina 1996). Acknowledgment of Movement and Loca-
tion segments permits descriptions of infixation processes (Liddell and Johnson
1985; Sandler 1986). Feature hierarchies provide accounts of assimilations that
are observed in the language and also help to explain those that do not occur
(Corina and Sandler 1993). These investigations of linguistic structure have led
to a better understanding of both the similarities and differences between signed
and spoken language.
Psycholinguists have long sought to understand whether the linguistic cat-
egories that are useful for describing patterns in languages are evident in the
perception and production of a language. To the extent that behavioral reflexes
of these theoretical constructs can be quantified, the constructs are deemed to have
‘psychological reality’.1 Psycholinguistic research has been successful in es-
tablishing empirical relationships between a subject’s behavior and linguistic
categories using reaction time and electrophysiological measures.
This chapter describes efforts to use psycholinguistic paradigms to explore
the psychological reality of form-based representations in ASL. Three online
reaction time experiments – Lexical-Decision, Phoneme Monitoring, and Sign–
Picture Naming – are adaptations of well-known spoken language psycholin-
guistic paradigms. A fourth off-line experiment, developed in our laboratory,
uses a novel display technique to explore form-based similarity judgments of
1 While psychological studies provide external evidence for the usefulness of these theoretical
constructs, the lack of a “psychological reality” does not undermine the importance of these
constructs in the description of linguistic processes.
signs. This chapter discusses the results of these experiments, which, in large
part, fail to establish reliable form-based effects of ASL phonology during lex-
ical access. This surprising finding may reflect how differences in the modality
of expression impact lexical representations of signed and spoken languages.
In addition, relevant methodological factors2 and concerns are discussed.
4.2.1 Method
In this sign lexical decision paradigm a subject viewed two successive signs.
The first of the pair was always a true ASL sign; the second was either a true
ASL sign or a phonologically possible nonsign.
2 Although a full accounting of the specific details behind each experiment is beyond the scope of
this chapter, it should be noted that rigorous experimental methodologies were used and necessary
controls for order effects were implemented.
Figure 4.1 Reaction times (msec) for the two versions of the experiment: (a) 100 msec ISI; (b) 500 msec ISI
4.2.2 Results
The results shown in Figure 4.1 illustrate the reaction times for the two versions
of the experiment (i.e. ISI 100 msec and 500 msec). There was an expected and
highly significant difference between sign and nonsign lexical decisions. It took
subjects longer to reject nonsigns than to accept true signs. Most surprising,
however, are the small, statistically weak effects of related vs. unrelated signs.
Specifically, in Version 1 (i.e. 100 msec ISI), lexical decisions were slower in
the related context relative to the unrelated context; however, these inhibitory
differences only approach statistical significance, which is traditionally set at
p < .05 (Movement Related X = 746, Movement Unrelated X = 735, p = .064;
Location Related X = 733, Location Unrelated X = 721, p = .088). The
500 msec ISI condition also demonstrates a lack of effects attributable to form-
based relations (Movement Related X = 680, Movement Unrelated X = 674,
p = .62; Location Related X = 669, Location Unrelated X = 667, p = .76). Taken
together, the results of the present study reveal limited influence of shared
formational overlap on ASL lexical decisions. Specifically, trends toward early
and temporally limited inhibition were noted; however, these effects were not
statistically significant.4
3 Nonsign forms also shared phonological relations with the prime. However, the discussion of
these data is beyond the scope of this chapter.
4 The categories of movement types examined in the present experiment included both “path”
movements, and examples of “secondary movements.” Limiting the analysis to just path
movements did not significantly change the pattern of results.
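The form of the statistical comparison reported above can be made concrete with a small sketch. The Python fragment below is not the authors' analysis code; the per-subject means are invented placeholders that merely echo the reported pattern (related primes slightly slower than unrelated ones), and a within-subject paired t-test is assumed.

    import numpy as np
    from scipy import stats

    # Hypothetical per-subject mean lexical-decision times (msec),
    # 100 msec ISI, movement condition. All values are invented.
    related = np.array([752, 741, 748, 739, 756, 744, 751, 738])
    unrelated = np.array([738, 733, 741, 730, 747, 736, 742, 729])

    # Paired (within-subject) t-test: each subject contributes both conditions.
    t, p = stats.ttest_rel(related, unrelated)
    print(f"inhibition = {(related - unrelated).mean():.1f} msec, "
          f"t({len(related) - 1}) = {t:.2f}, p = {p:.3f}")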
4.2.3 Discussion
When deaf subjects made lexical decisions to sign pairs that shared a location
or a movement, we observed weak inhibition effects when stimuli were sepa-
rated by 100 msec, but these effects completely disappeared when the stimulus
pairs were separated by 500 msec. These findings stand in contrast to an earlier
study reported by Corina and Emmorey (1993), in which significant form-based
effects were observed. Specifically, signs that shared movement showed signif-
icant facilitatory priming, and signs that shared a common location exhibited
significant inhibition. How can we reconcile these differences between studies?
Two important issues are noteworthy: methodological factors and the nature of
form-based relations.
In the Corina and Emmorey (1993) study, only 10 pairs in each parameter
category (handshape, movement, and location) were tested. In contrast, the
present study examined a much larger pool of contrasts (39 in each condition).
In addition, the mean reaction time in the earlier study to related and unrelated
signs was 1033 msec, while reaction times in the present experiment averaged
704 msec. The small number of stimulus pairs in the Corina and Emmorey
study may have encouraged more strategy-based decisions. Indeed, the relatively
long reaction times are consistent with a mediated or controlled processing
strategy. In contrast, the present study may reflect the effects of automatic
priming; that is, effects of lexical, rather than post-lexical, access.
Recent spoken language studies have sought to clarify the role of shared
phonetic-featural level information vs. segment level information. Experiments
utilizing stimuli that were phonetically confusable by virtue of featural over-
lap (i.e. bone-dung) have reported temporally limited inhibition (Goldinger
et al. 1993). In contrast, when stimuli share segmental overlap (i.e. bone-
bang), facilitation may be more apparent because these stimuli allow subjects
to adopt prediction strategies characteristic of controlled rather than automatic
processing. In the present ASL experiment, form-based overlap was limited to
a single parameter, either movement or location. If we believe these stimuli
are more analogous to phonetically confusable spoken stimuli, then we might
expect to observe temporally limited inhibitory effects similar to those that
have been reported for spoken languages. The existence of (albeit weak)
inhibitory effects at the 100 msec ISI is consistent with this hypothesis.
Finally, it should be noted that several models of word recognition suggest
that as activation of a specific lexical entry grows, so does inhibition of compet-
ing entries (Elman and McClelland 1988). These mechanisms of inhibition may
result in a situation where forms that are closely related to a target are inhibited
relative to an unrelated entry. The work of Goldinger et al. (1993) and Lupker
and Colombo (1994) has appealed to these spreading activation and inhibition
models to support patterns of inhibition for phonetic-featural overlap. It would
not be surprising if similar mechanisms were at work in the case of ASL recog-
nition. Thus, it is possible that the patterns observed for form-based priming in
sign language are not so different from those of spoken language processing.
Further work is required to establish the reliability of these findings. Future
studies will manipulate the degree of shared overlap (for example, using pairs
that share both location and movement) in order to examine form-based effects
in ASL.
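Constructing such stimulus sets amounts to selecting sign pairs by the parameters they share. The sketch below illustrates one way this selection could be coded; the sign entries and the flat parameter coding are simplifications introduced here for illustration (CANDY, APPLE, and ONION are classic ASL near-minimal pairs), not a published lexical database.

    # Candidate signs coded by the three major parameters (simplified).
    SIGNS = [
        {"gloss": "CANDY", "handshape": "1", "location": "cheek", "movement": "twist"},
        {"gloss": "APPLE", "handshape": "X", "location": "cheek", "movement": "twist"},
        {"gloss": "ONION", "handshape": "X", "location": "eye", "movement": "twist"},
    ]

    def pairs_sharing(signs, shared):
        """Yield pairs that match on the given parameters and differ elsewhere."""
        params = ("handshape", "location", "movement")
        for i, a in enumerate(signs):
            for b in signs[i + 1:]:
                if all(a[p] == b[p] for p in shared) and \
                   all(a[p] != b[p] for p in params if p not in shared):
                    yield a["gloss"], b["gloss"]

    print(list(pairs_sharing(SIGNS, {"location", "movement"})))
    # [('CANDY', 'APPLE')]: a pair sharing both location and movement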
4.3.1 Method
A list of signs was presented on videotape at a rate of one sign every two
seconds. The subjects were instructed to press a response key when they had
detected the target handshape. An individual subject monitored four handshapes
5 Some reviewers have questioned the reasonableness of this hypothesis. The hypothesis is moti-
vated, in part, by the observation that human visual systems show specialization for perception
of movement and specialization related to object processing. Hence, we ask, could movement
properties of signs be processed differently from static (i.e. object) properties? I have presented
evidence that the inventory of contrastive handshape changes observed within signs is a subset
of the handshape changes that occur between signs. A possible explanation for this observation
is that human linguistic systems are less readily able to rectify fine differences of handshape in a
sign with a dynamic handshape change. However, given sufficient time (as occurs between signs)
the acuity tolerances are more relaxed, permitting a wider range of contrastive forms (Corina
1992; 1993). These observations motivated the investigation of these two classes of handshape
in ASL.
Psycholinguistic investigations of phonological structure 95
Table 4.1 Instruction to subjects: “Press the button when you see
a ‘1’ handshape”
ABLE S no response
AGREE 1 yes! static handshape
PUZZLE 1→X yes! handshape change, first
handshape is the target
FIND 5→F no response
ASK S→1 yes! handshape change, second
handshape is the target
VACATION 5 no response
drawn from a total of six different handshapes. This set included three marked
handshapes (X, F, V) and three unmarked handshapes (1, S, 5) (after Battison
1978). Each subject monitored for two marked and two unmarked handshapes.
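The response contingencies in Table 4.1 reduce to a simple rule: respond whenever the monitored handshape occurs anywhere in the sign's handshape sequence. The sketch below encodes the table directly; it is illustrative only, not the experimental control software.

    def target_present(handshapes, target):
        """handshapes: a sign's handshape sequence, e.g. ["S", "1"] for ASK."""
        return target in handshapes

    # Handshape codings copied from Table 4.1.
    stimuli = {
        "ABLE": ["S"], "AGREE": ["1"], "PUZZLE": ["1", "X"],
        "FIND": ["5", "F"], "ASK": ["S", "1"], "VACATION": ["5"],
    }

    for sign, shapes in stimuli.items():
        print(sign, "-> respond" if target_present(shapes, "1") else "-> no response")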
Prior to testing all subjects were informed that the target might occur as a
part of a handshape change or not, and were explicitly shown examples of these
contrasts. Table 4.1 shows a representative example of the stimulus conditions
and the intended subject response. Two conditions were included in the exper-
iment, “real-time” and “video-animated” signing. In the latter, the stimuli are
constructed by capturing the first “hold” segment of a sign and freezing this
image for 16 frames and then capturing the final hold segment of a sign and
freezing this image for 16 frames. When viewed in sequence, one observes an
animated sign form in which the actual path of movement must be inferred. This
manipulation provides a control condition for examining the temporal properties of sign
recognition. The order of the target handshapes and the order of conditions were
counterbalanced across subjects. Due to the temporal qualities of handshape
changes, the crucial comparison is between the first handshape of a contouring
form and a static handshape that is stable throughout the sign.
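The construction of the video-animated control stimuli has a straightforward frame-level description. The sketch below is a hypothetical reimplementation of the procedure described above, not the editing setup actually used; it assumes the sign's frame sequence has already been extracted from video.

    def make_video_animated(frames, freeze=16):
        """Freeze the initial and final hold frames for `freeze` frames each.

        The path movement between the holds is dropped, so a viewer of the
        resulting sequence must infer it, as described in the text.
        """
        if len(frames) < 2:
            raise ValueError("need at least an initial and a final hold frame")
        return [frames[0]] * freeze + [frames[-1]] * freeze

    # Example with symbolic frames:
    print(make_video_animated(["hold1", "move1", "move2", "hold2"], freeze=3))
    # ['hold1', 'hold1', 'hold1', 'hold2', 'hold2', 'hold2']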
4.3.2 Results
The graphs in Figure 4.2 illustrate detection times for identifying handshapes
in ASL signs from 32 signers (22 native signers and 10 late learners of ASL).
The left half of the graph shows results for moving signs and the right half plots
results from the “video-animated” control conditions. HS-1 and HS-2 refer to
the first and second shape of “contour” handshape signs (for example, in the
sign ASK, HS-1 is an “S” handshape and HS-2 is a “G” (i.e. “1”) handshape). The third
category is composed of signs without handshape changes (i.e. no handshape
change or NHSC).
Figure 4.2 Reaction times for detection of handshapes in ASL signs for (a) moving signs and (b) static (video-animated) signs
Several patterns emerge from these data. Importantly, the detection of the
first element of a handshape (HS-1) that is changing during the course of a
sign is not significantly different from the detection of that same handshape
when it is a member of a sign that involves NHSC (p > .05). These preliminary
data from this detection paradigm suggest no behavioral processing differences
associated with handshapes as a function of their syllabic composition (i.e.
whether they are constituents of an M or a P segment). Moreover, statistical
analysis reveals no significant group differences (p > .05). A second finding
emerges from consideration of the video-animated condition. Here, the overall
reaction time to detect handshapes is much longer for these signs than for the
real-time signs. In addition, late learners appear to have more difficulty detecting
the second handshape of these signs. This may indicate that the late learners
have a harder time suppressing the initial handshape (thus resulting in slower
second handshape detection) or, alternatively, that these subjects are perturbed
when a sign lacks movement.
4.3.3 Discussion
In Experiment 2, a phoneme monitoring experiment was used to examine the
recognition of handshapes under two conditions: in one condition a hand-
shape appeared in a sign in which the handshape configuration remained static
throughout the course of a sign’s articulation, while in the other the handshape
was contoured. The results revealed that a target handshape could be equally
well detected in a sign without a handshape change as in a sign with a handshape
change. Under some analyses, for a sign in which the only dynamic component
is the handshape change, the handshape change constitutes the most sonorant
element of the sign form (see Corina 1993). Viewed from this perspective it
would then appear that the syllabic environment of the handshape does not
noticeably alter its perceptibility.
In spoken languages, phoneme monitoring times are affected by the major
class status of the target phoneme, with consonants being detected faster than
semi-vowels, which are detected faster than vowels (Savin and Bever 1970;
van Ooijen et al. 1992; Cutler et al. 1996). However, a central concern in
the phoneme monitoring literature is whether behavioral patterns observed in
the studies of consonants and vowels are attributable to the composition of the
signal (i.e. vowels are steady state, consonants are time-varying) or rather reflect
their function in the language (i.e. vowels constitute the nucleus of a syllable,
consonants are the margins of syllables). These two factors have been inextricably
confounded in the spoken language domain. Attempts have been made (albeit
unsuccessfully) to control for this confound in spoken language (see van Ooijen
et al. 1992).
If we accept the premise that handshape properties hold a dual status, that is,
that handshapes may be a constituent of a movement (M) or position (P) segment
(or, alternatively, a constituent of the syllable nucleus or not), then ASL provides
an important control condition. Following this line of argumentation, the present
data suggest that it is not the syllabic status of a segment that determines the
differences in reaction time, but rather the physical differences in the signal.
Thus, the physical difference between a handshape that is a member of a
handshape change and its nonchanging variant does not have significant
consequences for detection times as measured in the present experiment.
A methodological point concerns whether the phoneme monitoring task in
the context of naturalistic signing speeds has led to ceiling effects, such that true
processing differences have not been revealed. Further studies with speeded or
perceptually degraded stimuli could be used to address this concern.
Finally several theoretical points must be raised. It must be acknowledged
that the homologies between segments, phonemes, and features in spoken and
signed languages are quite loose. Thus, some may argue that these data have
little bearing on the signal vs. constituent controversy in spoken language. In
addition, as noted, the assumed status distinction between a static handshape and
a handshape change within a sign may be, in some fundamental way, incorrect.
A related concern is that the featural elements of the handshapes themselves
are not diagnostic of sonority, but of some more abstract property of the sign
(for example, the integration of articulatory information across time). Thus,
monitoring for specific handshape posture may not be a valid test of the role of
syllable information in ASL.
6 The term “Interfering Stimulus” (IS) is the accepted nomenclature in this research area. Note,
however, that the effects of interference may be either inhibitory or facilitatory.
unrelated IS. In addition, in these paradigms one may systematically vary the
temporal relationships, or stimulus onset asynchronies (SOAs), between when
the picture appears and when the IS is delivered. The differential effects of the
IS are further illuminated by varying these temporal properties.
Several findings have shown that in the early-onset conditions (–150 msec
SOA), picture naming latencies are greater in the presence of semantic IS com-
pared to unrelated IS, whereas phonological IS have little effect (the phonolog-
ical stimuli shared word onsets). It is suggested that the selective interference
reflects an early stage of semantic activation. In the post-onset IS condition
(+150 msec SOA), no semantic interference is evident, but a significant facil-
itatory effect for phonological stimuli is observed. These results support the
model of word production in which a semantic stage is followed by a phono-
logical or word-form stage of activation.
Figure 4.3 shows results obtained from our laboratory on replication and
extension of Schriefers et al.’s (1990) paradigm. This study included over 100
subjects at 5 different SOAs (–200, –100, 0, +100, +200). Figure 4.3 illus-
trates the difference in magnitude of reaction times for the unrelated condition
compared to the interfering stimulus conditions (thus, a negative value reflects
a slowing of reaction time) for aurally presented words under a variety of
conditions. As shown in the figure, at early points in time semantically re-
lated ISs produce significant interference. This semantic interference is greatly
diminished by the 0 msec SOA. At roughly –100 msec SOA we observe ro-
bust phonological facilitation, which was absent at the –200 msec SOA. This
facilitation begins to diminish at +200 msec SOA. Also plotted are the results
of phonologically rhyming stimuli. Here we observe an early facilitatory effect
that rapidly diminishes. These studies show that there is an early point in time
during which semantic information is being encoded, followed by a time in
which phonological information is being encoded (in particular, word onset
information). This experimental paradigm permits us to tap selectively into these
different phases of speech production. The inclusion of the rhyme condition (a
condition not reported by Schriefers et al. 1990) provides a useful data point
for the assessment of phonological effects in ASL, where the concept of shared
onset vs. rhyme is not transparent.
Figure 4.3 Difference in reaction time between the unrelated condition and each interfering stimulus condition (semantic, phonological onset, phonological rhyme) across SOAs from –300 to +300 msec; negative values reflect a slowing of reaction time
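The difference scores plotted in Figure 4.3 are simple to compute. The sketch below uses invented reaction times that merely echo the qualitative pattern described above (early semantic interference, later phonological facilitation); it is not the authors' analysis code.

    # Effect = unrelated RT minus interfering-stimulus RT, per SOA:
    # negative values indicate slowing (interference), positive values
    # indicate facilitation. All RTs (msec) below are invented.
    soas = [-200, -100, 0, 100, 200]
    rt_unrelated = {-200: 810, -100: 805, 0: 800, 100: 798, 200: 796}
    rt_semantic = {-200: 845, -100: 830, 0: 802, 100: 799, 200: 797}
    rt_phon_onset = {-200: 812, -100: 770, 0: 768, 100: 775, 200: 790}

    for soa in soas:
        sem = rt_unrelated[soa] - rt_semantic[soa]
        pho = rt_unrelated[soa] - rt_phon_onset[soa]
        print(f"SOA {soa:+4d} msec: semantic {sem:+4d}, phonological onset {pho:+4d}")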
4.4.1 Method
In the sign language adaptation of this experiment, a subject is shown a picture
and is asked to sign the name of the picture as fast as possible. A reaction time
device developed in our laboratory stops a clock when the subject raises his or
her hands off the table to sign the target item. The interference is achieved
by superimposing the image of a signer on the target picture through the
use of a transparent dissolve, a common technique in many digital
video-effects kits. Impressionistically, the effect results in a “semi-transparent”
signer ghosted over the top of the object to be named. In the present experiment
we examined the roles of three interference conditions: semantically related
signs (e.g. picture: cat; sign: COW); phonologically related signs (e.g. picture:
cat; sign: INDIAN); or an unrelated sign (e.g. picture: cat; sign: BALL). In
the majority of cases the phonological overlap consisted of at least two shared
parameters; however, in this initial study these parameter combinations were
not systematically varied.
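The "transparent dissolve" amounts to a per-pixel alpha blend of the signer's video frame onto the target picture. The following is an illustrative reconstruction, not the video-effects kit actually used; the array shapes and the alpha value of 0.5 are assumptions.

    import numpy as np

    def ghost_overlay(picture, signer_frame, alpha=0.5):
        """Blend a signer frame over a picture (both H x W x 3 uint8 arrays)."""
        blended = (1.0 - alpha) * picture.astype(np.float32) \
                  + alpha * signer_frame.astype(np.float32)
        return np.clip(blended, 0, 255).astype(np.uint8)

    # Example with random stand-in images of matching size:
    picture = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
    signer = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
    print(ghost_overlay(picture, signer).shape)  # (120, 160, 3)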
We examined data from 14 native deaf signers, and from 25 hearing subjects
who responded to a third experiment conducted visually in English. In the English
experiment the target was a picture, while the interfering stimulus was a written
word superimposed on the target picture. This was done to isolate a within-
modality interference effect rather than the crossmodal effects from the audi-
tory experiments. Note that the current ASL experiment was conducted only
for a simultaneous (0 msec SOA) interference condition.
4.4.2 Results
Shown in Figure 4.4 is a comparison of a sign-picture condition with a written-
word condition. At this SOA we observe robust facilitation for written words
that share phonemic overlap with the target. In addition, we observe semantic
interference (whose effects are likely abating at this SOA; see above). ASL sign-
ers show a different pattern; little or no phonological facilitation was observed.
Figure 4.4 Difference in reaction time between the unrelated and interfering stimulus conditions at the 0 msec SOA: English phonology shows significant facilitation (p < .01), ASL phonology shows no significant effect (n.s.), and semantic interference is significant for both ASL (p < .05) and English (p < .01)
This stands in contrast to the robust and consistent findings reported in spoken
and written language literature. However, as with written English, we do ob-
serve effects of semantic interference at this SOA. These results are intriguing
and suggest a difference between a semantic and phonological stage of pro-
cessing in sign recognition. However, at this SOA, while both phonological and
semantic effects are observed in the English experiment, we find significant
evidence only for semantic effects in ASL.
4.4.3 Discussion
The third experiment used a measure of sign production to examine the effects
of semantic and phonological interference during picture naming in sign lan-
guage. These results showed significant effects for semantic interference but
no effects for phonological interference. These results stand in contrast to sim-
ilar experiments conducted with both written and spoken English, which have
shown opposing effects for semantic and phonological interference.
One of the strengths of the word picture paradigms is the incorporation of
a temporal dimension in the experimental design. By varying the temporal
relationship between the onset of the picture and the onset of the interfering
stimuli, Levelt, Schriefers, Vorberg, Meyer, and colleagues (1991) have been
able to chart out differential effects of semantic and phonological interference.
The choice of the duration of these SOAs has been derived in part from es-
timates of spoken word recognition. The temporal difference in the duration
of words and signs – coupled with the differences in recognition times for
words vs. signs (see Emmorey and Corina 1990) – make it difficult to fully
predict what the optimal time windows will be for ASL. The present sign ex-
periment used a 0 msec SOA. For English (and Dutch) this corresponds to a
time when phonological effects are beginning to show a maximal impact and
semantic effects are beginning to wane. In the ASL experiment, we observed
semantic effects but no phonological effects. The data may reflect that the tem-
poral window in which to observe these effects in signed language is shifted
in time. Ongoing experiments in our laboratory are currently exploring this
possibility.
much like artistic manipulation of the melodic line in spoken poetry (Rose
1992; Blondel and Miller 1998). Often, the locations and movements of signs
are manipulated to create cohesiveness and continuity between signs. Signs also
may overlap, or be shortened or lengthened, in order to create a rhythmic pattern.
These devices are combined to create strong visual imagery unique to visual–
gestural languages (Cohn 1986). Examples of language games in ASL include
“ABC stories” and “proper name stories.” In these games a story is told with the
constraint that each successive sign in the story must use a handshape drawn
from the manual alphabet in a sequence that follows the alphabet or spells
a proper name. Cheerleader fight songs also evidence repetition of rhythmic
movement patterns and handshape alliteration.
Taken together, these examples demonstrate that sign forms exhibit com-
ponent structures that are accessible to independent manipulation (e.g. hand-
shapes) and provide hints of structural relatedness (e.g. similar movement
paths). However, it should be noted that there is no generally accepted notion
of a naturally occurring structural grouping of sign properties that constitutes
an identifiable unit in the same sense that a “rhyme” does for users of spoken
languages. While several researchers have used the term “rhyme” to describe
phonemic redundancy in signs (Poizner, Klima, and Bellugi 1987; Valli 1990)
it remains to be determined whether a specific combination of structural prop-
erties serves this function.
The current exploratory studies gather judgments of sign similarity as rated
by native users of ASL in order to provide insight into the relationship between
theoretical models of sign structure and psychological judgments of similar-
ity. Two experiments sought to uncover psychological judgments of perceptual
similarity of nonmeaningful but phonologically possible signs. We were inter-
ested in what combination of parameters observers would categorize as being
most similar. These experiments tapped into native signer intuitions as to which
parameter, or combination of shared parameters, makes two signs seem particu-
larly similar. These paradigms provided an opportunity to tap into phonological
awareness by investigating phonological similarity in ASL. The similarity judg-
ments of hearing subjects unfamiliar with ASL provided an important control.
A few studies have examined whether similarity ratings of specific compo-
nents of signs – especially, handshape (Stungis 1981), location (Poizner and
Lane 1978), and movement (Poizner 1983) – differ between signers and non-
signers. The ratings of handshape and location revealed very high correlations
between signers and hearing nonsigners (r = .88 and r = .82, respectively). These
data suggest that linguistic experience has little effect on these perceptual sim-
ilarity ratings. In contrast, Poizner’s (1983) study examined ratings of signs
filmed as point light displays and found some deaf and hearing differences for
qualities of movement. Post hoc analysis suggested that deaf signers’ patterns of
dimensional salience mirrored those dimensions that are linguistically relevant in ASL.
4.5.1 Method
The stimuli for these experiments were created by videotaping a deaf male
signer signing a series of ASL nonsigns. Nonsigns are pronounceable, phono-
logically possible signs that are devoid of any meaning. In the first of two
experiments, each trial had a target nonsign in a circular field in the middle of
the screen. Surrounding the central target were alternative nonsign forms, one
in each corner of the screen. Three of these nonsigns shared two parameters
with the target nonsign and differed on one parameter. One shared movement
and location (M + L) and differed in handshape, one shared movement and
handshape (M + H) and differed in location, and one shared location and hand-
shape (L + H) and differed in movement. The remaining flanking nonsign was
phonologically unrelated to the target. All signs (surrounding signs and target
sign) were of equal length and temporally synchronized.
In the second experiment, the target sign shared only one parameter with
three surrounding signs. For both experiments, these synchronized displays
were repeated concurrently five times (for a total of about 15 seconds) with
5 seconds of a black screen between test screens. The repetitions permitted
ample time for all participants to carefully inspect all flanking nonsigns and to
decide which was similar to the target.
The instructions for these experiments were purposely left rather open ended.
The subjects were simply asked to look at the target sign and then decide which
of the flanking signs was “most similar.” We stressed the importance of using
“intuition” in making the decisions, but purposely did not specify a basis for
the similarity judgment. Rather, we were interested in examining the patterns
of natural grouping that might arise during these tasks.
Three subject groups were run on these experiments (native deaf signers, deaf
late learners of ASL, and hearing sign-naive subjects). A full report of these
data may be found in Hildebrandt and Corina (2002). The present discussion
is limited to a comparison of native deaf signers and hearing subjects. Twenty-
one native signers and 42 hearing nonsigners participated in the two-shared-parameter
study, and 29 native signers and 51 hearing nonsigners participated
in the single-shared-parameter experiment.
Figure 4.5 Percentage of “most similar” choices for each flanker type (location + movement, handshape + movement, location + handshape, random) by native signers and hearing nonsigners
4.5.2 Results
Results from the first experiment (i.e. two-shared parameter condition) are
shown in Figure 4.5. Both deaf and hearing subjects chose signs that shared
movement and location as the most similar in relation to the other combinations
of parameters (M = 45.79%, SD = 15.33%). Examination of the remaining con-
trasts, however, reveals a systematic difference; while the hearing group chose
M + H more often than L + H (t(123) = 3.685, two-tailed p < .001), the native
group chose those two combinations of shared parameters equally often
(t(60) = .455, two-tailed p = .651).
In the second experiment (i.e. single parameter condition) the native signers
and hearing groups again made nearly identical patterns of similarity judgments.
Both hearing and deaf subjects rated signs that share a movement, or signs that
share a location with the target sign as highly similar (all ps < .01). Although
Figure 4.6 suggests that movement was more highly valued by the native signers,
this difference did not reach statistical significance (p = .675).
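The group comparisons reported above have the form of a two-tailed t-test on choice rates. The sketch below illustrates that form with invented per-subject percentages; the original analysis may have been computed over items or trials rather than subjects.

    import numpy as np
    from scipy import stats

    # Invented per-subject percentages of "most similar" choices.
    chose_mh = np.array([24.0, 22.5, 26.0, 21.0, 23.5, 25.0])  # movement + handshape
    chose_lh = np.array([19.0, 20.5, 21.0, 18.5, 20.0, 19.5])  # location + handshape

    t, p = stats.ttest_rel(chose_mh, chose_lh)  # paired, two-tailed by default
    print(f"t({len(chose_mh) - 1}) = {t:.3f}, two-tailed p = {p:.4f}")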
4.5.3 Discussion
Several important findings emerge from these data. First, and most striking, is
the overall similarity of native deaf signers and hearing subjects on these judg-
ments. Both deaf and sign-naive subjects chose signs that share movement
and location as the most similar, indicating that this combination of parameters
enables a robust perceptual grouping. The salience of movement and location is
also highlighted in the second experiment, where these combinations of param-
eters once again served as the basis of preferred similarity judgment. It is only
when we consider the parameters of M + H vs. L + H that we find group differ-
ences, which we assume here to be an influence of linguistic knowledge of ASL.
Figure 4.6 Percentage of “most similar” choices for flankers sharing a single parameter (movement, location, or handshape) by native signers and hearing nonsigners
Several theories of syllable structure in ASL have proposed that the combi-
nation of movement and location serves as the skeletal structure from which
syllables are built, and that movement is the most sonorant element of the
sign syllable (see, for example, Sandler 1993). In these models, handshape
is represented on a separate linguistic structural tier in order to account for
such phenomena as the spreading of handshape across location and movement
(Sandler 1986). Languages commonly capitalize on robust perceptual distinc-
tions as a basis for linguistic distinctions. The cross-group similarities observed
in the present study reinforce this notion. However, language knowledge does
appear to play a role in these judgments; the lack of a clear preference between
M + H vs. L + H indicates that each of these combinations is an equally poor
grouping. This may be related to the fact that groupings of M + H or L + H
are not coherent syllabic groupings. Consider a spoken language analogue, in
which subjects make judgments of similarity:
Target: dat Flankers: zat, dut, dal
Assume the pair judged most similar is dat–zat. In contrast we find equal non-
preferences for the pairs dat–dut (shared initial onset and final consonant), and
dat–dal (shared initial consonant and vowel). Thus, we conjecture that in the
pair dat–zat the hierarchical structure of the syllable rhyme provides a basis for
a similarity judgment, while the nonpreferred groupings fail to benefit from
coherent syllabic constituency.
The hearing subjects’ preference for M + H over L + H combinations is likely
to reflect the perceptual salience of the movement for these sign-naive subjects,
for whom the syllabic violations are not an issue.
Acknowledgments
This work was supported by an NIDCD grant (R29-DC03099) awarded to David
Corina. We thank the deaf volunteers who participated in this study. We ac-
knowledge the help of Nat Wilson, Deba Ackerman, and Julia High. We thank
the reviewers for helpful comments.
4.7 References
Ades, Anthony E. 1977. Vowels, consonants, speech, and nonspeech. Psychological
Review 84:524–530.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Blondel, Marion and Christopher Miller. 1998. The relation between poetry and phonol-
ogy: Movement and rhythm in nursery rhymes in LSF. Paper presented at the 2nd
Intersign Workshop, Leiden, The Netherlands, December.
Bradley, Lynette L. and Peter E. Bryant. 1983. Categorizing sounds and learning to read:
A causal connection. Nature 301:419–421.
Brentari, Diane. 1990. Licensing in ASL handshape change. In Sign language re-
search: Theoretical issues, ed. Ceil Lucas. Washington, DC: Gallaudet University
Press.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Cohn, Jim. 1986. The new deaf poetics: Visible poetry. Sign Language Studies 52:263–
277.
Corina, David P. 1991. Towards an understanding of the syllable: Evidence from linguis-
tic, psychological, and connectionist investigations of syllable structure. Doctoral
dissertation, University of California, San Diego.
Corina, David P. 1992. Biological foundations of phonological feature systems: Evidence
from American Sign Language. Paper presented to the Linguistics Departmental
Colloquium, University of Chicago, IL.
Corina, David P. 1993. To branch or not to branch: Underspecification in ASL handshape
contours. In Phonetics and Phonology, Vol. 3: Current issues in ASL Phonology,
ed. Geoffrey R. Coulter, 63–95. San Diego, CA: Academic Press.
Corina, David P. 1996. ASL syllables and prosodic constraints. Lingua 98:73–102.
Corina, David P. 1998. Aphasia in users of signed languages. In Aphasia in atypical pop-
ulations, ed. Patrick Coppens, Yvan Lebrum, and Anna Baddo, 261–309. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Corina, David P. 2000. Some observations regarding paraphasia and American Sign
Language. In The signs of language revisited: An anthology to honor Ursula Bellugi
and Edward Klima, ed. Karen Emmorey and Harlan Lane, 493–507. Mahwah, NJ:
Lawrence Erlbaum Associates.
Corina, David P. and Karen Emmorey. 1993. Lexical priming in American Sign
Language. Paper presented at the Linguistic Society of America Conference,
Philadelphia, PA.
Corina, David P., Susan L. McBurney, Carl Dodrill, Kevin Hinshaw, Jim Brinkley,
and George Ojemann. 1999. Functional roles of Broca’s area and SMG: Evi-
dence from cortical stimulation mapping in a deaf signer. NeuroImage 10:570–
581.
Corina, David, and Wendy Sandler. 1993. On the nature of phonological structure in
sign language. Phonology 10:165–207.
Cutler, Anne, Brit van Ooijen, Dennis Norris, and Rosa Sánchez-Casas. 1996. Speeded
detection of vowels: A cross-linguistic study. Perception and Psychophysics
58:807–822.
Cutler, Anne, and Takashi Otake. 1998. Perception and suprasegmental structure in a
non-native dialect. Journal of Phonetics 27:229–253.
Elman, Jeffrey L. and James L. McClelland. 1988. Cognitive penetration of the
mechanisms of perception: Compensation for coarticulation of lexically restored
phonemes. Journal of Memory and Language 27:143–165.
Emmorey, Karen. 1987. Morphological structure and parsing in the lexicon. Doctoral
dissertation, University of California, Los Angeles.
Emmorey, Karen and David Corina 1990. Lexical recognition in sign language: Effects
of phonetic structure and morphology. Perceptual and Motor Skills 71:1227–1252.
Goldinger, Stephen D., Paul A. Luce, David B. Pisoni, and Joanne K. Marcario. 1993.
Form-based priming in spoken word recognition: The role of competition and bias.
Journal of Experimental Psychology: Learning, Memory, and Cognition 18:1211–
1238.
Hickok, Gregory, Ursula Bellugi, and Edward S. Klima. 1998. The neural organization
of language: Evidence from sign language aphasia. Trends in Cognitive Sciences
2:129–136.
Hildebrandt, Ursula C., and David P. Corina. 2002. Phonological similarity in American
Sign Language. Language and Cognitive Processes 17(6).
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Levelt, Willem J. M., Herbert Schriefers, Dirk Vorberg, and Antje S. Meyer. 1991. The
time course of lexical access in speech production: A study of picture naming.
Psychological Review 98:122–142.
Liberman, Alvin M. 1996. Speech: A special code. Cambridge, MA: MIT Press.
Liberman, Alvin M., Franklin S. Cooper, Donald P. Shankweiler, and Michael
Studdert-Kennedy. 1967. Perception of the speech code. Psychological Review
74:431–461.
Liddell, Scott K. and Robert E. Johnson. 1985. American Sign Language: The phono-
logical base. Manuscript, Gallaudet University, Washington DC.
Lundberg, Ingvar, Åke Olofsson, and Stig Wall. 1980. Reading and spelling skills in
the first school years predicted from phonemic awareness skills in kindergarten.
Scandinavian Journal of Psychology 21:159–173.
Lupker, Stephen J. and Lucia Colombo. 1994. Inhibitory effects in form priming: Evalu-
ating a phonological competition explanation. Journal of Experimental Psychology:
Human Perception and Performance 20:437–451.
Mattingly, Ignatius G. and Michael Studdert-Kennedy. 1991. Modularity and the motor
theory of speech perception. Hillsdale, NJ: Lawrence Erlbaum Associates.
Mehler, Jacques, Jean Y. Dommergues, Uli Frauenfelder, and Juan Segui. 1981. The
syllable’s role in speech segmentation. Journal of Verbal Learning and Verbal
Behavior 20:298–305.
Paulesu, Eraldo, Christopher D. Frith, and Richard S. J. Frackowiak. 1993. The neural
correlates of the verbal component of working memory. Nature 362:342–345.
Perlmutter, David M. 1993. Sonority and syllable structure in American Sign Language.
In Phonetics and Phonology, Vol. 3: Current issues in ASL, ed. Geoffrey R. Coulter,
227–261. San Diego, CA: Academic Press.
Petersen, Steven E., Peter T. Fox, Michael I. Posner, Mark A. Mintun, and Marcus E.
Raichle. 1989. Positron-emission tomographic studies of the processing of single
words. Journal of Cognitive Neuroscience 1:153–170.
Poeppel, David. 1996. A critical review of PET studies of phonological processing.
Brain and Language 55:317–351.
Poizner, Howard. 1983. Perception of movement in American Sign Language: Effects
of linguistic structure and linguistic experience. Perception and Psychophysics
3:215–231.
Poizner, Howard, Edward S. Klima, and Ursula Bellugi. 1987. What the hands reveal
about the brain. Cambridge, MA: MIT Press.
Poizner, Howard and Harlan Lane. 1978. Discrimination of location in American Sign
Language. In Understanding language through sign language research, ed. Patricia
Siple, 271–287. San Diego, CA: Academic Press.
Rayman, Janice and Eran Zaidel. 1991. Rhyming and the right hemisphere. Brain and
Language 40:89–105.
Repp, Bruno H. 1984. Closure duration and release burst amplitude cues to stop conso-
nant manner and place of articulation. Language and Speech 27:245–254.
Rose, Heidi. 1992. A semiotic analysis of artistic American Sign Language and perfor-
mance of poetry. Text and Performance Quarterly 12:146–159.
Sandler, Wendy. 1986. The spreading hand autosegment of ASL. Sign Language Studies
50:1–28.
Sandler, Wendy. 1993. Sonority cycle in American Sign Language. Phonology 10:243–
279.
Savin, Harris B. and Thomas G. Bever. 1970. The nonperceptual reality of the phoneme. Journal
of Verbal Learning and Verbal Behavior 9:295–302.
Schriefers, Herbert, Antje S. Meyer, and Willem J. Levelt. 1990. Exploring the time
course of lexical access in language production: Picture/word interference studies.
Journal of Memory and Language 29:86–102.
Sergent, Justine, Eric Zuck, Michel Levesque, and Brennan MacDonald. 1992. Positron-
emission tomography study of letter and object processing: Empirical findings and
methodological considerations. Cerebral Cortex 2:68–80.
Slowiaczek, Louisa M., and Marybeth Hamburger. 1992. Prelexical facilitation and lexi-
cal interference in auditory word recognition. Journal of Experimental Psychology:
Learning, Memory, and Cognition 18:1239–1250.
Slowiaczek, Louisa M., Howard. C. Nusbaum, and David B. Pisoni. 1987. Phonolog-
ical priming in auditory word recognition. Journal of Experimental Psychology:
Learning, Memory, and Cognition 13:64–75.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965. A Dictionary
of American Sign Language on linguistic principles. Washington, DC: Gallaudet
University Press.
Stungis, Jim. 1981. Identification and discrimination of handshape in American Sign
Language. Perception and Psychophysics 29:261–276.
Valli, Clayton. 1990. The nature of a line in ASL poetry. In SLR ’87: Papers from the 4th
International Symposium on Sign Language Research, Lappeenranta, Finland, July
1987 (International Studies on Sign Language and Communication of the Deaf; 10),
ed. William H. Edmondson and Fred Karlsson, 171–182. Hamburg: Signum.
van Ooijen, Brit, Anne Cutler, and Dennis Norris. 1992. Detection of vowels and con-
sonants with minimal acoustic variation. Speech Communication 11:101–108.
Zatorre, Robert J., Ernst Meyer, Albert Gjedde, and Alan C. Evans. 1996. PET studies
of phonetic processing of speech: Review, replication, and reanalysis. Cerebral
Cortex 6:21–30.
5 Modality-dependent aspects of sign language production: Evidence from slips of the hands and their repairs in German Sign Language
Annette Hohenberger, Daniela Happ, and Helen Leuninger
5.1 Introduction
In the present study, we investigate both slips of the hand and slips of the tongue
in order to assess modality-dependent and modality-independent effects in lan-
guage production. As a broader framework, we adopt the paradigm of generative
grammar, as it has been developed over the past 40 years (Chomsky 1965; 1995,
and related work of other generativists). Generative grammar focuses on both
universal and language-particular aspects of language. The universal charac-
teristics of language are known as Universal Grammar (UG). UG defines the
format of possible human languages and delimits the range of possible variation
between languages. We assume that languages are represented and processed
by one and the same language module (Fodor 1983), no matter what modal-
ity they use. UG is neutral with regard to the modality in which a particular
language is processed (Crain and Lillo-Martin 1999).
By adopting a psycholinguistic perspective, we ask how a speaker’s or
signer’s knowledge of language is put to use during the production of lan-
guage. So far, models of language production have been developed mainly
on the basis of spoken languages (Fromkin 1973; 1980; Garrett 1975; 1980;
Butterworth 1980; Dell and Reich 1981; Stemberger 1985; Dell 1986; MacKay
1987; Levelt 1989; Levelt, Roelofs, and Meyer 1999). However, even the set
of spoken languages investigated so far is restricted (with a clear focus on
English). Thus, Levelt et al. (1999:36) challenge researchers to consider a
greater variety of (spoken) languages in order to broaden the empirical basis
for valid theoretical inductions. Yet, Levelt and his colleagues do not go far
enough. A greater challenge is to include sign language data more frequently in
all language production research. Such data can provide the crucial evidence for
the assumed universality of the language processor and can inform researchers
what aspects of language production are modality dependent and what aspects
are not.
1 In Chomsky’s minimalist framework (1995), syntax has two interfaces: one phonetic-articulatory
(Phonetic Form, PF) and one logical-semantic (Logical Form, LF). Syntactic representations
have to meet wellformedness constraints on these two interfaces, otherwise the derivation fails.
LF is assumed to be modality neutral; PF, however, imposes different constraints on signed and
spoken languages. Therefore, modality differences should be expected with respect to the PF
interface.
2 In this sense, spoken German shares some aspects of nonconcatenativity with German Sign
Language. Of course, DGS displays a higher degree of nonconcatenativity due to the many
features that can be encoded simultaneously (spatial syntax, facial gestures, classifiers, etc.). In
spoken German, however, grammatical information can also be encoded simultaneously. Ablaut
(vowel gradation) is a case in point: vowel alternation within the stem /gXb/ yields various forms:
geben (‘to give,’ infinitive), gib (‘give,’ second person singular imperative), gab (‘gave,’ first and
third person singular past tense), die Gabe (‘the gift,’ noun), gäbe (‘would give,’ subjunctive mood).
Here, morphological information is realized by vowel alternation within the stem – a process
of infixation – and not by suffixation, the default mechanism of concatenation. A fortiori,
Semitic languages with their autosegmental morphology (McCarthy 1981) and tonal languages
(Odden 1995) also pattern with DGS. In syntax, sign languages also pattern with various spoken
languages with respect to particular parametric choices. Thus, Lillo-Martin (1986; 1991; see
also Crain and Lillo-Martin 1999) shows that ASL shares the Null Subject option with Italian
(and other Romance languages) and the availability of empty discourse topics with languages
such as Chinese.
extreme pole on the continuum of isolating vs. fusional morphology (see Sec-
tion 5.5.3.1).
Figure: Levelt’s (1989) model of language production. The Conceptualizer (message generation and monitoring, drawing on a discourse model, situation knowledge, an encyclopedia, etc.) passes a preverbal message to the Formulator, whose grammatical and phonological encoding access lemmas and lexical forms in the Lexicon; the resulting phonetic plan (internal speech) is executed by the Articulator as overt speech, while a speech-comprehension system returns parsed speech for monitoring.
The investigation of slips of the hand is still relatively young. Klima and
Bellugi (1979) and Newkirk, Klima, Pedersen, and Bellugi (1980) were the
first to present a small corpus of slips of the hand (spontaneous as well as
videotaped slips) in American Sign Language (ASL). Sandler and Whittemore
added a second small corpus of elicited slips of the hand (Whittemore 1987).
In Europe, as far as we know, our research on slips of the hand is the first.
Slips (of the tongue or of the hand) offer the rare opportunity to glimpse inside
the brain and to obtain a momentary access to an otherwise completely covert
process: language production is highly automatic and unconscious
(Levelt 1989). Slips open a window to the (linguistic) mind (Wiese 1987).
This is the reason for the continued interest of psycholinguists in slips. They
are nonpathological and involuntary deviations from an original plan which
can occur at any stage during language production. Slips are highly charac-
teristic of spontaneous language production. Although a slip is a negative incident,
an error, it reveals the normal process underlying language production. In
analyzing the error, we can find out what the production process normally looks
like.4
Figure 5.3a SEINE [Y-hand]; 5.3b ELTERN [Y-hand]; 5.3c correct: SEINE
[B-hand]
                                                       Affected entity
                                                   Phonology
Slip of the hand type      n      %   Word   sum  Handshape  Hand orientation  Move  Place  Other  h1+h2  Combination  Morpheme
Anticipation              44   21.7      9    32         16                 4     2      5      5                             3
Perseveration             45   22.1     12    31         11                 9     1      3      3      3            1         2
Harmony                   13    6.4           13         10                 1     2
Substitution               5    2.5      4                                                                                    1
  semantic                38   18.7     35                                                                                    3
  formal                   1    0.5      1
  semantic and formal      1    0.5      1
Blend                     32   15.7     30                                                                                    1
Fusion                    18    8.8     18
Exchange                   2    1.0      1                                                                                    1
Deletion                   4    2.0      2     2          1                 1
Total                    203           112    78                                                                             12
Total (as percentage)  100.0          55.2  38.4                                                                              6
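The percentage column of the table can be recomputed directly from the raw counts; rounding reproduces the published percentages to within 0.1. A short sketch:

    # Counts copied from the table above; total n = 203.
    slip_counts = {
        "anticipation": 44, "perseveration": 45, "harmony": 13,
        "substitution": 5, "semantic substitution": 38,
        "formal substitution": 1, "semantic and formal substitution": 1,
        "blend": 32, "fusion": 18, "exchange": 2, "deletion": 4,
    }
    total = sum(slip_counts.values())
    assert total == 203
    for slip_type, n in slip_counts.items():
        print(f"{slip_type:<34} {n:>3}  {100 * n / total:5.1f}%")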
5.5 Results
10 The categories and the affected entities are those described in Section 5.3. The phonological
features are further specified as handshape, hand orientation, movement, and place of articula-
tion. The category ‘other’ concerns other phonological errors; for example the proper selection
of fingers or the contact. The category ‘h1 and h2’ concerns the proper selection of hands, e.g. a
one-handed sign is changed into a two-handed sign. The category ‘combination’ concerns slips
where more than one phonological feature is changed.
11 Compare example (1) in Section 5.4.
12 In a serial, modular perspective (as in Garrett, Levelt), the problem with syntagmatic errors
concerns the proper binding of elements to slots specified by the representations on the respec-
tive level. From a connectionist perspective, the problem with syntagmatic errors concerns the
proper timing of elements. Both approaches, binding-by-evaluation and binding-by-timing, are
competing conceptions of the language production process (see also Levelt, Roelofs, and Meyer
1999).
and blends referred to in Section 5.4.1 affect words. Example (2) is a semantic
substitution (with a conduite d’approche13 ):
(2) (Context: A boy is looking for his missing shoe)
VA(TER) [BUB → VATER] S(OHN) [conduite: BUB → SOHN] BUB
father son boy
‘the father, son, boy’
The signer starts with the erroneously selected lemma VATER (‘father’) given
in Figure 5.4a. That BUB and not VATER is the intended sign can be inferred
from the context in which the discourse topic is the boy who is looking for his
missing shoe. After introducing the boy, the signer goes on to say where the
boy is looking for his shoe. Apart from contextual information, the repair BUB
also indicates the target sign. Immediately after the onset of the movement of
VATER, the signer changes the handshape to the F-hand with which SOHN
(‘son’) is shown in Figure 5.4b.14 Eventually, the signer converges on the target
sign BUB (‘boy’) as can be seen in Figure 5.4c.
Linearization errors such as anticipation, perseveration, and harmony errors
typically affect phonological features. Example (3) is a perseveration of the
handshape of the immediately preceding sign:
13 A conduite d’approche is a stepwise approach to the target word, either related to semantics or
to form. In (2) the target word BUB is reached only via the semantically related word SOHN,
the conduite.
14 In fact the downward movement is characteristic of TOCHTER (‘daughter’); SOHN (‘son’)
is signed upwards. We have, however, good reasons to suppose that SOHN is, in fact, the in-
tended intermediate sign which only coincidentally surfaces as TOCHTER because of the com-
pelling downward movement from VATER to the place of articulation of BUB. Thus, the string
VATER–SOHN–BUB behaves like a compound.
Figure 5.5a VATER [B-hand]; 5.5b slip: MOTHER [B-hand]; 5.5c correct:
MOTHER [G-hand]
Figure 5.6a MANN [forehead]; 5.6b slip: FRAU [forehead]; 5.6c correct:
FRAU [breast]
Bellugi (1979; see also Newkirk et al. 1980, and Section 5.5.3). They report
49.6 percent handshape errors, which closely matches our ratio of 47.4 percent. Our re-
sult is also confirmed by findings in sign language aphasia, where phonological
paraphasia mostly concerns the handshape parameter (Corina 1998).
Other phonological features – such as hand orientation, movement, and place
of articulation – are only rarely affected. In (4) we introduce a place of articu-
lation error:
(4) (Context: The signer suddenly realizes that the character he had re-
ferred to as a man is, in fact, a woman)
MANN FRAU [POA: MANN]
man woman
‘The man is a woman.’
In (4) the signer perseverates the place of articulation of MANN [forehead]
(see Figure 5.6a) on the sign FRAU (see Figure 5.6b). The correct place of
articulation of FRAU is at the chest (see Figure 5.6c). All other phonological
parameters (hand orientation, movement, handshape) are from FRAU.
Fusions are another slip category that is sensitive to linearization. Here, two neighboring signs fuse. Each loses parts of its phonological specification; together they form a single sign (syllable), as in (5):
(5) (Context: A boy is looking for his missing shoe)17
                     mouth gesture: blowing
    SCHUH DA (ER)    NICHT-VORHANDEN [F-hand → V-hand]
                     SCHAUT [path movement → circular movement]
    shoe  here (he)  not-there
                     looks
    ‘He looks for the shoe, and finds nothing.’
17 We represent the fusion by stacking the glosses for both fused signs, SCHAUT and NICHT-
VORHANDEN, as phonological features of both signs realized simultaneously. The nonmanual
feature of NICHT-VORHANDEN – the mouth gesture (blowing) – has scope over the fusion.
The [F]-handshape of NICHT-VORHANDEN, however, is suppressed, as is the straight or arc
movement and hand orientation of SCHAUEN.
In (5), the signer fuses the two signs SCHAUT (‘looks’) and NICHT-
VORHANDEN (‘nothing’). The [V] handshape is from SCHAUT; the circular
movement, the hand orientation, and the mouth gesture (blowing out a stream
of air) is from NICHT-VORHANDEN. The fused elements are adjacent and
have a syntagmatic relation in the phrase. Their positional frames are fused
into a single frame; phonological features stem from both signs. Interestingly,
a nonmanual feature (the mouth gesture) is also involved.18
Fusions in spoken languages are not a major slip category but have been
reported in the literature (Shattuck-Hufnagel 1979; Garrett 1980; Stemberger
1984). Fusions are similar to blends, formationally, but involve neighboring
elements in the syntactic string, whereas blends involve paradigmatically re-
lated semantic items. Stemberger (1984) argues that they are structural errors
involving two words in the same phrase for which, however, only one word
node is generated. In our DGS data, two neighboring signs are fused into a
single planning slot, whereby some phonological features stem from the one
sign and some from the other; see (5). Slips of this type may relate to regular
processes such as composition by which new and more convenient signs are
generated synchronically and diachronically. Therefore, one might speculate
that fusions are more frequent in sign language than in spoken language, as
our data suggest. This issue, however, is not well understood and needs further
elaboration.
Word blends are frequent paradigmatic errors in DGS. In (6) two semantically
related items – HOCHZEIT (‘marriage’) and HEIRAT (‘wedding’) – compete
for lemma selection and phonological encoding. The processor selects both of
them and an intricate blend results; this blend is complicated by the fact that
both signs are two-handed signs:
(6)           HEIRAT
    PAAR //   HOCHZEIT // HEIRAT PAAR
              marriage
    couple // wedding // marriage couple
    ‘wedding couple’
The two competing items in the blend (6) are HEIRAT (‘marriage’) (see Fig-
ure 5.7b) and HOCHZEIT (‘wedding’) (see Figure 5.7c).19 In the slip (see
Figure 5.7a), the dominant hand has the [Y] handshape of HOCHZEIT and
also performs the path movement of HOCHZEIT, while the orientation and
configuration of the two hands is that of HEIRAT. For the sign HEIRAT, the
dominant hand puts the ring on the non-dominant hand’s ring finger as in the
18 It is important not to confuse fusions and blends. Whereas in fusions neighboring elements in
the syntagmatic string interact, only signs that bear a paradigmatic (semantic) relation engage in
a blend. While SCHAUT and NICHT-VORHANDEN have no such relation, the signs involved
in blends like (6) do.
19 Note that this blend has presumably been triggered by an “appropriateness” repair, namely the
extension of PAAR (‘couple’) to HEIRATSPAAR (‘wedding couple’).
wedding ceremony. In the slip, however, the dominant hand glides along the palm of the non-dominant hand and not over its back, as in HOCHZEIT. Interestingly, features of both signs are present simultaneously, but distributed over the two articulators, the hands; this kind of error is impossible in spoken languages. The blend is corrected after the erroneous sign. This time, one of the competing signs, HEIRAT, is correctly selected.
Among the phonological parameters, handshape is affected to the highest degree. One might conjecture that the bigger the inventory, the more error-prone
the process of selection both because there is higher competition between the
members of the set and because the representational space has a higher density.
Furthermore, the motor programs for activating these handshapes involve only
minor differences; this might be an additional reason for mis-selection. Note
that the inventory for hand orientation is much smaller – there are only six
major hand orientations that are used distinctively – and the motor programs
encoding this parameter can be relatively imprecise. Hand orientation errors,
accordingly, are less frequent.
In spoken language, phonological features are also not equally affected in slips; the place feature (labial, alveolar, palatal, glottal, uvular, etc.) is most frequently involved (Leuninger, Happ, and Hohenberger 2000b).
In order to address the question of modality, we have to make a second com-
parison, this time with a corpus of slips of the tongue. We use the Frankfurt
corpus of slips of the tongue.21 This corpus includes approximately
5000 items. Although both corpora differ with respect to the method by which
the data were gathered and with respect to categorization, we provide a broad
comparison.
As can be seen from Tables 5.1 and 5.3,22 there is an overall congruence for
affected entities and slip categories. There are, however, two major discrepan-
cies. First, there are almost no exchanges in the sign language data, whereas
they are frequent in the spoken language data. Second, morphemes are rarely
affected in DGS, whereas they are affected to a higher degree in spoken German.
These two results become most obvious in the absence of stranding errors in
DGS. In Section 5.5.3.1 we concentrate on these discrepancies, pointing out
possible modality effects.
21 We are in the process of collecting slips of the tongue from adult German speakers in the same
setting, so we have to postpone the exact quantitative intermodal comparison for now.
22 In Table 5.3 the following categories from Table 5.1 are missing: harmony, formal, and semantic
and formal substitutions. These categories were not included in the set of categories by the
time this corpus had been accumulated. Harmony errors are included in the anticipation and
perseveration category.
Table 5.3 Slip categories/affected entities for the German slip corpus. Columns: slip of the tongue type, n, %, and affected entity (word, phoneme, morpheme, phrase)
The absence of stranding errors in DGS and ASL calls for some explanation. First of all, we have to exclude a sampling artifact. The data in the two corpora (DGS vs. spoken German) were collected in very different ways: the slips of the tongue stem from a spontaneous corpus; the slips of the hand from an elicited corpus
(for details, see Section 5.4). The distribution of slip categories in the former
type of corpora is known to be prone to listeners’ biases (compare Meyer 1992;
Ferber 1995; see also Section 5.4). Stranding errors are perceptually salient,
and because of their spectacular form they are more likely to be recorded and
added to a slip collection. In an objective slip collection, however, this bias is not
operative.23 Pending the exact quantification of our elicited slips of the tongue,
we now turn to linguistic reasons that are responsible for the differences. The
convergent findings in ASL as well as in DGS are significant: if morphemes do not strand in either ASL or DGS, this strongly hints at a systematic linguistic reason.
What first comes to mind is the difference in morphological type: spoken
German is a concatenative language to a much higher degree than DGS or ASL.
Although spoken German is fusional to a considerable degree (see Section 5.2),
it is far more concatenative than DGS in that morphemes typically line up
neatly one after the other, yielding, for example, ‘mein malay-isch-er Kolleg-e’
(‘my Malay colleague’) with one derivational morpheme (-isch), one stem-
generating morpheme (-e) and one case/agreement morpheme (-er). In DGS this
complex nominal phrase would contain no such functional morphemes but take
the form: MEIN KOLLEGE MALAYISCH (‘my Malay colleague’). For this
reason, no stranding can occur in such phrases in the first place. Note that this
is not a modality effect but one of language type. We can easily show that
this effect cuts across languages in the same modality, simply by looking at
the English translation of (7b): ‘my Malay colleague.’ In English comparable
stranding could also not occur because the bound morphemes (on the adjective
and the noun) are not overt, as in DGS. English, however, has many other bound
morphemes that are readily involved in stranding errors (as in 7a), unlike in
DGS.
Now we are ready for the crucial question: should we expect no stranding errors in DGS (or ASL) at all? The answer is no. Stranding errors
should, in principle, occur (see also Klima and Bellugi 1979).24 What we have
to determine is what grammatical morphemes could be involved in such sign
morpheme strandings. The answer to this question relates to the second reason
for the low frequency of DGS stranding errors: high vs. low separability of morphemes.
23 A preliminary inspection of our corpus of elicited slips of the tongue suggests that stranding
errors are also a low-frequency error category in spoken languages, so that the apparent difference
is not one between language types but is, at least partly, due to a sampling artifact.
24 Klima and Bellugi (1979) report on a memory study in which signers sometimes misplaced the
inflection. Although this is not the classical case of stranding (where the inflections stay in situ
while the root morphemes exchange), this hints at a possible separability of morphemes during
online production.
PERSON and FRAG- (‘to ask’) could be exchanged. This would result in the hypothetical slip (9′):
(9′) ICH FRAG+++ ICH PERSON[JEDEN-EINZELNEN]
     I ask[plural] I person[each-of-them]
Gee and Goodhart (1988) have convincingly argued that spoken and signed languages differ with respect to the amount of information that can be conveyed in a linguistic unit within a given span of time. This topic is intimately related to language production and therefore deserves closer inspection (Leuninger et al. 2000a). Spoken languages, on the one hand, make use of very fine motoric articulators (tongue, velum, vocal cords, larynx, etc.). The places of articulation of the various phonemes are very close to each other in the mouth (teeth, alveolar ridge, lips, palate, velum, uvula, etc.). The oral articulators are capable of achieving a very high temporal resolution of signals in production and can thus convey linguistic information at a very high speed.
Signed languages, on the other hand, make use of coarse motoric articulators
(the hands and arms, the entire body). The places of articulation are more
distant from each other. The temporal resolution of signed languages is lower.
Consequently, sign language production must take longer for each individual
sign.
The spatio-temporal and physiological constraints of language production
in both modalities are quite different. On average, the rate of articulation for words is double that of signs (4–5 words per second vs. 2.3–2.5 signs per second; see Klima and Bellugi 1979). Surprisingly, however, signed and spoken languages are on a par with regard to the rate of propositional information conveyed per unit of time. Spoken and signed sentences roughly have the same production time
(Klima and Bellugi 1979). The reason for this lies in the different information
density of each sign.27 A single monosyllabic sign is typically polymorphemic
(remember the nine morphemes in (8); compare Brentari 1998). The condensa-
tion of information is not achieved by the high-speed serialization of segments
and morphemes but by the simultaneous output of autosegmental phonological
features and morphemes.
Thus, we witness an ingenious trade-off between production time and in-
formational density which enables both signed and spoken languages to come
up with what Slobin (1977) formulated as a basic requirement of languages,
namely that they “be humanly processible in real time” (see also Gee and
Goodhart 1988).
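The trade-off can be made concrete with a back-of-the-envelope calculation. Taking the midpoints of the rates cited above (4.5 words and 2.4 signs per second), and writing m_w and m_s for the average number of morphemes per word and per sign (values the chapter does not fix, so the figures here are purely illustrative), parity of morphemic information per second requires:

```latex
% Illustrative parity condition; 4.5 words/s and 2.4 signs/s are midpoints
% of the rates cited from Klima and Bellugi (1979); m_w and m_s are assumed.
\[
4.5\,\tfrac{\text{words}}{\text{s}}\cdot m_w \;\approx\; 2.4\,\tfrac{\text{signs}}{\text{s}}\cdot m_s
\qquad\Longrightarrow\qquad
\frac{m_s}{m_w}\;\approx\;\frac{4.5}{2.4}\;\approx\;1.9
\]
```

On these assumptions, a sign must carry roughly twice the morphemic content of a word for the two modalities to transmit propositional information at the same rate; the simultaneous, polymorphemic structure of signs described above is exactly what supplies this extra density.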
If we pursue this line of argumentation, it follows quite naturally that signed languages – due to their modality-specific production constraints – will always be attracted to autosegmental phonology and fusional morphology. Spoken languages, being subject to less severe constraints, are free to choose among the available options. We therefore witness a greater amount of variability among them.
27 Klima and Bellugi (1979) suggest that the omission of function words such as complementizers,
determiners, auxiliaries, etc. also economizes time. We do not follow them here because there
are also spoken languages that have many zero functors, although they are obviously not pressed
to economize time by omitting them.
Given the fact that all major phonological features (handshape, place of articulation, hand orientation, and movement) can be affected in “simple” signing errors where only one element is affected, as in anticipations, perseverations, and harmony errors (see Table 5.1), one wonders why they should not also figure in “complex” signing errors where two elements are affected. Handshape
exchanges like the one in (11) should, therefore, be expected. There is no reason
to suppose that sign language features cannot be separated from each other. In
fact, it was one of the main goals of Newkirk et al. (1980) to demonstrate that
there are also sub-lexical phonological features in ASL, and to provide em-
pirical evidence against the unwarranted view that signs are simply indivisible wholes, holistic gestures not worthy of serious study by phonologists.
Note that spoken and signed languages differ in the following way with
respect to phonological errors in general and phonological exchanges in partic-
ular. Segments of concatenating spoken languages such as English and German
are lined up like beads on a string in a strictly serial order as specified in the
lexical item’s word form (lexeme). If two complete segments are exchanged,
the “syllable position constraint” is always obeyed. The same, however, cannot
hold true of the phonological features of a sign. They do not behave as segments:
they are not realized linearly, but simultaneously. It is a modality specificity,
indeed, that the sign’s phonological features are realized at the same time,
although they are all represented on independent autosegmental tiers. Obvi-
ously, the “frame-content metaphor” (MacNeilage 1998) cannot be transferred
to signed languages straightforwardly. The “frame-content metaphor” states
that “a word’s skeleton and its segmental content are independently generated”
(Levelt 1992:10). This is most obvious in segmental exchanges. If we roughly
attribute handshape, hand orientation, and place of articulation consonantal sta-
tus and movement vocalic status, then of the two constraints on phonological
errors – the “segment class constraint” and the “syllable position constraint” – sign languages obey only the former (compare Perlmutter 1992). Typically, one handshape is replaced with another handshape or one movement with another movement. The latter constraint, however, cannot hold true of the phonological features of a sign because they are realized simultaneously. Thus, phonological feature slips in sign languages are the counterpart of segmental slips in spoken languages, but segmental slips as such have no equivalent in sign language.
We still have to answer the question why exchanges across all entities are so
rare in sign language. As Stemberger (1985) pointed out, the true number of
exchanges may be veiled by what he calls “incompletes,” i.e. covered exchanges
that are caught and repaired by the monitor after the first part of the exchange
has taken place. (An incomplete is an early corrected exchange.) These errors,
then, do not surface as exchanges but as anticipations. As a null hypothesis
we assume that the number of exchanges, true anticipations, and incompletes
is the same for spoken and signed languages, unless their incidence interacts
with other processes that change the probability of their occurrence. We will, in
fact, argue below that monitoring is such a process. In Section 5.6 we point out
that the cut-off points in signed and spoken languages are different. Errors are
detected and repaired apparently earlier in sign languages, preferentially in the
problem item itself, whereas repairs in spoken languages start later, after the
erroneous word or even later. If this holds true, exchanges may be more likely
to surface in spoken languages simply because both parts of the error would
have already occurred before the monitor was able to detect them.
Delayed repairs, where some material intervenes between the error and the repair, are rare (7.3 percent), as are early repairs before word onset (7.3 percent).
The cut-off points in spoken language (here, Dutch) are different.32 The typ-
ical locus of repair in spoken language is after the word. Corrections within the
word are rarer, and delayed repairs are more frequent. For DGS, however, re-
pairs peak on very fast repairs within the word, followed by increasingly slower
repairs. However, we do not invoke modality as an explanation for this apparent difference, because modality offers only a superficial, albeit interesting, explanation.
From the discussion in Section 5.5 of the different production times for
spoken vs. signed languages (the ratio of which is 2:1), we can easily predict
that the longer duration of a sign word will influence the locus of repair, provided
that the overall capacity of the spoken and the sign language monitor is the same.
The following prediction seems to hold: because a signed word takes twice as
long as a spoken word, errors will be more likely to be caught within the word in
sign language, but after the word in spoken language. Note that this difference
becomes even more obvious when we characterize the locus of repair in terms
of syllables. In DGS, the error is caught within a single syllable, whereas for
spoken Dutch, the syllable counting begins only after the trouble word (not
counting any syllables within the error).
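A rough calculation illustrates the prediction. Suppose, purely for illustration, that the monitor needs a constant latency of about 300 ms to detect an error and interrupt articulation in either modality (this latency figure is our assumption, not a value reported in the chapter). At the articulation rates cited earlier in Section 5.5, an average spoken word lasts about 220 ms and an average sign about 420 ms:

```latex
% Toy repair-locus arithmetic; the 300 ms monitor latency is an assumption.
\[
\text{word: }\tfrac{1}{4.5}\,\text{s}\approx 220\,\text{ms} < 300\,\text{ms}
\qquad\qquad
\text{sign: }\tfrac{1}{2.4}\,\text{s}\approx 420\,\text{ms} > 300\,\text{ms}
\]
```

On these toy numbers the interruption point falls after the spoken word but still inside the sign, reproducing the observed difference in cut-off points without assuming any difference in monitor capacity.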
Again, the reason is that words in signed language (monomorphemic as well
as polymorphemic) tend to be monosyllabic (see Section 5.5). This one syllable,
however, has a long production time and allows for a repair at some point during
its production.33 Thus, the comparison of signed and spoken language repairs
reveals once more the different temporal expansion of identical linguistic ele-
ments, i.e. words and syllables. This is a modality effect, but not a linguistic
one. This effect is related to the articulatory interface. Note that in Chomsky’s
minimalist program (1995) PF, which is related to articulation and perception,
is one of the interfaces with which the language module interacts. Obviously,
spoken and signed languages are subject to very different anatomical and phys-
iological constraints with regard to their articulators. Our data reveal exactly
this difference.
Would it be more appropriate to characterize the locus of repair not in terms
of linguistic entities but in terms of physical time? If we did this, we would find that in both language types repairs would, on average, be provided equally fast. With this result, any apparent modality effect vanishes. We would not
know, however, what differences in the temporal resolution of linguistic entities
exist in both languages, and that these differences result in a very different
32 Levelt distinguishes word-internal corrections (without further specifying where in the word),
corrections after the word, and delayed corrections that are measured in syllables after the error.
33 It is even possible that both the erroneous word and the repair share a single syllable. In these
cases, the repair is achieved by a handshape change during the path movement. This is in accord
with phonological syllable constraints (Perlmutter 1992) which allow for handshape changes
on the nucleus of a sign.
monitor behavior. Stopping after the problem word has been completed or while
producing the problem word itself makes a difference for both the producer and
the interlocutor.
Acknowledgments
Our research project is based on a grant given to Helen Leuninger by the German Research Council (Deutsche Forschungsgemeinschaft, DFG), grant numbers LE 596/6-1 and LE 596/6-2.
5.8 References
Abd-El-Jawad, Hassan and Issam Abu-Salim. 1987. Slips of the tongue in Arabic and
their theoretical implications. Language Sciences 9:145–171.
Baars, Bernard J., ed. 1992. Experimental slips and human error. Exploring the archi-
tecture of volition. New York: Plenum Press.
Baars, Bernard J. and Michael T. Motley. 1976. Spoonerisms as sequencer conflicts:
Evidence from artificially elicited errors. American Journal of Psychology 89:467–
484.
Baars, Bernard J., Michael T. Motley and Donald G. MacKay. 1975. Output editing for
lexical status in artificially elicited slips of the tongue. Journal of Verbal Learning
and Verbal Behavior 14:382–391.
Berg, Thomas. 1988. Die Abbildung des Sprachproduktionsprozesses in einem Akti-
vationsflussmodell. Untersuchungen an deutschen und englischen Versprechern.
Linguistische Arbeiten 206. Tübingen: Niemeyer.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Butterworth, Brian. 1980. Some constraints on models of language production. In Lan-
guage production, Vol. 1: Speech and talk, ed. B. Butterworth, 423–459. London:
Academic Press.
Caramazza, Alfonso. 1984. The logic of neuropsychological research and the problem
of patient classification in aphasia. Brain and Language 21:9–20.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
Corina, David. 1998. The processing of sign language. Evidence from aphasia. In Hand-
book of neurolinguistics, ed. Brigitte Stemmer and Harry A. Whitaker, 313–329.
San Diego, CA: Academic Press.
Crain, Stephen and Diane Lillo-Martin. 1999. An introduction to linguistic theory and
language acquisition. Oxford: Blackwell.
Cutler, Anne. 1982. Speech errors: A classified bibliography. Bloomington, IN: Indiana
University Linguistics Club.
6 The role of MCE in language development of deaf children
Samuel J. Supalla and Cecile McKee
6.1 Introduction
A pressing question related to the well-being of deaf children is how they
develop a strong language base (e.g. Liben 1978). First or native language
proficiency plays a vital role in many aspects of their development, ranging
from social development to educational attainment to their learning of a second
language. The target linguistic system should be easy to learn and use. A natural
signed language is clearly a good choice for deaf children. While spoken English
is a natural language, it is less obvious that a signed form of English is also
a natural language. At issue is the development of Manually Coded English
(MCE), which can be described as a form of language planning aimed at making
English visible for deaf children (Ramsey 1989). MCE represents a living experiment in which deaf children are expected to learn signed English as well as hearing children learn spoken English. If MCE is a natural language, learning
it should be effortless, with learning patterns consistent with what we know
about natural language acquisition in general.
American Sign Language (ASL) is a good example of a sign system that qualifies as a natural language, with the capacity of becoming a native language for deaf children, especially those of deaf parents who use ASL at home (Newport
and Meier 1985; Meier 1991). However appropriate ASL is for deaf children
of deaf parents, it is not the case that all deaf children are exposed to ASL.
Many are born to hearing parents who do not know how to sign. One area
of investigation is how children from nonsigning home environments develop
proficiency in ASL. Attention to that area has diverted us from studying how
deaf children acquire English through the signed medium. For this chapter,
we ask whether MCE constitutes, or has the capacity of becoming, a natural
language. If it does not, why not?
The idea that modality-specific constraints shape the structure of language
requires attention. We ask whether MCE violates constraints on the percep-
tion and processing of a signed language. We find structural deficiencies in
MCE, and suggest that such problems may compromise any sign system whose
grammatical structure is based on a spoken language. The potential problems
associated with the structure of MCE are confounded by the fact that the input
quality for many deaf children may be less than optimal, thus affecting their
acquisition. The question of input quality dominates the literature regarding
problems associated with MCE in both home and school settings (for a review,
see Luetke-Stahlman and Moeller 1990). We regard impoverished input as an
external factor. We propose to study the way MCE is structured, which is best
described as an internal factor. Problematic internal factors can undermine the
learning of any linguistic system, including MCE. In other words, regardless
of the quality of the input, deficiencies in a linguistic system will hamper its
acquisition. The focus of this chapter is on the internal factors affecting MCE
acquisition. We consider, for example, how words are formed in the visual–
gestural modality. Such morphological considerations bear on the question of
whether MCE is a natural language. First, however, we discuss some historical
precedents to MCE and some theoretical assumptions underlying the language
planning efforts associated with MCE.
The term LAD and the concepts associated with it have changed since the 1960s. However, because the point we are making here does not hinge on departures from
Chomsky’s original observations, we use the term LAD and refer only gener-
ally to the idea that universal structural principles restrict possible variations in
language. On this hypothesis, the child’s task is to discover which variations
apply to the particular language he or she is acquiring. The LAD limits natural
languages. That is, a natural language is one that is allowed by the guidelines
encoded in the LAD (whatever they are).
Another important consideration in examining the question of what makes
a natural language is processing constraints. As Bever (1970) observed, every
language must be processed perceptually. Further, a language’s processing must
respect the limits of the mind’s processing capacities. Thus, for a system to be a
natural language (i.e. acquired and processed by humans), it must meet several
cognitive prerequisites. What is still not clear is whether modality plays a role
in shaping language structure and language processes. Whatever position is taken on the modality question has direct ramifications for the feasibility of the language planning effort as described for the field of deaf education.
De l’Epée acknowledged the relationship of cognition and language at least
intuitively. Not only did he hold the position that a signed language is fitting for
deaf children, but he was also convinced that it had the capacity of incorporating
the structure of a spoken language effectively. He assumed that modality did not
play a role in the structuring of a signed language. First, de l’Epée’s encounters
with deaf children and their signing prior to the school’s founding provided
him with the basis needed to make an effective argument for their educational
potential. Second, it was in this context that he conceived the idea of making the spoken language, French in his case, visible through the signed medium. De l’Epée
then made a formal distinction between Natural Sign and Methodical Sign,
reflecting his awareness of relevant structural differences. The former referred
to signing by deaf children themselves and the latter to the sign system that he
developed to model French.
A more radical approach would have been to create a French-based sign
system without reference to Natural Sign. In other words, de l’Epée could
have invented a completely new system to sign French. Instead, he chose to
adjust an existing sign system to the structure of French. That is, he made
Methodical Sign by modifying Natural Sign. This language planning approach
is consistent with de l’Epée’s warning about the possibility of structural deviation
leading to the breakdown of a linguistic system in the eyes of a deaf child. Pure
invention would increase the likelihood of such an outcome. Even with these
considerations, Methodical Sign did not produce positive results and failed to
meet de l’Epée’s expectations.
The problems plaguing Methodical Sign were serious enough for the third
director of the school, Roch-Ambroise Bebian, to end its use with deaf students.
At the time of Bebian’s writing, over 40 years had passed since the founding
of de l’Epée’s school. The continued reference to Natural Sign indicates that
it had persisted regardless of the adoption of Methodical Sign as a language
of instruction. Despite de l’Epée’s use of Natural Sign to develop Methodical
Sign, it appears that the latter was not learned well. Bebian’s insights on this
are valuable. He identified the problem as one that concerned deaf children’s
perceptual processing of the French-based sign system. The distortion affecting
these children’s potential for successful language acquisition suggests serious
problems associated with the structure of Methodical Sign (Bebian described
it as “disfigured”).
Bebian’s reference to the special nature of signed languages to account for
de l’Epée’s failed efforts raises doubts that the structure of a spoken language can
successfully map onto the signed medium. This alone represents a significant
shift from de l’Epée’s position, and it is part of Bebian’s argument in favor of
Natural Sign as a language of instruction over Methodical Sign. Unfortunately,
Bebian was not completely successful in describing what went wrong with
Methodical Sign. He did not elaborate on possible structural deficiencies of
the French-based sign system. The basis for making the theoretical shift in the
relationship of signed languages and spoken languages was thus lacking.
With this historical background, we need to re-examine recent language plan-
ning efforts associated with deaf children and spoken languages. Methodical
Sign cannot be further studied because it has ceased to exist. Its English descen-
dants, on the other hand, provide us with the opportunity to examine several
similar systems. We turn now to English-based sign systems. Next, we address
deaf children’s learning patterns and their prospect for mastering English in the
visual–gestural modality.
For example, ASL has three distinct signs for three of the concepts behind the En-
glish word ‘right’ (i.e. correct, direction, and to ‘have a right’). SEE 2 borrows
only one ASL sign and uses it for all three concepts.
At the morphological level, ASL signs are also adopted. They serve as roots,
to which invented prefixes and suffixes are added. These represent English
inflections for tense, plurality, and so on. SEE 2 relies on both invented and
borrowed signs to provide a one-to-one mapping for English pronouns, prepo-
sitions, and conjunctions. As a result, borrowing from ASL is primarily at the
lexical level. Some function morphemes (i.e. free closed class elements) are
also borrowed from ASL. All bound morphemes are invented. The invention of
this class of morphemes is due to the fact that ASL does not have a productive
set of prefixes and suffixes.
We turn now to the formational principles that underlie SEE 2 prefixes and
suffixes. Importantly, SEE 2’s planners attempted to create English versions of
bound morphemes in the most natural way possible. If a form is to function as
a linear affix, it is phonologically independent of the root. This does not mean
that a linear affix will not influence the phonological properties of the root at
some point. For example, the sound of the English plural is determined by the
last consonant of the root: bats vs. bells vs. churches. We want to emphasize
that some aspects of the form of the linear affix remain constant even though
other aspects of its form may change. One approach was to create a linear affix
with all four of the basic parameters in a full ASL sign (S. Supalla 1990). For
example, the SEE 2 suffix -ING involves the handshape I (representing the
initial letter of the suffix), a location in neutral signing space, outward rotation
of the forearm, and a final orientation of the palm facing away from the signer’s
body (see Figure 6.1a). Another example is the SEE 2 suffix -MENT, which
involves two handshapes: the dominant one as M (representing the initial letter
of the suffix) located on the palm of the other, a path movement across the
palm surface, and an orientation of the dominant palm facing away from the
signer’s body (see Figure 6.1b). The majority of SEE 2’s affixes are, like -ING
and -MENT, complete with respect to full sign formational structure: 43 out of
49, or 88%. The remaining six affixes use only three of the four parameters;
they omit movement (either internal or path). Figure 6.2 exemplifies one of
these latter six affixes, the suffix -S for singular present tense verbs and plural
nouns.
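The four-parameter analysis lends itself to a compact representation. The sketch below encodes the three affixes just described as feature bundles; the parameter labels are informal paraphrases of the descriptions above, and the orientation given for -S is an assumption added for illustration.

```python
# Toy feature-bundle encoding of SEE 2 affixes, following the four-parameter
# description above. Labels are informal, not a formal transcription system.
AFFIXES = {
    "-ING":  {"handshape": "I", "location": "neutral space",
              "movement": "outward forearm rotation", "orientation": "palm away"},
    "-MENT": {"handshape": "M", "location": "palm of non-dominant hand",
              "movement": "path across palm", "orientation": "palm away"},
    "-S":    {"handshape": "S", "location": "neutral space",
              "movement": None,            # one of the six affixes omitting movement
              "orientation": "palm away"},  # assumed for illustration
}

for name, params in AFFIXES.items():
    complete = all(value is not None for value in params.values())
    print(f"{name}: {'full sign structure' if complete else 'movement omitted'}")
```

Running the loop flags -ING and -MENT as complete signs and -S as lacking movement, mirroring the 43-versus-6 split reported above.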
Thus, most of the bound morphemes in SEE 2 are sign-like. Further, it seems
that the invented nature of bound morphemes in MCE is not necessarily prob-
lematic. Analysis confirms that all sign-like affixes conform to how a sign
should be formed in ASL (S. Supalla 1990). But is this enough? We consider
now what constitutes a sign in ASL, or perhaps in any signed language. A word
in any linguistic system is formed according to certain formational rules. These
rules involve both the phonological and the morphological systems.
Figure 6.1a The SEE 2 sign -ING; 6.1b The SEE 2 sign -MENT
Battison’s
(1978) pioneering typological work indicates that an ASL sign has its own
formational rules based on the physical dynamics of the body as manifested
primarily in gestural articulation and visual perception. These constraints of
production and perception on the formational aspects of signs have resulted
in the development of a highly integrated linguistic system comparable to the
phonological and morphological components of a spoken language.
[T]wo [handshapes and locations] is the upper limit of complexity for the formation
of signs. A simple sign can be specified for no more than two different locations (a
sign may require moving from one location to another), and no more than two different
handshapes (a sign may require that the handshape change during the sign). (Battison
1978:48)
Battison also observed that the two handshapes in a sign must be phonologically
related, with one changing into the other by opening or closing. For example,
a straight, extended finger may bend or fully contract into the palm, or the
contracted fingers may extend fully. Thus, these constraints on handshape for-
mation not only limit the number of handshapes to two; they also require the
handshapes to be related.
Such sign formational properties presumably relate to complexity issues.
If constraints like Battison’s prove to be true of all signed languages, then
they might affect the learnability of any manually coded linguistic system,
ASL and SEE 2 alike. If, on the other hand, such constraints characterize
only ASL (and other signed languages allow, for example, three handshapes
or two unrelated handshapes in a sign), then such constraints are important
only for some language planning efforts. General learnability would not be the
issue. Thus, an important question to resolve before we can fully evaluate the
learnability of MCE systems is the generalizability of constraints like the ones
that Battison described.
At this point, the strong relationship between SEE 2 and ASL needs to be
summarized. Not only is a large inventory of signs borrowed from ASL to form
the free morphemes in SEE 2, but most of the bound morphemes were created
based on how signs are formed in ASL. It is not a question of whether individual
linear affixes are formed appropriately. We focus instead on how these elements
are combined with a root. More specifically, the adoption of linear affixation as
a morphological process in MCE may exceed the formational constraints for
signs in ASL. In contrast, nonlinear affixation is consistent with such constraints
and leads to the desired outcome of successful acquisition by deaf children
(for a review on linear and nonlinear affixation types occurring in ASL and
spoken languages, see S. Supalla 1990). The following discussion covers this
particular morphological process in ASL and how it differs from the linear
type.
Figure 6.3 Three forms of the ASL sign IMPROVE: 6.3a the citation form;
6.3b the form inflected for continuative aspect; and 6.3c a derived noun
In the derived noun, the number of locations is reduced from two to one. The SEE 2 examples undergoing mor-
phological change through linear affixation, on the other hand, had completely
different outcomes.
Consider how the SEE 2 affixes compare with the ASL root IMPROVE in handshape, location, movement, and orientation. The handshape for IMPROVE is B, which is distinct from the
I and M of -ING and -MENT, respectively. The location of IMPROVE is on the
arm, whereas -ING and -MENT are signed in neutral space. The movement is
path/arc for IMPROVE while the movement for -MENT is path/straight, and
the movement for -ING is internal with no path at all. Finally, the orientation
of the IMPROVE handshape is tilted with the ulnar side of the hand facing
downward. In contrast, the orientation for both -ING and -MENT is upright
with the palm facing away from the signer. Taken together, it can be seen that
affixes developed for SEE 2 are phonologically distinct, across four parameters,
from the roots that they are sequenced with.
Another important point to consider is the set of cases where the handshape and location constraints are exceeded by the multi-morphemic signs in SEE 2. In
IMPROVING, the B and I handshapes are both “open,” and there is no relation-
ship between them. IMPROVEMENT also has two handshapes. Again, there is
no relationship between them; they are formationally distinct (i.e. four extended
fingers for the first handshape and three bent fingers for the second handshape).
If there were a relationship between the two handshapes (as Battison maintained
for ASL), the two handshapes should be formationally consistent (e.g. extended
four fingers to bent four fingers).
In the case of IMPROVES (including the SEE 2 affix that omits movement, as
shown in Figure 6.2), this linearly affixed sign meets both handshape number
and handshape relationship constraints; that is, the B and S handshapes are
related (“open” and “closed”; four extended fingers fully contract into the palm).
But IMPROVES has three locations. It exceeds the location number constraint.
In this example, the first two locations are made on the arm, and the last location
is made in neutral signing space. IMPROVING and IMPROVEMENT also
exceed the two-location limit, in addition to failing the handshape constraints.
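The constraint checks just walked through can be summarized in a short sketch. Treating a candidate sign as the handshapes and locations it passes through is a simplification; the table of related handshape pairs contains only the B/S opening–closing pair mentioned in the text, and the location labels are illustrative.

```python
# Minimal checker for the Battison (1978) constraints discussed above.
RELATED = {frozenset({"B", "S"})}  # open B closing to a fist, per the text

def battison_violations(handshapes, locations):
    """Return the list of constraints a candidate sign violates."""
    violations = []
    if len(set(handshapes)) > 2:
        violations.append("handshape number (>2)")
    elif len(set(handshapes)) == 2 and frozenset(handshapes) not in RELATED:
        violations.append("handshape relationship (unrelated pair)")
    if len(set(locations)) > 2:
        violations.append("location number (>2)")
    return violations

# IMPROVES: related B -> S handshapes, but three locations (two on the arm
# plus neutral space), so only the location number constraint fails.
print(battison_violations(["B", "S"], ["lower arm", "upper arm", "neutral"]))

# IMPROVING: B and I are both open and unrelated, and it likewise has three
# locations, so it fails both the relationship and the location constraints.
print(battison_violations(["B", "I"], ["lower arm", "upper arm", "neutral"]))
```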
We turn now to consider assimilation, a phonological operation that occurs
in natural languages, signed and spoken alike. Another example from SEE 2
shows what happens when the two morphological units in KNOWING are com-
bined. Figure 6.5a shows the combination prior to assimilation and Figure 6.5b
after assimilation. The ASL counterpart can be seen in a lexical compound. For
example, the two signs FACE + STRONG (meaning ‘resemble’) show how a
SEE 2 root might “blend” with the sign-like linear affix. According to Liddell
and Johnson (1986), lexical compounds in ASL undergo phonological restruc-
turing and realignment. The comparable change in the form of KNOWING
involves the removal of KNOW’s reduplication and the reversal of its hand-
shape’s orientation (i.e. from palm facing toward the signer to away from the
signer). A path movement is created between KNOW and -ING, whereas it is
absent in the non-assimilated form.
Assimilation cuts the production time of the non-assimilated form in half.
The assimilated version of KNOWING is comparable in production time to the average single sign in ASL (S. Supalla 1990).
Figure 6.5 The SEE 2 sign KNOWING: 6.5a prior to assimilation and 6.5b after assimilation
Nevertheless, this form still
violates ASL handshape constraints in terms of number and relationship. The
two handshapes used in this inflected MCE sign are not related (i.e. B and I
are both open and comparable to those of the earlier example, IMPROVING).
Had the suffix’s handshape (i.e. I) been removed to meet the handshape number
constraint (i.e. using B from KNOW only), the phonological independence of
the suffix would be lost. This particular combination of KNOW and -ING would
be overtly blended. The fully assimilated versions of KNOWING and KNOWS
would appear identical and noncontrastive, for example (S. Supalla 1990).
As shown by the examples discussed here, the combination of a root and
bound morpheme (sign-like and less-than-sign; non-assimilated and assimi-
lated) in SEE 2 can produce at least three scenarios:
• The combination may exceed the location number constraint.
• The combination may exceed the handshape number constraint.
• While meeting the handshape number constraint, the combination of two unrelated handshapes may result in a violation of the handshape relationship constraint.
Not included here is the combination of two or possibly all three constraint
violation types. According to our analyses, MCE morphology does not meet
the constraints on sign structure.
the subjects. The results of this study suggest that perceptual distortion per-
sisted for MCE morphology. Undergoing assimilation, linearly affixed signs
exceeded sign boundaries and were perceived as two signs, not one. Non-
linearly affixed signs, modeled on ASL, were consistently perceived as one
sign. This sign-counting task was also performed by a group of novice signers. Interestingly, these novices showed the same perceptual biases regarding where a sign begins and ends, and their sign counts matched those of participants with extensive signing experience (via ASL or MCE).
Further, S. Supalla’s (1990) use of NZSL as the foreign language stimulus has ramifications for understanding the relationship of signed and spoken
languages. Individual NZSL signs were selected to function as roots and others
as linear affixes. They underwent assimilation upon combination. The subjects
in the study did not know the original meanings of the NZSL signs or the fact
that they were all originally free morphemes. The subjects (even those who had
no signing experience) were able to recognize the formational patterns within
the linearly affixed signs and to identify the sign boundaries splitting the linearly
affixed signs into two full words, not one word. Such behavior is comparable to
deaf children who are exposed to SEE 2 and other English-based sign systems.
Note that NZSL and ASL are two unrelated signed languages, yet they appear to share the same formational constraints on signs.
A critical implication of these findings is that deaf children may have per-
ceptual strategies (as found among adults) that they apply in their word-level
segmentation of MCE morphology. More specifically, these children may be
able to identify signs in a stream based on the structural constraints as de-
scribed, but they cannot perceive linear affixes as morphologically related to roots. Rather, linear affixes stand by themselves and are wrongly perceived as “full words.” This is an example of how a language’s modality may shape its
structure, which in turn relates to its learnability. For adult models using MCE, the combination of a linear affix with a root, assimilated or not, seems to result in a sign that is too complex, which undermines the potential naturalness of MCE morphology.
The notion of modality-specific constraints for the structure of signed lan-
guages is not new. Newport and T. Supalla (2000), for example, reviewed the
issues and highlighted T. Supalla and Webb’s (1995) study of the morphological
devices in 15 different signed languages. The explanation for the striking sim-
ilarity of morphology in these languages lies in nonlinguistic resources. That
is, space and motion were described as what “propels sign languages more
commonly toward one or a few of the several ways in which linguistic systems
may be formed” (Newport and T. Supalla 2000:112). Recall the earlier discus-
sion on how inflectional and derivational processes in ASL employ changes
in sign-internal movement and location (e.g. IMPROVE). These processes can
6.5 References
Andersen, Roger W. 1983. A language acquisition interpretation of pidginization and
creolization. In Pidginization and creolization as language acquisition, ed. Roger W. Andersen, 1–56. Rowley, MA: Newbury House.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bebian, Roch-Ambroise A. 1984. Essay on the deaf and natural language, or introduction
to a natural classification of ideas with their proper signs. In The deaf experience:
Classics in language and education, ed. Harlan Lane, 129–160. Cambridge, MA:
Harvard University Press.
Bellugi, Ursula and Susan Fischer. 1972. A comparison of sign language and spoken
language: Rate and grammatical mechanisms. Cognition 1:173–200.
Bever, Thomas. 1970. The cognitive basis for linguistic structures. In Cognition and the
development of language, ed. John R. Hayes, 279–352. New York: Wiley.
Bornstein, Harry, Karen L. Saulnier, and Lillian B. Hamilton. 1980. Signed English: A
first evaluation. American Annals of the Deaf 125:467–481.
Brown, Roger. 1973. A first language: The early stages. Cambridge, MA: Harvard
University Press.
Chomsky, Noam 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam 1981. Lectures on government and binding. Dordrecht, Holland: Foris.
de l’Epée, Charles M. 1984. The true method of educating the deaf, confirmed by much
experience. In The deaf experience: Classics in language and education, ed. Harlan
Lane, 51–72. Cambridge, MA: Harvard University Press.
Gaustad, Martha G. 1986. Longitudinal effects of manual English instruction on deaf
children’s morphological skills. Applied Psycholinguistics 7:101–127.
Gee, James and Wendy Goodhart. 1985. Nativization, linguistic theory, and deaf lan-
guage acquisition. Sign Language Studies 49:291–342.
Gee, James and Judith L. Mounty. 1991. Nativization, variability, and style shifting in
the sign language development of deaf children of hearing parents. In Theoretical
issues in sign language research, Vol. 2: Psychology, ed. Patricia Siple and Susan
D. Fischer, 65–83. Chicago, IL: University of Chicago Press.
Gerken, Louann, Barbara Landau, and Robert E. Remez. 1990. Function morphemes
in young children’s speech perception and production. Developmental Psychology
26:204–216.
Gilman, Leslea A. and Michael J. M. Raffin. 1975. Acquisition of common morphemes
by hearing-impaired children exposed to Seeing Essential English sign system.
Paper presented at the American Speech and Hearing Association, Washington, DC.
and learnability, eds. Barbara Lust, Margarita Suñer, and John Whitman, 101–133.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Meadow, Kathryn P. 1981. Deafness and child development. Berkeley, CA: University
of California Press.
Meier, Richard P. 1991. Language acquisition by deaf children. American Scientist
79:60–70.
Newport, Elissa L. and Richard P. Meier. 1985. The acquisition of American Sign Language. In The cross-linguistic study of language acquisition, ed. Dan Slobin, 881–938. Hillsdale, NJ: Lawrence Erlbaum Associates.
Newport, Elissa L. and Ted Supalla. 2000. Sign language research at the millennium. In The signs of language revisited, eds. Karen Emmorey and Harlan Lane, 103–114. Mahwah, NJ: Lawrence Erlbaum Associates.
Raffin, Michael. 1976. The acquisition of inflectional morphemes by deaf children using
Seeing Essential English. Doctoral dissertation, University of Iowa.
Ramsey, Claire. 1989. Language planning in deaf education. In The sociolinguistics of
the deaf community, ed. Ceil Lucas, 123–146. San Diego, CA: Academic Press.
Schein, Jerome D. 1984. Speaking the language of sign. New York: Doubleday.
Schlesinger, I. M. 1978. The acquisition of bimodal language. In Sign language of the
deaf: Psychological, linguistic, and social perspectives, eds. I. M. Schlesinger and
Lila Namir, 57–93. New York: Academic Press.
Slobin, Dan. 1977. Language change in childhood and in history. In Language learning
and thought, ed. John Macnamara, 185–214. New York: Academic Press.
Stedt, Joe D. and Donald F. Moores. 1990. Manual codes in English and American Sign
Language: Historical perspectives and current realities. In Manual communication,
ed. Harry Bornstein, 1–20. Washington, DC: Gallaudet University Press.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline and Carl G. Croneberg. 1965. A dictionary
of American Sign Language. Washington, DC: Gallaudet College Press.
Supalla, Samuel J. 1990. Segmentation of Manually Coded English: Problems in the
mapping of English in the visual/gestural mode. Doctoral dissertation, University
of Illinois at Urbana-Champaign.
Supalla, Samuel J. 1991. Manually Coded English: The modality question in signed
language development. In Theoretical issues in sign language research, Vol. 2:
Psychology, eds. Patricia Siple and Susan Fischer, 85–109. Chicago, IL: University
of Chicago Press.
Supalla, Samuel J., Tina Wix, and Cecile McKee. 2001. Print as a primary source
of English for deaf learners. In One mind, two languages: Bilingual language
processing, ed. Janet L. Nicol, 177–190. Malden, MA: Blackwell.
Supalla, Ted and Elissa Newport. 1978. How many seats in a chair? The derivation of
nouns and verbs in American Sign Language. In Understanding language through
sign language research, ed. Patricia Siple, 91–132. New York: Academic Press.
Supalla, Ted, and Rebecca Webb. 1995. The grammar of International Sign: A new look
at pidgin languages. In Language, gesture, and space, eds. Karen Emmorey and
Judy Reilly, 333–352. Mahwah, NJ: Lawrence Erlbaum Associates.
Suty, Karen A. and Sandy Friel-Patti. 1982. Looking beyond signed English to describe
the language of two deaf children. Sign Language Studies 35:153–166.
Swisher, M. Virginia. 1991. Conversational interaction between deaf children and their
hearing mothers: The role of visual attention. In Theoretical issues in sign lan-
guage research, Vol. 2: Psychology, eds. Patricia Siple and Susan Fischer, 111–134.
Chicago, IL: University of Chicago Press.
Wodlinger-Cohen, R. 1991. The manual representation of speech by deaf children and
their mothers, and their teachers. In Theoretical issues in sign language research,
Vol. 2, Psychology, eds. Patricia Siple and Susan Fischer, 149–169. Chicago, IL:
University of Chicago Press.
Woodward, James. 1973. Some characteristics of Pidgin Sign English. Sign Language
Studies 3:39–46.
Part II
Gesture and iconicity in sign and speech
The term “gesture” is used to denote various human actions. This is even true
among linguists and psychologists, who for the past two decades or more have
highlighted the importance of gestures of various sorts and their significant role
in language production and reception. Some writers have defined gestures as
the movements of the hands and arms that accompany speech. Others refer to
the articulatory movements of speech as vocal gestures and those of signed
languages as manual gestures. For those who work in signed languages, the
term nonmanual gesture usually refers to facial expressions, head and torso
movements, and eye gaze, all of which are vital parts of signed messages. In the
study of child language acquisition, some authors have referred to an infant’s
reaches, points, and waves as prelinguistic gesture.
In Part II we introduce two works that highlight the importance of the study
of gesture and one that addresses iconicity (a closely related topic). We also
briefly summarize some of the various ways in which gesture has been defined
and investigated over the last decade. A few pages of introductory text are not
enough to review all the issues that have arisen – especially within the last
few years – concerning gesture and iconicity and their role in language, but this
introduction is intended to give the reader an idea of the breadth and complexity
of these topics.
Some authors claim that gesture is not only an important part of language as
it is used today, but that formal language in humans began as gestural commu-
nication (Armstrong, Stokoe, and Wilcox 1995; Stokoe and Marschark 1999;
Stokoe 2000). According to Armstrong et al., gestures1 were likely used for
communication by the earliest humans living in social groups. Fur-
thermore, they argue that visible gesture can exhibit both word and syntax at
the same time. They also point out that visible gestures can be iconic, that is,
a gesture can resemble its referent in some way, and visual iconicity can work
as a bridge to help the perceiver understand the meaning of a gesture. Not only
1 Armstrong et al. (1995:38) provide the following general definition of gesture: “Gesture can
be understood as neuromuscular activity (bodily actions, whether or not communicative); as
semiotic (ranging from spontaneously communicative gestures to more conventional gestures);
and as linguistic (fully conventionalized signs and vocal articulations).”
do these authors claim that language began in the form of visible gestures, but
that visible gestures continue to play a significant role in language production.
This viewpoint is best illustrated with a quote from Armstrong et al. (1995:42):
“For us, the answer to the question, ‘If language began as gesture, why did it
not stay that way?’ is that it did.”
One of the earliest systematic records of gesture is a description of its use in
a nineteenth-century Italian city. In an English translation of a book published
in Naples in 1832, the work of Andrea de Jorio – one of the first authors
to write about gesture from an anthropological perspective – is revived. De
Jorio compared gestures used in everyday life in Naples in the early nineteenth
century to those gestures that are depicted on works of art from centuries past –
with particular attention to the complexity of some gestures. In one example,
de Jorio describes a gesture used to protect against evil spirits. The same gesture,
according to de Jorio, can also be directed toward a person, and it is referred
to as the “evil eye” in this instance. Interestingly, the form of this nineteenth-
century gesture greatly resembles the current American Sign Language (ASL)
sign glossed as MOCK. Given the historical connection between ASL and
French Sign Language (Langue de Signes Française or LSF), perhaps there is
also a more remote connection between the Neapolitan gesture and the ASL
sign. It appears that de Jorio (2000) might allow us to explore some possible
antecedents of current signed language lexicons.
Along those lines, some authors have analyzed the manner in which contem-
porary gestures can evolve into the signs of a signed language. Morford and
Kegl (2000) describe how, over the last two decades in Nicaragua, conventional
gestures have been adopted by deaf and hearing individuals for use in homesign
communication, and how such gestures have then become lexicalized through
interaction among homesigners. Some of these forms have then gone on to
become signs of Idioma de Señas de Nicaragua (Nicaraguan Sign Language)
as evidenced by the fact that they now accept the bound morphology of that
language.
Not only has the phylogenetic importance of gestures been asserted in the
literature, but their role in child development has been the focus of much re-
search (for an overview, see Iverson and Goldin-Meadow 1998). According to
Goldin-Meadow and Morford (1994), both hearing and deaf infants use single
gestures and two-gesture strings as they develop. Gesture for these authors is
defined as an act that must be directed to another individual (i.e. it must be com-
municative) and an act that must not be a direct manipulation of some relevant
person or object (i.e. it must not serve any function other than communication).
In addition to the importance of gesture for the phylogenetic and ontogenetic
development of language, some writers claim that gesture is an integral com-
ponent of spoken language in everyday settings (McNeill 1992; Iverson and
Goldin-Meadow 1998). Gesture (or gesticulation, as McNeill refers to it) used
in this sense refers to the movements of the hands and arms that accompany
speech. According to McNeill (1992:1), the analysis of gesture helps us to
understand the processes of language:
Just as binocular vision brings out a new dimension of seeing, gesture reveals a new
dimension of the mind. This dimension is the imagery of language which has lain hidden.
We discover that language is not just a linear progression of segments, sounds, and words,
but is also instantaneous, nonlinear, holistic, and imagistic. The imagistic component
co-exists with the linear-segmented speech stream and the coordination of the two gives
us fresh insights into the processes of speech and thought.
Gesticulation, however, differs from the use of gesture without speech. Singleton
et al. (1995:308) claim that gesture without speech (or nonverbal gesture) ex-
hibits language-like properties and can represent meaning on its own, whereas
gesticulation is much more dependent on the accompanying speech for repre-
senting meaning and is not “an independent representational form.”
One of the most compelling reasons to study gesture is that it is a robust
phenomenon that occurs in human communication throughout the world, among
different languages and cultures (McNeill 1992; Iverson and Goldin-Meadow
1998). Gesture is even found in places where one would not expect to find
it. For example, gesture is exhibited by congenitally blind children as they
speak, despite the lack of visual input from language users in their environment
(Iverson and Goldin-Meadow 1997; Iverson et al. 2000), and gesture can be
used in cases where speech is not possible (Iverson and Goldin-Meadow 1998).
While the visible gesture that has been described thus far can be distinguished
from speech because of the different modalities (gestural vs. oral) in which
production occurs, the same type of gesturing in signed languages is far more
difficult to identify. If, for the sake of argument, we posit that gestures are
paralinguistic elements that alternate with formal linguistic units (morphemes,
words), how does one go about defining what is gestural and what is linguistic
(or morphemic) in signed languages where both types of communication involve
the same articulators? Early in the study of ASL, Klima and Bellugi (1979:15)
described how ASL comprises not only signs and strings of signs with certain
formational properties, but also what they termed “extrasystemic gesturing.”
On their view, ASL – and presumably other signed languages as well – utilizes
“a wide range of gestural devices, from conventionalized signs to mimetic
elaboration on those signs, to mimetic depiction, to free pantomime” (p.13).
Not only are all these devices used in the production of ASL, but signers also
go back and forth between them and lexical signs regularly, at times with no
obvious signal that a switch is being made. A question that has long animated
the field is the extent to which these devices are properly viewed as linguistic
or as gestural (for contrasting views on this question, see Supalla 1982; Supalla
1986; and Emmorey, in press).
There are, of course, ways that sign linguists have proposed to differenti-
ate gesture from linguistic elements of a signed language. Klima and Bellugi
(1979:18–19) claimed that pantomime (presumably a type of gesture) differs
from ASL signs in various respects:
• Each pantomime includes a number of thematic images whereas regular ASL
signs have only one.
• Pantomimes are much longer and more varied in duration than ASL signs.
• Sign formation requires brief temporal holding of the sign handshape before
initiating movement of the sign, whereas pantomime production does not
require these holds and movement is much more continuous.
• Pantomime production is temporally longer than a semantically equivalent
sign production.
• Handshapes are much freer in pantomime production than in sign production.
• Pantomime often involves hand and arm movements that are not seen
(allowed) in sign production.
• Pantomime includes head and body movement while only the hands move in
sign production.
• The role of eye gaze seems to differ in pantomime production as opposed to
sign production.
In addition to manual gesturing with the hands and arms, a signer or speaker
can gesture nonmanually, with facial expressions, head movement, and/or body
postures (Emmorey 1999). It has been suggested that a signer can produce a
linguistic sign (or part of a linguistic sign) with one articulator and a gesture with
another articulator (Liddell and Metzger 1998; Emmorey 1999). This is possible
in signed language, of course, because manual and nonmanual articulations can
take place simultaneously.
In this volume, Okrent (Chapter 7) discusses gesture in both spoken and
signed languages. She suggests that gesture and language can be produced
simultaneously in both modalities. Okrent argues that we need to re-analyze
what gesture means in relationship to language and to re-evaluate where we are
allowing ourselves to find gesture. A critical component of analyzing gesture
is the classification of different types; to that end Okrent describes the kinds of
gestures that signers and speakers regularly produce. She explains to the reader
that some gestures are used often by many speakers/signers and denote specific
meanings; these gestures are “emblems.”2 Other gestures co-occur with speech
and are called speech-synchronized gestures (see McNeill 1992); a specific class
of those is “iconics.” Okrent then tackles the meaning of the term “morpheme,”
and she explains how classification of a form as morphemic or not is often the
2 Emblems, according to McNeill (1992), have also been described by Efron 1941; Ekman and
Friesen 1969; Morris et al. 1979; Kendon 1981.
referred to iconic signs as those lexical items whose form resembles some
aspect of what they denote. As an example, onomatopoetic words in spoken
languages such as buzz and ping-pong are iconic, but such words tend to be few
in spoken languages. That, however, is not the case for signed languages. In
the signed languages studied thus far, large percentages of signs are related to
visual characteristics of their referents. Of course, these correspondences do not
necessarily determine the exact form of a sign. For example, Klima and Bellugi
(1979) described the sign TREE as it is produced in ASL, Danish Sign Language,
and Chinese Sign Language. Each of the three signs is related, in some way,
to the visual characteristics of a tree. Yet, the three signs differ substantially
from each other, and those differences can be described in terms of differences
in handshape, place of articulation, and movement. It is important, however,
to note that iconicity is not present in all signs, especially those that refer to
abstract concepts that are not identifiable with concrete objects.
In this volume, Guerra Currie, Meier, and Walters (Chapter 9) suggest that
iconicity is one of the factors accounting for the relatively high degree of judged
similarity between signed language lexicons. Another related factor is the
incorporation into signed languages of gestures that may be shared by the
larger ambient hearing cultures that surround them.3 In order to
examine the degree of similarity between several signed language vocabular-
ies, Guerra Currie et al. analyze lexical data from four different languages:
Spanish Sign Language (LSE), Mexican Sign Language (LSM), French Sign
Language (LSF), and Japanese Sign Language (Nihon Syuwa or NS). After
conducting pair-wise comparisons of samples drawn from the lexicons of these
four languages, Guerra Currie et al. suggest that signed languages exhibit higher
degrees of lexical similarity to each other than spoken languages do, likely as
a result of the relatively high degree of iconicity present in signed languages.
It is not surprising that this claim is made for those signed languages that have
historical ties, but it is interesting that it also applies to comparisons of un-
related signed languages between which no known contact has occurred and
which are embedded in hearing cultures that are very different (e.g. Mexican
Sign Language compared with Japanese Sign Language). Guerra Currie et al.
suggest, as have other writers (e.g. Woll 1983), that there likely exists a base
level of similarity between the lexicons of all signed languages regardless of
any historical ties that they may or may not share.
david quinto-pozos
3 A similar claim is made by Janzen and Shaffer in this volume, but the questions that they pose
differ from those that Guerra Currie et al. pose. Janzen and Shaffer are concerned with the manner
in which nonlinguistic gestures become grammatical elements of a language, while Guerra Currie
et al. are interested in the manner in which the possible gestural origins of signs may influence
the similarity of signed language vocabularies regardless of where the languages originate.
References
Armstrong, David F., William C. Stokoe, and Sherman E. Wilcox. 1995. Gesture and
the nature of language. Cambridge: Cambridge University Press.
De Jorio, Andrea. 2000. Gesture in Naples and gesture in classical antiquity.
Bloomington, IN: Indiana University Press.
Efron, David. 1941. Gesture and environment. Morningside Heights, NY: King’s Crown
Press.
Ekman, Paul and Wallace V. Friesen. 1969. The repertoire of nonverbal behavioral
categories: Origins, usage, and coding. Semiotica 1:49–98.
Emmorey, Karen. 1999. Do signers gesture? In Gesture, speech, and sign, eds. Lynn
Messing and Ruth Campbell, 133–159. New York: Oxford University Press.
Emmorey, Karen. In press. Perspectives on classifier constructions. Mahwah, NJ:
Lawrence Erlbaum Associates.
Goldin-Meadow, Susan and Jill Morford. 1994. Gesture in early language. In From
gesture to language in hearing and deaf children, eds. Virginia Volterra and Carol
J. Erting. Washington, DC: Gallaudet University Press.
Iverson, Jana M. and Susan Goldin-Meadow. 1997. What’s communication got to do
with it? Gesture in children blind from birth. Developmental Psychology 33:453–467.
Iverson, Jana M. and Susan Goldin-Meadow. 1998. Editors’ notes. In The nature and
functions of gesture in children’s communication, eds. Jana M. Iverson and Susan
Goldin-Meadow, 1–7. San Francisco, CA: Jossey-Bass.
Iverson, Jana M., Heather L. Tencer, Jill Lany, and Susan Goldin-Meadow. 2000. The
relation between gesture and speech in congenitally blind and sighted language-
learners. Journal of Nonverbal Behavior 24:105–130.
Kendon, Adam. 1981. Geography of gesture. Semiotica 37:129–163.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 2000. Indicating verbs and pronouns: Pointing away from agreement. In
The signs of language revisited, eds. Karen Emmorey and Harlan Lane, 303–320.
Mahwah, NJ: Lawrence Erlbaum Associates.
Liddell, Scott K. and Melanie Metzger. 1998. Gesture in sign language discourse. Journal
of Pragmatics 30:657–697.
Marschark, Marc. 1994. Gesture and sign. Applied Psycholinguistics 15:209–236.
McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Chicago,
IL: University of Chicago Press.
Morford, Jill P. and Judy A. Kegl. 2000. Gestural precursors to linguistic constructs: How
input shapes the form of language. In Language and gesture, ed. David McNeill,
358–387. Cambridge: Cambridge University Press.
Morris, Desmond, Peter Collett, P. Marsh, and M. O’Shaughnessy. 1979. Gestures: Their
origins and distribution. New York: Stein and Day.
Singleton, Jenny L., Susan Goldin-Meadow, and David McNeill. 1995. The cataclysmic
break between gesticulation and sign: Evidence against a unified continuum of
gestural communication. In Language, gesture, and space, eds. Karen Emmorey
and Judy Reilly, 287–311. Hillsdale, NJ: Lawrence Erlbaum Associates.
Stokoe, William C. 2000. Gesture to sign (language). In Language and gesture, ed.
David McNeill, 388–399. Cambridge: Cambridge University Press.
Stokoe, William C. and Marc Marschark. 1999. Signs, gestures, and signs. In Gesture,
speech, and sign, eds. Lynn Messing and Ruth Campbell, 161–181. New York:
Oxford University Press.
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American
Sign Language. Unpublished doctoral dissertation, University of California,
San Diego, CA.
Supalla, Ted. 1986. The classifier system in American Sign Language. In Noun classes
and categorization: Typological studies in language, Vol. 7, ed. Colette Craig,
181–214. Philadelphia, PA: John Benjamins.
Woll, Bencie. 1983. The comparative study of different sign languages: Preliminary
analyses. In Recent research on European sign languages, eds. Filip Loncke, Penny
Boyes-Braem, and Yvan Lebrun, 79–91. Lisse: Swets and Zeitlinger.
7 A modality-free notion of gesture and how it can
help us with the morpheme vs. gesture question
in sign language linguistics (Or at least give us
some criteria to work with)
Arika Okrent
are, in situations where the participants are present, determined by the position
of real people in real space. If the man and the woman being discussed were
present in the above example, the verb would move from the real location of the
man to the real location of the woman. The form of the agreement morpheme can
only be described as “the place where the thing I’m talking about is located.”1
The location of a person in space is a fact about the world independent of
anybody’s language and should not be considered linguistic.
I believe that a large part of the conflict between the gestural and grammatical
accounts results from a misunderstanding of what gesture means in relationship
to language and where we are allowed to find gesture. I present McNeill’s (1992)
notion of gesture as a modality-free notion and show how speakers not only use
speech and manual gesture at the same time, but also speech and spoken gesture
at the same time. For speakers, gestural and linguistic codes can be combined
in the same vocal–auditory channel. However, because they do take place in
the same channel, there are restrictions on how that combination is executed. I
argue that signers can also combine a gestural and linguistic code in the same
manual–visual channel, and that the restrictions on how pointing is carried out
need not mean that the location features are morphemic, but rather that there
are restrictions on how the two codes combine in one channel.
with such a meaning, and although it can still have the meaning of a dog’s bark
in a specific context of speaking described above, it does not have that meaning
by virtue of convention. A conventional sign does not need as much contextual
support to have a meaning.
Part of what is needed for a form to be conventionalized is the ability of
that form to be used to mean something stable over a wide range of specific
speaking events. In Langacker’s (1987) terms, it becomes “entrenched” and
“decontextualized.” It becomes entrenched as it is consistently paired with a
meaning and, because of that, it comes to be decontextualized, i.e. it comes
to have a meaning and a form that abstract away from the details of any one
occasion of its use. It is important to note that a symbolic unit can be more
or less entrenched, and more or less decontextualized, but there is no criterial
border that separates the conventionalized from the unconventionalized.
Freyd (1983) suggests that linguistic forms are categorical, as opposed to gra-
dient, as a result of their being conventional. Due to “shareability constraints,”
people stabilize forms as a group, creating categories in order to minimize infor-
mation loss. A form structured around stable categories can be pronounced in
many ways and in many contexts but still retain a particular meaning. A change
in form that does not put that form in a different category does not result in a
change in meaning. But if a form is gradient, a change in that form leads to a
concomitant change of meaning, and the nature of that change is different in
different contexts.
A form can be said to be “conventionalized” if it has a stable form and mean-
ing, which have come into being through consistent pairing and regular use. A
morpheme is a conventionalized form–meaning pairing and, because of its con-
ventionality, has categorical structure. Also, because of its conventionality, it is
decontextualized, so it is listable independently of the speech event. It is worth
noting here that while the ideal linguistic unit is a conventionalized unit, it is not
necessarily the case that every conventionalized unit is linguistic. Emblematic
gestures (see Section 7.3.2.1) like ‘thumbs up’ are quite conventionalized, but
we would not want to say they are English words because their form is so vastly
different from the majority of English words. Given that a conventionalized unit
is not necessarily a linguistic unit, is a linguistic unit necessarily conventional-
ized? A linguistic unit necessarily involves conventions. The question remains
as to what the nature of those conventions must be. I explore this question
further in Section 7.5 when I discuss the issues that motivate the criteria for
deciding how to draw the line between morpheme and gesture.
Figure 7.1 Video stills of speaker telling the story of a cartoon he has just
watched
side of the seesaw, propelling the anvil upward. The anvil then comes down on
his head, flattening it, and Tweety escapes.
The gestures pictured in Figure 7.1 should now seem quite transparent in
meaning. They are meaningful not because they are conventionalized, but be-
cause you know the imagery they are based on, and so will see that imagery in
the gestures. The representational significance of the gestures (a–l) pictured in
Figure 7.1 is given in (1).
(1) a. introduction of the seesaw
b. introduction of the weight
c–d. Sylvester throws the weight
e. Sylvester goes up
8 I am neutral on whether these emblem-type signs should be considered proper lexical signs
or gestures. They have different distributional properties from lexical signs (Emmorey 1999),
but it may be simply that we tend not to view forms with purely pragmatic meaning as fully
linguistic, for example English mmm-hmmm.
LOOK-AROUND SMILE
Figure 7.3 Illustration of (3)
form. In contrast, there is no convention that determines how the gesture for
‘upness’ should be pronounced. It may, as a gesture, come out as the raising
of a fist, the lift of a shoulder, or the raising of the chin. All such gestures do
share one feature of form; they move upward, and one might be tempted to say
that that aspect of upward movement alone is the conventionalized form for
representing upward movement. Such a form–meaning pairing is tautological:
‘up’ means ‘up,’ and need not appeal to convention for its existence. Addi-
tionally, the spoken phrase “He flew up” can be uttered in sync with a gesture
which moves downward without contradiction. For example, the speaker could
make a gesture for Sylvester’s arms flying down to his sides due to the velocity
of his own upward movement. The only motivation for the form of a speech-
synchronized gesture is the imagery in the speaker’s thought at the moment of
speaking.
That being said, there are some conventions involved in the production of
gesture. There may be cultural conventions that determine the amount of ges-
turing used or that prevent some taboo actions (such as pointing directly at the
addressee) from occurring. There are also cultural conventions that determine
what kind of imagery we access for abstract concepts. Webb (1996) has found
that there are recurring form–meaning pairings in gesture. For example, an “F”
handshape (the thumb and index fingers pinched together with the other fingers
spread) or an “O” handshape (all the fingers pinched together) is regularly used
to represent “preciseness” in the discourses she has analyzed. According to
McNeill (personal communication), it is not the conventionality of the form–
meaning pairing that gives rise to such regularity, but the conventionality of
the imagery in the metaphors we use to understand abstract concepts (in the
sense of Lakoff and Johnson 1980; Lakoff 1987). What is conventional is that
we conceive of preciseness as something small and to be gingerly handled with
the fingertips. The handshape used to represent this imagery then comes to look
alike across different people who share that imagery. The disagreement here is
not one of whether there are conventions involved in the use of gestures. It is
rather one of where the site of conventionalization lies. Is it the forms them-
selves that are conventionalized, as Webb claims, or the conceptual metaphors
that give rise to those forms, as McNeill claims? The issue of “site of con-
ventionalization” is also important for the question of whether the pointing in
agreement verbs in ASL is linguistic or gestural. I give more attention to this
issue in Section 7.5.2.
In any case, what is important here is that the gestures are not formed out of
discrete, conventionalized components in the same way that spoken utterances
are. And even if there are conventionalized aspects to gesturing, they are far
less conventionalized than the elements of speech.
• The gesture maps meaning onto form in a gradient, as opposed to a
categorical, way.
a metaphor that associates high vocal frequency with high spatial location. The
low pitch on down expresses the imagery of lowness through the flip side of that
metaphor. The tones are not phonemic features of the words. The tones express
imagery through a directly created form which is simultaneously articulated
with fixed lexical items.
In (6), repetition is exploited for effect.
(6) Work, work, work, rest.
The words in this example are linguistic, listable units. The construction in
which they occur cannot be given a syntactic–semantic analysis. The quantity
of words reflects the imagery of the respective quantity of work and rest. The
ordering of the words reflects the ordering of the actions. A lot of work, followed
by a little rest. The words are chosen from the lexicon. The construction in which
they occur is created in the moment.
The examples of spoken gesture show that speakers can express the linguistic
and the gestural simultaneously, on the same articulators, in the same modality.
Liddell does not propose that agreement verbs are gestures. He rather pro-
poses that they are linguistic units (features of handshape, orientation, etc.) si-
multaneously articulated with pointing gestures (the location features, or “where
the verb points”). Spoken linguistic units can be articulated simultaneously with
spoken gestures. That signs could be articulated simultaneously with manual
gestures is at least a possibility.
candidates for temporal extension. All are continuants and could be sustained
for longer durations. However, the best way to achieve the combination is to
extend the vowel.
(7) a. *llllllong time
b. *longngngng time
This example is intended to show that when speech and gesture must combine
in the same channel, there are restrictions on the way they may combine. There
are linguistic elements that are better suited to carry the gestures than others.
In this case, the vowel is the best bearer of this gesture. It is not always the case
that the gestural manipulation must combine with the vowel. But there seem
to be restrictions determining which kinds of segments are best combined with
certain types of gestures.
7.6 Conclusions
It is unfounded to reject the idea that agreement is gestural simply because
the verbs being produced are linguistic units. People can vocally gesture while
saying spoken lexical verbs. Signers can manually gesture while signing lexical
verbs. However, the combination of gesture and speech in one channel puts
restrictions on the gesture because important linguistic categorical information,
like lexical tone, must be preserved. The three objections given in Section 7.2
above make reference to restrictions on the way in which pointing is carried out,
and these restrictions are language particular. The fact that there are language-
particular restrictions on the way the pointing is carried out does not in itself
constitute a devastating argument against the gesture proposal.
11 There do not seem to be similar restrictions on the combination of manual gesture with speech.
Although manual gesture is tightly synchronized with speech – and conveys meaning which
is conceptually congruent with speech – the particular forms that the manual gestures take do
not appear to be constrained in any way by the specifications on form that the speech must
follow. Combining two semiotic codes in one modality may raise issues for language–gesture
integration that do not arise for situations where the linguistic and the gestural are carried by
separate modalities.
The title of this chapter promises criteria for deciding what is gestural and
what is morphemic in ASL linguistics. There is no checklist of necessary and
sufficient conditions for membership in either category. There are, however,
three continuous dimensions along which researchers can draw a line between
gesture and language.12 I repeat them here:
• The first is “degree of conventionalization.” How conventionalized must
something be in order to be considered linguistic?
• The second dimension is “site of conventionalization.” What kinds of
conventions are linguistic conventions?
• The third dimension is “restriction on combination.” What kinds of conditions
on the combination of semiotic codes are linguistic conditions?
These are not questions I have answers for, but they are the questions that
should be addressed in the morpheme vs. gesture controversy in sign language
linguistics.
12 I remain agnostic with respect to whether drawing such a line is ultimately necessary, although
I believe that the effort expended in trying to draw that line is very useful for gaining a greater
understanding of the nature of communicative behavior.
Acknowledgments
This research was partially funded by grants to David McNeill from the Spencer
Foundation and the National Institute on Deafness and Other Communication
Disorders. Some equipment and materials were supplied by the Language Labs
and Archives at the University of Chicago. I am grateful to David McNeill,
John Goldsmith, Derrick Higgins, and my “lunch group” Susan Duncan, Frank
Bechter, and Barbara Luka for their wise advice and comments. Any inaccura-
cies are, of course, my own.
References
Aarons, Debra, Benjamin Bahan, Judy Kegl, and Carol Neidle. 1992. Clausal structure
and a tier for grammatical marking in American Sign Language. Nordic Journal of
Linguistics 15:103–142.
Arnheim, Rudolf. 1969. Visual thinking. Berkeley, CA: University of California Press.
Bos, Heleen. 1996. Serial verb constructions in Sign Language of the Netherlands.
Paper presented at the 5th International Conference on Theoretical Issues in Sign
Language Research, Montreal, September.
Duncan, Susan. 1998. Evidence from gesture for a conceptual nexus of action and entity.
Paper presented at the Annual Conference on Conceptual Structure, Discourse, and
Language, Emory University, October.
Emmorey, Karen. 1999. Do signers gesture? In Gesture, speech, and sign, eds. Lynn
Messing and Ruth Campbell, 133–159. New York: Oxford University Press.
Fauconnier, Gilles. 1994. Mental spaces: Aspects of meaning construction in natural
language. Cambridge: Cambridge University Press.
Fischer, Susan and Bonnie Gough. 1978. Verbs in American Sign Language. Sign
Language Studies 18:17–48.
Freyd, Jennifer. 1983. Shareability: The social psychology of epistemology. Cognitive
Science 7:191–210.
Kiparsky, Paul. 1982. From cyclic phonology to lexical phonology. In The structure of
phonological representations, Part I, eds. Harry van der Hulst and Norval Smith,
131–177. Dordrecht: Foris.
Kita, Sotaro. 2000. How representational gestures help speaking. In Language and
gesture, ed. David McNeill, 162–185. Cambridge: Cambridge University Press.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Ladd, Robert. 1996. Intonational phonology. Cambridge: Cambridge University Press.
Lakoff, George. 1987. Women, fire, and dangerous things: What categories reveal about
the mind. Chicago, IL: University of Chicago Press.
Lakoff, George and Mark Johnson. 1980. Metaphors we live by. Chicago, IL: University
of Chicago Press.
Langacker, Ronald. 1987. Foundations of Cognitive Grammar, Vol. 1: Theoretical
prerequisites. Stanford, CA: Stanford University Press.
Langacker, Ronald. 1991. Cognitive Grammar. In Linguistic theory and grammatical
description, eds. Flip Droste and John Joseph, 275–306. Amsterdam: John Benjamins.
Liddell, Scott. 1995. Real, surrogate, and token space: Grammatical consequences in
ASL. In Language, gesture, and space, eds. Karen Emmorey and Judy Reilly,
19–41. Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott. 1996. Spatial representation in discourse: Comparing spoken and signed
language. Lingua 98:145–167.
Liddell, Scott and Robert Johnson. 1989. American Sign Language: The phonological
base. Sign Language Studies 64:195–277.
Liddell, Scott and Melanie Metzger. 1998. Gesture in sign language discourse. Journal
of Pragmatics 30:657–697.
Lillo-Martin, Diane and Edward Klima. 1990. Pointing out differences: ASL pronouns in
syntactic theory. In Theoretical issues in sign language research, Vol. 1, eds. Susan
Fischer and Patricia Siple, 191–210. Chicago, IL: University of Chicago Press.
Marschark, Marc. 1994. Gesture and sign. Applied Psycholinguistics 15:209–236.
McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Chicago,
IL: University of Chicago Press.
McNeill, David. 1997. Growth points cross-linguistically. In Language and concep-
tualization, eds. Jan Nuyts and Eric Pederson, 190–212. Cambridge: Cambridge
University Press.
McNeill, David, Justine Cassell, and Elena Levy. 1993. Abstract deixis. Semiotica
95:5–19.
McNeill, David, Justine Cassell, and Karl-Erik McCullough. 1994. Communicative
effects of speech-mismatched gestures. Research on Language and Social Interaction
27:223–237.
McNeill, David and Laura Pedelty. 1995. Right brain and gesture. In Language, gesture,
and space, eds. Karen Emmorey and Judy Reilly. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Meier, Richard. 1987. Elicited imitation of verb agreement in American Sign Language:
Iconically or morphologically determined? Journal of Memory and Language
26:362–376.
Metzger, Melanie. 1995. Constructed dialogue and constructed action in American Sign
Language. In Proceedings of the Fourth National Symposium on Sign Language
Research and Teaching, ed. Carol Padden. Silver Spring, MD: National Association
of the Deaf.
Newport, Elissa and Richard Meier. 1985. The acquisition of American Sign Language.
In The crosslinguistic study of language acquisition, Vol. 1: The data, ed. Daniel
Slobin. Hillsdale, NJ: Lawrence Erlbaum Associates.
Padden, Carol. 1988. Interaction of morphology and syntax in American Sign Language.
New York: Garland.
Petitto, Laura. 1987. On the autonomy of language and gesture: Evidence from the
acquisition of personal pronouns in American Sign Language. Cognition 27:1–52.
Poizner, Howard, Edward Klima, and Ursula Bellugi. 1987. What the hands reveal about
the brain. Cambridge, MA: MIT Press.
Taub, Sarah. 1998. Language in the body: Iconicity and metaphor in American Sign
Language. Doctoral dissertation, University of California, Berkeley.
Webb, Rebecca. 1996. Linguistic features of metaphoric gestures. In Proceedings of
the Workshop on the Integration of Gesture in Language and Speech, ed. Lynn
Messing, 79–93. Newark, DE: University of Delaware.
Woodbury, Anthony. 1987. Meaningful phonological processes: A consideration of
Central Alaskan Yupik Eskimo prosody. Language 63:685–740.
8 Gesture as the substrate in the process of ASL
grammaticization
Terry Janzen and Barbara Shaffer
8.1 Introduction
Grammaticization is the diachronic process by which:
• lexical morphemes in a language, such as nouns and verbs, develop over time
into grammatical morphemes; or
• morphemes less grammatical in nature, such as auxiliaries, develop into ones
more grammatical, such as tense or aspect markers (Bybee et al. 1994).
Thus any given grammatical item, even viewed synchronically, is understood
to have an evolutionary history. The development of grammar may be traced
along grammaticization pathways, with vestiges of each stage often remain-
ing in the current grammar (Hopper 1991; Bybee et al. 1994), so that even
synchronically, lexical and grammatical items that share similar form can be
shown to be related. Grammaticization is thought to be a universal process; this
is how grammar develops. Bybee et al. claim that this process is regular and
has predictable evidence, found in the two broad categories of phonology and
semantics. Semantic generalization occurs as the more lexical morpheme loses
some of its specificity and, usually along with a particular construction it is
found within, can be more broadly applied. Certain components of the meaning
are lost when this generalization takes place.1 Regarding phonological change,
grammaticizing elements and the constructions they occur in tend to undergo
phonological reduction at a faster rate than lexical elements not involved in
grammaticization.
The ultimate source of grammaticized forms in languages is understood to
be lexical. Most commonly, the source categories are nouns and verbs. Thus,
the origins of numerous grammatical elements, at least for spoken languages,
are former lexical items. Grammaticization is a gradual process that differs
from other processes of semantic change, wherein a lexical item takes on a new
meaning but remains within the same lexical category, and from word-formation
processes by which new lexical items are created through common phenomena
such as compounding. Grammaticization concerns the evolution of grammatical
elements.
1 Givón (1975) introduced the term “semantic bleaching” for the loss of meaning. The exact
nature of meaning change in grammaticization is debated by researchers, however. Sweetser
(1988) suggests that the term “generalization” is inadequate, because while certain meanings
are lost, new meanings are added. Thus, Sweetser prefers simply to refer to this phenomenon
as “semantic change.”
Investigations of grammaticization in American Sign Language (ASL) are
still scarce, although Wilcox and Wilcox (1995), Janzen (1995; 1998; 1999),
Wilbur (1999), and Shaffer (1999; 2000) have begun this work. It seems
clear that similar diachronic processes do exist for signed languages, but with
one essential difference, which results from signed languages occurring within a
visual medium where gestures of the hands and face act as the raw material
from which formalized linguistic signs emerge: gesture itself
may be the substrate for the development of new grammatical material.
A crucial link between gesture and more formalized linguistic units has
been proposed by Armstrong et al. (1995), both for lexicalization generally and
for the gestural roots of ASL morphosyntax, which they demonstrate with
two-handed signs that are themselves “sentences.” In these signs one hand
acts as an agent and the other as a patient, while a gestural movement sig-
nals a pragmatic (and ultimately syntactic) relation between the two. Inter-
estingly, the recent suggestion that mirror neurons may provide evidence of
a neurophysical link between certain gestural (and observed) actions and lan-
guage representation (Rizzolatti and Arbib 1998) strongly supports the idea that
signed languages are not oddities, but rather that they are immensely elaborated
systems in keeping with gestural origins of language altogether (see Hewes
1973).2
The link between gestures and signed language has also been discussed
elsewhere. For example, researchers have addressed how formalized signs differ
from gesture (Petitto 1983; 1990), how gestures rapidly conventionalize into
linguistic-like units in an experimental environment (Singleton et al. 1995),
and how gestures develop into fully conventionalized signs in an emerging
signed language (Senghas et al. 2000). Almost invariably – with the exception
of Armstrong et al. (1995) – these studies involve lexicalization rather than the
development of grammatical features.
The current proposal, that prelinguistic hand and facial gestures are the sub-
strate of signed language grammatical elements, allows for the possibility that
when exploring grammaticization pathways, we may look not only to the ex-
pected lexical material as the sources of newer grams,3 but to even earlier
2 Mirror neurons are identified as particular neurons situated in left hemispheric regions of the
brain associated with Broca’s area, specifically the superior temporal sulcus, the inferior parietal
lobule, and the inferior frontal gyrus. These neurons are activated both when the organism grasps
an object with the hand, and when someone else is observed to grasp the object (but not when
the object is observed on its own). Rizzolatti and Arbib (1998) posit that this associated grasping
action and grasping recognition has contributed to language development in humans.
3 “Gram” is the term Bybee et al. (1994) choose to refer to individual items in grammatical
categories.
gestures as the sources of the lexical items that eventually grammaticize. This
is the case for the linguistic category of modality,4 which we illustrate by
proposing that ASL modals such as FUTURE, CAN, and
MUST take as their ultimate sources several generalized prelinguistic gestures.
Topic marking, which we propose developed from an earlier yes–no question
construction, also has a generalized gesture as an even earlier source. In the
case of the grammaticized modals, the resulting forms can be shown to have
passed through a lexical stage as might be expected. The development from
gestural substrate to grammar for the topic marker, however, never passes
through a lexical stage.
We conclude that for the ASL grammaticization pathways explored in this
study, gesture plays the important role of providing the substrate from which
grammar ultimately emerges. The evidence presented here suggests that the
precursors of these modern-day ASL grammatical devices are gestural in nature,
whether or not a lexical stage is intervening. Thus, the study of ASL provides
new perspectives on grammaticization, in exploring both the sources of grams,
and the role (or lack of role) of lexicon in the developing gram.
In Section 8.2 we discuss data gathered at the start of the twentieth century.
All of the diachronic discourse examples in Section 8.2 were taken from The
Preservation of American Sign Language, a compilation of the early attempts
to document the language on film, made available recently on videotape. All
of the films in this compilation were made in or around 1913. Each shows an
older signer demonstrating a fairly formal register of ASL as it was signed
in 1913. Because the films are representative of monologues of only a fairly
formal register, care must be taken when drawing conclusions regarding what
ASL did not exhibit in 1913. In other words, while we believe it is possible
to use these films to show examples of what ASL was in 1913, they can-
not be used to show what ASL was not. Along with the discourse examples
discussed above, we analyze features of signs produced in isolation. The iso-
lated signs are listed in at least one of several French Sign Language (Langue
de Signes Française or LSF) dictionaries from the mid-1800s, or from ASL
dictionaries from the early 1900s. For those signs not taken from actual dis-
course contexts we are left to rely on the semantic descriptions and glosses
provided by their authors. As with most dictionary images, certain features of
the movement are not retrievable. We were also able to corroborate these im-
ages with the 1913 films in order to draw conclusions regarding phonological
changes.5
4 Throughout this chapter we use the term “modality” to mean the expression of necessity and
possibility; thus, the use of “modals” and such items common to the grammar systems of
language, as opposed to a common meaning of “modality” in signed language research meant
to address differences between signed and spoken channels of production and reception.
5 All examples extracted from the 1913 films of ASL were translated by the authors.
6 Similar relationships are said to exist between other signed languages as well. For a discussion
of the historical relationships among various European signed languages, see Eriksson (1998).
8.2.1 FUTURE
We believe that ASL is among those languages whose main future gram devel-
oped from a physical ‘movement toward a goal’ source. Our claim is that the
future gram in ASL, glossed here as FUTURE, developed from an older lexical
sign with the meaning of ‘to go’ or ‘to leave.’7 Evidence of this sign can be found
as far back as the 1850s in France where it was glossed PARTIR (Brouland 1855;
Pèlissier 1856). The OLSF sign PARTIR, shown in Figure 8.1a, was used as a
full verb with the meaning ‘to leave.’ Note that the
sign is a two-handed sign articulated just above waist level, with the dominant
hand moving up to make contact with the nondominant palm. Old ASL (OASL)
also shows evidence of this sign, but with one difference. The 19138 films have
examples of the ASL sign GO, similar to the OLSF sign PARTIR; there are
also instances of GO being signed with only one hand. Modern Italian Sign
Language (Lingua Italiana dei Segni or LIS) also has this form (Paul Dudis,
personal communication).9
E.A. Fay in 1913 signs the following:
(3) TWO, THREE DAY PREVIOUS E.M. GALLAUDET GO TO
TOWN PHILADELPHIA10
‘Two or three days before, (E.M.) Gallaudet had gone to Philadelphia.’
7 The gloss FUTURE was chosen because it is the only sense shared among the various discourse
uses of the sign. WILL, for example, limits the meaning and suggests auxiliary status.
8 All references to the 1913 data indicate filmed narratives, available currently on videotape in
The Preservation of American Sign Language, 1997, © Sign Media Inc.
9 For a discussion regarding the hypothesized relationship between ASL/LSF and other signed
languages such as LIS, see Eriksson (1998).
10 ASL signs are represented by upper case glosses. Words separated by dashes indicate single
signs (e.g. TAKE-UP); PRO.n = pronouns (1s, 2s, etc.); POSS.n = possessive pronouns; letters
separated by hyphens are fingerspelled words (e.g. P-R-I-C-E-L-E-S-S); plus signs indicate
repeated movement (e.g. MORE+++); top = topic marking; y/n-q = yes–no question marking;
CL = classifier; form-specific notes are given below glosses for clarification of forms (e.g.
“both hands” below CL:C(globe)).
Figure 8.1a 1855 LSF PARTIR (‘to leave’); 8.1b 1855 LSF FUTUR (‘future’)
(Brouland 1855; reproduced with permission); 8.1c 1913 ASL FUTURE
(McGregor, in 1997 Sign Media Inc.; reproduced with permission); 8.1d Modern
ASL FUTURE (Humphries et al. 1980; reproduced with permission)
In this context the sign GO very clearly indicates physical movement. The
speaker is making a reference to a past event and states that Gallaudet had gone
to Philadelphia. What is striking in this example is that GO is signed in a manner
identical to the old form of FUTURE, shown in Figure 8.1b. An example of
this older form of FUTURE is given in another 1913 utterance in a narrative by
R. McGregor, in (4).
(4) WHEN PRO.3 UNDERSTAND CLEAR WORD WORD OUR
FATHER SELF FUTURE [old form] THAT NO MORE
‘When he clearly understands the words of our father he will do that
no more.’
This example shows the same form as GO in (3) being used to indicate future
time, suggesting that for a time a polysemous situation existed whereby the
sign could be understood in certain constructions to mean ‘to go’ and in others
to mean ‘future.’
Phonological reduction in the signing space is evident by this time, with the
older form of the sign articulated as a large arc at the waist, with the shoulder
as the primary joint involved in the movement, and the newer form, shown
in Figure 8.1d, with a much shorter movement near the cheek, and with the
movement having the wrist (or, at most, the elbow) as the primary joint involved.
The distalization from a more proximal joint to a more distal joint in this way
constitutes phonological reduction in Brentari’s (1998) model of phonology.
Note the example in (5), where G. Veditz articulates both forms in the same
utterance.
(5) YEAR 50 FUTURE [new form] THAT FILM FUTURE [old form]
TRUE P-R-I-C-E-L-E-S-S
‘In fifty years these films will be priceless.’
In (5) FUTURE is produced twice, yet in each the articulation is markedly
different. In the second instance in (5) the sign resembles FUTURE as produced
by McGregor in (4), while in the first instance FUTURE is signed in a manner
consistent with modern ASL FUTURE, which moves forward from the cheek. In
both instances in the construction above FUTURE has no physical ‘movement
toward a goal’ meaning, only a future time reference.
Newer forms of grammaticizing morphemes frequently co-occur with older
forms synchronically. Hopper (1991:22) describes this as “layering” in gram-
maticization:
Within a broad functional domain, new layers are continually emerging. As this happens,
the older layers are not necessarily discarded, but may remain to coexist with and interact
with the newer layers.
Such layering, or co-occurring of two forms, in other words, may exist for a
limited time; it is entirely possible for the older form to die out, or to continue
grammaticizing in a different direction, resulting in yet a different form with
another function and meaning. This has been proposed for at least some of the
various forms and usages of the item FINISH in ASL in Janzen (1995). Two
such polysemous forms co-occurring synchronically, for however long, often
contribute to grammatical variation in a language, and this would seem to be
the case for FUTURE for a time in ASL. It is remarkable, however, to see two
diachronic variants occur in the same utterance, let alone the same text.
In summary, then, we suggest that FUTURE in modern ASL belongs to the
crosslinguistic group of future markers with a ‘movement toward a goal’ source.
FUTURE began in OLSF as a full verb meaning ‘to go’ or ‘to leave.’ By 1855,
GO was produced with the nondominant hand as an articulated “base” hand.
8.2.2 CAN
In a discussion of markers of possibility, Bybee et al. (1994) note that there
are several known cases of auxiliaries predicating physical ability that come
to be used to mark general ability as well.11 Two cases are cited. English may
was formerly used to indicate physical ability and later came to express general
ability. The second case noted is Latin potere ‘to be able,’ which is related to
the adjective potens meaning ‘strong’ or ‘powerful,’ and which gives French
pouvoir and Spanish poder, both meaning ‘can’ (1994:190). In modern English
can is polysemous, with many agent-oriented senses ranging from uses with
prototypical agents, to those with no salient agent at all. Some examples are
given in (7).
(7) a. I can lift a piano.
b. I can speak French.
c. The party can start at eight.
In (7a) we see a physical ability sense of can. In (7b) can is used to indicate a
mental ability or skill, while in (7c) we see a use of can that could be interpreted
as either a root possibility (with no salient agent) or a permission reading,
depending on the context in which the sentence was spoken.
11 For the purposes of this chapter, modal uses – including physical ability, general
ability, permission, and possibility – are all described under the general heading “possibility,”
since it is believed that all of these modal uses are related and all share the semantic feature of
possibility.
Figure 8.3a 1855 LSF POUVOIR (Brouland 1855; reproduced with permission);
8.3b 1913 ASL CAN (Hotchkiss in 1997 Sign Media Inc.; reproduced with
permission); 8.3c Modern ASL CAN (Humphries et al. 1980; reproduced with
permission)
Evidence from the 1913 data suggests that by the start of the twentieth cen-
tury CAN had already undergone a great deal of semantic generalization from
its physical strength source. The 1913 data contain examples of each of the
following discourse uses of CAN: physical ability, nonphysical ability (skills,
etc.) and root possibility. Example (9) shows a root possibility use of CAN
where CAN is used to indicate the possibility of an event occurring.
Examples of permission uses of CAN were not found in this diachronic data,
nor were epistemic uses seen.12 Permission and epistemic uses are, however,
seen in present day ASL. Shaffer (2000) suggests that epistemic CAN is quite
new and is the result of a semantic extension from root possibility uses of CAN,
12 Bybee et al. (1994) state that epistemic modalities describe the extent to which the speaker is
committed to the truth of the proposition. They posit that “the unmarked case in this domain
is total commitment to the truth of the proposition, and markers of epistemic modality indicate
something less than a total commitment by the speaker to the truth of the proposition” (1994:179).
De Haan (1999), by contrast, defines epistemic modality as an evaluation of evidence on the
basis of which a confidence measure is assigned. An epistemic modal will be used to reflect this
degree of confidence.
8.2.3 MUST
Turning to ASL MUST, Shaffer (2000) posits another gestural source, namely a
deictic pointing gesture indicating monetary debt. While Shaffer (2000) found
no diachronic evidence of such a gesture with that specific meaning, informal ex-
perimentation with nonsigning English speakers did produce multiple instances
of this gesture. Adults were asked to indicate to another that money was owed.
Each person who attempted to gesture monetary debt used exactly this gesture:
pointing at the open, extended hand. Bybee et al. (1994) cite numerous cases
of verbs indicating monetary debt generalizing to indicate general obligation.
De Jorio (1832, in Kendon 2000) finds evidence of a pointing gesture used
as far back as classical antiquity (and nineteenth-century Naples) to indicate
‘in this place’ and notes that it can express ‘insistence.’ Kendon also states
that the index finger extended and directed to some object is used to point out
that object. Further he finds evidence in nineteenth-century Naples of the flat
upturned hand being used to indicate “a request for a material object” (Kendon
2000:128). What we claim here is that such a gesture existed in nineteenth-
century France and could be used to indicate monetary debt (for a discussion
of pointing gestures and their relation to cognition in nonhuman primates, and
their place in the evolution of language for humans, see Blake 2000).
This pointing gesture entered the lexicon by way of OLSF as a verb indicating
monetary debt, glossed as DEVOIR, then underwent semantic generalization
that resulted in uses where no monetary debt was intended, just a general sense
Figure 8.4a 1855 LSF IL-FAUT (‘it is necessary’) (Brouland 1855; reproduced
with permission); 8.4b 1913 ASL OWE (Hotchkiss in 1997 Sign Media Inc.;
reproduced with permission); 8.4c Modern ASL MUST (Humphries et al.
1980; reproduced with permission)
(13) gesture ‘owe’ > OLSF verb ‘owe’ > LSF/ASL ‘must,’ ‘should’ >
epistemic ‘should’
The 1913 data suggest that by the start of the twentieth century the ASL sign
OWE (the counterpart to the OLSF sign with the same meaning) was still
in use, with and without a financial component to its meaning. MUST was
also in use, with discourse uses ranging from participant-external obligation
and advisability to participant-internal obligation and advisability. Uses with
deontic, or authoritative sources were also seen. Epistemic uses of MUST were
not seen in the diachronic data, but are fairly common in modern ASL (for a
more detailed description, see Shaffer 2000).
In summary, this look at the grammaticization of FUTURE, MUST, and CAN
in ASL traces their sources to the point where gesture enters the lexicon. FU-
TURE, MUST, and CAN, we argue, each have gestural sources which, through
frequent use and ritualization, led to the development of lexical morphemes,
then – following standard grammaticization processes – to the development
of grammatical morphemes indicating modal notions. Shaffer (2000) suggests
gestural sources for other ASL modals, such as CAN’T, and Wilcox and Wilcox
(1995) hypothesize gestural origins for markers of evidentiality (SEEM, FEEL,
OBVIOUS) in ASL as well.
13 From Haiman (1978:570–71), his examples (2b) and (21). int is the interrogative marker;
c.p. in (17b) is a connective particle in Haiman’s notation. Haiman also makes the point that
conditionals in Hua, as in a number of languages, are marked similarly, but details of this are
beyond the present discussion.
14 The backward head tilt is frequently thought to obligatorily accompany the raised eyebrow
marker for topics in ASL, but Janzen (1998) notes that in running discourse this is not the case.
construction retains the form of a yes–no question, but the backward head tilt
may be thought of as an iconic gesture away from any real invitation to respond.
The signer does not wish for any response to the “question” form: the addressee
must read this construction not as a question in its truest interactive sense,
but as providing a ground for some ensuing piece of discourse on the part of
the signer, or as a “pivot” linking some shared or presupposed information to
something new. In other words, it is a grammatical marker signaling a type of
information (or information structuring), and not a questioning cue. In a
similar vein, Wilbur and Patschke (1999) suggest that the forward head tilt of
yes–no questions indicates inclusiveness for the addressee, whereas a backward
tilt signals an intent to exclude the addressee from the discourse. Examples of
topic marking in running discourse, taken from the monologue texts in Janzen
(1998), are given in (18) and (19).
top
(18) a. WORLD CL:C(globe) MANY DIFFERENT++ LANGUAGE
both hands-----
PRO.3+++15
on ‘globe’
‘There are many different languages in all parts of the world.’
b. LANGUAGE OVER 5000 PRO.3+++ MUCH
both hands
‘There are over five thousand languages in the world.’
c. FIND+++ SAME CHARACTERISTIC FIND+++ LIST
‘(In these languages we) find many of the same characteristics.’
top
d. PEOPLE TAKE-ADVANTAGE lh-[PRO.3]LANGUAGE
COMMUNICATE MINGLE DISCOURSE COMMUNICATE
PRO.3
‘People make use of this language for communicating and
socializing.’
top
e. OTHER COMMUNICATE SKILL lh-[PRO.3] LITTLE-BIT
DIFFERENT PRO.3-pl.alt16
‘Other kinds of communication are a little different from
language.’
top
(19) a. TRAIN ARRIVE(extended movement, fingers wiggle) T-H-E P-A-S
CL:bent V(get off vehicle)
‘The train eventually arrived at The Pas, and (we) got off.’17
15 PRO.3 here is an indexical point to the location of the classifier structure in the sentence.
Whether these points are best analyzed as locative (‘there’), demonstrative (‘that’), or pronouns
‘it,’ ‘them,’ etc.) is not clear, but for these glosses, they will all be given as PRO.3.
16 pl.alt indicates that this sign is articulated with both hands (thus plural) and has an alternating
indexing movement (to two different points in space).
17 The Pas is a town in Manitoba, Canada.
top
b. OTHER TRAIN pause T-H-E P-A-S(c) TO L-Y-N-N L-A-K-E(d)
PRO.3(c upward to d)
‘and took another train north from The Pas to Lynn Lake.’
c. THAT MONDAY WEDNESDAY FRIDAY SOMETHING
THAT PRO.3 CL:A (travel c to d to c)18 THAT
‘That train runs Mondays, Wednesdays, and Fridays – something
like that.’
top
d. 1,3.dual.excl CHANGE CL:bent V(get on vehicle) TRAIN
‘We (the two of us) changed trains,’
e. ARRIVE C-R-A-N-B-E-R-R-Y P-O-R-T-A-G-E
‘and arrived at Cranberry Portage.’
top
f. CL:bent V(get off vehicle) TAKE-UP DRIVE GO-TO
FLIN-FLON
‘(We) got off (the train), and took a car to Flin-Flon.’
As these discourse segments show, the topic-marked constituent may be nominal
or clausal (other elements, such as temporal adverbials, are also frequently
topic-marked). They have the same formal marking as do yes–no questions,19
but the function of this construction in the discourse is very different. The
construction is emancipated from the interactive function of yes–no questions,
and has assumed a further grammatical function. The marker now indicates
a relationship between parts of the discourse text, that is, how one piece of
information relates to the next. As mentioned, the topic-marked constituent has
a grounding or “pivot” function in the text.
In the grammaticization path given in (14) above, “syntactic domain topics”
are suggested as a later stage than “pragmatic domain topics.” While the de-
tails of this differentiation are not addressed here (see Janzen 1998; 1999), it is
thought that information from the interlocutors’ shared world of experience is
available to them as participants in a current discourse event before information
that arises out of the discourse event itself becomes available as shared informa-
tion. Essentially, however, marked topics that draw presupposed information
from interlocutors’ shared world of experience or from prior mention in the
discourse are marked in the same manner. The only difference – and one that
causes some potential confusion for sentence analysis – is that topic-marked
18 This classifier form is similar to what is often glossed as COMMUTE, with an “A” handshape
moving vertically from locus “c” to “d” to “c,” and with the thumb remaining upward.
19 Once again, the backward head tilt is not an obligatory part of the construction. In these texts,
it appears occasionally, but not consistently, and thus is not considered a necessary structural
element to differentiate the two functions.
constituents that arise out of shared pragmatic experience enter the discourse
as “new” information to that discourse event but consist of “given” information
pragmatically, whereas topic-marked constituents that are anaphoric to previ-
ous mention are both “given” by virtue of the previous mention and because
the information, once mentioned, has entered the shared world of experience
(Janzen 1998). Thus the “given–new” dichotomy is not quite so simple.
The gestural eyebrow raise in these grammaticized cases of topic marking
does not mark the full range of yes–no question possibilities for actual yes–no
questions, but only one: do you know X? Notice, however, that even though this
may suggest that the construction described here as a topic may still appear
to be very question-like, it clearly does not function in this way. Consider the
functional practicality of such “questions” as those posed in (20), with (20a)
and (20b) drawing on the discourse text example in (18a) and (18d), and (20c)
and (20d) from (19d) and (19f) above:
(20) a. Do you know ‘the world’?
b. Do you know ‘people’?
c. Do you know ‘the two of us’?
d. Do you know ‘the act of getting off the train’?
These are not yes–no questions that make any communicative sense in their
discourse context. Rather, they are grammaticized constructions with the same
morphological marking as yes–no questions in ASL, but with different gram-
matical function. In these cases, the topic-marked constituents (e.g. ‘the world’
or ‘the two of us’) are clearly grounding information for the new information
that follows within the sentence.
8.4 Conclusions
Grammaticization processes for spoken languages are understood to be system-
atic and pervasive. Diachronic research in the last few decades has brought to
light a vast array of grammatical constructions for which earlier lexical forms
can be shown as their source. The systematicity for grammaticizing functions
is such that, in many languages, polysemous grammatical and lexical items can
be taken to be evidence of grammaticization, even in the absence of detailed
diachronic evidence.
Grammaticization studies on signed languages are rare, but the examples we
have outlined show the potential for signed languages to develop in a man-
ner similar to spoken languages in this respect. In other words, how would
grammatical categories in a signed language emerge, except by the very same
processes? The pursuit of language universals that include both signed and spo-
ken languages is in some senses hampered by differences between vocal and
signed linguistic gestures, and while even the reality of structural universals
has come under question of late (see, for example, Croft 2001), it is thought by
some (Bybee and Dahl 1989; Bybee et al. 1994; Bybee 2001) that real language
universals are universals of language change. In this regard, grammaticization
processes in ASL offer significant evidence that crosses the boundary between
spoken and signed languages.
A further advantage to studying grammaticization in a signed language, how-
ever, may be that it offers a unique look at language change. What are commonly
thought of as “gestures” – that is, nonlinguistic but communicative gestures of
the hands and face – are of the same neuromuscular substrate as are the lin-
guistic, fully conventionalized signs of ASL (Armstrong et al. 1995). Several
studies have shown that components of generalized, nonlinguistic gesturing are
evident in ASL in the phonetic inventory (e.g. Janzen 1997), the routinized lexi-
con (Shaffer 2000), and in syntactic and morphosyntactic relations (Armstrong
et al. 1995). The present study, however, shows that the role that gesture plays
in the development of grammatical morphemes is also critical, and not opaque
when viewed through the lens of grammaticization principles.
This study offers something unique to grammaticization theory as well, for
two reasons. First, it is interesting to see the process of gesture > lexical form >
grammatical form as illustrated by the development of modals in ASL. Not only
can we see grammatical forms arise, but also lexical forms as an intermediate
stage in the whole process. Second, we have seen an instance of grammatical
form arising not by way of any identifiable lexical form, but directly from
a more generalized gestural source. This does not cast doubt on the crucial
and pervasive role that lexical items do play in the development of grammar,
but suggests that, under the right circumstances, this diachronic stage may
be bypassed. How this might take place, and what the conditions for such
grammaticization phenomena are, have yet to be explored. In addition, there
is great potential for studying the development of numerous modals, auxiliary
forms, and other grammatical elements in ASL and other signed languages.
Acknowledgments
Portions of this paper were first presented at the 26th Annual Meeting of
the Berkeley Linguistic Society. We wish to thank Sherman Wilcox, Barbara
O’Dea, Joan Bybee, participants at the Texas Linguistic Society 2000 confer-
ence and the reviewers of this book for their comments. We wish to acknowledge
support from SSHRC (Canada) Grant No. 752–95–1215 to Terry Janzen.
8.5 References
Armstrong, David F., William C. Stokoe, and Sherman E. Wilcox. 1995. Gesture and
the nature of language. Cambridge: Cambridge University Press.
Baker, Charlotte, and Dennis Cokely. 1980. American Sign Language: A teacher’s re-
source text on grammar and culture. Silver Spring, MD: T.J. Publishers.
Bergman, Brita. 1984. Non-manual components of signed language: Some sentence
Wilbur, Ronnie B. 1999. Metrical structure, morphological gaps, and possible grammati-
calization in ASL. Sign Language & Linguistics 2:217–244.
Wilbur, Ronnie B. and Cynthia Patschke. 1999. Syntactic correlates of brow raise in
ASL. Sign Language & Linguistics 2:3–40.
Wilcox, Sherman and Phyllis Wilcox. 1995. The gestural expression of modality in ASL.
In Modality in grammar and discourse, ed. Joan Bybee and Suzanne Fleischman,
135–162. Amsterdam: Benjamins.
Woll, Bencie. 1981. Question structure in British Sign Language. In Perspectives on
British Sign Language and deafness, ed. B. Woll, J. Kyle and M. Deuchar, 136–
149. London: Croom Helm.
Woodward, James. 1978. Historical bases of American Sign Language. In Understand-
ing language through sign language research, ed. Patricia Siple, 333–348. New
York: Academic Press.
Woodward, James. 1980. Some sociolinguistic aspects of French and American Sign
Language. In Recent perspectives on American Sign Language, ed. Harlan Lane
and François Grosjean, 103–118. Hillsdale NJ: Lawrence Erlbaum Associates.
Wylie, Laurence. 1977. Beaux gestes: A guide to French body talk. Cambridge, MA The
Undergraduate Press.
9 A crosslinguistic examination of the lexicons
of four signed languages
Anne-Marie P. Guerra Currie, Richard P. Meier, and Keith Walters
9.1 Introduction
Crosslinguistic and crossmodality research has proven to be crucial in un-
derstanding the nature of language. In this chapter we seek to contribute to
crosslinguistic sign language research and discuss how this research intersects
with comparisons across spoken languages. Our point of departure is a series
of three pair-wise comparisons between elicited samples of the vocabularies of
Mexican Sign Language (la Lengua de Señas Mexicana or LSM) and French
Sign Language (la Langue des Signes Française or LSF), Spanish Sign Lan-
guage (la Lengua de Signos Española or LSE), and Japanese Sign Language
(Nihon Syuwa or NS). We examine the extent to which these sample vocabular-
ies resemble each other. Writing about “sound–meaning resemblances” across
spoken languages, Greenberg (1957:37) posits that such resemblances are due
to four types of causes. Two are historical: genetic relationship and borrowing.
The other two are connected to nonhistorical factors: chance and shared symbol-
ism, which we here use to mean that a pair of words happens to share the same
motivation, whether iconic or indexic. These four causes are likely to apply to
sign languages as well, although – as we point out below – a genetic linguistic
relationship may not be the most appropriate account of the development of
three of the sign languages discussed in this chapter: LSF, LSM, and LSE.
The history of deaf education through the medium of signs in Mexico sheds
light on why the three specific pair-wise comparisons that form the basis of
this study are informative. Organized deaf education was attempted as early as
April 15, 1861, when President Benito Juárez and Minister Ignacio Ramírez
issued a public education law that called for the establishment of a school
for the deaf in Mexico City and expressed clear intentions to establish similar
schools throughout the Republic (Sierra 1934). Eduardo Huet, a deaf Frenchman
educated in Paris who had previously established and directed a school for the
deaf in Brazil, learned of the new public school initiative and decided to travel
to Mexico. He arrived in Mexico City in 1866, and soon after established a
school for the deaf (Sierra 1934). We assume that LSF in some form was at
least initially used as the medium of instruction there, or heavily influenced the
1 Smith Stark (1990:7, 52) confirms the existence of home signs in Mexico City and states that there
were other manual languages in existence among communities with a high frequency of deaf
members, such as a community of potters near the Texas–Mexico border and Chican, a Yucatec
community. Johnson (1991) reports an indigenous sign language used in a Mayan community in
the Yucatan. However, it is not clear if these sign languages existed in 1866 or what the nature
of those communities might have been at that time.
2 Similarly, in the USA sign languages extant prior to the arrival on these shores of French Sign
Language were likely contributors to the development of ASL. In particular, the sign language
that had developed on Martha’s Vineyard was a probable contributor to ASL (Groce 1985).
9.2 Methodology
The LSM data come from dissertation fieldwork (Guerra Currie 1999) con-
ducted by the first author in Mexico City and in Aguascalientes, a city some
300 miles northwest of Mexico City. Six fluent Deaf signers were consulted: the
three consultants in Aguascalientes were all first-generation deaf, whereas the
three consultants from Mexico City were all second- or third-generation deaf
signers. The videotaped data used in the analyses reported here were elicited
using two commercially available sets of flash cards (Palabras básicas representadas por dibujos 1993, Product No. T-1668, and Más palabras representadas por dibujos 1994, Product No. T-1683, TREND Enterprises, Inc., St. Paul, MN).
This data set was augmented by vocabulary drawn from Bickford (1991), and
a few lexical items that occurred spontaneously in conversation between na-
tive signers. To elicit LSM vocabulary, the consultants were shown the set of
flash cards; most contained a picture that illustrated the meaning of the accom-
panying Spanish word. The LSM data comprise 190 signs elicited from each
of the three Mexico City consultants and 115 signs elicited from each of the
three Aguascalientes consultants; thus, a total of 915 LSM sign tokens were
examined. The vocabulary items elicited consist predominantly of
common nouns drawn from the basic or core vocabulary in such categories
as foods, flora and fauna, modes of transportation, household objects, articles
of clothing, calendrical expressions, professions, kinship terms, wh-words and
phrases, and simple emotions. We did not examine number signs, because the
likely similarity of signs such as ONE or FIVE would lead to overestimates
of the similarities in the lexicons of different signed languages. Likewise, we
chose not to elicit body part signs or personal pronouns (Woodward
1978; McKee and Kennedy 2000).
Other sources for the analysis presented here include videotapings of elicited
vocabulary from LSF, LSE, and NS.3 The consultants for LSE and NS were
3 The first author is greatly indebted to Chris Miller for collecting the LSF data while in France, to
Amanda Holzrichter for sharing data from her dissertation corpus (Holzrichter 2000), which she
collected while in Spain, and to Daisuke Sasaki for collecting the NS data while doing research
at Gallaudet University. There may be particularly significant dialect variation across Spain in
its signed language (or languages). The LSE data reported here were collected in Valencia.
Figure 9.1 Decision procedure for categorizing pairs of signs as similarly-articulated signs and equivalent variants
shown the same set of flash cards as was used with the LSM consultants.4 The
LSF consultant was simply shown a set of written French words corresponding
to those on the Spanish language flashcards. LSE, LSF, and NS counterparts
were not elicited for all the LSM signs in our data set; thus, the LSM data are
compared to 112 LSF signs, 89 LSE signs, and 166 NS signs.
In our analysis of these data, signs from different sign languages were iden-
tified as “similarly-articulated” if those signs shared approximately the same
meaning and the same values on any two of the three major parameters of hand-
shape, movement, and place of articulation. A subset of similarly-articulated
signs includes those signs that are articulated similarly or identically on all three
major parameters; these are called “equivalent variants.” Figure 9.1 steps the
reader through the process of how the pairs of signs were categorized.
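Stated procedurally, the categorization in Figure 9.1 amounts to counting matches on the three major parameters. The sketch below is an illustration of that decision procedure only, not part of the original study; the Sign structure and the parameter values given for COUSIN are hypothetical stand-ins for the analysts' transcriptions.

```python
# A minimal sketch of the categorization procedure in Figure 9.1.
# The Sign structure and the parameter values below are hypothetical
# illustrations, not transcriptions from the study's data.
from dataclasses import dataclass

PARAMETERS = ("handshape", "movement", "place")

@dataclass
class Sign:
    meaning: str
    handshape: str
    movement: str
    place: str

def categorize(a: Sign, b: Sign) -> str:
    """Categorize a cross-language pair of signs sharing approximately
    the same meaning, per the two-of-three-parameters criterion."""
    matches = sum(getattr(a, p) == getattr(b, p) for p in PARAMETERS)
    if matches == 3:
        # Equivalent variants are a subset of similarly-articulated signs.
        return "equivalent variants"
    if matches == 2:
        return "similarly-articulated"
    return "not similarly-articulated"

# LSM and LSF COUSIN differ only in their (initialized) handshapes,
# so they match on two of the three parameters.
lsm_cousin = Sign("cousin", handshape="P", movement="twist", place="cheek")
lsf_cousin = Sign("cousin", handshape="C", movement="twist", place="cheek")
print(categorize(lsm_cousin, lsf_cousin))  # -> similarly-articulated
```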
Several examples help to clarify how these criteria were used. The LSM
and LSF signs for COUSIN are identical except for initialization, the LSM
sign being articulated with a P handshape (cf. Spanish ‘primo’) while the
LSF sign is articulated with a C handshape (cf. French ‘cousin’). These are
treated as similarly-articulated signs. Similarly, the LSM and LSF signs for
FIRE are identical except that the former is articulated with a circular move-
ment whereas the latter involves noncircular movement; they also qualify as
similarly-articulated signs despite the difference in articulation on one major
4 For the elicitation of signs from the LSM consultants, the written Spanish word was covered.
This was not done during the elicitation of the LSE data.
9.3 Results
Table 9.1 below summarizes the total number of sign pairs included in the
study and the resulting set of similarly-articulated signs for each pair-wise
comparison. As noted above, the members of each sign pair shared the same
approximate meaning. Out of the 112 LSM–LSF sign pairs examined in this
study, 43 pairs (38 percent) were coded as similarly-articulated. In the LSM–
LSE comparison of 89 pairs, 29 (33 percent) were coded as similarly-articulated.
For the third pair-wise comparison, of the 166 LSM–NS sign pairs examined,
39 pairs (23 percent) were coded as similarly-articulated. Not surprisingly, the
largest percentage of similarly-articulated signs was found in the LSM and LSF
pair-wise comparison, whereas the smallest percentage of similarly-articulated
signs was found in the LSM–NS comparison.
Table 9.1 Total sign pairs and similarly-articulated signs, with their likely sources, for each pair-wise comparison

Pair-wise      Total sign   Borrowed   Shared       Coincidence   Similarly-articulated signs
comparison     pairs        signs      symbolism                  (percentages in parentheses)
LSM–LSF        112          12         31           0             43 (38)
LSM–LSE         89           0         29           0             29 (33)
LSM–NS         166           0         39           0             39 (23)
Table 9.1 also reports our analyses of the likely sources of similarly-articulated
signs identified in our analyses of the three language pairs. Note that no signs
were analyzed as similarly-articulated due to coincidence; in all cases, similarly-
articulated signs could be attributed to either shared symbolism or borrowing.
A similarly-articulated pair of signs would have been attributed to coincidence
only if the signs were arbitrary in form and came from language pairs (such
as LSM–NS) that share no known historical or cultural links. For these data,
most similarly-articulated signs may be ascribed to shared symbolism, inas-
much as the forms of these signs appear to be drawing from similar imagistic
sources, such as shared visual icons in such sign pairs as ‘fire,’ ‘bird,’ and
‘house.’
By employing the same criteria as those employed in the comparisons be-
tween related languages, the comparison between LSM and NS enables us to
suggest a baseline level of similarity for unrelated signed languages. Out of
166 NS signs, 39 signs (23 percent) are similarly-articulated with respect to
their LSM counterparts. As noted, no LSM signs appear to be borrowings from
NS, a result that is not surprising. The set of signs that are similarly-articulated
consists of iconic signs that are also found to be similarly-articulated in the
other pair-wise comparisons; these sign pairs include those for ‘balloon,’ ‘book,’
‘boat,’ ‘bird,’ ‘fire,’ and ‘fish.’ These signs are also similarly-articulated with
respect to their American Sign Language (ASL) counterparts. We consider the
similarity in these sign pairs to be due to shared symbolism.
Although there are no clear examples of borrowings in the pair-wise compar-
ison of LSM and LSE, the number of similarly-articulated signs is nonetheless
greater than that seen in the LSM–NS pair-wise comparison. The higher level
of similarity between LSM and LSE is perhaps due to shared symbolism, which
is likely to be greater between languages with related ambient cultures (LSM–
LSE) than between languages that have distinct ambient cultures (LSM–NS).
The existence of borrowings between LSM and LSF is not surprising given
the historic and linguistic links between LSM and LSF mentioned in the in-
troduction. The borrowings of signs from LSF into LSM may be attributed to
the prestige of LSF signs among the Deaf community during the early period
of Mexican Deaf education. Although these 12 borrowings are articulated sim-
ilarly in LSF and LSM, these signs are not articulated similarly in LSM and
ASL. Thus, the 12 signs that we classify as lexical borrowings from LSF into
LSM cannot be linked to contact with ASL. These 12 signs are:
- kinship terms: HERMANO/FRERE 'brother,' HERMANA/SOEUR 'sister,' PRIMO/COUSIN 'cousin,' SOBRINO/NEVEU 'nephew';
- calendric expressions: MES/MOIS 'month,' SEMANA/SEMAINE 'week';
- languages: ESPAÑOL/ESPAGNOL 'Spanish' and INGLÉS/ANGLAIS 'English';
- terms for natural phenomena: ESTRELLA/ÉTOILE 'star' and AGUA/EAU 'water';
- an emphatic: VERDAD/VRAI 'true'; and
- an abstract concept: CURIOSIDAD/CURIOSITÉ 'curiosity'.
These sign pairs are analyzed as borrowings because of their relatively low iconicity, which makes independent parallel development quite unlikely.
Although there may be other borrowings among the set of similarly-articulated
signs identified in the pair-wise comparison of LSM–LSF, the status of these
signs is uncertain, and this uncertainty ultimately raises significant questions
about how researchers understand and investigate the diachronic relationships
among sign languages. For example, one sign pair that may be the result of
borrowing from LSF – but that we have not included in the above set of clear
borrowings from that language – is the pair AYUDA/AIDE ‘help.’ In LSF AIDE,
the flat dominant hand (a B hand) contacts the underside of the nondominant
elbow and lifts the nondominant arm (which in our data has a fisted handshape).
In LSM AYUDA, the dominant hand (a fisted A handshape with the thumb
extended) rests on the nondominant B hand; these two hands move upward
together. Our rationale for excluding this pair of signs is that LSM AYUDA
may be borrowed from ASL, inasmuch as the LSM sign resembles the ASL
sign to a greater degree than it resembles the LSF sign.
However, the results of research on ASL lead us to the hypothesis that this
LSM sign might, in fact, have been borrowed from LSF and not ASL. Frishberg
(1975) and Woodward (1976) discuss the form of the ASL sign HELP in the
light of historical contact between LSF and ASL and historical variation within
ASL. They suggest that this sign has undergone language-internal historical
change resulting in centralization of the sign (Frishberg 1975) or in an elbow-
to-hand change in place of articulation (Woodward 1976). This historical change
results in a place of articulation change from the elbow, as in the LSF sign, to
the nondominant hand, as in the ASL and LSM signs. Frishberg and Woodward
suggest that this elbow-to-hand change is a historical process that several ASL
signs have undergone.
Interestingly, this process is also seen in the LSM data in the sign SEMANA
‘week.’ The LSF sign is articulated at the elbow; the LSM sign is identi-
cal with a single difference in the place of articulation at the base of the
nondominant hand. What is noteworthy is that, although the members of the
sign pair SEMANA/SEMAINE ‘week’ are similarly-articulated in LSM and
LSF, they are in no way similar to the corresponding ASL sign WEEK. Thus,
it is possible that the same historical change discussed for the ASL and LSF
sign HELP/AIDE ‘help’ may also have occurred for the LSM and LSF signs
SEMANA/SEMAINE. This similarity between the LSF AIDE ‘help’ and SE-
MAINE ‘week,’ on the one hand, and the LSM AYUDA ‘help’ and SEMANA
‘week,’ on the other hand, suggests the possibility that the LSM sign AYUDA
may also have been borrowed or derived from LSF instead of ASL.
As an alternative to our analysis that assumes borrowing to be an impor-
tant source of similarity between LSF and LSM, one might contend that LSM
signs treated here as borrowings from LSF are, in fact, cognate signs. By “cog-
nate signs,” we mean pairs of signs that share a common origin and that are
from genetically related languages. Although some researchers (Stokoe 1974;
Smith Stark 1990) have argued that the relationship that exists between LSF and
American, Mexican, and Brazilian sign languages, among others, is best seen
as a genetic one with French as the “mother” language, we see several reasons
to doubt this claim (although we certainly agree that LSF has influenced the
development of the signed languages it came into contact with in the nineteenth
and twentieth centuries). If LSM and LSF were indeed genetically related, one
might have expected a much higher percentage of similar signs than our analy-
sis reveals.5 As is, the percentage of similarly-articulated signs revealed by the
LSM–LSF comparison (38 percent) is only marginally greater than that found
in the LSM–LSE analysis (33 percent). In contrast, signed languages that are
known to be related show a much greater degree of overlap in their lexicons.
Thus, comparisons of British, Australian, and New Zealand Sign Languages
have indicated that these languages may share 80 percent or more of their vo-
cabulary (Johnston, in press; also McKee and Kennedy 2000). Additionally,
“languages arising outside of normal transmission are not related (in the ge-
netic sense) to any antecedent systems,” according to Thomason and Kaufman
(1988:10; emphasis in original). Based on what we know about the history of the
LSM and LSF, it is highly unlikely that these languages are genetically related
inasmuch as they have not arisen from “normal transmission.” Reflecting the
perspective of historical linguists in general, Thomason and Kaufman define
5 However, similarity in basic lexicon does not necessarily indicate a genetic relationship, as the
history of English demonstrates. Thus, the facts that after the Norman invasion in 1066 Middle
English borrowed a substantial fraction of its vocabulary from Norman French and that Early
Modern English borrowed many words from Latin do not mean that English should be considered
a Romance language.
9.4 Discussion
Our analysis of these admittedly small data sets leads us to several conclusions.
First, the findings of the pair-wise comparison between LSM and NS suggest a
possible base level of the percentage of similarly-articulated signs due to shared
symbolism between signed languages. Even for these two signed languages –
which are quite remote from each other – similarly-articulated signs constituted
23 percent of the sampled vocabulary. Second, the lexicons of LSF and LSE
show greater overlap with LSM than does NS. This finding is not surprising,
given the known historical and cultural ties that link Mexico, Spain, and France.
Third, only the LSM–LSF comparison revealed strong evidence for borrowings
into LSM. These borrowings are likely due to the use of LSF – or of signs
drawn from LSF – in the language of instruction in the first school for the
deaf in Mexico City. No obvious borrowings were identified in the LSM–LSE
comparison. The comparison between LSM and LSE addresses the commonly
held assumption that there might be a genetic relationship between these two
languages. The limited data available provide no evidence for such a claim.
Several researchers have attempted to assess the similarity of vocabulary
across signed languages (Woodward 1978; Woll 1983; Kyle and Woll 1985;
Woll 1987; Smith Stark 1990). For example, using criteria detailed below, Woll
(reported in Kyle and Woll 1985) compared 15 signed languages and found that
an average of 35 percent to 40 percent of the 257 sampled lexical items were
similarly-articulated between any pair of the 15 languages examined.6 Woll
6 LSF is the only language common to Woll’s study and the current one.
7 Having said this, we hasten to add that in many instances sign languages have conventionalized
quite different icons for the same concept. Klima and Bellugi (1979) cite the sign for TREE in
three different sign languages: in ASL the icon seems to be a tree waving in the wind, in Danish
Sign Language the sign seems to sketch the round crown of a tree and its trunk, and in Chinese
Sign Language the icon is apparently the columnar shape of a tree trunk or of some trees.
languages may explain why these common terms may have been borrowed from
LSF: “social values.” Social attitudes may have been the factor that motivated
the acceptance of LSF borrowings into the LSM lexicon. It is possible that LSF was introduced with Huet's establishment of the school for the deaf, that attitudes toward LSF and the school's teachings were positive, and that these attitudes accorded LSF the prestige that tends to accompany educated language use. Such positive attitudes may have eased the borrowing of common lexical items into LSM. This same factor,
the prestige of educated language, may also contribute to the relatively high
incidence of initialized signs in LSM.
In conclusion, the resources of the visual–gestural modality – specifically, its capacity for iconic representation – may promote greater resemblance between the vocabularies of unrelated signed languages than we would expect between unrelated spoken languages. The analysis presented here – specifically the comparison between LSM and NS – supports Woll's (1983) proposal that there is
a relatively high base level of similarity between sign language vocabularies
regardless of the degree of historical relatedness. To some extent, the apparent
similarity of sign vocabularies may be an artifact of the relative youth of signed
languages. Time and historical change may obscure the iconic origins of signs
(Frishberg 1975). For an answer to this, we will have to wait and see.
Acknowledgments
This chapter is based on the doctoral dissertation of the first author (Guerra
Currie 1999). We thank David McKee and Claire Ramsey for their very helpful
comments on an earlier draft.
References
Bickford, Albert. 1991. Lexical variation in Mexican Sign Language. Sign Language
Studies 72:241–276.
Frishberg, Nancy. 1975. Arbitrariness and iconicity: Historical change in American Sign
Language. Language 51:696–719.
Greenberg, Joseph. 1957. Essays in linguistics. Chicago, IL: University of Chicago
Press.
Groce, Nora E. 1985. Everyone here spoke sign language: Hereditary deafness on
Martha’s Vineyard. Cambridge, MA: Harvard University Press.
Guerra Currie, Anne-Marie P. 1999. A Mexican Sign Language lexicon: Internal and
crosslinguistic similarities and variations. Doctoral dissertation, The University of
Texas at Austin.
Holzrichter, Amanda S. 2000. Interactions between deaf mothers and their deaf infants:
A crosslinguistic study. Doctoral dissertation, The University of Texas at Austin.
Johnson, Robert E. 1991. Sign language, culture and community in a traditional Yucatec
Maya village. Sign Language Studies 73:461–474.
Johnston, Trevor. 2001. BSL, Auslan and NZSL: Three sign languages or one? In Pro-
ceedings of the 7th International Conference on Theoretical Issues in Sign Lan-
guage Research (University of Amsterdam, Amsterdam, July, 2000), ed. Anne
Baker. Hamburg: Signum Verlag.
Johnston, Trevor. In press. BSL, Auslan and NZSL: Three signed languages or one? In A crosslinguistic perspective on sign languages, ed. Anne E. Baker, Beppie van den Bogaerde, and Onno Crasborn. Hamburg: Signum Verlag.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Kyle, J. G. and B. Woll. 1985. Sign language: The study of deaf people and their
language. Cambridge: Cambridge University Press.
McKee, David and Graeme Kennedy. 2000. Lexical comparison of signs from American,
Australian, British and New Zealand Sign Languages. In The signs of language
revisited: An anthology to honor Ursula Bellugi and Edward Klima, ed. Karen
Emmorey and Harlan Lane, 49–76. Mahwah, NJ: Lawrence Erlbaum Associates.
Sierra, Ignacio. 1934. Compañerismo: Organo del club deportivo “Eduardo Huet.”
Mexico City: El Club Eduardo Huet.
Smith Stark, Thomas C. 1990. Una comparación de las lenguas manuales de México y
de Brasil. Paper read at the IX Congreso Internacional de la Asociación de Lingüística y Filología de América Latina (ALFAL), Campinas, Brazil.
Stokoe, William C. 1974. Classification and description of sign languages. In Current
trends in linguistics, Vol. 12, ed. Thomas A. Sebeok. 345–371. The Hague: Mouton.
Thomason, Sarah G. and Terrence Kaufman. 1988. Language contact, creolization, and
genetic linguistics. Berkeley, CA: University of California Press.
Weinreich, Uriel. 1968. Languages in contact. The Hague: Mouton.
Woll, B. 1983. The comparative study of different sign languages: Preliminary analyses.
In Recent research on European sign languages, ed. Filip Loncke, Penny Boyes-
Braem, and Yvan Lebrun, 79–91. Lisse: Swets and Zeitlinger.
Woll, B. 1987. Historical and comparative aspects of British Sign Language. In Sign and
school: Using signs in deaf children’s development, ed. Jim Kyle, 12–34. Clevedon,
Avon: Multilingual Matters.
Woodward, James. 1976. Signs of change: Historical variation in American Sign Lan-
guage. Sign Language Studies 10:81–94.
Woodward, James. 1978. Historical bases of American Sign Languages. In Under-
standing language through sign language research, ed. Patricia Siple, 333–348.
New York: Academic Press.
Part III
Syntax in sign: few or no effects of modality
Within the past 30 years, syntactic phenomena within signed languages have
been studied fairly extensively. American Sign Language (ASL) in particular
has been analyzed within the framework of relational grammar (Padden 1983),
lexicalist frameworks (Cormier 1998, Cormier et al. 1999), discourse repre-
sentation theory (Lillo-Martin and Klima 1990), and perhaps most widely in
generative and minimalist frameworks (Lillo-Martin 1986; Lillo-Martin 1991;
Neidle et al. 2000). Many of these analyses of ASL satisfy various syntactic
principles and constraints that are generally taken to be universal for spoken
languages (Lillo-Martin 1997). Such principles include Ross’s (1967) Complex
NP Constraint (Fischer 1974), Ross’s Coordinate Structure Constraint (Padden
1983), Wh-Island Constraint, Subjacency, and the Empty Category Principle
(Lillo-Martin 1991; Romano 1991).
The level of syntax and phrase structure is where sequentiality is perhaps
most obvious in signed languages, and this may be one reason why we can fairly
straightforwardly apply many of these syntactic principles to signed languages.
Indeed, the overall consensus seems to be that the visual–gestural modality of
signed languages results in very few differences between the syntactic structure
of signed languages and that of spoken languages.
The three chapters in this section support this general assumption, revealing
minimal modality effects at the syntactic level. Those differences that do emerge
seem to based on the use of the signing space (as noted in Lillo-Martin’s chapter;
Chapter 10) or on nonmanual signals (as noted in the Pfau and Tang and Sze
chapters; Chapters 11 and 12). Nonmanual signals include particular facial
expressions and body positions that act primarily as grammatical markers in
signed languages. Both the use of space and nonmanual signals are integral
features of the signed modality and are used in all the signed languages that
have been studied to date (Moody 1983; Bos 1990; Engberg-Pedersen 1993;
Pizzuto et al. 1995; Meir 1998; Sutton-Spence and Woll 1998; Senghas 2000;
Zeshan 2000).
Chapter 10 starts with the autonomy of syntax within spoken languages and
extends this concept to signed languages, concluding that while there must be
some modality effects at the articulatory–perceptual level, there need not be any
at the syntactic level. Lillo-Martin goes on to explore one facet affecting the
syntax and morphology of signed languages that does show modality effects:
the use of signing space for pronominal and anaphoric reference. This issue
is also explored in more detail in Part IV of this volume on deixis and verb
agreement.
Chapter 11 is an exploration of split negation in German Sign Language
(Deutsche Gebärdensprache or DGS), with comparisons to ASL and also many
spoken languages. Pfau finds that split negation occurs in DGS and closely
resembles split negation patterns found in many spoken languages. In addition,
there is variation in negation patterns within the class of signed languages, so
that while DGS has split negation, ASL does not. Thus, split negation essentially
shows no modality effect. However, Pfau identifies one potential modality effect
related to the nonmanual marking (headshake) associated with negation. This
nonmanual marking, which Pfau argues is essentially prosodic, acts somewhat
differently from prosody in spoken languages.
In Chapter 12 Tang and Sze look at the structure of nominals in Hong Kong
Signed Language (HKSL). They find minimal modality effects at the structural
level, where HKSL nominals have the basic structure [Det Num N]. However,
Tang and Sze note that there may be a modality effect in the types of nominals
that receive definiteness marking. In HKSL bare nouns often receive marking for
definiteness, which in HKSL is realized nonmanually in eye gaze or role shift.
Like Pfau’s negation marking, Tang and Sze note variation of this definiteness
marking across signed languages. Thus, while definiteness is expressed in ASL
with head tilt and eye gaze, only eye gaze is used in HKSL.
Chapters 11 and 12 in particular constitute significant contributions to the
body of literature on signed languages other than ASL, and indicate a very
strong need for more crosslinguistic work on different signed languages. Only
by looking at a wide variety of signed languages will we be able to tease apart
what features of human language are affected by language modality, and what
features are part of universal grammar.
kearsy cormier
References
Bos, Heleen. 1990. Person and location marking in Sign Language of the Netherlands:
Some implications of a spatially expressed syntactic system. In Current trends in
European sign language research, ed. Sigmund Prillwitz and Tomas Vollhaber,
231–246. Hamburg: Signum Press.
Cormier, Kearsy. 1998. Grammatical and anaphoric agreement in American Sign Lan-
guage. Masters thesis, University of Texas at Austin.
Cormier, Kearsy, Stephen Wechsler, and Richard P. Meier. 1999. Locus agreement
in American Sign Language. In Lexical and constructional aspects of linguistic
explanation, ed. Gert Webelhuth, Jean-Pierre Koenig and Andreas Kathol, 215–
229. Stanford, CA: CSLI Press.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum
Press.
Fischer, Susan D. 1974. Sign language and linguistic universals. In Actes du Colloque
Franco-Allemand de Grammaire Transformationnelle, ed. Christian Rohrer and
Nicholas Ruwet, 187–204. Tübingen: Niemeyer.
Lillo-Martin, Diane. 1986. Two kinds of null arguments in American Sign Language.
Natural Language and Linguistic Theory 4:415–444.
Lillo-Martin, Diane. 1991. Universal grammar and American Sign Language. Boston,
MA: Kluwer.
Lillo-Martin, Diane. 1997. The modular effects of sign language acquisition. In Relations
of language and thought: The views from sign language and deaf children, ed.
Marc Marschark, Patricia Siple, Diane Lillo-Martin, Ruth Campbell and Victoria
S. Everheart, 62–109. New York: Oxford University Press.
Lillo-Martin, Diane, and Edward Klima. 1990. Pointing out differences: ASL pronouns
in syntactic theory. In Theoretical issues in sign language research, Vol. 1: Lin-
guistics, ed. Susan D. Fischer and Patricia Siple, 191–210. Chicago, IL: University
of Chicago Press.
Meir, Irit. 1998. Thematic structure and verb agreement in Israeli Sign Language. Doc-
toral dissertation, Hebrew University of Jerusalem.
Moody, Bill. 1983. La langue des signes. Vincennes: International Visual Theatre.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert Lee. 2000.
The syntax of American Sign Language. Cambridge, MA: MIT Press.
Padden, Carol A. 1983. Interaction of morphology and syntax in American Sign Lan-
guage. Doctoral dissertation, University of California at San Diego, CA.
Pizzuto, Elena, Emanuela Cameracanna, Serena Corazza, and Virginia Volterra. 1995.
Terms for spatio-temporal relations in Italian Sign Language. In Iconicity in lan-
guage, ed. Raffaele Simone, 237–256. Amsterdam: John Benjamins.
Romano, Christine. 1991. Mixed headedness in American Sign Language: Evidence
from functional categories. MIT Working Papers in Linguistics 14:241–254.
Ross, J. R. 1967. Constraints on variables in syntax. Doctoral dissertation, MIT.
Senghas, Ann. 2000. The development of early spatial morphology in Nicaraguan Sign
Language. Proceedings of the Annual Boston University Conference on Language
Development 24:696–707.
Sutton-Spence, Rachel, and Bencie Woll. 1998. The linguistics of British Sign Language.
Cambridge: Cambridge University Press.
Zeshan, Ulrike. 2000. Sign language in Indo-Pakistan: A description of a signed lan-
guage. Philadelphia, PA: John Benjamins.
10 Where are all the modality effects?
Diane Lillo-Martin
10.1 Introduction
Sign languages are produced and perceived in the visual modality, while spo-
ken languages are produced and perceived in the auditory modality. Does this
difference in modality have any effect on the structures of these two types
of languages? Much of the research on the structure of sign languages has
mentioned this issue, but it is far from resolved. To some authors, the differ-
ences between sign languages and spoken languages are paramount, because
the study of “modality effects” is a contribution which sign language research
uniquely can make. To others, the similarities between sign languages and spo-
ken languages are most important, for they can tell us how certain properties
of linguistic systems transcend modality and are, therefore, truly universal. Of
course, both of these goals are worthy, and this book is testimony to the fruits
that such endeavors can yield.
In this chapter I address the question of modality effects by first examining the
architecture of the language faculty. By laying out my assumptions about how
language works in the general sense, predictions about the locus of modality
effects can be made. I then take up an issue that is a strong candidate for
a modality effect: the use of space for indicating reference in pronouns and
verbs. I review some of the issues that have been discussed with respect to
this phenomenon, and offer an analysis that is in keeping with the theoretical
framework set up at the beginning. I do not offer this analysis as support for
the theoretical assumptions but, instead, the framework provides support for
the analysis. Interestingly, my conclusions turn out to be rather similar to a
proposal made on the basis of some very different assumptions about the nature
of the language faculty.
Clearly, the syntactic component and the phonological component must con-
nect at some point. In recent literature, this point has been called an “interface.”
Some of the recent research in generative syntax within the Minimalist Program
(Chomsky 1995) has sought to determine where in a derivation the syntax–
phonology interface lies, and the properties it has. For example, it has long
been assumed by some syntacticians (e.g. Chomsky 1981) that there may be
“stylistic” order-changing rules that apply in the interface component connect-
ing syntax with phonology. However, aside from the operations of this interface
level, known as PF (Phonetic Form), it is generally assumed that the operations
of the syntax and of the phonology proper are autonomous of each other. Thus,
quoting again from Jackendoff (1997:29):
For example, syntactic rules never depend on whether a word has two versus three
syllables (as stress rules do); and phonological rules never depend on whether one
phrase is c-commanded by another (as syntactic rules do). That is, many aspects of
phonological structure are invisible to syntax and vice versa.
autonomy just as spoken languages do. What does this mean for the analysis of
phonology and syntax in sign languages?
The phonology is the component of the grammar that interacts with the
“articulatory–perceptual interface.” That is, the output of the phonological com-
ponent is the input to the articulatory component (for production); or the output
of the perceptual component is the input to the phonological component (for
comprehension). While these mappings are far from simple, it is clear that the
modality of language must be felt in the phonological component. Thus, for
example, the features of a sign language representation include notions like
“selected fingers” and “circular movement,” while those of a spoken language
include “tongue tip” and “voice.” In other words, the modality of language
affects the phonological component.
In view of this inescapable conclusion, it is remarkable to notice how many
similar properties sign phonologies and spoken phonologies share. Presumably,
these properties come from the general, abstract properties of the language
faculty (known as Universal Grammar, or UG). Since I do not work in the details
of phonology, I do not offer here any argument as to whether these properties are
specific to language or come from more general cognitive principles, although I
have made my predilections known elsewhere (see Lillo-Martin 1997; however,
for the point of view of some real sign language phonologists, see Sandler 1993;
van der Hulst 2000; Brentari, this volume). Let it suffice here to say that
although certain aspects of the modality will show up in the phonology, it has
been found that, as a whole, the system of sign language phonology displays in
general the same characteristics as spoken language phonology.
Even though the phonological components for signed languages and spoken
languages must reveal their modalities to some degree, the theory of the auton-
omy of syntax allows for a different claim about that level of the grammar. If
syntax and phonology are autonomous, there is no need for the syntactic com-
ponents of signed and spoken languages to differ. The null hypothesis, then, is
that they do not differ. In other words, the modality of language does not affect
the syntactic component.
This is not to say, of course, that any particular sign language will have
the same syntax as any particular spoken language. Instead, I assume that the
abstract syntactic principles of UG apply equally to languages in the signed
and spoken modalities. Where UG permits variation between languages, sign
languages may vary from spoken languages (and from each other). Where
UG constrains the form of spoken languages, it will constrain sign languages
as well. A clear counterexample to the UG hypothesis for sign language could
come from a demonstration that universal principles of grammar – for example,
the principle of structure dependence or the constraints on extraction – apply
in spoken languages but not in sign languages. To my knowledge, no such
claim has been made. On the contrary, several researchers have claimed that
its analysis. As Liddell (1990) points out, for some signs their location in
the signing space simply reflects the articulatory function of space. However,
spatial locations are also used to pick out referents and designate the location
of elements. In such cases, spatial locations are not simply sublexical, but in
addition they convey meaning. Most researchers have assumed (implicitly) that
spatial locations are therefore morphemic, and that – just as Supalla (1982)
provided a componential morphological analysis of the complex movements
found in classifier constructions – some kind of componential morphological
analysis of spatial locations could be provided.
DeMatteo (1977) was an exception: he argued that spatial locations could
not be analyzed morphologically. More recently, this same conclusion has been
argued for in a series of publications by Liddell (1990; 1994; 1995; 2000).
Before summarizing Liddell’s position, the following sections briefly describe
the meaningful use of space in American Sign Language (ASL).
Figure 10.1 ASL verb agreement: 10.1a ‘I ask her’; 10.1b ‘He asks me’
(usually) to the locations of the referents intended as subject and object, re-
spectively. Often, the verb also rotates so that it “faces” the object as well. This
process is illustrated in Figure 10.1.
The process of verb agreement illustrated in Figure 10.1 applies to a class
of verbs, but not to all verbs in ASL. Padden (1983) identified three classes of
verbs:
- those that take agreement as described above;
- those verbs, such as ASL PUT, that agree with spatial (i.e. locative) arguments; and
- those that take no agreement at all.
Furthermore, the class of agreeing verbs contains a subclass of “backwards”
verbs, which move from the location of the object to the subject, instead of
vice versa. The distinctions between the various classes of verbs with respect
to agreement have received considerable attention. The importance of these
distinctions for the analysis of verb agreement is brought out below.
[Diagram: a hand configuration (HC) tier linked to a Location–Movement–Location (L M L) skeleton]
the nonfirst persons are completely componential. Importantly, the first person
form may be used to pick out a referent other than the signer, in contexts of
direct quotation (and what is often called “role shift”), just as first person forms
may do in spoken languages. Thus, according to Meier, ASL marks a two-way
person contrast: first vs. nonfirst.
This conclusion has been shared by numerous authors working on ASL and
other sign languages. For example, Engberg-Pedersen (1993) makes a simi-
lar argument for a first/nonfirst distinction in Danish Sign Language, as does
Smith (1990) for Taiwanese Sign Language, Meir (1998) for Israeli Sign Lan-
guage, and Rathmann (2000) for DGS. The main idea seems to be that there
is a grammatical distinction between first and nonfirst person, with multiple
realizations of nonfirst. Neidle et al. (2000:31) state that although “there is a
primary distinction between first and non-first persons, non-first person can be
further subclassified into many distinct person values.”
physical situation. If not, location fixing might serve to establish a locus for a
“surrogate” (an imaginary referent of full size, used in the cases where verbs
indicate relative height of referents), or a “token” (a schematic referent with
some depth, but equivalent and parallel to the signer).
Crucially, what Liddell (1995:25–26) recognizes is that in order for pronouns
or verbs to make use of the locations of present referents, surrogates, or tokens:
there is no . . . predictability associated with the locations that signs may be directed
toward. The location is not dependent on any linguistic features or any linguistic category.
Instead it comes directly from the signer’s view of the surrounding environment.
Given this state of affairs, Liddell concludes that there is no linguistic process
of verb agreement in ASL. Instead, he proposes (p. 26) that:
the handshapes, certain aspects of the orientations of the hand, and types of movement
are lexically specified through phonological features, but . . . there are no linguistic fea-
tures identifying the location the hands are directed toward. Instead, the hands are
directed . . . by non-discrete gestural means.
employs a specifiable location that is also used for nonreferential lexical con-
trasts. Plural forms (dual, exhaustive, and multiple) have specific morphological
shapes that combine predictably with roots. In fact, as Meier points out, the first
person plural forms WE, OUR, and OURSELVES involve lexically specified
locations “at best only partially motivated” (Meier 1990:180), despite the pos-
sibility for “pointing to” the signer and a locus or loci representing the other
referents.
Liddell himself does not reject the notion that there is a specific first person
form, at least in pronouns (Liddell 1994). However, McBurney (this volume),
adopting Liddell’s framework, argues that the first/nonfirst distinction is not a
grammatical one. For the reasons given here and below, I think the distinction
is real.
Furthermore, there are numerous constraints on the agreement process. For
one thing, as mentioned earlier, only a subset of verbs mark agreement at all.
Meir (1998) characterized verbs that may take agreement as “potential posses-
sors,” because on her analysis agreement verbs have a transfer component in
their predicate–argument structure. Mathur (2000; Rathmann and Mathur, this
volume) characterized verbs that may take agreement as those taking two ani-
mate arguments; similarly, Janis (1995) limited agreement to verbs with particu-
lar semantic relations, including animate patients, experiencers, and recipients.
These characterizations of agreeing verbs are largely overlapping and, impor-
tantly, they bring out the fact that many verbs do not show agreement. Further-
more, agreement affects particular syntactic roles: subject and object for transi-
tive verbs; subject and indirect object for di-transitives. Intransitives do not mark
agreement; di-transitives do not mark agreement with their direct object. If there
is no linguistic process of agreement – but rather a gestural procedure for indi-
cating arguments – why should the procedure be limited by linguistic factors?
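To see how restrictive these conditions are, the constraints just reviewed can be stated as an explicit licensing rule. The sketch below is a schematic illustration only, not a formalization from the agreement literature: the animacy condition follows Mathur (2000), the role restrictions follow Padden (1983), and the Verb structure is invented for exposition.

```python
# A schematic illustration of the licensing constraints on verb agreement
# reviewed above. The Verb structure and example values are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class Verb:
    gloss: str
    arguments: List[str]        # grammatical roles, in order
    animate_arguments: int = 0  # how many of those arguments are animate

def agreement_slots(v: Verb) -> List[str]:
    """Return the roles a verb may mark agreement with, if any."""
    if v.animate_arguments < 2:
        return []                              # plain and spatial verbs: no agreement
    if len(v.arguments) == 1:
        return []                              # intransitives: no agreement
    if len(v.arguments) >= 3:
        return ["subject", "indirect object"]  # ditransitives skip the direct object
    return ["subject", "object"]               # transitives: subject and object

print(agreement_slots(Verb("ASK", ["subject", "object"], animate_arguments=2)))
# -> ['subject', 'object']
print(agreement_slots(Verb("PUT", ["subject", "object", "locative"], animate_arguments=1)))
# -> []  (lacks two animate arguments)
```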
To be sure, Liddell (1995) himself points out that the indicating process
must interact closely with the grammar. He points out the observation made by
Padden (1983) – and before that by Meier (1982) – that while object agreement
is obligatory, subject agreement is optional; as well as the fact that certain
combinations are ruled out (e.g. FLIRT-WITH-me). He does not, however,
offer a way to capture these facts under a system with no linguistic agreement
process. Many of the ruled-out forms can be attributed to phonetic constraints, as
offered by Mathur and Rathmann (Mathur 2000; Mathur and Rathmann 2001).
How would such constraints apply to forms generated outside the grammar?
The arguments given so far have also been made by others, including Aronoff
et al. (in submission), Meier (2002), and Rathmann and Mathur (this volume).
Meier (2002) provides several additional arguments, and discusses at length
how the evidence from the development of verb agreement also supports its
existence as a linguistic phenomenon. He discusses development both for the
young child acquiring a sign language (compare Meier 1982), and in terms of
true for ASL (e.g. Fischer 1974), but the status of such non-SVO (subject–verb–
object) utterances as a single clause in ASL is disputed (compare Padden 1983).
LSB also differs from ASL in that it has an “auxiliary”: a lexical item to mark
agreement with non-agreeing verbs. Such an element, although referred to by
various names, has been observed in Taiwanese Sign Language (Smith 1990),
Sign Language of the Netherlands (Bos 1994), Japanese Sign Language (Fischer
1996), and German Sign Language (Rathmann 2000). Although much more
detailed work is needed – in particular to find the similarities and differences
between the auxiliary-like elements across sign languages – it seems that in
these sign languages the auxiliary element is used when agreement is blocked.
Interestingly, there may be structural differences between sentences with and
without the auxiliary (Rathmann 2000).
The behavior of the auxiliary in the sign languages that have this element is
further evidence for the linguistic status of agreement. However the auxiliary
is to be analyzed, like an agreeing verb it moves between the subject and object
loci. If it interacts with syntactic phenomena, it must be represented in some
way in the derivation, i.e. in the linguistic system.
To summarize, because of the ways that the system known as verb agreement
interacts with other aspects of the linguistic system, it must itself be a linguistic
system. Further, I have argued, at least some part of agreement must be rep-
resented in the syntax, because it interacts with other aspects of the syntax.
However, does it matter to the syntax that verb agreement is realized spatially?
How would this be captured in a syntax autonomous from phonology? As
Liddell has pointed out, if the spatial locations used in agreement could be ana-
lyzed morphemically, the fact that agreement uses space would be no more rele-
vant to the syntax than the fact that UGLY and DRY are minimal pairs differing
only in location. As he pointed out, however, it seems that the spatial locations
cannot be analyzed morphemically. How, then, is this problem to be resolved?
6 In current syntactic theory of the Minimalist Program, indices have been removed from syntactic
representations.
pronouns in sign language, these gestures are unlistable (a speaker may point
to any location in space), and they disambiguate the reference of the pronouns
they accompany. The present proposal is that the nonfirst singular sign language
pronoun is lexically and syntactically ambiguous, just as ‘him’ is in the English
example; however, when it combines with a gesture, it may be directed at any
location, and its reference is disambiguated.
So far, my proposal is almost just like Liddell’s. The difference comes when
we move to verb agreement. First, note that although they combine linguistic
and gestural components, I have not refrained from calling pointing in sign
language “pronouns.” As pronouns, they are present in the syntactic structure
and participate in syntactic and semantic processes. For example, I expect that
sign language pronouns adhere to conditions on pronouns such as the principles
of the Binding Theory of Chomsky (1981) or their equivalent. I know of no
evidence against this.
I take the same approach to verb agreement. Again following Liddell, I am
convinced that a combination of linguistic and gestural explanations is necessary
to account for the observed forms of verbs. However, unlike Liddell, I do not
take this as reason to reject the notion that verbs agree. In particular, I have
given reasons above to support the claim that a class of verbs in ASL agree in
person (first vs. nonfirst) and number (singular, dual, and multiple at least) with
their subject and object. Hence, my proposal is that there is a process of verb
agreement whereby verbs agree with their arguments in person and number, but
the realization of agreement must also ensure that coindexing corresponds to
the use of the same locus, a process which must involve a gestural component.
I believe that Meier (2002) concurs with this conclusion when he states that
“although the form of agreement may be gestural, the integration of these
gestural elements into verbs is linguistically determined.”
The proposal that sign language verbs combine linguistic and gestural compo-
nents is different from the English example in that for sign language, both the lin-
guistic and gestural components use the same articulators. Okrent (this volume)
discusses at length this aspect of Liddell’s proposal, and provides helpful infor-
mation about gesture accompanying spoken languages by which to evaluate the
proposal that verbs combine linguistic and gestural elements in sign language.
must use the same R locus. This is so for all instances of intended coreference
within a discourse, unless a new location is established for a referent, either
through repeating a location-assigning procedure or through processes that dis-
place referents (Padden 1983). Within a sentence, multiple occasions of picking
out the same referent will also be subject to this requirement. One type of such
within-sentence coreference is the two uses of the pronoun in sentences like the
ASL sentence meaning 'He_j thinks he_j will win.' Another type is the corefer-
ence between a noun phrase and its “copy” in “subject pronoun copy” (Padden
1983). The various mechanisms for picking out a referent must be directed at
the same location. However, what that location is need not be specified in the
syntax. Any two instances of coindexing must employ the same location. This
does not mean, however, that a categorial distinction is being made between the
various possible locations for that referent.
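The requirement can be pictured as a well-formedness condition on discourse representations: coindexation entails identity of locus, while the particular locus chosen for a nonfirst referent is grammatically arbitrary. The following Python sketch is purely illustrative; the locus labels and function names are inventions of the illustration, not part of the analysis.

# Illustrative sketch: coindexation requires identity of locus,
# but the choice among the nonfirst loci is grammatically free.
discourse_loci = {}  # referential index -> locus label

def assign_locus(index, locus):
    # Establish (or re-establish) a locus for a referent; 'left'
    # would have served as well as 'right' -- the choice is arbitrary.
    discourse_loci[index] = locus

def realize_pronoun(index):
    # A pronoun coindexed with `index` must point at that referent's locus.
    return discourse_loci[index]

assign_locus("j", "right")
first = realize_pronoun("j")
second = realize_pronoun("j")   # 'He_j thinks he_j will win'
assert first == second          # coindexing entails the same location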
In this context, note the observation made by McBurney (this volume) that
no two lexical signs contrast for their locations in “neutral space.” Apparently,
the spatial contrasts used in agreement are not lexically relevant. The claim
here is that they are also not syntactically relevant. If the difference between
various nonfirst locations is irrelevant to the syntax, this means that no syntactic
principle or process would treat a location on the right, say, differently from
a location on the left. On the other hand, the syntax may treat the first person
forms differently from the nonfirst forms as a group. Various arguments that
the first person form is distinct in this way were offered in the discussion of
Meier’s (1990) proposal for a two person system. Another argument he offered
has to do with the use of the first person form in what is commonly known as
“role shifting,” to which I would like to add some comments.
Meier observed that the first person pronoun may pick out a referent other
than the signer, in contexts of “role shifting.” This technique is used for reported
speech, but also more broadly to indicate that a scene is being conveyed from the
point of view of someone other than the signer. Just as in the English example,
‘Bush said, “I won,” ’ or perhaps, ‘Bush is like, “wow, I won!” ’ the first person
pronoun may be used when quoting the words or thoughts or perspective of
another.
What is important for the present purposes is that this special characteristic is
reserved for the first person pronoun. Other pronouns do not “shift” during role
shift (as pointed out by Engberg-Pedersen 1995; also Liddell 1994). In Lillo-
Martin and Klima (1990) and Lillo-Martin (1995) we compared the special
characteristics of the first person pronoun to logophoric elements in languages
such as Ewe and Gokana. These elements have special interpretations in certain
contexts, such as reported speech or verbs reflecting point of view. Many other
proposals have also been made regarding the analysis of “role shift” (see, for
example, Engberg-Pedersen 1995; Poulin and Miller 1995). Whatever the best
analysis for the shifting nature of the first person pronoun in ASL, it is clear that
the grammar must be able to refer to the first person form separately from the
nonfirst forms. However, it never seems to be necessary to refer to the nonfirst
form in location “a” distinct from the nonfirst form in location “b.”
Another prediction of this account is that there may be certain special situ-
ations in which the first/nonfirst contrast is evident, but the contrast between
different nonfirst locations is neutralized. One such special situation may come
up in children acquiring ASL. Loew (1984) observed that a three-year-old child
acquiring ASL went through a stage in which different nonfirst characters in a
single discourse were all assigned the same locus: a so-called “stacking” error.
At this point, however, children do not generally make errors with the first per-
son form. This contrast has generally been seen as one showing that children
acquire correct use of pronouns and verb agreement for present referents earlier
than they do for nonpresent referents. However, it is also compatible with the
suggestion that they might first acquire the first/nonfirst contrast, and only later
acquire the distinction between different nonfirst locations.
Poizner et al. (1987) observed the opposite problem in one of their aphasic
signers, Paul D. He made numerous errors with verbal morphology. One exam-
ple cited is the use of three different spatial loci when the same location was
required. However, this error was with spatial verbs (i.e. with verbs that mark
locative arguments), not verbs marking agreement with human arguments. It
would be interesting to know if Paul D made any similar errors with agreeing
verbs. It would also be helpful to re-evaluate data from children, aphasics, and
perhaps other special populations, to look for evidence that the first–nonfirst
contrast may be treated differently from the contrast between various nonfirst
forms in these populations.
Mathur (2000) adopts the Theory of Distributed Morphology (Halle and Marantz 1993), under
which the morphological component is reserved for processes of affixation. In
Mathur’s model of align-sphere, agreement is not a process of affixation; rather,
it is a “re-adjustment” rule. He hence puts agreement at the phonological level.
Mathur discusses extensively the phonological effects of agreement, and he
shows evidence that the location of endpoints does have an effect on the output
of the agreement process. For example, for a right-handed signer, articulating
an agreeing verb such as WARN with a subject location on the right and object
location on the left presents no problem. However, if the subject location is
on the left and the object location is on the right, the regular output of the
alignment process would violate phonetic constraints on ASL (in fact, it would
be impossible to articulate, given the physical constraints of the human body).
Instead, some other form must be used, such as changing hand dominance for
this sign or omitting the subject agreement marker. This shows that the output
of the phonological process is affected by the specific locations used in the sign.
This conclusion is compatible with Mathur’s proposal that the whole process
of agreement is phonological, and that the phonological component accesses
the gestural.
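Mathur's articulatory point can be restated as a late filter on the output of alignment. The sketch below is a loose paraphrase of the text, not his model; the predicate and the repair options are labeled with invented names.

# Hypothetical check on the output of agreement alignment.
# Locus labels, function names, and repair strategies are illustrative only.

def alignment_is_articulable(dominant_hand, subject_locus, object_locus):
    # For a right-handed signer, aligning WARN from a left-side subject
    # locus to a right-side object locus violates phonetic constraints.
    if dominant_hand == "right":
        return not (subject_locus == "left" and object_locus == "right")
    return not (subject_locus == "right" and object_locus == "left")

def realize_agreeing_verb(verb, dominant_hand, subject_locus, object_locus):
    if alignment_is_articulable(dominant_hand, subject_locus, object_locus):
        return (verb, subject_locus, object_locus)
    # Some other form must be used: here, omit the subject agreement
    # marker; switching hand dominance would be the alternative repair.
    return (verb, None, object_locus)

print(realize_agreeing_verb("WARN", "right", "right", "left"))  # regular output
print(realize_agreeing_verb("WARN", "right", "left", "right"))  # repaired output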
The idea that agreement, as a re-adjustment rule, is purely phonological
leads to the expectation that it has no syntactic effects, since phonological
re-adjustment rules do not apply until after syntax. However, we have seen
evidence for syntactic effects of agreement in Section 10.3.3. Independently,
Rathmann and Mathur (this volume) have identified additional syntactic effects
of agreement. Accordingly, the more recent work develops Mathur’s (2000)
proposal by putting forth a model of agreement that contains an explicit “ges-
tural space” connecting conceptual structure with the articulatory–perceptual
interface, but also including syntactic aspects of agreement within the syntac-
tic structure. In this way, the syntactic effects can be accounted for without
losing the detailed account of the articulation of agreeing verbs developed
previously.
10.7 Conclusions
I have argued that there is a linguistic process of agreement in ASL, but I
have agreed with Liddell that in order to account for this process fully some
integration of linguistic and gestural components must be made. It is interesting
that I come to this conclusion given my working assumptions and theoretical
framework, which are quite distinct from his in many ways.
Much further work remains to be done on this issue. In particular, stronger
evidence for the interaction of verb agreement with syntax should be sought.
Additional evidence regarding the status of the various nonfirst loci is also
needed. Another domain for future research concerns the very similar problems
that arise for spatial verbs and classifiers. Although these predicates do not
indicate human arguments, they make use of space in a way that poses the same
challenge to componential analysis as the agreeing verbs. This challenge should
be further investigated under an approach that combines gestural and linguistic
components.
Finally, in answering the question that forms the title of this chapter, I have fo-
cused on separating phonology from syntax. A deeper understanding of modal-
ity effects must explore this putative separation further, and also delve into the
phonological component, examining where modality effects are found – and
not found – within this part of the grammar.
Acknowledgments
This research was supported in part by NIH grant number DC-00183. My
thoughts on the issues discussed here profited from extensive discussions about
verb agreement in sign language with Gaurav Mathur. I would also like to
thank the organizers, presenters, and audience of the Texas Linguistics Society
meeting on which this volume is based for an energizing and thought-provoking
meeting. It was a pleasure to attend a conference which was so well-focused and
informative. I have also had productive conversations about these issues with
many people, among whom Karen Emmorey and Richard Meier were especially
helpful. Richard Meier also provided helpful comments on the written version.
Finally, I would like to acknowledge the graduate students in my course on
the structure of ASL who – several years ago – encouraged me to consider
the idea that the spatial “problems” of ASL could be addressed by an analysis
employing gesture.
10.8 References
Ahlgren, Inger. 1990. Deictic pronouns in Swedish and Swedish Sign Language. In
Theoretical issues in sign language research, Vol. 1: Linguistics, ed. Susan Fischer
and Patricia Siple, 167–174. Chicago, IL: University of Chicago Press.
Aronoff, Mark, Irit Meir, and Wendy Sandler. In submission. Universal and particular
aspects of sign language morphology.
Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Lan-
guage. Doctoral dissertation, Boston University, Boston, MA.
Bos, Heleen. 1994. An auxiliary in Sign Language of the Netherlands. In Perspectives
on sign language structure: Papers from the 5th International Symposium on sign
language research, ed. Inger Ahlgren, Brita Bergman and Mary Brennan, 37–53.
Durham: International Sign Linguistics Association, University of Durham.
Chomsky, Noam. 1977. Essays on form and interpretation. New York: North-Holland.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
DeMatteo, Asa. 1977. Visual imagery and visual analogues in American Sign Language.
In On the other hand: New perspectives on American Sign Language, ed. Lynn
Friedman, 109–136. New York: Academic Press.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum-
Verlag.
Engberg-Pedersen, Elisabeth. 1995. Point of view expressed through shifters. In Lan-
guage, gesture, and space, ed. Karen Emmorey and Judy Reilly, 133–154. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Fischer, Susan D. 1974. Sign language and linguistic universals. Paper presented
at Actes du Colloque Franco-Allemand de Grammaire Transformationnelle,
Tübingen.
Fischer, Susan D. 1996. The role of agreement and auxiliaries in sign language. Lingua
98:103–120.
Halle, Morris and Alec Marantz. 1993. Distributed Morphology and the pieces of in-
flection. In The view from Building 20: Essays in linguistics in honor of Sylvain
Bromberger, ed. Ken Hale and Samuel J. Keyser, 111–176. Cambridge, MA: MIT
Press.
Jackendoff, Ray. 1997. The architecture of the language faculty. Cambridge, MA: MIT
Press.
Janis, Wynne. 1995. A crosslinguistic perspective on ASL verb agreement. In Language,
gesture, and space, ed. Karen Emmorey and Judy Reilly, 195–223. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Japan Sign Language Research Institute (Nihon syuwa kenkyuusho), ed. 1997. Japanese
Sign Language dictionary (Nihongo-syuwa diten). Tokyo: Japan Federation of the
Deaf.
Liddell, Scott K. 1990. Four functions of a locus: Reexamining the structure of space
in ASL. In Sign language research: Theoretical issues, ed. Ceil Lucas, 176–198.
Washington, DC: Gallaudet University Press.
Liddell, Scott K. 1994. Tokens and surrogates. In Perspectives on sign language struc-
ture: Papers from the 5th International Symposium on Sign Language Research, ed.
Inger Ahlgren, Brita Bergman and Mary Brennan, 105–119. Durham: International
Sign Linguistics Association, University of Durham.
Liddell, Scott K. 1995. Real, surrogate, and token space: Grammatical consequences
in ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly,
19–41. Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott K. 2000. Indicating verbs and pronouns: Pointing away from agreement. In
The signs of language revisited: An anthology to honor Ursula Bellugi and Edward
Klima, ed. Karen Emmorey and Harlan Lane, 303–320. Mahwah, NJ: Lawrence
Erlbaum Associates.
Lillo-Martin, Diane. 1986. Two kinds of null arguments in American Sign Language.
Natural Language and Linguistic Theory 4:415–444.
Lillo-Martin, Diane. 1991. Universal grammar and American Sign Language: Setting
the null argument parameters. Dordrecht: Kluwer.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language.
In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 155–170.
Hillsdale, NJ: Lawrence Erlbaum.
Lillo-Martin, Diane. 1997. The modular effects of sign language acquisition. In Relations
of language and thought: The view from sign language and deaf children, ed.
Marc Marschark, Patricia Siple, Diane Lillo-Martin, Ruth Campbell and Victoria
Everhart, 62–109. New York: Oxford University Press.
Lillo-Martin, Diane, and Edward S. Klima. 1990. Pointing out differences: ASL pro-
nouns in syntactic theory. In Theoretical issues in sign language research, Vol. 1:
Linguistics, ed. Susan D. Fischer and Patricia Siple, 191–210. Chicago, IL:
University of Chicago Press.
Loew, Ruth. 1984. Roles and reference in American Sign Language: A developmental
perspective. Doctoral dissertation, University of Minnesota, MN.
Mathur, Gaurav. 2000. Verb agreement as alignment in signed languages. Doctoral
dissertation, MIT, Cambridge, MA.
Mathur, Gaurav and Christian Rathmann. 2001. Why not GIVE-US: An articulatory
constraint in signed languages. In Signed languages: Discoveries from international
research, ed. Valerie Dively, Melanie Metzger, Sarah Taub and Anne Marie Baer, 1–26. Washington, DC:
Gallaudet University Press.
Meier, Richard P. 1982. Icons, analogues, and morphemes: The acquisition of verb
agreement in ASL. Doctoral dissertation, University of California, San Diego, CA.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical is-
sues in sign language research, ed. Susan D. Fischer and Patricia Siple, 175–190.
Chicago, IL: University of Chicago Press.
Meier, Richard P. 2002. The acquisition of verb agreement: Pointing out arguments for
the linguistic status of agreement in signed languages. In Current developments
in the study of signed language acquisition, ed. Gary Morgan and Bencie Woll.
Amsterdam: John Benjamins.
Meir, Irit. 1998. Thematic structure and verb agreement in Israeli Sign Language. Doc-
toral dissertation, The Hebrew University of Jerusalem.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.
Padden, Carol A. 1983. Interaction of morphology and syntax in American Sign Lan-
guage. Doctoral dissertation, University of California, San Diego, CA.
Padden, Carol A. 1990. The relation between space and grammar in ASL verb mor-
phology. In Sign language research: Theoretical issues, ed. Ceil Lucas, 118–132.
Washington, DC: Gallaudet University Press.
Poizner, Howard, Edward S. Klima, and Ursula Bellugi. 1987. What the hands reveal
about the brain. Cambridge, MA: MIT Press.
Poulin, Christine and Christopher Miller. 1995. On narrative discourse and point of view
in Quebec Sign Language. In Language, gesture, and space, ed. Karen Emmorey
and Judy Reilly, 117–131. Hillsdale, NJ: Lawrence Erlbaum Associates.
Quadros, Ronice Müller de. 1999. Phrase structure of Brazilian Sign Language. Doctoral
dissertation, Pontifícia Universidade Católica do Rio Grande do Sul.
Rathmann, Christian. 2000. The optionality of Agreement Phrase: Evidence from signed
languages. Master's report, University of Texas, Austin, TX.
Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and nonlin-
earity in ASL phonology. Dordrecht: Foris.
Sandler, Wendy. 1993. Sign language and modularity. Lingua 89:315–351.
Sandler, Wendy, and Diane Lillo-Martin. 2001. Natural sign languages. In The handbook
of linguistics, ed. Mark Aronoff and Jamie Rees-Miller, 533–562. Malden, MA:
Blackwell.
Senghas, Ann. 2000. The development of early spatial morphology in Nicaraguan Sign
Language. In The Proceedings of the Boston University Conference on Language
Development, ed. S.C. Howell, S.A. Fish and T. Keith-Lucas, 696–707. Boston,
MA: Cascadilla Press.
Senghas, Ann, Marie Coppola, Elissa Newport, and Ted Supalla. 1997. Argument struc-
ture in Nicaraguan Sign Language: The emergence of grammatical devices. In The
Proceedings of the Boston University Conference on Language Development, ed.
E. Hughes, M. Hughes and A. Greenhill, 550–561. Boston, MA: Cascadilla Press.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwan Sign Language. In Theoretical
issues in sign language research, Vol. 1: Linguistics, ed. Susan D. Fischer and
Patricia Siple, 211–228. Chicago, IL: University of Chicago Press.
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American
Sign Language. Doctoral dissertation, University of California, San Diego, CA.
van der Hulst, Harry. 2000. Modularity and modality in phonology. In Phonological
knowledge: Conceptual and empirical issues, ed. Noel Burton-Roberts, Philip Carr
and Gerard Docherty. Oxford: Oxford University Press.
11 Applying morphosyntactic and phonological
readjustment rules in natural language negation
Roland Pfau
11.1 Introduction
As is well known, negation in natural languages comes in many different forms.
Crosslinguistically, we observe differences concerning the morphological char-
acter of the Neg (negation) element as well as concerning its structural position
within a sentence. For instance, while many languages make use of an indepen-
dent Neg particle (e.g. English and German), in others, the Neg element is affixal
in nature and attaches to the verb (e.g. Turkish and French). Moreover, a Neg
particle may appear in sentence-initial position, preverbally, postverbally, or in
sentence-final position (for comprehensive typological surveys of negation, see
Dahl 1979; 1993; Payne 1985).
In this chapter I am concerned with morphosyntactic and phonological prop-
erties of sentential negation in some spoken languages as well as in German
Sign Language (Deutsche Gebärdensprache or DGS) and American Sign Lan-
guage (ASL). Sentential negation in DGS (as well as in other sign languages) is
particularly interesting because it involves a manual and a nonmanual element,
namely the manual Neg sign NICHT ‘not’ and a headshake that is associated
with the predicate. Despite this peculiarity, I show that on the morphosyntactic
side of the Neg construction, we do not need to refer to any modality-specific
structures and principles. Rather, the same structures and principles that allow
for the derivation of negated sentences in spoken languages are also capable of
accounting for the sign language data.
On the phonological side, however, we do of course observe modality-specific
differences; those are due to the different articulators used. Consequently, in
a phonological feature hierarchy for signed languages (like the one proposed
for ASL by Brentari 1998), reference must be made to qualitatively differ-
ent features. In order to investigate the precise nature of the modality effect,
I first show how in some spoken languages certain readjustment rules may
affect phonological or morphosyntactic features in the context of negation.
In the Western Sudanic language Gã, for example, the Neg suffix triggers a
change of tone within the verb stem to which it is attached. I claim that, in
exactly the same way, phonological readjustment rules in DGS may change
the surface form of a given sign by accessing the nonmanual node of a feature
hierarchy.
This chapter is organized as follows: In Section 11.2 I give a short sketch of
the basic assumptions of the theoretical framework I adopt, namely the frame-
work of Distributed Morphology. Once equipped with the theoretical tools, I
demonstrate in Section 11.3 how negated structures in various languages can
uniformly be derived within this framework. In order to exemplify the relevant
mechanisms, I use negation data from French (Section 11.3.1), Háusá (Sec-
tion 11.3.2), Gã (Section 11.3.3), and DGS (Section 11.3.4). In Section 11.3.5
the analysis given for DGS is motivated by a comparison of DGS and ASL.
The discussion of the Gã and DGS examples illustrates the important role of
readjustment rules. Further instances of the application of different readjust-
ment rules in the context of negation are presented in Section 11.4. Finally, in
Section 11.5, I focus on possible modality-specific properties of negation. In par-
ticular, I compare properties of tone spreading in spoken languages to properties
of nonmanual spreading in DGS.
[Diagram: the model of grammar assumed in Distributed Morphology – DS (Deep Structure) is mapped onto SS (Surface Structure) via the manipulation of morphosyntactic and semantic feature bundles (movement and merger).]
1 Note that phonological readjustment rules are triggered by the morphological environment; that
is, an affix causes a phonological change in the stem it attaches to (e.g. umlaut formation in
some German plural nouns: Haus ‘house’ → Häus-er ‘house-pl’). They are therefore to be
distinguished from other phonological alterations like assimilation rules (e.g. place assimilation
in the words imperfect vs. indefinite) or final devoicing.
11.3.1 French
Probably the best known language with split negation is French. In French,
negation manifests itself in the two Neg elements ne and pas which frame the
verb; in colloquial speech the preverbal element ne may be dropped. Consider
the examples in (1).
Following Pollock (1989) and Ouhalla (1990), I assume that ne is the head
of the NegP while pas is its specifier. The tree structure in (2a) is adopted
from Pollock 1989; however, due to DM assumptions, there is no agreement
projection present in the syntax. In order to derive the surface serialization, the
verb is raised. It adjoins first to Neg and, in a second step, the newly formed
complex is adjoined to Tns. These movement operations are indicated by the
arrows in (2a).
[Tree diagram (2a): TnsP dominating a NegP whose specifier hosts pas and whose head hosts the optional (ne-); the VP complement contains the verb oublie and its DP object. Arrows mark the raising of V to Neg and of the resulting complex to Tns.]
The tree within the circle in (2b) shows the full detail of the adjoined struc-
ture under the Tns node after verb movement.3 At MS subject agreement is
3 The reader will notice that the derivation of the complex verb in the tree structure (2a) – as well as
in the structures to follow – involves a combination of left and right adjunction. I assume that the
prefix or suffix status of a given functional head is determined by its feature composition; French
Neg, for example, is marked as a prefix while Tns constitutes a suffix. Rajesh Bhatt (personal
communication) points out to me that within the DM framework, such an assumption may be
superfluous since DM provides a mechanism, namely merger, which is capable of rearranging
hierarchical structures at MS. In the present context, however, space does not permit me to go
into this matter any further.
implemented as a sister node of Tns (in this and the following structures,
the agreement nodes that are inserted at MS are marked by a square). As a
consequence, the derived structure of the French verb is [Neg–Verb–Tns–AgrS],
as shown by the verb in (1b).
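The successive head adjunctions can be followed step by step. The sketch below merely schematizes the derivation; the bracketed-list encoding of complex heads is an expository convention of this illustration, not part of the framework.

# Schematic head movement for French negation: V adjoins to Neg,
# the complex adjoins to Tns, and AgrS is added as a sister at MS.
verb = "oublie"
v_to_neg = ["ne-", verb]          # [Neg V]
neg_to_tns = [v_to_neg, "Tns"]    # [[Neg V] Tns]
ms_output = [neg_to_tns, "AgrS"]  # MS inserts AgrS as a sister of Tns

def flatten(head):
    # Read off the linear order of morphemes in the complex head.
    return [head] if isinstance(head, str) else [m for part in head for m in flatten(part)]

print(flatten(ms_output))  # ['ne-', 'oublie', 'Tns', 'AgrS'] = [Neg-Verb-Tns-AgrS]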
The Vocabulary item for French Neg (that is, for the negative head) is given
in (3); no readjustment rules apply.
11.3.2 Háusá
Háusá is a Chadic language and is the most widely spoken language of West
Africa. It is the lingua franca of Northern Nigeria, where it is the first language
of at least 25 million people. There are also about five million speakers in
neighboring Niger.
The properties of negation in Háusá are somewhat different from the ones
observed in French. As can be seen in the affirmative sentence in (4a), the tense–
aspect and agreement morphemes in Háusá surface preverbally as a functional
complex while the verb remains uninflected (Caron 1990).5 In all aspects except
the progressive, negation consists of two elements: the first Neg element, a low-
toned bà, is part of the functional complex; the second Neg element, a high-toned
bá, appears in sentence-final position (4b).
4 Otherwise, Colloquial French would present us with a situation that is at odds with the principles
of X-bar theory, namely a situation where the head of a projection is missing. Therefore, the
head of the French NegP is assumed to be realized as an abstract morpheme in much the same
way that the English Agr paradigm is realized in terms of abstract elements. Ouhalla (1990)
proposes to extend this analysis to the group of languages in which the negation element seems
to have adverb-like properties: in all of these, he concludes, the Neg elements are specifiers of
a NegP whose head is an abstract morpheme.
5 One might be tempted to analyze the preverbal functional complex as being prefixed to the
verb. Such an analysis is, however, not corroborated by the facts since certain emphatic and
adverbial particles may appear between the functional complex and the verb, as for example
in Náá kúsá káámàà shı́ ‘1.sg.perf almost catch him’ (‘I almost caught him’; example from
Hartmann 1999).
[Tree diagrams (5a,b): AspP with the subject DP in its specifier; the Asp head -kàn selects a NegP whose specifier, hosting bá, is on the right; the Neg head bà- takes as complement the VP containing the verb dáfà and its DP object. The circled detail in (5b) shows Neg adjoined to Asp.]
Again, the circle in (5b) gives full details of the adjoined structure. At MS
an agreement morpheme is inserted as a sister of the Asp node. Therefore, the
structure of the Háusá preverbal functional complex is [Neg–AgrS–Asp];
cf. (4b).
Example (6) shows the Vocabulary item for the negative head in Háusá, a
low tone prefix. Again, as in French, no readjustment of any kind takes place.
In Háusá, AgrS and Asp are sometimes fused; this is true, for example, for the
perfective aspect. As mentioned above, fusion reduces the number of terminal
nodes and only one Vocabulary item that matches the morphosyntactic features
of the fused node is inserted; cf. (7a). In a negative perfective sentence, fu-
sion may even take place twice and consequently only one Vocabulary item is
inserted for the whole functional complex; cf. (7b).
To sum up, we may note the following: As in French, we observe split negation
in Háusá which is a combination of morphological and particle negation; the
negative head bà- attaches to the aspect node while the particle bá appears in
sentence-final position. The syntactic structure, however, is different. The NegP
stands below Asp and – in contrast to the French facts – has a right specifier
position.
[Tree diagram: TnsP with the subject DP in its specifier; Tns selects a NegP with a left (empty) specifier; the Neg head -Ø dominates the VP containing the verb gbè and its DP object. The verb raises to Neg, and the complex raises on to Tns.]
At MS, AgrS and Tns fuse in the past (8a,b) and perfect (8c,d) tense and the
Vocabulary items mı̀- and mı́-, respectively, are inserted under the fused node.
In the affirmative future (8e), however, fusion does not take place. Since there is
no tense prefix in the negative future (8f ), we must assume either that the tense
feature is deleted or that Tns fuses with Neg. As we have seen above, each tense
specification implies the insertion of a different Neg morpheme. The Vocabulary
items for Neg are given in (10).
(11a) Readjustment rule triggered by the empty Neg affix (Gã past tense): the stem vowel is associated with a high tone [H] and receives the feature [+long].
In Section 11.3.4 I show that the Gã past tense pattern parallels the one we
observe in DGS. Before doing so, however, let me sum up the facts for Gã. In
contrast to French and Háusá, Gã does not make use of split negation. Once
again, the head of the NegP is affixal in nature, but this time the specifier position
is empty (that is, there is an empty operator in SpecNegP). Most remarkably, in
the past tense the specifier as well as the head of NegP are void of phonological
material; there is, however, an empty affix in Neg (as in colloquial French)
that triggers the obligatory readjustment rule in (11a). In Gã (as in French), the
NegP is situated below Tns and shows a left specifier.
[Tree diagram (13a): NegP dominating TnsP, with NICHT in the right specifier of NegP and the empty affix -Ø in Neg; the subject DP occupies SpecTnsP, and the VP contains the object DP and the verb KAUF. The verb raises to Tns and then on to Neg.]
As can be seen, I assume that from a typological point of view DGS belongs
to the class of languages with split negation. The manual sign NICHT is base
generated in the specifier position of the Neg phrase while the head of the NegP
contains an empty affix that is attached to the verb stem in the course of the
derivation (for motivation of this analysis, see Section 11.3.5). In the syntax,
the verb raises to Tns and then to Neg. Note that I do not consider agreement
verbs in the present context (on the implementation of agreement nodes, see
Glück and Pfau 1999 and Pfau and Glück 1999). In the case of a plain verb – as
in (12) – insertion of agreement morphemes does not take place. The Vocabulary
item for Neg in DGS is a zero affix:
(14) Vocabulary item for Neg in DGS
[neg] ↔ -Ø
As in the Gã past tense, the Neg feature realized by the empty affix triggers
a phonological readjustment rule that leads to a stem-internal modification.
In DGS, the rule in (15) applies to the nonmanual component of the featural
description of the verb sign by adding a headshake to the nonmanual node of
the phonological feature hierarchy (for discussion, see Section 11.5).8
(15) Readjustment rule triggered by Neg (addition of nonmanual feature):
nonmanual → nonmanual [headshake] / [neg]
Note that it is not at all uncommon for empty affixes to trigger readjustment
rules in spoken languages as well. For example, ablaut in the English past tense
form sang is triggered by an empty Tns node, while in the German plural noun
Väter ‘fathers’ (singular Vater) umlaut is triggered by an empty plural suffix.
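The division of labor between the zero Vocabulary item in (14) and the readjustment rule in (15) can be made concrete as two ordered steps. The sketch below is a minimal illustration, assuming an invented feature-structure encoding of signs.

# Sketch: a zero Neg affix whose insertion feeds a phonological
# readjustment rule adding [headshake] to the verb's nonmanual node.
kauf = {"gloss": "KAUF", "nonmanual": set(), "affixes": []}

def insert_neg_vocabulary(verb):
    # Vocabulary insertion (14): [neg] <-> zero affix.
    verb["affixes"].append({"features": {"neg"}, "form": ""})
    return verb

def readjust(verb):
    # Readjustment (15): in the context of [neg], add [headshake]
    # to the nonmanual node of the verb's featural description.
    if any("neg" in affix["features"] for affix in verb["affixes"]):
        verb["nonmanual"].add("headshake")
    return verb

negated = readjust(insert_neg_vocabulary(kauf))
print(negated["gloss"], sorted(negated["nonmanual"]))  # KAUF ['headshake']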
Sentential negation in DGS is therefore characterized by the following facts:
Like French and Háusá, DGS belongs to the group of languages that show split
negation. The manual element NICHT is base generated in SpecNegP; this
element is, however, optional. (In this respect DGS differs from French where
the negative head was shown to be optional.) As far as the negative head is
concerned, DGS resembles the Gã past tense in that an empty affix occupies
this position that is capable of triggering a phonological readjustment rule.
Since NICHT appears in sentence-final position, I assume that SpecNegP in
DGS (as in Háusá) is on the right.
8 In the present chapter I focus on negation in DGS. For a more comprehensive Distributed Mor-
phology account of verbal inflection in DGS (e.g. Vocabulary items for agreement morphemes
and phonological readjustment rules triggered by empty aspect and agreement [classifier] suf-
fixes), see Glück and Pfau 1999; Pfau and Glück 1999.
What distinguishes DGS from all the languages discussed so far is the po-
sition of NegP vis-à-vis TnsP within the syntactic structure. In contrast to
French, Háusá, and Gã, Neg selects TnsP as its complement in DGS. Accord-
ing to Zanuttini (1991), a NegP can be realized in exactly two structurally
distinct positions, namely above TnsP – as in DGS and, on her analysis, in
Italian – and below TnsP – as exemplified by the other languages discussed
above.
[Tree diagram (17): TnsP with the subject in its specifier; Tns selects a NegP whose (empty) specifier precedes the Neg head, which hosts the manual sign NOT and the feature [+neg]; the complement of Neg is the VP containing V and its object.]
The differences between ASL and DGS are as follows. First of all, the specifier
of NegP is not filled by a lexical Neg element in ASL and, therefore, ASL does
not exhibit split negation. Moreover, since the manual element NOT in the head
of NegP is not affixal in nature, movement of the verb to Neg is blocked (that is,
the negation element does not surface as a constituent of the verbal complex).
9 Neidle et al. (2000) point out that in case the negative marking does not spread, the sentence
receives an emphatic interpretation.
These different properties allow for two interesting predictions. First, since
the ASL verb does not raise to Neg, we predict that it is possible for the verb to
be realized without an accompanying headshake when the manual Neg element
is present. This prediction is borne out, as is illustrated by the grammaticality
of example (16b). In contrast to that, headshake on the manual element alone
gives rise to ungrammaticality in DGS due to the fact that verb movement to
Neg is forced by the Stray Affix Filter (18a). That is, in DGS the empty Neg
affix is always attached to the verb and triggers the phonological readjustment
rule (15). Second, for the same reason, the headshake in DGS has an element to
be associated with (i.e. the verb) even when the manual sign is dropped. This is
not, however, true for ASL where verb movement does not apply. When NOT is
dropped, it is impossible for the headshake to spread only over the verb. Rather,
spreading has to target the entire c-command domain of Neg, that is, the entire
VP. Consequently, the ASL example (18b) is ungrammatical, in contrast to the
DGS example (18c).10
10 In her very interesting study, Wood (1999) shows that in ASL, VP movement may occur in which
the entire VP moves to SpecNegP leaving NOT behind in sentence-final position. Consequently,
the serialization MARY NOT BREAK FAN (without VP shift) is as grammatical as the sequence
MARY BREAK FAN NOT (in which VP shift to SpecNegP has applied). As expected, VP shift
to SpecNegP is impossible in DGS since this position is taken by the manual sign NICHT (hence
the ungrammaticality of *MUTTER NICHT BLUME KAUF ‘mother not flower buy’ which
should be a possible serialization if NICHT occupied the head of NegP as in ASL).
that from a typological point of view DGS and ASL are different in that ASL
does not show split negation.
Furthermore, the nonmanual marking in ASL is not introduced by a phono-
logical readjustment rule (in contrast to DGS where this rule is triggered by an
empty affix), since in ASL the verb does not raise to Neg. Rather, the nonman-
ual marking is associated with the manual sign in the negative head and it is
forced to spread whenever there is no lexical carrier. This spreading process is
determined by syntactic facts, that is, it may not simply pick the neighboring
sign but rather has to spread over all hierarchically lower elements (over its
c-command domain). For further comparison of DGS and ASL negation, see
Pfau and Glück 2000.11
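On this analysis the ASL spreading domain is computable from the structure itself: when NOT lacks an overt realization, the headshake must cover every terminal that Neg c-commands, i.e. the whole VP. A toy rendering follows; the tuple encoding of trees and all function names are invented, and the example signs are taken from footnote 10.

# Toy computation of the ASL spreading domain: the c-command
# domain of Neg is the set of terminals of its sister, the VP.
tree = ("NegP", ("Neg", "NOT"), ("VP", ("V", "BREAK"), ("DP", "FAN")))

def terminals(node):
    if isinstance(node, str):
        return [node]
    label, *children = node
    return [t for child in children for t in terminals(child)]

def headshake_domain(negp, not_is_overt):
    _, neg_head, vp = negp
    if not_is_overt:
        return terminals(neg_head)  # the headshake may sit on NOT alone
    return terminals(vp)            # otherwise it must cover the entire VP

print(headshake_domain(tree, True))   # ['NOT']
print(headshake_domain(tree, False))  # ['BREAK', 'FAN']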
11 Unfortunately, the examples given for other sign languages (see references in footnote 6) do not
allow for safe conclusions about their typological classification. From a few examples cited in
Zeshan (1997:94), we may – albeit very tentatively – infer that Pakistani Sign Language patterns
with DGS in that the manual Neg sign follows the verb (i) and the headshake may be associated
with the verb sign only in case the Neg sign is dropped (ii).
hs
i. DEAF VAH SAMAJH NAHI:N’
deaf index understand neg
‘The deaf do not understand.’
hs
ii. PA:KISTA:N INTIZ”A:M SAMAJH
Pakistan organize understand
‘The Pakistani do not know anything about organization.’
I take the derivation in Estonian to parallel the one described for French above:
in the syntax, the verb raises and adjoins to the negative head ei and the whole
complex raises further to Tns. The optional Neg particle mitte stands in Spec-
NegP. Contrary to the French data, however, a readjustment rule is active at
MS in Estonian in the context of negation. More precisely, we are dealing
with a rule of impoverishment that deletes the AgrS feature whenever the
sentence is negated (compare Halle 1997; Noyer 1998). The Vocabulary item
for Estonian Neg is given in (20), and the relevant readjustment rule is given
in (21).
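Impoverishment of this sort is simply feature deletion prior to Vocabulary insertion. A minimal sketch, assuming an invented feature-set encoding of the relevant terminal node:

# Minimal sketch of impoverishment: the AgrS feature is deleted
# in the context of [neg], before Vocabulary insertion applies.
def impoverish(node_features):
    if "neg" in node_features:
        node_features.discard("agrS")  # (21): no agreement under negation
    return node_features

print(sorted(impoverish({"neg", "agrS", "past"})))  # ['neg', 'past']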
Finally, what we can safely infer from the data is that at MS, two agreement
morphemes must be implemented, one for subject and one for object agreement.
These morphemes will subsequently fuse and only one Vocabulary item that
matches the feature description of the fused node will be inserted; for example,
ŋi- in (24).
Another striking modification, this time one of phonological nature, is ob-
served in the Tungusic language Nanai spoken in Eastern Siberia and a small
area in Northeastern China. In Nanai, the final vowel of a verb stem is lengthened
in order to express negation (and a distinct tense marker is used). Diachroni-
cally, this modification is due to the fusion of the verb with the formerly used
separate negative auxiliary ə (which is still used, for example, in the related
languages Orok and Evenki):
Consequently, the relevant readjustment rule has to target the quantity of the
stem-final vowel, as sketched in (28).
(28) Readjustment rule triggered by Neg (Nanai): the stem-final vowel receives the feature [+long].
The readjustment rule in (30) takes into account that in Shónà, the
particular change of vowel is observed in the present and future tense
only.13
In Venda (a language spoken in the south African homeland of the same name),
a similar process applies in the present (31a,b) and in the future tense, in the
latter only when the Tns suffix is deleted (31c,d). Consequently, an alternative
form of the negated future in (31d) would be à-rı́-ngá-dó-shúmà (no deletion
of Tns, no vowel change; compare Poulos 1990:259).
What is particularly interesting with respect to the Venda data is the fact that
together with the vowel change a tone change comes into play. For the high
tone verb ù-shúmá ‘to work’, the final vowel is not only changed from a to i
(as in Shónà) but also receives a low tone in the above examples. In this sense,
readjustment in Venda can be seen as a combination of what happens in Shónà
(30) with a tone changing rule (like the one in (33) below). Note that tone
patterns of inflected verbs crucially depend on the basic tone pattern of the
respective verb stem and that, moreover, tone changes differ from tense to tense
(for details, see Poulos 1990:575ff ).
Kinyarwanda is another language of the Bantu family spoken by about six
million people in Rwanda and neighboring Zaı̈re. Negation in Kinyarwanda
is comparable to what we have observed in Gã and Venda since a change of
tone is involved. For every person except the 1st person singular, negation is
expressed by the prefix nt(ı̀)- (32b); for the 1st person singular the prefix is
sı̀- (32d). The interaction of tense and aspect morphemes is quite intricate and
shall not concern us here. Of importance, however, is the fact that with negation
13 Shónà has two past tenses – the recent and the general past – both of which are negated by
means of the negated form of the auxiliary -né ‘to have’ plus infinitive. This auxiliary follows
another readjustment rule, which I will not consider here.
a lexical high tone on the verb is erased. Moreover, the tone on the aspect suffix
is sometimes raised; compare (32b).
The aspect suffix in (32c,d) is actually -yè. But the glide y combines with
preceding consonants in a very complex way; for monosyllabic stems ending in
r the rule is: r + y = z (Overdulve 1975:133). The phonological readjustment
rule for high tone verbs is given in (33); what we observe is a case of high tone
delinking:
Gã, Venda, and Twi – which are taken to be autosegmental in nature (Goldsmith
1979; 1990).14
Table 11.1 sums up and compares the languages discussed above. It shows
by what means negation is expressed, that is, if a certain language involves
split and/or morphological negation. Moreover, the Vocabulary item for Neg
(for the negative head) is given and it is indicated what kind of readjustment
rule (if any) applies at the grammatical level of MS. After having presented
further data that make clear in which manner readjustment rules are sometimes
14 Becker-Donner (1965) presents remarkable data from Mano, a Western Sudanic language spoken
in Liberia. In Mano, one way of expressing negation is by a tone change alone. This tone change,
however, appears on the pronoun and not on the verb: Ko yı́dò ‘We know.’, Kô yı́dò ‘We do
not know.’ Since it is not clear from her data if the pronoun could possibly be analyzed as an
agreement prefix, I do not discuss this example further. In any case, it is different from all the
examples discussed above because negation neither introduces a separate morpheme nor
affects the verb stem at all.
[Feature hierarchy (Brentari 1998, partial): an inherent-features branch with an articulator node (manual: H1, H2; nonmanual) and a place-of-articulation node, and a prosodic-features branch with setting, path, orientation, and aperture nodes as well as a nonmanual node.]
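For concreteness, the portion of the hierarchy at issue can be rendered as a nested structure in which both branches carry a nonmanual node. The encoding below is an informal stand-in for Brentari's model, not a faithful reproduction of it.

# Informal stand-in for the two branches of the Prosodic Model
# (Brentari 1998); only nodes discussed in the text are included.
sign_features = {
    "inherent": {
        "articulator": {"manual": {"H1": {}, "H2": {}}, "nonmanual": {}},
        "place_of_articulation": {},
    },
    "prosodic": {
        "setting": {}, "path": {}, "orientation": {}, "aperture": {},
        "nonmanual": {},  # the node targeted by readjustment rule (15)
    },
}
print(sorted(sign_features["prosodic"]))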
In Brentari’s Prosodic Model, a fundamental distinction is made between in-
herent features and prosodic features, both of which are needed to achieve all
lexical contrasts. Since both branches contain a nonmanual node, a few words
need to be said about the status of the negative headshake.
In Brentari’s definition of inherent and prosodic features both are said to be
“properties of signs in the core lexicon” (1998:22). This, however, is not true
for the kind of nonmanual modification I have discussed above: the negative
headshake on the verb sign is not part of the sign’s lexical entry.15 Rather, it
is added to the feature description of the sign by means of a morphosyntactic
operation. Still, the presence of this feature in the surface form of the verb sign
needs to be accounted for in the featural description of the sign (the same is true
for other morphosyntactic and morphological features; e.g. movement features
in aspectual modulation or handshape features in classification).
As far as the negative headshake on the verb is concerned, I assume that in
DGS it is part of the prosodic branch of the feature hierarchy for the following
reasons:
• The negative headshake is a dynamic property of the signal.
• The negative headshake has autosegmental status; that is, it behaves in a way
similar to tonal prosodies in tone languages.
• The negative headshake appears to be synchronized with movement features
of the manual component of the sign.16
• The negative headshake is capable of spreading.
15 The negative headshake on the Neg sign NICHT is, however, part of the lexical entry of this
sign. For this reason, the nonmanual marking on NICHT was represented by a separate line in
(13b) above, implying that the negative headshake on NICHT is not due to a spreading process.
In the actual utterance, however, the headshake is realized continuously.
16 Brentari (1998:173) notes that outputs of forms containing both manual and nonmanual
prosodic features are cotemporal. She mentions the sign FINALLY, which has the accompanying
The fourth criterion deserves some comments. In the DGS examples (12b) and
(18c) above, the negative headshake was indicated as being associated with
the verb sign only. It is, however, possible for the headshake to spread onto
neighboring constituents, for example onto the direct object BLUME ‘flower,’
as indicated in (36a). It is not possible for the headshake to spread onto parts
of phrases as is illustrated by the ungrammaticality of example (36b) in which
the headshake is associated with the adjective ROT ‘red’ only.
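The generalization can be stated as a constituency condition: the headshake may extend from the verb over neighboring material only in whole constituents. A toy check follows, with an invented encoding of the object DP (its internal order is immaterial here).

# Toy constituency check: spreading may cover the whole object DP
# but never only part of it; cf. (36a) vs. (36b).
object_dp = ["ROT", "BLUME"]  # the DP 'red flower'

def spreading_ok(covered):
    # `covered` is the object-DP material the headshake extends over.
    return covered == [] or covered == object_dp

print(spreading_ok(["ROT", "BLUME"]))  # True: whole constituent, cf. (36a)
print(spreading_ok(["ROT"]))           # False: part of a phrase, cf. (36b)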
Following the analysis sketched above, the headshake on the direct object in
(36a) has its origin in phonological readjustment of the verb stem, that is, a
prosodic feature of the verb has spread onto a neighboring constituent.17 Since
I claim in the second criterion above that the negative headshake behaves in a
way similar to tonal prosodies in tone languages, we now need to consider if
similar phenomena are in fact attested in spoken languages. The question is: Are
tones in spoken languages capable of spreading across word boundaries? And
if so, is the spreading process comparable to the one observed for nonmanuals
in DGS?
nonmanual component ‘pa,’ an opening of the mouth which is synchronized with the beginning
and end of the movement of the manual component. Moreover, nonmanual behavior may imitate
the movement expressed in the manual component (e.g. in the sign JAW-DROP in which the
downward movement of the dominant hand is copied by an opening of the mouth). Similarly,
for a DGS sign like VERSTEH ‘understand’ which is signed with a V-hand performing a back
and forth movement on the side of the forehead, the side-to-side movement of the negative
headshake is synchronized with the movement of the manual component to indicate negation
of that verb.
17 If spreading of the nonmanual marking was in fact syntactically determined in DGS (as was
claimed to be true for ASL in Section 11.3.5), then it should not be optional. Recall that in ASL,
spreading of the nonmanual marking over the entire c-command domain of Neg is obligatory
whenever the manual Neg element is dropped. Note that there is no difference in interpreta-
tion between the DGS sentence with headshake over the verb sign only (18c) and the sentence
with headshake over the verb sign and the object DP (36a). Most importantly, the former
variant does not imply constituent negation (as in ‘John did not buy flowers, he stole them’).
Also note that my above analysis of DGS negation is not to imply that all syntactic phenom-
ena in DGS that are associated with nonmanual markings (e.g. wh-questions, topicalizations)
result from the application of phonological readjustment rules (that are triggered by empty
affixes). Rather, I assume that these phenomena are much more similar to the corresponding
ASL constructions in that the spreading of the respective nonmanual marking is syntactically
determined.
The answer to the first question is definitely positive. In the literature, the
relevant phenomenon is usually referred to as external tone sandhi.18 Below,
I present representative examples from the three Bantu languages Kinande,
Setswana, and Tsonga.19
In Kinande (spoken in Eastern Zaı̈re), the output of lexical tonology provides
tone bearing units that have high or low tone or are toneless. In (37), e- is the
initial vowel (IV) morpheme and ki- is a class 7 noun prefix. In a neutral
environment, that is, one where no postlexical tone rules apply, the two sample
nouns e-ki-tabu ‘book’ and e-ki-tsungu ‘potato’ surface as è-kı̀-tábù and è-kı̀-
tsùngù, respectively. However, in combination with the adjective kı́-nénè ‘big,’
whose prefix bears a high tone, a change is observed: the high tone of the
adjective prefix spreads leftwards onto the last syllable of the noun. It does not
spread any further as is illustrated by example (37b) (in the following examples
the site(s) of the tone change are underlined).
(37) Regressive high tone spreading in Kinande (Hyman 1990:113)
a. è-kı̀-tábù → è-kı̀-tábú kı́-nénè
iv-7-book iv-7-book pre-big
‘big book’
b. è-kı̀-tsùngù → è-kı̀-tsùngú kı́-nénè
iv-7-potato iv-7-potato pre-big
‘big potato’
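The pattern lends itself to a procedural statement: the high tone of the adjective prefix links leftward to exactly one additional tone-bearing unit, the final syllable of the noun. A toy implementation follows; the (syllable, tone) representation flattens the autosegmental analysis and is a simplification of this illustration.

# Toy external tone sandhi: the H of a following high-toned prefix
# spreads leftward onto the final syllable of the preceding noun.
def regressive_h_spread(noun, adjective):
    if adjective and adjective[0][1] == "H":
        syllable, _ = noun[-1]
        noun = noun[:-1] + [(syllable, "H")]  # relink the final TBU to H
    return noun

noun = [("e", "L"), ("ki", "L"), ("tsu", "L"), ("ngu", "L")]  # e-ki-tsungu
adjective = [("ki", "H"), ("ne", "H"), ("ne", "L")]           # ki-nene
print(regressive_h_spread(noun, adjective))
# [('e', 'L'), ('ki', 'L'), ('tsu', 'L'), ('ngu', 'H')]  cf. (37b)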
Other remarkable tone sandhi phenomena are described by Creissels (1998) for
Setswana (spoken in South Africa and Botswana). By themselves, the Setswana
words bàthò 'persons' and bàŋwì 'certain, some' have no high tone, and no high
tone appears when they combine in a phrase; compare (38a). In (38b), however,
the high tone of the morpheme lí- 'with (comitative)' that is prefixed to the
noun spreads rightwards onto three successive syllables.20
18 Internal tone sandhi refers to tone spreading within the word boundary, as exemplified by the
complex Shónà verb kù-téng-és-ér-á ‘inf-buy-caus-to-vs’ (‘to sell to’) where the root /teng/
is assigned a single high tone on the tonal tier that spreads rightwards onto the underlyingly
toneless extensional and final-vowel (VS) suffixes (see Kenstowicz 1994:332, whose discussion
builds on results by Myers 1987).
19 I am very grateful to Scott Myers who brought this phenomenon to my attention and was so
kind to supply some relevant references.
20 As a matter of fact, the high tone spreads first to two successive toneless syllables inside the
noun. Since by that the final syllable receives high tone, the conditions for the application of
a spreading rule operating at word boundaries are created and the high tone may thus spread
further, from the final syllable of the noun to the first syllable of the following word. Compare
the phrase lí-bálímì bàŋwì 'with-farmers certain' in which the noun bàlìmì has three low tone
syllables; therefore, spreading of the high tone from the prefix does not affect the final syllable
and cannot proceed further across the word boundary (Creissels 1998:151). Consequently, what
we observe in (38b) is a combination of internal and external tone sandhi.
21 It should be noted, however, that in Tsonga (as in many other Bantu languages) a number of
consonants (depressor consonants) prevent the process of progressive tonal spreading to go
beyond them (Baumbach 1987:53ff; for depressor consonants in Ikalanga [Botswana], also see
Hyman and Mathangwane 1998).
H L H (tonal tier)
vánà vèkìjìlà nkhùkù ndòrì nkhùndù jàngù
children while.3.PL.eat chickens little red my
'while the children eat those little red chickens of mine'
turns a prepausal sequence of high tones into low ones. Thus, across-the-board
lowering of high tones is explained by postulating that there is only a single
high tone in such cases.
However, a similar explanation is not available for the potential across-the-
board spreading of nonmanuals in DGS. In contrast to Kipare where the whole
sequence is assumed to be linked to a single prosodic feature (H), no such
feature is present in the DGS examples; that is, the possibly complex object DP
is not linked to a single nonmanual feature of any kind. Consequently, if at all,
we may speculate that in the context of spreading properties, we are actually
encountering a modality effect.
A possible reason for this modality effect might be that in spoken languages
every tone bearing unit must bear a tone and every tone must be associated
with a tone bearing unit; consequently, no vowel can be articulated without a
certain tone value. Due to this restriction, spreading of tone requires repeated
delinking or change of a tone feature. Across-the-board spreading (as in Kipare)
is possible only when the whole sequence is multiply-linked to a single H. But
this observation does not hold for the sign language data under consideration
since skeletal positions in DGS are not necessarily inherently associated with a
prosodic (nonmanual) feature, say, a headshake. For this reason, the spreading
of the nonmanual in DGS does not imply delinking or feature change; rather,
a feature is added to the featural description of a sign. For the same reason,
assuming a single multiply-linked prosodic feature is not a possible explanation
for the DGS facts.25
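The putative modality effect can then be restated as a difference in what spreading does to the receiving units. The contrast below is deliberately schematic (both representations are invented): tone spreading must delink and replace values already present, whereas nonmanual spreading merely adds a value to slots that may be empty.

# Schematic contrast: every spoken-language TBU already bears a tone,
# so spreading overwrites (delinks) existing values; DGS skeletal slots
# need not bear a nonmanual value, so spreading simply adds one.
def spread_tone(tbus, tone):
    return [tone for _ in tbus]  # repeated delinking and relinking

def spread_nonmanual(slots, feature):
    return [values | {feature} for values in slots]  # pure addition

print(spread_tone(["L", "L", "L"], "H"))  # ['H', 'H', 'H']
print(spread_nonmanual([set(), set(), set()], "headshake"))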
11.6 Conclusion
The general picture that emerges from the above discussion of spoken language
and signed language negation is that the processes involved in the derivation
of negated sentences are in fact very similar. Not surprisingly, the language-
specific syntactic structures are subject to parametric variation as far as, for
example, the selectional properties of functional heads and the position of spec-
ifiers are concerned (however, for a different view, see Kayne 1994; Chomsky
1995). Still, the relevant syntactic (head movement, adjunction) and morphosyn-
tactic (merger, fusion) operations are exactly the same. Moreover, in both modal-
ities readjustment rules may apply at the postsyntactic levels of Morphological
25 Note that other nonmanual features, such as raised eyebrows or head tilt, do not interfere with
the negative headshake since nonmanuals may be layered in signed languages (Wilbur 2000).
A hypothetical test case for a blocking effect would be one in which a nonmanual on the same
layer interferes with the spreading process, for example, an element within the object DP which
is associated with a headnod. In such a case, it would be interesting to examine if the headnod is
delinked and spreading of the headshake proceeds in the familiar way, or if the headnod rather
blocks further spreading of the headshake. I did not, however, succeed in constructing such an
example (possibly due to semantic awkwardness).
In DGS the feature that is introduced by phonological readjustment is the
nonmanual feature [headshake]. Referring to the phonological feature hierar-
chy proposed by Brentari (1998), I claimed that feature to be a prosodic one.
Interestingly, the headshake, which is initially associated only with the verb
sign, is capable of spreading over neighboring constituents. However, similar
spreading of prosodic material (external tone sandhi) has been shown to apply
in some spoken languages.
Are there therefore no modality effects at all? In Section 11.5 I tentatively
claim that one such effect might be due to the different nature of the prosodic
material involved. While tone bearing units in spoken languages must be as-
sociated with some tone value, it is not the case that similar units in signed
languages must always be associated with some value for a given nonmanual
feature (e.g. headshake or headnod). Consequently, spreading of the non-
manual is not blocked by interfering prosodic material (on the same layer) and
may, therefore, proceed over a sequence of words. Once again, it must be
emphasized that further research is needed in order to evaluate the validity
of this claim.
Acknowledgments
I am particularly indebted to my colleague Susanne Glück for fruitful mutual
work and many stimulating discussions. I am also very grateful to Daniela Happ
and Elke Menges for their invaluable help with the DGS data. Moreover, I wish
to thank Rajesh Bhatt, Katharina Hartmann, Meltem Kelepir, Gaurav Mathur,
Scott Myers, Carol Neidle, Christian Rathmann, and Sandra Wood, as well as
an anonymous reviewer, for their comments on an earlier version of this chapter.
11.7 References
Ablorh-Odjidja, J. R. 1968. Ga for beginners. Accra: Waterville Publishing.
Baumbach, E. J. M. 1987. Analytical Tsonga grammar. Pretoria: University of South
Africa.
Becker-Donner, Etta. 1965. Die Sprache der Mano (Österreichische Akademie der
Wissenschaften, Sitzungsbericht 245 (5)). Wien: Böhlaus.
Bergman, Brita. 1995. Manual and nonmanual expression of negation in Swedish Sign
Language. In Sign language research 1994: Proceedings of the Fourth European
Congress on Sign Language Research, ed. Heleen Bos and Trude Schermer, 85–
103. Hamburg: Signum.
Boyes Braem, Penny. 1995. Einführung in die Gebärdensprache und ihre Erforschung.
Hamburg: Signum.
Harley, Heidi and Rolf Noyer. 1999. Distributed Morphology. Glot International 4:3–9.
Hartmann, Katharina. 1999. Doppelte Negation im Háusá. Paper presented at Generative
Grammatik des Südens (GGS 1999). Universität Stuttgart, May 1999.
Hyman, Larry M. 1990. Boundary tonology and the prosodic hierarchy. In The
phonology-syntax connection, ed. Sharon Inkelas and Draga Zec, 109–125.
Chicago, IL: University of Chicago Press.
Hyman, Larry M. and J.T. Mathangwane. 1998. Tonal domains and depressor consonants
in Ikalanga. In Theoretical aspects of Bantu tone, ed. Larry M. Hyman and Charles
W. Kisseberth, 195–229. Stanford, CA: CSLI.
Kayne, Richard. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA:
Blackwell.
Liddell, Scott. 1980. American Sign Language syntax. The Hague: Mouton.
Lyovin, Anatole V. 1997. An introduction to the languages of the world. Oxford: Oxford
University Press.
McCarthy, John. 1988. Feature geometry and dependency: A review. Phonetica 45:
84–108.
Myers, Scott. 1987. Tone and the structure of words in Shona. Doctoral dissertation,
University of Massachusetts, Amherst, MA. (Published by Garland Press, New
York. 1990.)
Neidle, Carol, Benjamin Bahan, Dawn MacLaughlin, Robert G. Lee, and Judy Kegl.
1998. Realizations of syntactic agreement in American Sign Language: Similarities
between the clause and the noun phrase. Studia Linguistica 52:191–226.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.
Noyer, Rolf. 1998. Impoverishment theory and morphosyntactic markedness. In Mor-
phology and its relation to phonology and syntax, ed. S. G. Lapointe, D.K. Brentari
and P. M. Farrell, 264–285. Stanford: CSLI.
Odden, David. 1995. Tone: African languages. In The handbook of phonological theory,
ed. John A. Goldsmith, 444–475. Cambridge, MA: Blackwell.
Ouhalla, Jamal. 1990. Negation, relativized minimality and the aspectual status of aux-
iliaries. The Linguistic Review 7:183–231.
Overdulve, C. M. 1975. Apprendre la langue rwanda. The Hague: Mouton.
Payne, John R. 1985. Negation. In Language typology and syntactic description, Vol.1:
Clause structure, ed. Timothy Shopen, 197–242. Cambridge: Cambridge University
Press.
Perlmutter, David M. 1992. Sonority and syllable structure in American Sign Language.
Linguistic Inquiry 23:407–442.
Pfau, Roland. 2001. Typologische und strukturelle Aspekte der Negation in Deutscher
Gebärdensprache. In Gebärdensprachlinguistik 2000: Theorie und Anwendung, ed.
Helen Leuninger and Karin Wempe, 13–31. Hamburg: Signum.
Pfau, Roland and Susanne Glück. 1999. The pseudo-simultaneous nature of complex
verb forms in German Sign Language. In Proceedings of the 28th Western Conference
on Linguistics, ed. Nancy M. Antrim, Grant Goodall, Martha Schulte-Nafeh and
Vida Samiian, 428–442. Fresno, CA: CSU.
Pfau, Roland and Susanne Glück. 2000. Negative heads in German Sign Language
and American Sign Language. Paper presented at 7th International Conference
12.1 Introduction
Signed language research in recent decades has revealed that signed and spoken
languages share many properties of natural language, such as duality of pattern-
ing and linguistic arbitrariness. However, the fact that there are fundamental
differences between the oral–aural and visual–gestural modes of communica-
tion leads to the question of the effect of modality on linguistic structure. Var-
ious researchers have argued that, despite some superficial differences, signed
languages also display the property of formal structuring at various levels of
grammar and a similar language acquisition timetable, suggesting that the prin-
ciples and parameters of Universal Grammar (UG) apply across modalities
(Brentari 1998; Crain and Lillo-Martin 1999; Lillo-Martin 1999). The fact that
signed and spoken languages share the same kind of cognitive systems and
reflect the same kind of mental operations was suggested by Fromkin (1973),
who also argued that having these similarities does not mean that the differences
resulting from their different modalities are uninteresting. Meier (this volume)
compares the intrinsic characteristics of the two modalities and suggests some
plausible linguistic outcomes. He also comments that the opportunity to study
other signed languages in addition to American Sign Language (ASL) offers a
more solid basis to examine this issue more systematically.
This chapter suggests that a potential source of modality effect may lie in the
use of space in the linguistic and discourse organization of nominal expressions
in signed language. In fact, some researchers in this field have proposed that
space plays a relatively more prominent role in signed language than in spoken
language. As Padden (1990) claims, in spoken language space is only something
to be referred to; it represents a domain in our mental representation in which
different entities and their relations are depicted. On the other hand, space is
physically accessible and used for linguistic representation in signed language.
This includes not just the neutral signing space, but also space around or on
the signer’s body.1 Poizner, Klima and Bellugi (1987) distinguish two different
1 The space for representing syntactic relations with loci was originally proposed by Klima and
Bellugi (1979) as a horizontal plane in front of the signer at the trunk level. Kegl (1985) argued
that loci in the signing space are not restricted to this horizontal plane.
uses of space in signed language: spatial mapping and spatialized syntax. Spatial
mapping describes through signing the topographic or iconic layout of objects
in the real world. At the same time, certain syntactic or semantic properties like
verb agreement, pronominal, and anaphoric reference also use locations or loci
in space for their linguistic representation.
In fact, given that objects and entities are referred to through nominal expres-
sions in natural language, the relationship between syntactic structure, space,
and nominal reference in signed language requires detailed examination.
signing discourse, objects and entities are either physically present, or concep-
tually accessible through their associated loci in the signing space, or they are
simply being referred to in the universe of discourse. A research question thus
arises as to whether, or to what extent, the presence or absence of referents in the
signing discourse influences the linguistic organization of nominal expressions
in the language.
In what follows, we first present a description of the internal structure of
the nominal expressions of Hong Kong Sign Language (HKSL). Where ap-
propriate, comparative data from ASL and from spoken languages such as English
and Cantonese are also presented.
encode (in)definiteness through syntactic cues, such as the structure of nominal
expressions, syntactic position, as well as nonmanual markings. Toward the
end of the chapter, we provide an account of the distribution and interpretation
of certain nominal expressions in the HKSL discourse, using Liddell’s (1994;
1995) concept of mental spaces. We suggest that the types of mental space in-
voked during signing serve as constraints for the distribution and interpretation
of certain nominal expressions in the HKSL discourse.
The order of signs within nominal expressions in HKSL appears to be quite
variable, because the data reveal that the pointing sign and the numeral sign
may either precede or follow the noun.
12.3 Determiners
Figure 12.2 ‘That man eats rice’: 12.2a INDEXdet i ; 12.2b MALE; 12.2c
EAT RICE
4 All manual signs of HKSL in this chapter are glossed with capital letters. Where the data involve
ASL, they are noted separately. Hyphenation between two signs means that the two signs form a
compound. Underscoring is used when more than one English word is needed to gloss the sign.
Subscripted labels like INDEXdet are used to state the grammatical category of the sign and/or
how the sign is articulated. Subscripted indices on the manual sign or nonmanual markings like
eye gaze (e.g. egi ) are used to indicate the spatial information of the referent. INDEXdet i means
the definite determiner is pointing to a location i in space. As for nonmanual markings, ‘egA ’
means eye gaze directed toward the addressee; ‘egpath ’ means eye gaze that follows the path of
the hand; ‘rs’ refers to role shift in the signing discourse. In some transcriptions, RH refers to
the signer’s right hand and LH refers to the left hand.
5 Optionally, eye gaze may extend over only the initial determiner, rather than over the entire DP.
Figure 12.3 ‘Those men are reading’: 12.3a MALE; 12.3b INDEXdep-pl i ;
12.3c READ
6 MacLaughlin (1997) argues that ±definite features and agreement features are located in D in
ASL. Nonmanual markings like head tilt and eye gaze are associated with these semantic and
syntactic features.
7 Neidle et al. (2000) observe that SOMETHING/ONE may occur alone, and it is interpreted as
a pronominal equivalent to English ‘something’ or ‘someone.’
8 A distinction suggested by MacLaughlin (1997) is the presence of stress in the articulation of
numeral ONE. Our data show that stress occurs only in postnominal ONE.
Figure 12.5 ‘A female stole a dog’: 12.5a HAVE; 12.5b ONEdet/num ; 12.5c
FEMALE; 12.5d–e STEAL; 12.5f DOG
With [HAVE [ONEdet/num N]DP ], the ONE sign is interpreted as a numeral and
the sign sequence is similar to the existential constructions in Cantonese except
for the absence of a classifier in the constituent:9
9 ‘Jau’ (‘have’) in Cantonese is an existential verb which may be preceded by an adverbial such
as ‘nei dou’ (‘here’) or ‘go dou’ (‘there’):
i. Nei dou jau saam zek gai
Here have three cl chicken
‘There are three chickens here.’
Note that if the noun is singular and indefinite, the numeral is omitted, yielding a constituent
like the one below:
with this pattern of eye gaze, the introduction of the new referent is interpreted
as referring to a specific indefinite referent.11
What if the referent is indefinite and nonspecific? The data show that [ONEdet
N] in postverbal position may apply to a nonspecific indefinite referent (7a).
However, when the signer wishes to indicate that he or she is highly uncertain
about the identifiability of the referent, the index finger moves from left to right
with a tremoring motion involving the wrist. This sign usually combines with
an N, as shown in (7b) (see Figure 12.6) and (7c):
(7) a. FATHER LOOK FOR [ONEdet/num POLICEMAN]
‘Father is looking for a/one policeman.’
egpath egA
b. [INDEXpro-3p BOOK]DP GIVE [ONEdet-path PERSON]DP
‘His book was given to someone.’
c. [INDEXdet MALE] WANT TALK [ONEdet-path STUDENT]DP
‘That man wants to talk to a student.’
[ONEdet-path N] normally occurs in postverbal position and is accompanied
by round protruded lips, lowered eyebrows and an audible bilabial sound.
When this sign is produced, the signer’s eye gaze is never directed at a spe-
cific point in space; instead, it follows the path of the hand, suggesting that
there is no fixed referent in space. Note that this eye gaze pattern does not
spread to the noun. Usually, it returns to the addressee and maintains eye
contact with him (or her) when the noun is signed (7b).
11 Sometimes, a shift in eye gaze from the addressee to a specific location together with a pointing
sign is observed when the signer tries to establish a locus for the new referent:
egA egi
i. MALE INDEXadv i STEAL DOG
‘A man there stole the dog.’
This sign is taken to be an adverbial in our analysis.
Alternatively, eye gaze is directed at the addressee, maintaining eye contact with him through-
out the entire DP. Unlike ASL, ONEdet-path alone is not a pronominal and it is
[ONEdet-path PERSON] that is taken to be a pronominal equivalent to the En-
glish ‘someone.’ Relative to [ONEdet-path PERSON], it seems that [ONEdet-path
N] is not yet established firmly in HKSL, as the informants’ judgments on
this constituent are not unanimous, as is the case for other nominal expres-
sions. While all of our deaf informants accept [ONEdet-path PERSON], some
prefer [ONE N] or a bare noun to [ONEdet-path N] for nonspecific indefinite
referents.
In sum, in terms of nonmanual markings, definite determiners require that
eye gaze be directed to a specific location in space. On the other hand, the
signer maintains eye contact with the addressee when he introduces an in-
definite specific or nonspecific referent to the discourse. However, variation
is observed with the eye gaze pattern for indefinite nonspecific referents. The
ONEdet-path sign may also be accompanied by eye gaze that tracks the path of the
hand.
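These generalizations can be restated as a small decision procedure. The sketch below is a schematic summary only; the string labels are informal stand-ins for the nonmanual markings described above, and the variation reported for nonspecific indefinites is represented by returning two alternatives.

def eye_gaze_marking(definite, specific=None):
    # Eye gaze patterns reported for HKSL nominal expressions.
    if definite:
        return ["gaze directed at the referent's locus in space"]
    if specific:
        return ["eye contact maintained with the addressee"]
    # Indefinite nonspecific: variation is observed.
    return ["eye contact maintained with the addressee",
            "gaze follows the path of the hand (ONEdet-path)"]

print(eye_gaze_marking(definite=True))
print(eye_gaze_marking(definite=False, specific=True))
print(eye_gaze_marking(definite=False, specific=False))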
12.4 Pronouns
It has been assumed that pronouns are determiners (Abney 1987; Cheng and
Sybesma 1999). MacLaughlin (1997) argues that pronouns and definite deter-
miners in ASL are the same lexical element, base-generated in D. In HKSL
the pointing sign may be interpreted as a pronoun when signed alone, hence
glossed as INDEXpro . We assume that this manual sign is base-generated in
D and has a [+definite] feature. It can be inflected for person and number
(8a,b,c). Note also that (8d) is ambiguous; it can either be a pronominal or a
demonstrative.
egi
(8) a. [INDEXpro-3p i ]DP CRY
‘She cried.’
b. [INDEXpro-1p i ]DP LOVE [INDEXpro-3p j ]DP
‘I love him.’
c. [INDEXpro-1p i ]DP LOVE [INDEXpro-3p-pl j ]DP
‘I love them.’
d. [INDEXpro-3p i/det i ]DP TALL, [INDEXpro-3p j/det j ]DP SHORT
‘It/This (tree) is tall, it/this (tree) is short.’
In HKSL pronouns are optionally marked by eye gaze directed at the location
of the referent in space, similar to the definite determiner (8a). Based on the
observations made so far, INDEXdet and INDEXpro are associated with the
definiteness and agreement features in HKSL.
Figure 12.7 POSS: 12.7a neutral form; 12.7b form inflected toward the location of the possessor
12.5 Possessives
There are two signs for possessives in HKSL: a possessive marker, glossed as
POSS, and a sign similar to INDEXpro , which is interpreted as a possessive
pronoun. Similar to ASL, POSS is articulated with a B handshape with all the
extended fingers (thumb included) pointing upward. POSS may be neutral or
inflected such that the palm is oriented toward the location of the possessor
in space (see Figures 12.7a, 12.7b). As we shall see, this possessive marker is
highly restricted in distribution in HKSL. It differs from ASL in the following
respects. First, possessive DPs in HKSL that are transitive (i.e. subcategorize for
an NP) do not have an overt possessive marker, as in (9a) and (9b). Therefore,
(9c) is unacceptable in HKSL.12
egi
(9) a. [PETER CAR]DP BREAK DOWN
‘Peter’s car broke down.’
b. YESTERDAY I SIT [PETER CAR]DP
‘I rode in Peter’s car yesterday.’
c. *[PETERi POSSi CAR] OLD
‘Peter’s car is old.’
In ASL, possessive constructions require a possessive marker POSS that
agrees with the possessor (10a). Alternatively, POSS is a pronominal in (10b).
An equivalent structure in HKSL as shown in (11a) would be ruled out as
ungrammatical and POSS does not occur before the possessee as a pronominal
but INDEXpro does (11b):
(10) a. [FRANKi POSSi NEW CAR]DP (ASL data from Neidle et al.
2000:94)
‘Frank’s new car’
12 Some deaf signers accept this pattern; however, they admit that they are adopting signed
Cantonese, and the sequence can be translated as ‘Peter ge syu.’ The morpheme /ge/ is a
possessive marker in spoken Cantonese.
Figure 12.8 ‘That dog is his’: 12.8a INDEXdet i ; 12.8b DOG; 12.8c POSSdet j
egi
(14) a. [DOG]DP CATCH MOUSE (definite)
‘The dog caught a mouse.’
egA
b. I SEE [DOG]DP LIE INDEXadv (indefinite specific)
‘I saw a dog lying there.’
egA
c. I GO CATCH [BUTTERFLY]DP (indefinite nonspecific)
‘I’ll go and catch a butterfly.’
egA
d. I LIKE [VEGETABLE]DP (generic)
‘I like vegetables.’
In a study of narratives in HKSL (Sze 2000), bare nouns were observed to
refer to definite referents for about 40% of all the nominal expressions under
study and 58% for indefinite and specific referents, as shown by examples (15a)
and (15b):
(15) a. [DOG]DP CL:ANIMAL JUMP INTO BASKET (definite)
‘The dog jumped into a basket.’
b. [MALE]DP RIDE A BICYCLE (indefinite specific)
‘A man is riding a bicycle.’
Many spoken languages do not allow bare nouns for such a wide range of
referents. English bare nouns, for instance, refer to generic entities only. In
Cantonese bare nouns only yield a generic reading. They cannot be definite in
either preverbal or postverbal positions (16a). To be definite, the count noun
‘horse,’ if singular, requires a lexical classifier ‘zek’ to precede it and a mass
noun like ‘grass’ is preceded by a special classifier ‘di,’ as shown in (16b)
(Matthews and Yip 1994). In postverbal position, a bare noun may yield an
indefinite nonspecific reading (16c).
(16) a. [Maa]DP sik [cou]DP (generic/*definite)
Horse eat grass
‘Horses eat grass.’/*‘The horse is eating the grass.’
b. [Zek maa]DP sik gan [di cou]DP (definite)
cl horse eat asp cl grass
‘The horse is eating the grass.’
13 It is not clear whether ASL exhibits a similar pattern of distribution with bare nouns. The data
from Neidle et al. (2000) suggest that bare nouns in both preverbal and postverbal positions can
be either indefinite or definite.
14 Liddell’s concept of mental spaces actually differs from Fauconnier’s. The types of mental spaces
as described by Fauconnier (1985) are nongrounded (i.e. not in the immediate environment of
either the speaker or the addressee) and not physically accessible. The mental spaces proposed
by Liddell may be grounded and physically accessible.
15 We leave the debate on person distinctions in signed language open. For example, Meier (1990)
argues that ASL does not distinguish second and third person in the pronominal system.
16 Little signed language research to date has been conducted using the concept of mental spaces;
and most existing studies are concerned with pronominal reference and verb types in signed
language (Liddell 1995; van Hoek 1996).
17 Null arguments are also common in signed languages, and recently there has been a debate
on the recoverability of null arguments. Views have diverged into recoverability via discourse
topics (Lillo-Martin 1991) or via person agreement (Bahan et al. 2000).
In this context, the narrator is describing an event that happened the night before.
It involves a man hitting a woman. After introducing the man, the deaf signer
assumes the role of the male referent and hits at a specific location on his right
before he signs WOMAN, suggesting that the woman is standing on the right
side of the man who hits her. In both instances, the deaf signer gazes at the
addressee for MALE and WOMAN but his gaze turns to the direction of the
woman surrogate when he signs HIT.
We also found role shift to accompany bare nouns in HKSL; here, it is usually
associated with definite specific referents (18):
egA egA
(18) [MALE]DP CYCLE KNOW [MALE]DP BACK,
rsbody leans backward
[MALE]DP DRIVE CAR BE ARROGANT PRESS HORN
‘A man who was riding a bicycle knew that there was a male driving
a car behind him. The driver was arrogant and pressed the horn.’
This narrative features a driver and a cyclist. The cyclist in front notices that
there is a driver behind him. The driver arrogantly sounds the horn. Both men
in the event are introduced into the discourse using eye gaze directed at the ad-
dressee. However, to refer to the driver again as a definite referent, the signer’s
body leans backward to assume the role of the driver. Therefore, role shift in this
example is associated with a definite referent in surrogate space. However, role
shift appears to be more functional than grammatical since the data show that
this nonmanual marking spreads over the entire predicate (18). In other words,
role shift seems to cover the entire event predicate rather than a single nominal
expression.
Nevertheless, the use of eye gaze at the addressee to introduce an indefinite
specific referent as shown in (17) and (18) is quite common among the deaf
signers of HKSL. Alternatively, the signer may direct his eye gaze at a specific
location in space in order to establish a referential locus for the referent. The
latter phenomenon is also reported in Lillo-Martin (1991).
In a definite context, the bare noun is associated with either eye gaze directed
at the locus or role shift (19).
In our data, there are fewer bare nouns in token space than in surrogate
space. It could be that token space is invoked particularly upon the production
of classifier predicates. In this case, the referents are usually perceived to be
maximally accessible and INDEXpro is common. In fact, Liddell (1995) ob-
serves that the entities (tokens) in this type of mental space are limited to a third
person role in the discourse. Nevertheless, occasional instances of bare nouns
are found, as shown by the following example:
egi egj
(20) MALE PERSON BE LOCATEDi , FEMALE PERSON BE LOCATEDj ,
egj egi
INDEXpro-3p j j SCOLDi , MALE ANGRY, WALK TOWARD i HITj
‘A man is located here. A woman is located here (The man is placed
in front of the woman). She scolds him. The man becomes angry.
He walks toward her and hits her.’
12.7.2 Determiners
As discussed previously, a definite determiner necessarily agrees with the spa-
tial location associated with the referent. It follows that if a signer does not
conceptualize a location in the signing space for the referent, definite determin-
ers would not be used. In fact, INDEXdet in HKSL can be associated with both
proximal and distal referents in surrogate space, as in (21a) and (21b):
egi
(21) a. [INDEXdet (center-downward) i KID]DP SMART (proximal surrogate)
‘This kid is smart.’
egi
b. [INDEXdet (center-forward) i MALE]DP SMART (distal surrogate)
‘That man is smart.’
12.7.3 Pronouns
Although a pronoun normally implicates full accessibility and identifiability of
its referent through anaphoric relations, given a situation where there is more
than one referent in the discourse, the use of pronouns might fail the principle
of identifiability. A third person pronoun in Cantonese is phonetically realized
as ‘keoi’ (‘he/she/it’) and interpretation is crucially dependent upon contextual
information. INDEXpro in HKSL typically provides spatial location of the refer-
ent in the signing space, leading to unambiguous identifiability. In Cantonese,
where more than one referent is involved, a complex nominal expression or
proper name is used instead to identify the referent in the discourse. In HKSL,
INDEXpro is seldom ambiguous, since it is directed at the referent either in
the immediate environment or via its conceptual location in space. As a conse-
quence, INDEXpro is found in all kinds of mental spaces, but more prominently
in real space and token space. In token space, it is common to use INDEXpro
directed at the classifier in the predicate construction. Prior to the articulation
of (22), a locative predicate is set up in such a way that the father is located
on the left and the son on the right. Both referents are represented by a human
classifier articulated with a Y handshape with the thumb pointing upward and
the pinky finger downward:
(22) egi
LH: FATHER PERSON BE LOCATEDi . . . . . . . . . . . . . . . . . . . . . . . . .
egj
RH: SON PERSON BE LOCATEDj
LH: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i SHOOTj
egi
RH: INDEXpro i PERSON BE LOCATEDj . . . . . . . . . . . . . . . . . .
‘The father is located on the left of the signer and the son is on the
right. He (the father) shot him (the son).’
Having set up the spatial location of the two referents through the locative
predicates, the signer produces INDEXpro with his right hand (RH), directing it
at the human classifier (i.e. the father) located on the left (LH). Note that the eye
gaze that accompanies INDEXpro is also directed at the referent (i.e. the father) in
this token space. The right hand (RH) then retracts to the right and re-articulates
a locative predicate with a human classifier referring to the son. The left hand
(LH) changes to SHOOT showing subject–object agreement, indicating that it
is the father who shoots the son. The specific location of the tokens in space as
depicted through the classifier predicates favors the occurrence of INDEXpro in
subsequent signing.
12.7.4 Possessives
Our discourse data show that predicative possessive constructions that con-
tain POSS are common in real space (23a,b). What triggers such a distri-
bution? We argue that the presence of the referent, especially the possessee,
in the immediate physical environment is a crucial determinant. To refer to
the possessee that is physically present, a pronominal index as grammatical
subject with eye gaze at a particular location is observed. It is usually fol-
lowed by a predicative possessive construction in which POSS may func-
tion as a possessive marker or a pronominal (23). When the possessor is
not present, as in (23a), [possessor POSSneu ] is adopted in the predicative
construction and it is usually directed toward the signer’s right at the face
level while the signer maintains eye contact with the addressee. Even if the
possessor is present, as in (23b), the sign for the possessor JOHN is op-
tional but POSS has to agree with the specific location of the possessor in
space.
egi egi
(23) a. [INDEXpro-3p i ]DP [JOHN POSSneu ]DP , [INDEXpro-3p i ]DP SICK
‘It (the dog) is John’s. It is sick.’ (possessee present, possessor
not present)
egi egj egi
b. [INDEXpro-3p i ]DP [(JOHN) POSSj ]DP , [INDEXpro-3p i ]DP SICK
‘It (the dog) is his. It is sick.’ (possessee present, possessor
present)
egi
(24) [INDEXpro-3p i DOG]DP SICK (possessor present, possessee
absent)
‘His dog is sick.’
In (24), INDEXpro is interpreted as a possessive pronoun that selects a noun as
its complement. In (25) a full determiner phrase is used to refer to a definite
referent, and the nonmanual marking for INDEXdet has to agree with the location
of the possessor, which is assumed to be distant from the signer.
egi
(25) [INDEXdet i MALE DOG]DP SICK (possessor present, possessee
absent)
‘That man’s dog is sick.’
Where both the possessor and the possessee are absent from the immediate
environment, a possessive DP in the form of [possessor possessee] is observed
without any specific nonmanual agreement features (26).
(26) [JOHN DOG]DP SICK (possessor absent, possessee absent)
‘John’s dog is sick.’
To summarize, one can observe that, in real space, the choice of possessive
constructions is determined in part by the presence or absence of the referents
in the immediate physical environment.
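This distribution lends itself to a compact restatement. The sketch below is merely a lookup over the four presence/absence configurations illustrated in (23) through (26); the construction labels are simplified glosses, not full transcriptions.

# (possessor present, possessee present) -> construction observed
POSSESSIVE_CONSTRUCTIONS = {
    (False, True):  "INDEXpro [possessor POSSneu], predicative, as in (23a)",
    (True,  True):  "INDEXpro [(possessor) POSS agreeing with possessor], as in (23b)",
    (True,  False): "INDEXpro N or [INDEXdet possessor N], as in (24) and (25)",
    (False, False): "[possessor possessee] without nonmanual agreement, as in (26)",
}

def possessive_construction(possessor_present, possessee_present):
    return POSSESSIVE_CONSTRUCTIONS[(possessor_present, possessee_present)]

print(possessive_construction(possessor_present=False, possessee_present=False))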
12.8 Conclusion
The data described in this chapter show that while conforming to general prin-
ciples of linguistic structuring at the syntactic level, the nominal expressions in
HKSL display some variation in nonmanual markings and syntactic order when
compared with ASL. First, while it has been claimed that unique nonmanual
markings including both head tilt and eye gaze are abstract agreement features
for D in ASL, data from HKSL show that only eye gaze at a specific location is
a potential nonmanual marker for definiteness. Eye gaze at a specific location
in space co-occurs with a definite referent, but maintaining eye contact with the
addressee is associated with an indefinite referent.
Second, there appears to be a subtle difference between signed and spoken
languages in the types of nominal expressions that can denote (in)definiteness.
We observe that bare nouns are common in HKSL and they are accompanied
by different nonmanual markings to refer to definite, indefinite, and generic
referents. Definite bare nouns may also be reflected by the signer’s adoption
of role shift in our data. Third, we observe that there is a relationship between
the type of mental spaces and the distribution of nominal expressions for refer-
ential purpose. This reflects the signer’s perceived use of space in the signing
discourse, in particular his or her choice of mental spaces for the representa-
tion of entities and their relations. Nevertheless, the present analysis relies heavily
on narrative data. More data, especially data from free conversations or from
other signed languages, are sorely needed in order to verify the observations
presented in this chapter.
Acknowledgments
The research was supported by the Direct Grant of the Chinese University
of Hong Kong, No. 2101020. We would like to thank the following people for
helpful comments on the earlier drafts of this chapter: Gu Yang, Steve Matthews,
the editors, and two anonymous reviewers. We would also like to thank Tso Chi
Hong, Wong Kai Fung, Lam Tung Wah, and Kenny Chu for providing intuitive
judgments on the signed language data. We also thank Kenny Chu for being
our model and for preparing the images.
12.9 References
Abney, Steven P. 1987. The English noun phrase in its sentential aspect. Doctoral dis-
sertation, MIT, Cambridge, MA.
Ahlgren, Inger and Brita Bergman. 1994. Reference in narratives. In Perspectives on
sign language structure: Papers from the 5th International Symposium on Sign
Language Research, ed. Inger Ahlgren, Brita Bergman and Mary Brennan, 29–36.
Durham: International Sign Linguistics Association, University of Durham.
Allan, Keith. 1977. Classifiers. Language 53:285–311.
Bahan, Benjamin, Judy Kegl, Robert G. Lee, Dawn MacLaughlin and Carol Neidle.
2000. The licensing of null arguments in American Sign Language. Linguistic
Inquiry 31:1–27.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Cheng, Lisa and Rint R. Sybesma. 1999. Bare and not-so-bare nouns and the structure
of NP. Linguistic Inquiry 30:509–542.
Crain, Stephen and Diane Lillo-Martin. 1999. An introduction to linguistic theory and
language acquisition. Malden, MA: Blackwell.
Fauconnier, Gilles. 1985. Mental spaces: Aspects of meaning construction in natural
language. Cambridge, MA: MIT Press.
Fauconnier, Gilles. 1997. Mapping in thought and language. Cambridge: Cambridge
University Press.
Fromkin, Victoria A. 1973. Slips of the tongue. Scientific American 229:109–117.
Kegl, Judy. 1985. Locative relations in American Sign Language: Word formation,
syntax and discourse. Doctoral dissertation, MIT, Cambridge, MA.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. 1994. Tokens and surrogates. In Perspectives on sign language
structure: Papers from the 5th International Symposium on Sign Language Re-
search, ed. Inger Ahlgren, Brita Bergman and Mary Brennan, 105–119. Durham,
England: International Sign Linguistics Association, University of Durham.
Liddell, Scott K. 1995. Real, surrogate and token space: grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–42.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott K. 1996. Spatial representation in discourse: comparing spoken and sign
language. Lingua 98:145–167.
Liddell, Scott K. and Melanie Metzger. 1998. Gesture in sign language discourse. Journal
of Pragmatics 30:657–697.
Lillo-Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting
the null argument parameters. Dordrecht: Kluwer Academic.
Lillo-Martin, Diane. 1999. Modality effects and modularity in language acquisition: The
acquisition of American Sign Language. In Handbook of child language acquisi-
tion, ed. Tej K. Bhatia and William C. Ritchie, 531–568. San Diego, CA: Academic
Press.
Longobardi, Giuseppe. 1994. Reference and proper names: a theory of N-movement in
syntax and logical form. Linguistic Inquiry 25:609–665.
MacLaughlin, Dawn. 1997. The structure of determiner phrases: Evidence from
American Sign Language. Doctoral dissertation, Boston University, Boston, MA.
Matthews, Stephen and Virginia Yip. 1994. Cantonese: A comprehensive grammar.
London: Routledge.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical issues
in sign language research. Vol. 1: Linguistics, ed. Susan D. Fischer and Patricia
Siple, 175–190. Chicago, IL: University of Chicago Press.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language. Cambridge, MA: MIT Press.
Padden, Carol. 1990. The relation between space and grammar in ASL verb mor-
phology. In Sign language research: Theoretical issues, ed. Ceil Lucas, 118–132.
Washington, DC: Gallaudet University Press.
Poizner, Howard, Edward Klima, and Ursula Bellugi. 1987. What the hands reveal about
the brain. Cambridge, MA: MIT Press.
Sze, Felix Y. B. 2000. Space and nominals in Hong Kong Sign Language. M.Phil. thesis,
Chinese University of Hong Kong.
Tang, Gladys. 1999. Motion events in Hong Kong Sign Language. Paper presented at
the Annual Research Forum, Hong Kong Linguistic Society, Chinese University of
Hong Kong, December.
van Hoek, Karen. 1996. Conceptual locations for reference in American Sign Language.
In Spaces, worlds, and grammar, ed. Gilles Fauconnier and Eve Sweetser, 334–350.
Chicago, IL: University of Chicago Press.
Wilbur, Ronnie B. 1979. American Sign Language and sign systems. Baltimore, MD:
University Park Press.
Zimmer, June and Cynthia Patschke. 1990. A class of determiners in ASL. In Sign
language research: Theoretical issues, ed. Ceil Lucas, 201–210. Washington, DC:
Gallaudet University Press.
Part IV
Using space and describing space
vs. the right side, even though it would seem that the tongue tip has sufficient
mobility to achieve such a contrast.
However, dictionary entries do not make a language. Spatial distinctions are
widespread elsewhere in ASL and other signed languages. Most pronouns in
ASL are pointing signs that indicate the specific locations of their referents, or
locations that have – during a sign conversation – been associated with those ref-
erents. So, if I am discussing John and Mary – both of whom happen to be in the
room – it makes all the difference in the world whether I point to John’s location
on the left, as opposed to Mary’s location on the right. If John and Mary are not
present, I, the signer, may establish a location on my right for Mary and one on my
left for John (or vice versa). I can then point back to the locations when I want
to refer to John or Mary later in a conversation. My addressee can do the same.
Spatial distinctions are crucial to the system of verb agreement, whereby a
transitive verb “agrees with” the location associated with its direct or indirect
object (depending on the particular verb) and optionally with its subject. Along
with word order, the spatial modification of verbs is one of the two ways in
which sign languages mark the argument structure of verbs. So, a sign such as
GIVE obligatorily moves toward the spatial location associated with its indirect
object (the recipient). Optionally, the movement path of GIVE may start near a
location associated with its subject.
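The description can be made concrete with a toy model. In the sketch below, loci are arbitrary two-dimensional points and the referent names are invented; the only substantive content is the asymmetry just described, namely that agreement with the object locus is obligatory while agreement with the subject locus is optional.

LOCI = {"John": (-1.0, 0.0), "Mary": (1.0, 0.0)}  # invented loci in the sign space

def agreeing_verb_path(verb, obj, subj=None):
    # Movement path of an agreeing verb such as GIVE: the endpoint
    # (object locus) is obligatory; the starting point (subject locus)
    # is optional.
    return {
        "verb": verb,
        "start": LOCI[subj] if subj is not None else None,  # optional
        "end": LOCI[obj],                                   # obligatory
    }

print(agreeing_verb_path("GIVE", obj="Mary", subj="John"))
print(agreeing_verb_path("GIVE", obj="Mary"))  # subject agreement omitted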
Sign languages also use the sign space to talk about the locations of objects
in space and their movement; this is the phenomenon that Karen Emmorey
addresses in her chapter (Chapter 15). So-called classifier handshapes indicate
whether a referent is, in the case of ASL, a small animal, a tree, a human, an
airplane, a vehicle other than an airplane, or a flat thin object, among other
categories.2 A classifier handshape on the dominant hand can be moved with
respect to a classifier on the nondominant hand to indicate whether, for instance,
a car drove in front of a tree, or behind a tree, or whether it crashed into
the tree. Karen Emmorey argues that the use of space to represent space – in
contrast to the prepositional or postpositional phrases that are characteristic of
many spoken languages – means that different cognitive abilities are required to
comprehend spatial descriptions in ASL than in spoken languages. In particular,
she argues that comprehension of signed spatial descriptions requires that the
addressee perform a mental transformation on that space that is not required in
the comprehension of spoken descriptions.
In her chapter, Susan McBurney makes a useful distinction between modal-
ity and medium (Chapter 13). For her, modality refers to the articulatory and
perceptual apparatus used to transmit and receive language. For visual–gestural
languages, that apparatus includes the manual articulators and the visual system.
2 There is now considerable debate about the extent of the analogy between classifier constructions
in signed languages and those in spoken languages. It is the verbal classifiers of the Athabascan
languages to which sign classifiers may bear the strongest resemblance (see Newport 1982).
order, for the distribution of negative markers, and for the licensing of null
arguments.
However, Rathmann and Mathur also adopt something of a compromise
position when they assert that the markers of agreement – the loci in space with
respect to which agreeing verbs move – are not phonologized. Here they too
echo the recent arguments of Liddell (2000). In their view – and in Liddell’s –
there is not a listable set of locations with which verbs may agree; this is
what Rathmann and Mathur call the “infinity problem.” In other words, the
phonology of ASL and of other signed languages does not make a listable set
of spatial contrasts available as the markers of agreement. For example, when
verbs agree with spatial loci associated with referents that are present in the
immediate environment, those locations are determined not by grammar but
by the physical locations of those referents. From this, Rathmann and Mathur
conclude that the spatial form of agreement is gestural, not phonological. So,
in their view, agreement is at once linguistic and gestural.
The two remaining chapters – Chapter 16 by Gary Morgan, Neil Smith, Ianthi
Tsimpli, and Bencie Woll and Chapter 17 by David Quinto-Pozos – address
the use of agreement in special populations. Gary Morgan and his colleagues
describe the acquisition of British Sign Language (BSL) by Christopher, an
autistic-like adult who shows an extraordinary ability to learn languages, as
evidenced by his knowledge (not necessarily complete) of some 20 or so spoken
languages. However, paired with his linguistic abilities are, according to Morgan
and colleagues, significant impairments in visuo-spatial abilities and in motor
co-ordination.
Christopher was formally trained in BSL over a period of eight months. His
learning of BSL was compared to a control group of hearing, intellectually nor-
mal undergraduates. This training was analogous to formal training that he had
previously received in Berber, a spoken language (Smith and Tsimpli 1995). In
Berber, Christopher showed great enthusiasm, and apparently great success, in
figuring out the subject agreement morphology. However, his acquisition of BSL
appears to proceed on a different track. Although he succeeded in learning some
significant aspects of BSL, including the recall of individual signs, the ability to
produce simple sentences, the ability to recognize fingerspelling, and the use of
negation, he showed little success in acquiring the classifier morphology of BSL,
and his acquisition of verb agreement is more limited than what Morgan and
his colleagues would have expected given Christopher’s success in acquiring
morphology in spoken languages. Christopher could not, for example, estab-
lish locations within the sign space for nonpresent referents. On the authors’
interpretation, Christopher lacked the spatial abilities necessary to acquire the
verb agreement and classifier systems of BSL. In their view, the acquisition of
certain key aspects of signed languages depends crucially on intact spatial abil-
ities. This conclusion appears to converge with Karen Emmorey’s suggestion
that different cognitive abilities are implicated in the processing of spatial de-
scriptions in signed languages than in spoken languages.
In Chapter 17 David Quinto-Pozos reports a case study of two Deaf-Blind
signers in which he compares their use of the sign space to that of two Deaf,
sighted signers. Quinto-Pozos asked the participants in his study to memorize a
short narrative; each then recited that narrative to each of the other three partic-
ipants. The sighted participants made great use of the signing space, even when
they were addressing a Deaf-Blind subject. In contrast, the Deaf-Blind signers
made little use of the spatial devices of ASL. In general, they did not establish
locations in space for the characters in their stories, nor did they use points to
refer to those characters. Instead, their use of pronominal pointing signs was
largely limited to self-reference and to reference to the fictive addressee of
dialogue in the story that they were reciting. Even with just two participants,
Quinto-Pozos finds interesting differences between the Deaf-Blind signers. For
example, one showed frequent use of the signing space with agreeing verbs;
the other did not. The two Deaf-Blind signers also differed in how they referred
to the characters in their narratives: one used proper names (and did so more
frequently than the sighted signers); the other used common nouns or pronouns
derived from Signed English. At this point, we do not know the extent to which
Quinto-Pozos’s results will generalize to other Deaf-Blind signers. For exam-
ple, native Deaf-Blind signers – including those who are congenitally blind –
might make greater use of the sign space. However, his results raise the possi-
bility that full access to the spatial medium of signed languages may depend
on vision.
richard p. meier
References
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum-
Verlag.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 2000. Indicating verbs and pronouns: pointing away from agreement.
In The signs of language revisited, ed. Karen Emmorey and Harlan Lane, 303–320.
Mahwah, NJ: Lawrence Erlbaum Associates.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical is-
sues in sign language research, Vol. 1: Linguistics, ed. Susan D. Fischer and Patri-
cia Siple, 175–190. Chicago, IL: University of Chicago Press.
Newport, Elissa L. 1982. Task specificity in language learning? Evidence from speech
perception and American Sign Language. In Language acquisition: The state of
the art, ed. Eric Wanner and Lila Gleitman, 451–486. Cambridge: Cambridge
University Press.
Pinker, Steven and Paul Bloom. 1990. Natural language and natural selection. Behavioral
and Brain Sciences 13:707–784.
Smith, Neil and Ianthi-Maria Tsimpli. 1995. The mind of a savant. Oxford: Blackwell.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwanese Sign Language. In Theoret-
ical issues in sign language research, Vol. 1: Linguistics, ed. Susan D. Fischer and
Patricia Siple. Chicago, IL: University of Chicago Press.
Supalla, Samuel J. 1991. Manually Coded English: The modality question in signed
language development. In Theoretical issues in sign language research, Vol. 2:
Psychology, ed. Patricia Siple and Susan D. Fischer, 85–109. Chicago, IL: Univer-
sity of Chicago Press.
13 Pronominal reference in signed and spoken
language: Are grammatical categories
modality-dependent?
13.1 Introduction
Language in the visual–gestural modality presents a unique opportunity to ex-
plore fundamental structures of human language. One of the larger, more com-
plex questions that arises when examining signed languages is the following:
how, and to what degree, does the modality of a language affect the structure of
that language? In this context, the term “modality” refers to the physical systems
underlying the expression of a language; spoken languages are expressed in the
aural-oral modality, while signed languages are expressed in the visual–gestural
modality.
One apparent difference between signed languages and spoken languages
relates to the linguistic expression of reference. Because they are expressed in
the visual–gestural modality, signed languages are uniquely equipped to convey
spatial–relational and referential relationships in a more overt manner than is
possible in spoken languages. Given this apparent difference, it is not unrea-
sonable to ask whether systems of pronominal reference in signed languages
are structured according to the same principles as those governing pronominal
reference in spoken languages.
Following this line of inquiry, this typological study explores the grammatical
distinctions that are encoded in pronominal reference systems across spoken
and signed languages. Using data from a variety of languages representing both
modalities, two main questions are addressed. First, are the categories encoded
within pronoun systems (e.g. person, number, gender, etc.) the same across
languages in the two modalities? Second, within these categories, is the range of
distinctions marked governed by similar principles? Because spatial–locational
distinctions play such an integral role in pronominal reference across signed
languages, I explore in greater depth spatial marking within spoken language
pronominal systems. In particular, I examine demonstratives and their use as
third person pronominal markers in spoken languages.
The structure of the chapter is as follows. In Section 13.2 I present and discuss
pronominal data from a range of spoken and signed languages. In Section 13.3
I compare pronominal reference in signed and spoken languages, and discuss
four ways in which signed language pronominal systems are typologically un-
usual. Section 13.4 examines spatial marking within spoken language pronomi-
nal systems, with particular focus on demonstratives. In Section 13.5 I present an
analysis of signed language pronominal reference that is based on a distinction
between the “modality” and the “medium” of language. Finally, in Section 13.6
I return to the sign language data and discuss the effects of language medium
on the categories of person, number, and gender.
Person   Singular      Plural
1        I             we
2        you           you
3        he, she, it   they
in the first person plural, and the exclusive form is used otherwise. None of the
Nogogu pronouns are marked for gender.
Finally, we come to Aranda, an Australian Aboriginal language. Table 13.5
presents partial data from the personal pronouns in Aranda. The Aranda data
reveal a three-way distinction in person marking, as well as number mark-
ing for dual and plural (singular data were not available). What is most unusual
about the Aranda pronominal system is the extensive marking of kinship distinc-
tions. Two major distinctions in kinship are encoded throughout the pronominal
system: agnatic vs. non-agnatic (where agnatic denotes an individual related
through a line of patrilineal descent) and harmonic vs. disharmonic (where har-
monic refers to a person from the same generation or a generation that differs
by an even number).
Although the data presented above are a minuscule sampling of the world's
languages, this brief survey serves to illustrate two points. First, spoken lan-
guage pronominal systems vary in the categories marked; some languages mark
a small number of categories (Nogogu, for example, marks only person and
number), while others encode a wider range of categories (Aranda’s marking
for kinship). Second, spoken languages differ in the range of distinctions marked
within certain categories; compare, for example, Asheninca (which has a plural
pronoun only in the first person) and Nogogu (which has four distinctions in
number – singular, dual, trial, and plural – across all three persons).
Figure 13.1 The signing space: loci a, b, c, d, e, . . . arrayed between the signer (1) and the addressee (2)
Table 13.6 The pronominal system of ASL: first person (inclusive and exclusive), second person, and third person forms directed at loci a, b, c, d, . . .
considered to be part of the grammar of ASL; they form the base of pronominal
reference and play a crucial role in conveying person distinctions throughout
discourse.
The pronominal system of ASL patterns as shown in Table 13.6.6 Looking
first to the category of person, we see that the three-way distinction of first,
second, and third person exists throughout the pronominal system. Particu-
larly interesting, from a crosslinguistic perspective, is the large number of third
person singular pronouns. As was discussed above, nonpresent referents are
established (or localized) at distinct locations in the signing space. Since there
are an unlimited number of locations in space, it has been argued that there is
a potentially infinite number of distinct pronominal forms (Lillo-Martin and
Klima 1990). Additionally, because individual referents are associated with
distinct locations in the signing space, pronominal reference to single individ-
uals is unambiguous; once Mary has been established at location ‘a,’ an index
directed toward location ‘a’ unambiguously identifies Mary as the referent for
the duration of the discourse.7
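Setting aside the qualification in note 7, the locus mechanism amounts to a mapping from locations to referents. The sketch below is a toy model with invented names; it implements only the two operations described above, localizing a nonpresent referent and resolving a later index to that locus.

class SigningSpace:
    # Toy record of referent-locus associations in a discourse.
    def __init__(self):
        self.loci = {}  # locus label -> referent

    def localize(self, referent, locus):
        # Establish a (nonpresent) referent at a distinct locus.
        self.loci[locus] = referent

    def index(self, locus):
        # An index directed at a locus picks out its referent.
        return self.loci[locus]

space = SigningSpace()
space.localize("Mary", "a")
space.localize("John", "b")
print(space.index("a"))  # -> 'Mary', for the duration of the discourse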
ASL has a very rich system of marking for number in its pronouns. The
singular form is an index directed toward a point along the horizontal signing
plane, while plural forms have an added arc-shaped, or sweeping, movement.
6 Table 13.6 represents an analysis and summary of pronoun forms elicited from one native Deaf
signer (a Deaf individual with Deaf parents). Although many of the distinctions represented in
this table are commonly known to exist, the distinctions in number marking are ones that exist
for this particular signer. Whether distinct number-marking forms are prevalent across signers
is a question for further research.
7 There are, in fact, certain circumstances in which reference is not wholly unambiguous. In ASL
discourse abstract concepts and physical locations (such as cities) can also be localized in the
signing space, usually as a strategy for comparison. For example, a signer might wish to compare
her life growing up in Chicago with her experiences as an adult living in New York City. In
this instance, Chicago might be localized to the signer’s left (at locus ‘e’ in Figure 13.1) and
New York to the signer’s right (at locus ‘b’). In the course of the discourse the signer might
also establish a nonpresent referent, her mother perhaps, at locus ‘e’ because her mother lives
in Chicago. Consequently, later reference to locus ‘e’ could be argued to be ambiguous, in that
an index to that locus could be interpreted as a reference to either the city of Chicago or to
the signer’s mother. The correct interpretation is dependent upon the context of the utterance. I
thank Karen Emmorey for bringing this exception to my attention.
In addition to the singular and plural forms, ASL appears to have dual, trial,
quadruple, and quintuple forms throughout much of the pronoun system. The
trial, quadruple, and quintuple forms are formed by replacing the index hand-
shape with a numeral handshape (the handshape for 3, 4, and 5, respectively),
changing the orientation from palm downward to palm upward, and adding
a small circular movement. The dual form is formationally distinct from the
other forms, in that the handshape that surfaces is distinct from the numeral
two; whereas the numeral two has the index and middle fingers extending from
an otherwise closed fist, the handshape in the dual form is the K handshape (the
thumb is extended and placed between the two fingers, and the middle finger is
lowered slightly, so that it is perpendicular to the thumb). The movement in the
dual is also distinct from that found in the trial, quadruple, and quintuple forms;
the K handshape is moved back and forth between the two loci associated with
the intended referents.
In addition to the number marking, it appears that ASL has an inclusive/
exclusive distinction in the first person. In a study of plural pronouns and the
inclusive/exclusive distinction in ASL, Cormier (1998) reports that the exclusive
can be marked by displacement of the sign. For example, the general plural
WE (which is normally articulated with the index touching two points along a
horizontal plane at the center of the chest) can be displaced slightly either to the
right or to the left side of the chest. This instance of WE, which Cormier glosses
as WE-DISPLACED, is marked as first person exclusive, and is interpreted as
[+speaker], [–addressee], and [+non-addressed speech act participant].8 In the
pronoun forms summarized in Table 13.6, the exclusive form of WE is marked
by displacement of the sign, as well as a slight backward lean of the body. The
inclusive and exclusive forms of the dual pronoun are differentiated by whether
or not the location of the addressee is indexed in the sign. Cormier reports
that for the trial, quadruple, and quintuple forms, the locations of the included
referents are often, but not always, indexed; displacement of the sign to the side
of the chest, however, is always present in the exclusive form. The ASL pronoun
forms elicited for this study seem to support this observation.

8 Cormier’s (1998) examination of the inclusive/exclusive distinction in ASL is far more comprehensive than my discussion of it suggests. She develops an analysis of plural pronouns based on a distinction between lexical plurals (including the general plural WE, number-incorporated forms 3/4/5-OF-US, the sign A-L-L, as well as the possessive OUR) and indexical plurals (including dual and composite forms, individual indexes to all those included). Whereas indexical plurals index the locations of individual referents, lexical plurals do not.
The extensive marking for number evidenced in ASL raises an interesting
question: do these forms constitute true grammatical number marking (i.e. num-
ber marking that is internal to the pronoun system) or are they the result of an
independent morphological process that happens to surface in pronouns? Based
upon the limited data that are available, I argue that the dual form is an instance
of grammatical number marking, while the trial, quadruple, and quintuple forms
are not. Three facts of ASL support this interpretation. First, in spoken language
grammatical number marking, dual and trial forms are, by and large, not etymo-
logically derived from the numerals in the language (Last, in preparation).9 For
example, the morpheme that distinguishes a trial form in a given pronominal sys-
tem is not etymologically derived from the numeral three. In contrast, the mor-
phemes (handshapes) that distinguish the trial, quadruple, and quintuple forms
in ASL are the very same handshapes that serve as numerals in the language.
The handshape in the dual form, however, is distinct from the numeral two.
Second, the number handshapes that appear within the trial, quadruple, and
quintuple forms of ASL pronouns are systematically incorporated into a lim-
ited number of nonpronominal signs in ASL. This morphological process has
been referred to as “numeral incorporation” (Chinchor 1979; Liddell 1996).
For example, signs having to do with time (MINUTE, HOUR, DAY, WEEK,
MONTH, YEAR) incorporate numeral handshapes to indicate a specific num-
ber of time units. The basic form of the sign WEEK is articulated by moving an
index, or 1, handshape of the dominant hand (index finger extended from the fist)
across the upturned palm of the nondominant hand. The sign THREE-WEEKS
is made with a 3 handshape (thumb, index, and middle finger extended from the
fist), and the sign for FOUR-WEEKS with the 4 handshape (all four fingers ex-
tended from the fist).10 Other signs that can take numeral incorporation include
EXACT-AGE, APPROXIMATE-AGE, EXACT-TIME, DOLLAR-AMOUNT,
and HEIGHT. Numeral incorporation is clearly a productive (though limited)
morphological process, one that surfaces in several areas of the language. Sig-
nificantly, the handshape for the numeral two (as opposed to the K handshape
that surfaces in the dual pronominal form) is also involved in this productive
morphological process. Thus, the data available suggest that the trial, quadru-
ple, and quintuple forms that surface in parts of the pronominal system are
instances of numeral incorporation, not grammatical number marking.

9 Greville Corbett (personal communication) notes some exceptions in a number of Austronesian languages, where forms indicating ‘we-three’ and ‘we-four’ appear to be etymologically related to numerals.
10 The handshapes that can be incorporated into these signs appear to be limited to numerals ‘1’ through ‘9’. While the number signs for ‘1’ through ‘9’ are static, the signs for numbers ‘10’ and above have an internal (nonpath) movement component. These numbers cannot be incorporated because the resulting sign forms would violate phonological constraints in the language (Liddell et al. 1985).
The final argument for treating trial, quadruple, and quintuple number mark-
ing in ASL as something other than grammatical number marking has to do
with obligatoriness. Last (personal communication) suggests that in order to be
considered grammatical number marking, the marking of a particular number
distinction within a pronominal system has to exhibit some degree of obliga-
toriness. Whereas the dual form appears to be obligatory in most contexts (and
[Table 13.7. Person distinctions: ASL, LIS, Auslan, NS, and IPSL distinguish 1, 2, 3 . . . ; DSL distinguishes 1 vs. non-1]
Group (LIS and DSL), the British Sign Language Group (Auslan), and the
Asian Sign Language Group (NS) (Woodward 1978b). Although the precise
historical affiliation of Indo-Pakistani Sign Language (IPSL) is at present un-
known, evidence suggests IPSL is not related to any European signed languages
(Vasishta, Woodward, and Wilson 1978).12
Rather than summarize data from each signed language separately, I ex-
amine each individual category (person, number, gender) in turn. To facilitate
comparison, I also include the data from ASL. I begin with an examination
of person distinctions (Table 13.7). All but one of the signed languages ana-
lyzed here utilize an index (or some highly similar handshape, such as a lax
index) directed toward the signer or the addressee to indicate first and second
person, respectively. DSL has forms that directly correspond to these first and
second person pronouns, but Engberg-Pedersen (1993) argues for a distinction
between first person and nonfirst person (discussed below).13 Whether or not
a second/third person distinction exists, all signed languages considered here
use strategies similar to those used in ASL to establish (or localize) nonpresent
referents along a horizontal plane in the signing space.14 In addition, all signed
languages appear to allow a theoretically unlimited number of third person (or
nonfirst person) pronouns and, because individual referents are associated with
distinct loci in the signing space, reference to individuals is unambiguous.
The number distinctions in these signed languages pattern as shown in
Table 13.8.15 LIS, Auslan, and DSL all have a singular/plural distinction similar
to that present in ASL, where the plural form is marked by an arc-shaped move-
ment. In addition, Auslan and DSL appear to have dual, trial, and quadruple
forms as well (the sources available for LIS did not mention these distinctions).
No number-marking data were available for NS.

12 To an even greater extent than is true with ASL, the data available on pronouns in these signed languages are incomplete. Consequently, there are gaps in the data I present and discuss. In addition, relatively little is known concerning the historical relationships between signed languages outside the French and British Sign Language Groups.
13 Like Engberg-Pedersen, Meier (1990) argues for a first/nonfirst distinction in ASL pronouns. His analysis is also discussed below.
14 Zeshan (1998) points out that in IPSL it is required or possible to localize some referents in the upper signing space, as opposed to along the horizontal plane. The upper signing space is used for place names, and can be used for entities that have been invested with some degree of authority, as well as for referents that are physically remote from the signer (for example, in a telephone conversation).
15 In Table 13.8 I have included all information that is available in the literature, including data covering dual, trial, and quadruple forms in DSL and Auslan. Based on available data, I am not able to comment on whether or not these forms constitute grammatical number or rather are instances of numeral incorporation, as I have argued is the case for ASL.
Number marking in IPSL is, in certain respects, distinct from number mark-
ing in the other signed languages considered here. Zeshan (1999; personal
communication) reports that IPSL has a transnumeral form that is unspecified
for number. In other words, a single point with an index finger can refer to any
number of entities; it is the context of the discourse that determines whether
singular or plural reference is intended. This is true not only for second and third
person reference, but for first person reference as well. IPSL also has a dual
form (handshape with middle and index finger extended, moving between two
points of reference) that can mark for “inclusive/exclusive-like” distinctions.
In addition, IPSL has a “nonspecific plural” (a half-circle horizontal move-
ment) that refers to an indefinite number of persons exclusively (not used with
nonhuman entities).
Finally, gender marking across these signed languages patterns as shown in
Table 13.9. Only one of the six signed languages considered here has (possibly)
morphological marking for gender; as the discussion below reveals, it is not clear
whether this gender marking is in fact an integral component of the pronominal
system.
Japanese and other Asian signed languages (Taiwan Sign Language; see
Smith 1990) are unique in that they use classifier handshapes to mark gender
in certain classes of signs. In NS a closed fist with the thumb extended upward
[Table 13.9. Gender distinctions: five of the six signed languages show no gender marking; NS marks male and female (possibly optionally)]
represents ‘male,’ while the same fist with the pinkie finger extended represents
‘female.’ Fischer and Osugi (2000) point out that the male and female hand-
shapes are borrowed directly from Japanese culture, but that the use of these
handshapes in NS appears to be completely grammaticized.
Supalla and Osugi (unpublished) report that these gender handshapes are
used in four morphosyntactic paradigms:
• nominal lexemes referring to humans, where a combination of the two gender handshapes refers to ‘couple’;
• classifier predicate constructions, where a verb of motion or location incorporates the masculine gender handshape to represent any animate entity;
• kinship lexemes, where the handshape marks the gender of the referent, and placement in relation to other articulators denotes the familial status, as in ‘daughter’; and
• inflectional morphemes incorporated into agreement verb constructions, where a gender handshape can mark the gender of the subject or object.
Fischer (1996) reports that, in addition to co-occurring with verb agreement,
gender marking can co-occur with pronominal indexes. She gives only the
following example:
(1) MOTHERa COOK CAN INDEXa-1
‘Mother can cook, she can.’
(INDEXa-1 simultaneously indicates gender and location). In (1) subscript let-
ters indicate spatial locations, while the subscript number ‘1’ indicates the
female gender handshape. Fischer (personal communication) comments that
most often the gender handshape is articulated on the nondominant hand, with
the dominant hand index pointing to it. Less common is a form where the
gender handshape is incorporated into the pronoun itself; in example (1) the
female gender handshape ‘1’ would be articulated at location ‘a.’ It is not clear
what restrictions apply to the use of gender handshapes within the pronominal
system (for example, can they be used across all three person distinctions?),
nor is it clear whether or not this gender marking is a required component of a
well-formed pronoun.16
17 While the phonological form of personal pronouns is highly similar across signed languages,
this is not true in the case of possessive and reflexive pronouns. Although locations in space
are still used, there appears to be considerable variation in handshape among languages in the
possessives and reflexives.
18 Japanese Sign Language appears to have two forms of the first person; one form contacts the
chest and a second form contacts the nose of the signer (a gesture that is borrowed from Japanese
hearing culture). Similarly, Farnell (1995) notes that in Plains Indian Sign Language reference
to self sometimes takes the form of an index finger touching the nose.
19 This is, in fact, an oversimplification. Locations in the signing space are also used in classifier
constructions, where specific handshapes (classifiers) are combined with location, orientation,
movement, and nonmanual signals to form a predicate (Supalla 1982; 1986). For example, the
English sentence The person walked by might be signed in ASL by articulating the person
classifier (a 1 handshape, upright orientation) and moving it from right to left across the sign-
ing space. In these constructions the use of space is not lexical, but rather topographic. The
relationship between locations in space is three-dimensional; there exists a one-to-one corre-
spondence between the elements of the classifier predicate and what they represent in the real
world. For a discussion of the topographic use of space, see Emmorey (this volume). In terms
of morphophonological exclusivity, my claim is that when space is used lexically (as opposed
to topographically), locations in space seem to be reserved for referential purposes.
there are no minimal pairs that are distinguished solely by spatial location.20
To my knowledge, there are no spoken languages that pattern this way, where a
particular subset of phonemes is used exclusively for a specific morphological
purpose, such as pronominal reference.
from a person y, and x sells it to a person z, then z is not obliged to give it back to y.” I thank an
anonymous reviewer for bringing this to my attention. Also, Aronoff, Meir and Sandler (2000)
discuss a small number of spoken languages that have verbal agreement systems similar in
structure to the hypothetical system discussed in Table 13.10.
[Figure: spoken languages (Nagala, English, Nogogu, Asheninca, Aranda) and signed languages (IPSL, Auslan, ISL, DSL, ASL, NS) arranged along a continuum from LOW DEGREE to HIGH DEGREE]
Table 13.11. English demonstrative pronouns
this (proximal)
that (distal)
The deictic center (also referred to as the origo) is most often associated with
the location of the speaker. Table 13.11 shows the two-term deictic distinction
found in English demonstrative pronouns.23 Here, the proximal form this refers
to an entity near the speaker, while the distal form that refers to an entity that
is at a distance from the speaker.
In contrast to the two-way distinction found in English, many languages have
three basic demonstratives. In such languages, the first term denotes an entity
that is close to the speaker, while the third term represents an entity that is
remote relative to the space occupied by speaker and addressee. As Anderson
and Keenan (1985) note, three-term systems differ in the interpretation given to
the middle term. Take, for example, the following demonstrative forms, found
in Quechua, an Amerindian language spoken in central Peru (Table 13.12).
The type of three-way distinction evidenced in Quechua has been characterized
as “distance-oriented,” in that the middle term refers to a location that is a
medial distance relative to the deictic center (or speaker) (Anderson and Keenan
1985:282–286).
In contrast to the distance-oriented distinctions encoded in Quechua
(Table 13.12), we have the following system in Pangasinan, an Austronesian
language spoken in the Philippines (Table 13.13). The three-term deictic system
in Pangasinan is “person-oriented”; in this system the middle term denotes a
referent that is close to the hearer (as opposed to a medial distance relative to the
speaker).
The demonstrative pronoun system of Khasi, an Austro-Asiatic language
spoken in India and Bangladesh, patterns as shown in Table 13.14. The demon-
strative system in Khasi is based on six demonstrative roots, which are paired
with personal pronouns, u ‘he’ and ka ‘she.’ Three of the demonstratives locate
the referent on a distance scale (proximal, medial, distal). Khasi demonstrative
pronouns encode two additional deictic dimensions: visibility (ta ‘invisible’)
and elevation (tey ‘up’, thie ‘down’). The elevation dimension indicates whether
the referent is at a higher or lower elevation relative to the deictic center, or
speaker.
person pronouns. Similar to Lak, Bella Bella has independent forms for first
and second person pronouns, but recruits the demonstrative forms to serve as
third person pronouns. The forms shown in Table 13.17 can all be used as third
person pronouns. Thus, third person pronouns in Bella Bella have a seven-fold
distinction that relies on proximity to speaker, hearer, and other, as well as
visibility and presence.
Although only two languages have been discussed here, there are many others
that utilize demonstrative (spatially deictic) pronouns for third person reference.
Several languages of the Pacific Northwest (for example Northern Wakashan)
mark a variety of spatial categories in the third person. Additionally, many
Indo-Aryan languages (Sinhala and Hindi among them) utilize demonstratives
as third person pronominal forms.
[Table: Lak demonstrative forms (cf. Friedman 1994): va ‘near to speaker’; mu ‘near to addressee’; ta ‘distant from both, neutral’; ga ‘below speaker’; ḳa ‘above speaker’]
been a focus of linguistic research. While modality effects have received a great
deal of attention in the literature, the distinction between modality and medium
has not. In an attempt to provide a principled account for the differences between
pronominal reference in spoken and signed languages, I propose a preliminary
analysis that relies on a distinction between the modality of a language and the
medium of a language.
The “modality” of a language can be defined as the physical or biological
systems of transmission on which the phonetics of a language relies. There are
separate systems for production and perception. For spoken languages, pro-
duction relies upon the vocal system, while perception relies on the auditory
system. Spoken languages can be categorized, then, as being expressed in the
vocal-auditory modality. Signed languages, on the other hand, rely on the gestu-
ral system for production and the visual system for perception. As such, signed
languages are expressed in the visual–gestural modality.
The “medium” of a language I define as the channel (or channels) through
which a language is conveyed. More specifically, channel refers to the dimen-
sions of space and time that are available to a given language. Defined as such,
I suggest that the medium of spoken languages is “time,” which in turn can be
defined as “a nonspatial continuum, measured in terms of events that succeed
one another.”24 Indeed, all spoken languages unfold in time; speech segments,
morphemes, and words follow one another, and the order in which they appear is
temporally constrained. This is not to say that all aspects of spoken language are
entirely segmental in nature. Autosegmental approaches to phonology (in which
tiers composed of linear arrangements of discrete segments are co-articulated)
have proven essential in accounting for certain phonological phenomena (tone
spreading and vowel harmony among them). However, the temporal character
of spoken languages is paramount, while “spatial” relations play no role (by this
I mean the segments of spoken languages have no inherent spatial–relational
value).25

24 The definitions of time and space are taken from Webster’s online dictionary: www.m-w.com
25 In their paper on the evolution of the human language faculty, Pinker and Bloom (1990:712) discuss the vocal-auditory channel and argue that “language shows signs of design for the communication of propositional structures over a serial channel.” Although their use of the term “channel” seems to cover both modality and medium (as defined in the present chapter), their observations seem to fall in line with the observations made here.
Whereas spoken languages are limited to the temporal medium, signed lan-
guages are able to utilize an additional medium, that of “space”: “a boundless,
three-dimensional extent in which objects occur and have relative position and
direction.” It is certainly not the case that signed languages exist apart from time;
like spoken languages, the signs of signed languages are temporally ordered.
Additionally, although much of sign language phonology has been argued to
be simultaneous (in the sense that the components of a sign – handshape, lo-
cation, movement, orientation, and nonmanual features – are simultaneously
articulated), research suggests that linear segments do exist, and that the or-
dering of these segments is an important aspect of phonological structure (for
an overview, see Corina and Sandler 1993). Nevertheless, signed languages
are unique in that they have access to the three dimensions of space; thus, the
medium of signed languages is space and time. Significantly, it is the spatial
medium, a medium not available to spoken languages, that affords a radically
increased potential for representing spatial relationships in an overt manner.26
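The two notions can thus be summarized as follows. Modality concerns the systems of production and perception: spoken languages are vocal-auditory, signed languages are visual-gestural. Medium concerns the dimensions of the channel: spoken languages have only time, while signed languages have both space and time.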
Returning to the question of pronominal reference, the fact that signed lan-
guages are expressed through the visual–gestural modality does not preclude a
fully abstract, grammatical system of reference. Signed languages could have
developed systems of reference that utilize the modality-determined abstract
building blocks that are part of the language (handshapes, locations on the
body, internal movements, etc.) without using space. Instead of localizing ref-
erents at distinct locations in the signing space, reference might look something
like (2).
(2) Possible sign language pronoun system
M-A-R-Y [F handshape to left shoulder], B-I-L-L [F handshape
to chest]
[F handshape to left shoulder] LIKE [F handshape to chest]
‘Mary likes Bill.’
In principle, there is no reason this kind of system for referring to individuals
in the discourse would not work. However, there are no known natural signed
languages that are structured this way; all natural signed languages take full
advantage of the spatial medium to refer to referents within a discourse.
There are, in fact, artificial sign systems that do not use space, or use it in
a very restricted manner. One example is Signing Exact English (SEE 2), an
English-based system of manual communication developed by hearing educa-
tors to provide deaf students with visible, manual equivalents of English words
and affixes. For a discussion of the acquisition of these artificial sign systems
and the ways in which Deaf children adapt them, see Supalla and McKee (this
volume).27 The fact that deaf children learning artificial sign systems spon-
taneously use space in ways that are characteristic of ASL and other natural
signed languages is strong support for the extent to which space is an essential
and defining feature of signed languages (Supalla 1991).
26 One area in which we see this potential fully exploited is the classifier systems of signed
languages. See footnote 19 for a brief description of the way in which classifiers utilize space.
For an overview of classifier predicates in ASL, see Supalla (1986).
27 See also Quinto-Pozos (this volume) for a discussion of the more limited use of space by
Deaf-Blind signers; it appears that the tactile-gestural modality to which Deaf-Blind signers are
limited provides a more limited access to the medium of space.
28 Indeed, as the number of signed languages studied increases, it is quite likely that other types
of variation in number marking will be found.
29 Although my comments are framed with respect to person distinctions in ASL, the typological
homogeneity that characterizes pronominal reference across signed languages makes it possible
to extend the analysis to other signed languages. Where appropriate, I have included references
to works on other signed languages.
(the unambiguous reference that the medium, space, allows) makes the appli-
cation of standard models of person distinction challenging, if not potentially
problematic. In the remainder of this section I discuss some of these problems.
Ingram (1978) approaches the category of person by examining the lexical
marking of deictic features (speaker, hearer, other). In evaluating languages with
respect to this, Ingram asks, “what are the roles or combination of roles in the
speech act that each language considers to be of sufficient importance to mark by
a separate lexical form?” (p.215). Approached in this manner, English would be
analyzed as having a five-way system: I, we, you, he, they; see Table 13.1 above.
Gender and case distinctions do not come into consideration here because they
are not pertinent to the roles of individuals in speech acts. Thus, English can be
said to mark a lexical distinction between first, second, and third persons.
Using this framework as a guideline, what roles or combination of roles does
ASL mark with a separate lexical form? Are the individual pronoun forms in
ASL separate lexical forms? Addressing first the question of separateness: if we
interpret “separate” to mean distinct, the answer would be yes. The various
pronouns in ASL are both formationally distinct (in the sense that distinct loca-
tions in the signing space are utilized for all pronouns) as well as semantically
or referentially distinct (reference to individuals within the discourse is unam-
biguous). But do these individual pronouns constitute separate lexical forms?
Considering only singular reference for a moment, the standard analysis of
pronominal reference argues for a three-way distinction in person: first person
(index directed toward signer), second person (index directed toward a locus
in front of the addressee), and third person (index directed toward a point in
space previously associated with nonpresent referent). On this view, person
distinctions in ASL, as well as in the other signed languages reviewed here, are
based on distinct locations in the signing space.30

30 The proposed first person pronoun in ASL is not, in fact, articulated at a location in the signing space. Rather, it contacts the signer’s chest. This fact is addressed below.
In order to evaluate the question of which person distinctions (if any) exist in
ASL we must ask whether these locations in space are lexical; in other words,
are the locations in space that are used for pronominal reference, that are used to
distinguish individual referents in a discourse, specified in the lexicon? For the
purposes of this discussion, I define the lexicon as the component of a grammar
that contains information about the structural properties of the lexical items in
a language. As such, the lexicon contains semantic, syntactic, and phonological
specifications for the individual lexical items within the language.
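As a rough illustration (a schematic sketch for expository purposes, not a formal proposal from the literature), an entry for a lexically specified sign such as LIKE might read: semantics, ‘to like’; syntax, verb; phonology, a handshape feature, a location feature (center of chest), and a movement feature. The question pursued below is whether the location component of a pronoun can be given a comparably finite specification.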
In a grammatical analysis of spatial locations, spatial features must have
phonological substance. In other words, the locations toward which pronouns
are directed must be part of a phonological system, and must be describable
using a set of discrete phonological features. Liddell (2000a) addresses this
issue with respect to ASL pronouns and agreement verbs by reviewing the liter-
ature on spatial loci and pointing out the lack of adequate explanation concern-
ing phonological implementation. One system of phonological representation
(Liddell and Johnson 1989) attempted to specify locations in space by means of
seven possible vectors radiating away from the signer, four possible distances
away from the signer along that vector, and several possible height features.
Combinations of vector, distance, and height could result in a large number of
possible locations (loci) in the signing space. Although this is the most com-
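To give a sense of the arithmetic involved, seven vectors combined with four distances already yield 28 positions on a single plane; if, say, five height features are assumed (the exact number is not specified in the proposal), the phonology would have to enumerate 7 × 4 × 5 = 140 distinct loci.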
plete attempt at providing phonological specification for locations in the signing
space, Liddell (2000a: 309) points out that it remains fundamentally inadequate:
signers do not select from a predetermined number of vectors or heights if the person or
thing is present. . . the system of directing signs is based on directing signs at physically
present entities, regardless of where the entity is with respect to the signer.
modality and/or the medium of a language might in some way have an influence
on the interface between language and conceptualization?
As was discussed in Section 13.5, because signed languages are able to uti-
lize the spatial medium, they are uniquely equipped to convey spatial–relational
information in a very direct, non-abstract manner. As a result, the interface
between certain conceptual structures and their linguistic representations is
qualitatively different. Specifically, signed languages enjoy a high degree of
“conceptual iconicity” in certain subsystems of the languages (pronominal ref-
erence and classifier predicates being the two most evident). This conceptual
iconicity is, I believe, an affordance of the medium.
Participant roles (speaker/signer, addressee, other) are pragmatic constructs
that exist within all languages, spoken and signed. However, the manner in
which these roles are encoded is, at a very fundamental level, medium-dependent.
In order to encode participant roles, spoken languages require a level of abstrac-
tion; a formal (i.e. purely linguistic) device is necessary to systematically encode
distinctions in person. The resulting systems of grammatical person utilize dis-
tinctly linguistic forms (separate lexical forms, in Ingram’s framework) to refer
to speaker, addressee, and other. These forms are lexically specifiable, and are
internal to the grammar of the language.
Signed languages are unique in that they do not require this level of ab-
straction. Because signed languages are conveyed in the spatial medium (and
because reference to individuals within a discourse is unambiguous), formal,
language-internal, marking of person is unnecessary. The coding of participant
roles is accomplished not through linguistic devices, but rather through ges-
tural deixis. The participants within a discourse are unambiguously identified
through these deictic gestures, which are systematically incorporated into the
system of pronominal reference.
It is definitely not the case that the entire pronominal system (in ASL, for
example) is devoid of grammatical or linguistic features. Case information is
carried by the handshape components of referential signs; person pronouns
are articulated with the 1 handshape, possessive pronouns with the B hand-
shape, and reflexives with the “open A” handshape. In addition, as argued in
Section 13.6.1, number appears to be a category that is grammatically marked
in ASL. Crucially, however, locations in space, as they are used for reference
across signed languages, do not constitute grammatical person distinctions. I
am arguing (in support of Lillo-Martin and Klima 1990; Ahlgren 1990) that
there are no formal person distinctions in signed languages. Rather “gestural
deixis” (along the lines of Liddell’s analysis) serves to identify referents within a
discourse.
articulation for well-formed signs; in ASL, these include locations on the neck,
upper arm, elbow, forearm, as well as several distinct locations on the face,
chest, and nondominant hand. The center of the chest is, without question, one
of these locations, as evidenced by the fact that there are numerous lexical items
in ASL whose specification for location is the center of the chest (LIKE, FEEL,
EXCITED, WHITE). To my knowledge, however, there are no signs whose
specification for location is the area just in front of the chest. I would argue that
the first person pronoun contacts the chest because the well-formedness con-
straints that are active in the phonology of the language require that it do so.33 In
other words, an index directed toward the chest but not actually contacting the
chest could be argued to be in violation of well-formedness constraints that ex-
clude the area in front of the chest as a permissible place of articulation for a sign.
The fact that the pronoun referring to the addressee (second person in the stan-
dard analysis) does not contact the chest of the addressee is also due to phonolog-
ical well-formedness constraints; in signed languages, locations on other peo-
ple’s bodies are not permissible places of articulation for well-formed signs.34
Engberg-Pedersen’s second argument for distinguishing the category of first
person is based on handshape variation that occurs with the first person pronoun
forms in DSL. While the data Engberg-Pedersen provides with respect to this
issue are incomplete, observations of similar variation in ASL “first person”
pronouns suggest that the variation might be due in some instances to surface
phonetic variation and in others to morphophonological processes, in particular
handshape assimilation. The data in (3) from ASL illustrate the first type of
variation.35
(3) MY NEIGHBOR TEND TALK+++, PRO-1 1HATE3 HER-3 GOSSIP.
‘My neighbor, she tends to talk a lot. I hate her gossiping!’
In this particular utterance, the phonological form of the “first person” pronoun
(PRO-1) is a loose index handshape (index finger is partially extended, other
three fingers are loosely closed). Whereas the citation form of this pronoun is
a clearly articulated index, I would argue that what surfaces here is an instance
of phonetic variation.
A second example (4) illustrates handshape assimilation.
(4) DOG STUBBORN. PRO-1 FEED PRO-3, REBEL, REFUSE EAT
‘The dog is stubborn. I feed it, but it rebels, refuses to eat.’
33 This is perhaps an overstatement; phonetic variation may lead to an index directed toward, but
not actually contacting, the chest.
34 An exception to this might be found in infant-directed or child-directed signing, during which
mothers (or other caregivers) sometimes produce pointing signs that contact a child.
35 Data in (3) and (4) are from a corpus of ASL sentences being used as stimuli for a neurolinguistic
experiment currently under way (Brain Development Lab, University of Oregon; Helen Neville,
Director).
In this utterance, the handshape of the “first person” pronoun is not the citation
form handshape (clearly articulated index), but rather something that more
closely resembles the handshape of the following verb, FEED (four outstretched
fingers together, and the thumb touching the middle of the fingers). In other
words, the handshape of the pronoun I has assimilated to the handshape of the
following sign, FEED.
Returning to Engberg-Pedersen’s posited distinction between first and non-
first person in DSL, I have shown that an alternative analysis is possible. Though
my comments are based not on DSL data but on similar data from ASL, I have
illustrated that the two formational differences she claims support a first person
distinction (contact with body and varying handshape) can, in fact, be inter-
preted as resulting from phonological factors.
Like Engberg-Pedersen, Meier (1990) has argued that ASL distinguishes
between first and non-first person in its pronouns. Meier’s arguments against
a formal distinction between second and third person pronouns in ASL are
convincing, and I fully agree with this aspect of his analysis.36 However, his
arguments for distinguishing between first and nonfirst pronouns are less clearly
convincing, and an alternative analysis is possible. Here I address two of his
arguments.
Analyzing data from role-playing in ASL, Meier states that “deictic points in
role-playing do mark grammatical person, as is indicated by the interpretation
of deictic points to the signer in role-playing” (Meier 1990:185). In role-playing
situations, he argues, the ASL pronoun INDEXs (an index to the signer) behaves
just like the English first-person pronoun, I, does in direct quotation. Although
Meier takes this as evidence of the category first person in ASL, an alternative
analysis exists. Couched within Liddell’s framework (discussed above in
Section 13.6.2.1), each “deictic point” in a discourse, regardless of whether or
not role-playing is involved, is a point to an entity within a grounded mental
space. These entities are either physically present (in the case of the signer and
the addressee) or conceived of as present (in the case of nonpresent referents).
When role-playing occurs, the conceptual maps on which the mental spaces are
based shift. In other words, the conceptual layout of referents within a discourse
shifts in the context of role-playing. Role-playing or not, indexes still point to
entities within a grounded mental space, and referents are identified not through
abstract person features, but through gestural deixis.
36 Meier’s arguments are threefold. First, with respect to the ways in which points in space are
actually used in discourse, “the set of pointing signs we might identify as second person largely,
if not completely, overlaps with the set we would identify as third person” (Meier 1990:186).
Second, although eye gaze at the addressee is an important component of sign conversations, it
does not appear to be a grammatical marker of second person in ASL. Finally, Meier notes that,
while there exist gaps in the paradigms of agreement verbs that appear to be motivated by the
existence of a first person object, there are no gaps that arise with respect to either the addressee
(second person) or a non-addressed participant (third person).
In addition to these arguments from role-playing, Meier suggests that the first
person plural pronouns WE, OUR, and OURSELVES provide further evidence
for person distinctions in ASL. The place of articulation of these signs, he argues,
is only partially motivated; they share the same general place of articulation as
the singular first person forms (the signer’s chest) but the place of articulation
does not indicate the real world locations of those other than the signer.
Although pronominal reference is unambiguous for singular pronouns, it is
not the case that the plural forms of pronouns are always unambiguous. I agree
with Meier on this point. Some plural pronouns are unable to take advantage
of spatial locations in the same way that singular pronouns are; articulatory
constraints can limit the ability to identify and coindicate plural referents that
are located at non-adjacent locations in the signing space. Take, for example,
the sign WE, which is normally articulated with the index handshape contacting
the right side of the chest, then arcing over to the left side of the chest. As Meier
notes, the articulation of this plural pronoun does not indicate the locations of
any referents other than the signer.
While Meier argues that this is evidence for a distinction between first and
nonfirst person, this is not the only possible analysis. Like the plural form WE,
there are instances in which non-first plural forms (THEY, for example) do not
indicate the locations of referents. Example (5) serves to illustrate this.
(5) [Context: The signer is describing her experience working at a Deaf school.
The individuals for whom she worked, while the topic of conversation,
have not been established at distinct locations in the signing space.]
t head nod
RESEARCH WORK, REGULAR. SOMETIMES FIND INFORMA-
TION FOR INDEX-PL.
‘I did research on a regular basis. Sometimes I found information for
them.’
In (5) the INDEX-PL serves as an unspecified general plural (a nonfirst person
plural in Meier’s terms) and is articulated by a small sweeping motion of the
index from left to right in neutral space. As none of the referents have been
established in the signing space, this plural pronoun is non-indexic.
A second set of nonsingular pronouns provides an additional example. Cormier
(1998) notes that the number-incorporated signs (THREE-OF-US, FOUR-OF-
US, FIVE-OF-US) do not always index the locations of the referents; “modu-
lations for inclusive/exclusive interfere with the default indexic properties” of
these pronouns (p.23). Thus we see that when pronouns are marked for plurality,
the indexical function is sometimes suppressed. These non-indexical plurals can
be taken as evidence for grammatical number marking within ASL;37 however,
the fact that they can surface in both “first” and “nonfirst” constructions sug-
gests that the non-indexical WE is insufficient evidence for the existence of a
first person category.

37 Since I have argued that the numeral incorporated forms are not instances of grammatical number marking, this statement pertains most directly to examples like (5) above.
The fact that WE is more often non-indexical than the plural forms YOU-
ALL and THEY can be analyzed as resulting from the unusual semantics of WE
(speaker + other(s)). Generally speaking, the category traditionally referred to
as first person plural is anomalous across languages; as Benveniste (1971: 202)
and others point out, “ ‘we’ is not a multiplication of identical objects but a junc-
tion between ‘I’ and the ‘non-I’, no matter what the content of this ‘non-I’ may
be.” This anomaly is, one could argue, one of denotational semantics; whereas
the plurals of “second” and “third” persons readily denote multiple addressees
and multiple nonpresent referents, respectively, a “first” person plural does not
typically denote multiple speakers.38 In the case of sign language pronominal
reference, if we refer to the pragmatic constructs of speaker, hearer, and other
(as opposed to the purely linguistic notions of person, which I have argued are
unnecessary in languages expressed in the spatial medium), the non-indexical
WE can be analyzed as just one way of expressing the concept of signer +
unspecified others.
Before moving on to a discussion of gender marking in signed language
pronominal systems, one additional piece of evidence against a distinction be-
tween first and nonfirst person is discussed. Recall that IPSL has a transnumeral
form that is unspecified for number, where a single point with an index finger
can refer to any number of entities (Zeshan 1999; personal communication).
As discussed in Section 13.2.2.2, the transnumeral form surfaces across all
“persons,” first, second, and third. If there were a formal distinction between
first and nonfirst persons, we might expect that number marking, in this case
transnumerality, would reflect this distinction as well. The fact that first and
nonfirst person pronouns are treated identically with respect to transnumerality
suggests that the posited distinction is not well motivated.
13.7 Conclusions
The present study, which examines pronominal reference across spoken and
signed languages, reveals that, from a typological perspective, signed languages
are unusual. Because signed languages have access to the spatial medium, there
is a qualitative difference in spatial marking between languages in the two
modalities. Because of their medium, signed languages have greater potential
for non-arbitrary form–meaning correspondences within pronoun systems. Re-
lationships can be expressed overtly in spatial terms, and as a result reference
to individuals within a discourse is unambiguous.
The data from signed languages indicate that number is a category that is
grammatically marked in signed language pronouns, but the category of person
is not. The fact that the location component of pronouns cannot be lexically
specified precludes an analysis of lexical distinctions in person. In addition, I
have argued that, although participant roles (as pragmatic constructs) exist in
all human languages, spoken and signed languages differ in how these roles
are encoded. Spoken languages require a formal linguistic device to systemat-
ically encode distinctions in person. Signed languages, on the other hand, do
not. The coding of participant roles is accomplished not through abstract lin-
guistic categories of person, but rather through gestural deixis. The participants
within a discourse are unambiguously identified through deictic gestures that
are incorporated into the system of pronominal reference.
This having been said, an important question arises: what is the precise
status of this class of signs in ASL? In other words, are the signs typically
glossed as pronouns in fact pronouns at all? I have argued that because signed
languages are deeply rooted in the spatial medium, they are able to convey
spatial–relational information in a very direct manner and reference to indi-
viduals within a discourse is unambiguous. The resulting conceptual iconicity
renders formal, language-internal, marking of person unnecessary. If it is the
case that formal person distinctions do not exist in signed languages, then there
may be no basis for analyzing these signs as personal pronouns.
Although further research is needed, the results from the present study sug-
gest that the class of signs traditionally referred to as personal pronouns may,
in fact, be demonstratives. Describing this word class, Diessel (1999:2) writes
that demonstratives generally serve pragmatic functions, in that they are pri-
marily used to focus the addressee’s attention on objects or locations in the
speech environment. Within the framework of mental space theory, the point-
ing signs that have been analyzed as pronouns behave very much like demon-
stratives. These pointing signs are directed toward entities that are present
within the signing environment; for the signer and addressee, toward physi-
cally present entities, and for nonpresent referents toward conceptually present
entities.
If, in fact, this class of signs turns out to be more accurately classified as
demonstratives, then the typologically unusual nature of sign language “pro-
nouns” takes on a new meaning. Signed languages would be typologically
unusual not because the pronouns all share some unusual characteristics, but
because, as a class of human languages, there are no pronouns at all. This would
most certainly be a significant typological finding, and a clear example of the
extent to which the medium of a language affects the structure of that language.
To be sure, additional research is needed in order to understand more fully
the complex nature of spatial locations and the central role they play in signed
language reference. By elucidating the role of space in signed languages, we
will gain insight into the factors that shape language as well as the effects of
modality and medium on the structure of language.
Acknowledgments
I have benefited greatly from two reviewers’ comments, questions, and criti-
cisms, and I would like to thank them for their valuable contributions to this
chapter. Special thanks to Richard Meier for his thoughtful responses, and to
David Corina, Fritz Newmeyer, and Soowon Kim for their assistance. An earlier
version of this chapter was presented at the Third Biennial Conference of the
Association for Linguistic Typology, University of Amsterdam, 1999. I would
like to thank the conference participants for their comments and suggestions.
13.8 References
Ahlgren, Inger. 1990. Deictic pronouns in Swedish and Swedish Sign Language. In
Theoretical issues in sign language research, Vol. I: Linguistics, ed. S. Fischer and
P. Siple, 167–174. Chicago, IL: The University of Chicago Press.
Anderson, Stephen R. and Edward L. Keenan. 1985. Deixis. In Language typology and
syntactic description, Vol. III: Grammatical categories and the lexicon, ed. Timothy
Shopen, 259–308. New York: Cambridge University Press.
Aronoff, Mark, Irit Meir, and Wendy Sandler. 2000. Universal and particular aspects of
sign language morphology. University of Maryland Working Papers in Linguistics,
10:1–33.
Baker, Charlotte and Dennis Cokely. 1980. American Sign Language: A teacher’s re-
source text on grammar and culture. Silver Spring, MD: T.J. Publishers.
Bellugi, Ursula and Edward Klima. 1982. From gesture to sign: Deixis in a visual–
gestural language. In Speech, place, and action, ed. Robert J. Jarvella and Wolfgang
Klein, 279–313. Chichester: John Wiley.
Benton, R.A. 1971. Pangasinan reference grammar. Honolulu, HI: University of Hawaii
Press.
Benveniste, Emile. 1971. Problems in general linguistics. Coral Gables, FL: University
of Miami Press.
Boas, Franz. 1947. Kwakiutl grammar. In Transactions of the American Philosophical
Society, Volume 37 (3). New York: AMS Press.
Chinchor, Nancy. 1979. Numeral incorporation in American Sign Language. Doctoral
dissertation, Brown University, Providence, Rhode Island.
Corina, David and Wendy Sandler. 1993. On the nature of phonological structure in sign
language. Phonology 10:165–207.
Cormier, Kearsy. 1998. How does modality contribute to linguistic diversity?
Manuscript, University of Texas, Austin, Texas.
Diessel, Holger. 1999. Demonstratives: Form, function, and grammaticalization. Typo-
logical Studies in Language, 42. Amsterdam: John Benjamins.
Engberg-Pedersen, Elisabeth. 1986. The use of space with verbs in Danish Sign Lan-
guage. In Signs of life: Proceedings of the Second European Congress of Sign Language
Research, ed. Bernard Tervoort, 32–51. Amsterdam: Institute of General Linguis-
tics of the University of Amsterdam.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language: The semantics and
morphosyntax of the use of space in a visual language. Hamburg: Signum.
Farnell, Brenda. 1995. Do you see what I mean? Plains Indian sign talk and the embod-
iment of action. Austin: University of Texas Press.
Fauconnier, Gilles. 1985. Mental spaces: Aspects of meaning construction in natural
language. Cambridge, MA: The MIT Press.
Fauconnier, Gilles. 1997. Mappings in thought and language. Cambridge: Cambridge
University Press.
Fischer, Susan D. 1996. The role of agreement and auxiliaries in sign language. Lingua
98:103–119.
Fischer, Susan D. and Yutaka Osugi. 2000. Thumbs up vs. giving the finger: Indexical
classifiers in NS and ASL. Paper presented at the Seventh International Conference
on Theoretical Issues in Sign Language Research, Amsterdam, July.
Forchheimer, Paul. 1953. The category of person in language. Berlin: Walter de Gruyter.
Friedman, Lynne A. 1975. On the semantics of space, time, and person reference in the
American Sign Language. Language 51:940–961.
Friedman, Victor A. 1994. Ga in Lak and the three “there”s: Deixis and markedness
in Daghestan. In NSL 7: Linguistic studies in the non-Slavic languages of the Com-
monwealth of Independent States and the Baltic Republics, ed. Howard I. Aronson,
79–93. Chicago, IL: Chicago Linguistic Society.
Hale, K.L. 1966. Kinship reflections in syntax. Word 22:318–324.
Ingram, David. 1978. Typology and universals of personal pronouns. In Universals
of human language, Vol. 3: Word structure, ed. Joseph H. Greenberg, 213–247.
Stanford, CA: Stanford University Press.
Johnston, Trevor. 1991. Spatial syntax and spatial semantics in the inflection of signs
for the marking of person and location in Auslan. International Journal of Sign
Linguistics 2:29–62.
Johnston, Trevor. 1998. Signs of Australia: A new dictionary of Auslan. North Rocks,
NSW: North Rocks Press.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Lane, Harlan. 1984. When the mind hears: A history of the Deaf. New York: Random
House.
Last, Marco. In preparation. Expressions of numerosity: A cognitive approach to
crosslinguistic variation in grammatical number marking and numeral systems.
Doctoral dissertation, University of Amsterdam.
Laycock, Donald C. 1965. The Ndu language family. Canberra: Australian National
University.
Liddell, Scott. 1994. Tokens and surrogates. In Perspectives on Sign Language Structure:
Papers from the 5th International Symposium on Sign Language Research, Vol. I,
ed. I. Ahlgren, B. Bergman, and M. Brennan, 105–119. Durham, England: The
International Sign Linguistics Association.
Liddell, Scott. 1995. Real, surrogate, and token space: Grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–41.
Hillsdale, NJ: Lawrence Erlbaum.
Liddell, Scott. 1996. Numeral incorporating roots and nonincorporating prefixes in
American Sign Language. Sign Language Studies 92:201–225.
Liddell, Scott. 2000a. Indicating verbs and pronouns: Pointing away from agreement. In
The signs of language revisited: An anthology to honor Ursula Bellugi and Edward
Klima, ed. Harlan Lane and Karen Emmorey, 303–320. Mahwah, NJ: Lawrence
Erlbaum.
Liddell, Scott. 2000b. Blended spaces and deixis in sign language discourse. In Lan-
guage and gesture: Window into thought and action, ed. David McNeill, 331–357.
Cambridge: Cambridge University Press.
Liddell, Scott and Robert Johnson. 1989. American Sign Language: The phonological
base. Sign Language Studies 64:195–277.
Lillo-Martin, Diane. 1986. Parameter setting: Evidence from use, acquisition, and break-
down in American Sign Language. Doctoral dissertation, University of California,
San Diego, CA.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language.
In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 155–170.
Hillsdale, NJ: Lawrence Erlbaum.
Lillo-Martin, Diane and Edward S. Klima. 1990. Pointing out differences: ASL pro-
nouns in syntactic theory. In Theoretical issues in sign language research, Vol. I:
Linguistics, ed. S. Fischer and P. Siple, 191–210. Chicago, IL: The University of
Chicago Press.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical issues
in sign language research, Vol. I: Linguistics, ed. S. Fischer and P. Siple, 175–190.
Chicago, IL: The University of Chicago Press.
Miller, G. A. 1956. The magical number seven, plus or minus two: Some limits on our
capacity for processing information. Psychological Review 63:81–97.
Mühlhäusler, Peter and Rom Harré. 1990. Pronouns and people: The linguistic con-
struction of social and personal identity. Oxford: Basil Blackwell.
Nagaraja, K. S. 1985. Khasi: A descriptive analysis. Pune, India: Deccan College.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: The MIT Press.
Padden, Carol. 1988. Interaction of morphology and syntax in American Sign Language.
New York: Garland.
Pederson, Eric and Jan Nuyts. 1997. On the relationship between language and concep-
tualization. In Language and conceptualization, ed. Jan Nuyts and Eric Pederson,
1–12. Cambridge: Cambridge University Press.
Pinker, Steven and Paul Bloom. 1990. Natural language and natural selection. Behavioral
and Brain Sciences 13:707–784.
Pizzuto, Elena. 1986. The verb system of Italian Sign Language (LIS). In Signs of
life: Proceedings of the Second European Congress of Sign Language Research,
ed. Bernard Tervoort, 17–31. Amsterdam: Institute of General Linguistics of the
University of Amsterdam.
Pizzuto, Elena, Enza Giurana, and Giuseppe Gambino. 1990. Manual and nonmanual
morphology in Italian Sign Language: Grammatical constraints and discourse pro-
cesses. In Sign language research: Theoretical issues, ed. Ceil Lucas, 83–102.
Washington DC: Gallaudet University Press.
Rabel, L. 1961. Khasi: A language of Assam. Baton Rouge, LA: Louisiana State Uni-
versity Press.
Ray, Sidney Herbert. 1926. A comparative study of the Melanesian Island languages.
London: Cambridge University Press.
Reed, Judy and David L. Payne. 1986. Asheninca (Campa) pronominals. In Pronominal
systems, ed. Ursula Wiesmann, 323–331. Tübingen: Narr.
Siple, Patricia. 1982. Signed language and linguistic theory. In Exceptional language and
linguistics, ed. Loraine K. Obler and Lise Menn, 313–338. New York: Academic
Press.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwan Sign Language. In Theoretical
issues in sign language research, Vol. I: Linguistics, ed. Susan Fischer and Patricia
Siple, 211–228. Chicago: The University of Chicago Press.
Supalla, Samuel J. 1991. Manually Coded English: The modality question in sign lan-
guage development. In Theoretical issues in sign language research, Vol. II: Psy-
chology, ed. Patricia Siple and Susan Fischer, 85–110. Chicago: University of
Chicago Press.
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American
Sign Language. Doctoral dissertation, University of California, San Diego, CA.
Supalla, Ted. 1986. The classifier system in American Sign Language. In Noun classes
and categorization, ed. Colette Craig, 181–214. Amsterdam: John Benjamins.
Supalla, Ted and Yutaka Osugi. Unpublished. Gender handshapes in JSL (Japanese Sign
Language). Course lecture notes, University of Rochester.
Vasishta, Madan M., James C. Woodward, and Kirk L. Wilson. 1978. Sign language
in India: Regional variation within the deaf population. Indian Journal of Applied
Linguistics 4:66–74.
Weber, David J. 1986. Huallaga Quechua pronouns. In Pronominal systems, ed. Ursula
Wiesmann, 333–349. Tübingen: Narr.
Winston, Elizabeth A. 1995. Spatial mapping in comparative discourse frames. In Lan-
guage, gesture, and space, ed. Karen Emmorey and Judy Reilly, 87–114. Hillsdale,
NJ: Lawrence Erlbaum.
Woodward, James C. 1978a. Historical basis of American Sign Language. In Under-
standing language through sign language research, ed. P. Siple, 333–348. New
York: Academic Press.
Woodward, James C. 1978b. All in the family: Kinship lexicalization across sign lan-
guages. Sign Language Studies 19:121–138.
Zeshan, Ulrike. 1998. Functions of the index in IPSL. Manuscript, University of Cologne,
Germany.
Zeshan, Ulrike. 1999. Indo-Pakistani Sign Language. Manuscript, Canberra, Australian
National University, Research Centre for Linguistic Typology.
14 Is verb agreement the same crossmodally?
Christian Rathmann and Gaurav Mathur
14.1 Introduction
One major question in linguistics is whether the universals among spoken lan-
guages are the same as those among signed languages. Two types of universals
have been distinguished: formal universals, which impose abstract conditions
on all languages, and substantive universals, which fix the choices that a lan-
guage makes for a particular aspect of grammar (Chomsky 1965; Greenberg
1966; Comrie 1981). It would be intriguing to see if there are modality dif-
ferences in both types of universals. Fischer (1974) has suggested that formal
universals like some syntactic operations apply in both modalities, while some
substantive universals are modality specific. Similarly, Newport and Supalla
(2000:112) have noted that signed and spoken languages may have some dif-
ferent universals due to the different modalities.
In this chapter we focus on verb agreement as it provides a window into some
of the universals within and across the two modalities. We start with a working
definition of agreement for spoken languages and illustrate the difficulty in
applying such a definition to signed languages. We then embark on two goals:
to investigate the linguistic status of verb agreement in signed language and
to understand the architecture of grammar with respect to verb agreement. We
explore possible modality differences and consider their effects on the nature
of the morphological processes involved in verb agreement. Finally, we return
to the formal and substantive universals that separate and/or group spoken and
signed languages.
Spoken languages vary as to whether they show null, weak, or strong agree-
ment (e.g. Speas 1995). Null agreement languages do not show overt agreement
for any combination of person, number, and/or gender features (see Table 14.1).
Other languages like Brazilian Portuguese and English (Table 14.2) show overt
agreement for some feature combinations. If there is no overt agreement for
a certain combination, a phonetically null affix, represented here by ø, is at-
tached to the verb.1 Positing phonetically null affixes in languages like Brazilian
Portuguese or English is justified by the fact that they contrast with overt agree-
ment for other combinations within the paradigm. This is different from null
agreement languages, where not even a phonetically null affix is attached.
Languages that have strong agreement (Table 14.3) show overt agreement
for all feature combinations, even if the same form is used for two or more
combinations, e.g. -en for first person plural and third person plural in German.
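To make the three-way typology concrete, the following sketch (ours, purely for exposition; it is not part of the original discussion, and all names are invented) models a paradigm as a mapping from person and number feature combinations to affixes, distinguishing a phonetically null affix from the absence of any affix slot at all:

```python
# A minimal, illustrative sketch: a paradigm maps (person, number)
# features to an affix. "" stands for a phonetically null affix (the
# o-slash of weak agreement); an empty paradigm stands for a null
# agreement language, where not even a null affix is attached.

NULL_AGREEMENT = {}  # no agreement for any feature combination

ENGLISH_PRESENT = {  # weak: one overt form (-s) contrasts with null
    ("1", "sg"): "", ("2", "sg"): "", ("3", "sg"): "-s",
    ("1", "pl"): "", ("2", "pl"): "", ("3", "pl"): "",
}

GERMAN_PRESENT = {   # strong: every combination has an overt affix
    ("1", "sg"): "-e",  ("2", "sg"): "-st", ("3", "sg"): "-t",
    ("1", "pl"): "-en", ("2", "pl"): "-t",  ("3", "pl"): "-en",
}

def classify(paradigm):
    """Classify a paradigm as 'null', 'weak', or 'strong' agreement."""
    if not paradigm:
        return "null"    # not even a phonetically null affix
    if all(affix != "" for affix in paradigm.values()):
        return "strong"  # overt agreement for every combination, even
                         # if one form serves several combinations (-en)
    return "weak"        # overt forms contrast with null forms

for p in (NULL_AGREEMENT, ENGLISH_PRESENT, GERMAN_PRESENT):
    print(classify(p))   # -> null, weak, strong
```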
One characteristic common to all types of spoken languages showing overt
agreement is that they show subject agreement. In rare cases, the verb may agree
with the object – e.g. Huichol (Comrie 1982:68–70) and Itelmen (Bobaljik and
Wurmbrand 1997) – but these languages usually show subject agreement too,
which suggests that object agreement is more marked than subject agreement
in spoken languages.
There seems to be little controversy in the literature regarding the realization
of verb agreement in spoken languages. On the other hand, in the literature on
signed languages, the status of verb agreement is still under debate. Consider
Figure 14.1, which shows what many people mean by “verb agreement” in
signed languages. As the figure shows for American Sign Language (ASL), the
handshape for ASK (an index finger) bends as it moves from one location to an-
other in front of the signer. Each location can be understood as representing a ref-
erent: the first would be associated with the asker and the second with the askee.
While the forms for ask differ across the signed languages with respect
to hand configuration and other lexical properties, the forms undergo exactly
the same changes to mark the meaning of you ask me. See the examples in
1 The agreement forms given for the spoken languages are valid in the present tense; they may
look different in other tenses, providing another source of variation.
How do we characterize the reference to the asker and to the askee, as well
as their relation to each other and to the verb? How do we account for the large
number of substantive universals across signed languages with respect to verb
agreement? We briefly review the sign language literature, which has mostly
focused on ASL, to understand how these issues have been addressed.
5 The sign language literature now calls them “regular” and “backwards” verbs (compare Padden
1983).
a verb’s arguments from its direction of movement alone, and verbs like
TEASE and BOTHER do not move between loci but change only in their
orientation.6
[Figure: the sign ASK in DGS, NS, and Auslan]
Language, Keller (1998) using data from DGS, Meir (1998) using data from
Israeli Sign Language, and Janis (1992), Bahan (1996), and Cormier, Wechsler
and Meier (1998) using data from ASL have also followed this kind of ap-
proach, under which the locus is represented as a variable in the linguistic
system, whose content comes from discourse.7 There is no need to represent
the locus overtly at the level of syntax. It is sufficient to use the referential
indices that are associated with loci during the discourse. Keller further sug-
gests that once the content is retrieved from discourse, it is cliticized onto the
verb.
7 Ahlgren (1990:167) argues that “in Swedish Sign Language pronominal reference to persons
is made through location deictic terms” rather than through personal pronouns. Assuming that
the use of location deictic terms is dependent on discourse structure, we have grouped this
publication under the R-locus view.
8 A related point is made by Liddell (1990, 1995) that many agreement verbs are articulated at
a specific height. For example, the ASL signs ESP, TELL, GIVE, and INVITE are articulated
respectively at the levels of the forehead, the chin, the chest, and the lower part of the torso.
376 Christian Rathmann and Gaurav Mathur
“Garfield” are mapped respectively from the “owner” and “Garfield” in the
cartoon space. From Real space, the “signer” is mapped onto “Garfield” in the
blended space.
Using entities in mental space removes the need to define a “locus” mor-
phologically or phonologically. It also follows from the account that verbs are
directed according to the height of the referent(s) rather than to a dot-like point
in space.
9 See, for example, Lillo-Martin (1991), Cormier et al. (1998), Bahan (1996:84) citing Neidle
et al. (1995) and Neidle et al. (2000).
378 Christian Rathmann and Gaurav Mathur
This last criterion focuses on the morphological side of agreement and does
not conclusively determine whether there is agreement as a general linguistic
process, especially when some of the morphemes are null. To argue for the
presence of verb agreement in the English sentence I ask her (as opposed to he
ask-s her), it is common to assume that there is a phonetically null morpheme
for first person singular attached to the verb ask, but this assumption is not
required by the criteria.
The criteria can also be applied in such a way that signed languages exhibit
agreement if the spatial location of a noun referent is taken to be a grammat-
ical category. For example, in the DGS sentence MUTTER IXi VATER IXj
iFRAGENj ‘the mother asks the father,’ FRAGEN can be said to agree with the
object IXj VATER in its spatial location if and only if IXj VATER is syntactically
related to FRAGEN (as an object); IXj VATER has a particular spatial location
(notated by j), which is independent of the nature of FRAGEN; and this spatial
location is expressed as an endpoint of the verb.
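Read procedurally, this application of the criteria amounts to the conjunction of three checks. The following sketch is our own illustration of that reading (all class and attribute names are invented for the example):

```python
# Illustrative sketch of the three conditions, hard-coding the DGS
# example MUTTER IXi VATER IXj iFRAGENj 'the mother asks the father'.

from dataclasses import dataclass

@dataclass
class NounPhrase:
    gloss: str
    locus: str            # spatial location index (e.g. "j")
    role: str             # syntactic relation to the verb

@dataclass
class Verb:
    gloss: str
    endpoint_locus: str   # locus expressed at the verb's endpoint

def agrees_with(verb, np):
    return (np.role == "object"                   # syntactically related
            and np.locus is not None              # has its own spatial
                                                  # location, independent
                                                  # of the verb
            and verb.endpoint_locus == np.locus)  # location expressed as
                                                  # an endpoint of the verb

vater = NounPhrase("IXj VATER", locus="j", role="object")
fragen = Verb("FRAGEN", endpoint_locus="j")
print(agrees_with(fragen, vater))  # -> True
```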
Similarly, signed languages may pass the criteria if person is taken as the rele-
vant grammatical category. Verb forms for first person are distinctive from non-
first person forms (Meier 1990). The presence of a single distinctive form, like
the English third person singular form -s, is sufficient to pass Lehmann’s criteria.
Signed languages may also pass the criteria if number is the relevant gram-
matical category. Although number marking does not change the directionality
of the verb, and Liddell (2000a) is concerned with identifying the grammatical
category that drives the change in directionality, the presence of overt number
agreement in signed languages can be used to argue for the grammatical basis
of verb agreement. For instance, plural object agreement is expressed through
the “multiple” morpheme in a number of signed languages that we have studied
(DGS, ASL, Auslan, Russian Sign Language, and NS).
tests: PAM may appear with the uninflected forms of such DGS signs, and the
verbs of this type in both languages may be modulated for the “multiple” form.
Otherwise, they cannot shift the locus of the (animate) argument nor can they
omit object agreement optionally.
Within this set is a subtype of verbs that may take two animate arguments
or that may take a concrete, inanimate argument instead of an animate one.
By “concrete” we mean the referent of the argument is something that we
can see in the real world. Examples include DGS BEOBACHTEN ‘look at’
and ASL LEAVE. If these verbs appear with two animate arguments, they
behave like the first set of verbs with respect to the four tests. However, if
they appear with an inanimate argument, they pass only the third test: they
can shift the locus of the object. In its usual sense, a verb meaning ‘give to
someone’ moves from the subject locus to the object locus. However, it can also
mean ‘hand an object.’ If so, the verb moves from the source locus to the goal
locus, even though the subject argument remains animate, with the theta role of
CAUSER of the event. When these verbs appear with an inanimate argument,
they cannot appear with PAM, nor be inflected for “multiple”; but, importantly,
like the first class of verbs, object agreement with the inanimate argument is
obligatory.
There is another set of verbs, which may take two animate arguments, but
in some cases, they may instead take a nonconcrete inanimate argument. For
example, DGS LEHREN ‘teach’ and ASL OFFER may appear with two animate
arguments and behave exactly like the verbs in the first class or they may
appear with a nonconcrete inanimate argument, as in ‘teach mathematics,’ ‘offer
promotion,’ and ‘support a certain philosophy.’ In these cases, the verbs pass
only the fourth test: they optionally leave out object agreement even if the
argument has been set up at a location in the space in front of the signer.
Otherwise, they cannot be used with PAM, nor with the “multiple” inflection,
nor can they shift the locus of the object.
These three types of verbs differ from other verbs like DGS KOCHEN ‘cook’
and ASL BUY which always take inanimate object arguments. Other verbs that
do not show agreement are those that take only one animate argument (DGS
SCHWIMMEN ‘swim’) and verbs that take a sentential complement (DGS
DENKEN ‘think’).12
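The behavior of these verb types on the four diagnostics can be summarized in tabular form. The sketch below is our own summary, reconstructed from the discussion above (the type labels are ours, and the rows for the second and third types describe their behavior when appearing with the inanimate argument):

```python
# Illustrative summary: which of the four tests each verb type passes.

TESTS = ("PAM", "multiple", "locus shift", "optional omission")

VERB_TYPES = {
    "two animate arguments":                         {"PAM", "multiple"},
    "concrete inanimate argument (e.g. ASL LEAVE)":  {"locus shift"},
    "nonconcrete argument (e.g. DGS LEHREN)":        {"optional omission"},
}

for label, passed in VERB_TYPES.items():
    row = ", ".join(f"{t}: {'yes' if t in passed else 'no'}" for t in TESTS)
    print(f"{label} -> {row}")
```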
There are several psych verbs that assign the theta role of an EXPERIENCER
to an external argument, some of which have been previously assumed to be
plain (or non-agreeing) verbs, e.g. ASL LIKE and LOVE (Padden 1983; for
similar data in Israeli Sign Language, see Meir 1998). Some that select for
one argument – like DGS ERSCHRECKEN ‘shock’ or ASL SURPRISE –
do not qualify for verb agreement since they do not have the required two
arguments. On the other hand, we argue that psych verbs with two animate
arguments are agreeing verbs. First, some such verbs do show agreement, e.g. ASL
HATE, ADMIRE, PITY, and LOOK-DOWN-ON. Also, in DGS such verbs
may appear with PAM: MAG ‘like’ and SAUER ‘be mad at.’13 Other psych
verbs do not show agreement because they are articulated on the
body, e.g. DGS MAG ‘like’ or ASL LOVE.
The above characterizations seem to lead to the generalization, in line with
Janis (1992), that when verb agreement is present, it is with an animate direct
object or, if one is present, with an indirect object, which itself tends to be
animate.
It is possible that the animate direct object in (2b) shares the same structural
position as the indirect object in (2c); similarly the (inanimate) direct object in
(2a) may share the same structural position as the direct object in (2c). If this is
correct, it would be straightforward to characterize verb agreement in terms of
the structural positions of the subject and the indirect object-like position.14 This
clustering of indirect objects with animate direct objects receives independent
evidence from Mohawk, where “noun incorporation is a property of inanimate
nouns that fill the direct object role” and where “noun incorporation of animate
direct objects is limited and noun incorporation of subjects and indirect objects
is completely impossible” yet visible agreement morphemes exist for these last
three categories (Baker 1996:20). In sum, defining verb agreement as referring
to entities in mental spaces alone does not seem to be sufficient for predicting
the different types of verbs with respect to agreement in terms of the animacy
properties of their arguments.
14 Janis’s (1992) analysis is similar in that it uses grammatical relations to predict the presence of
verb agreement, but instead of characterizing the agreement exclusively in terms of grammatical
relations, this analysis uses a hierarchy of controller features that include case and semantic
relations.
In all the examples, the noun phrases HANS and MARIE are topicalized with a
special facial expression marker. While the object may be an overt noun phrase,
which would be MARIE in the above cases, it is more common to have a null
pronoun, indicated by a small pro. Then the difference between (3) and (4) lies
in the fact that PAM may be raised before the negation, but the verb FRAGEN
cannot, even though the verb bears agreement just like PAM. These facts suggest
that PAM licenses an additional layer of structure that allows the object to shift
from its base-generated position and raise above negation. If PAM functions
to show agreement when a verb cannot show it and if PAM licenses object
raising, this constitutes strong syntactic evidence for the linguistic component
of agreement in signed languages.
Furthermore, the use of PAM is available not only in DGS but also in other
signed languages such as NS (Fischer 1996), Taiwan Sign Language (Smith
1990) and Sign Language of the Netherlands (Bos 1994). However, not all
signed languages have an element like PAM, for example ASL, British Sign
Language, and Russian Sign Language. Thus, there may be parametric variation
across signed languages with respect to whether they can use PAM or not, which
may explain the differing basic word orders (e.g. SVO vs. SOV, i.e. subject–
verb–object vs. subject–object–verb) (Rathmann 2001). This syntactic variation
across signed languages constitutes another piece of evidence for the linguistic
aspect of verb agreement in signed languages.
Another example comes from binding principles. Lillo-Martin (1991:62–63)
argues that when there is verb agreement, a small pro may appear in the object
position, as in *STEVEi SEEi pro ‘Steve saw himself.’ Like overt pronouns,
small pro is constrained by a binding principle that says roughly that a pronoun
cannot be bound by an antecedent within the same clause (Chomsky 1981:188).
Thus, the sentence is ruled out. We see that verb agreement in signed languages
interacts with pronouns in ways similar to spoken languages at the level of
syntax.
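The binding fact can be stated as a single condition on indices. A minimal sketch (ours; the function name is invented):

```python
# Illustrative sketch of the relevant binding principle: a pronoun
# (overt or null pro) may not be bound by an antecedent within the
# same clause, ruling out *STEVEi SEEi pro 'Steve saw himself'.

def pronoun_licensed(pronoun_index, antecedent_index, same_clause):
    """False if the pronoun is bound clause-internally."""
    return not (same_clause and pronoun_index == antecedent_index)

print(pronoun_licensed("i", "i", same_clause=True))  # -> False (ruled out)
print(pronoun_licensed("i", "j", same_clause=True))  # -> True
```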
In line with Aronoff et al. (2000), Meier (2002), and Lillo-Martin (this vol-
ume), these examples make two points:
• There are syntactic constraints that reveal a linguistic component to verb
agreement.
• These constraints show the need for a syntactic module, which will be im-
portant later in the discussion of the architecture of grammar.
15 Fiengo and May (1994:1) define the function of an index as affording “a definition of syntactic
identity: elements are the ‘same’ only if they bear occurrences of the same index, ‘different’ if
they bear occurrences of different indices.”
structure. If the indices are distinct, the loci are automatically distinct and face
each other. This is one way to reconcile the linguistic nature of verb agreement
with the listability issue in signed languages.
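On this view, the grammar never stores a list of loci; loci are assigned in discourse as referential indices are introduced. A minimal sketch of that idea (ours; the class and naming scheme are invented):

```python
# Illustrative sketch: referential indices come from an unbounded set,
# and loci are assigned on demand in discourse, so no finite list of
# loci needs to be part of the grammar.

import itertools

class Discourse:
    def __init__(self):
        self._loci = {}                    # referential index -> locus
        self._counter = itertools.count()  # unbounded supply of loci

    def locus_for(self, index):
        """Return the locus for an index, assigning a fresh one if new."""
        if index not in self._loci:
            self._loci[index] = f"locus-{next(self._counter)}"
        return self._loci[index]

d = Discourse()
print(d.locus_for("i"))  # -> locus-0 (assigned on first mention)
print(d.locus_for("j"))  # -> locus-1 (distinct index, distinct locus)
print(d.locus_for("i"))  # -> locus-0 (same index, same locus)
```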
Now let us return to the possibility raised by the infinity view: are signed
languages still considered to be a separate group from spoken languages? In
spoken languages, the elements that determine verb agreement are argument
structure, the indices (or more precisely, the phi-features) of the noun phrases,
as well as the visibility condition that these noun phrases may be assigned theta
roles only if they are made “visible” either through the assignment of abstract
Case (Chomsky 1981:Chapter 6) or through coindexation with a morpheme
in the verb via agreement or movement (Morphological Visibility Condition;
Baker 1996:17). This is exactly the same as the scenario sketched for signed
languages, if we take the visibility condition to mean that in the case of signed
languages, a noun phrase is “visible” for theta role assignment through agree-
ment in the sense of Baker (1996).
It seems, then, that the visibility condition on argument structure is a candidate
for a formal universal applying to both spoken and signed languages. What
argument structure decides is not which argument the verb agrees with, since
that is subject to variation (recall that agreement tends to be with the subject
in spoken languages and with the object in signed languages). Rather, it is
the fact that the argument structure predicts verb agreement that is universal
crossmodally.
While signed and spoken languages may be grouped together on the basis
of these formal universals, the infinity view is partly correct that there must be
some differences between the two modalities due to the listability issue. We
suggest that the differences lie at the articulatory-perceptual interfaces and we
flesh out the above scenario with an articulated architecture of grammar as it
pertains to verb agreement.
[Figure 14.3: The architecture of grammar: syntactic structure, conceptual structure, phonological structure, and the gestural space as medium, linked by correspondence rules]
also “correspondence rules” linking one module with another. We go over each
module in clockwise fashion, starting from the top of Figure 14.3.
In syntax, elements are taken from the numeration (a set of lexical items
chosen for a particular derivation, in the sense of Chomsky 1995), merged and
moved. Here syntactic constraints apply, and the noun phrases are themselves
linked to conceptualizations of referents.
Conceptual structure maps onto “other forms of mental representation that
encode, for instance, the output of the visual faculty and the input to the formu-
lation of action” (Jackendoff 1992:32).16 This is the domain of mental represen-
tations that may be subject to further “inference rules,” which include not just
logical inference but also rules of “invited inference, pragmatics and heuristics.”
We focus on one part of conceptual structure, the spatio-temporal conceptual
structure. Since this module is concerned with relations between entities, we
suggest it is this part that interfaces with the gestural space.
Next, phonological structure can be broken into subcomponents, such as seg-
mental phonology, intonation contour, and metrical grid. This is the component
where phonological constraints apply both within and across syllables, defined
canonically in terms of consonants and vowels.
The above architecture applies to both spoken and signed languages. We
have made two adaptations to the architecture. The first difference is in the A–P
systems. In Jackendoff’s original model, the A–P systems are obviously the
auditory input system and vocal motor output. In this adaptation, the systems
for signed languages are the visual input and motor output systems for the hands
and nonmanuals.
As outlined in a number of current sign language phonology theories (e.g.
Sandler 1989; van der Hulst 1993; Brentari 1998), phonological structure in
signed languages is concerned with defining the features of a sign such as hand-
shape, orientation, location, movement, and nonmanuals like facial expressions,
16 Jackendoff (1992:33) mentions that this is one point of similarity with the theoretical framework
of cognitive grammar by Fauconnier 1985; 1997; Lakoff 1987; Langacker 1987.
eye gaze, and head tilt. Phonological structure also encodes the constraints on
their combinations. While phonological structure may be modality-specific in
its content, it is a self-governing system that interacts in parallel ways with other
modules for both signed and spoken languages. We clarify the architecture by
inserting “A–P interfaces” between phonological structure and the input and
output systems.
We turn to the second difference in the above adaptation: there is a “ges-
tural space as medium” linking the conceptual structure with the articulatory–
perceptual interfaces (A–P interfaces).17 Here we have in mind representational
gestures. Such gestures include pointing out things (deixis) and indicating the
shape and/or size of an object as well as showing spatial relations. We do not
refer to other types of gesture such as pantomime or emblems like the F hand-
shape for ‘good’ (Kendon 2000), which uses an open hand with index finger
contacting the thumb. We assume that these gestures do not use the gestural
space; rather the emblems, for example, would come from a list of conven-
tionalized gestures which may vary crossculturally and which appear in both
spoken and signed languages.
The gestural space makes visible the relations encoded by the spatio-temporal
conceptual structure.18 The gestural space is a level of representation where a
given referent may be visualized as being on one side of that space. The spatio-
temporal conceptual structure is different in that it provides the referents and
their spatial relations, if any, but not necessarily where they are represented
in the space in front of the signer. Moreover, the form at the A–P interface
can be different from what is provided by the gestural space. For example,
the agreement rules in signed languages permit optional subject agreement
omission (Padden 1983). The referents for both subject and object may be
visualized within the gestural space in particular locations, but the location of
the subject does not have to be used at the A–P interface.
The following figure summarizes the role of each module in the architecture.
We now provide one example each from spoken and signed languages and
elaborate on these roles.
[Figure 14.4: The role of each module: syntactic structure provides the referential indices i, j; conceptual structure provides the conceptualization of the referents; the gestural space as medium makes the conceptualization of a referent visible; phonological structure provides an underspecified morpheme; and the articulatory-perceptual system matches the referential indices i, j with the conceptualization of the referents]
between speech and gesture and that the role of gesture is to aid speech, but none
of them suggest that the role of gesture is directly correlated to verb agreement.
To understand how gesture is otherwise used in spoken languages, consider
McNeill’s (2000:144) Growth Point Hypothesis, which suggests that there is
“an analytic unit combining imagery and linguistic categorial content.” While
gesture and speech are considered separate, they are combined together under a
“growth point” so that they remain tightly synchronized. Another perspective on
gesture comes from Kita’s (2000:163) Information Packaging Hypothesis: “the
production of a representational gesture helps speakers organize rich spatio-
temporal information into packages suitable for speaking.” Thus, the role of
the gestural space can be seen as an addition to the architecture of grammar for
spoken languages.
This role of the gestural space as an addition is one reason that the gestural
space is placed as interacting with the A–P interfaces, not with phonological
structure. It is not desirable to admit any phonological representation of gesture
during speech, since they access different motor systems. Gesture accesses the
motor system for the hands and arms, while speech accesses the motor system
for the vocal cords. If the gestural space interacts directly with the hand and
arm motor system at the A–P interface, there will be no conflict with the use
of speech in phonological structure, which then interacts with the vocal motor
system at the A–P interface.
Let us see how this system works for a spoken word that is accompanied
by a gesture, such as the Spanish word for ‘go down’ bajar accompanied by a
spinning gesture to express the manner of rolling. This example is useful be-
cause Duncan (2001) has suggested that the manner gesture is not compensatory
but figures in “thinking-for-speaking” about motion events in verb-framed lan-
guages like Spanish, which express less path and manner information than
satellite-framed languages like English (Talmy 1985). In the syntactic struc-
ture, the verb bajar has an argument structure where only one theta role of the
THEME is assigned. The noun phrase in the example is cat, which receives
the theta role of the THEME as well as a referential index i. The conceptual
structure envisions a cat rolling down a pipe. At the same time, the phonological
structure provides the phonetic form of the subject and the correctly inflected
verb as determined in the syntax: el gato baja ‘the cat goes down.’ Optionally,
a spinning gesture is added from the gestural space, which makes visible the
manner of rolling that is present in the conceptual structure.
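The division of labor in this example can be caricatured as a small pipeline: whatever part of the conceptual structure is not encoded in the linguistic form may optionally surface as gesture. The sketch below is our own simplification (all names and data structures are invented):

```python
# Illustrative sketch of the 'el gato baja' example: manner is present
# in conceptual structure but not encoded by the verb-framed Spanish
# form, so it may be expressed by an optional co-speech gesture.

conceptual_structure = {"path": "down", "manner": "rolling"}

linguistic_form = {"speech": "el gato baja", "encodes": {"path"}}

unencoded = set(conceptual_structure) - linguistic_form["encodes"]
gestures = [f"{feature} gesture ({conceptual_structure[feature]})"
            for feature in unencoded]

print(linguistic_form["speech"])  # -> el gato baja
print(gestures)                   # -> ['manner gesture (rolling)'],
                                  #    e.g. the spinning gesture
```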
The addition of the gestural space depends on how much linguistic informa-
tion is provided. For example, if all the information from conceptual structure is
encoded in the linguistic form, there may be no need to add the use of gestural
space. If some information, such as the manner of the movement present in
the conceptual structure, is not encoded linguistically, gesture may be added to
make the information clear to the listener, as shown above. Gesture may also
help the speaker organize spatio-temporal information, for example, when giv-
ing directions over the phone (see Kita’s Information Packaging Hypothesis;
Kita 2000). However gesture still does not directly aid in the expression of verb
agreement in spoken languages.
fingertips trace an arc against the signer’s chest. However, this form does not
occur.20 This is due to a phonetic constraint that bans the elbow from rotating
inward (i.e. toward the body) while keeping the palm up and raising the shoulder
at the same time.
There are many other phonetic constraints that may interact with the agree-
ment forms in signed languages to yield those phonetic gaps. In such cases,
there are several alternatives to the expected (but phonetically barred) agree-
ment forms, as described by Mathur and Rathmann (2001): distalization, con-
traction, arc deletion, and the use of auxiliary-like elements (PAM) or overt
pronouns, among several others.
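Schematically, the grammar generates an agreement form and the articulatory system either realizes it or substitutes one of these alternatives. A minimal sketch, under the simplifying assumption that the first listed repair is chosen (the constraint check below is a stand-in for the real articulatory facts):

```python
# Illustrative sketch: if the expected agreement form violates a
# phonetic constraint, a repair strategy applies (cf. Mathur and
# Rathmann 2001); otherwise the form surfaces unchanged.

REPAIRS = ("distalization", "contraction", "arc deletion",
           "auxiliary-like element (PAM)", "overt pronoun")

BARRED_FORMS = {
    # stand-in for constraints such as the ban on rotating the elbow
    # inward while keeping the palm up and raising the shoulder
    "ASL GIVE: first person multiple object",
}

def realize(form):
    if form not in BARRED_FORMS:
        return form       # the expected agreement form surfaces
    return REPAIRS[0]     # simplification: pick the first repair

print(realize("DGS GEBEN: first person multiple object"))
print(realize("ASL GIVE: first person multiple object"))  # -> distalization
```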
On the other hand, such gaps do not appear in the verb agreement paradigm
of spoken languages, since they do not have to match what is provided by the
gestural space. Rather, as shown in Section 14.2, they vary as to whether they
show null, weak, or strong agreement. Agreement morphemes may also have
variants (allomorphs) that occur in certain contexts, and there will naturally be
phonetic constraints on the combination of the verb stem with the agreement
morpheme(s). However, these are not comparable to the “phonetic gaps” we
see in signed languages.
20 In contrast, the corresponding DGS sign GEBEN has different phonetic properties. The first
person multiple object form of this verb does not violate any phonetic constraints.
21 There is an alternative approach with which no modality effect is assumed in the morphological
process: the index-copying analysis by Meir (1998) and Aronoff et al. (2000), which is based
on Israeli Sign Language and ASL. The advantages of this analysis vs. those of the analysis
presented in this chapter deserve a thorough discussion in the future.
of the theoretical framework adopted, the point remains that some content is
added to the verb in order to express agreement. This property may constitute a
substantive universal for spoken languages. Otherwise, spoken languages may
vary in how the content is sequenced with respect to the verb: prefixation, suf-
fixation, circumfixation or, in rare cases, infixation; or even a combination of
these. The first three options for affixation are illustrated in Figure 14.5.
[Figure 14.5: “base + agreement” realized as prefixation, suffixation, or circumfixation of the agreement material on the base]
to represent any change in direction of movement at the same time as any change
in orientation. This process is not found in spoken languages for the expression
of verb agreement, and it is suggested that this is a true modality effect since
the (syntactic) agreement rule generates a form that must be matched with the
gestural space at the A–P interfaces. Evidence comes from the phonetic gaps
described in the previous section, which reveal the mismatches (and therefore
the interaction) between the A–P interface and the gestural space.
Moreover, the matching must obey one constraint specific to agreement: the
loci are no more than two in number (and, thus, opposite each other on the line
formed by the loci). Verb agreement must also obey syntactic constraints based
on the argument structure of the verb: it must agree with animate and inanimate
concrete arguments (see Section 14.4.2.2).
14.7.3 Implications
14.7.3.1 Uniformity in form. Spoken languages may be relatively
uniform in that they use the process of affixation for verb agreement, even
though the content varies from one language to another, not just in the verb stem
but also in the affix. On the other hand, for the expression of other functional
categories like aspect and tense, spoken languages may employ a whole variety
of other morphological processes such as:
• reduplication (e.g. Classical Greek grapho ‘I write’ vs. gegrapha ‘I have
written [perfective aspect]’; see also various languages discussed by Marantz
1982; Broselow and McCarthy 1983);
• stem-internal changes (e.g. German er rennt vs. er rannte and English he runs
vs. he ran);
• templatic morphology (e.g. Arabic katab ‘perfective active’ vs. aktub ‘im-
perfective active’ for write; McCarthy 1982).
It seems, then, that the relative uniformity of spoken languages with respect to
verb agreement does not extend to the expression of aspect and tense.
14.7.4 Recreolization
There is another possible factor behind the uniformity of verb agreement in all
signed languages: recreolization (Fischer 1978; Gee and Kegl 1982; Gee and
Goodhart 1988). In our view the relevant factor is the short length of the cycle of
recreolization, namely one or two generations, that may slow the development
of verb agreement and restrict it to the use of the gestural space. The majority
of the Deaf community, who are born to hearing parents, lack native input
and take the process of creolization back to square one. Since they constantly
outnumber the multi-generation Deaf signers who could otherwise advance
the process of creolization (e.g. the case of Simon reported by Singleton and
Newport 1994; Newport 2000), the cycle of recreolization is perpetuated.
24 The palm of the B-mid hand faces the contralateral side, while that of the B-down hand faces
downward.
14.8 Summary
Having shown various differences and similarities between the two modalities,
we return to the discussion of formal and substantive universals. The two modal-
ities share the same architecture of grammar with respect to verb agreement,
with the exception that gestural space does not have the same function in spoken
languages and in signed languages. Other processes within the architecture are
the same crossmodally and would constitute formal universals:
• the theta criterion, which requires every theta role to be discharged from the
verb and every noun phrase to receive one; and
• the visibility condition that every noun phrase be made visible, e.g. through
case or agreement.
At the level of substantive universals, there seem to be modality-specific
universals. Within the spoken modality, languages vary as to whether they ex-
press agreement. When a language expresses agreement, it is usually with the
person, number, and/or gender features of the subject and is expressed through
affixation. Otherwise, the content of the affixation varies greatly, which adds
another layer of diversity. For signed languages, there seem to be a greater
number of substantive universals. All signed languages seem uniformly to ex-
press agreement, and this agreement is uniformly manifested in the same way,
through readjustment. Moreover, this agreement is restricted to animate (and
inanimate concrete) arguments. Finally, it is object agreement that is unmarked,
and number is one phi-feature that may be expressed through overt and separate
morphology.
While signed languages may be uniform with respect to the form of agree-
ment, they may vary in terms of whether an element like PAM is available
in that language and licenses an additional layer of structure in the sentence.
However, even when PAM is available in a signed language, PAM still under-
goes the same process of readjustment as a verb that shows overt agreement.
Thus, we stress that while signed languages may be uniform with respect to
the form of the agreement, they vary with respect to the surface structure of the
sentence.
The modality-specificity of the substantive universals with respect to the
form of agreement is hypothesized to arise from the different uses of the
gestural space in the two modalities.
Acknowledgments
We are very grateful to the following for helpful comments on an earlier draft of
the chapter: Michel deGraff, Karen Emmorey, Morris Halle, and Diane Lillo-
Martin. Finally we thank Richard Meier for his insightful discussion and ex-
tensive comments on the chapter. All remaining errors are our own.
14.9 References
Ahlgren, Ingrid. 1990. Deictic pronouns in Swedish and Swedish Sign Language. In
Theoretical issues in sign language research, Vol. 1: Linguistics, ed. Susan Fischer
and Patricia Siple, 167–174. Chicago, IL: The University of Chicago Press.
Allan, Keith. 1977. Classifiers. Language 53:285–311.
Aronoff, Mark, Irit Meir, and Wendy Sandler. 2000. Universal and particular aspects
of sign language morphology. Manuscript, State University of New York at Stony
Brook and University of Haifa.
Aubry, Luce. 2000. Vertical manipulation of the manual and visual indices in the person
reference system of American Sign Language. Masters thesis, Harvard University.
Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Lan-
guage. Ph.D. dissertation, Boston University.
Baker, Mark. 1996. The polysynthesis parameter. New York: Oxford University Press.
Baker-Shenk, Charlotte and Dennis Cokely. 1980. American Sign Language: A teacher’s
resource text on grammar and culture. Silver Spring, MD: T.J. Publishers.
Bobaljik, Jonathan and Susanne Wurmbrand. 1997. Preliminary notes on agreement in
Itelmen. In PF: Papers at the Interface, ed. Benjamin Bruening, Yoon-Jung Kang
and Martha McGinnis, 395–423. MIT Working Papers in Linguistics, Vol. 30.
Bos, Heleen. 1994. An auxiliary verb in Sign Language of the Netherlands. In Perspec-
tives on Sign Language Structure: Papers from the 5th International Symposium
on Sign Language Research, Vol. 1, ed. Inger Ahlgren, Brita Bergman and Mary
Brennan, 37–53. Durham: ISLA.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Broselow, Ellen and John McCarthy. 1983. A theory of internal reduplication. The
Linguistic Review 3:25–88.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, MA: MIT Press.
Comrie, Bernard. 1981. Language universals and linguistic typology: Syntax and mor-
phology. Oxford: Blackwell.
Comrie, Bernard. 1982. Grammatical relations in Huichol. In Syntax and semantics,
Vol. 15: Studies in transitivity, ed. Paul J. Hopper and Sandra A. Thompson,
95–115. New York: Academic Press.
Cormier, Kearsy. 1998. How does modality contribute to linguistic diversity?
Manuscript, The University of Texas at Austin.
Cormier, Kearsy. 2002. Grammaticization of indexic signs: How American Sign Lan-
guage expresses numerosity. Doctoral dissertation, The University of Texas at
Austin.
Cormier, Kearsy, Steven Wechsler and Richard P. Meier. 1998. Locus agreement in
American Sign Language. In Lexical and constructional aspects of linguistic ex-
planation, ed. Gert Webelhuth, Jean-Pierre Koenig and Andreas Kathol, 215–229.
Stanford, CA: CSLI Publications.
Duncan, Sandra. 2001. Perspectives on the co-expressivity of speech and co-speech
gestures in three languages. Paper presented at the 27th annual meeting of the
Berkeley Linguistics Society.
Ehrlenkamp, Sonja. 1999. Possessivkonstruktionen in der Deutschen Gebärdensprache:
Warum die Gebärde ‘SCH’ kein Verb ist? Das Zeichen 48:274–279.
Fauconnier, Gilles. 1985. Mental spaces: aspects of meaning construction in natural
language. Cambridge, MA: MIT Press.
Fauconnier, Gilles. 1997. Mappings in thought and language. Cambridge: Cambridge
University Press.
Fiengo, Robert and Robert May. 1994. Indices and identity. Cambridge, MA: MIT Press.
Fischer, Susan. 1973. Two processes of reduplication in the American Sign Language.
Foundations of Language 9:469–480.
Fischer, Susan. 1974. Sign language and linguistic universals. In Actes de Colloque
Franco-Allemand de Grammaire Transformationelle, ed. Christian Rohrer and
Nicholas Ruwet, 187–204. Tübingen: Niemeyer.
Fischer, Susan. 1978. Sign language and creoles. In Understanding language through
sign language research, ed. Patricia Siple, 309–331. New York: Academic Press.
Fischer, Susan. 1996. The role of agreement and auxiliaries in sign language. Lingua
98:103–120.
Fischer, Susan and Bonnie Gough. 1978. Verbs in American Sign Language. Sign Lan-
guage Studies 18:17–48.
Fourestier, Simone. 1998. Verben der Bewegung und Position in der Katalanischen
Gebärdensprache. Magisterarbeit, Universität Hamburg.
Friedman, Lynn. 1975. Space, time, and person reference in ASL. Language 51:940–961.
Friedman, Lynn. 1976. The manifestation of subject, object and topic in the American
Sign Language. In Subject and topic, ed. Charles Li, 125–148. New York: Academic
Press.
Gee, James and Wendy Goodhart. 1988. American Sign Language and the human bi-
ological capacity for language. In Language, learning and deafness, ed. Michael
Strong, 49–74. Cambridge: Cambridge University Press.
Gee, James and Judy Kegl. 1982. Semantic perspicuity and the locative hypothesis:
implications for acquisition. Journal of Education 164:185–209.
Gee, James and Judy Kegl. 1983. Narrative/story structure, pausing and American Sign
Language. Discourse Processes 6:243–258.
Greenberg, Joseph. 1966. Universals of language. Cambridge, MA: MIT Press.
Grinevald, Colette. 1999. A typology of nominal classification systems. Faits de
Langues 14:101–122.
Jackendoff, Ray. 1987. Consciousness and the computational mind. Cambridge, MA:
MIT Press.
Jackendoff, Ray. 1992. Languages of the mind. Cambridge, MA: MIT Press.
Jackendoff, Ray. 1997. The architecture of the language faculty. Cambridge, MA: MIT
Press.
Janis, Wynne. 1992. Morphosyntax of the ASL verb phrase. Doctoral dissertation, State
University of New York at Buffalo.
Johnston, Trevor. 1991. Spatial syntax and spatial semantics in the inflection of signs
for the marking of person and location in Auslan. International Journal of Sign
Linguistics 2:29–62.
Johnston, Trevor. 1996. Function and medium in the forms of linguistic expression found
in a sign language. In International Review of Sign Linguistics, Vol. 1, ed. William
Edmondson and Ronnie B. Wilbur, 57–94. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Keller, Jörg. 1998. Aspekte der Raumnutzung in der deutschen Gebärdensprache.
Hamburg: Signum Press.
Kendon, Adam. 2000. Language and gesture: unity or duality? In Language and gesture,
ed. David McNeill, 47–63. Cambridge: Cambridge University Press.
Kita, Sotaro. 2000. How representation gestures help speaking. In Language and gesture,
ed. David McNeill, 162–185. Cambridge: Cambridge University Press.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Lacy, Richard. 1974. Putting some of the syntax back into the semantics. Paper presented
at the annual meeting of the Linguistic Society of America, New York.
Lakoff, George. 1987. Women, fire and dangerous things. Chicago, IL: University of
Chicago Press.
Langacker, Ronald W. 1987. Foundations of cognitive grammar, Vol. 1: Theoretical
prerequisites. Stanford, CA: Stanford University Press.
Lehmann, Christian. 1988. On the function of agreement. In Agreement in natural
language: approaches, theories, descriptions, ed. Michael Barlow and Charles
Ferguson, 55–65. Stanford, CA: CSLI Publications.
Liddell, Scott. 1990. Four functions of a locus: re-examining the structure of space
in ASL. In Sign language research: theoretical issues, ed. C. Lucas, 176–198.
Washington, DC: Gallaudet University Press.
Liddell, Scott. 1995. Real, surrogate, and token space: grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–
42. Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott. 2000a. Indicating verbs and pronouns: pointing away from agreement. In
The signs of language revisited: an anthology to honor Ursula Bellugi and Edward
Klima, ed. Karen Emmorey and Harlan Lane, 303–320. Mahwah, NJ: Lawrence
Erlbaum Associates.
Liddell, Scott. 2000b. Blended spaces and deixis in sign language discourse. In
Language and gesture, ed. David McNeill, 331–357. Cambridge: Cambridge Uni-
versity Press.
Liddell, Scott and Robert Johnson. 1989. American Sign Language: the phonological
base. Sign Language Studies 18:195–277.
Lillo-Martin, Diane. 1991. Universal grammar and American Sign Language: setting
the null argument parameters. Dordrecht: Kluwer Academic.
Lillo-Martin, Diane and Edward Klima. 1990. Pointing out differences: ASL pronouns in
syntactic theory. In Theoretical issues in sign language research, Vol. 1: Linguistics,
ed. Susan Fischer and Patricia Siple, 191–210. Chicago: The University of Chicago
Press.
Lindblom, Björn. 1990. Explaining phonetic variation: a sketch of the H&H theory.
In Speech production and speech modeling, ed. William Hardcastle and Alain
Marchal, 403–439. Dordrecht: Kluwer Academic.
Petronio, Karen and Diane Lillo-Martin. 1997. Wh-movement and the position of Spec,
CP: evidence from American Sign Language. Language 73:18–57.
Prillwitz, Siegmund. 1986. Die Gebärde der Gehörlosen. Ein Beitrag zur Deutschen
Gebärdensprache und ihrer Grammatik. In Die Gebärde in Erziehung und Bil-
dung Gehörloser, ed. Siegmund Prillwitz, 55–78. Tagungsbericht. Hamburg: Signum
Verlag.
Rathmann, Christian. 2001. The optionality of Agreement Phrase: evidence from signed
languages. Unpublished manuscript, The University of Texas at Austin.
Sandler, Wendy. 1986. The spreading hand autosegment of ASL. Sign Language Studies
15:1–28.
Sandler, Wendy. 1989. Phonological representation of the sign. Dordrecht: Foris.
Sandler, Wendy. 1993. A sonority cycle in American Sign Language. Phonology
10:243–279.
Schick, Brenda. 1990. Classifier predicates in American Sign Language. International
Journal of Sign Linguistics 1:15–40.
Shepard-Kegl, Judy. 1985. Locative relations in American Sign Language word forma-
tion, syntax and discourse. Ph.D. dissertation, MIT.
Singleton, Jenny and Elissa Newport. 1994. When learners surpass their models: the
acquisition of American Sign Language from impoverished input. Manuscript,
University of Rochester.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwan Sign Language. In Theoretical
issues in sign language research, Vol. 1: Linguistics, ed. Susan Fischer and Patricia
Siple, 211–228. Chicago, IL: The University of Chicago Press.
Speas, Margaret. 1995. Economy, agreement, and the representation of null arguments.
Unpublished manuscript, University of Massachusetts, Amherst.
Stokoe, William, Dorothy Casterline and Carl Croneberg. 1965. A dictionary of
American Sign Language based on linguistic principles. Silver Spring, MD:
Linstok Press.
Supalla, Ted. 1986. The classifier system in American Sign Language. In Noun classes
and categorization, ed. Colette Craig, 181–214. Amsterdam: John Benjamins.
Supalla, Ted. 1997. An implicational hierarchy in the verb agreement of American Sign
Language. Unpublished manuscript, University of Rochester.
Sutton-Spence, Rachel and Bencie Woll. 1999. The linguistics of British Sign Language:
an introduction. Cambridge: Cambridge University Press.
Talmy, Leonard. 1985. Lexicalization patterns: semantic structure in lexical forms. In
Language typology and syntactic description, Vol. 3: Grammatical categories and
the lexicon, ed. T. Shopen, 57–149. Cambridge: Cambridge University Press.
Talmy, Leonard. 2000. Spatial structuring in spoken language and its relation to that
in sign language. Paper presented at the Third Workshop on Text Structure, The
University of Texas at Austin.
Taub, Sarah. 2001. Language from the body: iconicity and metaphor in American Sign
Language. Cambridge: Cambridge University Press.
van der Hulst, Harry. 1993. Units in the analysis of signs. Phonology 10:209–241.
Wilcox, Phyllis. 2000. Metaphor in American Sign Language. Washington, DC:
Gallaudet University Press.
Wood, Sandra. 1999. Syntactic and semantic aspects of negation in ASL. Masters thesis,
Purdue University.
15 The effects of modality on spatial language:
How signers and speakers talk about space
Karen Emmorey
15.1 Introduction
Most spoken languages encode spatial relations with prepositions or locative
affixes. Often there is a single grammatical element that denotes the spatial
relation between a figure and ground object; for example, the English spatial
preposition on indicates support and contact, as in The cup is on the table. The
prepositional phrase on the table defines a spatial region in terms of a ground
object (the table), and the figure (the cup) is located in that region (Talmy
2000). Spatial relations can also be expressed by compound phrases such as
to the left or in back of. Both simple and compound prepositions constitute a
closed class set of grammatical forms for English. In contrast, signed languages
convey spatial information using so-called classifier constructions in which
spatial relations are expressed by where the hands are placed in the signing space
or in relationship to the body (e.g. Supalla 1982; Engberg-Pedersen 1993).1 For
example, to indicate ‘The cup is on the table,’ an American Sign Language
(ASL) signer would place a C classifier handshape (referring to the cup) on
top of a B classifier handshape (referring to the table). There is no grammatical
element specifying the figure–ground relation; rather, there is a schematic and
isomorphic mapping between the location of the hands in signing space and the
location of the objects described (Emmorey and Herzig in press). This chapter
explores some of the ramifications of this spatialized form for how signers talk
about spatial environments in conversations.
Figure 15.1 Illustration of ASL descriptions of the location of a table within
a room, described from: 15.1a the speaker’s perspective; or 15.1b the ad-
dressee’s perspective. Signers exhibit better comprehension for room descrip-
tions presented from the speaker’s perspective, despite the mental transforma-
tion that this description entails; 15.1c Position of the table described in (a)
and (b)
the location of the table in the room being described (the table is on the left
as seen from the entrance) and what the addressee observes in signing space
(the classifier handshape referring to the table is produced to the addressee’s
right). In this case, the addressee must perform what amounts to a 180° mental
rotation to correctly comprehend the description.
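The transformation the addressee must perform can be pictured as a 180° rotation of the observed signing space. The following sketch is our own illustration (the coordinate system is invented: x runs from the addressee's left to right, y away from the addressee):

```python
# Illustrative sketch: mapping a location in the speaker's signing
# space, as seen by the addressee, onto the described room amounts to
# a 180-degree rotation about the midpoint between the interlocutors.

def rotate_180(x, y):
    return (-x, -y)

# The classifier handshape for the table is produced to the
# addressee's right (x = +1); after rotation, the table is on the
# left as seen from the entrance, as in Figure 15.1c.
print(rotate_180(1, 1))  # -> (-1, -1): table on the left
```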
Although spatial scenes are most commonly described from the speaker’s
point of view (as in Figure 15.1a), it is possible to indicate a different view-
point. ASL has a marked sign that can be glossed as YOU-ENTER, which
indicates that the scene should be understood as signed from the addressee’s
viewpoint (see Figure 15.1b). When this sign is used, the signing space in
which the room is described is, in effect, rotated 180° so that the addressee
is “at the entrance” of the room. In this case, the addressee does not need to
mentally transform locations within signing space. However, ASL descriptions
using YOU-ENTER are quite unusual and rarely found in natural discourse.
Furthermore, Emmorey, Klima, and Hickok (1998) found that ASL signers
comprehended spatial descriptions much better when they were produced from
the speaker’s point of view compared to the addressee’s viewpoint. In that study,
signers viewed a videotape of a room and then a signed description and were
asked to judge whether the room and the description matched. When the room
was described from the addressee’s perspective (using YOU-ENTER), the de-
scription spatially matched the room layout shown on the videotape, but when
signed from the speaker’s perspective (using I-ENTER), the description was
the reverse of the layout on the videotape (a simplified example is shown in
Figure 15.1). Emmorey et al. (1998) found that ASL signers were more accurate
when presented with descriptions from the speaker’s perspective, despite the
mental transformation that these descriptions entailed.
One might consider this situation analogous to that for English speakers who
must understand the terms left and right with respect to the speaker’s point of
view (as in on my left). The crucial difference, however, is that these relations
are encoded spatially in ASL, rather than lexically. The distinction becomes
particularly clear in situations where the speaker and the addressee are both in
the environment, observing the same scene. In this situation, English speakers
most often adopt their addressee’s point of view, for example giving directions
such as, pick the one on your right, or it’s in front of you, rather than pick the one
on my left or it’s farthest from me (Schober 1993; Mainwaring, Tversky, and
Schiano 1996). However, when jointly viewing an environment, ASL signers
do not adopt their addressee’s point of view but use what I term “shared space”
(Emmorey 2002). Figure 15.2 provides an illustration of what is meant by
shared space. In the situation depicted, the speaker and addressee are facing
each other, and between them are two boxes. Suppose the box on the speaker’s
left is the one that he (or she) wants shipped. If the speaker uses signing space
(rather than just pointing to the actual box), he would indicate the box to be
Figure 15.2 Illustration of a speaker using: 15.2a shared space; and 15.2b
using the addressee’s spatial viewpoint to indicate the location of the box
marked with an “X” (the asterisk indicates that signers reject this type of
description). By convention, the half circle represents the signing space in
front of the signer. The “X” represents the location of the classifier sign used
to represent the target box (e.g., a hooked 5 handshape)
shipped by placing the appropriate classifier sign on the left side of signing
space. Note that, in this situation, no mental transformation is required by the
addressee. Instead, the speaker’s signing space is simply “mapped” onto the
jointly observed physical space: the left side of the speaker’s signing space
maps directly to the actual box on the right side of the addressee. In contrast, if
the speaker were to adopt the addressee’s point of view, producing the classifier
sign on his right, the location in signing space would conflict with the location
of the target box observed by the addressee.
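The contrast between shared space and an addressee-viewpoint description can be reduced to two mappings from signing space to the jointly observed physical space. A minimal sketch (ours; the one-dimensional coordinate is invented, with negative values on the speaker's left):

```python
# Illustrative sketch: in shared space the speaker signs where the
# referent actually is (an identity mapping), whereas adopting the
# addressee's viewpoint would mirror the scene and conflict with what
# the addressee observes (the rejected description in Figure 15.2b).

def shared_space(x_physical):
    return x_physical        # sign location = observed location

def addressee_viewpoint(x_physical):
    return -x_physical       # mirrored into the addressee's view

box = -1  # target box on the speaker's left
print(shared_space(box))         # -> -1: classifier sign placed at the
                                 #    box both interlocutors see
print(addressee_viewpoint(box))  # -> +1: sign location conflicts with
                                 #    the observed location of the box
```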
Note that it is not impossible to adopt the addressee’s viewpoint when physical
space is jointly observed by both interlocutors. For example, the speaker could
describe an action of the addressee. In this case, the speaker would indicate a
referential shift through a break in eye gaze, and within the referential shift the
signer could sign LIFT-BOX using a handling classifier construction articulated
toward the right of signing space. The signing space in this case would reflect
the addressee’s view of the environment (i.e. the box is to the addressee’s
right).
In general, however, for situations in which the signer and addressee are
both observing and discussing a jointly viewed physical environment, there
is no true speaker vs. addressee point of view in signed descriptions of that
environment (Emmorey 1998; Emmorey and Tversky, in press). The signing
space is “shared” in the sense that it maps to the physically observed space and
to both the speaker’s and addressee’s view of the physical space. Furthermore,
the signer’s description of the box would be the same regardless of where the
addressee happened to be standing (e.g. placing the addressee to the signer’s
left in Figure 15.2, would not alter the signer’s description or the nature of the
mapping from signed space to physical space). Thus, in this situation, ASL
signers do not need to take into account where their addressee is located, unlike
English speakers who tend to adopt their addressee’s viewpoint. This difference
between languages derives from the fact that signers use the actual space in front
of them to represent observed physical space.
In sum, language modality impacts the interpretation and nature of speaker
and addressee perspectives for spatial descriptions. For descriptions of nonpre-
sent environments in ASL, an addressee must mentally transform the locations
within a speaker’s signing space in order to correctly understand the left–right
arrangements of objects with respect to the speaker’s viewpoint. For speech,
spatial information is encoded in an acoustic signal, which bears no resem-
blance to the spatial scene described. An English speaker describing the room
in Figure 15.1 might say either You enter the room, and a table is to your left
or I enter the room, and a table is to my left. Neither description requires any
sort of mental transformation on the part of the addressee because the relevant
information is encoded in speech rather than in space.2 However, when English
speakers and addressees discuss a jointly viewed scene, an addressee may need
to perform a type of mental transformation if the speaker describes a spatial
location from his or her viewpoint. For example, if the speaker says Pick the
box on my left for the situation depicted in Figure 15.2, the addressee must un-
derstand that the desired box is on his or her right. Again, this situation differs
for signers because the speaker’s signing space maps to the observed physical
space and to the addressee’s view of that space. Signing space is shared, and
no mental transformation is required by the addressee.
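The contrast can be made concrete with a small one-dimensional sketch (my own illustration of the geometry, not part of the study described here; the coordinate convention and function name are assumptions):

```python
# Sketch of the jointly viewed scene in Figure 15.2 on a single left-right
# axis. Convention (assumed here): x > 0 is a signer's own right. Speaker
# and addressee face each other, so converting a left-right position
# between their egocentric frames is a sign flip.

def to_addressee_frame(x_speaker):
    """Map a left/right position in the speaker's frame to the addressee's."""
    return -x_speaker

# English, speaker-anchored: "pick the box on my left" names a position in
# the speaker's frame; the addressee must flip it before reaching.
box = -1.0                          # on the speaker's left
print(to_addressee_frame(box))      # 1.0 -> on the addressee's right

# ASL shared space: the classifier sign is placed where the box lies in the
# jointly observed scene, so signing space maps onto physical space as the
# identity -- no transformation, wherever the addressee happens to stand.
box_in_signing_space = box          # same coordinate; nothing to convert
print(box_in_signing_space)         # -1.0 -> left side of signing space
```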
For the situations discussed thus far, the speaker produced monologue de-
scriptions of environments (e.g. describing room layouts, as illustrated in
Figure 15.1) or the speaker described a jointly viewed environment (as illus-
trated in Figure 15.2). In the study reported in Section 15.4, I explore the
2 If an English speaker describes a situation in which the addressee is placed within the room (e.g.
You are at the back of the room facing the door, and when I walked in I noticed a table on my
left), then the addressee would indeed need to perform some type of mental transformation to
understand the location of the table with respect to his or her viewpoint (i.e. the table is on the
addressee’s right). However, for spatial descriptions that do not involve the addressee as part
of the environment, no such mental transformation would be required. For the description and
room matching task used by Emmorey et al. (1998), it would make no difference whether the
speaker described the room from her perspective (using I ) or from the addressee’s perspective
(using you). For ASL, however, the placement of classifier signs within signing space changes
depending upon whether the room description is introduced with YOU-ENTER or I-ENTER
(see Figure 15.1).
situation in which two signers converse about a spatial scene that they are not
currently observing, focusing on how the addressee refers to and interprets loca-
tions within the speaker’s signing space. First, however, I examine the different
ways that signing space can be structured when describing a spatial environment
and how English speakers and ASL signers sometimes differ in their choice of
spatial perspective.
Route perspective: As you open the door, you are in a small five-by-five room which
is a small closet. When you get past there, you’re in what we call the foyer . . . If you
keep walking in that same direction, you’re confronted by two rooms in front of you . . .
a large living room which is about twelve by twenty on the left side. And on the right
side, straight ahead of you again, is a dining room which is not too big . . . (p. 929).
Survey perspective: The main entrance opens into a medium-sized foyer. Leading off
the foyer is an archway to the living room which is the furthermost west room in the
apartment. It’s connected to a large dining room through double sliding doors. The dining
room also connects with the foyer and main hall through two small arches. The rest of
the rooms in the apartment all lead off this main hall which runs in an east–west direction
(p. 927).
Emmorey and Falgier (1999) found that ASL signers also adopt either a route
or survey perspective when describing environments, and that signers structure
signing space differentially, depending upon perspective choice. We found that
if signers adopted a survey perspective when describing an environment, they
most often used a diagrammatic spatial format, but when a route perspective was
adopted, they most often used a viewer spatial format. The term spatial format
Diagrammatic space: signing space represents a map-like model of the environment; space can have either a 2-D "map" or a 3-D "model" format; the vantage point does not change (generally a bird's eye view); relatively low horizontal signing space or a vertical plane.

Viewer space: signing space reflects an individual's view of the environment at a particular point in time and space; signing space is 3-D; the vantage point can change; relatively high horizontal signing space.
as if he or she were actually moving through it. Under Liddell’s analysis, the
surrogate within this type of description coincides with the signer’s body (i.e. it
occupies the same physical space as the signer’s body). The term viewer space,
rather than surrogate space, may be preferred for the type of spatial descriptions
discussed here because it is the environment, rather than a surrogate, which is
conceptualized as present.
Spatial formats can be determined independently of the type of perspective
chosen to describe an environment. For example, route perspectives are charac-
terized by movement through an environment, but motion verbs can be produced
within a viewer spatial format (e.g. DRIVE-TO) or within a diagrammatic spa-
tial format (e.g. a classifier construction meaning ‘vehicle moves straight and
turns’). Survey perspectives are characterized by the use of cardinal direction
terms, but these terms can also be produced within either a viewer spatial for-
mat (e.g. the sign EAST produced outward from the signer at eye level to
indicate ‘you go straight east’) or within a diagrammatic spatial format (e.g.
the sign NORTH produced along a path that coincides with a road traced in
signing space, indicating that the road runs to the north). Although perspective
choice can be determined independently of spatial format, diagrammatic space
is clearly preferred for survey descriptions, and viewer space is clearly preferred
for route descriptions.
Emmorey and Falgier (1999) found that ASL signers did not make the same
perspective choices as English speakers when describing spatial environments
learned from a map. ASL signers were much more likely to adopt a survey
perspective compared to English speakers. We hypothesized that ASL signers
may have been more affected by the way the spatial information was acquired,
i.e. via a map rather than via navigation. A mental representation of a map
is more easily expressed using diagrammatic space, and this spatial format is
more compatible with a survey perspective of the environment. English speakers
were not subject to such linguistic influences and preferred to adopt a route
perspective when describing environments with a single path and landmarks of
similar size (specifically, the layout of a convention center).
Thus, language modality appears to influence the choice of spatial perspec-
tive. For signers, diagrammatic signing space can be used effectively to repre-
sent a map, thus biasing signers toward adopting a survey perspective where
English speakers would prefer a route perspective. However, signers do not al-
ways adopt a survey perspective for spatial descriptions. Pilot data suggest that
signers often produce route descriptions for environments they have learned
by navigation. Language modality may have its strongest impact on the nature
of spatial perspective choice when knowledge of that environment is acquired
from a map.
We now turn from studies of narrative spatial descriptions to a study that
explores the nature of spatial conversations in ASL.
Figure 15.3 Map of the town (from Tversky and Taylor 1992); the map shows the White Mountains, Mountain Road, the White River, a store, a gas station, River Highway, and a compass marking the cardinal directions
Eleven pairs of fluent ASL signers participated in the study. For three of
the pairs, the signer who described the town had previously described it to
another participant in a separate session. Since we were primarily concerned
with how the addressee structured signing space, it was not critical that the speaker
be naive to the task of explaining the layout of the town. The addressee could
ask questions throughout the speaker’s description, and the participants faced
each other during the task. Subjects were tested either at Gallaudet University
in Washington, DC or at The Salk Institute in San Diego, CA. Sixteen subjects
had Deaf parents, two subjects had hearing parents and were exposed to ASL
prior to the age of three, and one subject learned ASL in junior high school (her
earlier exposure was to SEE).
As noted, the previous research by Emmorey and Falgier (1999) indicates that
ASL signers tend to produce descriptions of the town from a survey perspective,
rather than from a route perspective, and this was true for the speakers in this
study: nine speakers produced a survey description, two speakers produced a
mixed description (part of the description was from a survey perspective and part
was from a route perspective), and no speaker produced a pure route description
of the town. Given that the addressee’s task was to draw a map, the speaker’s
use of diagrammatic space was sensible because this spatial format allows a
large number of landmarks to be schematically mapped onto signing space,
and diagrammatic space can easily be transformed into a map representation.
In what follows, I focus on how the addressee referred to the speaker’s signing
space when asking a question or when commenting upon the description. All
addressees used a diagrammatic spatial format when re-describing the town.
The results revealed that all but one of the addressees performed a mental
reversal of observed signing space when re-describing the town or asking a ques-
tion. For example, if the speaker indicated that Maple Street looped to the left
(observed as motion to the right for the addressee), the addressee would trace the
Maple Street loop to his or her left in signing space. It was rare for an addressee
to mirror the speaker’s space, e.g. by tracing Maple Street to the right (see be-
low). Addressees also used a type of shared space by pointing toward a location
within the speaker’s space to ask a question or comment about the landmark
associated with that location. Figure 15.4 illustrates these different possibilities.
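The options illustrated in Figure 15.4 can likewise be stated as simple coordinate transformations. The sketch below is again my own illustration rather than the chapter's notation; it reuses the egocentric left-right axis, with x > 0 as a signer's own right:

```python
# The addressee's options for re-describing the town (Figure 15.4).

def observe(x_speaker):
    """Where the addressee sees a location the speaker has established:
    the facing signers' frames differ by a sign flip."""
    return -x_speaker

def reverse(x_speaker):
    """Reversed space (Figure 15.4a): undo the flip and re-sign at the
    speaker-centric position -- in effect a 180-degree rotation of the
    observed layout."""
    return -observe(x_speaker)      # equal to x_speaker itself

def mirror(x_speaker):
    """Mirrored space (Figure 15.4b): re-sign exactly where the motion
    was observed, preserving the visual image (the rare option)."""
    return observe(x_speaker)

maple_street = -1.0                 # speaker traces the loop to her own left
print(reverse(maple_street))        # -1.0 -> addressee traces it to his left
print(mirror(maple_street))         #  1.0 -> addressee traces it to his right
# Shared space (Figure 15.4c) involves no transformation at all: the
# addressee simply points into the speaker's space at the original location.
```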
As discussed earlier, when the addressee reverses the speaker’s signing
space, he or she must perform what amounts to a 180° mental rotation (see
Figure 15.4a). However, the nature of this mental transformation is not com-
pletely clear. The intuitions of native signers suggest that they probably do not
mentally rotate locations within the speaker’s signing space. Rather, addressees
report that they “instantly” (intuitively) know how to interpret locations in the
speaker’s signing space. They do not experience a sensation of rotating a men-
tal image of the scene or landmarks within the scene. How, then, do addressees
transform observed signing space into a reversed mental representation of that space?
Figure 15.4 Illustration of: 15.4a reversed space; 15.4b mirrored space; and
15.4c two examples of the use of shared space for non-present referents. The
half circles represent signing space, the solid arrow represents the direction of
the Maple Street loop, and the “X” represents the location of the Town Hall
with respect to Maple Street. The dotted arrow in example (i) of 15.4c indicates
the direction of a pointing sign used by the addressee to refer to the Town Hall.
In nonspatial conversations, by contrast, the addressee need not reverse the speaker's space. If, for example, the referent John has been associated with a location on the speaker's right, then the addressee
may direct a pronoun toward this location, which is on his or her left, to refer
to John (compare Figure 3.4 in Neidle, Kegl, MacLaughlin, Bahan, and Lee
2000). However, when the speaker’s signing space is structured topographically
to represent the location of several landmarks, the addressee generally reverses
the speaker’s space, as shown in Figure 15.4a. Such reversals do not occur
for nonspatial conversations because the topography of signing space is not
generally complex and does not convey a spatial viewpoint.
Example (ii) in Figure 15.4c illustrates an example of shared space in which
the signing spaces for the addressee and speaker overlap. For example, in one
situation, two signers sat across from each other at a table, and the speaker
described the layout of the town by signing on the tabletop, e.g. tracing the
location of streets on the table. The addressee then used classifier constructions
and pronouns articulated on the table to refer to locations and landmarks in the
town with the same spatial locations on the table: thus, the signing spaces of the
speaker and addressee physically overlapped. In another similar example, two
signers (who were best friends) swiveled their chairs during the conversation
so that they were seated side-by-side. The addressee then moved her hands into
the speaker’s space in order to refer to landmarks and locations, even while her
partner was still signing! This last example was rare, with both signers finding
such “very shared space” amusing. Physically overlapping shared space may be
possible only when there is an object, such as a table, to ground signing space
in the real world or when signers know each other very well. For one pair of
signers, the speaker clearly attempted to use overlapping shared space with his
addressee, but she adamantly maintained a separate signing space.
Özyürek (2000) uses the term “shared space” to refer to the gesture space
shared between spoken language users. However, Özyürek focuses on how
the speaker changes his or her gestures depending upon the location of the ad-
dressee. When narrators described “in” or “out” spatial relations (e.g. ‘Sylvester
flies out of the window’), their gestures moved along a front–back axis when
the addressee was facing the speaker, but speakers moved their gestures later-
ally when the addressee was to the side. Özyürek argues that speakers prefer
gestures along these particular axes so that they can move their gestures into or
out of a space shared with the addressee. In contrast, shared space as defined
here for ASL is not affected by the spatial position of the addressee, and signers
do not alter the directionality of signs depending upon where their addressee is
located. For example, OUT is signed with motion along the horizontal axis (out-
ward from the signer), regardless of the location of the addressee. The direction
of motion can be altered to refer explicitly to the addressee (e.g. to express ‘the
two of us are going out’) or to refer to a specific location within signing space
(e.g. to indicate the location of an exit). The direction of motion for OUT (or for
other directional signs) is not affected by the location of the addressee, unless
the signs specifically refer to the addressee. The use of shared space in ASL
occurs when the speaker’s signing space maps to his or her view of the spatial
layout of physically present objects (as in Figure 15.2) or to a mental image of
the locations of nonpresent objects (as in Figure 15.4c). The addressee shares
the speaker’s signing space either because it maps to the addressee’s view of
present objects or because the addressee uses the same locations within the
signer’s space to refer to nonpresent objects.
Furuyama (2000) presents a study in which hearing subjects produced col-
laborative gestures within a shared gesture space, which appears to be similar
to shared space in ASL. In this study, a speaker (the instructor) explained how
to create an origami figure to a listener (the learner), but the instructor had to
describe the paper-folding steps without using a piece of origami paper for
demonstration. Similar to the use of shared space in signed conversations, Fu-
ruyama found that learners pointed toward the gestures of their instructor or
toward “an ‘object’ seemingly set up in the air by the instructor with a gesture”
(Furuyama 2000:105). Furthermore, the gesture spaces of the two participants
could physically overlap. For example, learners sometimes referred to the in-
structor’s gesture by producing a gesture near (or even touching) the instructor’s
hand. In addition, the surface of a table could ground the gesture space, such
that instructors and learners produced gestures within the same physical space
on the table top. These examples all parallel the use of shared space in ASL
depicted in Figure 15.4c.
Finally, the use of shared space is independent of the spatial format used to
describe an environment. All of the examples in this study involved diagram-
matic space, but it is also possible for viewer space to be shared. For example,
suppose a speaker uses viewer space to describe where she wants to place a
new sofa in her living room (i.e. the spatial description is as if she were in the
room). Her addressee may refer to the sofa by pointing to its associated location
in the speaker’s space, for example, signing the equivalent of, ‘No, move it over
toward that side of the room.’
descriptions. Second, results from Emmorey and Falgier (1999) suggest that
language modality may influence choice of spatial perspective. The ease with
which diagrammatic space can express information learned from a map may
explain the preference of ASL signers for spatial descriptions with a survey
perspective. In contrast, nothing about the linguistic structure of English leads
to a preference for a route or a survey perspective when the environment has
been learned from a map. Rather, English speakers were more influenced by the
nature of the environment, preferring a route description for the environment
that contained a single path and landmarks of similar size (Taylor and Tversky
1992; Emmorey, Tversky, and Taylor 2000).
Finally, unlike English speakers, ASL signers can use shared space, rather
than adopt their addressee’s point of view. English speakers generally need to
take into account where their addressee is located because spatial descriptions
are most often given from the addressee’s point of view (‘It’s on your left’
or ‘It’s in front of you’). In ASL, there may be no true speaker or addressee
viewpoint, particularly when the signers are discussing a jointly viewed scene
(as illustrated in Figure 15.2). Furthermore, addressees can refer directly to
locations in the speaker’s space (as illustrated in Figure 15.4c). When shared
space is used, speakers and addressees can refer to the same locations in signing
space, regardless of the position of the addressee. Thus, the interface between
language and visual perception (how we talk about what we see) has an added
dimension for signers (they also see what they talk about). That is, signers see
(rather than hear) spatial descriptions, and there is a schematic isomorphism
between aspects of the linguistic signal (the location of the hands in signing
space) and aspects of the spatial scene described (the location of objects in the
described space). Signers must integrate a visually observed linguistic signal
with a visually observed environment or a visual image of the described envi-
ronment. The studies discussed here represent an attempt to understand how
signers accomplish this task.
Acknowledgments
This work was supported in part by grants from the National Institutes of
Health (NICHD R01-13249) and the National Science Foundation (SBR-
9809002). I thank Robin Thompson and Melissa Herzig for help in data analysis,
and I thank Richard Meier, Elisabeth Engberg-Pedersen, and an anonymous
reviewer for helpful comments on this chapter.
15.6 References
Emmorey, Karen. 1998. Some consequences of using signing space to represent phys-
ical space. Keynote address at the Theoretical Issues in Sign Language Research
meeting, November, Washington, DC.
Emmorey, Karen. 2002. Language, cognition, and the brain: Insights from sign language
research. Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, Karen. In press. Perspectives on classifier constructions in sign languages.
Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, Karen, and Brenda Falgier. 1999. Talking about space with space: Describing
environments in ASL. In Storytelling and conversations: Discourse in Deaf com-
munities, ed. Elizabeth A. Winston, 3–26. Washington, DC: Gallaudet University
Press.
Emmorey, Karen, and Melissa Herzig. In press. Categorical versus analogue properties
of classifier constructions in ASL. In Perspectives on classifier constructions in
sign languages, ed. Karen Emmorey. Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, Karen, Edward S. Klima, and Gregory Hickok. 1998. Mental rotation within
linguistic and nonlinguistic domains in users of American Sign Language. Cogni-
tion 68:221–246.
Emmorey, Karen, and Barbara Tversky. In press. Spatial perspective in ASL. Sign Lan-
guage and Linguistics.
Emmorey, Karen, Barbara Tversky, and Holly A. Taylor. 2000. Using space to describe
space: Perspective in speech, sign, and gesture. Spatial Cognition and Computation
2:157–180.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language: The semantics
and morphosyntax of the use of space in a visual language. International studies on
sign language research and communication of the deaf, Vol. 19. Hamburg: Signum-
Verlag.
Furuyama, Nobuhiro. 2000. Gestural interaction between the instructor and the learner
in origami instruction. In Language and gesture, ed. David McNeill, 99–117.
Cambridge: Cambridge University Press.
Liddell, Scott. 1994. Tokens and surrogates. In Perspectives on sign language struc-
ture: Papers from the 5th International Symposium on Sign Language Research,
Vol. 1, ed. Inger Ahlgren, Brita Bergman, and Mary Brennan, 105–119. Durham:
The International Sign Language Association, University of Durham.
Liddell, Scott. 1995. Real, surrogate, and token space: Grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–41.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Linde, Charlotte, and William Labov. 1975. Spatial networks as a site for the study of
language and thought. Language 51:924–939.
Mainwaring, Scott, Barbara Tversky, and Diane J. Schiano. 1996. Perspective choice
in spatial descriptions. IRC Technical Report, 1996–06. Palo Alto, CA: Interval
Research Corporation.
Masataka, Nobuo. 1995. Absence of mirror-reversal tendency in cutaneous pattern per-
ception and acquisition of a signed language in deaf children. Journal of Develop-
mental Psychology 13:97–106.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.
Özyürek, Asli. 2000. The influence of addressee location on speaker’s spatial language
and representational gestures of direction. In Language and gesture, ed. David
McNeill, 64–83. Cambridge: Cambridge University Press.
16.1 Introduction
This chapter reports the findings of an experiment on the learning of British
Sign Language (BSL) by Christopher, a linguistic savant, and a control group
of talented second language learners. The results from tests of comprehension
and production of morphology and syntax, together with observations of his
conversational abilities and judgments of grammaticality, indicate that despite
his dyspraxia and visuo-spatial impairments, Christopher approaches the task
of learning BSL in a way largely comparable to that in which he has learned
spoken languages. However, his learning of BSL is not uniformly successful.
Although Christopher approaches BSL as linguistic input, rather than purely
visuo-spatial information, he fails to learn completely those parts of BSL for
which an intact nonlinguistic visuo-spatial domain is required (e.g. the BSL
classifier system). The unevenness of his learning supports the view that only
some parts of language are modality-free.
Accordingly, this case illuminates crossmodality issues, in particular, the
relationship of sign language structures and visuo-spatial skills. By exploring
features of Christopher’s signing and comparing it to normal sign learners,
new insights can be gained into linguistic structures on the one hand and the
cognitive prerequisites for the processing of signed language on the other.
In earlier work (see Smith and Tsimpli 1995 and references therein; also
Tsimpli and Smith 1995; 1998; Smith 1996; Smith and Tsimpli 1996; 1997;
Morgan, Smith, Tsimpli, and Woll 2002), we have documented the unique
language learning abilities of a polyglot savant, Christopher (date of birth:
January 6, 1962). Christopher exhibits a striking dissociation between his lin-
guistic and nonlinguistic abilities. Despite living in sheltered accommodation
because his limited cognitive abilities make him unable to look after himself,
Christopher can read, write, translate, and speak (with varying degrees of flu-
ency) some 20 to 25 languages. This linguistic ability is in sharp contrast with
his general intellectual and physical impairments. Due to a limb apraxia (a mo-
tor disorder which makes the articulation of planned movements of the arms and
hands difficult or impossible), he has difficulty with everyday activities such as
past work looking at Christopher, we expected that his linguistic talent would
outweigh the disadvantages of the medium, and that his ability in BSL would
mirror his mixed abilities in spoken languages: that is, he would make extremely
rapid initial progress, his mastery of the morphology and vocabulary would be
excellent, and that he would have significant difficulty with those syntactic
properties that differentiate BSL from spoken English.
As well as teaching BSL to Christopher, we also taught BSL to a comparator
group of 40 talented second language learners, volunteer undergraduate students
at UCL and City University, London. Their ages ranged between 18 and 30 years
and there were 30 females and 10 males. They were assessed as having a level
of fluency in a second language (learnt after 11 years of age) sufficient to begin
a first-year degree course at university in one of French, Spanish, or German.
The group was taught the same BSL curriculum as Christopher using the same
teaching methods. We do not discuss this comparison in depth here (for more
details, see Morgan, Smith, Tsimpli and Woll 2002) but occasionally refer to
test scores as a guide to the degree to which Christopher can be regarded as a
normal sign learner.
but common skills across these tests involve the ability to visualize how ab-
stract spatial patterns change from different perspectives, to co-ordinate spatial
locations in topographic maps, and to hold these abstract spatial patterns in
nonverbal short-term memory.
Unlike, for instance, individuals with Williams syndrome, Christopher is ex-
tremely poor at face recognition, as shown by the results in Table 16.2. On
the Benton test (Benton et al. 1983), a normal score would be between 41 and
54, and anything below 37 is “severely impaired.” On the Warrington (1984)
face/word recognition test, he scored at the 75th percentile on words, with 48 out
of 50 correct responses, but on faces his performance was too poor to be eval-
uated in comparison with any of the established norms.
The preference for the “verbal” manifest in these data is reinforced by two
other sets of results. First, in a multilingual version of the Peabody Picture Vo-
cabulary Test, administered at age 28 (O’Connor and Hermelin 1991), Christo-
pher scored as shown in (1).
(1) English 121; German 114; French 110; Spanish 89
Second, in a variant of the Gollin figures test (Smith and Tsimpli 1995:8–
12) he was strikingly better at identifying words than objects. In this test, the
subject is presented with approximations to different kinds of representation:
either words or objects. The stimuli were presented in the form of a computer
print-out over about 20 stages. At the first stage there was minimal information
(approximately 6 percent), rendering the stimulus essentially unrecognizable.
Succeeding stimuli increased the amount of information monotonically until,
at the final stage, the representation was complete. The test was administered
to Christopher and 15 controls. Christopher was by far the worst on object
recognition, but second best on word recognition (for details, see Smith and
Tsimpli 1995:Appendix 1).
While no formal diagnosis has been made clinically, it is reasonably clear that
Christopher is on the autistic continuum: he fails some, but not all, false-belief
tasks, and he has some of the characteristic social manifestations of autism.
He typically avoids eye contact, fails to initiate conversational exchanges, and
BSL development in an exceptional learner 427
16.4 Apraxia
On the basis of two initial apraxia batteries (Kimura 1982) and an adapta-
tion of the Boston Diagnostic Aphasia Examination (Goodglass and Kaplan
1983) it appears that Christopher has a severe apraxia involving the production
of planned movements of the limbs when copying nonrepresentational move-
ments. He scored 29 percent correct on the Kimura 3-movement copying test,
where anything below 70 percent is considered apraxic.
This limb apraxia contrasts with his normal performance in the comprehen-
sion and production of meaningful gestures. A version of the BDAE designed
for signing subjects (described in Poizner et al. 1987) was carried out during the
second period of Christopher’s exposure to BSL (after four formal classes), and
he correctly produced 12 of 13 test items: that is, he is within normal limits for
controls (as reported in Poizner et al. 1987:168). When requested to demon-
strate a sneeze, or how to wave ‘goodbye,’ or how to cut meat, Christopher
responded without difficulty, although some of his responses were somewhat
strange. For example, he indicated ‘attracting a dog’ by beckoning with his
finger; for ‘starting a car’ and ‘cleaning out a dish’ he used the BSL signs for
CAR and COOK, instead of imitating the turning of an ignition key or the
wiping of an imaginary vessel with an imaginary cloth. Christopher produced
more conventional gestures for these items when told not to sign. Apart from
this interference, the only test item Christopher failed outright was ‘move your
eyes up.’ As well as producing simple gestures he has normal comprehension
of these gestures when produced by another person.
16.5.1 Input
A deaf native signing BSL tutor taught Christopher a conventional BSL class
once a month, concentrating on the core grammatical properties of the lan-
guage: the lexicon, negation, verb agreement, questions, topicalization, as well
as aspectual morphology, classifier constructions, nonmanual modifiers, and
spatial location setting. Over eight months there were about 12 hours of formal
teaching. This formal teaching was supplemented by conversation with a deaf
native signer, who went over the same material in a less pedagogic context
between classes. The total amount of BSL contact was therefore 24 hours. All
classes and conversation classes were filmed on video tape.
The 24 hours of BSL exposure were divided for the purposes of analysis
into five periods: four of 5 hours each and a fifth of 4 hours. Each period was
approximately 6–7 weeks in duration. After each subject area of BSL had been
taught we assessed Christopher’s progress before increasing the complexity of
the material he was exposed to.
Christopher’s uptake of BSL was assessed in each area, using translation
tasks from BSL to English and from English to BSL, as well as analysis of
spontaneous and elicited use of sign. In addition, we carried out a variety of
tests of Christopher’s general cognitive abilities. This battery of assessment and
observational data are used to describe the development of his communicative
behavior, on the one hand, and his acquisition of linguistic knowledge, on the
other.
2 Signed sentences that appear in the text follow standard notation conventions. Signs are repre-
sented by upper-case English glosses. Where more than one English word is needed to gloss
a sign, this is indicated through hyphens, e.g. FALL-FROM-HEIGHT 'the person fell all the
way down.' When the verb is inflected for person agreement, subject and indirect object are
marked with subscripted numbers indicating person, e.g. 3EXPLAIN2 'he explains it to you.'
Lower-case hyphenated glosses indicate a fingerspelled word e.g. g-a-r-y. Repetition of signs is
marked by ‘+,’ and ‘IX’ is a pointing sign. Subscripted letters indicate locations in sign space.
Nonmanual markers such as headshakes (hs) or brow-raised (br), and topics (t) are indicated
by a horizontal line above the manual segment(s). When specific handshapes are referred to we
use standard Stokoe notation e.g. ‘5 hand’ or ‘bent V.’
16.6.2 Morphosyntax
16.6.2.1 Negation. There are four main markers of negation in BSL:
- facial action;
- head movement;
- manual negation signs; and
- signs with negation incorporated in them (Sutton-Spence and Woll 1999).
Each marker can occur in conjunction with the others, and facial action can
vary in intensity. Christopher identified the use of headshakes early on in his
exposure to BSL, but he had extreme difficulty in producing a headshake in
combination with a sign. In Period 1 of exposure Christopher separated the two
components out and often produced a headshake at the end of the sign utterance.
In fact, as was ascertained in the apraxia tests, Christopher has major difficulty
in producing a headshake at all. A typical early example of his use of negation
is given in (2) and (3).
t br
(2) Target: NIGHT SIGN CAN YOU
‘Can you sign in the dark?’
hs
(3) Christopher: NIGHT SIGN ME
‘I sign in the dark no’
[Figure: Correct scores (percent) for Christopher and the comparator group across the test domains Negation 1, Agreement 1, Classifier 1, Negation 2, Agreement 2, and Classifier 2.]
Table 16.4 Use of negation markers across learning period: Types, tokens,
and ungrammatical use

                                          Period 1   Period 2   Period 3   Period 4   Period 5
Types of negation                             4          4          2          4          3
Total tokens                                 13         29          6         57         28
Percentage ungrammatical (occurrences)     7.7 (1)    24 (7)     50 (3)    1.7 (1)     7 (2)
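The percentage row follows from the two count rows by simple division; the snippet below (my own check, not part of the chapter) reproduces it:

```python
# Reproducing "percentage ungrammatical" in Table 16.4 from the raw counts.
tokens        = [13, 29, 6, 57, 28]   # total negation tokens, Periods 1-5
ungrammatical = [1, 7, 3, 1, 2]       # ungrammatical occurrences
for period, (n, bad) in enumerate(zip(tokens, ungrammatical), start=1):
    print(f"Period {period}: {100 * bad / n:.1f}% ({bad}/{n})")
# -> 7.7%, 24.1%, 50.0%, 1.8%, 7.1%; the table's 24, 1.7, and 7 appear to
#    be rounded or truncated versions of these figures.
```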
Figure 16.3 *'I like (her)' (the signer is left handed)
Moving a plain verb between indexed locations is ungrammatical, as in
Figure 16.3, where the signer moves the sign LIKE toward
a location previously indexed for the NP ‘a woman.’
Verb agreement morphology in BSL is fairly restricted, being used only with
transitive verbs that express an event. When Christopher first observed signers
using indexed locations he seemed to treat this as deictic reference. He looked
in the direction of the point for something that the point had referred to. He did
not use indexing or spatial locations himself; whenever possible, he used a real
world location. In Period 1 he used uninflected verb forms when copying
sentences.
and object), or by articulating the verb inflection in the wrong direction. These
are typical developmental errors in young children exposed to signed language
from infancy (e.g. Bellugi et al. 1990).
pursed lips
(11) Target: BOY CL-bent-V-person-JUMP-OVER-CL-B-WALL
‘the boy just managed to clear the surface of the high wall’
Christopher signed only the general movement of the sentence by crossing his
hands in space, and he did not sign the 'effortful' manner of the jump through
facial action.
16.7 Discussion
By the final period of exposure to BSL, Christopher’s signing has greatly im-
proved, and it is at a level where he can conduct a simple conversation. In
this respect he has supported our prediction that he would find the language
accessible and satisfying in linguistic terms. From the beginning of BSL expo-
sure he has shown interest and a motivation to learn, despite the physical and
psychological hurdles he had to overcome. Christopher has learnt to use sin-
gle signs and short sentences as well as normal learners do. This supports part
of our first prediction, that he would find vocabulary learning relatively easy.
His understanding of verb morphology is comparable to that of the comparator
group, but in production the complexity of manipulating locations in sign space
is still beyond him. After eight months’ exposure he continues to use real world
objects and locations (including himself and his conversation partner) to map
out sign locations. Thus, verb morphology in BSL is markedly less well devel-
oped than in his other second languages (for example, in his learning of Berber)
to which he had comparable exposure. These findings do not support our pre-
diction that he would learn BSL verb morphology quickly and easily, at least
insofar as his sign production is concerned.
In his spontaneous signing, utterance length is limited, yet he does not use
English syntax. He understands negation as well as other normal learners, al-
though in production we have seen an impact of his apraxia on the correct
co-ordination of manual and nonmanual markers. In general, in his production
there is an absence of facial grammar. We have not observed the same extent
of influence of English syntax on his BSL as we originally predicted. How-
ever, there is one domain where a difference in syntactic structure between
English and BSL may have influenced his learning. Christopher’s compre-
hension and production of classifier constructions was very limited. Although
the comparator group performed less well in classifier comprehension than in
the other linguistic tests, Christopher’s scores were significantly worse than the
comparator group only in this domain.
In general, the greatest influence of English in his other spoken languages is shown
when he is speaking or reading quickly.
In one domain of Christopher’s learning, there may have been a direct in-
fluence of modality. Christopher avoided the use of classifier constructions
and performed very poorly in tests of their comprehension. This may either
be attributable to the complexity of the use of space in the formation of BSL
classifiers (a modality effect), or to the inherent linguistic complexity of clas-
sifiers (a nonmodality effect). On this latter view, Christopher’s difficulty with
classifiers is simply that they encode semantic contrasts (like shape) that none
of his other languages do.
Support for the former view – that there is a modality effect – comes from his
poor performance in using sign space to map out verb agreement morphology.
Although the use of spatial locations for linguistic encoding was comprehended
to the same general degree as in the comparator group, in his sign production
the use of sign space was absent. Thus, if it is the use of sign space that is the
problem, and classifiers rely on a particularly high level of sign-space processing,
then his visuo-spatial deficits should impinge most on this set of structures.
The analysis of Christopher’s learning of BSL reveals a dissociation be-
tween spatial mapping abilities and the use of grammatical devices that do
not exploit spatial relations. We have attempted to relate this dissociation to
the asymmetry Christopher demonstrates between his verbal and nonverbal
IQ. The general abilities needed to map spatial locations in memory, recog-
nize abstract patterns of spatial contrasts and visualize spatial relations from
different perspectives are called upon in the use of classifiers in sign space.
Christopher’s unequal achievements in different parts of his BSL learning can
then be attributed to his apraxia and visuo-spatial problems. It is clear that
certain cognitive prerequisites outside the linguistic domain are required for
spatialized aspects of BSL but there are no comparable demands in spoken
languages.
The fact that the aspects of signed language that are absent in Christopher’s
signing are those that depend on spatial relations (e.g. the classifier system)
suggests that the deficit is actually generalized from outside the language faculty.
In this case it might be said that underlying grammatical abilities are preserved,
but they are obscured by impairments in cognitive functions needed to encode
and decode a visuo-spatial language.
In conclusion, the dissociations between Christopher's abilities in different
parts of the grammar provide the opportunity to explore which areas of lan-
guage are modality-free and which areas are modality-dependent, and the
extent to which signed languages differ from spoken languages in their require-
ments for access to intact, nonlinguistic processing capabilities. Differences in
Christopher's abilities and the unevenness of his learning support the view that
only some parts of language are modality-free.
Acknowledgments
Aspects of this research have been presented at the conferences for Theo-
retical Issues in Sign Language Research, Washington, DC, November 1998,
Amsterdam, July 2000, and the Linguistics Association of Great Britain meeting
at UCL, London, April 2000. We are indebted to the audiences at these venues
for their contribution. We are grateful to Frances Elton and Ann Sturdy for their
invaluable help with this project. We would also like to express our thanks to
the Leverhulme Trust who, under grant F.134AS, have supported our work on
Christopher for a number of years, and to John Carlile for helping to make it
possible. Our deepest debt is to Christopher himself and his family, who have
been unstinting in their support and co-operation.
16.8 References
Bellugi, Ursula, Diane Lillo-Martin, Lucinda O’Grady, and Karen van Hoek. 1990. The
development of spatialized syntactic mechanisms in American Sign Language. In
The Fourth International Symposium on Sign Language Research, ed. William H.
Edmonson and Fred Karlsson, 16–25. Hamburg: Signum-Verlag.
Benton, Arthur L., Kerry Hamsher, Nils R. Varney, and Otfried Spreen. 1983. Contri-
butions to neuropsychological assessment. Oxford: Oxford University Press.
Fodor, Jerry. 1983. The modularity of mind. Cambridge, MA: MIT Press.
Gangel-Vasquez, J. 1997. Literacy in Nicaraguan Sign Language: Assessing word recog-
nition skills at the Escuelita de Bluefields. Manuscript, University of California,
San Diego, CA.
Goodglass, Harold, and Edith Kaplan. 1983. The assessment of aphasia and related
disorders. Philadelphia, PA: Lea and Febiger.
Kimura, Doreen. 1982. Left-hemisphere control of oral and brachial movements and
their relation to communication. Philosophical Transactions of the Royal Society
of London ser. B, 298:135–149.
Kimura, Doreen. 1993. Neuromotor mechanisms in human communication. New York:
Oxford University Press.
Morgan, Gary, Neil V. Smith, Ianthi-Maria Tsimpli, and Bencie Woll. 2002. Language
against the odds: The learning of British Sign Language by a polyglot savant.
Journal of Linguistics 39:1–41.
Morgan, Gary, Neil V. Smith, Ianthi-Maria Tsimpli, and Bencie Woll. In preparation a.
Autism in signed language learning. Manuscript, University College London.
Morgan, Gary, Neil V. Smith, Ianthi-Maria Tsimpli, and Bencie Woll. In prepara-
tion b. Learning to talk about space with space. Manuscript, University College
London.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.
O'Connor, Neil, and Beate Hermelin. 1991. A specific linguistic ability. American Journal
of Mental Retardation 95:673–680.
Petitto, Laura A. 1987. On the autonomy of language and gesture: Evidence from the
acquisition of personal pronouns in American Sign Language. Cognition 27:1–52.
Pizzuto, Elena, and Virginia Volterra. 2000. Iconicity and transparency in sign lan-
guages: A cross-linguistic cross-cultural view. In The signs of language revisited:
An anthology to honor Ursula Bellugi and Edward Klima, ed. Karen Emmorey and
Harlan Lane, 261–286. Mahwah, NJ: Lawrence Erlbaum Associates.
Poizner, Howard, Edward S. Klima, and Ursula Bellugi. 1987. What the hands reveal
about the brain. Cambridge, MA: MIT Press.
Smith, Neil V. 1996. A polyglot perspective on dissociation. Behavioural and Brain
Sciences 19:648.
Smith, Neil V., and Ianthi-Maria Tsimpli. 1995. The mind of a savant: Language learning
and modularity. Oxford: Blackwell.
Smith, Neil V., and Ianthi-Maria Tsimpli. 1996. Putting a banana in your ear. Glot
International 2:28.
Smith, Neil V., and Ianthi-Maria Tsimpli. 1997. Reply to Bates. International Journal
of Bilingualism 2:180–186.
Sutton-Spence, Rachel L., and Bencie Woll. 1999. The linguistics of BSL: An introduc-
tion. Cambridge: Cambridge University Press.
Tsimpli, Ianthi-Maria, and Neil V. Smith. 1991. Second language learning: Evidence from
a polyglot savant. In UCL Working Papers in Linguistics 3:171–183. Department
of Phonetics and Linguistics, University College London.
Tsimpli, Ianthi-Maria, and Neil V. Smith. 1995. Minds, maps and modules: Evidence
from a polyglot savant. In Working Papers in English and Applied Linguistics, 2,
1–25. Research Centre for English and Applied Linguistics, University of
Cambridge.
Tsimpli, Ianthi-Maria, and Neil V. Smith. 1998. Modules and quasi-modules: Language
and theory of mind in a polyglot savant. Learning and Individual Differences
10:193–215.
Warrington, Elizabeth K. 1984. Recognition memory test. Windsor: NFER-Nelson.
17 Deictic points in the visual–gestural and
tactile–gestural modalities
David Quinto-Pozos
17.1 Introduction
A Deaf-Blind person has only one channel through which conventional lan-
guage can be communicated, and that channel is touch.1 Thus, if a Deaf-Blind
person uses signed language for communication, he must place his hands on top
of the signer’s hands and follow that signer’s hands as they form various hand-
shapes and move through the signing space.2 A sign language such as American
Sign Language (ASL) that is generally perceived through vision must, in this
case, be perceived through touch.
Given that contact between the signer’s hands and the receiver’s hands is nec-
essary for the Deaf-Blind person to perceive a signed language, we may wonder
about the absence of the nonmanual signals of visual–gestural language (e.g.
eyebrow shifts, head orientation, eye gaze). These elements play a significant
role in the grammar of signed languages, often allowing for the occurrence
of various word orders and syntactic structures. One of the central questions
motivating this study was how the absence of such nonmanual elements might
influence the form that tactile-gestural language takes.
Thus, this study began as an effort to describe the signed language production
of Deaf-Blind individuals with a focus on areas where nonmanual signals would
normally be used in visual–gestural language. However, after reviewing the
narrative data from this study, it quickly became evident that the Deaf-Blind
subjects did not utilize nonmanual signals in their signed language production.
In addition, they differed from the sighted Deaf subjects in another key area: in
1 Throughout this work, the term “Deaf-Blind” is used to refer to people who do not have the hear-
ing necessary to perceive spoken language nor the sight necessary to perceive signed language
through the visual channel.
2 As mentioned, some Deaf-Blind individuals perceive tactile signed language with both hands,
but some individuals use only one hand (usually the hand used as the dominant hand in the
production of signed language), especially when conversing with interlocutors whom they know
well. Additionally, there are times when only one hand can be used for reception because of
events that are occurring in the immediate environment (i.e. the placement of individuals around
the signer and receiver(s), movement from one location to another by walking, etc.).
It may be the case that non-Deaf-Blind users of visual signed language would also fail to
reach 100 percent accuracy if they were asked to perform a similar identification task. In other
words, sighted Deaf individuals would likely fall short of 100 percent accuracy in identifying
isolated signs and signs in sentences in visual signed language. However,
sighted Deaf individuals did not participate in the Reed et al. (1995) study in order to compare
such figures.
4 In this portion of the Reed et al. (1995) study, 10 subjects – rather than nine as in part one –
were presented with the sentence stimuli; five subjects were given ASL sentences to repeat and
five were given PSE (Pidgin Sign English) sentences to repeat. The term PSE was introduced
by Woodward (1973) to refer to language use among the deaf that displays grammatical elements
of other pidgins, with elements of both ASL and English. Later work (Cokely 1983) made the
claim that PSE is really not a pidgin, but rather, among other things, a type of foreigner talk
with influence from judgments of proficiency. These issues are beyond the scope of this chapter.
Regarding the Reed et al. (1995) study, there were some group differences between the two
groups regarding the degree of reception accuracy, but both groups made the most errors in the
parameter of hand shape when presented with sentence stimuli.
5 Subjects in the Reed et al. study who were presented with PSE sentences fared better in the
reception of signs in sentences than those subjects who were presented with ASL sentences.
This may be due to the subjects in the study and their preferences rather than the “forms” of the
message. However, it is worth remarking that several of the PSE subjects in the study learned
language tactually (in one form or another) because they were born deaf and blind, or they
became so at a very early age. In fact, Reed et al. (1995:487) mentioned that “the subjects in
the PSE group may be regarded as closer to native signers of tactual sign language than the
subjects in the ASL group.” On the other hand, most of the “ASL subjects” were Deaf-Blind
individuals who had lost their sight after having acquired visual ASL. Clearly, more research
is needed on the sign language production of congenitally Deaf-Blind individuals in order to
determine if tactual language acquisition has a unique effect on the form and/or structure of the
tactile signed language used.
6 The terms “visual ASL” and “Tactile ASL” were used by Collins and Petronio (1998) to refer
to traditional ASL as it is signed in North America and ASL as it is signed by Deaf-Blind
individuals, respectively. The term “visual ASL” is used in the same manner in this chapter,
although the label “Tactile ASL” can be somewhat misleading, since tactile signed language in
the USA, like visual sign language, can have the characteristics of ASL or Signed English, as well
as variations that contain elements of both varieties. The basic claim that Collins and Petronio
make is that the Deaf-Blind subjects in their studies signed ASL with some accommodations
for the tactile modality, hence the term “Tactile ASL.” I refer to the signed language used by
the Deaf-Blind subjects in the study described in this chapter as “tactile signed language,”
and avoid terms such as Tactile ASL or Tactile Signed English (which is also a possibility if
referring to the signed language of Deaf-Blind individuals). Also, Collins and Petronio referred
to differences between Tactile ASL and visual ASL as “sociolinguistic changes ASL undergoes
as it is adapted to the tactile mode” (1998:18). It seems that these “changes” could be better
described as linguistic accommodations or simply adaptations made to ASL (or sign language in
general) when signed in a tactile modality. The term “sociolinguistic changes” implies diachronic
change for many researchers, and the direct claim of diachronic change of visual ASL to tactile
ASL has not been advanced by any known researcher.
I do not, however, address the deictic point used for first person singular –
whether used to communicate the perspective of the narrator or the perspective
of a character in a narrative – because the first person singular deictic point
differs considerably in phonetic form from the types of deictic points presented
above (for example, those points are generally directed away from the signer’s
body). As mentioned in the introduction, the goal of this work is to address the
use of (or lack of) deictic points that have the potential of being referentially
ambiguous – especially without the support of the nonmanual signal eye gaze –
in tactile signed language. Determining the referent of a first person point is
generally not problematic.8
and eye gaze). Does the signed language production of Deaf-Blind individuals
differ substantially in form from that of Deaf sighted individuals in the con-
text of recounting narratives? If so, how does such production differ, and can
the absence of visual signed language nonmanual signals be implicated in the
difference(s)?
17.3.1 Subjects
Four subjects participated in this study of tactile signed language in narrative
form. Of the four, two are Deaf and sighted, and two are Deaf and blind.
Information about the background of these subjects appears in (1).
(1) Description of subjects
- DB1: male, congenitally deaf and blind (Rubella); began to learn sign language at age four; began to learn Braille at age six; age at time of study: 25
- DB2: male, born with hearing and sight, deaf at age five; diagnosed with Usher Syndrome at age 11;9 fully blind at age 19; age at time of study: 34
- D1: female, congenitally deaf, fourth generation Deaf; age at time of study: 28
- D2: male, congenitally deaf, second generation Deaf; age at time of study: 26
The Deaf sighted subjects (D1 and D2) were chosen because they were born
deaf, are both children of Deaf parents, and also because both had previously
interacted with Deaf-Blind individuals. Thus, D1 and D2 had some experience
using signed language tactually, and they were both relatively comfortable
communicating with the Deaf-Blind subjects in this study. As a consequence of
D1 and D2 having Deaf parents, it is assumed that ASL was learned by each of
them in environments that fostered normal language development. Furthermore,
they both attended residential schools for the Deaf for their elementary and
secondary education, and they both socialize frequently with family and friends
in the Deaf community.
Different criteria were used for the selection of the Deaf-Blind subjects for the
study. DB1 and DB2 were chosen based on their vision and hearing impairments
as well as their language competence. Language competence was an important
9 While DB2 reported being diagnosed with Usher Syndrome at age 11, these ages do not corre-
spond with normal onset of blindness in the several standard types of Usher Syndrome patients.
It is a possibility that DB2 was misdiagnosed with this condition, which accounts for the ages
in question. I thank an anonymous reviewer for pointing out this peculiarity to me.
criterion for inclusion in this study because the subjects needed to be able to
read the story that was presented to them. Both of them had graduated from high
school and, at the time of this study, were enrolled in a community college. They
appear to be highly functioning individuals as evidenced by their participation
in an independent living program that encourages each of them to live in his
own apartment and to hold at least part-time employment in the community.
Both DB1 and DB2 are nonnative signers inasmuch as their parents are hearing
and did not use signed language in the household. Regarding primary and
secondary education, there were several similarities between DB1 and DB2.
DB1 was educated in schools that used Signed English for communication.
However, he reports that he started to learn ASL in high school from a hearing
teacher. From the age of five to 19, DB2 also attended schools where Signed
English was the primary mode of communication. At the age of 19, he entered
a school for the Deaf and began to learn ASL tactually because he was fully
blind by this point. DB2 spent three years at this school for the Deaf. Upon
graduating, he attended Gallaudet University, but was enrolled for only about
one year. Currently, both DB1 and DB2 read Braille and use it daily. In fact,
they read periodicals, books from the public library, and other materials written
in Braille on a regular basis. Because of this, it is assumed that their reading
skills are at least at the level needed to understand the simple narratives that
were presented to them. The narratives are discussed below.
17.3.2 Materials
Each subject was presented with a short narrative consisting of 225–275 words.
The narratives were written in English for the Deaf sighted subjects and tran-
scribed into Braille for the Deaf-Blind subjects. Each narrative contains dia-
logue between at least two characters and describes an interaction between these
characters. Several yes–no and wh-questions were included in each of these nar-
ratives. In an effort to diminish the influence that English structure would have
on the signed form of each story, the narratives were presented to all four sub-
jects 24 hours before the videotaping was conducted. Each subject was allowed
30 minutes to read his or her story as many times as he or she wished and was
instructed that the story would have to be signed from memory the following
day. Each subject only read his or her own story prior to the videotaping.
17.3.3 Procedure
For the videotaping of the narratives, each subject signed his or her story to
each of the other subjects in the study one at a time. If a Deaf-Blind subject
was the recipient of a narrative, he placed his hand(s) on top of the signer’s
hand(s). However, the sighted Deaf subjects did not place their hands on the
signer's hands when they were recipients of narratives. The 12 narratives were
videotaped in random order and followed the format in Table 17.1.

17.4 Results/discussion

This presentation of the results begins with general comments regarding the
narratives presented by the subjects and then addresses the specific differences
between the subjects. Given this format, the reader will see the ways in which
the 12 narratives were similar as well as ways in which the narratives differed,
presumably as a result of the particular narrator and recipient pairing.

Table 17.2 Length and number of signs in each of the 12 narratives
(segments 1–3: DB1; 4–6: DB2; 7–9: D1; 10–12: D2)

Segment number     1    2    3    4    5    6    7    8    9   10   11   12
Length (seconds) 240  210  205  190  180  165  180  135  120  210  138  150
Number of signs  163  167  169  216  256  237  206  246  228  176  169  201
10 Most tokens of Type II indexation were realized with the 1-handshape CL. However, when the
V-classifier was used, the signer still pointed to each finger of the V handshape individually.
There were tokens of a V handshape occurring with the deictic point involving two fingers
(glossed as THEY-TWO), but those tokens were not included in these data. While this study
focuses on the use of a single deictic point, further research must address the use of other
deictic pointing devices such as the pronoun THEY-TWO.
11 These questions were designed to elicit the use of nonmanual signals (especially furrowed
and raised eyebrows, grammatically significant signals in visual ASL) by the Deaf sighted
subjects and to determine what strategies the Deaf-Blind subjects would use to communicate
such questions.
The Deaf subjects used role shifting (e.g. torso or head re-orientation and
eye gaze shift) for specifying the message
of a different speaker or character, whereas there was no role shifting or eye
gaze shifting in the signing of the Deaf-Blind subjects. In fact, the Deaf-Blind
subjects’ torsos did not deviate much – if at all – from the default position of
facing the receiver of the narrative. As a result, the deictic points that the Deaf-
Blind subjects produced were directed straight out, essentially in the direction
of the interlocutor. As an example of the only type of indexation produced by
the Deaf-Blind subjects, DB1 used indexation when a character in his narrative
asked another character if she wanted to eat fish. The passage is as follows:
“D-O IX-forward WANT EAT FISH” (here, IX-forward can also be glossed as
YOU or second person singular). Table 17.3 presents the total use of indexation
for the functions described in (2) by each subject in the study.

Table 17.3 Use of indexation in the narratives
(segments 1–3: DB1; 4–6: DB2; 7–9: D1; 10–12: D2)

Segment number   1  2  3  4  5  6  7  8   9  10  11  12
Person (I)       0  0  0  0  0  0  0  2  13   7  12  18
Person CL (II)   0  0  0  0  0  0  6  2   2   0   0   4
Location (III)   0  0  0  0  0  0  0  0   2   0   0   3
2sg (IV)         1  1  1  7  6  6  1  0   0   1   2   3

[Figure 17.1 Total use of each type of indexation by each subject, summed
across all three instances of each narrative. Horizontal axis: subjects and
type of indexation (DB1, DB2, D1, D2); vertical axis: number of tokens (0–40).]
What is most evident from Table 17.3 is the use of Types I, II, and III
indexation by the Deaf subjects, but not the Deaf-Blind subjects. Figure 17.1
shows the total use of each type of indexation by each subject (summed across
all three instances of each narrative). Note that there are no examples of third
person singular or of locative deictic points (either to a point in space or to a
classifier on the nondominant hand) in the data from the Deaf-Blind subjects
in this study; the only examples of indexation by these subjects are
in the form of second person singular reference in questions addressed to a
character in the narrative. As reported in Section 17.2.1.2, Petronio (1988) and
Collins and Petronio (1998) have claimed that indexation is used by Deaf-Blind
signers to signal that a question is about to be posed to the interlocutor. Based
on these claims and the data from the current study, perhaps indexation in Deaf-
Blind signing is used primarily for referring to addressees, either in the context
of an interrogative as described previously or presumably in the context of a
declarative statement (e.g. I LIKE YOU, YOU MY FRIEND, etc.).
Since the Deaf-Blind subjects did not utilize Type I and Type II indexation
for pronominal reference in the signed narratives, I now describe how such pro-
nouns were realized by the Deaf-Blind subjects (or whether they used pronouns
at all). One of the Deaf-Blind subjects (DB1) used the strategy of fingerspelling
the name of the character who would subsequently perform an action. This
strategy served a function similar to that of Types I and II indexation, which
were favored by subjects D1 and D2. Not surprisingly, the sighted Deaf subjects also used
fingerspelling as a strategy for identifying characters in their narratives. In fact,
they often used fingerspelling in conjunction with a deictic point (sometimes
following the fingerspelling, sometimes preceding it, and sometimes articulated
simultaneously – on the other hand – with the fingerspelling). Table 17.4 shows
the use, by subject, of proper names (realized through fingerspelling) in the
narratives.

Table 17.4 The use of proper names realized by fingerspelling the name of the
character being referred to (segments 1–3: DB1; 4–6: DB2; 7–9: D1; 10–12: D2)

Segment number     1   2   3  4  5  6   7   8   9  10  11  12
Number of tokens  23  36  29  0  0  0  19  15  16  15  18   9
It can be seen that DB1, in order to refer to characters in his narratives,
fingerspelled the names of the characters, and he used that strategy more times
than any other subject did. However, DB2 never used fingerspelling of proper
nouns to make reference to his characters. Rather, DB2 used such signs as GIRL,
SHE, MOTHER, and FATHER. Table 17.5 shows the number of times DB2
used the signs for GIRL and the Signed English sign SHE.12 Thus, DB2 did not
use indexation for the function of specifying third person singular pronouns nor
did he use fingerspelling, but instead referred to his characters with common
nouns or Signed English pronouns. The use of SHE by DB2 raises another issue
that must be addressed: the use of varying amounts of Signed English by the
Deaf-Blind subjects. A discussion of the use of Signed English by the subjects
in this study and the implications of such use follows.

Table 17.5 The number of times DB2 used the signs GIRL and SHE in each of
his three narratives

Narrative   1   2   3
GIRL       14   7   7
SHE         0  11  12
12 SHE is not an ASL sign but rather an invented sign to represent the English pronoun ‘she.’
Several invented signed systems are used in deaf education throughout the USA in an effort
to teach deaf students English; the sign SHE as used by DB2 likely originated in one of these
signed systems. In the interest of space I refer to this type of signed language production simply
as Signed English.
Both Deaf-Blind subjects produced features of Signed English, such as the
conjunction AND (which is infrequent in ASL), and – in the case of DB2 –
articles and the copula (which
do not exist in ASL). Table 17.6 displays the number of times each subject
produced the copula, articles, AND, and fingerspelled D-O in each narrative.

Table 17.6 Use of Signed English elements in the narratives
(segments 1–3: DB1; 4–6: DB2; 7–9: D1; 10–12: D2)

Segment number   1   2   3   4   5   6  7  8  9  10  11  12
Copula           3   2   0   4   9  10  0  0  0   0   0   0
Articles         0   0   0   1   6   2  0  0  0   0   0   0
AND             16  14  16  12   9  11  1  1  2   1   1   0
D-O              1   1   1  10  12  11  0  0  0   0   0   0
In contrast to the Deaf-Blind subjects, the Deaf subjects, for the most part, did
not use features of Signed English. They did not utilize the copula, articles, and
Signed English D-O, and they only used the conjunction AND minimally (D1:
four tokens; D2: two tokens). Rather, the sighted Deaf subjects signed ASL with
NMS such as eye gaze shifts, head/torso shifts, and grammatical information
displayed with the raising or lowering of the eyebrows. In fact, the Deaf
signers did not discontinue their use of ASL nonmanual signals despite
presumably knowing that their interlocutors were not receiving those cues.
In ASL, the signing space can be used to specify grammatical relations such as
subject and object – information that is conveyed by word order in English or
varieties of Signed English. The signing space can also be used
in a grammatical manner in other ways (e.g. aspectual verb modulation; Klima
and Bellugi 1979) or to compare and contrast two or more abstract or concrete
entities (Winston 1991).
Naturally, both Deaf-Blind subjects utilized the signing space for the produc-
tion of uninflected signs. However, DB1, who is congenitally deaf and blind,
also consistently used the signing space for the production of inflected verbs,
but that was not the case for DB2.
Throughout the narratives, DB1 produced several verbs that have the possi-
bility of undergoing some type of inflection that utilizes the signing space. In
the three narratives produced by DB1, I identified 53 instances of a verb that
may be inflected, and DB1 executed 45 of those verb tokens with what appears
to be inflection that utilized the signing space to specify information about the
subject and/or object of the verb. For example, DB1 signed MEET with the
right hand held close to his chest while the left hand (in the same handshape)
moved from the area to the left and in front of the signer toward the right hand
in order to make contact with it. This manner of signing MEET occurred twice
across the three narratives. The inflected form has the meaning ‘He/She came
from somewhere in that direction, to the left of me, and met me here,’ whereas
the uninflected form does not specify location or a category of person.
DB1 inflected other verbs as well. The verb SEE was produced nine times
in his three narratives; in eight of those instances it was executed with some
reference to a target that was being “looked at.” For example, several times DB1
signed SEE with hand and arm movement in an upward fashion. He did this in
the context of referring to a mountain. Thus, the sign can be taken to mean that
he was ‘looking up the side of the mountain.’ The sign GO was often inflected
as well, giving reference to the general location of the action.
In contrast to DB1, DB2 did not use the signing space for the inflection
of verbs. Rather, strings of signs in his narratives resemble English, which
primarily relies on word order for the determination of subject and object in
a sentence or phrase. Remember, too, that DB2 used several Signed English
elements throughout his narratives.
Rather than signing only some verbs with inflection (like DB1), the two
sighted Deaf subjects signed almost all their verbs with some type of inflection.
That is, they inflected most (if not all) of the verbs they produced that can
be inflected for location. Furthermore, ASL nonmanual signals such as eye gaze
and body/torso movement were common in conjunction with the production of
verbs, and these nonmanual signals were often used to indicate role shifts.
Several facts have been presented above. First, the Deaf sighted subjects
produced ASL syntax (along with appropriate nonmanual signals) throughout,
while the Deaf-Blind subjects produced versions of Signed English, specifically
English word order and some lexical signs that do not exist in ASL. DB2 fol-
lowed Signed English word order more than DB1, who inflected most verbs in
his narratives. Thus, at least one of the Deaf-Blind subjects (DB1) used the
signing space in a spatially distinctive manner (resembling visual ASL), but he
still failed to use the deictic point for third person singular or locative reference,
which is unlike what would be expected in visual ASL. From these facts it is
clear that it is not necessarily the use of Signed English elements or word order
that accounts for the non-occurrence of certain deictic points (specifically, third
person singular and location references), but rather something else. It appears
that the tactile medium does not support substantial use of deictic points, and
perhaps we can hypothesize why this is so.
14 I must emphasize that the suggestions offered to explain the lack of deictic points in the Deaf-
Blind narratives are all based on production data. Comprehension studies of deictic points must
be conducted in order to confirm these suggestions.
15 The “other gestures” that I refer to here were defined in Iverson et al. (2000:111) as the following:
“showing, or holding up an object in the listener’s potential line of sight,” and “palm points, or
extensions of a flat hand in the direction of the referent.”
finger toward the referent in gesture. Index points localize the indicated referent with
considerable precision – much more precision than the Palm point. It may be that blind
children, who cannot use vision to set up a line between the eyes, the index finger, and the
gestural referent in distant space, are not able to achieve the kind of precise localization
that the Index point affords (indeed, demands). They may therefore make use of the less
precise Palm point. (p. 119)
In addition to the Iverson et al. (2000) study, Urwin (1979) and Iverson and
Goldin-Meadow (1997) reported that the blind subjects in their studies failed
to utilize deictic points for referencing purposes. These studies support the
suggestion that eye gaze may be a necessary prerequisite for the use of deictic
points for communication purposes.
16 As mentioned in Section 17.2.1.2, Collins and Petronio (1998) reported that the signing space
for Deaf-Blind individuals is smaller because of the close proximity of the signer and interlocutor.
This claim can also be made for the signing of the Deaf-Blind subjects in this study. In general,
there were no significant displacements of the hands/arms from the signing space other than
movement to contact the head/face area.
Presumably, a Deaf-Blind individual can perceive the movement of verbs
through the signing space as the verbs specify
subject and object in a phrase because perception simply requires that a Deaf-
Blind receiver follow the signer’s hands as he or she moves through contrastive
locations in the signing space. Following this line of reasoning, a Deaf-Blind
signer would presumably use the signing space for production and reception
of aspectual modification, which also involves specialized movement of the
hands through the signing space. However, there were no cases of aspectual
modification of verbs in the Deaf-Blind narratives.
Yet, the data from this study suggest that at least one use of the signing
space may have a visual component that would influence the manner in which
a Deaf-Blind person would use sign language. Specifically, the lack of deictic
points for referencing purposes in the Deaf-Blind narratives suggests that eye
gaze may play a significant role in the realization of deictic points. This means
that some uses of the signing space can be carried out without eye gaze support,
while others likely rely upon it.
One limitation of the current study is that the Deaf subjects are native signers
while the Deaf-Blind subjects are late learners of signed language. It would
be ideal to also investigate the signed language production of Deaf-Blind sign-
ers who acquired signed language following a regular acquisition process and
timeline. However, the incidence of children who are congenitally deaf and
blind and who also have Deaf parents is quite low. Alternatively, future in-
vestigations could include Deaf sighted subjects who are late learners of
language in order to make matched comparisons with Deaf-Blind late learners
of language.
17.6 Conclusions
This chapter has examined language that is perceived by touch and has compared
it to signed language that is perceived by vision. Integral to visual–gestural
language is the use of nonmanual signals (e.g. eyebrow shifts, head and torso
movement, and eye gaze, to name a few). What, then, are the consequences of
the presumed inability of Deaf-Blind users of signed language to perceive such
nonmanual signals? This study has begun to address this issue.
Based on the narrative data presented here, the signed language production of
Deaf-Blind individuals does differ in form from that of sighted Deaf individuals.
Specifically, sighted Deaf signers utilize nonmanual signals (such as eyebrow
shifts, head orientation, and eye gaze) extensively, while Deaf-Blind signers
do not. In addition, sighted Deaf signers utilize deictic points for referential
purposes while Deaf-Blind signers use other strategies to accomplish the same
task. It appears that the ability to perceive eye gaze is a crucial component in
the realization of deictic points for referential purposes.
Regarding the use of the deictic point, the Deaf sighted subjects in this study
used such points in four general ways in order to fulfill three semantic functions
(reference to third person singular, to a location or object at a location, and to
second person singular). On the other hand, the Deaf-Blind subjects used deictic
points exclusively to fulfill one function (second person singular reference).
In addition, the Deaf sighted subjects produced ASL, while the Deaf-Blind
subjects each produced a unique version of Signed English. One Deaf-Blind
subject (DB1) used the signing space to inflect verbs for location, whereas the
other Deaf-Blind subject (DB2) did not. This shows that the signing space can
be used contrastively in tactile signed language, but some uses of the signing
space in visual signed language – such as the use of deictic points – do not seem
to be as robust in the tactile modality.
As mentioned above, the difficulty in perceiving eye gaze presumably re-
stricts the manner in which Deaf-Blind signers use deictic points. This sug-
gestion is similar to findings regarding congenitally blind children who have
normal hearing: they rarely utilize deictic points for gestural purposes. The
manner in which blind individuals (both hearing and Deaf) – especially those
who are congenitally blind – conceive of the space around them may also
differ from that of sighted individuals. More research is certainly needed to under-
stand the language use of blind and Deaf-Blind individuals more fully; there
are many more insights to be gained from research on the role of vision in
language.
Acknowledgments
I would like to thank Carol Padden and an anonymous reviewer for their insightful
comments on an earlier draft of this chapter. This study was supported in part
by a Graduate Opportunity Fellowship from the University of Texas at Austin
to the author; a grant (F 31 DC00352-01) from the National Institute on Deafness
and Other Communication Disorders (NIDCD) and the National Institutes of
Health (NIH) to the author; and an NIDCD/NIH grant (RO1 DC01691-04) to
Richard P. Meier.
17.7 References
Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Lan-
guage. Doctoral dissertation, Boston University.
Bahan, Benjamin, Judy Kegl, Dawn MacLaughlin, and Carol Neidle. 1995. Convergent
evidence for the structure of determiner phrases in American Sign Language. In
FLSM VI. Proceedings of the Sixth Annual Meeting of the Formal Linguistics
Society of Mid-America, Vol. Two: Syntax II & Semantics/Pragmatics, ed. Leslie
Gabriele, Debra Hardison, and Robert Westmoreland, 1–12. Bloomington, IN:
Indiana University Linguistics Club.
Baker, Charlotte. 1976a. Eye-openers in American Sign Language. California Linguis-
tics Association Conference Proceedings.
Baker, Charlotte. 1976b. What’s not on the other hand in American Sign Language.
In Papers from the Twelfth Regional Meeting of the Chicago Linguistics Society.
Chicago, IL: University of Chicago Press.
Baker, Charlotte, and Carol A. Padden. 1978. Focusing on the nonmanual components
of American Sign Language. In Understanding language through sign language
research, ed. Patricia Siple, 27–57. New York: Academic Press.
Bendixen, B. 1975. Eye behaviors functioning in American Sign Language. Unpublished
manuscript, Salk Institute and University of California, San Diego, CA.
Cokely, Dennis. 1983. When is a pidgin not a pidgin? An alternative analysis of the
ASL-English contact situation. Sign Language Studies 38:1–24.
Collins, Steve, and Karen Petronio. 1998. What happens in Tactile ASL? In Pinky
extension and eyegaze: Language in deaf communities, ed. Ceil Lucas, 18–37.
Washington, DC: Gallaudet University Press.
Engberg-Pedersen, Elisabeth. 1993. The ubiquitous point. Sign 6:2–10.
Index
Asheninca 331–32, 344–45 Bloom, Paul 10–11, 24, 323, 327, 351, 368
ASL: see American Sign Language Bloomfield, Leonard 1–2, 21
assimilation 123, 154–56, 158–159 Boas, Franz 349, 366
Athabascan languages 322 Bobaljik, Jonathan 371, 400
Aubry, Luce 391, 400 Bornstein, Harry 157, 162
Auslan 231, 233, 345, 372, 376, 338–40, 6 borrowings, lexical 2, 3, 224, 230–235
Australian Sign Language: see Auslan Bos, Heleen 19, 21, 176, 197, 237, 252, 259,
auxiliary verbs 19, 176, 252, 324, 383–384 384, 400
Boyes-Braem, Penelope 272, 292
Baars, Bernard 117, 139, 141 Bradley, Lynette 102, 109
babbling: manual 7, 43; vocal 9 Braun, Allen 4, 23
Bahan, Benjamin 23, 175, 196, 251, 259, Brauner, Siegmund 281, 293
261, 294, 301, 303, 306, 307, 308, 309, 311, Brazilian Portuguese 371
318, 319, 354, 368, 374, 375, 391, 400, 403, Brazilian Sign Language (LSB) 4, 19–20,
417, 420, 424, 432, 440, 445, 446, 462, 463, 231, 233, 251, 324
465 Bregman, Albert S. 36, 38, 61
Bainouk 257, 259 Brentari, Diane 3, 5, 10, 21–22, 27, 30, 33,
Baker, Charlotte 213, 220, 338, 363, 366, 38, 39, 41, 42, 44, 52, 60, 61, 68, 81, 84, 88,
446, 447, 460, 465 93, 109, 130, 132, 139, 205, 221, 263, 285,
Baker, Mark 383, 386, 397, 400 286, 292, 293, 296, 318, 387, 400
Baker-Shenk, Charlotte 396, 400 Brinkley, Jim 108, 109
Battison, Robbin 3, 21, 32, 33, 39, 61, 68, British Sign Language (BSL) 6, 19, 213,
84, 95, 108, 149–51, 154, 157, 162 231, 233, 325, 372, 384, 396, 422–441
Battison’s constraints 32, 149 Broselow, Ellen 400, 395
Baumbach, E.J.M. 289, 292 Brouland, Josephine 202, 204, 209, 211,
Bavelier, Daphne 4, 21, 23 221
Bebian, Roch-Ambroise 145, 146, 162 Brown, Roger 156, 162
Becker-Donner, Etta 284, 292 Bryant, Peter 102, 109
Bella Bella 347–49 Butterworth, Brian 112, 139
Bellugi, Ursula 2–4, 8, 14, 19, 21–22, 27, 28, Bybee, Joan L. 199, 202, 206–210, 219,
29, 32, 33, 62, 72, 85, 102, 103, 108, 110, 221
116, 124, 126, 127, 128, 129, 131, 132, 133,
140, 141, 151, 158, 162, 163, 169, 170, 171, Campbell, Ruth 10, 21
172, 173, 175, 197, 234, 236, 261, 296, 311, Cantonese 297, 298, 303, 306, 309–310,
318, 319, 323, 325, 333, 366, 367, 373, 396, 311, 315
402, 427, 429, 436, 441, 457, 459, 466 Capell, A. 280, 293
Bendixen, B. 447, 465 Caramazza, Alfonso 117, 139
Benson, Philip J. 10, 21 Caron, B. 268, 293
Bentin, Shlomo 46, 62 Cassell, Justine 198
Benton, Arthur L. 426, 440 Casterline, Dorothy C. 2, 24, 27, 33, 69, 86,
Benton, R.A. 347, 366 88, 111, 147, 164, 373, 404
Benveniste, Emile 363, 366 categorical perception 10
Berber 325, 437, 438 categorical structure 177, 178, 179, 191,
Berg, Thomas 115, 117, 133, 139 195
Bergman, Brita 213, 220, 272, 292, 312, 318 Channon, Rachel; see also Crain, Rachel
Bever, Thomas G. 93, 97, 111, 145, 162 Channon 81, 83, 84, 85
Bickerton, Derek 12, 21 Chao, Chien-Min 69, 85
Bickford, Albert 226, 235 Chao, Y.R. 56, 61
Blake, Joanna 210, 221 Charlier, M. 295
Blondel, Marion 103, 109 Chase, C. 37, 61
gradient structure 179, 186–187, 189 iconicity: 11, 12, 14, 15, 16, 83, 167, 225,
grammaticization 199–202, 205–208, 171–72, 233–235, 357–358, 365, 430, 438;
210–212, 216–220 shared symbolism 224, 229
Green, David M. 37, 62 Idioma de Señas de Nicaragua (ISN): see
Greenberg, Joseph 224, 233, 235, 370 Nicaraguan Sign Language
Grinevald, Collette 397, 401 Igoa, José 128, 140
Groce, Nora E. 12, 22, 225, 235 indexation: deictic points 443, 446, 452,
Grosjean, Francois 158, 163 453, 455, 460, 461, 462, 463; indexic signs
Guerra Currie, Anne-Marie P. 226, 235 224, 233
Gustason, Gerilee 147, 163 indices (referential) 253–257
indigenous signed languages: Mexico 225,
Haegeman, Liliane 266, 293 71–72; Southeast Asia 21
Haiman, John 57, 62, 213–214, 221 Indo-European 130, 131
Haitian Creole 13 Indo-Pakistan Sign Language 72, 73,
Hale, Kenneth 333, 367 338–40, 345, 353, 363
Halle, Morris 36, 41, 61, 258, 260, 264, 265, Ingram, David 338, 355, 367
279, 293 initialized signs 227, 235
Hamburger, Marybeth 89, 111 Inkelas, Sharon 50, 64
Hamilton, Lillian 157, 162 International Phonetic Association 70, 85
Hamsher, Kerry 426, 440 Israeli Sign Language 16, 19, 69, 72, 73, 74,
Happ, Daniela 114, 117, 127, 140 248
Harley, Heidi 264, 294 Italian 3, 114
Harré, Rom 330, 332–33, 368 Italian Sign Language (LIS) 203, 338–40,
Hartmann, Katharina 268, 269, 270, 294 345
Háusá 268, 284 Itô, Junko 39, 42
Henrot, F. 295 Iverson, Jana M. 168, 169, 173, 460, 461,
Hermelin, B. 426, 441 465
Herzig, Melissa 405, 419, 420
Hewes, Gordon W. 200, 221 Jackendoff, Ray 242, 260, 386, 387, 401
Hickock, Gregory 4, 22, 108, 110, 407, Jakobson, Roman 41, 62
420 Janis, Wynne 51, 62, 250, 260, 372, 375,
Hildebrandt, Ursula 104, 108, 110 382, 401
Hinch, H.E. 280, 293 Janzen, Terry 200, 205, 212, 214–218, 220,
Hinshaw, Kevin 108, 109 222
Hirsh, Ira J. 62 Japanese 93, 371
Hockett, Charles 2–3, 15, 22, 461, 465 Japanese Federation of the Deaf 69, 71, 85
Hohenberger, Annette 114, 117, 127, 140 Japanese Sign Language (NS) 5, 6, 20, 72,
Holzrichter, Amanda S. 42, 62, 68, 85, 226, 73, 172, 224–229, 232, 234, 245, 252,
235 363, 344–45, 338–42, 372, 376, 380, 384,
home signs 12, 168, 225 396
Hong Kong Sign Language (HKSL) 5, 238, Jenkins, J. 46, 62
299, 300, 301, 302, 303, 304, 305, 306, 307, Jenner, A.R. 37, 61
309, 312, 314, 315, 316, 317 Jescheniak, Jörg 115, 140
Hopper, Paul 199, 205–206, 212, 221 Jezzard, Peter 4, 23
Hua 57, 62, 214 Johnson, Mark 186, 197
Hulst, Harry van der; see van der Hulst, Harry Johnson, Robert E. 27, 33, 44, 46, 58, 63, 68,
Huet, Eduardo 224, 225, 235 86, 88, 110, 154, 163, 175, 197, 225, 235,
Hume, Elizabeth V. 39, 43, 61 356, 367, 374, 402
Humphries, Tom 204, 209, 211, 221 Johnston, Trevor 231, 232, 236, 338, 367,
Hyman, L.M. 288, 289, 294 391, 402
Kaplan, Edith 427, 440 language contact and change: cognate signs
Karni, Avi 4, 23 231; lexical borrowing 224, 230–235
Kaufman, Terrence 231, 236 language contact and change normal
Kayne, Richard 291, 294 transmission 231–232
Kean, Mary-Louise 130, 140 language faculty 241–242, 423
Keenan, Edward L. 18, 346, 350, 366 language planning 144–47, 160, 161
Kegl, Judy A. 12–13, 19, 22–23, 173, 175, language processing 31, 89, 92, 145, 156
196, 261, 294, 296, 301, 303, 306, 307, 308, language production 112, 115, 132, 135, 138
309, 311, 318, 319, 354, 368, 374, 397, 401, language typology 56–57, 114, 129, 131, 139
403, 404, 417, 420, 424, 432, 440, 446, 465 language universals: formal 370, 396, 398;
Keller, Jörge 372, 375, 402 substantive 370, 373, 394, 398, 399
Kendon, Adam 69, 85, 170, 173, 207, 210, Langue de Signes Française (LSF): see French
222, 388, 402 Sign Language
Kennedy, Graeme D. 69, 85, 226, 231, 233, Lany, Jill 173, 466
236 LaSasso, Carol 161, 163
Kenstowicz, Michael 65, 69, 70, 85, 288, 294 Last, Marco 337, 367
Kettrick, Catherine 69, 85 lateralization 2, 4
Khasi 346–48 Latin 207
Kimura, Doreen 423, 427, 440 Launer, Patricia 151, 163
Kinande 288 Laycock, Donald C. 332, 367
Kinyarwanda 282, 284 Lee, Robert G. 23, 261, 294, 301, 303, 306,
Kipare 290 307, 308, 309, 311, 318, 319, 354, 368, 374,
Kiparsky, Paul 177, 197 403, 417, 420, 424, 432, 440
Kita, Sotaro 184, 197, 200, 222, 389, 390, Lehmann, Christian 379, 380, 402
402 lengthening 44, 52, 53, 54
Klima, Edward S. 2–4, 8, 14, 19, 22, 27, 28, Lengua de Señas Mexicana (LSM): see
29, 32, 33, 52, 58, 62, 73, 85, 102, 103, 108, Mexican Sign Language
110, 116, 123, 126, 127, 128, 129, 131, 132, Lengua de Signos Española (LSE): see
133, 140, 141, 151, 158, 163, 169, 170, 171, Spanish Sign Language
172, 173, 175, 197, 234, 236, 237, 245, 247, Lentz, Ella Mae 117, 140, 142
252, 253, 255, 261, 296, 311, 318, 319, 323, Leuninger, Helen 114, 115, 117, 127, 128,
325, 330, 333, 335, 349, 354, 356, 358, 366, 132, 135, 139, 140
367, 368, 373, 374, 396, 402, 407, 420, 427, Levelt, Willem J.M. 98, 101, 110, 111, 112,
429, 441, 446, 457, 459, 466 113, 115, 116, 121, 128, 133, 134, 135, 136,
Kohlrausch, A. 37, 62 137, 140, 141
Kyle, Jim G. 69, 85, 232, 233, 234, 236 Levelt’s model of language production 113,
115, 116, 121, 125, 128, 133, 134, 135, 138
Labov, William 410, 420 Levesque, Michael 102, 111
Lacy, Richard 374, 402 Levy, Elena 198
Ladd, Robert 177, 197 lexicon: 2, 15, 375, 377, 378, 387, 391, 392;
Lak 347–49 comparative studies of borrowing, lexical
Lakoff, George 186, 197, 402 224, 230–235; comparative studies of
Lalwani, Anil 4, 23 equivalent variants 227–235; comparative
Landau, Barbara 156, 162 studies of similarly-articulated signs
Lane, Harlan 62, 63, 67, 86, 103, 111, 144, 227–235
146, 147, 163, 202, 222, 333, 350–353, 367 Liben, Lynn 143, 163
Langacker, Ronald W. 179, 185, 197, 402 Liberman, Alvin M. 28, 33, 93, 108, 110
language acquisition 2, 4, 12, 16, 17, Liddell, Scott K. 3, 18, 22, 27, 33, 44, 46, 58,
143–45, 151, 156–58, 160, 161, 250, 256, 63, 68, 86, 88, 110, 154, 163, 170, 171, 173,
423, 425, 429, 434 175, 188, 190, 197, 213, 222, 245, 248, 249,
250, 252, 253, 254, 255, 257, 260, 272, 294, Masataka, Nobuo 416, 420
297, 310, 311, 314, 318, 319, 324–326, 337, Mathangwane, J.T. 289, 294
343, 355–58, 361, 367, 368, 374, 375–377, Mathur, Gaurav 250, 252, 257, 258, 261,
379, 390, 391, 402, 411–412, 420, 446, 447, 372, 391–394, 403
466 Matthews, Stephen 309, 319
Lillo-Martin, Diane 4, 19, 22, 112, 113, 114, Mattingly, Ignatius 108, 110
139, 141, 175, 197, 237, 243, 244, 245, 247, Mauk, Claude 7, 23, 63
251, 252, 253, 255, 260, 261, 296, 318, 319, Maung 280, 284
323–324, 330, 335, 349, 354, 356, 358, 368, Maxwell, Madeline 156, 163
374, 384, 396, 399, 402, 404, 436, 440, 446, May, Robert 395, 401
466 Mayberry, Rachel 4, 22
Lindblom, Björn 378, 402 Mayer, Connie 147, 163
Linde, Charlotte 410, 420 Mayer, Karl 115, 141
Lingua de Sinais Brasileira (LSB): see McAnally, Patricia 158, 163
Brazilian Sign Language McBurney, Susan 108, 109
Lingua Italiana del Signi (LIS): see Italian McCarthy, John 39, 63, 114, 141, 290, 294,
Sign Language 400, 403
linguistic savant 6, 325, 422–440 McClelland, James 92, 109
literacy 147, 161 McCullough, Karl-Erik 198
Liu, Chao-Chung 69, 85 McCullough, Stephen 10, 22
Livingston, Sue 160, 163 McGarvin, Lynn 7, 23
Llogoori 47, 62 McKee, Cecile 156, 161, 163, 164
loci (referential) 245–249, 252–257 McKee, David 226, 231, 233, 236
Loew, Ruth 256, 261 McNeill, David 11, 22, 168, 169, 170, 173,
Logical Form (LF) 114, 264–265 176, 177, 180, 184, 186, 196, 197, 198, 200,
Longobardi, Giuseppe 308, 319 222, 389, 403
Luce, Paul 89, 110 Meadow, Kathryn P. 156, 164
Luetke-Stahman, Barbara 144, 163 Mehler, Jacques 107, 110
Lundberg, Ingvar 102, 110 Meier, Richard P. 3–7, 9–10, 13, 16–20,
Lupker, Stephen 89, 92, 110 23–24, 35, 37, 38, 42, 62, 63, 68, 85, 143,
Lyovin, Anatole V. 279, 294 151, 164, 176, 198, 244, 245, 247, 249, 250,
252, 253, 254, 255, 261, 296, 310, 319,
MacDonald, Brennan 102, 111 323–324, 330, 339, 354, 361, 368, 372, 373,
MacKay, Donald G. 112, 115, 117, 139, 375, 378, 379, 384, 401, 403, 446
141 Meir, Irit 16, 19, 21, 237, 248, 256, 259, 261,
MacKay, Ian R.A. 136, 141 344, 366, 372, 375, 382, 400, 403
MacLaughlin, Dawn 23, 261, 294, 298, 299, Meissner, Martin 8, 23
300, 301, 303, 305, 306, 307, 308, 309, 311, memory, short term 14, 29, 426
318, 319, 354, 368, 374, 396, 403, 417, 420, mental spaces 296–297, 310–317
424, 432, 440, 446 Meringer, Rudolf 115, 141
MacNeilage, Peter F. 9, 22, 133, 134, 141 Methodical Sign 145–47, 160
Mainwaring, Scott 407, 420 Metzger, Melanie 161, 163, 170, 171, 173,
Mano 284 175, 184, 198, 311, 319, 446, 466
Manually Coded English (MCE) 32, 323, Mexican Sign Language (LSM) 5, 172,
143–162, 17 224–235
Marantz, Alec 260, 264, 265, 293, 403 Meyer, Antje 98, 101, 110, 111
Marcario, Joanne 89, 110 Meyer, Antje 112, 115, 117, 121, 129, 133,
Marentette, Paula 4, 7, 24, 43, 63 141
Marschark, Marc 167, 171, 173, 183, 197 Meyer, Ernst 102, 111
Marsh, P. 173 Mikos, Ken 117, 140, 142
Miller, Christopher Ray 49, 63, 68, 86, 103, Mühlhäusler, Peter 330, 332–33, 368
109, 261 Myers, Scott 56, 63, 288, 294
Miller, George A. 334, 368 Mylander, Carolyn 12, 22
Mills, Anne 39, 64
Mintun, Mark 102, 110 Nagala 331–32, 344–45
Mirus, Gene R. 7, 23, 63 Nagaraja, K.S. 348, 368
modal: 13, 201, 212, 220, 207–209; Nanai 281, 284
epistemic 208–212; obligation 202, 210, natural language 143, 161, 162
212; permission 207–210; possibility 201, Natural Sign 145, 146
207–210 Navaho 8
modality: 35, 145, 241, 259, 350–53, negation: 5, 20, 251, 325, 431–433, 437,
364–65; influence on phonology 60, 113, 438; morphological 263, 274, 281, 283,
253; medium versus 11, 322–323, 329, 284; split 238, 266, 274, 284
358–59, 364–65; noneffects of 2, 14–15, negation phrase (NegP) 266, 269, 271, 273,
113, 237–238, 243–244 275
modality effects: classification of, rules unique to signed or spoken languages 13, 17–18; classification of, statistical 13, 15–16; classification of, typological 13, 16, 56–57, 114; classification of, uniformity of signed languages 13, 18–20, 57, 113–114, 324, 395–397, 399; iconicity 233–235; lexical similarity across signed languages 159–61, 232–235; sequentiality vs. simultaneity 27–28, 113–114, 134, 438; sources of, articulators 6, 7, 8, 9, 36, 107–108, 125, 132; sources of, iconicity 11, 12, 15, 357–358; sources of, indexicality 11, 12, 245, 359; sources of, nonmanual behaviors 237, 238; sources of, perception 10, 11, 36, 107; sources of, signing space 132, 237, 238, 245, 344, 348–352, 399, 409, 439; sources of, youth of 6, 12, 13, 20
Neidle, Carol 5, 18, 23, 175, 196, 237, 244, 247, 248, 251, 261, 272, 275, 276, 294, 301, 303, 306, 307, 308, 309, 311, 318, 319, 354, 368, 374, 403, 417, 420, 424, 432, 440, 446, 465
Nespor, Marina 39, 63
Neville, Helen J. 4, 21, 23
New Zealand Sign Language (NZSL) 158, 159, 231
Newkirk, Don 67, 86, 116, 124, 126, 133, 134, 141
Newport, Elissa L. 3–5, 10, 12, 16, 19–21, 48, 64, 67, 68, 87, 116, 124, 126, 134, 143, 151, 159, 164, 176, 198, 262, 322, 326, 370, 372, 398, 403, 404
Nicaraguan Sign Language (ISN) 12, 13, 168
Nihon Syuwa (NS): see Japanese Sign Language
Moeller, Mary P. 144, 163 Nolen, Susan Bobbit 8, 25, 68, 87
Moody, Bill 19, 23, 237 nominals 5, 238, 297, 308–310, 312
Moores, Donald F. 144, 147, 164 non-manual behaviors: 19, 113, 119, 124,
Morford, Jill P. 168, 173 167, 170, 171, 237, 238, 274, 284–285, 289,
Morgan, Gary 422, 423, 425, 436, 440 442, 447, 457, 459, 464, 431, 438; as
morpheme 117, 130, 131, 138, 120, 126, gestures 217–220; eyebrow raise 213–214,
128, 129 217–218; eyegaze 170, 447, 453, 459, 460,
morphology: 2, 3, 13, 16, 19, 20, 32, 48–50, 461, 463, 464; negative headshake 263, 275,
57, 113, 138, 148, 151, 152, 156–60, 277, 272, 286
177–178; affixation 16, 17, 150–55, 157–60, Norris, Dennis 93, 109, 111
393, 394; and language typology (see noun phrases (see nominals)
language typology); and slips of the hand Noyer, Rolf 264, 265, 279, 294
128–131 null arguments 19, 114, 325
Morris, Desmond 170, 173 number 329–33, 335–40, 344, 353–54,
Motley, Michael 117, 139, 141 362–65
Mounty, Judith 160, 162 Nusbaum, Howard 89, 111
Mowry, Richard 136, 141 Nuyts, Jan 357, 368
Quadros, Ronice Müller de 4, 20, 24, 251, Savir, Hava 69, 71, 86
261 Schade, Ulrich 115, 141
Quechua 346–47 Schein, Jerome 144, 164
questions: wh- 399, 446, 449, 452; yes/no Schiano, Diane J. 407, 420
201, 212–218, 446, 449, 452 Schick, Brenda 411, 421, 404
Quigley, Stephen 158, 163 Schlesinger, I.M. 156, 164
Schober, Michael F. 407, 421
Rabel, L. 348, 368 Schreifers, Herbert 98, 99, 101, 110, 111
Raffin, Michael 156, 162, 164 segments: 35–36, 39, 42–45, 51–59, 65–68,
Raichle, Marcus 102, 110 69, 76–84, 93, 97, 98; repetition of 31,
Ramsey, Claire 143, 164, 368 65–81, 84, 85
rate of signing 8, 32, 132, 138 Segui, Juan 107, 110
Rathmann, Christian 20, 24, 248, 252, 261, Seidenberg, Mark 46, 63
273, 295, 372, 380, 383, 384, 392–393, 403, Semitic languages 16, 57, 114, 160
404 Senghas, Ann 12, 19, 22, 200, 222, 237, 251,
Rauschecker, Josef 4, 23 262
Ray, Sidney Herbert 332, 368 Sergent, Justine 102, 111
Rayman, Janice 102, 111 Setswana 288
Readjustment rules 265, 272, 274, 285, 394, Shaffer, Barbara 200, 202, 208–210, 212,
395, 278 220, 222
Redden, J.E. 283, 295 Shankweiler, Donald 93, 110
reduplication 48, 49, 50, 69, 74 Shattuck-Hufnagel, Stephanie 125, 142
Reed, Charlotte 443, 444, 445, 462, 466 Shepard-Kegl, Judy; also see Kegl, Judy A.
Reed, Judy 368, 331 374, 404
referential specificity 358–359 Sherrick, Carl E. 37, 62
Reich, Peter 112, 115, 140 Shónà 56, 63, 284, 288, 281
Reikhehof, Lottie 69, 86 Shroyer, Edgar H. 69, 86
Remez, Robert E. 156, 162 Shroyer, Susan P. 69, 86
Repp, Bruno 93, 111 Shuman, Malcolm K. 69, 71, 86
Rizzolatti, Giacomo 200, 222 Sierra, Ignacio 224, 236
Roelofs, Ardi 112, 115, 121, 141 Sign Language of the Netherlands (NGT)
role shift 248, 255, 452, 453 19, 176, 213, 252
Romance languages 114 sign poetry 102–103
Romano, Christine 237 Signed English 326, 449, 455, 457, 459, 460,
Rondal, J.-A. 272, 294 461, 463, 464
Rose, Heidi 103, 111 Signing Exact English (SEE 2) 8, 12, 17,
Rose, Susan 158, 163 146–50, 152–54, 158–60, 323, 352
Ross, J.R. 237 signing space: 205, 237–238, 321, 326, 435,
Russian 178, 279, 284 439, 457, 462, 463; gestural 387–393,
395–397, 399; interpretation of, mirrored
Salvatore, M. 62 413–416; interpretation of, reversed
Sanchez-Casas, Rosa 93, 109 413–416, 418; interpretation of, shared
Sandler, Wendy 16, 21, 27, 30, 33, 35, 44, 407–409, 413–419; spatial formats,
60, 63, 80, 83, 86, 88, 106, 109, 111, 116, diagrammatic 411–412, 414, 418–419;
123, 141, 243, 244, 246, 247, 259, 261, spatial formats, viewer space 411–412, 418;
261, 344, 352, 366, 373, 374, 387, 400, 404 410–412, 414, 418
Sapir, Edward 2, 24 Simmons, David 69, 86
Sauliner, Karen 157, 162 Singleton, Jenny L. 12, 24, 169, 173, 200,
Saussure, Ferdinand de 15, 24 222, 404
Savin, H.B. 93, 97, 111 Siple, Patricia 29, 33, 350, 368
slips of the hand: 2, 3, 5, 14, 29, 116, 138, Sutton-Spence, Rachel 19, 25, 237, 404,
117; morphological structure and 128–131; 431, 432, 433, 441
phonological parameters and 29, 123–124, Suty, Karen 160, 164
126–127; self-corrections (repairs) 29, 117, Swedish Sign Language 213, 253
119, 122, 125, 136, 135; types 117, 124, Sweetser, Eve 199, 222
128, 133, 134, 138, 119, 120, 122, 125, 126, Swisher, M. Virginia 156, 158, 164
127 Sybesma, Rint R. 297, 305, 318
slips of the tongue 29, 116, 119, 121, 127, syllable 27, 30, 35, 43, 44, 45, 46, 50, 51, 56,
129, 138 57, 93, 94, 97, 98, 106–107, 108, 124, 132,
Slobin, Dan I. 132, 142, 156, 164 133, 137, 290
Slowiaczek, Louisa M. 89, 111 syntax: 2–4, 113, 258, 259, 237–238,
Smith, Neil 422, 423, 425, 426, 427, 430, 241–244, 251–255; autonomy of 237–238,
435, 436, 440, 441 241–244; modality and 243–244, 296–297
Smith, Cheri 117, 140, 142 Sze, Felix Y.B. 309, 319
Smith, Wayne 18–19, 248, 252, 262, 324,
327, 329, 340–41, 368, 384, 404, 24 tactile signed language 442, 443, 445,
Smith Stark, Thomas C. 225, 231, 232, 233, 446
236 tactile-gestural modality 4, 442–465
Son, Won-Jae 69, 86 Taiwanese Sign Language 18, 248, 252, 324
sonority 31, 43, 98, 106 Talmy, Leonard 390, 399, 404, 405, 421
Spanish 3, 15, 128, 207, 373, 390, 426 Tang, Gladys 297, 319
Spanish Sign Language (LSE) 5, 172, Taub, Sarah F. 83, 87, 178, 198, 392, 404
224–235 Taylor, Holly A. 410, 413, 419, 421
spatial language 18, 322, 405, 439 temporal aspect: continuative 151–152;
spatial locations: 17, 244–50, 252–253, delayed completive 83
255–258, 297, 322, 373–375, 377–381, 385, Tencer, Heather L. 173, 445, 446, 462, 463,
390–392, 395; non-listablity of 175–176, 466
245, 356, 377, 378, 385, 386, 392 Thelen, Esther 7, 25
spatial marking 333–34, 344–50, 353, 358 Thomason, Sarah G. 231, 232, 236
Speas, Margaret 371, 404 tone languages 114, 268, 281, 287
specificity 298–305, 308–309, 312 topic marking 19, 201, 212–218
Spreen, Otfried 426, 440 Traugott, Elizabeth Closs 206, 217, 222
Stack, Kelly 44, 64 Tsimpli, Ianthi-Maria 422, 423, 425, 426,
Stedt, Joe D. 144, 147, 164 427, 430, 435, 436, 440, 441
Stemberger, Joseph P. 112, 115, 117, 125, Tsonga 289
128, 130, 133, 134, 142 Tuldava, J. 279, 295
Sternberg, Martin L. A. 69, 86 Turkish 131
Stokoe, William C. 2, 5, 24, 27, 28, 33, 39, Turner, Robert 4, 23
58, 64, 69, 80, 86, 88, 111, 147, 164, 167, Tversky, Barbara 407–408, 410, 413, 419,
173, 174, 200, 220, 231, 236, 373 420, 421
Strange, Winifred 46, 62, 64 Twi 283, 284
Studdert-Kennedy, Michael 28, 33, 93, 108, typological homogeneity 114–115, 348–50,
110 352–54, 358–59, 364–65
Stungis, Jim 103, 111
Supalla, Samuel J. 12, 17, 24, 69, 87, 148, Universal Grammar (UG) 27, 38, 112, 114,
151, 158–161, 155, 164, 323, 327, 352, 369 139, 243–244, 423, 431
Supalla, Ted 3, 5, 19–21, 24, 48, 64, 67, 68, universals 241, 256, 370
87, 151, 159, 160, 164, 169, 174, 245, 262, Uno, Yoshio 69, 87
338, 342–45, 369, 370, 372, 397, 403, 404, Urwin, Cathy 461, 466
405, 421 Uyechi, Linda 44, 46, 64, 68, 73, 80, 87
Valli, Clayton 103, 111 Wilcox, Phyllis Perrin 68, 87, 200, 208, 212,
van der Hulst, Harry 52, 60, 64, 80, 87, 243, 223, 392, 404
262, 387, 404 Wilcox, Sherman E. 167, 173, 200, 208, 212,
van Hoek, Karen 311, 319, 436, 440 220, 223
van Ooijen, Brit 93, 97, 109, 111 Willerman, Raquel 7, 23
variation, sources of (see also modality effects) Wilson, Kirk L. 338–39, 369
114 Winston, Elizabeth A. 330, 369, 459,
Varney, Nils R. 426, 440 467
Vasishta, Madan M. 338–39, 369 Wismann, Lynn 69, 87
Veinberg, Silvana C. 272, 295 Wix, Tina 161, 164
Venda 282, 284 Wodlinger-Cohen, R. 157, 164
verb agreement: 2, 3, 5, 6, 12, 17–19, 51, Woll, Bencie 10, 19, 21, 25, 69, 85, 172,
175–177, 241, 244–259, 322–326, 342, 350, 174, 213, 223, 232, 233, 234, 235, 236, 237,
356, 371–372, 374, 379–381, 388, 393, 398, 404, 422, 423, 425, 431, 432, 433, 436, 440,
433–439, 457, 459; phonetic constraints on 441
258, 392, 393, 395 Wood, Sandra K. 277, 295, 399, 404
visual perception 28, 29 Woodbury, Anthony 177, 198
Vogel, Irene 39, 63 Woodward, James C. 19, 21, 25, 147, 165,
Vogt-Svendsen, Marit 272, 295 202, 223, 226, 230, 232, 236, 333, 338–39,
Volterra, Virginia 430, 441 369, 444, 467
Vorberg, Dirk 101, 110 word order 3, 19, 20, 51, 322, 324–325, 459,
vowels 45–47, 93, 94, 97, 102, 106, 133, 187 460
Wurmbrand, Susanne 371, 400, 404
Wall, Stig 102, 110 Wylie, Laurence 206, 223
Wallace, Simon B. 10, 21
Walsh, Margaret 69, 87 Yarnall, Gary 443, 467
Ward, Jill 69, 87 Yidin 56, 62
Warren, D.H. 38, 64 Yip, Virginia 309, 319
Warrington, E.K. 426, 441 Yoruba 371
Webb, Rebecca 159, 160, 164, 186, 198 Yucatec Maya Sign Language 72, 73
Weber, David J. 347, 369
Wechsler, Stephen 375, 401 Zaidel, Eran 102, 111
Weinreich, Uriel 234, 236 Zakia, Renée A. E. 7, 23
Welch, R.B. 38, 64 Zanuttini, Raffaella 266, 275, 293, 295
Wells, G. 147, 163 Zattore, Robert 102, 111
West Greenlandic 56, 57, 62 Zawlkow, Esther 147, 163
Whittemore, Gregory 116, 142 Zec, Draga 50, 64
Wiese, Richard 116, 142 Zeshan, Ulrike 69, 72, 73, 87, 237, 272, 278,
Wilbur, Ronnie B. 8, 25, 27, 30, 33, 42, 44, 295, 338–40, 363, 369
64, 68, 73, 80, 87, 200, 215, 218, 222, 223, Zimmer, June 123, 142, 298, 319
272, 285, 290, 291, 295, 298, 319 Zuck, Eric 102, 111