
Modality and structure in signed and spoken languages

The realization that signed languages are true languages is one of the great
discoveries of the last thirty years of linguistic research. The work of many
sign language researchers has revealed deep similarities between signed and
spoken languages in their structure, acquisition, and processing, as well as
differences arising from the differing articulatory and perceptual constraints
under which signed languages are used and learned. This book provides a
crosslinguistic examination of the properties of many signed languages, in-
cluding detailed case studies of American, Hong Kong, British, Mexican, and
German Sign Languages. The contributions to this volume, by some of the
most prominent researchers in the field, focus on a single question: to what
extent is linguistic structure influenced by the modality of language? Their
answers offer particular insights into the factors that shape the nature of lan-
guage and contribute to our understanding of why languages are organized as
they are.

richard p. meier is Professor of Linguistics and Psychology at the Univer-
sity of Texas at Austin. His publications have appeared in various journals in-
cluding Language, Cognitive Psychology, Journal of Memory and Language,
Applied Psycholinguistics, Phonetica, and American Scientist.

kearsy cormier is a lecturer in Deaf Studies in the Centre for Deaf Studies
at the University of Bristol. She earned her doctorate in linguistics at the
University of Texas at Austin. Her dissertation explores phonetic properties
of verb agreement in American Sign Language.

david quinto-pozos received his doctorate in linguistics from the
University of Texas at Austin. He currently teaches in the Department of
Linguistics at the University of Pittsburgh.
Modality and structure in signed
and spoken languages

edited by
Richard P. Meier, Kearsy Cormier,
and David Quinto-Pozos

with the assistance of Adrianne Cheek, Heather Knapp,
and Christian Rathmann

PUBLISHED BY THE PRESS SYNDICATE OF THE UNIVERSITY OF CAMBRIDGE
The Pitt Building, Trumpington Street, Cambridge, United Kingdom

CAMBRIDGE UNIVERSITY PRESS
The Edinburgh Building, Cambridge CB2 2RU, UK
40 West 20th Street, New York, NY 10011-4211, USA
477 Williamstown Road, Port Melbourne, VIC 3207, Australia
Ruiz de Alarcón 13, 28014 Madrid, Spain
Dock House, The Waterfront, Cape Town 8001, South Africa

http://www.cambridge.org

© Cambridge University Press 2004

First published in printed format 2002

ISBN 0-511-00787-6 eBook (Adobe Reader)


ISBN 0-521-80385-3 hardback
Contents

List of figures page viii


List of tables xi
List of contributors xiii
Acknowledgments xvii

1 Why different, why the same? Explaining effects and
non-effects of modality upon linguistic structure in sign
and speech 1
richard p. meier

Part I Phonological structure in signed languages 27


2 Modality differences in sign language phonology
and morphophonemics 35
diane brentari
3 Beads on a string? Representations of repetition in spoken and
signed languages 65
rachel channon
4 Psycholinguistic investigations of phonological structure
in ASL 88
david p. corina and ursula c. hildebrandt
5 Modality-dependent aspects of sign language production:
Evidence from slips of the hands and their repairs in German
Sign Language 112
annette hohenberger, daniela happ,
and helen leuninger
6 The role of Manually Coded English in language development
of deaf children 143
samuel j. supalla and cecile mckee


Part II Gesture and iconicity in sign and speech 167


7 A modality-free notion of gesture and how it can help us with
the morpheme vs. gesture question in sign language linguistics
(Or at least give us some criteria to work with) 175
arika okrent

8 Gesture as the substrate in the process of ASL grammaticization 199
terry janzen and barbara shaffer

9 A crosslinguistic examination of the lexicons of four
signed languages 224
anne-marie p. guerra currie, richard p. meier,
and keith walters

Part III Syntax in sign: Few or no effects of modality 237


10 Where are all the modality effects? 241
diane lillo-martin

11 Applying morphosyntactic and phonological readjustment
rules in natural language negation 263
roland pfau

12 Nominal expressions in Hong Kong Sign Language: Does
modality make a difference? 296
gladys tang and felix y. b. sze

Part IV Using space and describing space: Pronouns, classifiers,
and verb agreement 321
13 Pronominal reference in signed and spoken language: Are
grammatical categories modality-dependent? 329
susan lloyd mcburney

14 Is verb agreement the same crossmodally? 370
christian rathmann and gaurav mathur

15 The effects of modality on spatial language: How signers and
speakers talk about space 405
karen emmorey

16 The effects of modality on BSL development in an
exceptional learner 422
gary morgan, neil smith, ianthi tsimpli,
and bencie woll

17 Deictic points in the visual–gestural and tactile–gestural modalities 442
david quinto-pozos

Index 469
Figures

2.1 2.1a The handshape parameter used as an articulator in
THINK; 2.1b as a place of articulation in TOUCH;
2.1c as a movement in UNDERSTAND page 41
2.2 ASL signs showing different timing patterns of handshape
and path movement 42
2.3 Nominalization via reduplication 49
2.4 Nominalization via trilled movement affixation 50
2.5 2.5a UNDERSTAND (simple movement sign);
2.5b ACCOMPLISH-EASILY (complex movement sign) 53
2.6 Polymorphemic form 58
2.7 2.7a Handshape used in AIRPLANE with a thumb
specification; 2.7b Handshape used in MOCK with no thumb
specification 59
4.1 Reaction times for the two versions of the experiment 90
4.2 Reaction times for detection of handshapes in ASL signs 96
4.3 Word–picture interference and facilitation 99
4.4 Comparisons of sign–picture and word–picture
interference effects 101
4.5 Results from Experiment 1: Two-shared parameter condition 105
4.6 Results from Experiment 2: Single parameter condition 106
5.1 Levelt’s (1989: 9) model of language production 116
5.2 Picture story of the elicitation task 118
5.3 5.3a SEINE [Y-hand]; 5.3b ELTERN [Y-hand]; 5.3c correct:
SEINE [B-hand] 119
5.4 5.4a substitution: VA(TER); 5.4b conduite: SOHN;
5.4c target/correction: BUB 122
5.5 5.5a VATER [B-hand]; 5.5b slip: MOTHER [B-hand];
5.5c correct: MOTHER [G-hand] 123
5.6 5.6a MANN [forehead]; 5.6b slip: FRAU [forehead];
5.6c correct: FRAU [breast] 124
5.7 5.7a slip: HEIRAT/HOCHZEIT; 5.7b correction: HEIRAT;
5.7c correct: HOCHZEIT 126


5.8 A polymorphemic form in ASL (Brentari 1998:21) 130


6.1 6.1a The SEE 2 sign -ING; 6.1b The SEE 2 sign -MENT 149
6.2 The SEE 2 sign -S 150
6.3 Three forms of the ASL sign IMPROVE: 6.3a the citation
form; 6.3b the form inflected for continuative aspect;
6.3c a derived noun 152
6.4 The SEE 2 signs: 6.4a IMPROVING; 6.4b IMPROVEMENT 153
6.5 The SEE 2 sign KNOWING: 6.5a prior to assimilation;
6.5b after assimilation 155
7.1 Video stills of speaker telling the story of a cartoon he has
just watched 181
7.2 Illustration of (2) 183
7.3 Illustration of (3) 184
7.4 Spectrogram of English utterance with gestural intonation 192
7.5 Spectrogram of Chinese utterance with neutral intonation 193
7.6 Spectrogram of Chinese utterance with gestural intonation 194
8.1 8.1a 1855 LSF PARTIR (‘to leave’); 8.1b 1855 LSF FUTUR
(‘future’) (Brouland 1855); 8.1c 1913 ASL FUTURE
(McGregor, in 1997 Sign Media Inc.); 8.1d Modern ASL
FUTURE (Humphries et al. 1980) 204
8.2 On se tire (‘go’) (Wylie 1977) 206
8.3 8.3a 1855 LSF POUVOIR (Brouland 1855); 8.3b 1913 ASL
CAN (Hotchkiss in 1997 Sign Media Inc.); 8.3c Modern
ASL CAN (Humphries et al. 1980) 209
8.4 8.4a 1855 LSF IL-FAUT (‘it is necessary’) (Brouland 1855);
8.4b 1913 ASL OWE (Hotchkiss in 1997 Sign Media Inc.);
8.4c Modern ASL MUST (Humphries et al. 1980) 211
9.1 Decision tree for classification of sign tokens in corpus 227
10.1 ASL verb agreement: 10.1a ‘I ask her’; 10.1b ‘He asks me’ 246
10.2 Verb agreement template (after Sandler 1989) 247
11.1 Five-level conception of grammar 265
12.1 INDEXdet i 298
12.2 ‘That man eats rice’ 299
12.3 ‘Those men are reading’ 300
12.4 12.4a ONEdet/num ; 12.4b ONEnum 301
12.5 ‘A female stole a dog’ 302
12.6 12.6a–b ONEdet-path ; 12.6c PERSON 304
12.7 12.7a POSSdet i ; 12.7b POSSneu 306
12.8 ‘That dog is his’ 307
13.1 ASL signing space as used for pronominal reference 334
13.2 Continuum of referential specificity 345
14.1 ASK ‘You ask me’ in ASL 374

14.2 ‘You ask me’ in DGS, NS, and Auslan 375


14.3 An adaptation of Jackendoff’s (1992) model 387
14.4 Making the conceptualization of referents visible 389
14.5 Affixation in spoken languages 394
14.6 Readjustment in signed languages 395
15.1 Illustration of ASL descriptions of the location of a table
within a room 406
15.2 Illustration of a speaker: 15.2a Using shared space;
15.2b Using the addressee’s spatial viewpoint to indicate the
location of the box marked with an “X” 408
15.3 Map of the town (from Tversky and Taylor 1992) 413
15.4 Illustration: 15.4a of reversed space; 15.4b of mirrored
space; 15.4c of two examples of the use of shared space for
non-present referents 415
16.1 Assessments of comprehension across BSL grammar tests:
Christopher and mean comparator scores 432
16.2 ‘(He) asks (her)’ 433

16.3 ‘I like (her)’ 434
17.1 Total use of indexation by each subject 455
Tables

1.1 Non-effects of modality: Some shared properties between
signed and spoken languages page 2
1.2 Possible sources of modality effects on linguistic structure 6
1.3 Some properties of the articulators 7
1.4 Some properties of the sensory and perceptual systems
subserving sign vs. speech 10
1.5 Possible outcomes of studies of modality effects 13
2.1 Differences between vision and audition 37
2.2 Traditional “parameters” in sign language phonological
structure and one representative feature 38
2.3 Canonical word shape according to the number of syllables
and morphemes per word 57
3.1 Words: Irregular and rhythmic repetition as percentages of
all repetition 70
3.2 Signs: Irregular and rhythmic repetition as percentages of
all repetition 71
4.1 Instruction to subjects: “Press the button when you see a
‘1’ handshape” 95
5.1 DGS slip categories, cross-classified with affected entity 120
5.2 Frequency of phonological errors by parameter in ASL and
in DGS 127
5.3 Slip categories/affected entities for the German slip corpus 128
5.4 Locus of repair (number and percentage) in DGS vs. Dutch 136
9.1 Summary of similarly-articulated signs for the three
crosslinguistic studies 229
11.1 Negation: A comparative chart 284
13.1 English personal pronouns (nominative case) 331
13.2 Asheninca personal pronouns 331
13.3 Nagala personal pronouns 332
13.4 Nogogu personal pronouns 332
13.5 Aranda personal pronouns 333
13.6 ASL personal pronouns 335


13.7 Person distinctions across signed languages 339


13.8 Number distinctions across signed languages 340
13.9 Gender distinctions across signed languages 340
13.10 Possible analogously structured system of pronominal
reference 343
13.11 English demonstrative pronouns 346
13.12 Quechua demonstrative pronouns 347
13.13 Pangasinan demonstrative pronouns 347
13.14 Khasi demonstrative pronouns 348
13.15 Lak demonstrative base forms 348
13.16 Lak personal pronouns 349
13.17 Bella Bella third person pronouns 349
14.1 Null agreement system: Yoruba and Japanese 371
14.2 Weak agreement system: Brazilian Portuguese and English 372
14.3 Strong agreement system: Spanish and German 373
14.4 Verb classes according to the phonological manifestation
of agreement 376
14.5 Verb types according to whether they accept (in)animate
arguments 381
15.1 Properties associated with spatial formats in ASL 411
16.1 Christopher’s performance in five nonverbal (performance)
intelligence tests 425
16.2 Christopher’s performance in two face recognition tests 426
16.3 Test of identification of iconic vs. semi-iconic and
non-iconic signs 430
16.4 Use of negation markers across learning period: Types,
tokens, and ungrammatical use 433
17.1 Narrative and subject order for data collection 450
17.2 Signed narratives: Length (in seconds) and number of signs 451
17.3 Use of indexation in the narratives 454
17.4 The use of proper names realized by fingerspelling the name
of the character being referred to 456
17.5 The use of GIRL and SHE by DB2 457
17.6 English features in each narrative 458
Contributors

diane brentari is Professor and Director of the American Sign Language
Program at Purdue University, and works on comparative analyses of phonol-
ogy and morphology in signed and spoken languages. She is currently
investigating crosslinguistic variation among sign languages in the area
of morphophonemics. Her recent books include A prosodic analysis of sign
language phonology (1998) and Foreign vocabulary in sign languages
(2001).

rachel channon recently received her doctorate in linguistics at the Uni-
versity of Maryland at College Park. Her dissertation considers characteristics
of repetition, sequence and iconicity in sign languages and concludes that
simple signs must have a single segment structure.
david p. corina is an Associate Professor of Psychology at the University
of Washington in Seattle, WA. His research program investigates the neural
representation of human languages, focusing on Deaf users of signed lan-
guages, persons with temporal lobe epilepsy and aphasic populations. He
uses converging methodologies – including behavioral studies, single unit
recordings, cortical stimulation mapping, fMRI, and PET – in order to gain
insights into the neural architecture of language.

karen emmorey is a Senior Staff Scientist at the Salk Institute for Biolog-
ical Studies, La Jolla, CA. She studies the processes involved in how Deaf
people produce and comprehend sign language and how these processes are
represented in the brain. Her most recent book is titled Language, cognition,
and the brain: Insights from sign language research (2002).

anne-marie p. guerra currie is at STI Healthcare, Inc. in Austin,
Texas where she works on information retrieval and classification of medical
record texts. Her diverse research interests include information retrieval and
extraction, natural language processing, sociolinguistics, and sign language
research.


daniela happ is a Deaf research assistant at the University of Frankfurt,
Germany and works as a lecturer in German Sign Language (DGS) in the
interpreter training program and in the qualification program for Deaf sign
language teachers in Frankfurt. She has published curricular material for
teaching DGS.
ursula hildebrandt is a psychology doctoral student at the University
of Washington in Seattle, WA. She studies perception of American Sign
Language in deaf and hearing infants.
annette hohenberger is a research assistant at the University of
Frankfurt, Germany and works on German Sign Language (DGS), language
production, and language acquisition. Her recent book is Functional cat-
egories in language acquisition: Self-organization of a dynamical system
(2002).
terry janzen is an Assistant Professor of Linguistics at the University of
Manitoba in Winnipeg, Canada, and his research focuses on issues in Ameri-
can Sign Language (ASL) syntax, discourse structure, and grammaticization.
He has published recent articles on the properties of topic constituents in ASL,
and on the interaction of syntax and pragmatics.
helen leuninger is Professor of Linguistics at the University of Frankfurt,
Germany and works on the psycholinguistics and neurolinguistics of German
Sign Language (DGS). She is the author of various books and articles on slips
of the tongue and hand. In her current research project she investigates the
effect of modality on sign and spoken language production.
diane lillo-martin is Professor and Department Head at the University
of Connecticut, and Senior Research Scientist at Haskins Laboratories. Her
research interests include the structure and acquisition of American Sign
Language, particularly in the area of syntax, and crosslinguistic studies of
language acquisition.
gaurav mathur is currently a postdoctoral fellow at Haskins Laboratories
in New Haven, CT. He completed his doctoral dissertation in 2000 at MIT
on the phonological manifestation of verb agreement in signed languages.
His research interests include the interfaces among morphology, phonology,
and phonetics in signed languages.
richard p. meier is Professor of Linguistics and Psychology at the
University of Texas at Austin, where he also directs the American Sign
Language (ASL) program. Much of his research has examined the acquisi-
tion of ASL as a first language.

susan lloyd mcburney is a linguistics graduate student at the University
of Washington, Seattle, WA. She works on the morphology and syntax of
signed languages, the neurolinguistic processing of signed languages, and
the history of the discipline of sign language linguistics.
gary morgan is a Lecturer in Developmental Psychology at City University,
London. He has published on British Sign Language in the International
Journal of Bilingualism, the Journal of Child Language, and the Journal of
Sign Language and Linguistics.
cecile mckee is an Associate Professor at the University of Arizona, and
works on crosslinguistic comparisons of language structures (e.g. English,
Italian, American Sign Language), children’s processing mechanisms, and
developmental language impairments (e.g. Down syndrome). Her recent
publications include Methods for assessing children’s syntax (1996, with
D. McDaniel and H. S. Cairns).
arika okrent is completing a joint Ph.D. in the Departments of Linguistics
and Psychology at the University of Chicago.
roland pfau is an Assistant Professor at the University of Amsterdam,
the Netherlands where he teaches sign language linguistics. Besides doing
research on the morphosyntax and phonology of signed languages, he works
on language typology and the processing of morphosyntactic features in
language production.
david quinto-pozos recently received his Ph.D. in linguistics at the Uni-
versity of Texas at Austin. His dissertation examines the contact between
Mexican and American Sign Languages along the Texas–Mexico border. He
is also a certified interpreter and interpreter trainer. He now teaches in the
Department of Linguistics of the University of Pittsburgh.
christian rathmann is a doctoral student in linguistics at the University
of Texas at Austin. His research interests include the interface between syn-
tax, semantics, and pragmatics, comparative studies of signed languages, and
psycholinguistics.
barbara shaffer is an Assistant Professor of Linguistics at the University
of New Mexico. Her research and publications focus on markers of modality,
pragmatics, and the grammaticization of signed languages.

neil smith is Professor of Linguistics at University College London, where
he has been head of the linguistics section since 1972. His most recent books
are: Chomsky: Ideas and ideals (1999) and Language, bananas and bonobos
(2002).

samuel j. supalla is an Associate Professor in the Department of Special
Education, Rehabilitation, and School Psychology at the University of Ari-
zona. He co-founded the Laurent Clerc Elementary School, a charter school
in Tucson, AZ as part of a university–school affiliation effort. He supports
the creation of a working academic curriculum for deaf children. To this end,
he has carried out research on modality issues associated with language de-
velopment, especially in learning print English as a second language without
the support of sound.
felix y. b. sze is a research student at the University of Bristol, UK. Her
research interests include syntax as well as information packaging in Hong
Kong Sign Language.
gladys tang is an Associate Professor at the Chinese University of Hong
Kong, Hong Kong. She works on sign linguistics, language acquisition, and
applied linguistics. She is developing a project on sign language classifiers,
their internal structure and acquisition by deaf children of Hong Kong Sign
Language.
ianthi tsimpli is an Associate Professor at the English Department at the
Aristotle University of Thessaloniki and is also an Assistant Director of
Research at the Research Centre for English and Applied Linguistics at the
University of Cambridge, UK. Her research interests include first and second
language acquisition, language disorders, and formal syntax. Her book pub-
lications include The mind of a savant: Language learning and modularity
(1995, with Neil Smith) and The prefunctional stage of language acquisition:
A crosslinguistic study (1996).
keith walters is an Associate Professor of Linguistics, Anthropology, and
Middle Eastern Studies at the University of Texas at Austin, where he also
serves as Associate Director of the Center for Middle Eastern Studies. Much
of his research focuses on the sociolinguistics of North Africa–Arabic diglos-
sia, Arabic–French bilingualism, codeswitching, and language and education
in the USA.
bencie woll holds the Chair in Sign Language and Deaf Studies at City
University London, and she is involved in all aspects of sign language and
sign linguistic research. Recent publications include The linguistics of BSL:
An introduction (1999, with Rachel Sutton-Spence) and ‘Assessing British
Sign Language development’ (with Ros Herman and Sallie Holmes).
Acknowledgments

Few readers will be surprised to learn that this volume is the fruit of a confer-
ence. That conference – one of an annual series sponsored by the Texas Linguis-
tics Society – was held at the University of Texas at Austin on February 25–27,
2000; the topic was “The effects of modality on language and linguistic theory.”
It was, we believe, a very successful meeting, one marked by the high quality of
the papers and of the ensuing discussions. There are many people and organiza-
tions to whom we are indebted for their financial support of the conference and
for their hard work toward its realization. Here there are two sets of friends and
colleagues whom we especially want to thank: Adrianne Cheek, Heather Knapp,
and Christian Rathmann were our co-organizers of the conference. We owe a
particular debt to the interpreters who enabled effective conversation between
the Deaf and hearing conferees. The skill and dedication of these interpreters –
Kristen Schwall-Hoyt, Katie LaSalle, and Shirley Gerhardt – were a foundation
of the conference’s success.
This book brings together many of the papers from that conference. All are
now much updated and much revised. The quality of the revisions is due not
only to the hard work of the authors but also to the peer-review process. To every
extent possible, we obtained two reviews for each chapter, one from a scholar
who works on signed languages and one from a scholar who, while expert in
linguistics or psycholinguistics, works primarily on spoken languages. There
were two reasons for this: first we sought to make sure that the chapters would
be of the highest possible quality. And, second, we sought to ensure that the
chapters would be accessible to the widest possible audience of researchers in
linguistics and related fields.
To obtain these reviews, we abused many of our colleagues here at the
University of Texas at Austin, including Ralph Blight, Megan Crowhurst,
Lisa Green, Scott Myers, Carlota Smith, Steve Wechsler, and Tony Woodbury
from the Department of Linguistics and Randy Diehl, Cathy Echols, and Peter
MacNeilage from the Department of Psychology. We, and our authors, also
benefited from the substantive and insightful reviews provided by Diane Brentari
(Purdue University, West Lafayette, IN), Karen Emmorey (The Salk Insti-
tute, La Jolla, CA), Elisabeth Engberg-Pedersen (University of Copenhagen,
Denmark), Susan Fischer (National Technical Institute for the Deaf, Rochester,
NY), Harry van der Hulst (University of Connecticut), Manfred Krifka
(Humboldt University, Berlin, Germany), Cecile McKee (University of Arizona),
David McKee (Victoria University of Wellington, New Zealand), Irit Meir
(University of Haifa, Israel), Jill Morford (University of New Mexico), Carol
Neidle (Boston University), Carol Padden (University of California, San Diego),
Karen Petronio (Eastern Kentucky University), Claire Ramsey (University of
Nebraska), Wendy Sandler (University of Haifa, Israel), and Sherman Wilcox
(University of New Mexico). We thank all of these colleagues for the time that
they gave to this volume.
Christine Bartels, who at the outset was our acquisitions editor at Cambridge
University Press, shaped our thinking about how to put this book together.
We are greatly indebted to her. The Children’s Research Laboratory of the
Department of Psychology of the University of Texas at Austin provided the
physical infrastructure for our work on this book. During the preparation of this
book, David Quinto-Pozos was supported by a predoctoral fellowship from the
National Institutes of Health (F31 DC00352). Last – but certainly not least –
we thank the friends and spouses who have seen us through this process, in par-
ticular Madeline Sutherland-Meier and Mannie Quinto-Pozos. Their patience
and support have been unstinting.

richard p. meier, kearsy cormier,
and david quinto-pozos
Austin, Texas
1 Why different, why the same? Explaining effects
and non-effects of modality upon linguistic
structure in sign and speech

Richard P. Meier

1.1 Introduction
This is a book primarily about signed languages, but it is not a book targeted just
at the community of linguists and psycholinguists who specialize in research
on signed languages. It is instead a book in which data from signed languages
are recruited in pursuit of the goal of answering a fundamental question about
the nature of human language: what are the effects and non-effects of modality
upon linguistic structure? By modality, I and the other authors represented in
this book mean the mode – the means – by which language is produced and
perceived. As anyone familiar with recent linguistic research – or even with
popular culture – must know, there are at least two language modalities, the
auditory–vocal modality of spoken languages and the visual–gestural modality
of signed languages. Here I seek to provide a historical perspective on the issue
of language and modality, as well as to provide background for those who are
not especially familiar with the sign literature. I also suggest some sources of
modality effects and their potential consequences for the structure of language.

1.2 What’s the same?


Systematic research on the signed languages of the Deaf has a short history. In
1933, even as eminent a linguist as Leonard Bloomfield (1933:39) could write
with assurance that:
Some communities have a gesture language which upon occasion they use instead of
speech. Such gesture languages have been observed among the lower-class Neapolitans,
among Trappist monks (who have made a vow of silence), among the Indians of our
western plains (where tribes of different language met in commerce and war), and among
groups of deaf-mutes.
It seems certain that these gesture languages are merely developments of ordinary
gestures and that any and all complicated or not immediately intelligible gestures are
based on the conventions of ordinary speech.

Why Bloomfield was so certain that speech was the source of any and all
complexity in these gesture languages is unclear. Perhaps he was merely echoing
Edward Sapir (1921:21) or other linguists who had articulated much the same
views.
Later, Hockett (1960) enumerated a set of design features by which we can
distinguish human language from the communication systems of other animals
and from our own nonlinguistic communication systems. The first of those 13
design features – the one that he felt was “perhaps the most obvious” (p.89) –
is the vocal-auditory channel. Language, Hockett argued, is a phenomenon
restricted to speech and hearing. Thus, the early conclusion of linguistic research
was that there are profound differences between the oral–aural modality of
spoken languages and the visual–gestural modality of Bloomfield’s “gesture
languages.” On this view, those differences were such that human language
was only possible in the oral–aural modality.
However, the last 40 years of research – research that was started by William
Stokoe (1960; Stokoe, Casterline, and Croneberg 1965) and that was thrown
into high gear by Ursula Bellugi and Edward Klima (most notably, Klima and
Bellugi 1979) – has demonstrated that there are two modalities in which human
language may be produced. We now know that signed and spoken languages
share many properties. From this, we can safely identify many non-effects of
the modality in which language happens to be produced; see Table 1.1. Signed
and spoken languages share the property of having conventional vocabularies
in which there are learned pairings of form and meaning. Just as each speech
community has its own idiosyncratic pairings of sound form and meaning, so
does each sign community.

Table 1.1 Non-effects of modality: Some shared properties between signed
and spoken languages

• Conventional vocabularies: learned pairings of form and meaning.
• Duality of patterning: meaningful units built of meaningless sublexical units, whether units of
sound or of gesture:
– Slips of the tongue/Slips of the hand demonstrate the importance of sublexical units in adult
processing.
• Productivity: new vocabulary may be added to signed and spoken languages:
– Derivational morphology;
– Compounding;
– Borrowing.
• Syntactic Structure:
– Same parts of speech: nouns, verbs, and adjectives;
– Embedding to form relative and complement clauses;
– Trade-offs between word order and verb agreement in how grammatical relations are
marked: rich agreement licenses null arguments and freedom in word order.
• Acquisition: similar timetables for acquisition.
• Lateralization: aphasia data point to crucial role for left hemisphere.

In sign as in speech, meaningful units of form
are built of meaningless sublexical units, whether units of sound or units of
manual gesture; thus signed and spoken languages amply demonstrate duality
of patterning, another of Hockett’s design features of human language. Slips of
the tongue and slips of the hand show that in sign, as in speech, these sublexical
units of form are important in the adult’s planning of an utterance; the fact
that speech phonemes or sign handshapes can be anticipated, perseverated, or
switched independently of the word or sign to which they belong demonstrates
the “psychological reality” of such units (Fromkin 1973; Klima and Bellugi
1979). The chapter in this volume by Annette Hohenberger, Daniela Happ, and
Helen Leuninger provides the first crucial evidence that the kinds of slips of the
hand found in American Sign Language (ASL) by Klima and Bellugi are also
encountered in other sign languages, in this instance German Sign Language
(Deutsche Gebärdensprache or DGS). The kinds of online psycholinguistic
tasks that David Corina and Ursula Hildebrandt discuss in their chapter may
offer another window onto the psycholinguistic reality of phonological structure
in signed languages.
Like spoken languages, signed languages can expand their vocabularies
through derivational processes (Supalla and Newport 1978; Klima and Bellugi
1979), through compounding (Newport and Bellugi 1978; Klima and Bellugi
1979), and through borrowing (Padden 1998; Brentari 2001). Borrowings enter
the vocabulary of ASL through the fingerspelling system (Battison 1978) and,
recently, from foreign signed languages, which are a source of place names in
particular. In the fact that they add to their vocabularies through rule-governed
means and in the fact that novel messages may be expressed through the con-
strained combination of signs and phrases to form sentences, signed languages
are fully consistent with another of Hockett’s design features: productivity.
In the syntax of signed languages, we find evidence that signs belong to
the same “parts of speech” as in spoken languages. In ASL, consistent mor-
phological properties distinguish nouns such as CHAIR from semantically and
formationally related verbs, in this instance SIT (Supalla and Newport 1978).
ASL and other signed languages exhibit recursion; for example, sentence-like
structures (clauses) can be embedded within sign sentences (e.g. Padden 1983).
Word order is one means by which ASL and other signed languages distinguish
subject from object (Fischer 1975; Liddell 1980). An inflectional rule of verb
agreement means that the arguments of many verbs are marked through changes
in their movement path and/or hand orientation (Padden 1983, among others).1
As in such Romance languages as Spanish and Italian, there is a tradeoff between
word order and rich morphological marking of argument structure, the result
being that when arguments are signaled morphologically ASL exhibits “null
arguments,” that is, phonologically empty subjects and objects (Lillo-Martin
1991). As Diane Lillo-Martin reviews in her chapter, Brazilian Sign Language –
unlike ASL, perhaps – allows a further tradeoff, such that agreeing verbs sanc-
tion preverbal objects, whereas only SVO (subject – verb – object) order is
permitted with non-agreeing verbs (Quadros 1999).

1 For a recent critique of the analysis of this property of verbs as being a result of agreement,
see Liddell (2000), but also see Meier (2002) for arguments from child language development
suggesting that what has been called agreement in signed languages is properly viewed as a
linguistic rule.

Studies of the acquisition of ASL and other signed languages have revealed
strong evidence that signed languages are acquired on essentially the same
schedule as spoken languages (Newport and Meier 1985; Meier 1991; Petitto
and Marentette 1991). There is evidence of an optimal maturational period – a
critical period – for the acquisition of signed languages, just as there is for the
acquisition of spoken languages (Mayberry and Fischer 1989; Newport 1990).
In the processing of signed languages, as in the processing of spoken languages,
there is a crucial role for the left hemisphere (Poizner, Klima, and Bellugi 1987)
although there is ongoing controversy about whether there might be greater
right hemisphere involvement in the processing of signed languages than there
is in spoken languages (e.g., Neville, Bavelier, Corina, Rauschecker, Karni,
Lalwani, Braun, Clark, Jezzard, and Turner 1998; and for discussion of these
results, Corina, Neville, and Bavelier 1998; Hickok, Bellugi, and Klima 1998).
On the basis of results such as those outlined above, there were two conclu-
sions that many of us might have drawn in the early 1980s. One conclusion is
unassailable, but the other is more problematic:
Conclusion 1: The human language capacity is plastic: there are at least two modalities –
that is, transmission channels – available to it. This is true despite the fact that every
known community of hearing individuals has a spoken language as its primary language.
It is also true despite plausible claims that humans have evolved – at least in the form
of the human vocal tract – specifically to enable production of speech.

The finding that sign and speech are both vehicles for language is one of the
most crucial empirical discoveries of the last decades of research in any area of
linguistics. It is crucial because it alters our very definition of what language
is. No longer can we equate language with speech. We now know that funda-
mental design features of language – such as duality of patterning, discreteness,
and productivity – are not properties of a particular language modality. Instead
these design features are properties of human language in general: properties
presumably of whatever linguistic or cognitive capacities underlie human lan-
guage. Indeed, we would expect the same properties to be encountered in a
third modality – e.g. a tactile gestural modality – should natural languages be
indentified there.2
Conclusion 2: There are few or no structural differences between signed and spoken
languages. Sure, the phonetic features are different in sign and speech: speech does
2 In his contribution to this volume, David Quinto-Pozos discusses how deaf-blind signers use
ASL in the tactile–gestural modality.

not have handshapes and sign does not have a contrast between voiced and nonvoiced
segments, but otherwise everything is pretty much the same in the two major language
modalities. Except for those rules that refer specifically to articulatory features – or to
auditory or visual features – any rule of a signed language is also a possible rule of a
spoken language, and vice versa.
It is this second conclusion that warrants re-examination. The hypothesis that
there are few or no structural differences between sign and speech is the subject
of the remainder of this chapter. The fact that we know so much more now
about signed languages than we did when William Stokoe began this enterprise
in 1960 means that we can be secure in the understanding that discussion of
modality differences does not threaten the fundamental conclusion that signed
languages are indeed languages. The last 40 years of research have demon-
strated conclusively that there are two major types of naturally-evolved human
languages: signed and spoken.
Why should we be interested in whether specific aspects of linguistic structure
might be attributable to the particular properties of the transmission channel?
Exploration of modality differences holds out the hope that we may achieve a
kind of explanation that is rare in linguistics. Specifically, we may be able to
explore hypotheses that this or that property of signed or spoken language is
attributable to the particular constraints that affect that modality.

1.3 Why is it timely to revisit the issue of modality effects
on linguistic structure?
Several developments make this a good time to reassess the hypothesis that
there are few fundamental differences between signed and spoken languages.
First, our analyses of ASL – still the language that is the focus of most research
on signed languages – are increasingly detailed (see, for example, Brentari
1998; Neidle et al. 2000). Second, there are persistent suggestions of modality
differences in phonological and morphological structure, in the use of space, in
the pronominal systems of signed languages, and in the related system of verb
agreement.
It is a third development that is most crucial (Newport and Supalla 2000):
there is an ever-increasing body of work on a variety of signed languages other
than ASL. Even in this one volume, a range of signed languages is discussed:
Annette Hohenberger, Daniela Happ, and Helen Leuninger discuss an extensive
corpus of experimentally-collected slips of the hand in German Sign Language
(DGS). Roland Pfau analyzes the syntax of negation in that same language,
while Gladys Tang and Felix Y. B. Sze discuss the syntax of noun phrases
in Hong Kong Sign Language (HKSL). Anne-Marie P. Guerra Currie, Keith
Walters, and I compare basic vocabulary in four signed languages: Mexican,
French, Spanish, and Japanese. Christian Rathmann and Gaurav Mathur touch
on a variety of signed languages in their overview of verb agreement: not only
ASL, but also DGS, Australian Sign Language, and Japanese Sign Language
(Nihon Syuwa or NS). Gary Morgan and his colleagues discuss how Christo-
pher – a hearing language savant – learned aspects of British Sign Language
(BSL). Research on signed languages other than ASL means that discussion of
modality differences is not confounded by the possibility that our knowledge
of signed languages is largely limited to one language that might have many
idiosyncratic properties. Just as we would not want to make strong conclusions
about the nature of the human language capacity on the basis of analyses that
are restricted to English, we would not want to characterize all signed languages
just on the basis of ASL.

1.4 Why might signed and spoken languages differ?


Signed and spoken languages may differ because of the particular character-
istics of the modalities in which they are produced and perceived; see Table 1.2.
I mention three sets of ways in which the visual–gestural and oral–aural
modalities differ; these differences between the language modalities are po-
tential sources of linguistic differences between signed and spoken languages.
At this point in time, however, we have few conclusive demonstrations of any
such effects. In addition to those factors that pertain to specific properties of
the two language modalities, I mention a fourth possible source of differences
between signed and spoken languages: Signed and spoken languages may dif-
fer not only because of characteristics of their respective channels, but be-
cause of demographic and historical factors that suggest that sign languages
are, in general, rather young languages. Young languages may themselves
be distinctive. However, even here a property of the visual–gestural modality
may come into play: one resource for the development of signed languages
may be the nonlinguistic gestures that are also used in the visual–gestural
modality.

1.4.1 The articulators


I turn first to the differing properties of the articulators in sign and speech (cf.
Meier 1993). That the hands and arms are in many ways unlike the tongue,
mandible, lips, and velum surely comes as no surprise to anyone.3 Table 1.3
lists a number of ways in which the oral and manual articulators differ. The
oral articulators are small and largely hidden within the oral cavity; the fact
that only some of their movements are visible to the addressee accounts for
the failure of lipreading as a means of understanding speech. In contrast, the
manual articulators are relatively large. Moreover, the sign articulators are
paired; the production of many signs entails the co-ordinated action of the
two arms and hands. Yet despite the impressive differences between the oral
and manual articulators, their consequences for linguistic structure are far from
obvious. For example, consider the fact that the sound source for speech is
internal to the speaker, whereas the light source for the reflected light that
carries information about the signer’s message is external to that signer.4
This fact may limit the use of signed languages on moonless nights along
country roads, but may have no consequence for how signed languages are
structured.5

Table 1.2 Possible sources of modality effects on linguistic structure

1. Differing properties of the articulators
2. Differing properties of the perceptual systems
3. Greater potential of the visual–gestural system for iconic and/or indexic representation
4. The youth of signed languages and their roots in nonlinguistic gesture

Table 1.3 Some properties of the articulators

Sign | Speech
Light source external to signer | Sound source internal to speaker
Sign articulation not coupled (or loosely coupled) to respiration | Oral articulation tightly coupled to respiration
Sign articulators move in a transparent space | Oral articulators largely hidden
Sign articulators relatively massive | Oral articulators relatively small
Sign articulators paired | Oral articulators not paired
No predominant oscillator? | Mandible is predominant oscillator

3 The articulators in speech or sign seem so different that, when we find common properties of
sign and speech, we are tempted to think that they must be due to general, high-level proper-
ties of the human language capacity or perhaps to high-level properties of human cognition.
But a cautionary note is in order: there are commonalities in motoric organization across the
two modalities that mean that some similar properties of the form of sign and speech may be
attributable to shared properties of the very disparate looking motor systems by which speech
and sign are articulated (Meier 2000b). Here are two examples: (1) in infancy, repetitive, non-
linguistic movements of the hands and arms emerge at the same time as vocal babbling (Thelen
1979). This motoric factor may contribute to the apparent coincidence in timing of vocal and
manual babbling (Petitto and Marentette 1991; Meier and Willerman 1995). More generally, all
children appear to show some bias toward repetitive movement patterns. This may account for
certain facts of manual babbling, vocal babbling, early word formation, and early sign formation
(Meier, McGarvin, Zakia, and Willerman 1997; Meier, Mauk, Mirus, and Conlin 1998). (2) The
sign stream, like the speech stream, cannot be thought of as a series of beads on a string. Instead,
in both modalities, phonological units are subject to coarticulation, perhaps as a consequence
of principles such as economy of effort to which all human motor performance – linguistic or
not – is subject. Instrumented analyses of handshape production reveal extensive coarticulation
in the form of ASL handshapes, even in very simple sign strings (Cheek 2001; in press).
4 There are communication systems – both biological and artificial – in which the light source is
internal: the most familiar biological example is the lightning bug.

To date, the articulatory factor that has received the most attention in the
sign literature involves the relative size of the articulators in sign and speech.
In contrast to the oral articulators, the manual articulators are massive. Large
muscle groups are required to overcome inertia and to move the hands through
space, much larger muscles than those required to move the tongue tip. Not
surprisingly, the rate at which ASL signs are produced appears to be slower
than the rate at which English words are produced, although the rate at which
propositions are produced appears to be the same (Bellugi and Fischer 1972;
Klima and Bellugi 1979). How can this seeming paradox be resolved? Klima
and Bellugi (1979; see also Bellugi and Fischer 1972) argued that the slow
rate of sign production encourages the simultaneous layering of information
within the morphology of ASL; conversely, the slow rate of sign production
discourages the sequential affixation that is so prevalent in spoken languages.6
Consistent with this suggestion, when Deaf signers who were highly experi-
enced users of both ASL and Signing Exact English (SEE) were asked to sign
a story, the rate at which propositions were produced in SEE was much slower
than in ASL (a mean of 1.5 seconds per proposition in ASL, vs. 2.8 seconds
per proposition in SEE). In SEE, there are separate signs for the morphology of
English (including separate signs for English inflections, function words, and
derivational morphemes). In this instance an articulatory constraint may push
natural signed languages, such as ASL, in a particular typological direction,
that is, toward nonconcatenative morphology. The slow rate at which propo-
sitions are expressed in sign systems such as SEE that mirror the typological

organization of English may account for the fact that such systems have not
been widely adopted in the Deaf community.

5 Similarly, the use of spoken languages is limited in environments in which there are very high
levels of ambient noise, and in such environments – for example, sawmills – sign systems may
develop (Meissner and Philpott 1975).
6 Measurements of word/sign length are, of course, not direct measurements of the speed of oral
or manual articulators; nor are they measures of the duration of movement excursions. Some
years ago, at the urging of Ursula Bellugi, I compared the rate of word production in English and
Navaho. The hypothesis was that the rate of word production (words/minute) would be lower
in Navaho than in English, consistent with the fact that Navaho is a polysynthetic language
with an elaborate set of verbal prefixes. The results were consistent with this hypothesis. Wilbur
and Nolen (1986) attempted a measure of syllable duration in ASL. They equated movement
excursion with syllable, such that, in bidirectional signs and in reduplicated forms, syllable
boundaries were associated with changes in movement direction. On this computation, syllable
durations in sign were roughly comparable at 250 ms to measures of English syllable duration
that Wilbur and Nolen pulled from the phonetics literature. Note, however, that there is little
phonological contrast – and indeed little articulatory change – across many of the successive
“syllables” within signs; in a reduplicated or bidirectional form, the only change from one
syllable to the next would be in direction of path movement. See Rachel Channon’s contribution
to this volume (Chapter 3) for a discussion of repetition in signs.

The two language modalities may also differ in whether they make a single
predominant oscillator available for the production of language, as I discussed in
an earlier paper (Meier 2000b). Oscillatory movements underlie human action,
whether walking, chewing, breathing, talking, or signing. Although there are
several relatively independent oral articulators (e.g. the lips, the tongue tip, the
tongue dorsum, the velum, and the mandible), MacNeilage and Davis (1993;
also MacNeilage 1998) ascribe a unique status to one of those articulators.
They argue that oscillation of the mandible provides a “frame” around which
syllable production is organized. Repeated cycles of raising and lowering the
mandible yield a regular alternation between a relatively closed and relatively
open vocal tract. This articulatory cycle is perceived as an alternation between
consonants and vowels. Mandibular oscillation may also be developmentally
primary: MacNeilage and Davis argue that, except for the mandible, children
have little independent control over the speech articulators; cycles of raising and
lowering the mandible account for the simple consonant–vowel (CV) syllables
of vocal babbling.
When we observe individual ASL signs we see actions – sometimes repeated,
sometimes not – of many different articulators of the arm and hand. ASL signs
can have movement that is largely or completely restricted to virtually any joint
on the arm: The sign ANIMAL requires repeated in-and-out movements of
the shoulder. Production of the sign DAY entails the rotation of the arm at the
shoulder. The arm rotates toward the midline along its longitudinal axis. The
signs GOOD and GIVE (citation form) are articulated through the extension of
the arm at the elbow, whereas TREE involves the rotation of the forearm at the
radioulnar joint. YES involves the repeated flexion and extension of the wrist.
The movement of still other signs is localized at particular articulators within the
hand (e.g. TURTLE: repeated internal bending of the thumb; BIRD: repeated
bending of the first finger at the first knuckle; COLOR: repeated extension and
flexion of the four fingers at the first knuckle; BUG: repeated bending at the
second knuckle). Still other signs involve articulation at more than one joint;
for example, one form of GRANDMOTHER overlays repeated rotation of the
forearm on top of an outward movement excursion executed by extension of
the arm at the elbow. Facts such as these suggest that it will be hard to identify
a single, predominant oscillator in sign that is comparable to the mandibular
oscillation of speech. This further suggests that analysts of syllable structure
in sign may not be able to develop a simple articulatory model of syllable
production comparable to the one that appears possible for speech. On the view
suggested by MacNeilage and Davis’s model, speech production – but not sign
production – is constrained to fit within the frame imposed by a single articulator.
1.4.2 The sensory or perceptual systems

A second source of linguistic differences between signed and spoken languages
could lie in the differing properties of the sensory and perceptual systems
that subserve the understanding of sign and speech (again see Meier 1993
for further discussion, as well as Diane Brentari’s contribution to this book).
In Table 1.4, I list some pertinent differences between vision and audition.7

Table 1.4 Some properties of the sensory and perceptual systems subserving
sign vs. speech

Sign | Speech
Signer must be in view of addressee | Speaker need not be in view of addressee
High bandwidth of vision | Lower bandwidth of audition
High spatial resolution of vision; lower temporal resolution than audition | High temporal resolution of audition; lower spatial resolution than vision
Visual stimuli generally not categorically perceived | Categorical perception of speech (and of some highly dynamic nonspeech stimuli)
Articulatory gestures as the object of perception | Acoustic events as the object of perception

Specific claims about the relationship between these sensory/perceptual factors
and linguistic structure have hardly been developed. One instance where we
might make a specific proposal pertains to the greater bandwidth of the visual
channel: to get a feel for this, compare the transmission capacity needed for
regular telephone vs. a videophone. Greater bandwidth is required to trans-
mit an adequate videophone signal, as opposed to a signal that is adequate
for a spoken conversation on a standard telephone. The suggestion is that at
any instant in time more information is available to the eye than the ear, al-
though in both modalities only a fraction of that information is linguistically
relevant.
A more concrete statement of the issue comes from an important discussion of
the constraints under which spoken languages have evolved. Pinker and Bloom
(1990:713) observed that “[The vocal–auditory channel] is essentially a serial
interface . . . lacking the full two-dimensionality needed to convey graph or tree
structures and typographical devices such as fonts, subscripts, and brackets.

The basic tools of a coding scheme using such a channel are an inventory of
distinguishable symbols and their concatenation. Thus, grammars for spoken
languages must map propositional structures onto a serial channel . . .”

7 In an earlier article that addressed some of the same issues as discussed here (Meier 1993), I
listed categorical perception as a modality feature that may distinguish the perception of signed
and spoken languages. The results of early studies, in particular Newport (1982), suggested that
handshape and place distinctions in ASL were not categorically perceived, a result that indicated
to Newport that categorical perception might be a property of audition. Very recent studies
raise again the possibility that distinctions of handshape and of linguistic and nonlinguistic
facial expression may be categorically perceived (Campbell, Woll, Benson, and Wallace 1999;
McCullough, Emmorey, and Brentari 2000).

In her
chapter, Susan McBurney makes an interesting distinction between the modality
and the medium of a human language. For her, modality is the biological or phys-
ical system that subserves a given language; thus, for signed languages it is the
manual and visual systems that together make up the visual–gestural modality.
Crucially, she defines the medium “as the channel (or channels) through which
a language is conveyed. More specifically, channel refers to the dimensions of
space and time that are available to a given language.” Like Pinker and Bloom,
she considers the medium for speech to be fundamentally one-dimensional;
speech plays out over time. But sign languages are conveyed through a mul-
tidimensional medium: the articulatory and perceptual characteristics of the
visual–gestural modality give signed languages access to four dimensions of
space and time. The question then becomes: to what extent do signed languages
utilize space and what consequences does the use of space have for the nature
of linguistic structure in sign?

1.4.3 The potential of the visual–gestural modality for iconic
representation and for indexic/ostensive identification of referents
Visual representations – not just language, but also gesture and visual media
in general – seem to have greater access to iconicity than do auditory repre-
sentations: compare the rich possibilities for iconic portrayal in painting and
photography to the much more limited possibilities in music. Moreover the
visual–gestural modality has great capacity for indexic motivation: with ges-
tures an individual can point to the referents that he or she is discussing. Not
only do the possibilities for iconic and indexic motivation seem greater in the
visual–gestural modality of signed languages, but the kinds of notions that can
be encoded through non-arbitrary gestures may be more important and varied
than the kinds of notions that can be encoded in a non-arbitrary fashion in spo-
ken languages. In speech, imagistic words can represent the sounds of objects.
Sound symbolism may loosely be able to indicate the relative size of objects.
Order of mention may reflect the temporal sequence of events. Gesture can
likewise signify size and order, but it can also point to the locations of objects,
sketch their shapes, and describe their movements.
Goldin-Meadow and McNeill (1999:155) suggest that the manual and oral
modalities are equally good at what they call “segmented and combinatorial
encoding.” Consistent with this suggestion, signed and spoken languages share
fundamental aspects of linguistic structure. But Goldin-Meadow and McNeill
also suggest that, for “mimetic encoding,” the manual modality is a superior
vehicle to the oral modality. In spoken conversations, such mimetic encoding
is achieved through the nonlinguistic gestures that accompany speech. On their
view the oral modality – unlike the manual one – is constrained in that it is only
suitable for segmented, combinatorial, categorical encoding of information.
They conclude (p.166) that, in the evolution of human languages:

speech became the predominant medium of human language not because it is so well
suited to the segmented and combinatorial requirements of symbolic communication (the
manual modality is equally suited to the job), but rather because it is not particularly
good at capturing the mimetic components of human communication (a task at which
the manual modality excels).

1.4.4 The youth of sign languages and their roots in nonlinguistic gesture
As best we can tell, signed languages are young languages, with histories that
hardly extend beyond the mid-eighteenth century. With some effort we can trace
the history of ASL to seventeenth century Martha’s Vineyard (Groce 1985).
The youngest known signed language – Nicaraguan Sign Language – has a
history that extends back only to the late 1970s (Kegl, Senghas, and Coppola
1999; Polich 2000). We also know of one class of young spoken languages –
specifically, the creole languages – and, importantly, these languages tend to be
very uniform in structure (Bickerton 1984).
The demographics of Deaf communities mean that children may have been,
and may continue to be, key contributors to the structure of signed languages.
Few deaf children have native signing models. Only third-generation deaf
children – in other words, those with a deaf grandparent – have at least one
native-signing parent. The fact that most deaf children do not have native-
signing models in the home – indeed the preponderance of deaf children (specif-
ically, the 90 percent of deaf children who are born to hearing parents) do not
even have fluent models in the home – may mean that deaf children have freer
rein to use linguistic forms that reflect their own biases, as opposed to the con-
ventions of an established linguistic community. The biases of different deaf
children are likely to have much in common. That deaf children can create
linguistic structure has been shown in a variety of situations:
• in the innovated syntax of the “home sign” systems developed by deaf children
born to nonsigning, hearing parents (Goldin-Meadow and Feldman 1977;
Goldin-Meadow and Mylander 1990);
• in the acquisition of ASL by a deaf child who had input only from deaf
parents who were late – and quite imperfect – learners of ASL (Singleton and
Newport, in press);
• in the innovated use of spatial modification of verbs (“verb agreement”) by
deaf children exposed only to Signing Exact English with its thoroughly
nonspatial syntax (Supalla 1991); and
• in the apparent creation of Nicaraguan Sign Language since the late 1970s
(Kegl et al. 1999).
Young spoken and signed languages need not be structured identically, given
the differing “substrates” and “superstrates” that contributed to them and the
differing constraints upon the oral–aural and visual–gestural modalities. For
young spoken languages – that is, for creole languages – the preponderance
of the vocabulary derived from the vocabulary of whatever the dominant (or
“superstrate”) language was in the society in which the creole arose; so, French
Creoles such as Haitian drew largely from the vocabulary of French. But signed
languages could draw from rather different resources: one source may have
been the gestures that deaf children and their families sometimes innovate in
the creation of home sign systems. Other contributors to the vocabularies of
signed languages may have been the gestures that are in general use among the
deaf and hearing populations; in their chapter, Terry Janzen and Barbara Shaffer
trace the etymology of certain modal signs in ASL and in French Sign Language
(Langue des Signes Française or LSF) back to nonlinguistic gesture. Because
many gestures – whether they be the gestures of young deaf home signers or the
gestures of hearing adults – are somehow motivated in their form, these gestures
may exhibit some internal form–meaning associations. It seems possible that
such latent regularities may be codified and systematized by children, yielding
elaborate sign-internal morphology of a sort that we would not expect within
the words of a spoken creole (Meier 1984).

1.5 What are possible linguistic outcomes of these modality differences? What, if anything, differs between signed and spoken languages?
In Table 1.5, I list five types of linguistic outcomes that may arise as conse-
quences of the modality differences listed in Table 1.2. Let us look at the first
of these possible outcomes.

Table 1.5 Possible outcomes of studies of modality effects

1. Not much: Signed and spoken languages share the same linguistic properties. Obviously the
distinctive features of sign and speech are very different, but there are no interesting structural
differences.
2. Statistical tendencies: One modality has more instances of some linguistic feature than the
other modality.
3. Preferred typological properties differ between the modalities.
4. Rules or typological patterns that are unique to a particular modality.
5. Relative uniformity of signed languages vs. relative diversity of spoken languages.

1.5.1 Not much


There are different sets of distinctive features available to signed and spoken
languages, but otherwise everything could be pretty much the same. I have al-
ready asserted that this finding is true of the basic architecture of signed and
spoken languages. It may also be true generally of certain areas of linguistic
structure. It is not easy to identify factors that would lead to systematic dif-
ferences between signed and spoken languages in syntax and semantics, in
what categories are encoded in grammatical morphologies, in how the scope of
quantifiers is determined, and so on.
Demonstrations that sign and speech share a particular linguistic property
will remain important: they show that the existence of a given property in, say,
speech is not attributable to the peculiar properties of the oral–aural modality.
For example, we might think that iconic signs would be represented in the
mental lexicon in terms of their global, imagistic properties; on this view, the
representation of lexical items in terms of meaningless, sublexical units of form
would be reserved for arbitrary words (and, perhaps, signs) in which the overall
shape of the lexical item is of no matter. The abundant evidence for sublexical
structure in speech might then be seen as a consequence of the fact that speech
is so poor at iconic representation. But it turns out that iconic signs also have
sublexical structure. For example, slips of the hand can disrupt the iconicity
of signs. Klima and Bellugi (1979:130) cite an example in which a signer
attempted to produce the sentence RECENT EAT FINISH ‘(I) recently ate.’ In
the slip, the signer switched the places of articulation of RECENT and EAT, so
that RECENT was made at the mouth (instead of at the cheek) and EAT was
made at the cheek (instead of at the mouth). The error disrupts the iconicity of
EAT whose target place of articulation is motivated by the fact that the sign is an
icon of the act of putting food in one’s mouth. Evidence from studies of short-
term memory likewise suggests that signers who had been asked to memorize
lists of signs represented those signs in terms of their sublexical structure,
not in terms of their global iconic qualities (Klima and Bellugi 1979). The way
in which these signers represented signs in memory closely parallels the ways
in which hearing individuals represent the words of a spoken language. In sum,
duality of patterning in speech is not a consequence of the fact that speech
is poor at iconic representation. Duality of patterning characterizes words and
signs, whether arbitrary or iconic.
In her contribution to this volume, Diane Lillo-Martin (Chapter 10) notes
that in the generative tradition the autonomy of syntax has long been assumed.
On this hypothesis, syntactic rules of natural languages do not refer to phono-
logical categories or structures, and conversely phonological rules do not refer
to syntactic categories or structures. Thus, in signed and in spoken languages,
the syntax should be blind to the kinds of modality-specific properties that are
encoded by the distinctive features of phonetics; we should find no modality
effects on syntactic structure (or indeed semantic structure). Lillo-Martin sees
one potential class of exceptions to this generalization in the stylistic reorder-
ing rules that may apply in the interface between syntax and phonology. More
generally, it is at the articulatory–perceptual interface where the vocabulary of
linguistic rules is modality specific. Mapping phonology to articulation requires
reference to voicing or to circular movements. Here modality effects of a sort
may be numerous, but such effects may reflect nothing more than the defining
properties of the two modalities (i.e. one modality makes the hands available
for language, the other makes the mouth available).

1.5.2 Statistical tendencies


Statistical tendencies can lead to important conclusions about the nature of lan-
guage. Let us consider Saussure’s (1916/1959) assertion that linguistic symbols
are fundamentally arbitrary. Following Saussure’s lead, Hockett (1960) listed
arbitrariness as one of the design features of language. Thus, English words like
dog or Spanish words like perro do not look or sound like their referents. But
iconic signs seem to be much more frequent than iconic words and they seem
to occupy comparatively central places in the lexicons of signed languages. In
contrast, onomatopoetic words occupy a rather marginal place in the vocabulary
of a language like English. Why is there this difference between signed and spo-
ken languages in the frequency of iconic lexical items? As already suggested
above, the oral–aural modality seems to have very limited possibilities for the
iconic representation of meaning. Here the speech modality is impoverished. In
contrast, the visual–gestural modality grants signed languages the possibility
of having many relatively iconic signs.
Thus, the iconicity of many signs and of some words suggests that
the human language capacity is not unduly troubled by iconicity; it does not
demand that all words and signs be strictly arbitrary. Instead what is key in both
speech and sign is that form–meaning pairings are conventionalized. That is,
such pairings are specific to a particular language community and are learned by
children reared in those communities. The frequency of iconic signs in signed
languages leads me to the conclusion that there are in fact two pertinent design
requirements on linguistic vocabularies:
1. Languages have vocabularies in which form and meaning are linked by
convention.
2. Languages must allow arbitrary symbols; if they did not, they could not
readily encode abstract concepts, or indeed any concept that is not
imageable.
We know, of course, that ASL has many arbitrary signs, including signs such
as MOTHER or CURIOUS or FALSE.
16 Richard P. Meier

Note that this statistical difference between sign and speech in the frequency
of iconic lexical items may indeed be a consequence of differences in the oral–
aural and visual–gestural modalities. Yet this difference may have few or no
consequences for the grammar of signed and spoken languages. And, thus, the
linguist could continue to believe a variant of Outcome 1: specifically, that,
with regard to grammar, not much
differs across the two modalities. Even so, there could be consequences for
acquisition, but I do not think that there are (for reviews, see Newport and
Meier 1985; Meier 1991). Or there could be consequences for the creation of
new languages. And, indeed, there may be. For example, the greater resources
for iconic representation in the visual–gestural modality allow deaf children of
hearing parents to innovate gestures – “home signs” – that can be understood
by their parents or other interlocutors (Goldin-Meadow and Feldman 1977).
This may jump-start the creation of new signed languages.8

8 Having said this, there is at least anecdotal evidence (discussed in Meier 1982) that deaf children
of hearing parents are not limited by the iconicity of their home signs. For example, Feldman
(1975) reports that one deaf child’s home sign for ice cream resembled the action of licking an
ice cream cone. Early on, the gesture was used only in contexts that matched this image. But,
with development, the child extended the gesture to other contexts. So, this same gesture was
used to refer to ice cream that was eaten from a bowl.

1.5.3 Preferred typological properties differ between signed and spoken languages
Klima and Bellugi (1979) argued that the relatively slow rate of manual ar-
ticulation may push sign languages in the direction of simultaneous, tiered,
nonconcatenative morphology. In contrast, affixal morphology is the norm in
spoken languages, although the Semitic languages in particular have abundant
nonconcatenative morphology. ASL and other signed languages make great
use of patterns of repetition, of changes in rhythm, of “doubling” of the hands
(i.e. making a normally one-handed sign two-handed), and of displacement of
signs in space to mark temporal and distributive aspect, derived nouns or ad-
jectives, and subject and object agreement. In contrast, prefixes and suffixes
are rare in signed languages (Aronoff, Meir, and Sandler 2000), although ASL
has an agentive suffix (among a small set of possible affixes) and Israeli Sign
Language appears to have a derivational prefix. Thus, the difference between
signed and spoken languages appears to be this: signed languages generally opt
for nonconcatenative morphology, but make occasional use of sequential af-
fixes. Spoken languages generally opt for concatenative morphology, but make
limited use of nonconcatenative morphology.
Developmental evidence suggests that children acquiring signed languages
prefer nonconcatenative morphology, as discussed by Samuel J. Supalla and
Cecile McKee in their contribution to this volume (Chapter 6). Many deaf chil-
dren in the USA are exposed to some form of Manually Coded English (MCE) as
part of their school curriculum. Supalla (1991) examined the signing of a group
of children who had been exposed to Signing Exact English (SEE 2), one of the
MCE systems currently in use in the schools. This artificial sign system follows
the grammar of English. Accordingly, SEE 2 does not use the spatial devices
characteristic of ASL and other natural signed languages, but does have separate
signs for each of the inflectional affixes of English. Thus, in SEE 2, verb agree-
ment is signaled by a semi-independent sign that employs the S handshape (i.e. a
fist) and that has the distribution of the third-person singular suffix of spoken
English. Supalla’s subjects were deaf fourth- and fifth-graders (ages 9–11), all
of whom came from hearing families and none of whom had any ASL exposure.
The SEE 2 exposed children neglected to use the affixal agreement sign that had
been modeled in their classrooms; instead they innovated the use of directional
modifications of verbs, despite the fact that their input contained little such mod-
ification.9 Through such directional modifications, many verbs in conventional
sign languages such as ASL – and in the innovative uses of the SEE 2 exposed
children – move from a location in space associated with the subject to a loca-
tion associated with the object. No affixes mark subject and object agreement;
instead an overall change in the movement path of the verb signals agreement.10

1.5.4 Rules or typological patterns that are unique to signed or spoken languages
Identifying grammatical rules or typological patterns that are unique to sign or
speech presents clear methodological problems. Rules that have been identified
only in spoken languages may be of little interest because there are so many
more spoken languages than signed languages. Therefore, our failure to identify
a given property (say, ergative case) in signed languages could be a reflection
merely of sampling problems. Alternatively, some “exotic” patterns exemplified
in spoken languages may never occur in young languages, whether spoken or
signed. If so, age may bring more rule types to signed languages. But testing
this hypothesis is going to be difficult.
9 In their chapter, Supalla and McKee (Chapter 6) raise problems for any account that would
look solely to articulatory factors (including rate of production) in order to explain the tendency
toward nonconcatenative morphology in signed languages. These authors suggest certain perceptual
and grammatical factors that may explain the difficulties that children have in acquiring English
inflectional morphology as encoded in SEE 2. Specifically, they argue that when these forms
are affixed to ASL signs, constraints on the wellformedness of signs are violated. Further,
because these suffixal signs are so sign-like, children may not identify the stem and the suffix
as constituting a single sign, thereby leading to errors in the segmentation of the sign stream.
10 Crucially, children’s innovative use of directional verbs is not identical to the forms that are sanc-
tioned in conventional signed languages such as ASL or French Sign Language. For discussion
of this, see Meier 2002.

What about rules or patterns that are unique to signed languages? Such rules
or patterns are perhaps most likely to be found in pronominal/agreement systems
and in spatial descriptions where the resources available to signed languages
are very different than in speech. Here are three candidates:
• The signed languages examined to date distinguish first and nonfirst person –
and ASL has lexical first-person plural signs WE and OUR – but may have no
grammatical distinction between second and third person, whereas all spoken
languages distinguish first, second, and third persons (Meier 1990). Spatial
distinctions – not person ones – allow reference to addressees to be distin-
guished from reference to non-addressed participants. This characterization of
the pronominal system of ASL has gained wide acceptance (see, for example,
Neidle et al. 2000, as well as the chapters in this volume by Diane Lillo-Martin
[Chapter 10] and by Christian Rathmann and Gaurav Mathur [Chapter 14])
and has also been adopted in the analysis of signed languages other than ASL:
for example, Danish Sign Language (Engberg-Pedersen 1993) and Taiwanese
Sign Language (Smith 1990). However, this is a negative claim about signed
languages: specifically that signed languages lack a grammatical distinction
that is ubiquitous in spoken languages.11
• Signed languages favor object agreement over subject agreement, unlike spo-
ken languages. For verbs that show agreement, object agreement is obliga-
tory, whereas subject agreement is optional.12 Acceptance of this apparent
difference between signed and spoken languages depends on resolution of
the now raging debate as to the status of verb agreement in signed languages.
Is it properly viewed as a strictly gestural system (Liddell 2000), or is it a
linguistically-constrained system, as argued in the chapters in this volume
by Diane Lillo-Martin (Chapter 10) and by Christian Rathmann and Gaurav
Mathur (Chapter 14; see also Meier 2002)?
• Instead of the kinds of spatial markers that are familiar in spoken languages
(e.g. prepositions such as in, on, or under in English), signed languages always
seem to use the signing space to represent the space being described. This is
the topic of Karen Emmorey’s contribution to this volume (Chapter 15).

11 Acceptance of the first–nonfirst analysis of person in ASL and other signed languages is by no
means universal. Liddell (2000) and McBurney (this volume, Chapter 13) have each argued for
an analysis of sign pronominal systems that makes no person distinctions.
12 However, Engberg-Pedersen (1993) cites the work of Edward Keenan to the effect that there are
a couple of known spoken languages that show object but not subject agreement.

1.5.5 Relative uniformity of signed languages vs. relative diversity of spoken languages
In general, sign languages may not exhibit unique linguistic rules, but may
display a more limited range of variation than is true of spoken languages. This
hypothesis was advanced most prominently by Newport and Supalla (2000).
The general picture that has emerged from recent research on a variety of signed
languages is that signed languages use word order and verb agreement to distin-
guish the arguments of verbs. For a variety of signed languages, three classes of
verbs have been distinguished: plain, agreeing, and spatial. This proposal was
first made for ASL (Padden 1983). Spatial verbs agree with locative arguments,
whereas agreeing verbs agree with the direct or indirect object (depending
on the verb in question) and with the subject. Agreeing verbs may show ei-
ther single or double agreement; singly-agreeing verbs show object agreement.
For agreeing verbs, subject agreement appears to be optional, whereas object
agreement is obligatory (Meier 1982; Padden 1983). This basic description of
verb agreement has been extended to a variety of other signed languages in-
cluding British (Sutton-Spence and Woll 1998), French (Moody 1983), Israeli
(Meir 1998), and Danish (Engberg-Pedersen 1993) Sign Languages, as well
as the Sign Language of the Netherlands (Bos 1990). In general, signed lan-
guages have been described as topic–comment languages. Topic structures, as
well as verb agreement, license null arguments (Lillo-Martin 1991). Signed
languages have grammaticalized facial expressions that distinguish important
sentence types: for example, declaratives, yes–no questions, wh-questions, and
conditionals (although different signed languages may assign different facial
expressions to a particular linguistic function; cf. Kegl, Senghas, and Coppola
1999). In their morphological structure, signed languages tend to use patterns
of repeated movement (loosely, reduplication) to mark temporal aspect. Verb
agreement is signaled by the movement of verbs with respect to locations in
space that are associated with subject and object. Within verbs of movement and
location, so-called classifier handshapes identify referents as belonging to the
class of humans, or small animals, or flat, flexible objects, or vehicles, among
others (see Emmorey, in press).
Of course, signed languages also differ. Most obviously they do so in their
vocabularies; the distinct vocabularies of American Sign Language and British
Sign Language render those languages mutually unintelligible. In their phono-
logical structures, signed languages differ in their inventories of contrastive
phonological elements, perhaps particularly so in handshape inventories (e.g.
Woodward 1982). ASL and Chinese Sign Language have been shown to differ
in constraints on how the two hands interact, such that an F-hand sign in ASL
cannot contact the nondominant hand at the tips of the extended fingers, but can
do so where the thumb and first finger meet. The opposite is apparently true in
Chinese Sign Language (Klima and Bellugi 1979). In syntax, the most interest-
ing known difference amongst sign languages lies in whether or not they have
auxiliary-like elements; some signed languages – but not ASL – have auxiliaries
that carry agreement when the main verb is a plain verb (i.e. a non-agreeing
verb). Among those signed languages are Taiwanese (Smith 1990), Brazilian
(Quadros 1999), and German (Rathmann 2000). Signed languages also vary in
their predominant word order; some, like ASL, are predominantly SVO, whereas
others – including Japanese Sign Language – are SOV (subject – object – verb).
And, as Roland Pfau demonstrates in his chapter (Chapter 11), the grammar of
negation varies across signed languages.
However, as Newport and Supalla (2000) have observed, the variation that
we encounter in signed languages seems much more limited than the variation
found in spoken languages. Spoken languages may be tonal, or not. Spoken lan-
guages may be nominative/accusative languages or they may be ergative. They
may have very limited word-internal morphology or they may have the elabo-
rate morphology of a polysynthetic language. And some spoken languages have
elaborate systems of case morphology that permit great freedom of word order,
whereas others have little or no such morphology. Why is variation apparently
so limited in signed languages? The distinctive properties of the visual–gestural
modality may be a contributor. But, as mentioned before, the limited variation
in signed languages could be less a product of the visual–gestural modality than
of the youth of the languages that are produced and perceived in that modality.

1.6 Conclusion
What I have sketched here is basically a classification of potential causes and
potential effects. It is not a theory by any means. The chapters that follow
allow us to jettison this meager start in favor of something much meatier: rich
empirical results placed within much richer theoretical frameworks.
But even from this brief review, we have seen, for example, that recent
research on a range of signed languages has led to the surprising suggestion that
signed and spoken languages exhibit distinct patterns of variation (Newport and
Supalla 2000). Although signed languages differ in their vocabularies, in word
order, in the presence of auxiliary-like elements, and in other ways, they seem on
the whole to be much less diverse typologically than are spoken languages. The
relative uniformity of signed languages, in contrast to the typological diversity
of spoken languages, may be due to the differing resources available to sign
and speech and the differing perceptual and articulatory constraints imposed
by the visual–gestural and oral–aural modalities. The apparent fact that signed
languages are young languages may also contribute to their uniformity.
The suggestion that signed languages are less diverse than spoken ones is
a fundamental hypothesis about the factors that determine what structures
are available to individual human languages. Yet this hypothesis has hardly
been tested. Doing so will demand that we examine a large sample of signed
languages. But just like many spoken languages, the very existence of some
signed languages is threatened (Meier 2000a). The pressures of educational
policy, of more prestigious spoken and signed languages, and of the ease of
communication across once-formidable barriers mean that many signed lan-
guages may disappear before we have the faintest understanding of how much
signed languages may vary. For example, the indigenous signed languages of
Southeast Asia are apparently dying out and being replaced by signed languages
substantially influenced by ASL or by LSF (Woodward 2000). And, even seem-
ingly secure languages such as ASL face threats from well-intentioned medical
and educational practices. Linguists will be able to map the extent of linguistic
diversity in signed languages only if these languages can flourish for generations
to come.

Acknowledgments
In thinking and writing about the issues discussed here, I owe a particular debt to
Elissa Newport and Ted Supalla’s recent chapter (Newport and Supalla 2000) in
which they raise many of these same issues. I also thank Wendy Sandler,
Gaurav Mathur, and Christian Rathmann for their thoughtful comments on a
draft of this chapter.

1.7 References
Aronoff, Mark, Irit Meir, and Wendy Sandler. 2000. Universal and particular aspects
of sign language morphology. Unpublished manuscript, State University of New
York at Stony Brook, NY.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bellugi, Ursula and Susan Fischer. 1972. A comparison of sign language and spoken
language. Cognition 1:173–200.
Bickerton, Derek. 1984. The language bioprogram hypothesis. Behavioral and Brain
Sciences 7:173–221.
Bloomfield, Leonard. 1933. Language. New York: Holt, Rinehart and Winston.
Bos, Heleen F. 1990. Person and location marking in SLN: Some implications of a
spatially expressed syntactic system. In Current trends in European sign language
research, ed. Siegmund Prillwitz and Tomas Vollhaber, 231–248. Hamburg: Signum.
Brentari, Diane, ed. 2001. Foreign vocabulary in sign languages: A cross-linguistic
investigation of word formation. Mahwah, NJ: Lawrence Erlbaum Associates.
Campbell, Ruth, Bencie Woll, Philip J. Benson, and Simon B. Wallace. 1999. Categorical
perception of face actions: Their role in sign language and in communicative facial
displays. Quarterly Journal of Experimental Psychology 52:67–95.
Cheek, Adrianne. 2001. The phonetics and phonology of handshape in American Sign
Language. Doctoral dissertation, University of Texas at Austin, TX.
Cheek, Adrianne. In press. Synchronic handshape variation in ASL: Evidence of coar-
ticulation. Northeastern Conference on Linguistics (NELS) 31 Proceedings.
Corina, David P., Helen J. Neville, and Daphne Bavelier. 1998. Response from Corina,
Neville and Bavelier. Trends in Cognitive Sciences 2:468–470.
Emmorey, Karen, ed. In press. Perspectives on classifier constructions. Mahwah, NJ:
Lawrence Erlbaum Associates.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum.
Feldman, Heidi. 1975. The development of a lexicon by deaf children of hearing parents
or, There’s more to language than meets the ear. Doctoral dissertation, University
of Pennsylvania, PA.
Fischer, Susan. 1975. Influences on word order change in American Sign Language. In
Word order and word order change, ed. Charles N. Li, 1–25. Austin, TX: University
of Texas Press.
Fromkin, Victoria A. 1973. Introduction. In Speech errors as linguistic evidence, ed.
Victoria A. Fromkin, 11–45. The Hague: Mouton.
Goldin-Meadow, Susan and Heidi Feldman. 1977. The development of language-like
communication without a language model. Science 197:401–403.
Goldin-Meadow, Susan and David McNeill. 1999. The role of gesture and mimetic
representation in making language the province of speech. In The descent of mind:
Psychological perspectives on hominid evolution, ed. Michael C. Corballis and
Stephen E. G. Lea, 155–172. Oxford: Oxford University Press.
Goldin-Meadow, Susan and Carolyn Mylander. 1990. Beyond the input given: The
child’s role in the acquisition of language. Language 66:323–355.
Groce, Nora E. 1985. Everyone here spoke sign language: Hereditary deafness on
Martha’s Vineyard. Cambridge, MA: Harvard University Press.
Hickok, Gregory, Ursula Bellugi, and Edward S. Klima. 1998. What’s right about the
neural organization of sign language? A perspective on recent neuroimaging results.
Trends in Cognitive Sciences 2:465–468.
Hockett, Charles. 1960. The origin of speech. Scientific American 203:88–96.
Kegl, Judy, Ann Senghas, and Marie Coppola. 1999. Creation through contact: Sign
language emergence and sign language change in Nicaragua. In Language cre-
ation and language change, ed. Michel DeGraff, 179–237. Cambridge, MA:
MIT Press.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. 2000. Indicating verbs and pronouns: pointing away from agreement. In
The Signs of Language Revisited, ed. Karen Emmorey and Harlan Lane, 303–320.
Mahwah, NJ: Lawrence Erlbaum Associates.
Lillo-Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting
the null argument parameters. Dordrecht: Kluwer.
MacNeilage, Peter F. and Barbara L. Davis. 1993. Motor explanations of babbling and
early speech patterns. In Developmental neurocognition: Speech and face process-
ing in the first year of life, ed. B. de Boysson-Bardies, S. de Schonen, P. Jusczyk,
P. F. MacNeilage, and J. Morton, 341–352. Dordrecht: Kluwer.
MacNeilage, Peter F. 1998. The frame/content theory of evolution of speech production.
Behavioral and Brain Sciences 21:499–546.
Mayberry, Rachel and Susan D. Fischer. 1989. Looking through phonological shape to
lexical meaning: The bottleneck of non-native sign language processing. Memory
and Cognition 17:740–754.
McCullough, Stephen, Karen Emmorey, and Diane Brentari. 2000. Categorical
perception in American Sign Language. Linguistic Society of America, January,
Chicago, IL.
Meier, Richard P. 1982. Icons, analogues, and morphemes: The acquisition of verb
agreement in ASL. Doctoral dissertation, University of California, San Diego, CA.
Meier, Richard P. 1984. Sign as creole. Behavioral and Brain Sciences 7:201–202.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical
issues in sign language research. Vol. 1: Linguistics, ed. Susan D. Fischer and
Patricia Siple, 175–190. Chicago, IL: University of Chicago Press.
Meier, Richard P. 1991. Language acquisition by deaf children. American Scientist
79:60–70.
Meier, Richard P. 1993. A psycholinguistic perspective on phonological segmentation
in sign and speech. In Phonetics and Phonology. Vol. 3: Current Issues in American
Sign Language Phonology, ed. Geoffrey R. Coulter, 169–188. San Diego, CA:
Academic Press.
Meier, Richard P. 2000a. Diminishing diversity of signed languages. Science 288:1965.
Meier, Richard P. 2000b. Shared motoric factors in the acquisition of sign and speech. In
The Signs of Language Revisited, ed. Karen Emmorey and Harlan Lane, 331–354.
Mahwah, NJ: Lawrence Erlbaum Associates.
Meier, Richard P. 2002. The acquisition of verb agreement: Pointing out arguments for
the linguistic status of agreement in signed languages. In Current developments
in the study of signed language acquisition, ed. Gary Morgan and Bencie Woll.
Amsterdam: John Benjamins.
Meier, Richard P., Claude Mauk, Gene R. Mirus, and Kimberly E. Conlin. 1998. Mo-
toric constraints on early sign acquisition. In Proceedings of the Child Language
Research Forum, Vol. 29, ed. Eve Clark, 63–72. Stanford, CA: CSLI Press.
Meier, Richard P., Lynn McGarvin, Renée A. E. Zakia, and Raquel Willerman. 1997.
Silent mandibular oscillations in vocal babbling. Phonetica 54:153–171.
Meier, Richard P. and Raquel Willerman. 1995. Prelinguistic gesture in deaf and hearing
children. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly,
391–409. Mahwah, NJ: Lawrence Erlbaum Associates.
Meir, Irit. 1998. Syntactic–semantic interaction of Israeli Sign Language verbs: The
case of backward verbs. Sign Language and Linguistics 1:3–37.
Meissner, Martin and Stuart B. Philpott. 1975. The sign language of sawmill workers
in British Columbia. Sign Language Studies 9:291–308.
Moody, Bill. 1983. La langue des signes, Vol. 1: Histoire et grammaire. Paris: Ellipses.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G.
Lee. 2000. The syntax of American Sign Language: Functional categories and
hierarchical structure. Cambridge, MA: MIT Press.
Neville, Helen J., Daphne Bavelier, David Corina, Josef Rauschecker, Avi Karni, Anil
Lalwani, Allen Braun, Vince Clark, Peter Jezzard, and Robert Turner. 1998. Cere-
bral organization for language in deaf and hearing subjects: Biological constraints
and effects of experience. Proceedings of the National Academy of Sciences 95:
922–929.
Newport, Elissa L. 1982. Task specificity in language learning? Evidence from speech
perception and American Sign Language. In Language acquisition: The state of
the art, ed. Eric Wanner and Lila Gleitman, 451–486. Cambridge: Cambridge
University Press.
Newport, Elissa L. 1990. Maturational constraints on language learning. Cognitive
Science 14:11–28.
Newport, Elissa L. and Ursula Bellugi. 1978. Linguistic expression of category levels
in visual–gestural language. In Cognition and categorization, ed. Eleanor Rosch
and Barbara B. Lloyd. Hillsdale, NJ: Lawrence Erlbaum Associates.
Newport, Elissa L. and Richard P. Meier. 1985. The acquisition of American Sign
Language. In The crosslinguistic study of language acquisition. Vol. 1: The data,
ed. Dan I. Slobin, 881–938. Hillsdale, NJ: Lawrence Erlbaum Associates.
Newport, Elissa L. and Ted Supalla. 2000. Sign language research at the millennium. In
The Signs of Language Revisited, ed. Karen Emmorey and Harlan Lane, 103–114.
Mahwah, NJ: Lawrence Erlbaum Associates.
Padden, Carol A. 1983. Interaction of morphology and syntax in American Sign
Language. Doctoral dissertation, University of California, San Diego, CA.
Padden, Carol A. 1998. The ASL lexicon. Sign Language and Linguistics 1:39–60.
Petitto, Laura A. and Paula Marentette. 1991. Babbling in the manual mode: Evidence
from the ontogeny of language. Science 251:1493–1496.
Pinker, Steven and Paul Bloom. 1990. Natural language and natural selection.
Behavioral and Brain Sciences 13:707–784.
Poizner, Howard, Edward S. Klima, and Ursula Bellugi. 1987. What the hands reveal
about the brain. Cambridge, MA: MIT Press.
Polich, Laura G. 2000. The Search for Proto-NSL: Looking for the roots of the
Nicaraguan deaf community. In Bilingualism and identity in Deaf communi-
ties, ed. Melanie Metzger, 255–305. Washington, DC: Gallaudet University
Press.
Quadros, Ronice M. de. 1999. Phrase structure of Brazilian Sign Language. Unpublished
dissertation, Pontifícia Universidade Católica do Rio Grande do Sul.
Rathmann, Christian. 2000. Does the presence of person agreement marker predict
word order in signed languages? Paper presented at 7th International Conference
on Theoretical Issues in Sign Language Research (TISLR 2000). University of
Amsterdam, July.
Sapir, Edward 1921. Language. New York: Harcourt, Brace, and World.
Saussure, Ferdinand de. 1916/1959. Course in general linguistics. New York: Philo-
sophical Library. (English translation of Cours de linguistique générale. Paris: Payot.)
Singleton, Jennie and Elissa L. Newport. In press. When learners surpass their models:
The acquisition of American Sign Language from inconsistent input. Cognitive
Psychology.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwanese Sign Language. In
Theoretical issues in sign language research. Vol. 1: Linguistics, ed. Susan D.
Fischer and Patricia Siple. Chicago, IL: University of Chicago Press.
Stokoe, William C. 1960. Sign language structure: An outline of the communication
systems of the American deaf. Studies in Linguistics, Occasional Papers, 8. Silver
Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965. A Dictionary
of American Sign Language on Linguistic Principles. Washington, DC: Gallaudet
University Press.
Supalla, Samuel J. 1991. Manually Coded English: The modality question in signed
language development. In Theoretical issues in sign language research. Vol. 2:
Psychology, ed. Patricia Siple and Susan D. Fischer, 85–109. Chicago, IL:
University of Chicago Press.
Supalla, Ted and Elissa L. Newport. 1978. How many seats in a chair? The derivation of
nouns and verbs in American Sign Language. In Understanding language through
sign language research, ed. Patricia Siple, 91–133. New York: Academic Press.
Sutton-Spence, Rachel and Bencie Woll. 1998. The linguistics of British Sign Language:
An introduction. Cambridge: Cambridge University Press.
Thelen, Esther. 1979. Rhythmical stereotypies in normal human infants. Animal
Behaviour 27:699–715.
Wilbur, Ronnie B. and Susan B. Nolen. 1986. The duration of syllables in American
Sign Language. Language and Speech 29:263–280.
Woodward, James. 1982. Single finger extension: Toward a theory of naturalness in
sign language phonology. Sign Language Studies 37:289–304.
Woodward, James. 2000. Sign languages and sign language families in Thailand and
Viet Nam. In The Signs of Language Revisited, ed. Karen Emmorey and
Harlan Lane, 23–47. Mahwah, NJ: Lawrence Erlbaum Associates.
Part I

Phonological structure in signed languages

At first glance, a general linguistic audience may be surprised to find a phonol-
ogy section in a book that focuses on sign language research. The very word
“phonology” connotes a field of study having to do with sound (phon). Sign
languages, however, are obviously not made up of sounds. Instead, the pho-
netic building blocks of sign languages are derived from movements and pos-
tures of the hands and arms. Although early sign researchers acknowledged
these obvious differences between signed and spoken languages by referring
to the systematic articulatory patterns found within sign language as “cherol-
ogy” (Stokoe 1960; Stokoe, Casterline, and Croneberg 1965), later researchers
adopted the more widely used term “phonology” to emphasize the underlying
similarity. Although different on the surface, both sign and speech are composed
of minimal units that create meaningful distinctions (i.e. phonemes, in spoken
languages) and these units are subject to language specific patterns (for discus-
sions of phonological units and patterns in American Sign Language [ASL],
see Stokoe 1960; Stokoe, Casterline, and Croneberg 1965; Klima and Bellugi
1979; Liddell 1984; Liddell and Johnson 1986, 1989; Wilbur 1987; Brentari
1990; Corina and Sandler 1993; Perlmutter 1993; Sandler 1993; Corina 1996;
Brentari 1998).
The research reported in this set of chapters contributes to our understanding
of sign phonology, and specifically to the issue of whether and how the way
in which a language is produced and perceived may influence its underlying
phonological structure. Implicit in these questions is the notion that wherever
signed and spoken languages are phonologically divergent, the differences can
be traced to modality-specific properties, whereas the properties shared by
sign and speech are likely candidates for universal grammar (UG). Each chap-
ter addresses this question in a different way – behaviorally, statistically, or
theoretically – but the opinions that emerge are remarkably similar. Although
spoken and signed languages share a phonological vocabulary that may include
prosodic words, syllables, timing units, and static and dynamic elements, the
instantiation of many of these components varies dramatically between the two
modalities. The authors in this section agree that these differing instantiations
are rooted in the simultaneous rather than sequential nature of signs, and that
simultaneity owes its existence, at least in part, to properties of visual percep-
tion. To the extent that the auditory system forces words to be sequentially
organized (Liberman and Studdert-Kennedy 1977), the capacities of the visual
system effectively relax this constraint. Similarity between sign and speech
phonology, then, is largely a function of the existence, rather than the particular
instantiation, of phonological categories. If so, it is the existence of such cat-
egories and their application for communicative purposes, more than the form
they take, that forms the core of UG phonology.
In an attempt to explore the central question of modality, two major topics
emerge. First, how does the channel through which a language is produced and
perceived influence language behavior, both under naturalistic and experimen-
tal circumstances? This line of research speaks to the psychological reality of
the phonological representation of signs and invites inquiry into the second
topic, the precise nature of phonological representations in signed and spo-
ken languages. By exploring these topics, the authors not only better inform
us about issues in sign language phonology, they broaden our understanding
of phonological patterning in general, providing theories that encompass two
language modalities.
The classical view of sign language phonology was that individual signs could
be described by three major parameters of sign formation: hand configuration,
movement, and place of articulation (Stokoe 1960; Stokoe et al. 1965; Klima
and Bellugi 1979). Hand configuration includes both handshape, the distinct
shape produced by extension and/or flexion of the finger and thumb joints, and
orientation, the position of the palm of the hand relative to the body. Movement
denotes the path that the manual articulators traverse to produce the sign, and
may also include hand-internal movement of the finger joints. The place of
articulation of a sign is the location on the body or in space where the sign
is produced. Each parameter comprises a set of possible forms for a given
language. These forms are, for example, A, 5, and O for hand configuration,
upward, to and fro, and circular for movement, and neutral space, nose, and
chin for place of articulation. The set of hand configurations, movements, and
places of articulation used in any given sign language draws from the sets of
possible hand configurations, movements, and places of articulation available
to all sign languages.
Evidence that these parameters are contrastive building blocks within signs
can be found in minimal pairs, signs that differ from one another in a single pa-
rameter (see Klima and Bellugi 1979). Minimal pairs demonstrate that a change
in the value of a single formational parameter can result in a change in the mean-
ing of a sign. Other evidence for the significance of the major parameters of
sign formation comes from the unequal distribution of spontaneous errors in
sign production. In a seminal study, Klima and Bellugi (1979) found that in
a corpus of 131 ASL slips of the hand there were 65 instances of handshape
substitutions, but only 13 place of articulation substitutions and 11 movement
substitutions. In addition to slip data, claims that parameters are linguisti-
cally and cognitively significant are bolstered by short-term memory error data
(Bellugi and Siple 1974; Bellugi, Klima, and Siple 1975; Klima and Bellugi
1979) and by aphasia studies (e.g. Corina, Poizner, Bellugi, Feinberg, Dowd,
and O’Grady-Batch 1992). The dissociation and differential susceptibility of
the formational parameters of signed languages have motivated a significant
body of research. Formational parameters that originated in the literature as
useful notational devices have thus come to be recognized as having linguistic,
and perhaps even psychological, utility.
The authors in this volume pursue this discussion. In a study similar to
Klima and Bellugi’s original, Annette Hohenberger, Daniela Happ, and Helen
Leuninger (Chapter 5) induced and then categorized slips of the hand pro-
duced by native users of German Sign Language (Deutsche Gebärdensprache
or DGS). The DGS slip data provide crosslinguistic evidence of the indepen-
dence of the parameters of sign formation. And, like Klima and Bellugi’s data,
the DGS data show that handshape is slipped more often than other parameters.
Hohenberger and her colleagues expanded the scope of the original Klima
and Bellugi study in an important way: in order to assess the influence of
modality on language production, they conducted an identical experiment in
spoken German with hearing users of that language. By undertaking a compre-
hensive comparison of the slips and repairs found in DGS and spoken German,
Hohenberger et al. attribute many production error differences between the lan-
guages to the different time courses by which phonological units are produced
in sign and speech. They argue that the synchronous articulation of sign param-
eters results, for example, in relatively more fusion errors and fewer exchange
errors in DGS than in German.
David Corina and Ursula Hildebrandt (Chapter 4) also discuss behavioral
manifestations of the phonological categories of sign language, but their fo-
cus is on perception rather than production. By adapting classic techniques of
speech perception to the visual–gestural modality and by designing some novel
experiments of their own, these authors sought to determine whether there is
evidence for the psychological reality of the phonological architecture of signs.
Their finding of a reduced role of phonological structure in sign processing
stands in contrast to the spoken language literature. However, some evidence is
observed in a metalinguistic task of similarity judgments. In this experiment,
which compared the intuitions of deaf, native signers of ASL to those of hear-
ing, sign-naive English speakers, subjects were asked to watch video-presented
stimuli and determine which of four phonologically possible nonsigns was most
similar in appearance to a target nonsign. Each of the nonsign choices shared
different parameter combinations with the target. Corina and Hildebrandt found
that while hearing subjects’ judgments were based on only the perceptual
salience of the sign parameters, and on sign movement in particular, deaf sub-
jects’ judgments reflected both perceptual salience and linguistic relevance.
The results of Corina and Hildebrandt’s phonological similarity study corrob-
orate prior suggestions that movement is the most salient element within the
sign (see also Sandler 1993). Sign language researchers suggest that sign lan-
guage movement – whether as large as a path movement or as small as a hand
configuration change – forms the nucleus of the sign syllable (Wilbur 1987;
Perlmutter 1993; Corina 1996; Brentari 1998).
Curious as to whether the greater perceptibility of vowels is due to physi-
cal properties of the signal or to their syllabic status, Corina and Hildebrandt
observed that hand configurations in sign have a dual status. Depending on
whether the posture of the hands remains constant throughout a sign – in which
case the dynamic portion of the sign comes from path movement – or whether
the posture of the hands changes while the other parameters are held constant,
hand configurations can be thought of as non-nuclear (C) or nuclear (V) in
a sign syllable. Accordingly, Corina and Hildebrandt conducted a phoneme-
monitoring experiment in which subjects were asked to quickly identify the
presence of specific handshapes in signs. Each handshape was presented in a
static and a dynamic context. Their finding – that this context has little effect
on the speed with which handshapes are identified – leads them to suggest that
differences found in natural language perception of nuclear and non-nuclear seg-
ments rests on physical differences in the signal, not on the syllabic status of the
segment.
Generally, Corina and Hildebrandt find that the influence of phonological
form on sign perception is less robust than one might expect. They discuss the
possibility that the differential transparency of the articulatory structures of sign
and speech may have important consequences for language perception. They
hypothesize that, compared to speech, in sign there is greater transparency be-
tween the physical signal directly observed by an addressee and the addressee’s
internal representation of the signs being produced. In the perception of signed
languages the addressee observes the language articulators directly. In contrast,
in speech the listener perceives the acoustic effects of the actions of the speaker’s
articulators.
Similarly, Diane Brentari, who in this volume (Chapter 2) utilizes her Prosodic
Model of sign language phonology as a theoretical framework by which to
evaluate the influence of modality on phonology, attributes much of the dif-
ference between signed and spoken phonological representations to phonetics.
Specifically, she invokes the realm of phonetics whereby a physical signal is
transformed into a linguistic one. Rather than appealing to greater transparency,
Brentari argues that representational differences between sign and speech can-
not be separated from the visual system’s capacity for vertical processing. The
advantage of the visual–gestural modality for vertical processing, or processing
items presented at the same time, stands in contrast to the advantage that the
auditory–vocal modality has with respect to horizontal processing, or the abil-
ity to process temporally discrete items. This distinction allows – and in fact
requires, says Brentari – a different organization of the phonological units of
signed languages.
Brentari’s model of sign language phonology is not a simple transformation
of spoken language models. She accords movement features, which she labels
prosodic (PF), an entirely different theoretical status from handshape and place
of articulation features, which she labels inherent (IF). This distinction was
developed to account for sign languages in particular, but it succeeds in capturing
some more general aspects of phonology. Syllables in sign have visual salience,
which is analogous to acoustic sonority, and prosodic features are vowel-like
while inherent features are consonant-like. Brentari argues that properties such
as PFs (or Vs) and IFs (or Cs), along with other properties common to both sign
and speech, are likely candidates for UG. For example, both language modalities
exhibit structures that can be divided into two parts. One part “carries most of the
paradigmatic contrasts,” and the other part “comprises the medium by which
the signal is carried over long distances” (the salient movement features or
vowels). These observations suggest that UG requires both highly contrastive
and highly salient phonological elements.
Rachel Channon’s chapter (Chapter 3) also supports the idea that signed and
spoken languages have different phonological representations. She observes
that the different phonological structures of the languages in each modality
lead to different patterns of repeated elements. Spoken words, for example,
are composed of contrastive segments that may repeat in an irregular fashion.
Simple signs, however, are composed of a bundle of features articulated si-
multaneously; in such signs, elements repeat in a regular “rhythmic” fashion.
In her statistical analysis of the types of repetition found within a sign and
within a word, Channon develops two models of repetition. One model pre-
dicts speech data well, but not sign data. The other predicts sign data well, but
is a poor predictor of repetition in speech. She concludes that differences in
the occurrence of repetition in sign and in speech are systematic, and that the
systematicity is a function of different phonological representations of the two
modalities.
Not only do the chapters in this volume advance ideas about differences in the
phonological representations of sign and speech, they also highlight possible
differences between sign and speech that may have little phonological import,
but that are of real psycholinguistic interest. For example, in their comparison
of the DGS slips data to slips in spoken German, Hohenberger et al. report that
errors are detected and repaired earlier in sign languages (typically within a
sign) than in spoken languages (almost always after the word).
explanation for this difference can be found in the relative duration of signs
vs. words. Specifically, an individual sign takes about twice as long as a word
to be articulated, so errors in sign are more likely to be caught before the sign
is completed (however, for evidence that the rates at which propositions are
expressed in sign and speech are not different, see Klima and Bellugi 1979).
This difference in how signers or speakers repair their language is a modality
effect that does not reach into the phonology of the language.
The last chapter in this section – that by Samuel J. Supalla and Cecile McKee
(Chapter 6) – argues that there are also educational consequences of properties
of word formation that may be specific to signed languages. Various systems for
encoding English in sign have attempted to graft the morphological structure of
English onto the signs of ASL. According to Supalla and McKee the unintended
result of these well-intentioned efforts has been to create systems of Manually
Coded English (MCE) that violate constraints on sign complexity first noticed
by Battison (1978). These constraints limit the number of distinct handshapes
or distinct places of articulation within signs. Even children with little or no
exposure to ASL seem to expect sign systems to fit within these constraints.
Supalla and McKee suggest that these constraints have their origins in perceptual
strategies by which children segment the sign stream. Supalla and McKee’s
chapter thus reminds us that linguistic and psycholinguistic work on the
structure of signed languages can have very immediate consequences for the
education of deaf children.
The research reported in this part of the volume speaks to some possible
causes and effects of an organizational difference between spoken and sign
language phonology. Each author discusses similarities and differences be-
tween sign and speech, bringing us closer to understanding how and why the
phonological representations of signs could be different from those of spoken
words. The three chapters by Hohenberger et al., Corina and Hildebrandt, and
Supalla and McKee report on behavioral data, interpreting their results with an
eye toward this modality issue, whereas Channon and Brentari approach the
problem from a predominantly theoretical perspective, offering insights into
the phonological representation for sign language. The convergence of behav-
ioral, statistical, and theoretical research methods on a central problem – the
extent to which the sensory and articulatory modality of a language shapes
its phonological structure – yields a surprisingly consistent picture. What we
learn is that many elements of the linguistic representations of signs are guided
by their paradigmatic nature, and that this organization is likely associated
with the ability of our visual systems to process a large number of linguis-
tic features simultaneously. Perceptual and production consequences naturally
follow.

heather knapp and adrianne cheek


References
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bellugi, Ursula and Patricia Siple. 1974. Remembering with and without words. In
Current problems in psycholinguistics, ed. François Bresson. Paris: Centre National
de la Recherche Scientifique.
Bellugi, Ursula, Edward S. Klima, and Patricia Siple. 1975. Remembering in signs.
Cognition 3:93–125.
Brentari, Diane. 1990. Licensing in ASL handshape. In Sign language research: Theo-
retical issues, ed. Ceil Lucas, 57–68. Washington, DC: Gallaudet University Press.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Corina, David P. 1996. ASL syllables and prosodic constraints. Lingua 98:73–102.
Corina, David P., Howard Poizner, Ursula Bellugi, Tod Feinberg, Dorothy Dowd, and
Lucinda O’Grady-Batch. 1992. Dissociation between linguistic and non-linguistic
gestural systems: A case for compositionality. Brain and Language 43:414–447.
Corina, David P. and Wendy Sandler. 1993. On the nature of phonological structure in
sign language. Phonology 10:165–207.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liberman, Alvin M. and Michael Studdert-Kennedy. 1977. Phonetic perception. In
Handbook of sensory physiology, ed. R. Held, H. Leibowitz, and H.-L. Teuber.
Heidelberg: Springer-Verlag.
Liddell, Scott K. 1984. THINK and BELIEVE: Sequentiality in American Sign Lan-
guage. Language 60:372–392.
Liddell, Scott K. and Robert E. Johnson. 1986. American Sign Language compound
formation processes, lexicalization, and phonological remnants. Natural Language
and Linguistic Theory 4:445–513.
Liddell, Scott K. and Robert E. Johnson. 1989. American Sign Language: the phono-
logical base. Sign Language Studies 64:197–278.
Perlmutter, David M. 1993. Sonority and syllable structure in American Sign Language.
In Phonetics and phonology: Current issues in ASL phonology, ed. Geoffrey R.
Coulter, 227–261. New York: Academic Press.
Sandler, Wendy. 1993. Sign language and modularity. Lingua 89:315–351.
Stokoe, William C. 1960. Sign language structure. Studies in linguistics occasional
papers 8. Buffalo: University of Buffalo Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965. A dictionary
of American Sign Language. Washington, DC: Gallaudet College Press.
Wilbur, Ronnie B. 1987. American Sign Language: Linguistic and applied dimensions,
2nd edition. Boston, MA: College-Hill Press.
2 Modality differences in sign language
phonology and morphophonemics

Diane Brentari

2.1 Introduction
In this chapter it is taken as given that phonology is the level of grammatical
analysis where primitive structural units without meaning are combined to
create an infinite number of meaningful utterances. It is the level of grammar
that has a direct link with the articulatory and perceptual phonetic systems,
either visual–gestural or auditory–vocal. There has been work on sign language
phonology for about 40 years now, and at the beginning of just about every piece
on the topic there is some statement like the following:
The goal is, then, to propose a model of ASL [American Sign Language] grammar at
a level that is clearly constrained by both the physiology and by the grammatical rules.
To the extent that this enterprise is successful, it will enable us to closely compare the
structures of spoken and signed languages and begin to address the broader questions
of language universals . . . (Sandler 1989: vi)

The goal of this chapter is to articulate some of the differences between the
phonology of signed and spoken languages that have been brought to light
in the last 40 years and to illuminate the role that the physiological bases
have in defining abstract units, such as the segment, syllable, and word. There
are some who hold a view that sign languages are just like spoken languages
except for the substance of the features (Perlmutter 1992). I disagree with this
position, claiming instead that the visual–gestural or auditory–vocal mode of
communication infiltrates the abstract phonological system, causing differences
in the frequency of a phenomenon’s occurrence, as well as differences due to
the signal, articulatory, or perceptual properties of signed and spoken languages
(see Meier, this volume).
I argue that these types of differences should lead to differences in the phono-
logical representation. That is, if the goal of a phonological grammar is to ex-
press generalizations as efficiently and as simply as possible – and ultimately
to give an explanatory account of these generalizations – then the frequency
with which a given phenomenon occurs should influence its representation. A
grammar should cover as many forms as possible with the fewest number of
exceptions, and frequent operations should be easy to express, while infrequent
or non-occurring operations should be difficult to express. These premises are
some of the most basic of those in phonology (Chomsky and Halle 1968, cf.
330–335; Clements 1985). If a true comparison of signed and spoken language
phonology is to be made, representations must take issues of frequency into
account. The areas of differences between signed and spoken languages that I
describe and the organization of the chapter are given in (1).
(1) Areas of difference described in this chapter:
a. perceptual differences between audition and vision (Section 2.2.1);
b. articulatory differences: the roles that articulators play in the system
(Section 2.2.2);
c. distribution of information in the signal: (i) consonants; (ii) vowels
(Section 2.3);
d. segmental differences: (i) the need for segments; (ii) the relationship
between segments and root nodes (Section 2.4);
e. lexical differences: (i) word shape; (ii) minimal pairs (Section 2.5).
In the background throughout this chapter are these questions:
• How much can the phonological representation of spoken languages elegantly
and efficiently express sign language phonology?
• How far into the phonology do the effects of the phonetics (modality) reach?
• At what level of description are phonological units equally applicable to
signed and spoken languages?
• In terms of lost insight about human language, how much cost is there if sign
languages are expressed through spoken language representations that are not
designed for them?

2.2 Bases for differences in signed and spoken languages


In Section 2.2.1, differences between vision and audition that might play a role
in phonological evolution are discussed; Section 2.2.2 then introduces fundamental
points about sign language phonology that figure in the discussions to follow.

2.2.1 Some key differences between vision and audition


Many aspects of vertical and horizontal processing take place in both vision and
audition (Bregman 1990). “Vertical processing” is a cover term for our ability
to process various input types presented roughly at the same time (e.g. pattern
recognition, paradigmatic processing); “horizontal temporal processing” is our
ability to process temporally discrete inputs into temporally discrete events (e.g.
ordering and sequencing of objects in time, syntagmatic processing). There are,
however, differences in the inherent strengths built into the design of the visual
and auditory systems due to signal transmission and peripheral processing, and
a few of these are listed in Table 2.1.

Table 2.1 Differences between vision and audition

                                   Vision           Audition
Speed of signal transmission       299,274 km/s     331 m/s
Peripheral temporal resolution     25–30 ms         2 ms
Spatial arrangement information    peripheral       nonperipheral
In general, the advantage in vertical processing tasks goes to vision, while
the advantage in horizontal processing tasks goes to audition. For example, the
time required for a subject to detect temporally discrete stimuli is a horizontal
processing task. Hirsch and Sherrick (1961) show that the time required for the
higher order task of recognition, or labeling of a stimulus, called “threshold
of identification” (involving more cortical involvement) is roughly the same
in both vision and audition, i.e. approximately 20 ms. The time required for
the more peripheral task of detection – called “threshold of flicker fusion”
in vision (Chase and Jenner 1993) and “threshold of temporal resolution” in
audition (Kohlrausch et al. 1992) – is quite different. Humans can temporally
resolve auditory stimuli when they are separated by an interval of only 2 ms,
(Green 1971; Kohlrausch et al. 1992), while the visual system requires at least
a 20 ms interstimulus interval to resolve visual stimuli presented sequentially
(Chase and Jenner 1993). The advantage here is with audition. Meier (1993)
also discusses the ability to judge duration and rate of stimulus presentation;
both of these tasks also give the advantage to audition.
Comparing vertical processing tasks in audition and vision – e.g. pattern
recognition, localization of objects – is inherently more difficult because of the
nature of sound and light transmission. To take just two examples, vision has no
analogue to harmonics, and the difference between the speed of transmission
of light waves vs. sound waves is enormous: 299,274 km/s for light waves vs.
331 m/s for sound waves. As a result of these major differences, I could find no
tasks with exactly the same experimental design or control factors; however,
we can address vertical processing in a more general way. One effect of the
speed of light transmission on the perception of objects is that vision can take
advantage of light waves reflected not only from the target object, but also by
other objects in the environment, thereby making use of “echo” waves, i.e.
those reflected by the target object onto other objects. These echo waves are
available simultaneously with the waves reflected from the target object to the
retina (Bregman 1990). This same echo phenomenon in audition is available to
the listener much more slowly. Only after the sound waves produced by the
target object have already struck the ear will echo waves from other objects
in the environment do the same. The result of this effect is that a more three-
dimensional image is available more quickly in vision due, in part, to the speed
at which light travels. Moreover, the localization of visual stimuli is registered
at the most peripheral stage of the visual system, at the retina and lens, while the
spatial arrangement of auditory stimuli can only be inferred by temporal and
intensity differences of the signal between the two ears (Bregman 1990). Meier
(1993; this volume) also discusses the transmission property of bandwidth,
which is larger in vision, and spatial acuity, which is the ability to pinpoint
accurately an object in space (Welsh and Warren 1986); both of these properties
also give the advantage to vision.
In sum, the auditory system has an advantage in horizontal processing, while
the visual system has an advantage in vertical processing. An expected result
would be that phonological representations in signed and spoken languages re-
flect these differences. This would not present a problem for a theory of universal
grammar (UG), but it may well have an effect on proposals about the principles
and properties contained in the part of UG concerned with phonology. At the
end of the chapter, a few such principles for modality-independent phonology
are proposed. These principles can exploit either type of language signal.

2.2.2 Introduction to sign language phonology and to the Prosodic Model


This section is a summary of results in the area of sign language phonology, in
general, and in the Prosodic Model (Brentari 1998), in particular. I hold the view
that a specific theory must be employed in order to illuminate areas of difference
between sign and spoken languages. However, many of the topics covered in
this chapter enjoy a large degree of consensus and could be articulated in several
different phonological models of sign language structure. At the center of the
work on sign language phonology are several general questions concerning
how much of a role such factors as those discussed in Section 2.2.1 play in our
phonological models.
Table 2.2 Traditional “parameters” in sign language phonological structure
and one representative feature

Parameters   Handshape   Place of articulation   Movement      Orientation
Features     [open]      [distal]                [direction]   [pronation]

Basically, signed words consist of a set of three or four parameters, each
consisting of featural material, as shown in Table 2.2. It is often assumed that
these parameters all have more or less equal status in the system. I would attribute the
source of this assumption to transcription systems, which create symbols for
a sign’s handshape (henceforth called articulator), place, and movement (Stokoe
1960; Stokoe et al. 1965), and orientation (Battison 1978) without investigating
carefully how these properties fit together as a phonological system.1 This is not
to underestimate the contribution that this early, groundbreaking work made to
the field; however, each of these parameters was found to have at least one con-
trastive property, which creates a minimal pair in such systems of transcription,
and so each parameter was considered equal.2
In general, the more recent innovations in phonological theory – such as
autosegmental phonology (Goldsmith 1976), feature geometry (Clements 1985;
McCarthy 1988; Clements and Hume 1995), and prosodic phonology (Nespor
and Vogel 1986; Itô 1986) – have made it possible for further common ground
in sign language and spoken language phonological work to be established.
To take one example, it is relatively easy to make the connection between the
sign language entities in Table 2.2 and feature geometry. The traditional sign
language parameters (i.e. articulator, movement, etc.) are class nodes, and the
features (i.e. [open], [direction], etc.) are terminal nodes dominated by class
nodes; however, the class nodes in Table 2.2 are not equal if we consider these
parameters according to how much consensus there is about them. As soon
as we move beyond the articulator parameter (the one on which there is the
most consensus) or place of articulation (on which there is a fair amount of
consensus), then there are controversies about major issues. The controversies
include:
• the necessity of movement and orientation parameters as phonological entities;
• the nature and type of other possible structures, such as the segment and mora; and
• the articulatory and/or perceptual bases for features in sign languages.
Let us now turn to the Prosodic Model (Brentari 1998), since this is the
phonological model that is used in this chapter. In the Prosodic Model, features
are organized hierarchically using feature geometry. The primary branches of
structure are given in (2a); the complete structure is given in (2b). Because
phonological theory emphasizes that feature organization is based on phonological
behavior rather than on the physical nature of the articulators, it is worth
discussing this point in sign language phonology in some detail: it brings to light a
difference in the phonetic roles of signed and spoken language articulators.
1 Stokoe’s notation was never intended to be a phonological representation, but rather a notation
system; however, this distinction between a notation system and a phonological representation
is not always well understood.
2 For other overviews of sign language phonology, please see Coulter and Anderson (1993),
Corina and Sandler (1993), Brentari (1995), and van der Hulst and Mills (1996).
(2) Prosodic Model feature geometry

a. Primary branches:

    root
      inherent features (IF)
        articulator (A)
        place of articulation (POA)
      prosodic features (PF)

b. Complete structure (a full tree diagram in the original, of which only the
hierarchy is recoverable here): the articulator node divides into manual and
non-manual branches; the manual branch distinguishes h1 and h2, arm and hand,
selected and non-selected fingers, joints (base and non-base), fingers, and
thumb (with quantity and point-of-reference features). The place of articulation
(POA) node comprises the planes x, y, −y, and z, with subnodes for location,
body, head, torso, arm, and h2. The prosodic features (PF) branch comprises the
class nodes non-manual, setting, path, orientation, and aperture, together with
their dynamic counterparts (Fnm, Fsetting, Fpath, Faperture).


Figure 2.1 The handshape parameter used (a) as an articulator in THINK;
(b) as a place of articulation in TOUCH; (c) as a movement in UNDERSTAND

The “vocal mechanism” in speech includes the tongue, lips, and larynx as
the primary active articulators, and the teeth, palate, and pharyngeal area as
target places of articulation (i.e. the passive articulators). Although there are
exceptions to this – since the lips and glottis can be either active or passive
articulators – other articulators have a fixed role. The tongue is always active
and the palate always passive in producing speech, so to some extent structures
have either an active or a passive role in the articulatory event. This is not the case
in sign languages. Each part of the body involved in the “signing mechanism” –
the face, hands, arms, torso – can be active or passive. For example, the hand
can be an active articulator in the sign THINK, a passive articulator in the
sign TOUCH, and a source of movement in the sign UNDERSTAND; this
is shown in Figure 2.1. The lips and eyes are the articulator in the bound
morpheme CAREFUL(LY) but the face is the place of articulation in the sign
BEAUTIFUL. This is one reason models of sign language phonology must be
grouped by phonological role; however, just as in spoken languages, articulatory
considerations play an important secondary role in these groupings.
Within the Prosodic Model features are divided into mutually exclusive sets
of inherent features (IF) and prosodic features (PF). Movement features are
grouped together as prosodic features, based on the use of the term by Jakobson
et al. (1951), who stated that prosodic features are “defined only with reference
to a time series.” The inherent features are the articulator and place of articulation
features. The articulator refers to features of the active articulator, and place of
articulation (POA) refers to features of the passive articulator. The relation of
the articulator with the POA is the orientation relation.
There are several arguments for organizing features in the representation this
way, rather than according to articulatory structure. A few are given here; for
more details and for additional arguments see Brentari (1998). When consider-
ing their role in the phonological grammar, not only the number of distinctive
features, but also the complexity and the type of constraints on each of the IF and
PF feature trees must be considered. The number of IFs is slightly larger (24)
than the number of PFs (22). The structure of the IF branch of structure is also
more complex and yields more potential surface contrasts than the PF branch.

Figure 2.2 ASL signs showing different timing patterns of handshape and path
movement: (a) INFORM shows the handshape and path movement in a cotemporal
pattern; (b) DESTROY shows the handshape change happening only during the
second part of the bidirectional movement; (c) BACKGROUND shows a handshape
change occurring during a transitional movement between two parts of a
repeated movement
In addition, the constraints on outputs of the PF tree are much more restrictive
than those on the outputs of the IF tree. A specific PF branch constraint sets a
minimum of 1 and a maximum of 3 of movement components in any lexical item,
and another PF branch constraint limits the number of features from each class
node to 1 in stems. PFs are also subject to principles of Alignment, which insure
that a sign with movements involving both handshape and arm movements
will have the correct surface pattern; examples of such signs are INFORM,
DESTROY, and BACKGROUND, shown in Figure 2.2. The IF branch is subject
to fewer and more general constraints, and IFs are generally realized across the
entire prosodic word domain.
PFs also have the ability to undergo “movement migration,” while IFs do
not. A joint of the arm or even the torso can realize movements specified as
handshape movements (i.e. aperture changes) or wrist movements. Some of the
reasons for movement migration that have been documented in the literature:
lexical emphasis (Wilbur 1999; 2000), linguistic maturation (Meier et al. 1998;
Holzrichter and Meier 2000; Meier 2000), loudness (Crasborn 2001), and motor
impairment due to Parkinson’s Disease (Brentari and Poizner 1994). Finally,
PFs participate in the operation of “segment generation,” while IFs do not. This
is explained further in Section 2.4 below.
Within the Prosodic Model, the following units of analysis are used, and they
are defined as follows:
(3) Units of phonological analysis
a. prosodic word (p-words): the phonological domain consisting of a
stem + affixes;
b. root node: the node at which the phonological representation inter-
faces with the morphosyntactic features of the form; “the node that
dominates all features and expresses the coherence of the melodic
material as a phonological unit” (Clements and Hume 1995);
c. syllable: i. the fundamental parsable prosodic unit;
ii. (in sign language) a sequential, phonological move-
ment;
d. weight unit: a branching class node in the PF tree, which adds com-
plexity to the syllable nucleus and can be referred to by phonological
and morphophonological processes;
e. timing unit (segment): the smallest concatenative unit on the timing
tier (X-slots).
P-words and root nodes are defined in ways recognizable to phonologists
working on spoken languages, and these need no further explanation; let us
therefore address the syllable, timing unit, and weight unit in turn.
Sign language syllables – defined in terms of the number of sequential move-
ments in a form – are necessary units in the phonological grammar, because
if one considers the grammatical functions that these units serve for a sign
language such as American Sign Language (ASL), they parallel those served
by the syllable in spoken languages. The reason for calling such units sylla-
bles is related to facts regarding language acquisition, sonority, minimal word
constraints, and word-internal two-movement combinations.3 First, regarding
language acquisition, it has been shown that Deaf babies learning ASL as a first
language engage in prelinguistic babbling whose structure is that of a linguistic
movement (Petitto and Marentette 1991; Petitto 2000). This type of movement
functions in a similar way to vocal babbling in babies acquiring a spoken lan-
guage in temporal patterning, and it is distinct from rhythmic, excitatory motoric
activity. This activity involving movement serves as the basic prosodic structure
upon which other aspects of the phonology, such as hand-internal movements
and secondary movements, can be added. Second, regarding sonority, there is
evidence that movements (hand-internal movements, wrist movements, elbow
movements, and shoulder movements) are subject to an evaluative procedure
that decides the relative suitability of a movement for constructing a syllable
nucleus according to the joint producing it. Movements articulated
by a more proximal joint are preferred over those articulated by more distal
joints. For example, movements articulated by the wrist and forearm are pre-
ferred over movements articulated by the knuckles in loan signs that come from
fingerspelling. This type of evaluation mechanism in sign phonology is similar
to the one in spoken languages that evaluates the local maximum of sonority
to determine a sound’s suitability as a syllable nucleus. I consider this visual
3 Perlmutter (1992) has independently arrived at the same conclusion.
salience a type of sign language sonority, which is a property of syllables. Third,
regarding minimal word constraints, no sign is well formed unless it has a move-
ment of some type (Wilbur 1987; Stack 1988; Brentari 1990a; 1990b). When
one is not present in the underlying form, there are insertion rules that repair
such forms. Finally, regarding two-movement combinations, there are restric-
tions on the types of movement sequences a signed word may contain (Uyechi
1995:104–106), as compared with a sequence of two signs or polymorphemic
forms. Bidirectional repeated, unidirectional repeated, and circle + straight
movements are possible combinations. Straight + circle movement combina-
tions are disallowed as are all combinations containing an arc movement.
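
A purely illustrative sketch may help fix these combinatorics. The movement-shape labels and the function licit_sequence below are hypothetical stand-ins, not part of the analysis itself; only the licit and illicit patterns just listed are encoded.

    # Sketch of the word-internal two-movement restriction (after Uyechi 1995).
    # Shape labels are informal; only the patterns named in the text appear.
    ALLOWED = {
        ("bidirectional", "bidirectional"),    # bidirectional repeated
        ("unidirectional", "unidirectional"),  # unidirectional repeated
        ("circle", "straight"),                # circle + straight
    }

    def licit_sequence(m1: str, m2: str) -> bool:
        if "arc" in (m1, m2):                  # no combination may contain an arc
            return False
        return (m1, m2) in ALLOWED

    assert licit_sequence("circle", "straight")
    assert not licit_sequence("straight", "circle")   # disallowed ordering
    assert not licit_sequence("arc", "straight")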
Timing units (or segments) in the Prosodic Model are defined as minimal
concatenative units, i.e. the smallest temporal phonological slice of the sig-
nal. Attempts to establish a phonology that contrasts movement and stasis as
the two types of fundamental phonological entities in sign languages (Liddell
and Johnson 1983; 1989; Liddell 1984) were gradually abandoned when new
evidence came to light showing that all stasis in a monomorphemic sign is pre-
dictable from phonological context. Such contexts include position in a phrase
(Perlmutter 1992) and contact with the body (Liddell and Johnson 1986; Sandler
1987). There are no minimal pairs in ASL that involve segmental geminates
in any of the sign parameters.4 Abstract timing units are necessary, however,
in order to account for a variety of duration-based, phonological operations,
such as lengthening effects, that target several prosodic class nodes at once
(handshape, setting, etc.) when they occur in the same phonological context
(e.g. word-initially or word-finally).
In the Prosodic Model, the number of timing units is predictable from the
PFs in a form; these are calculated as follows: Path features (located at the path
node) and abstract PFs (located at the node at the top of the PF tree) generate
two “x” timing slots; all other PFs generate one timing slot. The class node with
the highest number of potential segments determines the number of segments
in the word. A process of alignment then takes place (right to left) so that all
features associate to their correct segments. The features of each of the class
nodes in the PF tree are given in (4). IFs do not generate any timing slots at all.
(4) Segment generation in the Prosodic Model (from Brentari 1998:
chapter 5)
a. two segments generated:
prosodic node features: [straight], [arc], [circle];
path features: [direction], [tracing], [pivot], [repeat];

4 I am considering only forms from different morphological paradigms, so FLY and FLY-THERE
would not be a minimal pair. Perlmutter (1992) refers to some signs as having geminate Positions,
but the two Positions in such cases have different values, so they are not, strictly speaking,
geminates.
b. one segment generated:
setting features: [proximal], [distal], [top], [bottom], [ipsilateral],
[contralateral];
wrist/orientation: [supination], [pronation], [abduction],
[adduction], [flexion], [extension];
aperture: [open], [closed].
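
Since (4) states segment generation as an explicit procedure, a minimal computational sketch can make the counting concrete. Everything below is illustrative rather than part of the Prosodic Model’s formalism: the dictionary encoding of a sign’s PF tree and the function x_slot_count are hypothetical, and an aperture or orientation change is assumed to be specified by a pair of features (its initial and final values), consistent with the two x-slots shown for UNDERSTAND in (8) below. Alignment and OCP-driven deletion (see fn. 13) are not modeled.

    # Illustrative sketch of segment generation in (4); not Brentari's notation.
    TWO_SLOT_NODES = {"prosodic", "path"}   # (4a): each feature -> two x-slots
    # (4b): setting, wrist/orientation, and aperture features -> one x-slot each

    def x_slot_count(pf: dict) -> int:
        """Number of timing slots: the PF class node generating the most
        potential segments determines the count; IFs generate none."""
        counts = [(2 if node in TWO_SLOT_NODES else 1) * len(feats)
                  for node, feats in pf.items() if feats]
        return max(counts, default=0)

    understand = {"aperture": ["[closed]", "[open]"]}       # handshape change
    background = {"path": ["[direction:>1]", "[repeat]"]}   # cf. (24c)
    assert x_slot_count(understand) == 2
    assert x_slot_count(background) == 4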
Finally, let us turn to weight units. Weight units in the Prosodic Model are
assigned based on the complexity of the movement. A sequential movement
may have one component (i.e. features immediately dominated by one class
node in the PF tree, e.g. UNDERSTAND); these are called simple movements. A
sequential movement can have more than one component as well (i.e. features at
more than one class node, e.g. INFORM); these are called complex movements.
Each branching class node contributes a weight unit to the structure. As we see
later in this chapter, ASL phonology is sensitive to this difference.

2.3 The distribution of “consonant” and “vowel” information


All languages – both spoken and signed – are organized such that certain features
are members of sets having rich paradigmatic contrast, while other features are
members of sets that do not carry much contrastive power. Moreover, these
(ideally mutually exclusive) feature sets are assigned to different parts of the
signal. In spoken languages the former description is appropriate for the set
of consonants, and the latter description appropriate for the set of vowels.
The general findings in this section about sign languages are as follows. First,
the IF branch of structure carries more lexical contrast than the PF branch of
structure, just as consonants carry more potential for lexical contrast in spoken
languages. Second, movements (PFs) function as the “medium” of the signal,
just as vowels function as the medium of spoken languages. Third, movements
(PFs) function as syllable nuclei in sign languages, just as vowels function
as syllable nuclei in spoken languages. For these reasons, the IF branch of
structure is analyzed as more consonant-like and the PF branch is analyzed as
more vowel-like. Fourth, and finally, it is shown that the complexity of vowels
and the complexity of movements is calculated differently in signed and spoken
languages, respectively. These results lead to the differences in the distribution
of vowel and consonant information in sign and spoken languages given in (5).
(5) Differences between the nature of consonant (C) and vowel (V) infor-
mation in signed and spoken languages:
a. Cs and Vs are realized at the same time in sign languages, rather
than as temporally discrete units.5
5 It is important to mention that in spoken languages it is a misconception to see Cs and Vs as
completely discrete, since spreading, co-articulatory effects, and transitions between Cs and Vs
may cause them to overlap considerably.
b. With respect to movements (i.e. vowels), the phonology is sensitive
to the number of simultaneous movement components present in a
form.

2.3.1 Consonants and vowels in sign languages


The Hold–Movement Model of sign language phonology was the first to draw
parallels between vowels in spoken languages and movements in sign languages
(Liddell and Johnson 1984). There are good reasons for this, given in (6) and
explained further below.
(6) Reasons for a vowel: PF analogy
a. Signed words can be parsed without movements, just as spoken
words can be parsed without vowels.
b. In sign languages, the number of paradigmatic contrasts in the PF
tree (movements) is fewer than the number of contrasts in the IF
tree (articulator + POA), just as in spoken languages the number
of paradigmatic contrasts in vowels is fewer than the number of
consonant contrasts.
c. It is the movements that make signs perceptible at long distances in
a sign language, just as vowels make the signal perceptible at long
distances in spoken languages, i.e. it is the “medium” of the signal.
d. It is the vowels that function as syllable nuclei in spoken languages;
the movements in sign languages function as syllable nuclei.
First, if the movement of a sign is removed, a native signer is still likely to be
able to parse it in context, just as in spoken languages a native speaker is likely
to be able to parse a word in context if the vowels are removed. In sign, this find-
ing can be inferred from the numerous dictionaries in use that generally consist
only of photographic images. For speech, this is true for derived media, such as
orthographic systems without vowels, such as Hebrew (Frost and Bentin 1992),
reading activities in English involving “cloze” tasks (Seidenberg 1992), as well
as vowel recognition tasks in spoken words with silent-center vowels (Strange
1987; Jenkins et al. 1994). Second, the number of paradigmatic contrasts in the
IF branch is much larger than the number of movement contrasts because of
combinatoric principles that affect the IF and PF branches of structure. Third,
works by Uyechi (1995) and Crasborn (2001) propose that “visual loudness” in
sign languages is a property of movements, just as loudness in spoken languages
is a property of vowels. Without movement, the information in the signed sig-
nal could not be transmitted over long distances. Fourth, as I describe in Sec-
tion 2.3.2, in the sign signal it is the movements that behave like syllable nuclei.
We can contrast movement features, which contribute to the dynamic prop-
erties of the signal within words, with IFs, which are specified once per word.
The complete set of reasons for an analogy between the IFs and consonants
in spoken languages is summarized in (7); they are further explained in the
following paragraph.

(7) Reasons for a consonant: IF analogy
a. The IF tree is more complex hierarchically than the PF tree.
b. The combinatoric mechanisms used in each yield more surface IF
contrasts than PF contrasts, respectively.
c. There is a larger number of features in the IF tree than in the
PF tree.

These facts about IFs and PFs have already been mentioned in Section 2.2.2,
but here they have new relevance because they are being used to make the
consonant:IF and vowel:PF analogy. In summary, if sign language Cs are prop-
erties of the IF tree and sign language Vs are properties of the PF tree, the
major difference between sign and spoken languages in this regard is that in
sign languages IFs and PFs are realized at the same time.

2.3.2 Sensitivity to movement-internal components


Even though vowels are similar to movements in overall function in the gram-
mar, as we have seen in the previous section, movements in ASL are different
from vowels in the way in which their complexity is calculated. In ASL, the
phonological grammar is sensitive to the number of movement components
present in a word, not simply the number of sequential movements in a form.
For a spoken language it would be like saying that vowels with one feature and
those with more than one feature behaved differently in the vowel system; this
is quite rare in spoken languages. Llogoori is one exception, since long vowels
and those with a high tone are both counted as “heavy” in the system (Goldsmith
1992). In the Prosodic Model, movements with one component (defined as a
single branching class node in the PF tree) are called simple movements (8);
see Figure 2.1 for UNDERSTAND. Movements with more than one component
are called complex movements (9); see Figure 2.2 for INFORM.

(8) Simple movement: one branching class node in the PF tree

    UNDERSTAND: PF – aperture – x x
    DIE: PF – orientation – x x
    SIT: PF – path – x x
(9) Complex movement: two or more branching class nodes in the PF tree

    INFORM: PF – path + aperture – x x
    STEAL: PF – path + orientation + aperture – x x
    FALL: PF – path + orientation – x x
    ACCOMPLISH-EASILY: PF – nonmanual + aperture – x x

ASL grammar exhibits sensitivity to the distinction between simple and complex
movements in two types of nominalization – reduplicative nominalization and
the formation of activity nouns (i.e. gerunds) – and in word order preferences.
The generalization about this sensitivity is given in (10).
(10) Movement-internal sensitivity: ASL grammar is sensitive to the
complexity of movements, expressed as the number of movement
components.
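
Stated this way, the sensitivity reduces to counting branching PF class nodes. The following sketch, under the same hypothetical dictionary encoding used above, is illustrative only; movement_type and weight_units are invented names.

    # Hypothetical sketch: movement complexity (8)-(10) and weight units.
    def branching_nodes(pf: dict) -> list:
        # A class node "branches" if features are specified beneath it.
        return [node for node, feats in pf.items() if feats]

    def movement_type(pf: dict) -> str:
        """One branching PF class node = simple; more than one = complex."""
        n = len(branching_nodes(pf))
        return "simple" if n == 1 else "complex" if n > 1 else "none"

    def weight_units(pf: dict) -> int:
        # Each branching class node contributes one weight unit (Section 2.2.2).
        return len(branching_nodes(pf))

    inform = {"path": ["[direction]"], "aperture": ["[open]", "[closed]"]}
    assert movement_type(inform) == "complex" and weight_units(inform) == 2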
With regard to nominalization, only simple movements – shown in (8) –
undergo either type of nominalization. The first work on noun–verb pairs in
ASL (Supalla and Newport 1978) describes reduplicative nominalization: the
input forms are selected by the following criteria: (a) they contain a verb that
expresses the activity performed with or on the object named by the noun, and
(b) they are related in meaning. The structural restrictions for reduplicative
nominalization are given in (11); typical forms that undergo this operation are
given in (12a–b). All of the forms in the Supalla and Newport (1978) study,
which undergo reduplication, are simple movement forms.6 There are also a
few reduplicative nouns which do not follow the semantic conditions of Supalla
and Newport (1978), but these also obey the structural condition of being simple
movements (12c–d); a typical form that undergoes reduplication is shown in
Figure 2.3.
(11) Reduplication nominalization input conditions
a. They contain a verb that expresses the activity performed with or
on the object named by the noun.

6 The movements of both syllables are also produced in a restrained manner. I am referring here
only to the nominalization use of reduplication. Complex movements can undergo reduplication
in other contexts, e.g. in various temporal aspect forms.
b. They are related in meaning.
c. They are subject to the following structural condition: simple movement stems.

Figure 2.3 Nominalization via reduplication: (a) CLOSE-WINDOW; (b) WINDOW

(12) Possible reduplicative noun/verb pairs:
a. reduplicated movement: CLOSE-WINDOW/WINDOW,
SIT/CHAIR, GO-BY-PLANE/AIRPLANE;
b. reduplicated aperture change: SNAP-PHOTOGRAPH/PHOTO-
GRAPH, STAPLE/STAPLER, THUMP-MELON/MELON;
c. no activity performed on the noun: SUPPORT, DEBT, NAME,
APPLICATION, ASSISTANT;
d. no corresponding verb: CHURCH, COLD, COUGH, DOCTOR,
CUP, NURSE.
Another nominalization process that is sensitive to simple and complex move-
ments is the nominalization of activity verbs (Padden and Perlmutter 1987),
resulting in gerunds. The input conditions are given in (13), with relevant forms
given in (14). The movement of an activity noun is “trilled”; that is, it contains
a series of rapid, uncountable movements.7 Like reduplicative nouns, inputs
must be simple movements, as defined in (8).8

(13) Activity nouns input conditions:
a. simple movement stems;
b. activity verbs.

7 This definition of “trilled movement” is based on Liddell (1990). Miller (1996) argues that the
number of these movements is, at least in part, predictable due to the position of such movements
in the prosodic structure.
8 If a [trilled] movement feature does co-occur with a stem having a complex movement, it is
predicted that the more proximal of the movement components will delete (e.g. LEARNING,
BEGGING).
Figure 2.4 Nominalization via trilled movement affixation: (a) READ; (b) READING

(14) Possible activity verb/noun pairs: READ/READING (see Figure 2.4),
DRIVE/DRIVING, SHOP/SHOPPING, ACT/ACTING, BAT/BATTING
*THROW/THROWING (violation of (13a): complex movement)
*KNOW/KNOWING (violation of (13b): stative verb)
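
The input conditions in (11) and (13) combine a structural test (a simple movement stem) with lexical information. The sketch below packages them as predicates; the Stem class and its boolean annotations are invented conveniences for exposition, not part of the analysis.

    # Hypothetical eligibility checks for the two nominalization processes.
    from dataclasses import dataclass

    @dataclass
    class Stem:
        branching_pf_nodes: int            # structural information
        activity_on_object: bool = False   # condition (11a)
        related_in_meaning: bool = False   # condition (11b)
        is_activity_verb: bool = False     # condition (13b)

    def is_simple(stem: Stem) -> bool:
        return stem.branching_pf_nodes == 1

    def reduplicative_noun_ok(stem: Stem) -> bool:
        # (11a-c): the two semantic conditions plus the simple-movement condition
        return is_simple(stem) and stem.activity_on_object and stem.related_in_meaning

    def activity_noun_ok(stem: Stem) -> bool:
        # (13a-b): a simple movement stem that is an activity verb
        return is_simple(stem) and stem.is_activity_verb

    read_stem = Stem(branching_pf_nodes=1, is_activity_verb=True)
    throw_stem = Stem(branching_pf_nodes=2, is_activity_verb=True)  # complex movement
    assert activity_noun_ok(read_stem) and not activity_noun_ok(throw_stem)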
The formalization of the nominalization operations for reduplicative and
activity nouns is given in (15). A weight unit (WU) is formed by a branching
node of the PF tree. Reduplication generates another simple movement syllable,
while activity noun formation introduces a [trilled] feature at the site of the
branching PF node.
(15) Formalization of input and output structures of words:

    a. input to nominalization: word – PF – syllable – WU – class node
    b. reduplication output: word – PF – two syllables, each syllable – WU – class node
    c. activity noun output: word – PF – syllable – WU – class node – [trilled]

Another phenomenon that shows sensitivity to movement complexity is
seen in the gravitation of complex movements to sentence-final position (16).
This type of phenomenon is relatively well known in spoken languages, i.e.
when heavy syllables have an affinity with a particular sentential position (Zec
and Inkelas 1990). In (16a), the complex movement co-occurs with a per-
son agreement verb stem (Padden 1983). In (16b) the complex movement
co-occurs with a spatial agreement verb stem (Padden 1983). In (16c) the
complex movement occurs in a “verb sandwich” construction (Fischer and Janis
1990). In such constructions, which are a type of serial verb construction, a noun
argument occurs between two instances of the same verb stem. The first instance
is uninflected; the second instance, in sentence-final position, has temporal and
spatial affixal morphology, which also makes the form phonologically heavy.9
(16) Word order and syllable weight:
a. in agreement verbs:
   i. 1GIVE2 BOOK (simple movement: one branching PF class node)
      ‘I give you the book.’
   ii. ?1GIVE2[habitual] BOOK (complex movement: two branching PF class nodes)
      ‘I give you the book repeatedly.’
   iii. BOOK 1GIVE2pl[exhaustive] (complex movement: three branching PF class nodes)
      ‘I give each of you a book.’
   iv. *1GIVE2pl[exhaustive] BOOK
b. in spatial verbs:
   i. aPUTb NAPKIN (one branching PF class node)
      ‘(Someone) placed the napkin there.’
   ii. ?aPUTb[habitual] NAPKIN (two branching PF class nodes)
      ‘(Someone) placed the napkin there repeatedly.’
   iii. NAPKIN aPUTb[exhaustive] (three branching PF class nodes)
      ‘(Someone) placed a napkin at each place.’
   iv. *aPUTb[exhaustive] NAPKIN
c. in verb sandwich constructions (Fischer and Janis 1990):
   i. S-H-E LISTEN R-A-D-I-O (two branching PF class nodes)
      ‘She listens to the radio.’
   ii. S-H-E LISTEN R-A-D-I-O LISTEN[continuous]
      (LISTEN[continuous] has three branching PF class nodes)
      ‘She was continuously listening to the radio . . .’
   iii. *S-H-E LISTEN[continuous] R-A-D-I-O LISTEN
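
The judgments in (16a–b) line up with a simple threshold on weight units: a verb whose movement carries three branching PF class nodes must be sentence-final, and one with two prefers that position; (16c.iii) shows the same ban in verb sandwiches. The sketch below merely encodes those reported judgments; it is a description of the data, not an analysis, and the function name is invented.

    # Sketch of the weight-based ordering preference in (16a-b).
    def acceptability(weight_units: int, verb_is_final: bool) -> str:
        if weight_units >= 3 and not verb_is_final:
            return "*"     # (16a.iv), (16b.iv); cf. (16c.iii)
        if weight_units == 2 and not verb_is_final:
            return "?"     # (16a.ii), (16b.ii)
        return "ok"

    assert acceptability(1, verb_is_final=False) == "ok"   # (16a.i)
    assert acceptability(2, verb_is_final=False) == "?"    # (16a.ii)
    assert acceptability(3, verb_is_final=True) == "ok"    # (16a.iii)
    assert acceptability(3, verb_is_final=False) == "*"    # (16a.iv)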

2.4 Differences concerning segments


In this section, two kinds of feature and segment behavior in sign languages are
addressed. First, it will be made clear that even though segments are predictable
from the features present in the structure, they are still needed by the grammar
because they are referred to in the system of constraints. Second, the canonical
relationship between root nodes and segments is discussed.
9 The phonological explanation may be only one part of a full account of these phenomena.
Fischer and Janis (1990) propose a syntactic explanation for the verb sandwich construction.
2.4.1 Segments: Predictable, yet required by the grammar


A segment in the Prosodic Model is defined as the minimal concatenative unit
required by the grammar for timing (i.e. duration) or ordering effects.10 As
described earlier, features in the PF tree generate segments, but the ones in the
IF tree do not. The difference between the placement of segments in spoken
and sign language phonological structure is given in (17).
(17) Spoken language hierarchy of units: segments dominate features;11
Sign language hierarchy of units: features dominate segments.
Since features predict segmental structure in the Prosodic Model, we can say
that features dominate segmental structure. Despite their predictability, seg-
ments cannot be dispensed with altogether, since they are needed to capture the
environment for morphophonemic operations, as is shown below.
Segments are needed in order to account for several lengthening effects
in ASL. Two of them are the result of morphophonemic operations: inten-
sive affixation (18) and delayed-completive aspect affixation (19). A third is a
purely phonological operation: phrase-final lengthening (20). One cannot cap-
ture lengthening effects such as these unless all of the features associated to a
particular timing unit are linked together. The point is that, for each operation,
signs that have one or more than one branching PF class node(s) undergo the
lengthening operation in an identical way. Feature information must be gath-
ered together into segmental units so that forms in (18b), (19b), and (20b) do
not require a separate rule for each feature set affected by the rule. The sets of
features can include any of those under the PF tree: nonmanual, setting, path,
orientation, and aperture.
A form, such as UNDERSTAND, has a handshape (aperture) change at the
forehead. In the form meaning “intensive” ([18]; Klima and Bellugi 1979),
and in the form meaning “delayed completive” ([19]; Brentari 1996), the du-
ration of the initial handshape is longer than in the uninflected form. The form
ACCOMPLISH-EASILY has both an aperture change and a non-manual move-
ment (the mouth starts in an open position and then closes). Both the initial
handshape and nonmanual posture are held longer in both of the complex mor-
phological forms in (18) and (19). In the intensive form, the initial segment
lengthening is the only modification of the stem.
(18) Intensive affixation: ∅ → xi / [stem ___ xi
Prose: Copy the leftmost segment of a stem to generate a form with
intensive affixation.
10 Some models of sign language phonology (van der Hulst 1993; 1995) equate the root node and
the segment. They are referred to as monosegmental models. In such models, all ordering or
duration phenomena that involve more than one set of features, such as the phenomena discussed
in this section, would need to be handled by a different mechanism.
11 This point is also made in van der Hulst (2000).
a. signs that undergo this operation containing one branching PF node:
   UNDERSTAND, TAN, DEAD, CONCENTRATED, INEPT, GOOD, AGREE
b. signs that undergo this operation containing more than one branching PF node:
   ACCOMPLISH-EASILY, FASCINATED, FALL-ASLEEP, FINALLY

Figure 2.5 (a) UNDERSTAND (simple movement sign); (b) ACCOMPLISH-EASILY
(complex movement sign)
In the delayed completive form, the initial segment is lengthened, and a [trilled
movement] is added to the resulting initial geminate.
(19) Delayed completive aspect: ∅ → xi / [stem ___ xi, with [wiggle] linked to the inserted segment
Prose: Copy the leftmost segment of a stem and add a [wiggle] feature
to generate a form with delayed completive affixation.
a. signs that undergo this operation containing one branching PF node:
UNDERSTAND, FOCUS, DEAD
b. signs that undergo this operation containing more than one branch-
ing PF node:
INFORM, ACCOMPLISH-EASILY, RUN-OUT-OF, FALL-
ASLEEP, FINALLY
In the phonological operation of phrase-final lengthening (first discussed as
Mora-Insertion in Perlmutter 1992), the final segment is targeted for lengthen-
ing, and the simple and the complex movement forms are lengthened identically
(20), just as they were in (18) and (19).
(20) Phrase-final lengthening: ∅ → xi / xi ___ ]p-phrase
Prose: At the end of a phonological phrase, copy the rightmost
segment.
a. signs that undergo this operation containing one branching PF node:
UNDERSTAND, TAN, DEAD, FOCUS CONCENTRATED,
INEPT, GOOD
b. signs that undergo this operation containing more than one branch-
ing PF node:
INFORM, ACCOMPLISH-EASILY, FASCINATED, RUN-OUT-
OF, FALL-ASLEEP, FINALLY
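
Because (18)–(20) all copy an edge segment, they can be sketched uniformly over a list of x-slots. The representation below is deliberately schematic (a slot is just a string); it does not model the association of PF features to the new segment, and the function names are invented.

    # Schematic sketch of the three lengthening operations (18)-(20).
    def intensive(slots: list) -> list:
        # (18): copy the leftmost segment of the stem
        return [slots[0]] + slots

    def delayed_completive(slots: list) -> list:
        # (19): copy the leftmost segment and link [wiggle] to the copy
        return [slots[0] + "+[wiggle]"] + slots

    def phrase_final_lengthening(slots: list) -> list:
        # (20): copy the rightmost segment at the end of a phonological phrase
        return slots + [slots[-1]]

    # The same rule applies unchanged whether a sign has one branching PF
    # node (UNDERSTAND) or several (INFORM), the point of (18b)-(20b).
    assert intensive(["x1", "x2"]) == ["x1", "x1", "x2"]
    assert phrase_final_lengthening(["x1", "x2"]) == ["x1", "x2", "x2"]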
Because the segment, defined as above, is needed to capture these lengthening
phenomena, this is evidence that it is a necessary unit in the phonology of sign
languages.

2.4.2 Root nodes and timing slots


Now we turn to how segments and root nodes are organized in the phonological
representation. In spoken languages the root node has a direct relation to the
timing or skeletal tier, which contains either segments or moras. While affricates
and diphthongs demonstrate that the ratio of root nodes to timing slots is
flexible in spoken languages, the default case is one timing slot per root node.
The Association Convention (21) expresses this well, since in the absence of
any specification or rule effects to the contrary, the association of tones to tone
bearing units (TBUs) proceeds one-to-one.
(21) Association Convention (from Goldsmith 1976): In the absence of
any specification or rule effects to the contrary, TBUs are associated
to tones, one-to-one, left-to-right.
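
As a throwaway illustration (not from the source), the default case in (21) amounts to zipping the two tiers left-to-right; the function name is invented.

    # Illustrative sketch of the Association Convention (21).
    def default_association(tbus: list, tones: list) -> list:
        """Pair tone-bearing units with tones one-to-one, left-to-right;
        leftovers await language-specific rules (not modeled here)."""
        return list(zip(tbus, tones))

    assert default_association(["ma", "la"], ["H", "L"]) == [("ma", "H"), ("la", "L")]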
This canonical one-to-one relationship between root nodes and segments does
not hold in sign languages, since the canonical shape of root to segments cor-
responds closely to that of a diphthong, i.e. one root node to two timing slots.
For this reason, I would argue that segments are not identified with the root, but
are rather predictable from features. Thus, the canonical ratio of root nodes to
timing slots in sign languages is 1:2, rather than 1:1 as it is in spoken languages,
as given in (22).
(22) The canonical ratio of root nodes to segmental timing slots in sign
languages is 1:2, rather than 1:1 as it is in spoken languages.
This situation is due to two converging factors. First, segments are predictable
from features, but they are also referred to in rules (18)–(20), so I would ar-
gue that their position in the representation should reflect this, placing feature
structures in a position of dominance over segments. Second, there is no mo-
tivation for assigning the root node either to the IF or the PF node only.12 The
inventory of surface root-to-segment ratios for English and ASL are given in

12 Space does not permit me to give a more detailed set of arguments against these alternatives
here.
(23)–(24). A schema for the root-feature-segment relation for both spoken and
signed languages is given in (25a–b).13

(23) Spoken language phonology: root-to-segment ratios (English):

    a. 1:1 ‘dot’ [dat] – each of the three root nodes (d, a, t) links to one x-slot
    b. 2:1 ‘dude’ [duːd] – the long vowel [uː] links one root node to two x-slots
    c. 1:2 ‘jot’ [dʒat] – the affricate [dʒ] links two root nodes to one x-slot
(24) Sign language phonology: root-to-segment ratios (ASL):

    a. 2:1 UNDERSTAND – one root node; PF (aperture: [open]) generates two x-slots
    b. 3:1 DESTROY – one root node; PF (path: [tracing], [repeat: 180°]) generates three surface x-slots
    c. 4:1 BACKGROUND – one root node; PF (path: [direction: >1], [repeat]) generates four x-slots

(25) Schema of root/feature/segment relationship:

    a. Spoken language schema: x – root – melody
       (the timing slot dominates the root node, which dominates the melodic features)
    b. Sign language schema: root – melody – x
       (the root node dominates the melodic features, which generate the timing slots)
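
Before turning to the summary, note that the surface counts in (24b–c) can be reproduced mechanically, provided the OCP-driven deletion for bidirectional movements (fn. 13) is stipulated rather than derived. The shorthand feature strings below are assumptions of this sketch, not the model’s notation.

    # Sketch reproducing the surface x-slot counts in (24b-c).
    def surface_path_slots(path_feats: set) -> int:
        slots = 2 * len(path_feats)        # each path feature -> two x-slots
        if "[repeat:180]" in path_feats:   # bidirectional movement: the two
            slots -= 1                     # identical medial segments merge (OCP)
        return slots

    assert surface_path_slots({"[tracing]", "[repeat:180]"}) == 3    # DESTROY (24b)
    assert surface_path_slots({"[direction:>1]", "[repeat]"}) == 4   # BACKGROUND (24c)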

To summarize, at the segmental level of sign language structure as defined
in the Prosodic Model, there are two differences between signed and spoken

13 These are surface representations; for example, in English (23b) the /u/ in /dud/ is lengthened
before a voiced coda consonant resulting in an output [du:d]. Also, in ASL (24b) DESTROY,
the input form generates four segments due to the two straight path shapes located at the highest
node of the PF tree; however, since the second and third segments are identical in bidirectional
movements (indicated by the [repeat: 180°] feature), one is deleted to satisfy the Obligatory
Contour Principle (OCP) (Brentari 1998; Chapter 5).
languages: segments are necessary – but predictable – and the canonical rela-
tionship between roots and segments is 1:2, rather than 1:1.

2.5 Differences at the lexical level


The differences in this section are concerned with the preferred form that words
assume in signed and spoken languages, and how words are recognized as
distinct from one another in the lexicon.

2.5.1 Word shape


One area of general interest within the field of phonology is crosslinguistic
variation in canonical word shape; that is, what is the preferred phonological
shape of words across languages. For an example of such canonical word prop-
erties, many languages – including the Bantu language Shona (Myers 1987)
and the Austronesian language Yidin (Dixon 1977) – require that all words
be composed of binary branching feet. With regard to statistical tendencies at
the word level, there is also a preferred canonical word shape exhibited by the
relationship between the number of syllables and morphemes in a word, and
it is here that sign languages differ from spoken languages. In general, sign
language words tend to be monosyllabic (Coulter 1982), even when the forms
are polymorphemic. The difference exhibited by the canonical word shape of
sign language words is given in (26).

(26) Unlike spoken languages, sign languages have a proliferation of mono-
syllabic, polymorphemic words.

This relationship between syllables and morphemes is a hybrid measurement,
which is both phonological and morphological in nature, in part due to the shape
of stems and in part due to the type of affixal morphology in a given language.
A language, such as Chinese, contains words that tend to be monosyllabic and
monomorphemic, because it has monosyllabic stems and little overt morphol-
ogy (Chao 1968). A language, such as West Greenlandic, contains stems of a
variety of shapes and a rich system of affixal morphology that lengthens words
considerably (Fortescue 1984). In English, stems tend to be polysyllabic, and
there is relatively little affixal morphology. In sign languages, stems tend to be
monosyllabic (i.e. one movement; Coulter 1982). There is a large amount of
affixal morphology, but most of these forms are less than a segment in size;
hence, polymorphemic and monomorphemic words are typically not different
in word length. Table 2.3 schematizes the canonical word shape in terms of the
number of morphemes and syllables per word.
Table 2.3 Canonical word shape according to the number of syllables
and morphemes per word

                  Monosyllabic       Polysyllabic
Monomorphemic     Chinese            English
Polymorphemic     Sign languages     West Greenlandic

Except for the relatively rare morphemic change by ablaut marking the preterit
in English (sing-PRESENT/sang-PRETERIT; ring-PRESENT/rang-PRETERIT), or
for person marking in Hua (Haiman 1979), indicated by the
[±back] feature on the vowel, spoken languages tend to create polymorphemic
words by adding sequential material in the form of segments or syllables. Even
in Semitic languages, which utilize non-concatenative morphology, lexical roots
and grammatical vocalisms alternate with one another in time; they are not lay-
ered onto the same segments used for the root as they are in sign languages.
This difference is a remarkable one; in this regard, sign languages constitute
a typological class unto themselves. No spoken language has been found that
is both as polysynthetic as sign languages and yet makes the morphological
distinctions primarily in monosyllabic forms. An example of a typical poly-
morphemic, monosyllabic structure in a sign language is given in Figure 2.6.14

2.5.2 Minimal pairs


This final section addresses another word-level phenomenon, i.e. the notion of
minimal pairs in signed and spoken languages. Even though minimal pairs in
phonological theory have traditionally been based on a single feature, advances
in phonological representation make it possible to have minimal pairs based on
a variety of types of structure. Any pair of forms that differs crucially in one and
only one respect (whatever the structural locus of this difference) can be called
a minimal pair. For example, the difference between the signs AIRPLANE and
MOCK is based on the presence vs. absence of a thumb structure: AIRPLANE
has a thumb structure and MOCK does not; see Figure 2.7 and the structures in
(27). For reasons having to do with the way the thumb behaves with respect to
the other fingers, the thumb is a branch of structure, not a feature, yet this type
of difference can still be referred to as a minimal pair.
14 The number of morphemes present in this form is subject to debate. The handshape might be 1–2
morphemes, and the movement (with its beginning and ending points) may be 1–4 morphemes.
Orientation of the hands toward each other, and in space, adds another 1–3 morphemes. Until a
definitive analysis is achieved, I would say that the total number of morphemes for this form is
minimally five and maximally nine.

Figure 2.6 Polymorphemic form with the following morphological structure (conservative estimate of 6 morphemes): “two (1); hunched-upright-beings (2); make-their-way-forward (3); facing-forward (4); carefully (5); side-by-side (6)”

(27) a. AIRPLANE                          b. MOCK

              IF                                    IF
              |                                     |
           fingers1                              fingers1
           /      \                                 |
     fingers0      thumb                         fingers0
      /    \       [unopposed]                    /    \
  quantity  ref                             quantity    ref
  [one]     [ulnar]                         [one]       [ulnar]
  [all]                                     [all]

Unlike the other sections of this chapter, the central point of this section is to
show that minimal pairs in signed and spoken language are not fundamentally
different, but that a different structure is required for sign languages if we are
to see this similarity. If features dominate segments, as I have described is the
case for sign languages, this similarity is quite clear; if segments dominate
features, as is the case for spoken languages, the notion of the minimal pair in
sign language becomes difficult to capture.
The reason for this is as follows. If the handshapes for AIRPLANE and
MOCK are minimally different, then all things in other structures being equal,
the signed words in which they occur should also be minimally different. This is
the intuition of native signers, and this is the basis upon which Stokoe (1960) and
Klima and Bellugi (1979) established minimal pairs. In the Hold–Movement
Phonological Model proposed by Liddell and Johnson (1983; 1989) – which is
a model where segments dominate features – such signs are not minimal pairs,

Figure 2.7a Handshape used in AIRPLANE with a thumb specification; 2.7b Handshape used in MOCK with no thumb specification

because MOCK and AIRPLANE are signs where differences exist in more
than one segment. MOCK and AIRPLANE each have four segments, and the
handshape is the same for all of the segments. In the Prosodic Model, barring
exceptional circumstances, IFs spread to all segments.

(28) Prosodic and Hold–Movement representations of AIRPLANE (hsa) and MOCK (hsb)

a. Hold–Movement Model representations

   AIRPLANE                            MOCK
   X    X    X    X                    X    X    X    X
   |    |    |    |                    |    |    |    |
   hsa  hsa  hsa  hsa                  hsb  hsb  hsb  hsb

b. Prosodic Model representations

   AIRPLANE                            MOCK
        root                                root
       /    \                              /    \
     IF      PF                          IF      PF
     |        |                          |        |
    hsa      path                       hsb      path
             [direction: >1]                     [direction: >1]
             [repeat]                            [repeat]
             |                                   |
             x  x  x  x                          x  x  x  x

I have suppressed the details of the representations that are not relevant here. The
important point is that the handshape features are represented once in the Prosodic
Model, but once per segment in the Hold–Movement Model. The Prosodic
Model representation allows handshape and place of articulation features to be represented only once, and then to be allowed to spread to all segments. Sandler
(1986; 1987) first proposed the autosegmental representation for handshape; the
autosegmental representation for place of articulation is a more recent innova-
tion in the Prosodic Model (Brentari 1998) and in the Dependency Phonology
Model (van der Hulst 1993; 1995). Without the type of structure expressed in
(28b), most forms considered to be minimally different by native signers are
not counted as such.
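
This representational point can be made concrete with a small sketch, a minimal Python illustration in which the feature labels are hypothetical stand-ins for the structures in (27) and (28), not the actual feature geometry. Counting the loci at which AIRPLANE and MOCK differ under each encoding reproduces the contrast just described:

    # Hold-Movement style: handshape is specified once per timing segment,
    # so the single handshape contrast shows up in all four segments.
    hm_airplane = [{"hs": "Y-with-thumb"}] * 4
    hm_mock     = [{"hs": "Y-no-thumb"}] * 4
    hm_diffs = sum(a != b for a, b in zip(hm_airplane, hm_mock))

    # Prosodic style: inherent features (IF) are specified once at the root
    # and spread to the predictable timing slots, so the contrast is one locus.
    pm_airplane = {"IF": "Y-with-thumb", "PF": ("path", "repeat")}
    pm_mock     = {"IF": "Y-no-thumb",   "PF": ("path", "repeat")}
    pm_diffs = sum(pm_airplane[k] != pm_mock[k] for k in pm_airplane)

    print(hm_diffs)  # 4 -- four segmental differences, so not a minimal pair
    print(pm_diffs)  # 1 -- one structural difference, matching signer intuition

The segment-per-feature encoding multiplies one handshape contrast across all four timing slots, while the root-level encoding registers it once, which is what the minimal-pair intuition requires.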

2.6 What comprises a modality-independent phonology?


Within the Prosodic Model, the menu of units available to signed and spoken
languages is the same, but because of modality effects, the relative distribu-
tion of the units within the grammar is different. Based on the evidence in
this chapter, I conclude that modality is – at least in part – responsible for
the phonological differences between signed and spoken languages. To be precise, these differences are due to the visual system's advantage in processing
paradigmatic information more quickly and with greater accuracy. We
have seen that the grammar has exploited these advantages in several ways.
These structural differences warrant a different organization of the units within
the representation in several cases.
Some properties of phonology that are common to signed and spoken lan-
guages are given in (29), and some that are different in (30).
(29) Phonological properties common to both sign and spoken languages:
a. There is a part of structure that carries most of the paradigmatic
contrasts: consonants in spoken languages; handshape + Place (IFs)
in sign languages.
b. There is a part of structure that comprises the medium by which the
signal is carried over long distances: vowels in spoken languages;
movements in sign languages.
c. There is a calculation of complexity carried out at levels of the
structure independent from the syntax: i.e. in prosodic structure.
d. One of the roles of the root node is to function as a liaison point
between the phonology and the syntax, gathering all of the feature
information together in a single unit.
(30) Phonological properties that differ between sign and spoken languages;
in sign languages:
a. The default relationship of root node to timing slots is 1:2, not 1:1.
b. Timing units are predictable, rather than phonemic.
c. Cs and Vs are realized at the same time, rather than sequentially.
d. The phonology is sensitive to the number of movement components
present.

e. The calculation of prosodic complexity is more focused on paradigmatic structure.

This chapter has shown that all of the divergent properties in (30) are due to
greater sensitivity to paradigmatic structure. This sensitivity can be traced to
the advantage of the visual system for vertical processing. Certain structural
rearrangement and elaboration are necessary to represent sign languages efficiently,
well beyond simply re-naming features. The common properties in (29) are not
nearly as homogeneous in nature as the divergent ones, since they are not
attributable to physiology; these are likely candidates for UG.

Acknowledgments
I am grateful to Arnold Davidson, Morris Halle, Michael Kenstowicz, Richard
Meier, Mary Niepokuj, Cheryl Zoll, and two anonymous reviewers for their
helpful discussion and comments on a previous version of this chapter.

2.7 References
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bregman, Albert S. 1990. Auditory scene analysis. Cambridge, MA: MIT Press.
Brentari, Diane. 1990a. Theoretical foundations of American Sign Language phonol-
ogy. Doctoral dissertation, University of Chicago. (Published 1993, University of
Chicago Occasional Papers in Linguistics, Chicago, IL.)
Brentari, Diane. 1990b. Licensing in ASL handshape. In Sign language research: Theo-
retical issues, ed. Ceil Lucas, 57–68. Washington, DC: Gallaudet University Press.
Brentari, Diane. 1995. Sign language phonology: ASL. In A handbook of phonological
theory, ed. John Goldsmith, 615–639. New York: Basil Blackwell.
Brentari, Diane. 1996. Trilled movement: Phonetic realization and formal representation.
Lingua 98:43–71.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Brentari, Diane and Howard Poizner. 1994. A phonological analysis of a deaf Parkinso-
nian signer. Language and Cognitive Processes 9: 69–99.
Chao, Y. R. 1968. A grammar of spoken Chinese. Berkeley: University of California
Press.
Chase, C. and A. R. Jenner. 1993. Magnocellular visual deficits affect temporal process-
ing of dyslexics. Annals of the New York Academy of Sciences 682:326–329.
Chomsky, Noam and Morris Halle. 1968. The sound pattern of English. New York:
Harper and Row.
Clements, George N. 1985. The geometry of phonological features. Phonology Yearbook
2:225–252.
Clements, George N. and Elizabeth V. Hume. 1995. The internal organization of speech
sounds. In A handbook of phonological theory, ed. John Goldsmith, 245–306.
New York: Basil Blackwell.

Corina, David, and Wendy Sandler. 1993. On the nature of phonological structure in
sign language. Phonology 10:165–207.
Coulter, Geoffrey. 1982. On the nature of ASL as a monosyllabic language. Paper pre-
sented at the Annual Meeting of the Linguistic Society of America, San Diego, CA.
Coulter, Geoffrey, ed. 1993. Phonetics and phonology, Vol. 3: Current issues in ASL
phonology. San Diego, CA: Academic Press.
Coulter, Geoffrey and Stephen Anderson. 1993. Introduction. In Coulter, ed. (1993),
1–17.
Crasborn, Onno. 2001. Phonetic implementation of phonological categories in Sign
Language of the Netherlands. Doctoral dissertation, HIL, Leiden University.
Dixon, R. M. W. 1977. A grammar of Yidiny. Cambridge/New York: Cambridge Uni-
versity Press.
Emmorey, Karen and Harlan Lane. 2000. The signs of language revisited: Festschrift for
Ursula Bellugi and Edward Klima. Mahwah, NJ: Lawrence Erlbaum Associates.
Fischer, Susan and Wynne Janis. 1990. Verb sandwiches in American Sign Language. In
Current trends in European sign language research, ed. Siegmund Prillwitz and
Tomas Vollhaber, 279–294. Hamburg, Germany: Signum Press.
Fortescue, Michael D. 1984. West Greenlandic. London: Croom Helm.
Frost, Ram and Shlomo Bentin. 1992. Reading consonants and guessing vowels: Visual
word recognition in Hebrew orthography. In Orthography, phonology, morphol-
ogy and meaning, ed. Ram Frost and Leonard Katz, 27–44. Amsterdam: Elsevier
(North-Holland).
Goldsmith, John. 1976. Autosegmental phonology. Doctoral dissertation, MIT, Cam-
bridge, MA. (Published 1979, New York: Garland Press.)
Goldsmith, John. 1992. Tone and accent in Llogoori. In The joy of syntax: A festschrift in
honor of James D. McCawley, ed. D. Brentari, G. Larson, and L. MacLeod, 73–94.
Amsterdam: John Benjamins.
Goldsmith, John. 1995. A handbook of phonological theory. Oxford/Cambridge, MA:
Basil Blackwell.
Green, David M. 1971. Temporal auditory acuity. Psychological Review 78:540–551.
Haiman, John. 1979. Hua: A Papuan language of New Guinea. In Languages and their
status, ed. Timothy Shopen, 35–90. Cambridge, MA: Winthrop.
Hirsh, Ira J., and Carl E. Sherrick. 1961. Perceived order in different sense modalities.
Journal of Experimental Psychology 62:423–432.
Holzrichter, Amanda S. and Richard P. Meier. 2000. Child-directed signing in ASL. In
Language acquisition by eye, ed. Charlene Chamberlain, Jill P. Morford and Rachel
Mayberry, 25–40. Mahwah, NJ: Lawrence Erlbaum Associates.
Itô, Junko. 1986. Syllable theory in prosodic phonology. Doctoral dissertation, Univer-
sity of Massachusetts, Amherst. (Published 1989, New York: Garland Press.)
Jakobson, Roman, Gunnar Fant, and Morris Halle. 1951, reprinted 1972. Preliminaries
to speech analysis. Cambridge, MA: MIT Press.
Jenkins, J., W. Strange, and M. Salvatore. 1994. Vowel identification in mixed-speaker
silent-center syllables. The Journal of the Acoustical Society of America 95:1030–
1035.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Kohlrausch, A., D. Püschel, and H. Alphei. 1992. Temporal resolution and modulation
analysis in models of the auditory system. In The Auditory processes of speech:

From sounds to words, ed. Marten E. H. Schouten, 85–98. Berlin/New York: Mouton de Gruyter.
Liddell, Scott. 1984. THINK and BELIEVE: Sequentiality in American Sign Language.
Language 60:372–392.
Liddell, Scott. 1990. Structures for representing handshape and local movement at
the phonemic level. In Theoretical issues in sign language research, Vol. 1, ed.
Susan Fischer and Patricia Siple, 37–65. Chicago, IL: University of Chicago
Press.
Liddell, Scott and Robert E. Johnson. 1983. American Sign Language: The phonological
base. Manuscript, Gallaudet University, Washington, DC.
Liddell, Scott, and Robert E. Johnson. 1986. American Sign Language compound for-
mation processes, lexicalization, and phonological remnants. Natural Language
and Linguistic Theory 4:445–513.
Liddell, Scott and Robert E. Johnson. 1989. American Sign Language: The phonological
base. Sign Language Studies 64:197–277.
McCarthy, John. 1988. Feature geometry and dependency: A review. Phonetica 41:
84–105.
Meier, Richard P. 1993. A psycholinguistic perspective on phonological segmentation
in sign and speech. In Coulter, ed. (1993), 169–188.
Meier, Richard P. 2000. Shared motoric factors in the acquisition of sign and speech. In
Emmorey and Lane, 333–356.
Meier, Richard P., Claude Mauk, Gene R. Mirus, and Kimberly E. Conlin. 1998. Mo-
toric constraints on early sign acquisition. In Proceedings for the Child Language
Research Forum, Vol. 29, ed. Eve Clark, 63–72. Stanford, CA: CSLI.
Miller, Christopher. 1996. Phonologie de la langue des signes québecoise: Structure
simultanée et axe temporel. Doctoral dissertation, Université du Québec à Montreal.
Myers, Scott. 1987. Tone and the structure of words in Shona. Doctoral dissertation,
University of Massachusetts, Amherst, MA.
Nespor, Marina, and Irene Vogel. 1986. Prosodic phonology. Dordrecht: Foris.
Padden, Carol. 1983. Interaction of morphology and syntax in American Sign Language.
Doctoral dissertation, University of California, San Diego, CA. (Published 1988,
Garland Press, New York.)
Padden, Carol and David Perlmutter. 1987. American Sign Language and the architecture
of phonological theory. Natural Language and Linguistic Theory 5:335–375.
Perlmutter, David. 1992. Sonority and syllable structure in American Sign Language,
Linguistic Inquiry 23:407–442.
Petitto, Laura A. 2000. On the biological foundations of human language. In Emmorey
and Lane, 449–473.
Petitto, Laura A. and Paula Marentette. 1991. Babbling in the manual mode: Evidence
for the ontogeny of language. Science 251:1493–1496.
Sandler, Wendy. 1986. The spreading hand autosegment of American Sign Language.
Sign Language Studies 50:1–28.
Sandler, Wendy. 1987. Sequentiality and simultaneity in American Sign Language
phonology. Doctoral dissertation, University of Texas, Austin, Texas.
Sandler, Wendy. 1989. Phonological representation of the sign. Dordrecht: Foris.
Seidenberg, Mark. 1992. Beyond orthographic depth in reading. In Orthography,
phonology, morphology and meaning, ed. Ram Frost and Leonard Katz, 85–118.
Amsterdam: Elsevier (North-Holland).

Stack, Kelly. 1988. Tiers and syllable structure: Evidence from phonotactics. M.A. thesis,
University of California, Los Angeles.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American Deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965. A dictionary
of American Sign Language on linguistic principles. Silver Spring, MD: Linstok
Press.
Strange, Winifred. 1987. Information for vowels in formant transitions. Journal of Mem-
ory and Language 26:550–557.
Supalla, Ted and Elissa Newport. 1978. How many seats in a chair? The derivation of
nouns and verbs in American Sign Language. In Understanding language through
sign language research, ed. Patricia Siple, 91–133. New York: Academic Press.
Uyechi, Linda. 1995. The geometry of visual phonology. Doctoral dissertation, Stanford
University, Stanford, CA. Published 1996, CSLI, Stanford, California.
van der Hulst, Harry. 1993. Units in the analysis of signs. Phonology 10:209–241.
van der Hulst, Harry. 1995. The composition of handshapes. University of Trondheim,
Working Papers in Linguistics, 1–17. Dragvoll, Norway: University of Trondheim.
van der Hulst, Harry. 2000. Modularity and modality in phonology. In Phonological
knowledge: Conceptual and empirical issues, ed. Noel Burton-Roberts, Philip Carr,
and Gerard J. Docherty. Oxford: Oxford University Press.
van der Hulst, Harry and Anne Mills. 1996. Issues in sign linguistics: Phonetics, phonol-
ogy and morpho-syntax. Lingua 98:3–17.
Welch, R. B. and D. H. Warren. 1986. Intersensory interactions. In Handbook of perception and human performance, Volume 1: Sensory processes and perception, ed.
Kenneth R. Boff, Lloyd Kaufman, and James P. Thomas, 25–36. New York: Wiley.
Wilbur, Ronnie. 1987. American Sign Language: Linguistic and applied dimensions,
2nd edition. Boston, MA: Little, Brown.
Wilbur, Ronnie B. 1999. Stress in ASL: Empirical evidence and linguistic issues. Lan-
guage and Speech 42:229–250.
Wilbur, Ronnie B. 2000. Phonological and prosodic layering of non-manuals in Amer-
ican Sign Language. In Emmorey and Lane, 213–241.
Zec, Draga and Sharon Inkelas. 1990. Prosodically constrained syntax. In The
phonology-syntax connection, ed. Sharon Inkelas and Draga Zec, 365–378.
Chicago, IL: University of Chicago Press.
3 Beads on a string?
Representations of repetition in spoken
and signed languages

Rachel Channon

3.1 Introduction
Someone idly thumbing through an English dictionary might observe two char-
acteristics of repetition in words. First, segments can vary in the number of times
they repeat. In no, Nancy, unintended, and unintentional, /n/ occurs one, two,
three, and four times respectively. In the minimal triplet odder, dodder, and
doddered, /d/ occurs one, two, and three times.
A second characteristic is that words repeat rhythmically or irregularly:
(1) Rhythmic repetition: All the segments of a word can be temporally
sliced to form at least two identical subunits, with patterns like aa,
abab, and ababab. Examples: tutu (abab), murmur (abcabc).
(2) Irregular repetition: any other segment repetition, such as abba, aabb,
abca, etc. Examples: tint (abca), murmuring (abcabcde).
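
Definitions (1) and (2) can be checked mechanically. As a rough illustration, the following Python sketch classifies a string of segments; the function name and letter encodings are illustrative only, not part of the chapter's proposal:

    def repetition_type(segments):
        """Classify a segment string: 'rhythmic' if it can be sliced into two
        or more identical subunits (definition 1), 'irregular' if any other
        segment repeats (definition 2), and 'none' if nothing repeats."""
        n = len(segments)
        for p in range(1, n // 2 + 1):  # candidate subunit lengths
            if n % p == 0 and segments == segments[:p] * (n // p):
                return "rhythmic"
        return "irregular" if len(set(segments)) < n else "none"

    # Rough phonemic encodings of the examples above:
    print(repetition_type("abab"))      # rhythmic   (tutu)
    print(repetition_type("abcabc"))    # rhythmic   (murmur)
    print(repetition_type("abca"))      # irregular  (tint)
    print(repetition_type("abcabcde"))  # irregular  (murmuring)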
If asked to comment on these two characteristics, a phonologist might shrug
and quote from a phonology textbook:
[An] efficient system would stipulate a small number of basic atoms and some simple
method for combining them to produce structured wholes. For example, two iterations
of a concatenation operation on an inventory of 10 elements . . . will distinguish 10³
items . . . As a first approximation, it can be said that every language organizes its lexicon
in this basic fashion. A certain set of speech sounds is stipulated as raw material. Distinct
lexical items are constructed by chaining these elements together like beads on a string.
(Kenstowicz 1994:13)

Repetition, the phonologist might say, is the meaningless result of the fact that
words have temporal sequences constructed from segments. Because languages
have only a limited segment set, then by chance some sequences include a
varying number of repeating segments. This explains Nancy and the others.
As for rhythmic and irregular repetition, by chance some segment sets repeat
exactly and some do not. And that would be the end of the story.
These data, however, are interesting when compared to sign language data.
Sign phonologists agree that signs have the same function in sign language
that words have in spoken language.1 Because their function is the same, one
might reasonably expect that the representations are similar and that signs, like
words, have segment sequences. If so, then segment repetition should behave
similarly. Irregular repetition should be common, and contrasts should occur
between one, two, three, or more repetitions within a sign.
But this is not what happens. The following seem to be correct generaliza-
tions for signs (with certain exceptions and explications to be given along the
way):
(3) Two, three, or more repetitions are not contrastive in signs.
(4) Simple signs have only rhythmic repetition.
(5) Compound signs have only irregular repetition.
The simple sign TEACH shows that number of repetitions is not contrastive;
the sign illustrates rhythmic repetition patterns. Both hands are raised to face
level. The hands move away from the face, parallel to the ground, then back
several times: out–in–out–in. The out–in motion may occur two times (abab),
three times (ababab), or more times without altering the sign’s meaning.
The compound TEACH-PERSON ‘teacher’ illustrates irregular repetition.
TEACH is made as described. Then the hands move in parallel lines down the
sides of the body (PERSON). The outward–inward repeated direction change
precedes a completely different motion down the body that does not repeat, for
an irregular repetition pattern of out–in–out–in–down (ababc).
This chapter explores the possibility that repetition is different in words and
signs because their phonological structures are different. Words have contrastive
repeating temporal sequences that must be represented with repeating segments.
Simple signs do not have contrastive repetition sequences, and a single segment
can, and should, represent them. They function like spoken words, but their
representations are like spoken segments.
Section 2 discusses contrast in the number of repetitions, and Section 3,
rhythmic and irregular repetition. Signs and words are shown to be dissimilar
for both characteristics. Section 4 presents two representations: Multiseg with
multiple segments, and Oneseg with one segment for simple signs and two for
compounds. Most examples are American Sign Language (ASL) from Costello
(1983) and use her glosses. Only citation forms of signs, or dictionary entries,
are included here, so inflected verbs, classifier predicates, and fingerspelling
are not considered.
1 Signs are not morphemes, because signs can have multiple morphemes (PARENTS has mor-
phemes for mother and father). Translation shows that signs are similar to words. Because most
sign translations are one word/one sign, it seems reasonable to assume that signs are not phrases,
but words, and attribute the occasional phrasal translation to lexical gaps on one side or the other.
While a sign may properly be called a “word” or a “sign word”, here “word” refers to spoken
words only, to keep the distinction between sign and speech unambiguous.

3.2 Number of repetitions in words and signs


In words, the number of segment repetitions is contrastive: some words are
different only because a segment repeats zero, one, two, three, or more times,
as in the minimal pairs and triplets for English in (6) (repeated segments are
emboldened).
(6) derive/derived, ten/tent, lob/blob, fat/fast/fasts, eat/ seat/seats, lip/slip/
slips, Titus/tightest, artist/tartest, classicist/classicists
Languages with single segment affixes can be expected to have minimal pairs
similar to seat/seats, and random minimal pairs like lob/blob can undoubtedly
be found in many languages.
In sign, the number of repetitions is not contrastive. Lane (1984) gives an
example of an attempt to use number of repetitions in signs contrastively from
the eighteenth century. The Abbé de l’Epée was one of the first hearing people
to use a sign language to teach deaf people. However, not satisfied with the
language of his pupils, he tried to improve it by inventing methodical signs, as
in this example:
To express something past, our pupil used to move his hand negligently toward his
shoulder . . . We tell him he must move it just once for the imperfect, twice for the
perfect, and three times for the past perfect. (Lane 1984:61)

The Abbé’s signs for the perfect and pluperfect are impossible because they
use a contrastive number of repetitions. In an ordinary, non-emphatic utterance,
two or three repetitions are usual, but variation in number of repetitions cannot
produce a minimal pair.
Repetition in speech can be defined in terms of segments, but because the
question raised here is what representation a sign has, notions of segments
within a sign cannot be relied on to define the repetition unit. A pre-theoretic
notion of motion in any one direction is used instead. TEACH is an example
of repeated outward and inward motion.2 If the action is circular, and the hand
returns to the beginning point and continues around the same circle, this also
counts as repetition. In ALWAYS, the index finger points up and circles several
times. In arcing signs, each arc counts as a motion, as in GRANDMOTHER,
with repeating arcs moving outward from the chin, where the hand does not
return to the starting point to repeat, but instead each arc begins at the last arc’s
ending point. Repeated handshape changes, such as opening and closing, are
also included, as in SHOWER, where the bunched fingers repeatedly open to a
spread handshape.
2 For some signs with a back and forth motion, such as PAPER and RESEARCH, the action
returning the hands to the original starting point is epenthetic (Supalla and Newport 1978;
Perlmutter 1990; Newkirk 1998), and the underlying repetition pattern is a–a. For other signs,
such as TEACH or DANCE, the underlying repetition pattern is ab–ab. This difference does
not affect the arguments presented here (both types have noncontrastive number of repetitions,
and are rhythmic) so abab is used for both.
The body parts involved in repetition vary. Signs can repeat hand contact with
a body part (MOTHER), knuckle bending (COLOR), wrist bending (FISH),
path motions (ME TOO), and so on.
Brentari (1998) and Wilbur and Petersen (1997) have argued that signs where
the hand draws an x-shape, such as HOSPITAL or ITALY, have repeated motion
at a 90◦ angle from the first motion. These signs are not counted as repeating,
but even if they were, they would not be a problem for the generalizations and
proposals made here, because they are not irregularly repeating, and do not
contrast for number of repetitions.
About half of all signs are nonrepeating, as in GOOD, which moves from
chin to space once. Nonrepeating and repeating signs can contrast. In THAT,
the strong hand in a fist strikes the weak hand palm once; in IMPOSSIBLE the
strong hand strikes several times. Other pairs are given in (7).
(7) a. COVER-UP/PAPER, SIT/CHAIR (Supalla and Newport 1978);
b. ON/WARN, CHECK/RESEARCH, ALMOST/EASY (Perlmutter
1990);
c. MUST/SHOULD, CAN/POSSIBLE (Wilcox 1996).
While nonrepetition can contrast with repetition, two, three, four, or more
repetitions of an action in a sign cannot change the meaning. Fischer (1973)
may have been the first to observe this. Battison (1978:54) notes that “. . . signs
which require at least two beats have no absolute limit on the actual number
of iterations . . . the [only] difference is between signs with one beat and those
with iterations.” Others note this fact in passing (Liddell and Johnson 1989:fn.
15, Uyechi 1996:117, fn. 9) or implicitly recognize it by not specifying the
number of repetitions, as in Supalla and Newport (1978). Anderson (1993:283)
observes that ASL signs can repeat “a basic gesture an indefinite number of
times. Such a pattern of repetition has no parallel in spoken languages, as far
as I can see.”
If number of repetitions is predictable, it cannot be contrastive. Coulter (1990)
reports that stressing a sign can increase the number of repetitions. Wilbur and
Nolen (1986) report that in an elicited set of 24 signs, significantly more repeti-
tions occurred when stressed than unstressed. Miller (1996) argues that number
of repetitions is predictable based on the position within a sentence. Holzrichter
and Meier (2000) report that native-signing parents use more repetitions to at-
tract their child’s attention. Some signs such as MORE have as many as 16
repetitions, with higher numbers positively correlated with neutral space (com-
pared to on the face), and with nonpath (compared to path) signs.
Sign language dictionary keys provide additional evidence. If number were
contrastive, then distinct symbols should indicate this. Instead, zigzagging or
repeated lines show repetition, with no correspondence between number of zigzags and number of repetitions. Some typical repetition symbols, from the
Israeli Sign Language (ISL) key, are two arrows pointing in the same direction
described as “repeated movement” and two sets of small arcs described as
“vibrating or shaking movement, vibrating fingers” (Savir 1992).
The symbol keys for nine ASL dictionaries or books about ASL were examined (Stokoe, Casterline, and Croneberg 1965; Ward 1978; Sternberg 1981;
Costello 1983; Fant 1983; Kettrick 1984; Shroyer and Shroyer 1984;
Riekehof 1985; Supalla 1992). Thirteen keys for dictionaries, books, or long
articles for 13 other sign languages were also examined: British (Kyle and
Woll 1985), Enga (Kendon 1980), Hong Kong, Taiwanese, and Japanese (Uno
1984; Japanese Federation of the Deaf 1991), Indonesian (Pertama 1994),
Indo-Pakistan (Zeshan 2000), ISL (Savir 1992), Korean (Son 1988), Mayan
(Shuman and Cherry-Shuman 1981), Moroccan (Wismann and Walsh 1987),
New Zealand (Kennedy and Arnold 1998), South African (Penn, Ogilvy-
Foreman, Simmons, and Anderson-Forbes 1994), and Taiwanese (Chao, Chu,
and Liu 1988). No dictionary has symbols showing that number is contrastive.

3.3 Rhythmic and irregular repetition in words and signs


The term rhythmic repetition is used to allow a modality-neutral comparison
with sign, but in speech, it is almost the same as total reduplication, except that
it also includes words like English tutu, where repetition is not a productive
morphological process. More than two repetitions seem rare in speech: the
LLBA database has only one record for triplication (in Ewe, Ameka 1999).
How many words have rhythmic or irregular repetition? The example words
and exercises in the introduction and Chapters 1–3 of Kenstowicz 1994 – which
included 69 different languages/dialects, from 17 different language families –
were counted.3 Using a phonology textbook avoided orthography problems, and
offered a large crosslinguistic sample.4 Table 3.1 shows that if a word repeats,
about 99% of the time it will be irregular repetition (Fox ahpemeki ‘above’,
Ewe feflee ‘bought’).5 About 1% of repeating words have rhythmic repeti-
tion (Chukchee tintin ‘ice’). Table 3.1 also shows repetition counts for short
3 Some examples are excluded because they use normal orthography in whole or part or are
duplicates.
4 Although any one example set is not representative of spoken languages, the examples as a whole
seem representative with respect to irregular and rhythmic repetition, because repetition types
are unconnected to the phonological patterns being illustrated in the examples, and because
the phonological patterns illustrated are so varied. An email inquiry to the author (Kenstowicz,
personal communication) confirmed that he did not consciously favor or disfavor repetition
types.
5 This total includes 253 words with geminate, or geminate-like segments (Basque erri ‘village’),
which have no other segment repetition. Without these geminates, irregular repetition would be
35.6 percent of the total examples.

Table 3.1 Words: Irregular and rhythmic repetition as percentages of all repetition

                          Total      Irregular         Rhythmic
                            n         n       %         n       %

Kenstowicz data 2104 1002 98.7 13 1.3


IPA Handbook data
American English 113 11 100.0 0 0.0
Amharic 94 57 100.0 0 0.0
Arabic 85 61 100.0 0 0.0
Bulgarian 92 23 100.0 0 0.0
Cantonese 129 0 0.0 0 0.0
Catalan 113 13 100.0 0 0.0
Croatian 106 51 100.0 0 0.0
Czech 83 45 100.0 0 0.0
Dutch 107 29 100.0 0 0.0
French 100 12 100.0 0 0.0
Galician 95 18 100.0 0 0.0
German 107 18 100.0 0 0.0
Hausa 164 86 97.7 2 2.3
Hebrew, Non-Oriental 89 44 100.0 0 0.0
Hindi 125 14 100.0 0 0.0
Hungarian 99 45 100.0 0 0.0
Igbo 111 27 100.0 0 0.0
Irish 126 20 100.0 0 0.0
Japanese 88 43 97.7 1 2.3
Korean 60 40 100.0 0 0.0
Persian 91 25 100.0 0 0.0
Portuguese 93 15 100.0 0 0.0
Sindhi 110 22 100.0 0 0.0
Slovene 92 50 100.0 0 0.0
Swedish 107 47 100.0 0 0.0
Taba 96 44 100.0 0 0.0
Thai 131 66 100.0 0 0.0
Tukang Besi 117 49 100.0 0 0.0
Turkish 65 44 100.0 0 0.0
Total IPA 2988 1019 99.7 3 0.3
Total all words 5092 2021 99.2 16 0.8

narratives for 29 different languages from the International Phonetic Associa-


tion (IPA) (1999).6 The Kenstowicz and IPA results are similar, and strongly
confirm that the rhythmic repetition percentage in words is extremely low.
6 All but the Taba narrative are translations of the same story (an Aesop's fable about the north
wind and the sun), and all have some duplicate word tokens (the word wind occurred four times
in the American English narrative). The American English example has 113 word tokens, but
only 64 different types; the Korean example has 60 tokens and 48 different types. All tokens
are counted (because of the time-consuming nature of determining which words are duplicates).
This does not seriously affect the percentages of rhythmic and irregularly repeating words
because the percentage of types and tokens should be approximately the same, since there is
no reason that tokens should include more examples of repetition or nonrepetition than types
do. In the Korean example, 63 percent of the types, and 67 percent of the tokens have irregular
repetition. In the American English example, 13 percent of the types and 10 percent of the tokens
have irregular repetition. (Neither example has rhythmically repeating words.) There are only
three rhythmically repeating words in the entire IPA sample, none of which have more than one
token, so the major point that irregular repetition is overwhelmingly preferred to rhythmic is
true regardless of whether tokens or types are counted.

Table 3.2 Signs: Irregular and rhythmic repetition as percentages of all repetition

                          Total      Irregular         Rhythmic
                            n         n       %         n       %

Simple signs
ASL 1135 0 0.0 527 100.0
IPSL 282 0 0.0 87 100.0
ISL 1490 0 0.0 648 100.0
NS 370 0 0.0 124 100.0
MSL 114 0 0.0 31 100.0
Total simple signs 3391 0 0.0 1417 100.0
Compound signs
ASL 75 19 100.0 0 0.0
IPSL 6 2 100.0 0 0.0
ISL 124 85 100.0 0 0.0
NS 95 47 100.0 0 0.0
MSL 35 10 100.0 0 0.0
Total compounds 335 163 100.0 0 0.0
All signs
ASL 1210 19 3.5 527 96.5
IPSL 288 2 2.2 87 97.8
ISL 1614 85 11.6 648 88.4
NS 465 47 27.5 124 72.5
MSL 149 10 24.4 31 75.6
Total all signs 3725 163 10.3 1413 89.7

Table 3.2 shows the number and percentage of signs with rhythmic and irreg-
ular repetition. All signs from the following sources were examined: Costello
1983 for ASL, Savir 1992 for ISL, Japanese Federation of the Deaf 1991
for Japanese Sign Language (Nihon Syuwa or NS), the appendix of Zeshan
2000 for Indo-Pakistan Sign Language (IPSL), the appendix to Shuman and
Cherry-Shuman 1981 for a Yucatec Mayan sign language used in the village of
Nohya (MSL).7
ASL is one of the oldest sign languages and may have the largest population of
native and nonnative signers of any sign language. MSL is at the other extreme.
Nohya’s population of about 300 included about 12 deaf people. The oldest
deaf person seems to have been the first deaf person in Nohya, and claims to
have invented the language (Shuman 1980:145), so the language is less than
100 years old. As the table shows, language age and number of users does not
significantly affect repetition patterns.
Compounds, the concatenation of two signs to make a single sign, fall along
a continuum from productive to lexical, terms loosely borrowed from Liddell
and Johnson (1986). Productive compounds strongly resemble the two signs
they are formed from. Any signer can identify the two parts. Examples are
TEACH-PERSON ‘teacher’, SHOWER-BATHE ‘shower’, SLEEP-CLOTHES
‘pajamas’ (Costello 1983), and FLOWER-GROW ‘plant’ (Klima and Bellugi
1979:205). These signs can have irregular repetition.8
7 Occasionally, a dictionary lists the same sign on different pages. Because there is no simple
way to ensure that each sign is only counted once, each token is counted. This should not
affect the reported percentages. The ISL dictionary has one ambiguous symbol: a circle with
an arrowhead, specified in the key as “full circle movement.” It is impossible to tell whether
those cycles occur once or more than once. In some signs, such as (ocean) WAVE or BICYCLE,
iconicity strongly suggests that the circular motion repeats. Furthermore, in ASL most circular
signs repeat. Therefore, all 105 signs with this symbol count as rhythmically repeating. Email
to Zeshan resolved several symbols for IPSL. A closed circle with an arrow counts as repeating,
an open circle as nonrepeating. Hands opening and closing count as repeating. In NS, repetition
may be undercounted, because pictures and descriptions do not always show repetition, though
iconicity suggests it. For example, HELP is described as “pat the thumb as if encouraging the
person.” But pat is not specified as repeated, and the picture does not show repetition, so it is
not counted as repeating. MSL signs that use aids other than the signer’s own body (pulling the
bark off a tree for BARK) are omitted.
8 Only productive compounds are systematically distinguished in the dictionaries. The ASL, NS,
and MSL dictionaries have written descriptions as well as pictures, and the compilers usually
indicate which signs they consider compounds. The ASL dictionary indicates compounds by
a “hint” (“X plus X”) and/or with two pictures labeled 1 and 2; the IPSL and NS have two
pictures labeled 1 and 2. The MSL dictionary usually indicates which are compounds, but
some judgments are needed. For example, BIRD, an upward point, followed by arm flapping,
is coded as a compound. The ISL dictionary does not explicitly recognize compounds, and
has no descriptions, but productive compounds have two pictures, instead of the normal one.
(Many signs can be identified as compounds, because the two parts can be found as separate signs
elsewhere.) However, some signs that clearly are not compounds also have two pictures; usually,
signs with handshape changes. Those ISL signs with two pictures where the difference is only
one handshape, place, or orientation change are therefore counted as simple signs. Including
these two-picture signs as simple signs decreases the count of rhythmically repeating compound
signs. The irregular repetition count is not affected, because none of these signs has irregular
repetition. Because the other dictionaries have almost no rhythmically repeating compounds,
and these signs do not look like compounds, the choice seems justified. Klima and Bellugi
(1979) list tests for whether two signs are compounds or phrases, but these cannot be used in a
dictionary search. So some signs counted as compounds are probably phrases. Including all but
the most obvious (such as how are you) was preferred to omitting signs nonsystematically.

Lexical compounds have been so changed from their two-sign origin that in
almost every respect they look like noncompound signs. Often, only historical
evidence identifies a compound origin. An example is ASL DAUGHTER, where
the flat hand touches the jaw and then the elbow crook, from GIRL (the fist
hand draws the thumb forward along the jaw line) and BABY (the two arms
cradle an imaginary baby and rock back and forth). The sources do not identify
these lexical compounds as compounds. They cannot be identified as such
without knowing the language, and so are counted as simple signs. From here
forward, noncompounds and lexical compounds are called simple signs, and
the productive compounds, compounds.
Table 3.2 supports the generalizations for repetition shown in (8), (9), and
(10).
(8) Simple signs repeat rhythmically, not irregularly.
(9) Compound signs repeat irregularly, not rhythmically.
(10) Rhythmic repetition is common in signs instead of rare as in speech.
Other researchers have observed generalizations (8) and (9). Uyechi (1996:118)
notes that “a well formed repeated gesture is articulated with two identical ges-
tures.” Supalla (1990:14–15) observes that simple signs can only have redupli-
cation (i.e. rhythmic repetition):
ASL signs show restricted types of syllabic structure . . . Coulter (1982) has argued that
simple signs are basically monosyllabic. He has suggested that the multi-syllabic forms
that exist in ASL are all either reduplicated forms or compounds (excluding the category
of fingerspelled loans). (Note that a compound is not a good example of a simple sign
since it consists of two signs.) Among the simple signs, Wilbur (1987) pointed out that
there is no ASL sign with two different syllables, as in the English verb “permit.” The
only multisyllabic signs other than compounds are with reduplicated syllables.

Table 3.2 shows that simple signs only repeat rhythmically,9 confirming gener-
alization (8), and that compound signs only repeat irregularly, confirming gen-
eralization (9). Rhythmic repetition in repeating words occurs about 1 percent of
9 Four irregularly repeating signs are excluded from the simple sign counts. Costello does not
recognize SQUIRREL as a compound. Nevertheless, this sign seems analyzable as a compound
in which the contact with the nose or chin is a reference to a squirrel’s pointed face and the
second part a reference to the characteristic action of a squirrel holding its paws near the upper
chest. ISL SICK, SMART, and NAG each have only one picture, implying they are simple signs.
However, SICK and NAG both have two places – one at the forehead and one on the hand – and
are probably compounds. The drawing for SMART, which may be misleading, has an unusual
motion that seems to be a straight motion followed by wiggling. Including these as simple
signs would change the percentage of simple signs with rhythmic repetition from 100 percent
to 99.7 percent. Costello recognizes three rhythmically repeating signs as compounds, PENNY,
NICKEL, and QUARTER. For each, Costello uses the wording “x plus x” which is one of the
guides to whether she considers a sign a compound. However, these three signs are clearly highly
assimilated lexical compounds, with a single place (forehead), and a single motion (outward,
with simultaneous finger wiggling for QUARTER). They are therefore counted as simple signs.

the time, as compared to 100 percent in simple signs, confirming generalization (10). A further generalization not seen in the tables is:
(11) Signs have only three irregular patterns.
The patterns are limited because compounds are a concatenation of two simple
signs, which are either rhythmically repeating or nonrepeating, and produce
only three possible patterns, abab–c, a–bcbc, and abab–cdcd.10
(12) a. Repeating followed by nonrepeating (abab–c)
TEACH-PERSON ‘teacher’: hand moves out, in, out, in, down
b. Nonrepeating followed by repeating (a–bcbc)
DRY-CIRCLE ‘dryer’: hand moves ipsilaterally, then in circles
c. Repeating followed by repeating (abab–cdcd )
SHOWER-BATHE ‘shower’: hand opens, closes, opens, closes,
then moves down, up, down, up
In speech, irregular repetition (which is not limited to compounds) can have
any pattern, as in tint (abca) or unintended (abcbdebfcf ). If similar patterns
occurred in compound signs, signs like (13) with a repetition pattern the same
as unintended, would occur, but they do not.
(13) An impossible sign: The hand moves up–down–in–down–out–
ipsilateral–down–contralateral–in–contralateral
To summarize, words have an overwhelming preference for irregular repeti-
tion; number of repetitions is contrastive, and any irregular repetition pattern
can occur. Signs have an overwhelming preference for rhythmic repetition,
number of repetitions is not contrastive, simple signs only repeat rhythmically,
compounds only repeat irregularly, and only three irregular repetition patterns
occur.

3.4 Representing the data: Multiseg and Oneseg


Up to this point, the goal has been to make the repetition facts explicit and to
compare these facts in speech and sign. Other researchers have also recognized

10 Note that repetition often deletes: TEACH-PERSON can be pronounced as a single outward
motion followed by a downward motion. This does not affect the point here, which is that if
compounds repeat on a part, they can only have three possible patterns. A fourth pattern is
nonrepeating followed by nonrepeating, which produces a nonrepeating compound, not of in-
terest here. Note that two rhythmically repeating signs will not usually produce a rhythmically
repeating compound. By definition a rhythmically repeating sign must have two or more iden-
tical subunits, so two signs have at least four subunits ab, ab, cd, and cd. Unless ab and cd
are very similar, the concatenated compound cannot be rhythmically repeating. Over time, of
course, these productive compounds do alter to become lexical compounds that are rhythmically
repeating or nonrepeating.

most observations made here, usually as footnotes, literally or figuratively, to other points. Because of this prior recognition, there should be little argument
about the major typological generalizations presented here. The next question
must be why: what difference between speech and sign encourages this differ-
ence in repetition characteristics?
The introduction alluded to a reasonable explanation for speech: words are
segment strings. Add to this the fact that in strings longer than two, irregular
repetition is statistically more likely than rhythmic repetition.11 Consider the
simple language in (14) with string lengths of four and a two-segment inventory:
a and b. (Because the strings in this case are longer than the number of possible
elements, all 16 possible strings repeat.)
(14) Irregularly repeating strings: abba, aaab, aaba, abaa, baaa, aabb,
bbaa, baab, abbb, babb, bbab, bbba
Rhythmically repeating strings: abab, baba, aaaa, bbbb
If these, and only these, strings are possible, then this language has 75 percent
irregular repetition and only 25 percent rhythmic repetition. It turns out that
for any reasonable segment inventory size and string length, chance levels of
rhythmic repetition are below 1 percent (Channon 2002). Obviously, in natural
spoken languages, many constraints prevent the occurrence of many strings,
and not every segment is equally likely to occur (in English, s is more common
than z).12 The proportion of irregular to rhythmic strings also varies depending
on how many possible segments a language has, and how long the strings are. In
spite of these complicating factors, the high percentage of irregular repetition
and the tiny percentage of rhythmic repetition are essentially as predicted by
chance. Current phonological theories, including autosegmental phonology,
consider words to be strings, so it seems intuitively reasonable that the irregular
and rhythmic repetition distribution in speech is explained as a primarily chance
effect of segment string permutations.
But if this is a reasonable explanation for speech, then how should the op-
posing sign data be explained? If rhythmic repetition occurs at chance levels in
speech, and if signs, like words, are segment strings, then rhythmic repetition
should be a tiny percentage of all repetition. Instead, it is the only repetition
type occurring in simple signs. Why are words and simple signs so different?

11 A one-segment string cannot repeat; a two-segment string can only repeat rhythmically. Irregular
repetition can only occur in strings of three and more segments. Note the difference from
repetition as a segment feature, where one-segment signs can repeat rhythmically and two-
segment signs can repeat irregularly.
12 These constraints are likely to operate against both irregular and rhythmic strings, so they would
be unlikely to explain why rhythmic is so dispreferred. For example, a constraint that restricts
codas to nasals would eliminate many possible irregularly repeating words like tat or district,
but it would also eliminate rhythmically repeating words like murmur.

Why are simple signs and compound signs so different? Why do compound
signs have only three irregular repetition patterns?
I propose that all the repetition facts for signs can be economically explained
if simple signs are not segment strings, but single segments, and if repetition
is not caused by string permutations, but instead by a feature [repeat]. To show
this, I compare the characteristics of a multisegmental and single segment rep-
resentation, here abbreviated to Multiseg and Oneseg.
Before comparing the two representations, first consider an unworkable, but
instructive, minimal representation, which allows only one occurrence of any
feature in one unordered segment. A word like da can be represented, because
d and a have enough disjoint features to identify them. Example (15) shows
a partial representation. (Order problems are ignored here, but a CV syllable
structure can be assumed which allows da but not ad. See also footnote 15
below.) Although this representation can handle a few words, it cannot handle
repetition, either rhythmic as in dada or irregular as in daa or dad, because each
feature can only occur once.
(15) da as one segment
[da]

[stop] [coronal] [low]

Multiseg is a powerful solution to this representation's problems. With its multiple sequenced segments, it records how many times, and when, a fea-
ture occurs. Example (16) shows rhythmic repetition in dada and (17) shows
irregular repetition in dad.
(16) Multiseg: dada as four segments
     [d]                 [a]      [d]                 [a]
     [stop] [coronal]    [low]    [stop] [coronal]    [low]

(17) Multiseg: dad as three segments
     [d]                 [a]      [d]
     [stop] [coronal]    [low]    [coronal] [stop]

Multiseg can generate any irregular repetition pattern. While this is correct for
words, it is too powerful for signs, which only have three irregular patterns.
Multiseg systematically overgenerates non-occurring signs with a variety of
irregular patterns, as shown in (18).

(18) Multiseg: Impossible sign with abcbd irregular repetition pattern
     []         []         []          []         []
     near eye   near ear   near nose   near ear   near mouth

Multiseg represents number of repetitions contrastively, so da, dada, and dadada are contrastive with two, four, and six segments respectively. In signs, Multiseg
correctly represents THAT (19) and IMPOSSIBLE (20) contrastively.

(19) Multiseg: THAT
     Y handshape
     []                  []
     near palm of        palm of
     weak hand           weak hand

(20) Multiseg: IMPOSSIBLE with two repetitions
     Y handshape
     []          []      []          []
     near palm   palm    near palm   palm

But Multiseg must incorrectly represent each form of IMPOSSIBLE contrastively, as shown in (20) with two repetitions and (21) with three repetitions.
(21) Multiseg: IMPOSSIBLE with three repetitions
     Y handshape
     []          []      []          []      []          []
     near palm   palm    near palm   palm    near palm   palm

Even worse, because number of repetitions has no determinate upper limit, Multiseg has not just two, but an indeterminately large number of contrastive
representations for the same sign. Instead of overgenerating non-occurring signs
as in the irregular repetition patterns, here it is overgenerating representations,
an equally serious problem. Finally, Multiseg cannot explain the sharp con-
trast between simple and compound signs in terms of rhythmic and irregular
repetition, since it makes no special distinction between simple and compound
forms.
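
The overgeneration of representations can be made explicit with a small sketch. The feature labels are hypothetical; the only assumption carried over from (20) and (21) is that each additional repetition adds two more segments to the Multiseg string:

    # Each two-beat cycle of IMPOSSIBLE adds two more segments to the string,
    # so every repetition count yields a formally distinct representation.
    def multiseg_impossible(repetitions):
        cycle = (frozenset({"Y-handshape", "near-palm"}),
                 frozenset({"Y-handshape", "palm-contact"}))
        return cycle * repetitions

    two, three = multiseg_impossible(2), multiseg_impossible(3)
    print(len(two), len(three))  # 4 6 -- as in (20) and (21)
    print(two == three)          # False -- wrongly predicted to be contrastive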
Oneseg is a much less powerful solution, which is nevertheless a better fit
for simple signs and compounds. Oneseg has one segment for simple signs,
two segments for compounds and adds a feature [repeat]. Brentari (1998),13
Perlmutter (1990), and others have proposed similar features for repetition.
The default value for [repeat] is that whatever changes in the sign repeats. A
sign with handshape change repeats the handshape change: SHOWER opens
and closes the hand repeatedly. If the hand contacts a body part, the contact
is repeated: MOTHER contacts the chin repeatedly. If the hand moves along a
path, as in TEACHER, the path repeats.14
Like other features, [repeat] cannot occur more than once in a segment. The
representation for da (15) does not change, but Oneseg can represent words that
repeat as a whole (rhythmic repetition): (22) shows dada.
(22) Oneseg: dada/dadada as one segment
[]

[repeat] [coronal] [stop] [low]

Number of repetitions cannot be contrastive, so dada, dadada, dadadada, etc. all have the same representation, because Oneseg, unlike Multiseg, can only
distinguish between one and more than one. While this is not the correct rep-
resentation for speech, it is correct for signs, since, as Section 2 showed, signs
only contrast repeating and nonrepeating, and cannot contrast number of repeti-
tions. Examples (23) and (24) show the minimal pair THAT and IMPOSSIBLE,
as represented in Oneseg.
(23) Oneseg: THAT
     []
     palm of        Y
     weak hand      handshape

13 Her [repeat] feature, however, has more details than needed here.
14 A few signs have more than one change, and some constraint hierarchy probably controls this.
For example, in signs with both a handshape change and a path motion – as in DREAM – it
may be generally true that only the handshape change repeats. This issue may have more than
one solution, however, and further details of [repeat], and its place in a possible hierarchical
structure, are left for future research.

(24) Oneseg: IMPOSSIBLE
     []
     [repeat]   palm of        Y
                weak hand      handshape

Because Oneseg allows only global or rhythmic repetition in simple signs, this explains why all simple signs have rhythmic repetition. Oneseg cannot
represent irregular repetition in simple words, as in daa, daddaad, or aadad. A
single segment cannot represent the irregularly repeating compound TEACH-
PERSON, because deciding which features repeat is impossible.15 However,
because Oneseg can add a segment for compounds, then irregularly repeating
compounds are possible. Example (25) shows an abab–c example, TEACH-
PERSON.
(25) Oneseg: TEACH-PERSON as two segments
     TEACH                                        PERSON
     []                                           []
     [repeat]   O           in front   [out][in]  [down]   in front   B
                handshape   of face                        of body    handshape

The three possible irregular repetition patterns are:
• Both segments have [repeat]: abab–cdcd.
• The first segment has [repeat], the second does not: abab–c.
• The first segment does not have [repeat], the second does: a–bcbc.
• (If neither segment repeats, it is a nonrepeating compound a–b.)
These three patterns are exactly the patterns listed in generalization (11) above.
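
The same point can be generated mechanically. In the sketch below, a Oneseg compound is modeled as a pair of (motion, [repeat]) segments; the motion labels "ab" and "cd" are placeholders for whatever the two member signs contribute:

    from itertools import product

    def surface_pattern(compound):
        """Spell out a two-segment Oneseg compound: each segment is a
        (motion, has_repeat) pair, and [repeat] doubles that segment's motion."""
        beats = []
        for motion, has_repeat in compound:
            beats.extend([motion, motion] if has_repeat else [motion])
        return "-".join(beats)

    for r1, r2 in product([True, False], repeat=2):
        print(surface_pattern([("ab", r1), ("cd", r2)]))
    # ab-ab-cd-cd   (abab-cdcd, e.g. SHOWER-BATHE)
    # ab-ab-cd      (abab-c,    e.g. TEACH-PERSON)
    # ab-cd-cd      (a-bcbc,    e.g. DRY-CIRCLE)
    # ab-cd         (a-b, a nonrepeating compound)

Toggling [repeat] on each member sign yields exactly the three irregular patterns plus the nonrepeating compound, and nothing else.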
To summarize, Multiseg allows rhythmic and irregular repetition, and only a
contrastive number of repetitions, without distinguishing between simple and
compound words or signs. These are the characteristics seen in words. It over-
generates for signs, because it allows irregular repetition in both simple and
15 TEACH-PERSON as a single segment also suffers from order problems: which direction, hand-
shape, or place comes first. Even in TEACH, the reader may wonder how order can be handled
in Oneseg, because TEACH has two directions. The order problems of TEACH-PERSON as a
single segment are an additional reason why a single segment is not a successful representation
for compounds. But simple signs like TEACH are not a problem for a single segment. If needed,
direction features can sequence place features (because TEACH has only one place this is not
needed), handshape features can be sequenced by a handshape change feature (Corina 1993),
and so on. Direction features themselves are only sequenced by constraints. Perlmutter (1990)
has pointed out that signs with repeating motion out from and toward the body move out first, so
this would determine the direction sequence for TEACH. Note that using features to sequence
other features is only possible for short, constrained sequences, but only short, constrained
sequences are seen in simple signs. For further discussion, see Crain 1996.
compound signs, and any irregular repetition pattern. It represents many sets of
signs that are systematically non-occurring, and produces multiple representa-
tions for existing signs.
Oneseg allows only rhythmic repetition in simple forms, only three irreg-
ularly repeating patterns in compounds, and only a noncontrastive number of
repetitions. These are the characteristics seen in simple and compound signs.
While Oneseg cannot possibly represent all possible words, it can represent the
repetition facts in sign. An important goal in linguistics is to use the least pow-
erful representation, i.e. the representation that allows all and only the attested
language forms. Undergeneration is bad, but so is overgeneration. If Oneseg
can represent all signs, and predict systematic gaps in the lexicon that Multiseg
cannot, it is preferable to Multiseg for signs.
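
The contrast between the two representations can also be sketched in code. In the following illustration (a minimal sketch; the set-based encoding and the feature labels are my assumptions, not the chapter's formalism), Multiseg is an ordered tuple of feature sets and Oneseg a single unordered feature set:

```python
def multiseg(*segments):
    """Multiple ordered segments, each an unordered feature set."""
    return tuple(frozenset(seg) for seg in segments)

def oneseg(*features):
    """One segment: a single unordered set of features; no feature recurs."""
    return frozenset(features)

DA = {"coronal", "stop"}, {"low"}   # 'd' and 'a' as crude feature sets

# Multiseg makes the number of repetitions contrastive: dada != dadada.
assert multiseg(*DA * 2) != multiseg(*DA * 3)

# Oneseg collapses them: [repeat] marks only repeating vs. nonrepeating.
assert (oneseg("repeat", "coronal", "stop", "low")
        == oneseg("repeat", "coronal", "stop", "low"))

# The minimal pair THAT vs. IMPOSSIBLE differs only in [repeat]; cf. (23)/(24).
assert (oneseg("palm of weak hand", "Y handshape")
        != oneseg("repeat", "palm of weak hand", "Y handshape"))
```

Because Oneseg is unordered and [repeat] is privative, the number of repetitions can never be contrastive, whereas Multiseg distinguishes dada from dadada by segment count.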
Two important points need to be mentioned. The first is that a sign unit
does not have to be a “segment”. What is essential is some unit of time, which
could be segmental, syllabic or other. One can disagree, as Wilbur (1993) does,
with the statement that any phonological representation of a sign or word must
have at least one segment. But it should be noncontroversial to claim that any
phonological representation for a sign or word must have at least one unit of
time. Multiseg and Oneseg are intended to look at the two logical possibilities of
one timing unit or more than one timing unit with as little additional apparatus
as possible. What must happen if signs or words have multiple timing units, or
only one timing unit? Because the phonological representations assumed here
have little detail, “timing unit” could be substituted for every occurrence of
“segment” (when referring to sign languages), because here segment is only
one possible instantiation of a single timing unit. This use of segment, however,
implies that segments are structurally unordered, or have no internal timing
units. Not every phonologist has so understood segment. For example, van der
Hulst’s (2000) single segment sign representation is an ordered segment with
multiple timing units, and therefore an example of a Multiseg representation,
not Oneseg.
If the two representations are understood as generic multiple or single timing
unit representations, then Multiseg is the basis for all commonly accepted rep-
resentations for speech, as well as almost all representations proposed for signs,
including the multisegmental representations of Liddell and Johnson (1989),
Sandler (1989), and Perlmutter (1993), and those using other units such as
syllable, cell, or ordered segment (Wilbur 1993; Uyechi 1996; Brentari 1998;
Osugi 1998; van der Hulst 2000).
Probably the closest to Oneseg is Stokoe’s (1960) conception of a sign as
a simultaneous unit. Oneseg, however, does not claim that everything about a
sign is simultaneous, but only that sequence does not imply sequential structure
(multiple segments/timing units), and that features can handle the simple and
constrained sequences that actually occur in signs. Repetition is one example
of sequence for which a featural solution exists, as described above. Two other
examples are handshape and place sequences, which features or structure could
handle (Corina 1993; Crain 1996).
A second point is that whether a multisegmental or single segment repre-
sentation is correct should be considered and decided before, and separately
from, questions of hierarchy, tiers, and other details within the segment. Au-
tosegmental characteristics are irrelevant. The models discussed here are neither
hierarchical nor nonhierarchical, and representation details are deliberately left
vague. Multiseg and Oneseg are not two possible representations among many.
At this level of detail, there are only two possible representations: either a rep-
resentation has one segment/timing unit or it has more than one segment/timing
unit. When this question has been answered, then more detail within a repre-
sentation can be considered, such as timing unit type and hierarchical structure.

3.4.1 Challenges to Oneseg


Oneseg faces several challenges in attempting to handle all sign language data.
I list a few here with brief sketches of solutions. For more details, see Channon
(2002).
In DESTROY (illustrated in Brentari 1998:187) the open, spread, claw-
shaped, palm-down, strong hand closes to a fist as it moves in toward the chest
and passes across the open, spread, clawshaped, palm-up weak hand, which
simultaneously changes to a fist. The strong hand then moves back out across
the weak hand. (In some versions, the weak hand moves symmetrically with the
strong hand.) This sign would seem to be a problem for Oneseg, because the
handshape change must occur during the motion inward. It is not spread across
the entire time of the sign, or repeated with each change in direction. One con-
sultant told me that it feels awkward to delay the handshape change because it
tangles the fingers of the two hands together, suggesting a phonetic basis for
the timing of the handshape change. Significantly, no contrasting signs exist in
which the handshape change occurs on the second part of the path motion, or
spreads across the entire time of the sign, or where the weak handshape change
occurs at a different time from the strong hand. It seems unlikely that these are
accidental gaps. If phonetic constraints control the timing, then representing
this sign as a single segment is not a problem.
For Oneseg to succeed, constraints must be assumed for many cases. For
example, signs like MAYBE must begin by moving the hand downward and
then upward (Perlmutter 1990). Oneseg could not represent these signs without
this constraint on motion direction. If Oneseg is correct, then signs with apparent
temporal contrasts but no minimal pairs imply that some constraint is at work,
so more such constraints should be found. One difficulty in studying not just
a new language, but a new language in a new modality, is that researchers are
rightly hesitant to dismiss aspects of a sign as not phonologically significant. But this hesitation has meant that the lack of contrast that Oneseg predicts in
many situations has often been overlooked, with the result that representations
with too much power and too many features have been thought necessary.
A second challenge concerns contact variations. Signs with contact at the be-
ginning and end of a sign are easily handled with two place features. Many such
signs are underlyingly unordered. DEAF has no contrast between beginning at
the ear or chin (for more discussion, see Crain 1996). In other cases, the two
places can be ordered with a direction feature. AMERICAN-INDIAN contacts
the lower cheek then moves up to the upper cheek. It can be represented with
two place features and a direction feature [up]. Signs with contact in the middle
can be represented with a contact feature such as [graze]. This leaves the ques-
tion of signs with initial or final contact. As with AMERICAN-INDIAN, most
of these signs can be handled with direction features. KNOW (final contact at
the forehead) is a near minimal pair with IDEA (initial contact at the forehead).
KNOW (26) probably has no direction feature, because “toward (a body part)”
is the default.
(26) KNOW
     [ [forehead] [B handshape] ]
Since the hand moves toward the forehead, contact must be final. IDEA (27)
can be represented with a feature [out].16
(27) IDEA
     [ [forehead] [I handshape] [out] ]
Since the hand moves out from the body, contact must be initial. Thus, while
there are variations in when the hand contacts the body, these variations do not
require structural sequence.
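
The logic of deriving contact timing from direction features can be summarized in a small sketch (hypothetical; the default-direction rule simply restates the discussion of KNOW and IDEA above):

```python
def contact_timing(direction=None):
    """Movement toward the place of articulation (e.g. a body part) is the
    default, putting contact at the end of the sign; [out] reverses this."""
    return "initial" if direction == "out" else "final"

print("KNOW:", contact_timing())       # no direction feature -> final contact
print("IDEA:", contact_timing("out"))  # [out] -> initial contact
```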
A final challenge is the apparent temporal contrasts of some inflected signs.
As already mentioned, the proposal made here does not apply to all signs, but
only to simple signs and compounds: the kind of signs found in an ordinary
sign language dictionary. Inflected signs (both inflected verbs and classifier
16 Note that in KNOW, the hand may approach the forehead from any phonetically convenient
direction, but in IDEA, the hand must move in a specific direction, namely out. Note also
that Multiseg must represent most signs with phonologically significant beginning and ending
places; Oneseg represents most signs as having one underlying place.
predicates) are excluded. The verb KNOCK is one example of the problems of
inflected forms for a single segment representation. It can be iconically inflected
to have a contrastive number of repetitions (knocking a few times vs. knock-
ing for a long time). A second example is the Delayed Completive (Brentari
1998), an inflection with a prolonged initial length that appears to contrast
with the shorter initial length of the uninflected sign, and which iconically
represents a prolonged initial period of inaction (Taub 1998). These types of
contrasts occur only within the inflected verbs and classifier predicate domain,
and Oneseg cannot represent them. I argue that they are predictably iconic, and
this iconicity affects their representation, so that some elements of inflected
signs have no phonological representation.17 Channon (2001) and Channon
(2002) explain these exclusions in more detail. If it were the case that inflected
signs could not be explained as proposed, then one might invoke a solution
similar to Padden’s (1998) proposal that ASL has vocabulary groups within
which different rules apply. Regardless of the outcome for inflected verbs, it
remains true that Oneseg represents simple signs and compounds better than
Multiseg.
To turn the tables, (28) offers some examples of physically possible, but
non-occurring, simple signs as a challenge to Multiseg.
(28) a. Contact the ear, nose, mouth, chest in that order and no other.
b. Open the fist hand to flat spread, then to an extended index.
c. Contact the ear, brush the forehead, then draw a continuous line
down the nose.
d. Contact the ear for a prolonged time, then contact the chin for a
normal time.
e. Wiggle the hand while moving from ear to forehead, then move
without wiggling to mouth (see Sandler 1989:55; Perlmutter 1993).
The existence of such signs would be convincing evidence that signs require
Multiseg; the absence of such signs is a serious theoretical challenge for
Multiseg, which predicts that signs like these are systematically possible. Be-
cause these are systematic gaps, a representation must explain why these signs
do not exist. In English, the absence of blick is an accidental gap in the lexicon
that has no phonological explanation, but the absence of bnick is systematic,
and has a phonological explanation: bn is not an acceptable English consonant
cluster. Proponents of Multiseg must likewise explain the systematic gaps illus-
trated above. Note that Oneseg explains these gaps easily: it cannot represent
them.

17 This proposal also excludes fingerspelling. Its dependence on the sequence of English letters
means that it has repetition patterns more like speech than the signs examined here. This must
somehow be encoded in the representation, but is left for future research.
3.5 Conclusion
This chapter has shown that there are interesting and even surprising differ-
ences in repetition characteristics in the two language modalities. Number of
repetitions is contrastive in words, but not signs. Only a few repeating words
are rhythmic, but all repeating simple signs are rhythmic. Words and com-
pound signs have similar high rates of irregular repetition, but words allow any
irregular repetition pattern, while compound signs allow only three.
The models Multiseg for words and Oneseg for signs economically explain
these differences. Multiseg must represent different numbers of repetitions
contrastively; Oneseg cannot represent number of repetitions contrastively.
Multiseg can represent both rhythmic and irregular repetition. Possible seg-
ment string permutations suggest that irregular repetition should be common
and rhythmic repetition rare. Oneseg can only represent rhythmic repetition
in simple signs, but allows irregular repetition in two segment compounds.
Multiseg allows any irregular repetition pattern, but Oneseg allows only three.
Multiseg correctly represents the repetition data for words, but overgenerates
for signs; Oneseg undergenerates for words, and correctly represents the data
for signs. A single segment for simple signs, and two segments for compounds,
plus a [repeat] feature, is therefore a plausible representation.

Acknowledgments
I thank Linda Lombardi for her help. She has been an exceptionally consci-
entious, insightful and intelligent advisor. I thank Richard Meier and the two
anonymous reviewers for their helpful comments, Thomas Janulewicz for his
work as ASL consultant, Ceil Lucas for discussion of some of the issues raised
here, and the audiences at the Student Conference in Linguistics at the Uni-
versity of Arizona at Tucson, the Texas Linguistic Society conference at the
University of Texas at Austin, the student conference at the University of Texas
at Arlington, and the North American Phonology Conference in Montreal.

3.6 References
Ameka, Felix K. 1999. The typology and semantics of complex nominal duplication in
Ewe. Anthropological Linguistics 41:75–106.
Anderson, Stephen R. 1993. Linguistic expression and its relation to modality. In Coulter,
ed. (1993), 273–290.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Channon, Rachel. 2001. The protracted inceptive verb inflection and phonological
representations in American Sign Language. Paper presented at the annual meeting
of the Linguistic Society of America, Washington DC, January.
Channon, Rachel. 2002. Signs are single segments: Phonological representations and
temporal sequencing in ASL and other sign languages. Doctoral dissertation,
University of Maryland, College Park, MD.
Chao, Chien-Min, His-Hsiung Chu, and Chao-Chung Liu. 1988. Taiwan natural sign
language. Taipei, Taiwan: Deaf Sign Language Research Association.
Corina, David P. 1993. To branch or not to branch: Underspecification in American Sign
Language handshape contours. In Coulter, ed. (1993), 63–95.
Costello, Elaine. 1983. Signing: How to speak with your hands. New York: Bantam
Books.
Coulter, Geoffrey R. 1982. On the nature of American Sign Language as a monosyl-
labic language. Paper presented at the annual meeting of the Linguistic Society of
America, San Diego, CA.
Coulter, Geoffrey R. 1990. Emphatic stress in American Sign Language. In Fischer and Siple, eds. (1990), 109–125.
Coulter, Geoffrey R., ed. 1993. Current issues in American Sign Language phonology.
San Diego, CA: Academic Press.
Crain, Rachel Channon. 1996. Representing a sign as a single segment in American Sign
Language. In Proceedings of the 13th Eastern States Conference on Linguistics,
46–57. Cornell University, Ithaca, NY.
Fant, Lou. 1983. The American Sign Language phrase book. Chicago, IL: Contemporary
Books.
Fischer, Susan D. 1973. Two processes of reduplication in the American Sign Language.
Foundations of Language 9:469–480.
Fischer, Susan D. and Patricia Siple, eds. 1990. Theoretical issues in sign language
research, Vol. 1: Linguistics. Chicago, IL: University of Chicago Press.
Holzrichter, Amanda S., and Richard P. Meier. 2000. Child-directed signing in American
Sign Language. In Language acquisition by eye, ed. Charlene Chamberlain, Jill
P. Morford, and Rachel I. Mayberry, 25–40. Mahwah, NJ: Lawrence Erlbaum
Associates.
International Phonetic Association. 1999. Handbook of the International Phonetic As-
sociation. Cambridge: Cambridge University Press.
Japanese Federation of the Deaf. 1991. An English dictionary of basic Japanese signs.
Tokyo: Japanese Federation of the Deaf.
Kendon, Adam. 1980. A description of a deaf-mute sign language from the Enga
Province of Papua New Guinea with some comparative discussion. Part III. Semi-
otica 32:245–313.
Kennedy, Graeme D. and Richard Arnold. 1998. A dictionary of New Zealand Sign
Language. Auckland: Auckland University Press, Bridget Williams Books.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA: Basil
Blackwell.
Kettrick, Catherine. 1984. American Sign Language: A beginning course. Silver Spring,
MD: National Association of the Deaf.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Kyle, Jim and Bencie Woll with G. Pullen and F. Maddix. 1985. Sign language: The
study of deaf people and their language. Cambridge: Cambridge University Press.
Lane, Harlan. 1984. When the mind hears: A history of the deaf. New York: Vintage
Books.
Liddell, Scott K. and Robert E. Johnson. 1986. American Sign Language compound
formation processes, lexicalization, and phonological remnants. Natural Language
and Linguistic Theory 4:445–513.
Liddell, Scott K. and Robert E. Johnson. 1989. American Sign Language: The phono-
logical base. Sign Language Studies 64:195–278.
Miller, Christopher Ray. 1996. Phonologie de la langue des signes québécoise: Structure simultanée et axe temporel. Doctoral dissertation, Université du Québec à Montréal.
Newkirk, Don. 1998. On the temporal segmentation of movement in American Sign Language. Sign Language and Linguistics 1:173–211.
Osugi, Yutaka. 1998. In search of the phonological representation in American Sign
Language. Doctoral dissertation, University of Rochester, NY.
Padden, Carol. 1998. The ASL Lexicon. Sign Language and Linguistics 1:39–60.
Penn, Claire, Dale Ogilvy-Foreman, David Simmons, and Meribeth Anderson-Forbes.
1994. Dictionary of Southern African signs for communicating with the deaf.
Pretoria, South Africa: Joint Project of the Human Sciences Research Council
and the South African National Council for the Deaf.
Perlmutter, David M. 1990. On the segmental representation of transitional and bidirectional movements in American Sign Language phonology. In Fischer and Siple, eds. (1990), 67–80.
Perlmutter, David M. 1993. Sonority and syllable structure in American Sign Language.
In Coulter, ed. (1993), 227–261.
Pertama, Edisi. 1994. Kamus sistem isyarat bahasa Indonesia (Dictionary of Indonesian
sign language). Jakarta: Departemen Pendidikan dan Kebudayaan.
Riekehof, Lottie L. 1985. The joy of signing. Springfield, MO: Gospel Publishing House.
Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and nonlin-
earity in American Sign Language. Dordrecht: Foris.
Savir, Hava. 1992. Gateway to Israeli Sign Language. Tel Aviv: The Association of the
Deaf in Israel.
Shroyer, Edgar H. and Susan P. Shroyer. 1984. Signs across America. Washington, DC:
Gallaudet College Press.
Shuman, Malcolm K. 1980. The sound of silence in Nohya: a preliminary account of
sign language use by the deaf in a Maya community in Yucatan, Mexico. Language
Sciences 2:144–173.
Shuman, Malcolm K. and Mary Margaret Cherry-Shuman. 1981. A brief annotated sign
list of Yucatec Maya sign language. Language Sciences 3:124–185.
Son, Won-Jae. 1988. Su wha eui kil jap i (Korean Sign Language for the guide). Seoul:
Jeon-Yong Choi.
Sternberg, Martin L. A. 1981. American Sign Language: A comprehensive dictionary.
New York: Harper and Row.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965, reprinted 1976.
A dictionary of American Sign Language on linguistic principles. Silver Spring,
MD: Linstok Press.
Supalla, Samuel J. 1990. Segmentation of Manually Coded English: Problems in the
mapping of English in the visual/gestural mode. Doctoral dissertation, University
of Illinois, Urbana-Champaign.
Supalla, Samuel J. 1992. The book of name signs. San Diego, CA: DawnSign Press.
Supalla, Ted and Elissa Newport. 1978. How many seats in a chair? The derivation
of nouns and verbs in American Sign Language. In Understanding language
through sign language research, ed. Patricia Siple, 91–132. New York: Academic
Press.
Taub, Sarah Florence. 1998. Language in the body: Iconicity and metaphor in American
Sign Language. Doctoral dissertation, University of California, Berkeley.
Uno, Yoshio, chairman. 1984. Sike kuni shuwa ziten, rōzin shuwa (Speaking with signs).
Osaka, Japan: Osaka YMCA.
Uyechi, Linda. 1996. The geometry of visual phonology. Doctoral dissertation, Stanford
University, Stanford, CA.
van der Hulst, Harry. 2000. Modularity and modality in phonology. In Phonological
knowledge: Its nature and status, ed. Noel Burton-Roberts, Philip Carr, and Gerry
Docherty, 207–243. Oxford: Oxford University Press.
Ward, Jill. 1978. Ward’s natural sign language thesaurus of useful signs and synonyms.
Northridge, CA: Joyce Media.
Wilbur, Ronnie B. 1987. American Sign Language: Linguistic and applied dimensions.
Boston, MA: Little, Brown.
Wilbur, Ronnie B. 1993. Syllables and segments: Hold the movement and move the
holds! In Coulter, ed. (1993), 135–168.
Wilbur, Ronnie B. and Susan Bobbitt Nolen. 1986. The duration of syllables in American
Sign Language. Language and Speech 29:263–280.
Wilbur, Ronnie B. and Lesa Petersen. 1997. Backwards signing and American Sign
Language syllable structure. Language and Speech 40:63–90.
Wilcox, Phyllis Perrin. 1996. Deontic and epistemic modals in American Sign Lan-
guage: A discourse analysis. In Conceptual structure, discourse and language, ed.
Adele E. Goldberg, 481–492. Stanford, CA: Center for the Study of Language and
Information.
Wismann, Lynn and Margaret Walsh. 1987. Signs of Morocco. Rabat, Morocco: Peace
Corps.
Zeshan, Ulrike. 2000. Sign language in Indo-Pakistan: A description of a signed lan-
guage. Amsterdam: John Benjamins.
4 Psycholinguistic investigations of phonological
structure in ASL

David P. Corina and Ursula C. Hildebrandt

4.1 Introduction
Linguistic categories (e.g. segment, syllable, etc.) have long enabled cogent
descriptions of the systematic patterns apparent in spoken languages. Begin-
ning with the seminal work of William Stokoe (1960; 1965), research on the
structure of American Sign Language (ASL) has demonstrated that linguistic
categories are useful in capturing extant patterns found in a signed language. For
example, recognition of a syllable unit permits accounts of morphophonologi-
cal processes and places constraints on sign forms (Brentari 1990; Perlmutter
1993; Sandler 1993; Corina 1996). Acknowledgment of Movement and Loca-
tion segments permits descriptions of infixation processes (Liddell and Johnson
1985; Sandler 1986). Feature hierarchies provide accounts of assimilations that
are observed in the language and also help to explain those that do not occur
(Corina and Sandler 1993). These investigations of linguistic structure have led
to a better understanding of both the similarities and differences between signed
and spoken language.
Psycholinguists have long sought to understand whether the linguistic cat-
egories that are useful for describing patterns in languages are evident in the
perception and production of a language. To the extent that behavioral reflexes
of these theoretical constructs can be quantified, they are deemed to have a
‘psychological reality’.1 Psycholinguistic research has been successful in es-
tablishing empirical relationships between a subject’s behavior and linguistic
categories using reaction time and electrophysiological measures.
This chapter describes efforts to use psycholinguistic paradigms to explore
the psychological reality of form-based representations in ASL. Three online
reaction time experiments – Lexical-Decision, Phoneme Monitoring, and Sign–
Picture Naming – are adaptations of well-known spoken language psycholin-
guistic paradigms. A fourth off-line experiment, developed in our laboratory,
uses a novel display technique to explore form-based similarity judgments of
1 While psychological studies provide external evidence for the usefulness of these theoretical
constructs, the lack of a “psychological reality” does not undermine the importance of these
constructs in the description of linguistic processes.

signs. This chapter discusses the results of these experiments, which, in large
part, fail to establish reliable form-based effects of ASL phonology during lex-
ical access. This surprising finding may reflect how differences in the modality
of expression impact lexical representations of signed and spoken languages.
In addition, relevant methodological factors2 and concerns are discussed.

4.2 Experiment 1: Phonological form-based priming


Psycholinguistic research has relied heavily on examination of the behavioral
phenomenon of priming. Priming refers to the ability to respond faster to a
stimulus when that stimulus or some salient characteristic of that stimulus has
been previously processed. For example, when a subject is asked to make a
lexical decision (i.e. determine whether the stimulus presented is either a word
in his or her language or an unrecognized form), the subject’s response time to
make the decision is influenced by the prior context. The influence of this prior
context (i.e. the prime) may result in a subject responding faster or slower to a
target word. These patterns of interference and facilitation have been taken to
infer the processes by which word forms may be accessed.
The lexical decision paradigm has been used to examine form-based factors
underlying lexical access. Most directly relevant to the present experiment are
those studies that examine priming of auditory (as opposed to written) words.
Several studies have been successful in detecting the influence of shared phono-
logical forms in auditory lexical decision experiments. In contrast to semantic priming effects, form-based effects (and especially auditory form-based effects) are far less robust, and appear to be more sensitive to experimental manip-
ulations. Several early studies have reported facilitation when auditory words
share segmental or syllabic overlap (Emmorey 1987; Slowiaczek, Nusbaum,
and Pisoni 1987; Corina 1991) while other studies have uncovered significant
inhibitory effects (Slowiaczek and Hamburger 1992; Goldinger, Luce, Pisoni,
and Marcario 1993; Lupker and Colombo 1994). Re-examination of these re-
sults has suggested that facilitation may be more reflective of post-lexical influ-
ences, including volitional (i.e. controlled) factors such as a subject’s expecta-
tion or response bias. In contrast, inhibition may be more reflective of processes
associated with lexical access.

4.2.1 Method
In this sign lexical decision paradigm a subject viewed two successive signs.
The first of the pair was always a true ASL sign, the second was either a true
2 Although a full accounting of the specific details behind each experiment is beyond the scope of
this chapter, it should be noted that rigorous experimental methodologies were used and necessary
controls for order effects were implemented.
[Figure 4.1 Reaction times for the two versions of the experiment: (a) 100 msec ISI; (b) 500 msec ISI. Both panels plot reaction time (ms) for Location and Movement targets in the Related, Unrelated, and Nonsign conditions; the related–unrelated differences only approach significance at the 100 msec ISI (Location p < .088, Movement p < .064) and are n.s. at 500 msec.]
sign or a formationally possible, but non-occurring sign form (nonsign). The subject pressed one button if the sign was a true ASL sign and pressed a different
button if it was not. The sign pairs varied in phonological similarity; they either
had no phonological overlap (i.e. unrelated stimuli) or shared one formational
parameter (i.e. related stimuli). We compared reaction times to accept a sign
as a “true sign” when it was the second member of a related pair with the
reaction times to accept this same sign when it was a member of an unrelated
pair. Systematic differences in reaction times between the two conditions (i.e.
related vs. unrelated) are taken as evidence for form-based effects during lexical
access. The time it took to reject the nonsigns was also recorded.3 The parameter
contrasts of interest were limited to movement and location. The length of time between the offset of the prime and the onset of the target (i.e. the interstimulus interval, or ISI) is known to influence reaction times. Short ISIs
may foster more automatic, noncontrolled processing. To examine temporal
factors related to form-based priming, two different versions of the experiment
were constructed. In the first version, the second sign of the pair is presented at
100 msec following the offset of the preceding sign. The second version of the
experiment used an ISI of 500 msec. Fourteen native deaf signers participated
in Version 1 and 15 subjects participated in Version 2.
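
The analysis logic can be illustrated with a brief sketch (the per-subject numbers are fabricated for illustration, chosen only to echo the condition means reported below):

```python
from statistics import mean

# Hypothetical per-subject mean RTs in msec (illustration only).
movement_related = [748, 741, 756, 739]
movement_unrelated = [737, 729, 744, 730]

# Priming effect: related minus unrelated RT for the same target signs.
effect = mean(movement_related) - mean(movement_unrelated)
print(f"movement priming effect: {effect:+.1f} msec")  # +11.0 msec
# A positive value (slower when related) indicates inhibition;
# a negative value indicates facilitation.
```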

4.2.2 Results
The results shown in Figure 4.1 illustrate the reaction times for the two versions
of the experiment (i.e. ISI 100 msec and 500 msec). There was an expected and
highly significant difference between sign and nonsign lexical decisions. It took
subjects longer to reject nonsigns than to accept true signs. Most surprising,
however, are the small, statistically weak effects of related vs. unrelated signs.
Specifically, in Version 1 (i.e. 100 msec ISI), lexical decisions were slower in
the related context relative to the unrelated context; however, these inhibitory
differences only approach statistical significance, which is traditionally set at
p < .05 (Movement Related X̄ = 746, Movement Unrelated X̄ = 735, p = .064; Location Related X̄ = 733, Location Unrelated X̄ = 721, p = .088). The 500 msec ISI condition also demonstrates a lack of effects attributable to form-based relations (Movement Related X̄ = 680, Movement Unrelated X̄ = 674, p = .62; Location Related X̄ = 669, Location Unrelated X̄ = 667, p = .76). Taken
together, the present study reveals limited influence of shared formational over-
lap in ASL lexical decisions. Specifically, trends for early and temporally limited
inhibition were noted; however, these effects were not statistically significant.4
3 Nonsign forms also shared phonological relations with the prime. However, the discussion of
these data is beyond the scope of this chapter.
4 The categories of movement types examined in the present experiment included both “path”
movements, and examples of “secondary movements.” Limiting the analysis to just path
movements did not significantly change the pattern of results.
4.2.3 Discussion
When deaf subjects made lexical decisions to sign pairs that shared a location
or a movement, we observed weak inhibition effects when stimuli were sepa-
rated by 100 msec, but these effects completely disappeared when the stimulus
pairs were separated by 500 msec. These findings stand in contrast to an earlier
study reported by Corina and Emmorey (1993), in which significant form-based
effects were observed. Specifically, signs that shared movement showed signif-
icant facilitatory priming, and signs that shared a common location exhibited
significant inhibition. How can we reconcile these differences between studies?
Two important issues are noteworthy: methodological factors and the nature of
form-based relations.
In the Corina and Emmorey (1993) study, only 10 pairs of each parame-
ter category (handshape, movement, and location) were tested. In contrast, the
present study included examinations of a much larger pool of contrasts (39 in
each condition). In addition, the mean reaction time in the earlier study to related
and unrelated sign was 1033 msec, while reaction time in the present experi-
ment averaged 704 msec. The small number of stimuli pairs in the Corina and
Emmorey study may have encouraged more strategy-based decisions. Indeed,
the relatively long reaction times are consistent with a mediated or controlled
processing strategy. In contrast, the present study may represent the effects of
automatic priming. That is, these effects may represent effects of lexical, rather
than post-lexical, access.
Recent spoken language studies have sought to clarify the role of shared
phonetic-featural level information vs. segment level information. Experiments
utilizing stimuli that were phonetically confusable by virtue of featural over-
lap (i.e. bone-dung) have reported temporally limited inhibition (Goldinger
et al. 1993). In contrast, when stimuli share segmental overlap (i.e. bone-
bang), facilitation may be more apparent because these stimuli allow subjects
to adopt prediction strategies characteristic of controlled rather than automatic
processing. In the present ASL experiment, form-based overlap was limited to
a single parameter, either movement or location. If we believe these stimuli
are more analogous to phonetically confusable spoken stimuli, then we might
expect to observe temporally limited inhibitory effects similar to those that
have been reported for spoken languages. The existence of (albeit weak) in-
hibitory effects that are present at the 100 msec ISI are consistent with this
hypothesis.
Finally, it should be noted that several models of word recognition suggest
that as activation of a specific lexical entry grows, so does inhibition of compet-
ing entries (Elman and McClelland 1988). These mechanisms of inhibition may
result in a situation where forms that are closely related to a target are inhibited
relative to an unrelated entry. The work of Goldinger et al. (1993) and Lupker
and Colombo (1994) has appealed to these spreading activation and inhibition
models to support patterns of inhibition for phonetic-featural overlap. It would
not be surprising if similar mechanisms were at work in the case of ASL recog-
nition. Thus, it is possible that the patterns observed for form-based priming in
sign language are not so different from those of spoken language processing.
Further work is required to establish the reliability of these findings. Future
studies will manipulate the degree of shared overlap (for example, using pairs
that share both location and movement) in order to examine form-based effects
in ASL.

4.3 Experiment 2: Phoneme monitoring


In spoken language, speech sounds come in two different varieties: vowels (Vs)
and consonants (Cs). All spoken languages have both kinds of phonemes and
all language users usually have some awareness of this distinction (van Ooijen,
Cutler, and Norris 1992). The theoretical status of sublexical units in signed
languages is of considerable interest. Most theories acknowledge a difference
between those elements that remain unchanging throughout the course of a sign
and those that do not. This difference underlies the distinction between move-
mental and positional segments (Perlmutter 1993) or between the inherent and
prosodic features described by Brentari (1998). Perlmutter (1993) provides lin-
guistic evidence for two classes of formational units in ASL: “movemental”
segments (Ms) and “positional” segments (Ps). Perlmutter argues that these
units are formally analogous to the distinction of Vs and Cs found in spoken
languages. Just as vowels and consonants comprise basic units in speech sylla-
bles, the Ms and Ps of ASL factor significantly in the patterning of sign language
syllables. To the extent that this characterization is correct, this finding provides
powerful evidence for a fundamental distinction in signal characteristics of hu-
man languages, regardless of the modality of expression.
Differences in the perception of consonants and vowels have been extensively
studied. Perception of consonants appears to be categorical, while the percep-
tion of vowels appears more continuous (see Liberman et al. 1967; see also
Ades 1977; Repp 1984). The amount of time necessary for the identification of
consonants and vowels also differs. Despite the greater perceptibility of vowels
(Ades 1977), aurally presented vowels take longer to identify than do conso-
nants. Several studies have shown that reaction times for monitoring for vowels
is slower than that of consonants and semi-vowels (Savin and Bever 1970; van
Ooijen, Cutler, and Norris 1992; Cutler, van Ooijen, Norris, and Sánchez-Casas
1996). The effects appear to hold across languages as well. Cutler and Otake
(1998) reported that English native speakers show less accurate vowel detection
than consonant detection for both Japanese and English lexical targets, and that
Japanese speakers showed a similar effect for English materials. The curious
trade-off between perceptibility and identification raises questions with respect to the cognitive processing of consonants and vowels.
The experiments reported here make use of a handshape detection task to de-
termine whether differences exist in detection times for sublexical components
of signed languages. As discussed below, handshape change can constitute the
nucleus of a sign syllable. As noted above, syllable nuclei in spoken language (i.e. vowels) are identified more slowly than consonants; therefore handshapes in signs with handshape change should be identified more slowly in phoneme monitoring. Specifically, we examine whether a handshape within a sign with a handshape change is slower to detect than a handshape in a
sign that does not change posture. These studies are similar to phoneme moni-
toring studies conducted with spoken languages, in which subjects monitor for
consonant or vocalic segments in lexical environments.
Handshapes in ASL hold a dual status. At times the handshape in a sign
like THINK assumes a posture that remains static throughout the sign. In other
signs, e.g. UNDERSTAND, the entire movement of the sign may be composed
of a change in the handshape configuration. Based upon the analyses in Corina
(1993), a movement dynamic that is limited to a handshape change may consti-
tute the most salient element of a sign form, and thus serve as the nucleus of the
syllable. This observation raises the following question: Do these differences
in the status of handshape have processing consequences? Based upon these
analyses and prior work on spoken language, one might expect that the time to
detect a handshape in a sign with a handshape change would be longer than the
time to detect the same handshape in a static form. This is because in instances
when the handshape changes, handshape is functioning more like a vowel. In
those signs with no change in handshape, handshape serves a more consonantal
function. This hypothesis is tested in the following experiment.5

4.3.1 Method
A list of signs was presented on videotape at a rate of one sign every two
seconds. The subjects were instructed to press a response key when they had
detected the target handshape. An individual subject monitored four handshapes
5 Some reviewers have questioned the reasonableness of this hypothesis. The hypothesis is moti-
vated, in part, by the observation that human visual systems show specialization for perception
of movement and specialization related to object processing. Hence, we ask, could movement
properties of signs be processed differently from static (i.e. object) properties? I have presented
evidence that the inventory of contrastive handshape changes observed within signs is a subset
of the handshape changes that occur between signs. A possible explanation for this observation
is that human linguistic systems are less readily able to rectify fine differences of handshape in a
sign with a dynamic handshape change. However, given sufficient time (as occurs between signs)
the acuity tolerances are more relaxed, permitting a wider range of contrastive forms (Corina
1992; 1993). These observations motivated the investigation of these two classes of handshape
in ASL.
Table 4.1 Instruction to subjects: “Press the button when you see a ‘1’ handshape”

Sign        Handshape   Subject’s response   Comment
ABLE        S           no response
AGREE       1           yes!                 static handshape
PUZZLE      1→X         yes!                 handshape change, first handshape is the target
FIND        5→F         no response
ASK         S→1         yes!                 handshape change, second handshape is the target
VACATION    5           no response

drawn from a total of six different handshapes. This set included three marked
handshapes (X, F, V) and three unmarked handshapes (1, S, 5) (after Battison
1978). Each subject monitored for two marked and two unmarked handshapes.
Prior to testing all subjects were informed that the target might occur as a
part of a handshape change or not, and were explicitly shown examples of these
contrasts. Table 4.1 shows a representative example of the stimulus conditions
and the intended subject response. Two conditions were included in the exper-
iment, “real-time” and “video-animated” signing. In the latter, the stimuli are
constructed by capturing the first “hold” segment of a sign and freezing this
image for 16 frames and then capturing the final hold segment of a sign and
freezing this image for 16 frames. When viewed in sequence one observed an
animated sign form in which the actual path of movement is inferred. This ma-
nipulation provides a control condition to examine temporal properties of sign
recognition. The order of the target handshapes and the order of conditions were
counterbalanced across subjects. Due to the temporal qualities of handshape
changes, the crucial comparison is between the first handshape of a contouring
form compared to a static handshape that is stable throughout the sign.
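
The construction of the video-animated control can be sketched as follows (a simplified illustration; the frame labels are stand-ins for actual video frames):

```python
def video_animated(frames, hold_len=16):
    """Freeze the first hold for 16 frames, then the final hold for 16 frames,
    leaving the path movement to be inferred by the viewer."""
    first_hold, final_hold = frames[0], frames[-1]
    return [first_hold] * hold_len + [final_hold] * hold_len

real_time = ["hold1"] + ["transition"] * 20 + ["hold2"]
control = video_animated(real_time)
assert len(control) == 32 and control[0] == "hold1" and control[-1] == "hold2"
```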

4.3.2 Results
The graphs in Figure 4.2 illustrate detection times for identifying handshapes
in ASL signs from 32 signers (22 native signers and 10 late learners of ASL).
The left half of the graph shows results for moving signs and the right half plots
results from the “video-animated” control conditions. HS-1 and HS-2 refer to
the first and second shape of “contour” handshape signs (for example, in the
sign ASK, HS-1 is an “S” handshape and HS-2 is a “1” (i.e. “G”) handshape). The third
category is composed of signs without handshape changes (i.e. no handshape
change or NHSC).
[Figure 4.2 Reaction times for detection of handshapes in ASL signs for (a) moving signs and (b) static (video-animated) signs. Panels plot reaction time (ms) for HS-1, HS-2, and NHSC targets, for native signers (n = 22) and late learners (n = 10); most comparisons are n.s., with one contrast in the static condition reaching p < .013.]
Several patterns emerge from these data. Importantly, the detection of the
first element of a handshape (HS-1) that is changing during the course of a
sign is not significantly different from the detection of that same handshape
when it is a member of a sign that involves NHSC (p > .05). These preliminary
data from this detection paradigm suggest no behavioral processing differences
associated with handshapes as a function of their syllabic composition (i.e.
whether they are constituents of an M or a P segment). Moreover, statistical
analysis reveals no significant group differences (p > .05). A second finding
emerges from consideration of the video-animated condition. Here, the overall
reaction time to detect handshapes is much longer for these signs than for the
real time signs. In addition, late learners appear to have more difficulty detecting
the second handshape of these signs. This may indicate that the late learners
have a harder time suppressing the initial handshape (thus resulting in slower
second handshape detection) or, alternatively, that these subjects are perturbed
when a sign lacks movement.

4.3.3 Discussion
In Experiment 2, a phoneme monitoring experiment was used to examine the
recognition of handshapes under two conditions: in one condition a hand-
shape appeared in a sign in which the handshape configuration remained static
throughout the course of a sign’s articulation, while in the other the handshape
was contoured. The results revealed that a target handshape could be equally
well detected in a sign without a handshape change as in a sign with a handshape
change. Under some analyses, for a sign in which the only dynamic component
is the handshape change, the handshape change constitutes the most sonorant
element of the sign form (see Corina 1993). Viewed from this perspective it
would then appear that the syllabic environment of the handshape does not
noticeably alter its perceptibility.
In spoken languages, phoneme monitoring times are affected by the major
class status of the target phoneme, with consonants being detected faster than
semi-vowels, which are detected faster than vowels (Savin and Bever 1970;
van Ooijen et al. 1992; Cutler et al. 1996). However, a central concern in
the phoneme monitoring literature is whether behavioral patterns observed in
the studies of consonants and vowels are attributable to the composition of the
signal (i.e. vowels are steady state, consonants are time-varying) or rather reflect
their function in the language (i.e. vowels constitute the nucleus of a syllable,
consonants are the margins of syllables). These facts have been inextricably
confounded in the spoken language domain. Attempts have been made (albeit
unsuccessfully) to control for this confound in spoken language (see van Ooijen
et al. 1992).
If we accept the premise that handshape properties hold a dual status, that is,
that handshapes may be a constituent of a movemental or positional segment
(or alternatively a constituent of the syllable nucleus or not), then ASL provides
an important control condition. Following this line of argumentation, the present
data suggest that it is not the syllabic status of a segment that determines the
differences in reaction time, but rather the physical differences in the signal.
Thus, the physical difference between a handshape that is a member of a handshape change and its nonchanging variant does not have significant
consequences for detection times as measured in the present experiment.
A methodological point concerns whether the phoneme monitoring task in
the context of naturalistic signing speeds has led to ceiling effects, such that true
processing differences have not been revealed. Further studies with speeded or
perceptually degraded stimuli could be used to address this concern.
Finally several theoretical points must be raised. It must be acknowledged
that the homologies between segments, phonemes, and features in spoken and
signed languages are quite loose. Thus, some may argue that these data have
little bearing on the signal vs. constituent controversy in spoken language. In
addition, as noted, the assumed status distinction between a static handshape and
a handshape change within a sign may be, in some fundamental way, incorrect.
A related concern is that the featural elements of the handshapes themselves
are not diagnostic of sonority, but of some more abstract property of the sign
(for example, the integration of articulatory information across time). Thus,
monitoring for specific handshape posture may not be a valid test of the role of
syllable information in ASL.

4.4 Experiment 3: Sign picture naming


The third experiment explores the time course of sign language production by
making use of a picture naming paradigm. This experiment is modeled after
Schriefers, Meyer, and Levelt’s (1990) cross modal word-picture paradigm. In
this paradigm, subjects are asked to name simple black and white line drawings
presented by computer under two different conditions: in one condition, only
the line drawing is seen (for example, a picture of a lion), while in the inter-
fering stimulus (IS)6 condition, the picture is accompanied by an auditorily
presented stimulus. The IS is either a semantically related word (picture: lion;
word: jungle) a phonologically related word (picture: lion; word: lighter) or an
unrelated word (picture: lion; word: stroller). Of course, trying to name a pic-
ture when you are hearing a competing word is disruptive. The main question
of interest, however, is whether the nature of the IS results in differential effects
on naming speed. Thus, in this paradigm, the most interesting results come
from the comparisons between the semantic and phonological IS relative to the

6 The term “Interfering Stimulus” (IS) is the accepted nomenclature in this research area. Note,
however, the effects of interference may be either inhibitory or facilitatory.
unrelated IS. In addition, in these paradigms one may systematically vary the
temporal relationships, or stimulus onset asynchronies (SOAs), between when
the picture appears and when the IS is delivered. The differential effects of the
IS are further illuminated by varying these temporal properties.
Several findings have shown that in the early-onset conditions (–150 msec
SOA), picture naming latencies are greater in the presence of semantic IS com-
pared to unrelated IS, whereas phonological IS have little effect (the phonolog-
ical stimuli shared word onsets). It is suggested that the selective interference
reflects an early stage of semantic activation. In the post-onset IS condition
(+150 msec SOA), no semantic interference is evident, but a significant facil-
itatory effect for phonological stimuli is observed. These results support the
model of word production in which a semantic stage is followed by a phono-
logical or word-form stage of activation.
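
The SOA manipulation itself is simple to state (a sketch with illustrative names; a negative SOA presents the interfering stimulus before picture onset, a positive SOA after it):

```python
def onset_times(soa_msec, picture_onset_msec=0):
    """Schedule the interfering stimulus relative to picture onset."""
    return {
        "picture": picture_onset_msec,
        "interfering_stimulus": picture_onset_msec + soa_msec,
    }

print(onset_times(-150))  # IS leads the picture by 150 msec
print(onset_times(+150))  # IS trails the picture by 150 msec
```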
Figure 4.3 shows results obtained from our laboratory on replication and
extension of Schriefers et al.’s (1990) paradigm. This study included over 100
subjects at 5 different SOAs (–200, –100, 0, +100, +200). Figure 4.3 illus-
trates the difference in magnitude of reaction times for the unrelated condition
compared to the interfering stimulus conditions (thus, a negative value reflects
a slowing of reaction time) for aurally presented words under a variety of
conditions. As shown in the figure, at early points in time semantically re-
lated ISs produce significant interference. This semantic interference is greatly
diminished by the 0 msec SOA. At roughly –100 msec SOA we observe ro-
bust phonological facilitation, which was absent at the –200 msec SOA. This
[Figure 4.3 Word–picture interference and facilitation. Reaction time differences (unrelated minus interfering stimulus, in msec) are plotted against stimulus onset asynchrony (–300 to +300 msec) for semantic, phonological onset, and phonological rhyme conditions.]
facilitation begins to diminish at +200 msec SOA. Also plotted are the results
of phonologically rhyming stimuli. Here we observe an early facilitatory effect
that rapidly diminishes. These studies show that there is an early point in time
during which semantic information is being encoded, followed by a time in
which phonological information is being encoded (in particular, word onset in-
formation). This experimental paradigm permits us to tap selectively into these
different phases of speech production. The inclusion of the rhyme condition (a
condition not reported by Schriefers et al. 1990) provides a useful data point
for the assessment of phonological effects in ASL where the concept of shared
onset vs. rhyme is not transparent.
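
For readers tracking the quantity plotted in Figure 4.3, a brief sketch (with made-up reaction times) shows how each point is derived:

```python
def interference_magnitude(rt_unrelated_msec, rt_interfering_msec):
    """Unrelated-condition RT minus interfering-condition RT: negative values
    reflect slowing (interference), positive values speeding (facilitation)."""
    return rt_unrelated_msec - rt_interfering_msec

print(interference_magnitude(820, 850))  # -30: interference (slowing)
print(interference_magnitude(820, 790))  # +30: facilitation
```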

4.4.1 Method
In the sign language adaptation of this experiment, a subject is shown a picture
and is asked to sign the name of the picture as fast as possible. A reaction time
device developed in our laboratory stops a clock when the subject raises his or
her hands off the table to sign the target item. The interference task is achieved
by superimposing the image of a signer on the target picture. This is achieved
through the use of transparent dissolve, a common technique in many digital
video-effects kits. Impressionistically, the effect results in a “semi-transparent”
signer ghosted over the top of the object to be named. In the present experiment
we examined the roles of three interference conditions: semantically related
signs (e.g. picture: cat; sign: COW); phonologically related signs (e.g. picture:
cat; sign: INDIAN); or an unrelated sign (e.g. picture: cat; sign: BALL). In
the majority of cases the phonological overlap consisted of at least two shared
parameters; however, in this initial study these parameter combinations were
not systematically varied.
We examined data from 14 native deaf signers and 25 hearing subjects re-
sponding to a third visual experiment conducted in English. In the English
experiment the target was a picture, while the interfering stimulus was a written
word superimposed on the target picture. This was done to isolate a within-
modality interference effect rather than the crossmodal effects from the audi-
tory experiments. Note that the current ASL experiment was conducted only
for a simultaneous (0 msec SOA) interference condition.

4.4.2 Results
Shown in Figure 4.4 is a comparison of a sign-picture condition with a written-
word condition. At this SOA we observe robust facilitation for written words
that share phonemic overlap with the target. In addition, we observe semantic
interference (whose effects are likely abating at this SOA; see above). ASL sign-
ers show a different pattern; little or no phonological facilitation was observed.
[Figure 4.4 Comparisons of sign–picture and word–picture interference effects at the 0 msec SOA. Bars plot reaction time differences (unrelated minus interfering stimulus) for ASL and English phonology and semantics; the English phonological effect (p < .01) and the semantic effects are significant, while the ASL phonological effect is n.s.]

This stands in contrast to the robust and consistent findings reported in spoken
and written language literature. However, as with written English, we do ob-
serve effects of semantic interference at this SOA. These results are intriguing
and suggest a difference between a semantic and phonological stage of pro-
cessing in sign recognition. However, at this SOA, while both phonological and
semantic effects are observed in the English experiment, we find significant evidence only for semantic effects in ASL.

4.4.3 Discussion
The third experiment used a measure of sign production to examine the effects
of semantic and phonological interference during picture naming in sign lan-
guage. These results showed significant effects for semantic interference but
no effects for phonological interference. These results stand in contrast to sim-
ilar experiments conducted with both written and spoken English, which have
shown opposing effects for semantic and phonological interference.
One of the strengths of the word picture paradigms is the incorporation of
a temporal dimension in the experimental design. By varying the temporal
relationship between the onset of the picture and the onset of the interfering
stimuli, Levelt, Schriefers, Vorberg, Meyer, and colleagues (1991) have been
able to chart out differential effects of semantic and phonological interference.
The choice of the duration of these SOAs has been derived in part from es-
timates of spoken word recognition. The temporal difference in the duration
of words and signs – coupled with the differences in recognition times for
words vs. signs (see Emmorey and Corina 1990) – make it difficult to fully
predict what the optimal time windows will be for ASL. The present sign ex-
periment used a 0 msec SOA. For English (and Dutch) this corresponds to a
time when phonological effects are beginning to show a maximal impact and
semantic effects are beginning to wane. In the ASL experiment, we observed
semantic effects but no phonological effects. The data may reflect that the tem-
poral window in which to observe these effects in signed language is shifted
in time. Ongoing experiments in our laboratory are currently exploring this
possibility.

4.5 Experiment 4: Phonological similarity


A fourth study was motivated by spoken language studies that have investi-
gated the concept of “phonological awareness”. Phonological awareness re-
flects an understanding that words are not unsegmented wholes, but are made
up of component parts. Psychological research has explored various aspects of
phonological awareness: its developmental time course, the impact of delayed
or deficient phonological awareness on other linguistic skills such as reading
(Lundberg, Olofsson, and Wall 1980; Bradley and Bryant 1983), and more re-
cently its neural underpinnings (Petersen, Fox, Posner, Mintun, and Raichle
1989; Rayman and Zaidel 1991; Sergent, Zuck, Levesque, and MacDonald
1992; Paulesu, Frith, and Frackowiak 1993; Poeppel 1996; Zatorre et al.
1996).
In spoken languages, words are composed of segmental phonemic units (i.e.
consonants and vowels) and languages vary in the inventory and composition
of the phonemic units they employ. In signed languages, handshape, location,
movement, and orientation form the essential building blocks of signs. Formal
descriptions of spoken and signed languages permit theoretically driven state-
ments of structural similarity. For example, one might categorize words that
have a “long e” sound or the phoneme sequence /a/ /t/; or the class of signs
that contain a “five” handshape, or touch the chin, or have a repeated circular
movement.
Some structural groupings in spoken languages have a special status. For
example, words that rhyme (e.g. moose-juice) are generally appreciated as
more similar-sounding than words that share onsets (e.g. moose-moon). Clues
to these privileged groupings can often be observed in stylized usage of language
such as poetry, language games, and song. Signed languages are no exception.
Klima and Bellugi (1979) investigated examples of sign poetry (or “art sign”) as
a specialized use of ASL. They observed that a similar handshape may be used
throughout a poem, a device analogous to the spoken language phenomenon
of alliteration (where each word shares a similar initial consonant sound). In
sign poetry there is a great deal of attention to the flow and rhythm of the signs,
much like artistic manipulation of the melodic line in spoken poetry (Rose
1992; Blondel and Miller 1998). Often, the locations and movements of signs
are manipulated to create cohesiveness and continuity between signs. Signs also
may overlap, or be shortened or lengthened, in order to create a rhythmic pattern.
These devices are combined to create strong visual imagery unique to visual–
gestural languages (Cohn 1986). Examples of language games in ASL include
“ABC stories” and “proper name stories.” In these games a story is told with the
constraint that each successive sign in the story must use a handshape drawn
from the manual alphabet in a sequence that follows the alphabet or spells
a proper name. Cheerleader fight songs also evidence repetition of rhythmic
movement patterns and handshape alliteration.
Taken together, these examples demonstrate that sign forms exhibit com-
ponent structures that are accessible to independent manipulation (e.g. hand-
shapes) and provide hints of structural relatedness (e.g. similar movement
paths). However, it should be noted that there is no generally accepted notion
of a naturally occurring structural grouping of sign properties that constitute
an identifiable unit in the same sense that a “rhyme” does for users of spoken
languages. While several researchers have used the term “rhyme” to describe
phonemic redundancy in signs (Poizner, Klima, and Bellugi 1987; Valli 1990)
it remains to be determined whether a specific combination of structural prop-
erties serves this function.
The current exploratory studies gather judgments of sign similarity as rated
by native users of ASL in order to provide insight into the relationship between
theoretical models of sign structure and psychological judgments of similar-
ity. Two experiments sought to uncover psychological judgments of perceptual
similarity of nonmeaningful but phonologically possible signs. We were inter-
ested in what combination of parameters observers would categorize as being
most similar. These experiments tapped into native signer intuitions as to which
parameter, or combination of shared parameters, makes two signs seem particu-
larly similar. These paradigms provided an opportunity to tap into phonological
awareness by investigating phonological similarity in ASL. The similarity judg-
ments of hearing subjects unfamiliar with ASL provided an important control.
A few studies have examined whether similarity ratings of specific components
of signs – especially handshape (Stungis 1981), location (Poizner and
Lane 1978), and movement (Poizner 1983) – differ between signers and
nonsigners. The ratings of handshape and location revealed very high correlations
between signers and hearing nonsigners (r = .88 and r = .82, respectively). These
data suggest that linguistic experience has little effect on these perceptual
similarity ratings. In contrast, Poizner’s (1983) study examined ratings of signs
filmed as point-light displays and found some differences between deaf and
hearing subjects for qualities of movement. Post hoc analysis suggested that deaf signers’ patterns of
dimensional salience mirrored those dimensions that are linguistically relevant
in ASL. However, no studies to date have examined whether theoretically
motivated statements of structural similarity of ASL also describe natural
perceptual groupings.
These experiments used an offline technique to uncover psychological judg-
ments of perceptual similarity in naturally presented nonsign stimuli. A para-
digm developed in our laboratory capitalizes upon the relatively greater parallel
processing afforded by the visual system. In these experiments subjects were
asked to make simultaneous comparisons of multiple sign forms that varied in
a systematic fashion.

4.5.1 Method
The stimuli for these experiments were created by videotaping a deaf male
signer signing a series of ASL nonsigns. Nonsigns are pronounceable, phono-
logically possible signs that are devoid of any meaning. In the first of two
experiments, each trial had a target nonsign in a circular field in the middle of
the screen. Surrounding the central target were alternative nonsign forms, one
in each corner of the screen. Three of these nonsigns shared two parameters
with the target nonsign and differed on one parameter. One shared movement
and location (M + L) and differed in handshape, one shared movement and
handshape (M + H) and differed in location, and one shared location and hand-
shape (L + H) and differed in movement. The remaining flanking nonsign was
phonologically unrelated to the target. All signs (surrounding signs and target
sign) were of equal length and temporally synchronized.
In the second experiment, the target sign shared only one parameter with
three surrounding signs. For both experiments, these synchronized displays
were repeated consecutively five times (for a total of about 15 seconds), with
5 seconds of a black screen between test screens. The repetitions permitted
ample time for all participants to carefully inspect all flanking nonsigns and to
decide which was most similar to the target.
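
The design of the two experiments can be summarized schematically. In the sketch below (our own illustration in Python; the parameter values are invented placeholders, not the actual stimuli), each nonsign is reduced to a triple of major parameters, and each flanker is classified by the parameters it shares with the central target.

    # Each nonsign reduced to three major parameters (a simplification that
    # ignores orientation and nonmanual features).
    from typing import NamedTuple

    class Nonsign(NamedTuple):
        movement: str
        location: str
        handshape: str

    def shared_parameters(a, b):
        """Names of the parameters on which two nonsigns agree."""
        return {f for f in Nonsign._fields if getattr(a, f) == getattr(b, f)}

    target = Nonsign(movement="circular", location="chin", handshape="5")
    flankers = {
        "M+L": Nonsign("circular", "chin", "B"),      # differs in handshape
        "M+H": Nonsign("circular", "forehead", "5"),  # differs in location
        "L+H": Nonsign("straight", "chin", "5"),      # differs in movement
        "unrelated": Nonsign("arc", "torso", "F"),
    }
    for label, flanker in flankers.items():
        print(label, sorted(shared_parameters(target, flanker)))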
The instructions for these experiments were purposely left rather open ended.
The subjects were simply asked to look at the target sign and then decide which
of the flanking signs was “most similar.” We stressed the importance of using
“intuition” in making the decisions, but purposely did not specify a basis for
the similarity judgment. Rather, we were interested in examining the patterns
of natural grouping that might arise during these tasks.
Three subject groups were run on these experiments (native deaf signers, deaf
late learners of ASL, and hearing sign-naive subjects). A full report of these
data may be found in Hildebrandt and Corina (2002). The present discussion
is limited to a comparison of native deaf signers and hearing subjects. Twenty-
one native signers and 42 hearing nonsigners participated in the two shared
parameter study, and 29 native signers and 51 hearing nonsigners participated
in the single shared parameter experiment.
[Figure: bar graph of the percentage of choices for each flanker type (Loc + mov, Hand + mov, Loc + hand, Random), plotted separately for the native and hearing groups]

Figure 4.5 Results from Experiment 1: Two-shared parameter condition

4.5.2 Results
Results from the first experiment (i.e. two-shared parameter condition) are
shown in Figure 4.5. Both deaf and hearing subjects chose signs that shared
movement and location as the most similar in relation to the other combinations
of parameters (M = 45.79%, SD = 15.33%). Examination of the remaining con-
trasts, however, reveals a systematic difference; while the hearing group chose
M + H more often than L + H (t (123) = 3.685, two-tailed p < .001), the na-
tive group chose those two combinations of shared parameters equally often
(t (60) = .455, two-tailed p = .651).
In the second experiment (i.e. the single parameter condition) the native signers
and hearing groups again showed nearly identical patterns of similarity judgments.
Both hearing and deaf subjects rated signs that share a movement, or signs that
share a location, with the target sign as highly similar (all ps < .01). Although
Figure 4.6 suggests that movement was more highly valued by the native signers,
this difference did not reach statistical significance (p = .675).

4.5.3 Discussion
Several important findings emerge from these data. The first, and most striking, is
the overall similarity of native deaf signers and hearing subjects in these judgments:
both deaf and sign-naive subjects chose signs that share movement
and location as the most similar, indicating that this combination of parameters
enables a robust perceptual grouping. The salience of movement and location is
also highlighted in the second experiment, where these parameters once
again served as the basis of the preferred similarity judgments. It is only
when we consider the parameters of M + H vs. L + H that we find group differ-
ences, which we assume here to be an influence of linguistic knowledge of ASL.
[Figure: bar graph of the percentage of choices for each flanker type (Movement, Location, Handshape), plotted separately for the native and hearing groups]

Figure 4.6 Results from Experiment 2: Single parameter condition

Several theories of syllable structure in ASL have proposed that the combi-
nation of movement and location serves as the skeletal structure from which
syllables are built, and that movement is the most sonorant element of the
sign syllable (see, for example, Sandler 1993). In these models, handshape
is represented on a separate linguistic structural tier in order to account for
such phenomena as the spreading of handshape across location and movement
(Sandler 1986). Languages commonly capitalize on robust perceptual distinc-
tions as a basis for linguistic distinctions. The cross-group similarities observed
in the present study reinforce this notion. However, language knowledge does
appear to play a role in these judgments; the lack of a clear preference between
M + H and L + H indicates that each of these combinations is an equally poor
grouping. This may be related to the fact that groupings of M + H or L + H
are not coherent syllabic groupings. Consider a spoken language analogue, in
which subjects make judgments of similarity:
Target: dat Flankers: zat, dut, dal
Assume the pair judged most similar is dat–zat. In contrast we find equal
nonpreferences for the pairs dat–dut (shared onset and final consonant) and
dat–dal (shared onset and vowel). Thus, we conjecture that in the pair dat–zat
the hierarchical structure of the syllable rhyme (nucleus plus coda) provides the
basis for a similarity judgment, while the nonpreferred groupings fail to benefit
from coherent syllabic constituency.
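
The contrast can be made explicit with a toy decomposition of these CVC strings into onset, nucleus, and coda (our own illustrative sketch in Python, valid only for three-letter CVC words):

    # Which syllable constituents does each flanker share with the target "dat"?
    def cvc_parts(word):
        onset, nucleus, coda = word[0], word[1], word[2]
        return {"onset": onset, "nucleus": nucleus, "coda": coda,
                "rhyme": nucleus + coda}  # the rhyme spans nucleus plus coda

    def shared(target, flanker):
        t, f = cvc_parts(target), cvc_parts(flanker)
        return [p for p in ("onset", "nucleus", "coda", "rhyme") if t[p] == f[p]]

    for flanker in ("zat", "dut", "dal"):
        print("dat-" + flanker, shared("dat", flanker))
    # dat-zat shares the rhyme "at", a coherent syllabic constituent; dat-dut
    # and dat-dal share segments that do not form a constituent.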
The hearing subjects’ preference for M + H over L + H combinations is likely
to reflect the perceptual salience of the movement for these sign-naive subjects,
for whom the syllabic violations are not an issue.
In the single parameter study it is somewhat unexpected that movement did
not further differentiate deaf and hearing subjects. Combinations of parameters
may hold more perceptual weight and may also have greater linguistic signifi-
cance. Indeed, the combination of two parameters may be more salient than the
sum of its parts. Spoken language studies also lend support to the psychological
salience of the syllable unit over individual phonemes (Mehler, Dommergues,
Frauenfelder, and Segui 1981).

4.6 General discussion


We have presented results from four psycholinguistic experiments investigating
the role of form-based information in ASL. These efforts represent some of the
first attempts to study the online recognition and production of ASL. With this in
mind, it is very important not to overinterpret these initial findings. More work
is required to draw strong conclusions. In the spirit of these early investigations
we discuss methodological and theoretical issues that may underlie this pattern
of results.
In the three experiments where reaction time measures were used (Lexical
decision, Phoneme monitoring, and Sign–picture interference), we found little
evidence for the effects of form-based information during the online processing
of ASL signs. A fourth experiment used an offline technique to explore the cate-
gorization of phonological similarity between signs. In this experiment, modest
differences were observed which differentiated native signers from sign-naive
persons. What can be made of these findings? We now turn to a speculative
discussion that examines a possible reason for a reduced role of form-based in-
formation in the processing of a signed language during online tasks. The discus-
sion hinges on what is perhaps one of the most interesting modality differences
between signed and spoken language: the differences in articulatory structures.
Without a doubt, one of the most significant differences between signed and
spoken languages is the difference in relationship of the articulators to the ob-
jects of language perception. In spoken languages, the primary articulators (the
tongue, palate, etc.) are largely hidden from view. The perception of the speech
signal relies on an appreciation of the changes in sound pressure that arise as
a result of changes in configuration of these hidden articulators. In contrast, for
signed languages, the primary articulators are visually obvious. This difference
may have profound implications for the mappings that are required between
a language signal and cognitive representations that comprise the form-based
attributes of lexical representations. In particular, we conjecture that signed
languages afford a more transparent mapping from the medium of expression
to form-based representations.
One way to conceptualize this mapping is to consider that language un-
derstanding (both signed and spoken) requires a mapping to an idealized and
internalized articulatory–gestural representation, an idea made popular by the
Motor Theory of Speech Perception (Mattingly and Studdert-Kennedy 1991;
Liberman 1996). If cognitive representations come to reflect the statistical and
language-systematic regularities that are exercised in the service of mapping lin-
guistic signals to meaning, spoken language processing may necessitate a more
elaborate form-based representation. Differences in form-based representations
may have processing consequences. Specifically, the reduced effects of form-
based manipulations evident in the present series of online experiments may
reflect this reduced (but not absent) role of intermediary representations in ASL.
Finally, we return to the question of the psychological reality of phonological
structure in ASL. At the outset we stated that to the extent that the behavioral
manifestations of theoretical linguistic constructs can be empirically validated,
those constructs are deemed to have a “psychological reality”. We have shown,
as measured by these experiments, that the behavioral effects of some phono-
logical form-based properties are difficult to establish. But, in other psycho-
logical and neurological domains these reflexes are clearly apparent. We have
seen, for example, that in measures that encourage controlled processing, na-
tive signers evidence sensitivities to form-based overlap (Corina and Emmorey
1993) and syllable structure (Hildebrandt and Corina 2002). Naturally oc-
curring slips of the hand show systematic substitutions of parameters (Klima
and Bellugi 1979). In neuropsychology the dissolution of signing in the face of
aphasia (Poizner, Klima, and Bellugi 1987; Hickok, Bellugi, and Klima 1998;
Corina 1998), Wada testing (sodium amytal test), and direct cortical stimulation
(Corina, McBurney, Dodrill, Hinshaw, Brinkley, and Ojemann 1999; Corina
2000) show that form-based errors honor independently motivated linguistic
models. Future studies will no doubt ferret out additional conditions under
which form-based properties either do or do not contribute to sign language
processing. The discovery of these conditions will provide powerful insights
into the necessary and sufficient conditions of human linguistic representation.

Acknowledgments
This work was supported by an NIDCD grant (R29-DC03099) awarded to David
Corina. We thank the deaf volunteers who participated in this study. We ac-
knowledge the help of Nat Wilson, Deba Ackerman, and Julia High. We thank
the reviewers for helpful comments.

4.7 References
Ades, Anthony E. 1977. Vowels, consonants, speech, and nonspeech. Psychological
Review 84:524–530.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Blondel, Marion and Christopher Miller. 1998. The relation between poetry and phonol-
ogy: Movement and rhythm in nursery rhymes in LSF. Paper presented at the 2nd
Intersign Workshop, Leiden, The Netherlands, December.
Bradley, Lynette L. and Peter E. Bryant. 1983. Categorizing sounds and learning to read:
A causal connection. Nature 301:419–421.
Brentari, Diane. 1990. Licensing in ASL handshape change. In Sign language re-
search: Theoretical issues, ed. Ceil Lucas. Washington, DC: Gallaudet University
Press.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Cohn, Jim. 1986. The new deaf poetics: Visible poetry. Sign Language Studies 52:263–
277.
Corina, David P. 1991. Towards an understanding of the syllable: Evidence from linguis-
tic, psychological, and connectionist investigations of syllable structure. Doctoral
dissertation, University of California, San Diego, CA.
Corina, David P. 1992. Biological foundations of phonological feature systems: Evidence
from American Sign Language. Paper presented to the Linguistics Departmental
Colloquium, University of Chicago, IL.
Corina, David P. 1993. To branch or not to branch: Underspecification in ASL handshape
contours. In Phonetics and Phonology, Vol. 3: Current issues in ASL Phonology,
ed. Geoffrey R. Coulter, 63–95. San Diego, CA: Academic Press.
Corina, David P. 1996. ASL syllables and prosodic constraints. Lingua 98:73–102.
Corina, David P. 1998. Aphasia in users of signed languages. In Aphasia in atypical pop-
ulations, ed. Patrick Coppens, Yvan Lebrun, and Anna Basso, 261–309. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Corina, David P. 2000. Some observations regarding paraphasia and American Sign
Language. In The signs of language revisited: An anthology to honor Ursula Bellugi
and Edward Klima, ed. Karen Emmorey and Harlan Lane, 493–507. Mahwah, NJ:
Lawrence Erlbaum Associates.
Corina, David P. and Karen Emmorey. 1993. Lexical priming in American Sign
Language. Paper presented at the Linguistic Society of America Conference,
Philadelphia, PA.
Corina, David P., Susan L. McBurney, Carl Dodrill, Kevin Hinshaw, Jim Brinkley,
and George Ojemann. 1999. Functional roles of Broca’s area and SMG: Evi-
dence from cortical stimulation mapping in a deaf signer. NeuroImage 10:570–
581.
Corina, David, and Wendy Sandler. 1993. On the nature of phonological structure in
sign language. Phonology 10:165–207.
Cutler, Anne, Brit van Ooijen, Dennis Norris, and Rosa Sánchez-Casas. 1996. Speeded
detection of vowels: A cross-linguistic study. Perception and Psychophysics
58:807–822.
Cutler, Anne, and Takashi Otake. 1998. Perception and suprasegmental structure in a
non-native dialect. Journal of Phonetics 27:229–253.
Elman, Jeffrey L. and James L. McClelland. 1988. Cognitive penetration of the
mechanisms of perception: Compensation for coarticulation of lexically restored
phonemes. Journal of Memory and Language 27:143–165.
Emmorey, Karen. 1987. Morphological structure and parsing in the lexicon. Doctoral
dissertation, University of California, Los Angeles.
Emmorey, Karen and David Corina 1990. Lexical recognition in sign language: Effects
of phonetic structure and morphology. Perceptual and Motor Skills 71:1227–1252.
Goldinger, Stephen D., Paul A. Luce, David B. Pisoni, and Joanne K. Marcario. 1993.
Form-based priming in spoken word recognition: The role of competition and bias.
Journal of Experimental Psychology: Learning, Memory, and Cognition 18:1211–
1238.
Hickok, Gregory, Ursula Bellugi, and Edward S. Klima. 1998. The neural organization
of language: Evidence from sign language aphasia. Trends in Cognitive Science
2:129–136.
Hildebrandt, Ursula C., and David P. Corina. 2002. Phonological similarity in American
Sign Language. Language and Cognitive Processes 17(6).
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Levelt, Willem J. M., Herbert Schriefers, Dirk Vorberg, and Antje S. Meyer. 1991. The
time course of lexical access in speech production: A study of picture naming.
Psychological Review 98:122–142.
Liberman, Alvin M. 1996. Speech: A special code. Cambridge, MA: MIT Press.
Liberman, Alvin M., Franklin S. Cooper, Donald P. Shankweiler, and Michael
Studdert-Kennedy. 1967. Perception of the speech code. Psychological Review
74:431–461.
Liddell, Scott K. and Robert E. Johnson. 1985. American Sign Language: The phono-
logical base. Manuscript, Gallaudet University, Washington, DC.
Lundberg, Ingvar, Ake Olofsson, and Stig Wall. 1980. Reading and spelling skills in
the first school years predicted from phonemic awareness skills in kindergarten.
Scandinavian Journal of Psychology 21:159–173.
Lupker, Stephen J. and Lucia Colombo. 1994. Inhibitory effects in form priming: Evalu-
ating a phonological competition explanation. Journal of Experimental Psychology:
Human Perception and Performance 20:437–451.
Mattingly, Ignatius G. and Michael Studdert-Kennedy. 1991. Modularity and the motor
theory of speech perception. Hillsdale, NJ: Lawrence Erlbaum Associates.
Mehler, Jacques, Jean Y. Dommergues, Uli Frauenfelder, and Juan Segui. 1981. The
syllable’s role in speech segmentation. Journal of Verbal Learning and Verbal
Behavior 20:298–305.
Paulesu, Eraldo, Christopher D. Frith, and Richard S. J. Frackowiak. 1993. The neural
correlates of the verbal component of working memory. Nature 362:342–345.
Perlmutter, David M. 1993. Sonority and syllable structure in American Sign Language.
In Phonetics and Phonology, Vol. 3: Current issues in ASL phonology, ed. Geoffrey R. Coulter,
227–261. San Diego, CA: Academic Press.
Petersen, Steven E., Peter T. Fox, Michael I. Posner, Mark A. Mintun, and Marcus E.
Raichle. 1989. Positron-emission tomographic studies of the processing of single
words. Journal of Cognitive Neuroscience 1:153–170.
Poeppel, David. 1996. A critical review of PET studies of phonological processing.
Brain and Language 55:317–351.
Poizner, Howard. 1983. Perception of movement in American Sign Language: Effects
of linguistic structure and linguistic experience. Perception and Psychophysics
33:215–231.
Poizner, Howard, Edward S. Klima, and Ursula Bellugi. 1987. What the hands reveal
about the brain. Cambridge, MA: MIT Press.
Poizner, Howard and Harlan Lane. 1978. Discrimination of location in American Sign
Language. In Understanding language through sign language research, ed. Patricia
Siple, 271–287. San Diego, CA: Academic Press.
Rayman, Janice and Eran Zaidel. 1991. Rhyming and the right hemisphere. Brain and
Language 40:89–105.
Repp, Bruno H. 1984. Closure duration and release burst amplitude cues to stop conso-
nant manner and place of articulation. Language and Speech 27:245–254.
Rose, Heidi. 1992. A semiotic analysis of artistic American Sign Language and perfor-
mance of poetry. Text and Performance Quarterly 12:146–159.
Sandler, Wendy. 1986. The spreading hand autosegment of ASL. Sign Language Studies
15:1–28.
Sandler, Wendy. 1993. Sonority cycle in American Sign Language. Phonology 10:243–
279.
Savin, Harris B. and Thomas G. Bever. 1970. The nonperceptual reality of the phoneme. Journal
of Verbal Learning and Verbal Behavior 9:295–302.
Schriefers, Herbert, Antje S. Meyer, and Willem J. Levelt. 1990. Exploring the time
course of lexical access in language production: Picture/word interference studies.
Journal of Memory and Language 29:86–102.
Sergent, Justine, Eric Zuck, Michel Levesque, and Brennan MacDonald. 1992. Positron-
emission tomography study of letter and object processing: Empirical findings and
methodological considerations. Cerebral Cortex 2:68–80.
Slowiaczek, Louisa M., and Marybeth Hamburger. 1992. Prelexical facilitation and lexi-
cal interference in auditory word recognition. Journal of Experimental Psychology:
Learning, Memory, and Cognition 18:1239–1250.
Slowiaczek, Louisa M., Howard. C. Nusbaum, and David B. Pisoni. 1987. Phonolog-
ical priming in auditory word recognition. Journal of Experimental Psychology:
Learning, Memory, and Cognition 13:64–75.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline, and Carl G. Croneberg. 1965. A Dictionary
of American Sign Language on linguistic principles. Washington, DC: Gallaudet
University Press.
Stungis, Jim. 1981. Identification and discrimination of handshape in American Sign
Language. Perception and Psychophysics 29:261–276.
Valli, Clayton. 1990. The nature of a line in ASL poetry. In SLR ’87: Papers from the 4th
International Symposium on Sign Language Research, Lappeenranta, Finland, July
1987 (International Studies on Sign Language and Communication of the Deaf; 10),
ed. William H. Edmondson and Fred Karlsson, 171–182. Hamburg: Signum.
van Ooijen, Brit, Anne Cutler, and Dennis Norris. 1992. Detection of vowels and con-
sonants with minimal acoustic variation. Speech Communication 11:101–108.
Zatorre, Robert J., Ernst Meyer, Albert Gjedde, and Alan C. Evans. 1996. PET studies
of phonetic processing of speech: Review, replication, and reanalysis. Cerebral
Cortex 6:21–30.
5 Modality-dependent aspects of sign language
production: Evidence from slips of the hands
and their repairs in German Sign Language

Annette Hohenberger, Daniela Happ, and Helen Leuninger

5.1 Introduction
In the present study, we investigate both slips of the hand and slips of the tongue
in order to assess modality-dependent and modality-independent effects in lan-
guage production. As a broader framework, we adopt the paradigm of generative
grammar, as it has been developed over the past 40 years (Chomsky 1965; 1995,
and related work of other generativists). Generative grammar focuses on both
universal and language-particular aspects of language. The universal charac-
teristics of language are known as Universal Grammar (UG). UG defines the
format of possible human languages and delimits the range of possible variation
between languages. We assume that languages are represented and processed
by one and the same language module (Fodor 1983), no matter what modal-
ity they use. UG is neutral with regard to the modality in which a particular
language is processed (Crain and Lillo-Martin 1999).
By adopting a psycholinguistic perspective, we ask how a speaker’s or
signer’s knowledge of language is put to use during the production of lan-
guage. So far, models of language production have been developed mainly
on the basis of spoken languages (Fromkin 1973; 1980; Garrett 1975; 1980;
Butterworth 1980; Dell and Reich 1981; Stemberger 1985; Dell 1986; MacKay
1987; Levelt 1989; Levelt, Roelofs, and Meyer 1999). However, even the set
of spoken languages investigated so far is restricted (with a clear focus on
English). Thus, Levelt et al. (1999:36) challenge researchers to consider a
greater variety of (spoken) languages in order to broaden the empirical basis
for valid theoretical inductions. Yet, Levelt and his colleagues do not go far
enough. A greater challenge is to include sign language data more frequently in
all language production research. Such data can provide the crucial evidence for
the assumed universality of the language processor and can inform researchers
what aspects of language production are modality dependent and what aspects
are not.


5.2 Goals and hypotheses


We follow Levelt (1983; 1989; 1999; Levelt et al. 1999) in adopting a model
of language production with one component that generates sentences (the
processor) and another that supervises this process (the monitor). Therefore,
we have formulated two hypothesis pairs with regard to the processor and the
monitor (see also Leuninger, Happ, and Hohenberger 2000a):
• Hypothesis 1a: The language processor is modality neutral (amodal).
• Hypothesis 1b: The content of the language processor (phonology,
  morphology, syntax) is modality dependent.
• Hypothesis 2a: The monitor is modality neutral.
• Hypothesis 2b: The content of the monitor is modality dependent.
These paired hypotheses are well in line with what other sign language
researchers advocate with regard to modality and modularity (Crain and Lillo-
Martin 1999:314; Lillo-Martin 1999; Lillo-Martin this volume): while the input
and output modules of spoken and signed languages are markedly different,
the representations and processing of language are the same because they are
computed by the same amodal language module.
The goal of our study is to investigate these hypotheses as formulated above.
We are interested in finding out, in the first place, how a purported amodal
language processor and monitor work in the two different modalities. Therefore,
we investigate signers of German Sign Language (Deutsche Gebärdensprache
or DGS) and speakers of German, and present them with the same task. The
tension between equality and difference is, we feel, a very productive one and
is at the heart of any comparative investigation in this field.
Hypotheses 1b and 2b deserve some elaboration. By stating that the content
of the language processor and the monitor are modality dependent we mean
that phonological, morphological, and syntactic representations are different for
signed and spoken languages. Some representations may be the same (syntactic
constructions such as wh-questions, topicalizations, etc.); some may be differ-
ent (signed languages utilize spatial syntax and have a different pronominal
system); some may be absent in one of the languages but present in the other
(signed languages utilize two-handed signs, classifiers, facial gestures, other
gestures, etc., but spoken languages do not utilize these language devices). If
modality differences are to be found they will be located here, not in the overall
design of the processor. The processor will deal with and will be constrained by
those different representations. As the function of the processor is the same no
matter what language is computed – conveying language in real-time – the pro-
cessor dealing with signed language and the one dealing with spoken language
will have to adapt to these different representations, exploit possible process-
ing advantages, and compensate for possible disadvantages (Gee and Goodhart
1988). One prominent dimension in this respect is simultaneity/linearity of
grammatical encoding. In the sense of UG, the format of linguistic represen-
tations, however, is the same for both modalities. Both modalities may draw
on different offers made available by UG, but, crucially, this format will al-
ways be UG-constrained. Natural languages – if signed or spoken – will never
fall out of this UG space. The extent to which a particular language will draw
upon simultaneity or linearity as an option will, of course, depend on spe-
cific (Phonetic Form or PF) interface conditions of that language.1 Different
interface conditions select different options of grammatical representations, all
of which are made available by UG. Therefore, UG-constrained variation is
a fruitful approach to the modality issue. In this respect, we distinguish three
sources of variation:
r “Intra-modal variation” between languages: This variation pertains to cross-
linguistic differences between spoken languages (e.g. English vs. German)
or crosslinguistic differences between signed languages (e.g. ASL vs. DGS).
r “Inter-modal variation” (e.g. German vs. DGS): This variation is highly wel-
come as it can test the validity of the concept of UG and the modularity
hypothesis.
r “Typological variation”: It is important not to mix modality and typolog-
ical effects. The mapping of languages onto the various typological cate-
gories (fusional, isolating, agglutinating languages, or, more generally, con-
catenative vs. nonconcatenative languages) can cut across modalities. For
example, spoken languages as well as signed languages may belong to the
same typological class of fusional/nonconcatenative languages (Leuninger,
Happ, and Hohenberger 2000a).2 Sign languages, however, seem to uni-
formly prefer nonconcatenative morphology and are established at a more

1 In Chomsky’s minimalist framework (1995), syntax has two interfaces: one phonetic-articulatory
(Phonetic Form, PF) and one logical-semantic (Logical Form, LF). Syntactic representations
have to meet wellformedness constraints on these two interfaces, otherwise the derivation fails.
LF is assumed to be modality neutral; PF, however, imposes different constraints on signed and
spoken languages. Therefore, modality differences should be expected with respect to the PF
interface.
2 In this sense, spoken German shares some aspects of nonconcatenativity with German Sign
Language. Of course, DGS displays a higher degree of nonconcatenativity due to the many
features that can be encoded simultaneously (spatial syntax, facial gestures, classifiers, etc.). In
spoken German, however, grammatical information can also be encoded simultaneously. Ablaut
(vowel gradation) is a case in point: alternation of the stem vowel in /gVb/ yields various forms:
geben (‘to give,’ infinitive), gib (‘give,’ second person singular imperative), gab (‘gave,’ first and
third person singular past tense), die Gabe (‘the gift,’ noun), gäbe (‘give,’ subjunctive mood).
Here, morphological information is realized by vowel alternation within the stem – a process
of infixation – and not by suffixation, the default mechanism of concatenation. A fortiori,
Semitic languages with their autosegmental morphology (McCarthy 1981) and tonal languages
(Odden 1995) also pattern with DGS. In syntax, sign languages also pattern with various spoken
languages with respect to particular parametric choices. Thus, Lillo-Martin (1986; 1991; see
also Crain and Lillo-Martin 1999) shows that ASL shares the Null Subject option with Italian
(and other Romance languages) and the availability of empty discourse topics with languages
such as Chinese.
extreme pole on the continuum of isolating vs. fusional morphology (see Sec-
tion 5.5.3.1).

5.3 A serial model of language production


As we investigate DGS production from a model-theoretic viewpoint, we tie
our empirical research to theories of spoken language production that have been
proposed in the literature. We adopt Levelt’s model (1989; 1992; 1999; Levelt
et al. 1999) which is grounded in the seminal work of Garrett (1975; 1980) and
Fromkin (1973; 1980).3
Levelt’s “speaking” model (1989) comprises various modules: the conceptualizer,
the formulator, the articulator, audition, and the speech-comprehension
system. Levelt also includes two knowledge bases: discourse/world knowledge
and the mental lexicon. Furthermore, in the course of language planning, Levelt
distinguishes several planning steps from “intention to articulation” (the subtitle
of Levelt 1989), namely conceptualizing, formulating, and articulating.
Formulating proceeds in two discrete steps: grammatical encoding (access to
lemmas, i.e. semantic and syntactic information) and phonological encoding
(access to lexemes, i.e. phonological word forms). This two-stage approach is
the defining characteristic of Levelt’s and Garrett’s discrete serial production
models.
Figure 5.1 depicts Levelt’s model of language production. The serial process
of sentence production is shown on the left-hand side of the diagram. The
monitor, which is located in the conceptualizer and conceived of as an independent
functional module, makes use of the speech-comprehension system shown
on the right-hand side via two feedback loops: one internal (internal speech)
and one external (overt speech).
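
The discrete serial architecture can be rendered as a simple pipeline, as in the following toy sketch (our own Python caricature of the stages named above; the model itself is a psychological theory, not code):

    # Each stage consumes the previous stage's output; there is no feedback
    # within the processor itself (monitoring runs as a separate loop).
    def conceptualize(intention):
        return "preverbal message for: " + intention

    def grammatical_encoding(message):
        # lemma access: semantic and syntactic information -> surface structure
        return "surface structure of (" + message + ")"

    def phonological_encoding(surface):
        # lexeme access: phonological word forms -> phonetic plan
        return "phonetic plan of (" + surface + ")"

    def articulate(plan):
        return "overt speech from (" + plan + ")"

    print(articulate(phonological_encoding(grammatical_encoding(
        conceptualize("greeting")))))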
How can the adequacy of this model and the validity of our hypotheses be
determined? Of the different empirical approaches to this topic (all of which
are discussed in Levelt 1989; Jescheniak 1999), we chose language production
errors, a data class that has a long tradition of investigation in psycholinguistic
research. The investigation of slips of the tongue in linguistic research dates
back to the famous collection of Meringer and Mayer (1895); this collection
instigated a long tradition of psycholinguistic research (see, amongst others,
Fromkin 1973; 1980; Garrett 1975; 1980; Dell and Reich 1981; Cutler 1982;
Stemberger 1985; 1989; Dell 1986; MacKay 1987; Berg 1988; Leuninger 1989;
Dell and O’Seaghdha 1992; Schade 1992; 1999; Poulisse 1999).
3 With the adoption of a serial model of language production, we do not intend to neglect or
disqualify interactive models that have been proposed by connectionists or cascading models.
The sign language data that we discuss here must, in principle, also possibly be accounted for
by these models. The various models of language production are briefly reviewed in an article
by Jescheniak (1999) and are discussed in depth in Levelt, Roelofs, and Meyer (1999).

CONCEPTUALIZER

Discourse model,
Message
situation knowledge,
generation
encyclopedia, etc.

Monitoring

Parsed speech
Preverbal message

FORMULATOR SPEECH--COMPREHENSION
SYSTEM
Grammatical
encoding
LEXICON
Surface lemmas
structure
forms
Phonological
encoding

Phonetic plan
(internal speech) Phonetic string

ARTICULATOR AUDITION

overt speech

Figure 5.1 Levelt’s (1989:9) model of language production

The investigation of slips of the hand is still relatively young. Klima and
Bellugi (1979) and Newkirk, Klima, Pedersen, and Bellugi (1980) were the
first to present a small corpus of slips of the hand (spontaneous as well as
videotaped slips) in American Sign Language (ASL). Sandler and Whittemore
added a second small corpus of elicited slips of the hand (Whittemore 1987).
In Europe, as far as we know, our research on slips of the hand is the first.
Slips (of the tongue or of the hand) offer the rare opportunity to glimpse inside
the brain and to obtain momentary access to an otherwise completely covert
process: language production is highly automatic and unconscious
(Levelt 1989). Slips open a window to the (linguistic) mind (Wiese 1987).
This is the reason for the continued interest of psycholinguists in slips. They
are nonpathological and involuntary deviations from an original plan which
can occur at any stage during language production. Slips are highly charac-
teristic of spontaneous language production. Although a slip is an error, it
reveals the normal process underlying language production. In
analyzing the error we can find out what the production process normally looks
like.4

5.4 Method: Elicitation of slips of the hand


Traditionally, slips (of the tongue and hand) have been studied in a non-intrusive
way, by means of recording them ex post facto in a paper-and-pencil fashion.
Alternatively, more restricted experimental methods have been invoked to elicit
slips at a higher rate (Baars, Motley, and MacKay 1975; Motley and Baars 1976;
Baars 1992).
In order to combine the advantages of both methods – naturalness as well as
objectivity of the data – we developed the following elicitation task. We asked
10 adult deaf signers to sign 14 picture stories of varying lengths under various
cognitive stress conditions (unordered pictures, signing under time pressure,
cumulative repetition of the various pictures in the story, combinations of the
conditions).5
Figure 5.2 shows one of the short stories that had to be verbalized.6 The
signers, who were not informed about the original goal of the investigation, were
videotaped for 30–45 minutes. This raw material was subsequently analyzed
by the collaborators of the project. Importantly, a deaf signer who is compe-
tent in DGS as well as in linguistic theory participated in the project. We see
these as indispensable preconditions for being able to identify slips of the hand.
Then, video clips of the slip sequences were digitized and fed into a large com-
puter database. Subsequently, the slips and their corrections were categorized
according to the following main criteria:7
• type of slip: anticipation, perseveration, harmony error,8 substitution
  (semantic, formal, or both semantic and formal), blend, fusion, exchange,
  deletion;
• entity: phonological feature, morpheme, word, phrase;
• correction: yes/no; if yes, then by locus of correction: before word, within
  word, after word, delayed.
4 This is also the logic behind Caramazza’s (1984) transparency condition. On the limitations of
speech errors as evidence for language production processes, see also Meyer (1992).
5 Cognitive stress is supposed to diminish processing resources which should affect language pro-
duction as a resource-dependent activity (compare Leuninger, Happ, and Hohenberger 2000a).
6 We thank DawnSignPress, San Diego, for kind permission to use the picture material of two of
their VISTA course books for teaching ASL (Smith, Lentz, and Mikos 1988; Lentz, Mikos, and
Smith 1989).
7 The complete matrix contains additional information which is not relevant in the present context.
8 Whereas the other pertinent slip categories need no further explanation, we briefly define “har-
mony” error here. By “harmony” we denote an error that has two sources, one in the left and
one in the right context, so that it is impossible to tell whether it is an anticipation or a per-
severation. Note that Berg (1988) calls these errors doppelquellig (“errors with two sources”),
and Stemberger (1989) calls them “A/P errors” (anticipation/perseveration). We prefer the term
“harmony” as it captures well the fact that two identical elements in the left and right context
“harmonize” the element in their middle.
Figure 5.2 Picture story of the elicitation task

Figure 5.3a SEINE [Y-hand]; 5.3b ELTERN [Y-hand]; 5.3c correct: SEINE
[B-hand]

Our scoring procedure is illustrated by the following slip of the hand:


(1) SEINE [B-hand → Y-hand] ELTERN9
his parents
‘his parents’
In (1) the signer anticipates the Y handshape of ELTERN (see Figure 5.3b)
when signing the possessive pronoun SEINE (see Figure 5.3a), which is
correctly signed with the B handshape (see Figure 5.3c). The other three
phonological features (hand orientation, movement, and place of articulation)
are not affected. The slip apparently went unnoticed by the signer, as evidenced
by the fact that it was not corrected. Scoring for example (1), encoded as a
record in the sketch below:
• type of slip: anticipation;
• entity: phonological feature (handshape);
• correction: no.
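
A minimal sketch of how such a scoring can be represented as a data record follows (our own illustration in Python; the field names mirror the criteria above and are not the project’s actual database schema):

    # Hypothetical record type for a scored slip; illustrative only.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SlipRecord:
        slip_type: str                  # e.g. "anticipation", "blend", "fusion"
        entity: str                     # "phonological feature", "morpheme",
                                        # "word", or "phrase"
        feature: Optional[str] = None   # e.g. "handshape", for phonological slips
        corrected: bool = False
        correction_locus: Optional[str] = None  # "before word", "within word",
                                                # "after word", or "delayed"

    # The scoring of example (1), SEINE [B-hand -> Y-hand] ELTERN:
    example_1 = SlipRecord(slip_type="anticipation",
                           entity="phonological feature",
                           feature="handshape",
                           corrected=False)
    print(example_1)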
In Section 5.5, we present our major empirical findings on slips of the hands
and compare them to slips of the tongue.
9 We represent the slips of the hand by using the following notations:
  SEINE                The slip is given in italics.
  [B-hand → Y-hand]    In brackets, we first note the intended form, followed
                       by the erroneous form after the arrow.
  S(OHN)               In parentheses, we note parts of the sign which are not
                       spelled out.
  GEHT-ZU              The hyphen indicates a single DGS sign, as opposed to
                       separate words in spoken German.
  //                   The double slash indicates the point of interruption.
  mouth gesture
  NICHT-VORHANDEN      Nonmanual parts of a sign (in this case, mouth gestures)
                       are represented on an additional layer.
Table 5.1 DGS slip categories, cross-classified with affected entity

Slip of the hand type    n      %     Word   Phonological feature: total        Morpheme   Phrase
                                             (handshape, orientation, movement,
                                             place, other, h1+h2, combination)
Anticipation             44    21.7     9    32 (16, 4, 2, 5, 5, –, –)              3         –
Perseveration            45    22.1    12    31 (11, 9, 1, 3, 3, 3, 1)              2         –
Harmony                  13     6.4     –    13 (10, 1, 2, –, –, –, –)              –         –
Substitution              5     2.5     4     –                                     1         –
  semantic               38    18.7    35     –                                     3         –
  formal                  1     0.5     1     –                                     –         –
  semantic and formal     1     0.5     –     –                                     1         –
Blend                    32    15.7    30     –                                     1         1
Fusion                   18     8.8    18     –                                     –         –
Exchange                  2     1.0     1     –                                     1         –
Deletion                  4     2.0     2     2 (–, –, –, 1, 1, –, –)               –         –
Total                   203           112    78                                    12         1
Total (as percentage)   100.0         55.2   38.4                                   6         –
5.5 Results

5.5.1 Distribution of slip categories and affected entities


In Table 5.1 we analyze the distribution of the various slip categories cross-
classified with entities.10 In these data, the slip categories that contain the most
errors are anticipation and perseveration; these are syntagmatic errors. The next
largest categories are semantic substitutions and blends; these are paradigmatic
errors.
In a syntagmatic error, the correct serialization of elements is affected. Ob-
servationally, a phonological feature, such as a handshape, is spelled out too
early (anticipation)11 or too late (perseveration).12 If a phonological feature is
affected, this error is located in the formulator module; strictly speaking this
happens during the access of the lexeme lexicon where the phonological form
of a word is specified.
In a paradigmatic error, elements that are members of the same paradigm are
affected. A paradigm may, for example, consist of verbs that share semantic fea-
tures. Typically, one verb substitutes for a semantically related one; for example
SIT substitutes for STAND. Semantic substitutions take place in the formulator
again, but this time during access of the lemma-lexicon where semantic and
grammatical category information is specified.
The most frequently affected entities are sign words, followed by phonologi-
cal parameters. Morphemes and phrases are only rarely affected. Most slip cat-
egories co-occur with all entities. There are, however, preferred co-occurrences
that are presented in Section 5.5.2.
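
As a quick check, the entity percentages in the bottom rows of Table 5.1 follow directly from the raw counts (our own arithmetic on the published totals):

    # Entity percentages out of the 203 slips in Table 5.1.
    total = 203
    for entity, n in [("word", 112), ("phonological feature", 78), ("morpheme", 12)]:
        print(entity, round(100 * n / total, 1), "%")
    # word 55.2 %, phonological feature 38.4 %, morpheme 5.9 % (rounded to 6
    # in the table)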

5.5.2 Selection of original slips of the hand


In this section we present a qualitative analysis of a small collection of slips of
the hand that exemplify the major results in Section 5.5.1. The errors may or may
not be corrected. Typically, paradigmatic errors such as semantic substitutions

10 The categories and the affected entities are those described in Section 5.4. The phonological
features are further specified as handshape, hand orientation, movement, and place of articula-
tion. The category ‘other’ concerns other phonological errors; for example the proper selection
of fingers or the contact. The category ‘h1 and h2’ concerns the proper selection of hands, e.g. a
one-handed sign is changed into a two-handed sign. The category ‘combination’ concerns slips
where more than one phonological feature is changed.
11 Compare example (1) in Section 5.4.
12 In a serial, modular perspective (as in Garrett, Levelt), the problem with syntagmatic errors
concerns the proper binding of elements to slots specified by the representations on the respec-
tive level. From a connectionist perspective, the problem with syntagmatic errors concerns the
proper timing of elements. Both approaches, binding-by-evaluation and binding-by-timing are
competing conceptions of the language production process (see also Levelt, Roelofs, and Meyer
1999).

Figure 5.4a substitution: VA(TER); 5.4b conduite: SOHN; 5.4c target/correction: BUB

and blends, referred to in Section 5.5.1, affect words. Example (2) is a semantic
substitution (with a conduite d’approche13 ):
(2) (Context: A boy is looking for his missing shoe)
VA(TER) [BUB → VATER] S(OHN) [conduite: BUB → SOHN] BUB
father son boy
‘the father, son, boy’
The signer starts with the erroneously selected lemma VATER (‘father’) given
in Figure 5.4a. That BUB and not VATER is the intended sign can be inferred
from the context, in which the discourse topic is the boy who is looking for his
missing shoe. After introducing the boy, the signer goes on to say where the
boy is looking for his shoe. Apart from contextual information, the repair BUB
also indicates the target sign. Immediately after the onset of the movement of
VATER, the signer changes the handshape to the F-hand with which SOHN
(‘son’) is signed (see Figure 5.4b).14 Eventually, the signer converges on the target
sign BUB (‘boy’) as can be seen in Figure 5.4c.
Linearization errors such as anticipation, perseveration, and harmony errors
typically affect phonological features. Example (3) is a perseveration of the
handshape of the immediately preceding sign:
13 A conduite d’approche is a stepwise approach to the target word, either related to semantics or
to form. In (2) the target word BUB is reached only via the semantically related word SOHN,
the conduite.
14 In fact the downward movement is characteristic of TOCHTER (‘daughter’); SOHN (‘son’)
is signed upwards. We have, however, good reasons to suppose that SOHN is, in fact, the in-
tended intermediate sign which only coincidentally surfaces as TOCHTER because of the com-
pelling downward movement from VATER to the place of articulation of BUB. Thus, the string
VATER–SOHN–BUB behaves like a compound.

Figure 5.5a VATER [B-hand]; 5.5b slip: MUTTER [B-hand]; 5.5c correct:
MUTTER [G-hand]

(3) (Discourse topic: the boy)
    (ER) GEHT-ZU VATER MUTTER [G-hand → B-hand] SAGT-BESCHEID
    (He) goes-to father mother tells-them
    ‘(The boy) goes to father and mother, and tells them . . .’
In (3) the B handshape of VATER (‘father’), shown in Figure 5.5a, is
perseverated on the sign for MUTTER (‘mother’), as shown in Figure 5.5b.
MUTTER is correctly signed with the G-hand, as can be seen in Figure 5.5c.
With regard to serial handshape errors, we need to explain how erroneous
phonological processes are distinguished from non-erroneous ones. First,
Zimmer (1989) reports on handshape anticipations and perseverations on the
nondominant hand, which occur frequently in “casual” registers. While we
acknowledge the phenomenon Zimmer describes, it is important not to mix those
cases with the ones reported here, which concern the dominant hand only.15
Second, it has been observed that signers of ASL and of Danish Sign Lan-
guage may systematically assimilate the index [G] handshape to the preceding
or following sign with first person singular, but not with second and third person
singular. Superficially, these cases look like anticipations and perseverations.
While we also observe this phenomenon in DGS, it does not seem to have a
systematic status comparable to that in ASL.16 In example (1) above, SEINE
ELTERN (‘his parents’), it is the third person singular possessive pronoun SEINE
(‘his’) that is affected. This clearly cannot be accounted for along the lines of Zimmer (1989).
Handshape is the most prominent phonological feature to be affected by lin-
earization errors. Our findings with regard to the high proportion of handshape
errors among the phonological slips reproduce earlier findings of Klima and
15 In fact we found only one perseveration that concerns the nondominant hand of a P2-sign (in
the sense of Sandler 1993). The nondominant hand is rarely affected, and this fact might mirror
the minor significance of the nondominant hand in sign language production.
16 We found only four such cases (three anticipations and one perseveration) which involved first
person singular.

Figure 5.6a MANN [forehead]; 5.6b slip: FRAU [forehead]; 5.6c correct:
FRAU [chest]

Bellugi (1979; see also Newkirk et al. 1980, and Section 5.5.3). They report
that 49.6 percent of phonological errors involve handshape, which closely matches our ratio of 47.4 percent. Our re-
sult is also confirmed by findings in sign language aphasia, where phonological
paraphasia mostly concerns the handshape parameter (Corina 1998).
Other phonological features – such as hand orientation, movement, and place
of articulation – are only rarely affected. In (4) we introduce a place of articu-
lation error:
(4) (Context: The signer suddenly realizes that the character he had re-
ferred to as a man is, in fact, a woman)
MANN FRAU [POA: MANN]
man woman
‘The man is a woman.’
In (4) the signer perseverates the place of articulation of MANN [forehead]
(see Figure 5.6a) on the sign FRAU (see Figure 5.6b). The correct place of
articulation of FRAU is at the chest (see Figure 5.6c). All other phonological
parameters (hand orientation, movement, handshape) are from FRAU.
Fusions are another slip category that is sensitive to linearization. Here,
two neighboring signs fuse. Each loses parts of its phonological specification;
together they form a single sign (syllable), as in (5):
(5) (Context: A boy is looking for his missing shoe)17
                  mouth gesture: blowing
                  NICHT-VORHANDEN [F-hand → V-hand]
    SCHUH DA (ER) SCHAUT [path movement → circular movement]
    shoe here (he) looks/not-there
    ‘He looks for the shoe, and finds nothing.’
17 We represent the fusion by stacking the glosses for both fused signs, SCHAUT and NICHT-
VORHANDEN, as phonological features of both signs realized simultaneously. The nonmanual
feature of NICHT-VORHANDEN – the mouth gesture (blowing) – has scope over the fusion.
The [F]-handshape of NICHT-VORHANDEN, however, is suppressed, as is the straight or arc
movement and hand orientation of SCHAUEN.
In (5), the signer fuses the two signs SCHAUT (‘looks’) and NICHT-
VORHANDEN (‘nothing’). The [V] handshape is from SCHAUT; the circular
movement, the hand orientation, and the mouth gesture (blowing out a stream
of air) are from NICHT-VORHANDEN. The fused elements are adjacent and
have a syntagmatic relation in the phrase. Their positional frames are fused
into a single frame; phonological features stem from both signs. Interestingly,
a nonmanual feature (the mouth gesture) is also involved.18
Fusions in spoken languages are not a major slip category but have been
reported in the literature (Shattuck-Hufnagel 1979; Garrett 1980; Stemberger
1984). Fusions are similar to blends, formationally, but involve neighboring
elements in the syntactic string, whereas blends involve paradigmatically re-
lated semantic items. Stemberger (1984) argues that they are structural errors
involving two words in the same phrase for which, however, only one word
node is generated. In our DGS data, two neighboring signs are fused into a
single planning slot, whereby some phonological features stem from the one
sign and some from the other; see (5). Slips of this type may relate to regular
processes such as compounding, by which new and more convenient signs are
generated synchronically and diachronically. Therefore, one might speculate
that fusions are more frequent in sign language than in spoken language, as
our data suggest. This issue, however, is not well understood and needs further
elaboration.
Word blends are frequent paradigmatic errors in DGS. In (6) two semantically
related items – HOCHZEIT (‘wedding’) and HEIRAT (‘marriage’) – compete
for lemma selection and phonological encoding. The processor selects both of
them and an intricate blend results; this blend is complicated by the fact that
both signs are two-handed signs:
(6) HEIRAT PAAR// HOCHZEIT// HEIRAT PAAR
    marriage couple// wedding// marriage couple
    ‘wedding couple’
The two competing items in the blend (6) are HEIRAT (‘marriage’) (see Fig-
ure 5.7b) and HOCHZEIT (‘wedding’) (see Figure 5.7c).19 In the slip (see
Figure 5.7a), the dominant hand has the [Y] handshape of HOCHZEIT and
also performs the path movement of HOCHZEIT, while the orientation and
configuration of the two hands is that of HEIRAT. For the sign HEIRAT, the
dominant hand puts the ring on the non-dominant hand’s ring finger, as in the wedding ceremony.
18 It is important not to confuse fusions and blends. Whereas in fusions neighboring elements in
the syntagmatic string interact, only signs that bear a paradigmatic (semantic) relation engage in
a blend. While SCHAUT and NICHT-VORHANDEN have no such relation, the signs involved
in blends like (6) do.
19 Note that this blend has presumably been triggered by an “appropriateness” repair, namely the
extension of PAAR (‘couple’) to HEIRATSPAAR (‘wedding couple’).

Figure 5.7a slip: HEIRAT/HOCHZEIT; 5.7b correction: HEIRAT; 5.7c correct: HOCHZEIT

In the slip, however, the dominant hand glides along the palm of the
non-dominant hand and not over its back, as it does in HOCHZEIT. Interestingly,
features of both signs are present simultaneously, but distributed over the
two articulators, the hands; this kind of error is impossible in spoken languages.
The blend is corrected after the erroneous sign. This time, one of the competing
signs, HEIRAT, is correctly selected.

5.5.3 Intra-modal and inter-modal comparison with other slip corpora


In this section we present a quantitative and a qualitative analysis of our slips
of the hand data. We then compare our slip corpus with the one compiled
by Klima and Bellugi (1979), which also appears in Newkirk et al. (1980).
With respect to word and morpheme errors, that corpus is not very informative.
Klima and Bellugi report that only nine out of a total of 131 slips involved the
exchange of whole signs. No other whole-word errors (substitutions, blends) are
reported. With respect to the distribution of phonological errors, however, we
can make a direct comparison. The ASL corpus consists of 89 phonological
slips that are distributed as shown in Table 5.2. We present our data so that they are
directly comparable to Klima and Bellugi’s.20 As can be seen in Table 5.2, the
distribution in the two slip collections is parallel. As expected, hand configuration
(especially handshape) accounts for the lion’s share of the overall number of
phonological slips.
The reason why handshape is so frequently involved in slipping may have
to do with inventory size and the motoric programs that encode handshape.
In DGS the signer has to select the correct handshape from a set of approximately
32 handshapes (Pfau 1997), which may lead to mis-selection to a certain degree.
20 In the rearrangement of our own data from Table 5.1 we only considered the first four param-
eters and left out the minor categories (other, h1 + h2, combination; see footnote 10 above).
Note that we have combined handshape and hand orientation into the single parameter “hand
configuration” in Table 5.2.

Table 5.2 Frequency (percentages in parentheses) of phonological errors by parameter in ASL (Klima and Bellugi 1979) and in DGS

Parameter                ASL          DGS
Hand configuration       65 (73)      47 (82.5)
Place of articulation    13 (14.6)     5 (8.8)
Movement                 11 (12.4)     5 (8.8)
Total                    89 (100)     57 (100)

One might conjecture that the bigger the inventory, the more error-prone
the process of selection both because there is higher competition between the
members of the set and because the representational space has a higher density.
Furthermore, the motor programs for activating these handshapes involve only
minor differences; this might be an additional reason for mis-selection. Note
that the inventory for hand orientation is much smaller – there are only six
major hand orientations that are used distinctively – and the motor programs
encoding this parameter can be relatively imprecise. Hand orientation errors,
accordingly, are less frequent.
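The conjecture lends itself to a toy calculation. The following sketch is ours, not a model from the slip literature: it assumes that each competitor in the inventory contributes a small, fixed amount of noise activation (the value of eps is arbitrary) and that selection follows a Luce-style choice rule, so only the direction of the effect, not the particular numbers, matters.

```python
# A toy illustration of the inventory-size conjecture (our sketch, with
# assumed parameters). The target value of a parameter has activation 1.0;
# each of the (n - 1) competitors contributes noise activation eps. Under a
# Luce-style choice rule, the probability of selecting some competitor
# instead of the target grows with inventory size.

def misselection_probability(inventory_size: int, eps: float = 0.02) -> float:
    competitors = inventory_size - 1
    return (competitors * eps) / (1.0 + competitors * eps)

print(misselection_probability(32))  # ~0.38 for the ~32 DGS handshapes (Pfau 1997)
print(misselection_probability(6))   # ~0.09 for the six major hand orientations
```

On any such rule, the larger handshape inventory yields a markedly higher mis-selection rate than the small orientation inventory, which is the direction of the asymmetry in Table 5.2.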
In spoken language, phonological features are also not equally affected in
slips; the place feature (labial, alveolar, palatal, glottal, uvular, etc.) is most
frequently involved (Leuninger, Happ, and Hohenberger 2000b).
In order to address the question of modality, we have to make a second com-
parison, this time with a corpus of slips of the tongue. We use the Frankfurt
corpus of slips of the tongue.21 This corpus includes approximately
5000 items. Although both corpora differ with respect to the method by which
the data were gathered and with respect to categorization, we provide a broad
comparison.
As can be seen from Tables 5.1 and 5.3,22 there is an overall congruence in
affected entities and slip categories. There are, however, two major discrepan-
cies. First, there are almost no exchanges in the sign language data, whereas
they are frequent in the spoken language data. Second, morphemes are rarely
affected in DGS, whereas they are affected to a higher degree in spoken German.
These two results converge most clearly in the absence of stranding errors in
DGS. In Section 5.5.3.1 we concentrate on these discrepancies, pointing out
possible modality effects.
21 We are in the process of collecting slips of the tongue from adult German speakers in the same
setting, so we have to postpone the exact quantitative intermodal comparison for now.
22 In Table 5.3 the following categories from Table 5.1 are missing: harmony, formal, and semantic
and formal substitutions. These categories were not included in the set of categories by the
time this corpus had been accumulated. Harmony errors are included in the anticipation and
perseveration category.

Table 5.3 Slip categories/affected entities for the German slip corpus

                                              Affected entity
Slip of the tongue type       n       %       Word    Phoneme   Morpheme   Phrase
Anticipation                1024    20.7       143       704       177
Perseveration                906    18.3       155       644       107
Substitution, semantic      1094    22.1       783       147       164
Blend                        923    18.6       658        13       242        10
Exchange                     774    15.6       200       439       135
Fusion                        13     0.3        10         2         1
Deletion                     182     3.7        46        78        58
Addition                      35     0.7         8        17        10
Total                       4951              2003      2043       894        10
Total (as percentage)               100.0     40.5      41.3      18.1       0.2

5.5.3.1 Absence of stranding errors. One of the most striking differ-


ences between the corpora is the absence of stranding errors in DGS. Surprising
as this result is from the point of view of spoken languages, it is in line with
Klima and Bellugi’s earlier findings for ASL. They, too, did not find any strand-
ing errors (Klima and Bellugi 1979). In this section we explore possible reasons
for this finding.
In spoken languages, this category is well documented (for English, see
Garrett 1975; 1980; Stemberger 1985; 1989; Dell 1986; for Arabic, see Abd-El-
Jawad and Abu-Salim 1987; for Spanish, see Del Viso, Igoa, and García-Albea
1991; for German, see Leuninger 1996). Stranding occurs when the free content
morphemes of two words, usually neighbors, are exchanged whereas their re-
spective bound grammatical morphemes stay in situ. A famous English example
is (7a); a German example which is even richer in bound morphology is (7b):
(7) a. turking talkish ← talking Turkish (from Garrett 1975);
    b. mein kollegischer Malaye ← mein malayischer Kollege;
       ‘my colleagical Malay’ ← ‘my Malay colleague’ (Leuninger 1996:114).
In (7a) the word stems talk- and turk-, figuring in a verb and an adjective,
respectively, are exchanged, leaving behind the gerund -ing and the adjectival
morpheme -ish. This misordering is supposed to take place at a level of process-
ing where the word form (morphological, segmental content) is encoded, on the
positional level (Garrett’s terminology) or lexeme level (Levelt’s terminology).
In (7b), the stems malay- and kolleg-, figuring in an adjective and a noun, re-
spectively, are exchanged, leaving behind the adjectival morpheme -isch as well
as the case/gender/number morpheme -er of the adjective and the nominalizing
morpheme -e of the noun.

The absence of this category in DGS and ASL calls for some explanation. First
of all, we have to exclude a sampling artifact. The data in both corpora (DGS vs.
spoken German) were collected in a very different fashion: the slips of the tongue
stem from a spontaneous corpus; the slips of the hand from an elicited corpus
(for details, see Section 5.4). The distribution of slip categories in the former
type of corpora is known to be prone to listeners’ biases (compare Meyer 1992;
Ferber 1995; see also Section 5.4). Stranding errors are perceptually salient,
and because of their spectacular form they are more likely to be recorded and
added to a slip collection. In an objective slip collection, however, this bias is not
operative.23 Pending the exact quantification of our elicited slips of the tongue,
we now turn to linguistic reasons that are responsible for the differences. The
convergent findings in ASL as well as in DGS are significant: if morphemes do
not strand in either ASL or DGS, this strongly hints at a systematic linguistic
reason.
What first comes to mind is the difference in morphological type: spoken
German is a concatenative language to a much higher degree than DGS or ASL.
Although spoken German is fusional to a considerable degree (see Section 5.2),
it is far more concatenative than DGS in that morphemes typically line up
neatly one after the other, yielding, for example, ‘mein malay-isch-er Kolleg-e’
(‘my Malay colleague’) with one derivational morpheme (-isch), one stem-
generating morpheme (-e) and one case/agreement morpheme (-er). In DGS this
complex nominal phrase would contain no such functional morphemes but take
the form: MEIN KOLLEGE MALAYISCH (‘my Malay colleague’). For this
reason, no stranding can occur in such phrases in the first place. Note that this
is not a modality effect but one of language type. We can easily show that
this effect cuts across languages in the same modality, simply by looking at
the English translation of (7b): ‘my Malay colleague.’ In English, comparable
stranding could not occur either, because the bound morphemes (on the adjective
and the noun) are not overt – just as in DGS. English, however, has many other bound
morphemes that are readily involved in stranding errors (as in 7a), unlike DGS.
Now we are ready for the crucial question, namely, whether we should expect
no stranding errors in DGS (or ASL) at all. The answer is no: stranding errors
should, in principle, occur (see also Klima and Bellugi 1979).24 What we have
to determine is what grammatical morphemes could be involved in such sign
morpheme strandings. The answer to this question relates to the second reason
for the low frequency of DGS stranding errors: high vs. low separability of
23 A preliminary inspection of our corpus of elicited slips of the tongue suggests that stranding
errors are also a low-frequency error category in spoken languages, so that the apparent difference
is not one between language types but is, at least partly, due to a sampling artifact.
24 Klima and Bellugi (1979) report on a memory study in which signers sometimes misplaced the
inflection. Although this is not the classical case of stranding (where the inflections stay in situ
while the root morphemes exchange), this hints at a possible separability of morphemes during
online production.
grammatical morphemes. This difference is a traditional one of descriptive
linguistics and dates back to the early days of research into Indo-European
languages (Kean 1977).

Figure 5.8 A polymorphemic form in ASL (Brentari 1998:21)
Sign languages such as DGS are extremely rich in inflectional and derivational
morphemes and, crucially, are able to realize them at the same time.
Figure 5.8 describes a polymorphemic ASL sign in which nine morphemes
(content morphemes and classifiers) are simultaneously realized in one mono-
syllabic word (Brentari 1998:21) meaning something like:
(8) ‘two, hunched, upright-beings, facing forward, go forward, carefully,
side-by-side, from point “a”, to point “b” ’.
Of these many morphemes, however, only a few – such as the spatial loci –
could, if at all, be readily involved in a stranding error. Fusional as these sign
language morphemes are, they are much more resistant to being separated from
each other than concatenated morphemes.25
There are, however, grammatical sign language morphemes that should allow
for stranding errors, hence be readily separable; for example aspectual, plural,
and agreement morphemes. In a hypothetical sentence like (9):
(9) ICH PERSON+++ ICH FRAG[JEDEN-EINZELNEN]
    I person[plural] I ask[each-of-them]
    ‘I ask each of them.’
the plural morpheme +++ (realized by signing PERSON three times) and
the AGR-morpheme ‘each of them’ (realized by a zigzag movement of the
verbal stem FRAG) could, in principle, strand while the free content morphemes
25 In Stemberger (1985:103), however, there is little difference in the stranding of regular (high
separability) vs. irregular (low separability) past tense in English speech errors.

PERSON and FRAG- (‘to ask’) could be exchanged. This would result in the
hypothetical slip (9′):
(9′) ICH FRAG+++ ICH PERSON[JEDEN-EINZELNEN]
     I ask[plural] I person[each-of-them]

The same holds true of aspectual morphemes such as durative or habituative,
which are realized as movement alternations of the verbal stem. Thus,
BEZAHLEN[HABIT] (‘to pay[habitual]’) is realized as multiple short repetitions of
the stem BEZAHL (‘to pay’), whereas FAHREN[DURATIVE] (‘to drive[durative]’) is
realized by prolonging the entire sign FAHREN (‘to drive’).
Morphemes that have a distinct movement pattern altering the sign in the
temporal dimension (repetition of the sign or stem, or specific movement: long,
zigzag, or arc movement) are likely candidates for stranding errors. As their
incidence in spontaneous signing is, however, low, the probability of a stranding
slip in a corpus is negligible (see also Klima and Bellugi 1979:141).26
Finally, we address the issue of whether our explanation can be
restricted to a difference in typology rather than modality. A typological line of
argumentation would be highly welcome in terms of parsimony of linguistic
explanation and, hence, Occam’s razor. We would simply apply a categorial
difference which is already known to distinguish spoken languages. If we can
show that the difference between spoken German and DGS (and, inductively,
between spoken and signed languages in general) boils down to a typological
difference, this would have two highly desired outcomes.
First, we could show that signed languages can be typologically charac-
terized, as can any spoken language. Signed languages would simply be an
additional but, of course, very important class of languages which are subject
to the same universal characteristics. This would strengthen the universality
issue.
Second, Occam’s razor would be satisfied in showing that it was unnecessary
to invoke the “broader” concept of modality, and that the “smaller” and already
well-established concept of typology suffices to explain the behavior of sign
languages such as DGS.
There is, however, one consideration that makes it worth invoking modality.
Whereas the class of spoken languages divides into various subgroups with
regard to morphological type (Chinese being an isolating language, Turkish be-
ing a concatenative language, the Indo-European languages being inflectional
languages) sign languages seem to behave in a more uniform way: across the
board, they all seem to be highly fusional languages realizing multiple mor-
phemes at the same time. This hints at a common characteristic which is rooted
in peculiarities of the modality.
26 We are confident, though, to be able to elicit these errors in an appropriate setting where the
stimulus material is more strictly controlled than in the present study.

Gee and Goodhart (1988) have convincingly argued that spoken and signed
languages differ with respect to the amount of information that can be conveyed
in a linguistic unit and per unit of time. This topic is intimately related to
language production and therefore deserves closer inspection (Leuninger et al.
2000a). Spoken languages, on the one hand, make use of very fine motoric
articulators (tongue, velum, vocal cords, larynx, etc.). The places of articula-
tion of the various phonemes are very close to each other in the mouth (teeth,
alveolar ridge, lips, palate, velum, uvula, etc.). The oral articulators are capable
of achieving a very high temporal resolution of signals in production and can
thus convey linguistic information at a very high speed.
Signed languages, on the other hand, make use of coarse motoric articulators
(the hands and arms, the entire body). The places of articulation are more
distant from each other. The temporal resolution of signed languages is lower.
Consequently, sign language production must take longer for each individual
sign.
The spatio-temporal and physiological constraints of language production
in both modalities are quite different. On average, the rate of articulation for
words is double that of signs (4–5 words per second vs. 2.3–2.5 signs per sec-
ond; see Klima and Bellugi 1979). Surprisingly, however, signed and spoken
languages are on a par with regard to the rate of propositional information
conveyed per unit of time. Spoken and signed sentences roughly have the same production time
(Klima and Bellugi 1979). The reason for this lies in the different information
density of each sign.27 A single monosyllabic sign is typically polymorphemic
(remember the nine morphemes in (8); compare Brentari 1998). The condensa-
tion of information is not achieved by the high-speed serialization of segments
and morphemes but by the simultaneous output of autosegmental phonological
features and morphemes.
Thus, we witness an ingenious trade-off between production time and in-
formational density which enables both signed and spoken languages to come
up with what Slobin (1977) formulated as a basic requirement of languages,
namely that they “be humanly processible in real time” (see also Gee and
Goodhart 1988).
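The trade-off can be made concrete with a back-of-the-envelope calculation. The rates are Klima and Bellugi’s; the figure of two morphemes per average sign is our own illustrative assumption (polymorphemic forms like (8) carry far more):

```latex
\begin{aligned}
\text{speech:} &\quad 4.5\ \text{words/s} \times 1\ \text{morpheme/word} \approx 4.5\ \text{morphemes/s}\\
\text{sign:}   &\quad 2.4\ \text{signs/s} \times 2\ \text{morphemes/sign} \approx 4.8\ \text{morphemes/s}
\end{aligned}
```

Halving the articulation rate while doubling the morphemic density leaves the informational throughput essentially unchanged.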
On this line of argumentation, it follows quite naturally that signed
languages – due to their modality-specific production constraints – will always
be attracted to autosegmental phonology and fusional morphology. Spoken
languages, being subject to less severe constraints, are free to choose among
the available options. We therefore witness a greater amount of variability
among them.
27 Klima and Bellugi (1979) suggest that the omission of function words such as complementizers,
determiners, auxiliaries, etc. also economizes time. We do not follow them here because there
are also spoken languages that have many zero functors, although they are obviously not pressed
to economize time by omitting them.

5.5.3.2 Fewer exchanges in general: Phonological features and words.


As can be seen by comparing Tables 5.1 and 5.3, not only are stranding errors
absent in our DGS corpus, but exchanges of any linguistic entity are extremely
rare as compared to the spoken German corpus. The analysis of phonologi-
cal and word exchanges in spoken language has been of special importance
since Garrett (1975; 1980) and others proposed the first models of language
production. Garrett showed that the errors in (10a) and (10b) arise at different
processing levels which he identified as the functional (lemma) level and the
positional (lexeme) level:
(10) a. the list was not in the word ← the word was not in the list
(Stemberger 1985, in Berg 1988:26);
b. heft lemisphere ← left hemisphere
(Fromkin 1973, in Meyer 1992:183).
The reasons for differentiating the two types of exchange lie in the distinct constraints
and vocabularies used to compute them. The words involved in word
exchanges, on the one hand, always obey the word class constraint, i.e. nouns
exchange with nouns, and verbs with verbs, but they do not necessarily share
the same phrase. The segments involved in phoneme exchanges, on the other
hand, do belong to the same phrase and do not obey the “word class constraint.”
However, they obey the “syllable position constraint,” namely that segments
of like syllable positions interact; for example onset with onset, nucleus with
nucleus, and coda with coda (Garrett 1975; 1980; Levelt 1992; Meyer 1992;
Poulisse 1999). MacNeilage (1998) refers to this constraint in terms of a “frame-
content metaphor”28 at the core of which is the lack of interchangeability of
the two major class elements of spoken language phonology: consonants and
vowels.29
Although in the Klima and Bellugi study (1979; see also Newkirk et al. 1980)
no word errors are reported, they found nine phonological exchanges. Among
these is the following handshape exchange:
(11) SICK BORED (Newkirk et al. 1980:171; see also Klima and Bellugi
1979:130)
Here, the handshapes for SICK (G-hand) and BORED (exposed middle finger)
are exchanged; the other features (place of articulation, hand orientation, and
movement) remain unaffected.
28 We are thankful to Peter MacNeilage for pointing out the “frame-content metaphor” to us in
this context.
29 It is well known that in spoken languages phonological slips in normal speakers and phonological
paraphasia in aphasic patients concern mostly consonants. The preponderance of handshape
errors in sign language production errors as well as in aphasic signing bears directly on the
“frame-content metaphor” and invites speculation on a modality-independent effect in this
respect. Consonants in spoken and signed languages may be more vulnerable than vowels due
to a “neural difference in representation” (Corina 1998:321).

Given the fact that all major phonological features (handshape, place of ar-
ticulation, hand orientation, and movement) can be affected in “simple” signing
errors where only one element is affected as in anticipations, perseverations,
and harmony errors (see Table 5.1), one wonders why they should not also fig-
ure in “complex” signing errors where two elements are affected. Handshape
exchanges like the one in (11) should, therefore, be expected. There is no reason
to suppose that sign language features cannot be separated from each other. In
fact, it was one of the main goals of Newkirk et al. (1980) to demonstrate that
there are also sub-lexical phonological features in ASL, and to provide em-
pirical evidence against the unwarranted view that signs are simply indivisible
wholes, holistic gestures not worth being seriously studied by phonologists.
Note that spoken and signed languages differ in the following way with
respect to phonological errors in general and phonological exchanges in partic-
ular. Segments of concatenating spoken languages such as English and German
are lined up like beads on a string in a strictly serial order as specified in the
lexical item’s word form (lexeme). If two complete segments are exchanged,
the “syllable position constraint” is always obeyed. The same, however, cannot
hold true of the phonological features of a sign. They do not behave as segments:
they are not realized linearly, but simultaneously. It is, indeed, a modality-specific
property that the sign’s phonological features are realized at the same time,
although they are all represented on independent autosegmental tiers. Obvi-
ously, the “frame-content metaphor” (MacNeilage 1998) cannot be transferred
to signed languages straightforwardly. The “frame-content metaphor” states
that “a word’s skeleton and its segmental content are independently generated”
(Levelt 1992:10). This is most obvious in segmental exchanges. If we roughly
attribute handshape, hand orientation, and place of articulation consonantal sta-
tus and movement vocalic status, then of the two constraints on phonological
errors – the “segment class constraint” and the “syllable position constraint”–
sign languages obey only the former (compare Perlmutter 1992). Typically, one
handshape is replaced with another handshape or one movement with another
movement. The latter constraint, however, cannot hold true of the phonological
features of a sign because they are realized simultaneously. Thus, phonological
slips in sign languages compare to segmental slips in spoken languages, but
there is no equivalence for segmental slips in sign language.
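The architectural contrast can be sketched schematically. The representation below is our own simplification, not an analysis from Newkirk et al. (1980): a sign is modeled as a bundle of simultaneously realized parameter values, so a “complex” error like (11) can only swap a whole tier between two signs; there are no serially ordered segments whose syllable positions an exchange could respect or violate. All parameter values other than the two handshapes named in (11) are placeholders:

```python
# A schematic sketch (ours): a sign as a bundle of simultaneously realized
# parameter values. An exchange can only swap one whole tier between signs.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Sign:
    handshape: str
    orientation: str
    location: str
    movement: str

def exchange_handshapes(a: Sign, b: Sign):
    """Model a handshape exchange like (11): only the handshape tier swaps."""
    return replace(a, handshape=b.handshape), replace(b, handshape=a.handshape)

# Non-handshape values are placeholders, not claims about the ASL citation forms.
sick = Sign("G-hand", "orient-1", "loc-1", "mov-1")
bored = Sign("middle-finger", "orient-2", "loc-2", "mov-2")
print(exchange_handshapes(sick, bored))
```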
We still have to answer the question of why exchanges across all entities are so
rare in sign language. As Stemberger (1985) pointed out, the true number of
exchanges may be veiled by what he calls “incompletes,” i.e. covered exchanges
that are caught and repaired by the monitor after the first part of the exchange
has taken place. (An incomplete is an early corrected exchange.) These errors,
then, do not surface as exchanges but as anticipations. As a null hypothesis
we assume that the number of exchanges, true anticipations, and incompletes
is the same for spoken and signed languages, unless their incidence interacts
with other processes that change the probability of their occurrence. We will, in
fact, argue below that monitoring is such a process. In Section 5.6 we point out
that the cut-off points in signed and spoken languages are different. Errors are
detected and repaired apparently earlier in sign languages, preferentially in the
problem item itself, whereas repairs in spoken languages start later, after the
erroneous word or even later. If this holds true, exchanges may be more likely
to surface in spoken languages simply because both parts of the error would
have already occurred before the monitor was able to detect them.

5.6 The sign language monitor: Repair behavior in DGS


Corrections are natural phenomena in spontaneous language production. Advanced
models of language production therefore contain a functional component
that supervises the system’s own output, detects discrepancies from the intended
utterance, and, if necessary, induces a repair. This module is the monitor. In
Levelt’s model the monitor (see Figure 5.1) is situated in the conceptualizer,
i.e. hierarchically very high, and is fed by two feedback loops, one internal
(via internal speech), the other external (via overt speech). The monitor uses
the speech comprehension system as its informational route. The fact that the
language production system supervises itself and provides repairs is not at all
trivial. Repair behavior is a complex adaptive behavior and impressively
demonstrates the capacity of the system.
To date, monitor behavior in signed languages has not been investigated
systematically. In the following discussion we analyze slip repairs from a
model-theoretic perspective. According to Hypothesis 2a, we expect compara-
ble monitoring with respect to processing DGS and spoken German. The rate
of correction and correction types should be the same. According to Hypothesis
2b, we expect the sign language monitor to be sensitive to the specific represen-
tations of signed and spoken languages. Therefore, repair behavior is taken to
be an interesting new set of data that can reveal possible modality differences.
In the following, we present our quantitative analysis of repairs in DGS.
Above all, we concentrate on the locus of repair in spoken languages and DGS
because this criterion reveals the most striking difference between DGS and
spoken language.

5.6.1 Locus of repair: Signed vs. spoken language


Slip collections do not always contain detailed information about repair behav-
ior. We believe, however, that monitor behavior is revealing with respect to the
capacity of the processor, to incremental language production, and to the pro-
cessor’s dependency on the linguistic representations it computes (Leuninger
et al. 2000a).

Table 5.4 Locus of repair (percentages) in DGS vs. Dutch

Locus of repair          DGS              Dutch
Before word                8 (7.3)          0
Within word               57 (51.8)        91 (23)
After word                37 (33.6)       193 (48)
Delayed                    8 (7.3)        115 (29)
Total slip repairs       110 (100.0)     399 (100)
Ratio repairs/slips      110/203 (54.2)

Source: Levelt 1983:63

In this section we address the following questions: To what extent do German


signers correct their slips? What are the main cut-off points and do these cor-
respond to those in spoken languages?
According to Levelt (1983:56; 1989:476), the speaker adheres to the Main
Interruption Rule: namely “Stop the flow of speech immediately upon detecting
trouble.” It is assumed that this rule is obeyed in the same way in both spoken
and sign language. However, we see that – due to modality differences – repairs
in sign language appear to occur earlier than those in spoken language.
Table 5.4 shows the distribution of repairs at four cut-off points (see Section
5.4): before word,30 within word, after word, and delayed. We compare the
DGS data with error repairs of Dutch speakers from Levelt (1983).31 First, we
can see that 54.2 percent of all slips in DGS are repaired. This is in the normal
range when compared with the percentage of repairs in spoken languages, which
exhibit varying correction rates of about 50 percent.
Turning to differences in monitor behavior between DGS and spoken languages,
we can see from Table 5.4 that the most frequent locus of repair for
DGS is within the word (51.8 percent), followed by after the word (33.6 percent).
30 The diagnosis of such early repairs is possible because the handshape is already in place during
the transitional movement. This allows a good guess to be made at what sign would have been
produced had it not been caught by the monitor. These extremely early repairs may be
comparable to sub-phonological speech errors, which consist of altered motor patterns that
are imperceptible unless recorded by special electromyographic techniques, as in the study
of Mowrey and MacKay (1990). Furthermore, these early repairs encourage us to speculate
on the time course of activation of the various phonological parameters of a sign. Handshape
seems to be activated extremely early and very fast, obviously before the other parameters –
i.e. hand orientation, place of articulation, and movement – are planned. This would mean that
sequentiality is, in fact, an issue when signs are accessed in the lexeme lexicon.
31 We compared our DGS repairs with only a subset of Levelt’s data set, namely with error repairs
(E repairs) (Levelt 1983:63). It is well known that in appropriateness repairs (A repairs), the
speaker tends to complete the inappropriate word before he or she corrects it because it is not
erroneous. In E repairs, however, the speaker corrects his or her faulty utterance as fast as
possible, not respecting word boundaries to the same extent.

Delayed repairs, where some material intervenes between the error and the repair,
are rare (7.3 percent), as are early repairs before word onset (7.3 percent).
The cut-off points in spoken language (here, Dutch) are different.32 The typ-
ical locus of repair in spoken language is after the word. Corrections within the
word are rarer, and delayed repairs are more frequent. For DGS, however, re-
pairs peak on very fast repairs within the word, followed by increasingly slower
repairs. However, we do not invoke modality as an explanation for this apparent
difference, because that would be only a superficial, albeit interesting, explanation.
From the discussion in Section 5.5 of the different production times for
spoken vs. signed languages (the ratio of which is 2:1), we can easily predict
that the longer duration of a sign word will influence the locus of repair, provided
that the overall capacity of the spoken and the sign language monitor is the same.
The following prediction seems to hold: because a signed word takes twice as
long as a spoken word, errors will be more likely to be caught within the word in
sign language, but after the word in spoken language. Note that this difference
becomes even more obvious when we characterize the locus of repair in terms
of syllables. In DGS, the error is caught within a single syllable, whereas for
spoken Dutch, the syllable counting begins only after the trouble word (not
counting any syllables within the error).
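The effect of the 2:1 duration ratio on the locus of repair can be illustrated with a minimal sketch. The monitor latency of 350 ms is purely an assumption of ours for illustration (the chapter gives no such figure); only the articulation rates come from Klima and Bellugi (1979):

```python
# Purely illustrative: assume the monitor needs a fixed interval between the
# onset of an error and the interruption of output. A constant latency then
# falls beyond the end of a short spoken word but inside a longer sign.

def repair_locus(word_duration_ms: float, latency_ms: float = 350.0) -> str:
    return "within word" if latency_ms < word_duration_ms else "after word"

print(repair_locus(1000 / 4.5))  # spoken word, ~222 ms -> "after word"
print(repair_locus(1000 / 2.4))  # sign, ~417 ms -> "within word"
```

Nothing hinges on the particular latency: any constant value between roughly 250 and 400 ms produces the observed asymmetry.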
Again, the reason is that words in signed language (monomorphemic as well
as polymorphemic) tend to be monosyllabic (see Section 5.5). This one syllable,
however, has a long production time and allows for a repair at some point during
its production.33 Thus, the comparison of signed and spoken language repairs
reveals once more the different temporal expansion of identical linguistic ele-
ments, i.e. words and syllables. This is a modality effect, but not a linguistic
one. This effect is related to the articulatory interface. Note that in Chomsky’s
minimalist program (1995) PF, which is related to articulation and perception,
is one of the interfaces with which the language module interacts. Obviously,
spoken and signed languages are subject to very different anatomical and phys-
iological constraints with regard to their articulators. Our data reveal exactly
this difference.
Would it be more appropriate to characterize the locus of repair not in terms
of linguistic entities but in terms of physical time? If we did this we would
find that in both language types repairs would, on average, be provided equally
fast. With this result any apparent modality effect vanishes. We would not
know, however, what differences in the temporal resolution of linguistic entities
exist in both languages, and that these differences result in a very different
32 Levelt distinguishes word-internal corrections (without further specifying where in the word),
corrections after the word, and delayed corrections that are measured in syllables after the error.
33 It is even possible that both the erroneous word and the repair share a single syllable. In these
cases, the repair is achieved by a handshape change during the path movement. This is in accord
with phonological syllable constraints (Perlmutter 1992) which allow for handshape changes
on the nucleus of a sign.

monitor behavior. Stopping after the problem word has been completed or while
producing the problem word itself makes a difference for both the producer and
the interlocutor.

5.7 Summary and conclusions


We have investigated slips of the hand and repair behavior in DGS and compared
them to slips of the tongue and repair behavior in spoken languages. Our aim
was to determine whether there are true modality differences between them. Our
major finding is that signed and spoken language production is, in principle,
the same. This comes as no surprise as both are natural languages and are
therefore subject to the same constraints on representation and processing. Due
to modality differences, the satisfaction of these constraints may, however, be
different in each language. In this respect, our language production data reveal
exactly the phonological, morphological, and syntactic design of DGS and
spoken German. Language production data therefore provide external evidence
for the structures and representations of DGS in particular, and of sign languages
in general, which have been analyzed by sign language researchers so far.
As for slip behavior, stranding errors are absent in DGS and exchange
errors are, in general, very rare; fusions are more prominent. We explain this
discrepancy partly by appealing to typological differences, and more specifically
to the autosegmental character of the phonology and morphology
of signed languages. The possibility of simultaneous encoding of linguistic
information enhances the information density of signs. They may be composed
of many morphemes which are realized at the same time. As this is characteristic
of sign languages in general and not just of a particular typological class (as the
Semitic languages are among spoken languages), we acknowledge that this discrepancy
is rooted in a true modality difference. Thus, the simultaneous encoding of
morphological information is – at first sight – a typological difference, but one
which is layered upon a true modality effect.
The repair behavior in DGS reveals again the different interface conditions
(articulatory, physical, and timing conditions) of spoken and signed languages.
The longer production time of signs enables the monitor to catch and repair
errors before the end of the sign. The low incidence of exchanges receives an
explanation along these lines: they are rarer in sign language because the monitor
has enough time to catch them after the first erroneous word (the anticipation)
due to the longer production time of signed vs. spoken words. The physical,
neurophysiological, and motor constraints on the primary articulators (hands vs.
vocal tract) and receptors (visual vs. auditory) in signed vs. spoken languages
are vastly different (see Brentari, this volume). These are indisputable modality
differences. They are, however, situated at the linguistic interfaces, here at the
articulatory–perceptual interface (Chomsky 1995).

Our approach to modality effects is a highly restrictive one. We only accept


the different degree of linguistic information being processed simultaneously
and the different interface conditions as true modality differences. All other
differences turn out to be typological differences or crosslinguistic differences
that have always been known to exist between natural languages. From the
perspective of UG, the question of modality is always a secondary one, the
primary one being the question of the nature of language itself.

Acknowledgments
Our research project is based on a grant given to Helen Leuninger by the German
Research Council (Deutsche Forschungsgemeinschaft DFG), grant numbers
LE 596/6-1 and LE 596/6-2.

5.8 References
Abd-El-Jawad, Hassan and Issam Abu-Salim. 1987. Slips of the tongue in Arabic and
their theoretical implications. Language Sciences 9:145–171.
Baars, Bernard J., ed. 1992. Experimental slips and human error. Exploring the archi-
tecture of volition. New York: Plenum Press.
Baars, Bernard J. and Michael T. Motley. 1976. Spoonerisms as sequencer conflicts:
Evidence from artificially elicited errors. American Journal of Psychology 89:467–
484.
Baars, Bernard J., Michael T. Motley and Donald G. MacKay. 1975. Output editing for
lexical status in artificially elicited slips of the tongue. Journal of Verbal Learning
and Verbal Behavior 14:382–391.
Berg, Thomas. 1988. Die Abbildung des Sprachproduktionsprozesses in einem Akti-
vationsflussmodell. Untersuchungen an deutschen und englischen Versprechern.
Linguistische Arbeiten 206. Tübingen: Niemeyer.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Butterworth, Brian. 1980. Some constraints on models of language production. In Lan-
guage production, Vol. 1: Speech and talk, ed. B. Butterworth, 423–459. London:
Academic Press.
Caramazza, Alfonso. 1984. The logic of neuropsychological research and the problem
of patient classification in aphasia. Brain and Language 21:9–20.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
Corina, David. 1998. The processing of sign language. Evidence from aphasia. In Hand-
book of neurolinguistics, ed. Brigitte Stemmer and Harry A. Whitaker, 313–329.
San Diego, CA: Academic Press.
Crain, Stephen and Diane Lillo-Martin. 1999. An introduction to linguistic theory and
language acquisition. Oxford: Blackwell.
Cutler, Anne. 1982. Speech errors: A classified bibliography. Bloomington, IN: Indiana
University Linguistics Club.

Dell, Gary S. 1986. A spreading activation theory of retrieval in sentence production.


Psychological Review 93:293–321.
Dell, Gary S. and Padraigh G. O’Seaghdha. 1992. Stages of lexical access in language
production. Cognition 42:287–314.
Dell, Gary S. and Peter A. Reich. 1981. Stages in sentence production: An analysis of
speech error data. Journal of Verbal Learning and Verbal Behavior 20:611–629.
Del Viso, Susana, José M. Igoa, and José E. García-Albea. 1991. On the autonomy of
phonological encoding: Evidence from slips of the tongue in Spanish. Journal of
Psycholinguistic Research 20:161–185.
Ferber, Rosa. 1995. Reliability and validity of slip-of-the-tongue corpora: A method-
ological note. Linguistics 33:1169–1190.
Fodor, Jerry A. 1983. The modularity of mind. Cambridge, MA: MIT Press.
Fromkin, Victoria A. 1973. Introduction. In Speech errors as linguistic evidence, ed.
Victoria A. Fromkin, 11–45. The Hague: Mouton.
Fromkin, Victoria A., ed. 1980. Errors in linguistic performance: slips of the tongue,
ear, pen, and hand. New York: Academic Press.
Garrett, Merrill F. 1975. The analysis of sentence production. In The psychology of learn-
ing and motivation, ed. Gordon H. Bower, Vol. 1, 133–175. New York: Academic
Press.
Garrett, Merrill F. 1980. Levels of processing in sentence production. In Language
production, ed. Brian Butterworth, Vol. 1, 177–210. London: Academic Press.
Gee, James and Wendy Goodhart. 1988. American Sign Language and the human bio-
logical capacity for language. In Language learning and deafness, ed. Michael
Strong. Cambridge: Cambridge University Press.
Jescheniak, Jörg D. 1999. Accessing words in speaking: Models, simulations, and data.
In Representations and processes in language production, ed. Ralf Klabunde and
Christiane von Stutterheim, 237–257. Wiesbaden: Deutscher Universitäts Verlag.
Kean, Mary-Louise. 1977. The linguistic interpretation of aphasic syndromes: Agram-
matism in Broca’s aphasia, an example. Cognition 5:9–46.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Lentz, Ella Mae, Ken Mikos, and Cheri Smith. 1989. Signing naturally. Teachers cur-
riculum guide level 2. San Diego, CA: DawnSignPress.
Leuninger, Helen. 1989. Neurolinguistik. Probleme, Paradigmen, Perspektiven.
Opladen: Westdeutscher Verlag.
Leuninger, Helen. 1996. Danke und Tschüß fürs Mitnehmen. Zürich: Ammann Verlag.
Leuninger, Helen, Daniela Happ, and Annette Hohenberger. 2000a. Sprachliche
Fehlleistungen und ihre Korrekturen in Deutscher Gebärdensprache (DGS).
Modalitätsneutrale und modalitätsabhängige Aspekte der Sprachproduktion. In
Sprachproduktion, ed. Christopher Habel and Thomas Pechmann. Wiesbaden:
Deutscher Universitäts Verlag.
Leuninger, Helen, Daniela Happ, and Annette Hohenberger. 2000b. Assessing modality-
neutral and modality-dependent aspects of language production: Slips of the tongue
and slips of the hand and their repairs in spoken German and German sign lan-
guage. Paper presented at the DFG-colloquium at Dagstuhl, 2000, September.
Levelt, Willem J. M. 1983. Monitoring and self-repair in speech. Cognition 14:41–104.
Levelt, Willem J. M. 1989. Speaking: From intention to articulation. Cambridge, MA:
MIT Press.

Levelt, Willem J. M. 1992. Accessing words in speech production: Stages, processes


and representations. Cognition 42:1–22.
Levelt, Willem J. M. 1999. Producing spoken language: A blueprint of the speaker. In
The neurocognition of language, ed. Colin M. Brown and Peter Hagoort, 83–122.
Oxford: Oxford University Press.
Levelt, Willem J. M., Ardi Roelofs, and Antje S. Meyer. 1999. A theory of lexical access
in speech production. Behavioral and Brain Sciences 22:1–75.
Lillo-Martin, Diane. 1986. Two kinds of null arguments in American Sign Language.
Natural Language and Linguistic Theory 4:415–444.
Lillo-Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting
the null argument parameters. Dordrecht: Kluwer Academic.
Lillo-Martin, Diane. 1999. Modality effects and modularity in language acquisition: The
acquisition of American Sign Language. In Handbook of child language acquisi-
tion, ed. William C. Ritchie and Tej K. Bhatia, 531–567. San Diego, CA: Academic
Press.
McCarthy, John J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic
Inquiry 12:373–418.
MacKay, Donald G. 1987. The organization of perception and action: A theory for
language and other cognitive skills. New York: Springer.
MacNeilage, Peter F. 1998. The frame/content theory of evolution of speech production.
Behavioral and Brain Sciences 21:499–511.
Meringer, Rudolf and Karl Mayer. 1895. Versprechen und Verlesen. Eine psychologisch-
linguistische Studie. Stuttgart: Goeschen.
Meyer, Antje S. 1992. Investigation of phonological encoding through speech error
analyses: Achievements, limitations, and alternatives. Cognition 42:181–211.
Motley, Michael T. and Bernard J. Baars. 1976. Semantic bias effects on the outcomes
of verbal slips. Cognition 4:177–187.
Mowrey, Richard A. and Ian R. A. MacKay. 1990. Phonological primitives: Elec-
tromyographic speech error evidence. Journal of the Acoustical Society of America
88:1299–1312.
Newkirk, Don, Edward S. Klima, Carlene C. Pedersen, and Ursula Bellugi. 1980. Lin-
guistic evidence from slips of the hand. In Errors in linguistic performance: Slips
of the tongue, ear, pen, and hand, ed. Victoria A. Fromkin, 165–197. New York:
Academic Press.
Odden, David. 1995. Tone: African languages. In The handbook of phonological theory,
ed. John A. Goldsmith, 444–475. Cambridge, MA: Blackwell.
Perlmutter, David. 1992. Sonority and syllable structure in American Sign Language.
Linguistic Inquiry 23:407–442.
Pfau, Roland. 1997. Zur phonologischen Komponente der Deutschen Gebärdensprache:
Segmente und Silben. Frankfurter Linguistische Forschungen 20:1–29.
Poulisse, Nanda. 1999. Slips of the tongue: Speech errors in first and second language
production. Amsterdam: Benjamins.
Sandler, Wendy. 1993. Hand in hand: The roles of the nondominant hand in sign language
phonology. Linguistic Review 10:337–390.
Schade, Ulrich. 1992. Konnektionismus: Zur Modellierung der Sprachproduktion.
Opladen: Westdeutscher Verlag.
Schade, Ulrich. 1999. Konnektionistische Sprachproduktion. Wiesbaden: Deutscher
Universitäts Verlag.

Shattuck-Hufnagel, Stephanie. 1979. Speech errors as evidence for a serial-ordering


mechanism in sentence production. In Sentence processing: Psycholinguistic stud-
ies presented to Merrill Garrett, ed. William E. Cooper and Edward C. T. Walker, 295–342. Hillsdale,
NJ: Lawrence Erlbaum.
Slobin, Dan I. 1977. Language change in childhood and in history. In Language learning
and thought, ed. John MacNamara, 185–214. New York: Academic Press.
Smith, Cheri, Ella Mae Lentz, and Ken Mikos. 1988. Signing naturally. Teachers cur-
riculum guide Level 1. San Diego, CA: DawnSignPress.
Stemberger, Joseph P. 1984. Structural errors in normal and agrammatic speech. Cog-
nitive Neuropsychology 1:281–313.
Stemberger, Joseph P. 1985. The lexicon in a model of language production. New York:
Garland.
Stemberger, Joseph P. 1989. Speech errors in early child production. Journal of Memory
and Language 28:164–188.
Whittemore, Gregory L. 1987. The production of ASL signs. Dissertation, University
of Texas, Austin.
Wiese, Richard. 1987. Versprecher als Fenster zur Sprachstruktur. Studium Linguistik
21:45–55.
Zimmer, June. 1989. Towards a description of register variation in American Sign Lan-
guage. In The sociolinguistics of the Deaf community, ed. Ceil Lucas, 253–272.
San Diego, CA: Academic Press.
6 The role of Manually Coded English in language
development of deaf children

Samuel J. Supalla and Cecile McKee

6.1 Introduction
A pressing question related to the well-being of deaf children is how they
develop a strong language base (e.g. Liben 1978). First or native language
proficiency plays a vital role in many aspects of their development, ranging
from social development to educational attainment to their learning of a second
language. The target linguistic system should be easy to learn and use. A natural
signed language is clearly a good choice for deaf children. While spoken English
is a natural language, it is less obvious that a signed form of English is also
a natural language. At issue is the development of Manually Coded English
(MCE), which can be described as a form of language planning aimed at making
English visible for deaf children (Ramsey 1989). MCE represents a living
experiment in which deaf children are expected to learn signed English as well
as hearing children learn spoken English. If MCE is a natural language, learning
it should be effortless, with learning patterns consistent with what we know
about natural language acquisition in general.
American Sign Language (ASL) is a good example of a sign system that qualifies
as a natural language with the capacity of becoming a native language for
deaf children, especially those of deaf parents who use ASL at home (Newport
and Meier 1985; Meier 1991). However appropriate ASL is for deaf children
of deaf parents, it is not the case that all deaf children are exposed to ASL.
Many are born to hearing parents who do not know how to sign. One area
of investigation is how children from nonsigning home environments develop
proficiency in ASL. Attention to that area has diverted us from studying how
deaf children acquire English through the signed medium. For this chapter,
we ask whether MCE constitutes, or has the capacity of becoming, a natural
language. If it does not, why not?
The idea that modality-specific constraints shape the structure of language
requires attention. We ask whether MCE violates constraints on the percep-
tion and processing of a signed language. We find structural deficiencies in
MCE, and suggest that such problems may compromise any sign system whose
grammatical structure is based on a spoken language. The potential problems
associated with the structure of MCE are compounded by the fact that the input
quality for many deaf children may be less than optimal, thus affecting their
acquisition. The question of input quality dominates the literature regarding
problems associated with MCE in both home and school settings (for a review,
see Luetke-Stahlman and Moeller 1990). We regard impoverished input as an
external factor. We propose to study the way MCE is structured, which is best
described as an internal factor. Problematic internal factors can undermine the
learning of any linguistic system, including MCE. In other words, regardless
of the quality of the input, deficiencies in a linguistic system will hamper its
acquisition. The focus of this chapter is on the internal factors affecting MCE
acquisition. We consider, for example, how words are formed in the visual–
gestural modality. Such morphological considerations bear on the question of
whether MCE is a natural language. First, however, we discuss some historical
precedents to MCE and some theoretical assumptions underlying the language
planning efforts associated with MCE.

6.2 Language planning and deaf children


The drive to make a spoken language accessible to deaf children is as old as
the field of deaf education. In fact, the modern language planning effort with
MCE can be traced back to Charles Michel de l’Epée, who founded the first
public school for the deaf in Paris in the eighteenth century (Lane 1984a; Stedt
and Moores 1990). There was initially widespread skepticism concerning deaf
children’s educability. Educators at the time assumed that true language was
spoken, and so deaf children would need to master a spoken language in order
to be educated (Schein 1984). This, of course, proved to be a serious obstacle.
Nevertheless, de l’Epée was able to make his position on language and deaf
children known, and it has since become a model for deaf education worldwide:
Teaching the deaf is less difficult than is commonly supposed. We merely have to
introduce into their minds by way of the eye what has been introduced into our own by
the ear. These two avenues are always open, each leading to the same point provided that
we do not deviate to the right or left, whichever one we choose (de l’Epée 1784/1984:54).

De l’Epée’s claims about deaf children’s potential relied on an alternative


mode to hearing: vision. He wanted to use a signed language to instruct deaf
children. De l’Epée even warned that we should not “deviate to the right or left”
in this endeavor. His position is consistent with our present understanding of
language and language acquisition. Inspired by Chomsky’s observations (e.g.
Chomsky 1965; 1981), many language acquisitionists hypothesize that children
are predisposed to learn language. This “predisposition” takes the form of a set
of guidelines that facilitate and direct language learning. Chomsky originally
called this the language acquisition device (LAD). We recognize that both the
term and the concepts associated with it have changed since the 1960s. How-
ever, because the point we are making here does not hinge on departures from
Chomsky’s original observations, we use the term LAD and refer only gener-
ally to the idea that universal structural principles restrict possible variations in
language. On this hypothesis, the child’s task is to discover which variations
apply to the particular language he or she is acquiring. The LAD limits natural
languages. That is, a natural language is one that is allowed by the guidelines
encoded in the LAD (whatever they are).
Another important consideration in examining the question of what makes
a natural language is processing constraints. As Bever (1970) observed, every
language must be processed perceptually. Further, a language’s processing must
respect the limits of the mind’s processing capacities. Thus, for a system to be a
natural language (i.e. acquired and processed by humans), it must meet several
cognitive prerequisites. What is still not clear is whether modality plays a role
in shaping language structure and language processes. Whatever position is
taken on the modality question has direct ramifications for the feasibility of the
language planning effort as described for the field of deaf education.
De l’Epée acknowledged the relationship of cognition and language at least
intuitively. Not only did he hold the position that a signed language is fitting for
deaf children, but he was also convinced that it had the capacity of incorporating
the structure of a spoken language effectively. He assumed that modality did not
play a role in the structuring of a signed language. First, de l’Epée’s encounters
with deaf children and their signing prior to the school’s founding provided
him with the basis needed to make an effective argument for their educational
potential. Second, this is where the idea was conceived of making the spoken
language, French in his case, visible through the signed medium. De l’Epée
then made a formal distinction between Natural Sign and Methodical Sign,
reflecting his awareness of relevant structural differences. The former referred
to signing by deaf children themselves and the latter to the sign system that he
developed to model French.
A more radical approach would have been to create a French-based sign
system without reference to Natural Sign. In other words, de l’Epée could
have invented a completely new system to sign French. Instead, he chose to
adjust an existing sign system to the structure of French. That is, he made
Methodical Sign by modifying Natural Sign. This language planning approach
is consistent with de l’Epée’s warning about the possibility of structural deviation
leading to the breakdown of a linguistic system in the eyes of a deaf child. Pure
invention would increase the likelihood of such an outcome. Even with these
considerations, Methodical Sign did not produce positive results and failed to
meet de l’Epée’s expectations.
The problems plaguing Methodical Sign were serious enough for the third
director of the school, Roche-Ambroise Bebian, to end its use with deaf students.

Bebian is also credited with leading the movement to abandon Methodical
Sign throughout France in the 1830s (Lane 1984b). Bebian (1817/1984:148)
presented his argument:
Signs were considered only in relation to French, and great efforts were made to bend
them to that language. But as sign language is quite different from other languages, it
had to be distorted to conform to French usage, and it was sometimes so disfigured as
to become unintelligible.

At the time of Bebian’s writing, over 40 years had passed since the founding
of de l’Epée’s school. The continued reference to Natural Sign indicates that
it had persisted regardless of the adoption of Methodical Sign as a language
of instruction. Despite de l’Epée’s use of Natural Sign to develop Methodical
Sign, it appears that the latter was not learned well. Bebian’s insights on this
are valuable. He identified the problem as one that concerned deaf children’s
perceptual processing of the French-based sign system. The distortion affecting
these children’s potential for successful language acquisition suggests serious
problems associated with the structure of Methodical Sign (Bebian described
it as “disfigured”).
Bebian’s reference to the special nature of signed languages to account for
de l’Epée’s failed efforts raises doubts that the structure of a spoken language can
successfully map onto the signed medium. This alone represents a significant
shift from de l’Epée’s position, and it is part of Bebian’s argument in favor of
Natural Sign as a language of instruction over Methodical Sign. Unfortunately,
Bebian was not completely successful in describing what went wrong with
Methodical Sign. He did not elaborate on possible structural deficiencies of
the French-based sign system. The basis for making the theoretical shift in the
relationship of signed languages and spoken languages was thus lacking.
With this historical background, we need to re-examine recent language plan-
ning efforts associated with deaf children and spoken languages. Methodical
Sign cannot be further studied because it has ceased to exist. Its English descen-
dants, on the other hand, provide us with the opportunity to examine several
similar systems. We turn now to English-based sign systems. Next, we address
deaf children’s learning patterns and their prospects for mastering English in the
visual–gestural modality.

6.3 An evaluation of Manually Coded English


In the USA, language planning efforts leading to the development of English-
based sign systems were most active during the early 1970s when four systems
were developed:
• Seeing Essential English;
• Signing Exact English;
• Linguistics of Visual English; and
• Signed English.
Due to disagreement among the systems’ developers, each system constitutes
a different representation of English (Woodward 1973). Yet, all the relevant
language planners relied on certain aspects of ASL for the development of
all English-based sign systems. We use the umbrella term “Manually Coded
English” to refer to the four specific systems noted above. In order to understand
the relationship between ASL and MCE, one English-based sign system is
chosen for a closer examination below. The version we focus on has achieved
nationwide recognition and use: Signing Exact English or SEE 2 (Stedt and
Moores 1990).
When these English-based sign systems were being planned, linguistic re-
search on ASL was already underway (e.g. Stokoe 1960; Stokoe, Casterline,
and Croneberg 1965). The developers of SEE 2, for example, acknowledged
the linguistic status of ASL in the introduction of Signing Exact English (Gus-
tason, Pfetzing, and Zawolkow 1980). Increasing recognition of ASL as a full-
fledged human language has played a role in these language planning efforts.
Nonetheless, the underlying motivation for the development of MCE remains
the same: to provide deaf children access to spoken English as a target for their
schooling. The idea of deaf children gaining native competence in English via
the signed medium is tempting, especially with respect to the morphosyntac-
tic level. Language planners expected deaf children to develop more effective
literacy skills by drawing on their signed English knowledge. Systematic one-
to-one correspondences between signed and written utterances are meant to
serve as “bridges” to literacy development in English (Mayer and Wells 1996;
Mayer and Akamatsu 1999). The structural problems with Methodical Sign
revealed by the historical literature were apparently set aside. Lacking elabora-
tion of what might have gone wrong with Methodical Sign, American language
planners gave de l’Epée’s original idea another try.

6.3.1 Structural properties


Like their French antecedents, American language planners were careful about
inventing sign equivalents to English. A vast number of ASL signs are incor-
porated into the SEE 2 vocabulary, with lexical differences between ASL and
English marked by alterations in the handshape parameter in individual ASL
signs (Lane 1980). For example, consider the English words way, road, and
street. In ASL, these are signed identically. In SEE 2, their handshapes differ
by incorporating the manual alphabet (W for way, R for road, and S for street)
to distinguish them. The movement, location, and orientation parameters in
most SEE 2 signs remain like their ASL antecedents. Because SEE 2 is meant
to represent English, lexical variation in ASL is not similarly incorporated. For
example, ASL has three distinct signs for three of the concepts behind the En-
glish word ‘right’ (i.e. correct, direction, and to ‘have a right’). SEE 2 borrows
only one ASL sign and uses it for all three concepts.
At the morphological level, ASL signs are also adopted. They serve as roots,
to which invented prefixes and suffixes are added. These represent English
inflections for tense, plurality, and so on. SEE 2 relies on both invented and
borrowed signs to provide a one-to-one mapping for English pronouns, prepo-
sitions, and conjunctions. As a result, borrowing from ASL is primarily at the
lexical level. Some function morphemes (i.e. free closed class elements) are
also borrowed from ASL. All bound morphemes are invented. The invention of
this class of morphemes is due to the fact that ASL does not have a productive
set of prefixes and suffixes.
We turn now to the formational principles that underlie SEE 2 prefixes and
suffixes. Importantly, SEE 2’s planners attempted to create English versions of
bound morphemes in the most natural way possible. If a form is to function as
a linear affix, it is phonologically independent of the root. This does not mean
that root and affix cannot influence each other’s phonological properties. For
example, the sound of the English plural is determined by the final sound of the
root: bats vs. bells vs. churches. We want to emphasize
that some aspects of the form of the linear affix remain constant even though
other aspects of its form may change. One approach was to create a linear affix
with all four of the basic parameters of a full ASL sign (S. Supalla 1990). For
example, the SEE 2 suffix -ING involves the handshape I (representing the
initial letter of the suffix), a location in neutral signing space, outward rotation
of the forearm, and a final orientation of the palm facing away from the signer’s
body (see Figure 6.1a). Another example is the SEE 2 suffix -MENT, which
involves two handshapes: the dominant one as M (representing the initial letter
of the suffix) located on the palm of the other, a path movement across the
palm surface, and an orientation of the dominant palm facing away from the
signer’s body (see Figure 6.1b). The majority of SEE 2’s affixes are, like -ING
and -MENT, complete with respect to full sign formational structure: 43 out of
49, or 88%. The remaining six affixes use only three of the four parameters;
they omit movement (either internal or path). Figure 6.2 exemplifies one of
these latter six affixes, the suffix -S for third-person singular present tense verbs and plural
nouns.
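The four-parameter description just given can be pictured as a simple record, one field per parameter. The sketch below (in Python; the class, its informal field values, and the assumed orientation of -S are our own illustrative simplifications, not part of SEE 2’s specification) encodes -ING as a fully specified affix and -S as one that omits movement:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Affix:
    """A SEE 2 affix as a bundle of the four sign parameters."""
    handshape: str
    location: str
    movement: Optional[str]  # None for the six affixes that omit movement
    orientation: str

ING = Affix("I", "neutral space", "outward forearm rotation",
            "palm away from signer")
# The orientation value for -S is assumed here purely for illustration.
S = Affix("S", "neutral space", None, "palm away from signer")

full_sign_like = [a for a in (ING, S) if a.movement is not None]
print(len(full_sign_like))  # 1: only -ING carries all four parameters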
Thus, most of the bound morphemes in SEE 2 are sign-like. Further, it seems
that the invented nature of bound morphemes in MCE is not necessarily prob-
lematic. Analysis confirms that all sign-like affixes conform to how a sign
should be formed in ASL (S. Supalla 1990). But is this enough? We consider
now what constitutes a sign in ASL, or perhaps in any signed language. A word
in any linguistic system is formed according to certain formational rules. These
rules involve both the phonological and the morphological systems.

Figure 6.1a The SEE 2 sign -ING; 6.1b The SEE 2 sign -MENT

Battison’s
(1978) pioneering typological work indicates that an ASL sign has its own
formational rules based on the physical dynamics of the body as manifested
primarily in gestural articulation and visual perception. These constraints of
production and perception on the formational aspects of signs have resulted
in the development of a highly integrated linguistic system comparable to the
phonological and morphological components of a spoken language.

[T]wo [handshapes and locations] is the upper limit of complexity for the formation
of signs. A simple sign can be specified for no more than two different locations (a
sign may require moving from one location to another), and no more than two different
handshapes (a sign may require that the handshape change during the sign). (Battison
1978:48)
Figure 6.2 The SEE 2 sign -S

Battison also observed that the two handshapes in a sign must be phonologically
related, with one changing into the other by opening or closing. For example,
a straight, extended finger may bend or fully contract into the palm, or the
contracted fingers may extend fully. Thus, these constraints on handshape for-
mation not only limit the number of handshapes to two; they also require the
handshapes to be related.
Such sign formational properties presumably relate to complexity issues.
If constraints like Battison’s prove to be true of all signed languages, then
they might affect the learnability of any linguistic system in the manual
modality, ASL and SEE 2 alike.
only ASL (and other signed languages allow, for example, three handshapes
or two unrelated handshapes in a sign), then such constraints are important
only for some language planning efforts. General learnability would not be the
issue. Thus, an important question to resolve before we can fully evaluate the
learnability of MCE systems is the generalizability of constraints like the ones
that Battison described.
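Because Battison’s limits are categorical, they can be stated as an explicit well-formedness check. The following sketch (in Python; the list-based sign representation, the location labels, and the partial table of handshape pairs related by opening or closing are our own illustrative assumptions, not an established phonological model) encodes the two numerical limits and the relationship requirement:

# Handshape pairs related by an opening/closing change (partial,
# hypothetical list; the open flat B hand closes to the S fist).
RELATED = {frozenset({"B", "S"})}

def is_well_formed(handshapes, locations):
    """True if a candidate sign respects the two-handshape limit, the
    two-location limit, and the handshape relationship constraint."""
    if len(set(handshapes)) > 2 or len(set(locations)) > 2:
        return False  # exceeds a number constraint
    if len(set(handshapes)) == 2 and frozenset(handshapes) not in RELATED:
        return False  # two handshapes not related by opening/closing
    return True

# A mono-morphemic sign with one handshape and two arm locations passes.
print(is_well_formed(["B"], ["forearm-low", "forearm-high"]))  # True

The sections that follow extend these same limits from mono-morphemic to multi-morphemic signs.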
At this point, the strong relationship between SEE 2 and ASL needs to be
summarized. Not only is a large inventory of signs borrowed from ASL to form
the free morphemes in SEE 2, but most of the bound morphemes were created
based on how signs are formed in ASL. It is not a question of whether individual
linear affixes are formed appropriately. We focus instead on how these elements
are combined with a root. More specifically, the adoption of linear affixation as
a morphological process in MCE may yield forms that exceed the formational constraints for
signs in ASL. In contrast, nonlinear affixation is consistent with such constraints
and leads to the desired outcome of successful acquisition by deaf children
(for a review on linear and nonlinear affixation types occurring in ASL and
spoken languages, see S. Supalla 1990). The following discussion covers this
particular morphological process in ASL and how it differs from the linear
type.

6.3.2 Nonlinear affixation


The handshape and location constraints discussed thus far are based on mono-
morphemic signs. The extension of such constraints to multi-morphemic signs
allows for the examination of potential learning problems with MCE mor-
phology. Mono-morphemic signs in ASL (e.g. IMPROVE) are consistent with
Battison’s formational constraints. For example, the citation form of IMPROVE
involves the B handshape with the palm facing toward the signer’s body, and
an arc movement produced from one location to another on the signer’s other
arm. There is one handshape and two locations on the arm (see Figure 6.3a). To
inflect IMPROVE for continuative aspect, its citation form undergoes processes
of deletion, replacement, and reduplication. According to Klima and Bellugi
(1979), one of the two locations on the arm is deleted, then the arc move-
ment is replaced with a circular movement, and finally, the circular movement
is reduplicated (see Figure 6.3b). For the noun derived from IMPROVE, the
movement differs. The manner of the noun’s movement is restrained, whereas
there is no restrained manner in the inflected verb. As a result of the re-
strained manner, the noun’s circular movement is smaller and accelerated
(see Figure 6.3c).
T. Supalla and Newport (1978) argued that the non-inflected verb, inflected
verb, and noun forms shown in Figure 6.3 are derived from a common underly-
ing representation. At an intermediate level in the derivation, the noun and verb
forms are distinguished in terms of manner and movement. The inflected verb
is derived from the intermediate level through necessary morphophonological
changes including reduplication. After the derivational process, the distinction
between nouns and verbs is perceptible at the surface level.
The learnability of both inflectional and derivational morphology in ASL is
confirmed through a number of acquisition studies. According to Newport and
Meier’s (1985) review, deaf children are able to produce mono-morphemic signs
before they acquire ASL sentence morphology. The latter “begins at roughly
[18–36 months of age] and, for the most complex morphological subsystems,
continues well beyond age 5 [years]” (p.896). The complexity of the deriva-
tional subsystem (i.e. involving noun and verb distinctions) may result in a later
age for production as reported in Launer (1982). However, ASL’s morpholog-
ical complexity does not stop children from acquiring the language; nor does
the learning process deviate from what is expected for any morphologically
complex language.
Figure 6.3 Three forms of the ASL sign IMPROVE: 6.3a the citation form;
6.3b the form inflected for continuative aspect; and 6.3c a derived noun

Inflectional and derivational morphology of SEE 2 also involves changes to
the root. But these changes are linear, or sequential. Inflectional and derivational
processes in ASL, in contrast, are nonlinear: the changes overlap with the root. Recall the forms of
IMPROVE discussed earlier. Under inflection for the continuative, the handshape
parameter of the sign remains the same as in the citation form (i.e. single hand-
shape incorporating four extended fingers; see Figure 6.3). The number of
locations is reduced from two to one. The SEE 2 forms that undergo morphological
change through linear affixation, on the other hand, have completely
different outcomes, as we show next.

6.3.3 Linear affixation


Again using the example of IMPROVE, SEE 2 has its own morphological pro-
cesses for marking present progressive tense. The SEE 2 equivalent of improving
is multi-morphemic. The verb IMPROVE (borrowed from ASL) serves as the
root. As in ASL, its citation form uses the B handshape with the palm facing
toward the signer’s body and an arc movement from one location to another
on the signer’s arm. The affix -ING is then produced after IMPROVE (see Fig-
ure 6.4a). Similar sequencing of roots and affixes occurs in noun derivation (i.e.
adding the suffix -MENT to IMPROVE to create the noun IMPROVEMENT;
see Figure 6.4b).
In both verb inflection and noun derivation, the SEE 2 suffixes retain their
phonological independence by having their own handshape, location, movement,
and orientation.

Figure 6.4 The SEE 2 signs: 6.4a IMPROVING; 6.4b IMPROVEMENT

The handshape for IMPROVE is B, which is distinct from the
I and M of -ING and -MENT, respectively. The location of IMPROVE is on the
arm, whereas -ING and -MENT are signed in neutral space. The movement is
path/arc for IMPROVE while the movement for -MENT is path/straight, and
the movement for -ING is internal with no path at all. Finally, the orientation
of the IMPROVE handshape is tilted with the ulnar side of the hand facing
downward. In contrast, the orientation for both -ING and -MENT is upright
with the palm facing away from the signer. Taken together, these facts show that
the affixes developed for SEE 2 are phonologically distinct, across all four parameters,
from the roots with which they are sequenced.
Also important are the cases where the handshape and
location constraints are exceeded in multi-morphemic SEE 2 signs. In
IMPROVING, the B and I handshapes are both “open,” and there is no relation-
ship between them. IMPROVEMENT also has two handshapes. Again, there is
no relationship between them; they are formationally distinct (i.e. four extended
fingers for the first handshape and three bent fingers for the second handshape).
If there were a relationship between the two handshapes (as Battison maintained
for ASL), the two handshapes would be formationally consistent (e.g. extended
four fingers to bent four fingers).
In the case of IMPROVES (including the SEE 2 affix that omits movement, as
shown in Figure 6.2), this linearly affixed sign meets both handshape number
and handshape relationship constraints; that is, the B and S handshapes are
related (“open” and “closed”; four extended fingers fully contract into the palm).
But IMPROVES has three locations. It exceeds the location number constraint.
In this example, the first two locations are made on the arm, and the last location
is made in neutral signing space. IMPROVING and IMPROVEMENT also
exceed the two-location limit, in addition to failing the handshape constraints.
We turn now to consider assimilation, a phonological operation that occurs
in natural languages, signed and spoken alike. Another example from SEE 2
shows what happens when the two morphological units in KNOWING are com-
bined. Figure 6.5a shows the combination prior to assimilation and Figure 6.5b
after assimilation. The ASL counterpart can be seen in a lexical compound. For
example, the two signs FACE + STRONG (meaning ‘resemble’) show how a
SEE 2 root might “blend” with the sign-like linear affix. According to Liddell
and Johnson (1986), lexical compounds in ASL undergo phonological restruc-
turing and realignment. The comparable change in the form of KNOWING
involves the removal of KNOW’s reduplication and the reversal of its hand-
shape’s orientation (i.e. from palm facing toward the signer to away from the
signer). A path movement is created between KNOW and -ING, whereas it is
absent in the non-assimilated form.
Assimilation cuts the production time of the non-assimilated form in half.
The assimilated version of KNOWING is comparable in production time to
the average single sign in ASL (S. Supalla 1990).

Figure 6.5 The SEE 2 sign KNOWING: 6.5a prior to assimilation and 6.5b after assimilation

Nevertheless, this form still
violates ASL handshape constraints in terms of number and relationship. The
two handshapes used in this inflected MCE sign are not related (i.e. B and I
are both open and comparable to those of the earlier example, IMPROVING).
Had the suffix’s handshape (i.e. I) been removed to meet the handshape number
constraint (i.e. using B from KNOW only), the phonological independence of
the suffix would be lost. This particular combination of KNOW and -ING would
be overtly blended. The fully assimilated versions of KNOWING and KNOWS
would appear identical and noncontrastive, for example (S. Supalla 1990).
As shown by the examples discussed here, the combination of a root and
bound morpheme (sign-like and less-than-sign; non-assimilated and assimi-
lated) in SEE 2 can produce at least three scenarios:
• The combination may exceed the location number constraint.
• The combination may exceed the handshape number constraint.
• While meeting the handshape number constraint, the combination of two
unrelated handshapes may result in a violation of the handshape relationship
constraint.
Not included here is the combination of two or possibly all three constraint
violation types. According to our analyses, MCE morphology does not meet
the constraints on sign structure.
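These violations can be generated mechanically. In the spirit of the earlier well-formedness sketch (the parameter lists remain our illustrative simplifications of the descriptions above), concatenating the parameters of IMPROVE with those of each suffix flags exactly the problems just summarized:

IMPROVE = (["B"], ["forearm-low", "forearm-high"])  # root borrowed from ASL
SUFFIXES = {"-ING": (["I"], ["neutral"]),
            "-MENT": (["M"], ["palm"]),
            "-S": (["S"], ["neutral"])}
RELATED = {frozenset({"B", "S"})}  # pairs related by opening/closing

for name, (hs, locs) in SUFFIXES.items():
    handshapes = IMPROVE[0] + hs   # linear affixation simply concatenates
    locations = IMPROVE[1] + locs  # the parameters of root and affix
    problems = []
    if len(set(locations)) > 2:
        problems.append("too many locations")
    if len(set(handshapes)) > 2:   # arises when the root itself has two
        problems.append("too many handshapes")
    elif len(set(handshapes)) == 2 and frozenset(handshapes) not in RELATED:
        problems.append("unrelated handshapes")
    print("IMPROVE" + name, problems)

Run as written, the script reports three locations for all three combinations, plus unrelated handshapes for IMPROVING and IMPROVEMENT; only IMPROVES passes the handshape checks, just as described above.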

6.3.4 MCE acquisition


Here we ask whether bound MCE morphemes are acquired like their counter-
parts in spoken English. According to Brown (1973), hearing children master
14 grammatical morphemes in English at different points in time. Some of these
are consistently produced before others. Brown also found that the morphemes
that are less phonologically distinct and more often assimilated in form (e.g.
the possessive marker -’s) are typically produced later. Slobin (1977) attributed
such delay to a lack of perceptual salience. He argued that the degree of salience
predicts the order in which these morphemes emerge in hearing children. The
more perceptually salient a bound morpheme is, the earlier it is produced. More
recent work challenges several aspects of these earlier claims (e.g. Gerken,
Landau, and Remez 1990; McKee 1994), but we refer to them here because the
MCE acquisition studies reviewed below were framed in their terms.
Following Brown and Slobin, a number of investigators have studied the ac-
quisition of these same 14 morphemes in MCE. Gilman and Raffin (1975; Raffin
1976) found several differences in the sequence of deaf children’s acquisition
of bound morphemes as compared to their hearing counterparts. The overall
timing of acquisition for MCE function morphemes is late. Both Schlesinger
(1978) and Meadow (1981) reached similar conclusions. These investigators
account for the late acquisition of these morphemes by suggesting that their forms
are slurred and indistinct to children. Perhaps the forms of MCE function morphemes are
less perceptually salient to deaf children than their spoken counterparts are to
hearing children.
In contrast, Maxwell (1983; 1987) argued that the MCE morphemes are
too salient. She pointed out that deaf children’s developmental patterns with
MCE morphology are not just delayed but actually anomalous. The children in
her research treated MCE bound morphemes as if
they were free morphemes, occasionally with no potential root form nearby.
In other words, they were placed randomly in a sentence. The examples in (1)
illustrate this error with -ING.
(1) a. SANTA-CLAUS COME TO TOWN ING.
b. WRITE THAT NAME ING THERE. (Maxwell 1987:331)
Swisher (1991:127) observed similar patterns: “[The deaf] child signed rapid
and often unintelligible messages, sprinkled with extra -s’s and -ing’s, which
were very difficult to follow.” This pattern of MCE affixes (both the sign-like
ones and those that are less than full signs) being treated as free morphemes is
also found in a study by Gaustad (1986). Importantly, such random placement
of affixes is something
that hearing children acquiring spoken English do not do. Instead, the more
typical error is omission.
Later in the acquisition of MCE morphology, when deaf children have figured
out that these signs are not free morphemes, a different pattern appears. Even
when deaf children understand that each bound morpheme should be affixed to
a root, they still misuse these morphemes. Bornstein, Saulnier, and Hamilton
(1980) found that deaf children used only four of the 14 markers. These four
were produced inconsistently, and appeared only 39% of the times that they
should have been used. Wodlinger-Cohen (1991) compared hearing and deaf
children and drew similar conclusions. Although omission errors did occur
with hearing children learning spoken English, they did so only at an early age.
As hearing children grew older, they produced the relevant morphemes more
often and more consistently. For deaf children learning MCE, omission errors
were not outgrown. These children may know the rules behind individual bound
morphemes, but their production does not systematically reveal that knowledge.
The affixes developed for MCE appear to be difficult to produce compared
to their spoken counterparts.

6.4 Discussion and conclusions


We return to the production of MCE morphology later. We focus now on MCE
acquisition. Evidence from various studies indicates that deaf children experi-
ence serious difficulty in learning MCE bound morphology. Some explanation
for this state of affairs is clearly needed. Accounts based on perceptual salience
have produced two extremes: the forms of MCE bound morphemes are too
salient, or not sufficiently salient. Our analyses show that MCE affixes vary
in perceptual salience. For example, the sign-like affixes (e.g. -ING) are more
salient than those less like signs (e.g. -S). The MCE acquisition studies indicate
that both types suffer similar consequences. That is, misanalysis of the MCE
bound morphemes as free morphemes occurs, and this impairs deaf children’s
acquisition of English.
The extension of structural constraints from mono-morphemic to multi-
morphemic signs also bears on the learning problems of MCE morphology.
Recall how Battison (1978) described sign structure in ASL. Battison himself
did not apply these constraints to MCE morphology, but we want to emphasize such an application now.
Our discussion on the possible violation of structural constraints with linearly
affixed signs in MCE has significant implications. A linear affix combined
with a root results in one (or possibly a combination) of three structural con-
straint violation types. As a result, the linear affix places itself outside the
sign boundaries. For this reason, deaf children do not perceive it as part of the
root.
We acknowledge that deaf children do eventually learn some rules of MCE
bound morphemes, albeit through “back-door” techniques. We credit explicit
instruction and exposure to printed English with making these children aware
of the function of the MCE bound morphemes. These affixes are clearly part
of the word when written (e.g. no spacing between the root and affix), and
teachers of the deaf engage heavily in teaching the rules of English grammar
(for review of methods of teaching English to deaf students, see McAnally,
Rose, and Quigley 1987).
In this light, deaf children do not necessarily learn English through MCE
alone. These children are influenced by what they have learned from other
sources. However, at the subconscious level they experience cognitive break-
down when the MCE bound morphemes are not consistent with how words are
formed in the visual–gestural modality. They may tell themselves that the linear
affixes are “part of the root” when they sign MCE morphology. Yet their mental
grammars continue to treat the linear affixes as separate from the root. This
explains why omission plagues the production of MCE morphology, especially
with affixes.
We must ask ourselves whether the structure of MCE morphology can be
changed to improve its perception and production. Swisher (1991) pointed out
that the morphology of SEE 2 and other English-based sign systems needs to
undergo the process of assimilation that occurs in natural languages (spoken or
signed). The findings that she reported on MCE use among hearing parents with
deaf children indicate that assimilation of linearly affixed signs is frequently
lacking. The reduction by half, noted earlier, in the time needed to produce
linearly affixed signs that undergo assimilation is also a welcome insight. We
also need to consider previous studies on the time required to produce a sentence
in, for example, SEE 2. A sign, due to its larger physical articulation, requires
twice as long to produce as a word (Bellugi and Fischer
1972; Grosjean 1977); however, the proposition rate involved in producing a
sentence is found to be equivalent in both ASL and spoken English. The SEE 2
sentence, on the other hand, takes nearly twice as long to accomplish (Klima
and Bellugi 1979). The reliance on nonlinear affixation in ASL is seen as an
effective compensation for the lengthy production of individual signs.
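A rough, idealized calculation makes this compensation concrete (the figures below merely stylize the cited rate findings; they are not new measurements). Let a spoken English word take time t, so that a sign takes about 2t. An English sentence expressing one proposition in n words then takes about n × t. Because ASL layers inflections nonlinearly onto its roots, it can pack the same proposition into roughly n/2 signs, for a duration of (n/2) × 2t = n × t, which is why the proposition rates converge. SEE 2 instead realizes each English morpheme as a separate sign-sized unit, yielding about n signs and a duration near n × 2t, twice the spoken time, just as Klima and Bellugi reported.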
The impression here is that if MCE were “quicker” in production, then deaf
children would learn the structure involved more effectively. This is question-
able. We propose that even if temporal constraints for effective processing were
met via assimilation, MCE morphology would remain problematic for deaf
children. Supporting this position, S. Supalla (1990) developed an experimen-
tal design to facilitate assimilation within linearly affixed signs based on how
compounds are formed in ASL. Two groups of subjects, one proficient in ASL
and one in MCE, were asked to count the number of signs they viewed. The signs were
derived from New Zealand Sign Language (NZSL), a language unfamiliar to
the subjects. The results of this study suggest that perceptual distortion persisted
for MCE morphology. Even when they underwent assimilation, linearly affixed
signs exceeded sign boundaries and were perceived as two signs, not one. Non-
linearly affixed signs, modeled on ASL, were consistently perceived as one
sign. The sign-counting task was also performed by a group of novice signers.
Interestingly, the novice signers showed the same perceptual biases about where
a sign begins and ends as the subjects with extensive signing experience
(via ASL or MCE).
Further, S. Supalla’s (1990) use of NZSL as the foreign language stimulus
holds ramifications for understanding the relationship of signed and spoken
languages. Individual NZSL signs were selected to function as roots and others
as linear affixes. They underwent assimilation upon combination. The subjects
in the study did not know the original meanings of the NZSL signs or the fact
that they were all originally free morphemes. The subjects (even those who had
no signing experience) were able to recognize the formational patterns within
the linearly affixed signs and to identify sign boundaries, splitting the linearly
affixed signs into two full words rather than one. Such behavior is comparable to
that of deaf children exposed to SEE 2 and other English-based sign systems.
Note that NZSL and ASL are two unrelated signed languages, yet they appear
to share the same formational constraints of signs.
A critical implication of these findings is that deaf children may have per-
ceptual strategies (as found among adults) that they apply in their word-level
segmentation of MCE morphology. More specifically, these children may be
able to identify signs in a stream based on the structural constraints as de-
scribed, but they cannot perceive linear affixes as morphologically related to
roots. Rather, linear affixes stand by themselves and are wrongly perceived as
“full words.” This is an example of how a language’s modality may shape its
structure, which in turn relates to its learnability. For adult models using MCE,
the combination of a linear affix with a root, assimilated or not, seems to result in
the production of a sign that is too complex, undermining the potential naturalness of
MCE morphology.
The notion of modality-specific constraints for the structure of signed lan-
guages is not new. Newport and T. Supalla (2000), for example, reviewed the
issues and highlighted T. Supalla and Webb’s (1995) study of the morphological
devices in 15 different signed languages. The explanation for the striking sim-
ilarity of morphology in these languages lies in nonlinguistic resources. That
is, space and motion were described as what “propels sign languages more
commonly toward one or a few of the several ways in which linguistic systems
may be formed” (Newport and T. Supalla 2000:112). Recall the earlier discus-
sion on how inflectional and derivational processes in ASL employ changes
in sign-internal movement and location (e.g. IMPROVE). These processes can
be described in terms of space and motion, thus attributing at least part of
how ASL works as a language to nonlinguistic shaping. However, if we look
beyond the use of motion and space so characteristic of ASL (and possibly
other natural signed languages, as T. Supalla and Webb proposed), we find the
linguistic constraints that shape individual signs or words within the visual–
gestural modality. The arbitrary rules involving the number of locations and
handshapes determine possible signs. The sign formational constraints, in turn,
predict which morphological operations should take place. In this case, we can
use a linguistic explanation for why natural signed languages are structured as
they are.
A linguistic explanation is also needed for why deaf children are driven to
innovate ASL-like forms, especially those who have had MCE exposure for
most of their childhood (Suty and Friel-Patti 1982; Livingston 1983; Goodhart
1984; Gee and Goodhart 1985; S. Supalla 1991). S. Supalla (1991) covers those
exposed to SEE 2 specifically. Re-analysis of the data indicates that children
changed the MCE input to include nonlinear affixation. In such restructuring, the
language form is affected by what children bring to the learning process.
This behavior is not surprising from the perspective of the nativization framework
proposed by Andersen (1983, cited in Gee and Mounty 1991). This framework reflects
Chomsky’s LAD, i.e. the idea that the human biological capacity for language
represents a set of internal norms. If the input for language development is inac-
cessible, children will construct grammars on the basis of their internal norms.
Gee and Mounty (1991) suggest that deaf children may have found MCE deviant
from the internal norms specific to signed languages. This would “trigger” these
children into replacing the MCE grammar with an acceptable one. Although
no internal norms were specified, this suggestion supports the idea that structural constraints
specific to signed languages may exist and may explain the outcome of MCE
acquisition. Our consideration of the two morphological processes (linear vs.
nonlinear) in this chapter offers insight into why only one of them may be
suitable for the signed medium, whereas both are suitable for the
spoken form.
The theory emerging from discussion of language structure and modality
contrasts with the assumptions underlying language planning efforts in the USA
and elsewhere. Both French and American language planners had a different
theoretical basis when they made Methodical Sign and MCE, respectively.
They apparently thought that because linear affixation occurs in spoken languages, it
would not create any problems in the signed medium. The reliance on nonlinear
affixation as found in ASL would be considered simply language-specific (much
as one would say about certain spoken languages of the world; e.g. Semitic
languages and their reliance on nonlinear affixation as a morphological process).
Such reasoning underlies the language planning efforts in creating a viable
sign system modeling the structure of a spoken language.
In any case, the viability of a linguistic system is best determined by its
naturalness. Understanding how deaf children cope with MCE has produced
provocative findings ranging from the failure to learn and produce bound mor-
phemes to changes in the form and structure of MCE morphology. There is a
demonstrated incompatibility between a non-natural language system and the
cognitive biases of deaf children. Continued investigation of the interaction be-
tween modality and language is essential. Such a framework is helpful when
we attempt to evaluate MCE and its role in the language development of deaf
children. Validating sign formational constraints affecting the morphological
operations would allow for the development of a more beneficial theory of the
linguistic needs of deaf children.
As part of defining our future directions, we need to consider literacy issues
and deaf children. Recall that MCE was supposed to help with the development
of reading and writing skills. The mapping of spoken language to the signed
medium is an age-old goal for educators who work with deaf children. These
educators have long recognized the difficulties that deaf children experience in
mastering even rudimentary literacy skills. Frustration on the part of the edu-
cator of the deaf and of the deaf child is so great that a wide range of desperate
measures have been developed and implemented over the years. These include
the revival of the modern language planning effort surrounding MCE. We are
not surprised at these outcomes, and must now pay attention to the fact that
hearing children enjoy the added benefit of sound support in learning how to
read effectively (i.e. via phonetic skills). This sound support serves as a bridge
from spoken English to reading in the print form (for review of
the disparities in literacy development tools between hearing and deaf children,
see LaSasso and Metzger 1998).
What we need to devise is a new research plan asking questions about how
ASL can serve as a tool to teach reading and writing to deaf children. This is
exactly what our research team has done in recent years with the development
of theoretically sound, ASL-based literacy tools (e.g. involving the use of an
ASL alphabet, a special resource book, and glossing). These tools are designed
to provide deaf children with opportunities to simultaneously develop literacy
skills in the signed language and make the transition to print English as a
second language (for review of ASL-supported English learning, see S. Supalla,
Wix, and McKee 2001). The literacy program as described forms an effective
alternative to the sound support that underlies the education of hearing children,
especially in their reading development. Such educational innovations provide
a path not taken in the past.
Developing a successful literacy program for deaf children requires that we
ask how signed languages differ from spoken languages. This is key to positive
outcomes concerning the welfare of deaf children. With new understanding
of internal factors, the structure of a sign system modeled on English now
commands our full attention. The consideration of how a signed language is
perceived and produced is crucial for the assessment of the effectiveness of
MCE. The cognitive prerequisites involved underlie the structure of natural
languages of the world. If the signed medium is the preferred mode of commu-
nication for deaf children, we must identify its language structure. Once we are
clear on how ASL operates as a language, we can begin to pave the way for the
long-awaited literacy development of deaf children.

6.5 References
Andersen, Roger W. 1983. A language acquisition interpretation of pidginization and
creolization. In Pidginization and creolization as language acquisition, ed. Roger
W. Andersen, 1–56. Rowley, MA: Newbury House.
Battison, Robbin. 1978. Lexical borrowing in American Sign Language. Silver Spring,
MD: Linstok Press.
Bebian, Roch-Ambroise A. 1984. Essay on the deaf and natural language, or introduction
to a natural classification of ideas with their proper signs. In The deaf experience:
Classics in language and education, ed. Harlan Lane, 129–160. Cambridge, MA:
Harvard University Press.
Bellugi, Ursula and Susan Fischer. 1972. A comparison of sign language and spoken
language: Rate and grammatical mechanisms. Cognition 1:173–200.
Bever, Thomas. 1970. The cognitive basis for linguistic structures. In Cognition and the
development of language, ed. John R. Hayes, 279–352. New York: Wiley.
Bornstein, Harry, Karen L. Saulnier, and Lillian B. Hamilton. 1980. Signed English: A
first evaluation. American Annals of the Deaf 125:467–481.
Brown, Roger. 1973. A first language: The early stages. Cambridge, MA: Harvard
University Press.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht, Holland: Foris.
de l’Epée, Charles M. 1984. The true method of educating the deaf, confirmed by much
experience. In The deaf experience: Classics in language and education, ed. Harlan
Lane, 51–72. Cambridge, MA: Harvard University Press.
Gaustad, Martha G. 1986. Longitudinal effects of manual English instruction on deaf
children’s morphological skills. Applied Psycholinguistics 7:101–127.
Gee, James and Wendy Goodhart. 1985. Nativization, linguistic theory, and deaf lan-
guage acquisition. Sign Language Studies 49:291–342.
Gee, James and Judith L. Mounty. 1991. Nativization, variability, and style shifting in
the sign language development of deaf children of hearing parents. In Theoretical
issues in sign language research, Vol. 2: Psychology, ed. Patricia Siple and Susan
D. Fischer, 65–83. Chicago, IL: University of Chicago Press.
Gerken, Louann, Barbara Landau, and Robert E. Remez. 1990. Function morphemes
in young children’s speech perception and production. Developmental Psychology
26:204–216.
Gilman, Leslea A. and Michael J. M. Raffin. 1975. Acquisition of common morphemes
by hearing-impaired children exposed to Seeing Essential English sign system.
Paper presented at the American Speech and Hearing Association, Washington, DC.
Goodhart, Wendy. 1984. Morphological complexity, American Sign Language, and
the acquisition of sign language in deaf children. Doctoral dissertation, Boston
University.
Grosjean, François. 1977. The perception of rate in spoken and sign languages. Perception
and Psychophysics 22:408–413.
Gustason, Gerilee, Donna Pfetzing, and Esther Zawolkow. 1980. Signing Exact English.
Los Alamitos, CA: Modern Signs Press.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Lane, Harlan. 1980. A chronology of the oppression of sign language in France
and the United States. In Recent perspectives on American Sign Language, eds.
Harlan Lane and François Grosjean, 119–161. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Lane, Harlan. 1984a. When the mind hears. New York: Random House.
Lane, Harlan, ed. 1984b. The deaf experience: Classics in language and education.
Cambridge, MA: Harvard University Press.
LaSasso, Carol J. and Melanie Metzger. 1998. An alternate route for preparing deaf
children for BiBi programs: The home language as L1 and cued speech for convey-
ing traditionally spoken languages. Journal of Deaf Studies and Deaf Education
3:265–289.
Launer, Patricia B. 1982. “A plane” is not “to fly”: Acquiring the distinction between
related nouns and verbs in American Sign Language. Doctoral dissertation, City
University of New York.
Liben, Lynn S. 1978. The development of deaf children: An overview of issues. In
Deaf children: Developmental perspectives, ed. Lynn S. Liben, 3–20. New York:
Academic Press.
Liddell, Scott K. and Robert Johnson. 1986. American Sign Language compound for-
mation processes, lexicalization, and phonological remnants. Natural Language
and Linguistic Theory 4:445–513.
Livingston, Sue. 1983. Levels of development in the language of deaf children. Sign
Language Studies 40:193–285.
Luetke-Stahlman, Barbara and Mary P. Moeller. 1990. Enhancing parents’ use of SEE 2.
American Annals of the Deaf 135:371–379.
Maxwell, Madeline. 1983. Language acquisition in a deaf child of deaf parents: Speech,
sign variations, and print variations. In Children’s language, Vol. 4, ed. Keith E.
Nelson, 283–313. Hillsdale, NJ: Lawrence Erlbaum.
Maxwell, Madeline. 1987. The acquisition of English bound morphemes in sign form.
Sign Language Studies 57:323–352.
Mayer, Connie, and C. Tane Akamatsu. 1999. Bilingual-bicultural models of literacy
education for deaf students: Considering the claims. Journal of Deaf Studies and
Deaf Education 4:1–8.
Mayer, Connie and G. Wells. 1996. Can the linguistic interdependence theory support a
bilingual-bicultural model of literacy education for deaf students? Journal of Deaf
Studies and Deaf Education 1:93–107.
McAnally, Patricia L., Susan Rose and Stephen P. Quigley. 1987. Language learning
practices with deaf children. Boston, MA: College Hill Press.
McKee, Cecile. 1994. What you see isn’t always what you get. In Syntactic theory and
first language acquisition: Crosslinguistic perspectives, Vol. 1: Heads, projections,
and learnability, eds. Barbara Lust, Margarita Suñer, and John Whitman, 101–133.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Meadow, Kathryn P. 1981. Deafness and child development. Berkeley, CA: University
of California Press.
Meier, Richard P. 1991. Language acquisition by deaf children. American Scientist
79:60–70.
Newport, Elissa L. and Richard P. Meier. 1985. The acquisition of American Sign Language.
In The cross-linguistic study of language acquisition, ed. Dan Slobin, 881–938.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Newport, Elissa L. and Ted Supalla. 2000. Sign language research at the millennium. In
The signs of language revisited, eds. Karen Emmorey and Harlan Lane, 103–114.
Mahwah, NJ: Lawrence Erlbaum Associates.
Raffin, Michael. 1976. The acquisition of inflectional morphemes by deaf children using
Seeing Essential English. Doctoral dissertation, University of Iowa.
Ramsey, Claire. 1989. Language planning in deaf education. In The sociolinguistics of
the deaf community, ed. Ceil Lucas, 123–146. San Diego, CA: Academic Press.
Schein, Jerome D. 1984. Speaking the language of sign. New York: Doubleday.
Schlesinger, I. M. 1978. The acquisition of bimodal language. In Sign language of the
deaf: Psychological, linguistic, and social perspectives, eds. I. M. Schlesinger and
Lila Namir, 57–93. New York: Academic Press.
Slobin, Dan. 1977. Language change in childhood and in history. In Language learning
and thought, ed. John Macnamara, 185–214. New York: Academic Press.
Stedt, Joe D. and Donald F. Moores. 1990. Manual codes in English and American Sign
Language: Historical perspectives and current realities. In Manual communication,
ed. Harry Bornstein, 1–20. Washington, DC: Gallaudet University Press.
Stokoe, William C. 1960. Sign language structure: An outline of the visual communi-
cation systems of the American deaf. Studies in Linguistics, Occasional Papers 8.
Silver Spring, MD: Linstok Press.
Stokoe, William C., Dorothy C. Casterline and Carl G. Croneberg. 1965. A dictionary
of American Sign Language. Washington, DC: Gallaudet College Press.
Supalla, Samuel J. 1990. Segmentation of Manually Coded English: Problems in the
mapping of English in the visual/gestural mode. Doctoral dissertation, University
of Illinois at Urbana-Champaign.
Supalla, Samuel J. 1991. Manually Coded English: The modality question in signed
language development. In Theoretical issues in sign language research, Vol. 2:
Psychology, eds. Patricia Siple and Susan Fischer, 85–109. Chicago, IL: University
of Chicago Press.
Supalla, Samuel J., Tina Wix, and Cecile McKee. 2001. Print as a primary source
of English for deaf learners. In One mind, two languages: Bilingual language
processing, ed. Janet L. Nicol, 177–190. Malden, MA: Blackwell.
Supalla, Ted and Elissa L. Newport. 1978. How many seats in a chair? The derivation of
nouns and verbs in American Sign Language. In Understanding language through
sign language research, ed. Patricia Siple, 91–132. New York: Academic Press.
Supalla, Ted, and Rebecca Webb. 1995. The grammar of International Sign: A new look
at pidgin languages. In Language, gesture, and space, eds. Karen Emmorey and
Judy Reilly, 333–352. Mahwah, NJ: Lawrence Erlbaum Associates.
Suty, Karen A. and Sandy Friel-Patti. 1982. Looking beyond signed English to describe
the language of two deaf children. Sign Language Studies 35:153–166.
Swisher, M. Virginia. 1991. Conversational interaction between deaf children and their
hearing mothers: The role of visual attention. In Theoretical issues in sign lan-
guage research, Vol. 2: Psychology, eds. Patricia Siple and Susan Fischer, 111–134.
Chicago, IL: University of Chicago Press.
Wodlinger-Cohen, R. 1991. The manual representation of speech by deaf children and
their mothers, and their teachers. In Theoretical issues in sign language research,
Vol. 2: Psychology, eds. Patricia Siple and Susan Fischer, 149–169. Chicago, IL:
University of Chicago Press.
Woodward, James. 1973. Some characteristics of Pidgin Sign English. Sign Language
Studies 3:39–46.
Part II

Gesture and iconicity in sign and speech

The term “gesture” is used to denote various human actions. This is even true
among linguists and psychologists, who for the past two decades or more have
highlighted the importance of gestures of various sorts and their significant role
in language production and reception. Some writers have defined gestures as
the movements of the hands and arms that accompany speech. Others refer to
the articulatory movements of speech as vocal gestures and those of signed
languages as manual gestures. For those who work in signed languages, the
term nonmanual gesture usually refers to facial expressions, head and torso
movements, and eye gaze, all of which are vital parts of signed messages. In the
study of child language acquisition, some authors have referred to an infant’s
reaches, points, and waves as prelinguistic gesture.
In Part II we introduce two works that highlight the importance of the study
of gesture and one that addresses iconicity (a closely related topic). We also
briefly summarize some of the various ways in which gesture has been defined
and investigated over the last decade. A few pages of introductory text are not
enough to review all the issues that have arisen – especially within the last
few years – concerning gesture and iconicity and their role in language, but this
introduction is intended to give the reader an idea of the breadth and complexity
of these topics.
Some authors claim that gesture is not only an important part of language as
it is used today, but that formal language in humans began as gestural commu-
nication (Armstrong, Stokoe, and Wilcox 1995; Stokoe and Marschark 1999;
Stokoe 2000). According to Armstrong et al., gestures1 were likely used for
communication by the first groups of humans that lived in social groups. Fur-
thermore, they argue that visible gesture can exhibit both word and syntax at
the same time. They also point out that visible gestures can be iconic, that is,
a gesture can resemble its referent in some way, and visual iconicity can work
as a bridge to help the perceiver understand the meaning of a gesture. Not only
do these authors claim that language began in the form of visible gestures, but
that visible gestures continue to play a significant role in language production.
This viewpoint is best illustrated with a quote from Armstrong et al. (1995:42):
“For us, the answer to the question, ‘If language began as gesture, why did it
not stay that way?’ is that it did.”

1 Armstrong et al. (1995:38) provide the following general definition of gesture: “Gesture can
be understood as neuromuscular activity (bodily actions, whether or not communicative); as
semiotic (ranging from spontaneously communicative gestures to more conventional gestures);
and as linguistic (fully conventionalized signs and vocal articulations).”
One of the earliest systematic records of gesture is a description of its use in
a nineteenth-century Italian city. In an English translation of a book published
in Naples in 1832, the work of Andrea de Jorio – one of the first authors
to write about gesture from an anthropological perspective – is revived. De
Jorio compared gestures used in everyday life in Naples in the early nineteenth
century to those gestures that are depicted on works of art from centuries past –
with particular attention to the complexity of some gestures. In one example,
de Jorio describes a gesture used to protect against evil spirits. The same gesture,
according to de Jorio, can also be directed toward a person, and it is referred
to as the “evil eye” in this instance. Interestingly, the form of this nineteenth-
century gesture greatly resembles the current American Sign Language (ASL)
sign glossed as MOCK. Given the historical connection between ASL and
French Sign Language (Langue des Signes Française or LSF), perhaps there is
also a more remote connection between the Neapolitan gesture and the ASL
sign. It appears that de Jorio (2000) might allow us to explore some possible
antecedents of current signed language lexicons.
Along those lines, some authors have analyzed the manner in which contem-
porary gestures can evolve into the signs of a signed language. Morford and
Kegl (2000) describe how in Nicaragua over the last two decades conventional
gestures have been adopted by deaf and hearing individuals for use in homesign
communication, and then such gestures have become lexicalized as a result of
interaction between homesigners. Some of these forms have then gone on to
become signs of Idioma de Señas de Nicaragua (Nicaraguan Sign Language)
as evidenced by the fact that they now accept the bound morphology of that
language.
Not only has the phylogenetic importance of gestures been asserted in the
literature, but their role in child development has been the focus of much re-
search (for an overview, see Iverson and Goldin-Meadow 1998). According to
Goldin-Meadow and Morford (1994), both hearing and deaf infants use single
gestures and two-gesture strings as they develop. Gesture for these authors is
defined as an act that must be directed to another individual (i.e. it must be com-
municative) and an act that must not be a direct manipulation of some relevant
person or object (i.e. it must not serve any function other than communication).
In addition to the importance of gesture for the phylogenetic and ontogenetic
development of language, some writers claim that gesture is an integral com-
ponent of spoken language in everyday settings (McNeill 1992; Iverson and
Goldin-Meadow 1998). Gesture (or gesticulation, as McNeill refers to it) used
in this sense refers to the movements of the hands and arms that accompany
speech. According to McNeill (1992:1), analysis of gestures helps to understand
the processes of language.

Just as binocular vision brings out a new dimension of seeing, gesture reveals a new
dimension of the mind. This dimension is the imagery of language which has lain hidden.
We discover that language is not just a linear progression of segments, sounds, and words,
but is also instantaneous, nonlinear, holistic, and imagistic. The imagistic component
co-exists with the linear-segmented speech stream and the coordination of the two gives
us fresh insights into the processes of speech and thought.

Gesticulation, however, differs from the use of gesture without speech. Singleton
et al. (1995:308) claim that gesture without speech (or nonverbal gesture) ex-
hibits language-like properties and can represent meaning on its own, whereas
gesticulation is much more dependent on the accompanying speech for repre-
senting meaning and is not “an independent representational form.”
One of the most compelling reasons to study gesture is that it is a robust
phenomenon that occurs in human communication throughout the world, among
different languages and cultures (McNeill 1992; Iverson and Goldin-Meadow
1998). Gesture is even found in places where one would not expect to find
it. For example, gesture is exhibited by congenitally blind children as they
speak, despite the lack of visual input from language users in their environment
(Iverson and Goldin-Meadow 1997; Iverson et al. 2000) and gesture can be
used in cases when speech is not possible (Iverson and Goldin-Meadow 1998).
While the visible gesture that has been described thus far can be distinguished
from speech because of the different modalities (gestural vs. oral) in which
production occurs, the same type of gesturing in signed languages is far more
difficult to identify. If, for the sake of argument, we posit that gestures are
paralinguistic elements that alternate with formal linguistic units (morphemes,
words), how does one go about defining what is gestural and what is linguistic
(or morphemic) in signed languages where both types of communication involve
the same articulators? Early in the study of ASL, Klima and Bellugi (1979:15)
described how ASL comprises not only signs and strings of signs with certain
formational properties, but also what they termed “extrasystemic gesturing.”
On their view, ASL – and presumably other signed languages as well – utilizes
“a wide range of gestural devices, from conventionalized signs to mimetic
elaboration on those signs, to mimetic depiction, to free pantomime” (p. 13).
Not only are all these devices used in the production of ASL, but signers also
go back and forth between them and lexical signs regularly, at times with no
obvious signal that a switch is being made. A question that has long animated
the field is the extent to which these devices are properly viewed as linguistic
or as gestural (for contrasting views on this question, see Supalla 1982; Supalla
1986; and Emmorey, in press).
There are, of course, ways that sign linguists have proposed to differenti-
ate gesture from linguistic elements of a signed language. Klima and Bellugi
(1979:18–19) claimed that pantomime (presumably a type of gesture) differs
from ASL signs in various respects:
• Each pantomime includes a number of thematic images whereas regular ASL
signs have only one.
• Pantomimes are much longer and more varied in duration than ASL signs.
• Sign formation requires brief temporal holding of the sign handshape before
initiating movement of the sign, whereas pantomime production does not
require these holds and movement is much more continuous.
• Pantomime production is temporally longer than a semantically equivalent
sign production.
• Handshapes are much freer in pantomime production than in sign production.
• Pantomime often involves hand and arm movements that are not seen (al-
lowed) in sign production.
• Pantomime includes head and body movement while only the hands move in
sign production.
• The role of eye gaze seems to differ in pantomime production as opposed to
sign production.
In addition to manual gesturing with the hands and arms, a signer or speaker
can gesture nonmanually, with facial expressions, head movement, and/or body
postures (Emmorey 1999). It has been suggested that a signer can produce a
linguistic sign (or part of a linguistic sign) with one articulator and a gesture with
another articulator (Liddell and Metzger 1998; Emmorey 1999). This is possible
in signed language, of course, because manual and nonmanual articulations can
take place simultaneously.
In this volume, Okrent (Chapter 7) discusses gesture in both spoken and
signed languages. She suggests that gesture and language can be produced
simultaneously in both modalities. Okrent argues that we need to re-analyze
what gesture means in relationship to language and to re-evaluate where we are
allowing ourselves to find gesture. A critical component of analyzing gesture
is the classification of different types; to that end Okrent describes the kinds of
gestures that signers and speakers regularly produce. She explains to the reader
that some gestures are used often by many speakers/signers and denote specific
meanings; these gestures are “emblems.”2 Other gestures co-occur with speech
and are called speech-synchronized gestures (see McNeill 1992); a specific class
of those is “iconics.” Okrent then tackles the meaning of the term “morpheme,”
and she explains how classification of a form as morphemic or not is often the
criterion used to determine what is gestural and what is linguistic. Finally, as
a way to classify what is gesture and what is linguistic, Okrent offers three
criteria. They include classification of any given form along three continua:
• degree of conventionalization of a form;
• site of conventionalization; and
• restriction on combination of a gesture with a linguistic form.

2 Emblems, according to McNeill (1992), have also been described by Efron (1941), Ekman and
Friesen (1969), Morris et al. (1979), and Kendon (1981).
In the present volume, Janzen and Shaffer (Chapter 8) trace the develop-
ment in ASL of certain grammatical morphemes (specifically, modals) from
nonlinguistic gestures, both manual and nonmanual. They do this by analyzing
data from ASL as it was used in the early decades of the twentieth century
as well as data from French Sign Language (LSF) and Old French Signed
Language (OFSL), which were influential in the development of ASL. Some
of the forms that the authors discuss began as nonlinguistic gestures of the
face and hands and evolved into lexical items before becoming grammatical
modals. However, Janzen and Shaffer also propose that nonmanual gestures
can develop into grammatical elements (specifically, topic markers) without
passing through a lexical stage. They suggest that this grammaticalization path-
way is unique to signed language. They conclude that, for ASL, gesture plays
the important role of providing the substrate from which grammar ultimately
emerges.
The types of “pantomime” production that Klima and Bellugi wrote of in
1979 were distinguishable from lexical signs in various ways (as described
above), but they made no mention of the simultaneous production of signs
and gestures. Essentially, such pantomimic forms were thought to occur in
the signing stream separate from lexical signs (for descriptions of these types
of gesture patterns, see Marschark 1994; Emmorey 1999). That is, any given
movement could be either gestural or linguistic, but normally not both. Since
that time, other writers (Liddell and Metzger 1998; Liddell 2000) have added
another level of complexity to the analysis: what if a single sign or movement
can have both gestural and linguistic components? In such cases, there may
be no easy way to distinguish phonetically between gestural and linguistic
(morphemic) elements. On Liddell’s view, a sign that uses the signing space for
indicating subject and object has a linguistic component (the formational and
semantic properties of the verb) and also a gestural component (the location
to which it is pointing). The claim is that in deictic signs signers gesture and
provide lexical information simultaneously (Liddell and Metzger 1998; Liddell
2000).
The discussion initiated by Klima and Bellugi (1979) of the similarities (and
subtle differences) between signs and the kinds of gestures that they called
pantomime points to a very obvious characteristic of signed languages: the
degree of iconicity in signed languages is impressive. Klima and Bellugi loosely
referred to iconic signs as those lexical items whose form resembles some
aspect of what they denote. As an example, onomatopoetic words in spoken
languages such as buzz and ping-pong are iconic, but such words tend to be few
in spoken languages. That, however, is not the case for signed languages. In
the signed languages studied thus far, large percentages of signs are related to
visual characteristics of their referents. Of course, these correspondences do not
necessarily determine the exact form of a sign. For example, Klima and Bellugi
described the sign TREE as it is produced in ASL, Danish Sign Language,
and Chinese Sign Language. Each of the three signs is related, in some way,
to the visual characteristics of a tree. Yet, the three signs differ substantially
from each other, and those differences can be described in terms of differences
in handshape, place of articulation, and movement. It is important, however,
to note that iconicity is not present in all signs, especially those that refer to
abstract concepts that are not identifiable with concrete objects.
In this volume, Guerra Currie, Meier, and Walters (Chapter 9) suggest that
iconicity is one of the factors accounting for the relatively high degree of judged
similarity between signed language lexicons. Another related factor is the in-
corporation into signed languages of gestures that may be shared by the
larger ambient hearing cultures that surround them.3
examine the degree of similarity between several signed language vocabular-
ies, Guerra Currie et al. analyze lexical data from four different languages:
Spanish Sign Language (LSE), Mexican Sign Language (LSM), French Sign
Language (LSF), and Japanese Sign Language (Nihon Syuwa or NS). After
conducting pair-wise comparisons of samples drawn from the lexicons of these
four languages, Guerra Currie et al. suggest that signed languages exhibit higher
degrees of lexical similarity to each other than spoken languages do, likely as
a result of the relatively high degree of iconicity present in signed languages.
It is not surprising that this claim is made for those signed languages that have
historical ties, but it is interesting that it also applies to comparisons of un-
related signed languages between which no known contact has occurred and
which are embedded in hearing cultures that are very different (e.g. Mexican
Sign Language compared with Japanese Sign Language). Guerra Currie et al.
suggest, as have other writers (e.g. Woll 1983), that there likely exists a base
level of similarity between the lexicons of all signed languages regardless of
any historical ties that they may or may not share.
3 A similar claim is made by Janzen and Shaffer in this volume, but the questions that they pose
differ from those that Guerra Currie et al. pose. Janzen and Shaffer are concerned with the manner
in which nonlinguistic gestures become grammatical elements of a language, while Guerra Currie
et al. are interested in the manner in which the possible gestural origins of signs may influence
the similarity of signed language vocabularies regardless of where the languages originate.

david quinto-pozos

References
Armstrong, David F., William C. Stokoe, and Sherman E. Wilcox. 1995. Gesture and
the nature of language. Cambridge: Cambridge University Press.
De Jorio, Andrea. 2000. Gesture in Naples and gesture in classical antiquity.
Bloomington: Indiana University Press.
Efron, David. 1941. Gesture and environment. Morningside Heights, NY: King’s Crown
Press.
Ekman, Paul, and Wallace V. Friesen. 1969. The repertoire of nonverbal behavioral
categories: Origins, usage, and coding. Semiotica 1:49–98.
Emmorey, Karen. 1999. Do signers gesture? In Gesture, speech, and sign, ed. Lynn
Messing and Ruth Campbell, 133–159. New York: Oxford University Press.
Emmorey, Karen. In press. Perspectives on classifier constructions. Mahwah, NJ:
Lawrence Erlbaum Associates.
Goldin-Meadow, Susan and Jill Morford. 1994. Gesture in early language. In From
gesture to language in hearing and deaf children, ed. Virginia Volterra and Carol
J. Erting. Washington, DC: Gallaudet University Press.
Iverson, Jana M. and Susan Goldin-Meadow. 1997. What’s communication got to do
with it? Gesture in children blind from birth. Developmental Psychology 33:453–
467.
Iverson, Jana M. and Susan Goldin-Meadow. 1998. Editors’ notes. In The nature and
functions of gesture in children’s communication, eds. Jana M. Iverson and Susan
Goldin-Meadow, 1–7. San Francisco, CA: Jossey-Bass.
Iverson, Jana M., Heather L. Tencer, Jill Lany, and Susan Goldin-Meadow. 2000. The
relation between gesture and speech in congenitally blind and sighted language-
learners. Journal of Nonverbal Behavior 24:105–130.
Kendon, Adam. 1981. Geography of gesture. Semiotica 37:129–163.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 2000. Indicating verbs and pronouns: Pointing away from agreement. In
The signs of language revisited, ed. Karen Emmorey and Harlan Lane, 303–320. Mahwah,
NJ: Lawrence Erlbaum Associates.
Liddell, Scott K. and Melanie Metzger. 1998. Gesture in sign language discourse. Journal
of Pragmatics 30:657–697.
Marschark, Marc. 1994. Gesture and sign. Applied Psycholinguistics 15:209–236.
McNeill, David. 1992. Hand and mind. Chicago, IL: University of Chicago Press.
Morford, Jill P. and Judy A. Kegl. 2000. Gestural precursors to linguistic constructs: How
input shapes the form of language. In Language and gesture, ed. David McNeill,
358–387. Cambridge: Cambridge University Press.
Morris, Desmond, Peter Collett, Peter Marsh, and Marie O’Shaughnessy. 1979. Gestures: Their
origins and distribution. New York: Stein and Day.
Singleton, Jenny L., Susan Goldin-Meadow, and David McNeill. 1995. The cataclysmic
break between gesticulation and sign: Evidence against a unified continuum of
gestural communication. In Language, gesture, and space, ed. Karen Emmorey
and Judy Reilly, 287–311. Hillsdale, NJ: Lawrence Erlbaum Associates.
Stokoe, William C. 2000. Gesture to sign (language). In Language and gesture, ed.
David McNeill, 388–399. Cambridge: Cambridge University Press.
Stokoe, William C. and Marc Marschark. 1999. Signs, gestures, and signs. In Gesture,
speech, and sign, ed. Lynn Messing and Ruth Campbell, 161–181. New York:
Oxford University Press.
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American
Sign Language. Unpublished doctoral dissertation, University of California,
San Diego, CA.
Supalla, Ted. 1986. The classifier system in American Sign Language. In Noun classes
and categorization: Typological studies in language, Vol. 7, ed. Colette Craig,
181–214. Philadelphia, PA: John Benjamins.
Woll, Bencie. 1983. The comparative study of different sign languages: Preliminary
analyses. In Recent research on European sign languages, ed. Filip Loncke, Penny
Boyes-Braem, and Yvan Lebrun, 79–91. Lisse: Swets and Zeitlinger.
7 A modality-free notion of gesture and how it can
help us with the morpheme vs. gesture question
in sign language linguistics (Or at least give us
some criteria to work with)

Arika Okrent

7.1 Liddell’s proposal that there are gestures in agreement verbs


Forty years of research on signed languages has revealed the unquestionable
fact that signers construct their utterances in a structured way from units that
are defined within a language system. They do not pantomime or “draw pictures
in the air.” But does this mean that every aspect of a signed articulation should
have the same status as a linguistic unit?
A proposal by Liddell (1995; 1996; Liddell and Metzger 1998) has brought
the issue of the linguistic status of certain parts of American Sign Language
(ASL) utterances to the fore. He proposes that agreement verbs are not verbs si-
multaneously articulated with agreement morphemes, but verbs simultaneously
articulated with pointing gestures. Agreement verbs are verbs that move to lo-
cations in signing space associated with particular referents in the discourse. A
signer may establish a man on the left side at location x and a woman on the
right side at location y. Then, to sign ‘He asks her,’ the signer moves the lexical
sign ASK from location x to location y. The locations in these constructions
have been analyzed as agreement morphemes (Fischer and Gough 1978; Klima
and Bellugi 1979; Padden 1988; Liddell and Johnson 1989; Lillo-Martin and
Klima 1990; Aarons et al. 1992) that combine with the lexical verb to form
a multimorphemic sign xASKy. However, according to Liddell’s claim, when
a signer produces the utterance ‘He asks her,’ he or she does not combine the
specifications of the sign ASK with morphemic location features, but rather, he
or she points at cognitively meaningful locations in space with the sign ASK.
This challenges the claim that the locus aspect of these utterances is morphemic,
or even phonological.
Liddell argues for a gesture account partly on the basis of the impossibility of
specifying the form of the agreement morpheme. There are an infinite number
of possible locations that can have referential value. The signing space cannot
be divided into 10 or 20 discrete, contrastive locations; there are rather as many
locations possible as there are discernible points in space. The locations used
are, in situations where the participants are present, determined by the position
of real people in real space. If the man and the woman being discussed were
present in the above example, the verb would move from the real location of the
man to the real location of the woman. The form of the agreement morpheme can
only be described as “the place where the thing I’m talking about is located.”1
The location of a person in space is a fact about the world independent of
anybody’s language and should not be considered linguistic.

7.2 Objections to the proposal2


Liddell’s proposal has been controversial. One of the main objections to the
notion of verb agreement being gestural has been that the pointing3 in agreement
verbs is restricted by language-internal considerations.
• Some verbs can do it and some verbs cannot: The verb LOVE cannot point
toward its object, but the verb ASK can. One must know the language to know
which verbs can and cannot point.
• Verbs which point must point, and must do so correctly: The verb ASK must
point to the subject and then the object while the verb INVITE must point to
the object and then the subject. A verb that points must point in a sentence
where its arguments are explicitly located: *HEx ASK(no pointing) HERy.
One must know the language to know the proper pointing behaviors for verbs.
• Different sign languages do it in different ways: In the Sign Language of
the Netherlands, the verb itself is articulated in neutral space, followed by
an auxiliary element that points (Bos 1996). One must know the language to
know whether it is the verb or an auxiliary that points.
The implication behind these objections is that, even if it is true that the phono-
logical form of the agreement morpheme is unspecified, the pointing in general
is restricted on the basis of language-internal considerations, and what is re-
stricted on the basis of language-internal considerations cannot be gesture.4
1 For the case where the actual referents are not present, Liddell (1995; 1996) argues that they are
made present through the grounding of mental spaces, something that is also done by nonsigners.
2 The authors cited for the objections did not themselves frame any of their works as arguments
against the gesture account of agreement.
3 “Pointing” generally refers to an action performed with an index finger. Here “pointing” refers
to the same action, but without necessarily involving the use of an index finger. It is “pointing”
performed with whatever handshape the verb is specified for.
4 There are also objections based on psycholinguistic studies of language development (Newport
and Meier 1985; Meier 1987; Petitto 1987) and brain damage (Poizner et al. 1987). I do not
address these objections in this chapter, but I do not believe the results of these studies necessarily
rule out the gesture account. It appears that nonsigning children do not have full control of abstract
deixis in gesture until age four or five (McNeill, personal communication) and there is evidence
that nonsigning adults with right hemisphere damage maintain the use of space for gestures of
abstract deixis, while iconic gestures are impaired (McNeill and Pedelty 1995). More studies
of the use of abstract deixis in gesture by nonsigners are required in order to be able to fully
evaluate whether the psycholinguistic studies of verb agreement in ASL reveal something about
the nature of ASL or about the nature of abstract referential pointing gestures.
I believe that a large part of the conflict between the gestural and grammatical
accounts results from a misunderstanding of what gesture means in relationship
to language and where we are allowed to find gesture. I present McNeill’s (1992)
notion of gesture as a modality-free notion and show how speakers not only use
speech and manual gesture at the same time, but also speech and spoken gesture
at the same time. For speakers, gestural and linguistic codes can be combined
in the same vocal–auditory channel. However, because they do take place in
the same channel, there are restrictions on how that combination is executed. I
argue that signers can also combine a gestural and linguistic code in the same
manual–visual channel, and that the restrictions on how pointing is carried out
need not mean that the location features are morphemic, but rather that there
are restrictions on how the two codes combine in one channel.

7.3 The morpheme vs. gesture question

7.3.1 What is a morpheme?


So far I have laid out the controversy as follows: the location features are seen
as being either morphemic or gestural. Clearly, the direction of the pointing
of an agreement verb is meaningful, and forms used in language which are
meaningful – and which must combine with other forms in order to be artic-
ulated – are traditionally assigned to the level of morphological description.
Yet, some meaningful behaviors involved in language use, such as gesture and
intonation, have not traditionally been seen as belonging to a morphological
level of description; rather, they are traditionally seen as paralinguistic: parasitic
on linguistic forms, but not themselves linguistic. So to say that the location
features of agreement verbs are nonmorphological implies that they are pushed
out of the linguistic, to the paralinguistic level of description.
It is not necessarily the case that meaningful forms that are not morpho-
logical must therefore be nonlinguistic. There are approaches which allow for
meaningful aspects of form which are not morphological, but are still linguistic.
Woodbury (1987), working in the lexical phonology framework (see Kiparsky
1982), challenges traditional duality of patterning assumptions and the use of
abstract morphological ‘place holders’5 by proposing an account of meaning-
ful postlexical phonological processes. Researchers in intonational phonology
view intonation as having categorical linguistic structure (for an overview, see
Ladd 1996) apart from any claims about whether the relationship between that
structure and the meanings it imparts is a morphological one. These approaches
share the view that meaningful phonology need not be morphological, but what
makes it linguistic as opposed to paralinguistic is the categorical structure of its
form. Admittedly, such a statement is a bit of an oversimplification. The matter
of deciding which aspects of intonation are linguistic and which are paralin-
guistic has not been resolved; it is not clear what kind of linguistic status should
be ascribed to gradient, final postlexical processes of phonetic implementation,
but these approaches generally imply that what makes a form–meaning pairing
a linguistic one is the categorical nature of the units out of which the form is
built.

5 This refers to the use of abstract features to stand in for the “meaning” of phonological processes
that occur at a later stage of a derivation or a different module of a grammar. These abstract
features then have a chance to be manipulated in the syntax without violating the assumption
that all phonology takes place separately from levels dealing with meaning.
I have stated that some linguists have found ways to see some aspects of
meaningful phonology as being linguistic without them necessarily being mor-
phological. Morphology means different things to different linguists. It has
been placed within the phonology, the lexicon, the syntax, or on its own level
by different theorists. In this chapter, I appeal to a general, structuralist defini-
tion of the morpheme: a minimal linguistic unit of form–meaning pairing. The
question I aim to address – i.e. whether the location features of agreement verbs
are morphemes or gestures – hinges on the term “linguistic” in the definition
above. I have already mentioned one possible criterion for considering some-
thing linguistic or not: that of categorical (as opposed to gradient) form. The
categorical–gradient distinction is important in this discussion, but it is not the
end of the story.
“Conventionality” is an important concept in the determination of whether
something is linguistic or not. The idealized linguistic symbol is conventional-
ized. This is related to the notion that the idealized linguistic symbol is arbitrary.
When the form of a sign does not have any necessary connection with its mean-
ing, if it does not have an iconic or indexical relationship to its referent, the
only way it could be connected to its meaning by language users is by some
sort of agreement, implicit or explicit, that such a form–meaning pairing exists:
convention steps in as an explanation for the successful use of arbitrary signs.
However, a sign need not be arbitrary to be conventional. The onomatopoeia
woof is iconic of a dog’s bark, but it is also conventional. The reason that it is
conventional is not because there would otherwise be no way to transmit its
meaning, but rather because this is the form that English speakers regularly use
as a symbol of a dog’s bark. It is this regularity of use, this consistent pairing of
form and meaning over many instances of use that creates a conventional sign.
In Russian gaf is the conventional sign for a dog’s bark. This form is just as
iconic as woof.6 An English speaker could use the Russian form to represent a
dog’s bark, but to do so he or she would need a lot of contextual support. If he
were to utter gaf in the acoustic manner of a dog’s bark, while panting like a
dog and pointing to a picture of a dog, it would likely be understood that gaf
was meant to signify a dog’s bark. Gaf is not a form regularly paired in English
with such a meaning, and although it can still have the meaning of a dog’s bark
in the specific context of speaking described above, it does not have that meaning
by virtue of convention. A conventional sign does not need as much contextual
support to have a meaning.

6 I appeal to a rather intuitive notion of iconicity here; for a thorough discussion of iconicity, see
Taub 1998.
Part of what is needed for a form to be conventionalized is the ability of
that form to be used to mean something stable over a wide range of specific
speaking events. In Langacker’s (1987) terms, it becomes “entrenched” and
“decontextualized.” It becomes entrenched as it is consistently paired with a
meaning and, because of that, it comes to be decontextualized, i.e. it comes
to have a meaning and a form that abstract away from the details of any one
occasion of its use. It is important to note that a symbolic unit can be more
or less entrenched, and more or less decontextualized, but there is no criterial
border that separates the conventionalized from the unconventionalized.
Freyd (1983) suggests that linguistic forms are categorical, as opposed to gra-
dient, as a result of their being conventional. Due to “shareability constraints,”
people stabilize forms as a group, creating categories in order to minimize infor-
mation loss. A form structured around stable categories can be pronounced in
many ways and in many contexts but still retain a particular meaning. A change
in form that does not put that form in a different category does not result in a
change in meaning. But if a form is gradient, a change in that form leads to a
concomitant change of meaning, and the nature of that change is different in
different contexts.
A form can be said to be “conventionalized” if it has a stable form and mean-
ing, which have come into being through consistent pairing and regular use. A
morpheme is a conventionalized form–meaning pairing and, because of its con-
ventionality, has categorical structure. Also, because of its conventionality, it is
decontextualized so it is listable independent from the speech event. It is worth
noting here that while the ideal linguistic unit is a conventionalized unit, it is not
necessarily the case that every conventionalized unit is linguistic. Emblematic
gestures (see Section 7.3.2.1) like ‘thumbs up’ are quite conventionalized, but
we would not want to say they are English words because their form is so vastly
different from the majority of English words. Given that a conventionalized unit
is not necessarily a linguistic unit, is a linguistic unit necessarily conventional-
ized? A linguistic unit necessarily involves conventions. The question remains
as to what the nature of those conventions must be. I explore this question
further in Section 7.5 when I discuss the issues that motivate the criteria for
deciding how to draw the line between morpheme and gesture.

7.3.2 What is a gesture?


The controversy over whether agreement is gestural or morphemic depends
heavily on what one’s notion of gesture is. What does gesture even mean in
a visual–manual language? It cannot mean ‘things you do with your hands,’
because then all of sign language would be gesture, and we obviously do not
want to equate signed language with the hand movements that people make
while speaking a spoken language. To answer this question, we need a modality-
free notion of gesture.

7.3.2.1 Emblems. I adopt here McNeill’s (1992; 1997) notion of ges-
ture. The most common, colloquial use of the term “gesture” refers to emblems.
Emblems are gestures such as the “OK” sign, the “goodbye” wave, and the
“shushing” gesture. These gestures have fully specified forms: there is a correct
way to produce them. These gestures are picked from a set of defined forms,
and in that way they are similar to lexical items. They are listable and specified,
like words, and can be used in the presence or absence of speech. They are
conventionalized.
I mention these emblem gestures in order to stress that my use of the term
“gesture” does not refer to them, and I would like to lay them aside. The notion
of gesture that I refer to uses the term to describe speech-synchronized gestures
that are created online during speaking. These gestures are not plucked from
a gesture lexicon and pasted into the conversation; they rather arise in concert
with the speech, and their form reflects the processing stage that gave rise to the
speech. The gestures are not conventionalized and are produced with speech.

7.3.2.2 Speech synchronized gestures. Figure 7.1 shows a normal
example of speech-synchronized gesture. These pictures are video stills from a
research project in which subjects are videotaped telling the story of a Sylvester
and Tweety cartoon that they have just seen. If one looks at these gestures
without reading the description of the event being narrated (below), there is
no way to know what these gestures mean. Unlike the case of emblems or
signs, a speech-synchronized gesture does not express a conventionalized form–
meaning pairing. The conventionalized pairings occur in the accompanying
speech and reliably communicate the meaning to an interlocutor who knows the
conventions. The gestures on their own cannot reliably communicate meaning
because they are not conventionalized. Their forms are created in the moment
and directly reflect the image around which the speaker is building his (or her)
speech. However, because those forms do reflect the image around which the
speech is built, knowing what that imagery is (either through hearing the speech
at the same time, or through knowing the content of the narration) will render
the gestures meaningful. In the scene that the speaker is describing in Figure 7.1,
Sylvester makes an attempt to capture Tweety, who is in a cage in a window
high up on a building. He makes a seesaw with a block and a board, stands
on one side of the seesaw, and throws an anvil on the other side, propelling
himself upward. He grabs Tweety at the window and falls back down onto his
side of the seesaw, propelling the anvil upward. The anvil then comes down on
his head, flattening it, and Tweety escapes.

[Figure 7.1 Video stills, panels a–l, of the speaker telling the story of a cartoon he has just watched]
The gestures pictured in Figure 7.1 should now seem quite transparent in
meaning. They are meaningful not because they are conventionalized, but be-
cause you know the imagery they are based on, and so will see that imagery in
the gestures. The representational significance of the gestures (a–l) pictured in
Figure 7.1 is given in (1).
(1) a. introduction of the seesaw
b. introduction of the weight
c–d. Sylvester throws the weight
e. Sylvester goes up
f. and comes down
g–h. grabs Tweety
i–j. Sylvester comes down, sending the weight up
k–l. weight comes down and smashes Sylvester
These gestures do not exactly reproduce the events of the cartoon in the same
order. Iconic speech-synchronized gestures are not remembered images recre-
ated on the hands. They are representations “formed within and conditioned
by the linguistic system” (Duncan 1998). The speaker packages the incident
into the linguistic structures given by his language and the gesture emerges
in sync with and congruent with that packaging. So what he does with his
gestures is somewhat constrained by the way he has packaged the event for
his speech. Examples (e) and (f) represent Sylvester’s overall trajectory of be-
ing propelled upward then falling downward. Then (g–h) represent Sylvester’s
grabbing Tweety at the window, something that happened before the up–down
trajectory was complete. Examples (i–j) begin at the downward part of the
overall trajectory, but this time the focus is on the relative trajectories of
Sylvester and the weight. The speaker focuses on the relative trajectories in
order to set up the punchline: Sylvester getting smashed by the weight.
The speaker did not take these gestures from a list of pre-existing conven-
tionalized gestures. He created them on the spot, in the context of his narrative.
They reflect imagery that is important in his discourse at the moment. Also,
they do not simply depict the scene. They depict the scene in chunks that his
linguistic packaging of the scene has created.
The gestures pictured above are all “iconics” (McNeill 1992). They depict
concrete aspects of imagery with forms that look like the images they represent.
There are other classes of speech-synchronized gesture that do not work on
exactly the same principle. The most important class to mention for purposes
of this chapter is that of “abstract deixis” (McNeill et al. 1993). In gestures of
abstract deixis, speakers establish loci for elements of the discourse by pointing
to the empty space in front of them. Like signers, they point to the same spatial
locus when talking about the entity that they have established there, but unlike
signers, they are not required to do so consistently. This is due to the fact that
the speech is carrying most of the communicative content, leaving more room
for referential errors in the gesture.7
This speech-synchronized gesturing is very robust around the world, and it
is used by speakers of all languages. This does not necessarily mean that we
should expect signers to do it as well, but it does give us a good motivation to
look for it. If signers were to have sign-synchronized gesture, they would have
to simultaneously gesture and sign.
7 However, there is evidence that interlocutors do pick up on information communicated in gesture
that is not present in speech (McNeill et al. 1994). It is probable that inconsistency in maintaining
loci in gesture leads to comprehension difficulties.
7.3.2.3 Less controversial analyses of gesture in sign.

• Code suspension: Researchers looking at gesture in sign have considered
certain manual actions that are interspersed with the sign stream as gestures
(Marschark 1994; Emmorey 1999). For example, a thumb-point or a shoulder-
shrug may be considered a gesture. These gestures do not, however, happen
simultaneously with signs. They pattern as suspensions of the linguistic code:
SIGN SIGN GESTURE SIGN, as in (2), which is illustrated in Figure 7.2.
(2) WHY WAKE-UP EARLY gesture ASK gesture
‘Why does he wake up early? Ask him.’
[Figure 7.2 Illustration of (2): video stills glossed WHY WAKE-UP EARLY, gesture, ASK-HIM, gesture]
While gestures that interrupt the sign stream are an interesting area for further
research, they do not speak to the issue of simultaneously articulated sign and
gesture.8
• Gesture on different articulators: Some nonmanual actions have been consid-
ered candidates for gesture, namely affect-displaying face and body posture
(Emmorey 1999) and “constructed actions” (Metzger 1995). These gestures
are articulated simultaneously with signs, but on separate articulators. In (3)
the signer signs LOOK-AROUND while himself looking around, and SMILE
while smiling (see illustration in Figure 7.3).
(3) looking          smiling
    LOOK-AROUND      SMILE
    ‘He looked around and then smiled.’
[Figure 7.3 Illustration of (3): video stills glossed LOOK-AROUND and SMILE]

8 I am neutral on whether these emblem-type signs should be considered proper lexical signs
or gestures. They have different distributional properties from lexical signs (Emmorey 1999),
but it may be simply that we tend not to view forms with purely pragmatic meaning as fully
linguistic, for example English mmm-hmmm.
This kind of example shows the co-ordination of a gesture-producing articulator
with a sign-producing articulator, and is similar to the examples of hearing
people producing speech on the vocal articulator while producing gesture on
the manual articulators. However, the question of whether pointing verbs can
contain gesture within them is a question of whether a signer can gesture and
sign simultaneously on the same articulator.

7.3.2.4 Gesture as a semiotic notion. The above discussion of simul-
taneously produced gesture and speech does not seem to reveal anything about
the simultaneous production of ASL and gesture on the same articulator. The
speech is produced with the vocal articulators while the gesture is produced
with the manual articulators, so there can be no confusion as to which is which.
However, McNeill’s (1992; 1997) notion of gesture is not modality bound. It is
a semiotic, not a physical notion. The semiotic notion of gesture is:
• That which expresses the imagistic side of thought during speaking through
forms directly created to conform to that imagery. The imagery can be con-
crete or abstract.
McNeill (1992) argues that in language use two kinds of cognition are in play:
imagistic thinking (Arnheim 1969; see also the “spatial-motoric thinking” of
Kita 2000) and analytic thinking. In imagistic thinking, concepts are repre-
sented as “global-synthetic” images: “global” in that the parts of the image
have meaning only insofar as they form part of a meaningful whole, and “syn-
thetic” in that the parts of the image are synthesized into a single, unitary
image. In analytic thinking, multiple separately identifiable components are
assembled and put into linear and hierarchical order (McNeill 1992:245). Ges-
tures are viewed as a manifestation of imagistic thinking and speech as a
manifestation of analytic thinking. The structure of the forms produced in
gesture and in speech reflects the nature of the different thought processes: ges-
tures are global-synthetic and speech is linear, segmented, and hierarchically
organized.
Theories of language aligned with the “cognitive linguistics” perspective
propose that the structure of all aspects of language is motivated by imagery.
The structure of both form and meaning in language is determined by abstract
schematic imagery, from word structure and the lexicon, to sentence structure
(Langacker 1987), to reference and discourse structure (Fauconnier 1994). It
may be argued, from that perspective, that both gesture and speech are man-
ifestations of imagistic thinking. However, gesture is still different in that it
manifests imagistic thinking directly. The images themselves are given physi-
cal realization in the immediate environment. The grabbing action of Sylvester
becomes the grabbing action of a hand; the smashing of an anvil on Sylvester’s
head becomes the smashing of one hand upon the other. In speech, the repre-
sentation of imagery is mediated by access to an inventory of symbolic units.
Those units are symbols by virtue of convention and need not have forms that
directly realize imagery. One may construe the lexical meaning of hypotenuse
with reference to the image of a right triangle with its longest side foregrounded
(Langacker 1991), but there is nothing about the physical form of the utterance
[hapatnju s] which itself manifests that imagery.
• The forms created are unconventionalized.
Upon first looking at Figure 7.1, before reading the description of the scene,
a reader of this chapter (most likely) has no access to the
meaning of any of the gestures depicted. Figure 7.1e represents Sylvester being
propelled upward. It only has that meaning in this particular speaking event;
there is no convention of that particular gesture being paired with such a mean-
ing. One might argue that an utterance of the word up likewise could only mean
“Sylvester was propelled upward” in a specific context. It is true that words are
not completely specified for meaning; some aspects of the meaning of up in
any one instance of its usage are supplied by the context of the speech event. It
could refer to a high static location or a movement, different trajectory shapes,
different velocities of movement, etc. The conventionalized meaning of up ab-
stracts away from all such particulars, leaving a rather abstract schema for ‘up’
as its meaning. One might look at the gesture in (1e) and without knowing
that it represents Sylvester being propelled upward, recognize it as some sort
of depiction of the abstract schema of ‘upness.’ However, the word up, as a
conventional sign for ‘upness,’ may be underspecified with respect to particu-
lars of its meaning in context, but it is completely specified with respect to its
form. In contrast, there is no convention that determines how the gesture for
‘upness’ should be pronounced. It may, as a gesture, come out as the raising
of a fist, the lift of a shoulder, or the raising of the chin. All such gestures do
share one feature of form; they move upward, and one might be tempted to say
that that aspect of upward movement alone is the conventionalized form for
representing upward movement. Such a form–meaning pairing is tautological:
‘up’ means ‘up,’ and need not appeal to convention for its existence. Addi-
tionally, the spoken phrase “He flew up” can be uttered in sync with a gesture
which moves downward without contradiction. For example, the speaker could
make a gesture for Sylvester’s arms flying down to his sides due to the velocity
of his own upward movement. The only motivation for the form of a speech-
synchronized gesture is the imagery in the speaker’s thought at the moment of
speaking.
That being said, there are some conventions involved in the production of
gesture. There may be cultural conventions that determine the amount of ges-
turing used or that prevent some taboo actions (such as pointing directly at the
addressee) from occurring. There are also cultural conventions that determine
what kind of imagery we access for abstract concepts. Webb (1996) has found
that there are recurring form–meaning pairings in gesture. For example, an “F”
handshape (the thumb and index fingers pinched together with the other fingers
spread) or an “O” handshape (all the fingers pinched together) is regularly used
to represent “preciseness” in the discourses she has analyzed. According to
McNeill (personal communication), it is not the conventionality of the form–
meaning pairing that gives rise to such regularity, but the conventionality of
the imagery in the metaphors we use to understand abstract concepts (in the
sense of Lakoff and Johnson 1980; Lakoff 1987). What is conventional is that
we conceive of preciseness as something small and to be gingerly handled with
the fingertips. The handshape used to represent this imagery then comes to look
alike across different people who share that imagery. The disagreement here is
not one of whether there are conventions involved in the use of gestures. It is
rather one of where the site of conventionalization lies. Is it the forms them-
selves that are conventionalized, as Webb claims, or the conceptual metaphors
that give rise to those forms, as McNeill claims? The issue of “site of con-
ventionalization” is also important for the question of whether the pointing in
agreement verbs in ASL is linguistic or gestural. I give more attention to this
issue in Section 7.5.2.
In any case, what is important here is that the gestures are not formed out of
discrete, conventionalized components in the same way that spoken utterances
are. And even if there are conventionalized aspects to gesturing, they are far
less conventionalized than the elements of speech.
• The form of the gesture patterns meaning onto form in a gradient, as opposed
to a categorical way.
In gesture, differences in form correspond to differences in meaning in a con-
tinuous fashion. In speech, many different pronunciations of a word are linked
to the same conventional meaning.
To sum up, gesture is:
• That which expresses the imagistic side of thought during speaking through
forms directly created to conform to that imagery. The imagery can be con-
crete or abstract.
• The forms created are unconventionalized.
• The form of the gesture patterns meaning onto form in a gradient, as opposed
to a categorical way.
With this notion of gesture, many actions produced with the vocal articulators
can be considered gesture.

7.4 Spoken gesture9


In this section I discuss something I call “spoken gesture.” This term covers
things that people do with their voices while speaking which exhibit the prop-
erties of gesture described above. Example (4) illustrates the meaningful
manipulation of vowel length.
(4) It was a looooong time.
In (4), the word long is clearly a linguistic, listable unit, as are the phonemes that
compose it. The lengthening of the vowel expresses the imagery of temporal
extension through actual temporal extension. The lengthening of the vowel is
not a phonemic feature of the word, nor a result of phonotactic considerations. It
is not the result of choosing a feature [+long] from the finite set of phonetic fea-
tures. It is an expression of imagery through a directly created form that happens
to be simultaneously articulated with the prescribed form of the vowel.
In (5), the acoustic parameter of fundamental frequency is manipulated.
(5) The bird flew up [high pitch] and down [low pitch].
The words up and down are linguistic, listable units, as are the phonemes that
compose them. The high pitch on up expresses the imagery of highness through
a metaphor that associates high vocal frequency with high spatial location. The
low pitch on down expresses the imagery of lowness through the flip side of that
metaphor. The tones are not phonemic features of the words. The tones express
imagery through a directly created form which is simultaneously articulated
with fixed lexical items.

9 All of my examples of spoken gesture are akin to the iconics class of gesture. My argument,
insofar as it addresses gesture in agreement verbs, would be better served by examples of spoken
gesture that are akin to abstract deixis, but there can be no correlate of abstract deixis in speech
because there is simply no way to point with sound waves. One can, of course, refer with sound
and words, but the speech channel cannot support true pointing. A reviewer suggested that
pointing can be accomplished in speech by using pitch, vowel length, or amplitude to index
the distance of referents. I have not quite resolved whether to consider such use of acoustic
parameters to be pointing, but for now I will say that the defining characteristic of pointing
is that it shows you where to direct your attention in order to ascertain the referent of the
pointing. An index of distance alone gives you some idea of where to direct your attention, but
a much less precise one.
In (6), repetition is exploited for effect.
(6) Work, work, work, rest.
The words in this example are linguistic, listable units. The construction in
which they occur cannot be given a syntactic–semantic analysis. The quantity
of words reflects the imagery of the respective quantity of work and rest. The
ordering of the words reflects the ordering of the actions. A lot of work, followed
by a little rest. The words are chosen from the lexicon. The construction in which
they occur is created in the moment.
The examples of spoken gesture show that speakers can express the linguistic
and the gestural simultaneously, on the same articulators, in the same modality.
Liddell does not propose that agreement verbs are gestures. He rather pro-
poses that they are linguistic units (features of handshape, orientation, etc.) si-
multaneously articulated with pointing gestures (the location features, or “where
the verb points”). Spoken linguistic units can be articulated simultaneously with
spoken gestures. That signs could be articulated simultaneously with manual
gestures is at least a possibility.

7.5 The criteria


I have discussed differences between the morphemic (or linguistic) and the
gestural and introduced the idea of spoken gesture. In sections 7.5.1–7.5.3 I
discuss the problematic issues that arise when trying to draw a clear distinction
between what is linguistic and what is gestural when both types of code are
expressed in the same channel, on the same articulators. The issues are:
• “degree of conventionalization” of a form;
• “site of conventionalization” of a convention; and
• “restriction on combination” of a gesture with a linguistic form.
They are presented as the dimensions along which criteria for deciding where
to draw the line between morpheme and gesture can be established.

7.5.1 The determination of conventionalization is a problem


Put simply, the form of a gesture is unconventionalized, while the form of a
word or morpheme is fully conventionalized. However, these examples of
spoken gesture present some problems in the determination of their level of
conventionalization. The vowel extension in (4) can also be applied to other
words. It can be applied to other adjectives as in It was biiiiiiiig or to modifiers
as in It was veeeeeeeery difficult. In all cases the lengthening seems to convey
the meaning of “moreness.” Are we dealing with a morpheme here? Most lin-
guistic theories would say no. Still, there is something conventionalized about
the vowel lengthening. It seems to be somewhat regularly and consistently done,
but certainly not as consistently as the form more is paired with the meaning of
“moreness.” Also, unlike the case with conventionalized lexical items or mor-
phemes, we can take advantage of the gradient nature of the patterning in order
to make fine distinctions in meaning. We can vary the meaning by correspond-
ingly varying the form in a nondiscrete way. Consider He had a biiiiig house,
but his parents had a biiiiiiiiiiig house.
The tone sequence in (5) may also be seen as conventionalized within the
realm of intonational phonology. The particular sequence of pitch accents and
boundary tones is also found in other common collocations such as back and
forth and day and night. However, if gradient patterning is taken advantage
of and the intonation is extended to The bird flew uuuuuup [with exagger-
ated rising contour] and dooooown [with exaggerated falling contour], then the
phrase does not follow a conventionalized pattern. The phrase day and night
said with the same exaggerated intonation would be odd (although for me,
saying it that way evokes the image of a rising and setting sun). Through
the manipulation of pitch, intensity, and timing parameters we can continue
to map more meaning onto the form by adding force dynamics, speed, and
particular flight contours to the articulation, all of which would certainly be
outside the realm of intonational phonology. How much of that manipulation
can be considered conventionalized?
One could also see conventionalized aspects in example (6). There seems
to be some sort of convention to three repetitions, as in Work work work,
that’s all I ever do or Bills bills bills, can’t I get any other kind of mail?
However, as with the examples above, the ordered sequence of words in (6)
can also be made more imagistic by taking advantage of gradient pattern-
ing. Consider Work, work, work, work, work [said very rapidly] – rest [said
very slowly]. Again, speed, intensity, force dynamics, and other sequencing
possibilities can be mapped directly onto the form of the utterance. It is dif-
ficult to determine where the conventional leaves off and the idiosyncratic
begins.
The purpose of this section has been to show that it is no trivial matter
to decide whether something is conventionalized and how conventionalized
it is when we leave the extremes of the scale. It is probably reasonable to
have degree of conventionalization as a criterion for considering something
gestural or linguistic, with completely conventionalized on the linguistic side
and completely unconventionalized on the gestural side. Unfortunately, most
of the cases we have difficulty with lie in between these extremes.
Degree of Conventionalization: The determination of the degree to which something
is conventionalized is certainly useful in deciding whether something is gestural or
morphemic. However, it is no trivial matter to make this determination, and the researcher
must be wary of depending on this criterion entirely.

7.5.2 The determination of the site of conventionalization is important


The controversy over the gesture proposal for agreement verbs is due in large
part to the fact that in deciding whether the pointing is morphemic or gestural,
different criteria are being used. For Liddell, the form of the locus position being
pointed to by the agreement verb is not conventionalized, so it is gesture. The
objections stress that the way that pointing in general is carried out is restricted
in a conventionalized way, so the pointing is morphemic. Both points of view
take degree of conventionalization as a criterion of decision, but they differ on
the site of conventionalization which is important: conventionalization of the
form of the locus position itself vs. conventionalization of the way in which
pointing is carried out in general.
It is clear that there are conventions that govern the use of pointing. If the
criterion for considering something gesture is whether or not it is conventional-
ized, then both viewpoints are correct, but in different ways. It is gesture because
its form is nonconventionalized; it is linguistic because the practice of using
those forms is conventionalized. What we have here is a disagreement about
which stratum of conventionalization is important in considering a phenomenon
linguistic.
Site of Conventionalization: When something is said to be conventionalized, or restricted
by language-internal considerations, the researcher must be as explicit as possible about
the level at which that conventionalization is located. Is it the form that is conven-
tionalized, or a particular aspect of the use of that form? If the researcher decides that
conventionalization on a particular level suggests a linguistic (as opposed to cognitive or
cultural) interpretation of the phenomenon, he or she should be consistent in considering
parallel cases in spoken language as linguistic as well.

7.5.3 Restrictions on the combination of the gestural and the linguistic


7.5.3.1 Where the feature is not elsewhere contrastive. All agree
that there are restrictions on the way pointing is carried out that are language
particular. There are two ways of looking at this. Either the fact that it is restricted
makes all parts of it linguistic, or there is a gestural part and a linguistic part,
and the gestural must combine with the linguistic in a restricted way.
This prompts one to ask whether there are any constraints on the way in which
the combination of gesture and speech is carried out in general. It appears there
are, as in (4) above. In example (4), all three phonemes (/l/ /ɔ/ /ŋ/) are possible
candidates for temporal extension. All are continuants and could be sustained
for longer durations. However, the best way to achieve the combination is to
extend the vowel.
(7) a. *llllllong time
b. *longngngng time
This example is intended to show that when speech and gesture must combine
in the same channel, there are restrictions on the way they may combine. There
are linguistic elements that are better suited to carry the gestures than others.
In this case, the vowel is the best bearer of this gesture. It is not always the case
that the gestural manipulation must combine with the vowel. But there seem
to be restrictions determining which kinds of segments are best combined with
certain types of gestures.
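The logic of this restriction can be stated schematically. The sketch below is purely illustrative and not an analysis from this chapter: the segment sets are simplified assumptions, and only the preference for the vowel restates what example (7) shows.

    # Illustrative sketch only: a toy restatement of the restriction in (7).
    # The segment sets are simplified assumptions, not a phonology of English.
    CONTINUANTS = {"l", "ɔ", "ŋ", "m", "n", "s"}   # segments that can be sustained
    VOWELS = {"ɔ"}

    def extension_candidates(phonemes):
        """Segments phonetically able to carry temporal extension."""
        return [p for p in phonemes if p in CONTINUANTS]

    def best_bearer(phonemes):
        """Prefer the vowel among the candidates: *llllong* and *longngng*
        are ill-formed, while extending the vowel is fine (example (7))."""
        candidates = extension_candidates(phonemes)
        vowels = [p for p in candidates if p in VOWELS]
        return vowels[0] if vowels else (candidates[0] if candidates else None)

    print(extension_candidates(["l", "ɔ", "ŋ"]))   # ['l', 'ɔ', 'ŋ']: all sustainable
    print(best_bearer(["l", "ɔ", "ŋ"]))            # 'ɔ': the vowel carries the gesture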

7.5.3.2 Where the feature is elsewhere contrastive. The spoken gesture examples I have given here have involved manipulation of parameters that
are not otherwise involved in categorical contrasts elsewhere in the language.
English does not have phonemic vowel length or tone, nor is reduplication a
morphological process. Those parameters are, in a sense, free to be gestural.
But what happens in a language where the phonetic form used to convey the
imagery has phonemic value?
In Mandarin Chinese, the lexical tones for shang, ‘up’ and for xia, ‘down’
are both falling (tone 4). In this case, the lexical specification for tone seems to
preclude any gestural use of tone. Indeed, a speaker cannot make the word for
‘up’ have a rising contour for the expression of imagery as seen in Figure 7.4 (a
spectrogram of an English speaker saying uuuuup and doooown). The Chinese
speaker cannot express the imagery of an upward climb with a sharp pitch rise
over the word for ‘up’ as above, because the lexical word for ‘up,’ shang, must
have a falling tone. However, there are other ways for the gesture to emerge.
The normal pitch contour for shang ‘up’ and xia ‘down’ is shown in Figure 7.5.
(8) Ta pa shang you pa xia
He climb up have climb down
‘He climbed up and down.’
Both shang and xia fall sharply in pitch, and the second articulation starts a
little lower than the first due to regular sentence declination. Figure 7.6 shows
the pitch contour for the gesture-enhanced articulation of shang and xia. This
articulation was elicited by asking the speaker to imagine he was telling a story
to a child and wanted to put the imagery of upness and downness into his
voice.10
10 The speaker also stated that his utterance sounded like something going up and down and that
it sounded quite different from simple stress or emphasis on ‘up.’
Figure 7.4 Spectrogram of English utterance with gestural intonation: "He went uuuuuuuuup and doooooown"

Figure 7.5 Spectrogram of Chinese utterance with neutral intonation: "Ta pa shang you pa xia"

Figure 7.6 Spectrogram of Chinese utterance with gestural intonation: "Xiao niao pa shaaaaaaaaang you pa xia"
(9) Xiao niao pa shang you pa xia
    Bird climb up have climb down
‘The bird flew up and down.’
Notice that shang does not rise as uuuup does in the English example above in
Figure 7.4. It cannot rise because of the restriction of lexical tone. However, it
is still possible for the speaker to manipulate intonation gesturally. The gesture of 'upness' is achieved by displacing the pitch peak of shang much higher than the peak of xia, and by extending the articulation.
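Stated as a measurement, the claim is that the gestural meaning surfaces in where the pitch peak sits, not in the direction of the contour, which lexical tone fixes. The sketch below uses invented F0 values (we have no access to the original recordings), so it is a schematic restatement of the comparison rather than an analysis of Figures 7.5 and 7.6.

    # Illustrative sketch only: F0 values are invented, not measured from the
    # figures. Both contours fall, as tone 4 requires; the gesture of 'upness'
    # appears as a peak on shang displaced well above the peak on xia.
    def peak(f0):
        """Return (frame_index, value) of the F0 maximum."""
        i = max(range(len(f0)), key=lambda k: f0[k])
        return i, f0[i]

    shang_gestural = [300, 285, 260, 225, 180]   # hypothetical Hz values, still falling
    xia_gestural = [210, 200, 185, 160, 135]

    _, p_shang = peak(shang_gestural)
    _, p_xia = peak(xia_gestural)
    print(p_shang > p_xia)   # True: 'up' peaks well above 'down'
    print(all(a > b for a, b in zip(shang_gestural, shang_gestural[1:])))  # still falls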
The point of this example is to show that the use of an articulatory parameter
for categorical contrast in one area of the language does not render impossible its
use in the realm of gesture, and that there may be language-particular restrictions
on the way that spoken gesture combines with spoken language.11
Not much is known at this point about the kinds of restrictions on the com-
bination of spoken gesture with speech and whether there are any parallels to
the restrictions on pointing mentioned as objections to the gesture proposal
in Section 7.2. Are there spoken gestures that may only combine with a par-
ticular lexical class (first point in Section 7.2)? Expressive reduplication may
in some cases be restricted to a certain aspectual class of verbs. Are there spo-
ken gestures which are non-optional (second point)? That depends on whether
non-optional intonation patterns that indicate topic and focus, or question and
statement, are to be considered gestural. Do different languages handle spoken
gesture in different ways (third point)? Chinese and English seem to differ in
the way pitch can be used gesturally.
I do not have satisfactory answers to the questions above. At this point I only
submit that the dimension of “restriction on combination” is another criterion
by which the morpheme vs. gesture question can be evaluated.
Restriction on combination: Restrictions on phenomena can come from the requirements
of the grammar, but they can also come from the interplay of two kinds of code upon
their integration into one channel. More work needs to be carried out on the nature of
the restrictions on gestural forms that result from the requirement that they share the
same channel with linguistic forms.

7.6 Conclusions
It is unfounded to reject the idea that agreement is gestural simply because
the verbs being produced are linguistic units. People can vocally gesture while
saying spoken lexical verbs. Signers can manually gesture while signing lexical
verbs. However, the combination of gesture and speech in one channel puts
restrictions on the gesture because important linguistic categorical information,
like lexical tone, must be preserved. The three objections given in Section 7.2
above make reference to restrictions on the way in which pointing is carried out,
and these restrictions are language particular. The fact that there are language-
particular restrictions on the way the pointing is carried out does not in itself
constitute a devastating argument against the gesture proposal.
11 There do not seem to be similar restrictions on the combination of manual gesture with speech. Although speech is tightly synchronized with manual gesture – and conveys meaning which is conceptually congruent with speech – the particular forms that the manual gestures take do not appear to be constrained in any way by the specifications on form that the speech must follow. Combining two semiotic codes in one modality may raise issues for language–gesture integration that do not arise for situations where the linguistic and the gestural are carried by separate modalities.
The title of this chapter promises criteria for deciding what is gestural and
what is morphemic in ASL linguistics. There is no checklist of necessary and
sufficient conditions for membership in either category. There are, however,
three continuous dimensions along which researchers can draw a line between
gesture and language.12 I repeat them here:
• The first is "degree of conventionalization." How conventionalized must something be in order to be considered linguistic?
• The second dimension is "site of conventionalization." What kinds of conventions are linguistic conventions?
• The third dimension is "restriction on combination." What kinds of conditions on the combination of semiotic codes are linguistic conditions?
These are not questions I have answers for, but they are the questions that
should be addressed in the morpheme vs. gesture controversy in sign language
linguistics.
12 I remain agnostic with respect to whether drawing such a line is ultimately necessary, although I believe that the effort expended in trying to draw that line is very useful for gaining a greater understanding of the nature of communicative behavior.

Acknowledgments
This research was partially funded by grants to David McNeill from the Spencer
Foundation and the National Institute of Deafness and Other Communicative
Disorders. Some equipment and materials were supplied by the Language Labs
and Archives at the University of Chicago. I am grateful to David McNeill,
John Goldsmith, Derrick Higgins, and my “lunch group” Susan Duncan, Frank
Bechter, and Barbara Luka for their wise advice and comments. Any inaccura-
cies are, of course, my own.

References
Aarons, Debra, Benjamin Bahan, Judy Kegl, and Carol Neidle. 1992. Clausal structure
and a tier for grammatical marking in American Sign Language. Nordic Journal of
Linguistics 15:103–142.
Arnheim, Rudolf. 1969. Visual thinking. Berkeley, CA: University of California Press.

Bos, Heleen. 1996. Serial verb constructions in the sign language of the Netherlands.
Paper presented at the 5th International Conference on Theoretical Issues in Sign
Language Research. Montreal, September.
Duncan, Susan. 1998. Evidence from gesture for a conceptual nexus of action and entity.
Paper presented at the Annual Conference on Conceptual Structure, Discourse, and
Language, Emory University, October.
Emmorey, Karen. 1999. Do signers gesture? In Gesture, speech, and sign, eds. Lynn
Messing and Ruth Campbell, 133–159. New York: Oxford University Press.
Fauconnier, Gilles. 1994. Mental spaces: Aspects of meaning construction in natural
language. Cambridge: Cambridge University Press.
Fischer, Susan and Bonnie Gough. 1978. Verbs in American Sign Language. Sign Lan-
guage Studies 18:17–48.
Freyd, Jennifer. 1983. Shareability: The social psychology of epistemology. Cognitive
Science 7:191–210.
Kiparsky, Paul. 1982. From cyclic phonology to lexical phonology. In The structure of phonological representations, Part I, eds. Harry van der Hulst and Norval Smith, 131–177. Dordrecht: Foris.
Kita, Sotaro. 2000. How representational gestures help speaking. In Language and
gesture, ed. David McNeill, 162–185. Cambridge: Cambridge University Press.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Ladd, Robert. 1996. Intonational phonology. Cambridge: Cambridge University Press.
Lakoff, George and Mark Johnson. 1980. Metaphors we live by. Chicago, IL: University
of Chicago Press.
Lakoff, George. 1987. Women, fire and dangerous things: What categories reveal about
the mind. Chicago, IL: University of Chicago Press.
Langacker, Ronald. 1987. Foundations of Cognitive Grammar, Vol. 1: Theoretical pre-
requisites. Stanford, CA: Stanford University Press.
Langacker, Ronald. 1991. Cognitive Grammar. In Linguistic theory and grammatical description, eds. Flip Droste and John Joseph, 275–306. Amsterdam: John Benjamins.
Liddell, Scott. 1995. Real, surrogate, and token space: Grammatical consequences in
ASL. In Language, gesture, and space, eds. Karen Emmorey and Judy Reilly,
19–41. Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott. 1996. Spatial representation in discourse: Comparing spoken and signed
language. Lingua 98:145–167.
Liddell, Scott and Robert Johnson. 1989. American Sign Language: The phonological
base. Sign Language Studies 64:195–277.
Liddell, Scott and Melanie Metzger. 1998. Gesture in sign language discourse. Journal
of Pragmatics 30:657–697.
Lillo-Martin, Diane and Edward Klima. 1990. Pointing out differences: ASL pronouns in
syntactic theory. In Theoretical issues in sign language research: Vol. 1, eds. Susan
Fischer and Patricia Siple, 191–210. Chicago, IL: University of Chicago Press.
Marschark, Mark. 1994. Gesture and sign. Applied Psycholinguistics 15:209–236.
McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Chicago,
IL: University of Chicago Press.
McNeill, David. 1997. Growth points cross-linguistically. In Language and concep-
tualization, eds. Jan Nuyts and Eric Pederson, 190–212. Cambridge: Cambridge
University Press.
McNeill, David, Justine Cassell, and Elena Levy. 1993. Abstract deixis. Semiotica 95:5–19.
McNeill, David, Justine Cassell, and Karl-Erik McCullough. 1994. Communicative effects of speech-mismatched gestures. Research on Language and Social Interaction 27:223–237.
McNeill, David and Laura Pedelty. 1995. Right brain and gesture. In Language, gesture,
and space, eds. Karen Emmorey and Judy Reilly. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Meier, Richard. 1987. Elicited imitation of verb agreement in American Sign Lan-
guage: Iconically or morphologically determined? Journal of Memory and Lan-
guage 26:362–376.
Metzger, Melanie. 1995. Constructed dialogue and constructed action in American Sign
Language. In Proceedings of the Fourth National Symposium on Sign Language
Research and Teaching, ed. Carol Padden. Silver Spring, MD: National Association
of the Deaf.
Newport, Elissa and Richard Meier. 1985. The acquisition of American Sign Language.
In The crosslinguistic study of language acquisition, Vol. 1: The data, ed. Daniel
Slobin. Hillsdale, NJ: Lawrence Erlbaum Associates.
Padden, Carol. 1988. Interaction of morphology and syntax in American Sign Language.
New York: Garland.
Petitto, Laura. 1987. On the autonomy of language and gesture: Evidence from the
acquisition of personal pronouns in American Sign Language. Cognition 27:1–52.
Poizner, Howard, Edward Klima, and Ursula Bellugi. 1987. What the hands reveal about
the brain. Cambridge, MA: MIT Press.
Taub, Sarah. 1998. Language in the body: Iconicity and metaphor in American Sign
Language. Doctoral dissertation, University of California, Berkeley.
Webb, Rebecca. 1996. Linguistic features of metaphoric gestures. In Proceedings of
the Workshop on the Integration of Gesture in Language and Speech, ed. Lynn
Messing, 79–93. Newark, DE: University of Delaware.
Woodbury, Anthony. 1987. Meaningful phonological processes: A consideration of
Central Alaskan Yupik Eskimo prosody. Language 63:685–740.
8 Gesture as the substrate in the process of ASL
grammaticization

Terry Janzen and Barbara Shaffer

8.1 Introduction
Grammaticization is the diachronic process by which:
• lexical morphemes in a language, such as nouns and verbs, develop over time into grammatical morphemes; or
• morphemes less grammatical in nature, such as auxiliaries, develop into ones more grammatical, such as tense or aspect markers (Bybee et al. 1994).
Thus any given grammatical item, even viewed synchronically, is understood
to have an evolutionary history. The development of grammar may be traced
along grammaticization pathways, with vestiges of each stage often remain-
ing in the current grammar (Hopper 1991; Bybee et al. 1994), so that even
synchronically, lexical and grammatical items that share similar form can be
shown to be related. Grammaticization is thought to be a universal process; this
is how grammar develops. Bybee et al. claim that this process is regular and
leaves predictable evidence in the two broad categories of phonology and
semantics. Semantic generalization occurs as the more lexical morpheme loses
some of its specificity and, usually along with a particular construction it is
found within, can be more broadly applied. Certain components of the meaning
are lost when this generalization takes place.1 Regarding phonological change,
grammaticizing elements and the constructions they occur in tend to undergo
phonological reduction at a faster rate than lexical elements not involved in
grammaticization.
The ultimate source of grammaticized forms in languages is understood to
be lexical. Most commonly, the source categories are nouns and verbs. Thus,
the origins of numerous grammatical elements, at least for spoken languages,
are former lexical items. Grammaticization is a gradual process that differs
from other processes of semantic change wherein a lexical item takes on new
meaning, but remains within the same lexical category, and from word-formation
processes by which new lexical items are created through common phenomena
such as compounding. Grammaticization concerns the evolution of grammatical
elements.
1 Givón (1975) introduced the term "semantic bleaching" for the loss of meaning. The exact nature of meaning change in grammaticization is debated by researchers, however. Sweetser (1988) suggests that the term "generalization" is inadequate, because while certain meanings are lost, new meanings are added. Thus, Sweetser prefers simply to refer to this phenomenon as "semantic change."
Investigations of grammaticization in American Sign Language (ASL) are
still scarce, although Wilcox and Wilcox (1995), Janzen (1995; 1998; 1999),
Wilbur (1999), and Shaffer (1999; 2000) have begun this line of study for ASL.
It seems clear that similar diachronic processes do exist for signed languages,
but with one essential difference, which results from signed languages occurring
within a visual medium where gestures of the hands and face act as the raw
material from which formalized linguistic signs emerge: gesture itself may be
the substrate for the development of new grammatical material.
A crucial link between gesture and more formalized linguistic units has
been proposed by Armstrong et al. (1995), both for lexicalization generally and
for the gestural roots of ASL morphosyntax, which they demonstrate with
two-handed signs that are themselves "sentences." In these signs one hand
acts as an agent and the other as a patient, while a gestural movement sig-
nals a pragmatic (and ultimately syntactic) relation between the two. Inter-
estingly, the recent suggestion that mirror neurons may provide evidence of
a neurophysical link between certain gestural (and observed) actions and lan-
guage representation (Rizzolatti and Arbib 1998) strongly supports the idea that
signed languages are not oddities, but rather that they are immensely elaborated
systems in keeping with gestural origins of language altogether (see Hewes
1973).2
The link between gestures and signed language has also been discussed
elsewhere. For example, researchers have addressed how formalized signs differ
from gesture (Petitto 1983; 1990), how gestures rapidly conventionalize into
linguistic-like units in an experimental environment (Singleton et al. 1995),
and how gestures develop into fully conventionalized signs in an emerging
signed language (Senghas et al. 2000). Almost invariably – with the exception
of Armstrong et al. (1995) – these studies involve lexicalization rather than the
development of grammatical features.
The current proposal, that prelinguistic hand and facial gestures are the sub-
strate of signed language grammatical elements, allows for the possibility that
when exploring grammaticization pathways, we may look not only to the ex-
pected lexical material as the sources of newer grams,3 but to even earlier
gestures as the sources of the lexical items that eventually grammaticize. This
is the case for the linguistic category of modality,4 which we illustrate by proposing that ASL modals such as FUTURE, CAN, and MUST take as their ultimate source several generalized prelinguistic gestures.
Topic marking, which we propose developed from an earlier yes–no question
construction, also has a generalized gesture as an even earlier source. In the
case of the grammaticized modals, the resulting forms can be shown to have
passed through a lexical stage as might be expected. The development from
gestural substrate to grammar for the topic marker, however, never does pass
through a lexical stage.
2 Mirror neurons are identified as particular neurons situated in left hemispheric regions of the brain associated with Broca's area, specifically the superior temporal sulcus, the inferior parietal lobule, and the inferior frontal gyrus. These neurons are activated both when the organism grasps an object with the hand, and when someone else is observed to grasp the object (but not when the object is observed on its own). Rizzolatti and Arbib (1998) posit that this associated grasping action and grasping recognition has contributed to language development in humans.
3 "Gram" is the term Bybee et al. (1994) choose to refer to individual items in grammatical categories.
We conclude that for the ASL grammaticization pathways explored in this
study, gesture plays the important role of providing the substrate from which
grammar ultimately emerges. The evidence presented here suggests that the
precursors of these modern-day ASL grammatical devices are gestural in nature,
whether or not a lexical stage intervenes. Thus, the study of ASL provides
new perspectives on grammaticization, in exploring both the sources of grams,
and the role (or lack of role) of lexicon in the developing gram.
In Section 8.2 we discuss data gathered at the start of the twentieth century.
All of the diachronic discourse examples in Section 8.2 were taken from The
Preservation of American Sign Language, a compilation of the early attempts
to document the language on film, made available recently on videotape. All
of the films in this compilation were made in or around 1913. Each shows an
older signer demonstrating a fairly formal register of ASL as it was signed
in 1913. Because the films are representative of monologues of only a fairly
formal register, care must be taken when drawing conclusions regarding what
ASL did not exhibit in 1913. In other words, while we believe it is possible
to use these films to show examples of what ASL was in 1913, they can-
not be used to show what ASL was not. Along with the discourse examples
discussed above, we analyze features of signs produced in isolation. The iso-
lated signs are listed in at least one of several French Sign Language (Langue
de Signes Française or LSF) dictionaries from the mid-1800s, or from ASL
dictionaries from the early 1900s. For those signs not taken from actual dis-
course contexts we are left to rely on the semantic descriptions and glosses
provided by their authors. As with most dictionary images, certain features of
the movement are not retrievable. We were also able to corroborate these im-
ages with the 1913 films in order to draw conclusions regarding phonological
changes.5

4 Throughout this chapter we use the term “modality” to mean the expression of necessity and
possibility; thus, the use of “modals” and such items common to the grammar systems of
language, as opposed to a common meaning of “modality” in signed language research meant
to address differences between signed and spoken channels of production and reception.
5 All examples extracted from the 1913 films of ASL were translated by the authors.

All grammaticization paths proposed in the following sections on modality
involve old French Sign Language (OLSF) and modern ASL. The historic
relationship between ASL and LSF is described in detail in Lane (1984), an
account of the circumstances that brought OLSF to the USA.6 Woodward (1978;
1980) suggests that what is now known as American Sign Language was, in
large part, formed from the lexicon and some elements of the grammar of
OLSF along with an indigenous signed language in use in the northeastern
USA prior to the establishment of the first school for the deaf in 1817. It is
believed that before this time deaf people in the USA and Canada did have a
signed communication system – at least wherever there was a critical mass – and
that when the founders of the first school (one hearing American, and one deaf
Frenchman) established the school and began formally instructing deaf pupils
using French signs, language mixing took place. Woodward (1978) discusses the
rapid linguistic changes that took place between 1817 and 1913 and suggests
that such changes are characteristic of creoles. Phonological processes that
contributed to numerous systematic changes in the lexicon of the resulting
signed language are outlined in Frishberg (1975). The 1913 films show that
by the early twentieth century this language had changed sufficiently so that
in many respects, it was similar to what is used in the USA and Canada at the
start of the twenty-first century, although certain differences significant to our
discussion are evident.

8.2 Markers of modality


For the grammatical category of linguistic modality, the generalized path of
grammaticization proposed in Shaffer (2000) and given here in (1) is:
(1) gesture → full lexical morpheme → grammatical morpheme
Markers of modality in ASL are hypothesized to have developed along similar
and predictable grammaticization pathways described for modality in other
languages. Bybee et al. (1991) state that grams meaning ‘future’ in all languages
develop from a limited pool of lexical sources and follow similar and fairly
predictable paths. Future grams may develop from auxiliary constructions with
the meanings of ‘desire,’ ‘obligation,’ or ‘movement toward a goal.’ For example
Bybee et al. (1994) note that English will has as its source the older English verb
willen with the original meaning ‘want,’ which later came to be used to express
future meanings. In modern English willen is no longer used to express ‘desire.’
English go, on the other hand, is polysemous in modern English, maintaining
both ‘movement toward a goal’ and ‘future’ senses as seen in (2) below.

6 Similar relationships are said to exist between other signed languages as well. For a discussion
of the historical relationships among various European signed languages, see Eriksson (1998).
(2) a. I am going to Dallas next week.
    b. I am going to be hungry when I arrive.
In (2a) going means physical movement toward a location. In (2b) however, the
movement is temporal. No physical movement toward a goal is implied; thus,
the form has only a future sense. This is not to suggest that ‘be going to’ can only
have either physical or temporal movement in its meaning. In fact, it is exactly
this type of polysemy which is hypothesized to have led to the development
of the future meaning in certain constructions. Put another way, while in some
constructions ‘be going to’ is used to indicate physical movement toward a goal,
in other constructions only a future time reference is being indicated, and in yet
other constructions both a future temporal reference and physical movement
toward a goal are implied by ‘be going to.’

8.2.1 FUTURE
We believe that ASL is among those languages whose main future gram devel-
oped from a physical ‘movement toward a goal’ source. Our claim is that the
future gram in ASL, glossed here as FUTURE, developed from an older lexical
sign with the meaning of ‘to go’ or ‘to leave.’7 Evidence of this sign can be found
as far back as the 1850s in France where it was glossed PARTIR (Brouland 1855;
Pèlissier 1856). The OLSF sign PARTIR is shown in Figure 8.1a. The form in
Figure 8.1a was used as a full verb with the meaning ‘to leave.’ Note that the
sign is a two-handed sign articulated just above waist level, with the dominant
hand moving up to make contact with the nondominant palm. Old ASL (OASL)
also shows evidence of this sign, but with one difference. The 1913 films8 have
examples of the ASL sign GO, similar to the OLSF sign PARTIR; there are
also instances of GO being signed with only one hand. Modern Italian Sign
Language (Lingua Italiana dei Segni or LIS) also has this form (Paul Dudis,
personal communication).9
E.A. Fay in 1913 signs the following:
(3) TWO, THREE DAY PREVIOUS E.M. GALLAUDET GO TO
TOWN PHILADELPHIA10
‘Two or three days before, (E.M.) Gallaudet had gone to Philadelphia.’
7 The gloss FUTURE was chosen because it is the only sense shared among the various discourse
uses of the sign. WILL, for example, limits the meaning and suggests auxiliary status.
8 All references to the 1913 data indicate filmed narratives, available currently on videotape in The Preservation of American Sign Language, 1997, © Sign Media Inc.
9 For a discussion regarding the hypothesized relationship between ASL/LSF and other signed
languages such as LIS, see Eriksson (1998).
10 ASL signs are represented by upper case glosses. Words separated by dashes indicate single
signs (e.g. TAKE-UP); PRO.n = pronouns (1s, 2s, etc.); POSS.n = possessive pronouns; letters
separated by hyphens are fingerspelled words (e.g. P-R-I-C-E-L-E-S-S); plus signs indicate
repeated movement (e.g. MORE+++); top = topic marking; y/n-q = yes–no question marking; CL = classifier; form-specific notes are given below glosses for clarification of forms (e.g. CL:C(globe) with "both hands-----" given beneath the gloss).
Figure 8.1a 1855 LSF PARTIR (‘to leave’); 8.1b 1855 LSF FUTUR (‘fu-
ture’) (Brouland 1855; reproduced with permission.); 8.1c 1913 ASL FU-
TURE (McGregor, in 1997 Sign Media Inc.; reproduced with permission.);
8.1d Modern ASL FUTURE (Humphries et al. 1980; reproduced with
permission)

In this context the sign GO very clearly indicates physical movement. The
speaker is making a reference to a past event and states that Gallaudet had gone
to Philadelphia. What is striking in this example is that GO is signed in a manner
identical to the old form of FUTURE, shown in Figure 8.1b. An example of
this older form of FUTURE is given in another 1913 utterance in a narrative by
R. McGregor, in (4).
(4) WHEN PRO.3 UNDERSTAND CLEAR WORD WORD OUR
FATHER SELF FUTURE [old form] THAT NO MORE
‘When he clearly understands the words of our father he will do that
no more.’

This example shows the same form as GO in (3) being used to indicate future
time, suggesting that for a time a polysemous situation existed whereby the
sign could be understood in certain constructions to mean ‘to go’ and in others
to mean ‘future.’
Phonological reduction in the signing space is evident by this time, with the
older form of the sign articulated as a large arc at the waist, with the shoulder
as the primary joint involved in the movement, and the newer form, shown
in Figure 8.1d, with a much shorter movement near the cheek, and with the
movement having the wrist (or, at most, the elbow) as the primary joint involved.
The distalization from a more proximal joint to a more distal joint in this way
constitutes phonological reduction in Brentari’s (1998) model of phonology.
Note the example in (5), where G. Veditz articulates both forms in the same
utterance.
(5) YEAR 50 FUTURE [new form] THAT FILM FUTURE [old form]
TRUE P-R-I-C-E-L-E-S-S
‘In fifty years these films will be priceless.’
In (5) FUTURE is produced twice, yet in each the articulation is markedly
different. In the second instance in (5) the sign resembles FUTURE as produced
by McGregor in (4), while in the first instance FUTURE is signed in a manner
consistent with modern ASL FUTURE, which moves forward from the cheek. In
both instances in the construction above FUTURE has no physical ‘movement
toward a goal’ meaning, only a future time reference.
Newer forms of grammaticizing morphemes frequently co-occur with older
forms synchronically. Hopper (1991:22) describes this as “layering” in gram-
maticization:
Within a broad functional domain, new layers are continually emerging. As this happens,
the older layers are not necessarily discarded, but may remain to coexist with and interact
with the newer layers.

Such layering, or co-occurring of two forms, in other words, may exist for a
limited time; it is entirely possible for the older form to die out, or to continue
grammaticizing in a different direction, resulting in yet a different form with
another function and meaning. This has been proposed for at least some of the
various forms and usages of the item FINISH in ASL in Janzen (1995). Two
such polysemous forms co-occurring synchronically, for however long, often
contribute to grammatical variation in a language, and this would seem to be
the case for FUTURE for a time in ASL. It is remarkable, however, to see two
diachronic variants occur in the same utterance, let alone the same text.
In summary, then, we suggest that FUTURE in modern ASL belongs to the
crosslinguistic group of future markers with a ‘movement toward a goal’ source.
FUTURE began in OLSF as a full verb meaning ‘to go’ or ‘to leave.’ By 1855,
GO was produced with the nondominant hand as an articulated “base” hand.
By the start of the twentieth century, OASL contained GO as it was produced in
1855 as well as GO without the nondominant hand. Further, by 1913 a similar
form, without the base hand, was used to indicate future time reference, as was
a newer form, which is phonologically similar to the form used today. Note,
however, that PARTIR as it was signed in 1855 also still exists in modern LSF, as
does a related form commonly glossed as LET’S-GO seen in modern ASL. The
sign LET’S-GO in ASL is articulated with the same handshapes as PARTIR but
with a different type of contact, and slightly different movement. In this case,
the palmar surfaces of the hands brush against each other, while in articulating
PARTIR the thumb side of the dominant hand makes contact with the downturned palm of the nondominant hand.
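The stages just traced can be summarized schematically. The sketch below simply re-encodes the chapter's proposed pathway for FUTURE as data: the dates and glosses are those cited in the text, while the representation itself is our own illustrative device for making the layering of old and new forms explicit.

    # Schematic summary only: dates and glosses are those cited in the text;
    # the data structure is an illustrative device, not part of the analysis.
    future_pathway = [
        {"stage": "gesture",         "form": "on se tire 'to go'",
         "attested": "France, classical antiquity to the present"},
        {"stage": "full verb",       "form": "OLSF PARTIR 'to leave'",
         "attested": "Brouland 1855; Pelissier 1856"},
        {"stage": "FUTURE (old form)", "form": "large arc at the waist",
         "attested": "1913 films (McGregor, Veditz)"},
        {"stage": "FUTURE (new form)", "form": "short movement from the cheek",
         "attested": "1913 films (Veditz) through modern ASL"},
    ]

    print(" > ".join(s["stage"] for s in future_pathway))
    # Layering (Hopper 1991): Veditz's utterance (5) contains both FUTURE forms,
    # so the last two stages coexisted synchronically in 1913.
    print([s["stage"] for s in future_pathway[-2:]])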
What we have described thus far is in keeping with grammaticization theory
as proposed by Traugott (1989), Hopper (1991), Bybee et al. (1994), and oth-
ers. We have described semantic and phonological changes that the morpheme
FUTURE underwent as it grammaticized from a full verb to a grammatical
morpheme. Here, however, we must depart from traditional grammaticization
theory. We claim now that the modern ASL (and LSF) sign FUTURE has an
earlier origin than the lexical sign PARTIR, namely a gesture of the same form
in use in France at the time with the consistent meaning of ‘to go,’ and one we
suggest was available to both deaf and nondeaf groups of language users. In
fact the gesture, with several forms and several related meanings, is still in use
among nonsigners in France (see Figure 8.2). It is a very common French ges-
ture, known to most members of the French speech community and translated
by Wylie (1977:17) as on se tire. Among the French, on se tire means ‘let’s go,’
or 'let's split.'

Figure 8.2 On Se Tire ('go') (Wylie 1977; reproduced with permission)

Further, there is evidence of this gesture as far back as classical
antiquity. De Jorio notes in 1832 (translated in Kendon 2000) that evidence of
a gesture indicating ‘going away’ is seen in the drawings and sketches from
nineteenth-century Naples. He claims that these gestures have their roots in
ancient Greek and Roman cultures. He describes it as “palm of the hand open
and held edgewise, and moved upwards several times,” and notes that “this
gesture indicates going away. If one wishes to indicate the path to be followed,
the gesture is directed toward it” (Kendon 2000:260).
What we are claiming then is that the source of FUTURE in ASL was a
gesture used in France, which entered the lexicon of OLSF, and then OASL,
and finally proceeded along a common grammaticization path. The resulting
path is given in (6).
(6) gesture 'to go' > full verb 'to go' > grammatical morpheme 'future'
We suggest that this progression from a generalized gesture to a gram is not
isolated, but instead, gesture is in fact a common source of modern ASL lexical
and grammatical morphemes. We will now discuss several other examples of
gestural sources for ASL grammatical morphemes.

8.2.2 CAN
In a discussion of markers of possibility, Bybee et al. (1994) note that there
are several known cases of auxiliaries predicating physical ability that come
to be used to mark general ability as well.11 Two cases are cited. English may
was formerly used to indicate physical ability and later came to express general
ability. The second case noted is Latin potere ‘to be able,’ which is related to
the adjective potens meaning ‘strong’ or ‘powerful,’ and which gives French
pouvoir and Spanish poder, both meaning ‘can’ (1994:190). In modern English
can is polysemous, with many agent-oriented senses ranging from uses with
prototypical agents, to those with no salient agent at all. Some examples are
given in (7).
(7) a. I can lift a piano.
b. I can speak French.
c. The party can start at eight.
In (7a) we see a physical ability sense of can. In (7b) can is used to indicate a
mental ability or skill, while in (7c) we see a use of can that could be interpreted
as either a root possibility (with no salient agent) or a permission reading,
depending on the context in which the sentence was spoken.
11 For the purposes of this chapter the category of modal uses – including physical ability, general
ability, permission, and possibility – are all described under the general heading “possibility,”
since it is believed that all of these modal uses are related and all share the semantic feature of
possibility.

Discussions of grammaticization cite numerous examples of modals with
strong animacy requirements on the agent generalizing over time to allow for
less specific situational conditions, with no agent necessary. Bybee (1988) de-
scribes the grammaticization of English can which is shown in (8). Here we
see that can began as an agent-oriented modal with a strong animacy require-
ment. Bybee notes that over time, however, the animacy requirement was lost in
certain constructions, allowing for root possibility uses and even epistemic uses.
(8) Can predicates that:
a. mental enabling conditions exist in the agent for the completion of
the main predicate situation.
b. (any) enabling conditions exist in the agent for the completion of
the main predicate situation.
c. (any) enabling conditions exist for the completion of the main pred-
icate situation. (Bybee 1988:255)
Wilcox and Wilcox (1995) and Shaffer (2000) have suggested that a similar
grammaticization path can be seen for markers of possibility in ASL. Evidence
from OLSF and OASL suggests that the OLSF lexical sign POUVOIR (meaning
‘to be strong’) has grammaticized into the modern ASL sign meaning ‘can’
(see Figure 8.3) and is used in constructions to indicate physical ability, mental
ability, root possibility, as well as permission and epistemic possibility.
In Figure 8.3a the sign indicated is glossed POUVOIR. This we suggest was
the original LSF lexical sign from which subsequent uses developed. Evidence
from existing OASL sources supports the claim that CAN is a grammaticized
form of a sign meaning ‘strong.’ McGregor, in a 1913 lay sermon, signs the
sentences given in (9) to (11):
(9) WE KNOW EACH OTHER BETTER AND WE CAN UNDERSTAND EACH OTHER BETTER AND FEEL BROTHER
‘We know each other better and are able to understand each other better
and feel like brothers.’
(10) OUR FATHER STRONG OVER MOON STARS WORLD
‘Our father is strong over the moon, and stars and world.’
(11) SELF CAN GET-ALONG WITHOUT OUR HELP
‘He can get along without our help.’
In the above examples the sign STRONG and the sign CAN are signed in
an identical manner. In (10) it is unclear whether the signer was intending a
strength or ability reading. Either meaning is possible and logical in sentence
(10). Further, in (10) STRONG is functioning as the main verb of the clause.
This provides good evidence that the sign STRONG could be used in more
than one sense and this polysemy shows the potential for ASL CAN to have
developed from ‘strong.’

Figure 8.3a 1855 LSF POUVOIR (Brouland 1855; reproduced with permis-
sion); 8.3b 1913 ASL CAN (Hotchkiss in 1997 Sign Media Inc.; reproduced
with permission); 8.3c Modern ASL CAN (Humphries et al. 1980; reproduced
with permission)

Evidence from the 1913 data suggests that by the start of the twentieth cen-
tury CAN had already undergone a great deal of semantic generalization from
its physical strength source. The 1913 data contain examples of each of the
following discourse uses of CAN: physical ability, nonphysical ability (skills,
etc.) and root possibility. Example (9) shows a root possibility use of CAN
where CAN is used to indicate the possibility of an event occurring.
Examples of permission uses of CAN were not found in this diachronic data,
nor were epistemic uses seen.12 Permission and epistemic uses are, however,
seen in present-day ASL. Shaffer (2000) suggests that epistemic CAN is quite
new and is the result of a semantic extension from root possibility uses of CAN,
in conjunction with its sentence-final placement and concomitant nonmanual
marking.
12 Bybee et al. (1994) state that epistemic modalities describe the extent to which the speaker is committed to the truth of the proposition. They posit that "the unmarked case in this domain is total commitment to the truth of the proposition, and markers of epistemic modality indicate something less than a total commitment by the speaker to the truth of the proposition" (1994:179). De Haan (1999), by contrast, defines epistemic modality as an evaluation of evidence on the basis of which a confidence measure is assigned. An epistemic modal will be used to reflect this degree of confidence.
Shaffer (2000) suggests the following path for modern ASL CAN, seen in
(12):
(12) gesture ‘strong’ > lexical ‘strong’ > grammatical morpheme ‘can’ >
epistemic ‘can’
While there is abundant crosslinguistic evidence to support a claim that the core
marker of possibility in ASL developed from a lexical sign with the meaning
‘strong’ or ‘power,’ the claim we make here is that POUVOIR, the lexical sign,
does not represent the ultimate source of the grammaticization path described.
Instead, we claim that POUVOIR entered the LSF lexicon as a gesture. ‘Strong’
in OLSF, OASL, and modern ASL is highly iconic, the very gesture nonsigners
might use to visually represent physical strength. Our proposal is that a ritu-
alized gesture in use among signers and nonsigners alike entered the lexicon
of OLSF, and then grammaticized to indicate any kind of ability, both physical
and nonphysical. It then generalized further to be used to indicate permission,
and even epistemic possibility in ASL.

8.2.3 MUST
Turning to ASL MUST, Shaffer (2000) posits another gestural source, namely a
deictic pointing gesture indicating monetary debt. While Shaffer (2000) found
no diachronic evidence of such a gesture with that specific meaning, informal ex-
perimentation with nonsigning English speakers did produce multiple instances
of this gesture. Adults were asked to indicate to another that money was owed.
Each person who attempted to gesture monetary debt used exactly this gesture:
a pointing at the open extended hand. Bybee et al. (1994) cite numerous cases
of verbs indicating monetary debt generalizing to indicate general obligation.
De Jorio (1832, in Kendon 2000) finds evidence of a pointing gesture used
as far back as classical antiquity (and nineteenth-century Naples) to indicate
‘in this place’ and notes that it can express ‘insistence.’ Kendon also states
that the index finger extended and directed to some object is used to point out
that object. Further he finds evidence in nineteenth-century Naples of the flat
upturned hand being used to indicate “a request for a material object” (Kendon
2000:128). What we claim here is that such a gesture existed in nineteenth-
century France and could be used to indicate monetary debt (for a discussion
of pointing gestures and their relation to cognition in nonhuman primates, and
their place in the evolution of language for humans, see Blake 2000).
This pointing gesture entered the lexicon by way of OLSF as a verb indicating
monetary debt, glossed as DEVOIR, then underwent semantic generalization
that resulted in uses where no monetary debt was intended, just a general sense
of 'owing,' shown in Figure 8.4. It is interesting to note also a parallel in
spoken French in the mid-1800s and continuing today where devoir was used
in constructions to express both monetary debt and obligation.

Figure 8.4a 1855 LSF IL-FAUT ('it is necessary') (Brouland 1855; reproduced with permission); 8.4b 1913 ASL OWE (Hotchkiss in 1997 Sign Media Inc.; reproduced with permission); 8.4c Modern ASL MUST (Humphries et al. 1980; reproduced with permission)
Further phonological reduction, which resulted in uses without the base handshape, led to the development of a form meaning general necessity in LSF (with variations glossed IL-FAUT 'it is necessary' (Brouland 1855) and DEVOIR 'should' (Pèlissier 1856)), and subsequently to the development of modern ASL MUST.

(13) gesture ‘owe’ > OLSF verb ‘owe’ > LSF/ASL ‘must,’ ‘should’ >
epistemic ‘should’

The 1913 data suggest that by the start of the twentieth century the ASL sign
OWE (the counterpart to the OLSF sign with the same meaning) was still
in use, with and without a financial component to its meaning. MUST was
also in use, with discourse uses ranging from participant external obligation
and advisability, to participant internal obligation and advisability. Uses with
deontic, or authoritative sources were also seen. Epistemic uses of MUST were
not seen in the diachronic data, but are fairly common in modern ASL (for a
more detailed description, see Shaffer 2000).
In summary, this look at the grammaticization of FUTURE, MUST, and CAN
in ASL traces their sources to the point where gesture enters the lexicon. FU-
TURE, MUST, and CAN, we argue, each have gestural sources which, through
frequent use and ritualization, led to the development of lexical morphemes,
then – following standard grammaticization processes – to the development
of grammatical morphemes indicating modal notions. Shaffer (2000) suggests
gestural sources for other ASL modals, such as CAN’T, and Wilcox and Wilcox
(1995) hypothesize gestural origins for markers of evidentiality (SEEM, FEEL,
OBVIOUS) in ASL as well.

8.3 The grammaticization of topic


As we have seen, several grammaticization paths can be described for ASL
which follow conventional thinking regarding the development of modal mean-
ing in language, except for the important links described here to the ultimate
sources of the lexical items being grammaticized. For ASL these sources are
prelinguistic gestures.
Here we present an additional grammaticization pathway that also begins with
a gesture and results in a highly grammaticized functional category, that of topic
marking. The significant difference, however, between the grammaticization
pathways described above and the one proposed for topics is that the path
leading to the ASL topic marker appears not to include any stage where an
identifiable lexical word has conventionalized from the gestural source. Unlike
the modal pathway no lexical word intervenes between this gestural source
and the final grammatical item. The pathway proposed, adapted from Janzen
(1998), is given in (14).
(14) communicative questioning gesture > yes–no questions > pragmatic domain topics > syntactic domain topics > textual domain topics
This pathway shows that a plausible gestural origin developed into several func-
tional categories that have retained their grammatical function by the process
of layering (compare Hopper 1991) over an extended period of time. Below we discuss each stage along this grammaticization pathway, beginning with the original gesture that we believe developed the grammatical function of yes–no question marking, which was then followed by topic marking.
8.3.1 The communicative questioning gesture


The gesture proposed as the origin of the yes–no question marker, and eventual
topic marker, is an eyebrow raise. Quite conceivably, when accompanied by
deliberate eye contact with someone the gesturer intends to communicate with,
this gesture suggests an openness to communicative interaction. In other words,
the gesture invites interaction by displaying an interest in interaction.
The eyebrow raise, under the right circumstances, might invite a response to
an obvious query about something. In fact, in modern North American society,
holding an item in one’s hand, such as a drink, and lifting it up while gesturing
to a friend by raising the eyebrows, and perhaps nodding the head toward the
friend, is easily recognizable as Do you want a drink? This iconic gesture,
then, is seen as a motivated choice co-opted into the conventionalized, but still
gestural, language system of ASL. As a conventionalized signal, the gesture
may show some universality: the identical brow raise along with a forward head
tilt marks yes–no questions not only for ASL, but for British Sign Language
(Woll 1981), Sign Language of the Netherlands (Coerts 1990) and Swedish
Sign Language (Bergman 1984), to name a few.
The ease of understanding of such a signal might mean that it is a good can-
didate as an effective communication strategy, and thus a plausible beginning
point from which to build more complex and symbolic constructions. Its conven-
tionalization in yes–no constructions in ASL would suggest that this is the case.

8.3.2 Yes–no questions


The effectiveness of a communication strategy is likely to lead to its repetition,
and once ritualized (compare Haiman 1994), it can become obligatory. Raised
eyebrows have thus become the obligatory yes–no question marker in ASL,
usually along with a forward head tilt, although the appearance of this accom-
panying gesture seems less obligatory (Baker and Cokely 1980; Liddell 1980).
In an ASL yes–no question, the entire proposition being questioned is ac-
companied temporally by raised eyebrows (and again the less obligatory, but
frequent, forward head tilt) and continuous gaze at the addressee. A pause fol-
lowing the question is common, with the final sign of the question held until
a response from the addressee begins. An alternation in word order to indicate
the question does not take place. Examples are given in (15) and (16).
y/n-q
(15) FINISH SEE MOVIE PRO.2 (Baker and Cokely 1980:124)
‘Did you already see that movie?’
y/n-q
(16) SEE PRO.1 PRO.2 (Janzen 1998:93)
‘Did you see me?’

The prelinguistic raised-eyebrow gesture may not itself be particularly meaning specific, but pragmatic considerations surrounding the communicative interchange would contribute to its being inferred as a gesture indicating interest
or intentness, or that some information is being solicited, akin to a question
being asked. This prelinguistic gesture – available to both deaf and nondeaf
communities in North America – is also used as a grammatical yes–no question
marker in ASL, however. In this grammatical context it is specific in mean-
ing, marking any string as a yes–no question. Thus a prelinguistic gesture has
been recruited as a grammatical marker, and it would be difficult to claim that
either is a lexical item in the sense of a content word. In this case, it appears
that a grammatical marker, albeit a rather iconic one, takes as its source a more
general communicative gesture, with no lexical word developing or intervening
at this stage of grammatical development or later, as discussed in Sections 8.3.3
and 8.3.4.

8.3.3 From yes–no questions to topic marking


Topics in ASL essentially take the form of yes–no questions, but function
very differently from true yes–no questions in discourse. It is not uncommon
crosslinguistically for yes–no question marking and topic marking to employ
the same morphological marker. In Hua, for example, a Papua New Guinean
language, both yes–no questions and topics are marked by the morpheme -ve
(Haiman 1978), as shown in (17).
(17) a. E -si -ve baigu -e13
come 3sg.fut int will stay 1sg
‘Will he come? I will stay.’
b. Dgai -mo -ve baigu -e
I (emph) c.p. -top will stay 1sg
‘As for me, I will stay.’
For ASL this polysemy is also apparent in the similar eyebrow raise for both
yes–no questions and topics, and ASL topic marking may be seen as represent-
ing a later stage of grammatical development along this pathway. The same ges-
ture of raised eyebrows that marks yes–no questions indicates a topic in ASL,
but rather than a forward head tilt, the head may optionally tilt backward.14
This in itself is worthy of note. Whereas the forward head tilt in a yes–no ques-
tion invites a response, and is highly interactive in design, the topic-marked

construction retains the form of a yes–no question, but the backward head tilt
may be thought of as an iconic gesture away from any real invitation to respond.
The signer does not wish for any response to the “question” form: the addressee
must read this construction not as a question in its truest interactive sense,
but as providing a ground for some ensuing piece of discourse on the part of
the signer, or as a “pivot” linking some shared or presupposed information to
something new. In other words, it is a grammatical marker signaling a type of
information (or information structuring), and not a questioning cue. In a similar vein, Wilbur and Patschke (1999) suggest that the forward head tilt of
yes–no questions indicates inclusiveness for the addressee, whereas a backward
tilt signals an intent to exclude the addressee from the discourse. Examples of
topic marking in running discourse, taken from the monologue texts in Janzen
(1998), are given in (18) and (19).
13 From Haiman (1978:570–71), his examples (2b) and (21). int is the interrogative marker; c.p. in (17b) is a connective particle in Haiman's notation. Haiman also makes the point that conditionals in Hua, as in a number of languages, are marked similarly, but details of this are beyond the present discussion.
14 The backward head tilt is frequently thought to obligatorily accompany the raised eyebrow marker for topics in ASL, but Janzen (1998) notes that in running discourse this is not the case.
top
(18) a. WORLD CL:C(globe) MANY DIFFERENT++ LANGUAGE
both hands-----
PRO.3+++15
on ‘globe’
‘There are many different languages in all parts of the world.’
b. LANGUAGE OVER 5000 PRO.3+++ MUCH
both hands
‘There are over five thousand languages in the world.’
c. FIND+++ SAME CHARACTERISTIC FIND+++ LIST
‘(In these languages we) find many of the same characteristics.’
top
d. PEOPLE TAKE-ADVANTAGE lh-[PRO.3] LANGUAGE
COMMUNICATE MINGLE DISCOURSE COMMUNICATE
PRO.3
‘People make use of this language for communicating and
socializing.’
top
e. OTHER COMMUNICATE SKILL lh-[PRO.3] LITTLE-BIT
DIFFERENT PRO.3-pl.alt16
‘Other kinds of communication are a little different from
language.’
top
(19) a. TRAIN ARRIVE(extended movement, fingers wiggle) T-H-E P-A-S
CL:bent V(get off vehicle)
‘The train eventually arrived at The Pas, and (we) got off.’17
top
b. OTHER TRAIN pause T-H-E P-A-S(c) TO L-Y-N-N L-A-K-E(d)
PRO.3(c upward to d)
‘and took another train north from The Pas to Lynn Lake.’
c. THAT MONDAY WEDNESDAY FRIDAY SOMETHING
THAT PRO.3 CL:A(travel c to d to c)18 THAT
‘That train runs Mondays, Wednesdays, and Fridays – something
like that.’
top
d. 1,3.dual.excl CHANGE CL:bent V(get on vehicle) TRAIN
‘We (the two of us) changed trains,’
e. ARRIVE C-R-A-N-B-E-R-R-Y P-O-R-T-A-G-E
‘and arrived at Cranberry Portage.’
top
f. CL:bent V(get off vehicle) TAKE-UP DRIVE GO-TO
FLIN-FLON
‘(We) got off (the train), and took a car to Flin-Flon.’
15 PRO.3 here is an indexical point to the location of the classifier structure in the sentence. Whether these points are best analyzed as locative ('there'), demonstrative ('that'), or pronouns ('it,' 'them,' etc.) is not clear, but for these glosses, they will all be given as PRO.3.
16 pl.alt indicates that this sign is articulated with both hands (thus plural) and has an alternating indexing movement (to two different points in space).
17 The Pas is a town in Manitoba, Canada.
As these discourse segments show, the topic-marked constituent may be nominal
or clausal (other elements, such as temporal adverbials, are also frequently
topic-marked). They have the same formal marking as do yes–no questions,19
but the function of this construction in the discourse is very different. The
construction is emancipated from the interactive function of yes–no questions,
and has assumed a further grammatical function. The marker now indicates
a relationship between parts of the discourse text, that is, how one piece of
information relates to the next. As mentioned, the topic-marked constituent has
a grounding or “pivot” function in the text.
In the grammaticization path given in (14) above, “syntactic domain topics”
are suggested as a later stage than “pragmatic domain topics.” While the de-
tails of this differentiation are not addressed here (see Janzen 1998; 1999), it is
thought that information from the interlocutors’ shared world of experience is
available to them as participants in a current discourse event before information
that arises out of the discourse event itself becomes available as shared informa-
tion. Essentially, however, marked topics that draw presupposed information
from interlocutors’ shared world of experience or from prior mention in the
discourse are marked in the same manner. The only difference – and one that
causes some potential confusion for sentence analysis – is that topic-marked
18 This classifier form is similar to what is often glossed as COMMUTE, with an “A” handshape
moving vertically from locus “c” to “d” to “c,” and with the thumb remaining upward.
19 Once again, the backward head tilt is not an obligatory part of the construction. In these texts,
it appears occasionally, but not consistently, and thus is not considered a necessary structural
element to differentiate the two functions.
constituents that arise out of shared pragmatic experience enter the discourse
as “new” information to that discourse event but consist of “given” information
pragmatically, whereas topic-marked constituents that are anaphoric to previ-
ous mention are both “given” by virtue of the previous mention and because
the information, once mentioned, has entered the shared world of experience
(Janzen 1998). Thus the “given–new” dichotomy is not quite so simple.
The gestural eyebrow raise in these grammaticized cases of topic marking
does not mark the full range of yes–no question possibilities for actual yes–no
questions, but only one: do you know X? Notice, however, that even though the
construction described here as a topic may still appear to be very question-like,
it clearly does not function in this way. Consider the
functional practicality of such “questions” as those posed in (20), with (20a)
and (20b) drawing on the discourse text example in (18a) and (18d), and (20c)
and (20d) from (19d) and (19f) above:
(20) a. Do you know ‘the world’?
b. Do you know ‘people’?
c. Do you know ‘the two of us’?
d. Do you know ‘the act of getting off the train’?
These are not yes–no questions that make any communicative sense in their
discourse context. Rather, they are grammaticized constructions with the same
morphological marking as yes–no questions in ASL, but with different gram-
matical function. In these cases, the topic-marked constituents (e.g. ‘the world’
or ‘the two of us’) are clearly grounding information for the new information
that follows within the sentence.

8.3.4 Textual domain topics: A further grammaticization step


The most highly grammaticized use of topic marking appears in ASL not as
marking constituents containing shared information, but as grammatical dis-
course markers. While the pragmatic and syntactic domain topics relate rele-
vant pieces of presupposed and new information in the text, we propose that
the construction form along with its associated topic marking has further gram-
maticized to have a textual cohesion-marking function, following the semantic–
pragmatic change that Traugott (1989) suggests as propositional > textual (or >
expressive). Here it is proposed that the primary motivation for this grammati-
cized function is the topic as a discourse pivot. In this further development the
“shared-information–new-information-linking function” of the topic has been
lost. Examples (21) to (24) below show that what is marked with the topic
marker is not at all information from interlocutors’ world of experience, nor
anything previously mentioned in the text, but is instead information about text
construction itself.
(21) WHAT’S-UP GO-TOa RESTAURANT EAT+++
top top
BE-FINISHED, TAKE-ADVANTAGE SEE TRAIN ARRIVE20
‘So then, we went to a restaurant, ate, and then got to see the
train arrive.’
top
(22) PRO.1 lh: [PRO.3] POSS.1 GOOD-FRIEND PRO.3 SAY PRO.3
top
ALCOHOL IN WHY lh:[POSS.3] MOTHER PREVIOUS WORK
ALCOHOL STORE
‘(I . . .) my best friend said she knew there was alcohol in it
because her mother had worked in a liquor store before.’
top
(23) a. FIRST, P-H-O-N-E-M-I-C S-H-A-D-O-W-I-N-G (. . .)
‘The first (exercise) is called “Phonemic Shadowing”. (. . .)’
top
b. NEXT SECOND, P-H-R-A-S-E S-H-A-D-O-W-I-N-G (. . .)
‘The next one is “Phrase Shadowing”. (. . .)’
In (21) and (22) the topic-marked connective acts as a discourse pivot, and
can hardly be called a “topic” at this point, given its grammatical function
(however, for an alternate description of elements such as WHY in (22) as
WH-clefts, see Wilbur 1996). In (23), similarly, the pivot is an overt ordering
device in the discourse. As mentioned, information about the world is not a
factor here, but predictable text organization is. Thus, once again the same
grammatical morpheme as in the yes–no question and in the more prototypical
topic appears, but not to mark topical information in the discourse. Instead, the
phrase marked with this eyebrow raise functions in an analogous way to the
pivot function of the more prototypical, information-bearing topic, grounding,
in effect, what comes next in the discourse. The addressee in these cases is
assumed to be fluent in such ASL discourse strategies: the shared information
is now about text structure.
Thus, this grammatical use of the brow raise gesture has arisen – likely as
an analog of the whole construction functioning as a discourse pivot – having
been emancipated first from the interactive function that a yes–no question has
and, second, abstracting away from the type of information contained in the
topic-marked constituent. The result is a construction very similar in form, but
one that is a grammatical text-cohesion device. The grammaticization cline
moves from a more discourse-participant interactive focus, to a grammatical
device concerning the organization of information within a (potentially) single
signer’s text, to a text-cohesion marker that has to do with text structure without
regard to the specific information in the mind of the signer.
20 BE-FINISHED is glossed this way based on its grammaticization from a stative construction
(Janzen 1995). Note also the brow raise topic marker clause-finally in (21). This type of con-
struction occurs frequently in natural ASL discourse, but is not discussed here.
Historically, there is little recorded evidence to suggest when these gram-
maticized stages appeared in ASL, but by the time of some of the earliest
recorded texts, 1913, this text-cohesion function was already in use. Example
(24), taken from the 1913 J. Hotchkiss text, shows this marker on an event
ordering construction, similar to (23) above.
top
(24) . . . HELP IN TWO WAY+ FIRST LEAD POSS.3 WALK++ . . .
‘. . . helped in two ways, first, by leading him (on his) walk . . .’
Other examples, both of this type and of the connective type, are contained in
the 1913 ASL texts.
The examination of this grammaticization cline shows that, for whatever
reason, at least when linguistic conditions are apparent a lexical stage along the
pathway is not required for the grammaticization of an item. Whether or not the
language modality (signed gestures as opposed to vocal gestures) is the primary
factor that allows this phenomenon to occur is open to question, but the fact
that gestures of the hands and face are of the same medium as linguistic signals
may be significant in the development from gesture to grammatical material
here.

8.4 Conclusions
Grammaticization processes for spoken languages are understood to be system-
atic and pervasive. Diachronic research in the last few decades has brought to
light a vast array of grammatical constructions for which earlier lexical forms
can be shown as their source. The systematicity for grammaticizing functions
is such that, in many languages, polysemous grammatical and lexical items can
be taken to be evidence of grammaticization, even in the absence of detailed
diachronic evidence.
Grammaticization studies on signed languages are rare, but the examples we
have outlined show the potential for signed languages to develop in a man-
ner similar to spoken languages in this respect. In other words, how would
grammatical categories in a signed language emerge, except by the very same
processes? The pursuit of language universals that include both signed and spo-
ken languages is in some senses hampered by differences between vocal and
signed linguistic gestures, and while even the reality of structural universals
has come under question of late (see, for example, Croft 2001), it is thought by
some (Bybee and Dahl 1989; Bybee et al. 1994; Bybee 2001) that real language
universals are universals of language change. In this regard, grammaticization
processes in ASL offer significant evidence that crosses the boundary between
spoken and signed languages.
A further advantage to studying grammaticization in a signed language, how-
ever, may be that it offers a unique look at language change. What are commonly
thought of as “gestures” – that is, nonlinguistic but communicative gestures of
the hands and face – are of the same neuromuscular substrate as are the lin-
guistic, fully conventionalized signs of ASL (Armstrong et al. 1995). Several
studies have shown that components of generalized, nonlinguistic gesturing are
evident in ASL in the phonetic inventory (e.g. Janzen 1997), the routinized lexi-
con (Shaffer 2000), and in syntactic and morphosyntactic relations (Armstrong
et al. 1995). The present study, however, shows that the role that gesture plays
in the development of grammatical morphemes is also critical, and not opaque
when viewed through the lens of grammaticization principles.
This study offers something unique to grammaticization theory as well, for
two reasons. First, it is interesting to see the process of gesture > lexical form >
grammatical form as illustrated by the development of modals in ASL. Not only
can we see grammatical forms arise, but also lexical forms as an intermediate
stage in the whole process. Second, we have seen an instance of grammatical
form arising not by way of any identifiable lexical form, but directly from
a more generalized gestural source. This does not cast doubt on the crucial
and pervasive role that lexical items do play in the development of grammar,
but suggests that, under the right circumstances, this diachronic stage may
be bypassed. How this might take place, and what the conditions for such
grammaticization phenomena are, have yet to be explored. In addition, there
is great potential for studying the development of numerous modals, auxiliary
forms, and other grammatical elements in ASL and other signed languages.

Acknowledgments
Portions of this paper were first presented at the 26th Annual Meeting of
the Berkeley Linguistic Society. We wish to thank Sherman Wilcox, Barbara
O’Dea, Joan Bybee, participants at the Texas Linguistic Society 2000 confer-
ence and the reviewers of this book for their comments. We wish to acknowledge
support from SSHRC (Canada) Grant No. 752–95–1215 to Terry Janzen.

8.5 References
Armstrong, David F., William C. Stokoe, and Sherman E. Wilcox. 1995. Gesture and
the nature of language. Cambridge: Cambridge University Press.
Baker, Charlotte, and Dennis Cokely. 1980. American Sign Language: A teacher’s re-
source text on grammar and culture. Silver Spring, MD: T.J. Publishers.
Bergman, Brita. 1984. Non-manual components of signed language: Some sentence
types in Swedish Sign Language. In Recent research on European sign languages,
ed. Filip Loncke, Penny Boyes-Braem, and Yvan Lebrun, 49–59. Lisse: Swets &
Zeitlinger B.V.
Blake, Joanna. 2000. Routes to child language: Evolutionary and developmental pre-
cursors. Cambridge: Cambridge University Press.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge: MIT
Press.
Brouland, Josephine. 1855. Langage mimique: Spécimen d’un dictionnaire des signes.
Gallaudet Archives.
Bybee, Joan L. 1988. Semantic substance vs. contrast in the development of grammat-
ical meaning. In Proceedings of the fourteenth annual meeting of the Berkeley
Linguistics Society, 247–264.
Bybee, Joan L. 2001. Phonology and language use. Cambridge: Cambridge University
Press.
Bybee, Joan L., and Östen Dahl. 1989. The creation of tense and aspect systems in the
languages of the world. Studies in Language 13:51–103.
Bybee, Joan L., William Pagliuca, and Revere Perkins. 1991. Back to the future. In Ap-
proaches to grammaticalization, Vol. II: Focus on the types of grammatical markers,
ed. Elizabeth Closs Traugott and Bernd Heine, 17–58. Amsterdam: Benjamins.
Bybee, Joan, Revere Perkins, and William Pagliuca. 1994. The evolution of grammar:
Tense, aspect and modality in the languages of the world. Chicago, IL: University
of Chicago Press.
Coerts, Jane. 1990. The analysis of interrogatives and negations in Sign Language of the
Netherlands. In Current trends in European sign language research: Proceedings
of the 3rd European Congress on Sign Language Research, ed. Siegmund Prillwitz
and Tomas Vollhaber, 265–277. Hamburg: Signum-Verlag.
Croft, William. 2001. Radical construction grammar: Syntactic theory in typological
perspective. Oxford: Oxford University Press.
de Haan, Ferdinand. 1999. Evidentiality and epistemic modality: Setting boundaries.
Southwest Journal of Linguistics 18:83–101.
Eriksson, Per. 1998. The history of deaf people: A source book. Örebro, Sweden: National
Swedish Agency for Special Education.
Frishberg, Nancy. 1975. Arbitrariness and iconicity: Historical change in American Sign
Language. Language 51:696–719.
Givón, Talmy. 1975. Serial verbs and syntactic change: Niger-Congo. In Word order
and word order change, ed. Charles Li, 47–112. Austin, TX: University of Texas
Press.
Haiman, John. 1978. Conditionals are topics. Language 54:564–589.
Haiman, John. 1994. Ritualization and the development of language. In Perspectives on
grammaticalization, ed. William Pagliuca, 3–28. Amsterdam: Benjamins.
Hewes, Gordon W. 1973. Primate communication and the gestural origin of language.
Current Anthropology 14:5–24.
Hopper, Paul. 1991. On some principles of grammaticization. In Approaches to gram-
maticalization, Vol. I: Focus on theoretical and methodological issues, ed. Elizabeth
Closs Traugott and Bernd Heine, 149–187. Amsterdam: Benjamins.
Humphries, Tom, Carol Padden, and Terence O’Rourke. 1980. A basic course in
American Sign Language. Silver Spring, MD: T.J. Publishers.
Janzen, Terry. 1995. The polygrammaticalization of FINISH in ASL. Manuscript,
University of Manitoba, Winnipeg.
Janzen, Terry. 1997. Phonological access to cognitive representation. Paper presented at
5th International Cognitive Linguistics Conference, July, Amsterdam.
Janzen, Terry. 1998. Topicality in ASL: Information ordering, constituent structure,
and the function of topic marking. Ph.D. Dissertation, University of New Mexico,
Albuquerque, NM.
Janzen, Terry. 1999. The grammaticization of topics in American Sign Language. Studies
in Language 23:271–306.
Kendon, Adam. 2000. Gesture in Naples and gesture in classical antiquity: A translation
of Andrea de Jorio’s La mimica degli antichi investigata nel gestire napoletano.
Bloomington, IN: Indiana University Press.
Lane, Harlan. 1984. When the mind hears: A history of the deaf. New York: Random
House.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Pèlissier, P. 1856. Iconographie des signes. Paris: Imprimerie et Librarie de Paul Dupont.
Petitto, Laura A. 1983. From gesture to symbol: The relation between form and meaning
in the acquisition of personal pronouns in American Sign Language. Papers and
reports on child language development 22:100–127.
Petitto, Laura A. 1990. The transition from gesture to symbol in American Sign Lan-
guage. In From gesture to language in hearing and deaf children, ed. V. Volterra
and C. J. Erting, 153–161. Berlin: Springer-Verlag.
Rizzolatti, Giacomo and Michael A. Arbib. 1998. Language within our grasp. Trends in
Neuroscience 21:188–194.
Senghas, Ann, Asli Ozyurek, and Sotaro Kita. 2000. Encoding motion events in an
emerging sign language: From Nicaraguan gestures to Nicaraguan signs. Paper
presented at the 7th Conference on Theoretical Issues in Sign Language Research,
July, Amsterdam.
Shaffer, Barbara. 1999. Synchronic and diachronic perspectives on negative modals in
ASL. Paper presented at the 2nd Annual High Desert Linguistic Society Conference,
March, University of New Mexico, Albuquerque, NM.
Shaffer, Barbara. 2000. A syntactic, pragmatic analysis of the expression of necessity
and possibility in American Sign Language. Ph.D. Dissertation, University of New
Mexico, Albuquerque, NM.
Singleton, Jenny, Susan Goldin-Meadow, and David McNeill. 1995. The cata-
clysmic break between gesticulation and sign: Evidence against a unified
continuum of gestural communication. In Language, gesture, and space, ed.
Karen Emmorey and Judy Reilly, 287–311. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Sweetser, Eve. 1988. Grammaticalization and semantic bleaching. In Proceedings of the
fourteenth annual meeting of the Berkeley Linguistics Society, 389–405.
Traugott, Elizabeth Closs. 1989. On the rise of epistemic meanings in English: An
example of subjectification in semantic change. Language 65:31–55.
Wilbur, Ronnie B. 1996. Evidence for the function and structure of wh-clefts in American
Sign Language. In International review of sign linguistics, Vol. 1, ed. William H.
Edmondson and Ronnie B. Wilbur, 209–256. Mahwah, NJ: Lawrence Erlbaum
Associates.
Wilbur, Ronnie B. 1999. Metrical structure, morphological gaps, and possible grammati-
calization in ASL. Sign Language & Linguistics 2:217–244.
Wilbur, Ronnie B. and Cynthia Patschke. 1999. Syntactic correlates of brow raise in
ASL. Sign Language & Linguistics 2:3–40.
Wilcox, Sherman and Phyllis Wilcox. 1995. The gestural expression of modality in ASL.
In Modality in grammar and discourse, ed. Joan Bybee and Suzanne Fleischman,
135–162. Amsterdam: Benjamins.
Woll, Bencie. 1981. Question structure in British Sign Language. In Perspectives on
British Sign Language and deafness, ed. B. Woll, J. Kyle and M. Deuchar, 136–
149. London: Croom Helm.
Woodward, James. 1978. Historical bases of American Sign Language. In Understand-
ing language through sign language research, ed. Patricia Siple, 333–348. New
York: Academic Press.
Woodward, James. 1980. Some sociolinguistic aspects of French and American Sign
Language. In Recent perspectives on American Sign Language, ed. Harlan Lane
and François Grosjean, 103–118. Hillsdale NJ: Lawrence Erlbaum Associates.
Wylie, Laurence. 1977. Beaux gestes: A guide to French body talk. Cambridge, MA:
The Undergraduate Press.
9 A crosslinguistic examination of the lexicons
of four signed languages

Anne-Marie P. Guerra Currie, Richard P. Meier, and Keith Walters

9.1 Introduction
Crosslinguistic and crossmodality research has proven to be crucial in un-
derstanding the nature of language. In this chapter we seek to contribute to
crosslinguistic sign language research and discuss how this research intersects
with comparisons across spoken languages. Our point of departure is a series
of three pair-wise comparisons between elicited samples of the vocabularies of
Mexican Sign Language (la Lengua de Señas Mexicana or LSM) and French
Sign Language (la Langue des Signes Française or LSF), Spanish Sign Lan-
guage (la Lengua de Signos Española or LSE), and Japanese Sign Language
(Nihon Syuwa or NS). We examine the extent to which these sample vocabular-
ies resemble each other. Writing about “sound–meaning resemblances” across
spoken languages, Greenberg (1957:37) posits that such resemblances are due
to four types of causes. Two are historical: genetic relationship and borrowing.
The other two are connected to nonhistorical factors: chance and shared symbol-
ism, which we here use to mean that a pair of words happens to share the same
motivation, whether iconic or indexic. These four causes are likely to apply to
sign languages as well, although – as we point out below – a genetic linguistic
relationship may not be the most appropriate account of the development of
three of the sign languages discussed in this chapter: LSF, LSM, and LSE.
The history of deaf education through the medium of signs in Mexico sheds
light on why the three specific pair-wise comparisons that form the basis of
this study are informative. Organized deaf education was attempted as early as
April 15, 1861, when President Benito Juárez and Minister Ignacio Ramírez
issued a public education law that called for the establishment of a school
for the deaf in Mexico City and expressed clear intentions to establish similar
schools throughout the Republic (Sierra 1934). Eduardo Huet, a deaf Frenchman
educated in Paris who had previously established and directed a school for the
deaf in Brazil, learned of the new public school initiative and decided to travel
to Mexico. He arrived in Mexico City in 1866, and soon after established a
school for the deaf (Sierra 1934). We assume that LSF in some form was at
least initially used as the medium of instruction there, or heavily influenced the
medium of instruction. LSF was therefore an available source of borrowing for
what was eventually to become LSM.
It is likely that before Huet established the Mexico City school (and even
well after that time), the deaf in Mexico used home signs. Moreover, indigenous
sign languages may well have been in existence.1 If either of these scenarios is
true, the home sign systems or one or more of the indigenous sign languages
may have come into contact with the sign language used as a medium of in-
struction at La Escuela Nacional de Sordomudos (The National School for
Deaf-Mutes).2 To date, we do not know definitively if another sign language
was in existence in Mexico City at the time Huet arrived, nor do we know if the
deaf were largely isolated from each other or if they had already established
communities among themselves. However, once the school was established,
deaf people were brought together for educational purposes, a situation that
likely fostered the development of social networks and communities among
the deaf to a degree and in ways previously unknown. This situation would
also have created the sociolinguistic conditions to support the development of
a conventionalized sign language with LSF available as one of the sources for
vocabulary.
In this chapter we compare a small sample of LSM vocabulary with counter-
part signs in three other signed languages: LSF, LSE, and NS. The LSM–LSF
comparison offers an informative perspective inasmuch as these two languages
share a historical link through deaf education, as detailed above. The LSM–
LSE comparison allows us to consider the degree to which the cultural, histor-
ical, and linguistic ties shared by the dominant Spanish-speaking cultures of
Mexico and its former colonial power, Spain, manifest themselves in the two
signed languages. This comparison enables us to evaluate the evidence for the
common assumption that Mexican Sign Language and Spanish Sign Language
are, or must be, closely related because of the cultural traits – including the
widespread use of spoken and written Spanish – that Mexico and Spain share.
Just as important as comparisons between languages that have known linguis-
tic and educational connections are comparisons between languages that have
very distinct histories. Japanese Sign Language (NS) is not related to LSM in
the ways that LSF and LSE are known to be. Unlike LSF, NS has no known
historical link to LSM through deaf education. Nor would we expect NS to be
influenced by the cultural history shared by Mexico and Spain or the dominance
of written and spoken Spanish in these two countries. For these reasons, the
LSM–NS comparison presents an opportunity to assess the degree of similarity
between two unrelated signed languages. Thus, each of these pairwise compar-
isons provides unique insights into the roles various factors play in contributing
to similarity in the lexicons of signed languages.
1 Smith Stark (1990:7, 52) confirms the existence of home signs in Mexico City and states that there
were other manual languages in existence among communities with a high frequency of deaf
members, such as a community of potters near the Texas–Mexico border and Chican, a Yucatec
community. Johnson (1991) reports an indigenous sign language used in a Mayan community in
the Yucatan. However, it is not clear if these sign languages existed in 1866 or what the nature
of those communities might have been at that time.
2 Similarly, in the USA sign languages extant prior to the arrival on these shores of French Sign
Language were likely contributors to the development of ASL. In particular, the sign language
that had developed on Martha’s Vineyard was a probable contributor to ASL (Groce 1985).

9.2 Methodology
The LSM data come from dissertation fieldwork (Guerra Currie 1999) con-
ducted by the first author in Mexico City and in Aguascalientes, a city some
300 miles northeast of Mexico City. Six fluent Deaf signers were consulted: the
three consultants in Aguascalientes were all first-generation deaf, whereas the
three consultants from Mexico City were all second- or third-generation deaf
signers. The videotaped data used in the analyses reported here were elicited
using two commercially available sets of flash cards (Palabras basicas represen-
tadas por dibujos 1993, Product No. T-1668, and Mas palabras representadas
por dibujos 1994, Product No. T-1683, TREND Enterprises, Inc. St. Paul, MN).
This data set was augmented by vocabulary drawn from Bickford (1991), and
a few lexical items that occurred spontaneously in conversation between na-
tive signers. To elicit LSM vocabulary, the consultants were shown the set of
flash cards; most contained a picture that illustrated the meaning of the accom-
panying Spanish word. The LSM data comprise 190 signs elicited from each
of the three Mexico City consultants and 115 signs elicited from each of the
three Aguascalientes consultants; thus, a total of 915 LSM sign tokens were
examined. The vocabulary items that were examined consist predominately of
common nouns drawn from the basic or core vocabulary in such categories
as foods, flora and fauna, modes of transportation, household objects, articles
of clothing, calendrical expressions, professions, kinship terms, wh-words and
phrases, and simple emotions. We did not examine number signs, because the
likely similarity of signs such as ONE or FIVE would lead to overestimates
of the similarities in the lexicons of different signed languages. Likewise, we
chose not to elicit signs for body parts and personal pronouns (Woodward
1978; McKee and Kennedy 2000).
Other sources for the analysis presented here include videotapings of elicited
vocabulary from LSF, LSE, and NS.3 The consultants for LSE and NS were
shown the same set of flash cards as was used with the LSM consultants.4 The
LSF consultant was simply shown a set of written French words corresponding
to those on the Spanish language flashcards. LSE, LSF, and NS counterparts
were not elicited for all the LSM signs in our data set; thus, the LSM data are
compared to 112 LSF signs, 89 LSE signs, and 166 NS signs.
3 The first author is greatly indebted to Chris Miller for collecting the LSF data while in France, to
Amanda Holzrichter for sharing data from her dissertation corpus (Holzrichter 2000), which she
collected while in Spain, and to Daisuke Sasaki for collecting the NS data while doing research
at Gallaudet University. There may be particularly significant dialect variation across Spain in
its signed language (or languages). The LSE data reported here were collected in Valencia.
In our analysis of these data, signs from different sign languages were iden-
tified as “similarly-articulated” if those signs shared approximately the same
meaning and the same values on any two of the three major parameters of hand-
shape, movement, and place of articulation. A subset of similarly-articulated
signs includes those signs that are articulated similarly or identically on all three
major parameters; these are called “equivalent variants.” Figure 9.1 steps the
reader through the process of how the pairs of signs were categorized.

Figure 9.1 Decision tree for classification of sign tokens in corpus
  Do the sign tokens under consideration share the same meaning?
    NO → The tokens for this sign are not considered in this investigation.
    YES → Do the tokens share the same values on at least two of the three
          major parameters of sign formation?
      NO → Distinctly-articulated signs
      YES → Similarly-articulated signs
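The categorization procedure of Figure 9.1 is mechanical enough to state as a
few lines of code. The sketch below is ours, not part of the original study; the
dictionary keys and the parameter values are hypothetical stand-ins for the
judgments that were actually made from videotape:

    def categorize(pair):
        # Step 1 of Figure 9.1: tokens must share approximately the same meaning.
        if pair["meaning_a"] != pair["meaning_b"]:
            return "not considered"
        # Step 2: count matches on the three major parameters of sign formation.
        # Orientation is deliberately excluded: it is not treated as a major
        # parameter in this analysis.
        matches = sum(pair[p + "_a"] == pair[p + "_b"]
                      for p in ("handshape", "movement", "place"))
        if matches == 3:
            return "equivalent variants"  # subset of similarly-articulated signs
        return "similarly-articulated" if matches >= 2 else "distinctly-articulated"

    # The LSM-LSF pair for 'cousin' differs only in its initialized handshape
    # (P vs. C), so it is coded as similarly-articulated; the movement and
    # place values below are invented purely for illustration.
    cousin = {"meaning_a": "cousin", "meaning_b": "cousin",
              "handshape_a": "P", "handshape_b": "C",
              "movement_a": "m1", "movement_b": "m1",
              "place_a": "loc1", "place_b": "loc1"}
    print(categorize(cousin))  # -> similarly-articulated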
Several examples help to clarify how these criteria were used. The LSM
and LSF signs for COUSIN are identical except for initialization, the LSM
sign being articulated with a P handshape (cf. Spanish ‘primo’) while the
LSF sign is articulated with a C handshape (cf. French ‘cousin’). These are
treated as similarly-articulated signs. Similarly, the LSM and LSF signs for
FIRE are identical except that the former is articulated with a circular move-
ment whereas the latter involves noncircular movement; they also qualify as
similarly-articulated signs despite the difference in articulation on one major
parameter. Although differences in orientation certainly exist between signs that
are coded as similarly-articulated, in this analysis orientation is not considered
a major formational parameter and thus orientation differences did not force
the use of the distinctly-articulated category.
4 For the elicitation of signs from the LSM consultants, the written Spanish word was covered.
This was not done during the elicitation of the LSE data.
In contrast, two sets of cases constitute the set of signs that are not consid-
ered similarly articulated. First, signs varying on two or more major parameters
were considered distinctly-articulated signs. Likewise, because “sharing the
same meaning” was operationalized narrowly, cases of semantic narrowing or
extension were not considered. Thus, to use an example from spoken language,
even though some varieties of Arabic use /ħu:t/ to mean ‘whale’ while others
use it to mean ‘fish,’ the methods of data collection used here would miss this
and similar cognates when comparing varieties. However, such cognates have
long been a major source of information used by historical linguists in under-
standing the diachronic relationships among dialects and languages. Hence, in
analyzing only signs with approximately identical meanings and similar forms,
we ultimately provide a conservative estimate of the strength and nature of
similarities between the languages examined, especially those known to have
been in contact.
Each sign token available within the LSM corpus from any one of the six
LSM consultants was compared to signs with similar meanings in the three other
signed languages. So, for example, we would identify a similarly-articulated
sign pair in LSF and LSM if one of the variants identified in our LSM data shared
two of the three major parameters with its counterpart in the LSF corpus. The fact
that our data from LSF and LSE come from only one signer of each language,
combined with the fact that we have NS data from only two signers, means that
we have much less information about variant forms in those languages than
we do for LSM. This fact could result in some underestimation of the extent of
similarity between the vocabulary of LSM and those of the other sign languages.
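Because several variant forms were recorded for LSM but few for the other
languages, the matching just described is asymmetric: one matching LSM
variant suffices. A minimal sketch of that step, again ours and with hypothetical
data structures:

    def shares_two_parameters(a, b):
        # Same meaning is assumed; compare only the three major parameters.
        return sum(a[p] == b[p] for p in ("handshape", "movement", "place")) >= 2

    def pair_is_similar(lsm_variants, counterpart):
        # One matching LSM variant is enough for the pair to be coded as
        # similarly-articulated; having fewer recorded variants for the other
        # language is what could yield the underestimation noted above.
        return any(shares_two_parameters(v, counterpart) for v in lsm_variants)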

9.3 Results
Table 9.1 below summarizes the total number of sign pairs included in the
study and the resulting set of similarly-articulated signs for each pair-wise
comparison. As noted above, the members of each sign pair shared the same
approximate meaning. Out of the 112 LSM–LSF sign pairs examined in this
study, 43 pairs (38 percent) were coded as similarly-articulated. In the LSM–
LSE comparison of 89 pairs, 29 (33 percent) were coded as similarly-articulated.
For the third pair-wise comparison, of the 166 LSM–NS sign pairs examined,
39 pairs (23 percent) were coded as similarly-articulated. Not surprisingly, the
largest percentage of similarly-articulated signs was found in the LSM and LSF
pair-wise comparison, whereas the smallest percentage of similarly-articulated
signs was found in the LSM–NS comparison.
Table 9.1 Summary of similarly-articulated signs for the three crosslinguistic studies

Pair-wise    Total sign   Borrowed   Shared       Coincidence   Similarly-articulated signs
comparison   pairs        signs      symbolism                  (percentages in parentheses)
LSM–LSF      112          12         31           0             43 (38)
LSM–LSE      89           0          29           0             29 (33)
LSM–NS       166          0          39           0             39 (23)
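As a quick arithmetic cross-check (ours, not part of the original study), the
percentages in Table 9.1 follow directly from the reported counts:

    # Similarly-articulated counts over total sign pairs, from Table 9.1.
    for pair, similar, total in [("LSM-LSF", 43, 112),
                                 ("LSM-LSE", 29, 89),
                                 ("LSM-NS", 39, 166)]:
        print("%s: %d/%d = %.0f%%" % (pair, similar, total,
                                      100.0 * similar / total))
    # Prints 38%, 33%, and 23%, matching the rounded figures in the table.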

Table 9.1 also reports our analyses of the likely sources of similarly-articulated
signs identified in our analyses of the three language pairs. Note that no signs
were analyzed as similarly-articulated due to coincidence; in all cases, similarly-
articulated signs could be attributed to either shared symbolism or borrowing.
A similarly-articulated pair of signs would have been attributed to coincidence
only if the signs were arbitrary in form and came from language pairs (such
as LSM–NS) that share no known historical or cultural links. For these data,
most similarly-articulated signs may be ascribed to shared symbolism, inas-
much as the forms of these signs appear to be drawing from similar imagistic
sources, such as shared visual icons in such sign pairs as ‘fire,’ ‘bird,’ and
‘house.’
By employing the same criteria as those employed in the comparisons be-
tween related languages, the comparison between LSM and NS enables us to
suggest a baseline level of similarity for unrelated signed languages. Out of
166 NS signs, 39 signs (23 percent) are similarly-articulated with respect to
their LSM counterparts. As noted, no LSM signs appear to be borrowings from
NS, a result that is not surprising. The set of signs that are similarly-articulated
consists of iconic signs that are also found to be similarly-articulated in the
other pair-wise comparisons; these sign pairs include those for ‘balloon,’ ‘book,’
‘boat,’ ‘bird,’ ‘fire,’ and ‘fish.’ These signs are also similarly-articulated with
respect to their American Sign Language (ASL) counterparts. We consider the
similarity in these sign pairs to be due to shared symbolism.
Although there are no clear examples of borrowings in the pair-wise compar-
ison of LSM and LSE, the number of similarly-articulated signs is nonetheless
greater than that seen in the LSM–NS pair-wise comparison. The higher level
of similarity between LSM and LSE is perhaps due to shared symbolism, which
is likely to be greater between languages with related ambient cultures (LSM–
LSE) than between languages that have distinct ambient cultures (LSM–NS).
The existence of borrowings between LSM and LSF is not surprising given
the historic and linguistic links between LSM and LSF mentioned in the in-
troduction. The borrowings of signs from LSF into LSM may be attributed to
the prestige of LSF signs among the Deaf community during the early period
of Mexican Deaf education. Although these 12 borrowings are articulated sim-
ilarly in LSF and LSM, these signs are not articulated similarly in LSM and
ASL. Thus, the 12 signs that we classify as lexical borrowings from LSF into
LSM cannot be linked to contact with ASL. These 12 signs are:
• kinship terms: HERMANO/FRERE ‘brother,’ HERMANA/SOEUR ‘sister,’
PRIMO/COUSIN ‘cousin,’ SOBRINO/NEVEU ‘nephew’;
• calendric expressions: MES/MOIS ‘month,’ SEMANA/SEMAINE ‘week’;
• languages: ESPAÑOL/ESPAGNOL ‘Spanish’ and INGLÉS/ANGLAIS
‘English’;
• terms for natural phenomena: ESTRELLA/ÉTOILE ‘star’ and AGUA/EAU
‘water’;
• an emphatic: VERDAD/VRAI ‘true’; and
• an abstract concept: CURIOSIDAD/CURIOSITÉ ‘curiosity’.
These sign pairs are analyzed as borrowings due to the relatively low iconicity
of these signs; therefore the likelihood of independent parallel development is
quite small.
Although there may be other borrowings among the set of similarly-articulated
signs identified in the pair-wise comparison of LSM–LSF, the status of these
signs is uncertain, and this uncertainty ultimately raises significant questions
about how researchers understand and investigate the diachronic relationships
among sign languages. For example, one sign pair that may be the result of
borrowing from LSF – but that we have not included in the above set of clear
borrowings from that language – is the pair AYUDA/AIDE ‘help.’ In LSF AIDE,
the flat dominant hand (a B hand) contacts the underside of the nondominant
elbow and lifts the nondominant arm (which in our data has a fisted handshape).
In LSM AYUDA, the dominant hand (a fisted A handshape with the thumb
extended) rests on the nondominant B hand; these two hands move upward
together. Our rationale for excluding this pair of signs is that LSM AYUDA
may be borrowed from ASL, inasmuch as the LSM sign resembles the ASL
sign to a greater degree than it resembles the LSF sign.
However, the results of research on ASL lead us to the hypothesis that this
LSM sign might, in fact, have been borrowed from LSF and not ASL. Frishberg
(1975) and Woodward (1976) discuss the form of the ASL sign HELP in the
light of historical contact between LSF and ASL and historical variation within
ASL. They suggest that this sign has undergone language-internal historical
change resulting in centralization of the sign (Frishberg 1975) or in an elbow-
to-hand change in place of articulation (Woodward 1976). This historical change
results in a place of articulation change from the elbow, as in the LSF sign, to
the nondominant hand, as in the ASL and LSM signs. Frishberg and Woodward
suggest that this elbow-to-hand change is a historical process that several ASL
signs have undergone.
Interestingly, this process is also seen in the LSM data in the sign SEMANA
‘week.’ The LSF sign is articulated at the elbow; the LSM sign is identi-
cal with a single difference in the place of articulation at the base of the
nondominant hand. What is noteworthy is that, although the members of the
sign pair SEMANA/SEMAINE ‘week’ are similarly-articulated in LSM and
LSF, they are in no way similar to the corresponding ASL sign WEEK. Thus,
it is possible that the same historical change discussed for the ASL and LSF
sign HELP/AIDE ‘help’ may also have occurred for the LSM and LSF signs
SEMANA/SEMAINE. This similarity between the LSF AIDE ‘help’ and SE-
MAINE ‘week,’ on the one hand, and the LSM AYUDA ‘help’ and SEMANA
‘week,’ on the other hand, suggests the possibility that the LSM sign AYUDA
may also have been borrowed or derived from LSF instead of ASL.
As an alternative to our analysis that assumes borrowing to be an impor-
tant source of similarity between LSF and LSM, one might contend that LSM
signs treated here as borrowings from LSF are, in fact, cognate signs. By “cog-
nate signs,” we mean pairs of signs that share a common origin and that are
from genetically related languages. Although some researchers (Stokoe 1974;
Smith Stark 1990) have argued that the relationship that exists between LSF and
American, Mexican, and Brazilian sign languages, among others, is best seen
as a genetic one with French as the “mother” language, we see several reasons
to doubt this claim (although we certainly agree that LSF has influenced the
development of the signed languages it came into contact with in the nineteenth
and twentieth centuries). If LSM and LSF were indeed genetically related, one
might have expected a much higher percentage of similar signs than our analy-
sis reveals.5 As is, the percentage of similarly articulated signs revealed by the
LSM–LSF comparison (38 percent) is only marginally greater than that found
in the LSM–LSE analysis (33 percent). In contrast, signed languages that are
known to be related show a much greater degree of overlap in their lexicons.
Thus, comparisons of British, Australian, and New Zealand Sign Languages
have indicated that these languages may share 80 percent or more of their vo-
cabulary (Johnston, in press; also McKee and Kennedy 2000). Additionally,
“languages arising outside of normal transmission are not related (in the ge-
netic sense) to any antecedent systems,” according to Thomason and Kaufman
(1988:10; emphasis in original). Based on what we know about the history of the
LSM and LSF, it is highly unlikely that these languages are genetically related
inasmuch as they have not arisen from “normal transmission.” Reflecting the
perspective of historical linguists in general, Thomason and Kaufman define
normal transmission as a language being transmitted “from parent generation to
child generation and/or via peer group from immediately older to immediately
younger, with relatively small degrees of change over the short run, given a
reasonably stable sociolinguistic context” (1988:9–10; emphasis in original).
5 However, similarity in basic lexicon does not necessarily indicate a genetic relationship, as the
history of English demonstrates. Thus, the facts that after the Norman invasion in 1066 Middle
English borrowed a substantial fraction of its vocabulary from Norman French and that Early
Modern English borrowed many words from Latin do not mean that English should be considered
a Romance language.
Nonetheless, it is historically and linguistically apparent that LSM and LSF
have come into contact with each other and that LSF has had some influence
over the development of the LSM lexicon. However, the historical development
of LSM and LSF points toward a nongenetic relationship. It is for this reason
that the sign pairs that we analyzed as borrowings were considered as such and
not as cognate signs. Borrowings are a consequence of language contact. In
this instance, the language contact situation was between one man and a small
community. Moreover, the circumstances surrounding the origins of LSM at
the school run by Huet contrast sharply with those surrounding the normal
transmission of ASL or any other sign language, which occurs between parents
and children and between successive generations of peer groups at, for example,
residential schools.

9.4 Discussion
Our analysis of these admittedly small data sets leads us to several conclusions.
First, the findings of the pair-wise comparison between LSM and NS suggest a
possible base level of the percentage of similarly-articulated signs due to shared
symbolism between signed languages. Even for these two signed languages –
which are quite remote from each other – similarly-articulated signs constituted
23 percent of the sampled vocabulary. Second, the lexicons of LSF and LSE
show greater overlap with LSM than does NS. This finding is not surprising,
given the known historical and cultural ties that link Mexico, Spain, and France.
Third, only the LSM–LSF comparison revealed strong evidence for borrowings
into LSM. These borrowings are likely due to the use of LSF – or of signs
drawn from LSF – in the language of instruction in the first school for the
deaf in Mexico City. No obvious borrowings were identified in the LSM–LSE
comparison. The comparison between LSM and LSE addresses the commonly
held assumption that there might be a genetic relationship between these two
languages. The limited data available provide no evidence for such a claim.
Several researchers have attempted to assess the similarity of vocabulary
across signed languages (Woodward 1978; Woll 1983; Kyle and Woll 1985;
Woll 1987; Smith Stark 1990). For example, using criteria detailed below, Woll
(reported in Kyle and Woll 1985) compared 15 signed languages and found that
an average of 35 percent to 40 percent of the 257 sampled lexical items were
similarly-articulated between any pair of the 15 languages examined.6 Woll

(1983:85) coded these signs according to handshape, movement, and location,
as well as other features such as “orientation of fingers and palm, point of
contact between hand and the location of the sign, and fine handshape details.”
She organized the signs according to similar “types” that may have different
“versions.” Sign pairs that shared the same meanings and the same places of
articulation – but that differed in the two other parameters of movement and
handshape – were considered what we have labeled similarly-articulated signs.
As her method for coding signs as similar allows for two differences in the three
main parameters (place, movement, and handshape), her coding is more liberal
than ours. Additionally, Woll includes number signs in her set of 257 signs,
which might lead to an overestimation of similar signs for reasons indicated
earlier.
6 LSF is the only language common to Woll’s study and the current one.
Although Woll’s criteria are more liberal than ours, the overall trends in her
study are pertinent. Three trends, in particular, are relevant for the discussion
of the data. The first is the observation of a baseline similarity across signed
languages and the second is the observation of a difference between signed and
spoken languages regarding the significance of the degree of lexical similarity.
Kyle and Woll (1985:168) state that 35–40 percent similarity in lexicons be-
tween unrelated spoken languages would be considered a very high degree of
similarity. As Greenberg (1957:37) states, with respect to spoken languages,
“where the percentage of [lexical] resemblance between languages is very high,
say 20 percent or more, some historic factor, whether borrowing or genetic re-
lationship, must be assumed.” However, according to Woll, signed languages
with a known close relationship share a much higher percentage of similarly-
articulated signs. For example, Kyle and Woll (1985:168) report that British
and Australian sign languages – which are historically related – share 80 per-
cent of their vocabulary, a result confirmed in more recent analyses cited earlier
(McKee and Kennedy 2000). It appears that lexical similarity is one area where
the modality of the language results in a difference between spoken and signed
languages. In other words, when comparing languages, researchers must con-
sider the modality of the language when examining the degree of similarity. We
agree with Woll that for signed languages one needs a much higher percent-
age of similarly-articulated lexical items than in spoken languages in order to
support the claim of a close relationship between two signed languages.
One source of lexical similarity across signed languages may lie in iconic-
ity. Smith Stark (1990) estimates the frequency of iconic signs in some of
the signed languages with which we have been concerned here: specifically,
Mexican, Brazilian, American, and French Sign Languages. In his analysis, he
considers whether signs are iconic (i.e. resemble their referent), indexical (i.e.
point to or evoke their referent directly), and/or symbolic (i.e. have forms that
are unmotivated). Although his method of coding is not clear, he reports that a
fairly high percentage of the signs in his data are in some way indexically or
iconically motivated. Among the four signed languages he analyzed, he found


that 45–66 percent of the signs examined exhibited a notable degree of iconicity,
whereas 24–39 percent were coded as purely symbolic. These findings point
out another major difference between signed and spoken languages: there are
a great many more lexical items that are iconic in a signed language than in
a spoken language. However, as Kyle and Woll (1985:113) state: “Although
this visual imagery is more immediately apparent and more widespread than in
spoken language, the difference is likely to be of degree rather than kind.”
Given shared traditions and beliefs in the Western cultures where LSM, LSF,
LSE, and also ASL are signed, it would not be surprising to find that these
languages have exploited similar icons as the etymological bases for signs
with similar meanings. To a somewhat lesser extent, the same result seems to
hold for comparisons of LSM with NS. The analyses reported in this chap-
ter revealed a subset of signs that are similarly-articulated across LSM, LSF,
LSE, and NS. These signs tended to be highly iconic in form. The signs for
‘balloon,’ ‘book,’ ‘fire,’ ‘dresser,’ ‘key,’ and ‘red’ are included in this group
of similarly-articulated signs. This is not to say that these signs are merely
mimetic representations. Rather, this finding suggests that across signed lan-
guages, there may be a set of signs that tend to share icons and that tend to be
composed of similar formational elements. Thus, the likely similarity of such
signs does not necessarily imply a historical relationship between or among
these languages. Even without direct contact, pairs of signed languages may
in some instances have similarly-articulated signs to signify highly imageable
concepts.7
7 Having said this, we hasten to add that in many instances sign languages have conventionalized
quite different icons for the same concept. Klima and Bellugi (1979) cite the sign for TREE in
three different sign languages: in ASL the icon seems to be a tree waving in the wind, in Danish
Sign Language the sign seems to sketch the round crown of a tree and its trunk, and in Chinese
Sign Language the icon is apparently the columnar shape of a tree trunk or of some trees.
On the other hand, similarly-articulated sign pairs such as LSM and LSF
VERDAD/VRAI ‘true’ that are less overtly iconic than the signs for ‘balloon’,
‘book’, or ‘fire’ are more likely to be the result of language contact. Among the
three language pairs examined, the comparison between LSM and LSF offers
the most potential borrowings. This result is not surprising given the histori-
cal relationship between the French and Mexican systems of deaf education.
However, the question of the motivation for the particular borrowings described
remains unanswered. In Weinreich’s (1968:57) discussion of the potential mo-
tivations for lexical borrowings in spoken languages, he mentions that common
terms tend to be stable lexical items, whereas borrowing is more likely to oc-
cur for those lexical items that have a lower frequency of use. However, the
signs identified here as borrowings tend to be signs that would be expected to
be frequent in everyday usage. Another factor Weinreich discusses for spoken
languages may explain why these common terms may have been borrowed from
LSF: “social values.” Social attitudes may have been the factor that motivated
the LSF borrowings to be accepted into the LSM lexicon. It is possible that with
Huet’s establishment of the school for the deaf LSF was introduced, and the
attitudes toward LSF and the teachings of the school were positive, according
them a prestige that tends to come with what is considered educated language
use. These positive attitudes may have contributed to the acceptance of the bor-
rowing of common lexical items into the language of LSM. This same factor,
the prestige of educated language, may also contribute to the relatively high
incidence of initialized signs in LSM.
In conclusion, the resources of the visual–gestural modality – specifically, its
capacity for iconic representation – may promote greater resemblance between
the vocabularies of unrelated signed languages than we would expect between
unrelated spoken languages. The analysis presented here – specifically the com-
parison between LSM and NS – supports Woll’s (1983) proposal that there is
a relatively high base level of similarity between sign language vocabularies
regardless of the degree of historical relatedness. To some extent, the apparent
similarity of sign vocabularies may be an artifact of the relative youth of signed
languages. Time and historical change may obscure the iconic origins of signs
(Frishberg 1975). For an answer to this, we will have to wait and see.

Acknowledgments
This chapter is based on the doctoral dissertation of the first author (Guerra
Currie 1999). We thank David McKee and Claire Ramsey for their very helpful
comments on an earlier draft.

References
Bickford, Albert. 1991. Lexical variation in Mexican Sign Language. Sign Language
Studies 72:241–276.
Frishberg, Nancy. 1975. Arbitrariness and iconicity: Historical change in American Sign
Language. Language 51:696–719.
Greenberg, Joseph. 1957. Essays in linguistics. Chicago, IL: University of Chicago
Press.
Groce, Nora E. 1985. Everyone here spoke sign language: Hereditary deafness on
Martha’s Vineyard. Cambridge, MA: Harvard University Press.
Guerra Currie, Anne-Marie P. 1999. A Mexican Sign Language lexicon: Internal and
crosslinguistic similarities and variations. Doctoral dissertation, The University of
Texas at Austin.
Holzrichter, Amanda S. 2000. Interactions between deaf mothers and their deaf infants:
A crosslinguistic study. Doctoral dissertation, The University of Texas at Austin.
Johnson, Robert E. 1991. Sign language, culture and community in a traditional Yucatec
Maya village. Sign Language Studies 73:461–474.
Johnston, Trevor. 2001. BSL, Auslan and NZSL: Three sign languages or one? In Pro-
ceedings of the 7th International Conference on Theoretical Issues in Sign Lan-
guage Research (University of Amsterdam, Amsterdam, July, 2000), ed. Anne
Baker. Hamburg: Signum Verlag.
Johnston, Trevor. In press. BSL, Auslan and NZSL: Three signed languages or one? In
A crosslinguistic perspective on sign languages, ed. Anne E. Baker, Beppie van den
Bogaerde, and Onno Crasborn. Hamburg: Signum Verlag.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Kyle, J. G. and B. Woll. 1985. Sign language: The study of deaf people and their
language. Cambridge: Cambridge University Press.
McKee, David and Graeme Kennedy. 2000. Lexical comparison of signs from American,
Australian, British and New Zealand Sign Languages. In The signs of language
revisited: An anthology to honor Ursula Bellugi and Edward Klima, ed. Karen
Emmorey and Harlan Lane, 49–76. Mahwah, NJ: Lawrence Erlbaum Associates.
Sierra, Ignacio. 1934. Compañerismo: Organo del club deportivo “Eduardo Huet.”
Mexico City: El Club Eduardo Huet.
Smith Stark, Thomas C. 1990. Una comparación de las lenguas manuales de México y
de Brasil. Paper read at IX Congreso Internacional de la Asociación de Lingüística
y Filología de América Latina (ALFAL), at Campinas, Brasil.
Stokoe, William C. 1974. Classification and description of sign languages. In Current
trends in linguistics, Vol. 12, ed. Thomas A. Sebeok, 345–371. The Hague: Mouton.
Thomason, Sarah G. and Terrence Kaufman. 1988. Language contact, creolization, and
genetic linguistics. Berkeley, CA: University of California Press.
Weinreich, Uriel. 1968. Languages in contact. The Hague: Mouton.
Woll, B. 1983. The comparative study of different sign languages: Preliminary analyses.
In Recent research on European sign languages, ed. Filip Loncke, Penny Boyes-
Braem, and Yvan Lebrun, 79–91. Lisse: Swets and Zeitlinger.
Woll, B. 1987. Historical and comparative aspects of British Sign Language. In Sign and
school: Using signs in deaf children’s development, ed. Jim Kyle, 12–34. Clevedon,
Avon: Multilingual Matters.
Woodward, James. 1976. Signs of change: Historical variation in American Sign Lan-
guage. Sign Language Studies 10:81–94.
Woodward, James. 1978. Historical bases of American Sign Language. In Under-
standing language through sign language research, ed. Patricia Siple, 333–348.
New York: Academic Press.
Part III

Syntax in sign: Few or no effects of modality

Within the past 30 years, syntactic phenomena within signed languages have
been studied fairly extensively. American Sign Language (ASL) in particular
has been analyzed within the framework of relational grammar (Padden 1983),
lexicalist frameworks (Cormier 1998, Cormier et al. 1999), discourse repre-
sentation theory (Lillo-Martin and Klima 1990), and perhaps most widely in
generative and minimalist frameworks (Lillo-Martin 1986; Lillo-Martin 1991;
Neidle et al. 2000). Many of these analyses show that ASL satisfies various syntactic
principles and constraints that are generally taken to be universal for spoken
languages (Lillo-Martin 1997). Such principles include Ross’s (1967) Complex
NP Constraint (Fischer 1974), Ross’s Coordinate Structure Constraint (Padden
1983), the Wh-Island Constraint, Subjacency, and the Empty Category Principle
(Lillo-Martin 1991; Romano 1991).
The level of syntax and phrase structure is where sequentiality is perhaps
most obvious in signed languages, and this may be one reason why we can fairly
straightforwardly apply many of these syntactic principles to signed languages.
Indeed, the overall consensus seems to be that the visual–gestural modality of
signed languages results in very few differences between the syntactic structure
of signed languages and that of spoken languages.
The three chapters in this section support this general assumption, revealing
minimal modality effects at the syntactic level. Those differences that do emerge
seem to be based on the use of the signing space (as noted in Lillo-Martin’s chapter,
Chapter 10) or on nonmanual signals (as noted in the chapters by Pfau and by Tang
and Sze, Chapters 11 and 12). Nonmanual signals include particular facial
expressions and body positions that act primarily as grammatical markers in
signed languages. Both the use of space and nonmanual signals are integral
features of the signed modality and are used in all the signed languages that
have been studied to date (Moody 1983; Bos 1990; Engberg-Pedersen 1993;
Pizzuto et al. 1995; Meir 1998; Sutton-Spence and Woll 1998; Senghas 2000;
Zeshan 2000).
Chapter 10 starts with the autonomy of syntax within spoken languages and
extends this concept to signed languages, concluding that while there must be
some modality effects at the articulatory–perceptual level, there need not be
at the syntactic level. Lillo-Martin goes on to explore one facet affecting the
syntax and morphology of signed languages that does show modality effects:
the use of signing space for pronominal and anaphoric reference. This issue
is also explored in more detail in Part IV of this volume on deixis and verb
agreement.
Chapter 11 is an exploration of split negation in German Sign Language
(Deutsche Gebärdensprache or DGS), with comparisons to ASL and also many
spoken languages. Pfau finds that split negation occurs in DGS and closely
resembles split negation patterns found in many spoken languages. In addition,
there is variation in negation patterns within the class of signed languages, so
that while DGS has split negation, ASL does not. Thus, split negation essentially
shows no modality effect. However, Pfau identifies one potential modality effect
related to the nonmanual marking (headshake) associated with negation. This
nonmanual marking, which Pfau argues is essentially prosodic, acts somewhat
differently from prosody in spoken languages.
In Chapter 12 Tang and Sze look at the structure of nominals in Hong Kong
Sign Language (HKSL). They find minimal modality effects at the structural
level, where HKSL nominals have the basic structure [Det Num N]. However,
Tang and Sze note that there may be a modality effect in the types of nominals
that receive definiteness marking. In HKSL, bare nouns often receive definiteness
marking, which is realized nonmanually through eye gaze or role shift.
As with Pfau’s negation marking, Tang and Sze note variation in this definiteness
marking across signed languages. Thus, while definiteness is expressed in ASL
with head tilt and eye gaze, only eye gaze is used in HKSL.
Chapters 11 and 12 in particular constitute significant contributions to the
body of literature on signed languages other than ASL, and indicate a very
strong need for more crosslinguistic work on different signed languages. Only
by looking at a wide variety of signed languages will we be able to tease apart
what features of human language are affected by language modality, and what
features are part of universal grammar.

kearsy cormier

References
Bos, Heleen. 1990. Person and location marking in Sign Language of the Netherlands:
Some implications of a spatially expressed syntactic system. In Current trends in
European sign language research, ed. Sigmund Prillwitz and Tomas Vollhaber,
231–246. Hamburg: Signum Press.
Cormier, Kearsy. 1998. Grammatical and anaphoric agreement in American Sign Lan-
guage. Master’s thesis, University of Texas at Austin.
Cormier, Kearsy, Stephen Wechsler, and Richard P. Meier. 1999. Locus agreement
in American Sign Language. In Lexical and constructional aspects of linguistic
explanation, ed. Gert Webelhuth, Jean-Pierre Koenig and Andreas Kathol, 215–
229. Stanford, CA: CSLI Press.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum
Press.
Fischer, Susan D. 1974. Sign language and linguistic universals. In Actes du Colloque
Franco-Allemand de Grammaire Transformationnelle, ed. Christian Rohrer and
Nicholas Ruwet, 187–204. Tübingen: Niemeyer.
Lillo-Martin, Diane. 1986. Two kinds of null arguments in American Sign Language.
Natural Language and Linguistic Theory 4:415–444.
Lillo-Martin, Diane. 1991. Universal grammar and American Sign Language. Boston,
MA: Kluwer.
Lillo-Martin, Diane. 1997. The modular effects of sign language acquisition. In Relations
of language and thought: The views from sign language and deaf children, ed.
Marc Marschark, Patricia Siple, Diane Lillo-Martin, Ruth Campbell and Victoria
S. Everhart, 62–109. New York: Oxford University Press.
Lillo-Martin, Diane, and Edward Klima. 1990. Pointing out differences: ASL pronouns
in syntactic theory. In Theoretical issues in sign language research, Vol. 1: Lin-
guistics, ed. Susan D. Fischer and Patricia Siple, 191–210. Chicago, IL: University
of Chicago Press.
Meir, Irit. 1998. Thematic structure and verb agreement in Israeli Sign Language. Doc-
toral dissertation, Hebrew University of Jerusalem.
Moody, Bill. 1983. La langue des signes. Vincennes: International Visual Theatre.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert Lee. 2000.
The syntax of American Sign Language. Cambridge, MA: MIT Press.
Padden, Carol A. 1983. Interaction of morphology and syntax in American Sign Lan-
guage. Doctoral dissertation, University of California, San Diego.
Pizzuto, Elena, Emanuela Cameracanna, Serena Corazza, and Virginia Volterra. 1995.
Terms for spatio-temporal relations in Italian Sign Language. In Iconicity in lan-
guage, ed. Raffaele Simone, 237–256. Amsterdam: John Benjamins.
Romano, Christine. 1991. Mixed headedness in American Sign Language: Evidence
from functional categories. MIT Working Papers in Linguistics 14:241–254.
Ross, J. R. 1967. Constraints on variables in syntax. Doctoral dissertation, MIT.
Senghas, Ann. 2000. The development of early spatial morphology in Nicaraguan Sign
Language. Proceedings of the Annual Boston University Conference on Language
Development 24:696–707.
Sutton-Spence, Rachel, and Bencie Woll. 1998. The linguistics of British Sign Language.
Cambridge: Cambridge University Press.
Zeshan, Ulrike. 2000. Sign language in Indo-Pakistan: A description of a signed lan-
guage. Philadelphia, PA: John Benjamins.
10 Where are all the modality effects?

Diane Lillo-Martin

10.1 Introduction
Sign languages are produced and perceived in the visual modality, while spo-
ken languages are produced and perceived in the auditory modality. Does this
difference in modality have any effect on the structures of these two types
of languages? Much of the research on the structure of sign languages has
mentioned this issue, but it is far from resolved. To some authors, the differ-
ences between sign languages and spoken languages are paramount, because
the study of “modality effects” is a contribution which sign language research
uniquely can make. To others, the similarities between sign languages and spo-
ken languages are most important, for they can tell us how certain properties
of linguistic systems transcend modality and are, therefore, truly universal. Of
course, both of these goals are worthy, and this book is testimony to the fruits
that such endeavors can yield.
In this chapter I address the question of modality effects by first examining the
architecture of the language faculty. By laying out my assumptions about how
language works in the general sense, predictions about the locus of modality
effects can be made. I then take up an issue that is a strong candidate for
a modality effect: the use of space for indicating reference in pronouns and
verbs. I review some of the issues that have been discussed with respect to
this phenomenon, and offer an analysis that is in keeping with the theoretical
framework set up at the beginning. I do not offer this analysis as support for
the theoretical assumptions but, instead, the framework provides support for
the analysis. Interestingly, my conclusions turn out to be rather similar to a
proposal made on the basis of some very different assumptions about the nature
of the language faculty.

10.2 The autonomy of syntax


One of the fundamental assumptions of generative grammar has been the auton-
omy of syntax (Chomsky 1977). What this means is that the representations and
derivations of the syntactic component do not refer to phonological structures
or semantic structures; likewise, phonological or semantic rules do not refer to
syntactic structures.1 As Jackendoff (1997:27) puts it:
[P]honological rules cannot refer directly to syntactic categories or syntactic constituency.
Rather, they refer to prosodic constituency, which . . . is only partially determined by syn-
tactic structure . . . Conversely, syntactic rules do not refer to phonological domains or
to the phonological content of words.

Clearly, the syntactic component and the phonological component must con-
nect at some point. In recent literature, this point has been called an “interface.”
Some of the recent research in generative syntax within the Minimalist Program
(Chomsky 1995) has sought to determine where in a derivation the syntax–
phonology interface lies, and the properties it has. For example, it has long
been assumed by some syntacticians (e.g. Chomsky 1981) that there may be
“stylistic” order-changing rules that apply in the interface component connect-
ing syntax with phonology. However, aside from the operations of this interface
level, known as PF (Phonetic Form), it is generally assumed that the operations
of the syntax and of the phonology proper are autonomous of each other. Thus,
quoting again from Jackendoff (1997:29):
For example, syntactic rules never depend on whether a word has two versus three
syllables (as stress rules do); and phonological rules never depend on whether one
phrase is c-commanded by another (as syntactic rules do). That is, many aspects of
phonological structure are invisible to syntax and vice versa.

In generative grammar, this hypothesis is captured by models of the architecture
of the language faculty in which the output of syntactic operations is fed,
through the PF component, to the phonology proper. While the details of the
models have changed over the years, the assumption of autonomy has remained.

10.2.1 Autonomy and signed languages


These proposals about the autonomy of syntax were made after consideration
of the properties of only spoken languages.2 However, it is now well established
that signed languages are in every sense “language,” and theories of the nature
of human language must be broad enough to accommodate the properties of
signed languages as well as spoken languages. What would the theory of the
autonomy of syntax lead us to expect about the nature of sign languages? Do
sign languages force us to reconsider the notion of autonomy?
The null hypothesis is that there are no differences between spoken languages
and signed languages. Hence, we might expect that sign languages will display
autonomy just as spoken languages do. What does this mean for the analysis of
phonology and syntax in sign languages?

1 From here on, I discuss only autonomy of syntax and phonology, since this is the connection that
is immediately relevant for questions about potential modality effects.
2 Jackendoff’s recent works, while primarily discussing spoken languages, are an exception in
explicitly aiming for a model of language that incorporates sign languages as well.

The phonology is the component of the grammar that interacts with the
“articulatory–perceptual interface.” That is, the output of the phonological com-
ponent is the input to the articulatory component (for production); or the output
of the perceptual component is the input to the phonological component (for
comprehension). While these mappings are far from simple, it is clear that the
modality of language must be felt in the phonological component. Thus, for
example, the features of a sign language representation include notions like
“selected fingers” and “circular movement,” while those of a spoken language
include “tongue tip” and “voice.” In other words, the modality of language
affects the phonological component.
In view of this inescapable conclusion, it is remarkable to notice how many
similar properties sign phonologies and spoken phonologies share. Presumably,
these properties come from the general, abstract properties of the language
faculty (known as Universal Grammar, or UG). Since I do not work in the details
of phonology, I do not offer here any argument as to whether these properties are
specific to language or come from more general cognitive principles, although I
have made my predilections known elsewhere (see Lillo-Martin 1997; however,
for the point of view of some real sign language phonologists, see Sandler 1993;
van der Hulst 2000; Brentari, this volume). Let it suffice for here to say that
although certain aspects of the modality will show up in the phonology, it has
been found that, as a whole, the system of sign language phonology displays in
general the same characteristics as spoken language phonology.
Even though the phonological components for signed languages and spoken
languages must reveal their modalities to some degree, the theory of the auton-
omy of syntax allows for a different claim about that level of the grammar. If
syntax and phonology are autonomous, there is no need for the syntactic com-
ponents of signed and spoken languages to differ. The null hypothesis, then, is
that they do not differ. In other words, the modality of language does not affect
the syntactic component.
This is not to say, of course, that any particular sign language will have
the same syntax as any particular spoken language. Instead, I assume that the
abstract syntactic principles of UG apply equally to languages in the signed
and spoken modalities. Where UG permits variation between languages, sign
languages may vary from spoken languages (and from each other). Where
UG constrains the form of spoken languages, it will constrain sign languages
as well. A clear counterexample to the UG hypothesis for sign language could
come from a demonstration that universal principles of grammar – for example,
the principle of structure dependence or the constraints on extraction – apply
in spoken languages but not in sign languages. To my knowledge, no such
claim has been made. On the contrary, several researchers have claimed that
sign languages do adhere to the principles of UG (Fischer 1974; Padden 1983;
Lillo-Martin 1991; Neidle et al. 2000).
However, there have been claims that sign languages have certain syntactic
properties that are a result of the visual-spatial modality. Since the Universal
Grammar hypothesis leads to the expectation that signed and spoken languages
would all vary within the same set of dimensions, this observation requires fur-
ther explanation (see Sandler and Lillo-Martin 2001; Meier this volume). One
set of apparently modality-dependent characteristics relates to the properties of
verb agreement to be discussed here. Such claims should be investigated care-
fully for their potential to provide evidence against the autonomy hypothesis.
First, it must be established that the claims represent true generalizations about
some structures unique to sign languages. Next, the level of the phenomenon
needs to be investigated. Modality effects at the phonological level may not
constitute evidence against the autonomy hypothesis.
Finally, the nature of the phenomenon should be considered. The strongest
version of the autonomy hypothesis applied to sign language would say that the
syntactic component of every sign language is a potential syntactic component
for a spoken language, and vice versa. However, we can break down the contents
of a mental grammar into those aspects that come from UG and those that are
learned. There must be abundant positive evidence in the environment for those
aspects of language that are learned. A weaker version, then, of the autonomy
hypothesis applied to sign language would be that there would be no difference
between signed and spoken languages with respect to the UG component. Any
differences, then, would have to be learnable. While this version of the hypoth-
esis reduces the predicted universal properties of language, it retains the crucial
assumption that unlearnable properties (e.g. constraints) are universal.
To summarize, the model presented here is one in which no modality effects
would be found in the syntactic component, although the phonological compo-
nent would contain information about modality. That is, modality effects should
be found only at the interface: where the phonological component interacts with
the articulatory–perceptual components. Any exceptions must be learnable. Let
us now consider whether the available evidence complies with this model.

10.3 “Spatial syntax”


One of the greatest challenges for the framework outlined above comes from
the use of spatial locations in the grammar of sign languages. The importance
of understanding how space functions in sign language is underscored by the
number of papers in this volume addressing this topic. Clearly, the use of space
is a candidate modality effect that must be examined carefully.
If spatial location were simply one of the components of a sign – as it is
in ASL signs like MOTHER or DOG – there would not be such concern over
its analysis. As Liddell (1990) points out, for some signs their location in
the signing space simply reflects the articulatory function of space. However,
spatial locations are also used to pick out referents and designate the location
of elements. In such cases, spatial locations are not simply sublexical, but in
addition they convey meaning. Most researchers have assumed (implicitly) that
spatial locations are therefore morphemic, and that – just as Supalla (1982)
provided a componential morphological analysis of the complex movements
found in classifier constructions – some kind of componential morphological
analysis of spatial locations could be provided.
DeMatteo (1977) was an exception: he argued that spatial locations could
not be analyzed morphologically. More recently, this same conclusion has been
argued for in a series of publications by Liddell (1990; 1994; 1995; 2000).
Before summarizing Liddell’s position, the following sections briefly describe
the meaningful use of space in American Sign Language (ASL).

10.3.1 The use of space in pronouns and verb agreement


Pronouns in ASL (and as far as we know, every other sign language investi-
gated to date) can be described as indexical pointing to locations that represent
referents. In many sign languages, to sign ‘I’ or ‘me,’ the signer points to his or
her own chest, although in Japanese Sign Language (Nihon Syuwa or NS) the
signer points to his or her own nose (Japan Sign Language Research Institute
1997). To sign ‘you,’ the signer points to the addressee. To sign ‘he’ or ‘she,’
the signer points to the intended referent. If the intended referent is not present
in the signed discourse, the signer indicates a spatial location that is associated
with the referent, and points to this location. Often, the locations toward which
points are directed are referred to as “loci,” or R(eferential) loci.
Using space for pronominal reference seems to make the system of pronouns
in sign languages rather different from that of spoken languages. Two issues
are mentioned here (these issues are also discussed in Lillo-Martin and Klima
1990). First, there seems to be no upper limit to the number of referents that
can be pointed to using (distinct) pronoun signs. Indexical pointing toward any
spatial locus may constitute a pronoun referring to a referent at that locus. Since
between any two geometric points there is another point, it would seem that the
number of potential pronoun signs is nonfinite. Second, unlike spoken language
pronouns, sign language pronouns are generally unambiguous. Pointing to
the location of a referent picks out that referent, not a class of potential ref-
erents (such as third person males). These two issues are discussed at more
length below.
The process long known as verb agreement in ASL (and other sign languages)
makes use of the loci described for pronouns. Verb agreement involves modi-
fying the form of a verb so that its beginning and ending locations correspond
(usually) to the locations of the referents intended as subject and object, re-
spectively. Often, the verb also rotates so that it “faces” the object as well. This
process is illustrated in Figure 10.1.

Figure 10.1 ASL verb agreement: 10.1a ‘I ask her’; 10.1b ‘He asks me’

The process of verb agreement illustrated in Figure 10.1 applies to a class
of verbs, but not to all verbs in ASL. Padden (1983) identified three classes of
verbs:
r those that take agreement as described above;
r those verbs such as ASL PUT that agree with spatial (i.e. locative) arguments;
and
r those that take no agreement at all.
Furthermore, the class of agreeing verbs contains a subclass of “backwards”
verbs, which move from the location of the object to the subject, instead of
vice versa. The distinctions between the various classes of verbs with respect
to agreement have received considerable attention. The importance of these
distinctions for the analysis of verb agreement is brought out below.

10.4 Is space really syntax?

10.4.1 The traditional view


According to the traditional description above, loci are linguistic elements with
the following characteristics. First, nominals are associated with loci (this pro-
cess is sometimes called “nominal establishment” or “establishing a referent”).
These loci determine the direction of pointing in pronouns. Furthermore, they
determine the beginning and ending positions of agreeing verbs.
Some authors have suggested ways to implement this idea by describing verbs
(and pronouns) as specified lexically for certain components, such as handshape
and skeletal structure, but missing out information about the initial and final
location (see, for example, Sandler 1989). This information comes from the
morphological process of agreement. When the agreement template combines
with the verb root, the verb is fully derived. Sandler’s (1989) representation of an
agreeing verb is given in Figure 10.2.

Figure 10.2 Verb agreement template (after Sandler 1989): a hand configuration
(HC) autosegment linked to an L M L (location–movement–location) skeleton,
with an [agreement locus] associated to each L slot

The problem that arises, however, is how
to specify in the grammar the information that is filled in through the process
of verb agreement. The locations must be filled in, but how are these locations
described in the lexicon? In other words, what does the verb agree with? The
general assumption has been that the verb agrees with the subject/object in
person (and number). This view is put clearly by Neidle et al. (2000:31): “spatial
locations constitute an overt instantiation of phi-features (specifically, person
features).” However, recall that any number of referents may be established and
referred to using verb agreement. If loci represent person features, how many
person distinctions must be made in the grammar?
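
One way to see the gap concretely is with a small sketch. The fragment below is
a hypothetical encoding, not a proposal from the literature; the verb entry and
the locus values are invented for illustration. It treats an agreeing verb as a
template whose two L slots are filled from argument loci, and its final comment
states the puzzle:

    # A hypothetical encoding of the agreement template, for illustration only.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    Locus = Tuple[float, float, float]     # an idealized point in signing space

    @dataclass
    class AgreeingVerb:
        gloss: str
        handshape: str                     # lexically specified
        movement: str                      # lexically specified
        initial_L: Optional[Locus] = None  # open slot, filled by agreement
        final_L: Optional[Locus] = None    # open slot, filled by agreement

    def agree(verb: AgreeingVerb, subj: Locus, obj: Locus) -> AgreeingVerb:
        """Fill the open L slots of the L M L skeleton from argument loci."""
        verb.initial_L, verb.final_L = subj, obj
        return verb

    ask = agree(AgreeingVerb("ASK", "X", "straight"),
                subj=(0.0, 0.0, 0.0), obj=(0.4, 0.1, 0.3))
    # The puzzle: any point in signing space is a possible locus, so no finite
    # inventory of person features enumerates the values these slots may take.
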
The first traditional account followed the standard analysis of languages like
the European ones which mark person, number, and gender on pronouns and
verbal morphology. According to that account, first person is marked in ASL by
using the location of the signer; second person by the location of the addressee;
and multiple third persons can be marked using other spatial locations. (This
view was adopted by Padden 1983 and many others.)
Lillo-Martin and Klima (1990) and Meier (1990) noticed problems with this
traditional analysis.3 In particular, Meier argued that no linguistic distinction is
made between second and third person. Although there may well be referents in
a discourse who play the roles of second (addressee) and third (non-addressee)
person, these referents are not picked out using distinct mechanisms in the gram-
mar. The uses of loci for second and third persons are indistinguishable; only
the role played by a referent in a particular discourse separates these persons.
On the other hand, Meier argued that ASL does make a linguistic distinction
between first and nonfirst person. The location for first person reference is
fixed, not changing like that for nonfirst referents. The plural form of the first
person pronoun is morphologically idiosyncratic, while the plural forms for
the nonfirst persons are completely componential. Importantly, the first person
form may be used to pick out a referent other than the signer, in contexts of
direct quotation (and what is often called “role shift”), just as first person forms
may do in spoken languages. Thus, according to Meier, ASL marks a two-way
person contrast: first vs. nonfirst.

3 While both of these chapters discussed the problem with respect to pronouns only, the same
points – and presumably, analyses along the same lines – would apply to verb agreement. The
papers are about the categorical distinctions made by the grammar of ASL, which provide features
of nouns with which verbs agree.

This conclusion has been shared by numerous authors working on ASL and
other sign languages. For example, Engberg-Pedersen (1993) makes a simi-
lar argument for a first/nonfirst distinction in Danish Sign Language, as does
Smith (1990) for Taiwanese Sign Language, Meir (1998) for Israeli Sign Lan-
guage, and Rathmann (2000) for DGS. The main idea seems to be that there
is a grammatical distinction between first and nonfirst person, with multiple
realizations of nonfirst. Neidle et al. (2000:31) state that although “there is a
primary distinction between first and non-first persons, non-first person can be
further subclassified into many distinct person values.”
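
One illustrative way to encode the resulting system (my own sketch, not a
notation used by any of these authors; the locus labels are arbitrary) is a person
category with exactly one grammatical contrast, where the locus attached to a
nonfirst value is a fact about realization rather than a feature the syntax sees:

    # An illustrative encoding of the first/nonfirst proposal.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Person:
        first: bool                   # the only contrast the grammar sees
        locus: Optional[str] = None   # realization of a nonfirst form

    FIRST = Person(first=True)        # fixed form, at the signer
    he_a = Person(first=False, locus="a")
    she_b = Person(first=False, locus="b")

    def shifts_in_role_shift(p: Person) -> bool:
        # The grammar may single out first person (e.g. under role shift)...
        return p.first

    # ...but no rule distinguishes locus "a" from locus "b" as such; only
    # sameness of locus matters, for coreference.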

10.4.2 The problem


Liddell (1990) observed that this traditional view of a locus can be described
as “referential equality,” by which “Referent_a = locus_a.” Furthermore, the lo-
cus has often been considered a point in signing space. However, as Liddell
showed, agreement is not with points in space. Verbs are lexically specified for
the height of articulation. The verb GIVE is articulated at about chest height,
ASK is articulated at about chin height, and GET-SAME-IDEA-SAME-TIME
is articulated at forehead height. Thus, these three verbs would move with re-
spect to three different points even for the same referents. Apparently, a locus
must have some depth.4 In addition, sometimes verbs indicate not only their lex-
ically specified height with respect to the signer, but also the relative heights of
the subject or object. In this way, a signer might sign ASK toward a lower point,
to indicate asking a child; or toward a higher point, to indicate asking a very tall
person.
In a series of publications Liddell (1990; 1994; 1995; 2000) repeatedly raises
the questions of how the spatial loci can be morphologically analyzed, and
what the phonological specification of the (so-called) verb agreement process
is. In order to accommodate his observations about loci, Liddell proposes a
new analysis of the use of space in pronouns and agreement. First, he argues
that the relation between a referent and a locus is not referential equality, but
“location fixing.” In this view, associating a referent with a locus amounts to
expressing “referenta is at locusa .” The referent might be present in the current

4 Alternatively, agreement might be described in terms of vectors, as proposed by Padden


(1990:125), which are lexically specified for height.
Where are all the modality effects? 249

physical situation. If not, location fixing might serve to establish a locus for a
“surrogate” (an imaginary referent of full size, used in the cases where verbs
indicate relative height of referents), or a “token” (a schematic referent with
some depth, but equivalent and parallel to the signer).
Crucially, what Liddell (1995:25–26) recognizes is that in order for pronouns
or verbs to make use of the locations of present referents, surrogates, or tokens:
there is no . . . predictability associated with the locations that signs may be directed
toward. The location is not dependent on any linguistic features or any linguistic category.
Instead it comes directly from the signer’s view of the surrounding environment.

Thus, he argues (pp. 24–25), loci are not morphemic:


There appears to be an unlimited number of possible locations for referents in Real
Space and, correspondingly, an unlimited number of possible locations toward which
the hand may be directed. Attempting a morphemic solution to the problem of directing
signs toward any of an unlimited number of possible locations in Real Space would
either require an unlimited number of location and direction morphemes or it would
require postulating a single morpheme whose form was indeterminate. . . . The concept
of a lexically fixed, meaningful element with indeterminate form is inconsistent with
our conception of what morphemes are.

Given this state of affairs, Liddell concludes that there is no linguistic process
of verb agreement in ASL. Instead, he proposes (p. 26) that:
the handshapes, certain aspects of the orientations of the hand, and types of movement
are lexically specified through phonological features, but . . . there are no linguistic fea-
tures identifying the location the hands are directed toward. Instead, the hands are
directed . . . by non-discrete gestural means.

In other words, he employs a mixture of linguistic and gestural elements to


analyze “indicating verbs,” and specifically argues that the process employed
is not agreement.

10.4.3 Why there is verb agreement in ASL


While Liddell’s observations are apt, his conclusion is too strong. There are
several reasons to maintain an analysis of agreement in ASL. First, the first
person and plural agreement forms do have a determinate shape. Just as Meier
(1990) used a similar observation about pronouns to argue for a first/nonfirst
person distinction in the pronominal system, the first person form of verbs is
an identifiable agreement feature. Although the first person form is motivated,
it is determinate and listable.5 Unlike the nonfirst forms, the first person form
employs a specifiable location that is also used for nonreferential lexical con-
trasts. Plural forms (dual, exhaustive, and multiple) have specific morphological
shapes that combine predictably with roots. In fact, as Meier points out, the first
person plural forms WE, OUR, and OURSELVES involve lexically specified
locations “at best only partially motivated” (Meier 1990:180), despite the pos-
sibility for “pointing to” the signer and a locus or loci representing the other
referents.

5 Of course, as Liddell points out, even for first-person forms there is not one point for agreeing
verbs, since they are lexically specified for different heights.

Liddell himself does not reject the notion that there is a specific first person
form, at least in pronouns (Liddell 1994). However, McBurney (this volume),
adopting Liddell’s framework, argues that the first/nonfirst distinction is not a
grammatical one. For the reasons given here and below, I think the distinction
is real.
Furthermore, there are numerous constraints on the agreement process. For
one thing, as mentioned earlier, only a subset of verbs mark agreement at all.
Meir (1998) characterized verbs that may take agreement as “potential posses-
sors,” because on her analysis agreement verbs have a transfer component in
their predicate–argument structure. Mathur (2000; Rathmann and Mathur, this
volume) characterized verbs that may take agreement as those taking two ani-
mate arguments; similarly, Janis (1995) limited agreement to verbs with particu-
lar semantic relations, including animate patients, experiencers, and recipients.
These characterizations of agreeing verbs are largely overlapping and, impor-
tantly, they bring out the fact that many verbs do not show agreement. Further-
more, agreement affects particular syntactic roles: subject and object for transi-
tive verbs; subject and indirect object for di-transitives. Intransitives do not mark
agreement; di-transitives do not mark agreement with their direct object. If there
is no linguistic process of agreement – but rather a gestural procedure for indi-
cating arguments – why should the procedure be limited by linguistic factors?
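
The force of this argument can be put in a few lines. The toy predicate below
simply restates the generalizations just listed (the class membership of LOVE is
my illustrative assumption; PUT and GIVE are as in the text) and makes visible
that agreement is conditioned by grammatical class and syntactic role, which is
unexpected of a purely gestural pointing procedure:

    # A toy restatement of the constraints on agreement described above.
    PLAIN, SPATIAL, AGREEING = "plain", "spatial", "agreeing"
    VERB_CLASS = {"LOVE": PLAIN, "PUT": SPATIAL, "GIVE": AGREEING}

    def marks_agreement(verb: str, role: str, frame: str) -> bool:
        if VERB_CLASS.get(verb) != AGREEING:
            return False              # plain and spatial verbs never agree
        if frame == "transitive":
            return role in ("subject", "object")
        if frame == "ditransitive":
            return role in ("subject", "indirect_object")  # not direct object
        return False                  # intransitives do not mark agreement

    assert marks_agreement("GIVE", "indirect_object", "ditransitive")
    assert not marks_agreement("GIVE", "direct_object", "ditransitive")
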
To be sure, Liddell (1995) himself points out that the indicating process
must interact closely with the grammar. He points out the observation made by
Padden (1983) – and before that by Meier (1982) – that while object agreement
is obligatory, subject agreement is optional; as well as the fact that certain
combinations are ruled out (e.g. FLIRT-WITH-me). He does not, however,
offer a way to capture these facts under a system with no linguistic agreement
process. Many of the ruled-out forms can be attributed to phonetic constraints, as
offered by Mathur and Rathmann (Mathur 2000; Mathur and Rathmann 2001).
How would such constraints apply to forms generated outside the grammar?
The arguments given so far have also been made by others, including Aronoff
et al. (in submission), Meier (2002), and Rathmann and Mathur (this volume).
Meier (2002) provides several additional arguments, and discusses at length
how the evidence from the development of verb agreement also supports its
existence as a linguistic phenomenon. He discusses development both for the
young child acquiring a sign language (compare Meier 1982), and in terms of
the emergence of a new sign language, as recently documented in Nicaragua
(Senghas et al. 1997; Senghas 2000). These observations make a strong case
for the existence of a linguistic process of verb agreement.
Another important source of evidence for the linguistic status of verb agree-
ment in sign languages comes from its interaction with various syntactic phe-
nomena. If the treatments suggested here for the following phenomena are on
the right track, then some aspects of verb agreement must be considered a
linguistic process which applies before the end of the syntactic derivation.
As one example, it has been argued that verb agreement plays a role in the
licensing of null arguments in ASL (Lillo-Martin 1986). More recently, Bahan
(1996) and Neidle et al. (2000) have argued that nonmanual realizations of
agreement may license null arguments as well as manual agreement. Whether
morphological agreement is limited to manual realization or takes both manual
and nonmanual forms, if it plays a role in licensing of null arguments then this
is good evidence for the syntactic relevance of agreement.
Further evidence for the syntactic relevance of agreement comes from
Brazilian Sign Language (Língua de Sinais Brasileira or LSB). According to
Quadros (1999), the phrase structure of sentences with agreeing verbs in LSB
is distinct from that of sentences with plain verbs. Evidence for this distinction
comes from several sources. The most striking difference between structures
with agreeing and plain verbs in LSB is the behavior of negation. While the
negative element NO may come between the subject and an agreeing verb in
LSB, it may not come between the subject and a non-agreeing verb. Instead,
negation with non-agreeing verbs must come sentence-finally (a position also
available for sentences with agreeing verbs). Examples are given in (1)–(4).
neg
(1) IX_a JOHN IX_b MARY _aGIVE_b BOOK NO
    John does not give Mary a book. (LSB; Quadros 1999:152)
neg
(2) JOHN DESIRE CAR NO
    John does not like the car. (LSB; Quadros 1999:117)
neg
(3) IX_a JOHN NO _aGIVE_b BOOK
    John does not give the book (to her). (LSB; Quadros 1999:116)
neg
(4) *JOHN NO DESIRE CAR
    John does not like the car. (LSB; Quadros 1999:116)
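
The distribution in (1)–(4) can be summarized compactly. The toy licensing
check below is only a restatement of the observed pattern, not Quadros’s actual
phrase-structural analysis:

    # A toy summary of the LSB negation pattern in (1)-(4).
    def neg_licensed(position: str, verb_agrees: bool) -> bool:
        if position == "sentence_final":
            return True           # available with all verbs: (1) and (2)
        if position == "preverbal":
            return verb_agrees    # (3) is good; (4) *JOHN NO DESIRE CAR
        return False

    assert neg_licensed("preverbal", verb_agrees=True)       # cf. (3)
    assert not neg_licensed("preverbal", verb_agrees=False)  # cf. (4)
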
Another difference between plain and agreeing verbs in LSB is in the ordering
of the subject and object. While plain verbs require subject–verb–object (SVO)
order (in the absence of operations such as topicalization or focus), agreeing
verbs permit preverbal objects. It has sometimes been claimed that the same is
true for ASL (e.g. Fischer 1974), but the status of such non-SVO utterances as a
single clause in ASL is disputed (compare Padden 1983).
LSB also differs from ASL in that it has an “auxiliary”: a lexical item to mark
agreement with non-agreeing verbs. Such an element, although referred to by
various names, has been observed in Taiwanese Sign Language (Smith 1990),
Sign Language of the Netherlands (Bos 1994), Japanese Sign Language (Fischer
1996), and German Sign Language (Rathmann 2000). Although much more
detailed work is needed – in particular to find the similarities and differences
between the auxiliary-like elements across sign languages – it seems that in
these sign languages the auxiliary element is used when agreement is blocked.
Interestingly, there may be structural differences between sentences with and
without the auxiliary (Rathmann 2000).
The behavior of the auxiliary in the sign languages that have this element is
further evidence for the linguistic status of agreement. However the auxiliary
is to be analyzed, like an agreeing verb it moves between the subject and object
loci. If it interacts with syntactic phenomena, it must be represented in some
way in the derivation, i.e. in the linguistic system.
To summarize, because of the ways that the system known as verb agreement
interacts with other aspects of the linguistic system, it must itself be a linguistic
system. Further, I have argued, at least some part of agreement must be rep-
resented in the syntax, because it interacts with other aspects of the syntax.
However, does it matter to the syntax that verb agreement is realized spatially?
How would this be captured in a syntax autonomous from phonology? As
Liddell has pointed out, if the spatial locations used in agreement could be ana-
lyzed morphemically, the fact that agreement uses space would be no more rele-
vant to the syntax than the fact that UGLY and DRY are minimal pairs differing
only in location. As he pointed out, however, it seems that the spatial locations
cannot be analyzed morphemically. How, then, is this problem to be resolved?

10.5 An alternative analysis employing agreement


The analysis that I propose comes from a combination of some of the essen-
tials of the Lillo-Martin and Klima (1990) analysis of pronouns, together with
Meier’s (1990) crucial distinction, as well as a piece of Liddell’s (2000) con-
clusion. It bears some resemblance to the analysis of verb agreement offered
by Aronoff et al. (in submission). Their proposal also makes use of ideas from
Lillo-Martin and Klima and Meier, but goes beyond those articles by making
an explicit proposal for agreement. The present proposal differs from Aronoff
et al. in accepting part of Liddell’s solution to the problem of nonfiniteness of
loci. It also adopts some, but not all, of the proposals made by Mathur (2000),
and largely overlaps with that of Rathmann and Mathur (this volume). A more
detailed comparison of this proposal with these is offered below.

Lillo-Martin and Klima recognize that a nonfinite list of possible pronoun
signs cannot be part of the lexicon. As an alternative, they propose separating
out the location from the rest of the sign in its lexical specification. That is,
they argue that there is only one pronoun sign, specified for handshape and
movement, but not for location. This means that person distinctions are not
made in the pronominal system, neither in the lexical entries nor in the syntax.
Adopting the more general notion that noun phrases carry a referential
index,6 Lillo-Martin and Klima propose that as for any language, when a pro-
noun is inserted in the derivation of a sentence, it is assigned a referential index
(or R index). In spoken languages referential indices serve to identify relation-
ships between pronouns and potential antecedents. Coindexing is interpreted
as coreference; noncoindexing as noncoreference. In the discourse representa-
tion, pronouns with identical referential indices are interpreted as picking out
the same referent. Signed languages are unique, they claim, only in that the
referential index is overtly realized; in contrast, it is unexpressed in spoken lan-
guages. That is, signs with identical referential indices must point to the same
loci (R loci); signs with different indices must point to distinct loci.
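
To make the proposal concrete, here is a minimal sketch (hypothetical code;
the locus names are arbitrary) in which referential indices work exactly as in
spoken language syntax, and only the realization step maps each index to a
locus:

    # A minimal sketch of overtly realized R-indices.
    loci_for_index = {}                 # discourse record: R-index -> locus

    def realize_pronoun(r_index: int) -> str:
        """Coindexed pronouns surface at the same locus; distinct indices
        surface at distinct loci. Which point serves as a locus is not
        itself a syntactic fact."""
        if r_index not in loci_for_index:
            loci_for_index[r_index] = f"locus_{len(loci_for_index)}"
        return f"IX[{loci_for_index[r_index]}]"

    print(realize_pronoun(1), realize_pronoun(2), realize_pronoun(1))
    # -> IX[locus_0] IX[locus_1] IX[locus_0]
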
Lillo-Martin and Klima proposed that there is only one pronoun in the ASL
lexicon (a conclusion also reached by Ahlgren 1990 for Swedish Sign Lan-
guage). However, the arguments for a first/nonfirst distinction made by Meier
have since been adopted by Lillo-Martin, together with the above analysis for
nonfirst forms. How can this analysis be updated to account for verb agreement
and Liddell’s observations about the use of space?
First, note that the analysis by Lillo-Martin and Klima did not specify how the
distinct loci represented by noncoindexing would be realized phonologically.
Their point was to show that information about spatial loci was not needed
in the syntax. In fact, as Liddell argues, it may be impossible to provide a
componential analysis of spatial loci for the use of phonology either. There are,
then, two courses that may be taken. The first is to allow modality effects in the
phonology – since we know that effects of the modality must be allowed in the
phonology – and simply to accept the notion of non-analyzable phonological
material. The second course is to follow Liddell in taking the non-analyzable
out of language altogether. Following Liddell, then, the locations toward which
pronouns are directed would come from the gestural component, which interacts
with language – but not from language in the narrow sense.
This explanation may be clearer if we compare the proposal to what we
observe about pointing gestures that accompany speech. In spoken English,
pointing gestures often accompany pronouns, such as when a speaker indicates
three distinct referents while saying, ‘I saw him and him, but not him.’ Like
pronouns in sign language, these gestures are unlistable (a speaker may point
to any location in space), and they disambiguate the reference of the pronouns
they accompany. The present proposal is that the nonfirst singular sign language
pronoun is lexically and syntactically ambiguous, just as ‘him’ is in the English
example; however, when it combines with a gesture, it may be directed at any
location, and its reference is disambiguated.

6 In current syntactic theory of the Minimalist Program, indices have been removed from syntactic
representations.

So far, my proposal is almost just like Liddell’s. The difference comes when
we move to verb agreement. First, note that although they combine linguistic
and gestural components, I have not refrained from calling pointing in sign
language “pronouns.” As pronouns, they are present in the syntactic structure
and participate in syntactic and semantic processes. For example, I expect that
sign language pronouns adhere to conditions on pronouns such as the principles
of the Binding Theory of Chomsky (1981) or their equivalent. I know of no
evidence against this.
I take the same approach to verb agreement. Again following Liddell, I am
convinced that a combination of linguistic and gestural explanations is necessary
to account for the observed forms of verbs. However, unlike Liddell, I do not
take this as reason to reject the notion that verbs agree. In particular, I have
given reasons above to support the claim that a class of verbs in ASL agree in
person (first vs. nonfirst) and number (singular, dual, and multiple at least) with
their subject and object. Hence, my proposal is that there is a process of verb
agreement whereby verbs agree with their arguments in person and number, but
the realization of agreement must also ensure that coindexing corresponds to
the use of the same locus, a process which must involve a gestural component.
I believe that Meier (2002) concurs with this conclusion when he states that
“although the form of agreement may be gestural, the integration of these
gestural elements into verbs is linguistically determined.”
The proposal that sign language verbs combine linguistic and gestural compo-
nents is different from the English example in that for sign language, both the lin-
guistic and gestural components use the same articulators. Okrent (this volume)
discusses at length this aspect of Liddell’s proposal, and provides helpful infor-
mation about gesture accompanying spoken languages by which to evaluate the
proposal that verbs combine linguistic and gestural elements in sign language.

10.5.1 Predictions of this account


The main claim of this account is that while a first person/nonfirst person dis-
tinction exists in the syntax of ASL, no further person distinctions are made in
the syntax. This is not, of course, to say that all nonfirst forms are equivalent to
each other; clearly, coreference requires use of the same location; however, ac-
cording to this proposal this is not a syntactic requirement, but one of a different
level. If a signer intends to pick out the same referent twice, then both instances
must use the same R locus. This is so for all instances of intended coreference
within a discourse, unless a new location is established for a referent, either
through repeating a location-assigning procedure or through processes that dis-
place referents (Padden 1983). Within a sentence, multiple occasions of picking
out the same referent will also be subject to this requirement. One type of such
within-sentence coreference is the two uses of the pronoun in sentences like the
ASL sentence meaning ‘He_j thinks he_j will win.’ Another type is the corefer-
ence between a noun phrase and its “copy” in “subject pronoun copy” (Padden
1983). The various mechanisms for picking out a referent must be directed at
the same location. However, what that location is need not be specified in the
syntax. Any two instances of coindexing must employ the same location. This
does not mean, however, that a categorial distinction is being made between the
various possible locations for that referent.
In this context, note the observation made by McBurney (this volume) that
no two lexical signs contrast for their locations in “neutral space.” Apparently,
the spatial contrasts used in agreement are not lexically relevant. The claim
here is that they are also not syntactically relevant. If the difference between
various nonfirst locations is irrelevant to the syntax, this means that no syntactic
principle or process would treat a location on the right, say, differently from
a location on the left. On the other hand, the syntax may treat the first person
forms differently from the nonfirst forms as a group. Various arguments that
the first person form is distinct in this way were offered in the discussion of
Meier’s (1990) proposal for a two person system. Another argument he offered
has to do with the use of the first person form in what is commonly known as
“role shifting,” to which I would like to add some comments.
Meier observed that the first person pronoun may pick out a referent other
than the signer, in contexts of “role shifting.” This technique is used for reported
speech, but also more broadly to indicate that a scene is being conveyed from the
point of view of someone other than the signer. Just as in the English example,
‘Bush said, “I won,” ’ or perhaps, ‘Bush is like, “wow, I won!” ’ the first person
pronoun may be used when quoting the words or thoughts or perspective of
another.
What is important for the present purposes is that this special characteristic is
reserved for the first person pronoun. Other pronouns do not “shift” during role
shift (as pointed out by Engberg-Pedersen 1995; also Liddell 1994). In Lillo-
Martin and Klima (1990) and Lillo-Martin (1995) we compared the special
characteristics of the first person pronoun to logophoric elements in languages
such as Ewe and Gokana. These elements have special interpretations in certain
contexts, such as reported speech or verbs reflecting point of view. Many other
proposals have also been made regarding the analysis of “role shift” (see, for
example, Engberg-Pedersen 1995; Poulin and Miller 1995). Whatever the best
analysis for the shifting nature of the first person pronoun in ASL, it is clear that
the grammar must be able to refer to the first person form separately from the
nonfirst forms. However, it never seems to be necessary to refer to the nonfirst
form in location “a” distinct from the nonfirst form in location “b.”
Another prediction of this account is that there may be certain special situ-
ations in which the first/nonfirst contrast is evident, but the contrast between
different nonfirst locations is neutralized. One such special situation may come
up in children acquiring ASL. Loew (1984) observed that a three-year-old child
acquiring ASL went through a stage in which different nonfirst characters in a
single discourse were all assigned the same locus: a so-called “stacking” error.
At this point, however, children do not generally make errors with the first per-
son form. This contrast has generally been seen as one showing that children
acquire correct use of pronouns and verb agreement for present referents earlier
than they do for nonpresent referents. However, it is also compatible with the
suggestion that they might first acquire the first/nonfirst contrast, and only later
acquire the distinction between different nonfirst locations.
Poizner et al. (1987) observed the opposite problem in one of their aphasic
signers, Paul D. He made numerous errors with verbal morphology. One exam-
ple cited is the use of three different spatial loci when the same location was
required. However, this error was with spatial verbs (i.e. with verbs that mark
locative arguments), not verbs marking agreement with human arguments. It
would be interesting to know if Paul D made any similar errors with agreeing
verbs. It would also be helpful to re-evaluate data from children, aphasics, and
perhaps other special populations, to look for evidence that the first–nonfirst
contrast may be treated differently from the contrast between various nonfirst
forms in these populations.

10.6 Other alternatives


The view of agreement and pronouns suggested here is a blend of ideas from
various sources. In this section I mention a few other views and try to clarify
what my view shares with the others, and how they are distinct. Aronoff et al.
(in submission) view agreement in general as a process of copying referential
indices. Syntactically, this process is universal; but morphological manifes-
tations of agreement vary greatly from language to language. Their view of
agreement in sign languages is that it consists of copying referential indices,
such that the referential index of the source argument is copied onto the initial
location segment of agreeing verbs, and the index of the goal argument onto
the final location segment. They specify source and goal locations rather than
subject and object, following Meir’s (1998) analysis of agreement as marking
source–goal by the path of movement, and subject–object by “facing.” Then, in
addition to specifying locations, the process of agreement provides information
about the verb’s facing (toward the object).

I find their view of agreement as copying (or perhaps checking) referential
indices felicitous. Aronoff et al. present empirical evidence for this view from
a fascinating example of a rare type of spoken language agreement that takes
a form remarkably parallel to that of sign languages. For example, in Bainouk
(a Niger-Congo language), certain forms (such as loan words) are outside of
the regular gender agreement system. In such cases, rather than showing no
agreement at all in the relevant contexts, an agreeing form may copy the first
consonant–vowel (CV) of the noun stem. This kind of system is called “literal
alliterative agreement.”
McBurney (this volume) provides an extensive array of reference systems
across spoken languages, none of which pick out referents in the same way
as signed languages. Without having had access to the facts of Bainouk, she
suggests that a hypothetical spoken language might copy some phonological
feature of a noun’s root for further reference. It would seem that Bainouk
provides a real example of a language employing such means in its agreement
system. Granted, this type of agreement is very rare in spoken languages, but
it indicates that the human language faculty has the capacity to develop an
agreement system that uses copying of the form of one element onto another.
However, Aronoff et al. point out an important difference between literal
alliterative agreement and agreement in sign language: “the R-loci that nouns
are associated with are not part of their phonological representations and are
not lexical properties of the nouns in any way.” This detail points to the problem
that led Liddell – and me – to posit a gestural component to agreement in sign
language. Aronoff et al. reject this idea explicitly, citing evidence (such as that
discussed above) for the linguistic status of agreement. As I have stated, unlike
Liddell I do not reject the linguistic status of agreement.
Another proposal which maintains the linguistic status of agreement while
admitting a gestural component is that of Mathur (2000). Mathur is concerned
with answering Liddell’s challenge to specify phonologically the output of the
agreement rule. Mathur’s proposal represents an attempt to go beyond spec-
ifying the path movement (and facing) of agreeing verbs, in order to fully
characterize the changes in location, movement, orientation, and handedness
that verbs experience under agreement; this includes the different outputs for
different agreeing forms of the same verb. To do so, Mathur suggests envision-
ing the base form of the verb within a sphere that rotates. The sphere is marked
with endpoints that move to align with the loci of the subject and object. The
output of agreement is then a result of the base form of the verb, alignment of
the sphere, and phonetic constraints on articulation.
Mathur recognizes the problem posed by Liddell regarding the analyzability
of loci. In response he follows Liddell in concluding that the linguistic compo-
nent must connect with the gestural for the specification of loci: specifically, for
the specification of the endpoints with which the sphere aligns. Mathur (2000)
adopts the Theory of Distributed Morphology (Halle and Marantz 1993), under
which the morphological component is reserved for processes of affixation. In
Mathur’s model of align-sphere, agreement is not a process of affixation; rather,
it is a “re-adjustment” rule. He hence puts agreement at the phonological level.
Mathur discusses extensively the phonological effects of agreement, and he
shows evidence that the location of endpoints does have an effect on the output
of the agreement process. For example, for a right-handed signer, articulating
an agreeing verb such as WARN with a subject location on the right and object
location on the left presents no problem. However, if the subject location is
on the left and the object location is on the right, the regular output of the
alignment process would violate phonetic constraints on ASL (in fact, it would
be impossible to articulate, given the physical constraints of the human body).
Instead, some other form must be used, such as changing hand dominance for
this sign or omitting the subject agreement marker. This shows that the output
of the phonological process is affected by the specific locations used in the sign.
This conclusion is compatible with Mathur’s proposal that the whole process
of agreement is phonological, and that the phonological component accesses
the gestural.
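To make the align-sphere idea more concrete, here is a minimal sketch in Python. It is my illustrative reconstruction, not Mathur's implementation: the two-dimensional coordinates, the 0.5 threshold for "far" left/right space, and all function names are assumptions, and the rotation of the sphere is simplified here to aligning the path's endpoints with the two loci.

def align_sphere(subj_locus, obj_locus, dominant="right"):
    """Align the endpoints of an agreeing verb's path with the subject
    and object loci; return None if the aligned output violates a toy
    stand-in for the phonetic constraints on articulation."""
    start, end = subj_locus, obj_locus
    # A right-handed signer cannot begin a sign like WARN in far
    # contralateral (left) space and cross to far ipsilateral (right)
    # space; the mirror-image configuration is blocked for a left hand.
    if dominant == "right" and start[0] < -0.5 and end[0] > 0.5:
        return None
    if dominant == "left" and start[0] > 0.5 and end[0] < -0.5:
        return None
    return (start, end)

# Subject locus on the right, object locus on the left: articulable.
print(align_sphere((0.6, 0.5), (-0.6, 0.5)))   # ((0.6, 0.5), (-0.6, 0.5))
# Subject on the left, object on the right: no well-formed output; one
# attested repair is to switch hand dominance for this sign.
print(align_sphere((-0.6, 0.5), (0.6, 0.5)))                   # None
print(align_sphere((-0.6, 0.5), (0.6, 0.5), dominant="left"))  # articulable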
The idea that agreement as a re-adjustment rule is purely phonological
leads to the expectation that it has no syntactic effects, since phonological
re-adjustment rules do not apply until after syntax. However, we have seen
evidence for syntactic effects of agreement in Section 10.3.3. Independently,
Rathmann and Mathur (this volume) have identified additional syntactic effects
of agreement. Accordingly, the more recent work develops Mathur’s (2000)
proposal by putting forth a model of agreement that contains an explicit “ges-
tural space” connecting conceptual structure with the articulatory–perceptual
interface, but also including syntactic aspects of agreement within the syntac-
tic structure. In this way, the syntactic effects can be accounted for without
losing the detailed account of the articulation of agreeing verbs developed
previously.

10.7 Conclusions
I have argued that there is a linguistic process of agreement in ASL, but I
have agreed with Liddell that in order to account for this process fully some
integration of linguistic and gestural components must be made. It is interesting
that I come to this conclusion given my working assumptions and theoretical
framework, which are quite distinct from his in many ways.
Much further work remains to be done on this issue. In particular, stronger
evidence for the interaction of verb agreement with syntax should be sought.
Additional evidence regarding the status of the various nonfirst loci is also
needed. Another domain for future research concerns the very similar problems
that arise for spatial verbs and classifiers. Although these predicates do not
indicate human arguments, they make use of space in a way that poses the same
challenge to componential analysis as the agreeing verbs. This challenge should
be further investigated under an approach that combines gestural and linguistic
components.
Finally, in answering the question that forms the title of this chapter, I have fo-
cused on separating phonology from syntax. A deeper understanding of modal-
ity effects must explore this putative separation further, and also delve into the
phonological component, examining where modality effects are found – and
not found – within this part of the grammar.

Acknowledgments
This research was supported in part by NIH grant number DC-00183. My
thoughts on the issues discussed here profited from extensive discussions about
verb agreement in sign language with Gaurav Mathur. I would also like to
thank the organizers, presenters, and audience of the Texas Linguistics Society
meeting on which this volume is based for an energizing and thought-provoking
meeting. It was a pleasure to attend a conference which was so well-focused and
informative. I have also had productive conversations about these issues with
many people, among whom Karen Emmorey and Richard Meier were especially
helpful. Richard Meier also provided helpful comments on the written version.
Finally, I would like to acknowledge the graduate students in my course on
the structure of ASL who – several years ago – encouraged me to consider
the idea that the spatial “problems” of ASL could be addressed by an analysis
employing gesture.

10.8 References
Ahlgren, Inger. 1990. Deictic pronouns in Swedish and Swedish Sign Language. In
Theoretical issues in sign language research, Vol. 1: Linguistics, ed. Susan Fischer
and Patricia Siple, 167–174. Chicago, IL: University of Chicago Press.
Aronoff, Mark, Irit Meir, and Wendy Sandler. In submission. Universal and particular
aspects of sign language morphology.
Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Lan-
guage. Doctoral dissertation, Boston University, Boston, MA.
Bos, Heleen. 1994. An auxiliary in Sign Language of the Netherlands. In Perspectives
on sign language structure: Papers from the 5th International Symposium on sign
language research, ed. Inger Ahlgren, Brita Bergman and Mary Brennan, 37–53.
Durham: International Sign Linguistics Association, University of Durham.
Chomsky, Noam. 1977. Essays on form and interpretation. New York: North-Holland.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
DeMateo, Asa. 1977. Visual imagery and visual analogues in American Sign Language.
In On the other hand: New perspectives on American Sign Language, ed. Lynn
Friedman, 109–136. New York: Academic Press.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum-
Verlag.
Engberg-Pedersen, Elisabeth. 1995. Point of view expressed through shifters. In Lan-
guage, gesture, and space, ed. Karen Emmorey and Judy Reilly, 133–154. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Fischer, Susan D. 1974. Sign language and linguistic universals. Paper presented
at Actes du Colloque Franco-Allemand de Grammaire Transformationnelle,
Tübingen.
Fischer, Susan D. 1996. The role of agreement and auxiliaries in sign language. Lingua
98:103–120.
Halle, Morris and Alec Marantz. 1993. Distributed Morphology and the pieces of in-
flection. In The view from Building 20: Essays in linguistics in honor of Sylvain
Bromberger, ed. Ken Hale and Samuel J. Keyser, 111–176. Cambridge, MA: MIT
Press.
Jackendoff, Ray. 1997. The architecture of the language faculty. Cambridge, MA: MIT
Press.
Janis, Wynne. 1995. A crosslinguistic perspective on ASL verb agreement. In Language,
gesture, and space, ed. Karen Emmorey and Judy Reilly, 195–223. Hillsdale, NJ:
Lawrence Erlbaum Associates.
Japan Sign Language Research Institute (Nihon syuwa kenkyuusho), ed. 1997. Japanese
Sign Language dictionary (Nihongo-syuwa diten). Tokyo: Japan Federation of the
Deaf.
Liddell, Scott K. 1990. Four functions of a locus: Reexamining the structure of space
in ASL. In Sign language research: Theoretical issues, ed. Ceil Lucas, 176–198.
Washington, DC: Gallaudet University Press.
Liddell, Scott K. 1994. Tokens and surrogates. In Perspectives on sign language struc-
ture: Papers from the 5th International Symposium on Sign Language Research, ed.
Inger Ahlgren, Brita Bergman and Mary Brennan, 105–119. Durham: International
Sign Linguistics Association, University of Durham.
Liddell, Scott K. 1995. Real, surrogate, and token space: Grammatical consequences
in ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly,
19–41. Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott K. 2000. Indicating verbs and pronouns: Pointing away from agreement. In
The signs of language revisited: An anthology to honor Ursula Bellugi and Edward
Klima, ed. Karen Emmorey and Harlan Lane, 303–320. Mahwah, NJ: Lawrence
Erlbaum Associates.
Lillo-Martin, Diane. 1986. Two kinds of null arguments in American Sign Language.
Natural Language and Linguistic Theory 4:415–444.
Lillo-Martin, Diane. 1991. Universal grammar and American Sign Language: Setting
the null argument parameters. Dordrecht: Kluwer.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language.
In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 155–170.
Hillsdale, NJ: Lawrence Erlbaum.
Lillo-Martin, Diane. 1997. The modular effects of sign language acquisition. In Relations
of language and thought: The view from sign language and deaf children, ed.
Marc Marschark, Patricia Siple, Diane Lillo-Martin, Ruth Campbell and Victoria
Everhart, 62–109. New York: Oxford University Press.
Lillo-Martin, Diane, and Edward S. Klima. 1990. Pointing out differences: ASL pro-
nouns in syntactic theory. In Theoretical issues in sign language research, Vol. 1:
Linguistics, ed. Susan D. Fischer and Patricia Siple, 191–210. Chicago, IL:
University of Chicago Press.
Loew, Ruth. 1984. Roles and reference in American Sign Language: A developmental
perspective. Doctoral dissertation, University of Minnesota, MN.
Mathur, Gaurav. 2000. Verb agreement as alignment in signed languages. Doctoral
dissertation, MIT, Cambridge, MA.
Mathur, Gaurav and Christian Rathmann. 2001. Why not GIVE-US: an articulatory
constraint in signed languages. In Signed languages: Discoveries from international
research, ed. V. Dively, M. Metzger, S. Taub and A. Baer, 1–26. Washington, DC:
Gallaudet University Press.
Meier, Richard P. 1982. Icons, analogues, and morphemes: The acquisition of verb
agreement in ASL. Doctoral dissertation, University of California, San Diego, CA.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical is-
sues in sign language research, ed. Susan D. Fischer and Patricia Siple, 175–190.
Chicago, IL: University of Chicago Press.
Meier, Richard P. 2002. The acquisition of verb agreement: Pointing out arguments for
the linguistic status of agreement in signed languages. In Current developments
in the study of signed language acquisition, ed. Gary Morgan and Bencie Woll.
Amsterdam: John Benjamins.
Meir, Irit. 1998. Thematic structure and verb agreement in Israeli Sign Language. Doc-
toral dissertation, The Hebrew University of Jerusalem.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.
Padden, Carol A. 1983. Interaction of morphology and syntax in American Sign Lan-
guage. Doctoral dissertation, University of California, San Diego, CA.
Padden, Carol A. 1990. The relation between space and grammar in ASL verb mor-
phology. In Sign language research: Theoretical issues, ed. Ceil Lucas, 118–132.
Washington, DC: Gallaudet University Press.
Poizner, Howard, Edward S. Klima, and Ursula Bellugi. 1987. What the hands reveal
about the brain. Cambridge, MA: MIT Press.
Poulin, Christine and Christopher Miller. 1995. On narrative discourse and point of view
in Quebec Sign Language. In Language, gesture, and space, ed. Karen Emmorey
and Judy Reilly, 117–131. Hillsdale, NJ: Lawrence Erlbaum Associates.
Quadros, Ronice Müller de. 1999. Phrase structure of Brazilian Sign Language. Doctoral
dissertation, Pontifı́cia Universidade Católica do Rio Grande do Sul.
Rathmann, Christian. 2000. The optionality of Agreement Phrase: Evidence from signed
languages. Masters report, University of Texas, Austin, TX.
Sandler, Wendy. 1989. Phonological representation of the sign: Linearity and nonlin-
earity in ASL phonology. Dordrecht: Foris.
Sandler, Wendy. 1993. Sign language and modularity. Lingua 89:315–351.
Sandler, Wendy, and Diane Lillo-Martin. 2001. Natural sign languages. In The handbook
of linguistics, ed. Mark Aronoff and Jamie Rees-Miller, 533–562. Malden, MA:
Blackwell.
Senghas, Ann. 2000. The development of early spatial morphology in Nicaraguan Sign
Language. In The Proceedings of the Boston University Conference on Language
Development, ed. S.C. Howell, S.A. Fish and T. Keith-Lucas, 696–707. Boston,
MA: Cascadilla Press.
Senghas, Ann, Marie Coppola, Elissa Newport, and Ted Supalla. 1997. Argument struc-
ture in Nicaraguan Sign Language: The emergence of grammatical devices. In The
Proceedings of the Boston University Conference on Language Development, ed.
E. Hughes, M. Hughes and A. Greenhill, 550–561. Boston, MA: Cascadilla Press.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwan Sign Language. In Theoretical
issues in sign language research, Volume 1: Linguistics, ed. Susan D. Fischer and
Patricia Siple, 211–228. Chicago, IL: University of Chicago Press.
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American
Sign Language. Doctoral dissertation, University of California, San Diego, CA.
van der Hulst, Harry. 2000. Modularity and modality in phonology. In Phonological
knowledge: Conceptual and empirical issues, ed. Noel Burton-Roberts, Philip Carr
and Gerard Docherty. Oxford: Oxford University Press.
11 Applying morphosyntactic and phonological
readjustment rules in natural language negation

Roland Pfau

11.1 Introduction
As is well known, negation in natural languages comes in many different forms.
Crosslinguistically, we observe differences concerning the morphological char-
acter of the Neg (negation) element as well as concerning its structural position
within a sentence. For instance, while many languages make use of an indepen-
dent Neg particle (e.g. English and German), in others, the Neg element is affixal
in nature and attaches to the verb (e.g. Turkish and French). Moreover, a Neg
particle may appear in sentence-initial position, preverbally, postverbally, or in
sentence-final position (for comprehensive typological surveys of negation, see
Dahl 1979; 1993; Payne 1985).
In this chapter I am concerned with morphosyntactic and phonological prop-
erties of sentential negation in some spoken languages as well as in German
Sign Language (Deutsche Gebärdensprache or DGS) and American Sign Lan-
guage (ASL). Sentential negation in DGS (as well as in other sign languages) is
particularly interesting because it involves a manual and a nonmanual element,
namely the manual Neg sign NICHT ‘not’ and a headshake that is associated
with the predicate. Despite this peculiarity, I show that on the morphosyntactic
side of the Neg construction, we do not need to refer to any modality-specific
structures and principles. Rather, the same structures and principles that allow
for the derivation of negated sentences in spoken languages are also capable of
accounting for the sign language data.
On the phonological side, however, we do of course observe modality-specific
differences; those are due to the different articulators used. Consequently, in
a phonological feature hierarchy for signed languages (like the one proposed
for ASL by Brentari 1998), reference must be made to qualitatively differ-
ent features. In order to investigate the precise nature of the modality effect,
I first show how in some spoken languages certain readjustment rules may
affect phonological or morphosyntactic features in the context of negation.
In the Western Sudanic language Gã, for example, the Neg suffix triggers a
change of tone within the verb stem to which it is attached. I claim that, in
exactly the same way, phonological readjustment rules in DGS may change
the surface form of a given sign by accessing the nonmanual node of a feature
hierarchy.
This chapter is organized as follows: In Section 11.2 I give a short sketch of
the basic assumptions of the theoretical framework I adopt, namely the frame-
work of Distributed Morphology. Once equipped with the theoretical tools, I
demonstrate in Section 11.3 how negated structures in various languages can
uniformly be derived within this framework. In order to exemplify the relevant
mechanisms, I use negation data from French (Section 11.3.1), Háusá (Sec-
tion 11.3.2), Gã (Section 11.3.3), and DGS (Section 11.3.4). In Section 11.3.5
the analysis given for DGS is motivated by a comparison of DGS and ASL.
The discussion of the Gã and DGS examples illustrates the important role of
readjustment rules. Further instances of the application of different readjust-
ment rules in the context of negation are presented in Section 4. Finally, in
Section 5, I focus on possible modality-specific properties of negation. In par-
ticular, I compare properties of tone spreading in spoken languages to properties
of nonmanual spreading in DGS.

11.2 Distributed morphology


In my brief description of the central ideas of Distributed Morphology I con-
centrate on those aspects of the theory that are relevant for the subsequent
discussion of spoken and signed language data (for a more comprehensive
account, see Harley and Noyer 1999). In the view of Distributed Morphology
(Halle 1990; 1994; Halle and Marantz 1993; 1994), morphology is not restricted
to one component of the grammar; rather it is taken to be distributed among
several different components. Consequently, word formation may take place at
any level of grammar by operations such as head movement and merger of ad-
jacent heads. Figure 11.1 illustrates the five-level conception of the grammar
as adopted by Halle and Marantz (1993), also indicating what operations are
assumed to take place at what level. Three aspects of the theory are of major
importance in the present context. First, the theory of Distributed Morphology
(DM) is separationistic in nature in that it adopts the idea that the mechanisms
that are responsible for producing the form of syntactico-semantically complex
expressions are separated from the mechanisms that produce the form of the
corresponding phonological expression. Thus, one of the core assumptions of
DM is that the terminal nodes that are manipulated at the syntactic levels LF
(Logical Form), DS (Deep Structure) and SS (Surface Structure) consist of
morphosyntactic and semantic features only. The assignment of phonological
features to those morphosyntactic feature bundles does not take place until the
level of Morphological Structure (MS), which is the interface between syntax
and phonology. The mechanism responsible for the assignment of phonological
features is Vocabulary insertion.
Figure 11.1 Five-level conception of grammar (Source: Halle and Marantz 1993)

DS (Deep Structure): manipulation of morphosyntactic and semantic feature
bundles (via movement and merger)
→ SS (Surface Structure): feeds both MS and LF (Logical Form)
→ MS (Morphological Structure): morphological operations (e.g. merger,
fusion), addition of morphemes (e.g. Agr), application of readjustment rules
→ PF (Phonological Form): assignment of phonological features via Vocabulary
insertion, phonological readjustment rules

A second important characteristic is that there is no one-to-one relation be-
tween terminal elements in the syntax and phonological pieces at PF (Phonologi-
cal Form). Possible mismatches are the result of operations that manipulate
terminal elements at MS. The operation fusion, for instance, reduces the num-
ber of independent morphemes by fusing two sister nodes into a single node;
for example, fusion of Tns and AgrS into a single morpheme in English and
fusion of case and number morphemes in Russian (Halle 1994). Moreover, only
at MS may morphemes be inserted; subject–verb agreement, for example, is
implemented by adjunction of an Agr morpheme to the Tns node. Subsequently,
features of the subject are copied onto the Agr node.
Third, at the levels of MS and PF, readjustment rules may apply. One set
of rules manipulates morphosyntactic features in the context of other features;
when a rule deletes a feature, it is called an impoverishment rule (see Halle
1997; Noyer 1998). Since the selection of a Vocabulary item crucially depends
on the feature composition of a terminal node, these morphosyntactic readjust-
ment rules must apply before Vocabulary insertion takes place. The other set
of rules changes the phonological form of already inserted Vocabulary items
(phonological readjustment); therefore, these rules must apply after Vocabulary
insertion.1 Various examples for the different kinds of readjustment rules are
given below.

1 Note that phonological readjustment rules are triggered by the morphological environment; that
is, an affix causes a phonological change in the stem it attaches to (e.g. umlaut formation in
some German plural nouns: Haus ‘house’ → Häus-er ‘house-pl’). They are therefore to be
distinguished from other phonological alterations like assimilation rules (e.g. place assimilation
in the words imperfect vs. indefinite) or final devoicing.
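As a reader's aid, the ordering of Vocabulary insertion and phonological readjustment can be sketched in a few lines of Python, using the umlaut case from footnote 1. The feature encoding, the toy Vocabulary, and all names are my illustrative assumptions, not part of the formal apparatus of Distributed Morphology.

# A terminal node is modeled as a set of morphosyntactic features; a
# Vocabulary item pairs such a feature set with a phonological exponent.
VOCABULARY = [
    (frozenset({"num:pl"}), "-er"),   # illustrative German plural item
    (frozenset(), ""),                # elsewhere (zero) item
]

def insert_vocabulary(node):
    """Vocabulary insertion: choose the most specific item whose
    features are a subset of the terminal node's feature bundle."""
    candidates = [(feats, exp) for feats, exp in VOCABULARY if feats <= node]
    return max(candidates, key=lambda c: len(c[0]))[1]

def umlaut(stem):
    """Phonological readjustment: front the stem vowel (Haus -> Häus-er).
    Necessarily ordered AFTER insertion, since the rule is triggered by
    the inserted affix and manipulates phonological material."""
    for target, repl in (("au", "äu"), ("a", "ä"), ("o", "ö"), ("u", "ü")):
        if target in stem:
            return stem.replace(target, repl, 1)
    return stem

def spell_out(stem, node):
    exponent = insert_vocabulary(node)      # first: insertion,
    if exponent == "-er":                   # then: readjustment, triggered
        stem = umlaut(stem)                 # by the affixal environment
    return stem + exponent

print(spell_out("haus", {"num:pl"}))   # 'häus-er'
print(spell_out("haus", set()))        # 'haus'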
11.3 The derivation of negated sentences


In this section I take a closer look at sentential negation in some spoken lan-
guages as well as in German Sign Language (DGS). In particular, I show how
negated structures are derived within the DM framework by means of verb
movement (or movement of another head constituent) in the syntax, addition
of agreement morphemes at the level of MS, Vocabulary insertion and, some-
times, through the application of morphosyntactic or phonological readjustment
rules.
My spoken language sample (Sections 11.3.1–11.3.3, and Section 11.4) is
of course not a random one. All the languages I discuss either show split
negation (at least optionally) or the data involve the application of readjust-
ment rules; for some of the languages both are true. This particular choice
of data is due to the fact that, in my opinion, DGS shows exactly the same
pattern, that is, split negation and phonological readjustment. Moreover, the
five languages that I will analyze in more detail – namely French, Háusá,
Gã, DGS, and ASL – differ from each other in important respects as far
as the syntactic structures and the syntactic status of their Neg elements are
concerned.
Within a generative framework, the distribution of Neg elements in a sen-
tence is captured by the assumption that the functional element Neg heads a
phrase of its own, a negation phrase (NegP). Within a NegP, two positions
are available, namely the head position and the specifier position. In case
both positions are filled we are dealing with an instance of split negation (as,
for example, in French). Whenever the element in head position is attached
to the verb stem in the course of the derivation by moving the verbal head
to Neg, we observe morphological negation (as, for example, in Turkish).
The specifier (Spec) and the head of NegP stand in a Spec–head relation-
ship. If Spec is empty, this position must be filled by an empty operator in
order to satisfy the so-called Neg-criterion (Haegeman and Zanuttini 1991;
Haegeman 1995).2

11.3.1 French
Probably the best known language with split negation is French. In French,
negation manifests itself in the two Neg elements ne and pas which frame the
verb; in colloquial speech the preverbal element ne may be dropped. Consider
the examples in (1).

2 The Neg-criterion: (a) A Neg-operator must be in a Spec-head configuration with an X◦ [Neg];
(b) An X◦ [Neg] must be in a Spec-head configuration with a Neg-operator. A Neg-operator is
a negative phrase in a scope position; taken from Haegeman (1995:106).
(1) Split negation in French


a. Nous oubli-er-ons les choses désagréables.
We forget-fut-1.pl the things unpleasant
‘We will forget the unpleasant things.’
b. Nous n’-oubli-er-ons pas les choses désagréables.
We neg-forget-fut-1.pl neg the things unpleasant
‘We will not forget the unpleasant things.’

Following Pollock (1989) and Ouhalla (1990), I assume that ne is the head
of the NegP while pas is its specifier. The tree structure in (2a) is adopted
from Pollock 1989; however, due to DM assumptions, there is no agreement
projection present in the syntax. In order to derive the surface serialization, the
verb is raised. It adjoins first to Neg and, in a second step, the newly formed
complex is adjoined to Tns. These movement operations are indicated by the
arrows in (2a).

(2) a. Verb movement in French; b. Adjoined structure under Tns after
insertion of AgrS node
[Tree diagrams not reproduced. In (2a), pas sits in SpecNegP (on the left);
the verb oublie raises to the Neg head (ne-) and the complex then adjoins
to Tns (-er). In (2b), the adjoined structure under Tns is
[Tns [Neg Neg V] [Tns Tns AgrS]], i.e. the linear order Neg–V–Tns–AgrS.]
The tree within the circle in (2b) shows the full detail of the adjoined struc-
ture under the Tns node after verb movement.3 At MS subject agreement is

3 The reader will notice that the derivation of the complex verb in the tree structure (2a) – as well as
in the structures to follow – involves a combination of left and right adjunction. I assume that the
prefix or suffix status of a given functional head is determined by its feature composition; French
Neg, for example, is marked as a prefix while Tns constitutes a suffix. Rajesh Bhatt (personal
communication) points out to me that within the DM framework, such an assumption may be
superfluous since DM provides a mechanism, namely merger, which is capable of rearranging
hierarchical structures at MS. In the present context, however, space does not permit me to go
into this matter any further.
implemented as a sister node of Tns (in this and the following structures,
the agreement nodes that are inserted at MS are marked by a square). As a
consequence, the derived structure of the French verb is [Neg–Verb–Tns–AgrS],
as shown by the verb in (1b).
The Vocabulary item for French Neg (that is, for the negative head) is given
in (3); no readjustment rules apply.

(3) Vocabulary item for Neg in French


[neg] ↔ /ne-/

Consequently, the characteristics of French negation are as follows: French
shows split negation. The negative head ne is affixal in nature and is attached
to the verb in the syntax while pas is a free particle. Since ne may be dropped
in colloquial speech, we must assume that the head of NegP may be filled by
an empty affix (Pollock 1989; Ouhalla 1990).4 In French, NegP stands below
TnsP and has its specifier on the left-hand side.

11.3.2 Háusá
Háusá is a Chadic language and is the most widely spoken language of West
Africa. It is the lingua franca of Northern Nigeria, where it is the first language
of at least 25 million people. There are also about five million speakers in
neighboring Niger.
The properties of negation in Háusá are somewhat different from the ones
observed in French. As can be seen in the affirmative sentence in (4a), the tense–
aspect and agreement morphemes in Háusá surface preverbally as a functional
complex while the verb remains uninflected (Caron 1990).5 In all aspects except
the progressive, negation consists of two elements: the first Neg element, a low-
toned bà, is part of the functional complex; the second Neg element, a high-toned
bá, appears in sentence-final position (4b).

4 Otherwise, Colloquial French would present us with a situation that is at odds with the principles
of X-bar theory, namely a situation where the head of a projection is missing. Therefore, the
head of the French NegP is assumed to be realized as an abstract morpheme in much the same
way that the English Agr paradigm is realized in terms of abstract elements. Ouhalla (1990)
proposes to extend this analysis to the group of languages in which the negation element seems
to have adverb-like properties: in all of these, he concludes, the Neg elements are specifiers of
a NegP whose head is an abstract morpheme.
5 One might be tempted to analyze the preverbal functional complex as being prefixed to the
verb. Such an analysis is, however, not corroborated by the facts since certain emphatic and
adverbial particles may appear between the functional complex and the verb, as for example
in Náá kúsá káámàà shı́ ‘1.sg.perf almost catch him’ (‘I almost caught him’; example from
Hartmann 1999).
Morphosyntactic and phonological readjustment rules 269

(4) Split negation in Háusá (Hartmann 1999)


a. Kàndé tá-kàn dáfà kı́ı́fı́ı́.
Kandé 3.sg.f-hab cook fish
‘Kandé usually cooks fish.’
b. Kàndé bà-tá-kàn dáfà kı́ı́fı́ı́ bá.
Kandé neg-3.sg.f-hab cook fish neg
‘Kandé usually does not cook fish.’
In the syntactic tree structure, the first Neg element is the head of NegP
while the second Neg element occupies a right specifier of NegP. Hartmann
(1999) assumes that the Háusá verb is not moved at all. What is moved – at
least in negated sentences – is the negative head, which raises and adjoins to
Asp in the syntax. I also follow Hartmann (1999) in assuming that there is
no Tns projection in Háusá syntax; rather, all temporal information is sup-
plied by aspectual suffixes. The tree in (5a) illustrates the relevant movement
operation.
(5) a. Neg movement in Háusá; b. Adjoined structure under Asp after
insertion of AgrS node
[Tree diagrams not reproduced. In (5a), the negative head bà- raises from
Neg to Asp (-kàn); SpecNegP, on the right, hosts the second Neg element,
and the verb dáfà remains within VP. In (5b), the adjoined structure under
Asp is [Asp Neg [Asp AgrS Asp]], i.e. the linear order Neg–AgrS–Asp.]

Again, the circle in (5b) gives full details of the adjoined structure. At MS
an agreement morpheme is inserted as a sister of the Asp node. Therefore, the
structure of the Háusá preverbal functional complex is [Neg–AgrS–Asp];
cf. (4b).
Example (6) shows the Vocabulary item for the negative head in Háusá, a
low tone prefix. Again, as in French, no readjustment of any kind takes place.

(6) Vocabulary item for Neg in Háusá


[neg] ↔ /bà-/

In Háusá, AgrS and Asp are sometimes fused; this is true, for example, for the
perfective aspect. As mentioned above, fusion reduces the number of terminal
nodes and only one Vocabulary item that matches the morphosyntactic features
of the fused node is inserted; cf. (7a). In a negative perfective sentence, fu-
sion may even take place twice and consequently only one Vocabulary item is
inserted for the whole functional complex; cf. (7b).

(7) Fusion of functional heads in Háusá (Hartmann 1999)


a. Kàndé táá dáfà kı́ı́fı́ı́.
Kandé 3.sg.f.asp(perf) cook fish
‘Kandé cooked fish.’
b. Ubán-kı̀ bài bı́yá kúδı́ı́ bá.
Father-your neg.3.sg.asp(perf) pay money neg
‘Your father did not pay the money.’

To sum up, we may note the following: As in French, we observe split negation
in Háusá which is a combination of morphological and particle negation; the
negative head bà- attaches to the aspect node while the particle bá appears in
sentence-final position. The syntactic structure, however, is different. The NegP
stands below Asp and – in contrast to the French facts – has a right specifier
position.

11.3.3 Gã (Gan)


The third spoken language I describe in more detail is the language Gã. Gã
is a Western Sudanic language spoken by about one million people in Ghana
in the coastal area around the capital Accra. In Gã the realization of Neg on
the verb depends on the tense specification of the sentence, the most inter-
esting case being the past tense. In the past tense, there is no visible Neg
suffix. However, the shape of the verbal stem is altered: the last vowel of the
stem is lengthened and its tone is raised (8b). In the perfect tense the suffix
-kò is used and, moreover, there is a tone change in the stem (8d). Also, in
the negated future there is no tense affix on the verb and the Neg suffix -ŋ
appears (8f).
(8) Negation in Gã (Ablorh-Odjidja 1968)
a. Mı̀-gbè gbèé kò.
   1.sg.past-kill dog art
   'I killed a dog.'
b. Mı̀-gbée gbèé kò.
   1.sg.past-kill.neg dog art
   'I did not kill a dog.'
c. Mı́-yè nı́ı̀ mómó.
   1.sg.perf-eat meal already
   'I have already eaten my meal.'
d. Mı́-yé-kò nókò.
   1.sg.perf-eat-neg something
   'I have not eaten anything.'
e. È-bàá-hòó wónù.
   3.sg.fut-cook soup
   'He/she will cook soup.'
f. È-hòó-ŋ wónù.
   3.sg-cook-neg.fut soup
   'He/she will not cook soup.'
In (9a) the derivation is exemplified for the inflected past tense verb in (8b). In the
syntax, the verb is raised to Neg and then to Tns. This leads to the adjoined struc-
ture in (9b), which is a mirror image of what we have observed for French above.
(9) a. Verb movement in Gã; b. Adjoined structure under Tns after
insertion of AgrS node
[Tree diagrams not reproduced. In (9a), SpecNegP is on the left and the
verb gbè raises to the empty Neg head (-Ø), the complex then adjoining to
Tns. In (9b), the adjoined structure under Tns is [Tns [Tns AgrS Tns]
[Neg V Neg]], i.e. the linear order AgrS–Tns–V–Neg.]

At MS, AgrS and Tns fuse in the past (8a,b) and perfect (8c,d) tense and the
Vocabulary items mı̀- and mı́-, respectively, are inserted under the fused node.
In the affirmative future (8e), however, fusion does not take place. Since there is
no tense prefix in the negative future (8f ), we must assume either that the tense
feature is deleted or that Tns fuses with Neg. As we have seen above, each tense
specification implies the insertion of a different Neg morpheme. The Vocabulary
items for Neg are given in (10).
(10) Vocabulary items for Neg in Gã


[neg] ↔ -Ø / [+past]
[neg] ↔ /-kò/ / [+perf]
[neg] ↔ /-ŋ/ / [+fut]

In contrast to French and Háusá, phonological readjustment plays an important
role in Gã in the context of negation. The readjustment rules in (11) account for
the stem internal changes in the past and perfect tense, a change from low to
high tone in both cases and, moreover, vowel lengthening in the past tense. It
is particularly interesting that the observed changes are triggered by an empty
suffix in the past tense, implying that negation manifests in high tone and vowel
lengthening alone.
(11) Readjustment rules triggered by Neg (tone change)
a. V]Verb → V[H, +long]]Verb / [neg] [+past]
b. V]Verb → V[H]]Verb / [neg] [+perf]
(the stem-final vowel is associated with a high tone and, in the past
tense, additionally lengthened)
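A minimal sketch of how rule (11a) applies may be helpful; the dictionary encoding of the stem and the function name are my illustrative assumptions. Only the tone and length of the stem-final vowel are represented, since those are all the rule manipulates.

def readjust_neg_past(stem):
    """Rule (11a): in the context of the empty Neg suffix and [+past],
    the stem-final vowel is associated with a high tone and lengthened."""
    readjusted = dict(stem)
    readjusted["final_vowel_tone"] = "H"
    readjusted["final_vowel_long"] = True
    return readjusted

gbe = {"segments": "gbe", "final_vowel_tone": "L", "final_vowel_long": False}
print(readjust_neg_past(gbe))
# {'segments': 'gbe', 'final_vowel_tone': 'H', 'final_vowel_long': True},
# i.e. the negated past form gbée of (8b): since the Neg suffix itself is
# empty, negation surfaces as high tone and vowel length alone.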

In Section 11.3.4 I show that the Gã past tense pattern parallels the one we
observe in DGS. Before doing so, however, let me sum up the facts for Gã. In
contrast to French and Háusá, Gã does not make use of split negation. Once
again, the head of the NegP is affixal in nature, but this time the specifier position
is empty (that is, there is an empty operator in SpecNegP). Most remarkably, in
the past tense the specifier as well as the head of NegP are void of phonological
material; there is, however, an empty affix in Neg (as in colloquial French)
that triggers the obligatory readjustment rule in (11a). In Gã (as in French), the
NegP is situated below Tns and shows a left specifier.

11.3.4 German Sign Language (DGS)


As mentioned earlier, sentential negation in signed languages is particularly
interesting because it comprises a manual and a nonmanual component. The
manual part is a Neg sign which is optional, while the nonmanual part is a
headshake. This pattern has been observed for various signed languages.6
6 For American SL, see, for example, Liddell 1980; Veinberg and Wilbur 1990; Neidle et al. 2000;
for Swedish SL, see Bergman 1995; for Norwegian SL, see Vogt-Svendsen 2000; for Dutch SL,
see Coerts 1990; for British SL, see Deuchar 1984; for French SL, see Rondal et al. 1997; for
Swiss German SL, see Boyes Braem 1995; for Argentine SL, see Veinberg 1993; for Chilean
SL, see Pilleux 1991; for Pakistani SL, see Zeshan 1997.
In DGS the negative headshake (hs) is obligatory and is necessarily associated
with the predicate (be it verbal, adjectival, or nominal). Moreover, the optional
manual Neg sign NICHT ‘not’ is one of the very few elements that may follow
the verb as is illustrated in (12b).
(12) Negation in DGS
a. MUTTER BLUME KAUF
mother flower buy
‘Mother buys a flower.’
hs hs
b. MUTTER BLUME KAUF (NICHT)
mother flower buy.neg (not)
‘Mother does not buy a flower.’
DGS is a strict subject–object–verb (SOV) language which does not exhibit any
asymmetries between matrix sentences and embedded sentences like, for ex-
ample, spoken German.7 The structure in (13) represents the (partial) syntactic
tree structure we assume for DGS (Glück and Pfau 1999; Pfau 2001).
(13) a. Verb movement in DGS; b. Adjoined structure under Neg
[Tree diagrams not reproduced. In (13a), NegP dominates Neg' and, as its
right specifier, NICHT; Neg' consists of TnsP and the empty Neg head -Ø.
Within TnsP, Tns follows VP, and the verb KAUF raises first to Tns and
then to Neg. In (13b), the adjoined structure under Neg is
[Neg [Tns V Tns] Neg], i.e. the verb stem followed by its Tns and Neg
suffixes.]


7 With (di)transitive verbs that require two human arguments, a person agreement marker (PAM)
often finds use in DGS. Rathmann (2000) points out there is an interesting correlation between
the presence of a PAM and word order, that is, whenever a PAM is present, the word order is more
flexible. Rathmann (2000) also claims that there is a [±PAM] parameter for signed languages
and that only [+PAM]-languages allow for the projection of an AgrP while [−PAM]-languages
(as e.g. ASL) do not (see also Rathmann 2001).
As can be seen, I assume that from a typological point of view DGS belongs
to the class of languages with split negation. The manual sign NICHT is base
generated in the specifier position of the Neg phrase while the head of the NegP
contains an empty affix that is attached to the verb stem in the course of the
derivation (for motivation of this analysis, see Section 11.3.5). In the syntax,
the verb raises to Tns and then to Neg. Note that I do not consider agreement
verbs in the present context (on the implementation of agreement nodes, see
Glück and Pfau 1999 and Pfau and Glück 1999). In the case of a plain verb – as
in (12) – insertion of agreement morphemes does not take place. The Vocabulary
item for Neg in DGS is a zero affix:
(14) Vocabulary item for Neg in DGS
[neg] ↔ -Ø
As in the Gã past tense, the Neg feature realized by the empty affix triggers
a phonological readjustment rule that leads to a stem-internal modification.
In DGS, the rule in (15) applies to the nonmanual component of the featural
description of the verb sign by adding a headshake to the nonmanual node of
the phonological feature hierarchy (for discussion, see Section 11.5).8
(15) Readjustment rule triggered by Neg (addition of nonmanual feature)
nonmanual → nonmanual[headshake] / [neg]
(a [headshake] feature is added to the sign's nonmanual node)
Note that it is not at all uncommon for empty affixes to trigger readjustment
rules in spoken languages either. For example, ablaut in the English past tense
form sang is triggered by an empty Tns node, while in the German plural noun
Väter ‘fathers’ (singular Vater) umlaut is triggered by an empty plural suffix.
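The effect of rule (15) can be sketched as an operation on a sign's feature structure; the dictionary encoding below is an illustrative assumption, standing in for the nonmanual node of a feature hierarchy of the kind Brentari (1998) proposes.

def readjust_neg(sign):
    """Rule (15): in the context of [neg], add [headshake] to the
    nonmanual node of the verb sign, the stem-internal modification
    triggered by the empty Neg affix once the verb has raised to Neg."""
    readjusted = dict(sign)
    readjusted["nonmanual"] = set(sign.get("nonmanual", set())) | {"headshake"}
    return readjusted

kauf = {"gloss": "KAUF", "nonmanual": set()}
print(readjust_neg(kauf))
# {'gloss': 'KAUF', 'nonmanual': {'headshake'}}, i.e. KAUF.neg in (12b)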
Sentential negation in DGS is therefore characterized by the following facts:
Like French and Háusá, DGS belongs to the group of languages that show split
negation. The manual element NICHT is base generated in SpecNegP; this
element is, however, optional. (In this respect DGS differs from French where
the negative head was shown to be optional.) As far as the negative head is
concerned, DGS resembles the Gã past tense in that an empty affix occupies
this position that is capable of triggering a phonological readjustment rule.
Since NICHT appears in sentence-final position, I assume that SpecNegP in
DGS (as in Háusá) is on the right.

8 In the present chapter I focus on negation in DGS. For a more comprehensive Distributed Mor-
phology account of verbal inflection in DGS (e.g. Vocabulary items for agreement morphemes
and phonological readjustment rules triggered by empty aspect and agreement [classifier] suf-
fixes), see Glück and Pfau 1999; Pfau and Glück 1999.
What distinguishes DGS from all the languages discussed so far is the po-
sition of NegP vis-à-vis TnsP within the syntactic structure. In contrast to
French, Háusá, and Gã, Neg selects TnsP as its complement in DGS. Accord-
ing to Zanuttini (1991), a NegP can be realized in exactly those two structurally
distinct positions, namely above TnsP – as in DGS and, according to Zanuttini
(1991), in Italian – and below TnsP – as exemplified by the other languages
discussed above.

11.3.5 Motivating split negation in DGS: A comparison with ASL


In general, both positions within NegP are assumed to be filled in a given
language when we observe two independent Neg elements in a sentence (as,
for example, in French and Háusá). In languages in which a negation ele-
ment surfaces as a constituent of the verbal complex (or a functional complex,
as in Háusá), this element is clearly a head. That is, the verb raises to the
negative head in the course of the derivation and the Neg element attaches
to the verb stem. In this case, the specifier position may either be filled by
lexical material (Háusá) or by an empty operator (as in Turkish where nega-
tion is expressed by a verbal suffix only). The Gã past tense example (8b)
shows that negation may be realized by a stem internal modification only. Fol-
lowing DM assumptions, this implies that in the Gã past tense the specifier
position of NegP is occupied by an empty operator, while the head hosts an
empty affix that attaches to the verb and triggers a phonological readjustment
rule.
As illustrated above, in DGS negation is expressed by two elements, namely
the lexical Neg sign NICHT and a headshake. Since the headshake is associated
with the verb (as part of the verbal complex), I assume that it is triggered by
an empty affix in Neg in the same way as the stem-internal modification in Gã.
That is, I take the headshake to be introduced by a phonological readjustment
rule.
However, it has been claimed for ASL that the presence of a manual Neg
sign and a headshake does not necessarily imply that both positions within
NegP are filled. Neidle et al. (1998; 2000), for instance, propose a syntac-
tic structure for ASL in which both elements are taken to occupy the head
position of the NegP. Below, I briefly compare ASL negation to the DGS
facts. The available ASL data suggest that sentential negation has different
properties in ASL, a fact which I take to be due to the different syntactic
structure as well as to the distinct status of its manual Neg element. Therefore,
the comparison of data corroborates the analysis given above for
DGS.
In contrast to DGS, the basic word order in ASL is SVO (subject–verb–
object) (16a). When present, the manual Neg sign NOT usually precedes the
verb in ASL, as is illustrated by the example in (16b).9 As in DGS, the manual
Neg element may be dropped as is shown by example (16c).
(16) Negation in ASL (Neidle et al. 2000:44f)
a. JOHN BUY HOUSE
   'John is buying a house.'
        hs
b. JOHN NOT BUY HOUSE
   'John is not buying a house.'
        hs
c. JOHN BUY HOUSE
   'John is not buying a house.'
The syntactic structure for ASL as proposed by Neidle et al. (1998; 2000) is
given in (17). Neidle et al. claim that there are two elements in the head position
of the NegP: the manual sign NOT as well as a syntactic Neg feature which
is realized by the headshake. In ASL it is also possible for the manual sign to
be dropped. However, there is then no manual material under Neg left for the
headshake to associate with and, for that reason, it is forced to spread over the
entire c-command domain of the Neg node, as in (16c).

(17) A syntactic structure for ASL
[Tree diagram not reproduced. TnsP dominates the subject and Tns'; Tns'
consists of Tns and NegP; NegP has a (left) specifier and Neg'; the Neg
head hosts both NOT and the feature [+neg], and its VP complement contains
the verb and the object.]

The differences between ASL and DGS are as follows. First of all, the specifier
of NegP is not filled by a lexical Neg element in ASL and, therefore, ASL does
not exhibit split negation. Moreover, since the manual element NOT in the head
of NegP is not affixal in nature, movement of the verb to Neg is blocked (that is,
the negation element does not surface as a constituent of the verbal complex).
9 Neidle et al. (2000) point out that in case the negative marking does not spread, the sentence
receives an emphatic interpretation.
These different properties allow for two interesting predictions. First, since
the ASL verb does not raise to Neg, we predict that it is possible for the verb to
be realized without an accompanying headshake when the manual Neg element
is present. This prediction is borne out, as is illustrated by the grammaticality
of example (16b). In contrast to that, headshake on the manual element alone
gives rise to ungrammaticality in DGS due to the fact that verb movement to
Neg is forced by the Stray Affix Filter (18a). That is, in DGS the empty Neg
affix is always attached to the verb and triggers the phonological readjustment
rule (15). Second, for the same reason, the headshake in DGS has an element to
be associated with (i.e. the verb) even when the manual sign is dropped. This is
not, however, true for ASL where verb movement does not apply. When NOT is
dropped, it is impossible for the headshake to spread only over the verb. Rather,
spreading has to target the entire c-command domain of Neg, that is, the entire
VP. Consequently, the ASL example (18b) is ungrammatical, in contrast to the
DGS example (18c).10

(18) Some contrasts between DGS and ASL


hs
a. *MUTTER BLUME KAUF NICHT
mother flower buy not
‘Mother does not buy a flower.’
hs
b. *JOHN BUY HOUSE
‘John is not buying a house.’
hs
c. MUTTER BLUME KAUF
mother flower buy.neg
‘Mother does not buy a flower.’
To sum up, the observed grammaticality differences between DGS and ASL
receive a straightforward explanation if we assume that the two languages
have different syntactic structures with different elements occupying the head
position of the NegP. In ASL NegP appears below TnsP with its specifier on
the left. In contrast to DGS, the manual Neg element in ASL occupies the head
of NegP while the specifier position is always empty. This distribution implies

10 In her very interesting study, Wood (1999) shows that in ASL, VP movement may occur in which
the entire VP moves to SpecNegP leaving NOT behind in sentence-final position. Consequently,
the serialization MARY NOT BREAK FAN (without VP shift) is as grammatical as the sequence
MARY BREAK FAN NOT (in which VP shift to SpecNegP has applied). As expected, VP shift
to SpecNegP is impossible in DGS since this position is taken by the manual sign NICHT (hence
the ungrammaticality of *MUTTER NICHT BLUME KAUF ‘mother not flower buy’ which
should be a possible serialization if NICHT occupied the head of NegP as in ASL).
that from a typological point of view DGS and ASL are different in that ASL
does not show split negation.
Furthermore, the nonmanual marking in ASL is not introduced by a phono-
logical readjustment rule (in contrast to DGS where this rule is triggered by an
empty affix), since in ASL the verb does not raise to Neg. Rather, the nonman-
ual marking is associated with the manual sign in the negative head and it is
forced to spread whenever there is no lexical carrier. This spreading process is
determined by syntactic facts, that is, it may not simply pick the neighboring
sign but rather has to spread over all hierarchically lower elements (over its
c-command domain). For further comparison of DGS and ASL negation, see
Pfau and Glück 2000.11
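The spreading pattern can be made concrete with a small sketch; the tuple encoding of constituents and the bracketed [hs] notation are my illustrative assumptions. The point is simply that the spreading domain is read off the syntax: everything c-commanded by Neg, i.e. the whole VP.

def lexical_material(node):
    """Collect the signs inside a constituent, modeled as
    (label, children) tuples with sign glosses as leaves."""
    label, children = node
    signs = []
    for child in children:
        signs.extend([child] if isinstance(child, str) else lexical_material(child))
    return signs

def negate(vp, manual_neg=True):
    """If NOT fills Neg, the headshake has a manual carrier and need not
    spread; if NOT is dropped, the headshake must spread over Neg's
    entire c-command domain, the VP."""
    if manual_neg:
        return ["NOT[hs]"] + lexical_material(vp)
    return [sign + "[hs]" for sign in lexical_material(vp)]

vp = ("VP", [("V", ["BUY"]), ("DP", ["HOUSE"])])
print(negate(vp, manual_neg=True))    # ['NOT[hs]', 'BUY', 'HOUSE'], cf. (16b)
print(negate(vp, manual_neg=False))   # ['BUY[hs]', 'HOUSE[hs]'], cf. (16c)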

11.4 More languages, more readjustments


In this section I present further negation data from various languages, all of
which involve some sort of morphosyntactic or phonological readjustment.
The descriptions to follow are more roughly sketched, and I do not try to give
syntactic tree structures for any of these languages. The readjustment rules
I discuss below affect agreement features (Estonian), case features (Russian),
aspect features (Maung), phonological features (Nanai, Shónà, Swahili, Venda),
and tonal features (Venda, Kinyarwanda, Twi). The overall picture that emerges
is that the principles of Distributed Morphology allow for the very diverse
negation patterns to be accounted for in a straightforward way.
Let us first consider Estonian, a Finno-Ugric language spoken by about one
million people in the Republic of Estonia. In Estonian the Neg particle ei pre-
cedes the verb. Moreover, an optional postverbal particle mitte may be used for
reinforcement. The examples in (19) illustrate that with negation subject agree-
ment is lost in the present (19b) and in the past tense (19d,f). In the latter case,
the verb is inflected for tense only (note that the Vocabulary item for [+past] is
different with negation).

11 Unfortunately, the examples given for other sign languages (see references in footnote 6) do not
allow for safe conclusions about their typological classification. From a few examples cited in
Zeshan (1997:94), we may – albeit very tentatively – infer that Pakistani Sign Language patterns
with DGS in that the manual Neg sign follows the verb (i) and the headshake may be associated
with the verb sign only in case the Neg sign is dropped (ii).
hs
i. DEAF VAH SAMAJH NAHI:N’
deaf index understand neg
‘The deaf do not understand.’
hs
ii. PA:KISTA:N INTIZ”A:M SAMAJH
Pakistan organize understand
‘The Pakistani do not know anything about organization.’
(19) (Split) negation in Estonian (Tuldava 1994)
a. Mina tule-n praegu.
   I come-1.sg now
   'I'm coming now.'
b. Mina ei tule (mitte) praegu.
   I neg come (neg) now
   'I'm not coming now.'
c. Mina istu-si-n kodus.
   I sit-past-1.sg at.home
   'I sat at home.'
d. Mina ei istu-nud kodus.
   I neg sit-past at.home
   'I did not sit at home.'
e. Meie istu-si-me kodus.
   We sit-past-1.pl at.home
   'We sat at home.'
f. Meie ei istu-nud kodus.
   We neg sit-past at.home
   'We did not sit at home.'

I take the derivation in Estonian to parallel the one described for French above:
in the syntax, the verb raises and adjoins to the negative head ei and the whole
complex raises further to Tns. The optional Neg particle mitte stands in Spec-
NegP. Contrary to the French data, however, a readjustment rule is active at
MS in Estonian in the context of negation. More precisely, we are dealing
with a rule of impoverishment that deletes the AgrS feature whenever the
sentence is negated (compare Halle 1997; Noyer 1998). The Vocabulary item
for Estonian Neg is given in (20), and the relevant readjustment rule is given
in (21).

(20) Vocabulary item for Neg in Estonian


[neg] ↔ /ei-/

(21) Readjustment rule triggered by Neg (Impoverishment)


[AgrS] → Ø / [neg]
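A sketch of the interaction between rule (21) and Vocabulary insertion (with an illustrative feature encoding and toy Vocabulary of my own) shows why the deletion must precede insertion: once [AgrS] is gone, no agreement item can match the node.

VOCABULARY = [
    (frozenset({"agrs:1sg"}), "-n"),    # illustrative Estonian AgrS items
    (frozenset({"agrs:1pl"}), "-me"),
    (frozenset(), ""),                  # elsewhere (zero) item
]

def impoverish(node, clause):
    """Rule (21): delete [AgrS] in the context of [neg]; applies at MS
    before Vocabulary insertion, which is driven by the node's features."""
    if "neg" in clause:
        node = {f for f in node if not f.startswith("agrs")}
    return node

def insert_vocabulary(node):
    candidates = [(feats, exp) for feats, exp in VOCABULARY if feats <= node]
    return max(candidates, key=lambda c: len(c[0]))[1]

def agreement(node, clause):
    return insert_vocabulary(impoverish(set(node), clause))

print("tule" + agreement({"agrs:1sg"}, set()))      # 'tule-n', cf. (19a)
print("tule" + agreement({"agrs:1sg"}, {"neg"}))    # 'tule', cf. (19b)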
Impoverishment is just one way of influencing the featural composition of a
terminal node at MS. Another option for readjustment is to change a certain
morphosyntactic feature in the environment of another such feature. This case
is illustrated by Russian and Maung, which we consider next.
In Russian, for instance, negation is expressed by the particle ne. Interestingly,
the direct object of the verb which is accusative in the affirmative sentence shows
up in the genitive case in the negative counterpart:

(22) Negation in Russian (Lyovin 1997)
a. Ja vižu košku.
   I see cat(acc)
   'I see [a] cat.'
b. Ja ne vižu koški.
   I neg see cat(gen)
   'I do not see [a] cat.'
Obviously, the presence of Neg triggers a readjustment rule that is somewhat
different from the ones discussed so far. In Russian the relevant rule affects a
case feature and turns accusative into genitive case:12
(23) Readjustment rule triggered by Neg (feature change)
[acc] → [gen] / [neg]
A different feature is affected in Maung, an Australian language spoken on the
Goulburn Islands near the north coast of Arnhem Land in the Northern Territory
of Australia. Maung is a SVO language; a prefix indicates subject and object
agreement while mostly suffixes indicate tense and aspect. In negated sentences
the preverbal Neg element marig appears and the verb takes an irrealis suffix:
the potential in present and future negatives – as in (24b) and the hypothetical
in past negatives; as in (24d).
(24) Negation in Maung (Capell and Hinch 1970)
a. ŋabi ŋi-udba-Ø.
   I 1.sg.s/3.sg.o-put-pres
   'I put(pres.) it.'
b. ŋabi marig ŋi-udba-ji.
   I neg 1.sg.s/3.sg.o-put-pot
   'I do not/will not put it.'
c. ŋabi ŋi-udba-ŋ.
   I 1.sg.s/3.sg.o-put-past
   'I put(past) it.'
d. ŋabi marig ŋi-udba-nji.
   I neg 1.sg.s/3.sg.o-put-hyp
   'I did not put it.'
The examples in Capell and Hinch (1970:102) suggest that marig immediately
precedes the verb. Since the authors call the [Neg+V] complex a “compound”
(p. 80) we may assume that marig is the head of the NegP and that the derivation
parallels the one given for Estonian above:
(25) Vocabulary item for Neg in Maung
[neg] ↔ /marig-/
Before Vocabulary insertion takes place, a feature changing readjustment rule
will apply at MS. This rule inserts an aspect feature according to the Tns
specification of the affirmative sentence (note that moreover the Tns feature
will be deleted):
(26) Readjustment rules triggered by Neg (feature insertion)
a. [Asp] → [+pot] / [neg] [−past]
b. [Asp] → [+hyp] / [neg] [+past]

12 A similar phenomenon is reported by Frajzyngier (1993) for certain constructions in Mupun, a
Western Chadic language spoken by about 10,000 people in Central Nigeria.
Finally, what we can safely infer from the data is that at MS, two agreement
morphemes must be implemented, one for subject and one for object agreement.
These morphemes will subsequently fuse and only one Vocabulary item that
matches the feature description of the fused node will be inserted; for example,
ŋi- in (24).
Another striking modification, this time one of a phonological nature, is ob-
served in the Tungusic language Nanai spoken in Eastern Siberia and a small
area in Northeastern China. In Nanai, the final vowel of a verb stem is lengthened
in order to express negation (and a distinct tense marker is used). Diachroni-
cally, this modification is due to the fusion of the verb with the formerly used
separate negative auxiliary ə (which is still used, for example, in the related
languages Orok and Evenki):

(27) Negation in Nanai (Payne 1985)
a. Xola-xa-si.
   read-past-2.sg
   'You did read.'
b. Xolaa-ci-si.
   read(neg)-past-2.sg
   'You did not read.'

Consequently, the relevant readjustment rule has to target the quality of the
stem-final vowel, as sketched in (28).

(28) Readjustment rule triggered by Neg (vowel lengthening)
V]Verb → V[+long]]Verb / [neg]

A different alternation concerning the vowel quality is observed in the Bantu
language Shónà. In Shónà (spoken in Zimbabwe and Zambia), negation is
expressed morphologically by the low tone prefix hà-. At the same time, a
change in the stem-final vowel -à is triggered which becomes -ı̀ (or -è in some
Shónà dialects) in a negative context, as illustrated by (29).

(29) Negation in Shónà (Brauner 1995)
a. Ndı̀-nó-èndà kù-chı̀kórò.
   1.sg-pres-go to-school
   'I go to school.'
b. Hà-ndı̀-èndı̀ kù-chı̀kórò.
   neg-1.sg-go to-school
   'I do not go to school.'
c. Hà-ndı̀-chá-èndı̀ kù-chı̀kórò.
   neg-1.sg-fut-go to-school
   'I will not go to school.'
The readjustment rule in (30) takes into account that in Shónà, the
particular change of vowel is observed in the present and future tense
only.13

(30) Readjustment rule triggered by Neg (vowel change)


V]Verb → [−back; +high] / [neg] [−past]

In Venda (a language spoken in the south African homeland of the same name),
a similar process applies in the present (31a,b) and in the future tense, in the
latter only when the Tns suffix is deleted (31c,d). Consequently, an alternative
form of the negated future in (31d) would be à-rı́-ngá-dó-shúmà (no deletion
of Tns, no vowel change; compare Poulos 1990:259).

(31) Negation in Venda (Poulos 1990)
a. Rı̀-(à)-shúmá.
   1.pl-(tns)-work
   'We work.'
b. À-rı́-shúmı̀.
   neg-1.pl-work
   'We do not work.'
c. Rı̀-dò-shúmá.
   1.pl-fut-work
   'We will work.'
d. À-rı́-ngá-shúmı̀.
   neg-1.pl-neg-work
   'We will not work.'

What is particularly interesting with respect to the Venda data is the fact that
together with the vowel change a tone change comes into play. For the high
tone verb ù-shúmá ‘to work’, the final vowel is not only changed from a to i
(as in Shónà) but also receives a low tone in the above examples. In this sense,
readjustment in Venda can be seen as a combination of what happens in Shónà
(30) with a tone changing rule (like the one in (33) below). Note that tone
patterns of inflected verbs crucially depend on the basic tone pattern of the
respective verb stem and that, moreover, tone changes differ from tense to tense
(for details, see Poulos 1990:575ff ).
Kinyarwanda is another language of the Bantu family spoken by about six
million people in Rwanda and neighboring Zaı̈re. Negation in Kinyarwanda
is comparable to what we have observed in Gã and Venda since a change of
tone is involved. For every person except the 1st person singular, negation is
expressed by the prefix nt(ı̀)- (32b); for the 1st person singular the prefix is
sı̀- (32d). The interaction of tense and aspect morphemes is quite intricate and
shall not concern us here. Of importance, however, is the fact that with negation

13 Shónà has two past tenses – the recent and the general past – both of which are negated by
means of the negated form of the auxiliary -né ‘to have’ plus infinitive. This auxiliary follows
another readjustment rule, which I will not consider here.
a lexical high tone on the verb is erased. Moreover, the tone on the aspect suffix
is sometimes raised; compare (32b).

(32) Negation in Kinyarwanda (Overdulve 1975)

     a. À-rà-kór-à cyàànè.             b. Nt-àà-kòr-á cyàànè.
        3.sg-tns-work-asp hard            neg-3.sg-work-asp hard
        'He/she works hard.'              'He/she does not work hard.'
     c. N-à-kóz-è (=kór+yè) cyàànè.    d. Sı̀-n-à-kòz-è cyàànè.
        1.sg-tns-work-asp hard            neg-1.sg-tns-work-asp hard
        'I worked hard.'                  'I did not work hard.'

The aspect suffix in (32c,d) is actually -yè. But the glide y combines with
preceding consonants in a very complex way; for monosyllabic stems ending in
r the rule is: r + y = z (Overdulve 1975:133). The phonological readjustment
rule for high tone verbs is given in (33); what we observe is a case of high tone
delinking:

(33) Readjustment rule triggered by Neg (tone change)

     [. . . V . . .]Verb / [neg]
            =
           [H]

     (delinking of the verb's lexical high tone)
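The delinking in (33) can be sketched in the same illustrative spirit, on a
toy autosegmental representation in which each syllable is paired with a tone
('H', 'L', or None for toneless); the encoding and all names are assumptions
of the sketch, not of the analysis.

```python
# A hypothetical sketch of rule (33): the lexical high tone of the verb is
# delinked (represented here by None) in the context of a [neg] feature.

def delink_high_tone(verb, features):
    """verb is a list of (syllable, tone) pairs; tone is 'H', 'L', or None."""
    if "neg" not in features:
        return list(verb)
    return [(syl, None if tone == "H" else tone) for syl, tone in verb]

kora = [("ko", "H"), ("ra", None)]       # toy stand-in for a high tone verb stem
print(delink_high_tone(kora, {"neg"}))   # [('ko', None), ('ra', None)]
```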
In Twi (Akan), another language spoken in Ghana, change of tone is observed
at least in the negative forms of the present and of the past tense. Negation is
expressed by a low tone nasal prefix (which is homorganic with the following
consonant) plus a high tone on the last syllable of the verb stem. An illustrative
past tense sentence pair is given in (34).

(34) Negation in Twi (Redden and Owusu 1963)

     a. Yε-tè Twı́ı̀.           b. Yε-n-té Twı́ı̀.
        1.pl-speak Twi            1.pl-neg-speak Twi
        'We speak Twi.'           'We do not speak Twi.'

What I take to be particularly remarkable with respect to the above examples
is, on the one hand, the fact that phonological readjustment may be triggered
by an empty Neg affix (as exemplified by Gã and Nanai). On the other hand,
we have seen that readjustment may manipulate not only morphosyntactic and
phonological features but also tonal (prosodic) features – as exemplified by
Gã, Venda, and Twi – which are taken to be autosegmental in nature (Goldsmith
1979; 1990).14
Table 11.1 Negation: A comparative chart

Language            Spoken in         Split      Morphological  Vocabulary      Readjustment at
(example number)                      negation?  negation?      item            MS (rule number)
French (1)          France            yes        yes            ne-             –
Háusá (4;7)         N Nigeria         yes        yes            ba-             –
Gã (8)              Ghana             no         yes            -Ø (past),      tone change (11)
                                                                -kò (perf.)
DGS (12;18)         Germany           yes        yes            -Ø              change of non-
                                                                                manuals (15)
ASL (16;18)         USA               no         no             NOT & [+neg]    –
Estonian (19)       Estonia           yes        yes            ei-             impoverishment
                                                                                (AgrS) (21)
Russian (22)        Russia            no         no             (ne)            feature change
                                                                                (case) (23)
Maung (24)          N Australia       no         yes (?)        marig-          feature change
                                                                                (aspect) (26)
Nanai (27)          E Siberia/        no         yes            -Ø              vowel length-
                    NE China                                                    ening (28)
Shónà (29)          Zimbabwe/         no         yes            hà-             vowel change
                    Zambia                                                      (30)
Venda (31)          Venda             no         yes            à-              vowel and
                    (S Africa)                                                  tone change
Kinyarwanda (32)    Rwanda/Zaı̈re      no         yes            ntı̀-;           tone change
                                                                sı̀- (1.Sg)      (33)
Twi (Akan) (34)     Ghana             no         yes            n-/m-           tone change
                                                                                (≈ as in (11b))

Table 11.1 sums up and compares the languages discussed above. It shows
by what means negation is expressed, that is, whether a given language involves
split and/or morphological negation. Moreover, the Vocabulary item for Neg
(for the negative head) is given and it is indicated what kind of readjustment
rule (if any) applies at the grammatical level of MS. After having presented
further data that make clear in which manner readjustment rules are sometimes
14 Becker-Donner (1965) presents remarkable data from Mano, a Western Sudanic language spoken
in Liberia. In Mano, one way of expressing negation is by a tone change alone. This tone change,
however, appears on the pronoun and not on the verb: Ko yı́dò ‘We know.’, Kô yı́dò ‘We do
not know.’ Since it is not clear from her data if the pronoun could possibly be analyzed as an
agreement prefix, I do not discuss this example further. In any case, it is different from all the
examples discussed above because negation neither introduces a separate morpheme nor
affects the verb stem at all.
correlated with the presence of a Neg feature in spoken as well as in a signed
language, I once again focus on DGS (in Section 11.5) in order to investigate
the precise nature of the phonological change that was shown to accompany
negation in this language.

11.5 Discussion: What about modality effects?


In the previous section, I sketched the derivation of negated sentences for some
spoken languages as well as for DGS and ASL. As it turns out, negation offers a
good source of data for illustrating the basic concepts of Distributed Morphol-
ogy; these concepts are: movement operations, late insertion of Vocabulary
items and application of various readjustment rules at the grammatical levels
of Morphological Structure and Phonological Form.
I have illustrated how this theoretical framework allows for the derivation of
negated structures in a signed language in exactly the same way as it allows
for the derivation of such structures in spoken languages. Thus, as far as the
morphosyntactic side is concerned, we do not need to refer to any modality-
specific principles. This is definitely a welcome result, since it allows for a
uniform description of the phenomenon (note that it is also the expected result
if we take the idea of Universal Grammar seriously).
It is, however, important to also investigate whether things are somewhat
different when we enter the realm of phonological rules and principles. We have
seen that amongst other things readjustment rules are capable of changing the
phonological form of a verb stem (tone change, feature change). For DGS, I
claim that a phonological readjustment rule may affect the nonmanual compo-
nent of a sign by adding a headshake. Of course, this is not at all surprising
since it has long been realized that nonmanual features like facial expressions
and face and body positions have to be included in the phonological description
of a sign.
As a matter of fact, the linguistic nonmanual marking in signed languages
may serve three different functions (Wilbur and Patschke 1998):
• a lexical function, where the nonmanual does not bear any independent
  meaning but rather is part of the lexical entry of a sign; lexically specified
  nonmanual features have the same ability to carry lexical contrast as, for
  example, features of handshape, place, and movement;
• a morphological function, in which the nonmanual functions as an
  independent, simultaneously realized morpheme which may, for example,
  have adjectival or adverbial meaning;
• a morphosyntactic function, where the nonmanual marking is triggered by
  a syntactic feature (e.g. [+wh], [+neg]) and is capable of spreading over a
  sequence of words.
Brentari (1998) takes this into account by including nonmanuals in a feature
hierarchy for signed languages. The feature tree she proposes is given in (35).
(35) A feature hierarchy for signed languages (Brentari 1998:26, 130)

     root
     ├── inherent features
     │    ├── articulator
     │    │    ├── nonmanual
     │    │    └── manual
     │    │         ├── H2
     │    │         └── H1
     │    └── place of articulation
     └── prosodic features
          ├── nonmanual
          ├── setting
          ├── path
          ├── orientation
          └── aperture
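For readers who find bracketed structures easier to scan than trees, the
hierarchy in (35) can equivalently be written as a nested data structure; the
rendering below is a restatement of the figure only, with leaves mapped to
None.

```python
# Brentari's (1998) feature hierarchy from (35) as nested dictionaries;
# note that both the inherent and the prosodic branch contain a nonmanual node.

feature_hierarchy = {
    "root": {
        "inherent features": {
            "articulator": {
                "nonmanual": None,
                "manual": {"H2": None, "H1": None},
            },
            "place of articulation": None,
        },
        "prosodic features": {
            "nonmanual": None,
            "setting": None,
            "path": None,
            "orientation": None,
            "aperture": None,
        },
    }
}
```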
In Brentari’s Prosodic Model, a fundamental distinction is made between in-
herent features and prosodic features, both of which are needed to achieve all
lexical contrasts. Since both branches contain a nonmanual node, a few words
need to be said about the status of the negative headshake.
In Brentari’s definition of inherent and prosodic features both are said to be
“properties of signs in the core lexicon” (1998:22). This, however, is not true
for the kind of nonmanual modification I have discussed above: the negative
headshake on the verb sign is not part of the sign’s lexical entry.15 Rather, it
is added to the feature description of the sign by means of a morphosyntactic
operation. Still, the presence of this feature in the surface form of the verb sign
needs to be accounted for in the featural description of the sign (the same is true
for other morphosyntactic and morphological features; e.g. movement features
in aspectual modulation or handshape features in classification).
As far as the negative headshake on the verb is concerned, I assume that in
DGS it is part of the prosodic branch of the feature hierarchy for the following
reasons:
• The negative headshake is a dynamic property of the signal.
• The negative headshake has autosegmental status; that is, it behaves in a way
  similar to tonal prosodies in tone languages.
• The negative headshake appears to be synchronized with movement features
  of the manual component of the sign.16
• The negative headshake is capable of spreading.
15 The negative headshake on the Neg sign NICHT is, however, part of the lexical entry of this
sign. For this reason, the nonmanual marking on NICHT was represented by a separate line in
(13b) above, implying that the negative headshake on NICHT is not due to a spreading process.
In the actual utterance, however, the headshake is realized continuously.
16 Brentari (1998:173) notes that outputs of forms containing both manual and nonmanual
prosodic features are cotemporal. She mentions the sign FINALLY, which has the accompanying
nonmanual component 'pa,' an opening of the mouth which is synchronized with the beginning
and end of the movement of the manual component. Moreover, nonmanual behavior may imitate
the movement expressed in the manual component (e.g. in the sign JAW-DROP in which the
downward movement of the dominant hand is copied by an opening of the mouth). Similarly,
for a DGS sign like VERSTEH 'understand' which is signed with a V-hand performing a back
and forth movement on the side of the forehead, the side-to-side movement of the negative
headshake is synchronized with the movement of the manual component to indicate negation
of that verb.
The fourth criterion deserves some comments. In the DGS examples (12b) and
(18c) above, the negative headshake was indicated as being associated with
the verb sign only. It is, however, possible for the headshake to spread onto
neighboring constituents, for example onto the direct object BLUME ‘flower,’
as indicated in (36a). It is not possible for the headshake to spread onto parts
of phrases as is illustrated by the ungrammaticality of example (36b) in which
the headshake is associated with the adjective ROT ‘red’ only.

(36) Spreading of headshake in DGS

                 hs                              hs
     a. MANN BLUME KAUF          b. *MANN BLUME ROT KAUF
        man flower buy.neg           man flower red buy.neg
        'The man is not buying       'The man is not buying
        a flower.'                   a red flower.'

Following the analysis sketched above, the headshake on the direct object in
(36a) has its origin in phonological readjustment of the verb stem; that is, a
prosodic feature of the verb has spread onto a neighboring constituent.17
I claim in the second criterion above that the negative headshake behaves in a
way similar to tonal prosodies in tone languages, we now need to consider if
similar phenomena are in fact attested in spoken languages. The question is: Are
tones in spoken languages capable of spreading across word boundaries? And
if so, is the spreading process comparable to the one observed for nonmanuals
in DGS?

17 If spreading of the nonmanual marking was in fact syntactically determined in DGS (as was
claimed to be true for ASL in Section 11.3.5), then it should not be optional. Recall that in ASL,
spreading of the nonmanual marking over the entire c-command domain of Neg is obligatory
whenever the manual Neg element is dropped. Note that there is no difference in interpreta-
tion between the DGS sentence with headshake over the verb sign only (18c) and the sentence
with headshake over the verb sign and the object DP (36a). Most importantly, the former
variant does not imply constituent negation (as in ‘John did not buy flowers, he stole them’).
Also note that my above analysis of DGS negation is not to imply that all syntactic phenom-
ena in DGS that are associated with nonmanual markings (e.g. wh-questions, topicalizations)
result from the application of phonological readjustment rules (that are triggered by empty
affixes). Rather, I assume that these phenomena are much more similar to the corresponding
ASL constructions in that the spreading of the respective nonmanual marking is syntactically
determined.
The answer to the first question is definitely positive. In the literature, the
relevant phenomenon is usually referred to as external tone sandhi.18 Below,
I present representative examples from the three Bantu languages Kinande,
Setswana, and Tsonga.19
In Kinande (spoken in Eastern Zaı̈re), the output of lexical tonology provides
tone bearing units that have high or low tone or are toneless. In (37), e- is the
initial vowel (IV) morpheme and ki- is a class 7 noun prefix. In a neutral
environment, that is, one where no postlexical tone rules apply, the two sample
nouns e-ki-tabu ‘book’ and e-ki-tsungu ‘potato’ surface as è-kı̀-tábù and è-kı̀-
tsùngù, respectively. However, in combination with the adjective kı́-nénè ‘big,’
whose prefix bears a high tone, a change is observed: the high tone of the
adjective prefix spreads leftwards onto the last syllable of the noun. It does not
spread any further as is illustrated by example (37b) (in the following examples
the site(s) of the tone change are underlined).
(37) Regressive high tone spreading in Kinande (Hyman 1990:113)
a. è-kı̀-tábù → è-kı̀-tábú kı́-nénè
iv-7-book iv-7-book pre-big
‘big book’
b. è-kı̀-tsùngù → è-kı̀-tsùngú kı́-nénè
iv-7-potato iv-7-potato pre-big
‘big potato’
Other remarkable tone sandhi phenomena are described by Creissels (1998) for
Setswana (spoken in South Africa and Botswana). By themselves, the Setswana
words bàthò 'persons' and bàŋwı̀ 'certain, some' have no high tone, and no high
tone appears when they combine in a phrase; compare (38a). In (38b), however,
the high tone of the morpheme lı́- ‘with (comitative)’ that is prefixed to the
noun spreads rightwards onto three successive syllables.20

18 Internal tone sandhi refers to tone spreading within the word boundary, as exemplified by the
complex Shónà verb kù-téng-és-ér-á ‘inf-buy-caus-to-vs’ (‘to sell to’) where the root /teng/
is assigned a single high tone on the tonal tier that spreads rightwards onto the underlyingly
toneless extensional and final-vowel (VS) suffixes (see Kenstowicz 1994:332, whose discussion
builds on results by Myers 1987).
19 I am very grateful to Scott Myers who brought this phenomenon to my attention and was so
kind to supply some relevant references.
20 As a matter of fact, the high tone spreads first to two successive toneless syllables inside the
noun. Since by that the final syllable receives high tone, the conditions for the application of
a spreading rule operating at word boundaries are created and the high tone may thus spread
further, from the final syllable of the noun to the first syllable of the following word. Compare
the phrase lı́-bálı́mı̀ bàŋwı̀ 'with-farmers certain' in which the noun bàlı̀mı̀ has three low tone
syllables; therefore, spreading of the high tone from the prefix does not affect the final syllable
and cannot proceed further across the word boundary (Creissels 1998:151). Consequently, what
we observe in (38b) is a combination of internal and external tone sandhi.
(38) Progressive high tone spreading in Setswana (Creissels 1998:150)

     a. bàthò bàŋwı̀           b. lı́-báthó báŋwı̀
        persons certain           with-persons certain
        'certain persons'         'with certain persons'
The last example I cite comes from Tsonga (spoken in Mozambique and South
Africa). Baumbach (1987) observes various instances in which a high tone
preceding a word with only low tones spreads onto all syllables of this word
except for the last one (his Tonological Rule 1). One particularly interesting case
is that of an object with low tones only following a high tone verb. In (39a,b),
the first two syllables of the objects xı̀kòxà ‘old woman’ and nhwànyànà ‘girl’
receive high tone due to progressive high tone spreading.21
(39) Progressive high tone spreading in Tsonga (Baumbach 1987:48)
a. xı̀kòxà → Vá pfúná xı́kóxà.
old.woman they help old.woman
‘They help the old woman.’
b. nhwànyànà → Ú rhándzá nhwányánà.
girl he likes girl
‘He likes the girl.’
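The Tsonga pattern lends itself to a similar sketch: with words encoded as
lists of (syllable, tone) pairs, a word-final high tone spreads onto every
syllable of a following all-low word except the last. The encoding is
hypothetical, and depressor consonants (see footnote 21) are ignored.

```python
# A hypothetical sketch of progressive high tone spreading as in (39).

def spread_high(prev_word, next_word):
    """Spread a word-final H onto all but the last syllable of a low-toned word."""
    if prev_word[-1][1] != "H" or any(tone == "H" for _, tone in next_word):
        return list(next_word)
    return [(syl, "H") for syl, _ in next_word[:-1]] + [next_word[-1]]

verb = [("pfu", "H"), ("na", "H")]              # Vá pfúná
obj = [("xi", "L"), ("ko", "L"), ("xa", "L")]   # xìkòxà
print(spread_high(verb, obj))                   # [('xi','H'), ('ko','H'), ('xa','L')]
```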
The above examples make clear that spreading of prosodic material across word
boundaries is in fact attested in spoken languages as well as sign languages. For
instance, a high tone may spread leftwards onto the last syllable of a noun (as in
Kinande), rightwards from a high tone prefix throughout a noun and possibly
onto the first syllable of the following word (as in Setswana) or rightwards
from a verb onto the first two syllables of a neighboring noun (as exemplified
by Tsonga).
However, we still have to consider the second question posed above, namely
whether the Bantu spreading processes are in fact similar to the one observed for
nonmanuals in DGS. A comparison of the DGS example in (36a) with the
spoken language examples in (37) to (39) suggests that the processes involved
are somewhat different. Remember that tone spreading in Bantu may proceed
throughout a word and may possibly affect one or two syllables of a neighboring
constituent. In contrast to this, regressive spreading of the nonmanual feature
[headshake] in DGS always affects the whole preceding word, for example the
direct object BLUME ‘flower’ in (36a).

21 It should be noted, however, that in Tsonga (as in many other Bantu languages) a number of
consonants (depressor consonants) prevent the process of progressive tonal spreading to go
beyond them (Baumbach 1987:53ff; for depressor consonants in Ikalanga [Botswana], also see
Hyman and Mathangwane 1998).
Note that the notion of syllable in signed languages is a matter of debate.
Some authors (e.g. Coulter 1982; Wilbur 1990; Perlmutter 1992) have claimed
that for the most part, signs consist of one syllable only (for BLUME – according
to Perlmutter's analysis – a hold-movement-hold sequence with handshape
change on the dominant hand).22 Following this line of reasoning, we might
argue that the nonmanual spreading in (36a) does not affect the preceding word
but rather targets exactly one syllable to the left of the verb sign.
Such an analysis does not, however, stand up to closer examination. It turns
out that when the direct object is made more complex – for example, by addition
of an adjective such as ROT ‘red’ (as in (36b)) or by addition of a PP or a
relative clause – the nonmanual must spread regressively over all these elements.
Obviously, spreading in DGS may target more than one neighboring word
(that is, more than one neighboring syllable).23 Still, this apparent difference
from spoken languages should be treated with some caution, since, at this point, I
am not in a position to say with certainty to what extent tone spreading in
spoken languages is actually constrained. Consider, for instance, the following
sentence from Kipare, a Bantu language of Tanzania:

(40) An instance of across-the-board lowering in Kipare (Odden 1995:462f)

     /vá!ná vékéjílá nkhúkú ndórí nkhúndú jángú/
       H  L          H
     → vánà vèkìjìlà nkhùkù ndòrì nkhùndù jàngù
       children while.3.PL.eat chickens little red my
       'while the children eat those little red chickens of mine'

Underlyingly, each word/morpheme in the Kipare sequence in (40) contributes
a high tone. Odden (1995) assumes that due to a tone-fusing version of the
Obligatory Contour Principle (OCP),24 adjacent high tones are combined into
one multiply-linked high tone (H) at the phrasal level. He further assumes that
there is a floating low tone (L) following the first tone bearing unit of vana that
22 Coulter (1982) points out that compounds, reduplicated signs, and loan signs from fingerspelling
are an exception. In addition to that, Perlmutter (1992) presents evidence for a few bisyllabic
lexemes in ASL (e.g. signs with two distinct movement segments).
23 Obviously, optional spreading of the headshake in DGS is not syntactically determined (as in
ASL), but still it is syntactically constrained. That is, if spreading applies, it must apply over
entire phrases; hence the ungrammaticality of (36b).
24 With respect to tones, the OCP is formulated by Goldsmith (1979) as follows: “At the melodic
level of the grammar, any two adjacent tonemes must be distinct.” In its most general form, the
principle is stated by McCarthy (1988): “Adjacent identical elements are prohibited.” According
to this general statement, the OCP applies to any two identical features or nodes which are
adjacent on a given tier.
turns a prepausal sequence of high tones into low ones. Thus, across-the-board
lowering of high tones is explained by postulating that there is only a single
high tone in such cases.
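Odden's tone-fusing version of the OCP can itself be given a minimal sketch:
adjacent identical tonemes on the tonal tier are merged, so that a span of
high-toned words comes to share a single multiply-linked H. The list encoding
is, again, a hypothetical simplification.

```python
# A hypothetical sketch of OCP-driven fusion on the tonal tier.

def ocp_fuse(tones):
    """Merge adjacent identical tonemes: ['H','H','L','H','H'] -> ['H','L','H']."""
    fused = []
    for tone in tones:
        if not fused or fused[-1] != tone:
            fused.append(tone)
    return fused

# A sequence of H-toned words fuses into one H, which (together with the
# floating L of (40)) lets the whole prepausal span be lowered at once.
assert ocp_fuse(["H", "H", "H", "H"]) == ["H"]
assert ocp_fuse(["H", "L", "H", "H"]) == ["H", "L", "H"]
```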
However, a similar explanation is not available for the potential across-the-
board spreading of nonmanuals in DGS. In contrast to Kipare where the whole
sequence is assumed to be linked to a single prosodic feature (H), no such
feature is present in the DGS examples; that is, the possibly complex object DP
is not linked to a single nonmanual feature of any kind. Consequently, we may
speculate that in the context of spreading properties, if anywhere, we are actually
encountering a modality effect.
A possible reason for this modality effect might be that in spoken languages
every tone bearing unit must bear a tone and every tone must be associated
with a tone bearing unit; consequently, no vowel can be articulated without a
certain tone value. Due to this restriction, spreading of tone requires repeated
delinking or change of a tone feature. Across-the-board spreading (as in Kipare)
is possible only when the whole sequence is multiply-linked to a single H. But
this observation does not hold for the sign language data under consideration
since skeletal positions in DGS are not necessarily inherently associated with a
prosodic (nonmanual) feature, say, a headshake. For this reason, the spreading
of the nonmanual in DGS does not imply delinking or feature change; rather,
a feature is added to the featural description of a sign. For the same reason,
assuming a single multiply-linked prosodic feature is not a possible explanation
for the DGS facts.25
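The asymmetry proposed here can be made explicit with one final sketch: on a
tonal tier every tone bearing unit already carries some value, so spreading
must overwrite (delink and relink), whereas nonmanual positions may be
unspecified and can simply be filled. Both tiers are, of course, hypothetical
toy encodings.

```python
# A hypothetical sketch of the suggested modality contrast.

def spread_tone(tier, tone):
    """Tonal tier: every unit bears a tone, so spreading re-associates them all."""
    return [tone for _ in tier]

def spread_nonmanual(tier, feature):
    """Nonmanual tier: unspecified slots (None) are filled; nothing is delinked."""
    return [slot if slot is not None else feature for slot in tier]

print(spread_tone(["H", "L", "L"], "H"))            # ['H', 'H', 'H']
print(spread_nonmanual([None, None, None], "hs"))   # ['hs', 'hs', 'hs']
```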

11.6 Conclusion
The general picture that emerges from the above discussion of spoken language
and signed language negation is that the processes involved in the derivation
of negated sentences are in fact very similar. Not surprisingly, the language-
specific syntactic structures are subject to parametric variation as far as, for
example, the selectional properties of functional heads and the position of spec-
ifiers are concerned (however, for a different view, see Kayne 1994; Chomsky
1995). Still, the relevant syntactic (head movement, adjunction) and morphosyn-
tactic (merger, fusion) operations are exactly the same. Moreover, in both modal-
ities readjustment rules may apply at the postsyntactic levels of Morphological
25 Note that other nonmanual features, such as raised eyebrows or head tilt, do not interfere with
the negative headshake since nonmanuals may be layered in signed languages (Wilbur 2000).
A hypothetical test case for a blocking effect would be one in which a nonmanual on the same
layer interferes with the spreading process, for example, an element within the object DP which
is associated with a headnod. In such a case, it would be interesting to examine if the headnod is
delinked and spreading of the headshake proceeds in the familiar way, or if the headnod rather
blocks further spreading of the headshake. I did not, however, succeed in constructing such an
example (possibly due to semantic awkwardness).
Structure and Phonological Form; for instance, a zero affix may trigger a stem-
internal phonological change (as was exemplified with the help of negation data
from Gã, DGS, and Nanai).
In DGS the feature that is introduced by phonological readjustment is the
nonmanual feature [headshake]. Referring to the phonological feature hierar-
chy proposed by Brentari (1998), I claimed that feature to be a prosodic one.
Interestingly, the headshake, which is initially associated only with the verb
sign, is capable of spreading over neighboring constituents. However, similar
spreading of prosodic material (external tone sandhi) has been shown to apply
in some spoken languages.
Are there therefore no modality effects at all? In Section 11.5 I tentatively
claimed that one such effect might be due to the different nature of the prosodic
material involved. While tone bearing units in spoken languages must be as-
sociated with some tone value, it is not the case that similar units in signed
languages must always be associated with some value for a given nonmanual
feature; for example, headshake/headnod. Consequently, spreading of the non-
manual is not blocked by interfering prosodic material (on the same layer) and
may, therefore, proceed over a sequence of words. Once again, it is necessary
to emphasize that further research is necessary in order to evaluate the validity
of this claim.

Acknowledgments
I am particularly indebted to my colleague Susanne Glück for fruitful mutual
work and many stimulating discussions. I am also very grateful to Daniela Happ
and Elke Menges for their invaluable help with the DGS data. Moreover, I wish
to thank Rajesh Bhatt, Katharina Hartmann, Meltem Kelepir, Gaurav Mathur,
Scott Myers, Carol Neidle, Christian Rathmann, and Sandra Wood, as well as
an anonymous reviewer, for their comments on an earlier version of this chapter.

11.7 References
Ablorh-Odjidja, J. R. 1968. Ga for beginners. Accra: Waterville Publishing.
Baumbach, E. J. M. 1987. Analytical Tsonga grammar. Pretoria: University of South
Africa.
Becker-Donner, Etta. 1965. Die Sprache der Mano (Österreichische Akademie der
Wissenschaften, Sitzungsbericht 245 (5)). Wien: Böhlaus.
Bergman, Brita. 1995. Manual and nonmanual expression of negation in Swedish Sign
Language. In Sign language research 1994: Proceedings of the Fourth European
Congress on Sign Language Research, ed. Heleen Bos and Trude Schermer, 85–
103. Hamburg: Signum.
Boyes Braem, Penny. 1995. Einführung in die Gebärdensprache und ihre Erforschung.
Hamburg: Signum.
Brauner, Siegmund. 1995. A grammatical sketch of Shona. Köln: Köppe.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Capell, A. and H. E. Hinch. 1970. Maung grammar, texts and vocabulary. The Hague:
Mouton.
Caron, B. 1990. La négation en Haoussa. Linguistique Africaine 4:32–46.
Chomsky, Noam. 1995. The minimalist program. Cambridge, MA: MIT Press.
Coerts, Jane. 1990. The analysis of interrogatives and negations in Sign Language of the
Netherlands. In Current trends in European Sign Language research: Proceedings
of the 3rd European Congress on Sign Language Research, ed. Siegmund Prillwitz
and Tomas Vollhaber, 265–277. Hamburg: Signum.
Coulter, Geoffrey R. 1982. On the nature of ASL as a monosyllabic language. Paper
presented at the annual meeting of the Linguistic Society of America, San Diego,
CA.
Creissels, D. 1998. Expansion and retraction of high tone domains in Setswana. In
Theoretical aspects of Bantu tone, ed. Larry M. Hyman and Charles W. Kisseberth,
133–194. Stanford: CSLI.
Dahl, Östen. 1979. Typology of sentence negation. Linguistics 17:79–106.
Dahl, Östen. 1993. Negation. In Syntax: An international handbook of contemporary
research, ed. Joachim Jacobs, Arnim von Stechow, Wolfgang Sternefeld and Theo
Vennemann, 914–923. Berlin: de Gruyter.
Deuchar, M. 1984. British Sign Language. London: Routledge and Kegan Paul.
Frajzyngier, Zygmunt. 1993. A grammar of Mupun. Berlin: Reimer.
Glück, Susanne and Roland Pfau. 1999. A Distributed Morphology account of verbal
inflection in German Sign Language. In Proceedings of ConSOLE 7, ed. Tina
Cambier-Langeveld, Anikó Lipták, Michael Redford and Eric Jan van der Torre,
65–80. Leiden: SOLE.
Goldsmith, John. 1979. Autosegmental phonology. New York: Garland.
Goldsmith, John. 1990. Autosegmental and metrical phonology. Oxford: Blackwell.
Haegeman, Liliane. 1995. The syntax of negation. Oxford: Cambridge University
Press.
Haegeman, Liliane and Raffaella Zanuttini. 1991. Negative heads and the Neg-criterion.
The Linguistic Review 8:233–251.
Halle, Morris. 1990. An approach to morphology. In Proceedings of NELS 20, 150–185.
GLSA, University of Massachusetts, Amherst.
Halle, Morris. 1994. The Russian declension. An illustration of the theory of Dis-
tributed Morphology. In Perspectives in phonology, ed. Jennifer Cole and Charles
W. Kisseberth, 29–60. Stanford: CSLI.
Halle, Morris. 1997. Distributed Morphology: Impoverishment and fission. In MIT Work-
ing Papers in Linguistics 30, 425–449. Department of Linguistics and Philosophy,
MIT, Cambridge, MA.
Halle, Morris and Alec Marantz. 1993. Distributed Morphology and the pieces of in-
flection. In The view from building 20. Essays in linguistics in honor of Sylvain
Bromberger, ed. Ken Hale and Samuel J. Keyser, 111–176. Cambridge, MA: MIT
Press.
Halle, Morris and Alec Marantz. 1994. Some key features of Distributed Morphology.
In MIT Working Papers in Linguistics 21, 275–288. Department of Linguistics and
Philosophy, MIT, Cambridge, MA.
Harley, Heidi and Rolf Noyer. 1999. Distributed Morphology. Glot International 4:3–9.
Hartmann, Katharina. 1999. Doppelte Negation im Háusá. Paper presented at Generative
Grammatik des Südens (GGS 1999). Universität Stuttgart, May 1999.
Hyman, Larry M. 1990. Boundary tonology and the prosodic hierarchy. In The
phonology-syntax connection, ed. Sharon Inkelas and Draga Zec, 109–125.
Chicago, IL: University of Chicago Press.
Hyman, Larry M. and J.T. Mathangwane. 1998. Tonal domains and depressor consonants
in Ikalanga. In Theoretical aspects of Bantu tone, ed. Larry M. Hyman and Charles
W. Kisseberth, 195–229. Stanford: CSLI.
Kayne, Richard. 1994. The antisymmetry of syntax. Cambridge, MA: MIT Press.
Kenstowicz, Michael. 1994. Phonology in generative grammar. Cambridge, MA:
Blackwell.
Liddell, Scott. 1980. American Sign Language syntax. The Hague: Mouton.
Lyovin, Anatole V. 1997. An introduction to the languages of the world. Oxford: Oxford
University Press.
McCarthy, John. 1988. Feature geometry and dependency: A review. Phonetica 43:
84–108.
Myers, Scott. 1987. Tone and the structure of words in Shona. Doctoral dissertation,
University of Massachusetts, Amherst, MA. (Published by Garland Press, New
York. 1990.)
Neidle, Carol, Benjamin Bahan, Dawn MacLaughlin, Robert G. Lee, and Judy Kegl.
1998. Realizations of syntactic agreement in American Sign Language: Similarities
between the clause and the noun phrase. Studia Linguistica 52:191–226.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.
Noyer, Rolf. 1998. Impoverishment theory and morphosyntactic markedness. In Mor-
phology and its relation to phonology and syntax, ed. S. G. Lapointe, D.K. Brentari
and P. M. Farrell, 264–285. Stanford: CSLI.
Odden, David. 1995. Tone: African languages. In The handbook of phonological theory,
ed. John A. Goldsmith, 444–475. Cambridge, MA: Blackwell.
Ouhalla, Jamal. 1990. Negation, relativized minimality and the aspectual status of aux-
iliaries. The Linguistic Review 7:183–231.
Overdulve, C. M. 1975. Apprendre la langue rwanda. The Hague: Mouton.
Payne, John R. 1985. Negation. In Language typology and syntactic description, Vol.1:
Clause structure, ed. Timothy Shopen, 197–242. Cambridge: Cambridge University
Press.
Perlmutter, David M. 1992. Sonority and syllable structure in American Sign Language.
Linguistic Inquiry 23:407–442.
Pfau, Roland. 2001. Typologische und strukturelle Aspekte der Negation in Deutscher
Gebärdensprache. In Gebärdensprachlinguistik 2000: Theorie und Anwendung, ed.
Helen Leuninger and Karin Wempe, 13–31. Hamburg: Signum.
Pfau, Roland and Susanne Glück. 1999. The pseudo-simultaneous nature of complex
verb forms in German Sign Language. In Proceedings of the 28th Western Conference
on Linguistics, ed. Nancy M. Antrim, Grant Goodall, Martha Schulte-Nafeh and
Vida Samiian, 428–442. Fresno, CA: CSU.
Pfau, Roland and Susanne Glück. 2000. Negative heads in German Sign Language
and American Sign Language. Paper presented at 7th International Conference
on Theoretical Issues in Sign Language Research (TISLR 2000). Universiteit van
Amsterdam, July.
Pilleux, Mauricio. 1991. Negation in Chilean Sign Language. Signpost Winter 1991:
25–28.
Pollock, Jean-Yves. 1989. Verb movement, universal grammar, and the structure of IP.
Linguistic Inquiry 20:365–424.
Poulos, George. 1990. A linguistic analysis of Venda. Pretoria: Via Afrika Ltd.
Rathmann, Christian. 2000. Does the presence of person agreement marker predict
word order in signed languages? Paper presented at 7th International Conference
on Theoretical Issues in Sign Language Research (TISLR 2000). Universiteit van
Amsterdam, July.
Rathmann, Christian. 2001. The optionality of Agreement Phrase: Evidence from signed
languages. Paper presented at Conference of the Texas Linguistic Society (TLS
2001). University of Texas at Austin, March 2001.
Redden, J. E. and N. Owusu. 1963. Twi: Basic course. Washington, DC: Foreign Service
Institute.
Rondal, J.-A., F. Henrot, and M. Charlier. 1997. Le langage des signes. Aspects psy-
cholinguistiques et éducatifs. Sprimont: Mardaga.
Tuldava, J. 1994. Estonian textbook: Grammar, exercises, conversation. Bloomington,
IN: Indiana University Research Institute for Inner Asian Studies.
Veinberg, Silvana C. 1993. Nonmanual negation and assertion in Argentine Sign
Language. Sign Language Studies 79:95–112.
Veinberg, Silvana C., and Ronnie B. Wilbur. 1990. A linguistic analysis of the negative
headshake in American Sign Language. Sign Language Studies 68:217–244.
Vogt-Svendsen, Marit. 2000. Negation in Norwegian Sign Language and in contrast to
some features in German Sign Language. Poster presented at 7th International
Conference on Theoretical Issues in Sign Language Research (TISLR 2000).
Universiteit van Amsterdam, July.
Wilbur, Ronnie B. 1990. Why syllables? What the notion means for ASL research. In
Theoretical issues in sign language research, Vol. 1: Linguistics, ed. Susan Fisher
and Peter Siple, 81–108. Chicago, IL: University of Chicago Press.
Wilbur, Ronnie B. 2000. Phonological and prosodic layering of nonmanuals in American
Sign Language. In The signs of language revisited: An anthology to honor Ursula
Bellugi and Edward Klima, ed. Karen Emmorey and Harlan Lane, 215–244.
Mahwah, NJ: Lawrence Erlbaum Associates.
Wilbur, Ronnie B. and Cynthia G. Patschke. 1998. Body leans and the marking of
contrast in American Sign Language. Journal of Pragmatics 30:275–303.
Wood, Sandra K. 1999. Semantic and syntactic aspects of negation in ASL. MA thesis,
Purdue University, West Lafayette, IN.
Zanuttini, Raffaella. 1991. Syntactic properties of sentential negation. A comparative
study of Romance languages. Doctoral dissertation, University of Pennsylvania.
Zeshan, Ulrike. 1997. “Sprache der Hände?”: Nichtmanuelle Phänomene und Nutzung
des Raums in der Pakistanischen Gebärdensprache. Das Zeichen 39:90–105.
12 Nominal expressions in Hong Kong Sign
Language: Does modality make a difference?

Gladys Tang and Felix Y. B. Sze

12.1 Introduction
Signed language research in recent decades has revealed that signed and spoken
languages share many properties of natural language, such as duality of pattern-
ing and linguistic arbitrariness. However, the fact that there are fundamental
differences between the oral–aural and visual–gestural modes of communica-
tion leads to the question of the effect of modality on linguistic structure. Var-
ious researchers have argued that, despite some superficial differences, signed
languages also display the property of formal structuring at various levels of
grammar and a similar language acquisition timetable, suggesting that the prin-
ciples and parameters of Universal Grammar (UG) apply across modalities
(Brentari 1998; Crain and Lillo-Martin 1999; Lillo-Martin 1999). The fact that
signed and spoken languages share the same kind of cognitive systems and
reflect the same kind of mental operations was suggested by Fromkin (1973),
who also argued that having these similarities does not mean that the differences
resulting from their different modalities are uninteresting. Meier (this volume)
compares the intrinsic characteristics of the two modalities and suggests some
plausible linguistic outcomes. He also comments that the opportunity to study
other signed languages in addition to American Sign Language (ASL) offers a
more solid basis to examine this issue more systematically.
This chapter suggests that a potential source of modality effect may lie in the
use of space in the linguistic and discourse organization of nominal expressions
in signed language. In fact, some researchers in this field have proposed that
space plays a relatively more prominent role in signed language than in spoken
language. As Padden (1990) claims, in spoken language space is only something
to be referred to; it represents a domain in our mental representation in which
different entities and their relations are depicted. On the other hand, space is
physically accessible and used for linguistic representation in signed language.
This includes not just the neutral signing space, but also space around or on
the signer’s body.1 Poizner, Klima and Bellugi (1987) distinguish two different
1 The space for representing syntactic relations with loci was originally proposed by Klima and
Bellugi (1979) as a horizontal plane in front of the signer at the trunk level. Kegl (1985) argued
that loci in the signing space are not restricted to this horizontal plane.


uses of space in signed language: spatial mapping and spatialized syntax. Spatial
mapping describes through signing the topographic or iconic layout of objects
in the real world. At the same time, certain syntactic or semantic properties like
verb agreement, pronominal, and anaphoric reference also use locations or loci
in space for their linguistic representation.
In fact, given that objects and entities are referred to through nominal expres-
sions in natural language, the relationship between syntactic structure, space,
and nominal reference in signed language requires a detailed examination. In a
signing discourse, objects and entities are either physically present, or concep-
tually accessible through their associated loci in the signing space, or they are
simply being referred to in the universe of discourse. A research question thus
arises as to whether, or to what extent, the presence or absence of referents in the
signing discourse influences the linguistic organization of nominal expressions
in the language.
In what follows, we first present a description of the internal structure of
the nominal expressions of Hong Kong Sign Language (HKSL). Where ap-
propriate, comparative data from ASL and spoken languages such as English
and Cantonese are also adopted. We illustrate how Hong Kong deaf signers
encode (in)definiteness through syntactic cues, such as the structure of nominal
expressions, syntactic position, as well as nonmanual markings. Toward the
end of the chapter, we provide an account of the distribution and interpretation
of certain nominal expressions in the HKSL discourse, using Liddell’s (1994;
1995) concept of mental spaces. We suggest that the types of mental space in-
voked during signing serve as constraints for the distribution and interpretation
of certain nominal expressions in the HKSL discourse.

12.2 Nominal expressions of HKSL


The possibility that the NP (noun phrase) structure in HKSL is similar to Can-
tonese can be readily refuted by the observation that NPs in HKSL that involve
common nouns do not have a classifier phrase (CLP) projection (see Tang
1999). Cheng and Sybesma (1999) report that Cantonese is a numeral classi-
fier language and the classifier phrase is projected between NumP and NP,
yielding a surface order of [Det Num Cl N]. Following Allan’s (1977) typol-
ogy, HKSL belongs to the category of classifier predicate languages. Similar
to ASL, the classifiers of HKSL are verbal rather than nominal, and they en-
ter into the predicate construction of the language. Nominal expressions of
HKSL show a syntactic order of [Det Num N], and referential properties such
as (in)definiteness, genericity as well as specificity – which are encoded in part
by classifiers in Cantonese – are marked by a difference in syntactic structure
or position (preverbal or postverbal) in HKSL. Moreover, while all modifiers
precede the noun in Cantonese, the syntactic order of nominal expressions in
HKSL appears to be quite variable because the data reveal that the pointing sign
and the numeral sign may either precede or follow the noun.

Figure 12.1 INDEXdet i

12.3 Determiners

12.3.1 Definite determiners


Both definite and indefinite determiners are observed in HKSL. The pointing
sign glossed as INDEXdet is associated with a definite referent.2 As illustrated
in Figure 12.1, the index finger points outward, usually toward the location of
the referent in the immediate physical environment or toward an abstract locus
in space.
Like its ASL counterpart, INDEXdet is found either prenominally or post-
nominally. However, a majority of our cases are prenominal. In ASL this sign
in prenominal position is a definite determiner, equivalent to the English article
‘the’; it also encodes the spatial location of the referent (1a). If it occurs in post-
nominal position, this sign would be interpreted as an adverbial ‘here/there’
(MacLaughlin 1997). In HKSL, this sign in prenominal position is also inter-
preted as a definite determiner (2a; see Figure 12.2), equivalent to the demon-
stratives ‘nei go’ (this) or ‘go go’ (that) in Cantonese.3 Also, this sign does
not yield an indefinite reading; (2b) is unacceptable unless it is interpreted as a
demonstrative ‘this’ or ‘that.’ Although MacLaughlin (1997) suggests that the
prenominal pointing sign is a determiner and the postnominal one is an adver-
bial, the HKSL data show that the postnominal pointing signs are ambiguous
between a determiner and an adverbial (2c). If it is interpreted as an adverbial,
2 The view that a pointing sign in either prenominal or postnominal position is a definite determiner
was put forward by Wilbur (1979). Zimmer and Patschke (1990) also suggest that this pointing
sign in ASL may occur simultaneously with a noun.
3 In HKSL the nearest equivalent gloss is THIS or THAT. This is probably due to the lack of an
article system in Cantonese; a definite determiner is usually translated as ‘nei go’ (‘this’) or ‘go
go’ (‘that’) in Cantonese. We leave open the issue of whether INDEXdet represents an instance
of the article system in HKSL.
it is possible that this adverbial is adjoined to N of the NP, hence leading to a
different syntactic analysis.

Figure 12.2 'That man eats rice': 12.2a INDEXdet i ; 12.2b MALE; 12.2c
EAT RICE
Crucially, INDEXdet in HKSL can be inflected for number in both prenomi-
nal and postnominal positions. In (2d), the postnominal INDEXdet is inflected
for plural by incorporating a circular movement into its articulation while the
handshape remains unchanged (see Figure 12.3). This possibility of postnomi-
nal plural inflection suggests that postnominal INDEXdet-pl can be a determiner
in HKSL. Note that in ASL, the postnominal pointing sign, being only an adver-
bial, cannot be inflected for number (MacLaughlin 1997). Consistent with this
observation, it is – according to our informants – odd to have both a prenom-
inal and a postnominal pointing sign as shown in (2e) since the postnominal
INDEXdet can also be interpreted as a determiner.
(1) a. [IXi MALE]DP ARRIVE4
‘The/That man is arriving.’ (MacLaughlin 1997:117)
egi
(2) a. [INDEXdet i MALE]DP EAT RICE5
‘That man eats rice.’ egi
b. JOHN WANT BUY [INDEXdet i BOOK]DP
‘John wants to buy that/*a book.’

4 All manual signs of HKSL in this chapter are glossed with capital letters. Where the data involve
ASL, they are noted separately. Hyphenation between two signs means that the two signs form a
compound. Underscoring is used when more than one English word is needed to gloss the sign.
Subscripted labels like INDEXdet are used to state the grammatical category of the sign and/or
how the sign is articulated. Subscripted indices on the manual sign or nonmanual markings like
eye gaze (e.g. egi ) are used to indicate the spatial information of the referent. INDEXdet i means
the definite determiner is pointing to a location i in space. As for nonmanual markings, ‘egA ’
means eye gaze directed toward the addressee; ‘egpath ’ means eye gaze that follows the path of
the hand; ‘rs’ refers to role shift in the signing discourse. In some transcriptions, RH refers to
the signer’s right hand and LH refers to the left hand.
5 Optionally, eye gaze may extend over only the initial determiner, rather than over the entire DP.

c. [MALE INDEXdet i/adv i ]DP SLEEP
‘(That) man (that/there) is sleeping.’
d. [MALE INDEXdet-pl i ]DP READ
‘Those men are reading.’
e. ??[INDEXdet i BOOK INDEXdet i/adv i ]DP EXPENSIVE
‘That book there is expensive.’
egi
f. [FEMALE-KID]DP COME
   'That girl is coming.'

Figure 12.3 'Those men are reading': 12.3a MALE; 12.3b INDEXdet-pl i ;
12.3c READ

According to MacLaughlin (1997), nonmanual markings in ASL are ab-
stract agreement features contained in D.6 When the manual sign is present,
the markings may co-occur with it and their spread over the C-command
domain of DP is optional. Definite referents are marked by head tilt and/or
eye gaze. In HKSL head tilt is seldom used with INDEXdet to mark a defi-
nite referent. Very often, this definite determiner sign is accompanied by eye
gaze directed at the spatial location of the referent. This nonmanual mark-
ing may co-occur with the sign alone or spread to N (2a). If this sign is not
present, eye gaze directed at the locus of the referent is obligatory (2f). These
findings preliminarily suggest that the system of nonmanual agreement mark-
ings for definite referents between ASL and HKSL may be different. How-
ever, in both languages the nonmanual agreement markers co-occur with the
manual sign in D and may optionally spread over the C-command domain
of D. Also, nonmanual markings are obligatory when the manual sign is not
present.

6 MacLaughlin (1997) argues that ±definite features and agreement features are located in D in
ASL. Nonmanual markings like head tilt and eye gaze are associated with these semantic and
syntactic features.

Figure 12.4a ONEdet/num ; 12.4b ONEnum

12.3.2 Indefinite determiners


Neidle et al. (2000) suggest that SOMETHING/ONEdet in ASL is an indefinite
determiner and the degree of tremoring motion is associated with the degree
of unidentifiability of the referent.7 If a referent is maximally identifiable, the
tremoring motion is minimal and the sign is almost identical to the numeral
sign ONEnum . SOMETHING/ONEdet is not directed toward a location, but a
diffuse area in space.
There is a singular indefinite determiner in HKSL. This sign, glossed as
ONEdet , is articulated with the same handshape used for the definite determiner
but the index finger points upward instead of outward (see Figure 12.4a). Unlike
the indefinite determiner in ASL, ONEdet in HKSL does not involve a tremoring
motion. This sign usually selects an N, forming a [ONEdet N] constituent (3a).
In both preverbal and postverbal positions, [ONEdet N] is indefinite and spe-
cific (3a and 3b). This sign is ambiguous when it occurs in prenominal position
because ONEdet and ONEnum are similar in terms of articulation (3a and 3b).8
However, if it occurs in postnominal position, only a quantificational reading is
expected (3c, d, e). Some older deaf signers mark number in postnominal posi-
tion by rotating the forearm so that the palm faces the signer (see Figure 12.4b),
which differs from the prenominal ONEdet/num whose articulation shows con-
tralateral palm orientation (see Figure 12.4a). ONEdet is optional if the referent
is singular, indefinite and specific (3f).
egA
(3) a. YESTERDAY [ONEdet/num FEMALE-KID]DP COME
‘A girl came yesterday.’

7 Neidle et al. (2000) observe that SOMETHING/ONE may occur alone, and it is interpreted as
a pronominal equivalent to English ‘something’ or ‘someone.’
8 A distinction suggested by MacLaughlin (1997) is the presence of stress in the articulation of
numeral ONE. Our data shows that stress occurs only in postnominal ONE.
b. [MALE]DP KICK [ONEdet/num DOG]DP
‘The man kicked a dog.’
c. YESTERDAY [FEMALE-KID ONEnum ]DP COME
‘One girl came yesterday.’
d. FATHER WANT BUY [DOG TWOnum ]DP
‘Father wants to buy two dogs.’
e. [[FEMALE ONEnum ]DP [MALE ONEnum ]DP ]DP 2 PERSON SIT
NEXT TO EACH OTHER
'One woman and one man sat next to each other.'
egA
f. [MALE]DP CYCLE
‘A man is cycling.’
As mentioned above, [ONEdet N] is usually indefinite and specific. Some-
times, this constituent may be preceded by HAVE (see Figure 12.5), or
ONEdet/num is simply omitted, yielding a [HAVE N] sequence (4a,b). HAVE
appears to be a loan sign from signed Cantonese 'jau' and has become quite
established in the HKSL lexicon.
(4) a. HAVE [ONEdet/num FEMALE]DP STEAL DOG
‘A female stole a/the dog.’

b. HAVE [MALE]DP STEAL DOG
   'A male stole a/the dog.'

Figure 12.5 'A female stole a dog': 12.5a HAVE; 12.5b ONEdet/num ; 12.5c
FEMALE; 12.5d–e STEAL; 12.5f DOG

With [HAVE [ONEdet/num N]DP ], the ONE sign is interpreted as a numeral and
the sign sequence is similar to the existential constructions in Cantonese except
for the absence of a classifier in the constituent:9

(5) [Jau [saam zek gai]DP ] sei zo
Have three cl chicken die asp
‘Three chickens died.’
In terms of referential properties, [HAVE [ONEnum N]DP ] or [HAVE [N]DP ] may
refer to indefinite specific or nonspecific referents.10 Note that [HAVE [N]DP ]
or [HAVE [ONEnum N]DP ] in HKSL does not occur in postverbal position, as
in (6).

(6) *[INDEXdet MALE]DP BUY [HAVE [CAR]DP ]
‘That man bought a car.’
In ASL, in addition to eye gaze, the indefinite determiner is “accompanied
by a non-manual expression of uncertainty which includes a wrinkled nose,
furrowed brows, and a slight rapid head shake” (Neidle et al. 2000:90). Head
tilt has not been found to be associated with indefinite referents in ASL. If
eye gaze to a location in space occurs during the expression of an indefinite, it
targets a more diffuse area than a point in space. In HKSL eye gaze for indefinite
specific referents seldom spans a diffuse area in space. Instead, it is directed
toward the addressee (3a,f); unlike cases of definite reference, the signer does
not break eye contact with the addressee. This pattern of eye gaze is extremely
common when the signer introduces a new referent in the signing discourse;

9 ‘Jau’ (‘have’) in Cantonese is an existential verb which may be preceded by an adverbial such
as ‘nei dou’ (‘here’) or ‘go dou’ (‘there’):
i. Nei dou jau saam zek gai
Here have three cl chicken
‘There are three chickens here.’
Note that if the noun is singular and indefinite, the numeral is omitted, yielding a constituent
like the one below:

ii. Jau zek gai sei zo
Have cl chicken die asp
‘A chicken died.’
10 [HAVE NUM N] usually refers to an indefinite specific referent. The numeral in this sign
sequence can be postnominal, as in the utterance:

i. [HAVE MALE THREE]DP STEAL DOG
‘Three men stole a/the dog.’

with this pattern of eye gaze, the introduction of the new referent is interpreted
as referring to a specific indefinite referent.11

Figure 12.6a–b ONEdet-path ; 12.6c PERSON
What if the referent is indefinite and nonspecific? The data show that [ONEdet
N] in postverbal position may apply to a nonspecific indefinite referent (7a).
However, when the signer wishes to indicate that he or she is highly uncertain
about the identifiability of the referent, the index finger moves from left to right
with a tremoring motion involving the wrist. This sign usually combines with
an N, as shown in (7b) (see Figure 12.6) and (7c):
(7) a. FATHER LOOK FOR [ONEdet/num POLICEMAN]
‘Father is looking for a/one policeman.’
egpath egA
b. [INDEXpro-3p BOOK]DP GIVE [ONEdet-path PERSON]DP
‘His book was given to someone.’
c. [INDEXdet MALE] WANT TALK [ONEdet-path STUDENT]DP
‘That man wants to talk to a student.’
[ONEdet-path N] normally occurs in postverbal position and is accompanied
by round protruded lips, lowered eyebrows and an audible bilabial sound.
When this sign is produced, the signer’s eye gaze is never directed at a spe-
cific point in space; instead, it follows the path of the hand, suggesting that
there is no fixed referent in space. Note that this eye gaze pattern does not
spread to the noun. Usually, it returns to the addressee and maintains eye
contact with him (or her) when the noun is signed (7b). Alternatively, eye

11 Sometimes, a shift in eye gaze from the addressee to a specific location together with a pointing
sign is observed when the signer tries to establish a locus for the new referent:
egA egi
i. MALE INDEXadv i STEAL DOG
‘A man there stole the dog.’
This sign is taken to be an adverbial in our analysis.

gaze is directed at the addressee, maintaining eye contact with him through-
out the entire DP. Unlike ASL, ONEdet-path alone is not a pronominal and it is
[ONEdet-path PERSON] that is taken to be a pronominal equivalent to the En-
glish ‘someone.’ Relative to [ONEdet-path PERSON], it seems that [ONEdet-path
N] is not yet established firmly in HKSL, as the informants’ judgments on
this constituent are not unanimous, as is the case for other nominal expres-
sions. While all of our deaf informants accept [ONEdet-path PERSON], some
prefer [ONE N] or a bare noun to [ONEdet-path N] for nonspecific indefinite
referents.
In sum, in terms of nonmanual markings, definite determiners require that
eye gaze be directed to a specific location in space. On the other hand, the
signer maintains eye contact with the addressee when he introduces an in-
definite specific or nonspecific referent to the discourse. However, variation
is observed with the eye gaze pattern for indefinite nonspecific referents. The
ONEdet-path sign may also be accompanied by eye gaze that tracks the path of the
hand.

12.4 Pronouns
It has been assumed that pronouns are determiners (Abney 1987; Cheng and
Sybesma 1999). MacLaughlin (1997) argues that pronouns and definite deter-
miners in ASL are the same lexical element, base-generated in D. In HKSL
the pointing sign may be interpreted as a pronoun when signed alone, hence
glossed as INDEXpro . We assume that this manual sign is base-generated in
D and has a [+definite] feature. It can be inflected for person and number
(8a,b,c). Note also that (8d) is ambiguous; it can either be a pronominal or a
demonstrative.

egi
(8) a. [INDEXpro-3p i ]DP CRY
‘She cried.’
b. [INDEXpro-1p i ]DP LOVE [INDEXpro-3p j ]DP
‘I love him.’
c. [INDEXpro-1p i ]DP LOVE [INDEXpro-3p-pl j ]DP
‘I love them’
d. [INDEXpro-3p i/det i ]DP TALL, [INDEXpro-3p j/det j ]DP SHORT
‘It/This (tree) is tall, it/this (tree) is short.’

In HKSL pronouns are optionally marked by eye gaze directed at the location
of the referent in space, similar to the definite determiner (8a). Based on the
observations made so far, INDEXdet and INDEXpro are associated with the
definiteness and agreement features in HKSL.

Figure 12.7a POSSdet i ; 12.7b POSSneu

12.5 Possessives
There are two signs for possessives in HKSL: a possessive marker, glossed as
POSS, and a sign similar to INDEXpro , which is interpreted as a possessive
pronoun. Similar to ASL, POSS is articulated with a B handshape with all the
extended fingers (thumb included) pointing upward. POSS may be neutral or
inflected such that the palm is oriented toward the location of the possessor
in space (see Figures 12.7a, 12.7b). As we shall see, this possessive marker is
highly restricted in distribution in HKSL. It differs from ASL in the following
respects. First, possessive DPs in HKSL that are transitive (i.e. subcategorize
for an NP) do not have an overt possessive marker, as in (9a) and (9b). Therefore,
(9c) is unacceptable in HKSL.12
egi
(9) a. [PETER CAR]DP BREAK DOWN
‘Peter’s car broke down.’
b. YESTERDAY I SIT [PETER CAR]DP
‘I rode in Peter’s car yesterday.’
c. *[PETERi POSSi CAR] OLD
‘Peter’s car is old.’
In ASL, possessive constructions require a possessive marker POSS that
agrees with the possessor (10a). Alternatively, POSS is a pronominal in (10b).
An equivalent structure in HKSL as shown in (11a) would be ruled out as
ungrammatical and POSS does not occur before the possessee as a pronominal
but INDEXpro does (11b):
(10) a. [FRANKi POSSi NEW CAR]DP (ASL data from Neidle et al.
2000:94)
‘Frank’s new car’
12 Some deaf signers accept this pattern; however, they admit that they are adopting signed
Cantonese, and the sequence can be translated as ‘Peter ge syu.’ The morpheme /ge/ is a
possessive marker in spoken Cantonese.

b. [POSSi NEW CAR]DP (ASL data from Neidle et al. 2000:94)
‘his new car’
(11) a. *YESTERDAY [POSSi NEW CAR]DP BREAK DOWN
‘His new car broke down yesterday.’
egi
b. [INDEXpro-3p i DOG]DP DIE
‘His dog died.’
In ASL, POSS occupies the D position and it becomes optional only when it is
associated with inalienable possession (12):
(12) [MARYi (POSSi ) EYE]
‘Mary’s eye’
Another difference between ASL and HKSL is that POSS in HKSL is re-
stricted to the predicative possessive context. In the predicative context, if the
possessor is not associated with a locus in space, POSS is uninflected (13a).
If the referent is physically accessible for identification, such as (13b), POSS
agrees with the spatial location of the referent. In this case, POSS may function
pronominally as a genitive, similar to INDEXpro (see Figure 12.8).
(13) a. [INDEXdet i BOOK]DP [[WONG POSSneu ]DP ]VP
‘That book is Wong’s.’
egj
b. [INDEXdet i DOG]DP [[POSSj /INDEXpro j ]DP ]VP
‘That dog is his.’
In (13a), we assume that the possessor surfaces in the specifier position of
DP, and POSS is base-generated in D and contains a [+definite] feature. If
the possessor is physically present or has already assumed a locus in the sign-
ing space, either an independent POSS or INDEXpro is used in the predica-
tive context, as shown in (13b). In this case, POSS is a pronominal and we

Figure 12.8 ‘That dog is his’: 12.8a INDEXdet i; 12.8b DOG; 12.8c POSSdet j

assume that it occupies D, similar to INDEXpro. Therefore, the orientation of
the palm and direction of movement agree with the spatial location of the
referent.
Neidle et al. (2000) propose that, in possessive constructions in ASL, head tilt
is associated with the possessor and eye gaze with the possessee. As mentioned
previously, head tilt as an agreement marker for definite determiners is seldom
employed in HKSL, nor is there a distinctive division of labor of nonmanual
markings for the possessor and the possessee in HKSL. Similar to the pronouns,
POSS and INDEXpro in possessive constructions are usually accompanied by
eye gaze at a specific location in space in a predicative context (13b; see Figure
12.7a).
In sum, we have provided a descriptive account of the syntactic constituents
of the nominal expressions in HKSL. Despite some differences in the sur-
face constructions of the languages being compared, the nominal expressions
of HKSL show formal properties of linguistic structuring that have been dis-
cussed in the spoken language literature. The data suggest that a lexical category
like the noun phrase in HKSL has above it a functional projection headed by
a determiner located in D (compare Abney 1987). According to Longobardi
(1994:613), a nominal expression is “an argument only if it is introduced by
a category D.” Therefore, the noun phrase that occupies an argument position
in our analysis is assumed to be a determiner phrase and acquires its refer-
ential properties through D. The manual signs for the determiners, pronouns,
and possessives – together with their associated nonmanual signs in HKSL –
demonstrate functions of D that are hypothesized to be associated with the
referential features such as ±definite and agreement features. Our data show
that the head N is usually associated with these manual signs in D and the scope
of nonmanual markings of D may cover N. Nevertheless, in the following sec-
tions, we turn to a phenomenon that might enable us to examine the modality
issue in a different light. We propose that while the visual–gestural modality
may not lead to a difference in linguistic structuring at the syntactic level, it
may influence the distribution and interpretation of nominal expressions in the
signing discourse.

12.6 Predominance of bare nouns: An indication of modality effects?


HKSL is similar to ASL in that both the definite and indefinite determiners
may be optional. As such, bare nouns are quite common in HKSL. They may
be definite (14a), indefinite specific (14b), indefinite nonspecific (14c), and
generic (14d). Also, almost all bare nouns occur in either preverbal or postver-
bal positions. The only exception is that in preverbal position, a bare noun
cannot be indefinite and nonspecific unless it is preceded by HAVE, forming a
[HAVE (ONE) N] constituent (see Section 12.3.2). Recovery of the respective
referential properties is a function of the discourse context in which they
occur.13

egi
(14) a. [DOG]DP CATCH MOUSE (definite)
‘The dog caught a mouse.’
egA
b. I SEE [DOG]DP LIE INDEXadv (indefinite specific)
‘I saw a dog lying there.’
egA
c. I GO CATCH [BUTTERFLY]DP (indefinite nonspecific)
‘I’ll go and catch a butterfly.’
egA
d. I LIKE [VEGETABLE]DP (generic)
‘I like vegetables.’
In a study of narratives in HKSL (Sze 2000), bare nouns were observed to
refer to definite referents for about 40% of all the nominal expressions under
study and 58% for indefinite and specific referents, as shown by examples (15a)
and (15b):
(15) a. [DOG]DP CL:ANIMAL JUMP INTO BASKET (definite)
‘The dog jumped into a basket.’
b. [MALE]DP RIDE A BICYCLE (indefinite specific)
‘A man is riding a bicycle.’
Many spoken languages do not allow bare nouns for such a wide range of
referents. English bare nouns, for instance, refer to generic entities only. In
Cantonese, bare nouns yield only a generic reading. They cannot be definite in
either preverbal or postverbal positions (16a). To be definite, the count noun
‘horse,’ if singular, requires a lexical classifier ‘zek’ to precede it and a mass
noun like ‘grass’ is preceded by a special classifier ‘di,’ as shown in (16b)
(Matthews and Yip 1994). In postverbal position, a bare noun may yield an
indefinite nonspecific reading (16c).
(16) a. [Maa]DP sik [cou]DP (generic/*definite)
Horse eat grass
‘Horses eat grass.’/*‘The horse is eating the grass.’
b. [Zek maa]DP sik gan [di cou]DP (definite)
cl horse eat asp cl grass
‘The horse is eating the grass.’
13 It is not clear whether ASL exhibits a similar pattern of distribution with bare nouns. The data
from Neidle et al. (2000) suggest that bare nouns in both preverbal and postverbal positions can
be either indefinite or definite.

c. Ngo soeng heoi maai [syu]DP (indefinite nonspecific)
I want go buy book
‘I want to buy a book.’
In what follows, we discuss the distribution of nominal expressions, in par-
ticular that of bare nouns in HKSL discourse. Although the effect of modality
on linguistic structure may be minimal at the syntactic level, we would like
to suggest that factors pertinent to a signing discourse may lead to differences
such as the distribution and interpretation of bare nouns. These factors can
be described in terms of the types of mental spaces invoked by the signer
during the flow of discourse, as well as the physical and conceptual loca-
tion of the referents in these mental spaces. One can view the interpretation
of nominal expressions in a signing discourse as a result of the interaction
between the signer’s knowledge of the syntactic properties of the nominal ex-
pressions, their respective referential properties and the type of mental space
invoked.

12.7 Mental spaces and nominal expressions: Toward an explanation


Liddell (1994; 1995; 1996) argues that there is a relationship between mental
spaces and nominal reference. In the spirit of Fauconnier (Fauconnier
1985; 1997), he argues that mental spaces are conceptual domains of refer-
ential structure that people talk about and that can be built up during discourse
as a common ground between the speaker and the addressee. In signed lan-
guage analysis, Liddell conceptualizes space as having three types: real space,
surrogate space, and token space.14
Real space is a conceptual representation of the current, directly perceivable
physical environment. When the referents are present in the immediate envi-
ronment, they are represented in the real space of the signer. Pointing signs or
indicating verbs that serve a deictic function would be used because they entail
locational information of the referent in the real world. In surrogate space, the
referents are not physically present. However, the signer can introduce them
into the discourse as surrogate entities in that mental space. Reference to these
surrogates can be made through pointing signs, indicating verbs or role shift.
According to Liddell, surrogates may take first, second, and third person roles.15

14 Liddell’s concept of mental spaces actually differs from Fauconnier’s. The types of mental spaces
as described by Fauconnier (1985) are nongrounded (i.e. not in the immediate environment of
either the speaker or the addressee) and not physically accessible. The mental spaces proposed
by Liddell may be grounded and physically accessible.
15 We leave the debate on person distinctions in signed language open. For example, Meier (1990)
argues that ASL does not distinguish second and third person in the pronominal system.

In token space, conceptual entities are given a manifestation in confined,
physical space. They usually assume a third person role.
According to Liddell, and subsequently Liddell and Metzger (1998), all these
mental spaces are grounded mental spaces, and the conceptual entities within
each of them can be referred to as part of the signer’s perception of the context.
They are not linguistic representations, but conceptual structures perceived by
the signer for meaningful mental representations of conceptions, things, events,
etc. However, they influence the nature of linguistic representations whose use
they underlie. It is argued that spatial loci do not contain agreement features
but reflect location of referents only, and the pointing signs directed toward them
are deictic rather than anaphoric. While agreeing with Liddell’s proposal that
mental spaces being conceptual structures are essentially the same regardless of
the modality of communication, we adopt the position that, in signed language,
spatial loci contain agreement features for the manual and nonmanual markings
of the signs directed toward them. Space in signed language is both functional
and linguistic and its role changes according to the level of representation that
the grammar is associated with.
At the discourse level, the choice of grammatical reference in signed lan-
guage is a function of how “active” or “identifiable” a referent is in the concep-
tualizer’s awareness as well as the type of mental space selected.16 Therefore,
it is highly likely that less complex nominal expressions such as bare nouns
are used when the discourse content is sufficient for the signer to identify the
referent. In fact, research on pronominal reference suggests that there is consid-
erable uniformity across signed languages in the use of space for referential and
anaphoric purposes. Also, pronouns of signed language exhibit a high degree of
referential specificity since spatial location allows for the unambiguous identi-
fication of referents (Poizner, Klima and Bellugi 1987; McBurney this volume).
If the referents are physically present or have already been assigned a refer-
ential locus, less complex nominal expressions are likely to be used because
identification of the referent in this case does not require a great deal of lexical
content.17
Generally speaking, referential properties in spoken languages like English or
Cantonese are conveyed by linguistic cues such as the article system, syntactic
structure, or syntactic position. However, in HKSL we observe that the mental
spaces invoked by the signer interact with these linguistic cues in establishing
grammatical reference through the language.

16 Little signed language research to date has been conducted using the concept of mental spaces;
and most existing studies are concerned with pronominal reference and verb types in signed
language (Liddell 1995; van Hoek 1996).
17 Null arguments are also common in signed languages, and recently there has been a debate
on the recoverability of null arguments. Views have diverged into recoverability via discourse
topics (Lillo-Martin 1991) or via person agreement (Bahan et al. 2000).

12.7.1 Bare nouns


In the absence of a determiner, the reference of bare nouns in HKSL may be
identified via eye gaze. While eye gaze at a specific location in space is observed
to be associated with definite referents, maintaining eye gaze at the addressee
suggests indefinite referents. We observe that the occurrence of bare nouns is
also dependent upon the type of mental space invoked as well as the accessibility
of the referents’ spatial information.
If real space is being invoked in the signer’s consciousness, the referents
are physically present in the immediate environment. In this case, the signer
generally resists using a bare noun to introduce a new referent into the signing
discourse or to refer to a previously mentioned referent. Instead, INDEXpro is
used since the referent is perceived by the signer to be maximally accessible
in real space. In other words, the signer assumes that the addressee is also
cognizant of the presence of the referent in the physical environment whose
spatial location is expressed by INDEXpro and its respective eye gaze. If the
referent is relatively further away from the signer but lies within a visible
distance, [INDEXdet N] would be used.
Bare nouns appear to be quite common in surrogate space and are used
to refer to either definite or indefinite referents. As mentioned, to introduce
an indefinite specific referent with a bare noun, the signer generally main-
tains eye contact with the addressee. This finding corroborates the observa-
tion of Ahlgren and Bergman (1994) with regard to Swedish Sign Language.
Also, if the context is transparent enough to allow unambiguous identifica-
tion of the referent, a bare noun is selected with eye gaze at the addressee
(17):

rsbody shifts left
egA egA
(17) [MALE]DP HITi [WOMAN]DP
‘A man hit a woman.’

In this context, the narrator is describing an event that happened the night before.
It involves a man hitting a woman. After introducing the man, the deaf signer
assumes the role of the male referent and hits at a specific location on his right
before he signs WOMAN, suggesting that the woman is standing on the right
side of the man who hits her. In both instances, the deaf signer gazes at the
addressee for MALE and WOMAN but his gaze turns to the direction of the
woman surrogate when he signs HIT.
We also found role shift to accompany bare nouns in HKSL; here, it is usually
associated with definite specific referents (18):

egA egA
(18) [MALE]DP CYCLE KNOW [MALE]DP BACK,
rsbody leans backward
[MALE]DP DRIVE CAR BE ARROGANT PRESS HORN
‘A man who was riding a bicycle knew that there was a male driving
a car behind him. The driver was arrogant and pressed the horn.’
This narrative features a driver and a cyclist. The cyclist in front notices that
there is a driver behind him. The driver arrogantly sounds the horn. Both men
in the event are introduced into the discourse using eye gaze directed at the ad-
dressee. However, to refer to the driver again as a definite referent, the signer’s
body leans backward to assume the role of the driver. Therefore, role shift in this
example is associated with a definite referent in surrogate space. However, role
shift appears to be more functional than grammatical since the data show that
this nonmanual marking spreads over the entire predicate (18). In other words,
role shift seems to cover the entire event predicate rather than a single nominal
expression.
Nevertheless, the use of eye gaze at the addressee to introduce an indefinite
specific referent as shown in (17) and (18) is quite common among the deaf
signers of HKSL. Alternatively, the signer may direct his eye gaze at a specific
location in space in order to establish a referential locus for the referent. The
latter phenomenon is also reported in Lillo-Martin (1991).
In a definite context, the bare noun is associated with either eye gaze directed
at the locus or role shift (19):

rsbody shifts left
egA egj egj egj
(19) [MALE]DP SEEj [BEGGAR]DP GIVEj MONEY
‘A man saw the beggar and gave money (to the beggar).’
In this example, the male is perceived by the signer to be on the left of the
beggar. When signing MALE, the signer’s eye gaze is directed at the addressee.
When he signs SEE, he shifts his body to the left to assume the role of the man,
suggesting that the man is on the left of the beggar. Note that, through eye gaze,
the object of the verb SEE agrees with the location of this ‘surrogate’ beggar in
space. His eye gaze remains fixed on that location when he signs BEGGAR
in the neutral signing space. This bare noun refers to a definite referent because
the beggar is already established in the previous discourse. In this example, the
signer maintains this shifted position once he assumes the role of the man; he
further signs GIVE whose indirect object agrees with the locus of the beggar.
Therefore, even if the referent for MALE is not assigned a locus in surrogate
space, role shift helps to identify the referent, and the verb has to agree with
the shifted position as subject.

In our data, there are fewer bare nouns in token space than in surrogate
space. It could be that token space is invoked particularly upon the production
of classifier predicates. In this case, the referents are usually perceived to be
maximally accessible and INDEXpro is common. In fact, Liddell (1995) ob-
serves that the entities (tokens) in this type of mental space are limited to a third
person role in the discourse. Nevertheless, occasional instances of bare nouns
are found, as shown by the following example:

egi egj
(20) MALE PERSON BE LOCATEDi , FEMALE PERSON BE LOCATEDj ,
egj egi
INDEXpro-3p j j SCOLDi , MALE ANGRY, WALK TOWARD i HITj
‘A man is located here. A woman is located here (The man is placed
in front of the woman). She scolds him. The man becomes angry.
He walks toward her and hits her.’

In this example, a man is standing in front of a woman who keeps scolding
him. The man becomes angry, walks toward the woman and hits her. The first
mention of the man and woman is indefinite specific, and the signer is gazing
at the addressee. As the discourse continues, the male is mentioned again;
instead of using an INDEXpro-3p , as we observe with the second mention of
the woman referent, a bare noun is used but the eye gaze is directed toward a
human classifier (token) located at a specific point in space. It clearly indicates
that the bare noun in this context is referring to a definite referent.
Generally speaking, with the adoption of different forms of eye gaze, the ref-
erential properties of bare nouns can be established. This is possible because the
three types of mental space provide a conceptual structure for the comprehen-
sion of reference and coreference, and deaf signers capitalize on the functions
and constraints of these mental spaces. Where the relation between meaning
and referent is transparent or identifiable, a bare noun instead of a complex
nominal expression is preferred.

12.7.2 Determiners
As discussed previously, a definite determiner necessarily agrees with the spa-
tial location associated with the referent. It follows that if a signer does not
conceptualize a location in the signing space for the referent, definite determin-
ers would not be used. In fact, INDEXdet in HKSL can be associated with both
proximal and distal referents in surrogate space, as in (21a) and (21b):
egi
(21) a. [INDEXdet (center-downward) i KID]DP SMART (proximal surrogate)
‘This kid is smart.’
egi
b. [INDEXdet (center-forward) i MALE]DP SMART (distal surrogate)
‘That man is smart.’

INDEXdet in the above situations is used instead of INDEXpro although both
may be used for proximal referents. It may be that surrogate space is percep-
tually more remote than real space in the signer’s consciousness. A referent
physically located in real space may be regarded as more accessible than an
imagined surrogate even if the latter occupies the same location in surrogate
space. Therefore, INDEXdet to refer to a definite referent is preferred in surrogate
space rather than in real space.

12.7.3 Pronouns
Although a pronoun normally implicates full accessibility and identifiability of
its referent through anaphoric relations, given a situation where there is more
than one referent in the discourse, the use of pronouns might fail the principle
of identifiability. A third person pronoun in Cantonese is phonetically realized
as ‘keoi’ (‘he/she/it’) and interpretation is crucially dependent upon contextual
information. INDEXpro in HKSL typically provides spatial location of the refer-
ent in the signing space, leading to unambiguous identifiability. In Cantonese,
where more than one referent is involved, a complex nominal expression or
proper name is used instead to identify the referent in the discourse. In HKSL,
INDEXpro is seldom ambiguous, since it is directed at the referent either in
the immediate environment or via its conceptual location in space. As a conse-
quence, INDEXpro is found in all kinds of mental spaces, but more prominently
in real space and token space. In token space, it is common to use INDEXpro
directed at the classifier in the predicate construction. Prior to the articulation
of (22), a locative predicate is set up in such a way that the father is located
on the left and the son on the right. Both referents are represented by a human
classifier articulated with a Y handshape with the thumb pointing upward and
the pinky finger downward:

egi
(22) LH: FATHER PERSON BE LOCATEDi . . . . . . . . . . . . . . . . . . . . . . . . .
egj
RH: SON PERSON BE LOCATEDj
LH: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i SHOOTj
egi
RH: INDEXpro i PERSON BE LOCATEDj . . . . . . . . . . . . . . . . . .
‘The father is located on the left of the signer and the son is on the
right. He (the father) shot him (the son).’

Having set up the spatial location of the two referents through the locative
predicates, the signer produces INDEXpro with his right hand (RH), directing it
at the human classifier (i.e. the father) located on the left (LH). Note that the eye
gaze that accompanies INDEXpro is also directed at the referent (i.e. the father) in
this token space. The right hand (RH) then retracts to the right and re-articulates
a locative predicate with a human classifier referring to the son. The left hand
(LH) changes to SHOOT showing subject–object agreement, indicating that it
is the father who shoots the son. The specific location of the tokens in space as
depicted through the classifier predicates favors the occurrence of INDEXpro in
subsequent signing.

12.7.4 Possessives
Our discourse data show that predicative possessive constructions that con-
tain POSS are common in real space (23a,b). What triggers such a distri-
bution? We argue that the presence of the referent, especially the possessee,
in the immediate physical environment is a crucial determinant. To refer to
the possessee that is physically present, a pronominal index as grammatical
subject with eye gaze at a particular location is observed. It is usually fol-
lowed by a predicative possessive construction in which POSS may func-
tion as a possessive marker or a pronominal (23). When the possessor is
not present, as in (23a), [possessor POSSneu ] is adopted in the predicative
construction and it is usually directed toward the signer’s right at the face
level while the signer maintains eye contact with the addressee. Even if the
possessor is present, as in (23b), the sign for the possessor JOHN is op-
tional but POSS has to agree with the specific location of the possessor in
space.

egi egi
(23) a. [INDEXpro-3p i ]DP [JOHN POSSneu ]DP , [INDEXpro-3p i ]DP SICK
‘It (the dog) is John’s. It is sick.’ (possessee present, possessor
not present)
egi egj egi
b. [INDEXpro-3p i ]DP [(JOHN) POSSj ]DP , [INDEXpro-3p i ]DP SICK
‘It (the dog) is his. It is sick.’ (possessee present, possessor
present)

If the possessee is absent in the physical environment, to refer to it in real
space, a full possessive DP in the form of [possessor possessee] would be used
(24,25):

egi
(24) [INDEXpro-3p i DOG]DP SICK (possessor present, possessee
absent)
‘His dog is sick.’
In (24), INDEXpro is interpreted as a possessive pronoun that selects a noun as
its complement. In (25) a full determiner phrase is used to refer to a definite
referent, and the nonmanual marking for INDEXdet has to agree with the location
of the possessor, which is assumed to be distant from the signer.
egi
(25) [INDEXdet i MALE DOG]DP SICK (possessor present, possessee
absent)
‘That man’s dog is sick.’
Where both the possessor and the possessee are absent from the immediate
environment, a possessive DP in the form of [possessor possessee] is observed
without any specific nonmanual agreement features (26).
(26) [JOHN DOG]DP SICK (possessor absent, possessee absent)
‘John’s dog is sick.’
To summarize, one can observe that, in real space, the choice of possessive
constructions is determined in part by the presence or absence of the referents
in the immediate physical environment.

12.8 Conclusion
The data described in this chapter show that while conforming to general prin-
ciples of linguistic structuring at the syntactic level, the nominal expressions in
HKSL display some variation in nonmanual markings and syntactic order when
compared with ASL. First, while it has been claimed that unique nonmanual
markings including both head tilt and eye gaze are abstract agreement features
for D in ASL, data from HKSL show that only eye gaze at a specific location is
a potential nonmanual marker for definiteness. Eye gaze at a specific location
in space co-occurs with a definite referent, but maintaining eye contact with the
addressee is associated with an indefinite referent.
Second, there appears to be a subtle difference between signed and spoken
languages in the types of nominal expressions that can denote (in)definiteness.
We observe that bare nouns are common in HKSL and they are accompanied
by different nonmanual markings to refer to definite, indefinite, and generic
referents. Definite bare nouns may also be reflected by the signer’s adoption
of role shift in our data. Third, we observe that there is a relationship between
the type of mental spaces and the distribution of nominal expressions for refer-
ential purpose. This reflects the signer’s perceived use of space in the signing
discourse, in particular his or her choice of mental spaces for the representa-
tion of entities and their relations. Nevertheless, the analysis shows a reliance
on narrative data. More data, especially those from free conversations or from
other signed languages, are sorely needed in order to verify the observations
presented in this chapter.

Acknowledgments
The research was supported by the Direct Grant of the Chinese University
of Hong Kong, No. 2101020. We would like to thank the following people for
helpful comments on the earlier drafts of this chapter: Gu Yang, Steve Matthews,
the editors, and two anonymous reviewers. We would also like to thank Tso Chi
Hong, Wong Kai Fung, Lam Tung Wah, and Kenny Chu for providing intuitive
judgments on the signed language data. We thank also Kenny Chu for being
our model and for preparing the images.

12.9 References
Abney, Steven P. 1987. The English noun phrase in its sentential aspect. Doctoral dis-
sertation, MIT, Cambridge, MA.
Ahlgren, Inger and Brita Bergman. 1994. Reference in narratives. In Perspectives on
sign language structure: Papers from the 5th International Symposium on Sign
Language Research, ed. Inger Ahlgren, Brita Bergman and Mary Brennan, 29–36.
Durham: International Sign Linguistics Association, University of Durham.
Allan, Keith. 1977. Classifiers. Language 53:285–311.
Bahan, Benjamin, Judy Kegl, Robert G. Lee, Dawn MacLaughlin and Carol Neidle.
2000. The licensing of null arguments in American Sign Language. Linguistic
Inquiry 31:1–27.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Cheng, Lisa and Rint R. Sybesma. 1999. Bare and not-so-bare nouns and the structure
of NP. Linguistic Inquiry 30:509–542.
Crain, Stephen and Diane Lillo-Martin. 1999. An introduction to linguistic theory and
language acquisition. Malden, MA: Blackwell.
Fauconnier, Gilles. 1985. Mental spaces: Aspects of meaning construction in natural
language. Cambridge, MA: MIT Press.
Fauconnier, Gilles. 1997. Mapping in thought and language. Cambridge: Cambridge
University Press.
Fromkin, Victoria A. 1973. Slips of the tongue. Scientific American 229:109–117.
Kegl, Judy. 1985. Locative relations in American Sign Language: Word formation,
syntax and discourse. Doctoral dissertation, MIT, Cambridge, MA.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. 1994. Tokens and surrogates. In Perspectives on sign language
structure: Papers from the 5th International Symposium on Sign Language Re-
search, ed. Inger Ahlgren, Brita Bergman and Mary Brennan, 105–119. Durham:
International Sign Linguistics Association, University of Durham.
Liddell, Scott K. 1995. Real, surrogate and token space: grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–42.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott K. 1996. Spatial representation in discourse: comparing spoken and sign
language. Lingua 98:145–167.
Liddell, Scott K. and Melanie Metzger. 1998. Gesture in sign language discourse. Journal
of Pragmatics 30:657–697.
Lillo-Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting
the null argument parameters. Dordrecht: Kluwer Academic.
Lillo-Martin, Diane. 1999. Modality effects and modularity in language acquisition: The
acquisition of American Sign Language. In Handbook of child language acquisi-
tion, ed. Tej K. Bhatia and William C. Ritchie, 531–568. San Diego, CA: Academic
Press.
Longobardi, Giuseppe. 1994. Reference and proper names: a theory of N-movement in
syntax and logical form. Linguistic Inquiry 25:609–665.
MacLaughlin, Dawn. 1997. The structure of determiner phrases: Evidence from
American Sign Language. Doctoral dissertation, Boston University, Boston, MA.
Matthews, Stephen and Virginia Yip. 1994. Cantonese: A comprehensive grammar.
London: Routledge.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical issues
in sign language research. Vol. 1: Linguistics, ed. Susan D. Fischer and Patricia
Siple, 175–190. Chicago, IL: University of Chicago Press.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language. Cambridge, MA: MIT Press.
Padden, Carol. 1990. The relation between space and grammar in ASL verb mor-
phology. In Sign language research: Theoretical issues, ed. Ceil Lucas, 118–132.
Washington, DC: Gallaudet University Press.
Poizner, Howard, Edward Klima, and Ursula Bellugi. 1987. What the hands reveal about
the brain. Cambridge, MA: MIT Press.
Sze, Felix Y. B. 2000. Space and nominals in Hong Kong Sign Language. M.Phil. thesis,
Chinese University of Hong Kong.
Tang, Gladys. 1999. Motion events in Hong Kong Sign Language. Paper presented at
the Annual Research Forum, Hong Kong Linguistic Society, Chinese University of
Hong Kong, December.
van Hoek, Karen. 1996. Conceptual locations for reference in American Sign Language.
In Spaces, worlds, and grammar, ed. Gilles Fauconnier and Eve Sweetser, 334–350.
Chicago, IL: University of Chicago Press.
Wilbur, Ronnie B. 1979. American Sign Language and sign systems. Baltimore, MD:
University Park Press.
Zimmer, June and Cynthia Patschke. 1990. A class of determiners in ASL. In Sign
language research: Theoretical issues, ed. Ceil Lucas, 201–210. Washington, DC:
Gallaudet University Press.
Part IV

Using space and describing space: Pronouns,
classifiers, and verb agreement

The hands of a signer move within a three-dimensional space. Some signs
contact places on the body that are near the top of the so-called signing space.
Thus, the American Sign Language (ASL) signs FATHER, BLACK, SUMMER,
INDIA, and APHASIA all contact the center of the signer’s forehead. Other
signs contact body regions low in the signing space: RUSSIA, NAVY, and
DIAPER target locations at or near the signer’s waist. Still other signs move
from location to location within space: the dominant hand of SISTER moves
from the signer’s cheek to contact with the signer’s nondominant hand; that
nondominant hand is located in the “neutral space” in front of the signer’s
torso. In the sign WEEK, the dominant hand (with its extended index finger)
moves across the flat palm of the nondominant hand. As these examples indicate,
articulating the signs of ASL requires that the hands be placed in space and be
moved through space.
Is this, however, different from the articulation of speech? The oral articula-
tors also move in space: the mouth opens and closes, the tongue tip and tongue
body move within the oral cavity, and the velum is raised and lowered. Yet the
very small articulatory space of speech is largely hidden within our cheeks,
meaning that the actions of the oral articulators occur largely (but not entirely)
out of sight. In contrast, the actions of the arms and hands are there for everyone
to see. The fact that the articulation of speech is out of view accounts for the
failure of lipreading; lipreading is ineffective because most movements of the
oral articulators are not visible to be read.
Somewhat surprisingly the kinds of signs in ASL and other signed languages
that are listed in dictionaries make less use of space than one might have ex-
pected. For such signs, there is no contrast between locations on the left side of
the neutral space in front of the signer vs. locations on the right side of the space
in front of the signer.1 And in the phonologies of spoken languages there is no
contrast between alveolars (e.g. [t] or [d]) made on the left side of the mouth
1 The only potential counterexamples of which I am aware involve the signs for EAST and WEST;
one variant of EAST moves to the right, whereas a variant of WEST moves to the left. In addition,
certain signs for HEART are generally made on the left side of the midline, regardless of whether
the signer is right handed or left handed.

vs. the right side, even though it would seem that the tongue tip has sufficient
mobility to achieve such a contrast.
However, dictionary entries do not make a language. Spatial distinctions are
widespread elsewhere in ASL and other signed languages. Most pronouns in
ASL are pointing signs that indicate the specific locations of their referents, or
locations that have – during a sign conversation – been associated with those ref-
erents. So, if I am discussing John and Mary – both of whom happen to be in the
room – it makes all the difference in the world whether I point to John’s location
on the left, as opposed to Mary’s location on the right. If John and Mary are not
present, I, the signer, may establish a location on my right for Mary and one on my
left for John (or vice versa). I can then point back to the locations when I want
to refer to John or Mary later in a conversation. My addressee can do the same.
Spatial distinctions are crucial to the system of verb agreement, whereby a
transitive verb “agrees with” the location associated with its direct or indirect
object (depending on the particular verb) and optionally with its subject. Along
with word order, the spatial modification of verbs is one of the two ways in
which sign languages mark the argument structure of verbs. So, a sign such as
GIVE obligatorily moves toward the spatial location associated with its indirect
object (the recipient). Optionally, the movement path of GIVE may start near a
location associated with its subject.
Sign languages also use the sign space to talk about the locations of objects
in space and their movement; this is the phenomenon that Karen Emmorey
addresses in her chapter (Chapter 15). So-called classifier handshapes indicate
whether a referent is, in the case of ASL, a small animal, a tree, a human, an
airplane, a vehicle other than an airplane, or a flat thin object, among other
categories.2 A classifier handshape on the dominant hand can be moved with
respect to a classifier on the nondominant hand to indicate whether, for instance,
a car drove in front of a tree, or behind a tree, or whether it crashed into
the tree. Karen Emmorey argues that the use of space to represent space – in
contrast to the prepositional or postpositional phrases that are characteristic of
many spoken languages – means that different cognitive abilities are required to
comprehend spatial descriptions in ASL than in spoken languages. In particular,
she argues that comprehension of signed spatial descriptions requires that the
addressee perform a mental transformation on that space that is not required in
the comprehension of spoken descriptions.
In her chapter, Susan McBurney makes a useful distinction between modal-
ity and medium (Chapter 13). For her, modality refers to the articulatory and
perceptual apparatus used to transmit and receive language. For visual–gestural
languages, that apparatus includes the manual articulators and the visual system.
2 There is now considerable debate about the extent of the analogy between classifier constructions
in signed languages and those in spoken languages. It is the verbal classifiers of the Athabascan
languages to which sign classifiers may bear the strongest resemblance (see Newport 1982).

However, the medium for naturally-evolved signed languages is space (and, of
course, time), whereas the medium for spoken languages is time (see Pinker
and Bloom 1990). It is not inevitable that visual–gestural languages would ex-
ploit the spatial medium, but all naturally-evolved signed languages seem to
do so. Artificial sign systems such as Signing Exact English (SEE 2) and other
forms of Manually-Coded English (see Supalla and McKee, Chapter 6) make
little or no use of the resources that the signing space makes available. Instead
these sign systems depend crucially on the sequencing of units that correspond
to the morphology of English. Interestingly, Supalla (1991) has demonstrated
that deaf children exposed only to SEE 2 altered it by adding a system of verb
agreement very much akin to that of natural signed languages.
Chapters 13–17 focus on these three areas of the linguistics of signed lan-
guages: pronouns, verb agreement, and classifiers. All three of these domains
depend crucially on the meaningful use of the sign space. In a discussion of
how signers may refer to nonpresent referents, Klima and Bellugi (1979:277)
suggested that the signer can use the space in front of him or her “as a kind
of stage.” Lillo-Martin (Chapter 10) makes clear that we may be particularly
likely to find modality differences between signed and spoken languages in the
use of the sign space.
At first glance, the pronominal signs of most signed languages seem hardly
worth our attention: most typically, the nominative/accusative forms have an
extended index finger identical to the pointing gestures of hearing nonsigners. A
point that contacts the center of the signer’s own chest refers to the signer himself
or herself (or to a quoted signer); a point toward the addressee refers to that
person; and a point to some non-addressed participant in the conversation refers
to that individual. There are no surprises here. Possessive pronouns differ only
in handshape: in ASL, a flat hand (specifically a B hand) replaces the extended
index finger of the nominative/accusative forms. It is clear that these pointing
signs serve the same functions as the first, second and third person pronouns
of a language like English. But is it appropriate to use the terminology of
grammatical person in describing these pointing signs? Early descriptions of
ASL tended to do so and consequently posited a three person system very much
akin to the person distinctions of the pronominal systems of spoken languages.
In a 1990 publication, Meier argued that such analyses were not well motivated,
for two fundamental reasons:
• Early analyses of person considered conversational behaviors of the speaker/
signer that distinguish addressee from non-addressed participant (e.g. direc-
tion of gaze, or orientation of the signer’s torso) to be grammaticized markers
of second vs. third person; and
• The spatialized nature of ASL pronouns actually allows many more referents
to be distinguished than can be made by a binary distinction between second
and third person.

Instead of a three person system, Meier suggested what is in effect a compromise;
he argued that the pronominal system of ASL is best described in terms of
a first/nonfirst person distinction and that there is no evidence for a grammatical
distinction in ASL between second and third person. (Just to reiterate: saying
that there is no grammatical distinction between second and third persons does
not, of course, mean that the language has difficulty in distinguishing reference
to the addressee from reference to some non-addressed participant.) His argu-
ments for a distinct first person hinged on certain idiosyncratic properties of
first person forms, in particular the pronouns WE and OUR, and on the way in
which first person pronouns are interpreted in direct quotation. The first/nonfirst
analysis has been adopted in the description of other signed languages as well,
such as Taiwanese Sign Language (Smith 1990) and Danish Sign Language
(Engberg-Pedersen 1993).
In her chapter, Susan McBurney (Chapter 13) returns to the issue of what
the nature of the person distinctions are in ASL and a variety of other signed
languages for which she could find evidence in the published literature. She
makes the important observation that – unlike the pronominal systems of spo-
ken languages – the pronominal systems of signed languages are fundamentally
uniform from one sign language to the next. Although there is crosslinguistic
variation in sign pronominal systems (e.g. in the gender distinctions of certain
Asian sign languages), that variation pales in comparison to the dramatic differ-
ences encountered in the pronominal systems of spoken languages. McBurney’s
chapter also makes clear that the analysis of person in signed languages remains
unsettled. She argues, contrary to Meier (1990), that there is no person system in
signed languages, but that there is a grammaticized set of number distinctions.
Here McBurney echoes the recent work of Scott Liddell (2000). In contrast,
two other chapters in this volume – those by Lillo-Martin (Chapter 10) and by
Rathmann and Mathur (Chapter 14) – adopt the first/nonfirst model. Clearly
there is room for more research on this issue.
Christian Rathmann and Gaurav Mathur (Chapter 14) tackle many of the
same issues as Susan McBurney, but their focus is on the verb agreement sys-
tems of signed languages. Again they observe the essential uniformity of the
form of agreement systems from one signed language to the next. However,
like Diane Lillo-Martin, they also present arguments to suggest that agreement
is very definitely part of the grammar of signed languages, inasmuch as signed
languages are distinctly not uniform with respect to the syntax of agreement.
Some of the most persuasive evidence for this claim comes from a difference
amongst signed languages that looks to be parametric: some signed languages
(e.g. Brazilian Sign Language and German Sign Language) have auxiliary-like
elements that carry agreement, and some languages (e.g. American Sign Lan-
guage and British Sign Language) do not. Whether a language has auxiliaries
that carry agreement when the main verb cannot has consequences for word
order, for the distribution of negative markers, and for the licensing of null
arguments.
However, Rathmann and Mathur also adopt something of a compromise
position when they assert that the markers of agreement – the loci in space with
respect to which agreeing verbs move – are not phonologized. Here they too
echo the recent arguments of Liddell (2000). In their view – and in Liddell’s –
there is not a listable set of locations with which verbs may agree; this is
what Rathmann and Mathur call the “infinity problem.” In other words, the
phonology of ASL and of other signed languages does not make a listable set
of spatial contrasts available as the markers of agreement. For example, when
verbs agree with spatial loci associated with referents that are present in the
immediate environment, those locations are determined not by grammar but
by the physical locations of those referents. From this, Rathmann and Mathur
conclude that the spatial form of agreement is gestural, not phonological. So,
in their view, agreement is at once linguistic and gestural.
The two remaining chapters – Chapter 16 by Gary Morgan, Neil Smith, Ianthi
Tsimpli, and Bencie Woll and Chapter 17 by David Quinto-Pozos – address
the use of agreement in special populations. Gary Morgan and his colleagues
describe the acquisition of British Sign Language (BSL) by Christopher, an
autistic-like adult who shows an extraordinary ability to learn languages, as
evidenced by his knowledge (not necessarily complete) of some 20 or so spoken
languages. However, paired with his linguistic abilities are, according to Morgan
and colleagues, significant impairments in visuo-spatial abilities and in motor
co-ordination.
Christopher was formally trained in BSL over a period of eight months. His
learning of BSL was compared to a control group of hearing, intellectually nor-
mal undergraduates. This training was analogous to formal training that he had
previously received in Berber, a spoken language (Smith and Tsimpli 1995). In
Berber, Christopher showed great enthusiasm, and apparently great success, in
figuring out the subject agreement morphology. However, his acquisition of BSL
appears to proceed on a different track. Although he succeeded in learning some
significant aspects of BSL, including the recall of individual signs, the ability to
produce simple sentences, the ability to recognize fingerspelling, and the use of
negation, he showed little success in acquiring the classifier morphology of BSL,
and his acquisition of verb agreement is more limited than what Morgan and
his colleagues would have expected given Christopher’s success in acquiring
morphology in spoken languages. Christopher could not, for example, estab-
lish locations within the sign space for nonpresent referents. On the authors’
interpretation, Christopher lacked the spatial abilities necessary to acquire the
verb agreement and classifier systems of BSL. In their view, the acquisition of
certain key aspects of signed languages depends crucially on intact spatial abil-
ities. This conclusion appears to converge with Karen Emmorey’s suggestion
that different cognitive abilities are implicated in the processing of spatial de-
scriptions in signed languages than in spoken languages.
In Chapter 17 David Quinto-Pozos reports a case study of two Deaf-Blind
signers in which he compares their use of the sign space to that of two Deaf,
sighted signers. Quinto-Pozos asked the participants in his study to memorize a
short narrative; each then recited that narrative to each of the other three partic-
ipants. The sighted participants made great use of the signing space, even when
they were addressing a Deaf-Blind subject. In contrast, the Deaf-Blind signers
made little use of the spatial devices of ASL. In general, they did not establish
locations in space for the characters in their stories, nor did they use points to
refer to those characters. Instead, their use of pronominal pointing signs was
largely limited to self-reference and to reference to the fictive addressee of
dialogue in the story that they were reciting. Even with just two participants,
Quinto-Pozos finds interesting differences between the Deaf-Blind signers. For
example, one showed frequent use of the signing space with agreeing verbs;
the other did not. The two Deaf-Blind signers also differed in how they referred
to the characters in their narratives: one used proper names (and did so more
frequently than the sighted signers); the other used common nouns or pronouns
derived from Signed English. At this point, we do not know the extent to which
Quinto-Pozos’s results will generalize to other Deaf-Blind signers. For exam-
ple, native Deaf-Blind signers – including those who are congenitally blind –
might make greater use of the sign space. However, his results raise the possi-
bility that full access to the spatial medium of signed languages may depend
on vision.

richard p. meier

References
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum-
Verlag.
Klima, Edward S. and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 2000. Indicating verbs and pronouns: pointing away from agreement.
In The signs of language revisited, ed. Karen Emmorey and Harlan Lane, 303–320.
Mahwah, NJ: Lawrence Erlbaum Associates.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical is-
sues in sign language research, Vol. 1: Linguistics, ed. Susan D. Fischer and Patri-
cia Siple, 175–190. Chicago, IL: University of Chicago Press.
Newport, Elissa L. 1982. Task specificity in language learning? Evidence from speech
perception and American Sign Language. In Language acquisition: The state of
the art, ed. Eric Wanner and Lila Gleitman, 451–486. Cambridge: Cambridge
University Press.
Pinker, Steven and Paul Bloom. 1990. Natural language and natural selection. Behavioral
and Brain Sciences 13:707–784.
Smith, Neil and Ianthi-Maria Tsimpli. 1995. The mind of a savant. Oxford: Blackwell.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwanese Sign Language. In Theoret-
ical issues in sign language research, Vol. 1: Linguistics, ed. Susan D. Fischer and
Patricia Siple. Chicago, IL: University of Chicago Press.
Supalla, Samuel J. 1991. Manually Coded English: The modality question in signed
language development. In Theoretical issues in sign language research, Vol. 2:
Psychology, ed. Patricia Siple and Susan D. Fischer, 85–109. Chicago, IL: Univer-
sity of Chicago Press.
13 Pronominal reference in signed and spoken
language: Are grammatical categories
modality-dependent?

Susan Lloyd McBurney

13.1 Introduction
Language in the visual–gestural modality presents a unique opportunity to ex-
plore fundamental structures of human language. One of the larger, more com-
plex questions that arises when examining signed languages is the following:
how, and to what degree, does the modality of a language affect the structure of
that language? In this context, the term “modality” refers to the physical systems
underlying the expression of a language; spoken languages are expressed in the
aural-oral modality, while signed languages are expressed in the visual–gestural
modality.
One apparent difference between signed languages and spoken languages
relates to the linguistic expression of reference. Because they are expressed in
the visual–gestural modality, signed languages are uniquely equipped to convey
spatial–relational and referential relationships in a more overt manner than is
possible in spoken languages. Given this apparent difference, it is not unrea-
sonable to ask whether systems of pronominal reference in signed languages
are structured according to the same principles as those governing pronominal
reference in spoken languages.
Following this line of inquiry, this typological study explores the grammatical
distinctions that are encoded in pronominal reference systems across spoken
and signed languages. Using data from a variety of languages representing both
modalities, two main questions are addressed. First, are the categories encoded
within pronoun systems (e.g. person, number, gender, etc.) the same across
languages in the two modalities? Second, within these categories, is the range of
distinctions marked governed by similar principles? Because spatial–locational
distinctions play such an integral role in pronominal reference across signed
languages, I explore in greater depth spatial marking within spoken language
pronominal systems. In particular, I examine demonstratives and their use as
third person pronominal markers in spoken languages.
The structure of the chapter is as follows. In Section 13.2 I present and discuss
pronominal data from a range of spoken and signed languages. In Section 13.3
I compare pronominal reference in signed and spoken languages, and discuss

four ways in which signed language pronominal systems are typologically un-
usual. Section 13.4 examines spatial marking within spoken language pronomi-
nal systems, with particular focus on demonstratives. In Section 13.5 I present an
analysis of signed language pronominal reference that is based on a distinction
between the “modality” and the “medium” of language. Finally, in Section 13.6
I return to the sign language data and discuss the effects of language medium
on the categories of person, number, and gender.

13.2 Pronominal systems across signed and spoken languages


There exists a wide range of semantic information that can be encoded in the
pronominal systems of the world’s languages. Among the types of information
encoded are the following: person, number, gender, distance and proximity,
kinship status, social status, case, and tense (Mühlhäusler and Harré 1990).
In this chapter I examine personal pronouns (I, you, s/he, etc.) across spoken
and signed languages.1 The approach I take focuses on the categories marked
within the pronoun systems of languages across the two modalities and on the
question of whether the range of distinctions marked within these categories is
governed by similar principles. In the context of this discussion, I use the term
“category” to refer to the semantic notions of person, number, gender, and so
on, while the term “distinction” refers to the distinct markings or values within
that category. For example, the pronoun system of a given language might mark
a three-way distinction (first, second, third) in the category person.

13.2.1 Typological variation in spoken language pronominal systems


Spoken languages vary in the number and types of semantic contrasts that are
encoded in their pronominal systems. For example, the pronominal system of
English for the nominative case is shown in Table 13.1.2 In this table we see
that English has the common three-way (first, second, third) distinction in the
category of person. In the category of number, English distinguishes singular
and plural in the first and third persons, but does not mark for number in the
second person. Finally, there is a three-way distinction in gender within third
person singular only. This type of distinction has been referred to as “natural
gender,” whereby all nonhuman referents are neuter, and human referents are
either masculine or feminine.
1 The present chapter deals specifically with pronouns that refer to particular entities in a discourse;
I do not consider either bound pronouns or the tracking of referents in longer stretches of
discourse. Although very little work has been done in the area of bound pronouns in ASL (or
any other signed language), brief discussions can be found in Lillo-Martin (1986) and Lillo-
Martin and Klima (1990). On the use of space for reference in lengthy narratives involving
multiple characters and/or shifts in perspective, see discussions in Meier (1990), Lillo-Martin
(1995), and Winston (1995).
2 In this discussion I do not consider distinctions of case within pronominal systems.

Table 13.1 English personal pronouns (nominative case)

Person   Singular   Plural

1        I          we
2        you        you
3        he         they
         she
         it

Asheninca, a pre-Andine Arawakan language spoken in central Peru, encodes
few contrasts in the pronoun system (Table 13.2). Like English, Asheninca
exhibits a three-way distinction in the category of person. However, the only
number distinction present in the personal pronoun system is carried by the form aaka;
because it is an inclusive pronoun (denoting first and second person together), this pronoun
form necessarily involves more than one person. Beyond this, number is not part
of the pronoun system. Rather, plural is expressed through morphemes that are
regular inflections of verb and noun morphology. Gender is marked only in the
third person (singular), and the distinction is two-way, masculine and feminine.
Slightly more extensive number marking is seen in Nagala, a Ndu language
spoken in New Guinea. Table 13.3 presents data from the personal pronouns
of Nagala. Here we see three distinctions in number (singular, dual, and plu-
ral) carried across all three persons. In addition, Nagala has fairly rich gender
marking, in that there is a two-way distinction (masculine/feminine) across all
singular forms.
Nogogu, an Austronesian language spoken in the Melanesian Islands, has
even richer number marking within its personal pronoun system. In Table 13.4
we see a four-way distinction in number (singular, dual, trial, and plural)
throughout all three persons, as well as an inclusive/exclusive distinction within
the first person. The inclusive form is used when the person addressed is included

Table 13.2 Asheninca personal pronouns

Person   Singular                  More than one

1        naaka (first exclusive)   aaka (first inclusive)
2        eeroka
3        irirori (masculine)
         iroori (feminine)

Source: Reed and Payne 1986:324



Table 13.3 Nagala personal pronouns

Person   Singular   Dual       Plural

1        wn (m.)    yn         nan
         ñən (f.)
2        mən (m.)   bən        gwn
         yn (f.)
3        kər (m.)   (kə) bər   rr
         yn (f.)

Source: adapted from Laycock 1965, in Mühlhäusler and Harré 1990:83

in the first person plural, and the exclusive form is used otherwise. None of the
Nogogu pronouns are marked for gender.
Finally, we come to Aranda, an Australian Aboriginal language. Table 13.5
presents partial data from the personal pronouns in Aranda. The Aranda data
reveal a three-way distinction in person marking, as well as number mark-
ing for dual and plural (singular data were not available). What is most unusual
about the Aranda pronominal system is the extensive marking of kinship distinc-
tions. Two major distinctions in kinship are encoded throughout the pronominal
system: agnatic vs. non-agnatic (where agnatic denotes an individual related
through a line of patrilineal descent) and harmonic vs. disharmonic (where har-
monic refers to a person from the same generation or a generation that differs
by an even number).
Although the data presented above are a minuscule sampling of the world's
languages, this brief survey serves to illustrate two points. First, spoken lan-
guage pronominal systems vary in the categories marked; some languages mark
a small number of categories (Nogogu, for example, marks only person and
number), while others encode a wider range of categories (Aranda’s marking
for kinship). Second, spoken languages differ in the range of distinctions marked
within certain categories; compare, for example, Asheninca (which has a plural

Table 13.4 Nogogu personal pronouns

Person        Singular   Dual     Trial     Plural

1 exclusive   (i) nou    omorua   omotulu   emam
  inclusive              orua     otolu     rie
2             i niko     omrua    omtolu    emiu
3             i nikin    rurua    ritolu    i rir, rire

Source: Ray 1926, in Forchheimer 1953:81



Table 13.5 Aranda personal pronouns

                     Agnatic
Number       Harmonic    Disharmonic   Non-Agnatic

Dual     1   ili-n       il-ak         il-ant
         2   an-atir     mpil-ak       mpil-ant
         3   il-atir     al-ak         al-ant
Plural   1   un-ar       un-aki-ar     un-anti-r
         2   anj-ariy    ar-aki-r      ar-anti-r
         3   il-ariy     in-aki-r      in-anti-r

Source: Hale 1966, in Mühlhäusler and Harré 1990:165

pronoun only in the first person) and Nogogu (which has four distinctions in
number – singular, dual, trial, and plural – across all three persons).

13.2.2 Typological variation in signed language pronominal systems


I begin my discussion of signed language pronominal systems by examining
pronouns in American Sign Language (ASL). This is then followed by a pre-
sentation and discussion of pronoun systems in five other signed languages.

13.2.2.1 Pronouns in American Sign Language. ASL is a member
of the French Sign Language Group (Woodward 1978a; 1978b).3 Like other
signed languages, ASL is a visual–gestural language that makes extensive use
of space for reference to individuals within a discourse. In this section I present
the standard analysis of pronominal reference in ASL (Friedman 1975; Klima
and Bellugi 1979; Bellugi and Klima 1982; Padden 1988).4
Figure 13.1 is a two-dimensional representation of signing space as it is used
for pronominal reference in ASL. The form of personal pronouns in ASL is an
index, or 1 handshape (closed hand with the index finger extended), directed
toward a point in space (or locus), one that exists along a chest-level horizontal
plane in front of the signer. For first person reference, the index is directed
toward the signer’s chest (number 1 in Figure 13.1), while for second person
3 Woodward (1978b) identifies four major sign language families, based on hypothesized rela-
tionships between sign language varieties. For more detailed discussion of the history of ASL
and its relation to French Sign Language, see Lane (1984).
4 To my knowledge, the term “standard analysis” has not been used in the literature. I have chosen
to adopt this term here for two reasons: first, it is this analysis of ASL pronominal reference
that has been most widely accepted among sign language linguists; and, second, the standard
analysis of ASL pronominal reference is most similar in structure to theories of pronominal
reference in spoken language. As is discussed in Section 13.6, the standard analysis is not the
only analysis of pronominal reference that has been proposed.

[Figure 13.1 ASL signing space as used for pronominal reference: the signer (locus 1, at the signer's chest) faces the addressee (locus 2); loci a, b, c, d, e, . . . are arrayed along an arc of the chest-level horizontal plane between them]

reference it is directed out toward a point in front of the addressee's chest
(number 2 in Figure 13.1).
For referents that are not physically present in the discourse (i.e. third per-
sons), the index is directed toward a point in the signing space that has been
previously associated with that referent. This association of a point in space
with a nonpresent referent has been referred to as “nominal establishment” or
“localization.” Nonpresent referents can be established in the signing space via
a number of localization strategies. One strategy is to sign the name of the
referent and then articulate an index toward the point in space at which the
referent is being established. For example, a signer might fingerspell or sign
the name M-A-R-Y, then articulate an index toward locus ‘a’ (in Figure 13.1).
Once this referent, Mary, has been established at locus ‘a,’ further reference
to her is through an index directed toward point ‘a.’ Other strategies for estab-
lishing a nonpresent referent in the signing space include signing the name of
the referent at the referential locus (e.g. signing M-A-R-Y at location ‘a’), and
signing the name of the referent then indicating the referential locus with eye
gaze (e.g. signing M-A-R-Y and looking at locus ‘a’). In theory, an unlimited
number of nonpresent referents can be localized to distinct locations in the
signing space (a, b, c, d, e . . . in Figure 13.1). However, it appears that memory
and processing constraints limit the number of locations that are actually used
within a discourse.5
Whichever strategy is used to establish a nonpresent referent at a location
in the signing space, an index directed toward that location is interpreted as a
pronoun referring back to that specific referent. These locations in space are
5 I am not aware of any research on specific limitations with respect to spatial locations and
pronominal reference in signed languages. However, investigations of general cognitive abilities
and the capacity of short term or working memory (Miller 1956) suggest that the limit is
somewhere between five and seven units of information.

Table 13.6 ASL personal pronouns

Person        Singular           Dual   Trial   Quadruple   Quintuple   Plural

1 inclusive                      ✓      ✓       ✓           ✓           ✓
1 exclusive   ✓                  ✓      ✓       ✓           ✓           ✓
2             ✓                  ✓      ✓       ✓           ✓           ✓
3             a, b, c, d . . .   ✓      ✓       ?           ?           ✓

considered to be part of the grammar of ASL; they form the base of pronominal
reference and play a crucial role in conveying person distinctions throughout
discourse.
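
This locus-based mechanism can be thought of as an associative mapping from locations to referents. The short Python sketch below is purely illustrative (it is not drawn from any published analysis, and the locus labels and function names are invented for exposition); it models nominal establishment and subsequent indexic reference:

    # Illustrative sketch: loci in the signing space behave like keys
    # in a map from locations to referents. All names are hypothetical.
    signing_space: dict[str, str] = {}

    def establish(referent: str, locus: str) -> None:
        """Nominal establishment: associate a nonpresent referent with a locus."""
        signing_space[locus] = referent

    def index(locus: str) -> str:
        """An index directed toward a locus refers back, unambiguously,
        to the referent established there."""
        return signing_space[locus]

    establish("Mary", "a")        # sign M-A-R-Y, then point to locus 'a'
    assert index("a") == "Mary"   # every later INDEX-a picks out Mary

On this view the grammar supplies the mapping operation itself, while the particular loci used are contributed by the discourse.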
The pronominal system of ASL patterns as shown in Table 13.6.6 Looking
first to the category of person, we see that the three-way distinction of first,
second, and third person exists throughout the pronominal system. Particu-
larly interesting, from a crosslinguistic perspective, is the large number of third
person singular pronouns. As was discussed above, nonpresent referents are
established (or localized) at distinct locations in the signing space. Since there
are an unlimited number of locations in space, it has been argued that there is
a potentially infinite number of distinct pronominal forms (Lillo-Martin and
Klima 1990). Additionally, because individual referents are associated with
distinct locations in the signing space, pronominal reference to single individ-
uals is unambiguous; once Mary has been established at location ‘a,’ an index
directed toward location ‘a’ unambiguously identifies Mary as the referent for
the duration of the discourse.7
ASL has a very rich system of marking for number in its pronouns. The
singular form is an index directed toward a point along the horizontal signing
plane, while plural forms have an added arc-shaped, or sweeping, movement.

6 Table 13.6 represents an analysis and summary of pronoun forms elicited from one native Deaf
signer (a Deaf individual with Deaf parents). Although many of the distinctions represented in
this table are commonly known to exist, the distinctions in number marking are ones that exist
for this particular signer. Whether distinct number-marking forms are prevalent across signers
is a question for further research.
7 There are, in fact, certain circumstances in which reference is not wholly unambiguous. In ASL
discourse abstract concepts and physical locations (such as cities) can also be localized in the
signing space, usually as a strategy for comparison. For example, a signer might wish to compare
her life growing up in Chicago with her experiences as an adult living in New York City. In
this instance, Chicago might be localized to the signer’s left (at locus ‘e’ in Figure 13.1) and
New York to the signer’s right (at locus ‘b’). In the course of the discourse the signer might
also establish a nonpresent referent, her mother perhaps, at locus ‘e’ because her mother lives
in Chicago. Consequently, later reference to locus ‘e’ could be argued to be ambiguous, in that
an index to that locus could be interpreted as a reference to either the city of Chicago or to
the signer’s mother. The correct interpretation is dependent upon the context of the utterance. I
thank Karen Emmorey for bringing this exception to my attention.

In addition to the singular and plural forms, ASL appears to have dual, trial,
quadruple, and quintuple forms throughout much of the pronoun system. The
trial, quadruple, and quintuple forms are formed by replacing the index hand-
shape with a numeral handshape (the handshape for 3, 4, and 5, respectively),
changing the orientation from palm downward to palm upward, and adding
a small circular movement. The dual form is formationally distinct from the
other forms, in that the handshape that surfaces is distinct from the numeral
two; whereas the numeral two has the index and middle fingers extending from
an otherwise closed fist, the handshape in the dual form is the K handshape (the
thumb is extended and placed between the two fingers, and the middle finger is
lowered slightly, so that it is perpendicular to the thumb). The movement in the
dual is also distinct from that found in the trial, quadruple, and quintuple forms;
the K handshape is moved back and forth between the two loci associated with
the intended referents.
In addition to the number marking, it appears that ASL has an inclusive/
exclusive distinction in the first person. In a study of plural pronouns and the
inclusive/exclusive distinction in ASL, Cormier (1998) reports that the exclusive
can be marked by displacement of the sign. For example, the general plural
WE (which is normally articulated with the index touching two points along a
horizontal plane at the center of the chest) can be displaced slightly either to the
right or to the left side of the chest. This instance of WE, which Cormier glosses
as WE-DISPLACED, is marked as first person exclusive, and is interpreted as
[+speaker], [–addressee], and [+non-addressed speech act participant].8 In the
pronoun forms summarized in Table 13.6, the exclusive form of WE is marked
by displacement of the sign, as well as a slight backward lean of the body. The
inclusive and exclusive forms of the dual pronoun are differentiated by whether
or not the location of the addressee is indexed in the sign. Cormier reports
that for the trial, quadruple, and quintuple forms, the locations of the included
referents are often, but not always indexed, but displacement of the sign to the
side of the chest is always present in the exclusive form. The ASL pronoun
forms elicited for this study seem to support this observation.
The extensive marking for number evidenced in ASL raises an interesting
question: do these forms constitute true grammatical number marking (i.e. num-
ber marking that is internal to the pronoun system) or are they the result of an
independent morphological process that happens to surface in pronouns? Based
upon the limited data that are available, I argue that the dual form is an instance

8 Cormier’s (1998) examination of the inclusive/exclusive distinction in ASL is far more compre-
hensive than my discussion of it suggests. She develops an analysis of plural pronouns based
on a distinction between lexical plurals (including the general plural WE, number incorporated
forms 3/4/5-OF-US, the sign A-L-L, as well as the possessive OUR) and indexical plurals (in-
cluding dual and composite forms, individual indexes to all those included). Whereas indexical
plurals index the locations of individual referents, lexical plurals do not.

of grammatical number marking, while the trial, quadruple, and quintuple forms
are not. Three facts of ASL support this interpretation. First, in spoken language
grammatical number marking, dual and trial forms are, by and large, not etymo-
logically derived from the numerals in the language (Last, in preparation).9 For
example, the morpheme that distinguishes a trial form in a given pronominal sys-
tem is not etymologically derived from the numeral three. In contrast, the mor-
phemes (handshapes) that distinguish the trial, quadruple, and quintuple forms
in ASL are the very same handshapes that serve as numerals in the language.
The handshape in the dual form, however, is distinct from the numeral two.
Second, the number handshapes that appear within the trial, quadruple, and
quintuple forms of ASL pronouns are systematically incorporated into a lim-
ited number of nonpronominal signs in ASL. This morphological process has
been referred to as “numeral incorporation” (Chinchor 1979; Liddell 1996).
For example, signs having to do with time (MINUTE, HOUR, DAY, WEEK,
MONTH, YEAR) incorporate numeral handshapes to indicate a specific num-
ber of time units. The basic form of the sign WEEK is articulated by moving an
index, or 1, handshape of the dominant hand (index finger extended from the fist)
across the upturned palm of the nondominant hand. The sign THREE-WEEKS
is made with a 3 handshape (thumb, index, and middle finger extended from the
fist), and the sign for FOUR-WEEKS with the 4 handshape (all four fingers ex-
tended from the fist).10 Other signs that can take numeral incorporation include
EXACT-AGE, APPROXIMATE-AGE, EXACT-TIME, DOLLAR-AMOUNT,
and HEIGHT. Numeral incorporation is clearly a productive (though limited)
morphological process, one that surfaces in several areas of the language. Sig-
nificantly, the handshape for the numeral two (as opposed to the K handshape
that surfaces in the dual pronominal form) is also involved in this productive
morphological process. Thus, the data available suggest that the trial, quadru-
ple, and quintuple forms that surface in parts of the pronominal system are
instances of numeral incorporation, not grammatical number marking.
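
The distributional argument can be made concrete with a small sketch. The following Python fragment is illustrative only (the inventory of bases and the gloss format are invented stand-ins); it treats numeral incorporation as a single process that substitutes a static numeral handshape into a base sign, which is why the trial, quadruple, and quintuple pronominal forms pattern with THREE-WEEKS rather than with the dual:

    # Illustrative sketch of numeral incorporation; names are hypothetical.
    # A limited set of base signs accepts a numeral handshape in place of
    # the default '1' handshape; the dual pronoun, with its distinct K
    # handshape, falls outside this process.
    INCORPORATING_BASES = {"MINUTE", "HOUR", "DAY", "WEEK", "MONTH",
                           "YEAR", "PRONOUN"}

    def incorporate(base: str, numeral: int) -> str:
        if base not in INCORPORATING_BASES:
            raise ValueError(f"{base} does not accept numeral incorporation")
        if not 1 <= numeral <= 9:
            # Numerals above 9 have internal movement and are excluded on
            # phonological grounds (Liddell et al. 1985); see footnote 10.
            raise ValueError("only the static numerals 1-9 incorporate")
        return f"{numeral}-{base}"

    print(incorporate("WEEK", 3))     # '3-WEEK', i.e. THREE-WEEKS
    print(incorporate("PRONOUN", 5))  # the quintuple pronominal form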
The final argument for treating trial, quadruple, and quintuple number mark-
ing in ASL as something other than grammatical number marking has to do
with obligatoriness. Last (personal communication) suggests that in order to be
considered grammatical number marking, the marking of a particular number
distinction within a pronominal system has to exhibit some degree of obliga-
toriness. Whereas the dual form appears to be obligatory in most contexts (and
9 Greville Corbett (personal communication) notes some exceptions in a number of Austronesian
languages, where forms indicating ‘we-three’ and ‘we-four’ appear to be etymologically related
to numerals.
10 The handshapes that can be incorporated into these signs appear to be limited to numerals ‘1’
through ‘9’. While the number signs for ‘1’ through ‘9’ are static, the signs for numbers ‘10’ and
above have an internal (nonpath) movement component. These numbers cannot be incorporated
because the resulting sign forms would violate phonological constraints in the language (Liddell
et al. 1985).

therefore can be considered an instance of grammatical number marking), it
does not appear that the trial, quadruple, and quintuple forms are obligatory.
In other words, the general plural form might be used in certain instances in-
stead of one of these specific number-marking forms. For example, Baker and
Cokely (1980:208) comment that “if the signer wants to refer to a group of
people (three or more) without emphasizing each individual, the Signer can use
the ‘1’ handshape and ‘draw’ an arc that includes all the people the Signer wants
to talk about.”11
Before leaving this topic, however, one additional point deserves considera-
tion, one that relates to proposed crosslinguistic universals. Ingram (1978) has
proposed a “universal constraint” on systems of number, whereby languages
fall into one of three categories based on the number of distinctions marked:
• one, more-than-one;
• one, two, more-than-two; and
• one, two, three, more-than-three.
If the trial, quadruple, and quintuple forms of numeral incorporation within
the pronoun system of ASL are interpreted as grammatical number marking,
then the ASL data would have to be characterized as marking a greater range of
distinctions than has been found in spoken languages (one, two, three, four, five,
more-than-five). ASL would thus be in violation of proposed universal systems
of number. Of course one should not allow theory to dictate interpretation of the
data; if the ASL data showed clear evidence of being true grammatical number
marking, then the universal status of the constraint proposed by Ingram would
be significantly weakened. However, as the discussion above illustrates, there
is reason to believe that certain aspects of the number marking (trial, quadruple,
quintuple) are not part of grammatical number.
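
Ingram's proposal amounts to a simple membership check: the set of number distinctions a language marks must be one of the three sets listed above. The sketch below (my own encoding, offered only for illustration, not Ingram's formalism) shows why reading the incorporated forms as grammatical number would place ASL outside the constraint:

    # Ingram's (1978) proposed constraint, encoded as three licit sets of
    # number distinctions. The encoding itself is illustrative.
    LICIT_NUMBER_SYSTEMS = [
        {"one", "more-than-one"},
        {"one", "two", "more-than-two"},
        {"one", "two", "three", "more-than-three"},
    ]

    def obeys_constraint(distinctions: set) -> bool:
        return distinctions in LICIT_NUMBER_SYSTEMS

    print(obeys_constraint({"one", "two", "more-than-two"}))   # True
    # ASL with the incorporated forms read as grammatical number:
    print(obeys_constraint({"one", "two", "three", "four", "five",
                            "more-than-five"}))                # False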

13.2.2.2 Pronominal reference in other signed languages. In this
section I examine pronominal reference in five other signed languages. The
languages considered are Italian Sign Language (Lingua Italiana dei Segni or
LIS) (Pizzuto 1986; Pizzuto, Giurana, and Gambino 1990), Australian Sign
Language, or Auslan (Johnston 1991; 1998), Danish Sign Language (dansk
tegnsprog or DSL) (Engberg-Pedersen 1986; 1993), Indo-Pakistani Sign Lan-
guage, or IPSL (Vasishta, Woodward, and Wilson 1978; Zeshan 1998; 1999;
personal communication), and Japanese Sign Language (Nihon Syuwa or NS)
(Fischer 1996; Supalla and Osugi, unpublished). These five languages repre-
sent at least three distinct sign language families: the French Sign Language
11 An additional factor that may be related to the optionality of the trial, quadruple, and quintuple
forms has to do with articulatory constraints. If a signer wishes to use a trial form (meaning the
three of them), this form might only be possible if the loci associated with the three referents
are adjacent to each other. For example, if the three referents have been localized at ‘a,’ ‘c,’
and ‘e’ (see Figure 13.1) it would be impossible to articulate a trial form that could specify
which referents were included and exclude those that were not. In such circumstances, the
plural marking of a pronoun pushes the system in less-indexic directions.

Table 13.7 Person distinctions across signed languages

ASL       LIS       Auslan    DSL       IPSL      NS

1         1         1         1         1         1
2         2         2         non-1     2         2
3 . . .   3 . . .   3 . . .             3 . . .   3 . . .

Group (LIS and DSL), the British Sign Language Group (Auslan), and the
Asian Sign Language Group (NS) (Woodward 1978b). Although the precise
historical affiliation of Indo-Pakistani Sign Language (IPSL) is at present un-
known, evidence suggests IPSL is not related to any European signed languages
(Vasishta, Woodward, and Wilson 1978).12
Rather than summarize data from each signed language separately, I ex-
amine each individual category (person, number, gender) in turn. To facilitate
comparison, I also include the data from ASL. I begin with an examination
of person distinctions (Table 13.7). All but one of the signed languages ana-
lyzed here utilize an index (or some highly similar handshape, such as a lax
index) directed toward the signer or the addressee to indicate first and second
person, respectively. DSL has forms that directly correspond to these first and
second person pronouns, but Engberg-Pedersen (1993) argues for a distinction
between first person and nonfirst person (discussed below).13 Whether or not
a second/third person distinction exists, all signed languages considered here
use strategies similar to those used in ASL to establish (or localize) nonpresent
referents along a horizontal plane in the signing space.14 In addition, all signed
languages appear to allow a theoretically unlimited number of third person (or
nonfirst person) pronouns and, because individual referents are associated with
distinct loci in the signing space, reference to individuals is unambiguous.
The number distinctions in these signed languages pattern as shown in
Table 13.8.15 LIS, Auslan, and DSL all have a singular/plural distinction similar
12 To an even greater extent than is true with ASL, the data available on pronouns in these signed
languages is incomplete. Consequently, there are gaps in the data I present and discuss. In addi-
tion, relatively little is known concerning the historical relationships between signed languages
outside the French and British Sign Language Groups.
13 Like Engberg-Pedersen, Meier (1990) argues for a first/nonfirst distinction in ASL pronouns.
His analysis is also discussed below.
14 Zeshan (1998) points out that in IPSL, for some referents it is required or possible to localize
referents in the upper signing space, as opposed to along the horizontal plane. The upper signing
space is used for place names, and can be used for entities that have been invested with some
degree of authority as well as referents that are physically remote from the signer (for example,
in a telephone conversation).
15 In Table 13.8 I have included all information that is available in the literature, including data
covering dual, trial, and quadruple forms in DSL and Auslan. Based on available data, I am not
able to comment on whether or not these forms constitute grammatical number or rather are
instances of numeral incorporation, as I have argued is the case for ASL.

Table 13.8 Number distinctions across signed languages

ASL        LIS        Auslan     DSL        IPSL                         NS

singular   singular   singular   singular   transnumeral                 ?
plural     plural     plural     plural     dual (inclusive/exclusive)
2,3,4,5    ?          2,3,4      2,3,4      nonspecific plural

to that present in ASL, where the plural form is marked by an arc-shaped move-
ment. In addition, Auslan and DSL appear to have dual, trial, and quadruple
forms as well (the sources available for LIS did not mention these distinctions).
No number-marking data were available for NS.
Number marking in IPSL is, in certain respects, distinct from number mark-
ing in the other signed languages considered here. Zeshan (1999; personal
communication) reports that IPSL has a transnumeral form that is unspecified
for number. In other words, a single point with an index finger can refer to any
number of entities; it is the context of the discourse that determines whether
singular or plural reference is intended. This is true not only for second and third
person reference, but for first person reference as well. IPSL also has a dual
form (handshape with middle and index finger extended, moving between two
points of reference) that can mark for “inclusive/exclusive-like” distinctions.
In addition, IPSL has a “nonspecific plural” (a half-circle horizontal move-
ment) that refers to an indefinite number of persons exclusively (not used with
nonhuman entities).
Finally, gender marking across these signed languages patterns as shown in
Table 13.9. Only one of the six signed languages considered here has (possibly)
morphological marking for gender; as the discussion below reveals, it is not clear
whether this gender marking is in fact an integral component of the pronominal
system.
Japanese and other Asian signed languages (Taiwan Sign Language; see
Smith 1990) are unique in that they use classifier handshapes to mark gender
in certain classes of signs. In NS a closed fist with the thumb extended upward

Table 13.9 Gender distinctions across signed languages

ASL   LIS   Auslan   DSL   IPSL   NS

–     –     –        –     –      male
                                  female
                                  (optional?)

represents ‘male,’ while the same fist with the pinkie finger extended represents
‘female.’ Fischer and Osugi (2000) point out that the male and female hand-
shapes are borrowed directly from Japanese culture, but that the use of these
handshapes in NS appears to be completely grammaticized.
Supalla and Osugi (unpublished) report that these gender handshapes are
used in four morphosyntactic paradigms:
• nominal lexemes referring to humans, where a combination of the two gender
  handshapes refers to ‘couple’;
• classifier predicate constructions, where a verb of motion or location incor-
  porates the masculine gender handshape to represent any animate entity;
• kinship lexemes, where the handshape marks the gender of the referent, and
  placement in relation to other articulators denotes the familial status, as in
  ‘daughter’; and
• inflectional morphemes incorporated into agreement verb constructions, where
  a gender handshape can mark the gender of the subject or object.
Fischer (1996) reports that, in addition to co-occurring with verb agreement,
gender marking can co-occur with pronominal indexes. She gives only the
following example:
(1) MOTHERa COOK CAN INDEXa-1
‘Mother can cook, she can.’
(INDEXa-1 simultaneously indicates gender and location). In (1) subscript let-
ters indicate spatial locations, while the subscript number ‘1’ indicates the
female gender handshape. Fischer (personal communication) comments that
most often the gender handshape is articulated on the nondominant hand, with
the dominant hand index pointing to it. Less common is a form where the
gender handshape is incorporated into the pronoun itself; in example (1) the
female gender handshape ‘1’ would be articulated at location ‘a.’ It is not clear
what restrictions apply to the use of gender handshapes within the pronominal
system (for example, can they be used across all three person distinctions?),
nor is it clear whether or not this gender marking is a required component of a
well-formed pronoun.16

13.3 Pronominal reference in signed languages: Typological considerations
Having examined pronominal systems in a range of spoken and signed lan-
guages, we are now in a position to make some typological observations. In this
16 Daisuke Sasaki (personal communication) thinks gender marking in NS pronouns is optional.
If gender marking is, in fact, optional, this would suggest that it is not grammatical gender
marking, but rather a productive morphological process at work (optionally) in parts of the
pronoun system. See discussion of number marking in Section 13.2.2.1.

section I discuss four such observations, and go on to discuss what might be at
the core of the differences between pronominal reference in spoken and signed
languages.

13.3.1 Typological homogeneity


Despite the fact that the signed languages analyzed here represent several lan-
guage families, there appears to be a tremendous amount of uniformity across
signed languages in the way pronominal reference is structured. One aspect
of the phonological form of personal pronouns, the index handshape, is either
identical across languages, or is very closely related.17 As for location, first
and second person reference is virtually identical across the signed languages
considered here, and all of the signed languages reviewed use locations in the
signing space for reference to nonpresent individuals.18 Through strategies of
nominal establishment (or localization) that are similar across languages, non-
present referents are established (or localized) at distinct locations in the signing
space, and further reference to an individual within a given discourse is through
an index to the specific location with which she or he has been associated.

13.3.2 Morphophonological exclusivity


In signed languages, locations in space seem to be reserved for referential pur-
poses (pronouns and verbal agreement markers).19 In other words, there exists
a particular subset of phonemes (locations in space) that are used for reference,
as person markers within the pronoun and verbal agreement systems. The lo-
cations in space at which referents are established are lexically noncontrastive;

17 While the phonological form of personal pronouns is highly similar across signed languages,
this is not true in the case of possessive and reflexive pronouns. Although locations in space
are still used, there appears to be considerable variation in handshape among languages in the
possessives and reflexives.
18 Japanese Sign Language appears to have two forms of the first person; one form contacts the
chest and a second form contacts the nose of the signer (a gesture that is borrowed from Japanese
hearing culture). Similarly, Farnell (1995) notes that in Plains Indian Sign Language reference
to self sometimes takes the form of an index finger touching the nose.
19 This is, in fact, an oversimplification. Locations in the signing space are also used in classifier
constructions, where specific handshapes (classifiers) are combined with location, orientation,
movement, and nonmanual signals to form a predicate (Supalla 1982; 1986). For example, the
English sentence The person walked by might be signed in ASL by articulating the person
classifier (a 1 handshape, upright orientation) and moving it from right to left across the sign-
ing space. In these constructions the use of space is not lexical, but rather topographic. The
relationship between locations in space is three-dimensional; there exists a one-to-one corre-
spondence between the elements of the classifier predicate and what they represent in the real
world. For a discussion of the topographic use of space, see Emmorey (this volume). In terms
of morphophonological exclusivity, my claim is that when space is used lexically (as opposed
to topographically), locations in space seem to be reserved for referential purposes.

Table 13.10 Possible analogously structured system of pronominal reference

                  Nominal establishment                          Pronominal reference

Signed language   Proper name + index to [locationα]             index to [locationα] = unambiguous reference
Spoken language   Proper name + pronominal root + [phonemeα]     pronominal root + [phonemeα] = unambiguous reference

there are no minimal pairs that are distinguished solely by spatial location.20
To my knowledge, there are no spoken languages that pattern this way, where a
particular subset of phonemes is used exclusively for a specific morphological
purpose, such as pronominal reference.

13.3.3 Morphological paradigm


The structure of pronominal paradigms in signed languages is highly unusual
from a crosslinguistic perspective. One could imagine a spoken language with
a pronominal system structured in a way that is analogous to what we see
across signed language pronominal systems. A schematic diagram of such a
system might look as shown in Table 13.10. This hypothetical spoken language
might establish nominals in a discourse in the following manner: the speaker
utters a proper name (‘Mary’) and then utters a series of two phonemes; the
first (let’s say /t/) serves as the pronominal root, the second (let’s say /i/) as a
‘person marker’ that, through a one-to-one association, uniquely identifies the
nonpresent referent Mary. For the remainder of the discourse, reference back
to Mary would be through uttering /ti/. For example, the speaker might say ‘I
like /ti/,’ which would be interpreted as meaning ‘I like Mary.’ Each time the
word /ti/ is uttered, Mary is unambiguously referred to. A pronominal system
thus structured is unattested in spoken language.21
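
The mechanics of this hypothetical system can be modeled directly, which brings out how unlike attested spoken-language pronouns it would be. The sketch below is a thought experiment only; the root /t/ and the marker phonemes are the invented ones from the example above:

    # Sketch of the unattested spoken-language analog: a fixed pronominal
    # root combines with a reserved set of 'person-marker' phonemes, each
    # uniquely bound to a referent for the discourse. All names invented.
    ROOT = "t"
    MARKERS = ["i", "e", "o", "u"]   # reserved, lexically noncontrastive

    bindings: dict[str, str] = {}

    def establish(name: str) -> str:
        """Utter the proper name plus ROOT + marker to bind a referent."""
        pronoun = ROOT + MARKERS[len(bindings)]
        bindings[pronoun] = name
        return pronoun

    p = establish("Mary")     # the speaker says: 'Mary, /ti/'
    print(f"I like /{p}/")    # interpreted as 'I like Mary'
    print(bindings[p])        # -> Mary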
20 Liddell (2000b) makes this point in a recent publication, but notes one exception: the signs
GOAL and POINT. Both are two-handed signs; in both the nonmoving (nondominant) hand
has a 1 handshape with the finger pointing upward, and the moving (dominant) hand, also a
1 handshape, is directed toward the tip of the nondominant hand finger. Liddell notes that what
distinguishes these two signs is the placement of the stationary hand; when articulated at the
level of the forehead, the sign is GOAL, but when articulated at the level of the abdomen, the
sign is POINT. Although locations in the signing space appear to be lexically contrastive here, it
is worth noting that these two signs are distinctly articulated with respect to the relative vertical
position of the nondominant hand, rather than with respect to different locations within the
horizontal signing plane. There are no two signs in ASL that differ only in location along a
horizontal signing plane.
21 In legal and mathematical texts, however, variables are used in a manner that approaches
similarity to sign language pronominal paradigms. For example, “If a person x steals something
from a person y, and x sells it to a person z, then z is not obliged to give it back to y.” I thank an
anonymous reviewer for bringing this to my attention. Also, Aronoff, Meir and Sandler (2000)
discuss a small number of spoken languages that have verbal agreement systems similar in
structure to the hypothetical system discussed in Table 13.10.

13.3.4 Referential specificity


The pronominal systems in signed languages exhibit a high degree of “refer-
ential specificity.” I am defining referential specificity as the degree to which
full referential information is recoverable from the morphology. The location
component of singular pronouns (in all signed languages studied to date) allows
for complete and unambiguous identification of referents within a discourse. As
a result, the relationship between form and meaning (referent) is non-arbitrary.
Indeed, signed and spoken languages appear to differ in their capacity for
encoding referential information in pronouns. In an attempt to characterize this
difference, one could posit a continuum of referential specificity (shown in
Figure 13.2). A language like Asheninca, which marks minimal contrasts in
the pronoun system, would fall to the left of a language like Nagala, which has
more extensive marking for both number and gender. Nagala, in turn, might
fall to the left of a language like Aranda, which, in addition to marking for
increased number distinctions, also has a rich system of marking for kinship.
Evaluating pronominal systems in this way, signed languages would fall far
to the right of spoken languages; because the location component of pronouns
allows for complete and unambiguous reference, signed language pronouns
have an extremely high degree of referential specificity. Furthermore, because
pronominal systems across signed languages are structured so similarly, signed
languages would cluster together at the high end of the continuum. If gen-
der marking in NS turns out to be an integral component of the pronominal
paradigm, then perhaps NS would fall slightly more to the right.
Critical evaluation of the proposed continuum in Figure 13.2 raises the fol-
lowing question: is this, in fact, a continuum? With respect to referential speci-
ficity, is the difference between spoken language pronoun systems and signed
language pronoun systems a difference of degree, or is it rather a difference
of kind? In Section 13.4 I address this question and argue for the latter by
examining spatial marking in signed and spoken language pronoun systems.

13.4 Spatial marking in pronominal systems


We have seen that pronominal reference in signed languages is highly uniform
(typologically homogeneous) and, at the same time, highly unusual from a
crosslinguistic (crossmodality) perspective. What underlies this difference be-
tween signed and spoken languages is the use of spatial marking for referential
purposes.


[Figure 13.2 Continuum of referential specificity: spoken languages occupy the low-degree end (Asheninca, English, Nogogu, Nagala, Aranda), while signed languages cluster at the high-degree end (IPSL, ISL, Auslan, DSL, ASL), with NS marked by a ‘?’ slightly further toward the high end]

In this section I examine spatial marking in pronominal systems and focus on
two questions. First, how is spatial marking used in spoken language pronouns?
Second, can spatial marking, as it is used in signed language pronominal refer-
ence, be accounted for within a framework based on grammatical distinctions?

13.4.1 Spatial marking in spoken language pronominal systems


Spatial–locational information can be conveyed through several different gram-
matical markers, among them locative adverbs, prepositions, and demonstra-
tives. Here I focus on demonstratives, as they interact most directly with ref-
erence in natural language. Two characteristics of demonstratives make them
particularly relevant to the issues at hand: first, demonstratives have deictic fea-
tures that serve to indicate the location of the referent in the speech situation;
and, second, demonstratives are frequently a source of third person pronouns,
through a process of grammaticalization.22

13.4.1.1 Demonstrative pronouns in spoken language. Diessel (1999:36) writes that:
all languages have at least two demonstratives locating the referent at two different points
on a distance scale: a proximal demonstrative referring to an entity near the deictic center,
and a distal demonstrative indicating a referent that is located at some distance to the
deictic center.

22 Diessel (1999:35–36) defines deictic expressions as “linguistic elements whose interpretation


makes crucial reference to some aspect of the speech situation.” He goes on to note that deictic
expressions can be divided into three semantic categories: person deictics (I and you; speech
participants), place deictics (objects, locations, or persons apart from speech participants), and
time deictics (which indicate a temporal reference point relative to the speech event). It is place
(or spatial) deictics that are the focus of the present discussion.

Table 13.11 English demonstrative pronouns

this (proximal)
that (distal)

The deictic center (also referred to as the origo) is most often associated with
the location of the speaker. Table 13.11 shows the two-term deictic distinction
found in English demonstrative pronouns.23 Here, the proximal form this refers
to an entity near the speaker, while the distal form that refers to an entity that
is at a distance from the speaker.
In contrast to the two-way distinction found in English, many languages have
three basic demonstratives. In such languages, the first term denotes an entity
that is close to the speaker, while the third term represents an entity that is
remote relative to the space occupied by speaker and addressee. As Anderson
and Keenan (1985) note, three-term systems differ in the interpretation given to
the middle term. Take, for example, the following demonstrative forms, found
in Quechua, an Amerindian language spoken in central Peru (Table 13.12).
The type of three-way distinction evidenced in Quechua has been characterized
as “distance-oriented,” in that the middle term refers to a location that is a
medial distance relative to the deictic center (or speaker) (Anderson and Keenan
1985:282–286).
In contrast to the distance-oriented distinctions encoded in Quechua
(Table 13.12), we have the following system in Pangasinan, an Austronesian
language spoken in the Philippines (Table 13.13). The three-term deictic system
in Pangasinan is “person-oriented”; in this system the middle term denotes a
referent that is close to the hearer (as opposed to a medial distance relative to the
speaker).
The demonstrative pronoun system of Khasi, an Austro-Asiatic language
spoken in India and Bangladesh, patterns as shown in Table 13.14. The demon-
strative system in Khasi is based on six demonstrative roots, which are paired
with personal pronouns, u ‘he’ and ka ‘she.’ Three of the demonstratives locate
the referent on a distance scale (proximal, medial, distal). Khasi demonstrative
pronouns encode two additional deictic dimensions: visibility (ta ‘invisible’)
and elevation (tey ‘up’, thie ‘down’). The elevation dimension indicates whether
the referent is at a higher or lower elevation relative to the deictic center, or
speaker.

13.4.1.2 Grammaticalization of pronominal demonstratives in spoken language. In the previous section we examined spatial deixis in spoken
23 In this section I discuss only singular demonstrative forms.

Table 13.12 Quechua demonstrative pronouns

kay     ‘this (one)/here’          (proximal)
chay    ‘that (one)/there’         (medial)
taqay   ‘that (one)/over there’    (distal)

Source: Weber 1986:336

languages and saw a range of spatial distinctions encoded across demonstra-
tives. In contrast to what is seen above, the spatial marking in signed languages
exists directly within the personal, as opposed to the demonstrative, pronoun
system. In order to understand the extent to which the spatial marking of signed
languages can be accounted for within a framework based on grammatical dis-
tinctions, we must look at the ways in which spatial marking surfaces within
the personal pronoun systems of spoken languages.
In many spoken languages, demonstratives undergo diachronic change and
develop into grammatical markers such as relative pronouns, complementizers,
sentence connectives, and possessives (Diessel 1999). Of particular interest
here are languages in which pronominal demonstratives develop into third per-
son pronominal markers. For example, Lak, a northeast Caucasian language
spoken in southern Daghestan, has the demonstrative base forms shown in
Table 13.15. This system of demonstrative marking is similar to that found in
Khasi (Table 13.14) in that it encodes a distinction based on elevation.
What is interesting about Lak is that these demonstrative base forms interact
with the personal pronoun system. The personal pronouns of Lak are limited
to the first and second person. There is no independent pronominal form repre-
senting a third person distinction; rather any of the five deictics can function as
third person pronouns. Thus, the personal pronouns of Lak pattern as shown in
Table 13.16. In Lak the third person pronouns are grammaticalized demonstra-
tives and, as such, are spatially marked; there are five separate forms that can
be used to refer to a third person, and the choice of form is determined by the
spatial location of the referent itself.
Bella Bella, a Salishan language spoken in British Columbia, is another
language in which demonstratives have grammaticalized and are used as third

Table 13.13 Pangasinan demonstrative pronouns

(i)yá    near speaker
(i)tán   near hearer
(i)mán   away from speaker and hearer

Source: Benton 1971:88



Table 13.14 Khasi demonstrative pronouns

                       Masculine singular (u ‘he’)   Feminine singular (ka ‘she’)

Proximal               u-ne                          ka-ne
Medial (near hearer)   u-to                          ka-to
Distal                 u-tay                         ka-tay
Up                     u-tey                         ka-tey
Down                   u-thie                        ka-thie
Invisible              u-ta                          ka-ta

Source: adapted from Nagaraja 1985, Rabel 1961, in Diessel 1999:43

person pronouns. Similar to Lak, Bella Bella has independent forms for first
and second person pronouns, but recruits the demonstrative forms to serve as
third person pronouns. The forms shown in Table 13.17 can all be used as third
person pronouns. Thus, third person pronouns in Bella Bella have a seven-fold
distinction that relies on proximity to speaker, hearer, and other, as well as
visibility and presence.
Although only two languages have been discussed here, there are many others
that utilize demonstrative (spatially deictic) pronouns for third person reference.
Several languages of the Pacific Northwest (for example Northern Wakashan)
mark a variety of spatial categories in the third person. Additionally, many
Indo-Aryan languages (Sinhala and Hindi among them) utilize demonstratives
as third person pronominal forms.

13.4.2 Spatial marking: Spoken and signed languages compared


Having examined data from demonstrative systems across the two modalities,
we are now in a position to draw some comparisons between spatial marking
in spoken and signed languages. The discussion here focuses on two distinct
questions. First, what is the range of spatial distinctions marked in languages of
the two modalities? Second, what role does spatial marking play within spoken
and signed languages?

Table 13.15 Lak demonstrative base forms

va   ‘near to speaker’
mu   ‘near to addressee’
ta   ‘distant from both, neutral’
ga   ‘below speaker’
ḳa   ‘above speaker’

Source: Friedman 1994:79



Table 13.16 Lak personal pronouns

na    1st person pronoun
ina   2nd person pronoun
va    3rd person: near to speaker
mu    3rd person: near to addressee
ta    3rd person: distant from both
ga    3rd person: below speaker
ḳa    3rd person: above speaker

Source: adapted from Friedman 1994:79–80

The range of spatial distinctions marked within spoken language reference
systems, while it is to some degree varied, is in principle limited. In the lan-
guages examined here, the range of spatial distinctions marked was between
two (English) and seven (Bella Bella). While there may in fact be languages
that distinguish more that seven spatial markings, the number of spatial dis-
tinctions that function within systems of pronominal reference is going to be
limited.
Signed languages, on the other hand, exhibit an unlimited number of spatial
distinctions within their pronoun systems. There are an unlimited number of
locations in the signing space, all of which can be used as a locus for establishing
a referent in space (Lillo-Martin and Klima 1990). In practice, of course, the
number of locations actually used within a given discourse is limited. These
limitations are, however, not imposed by the grammar of the language, but
rather are due to articulatory and perceptual factors.
The role that spatial marking plays within pronominal systems of individual
spoken languages varies. Some languages (for example English) use spatial
marking only within the demonstrative pronouns, while others (Lak and Bella
Bella) use spatial marking throughout the third person pronouns as well. In
either case, spatial marking in spoken languages appears to be restricted to

Table 13.17 Bella Bella third person pronouns

gʲaqu      near first person (visible)
qauqu      near second person (visible)
qequ       near third person (visible)
gʲatsqu    near first person (invisible)
qauxtsqu   near second person (invisible)
qetsqu     near third person (invisible)
qkʲequ     removed from presence

Source: adapted from Boas 1947:296–297



demonstrative pronouns and, in some languages, grammaticalized demonstra-
tives that function as third person pronouns.
Unlike in spoken languages, spatial marking is prevalent throughout all signed
language pronominal systems that have been studied to date. Additionally, there
is minimal variation between signed languages in the way they use space for
referential purposes. Whereas in spoken languages marking for spatial location
is restricted to demonstratives and grammaticalized third person pronouns, in
signed languages spatial marking is present throughout the entire pronominal
system. In ASL, for example, locations in the signing space are utilized within
the possessive, reflexive, and reciprocal pronoun constructions in addition to
being used in the personal pronouns. As is the case with personal pronouns,
the location component of these structures allows for unambiguous reference.
Furthermore, spatial marking is an integral part of the verbal agreement system
in all signed languages. Once referents have been established at locations in the
signing space, those verbs that require subject and object agreement (agreement
verbs; Padden 1988) are articulated between these locations. For example, with
the agreement verb ASK: 1ASK2 would be interpreted as ‘I ask you,’ while
1ASKa would be interpreted as ‘I asked (specific individual associated with
locus a).’
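
Agreement verbs can be schematized with the same locus-to-referent mapping used for the pronouns. In the sketch below (illustrative only; the gloss convention follows 1ASK2 above, and the stored loci are invented examples), a verb's start and end points are read off the subject and object loci:

    # Illustrative sketch: an agreement verb is articulated from the
    # subject locus toward the object locus. Loci '1' and '2' stand for
    # signer and addressee; letters for established referents. Hypothetical.
    loci = {"1": "I", "2": "you", "a": "Mary"}

    def agree(verb: str, subj: str, obj: str) -> str:
        gloss = f"{subj}{verb}{obj}"                     # e.g. 1ASK2
        meaning = f"{loci[subj]} {verb.lower()} {loci[obj]}"
        return f"{gloss} = '{meaning}'"

    print(agree("ASK", "1", "2"))   # 1ASK2 = 'I ask you'
    print(agree("ASK", "1", "a"))   # 1ASKa = 'I ask Mary'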
One additional difference between spatial marking in spoken and signed
languages has to do with the structure of the spatial deictic system itself. As was
discussed in Section 13.4.1.1, languages that have a three-term distinction differ
in the interpretation given to their middle terms; some are distance-oriented,
while others are person-oriented (Anderson and Keenan 1985). The spatial
marking that is present in signed languages does not seem to fit neatly into
either of these two categories; rather, spatial marking in signed languages is
based on absolute locations within the signing space.
In an early publication, Siple (1982:315) writes: “Those differences between
spoken and signed language which are purported to occur will be seen to be
more quantitative than qualitative.” The results from the present typological
study contradict this claim. I would argue that spatial marking within signed
language pronoun systems is qualitatively different from what we see in spoken
language pronoun systems. In Section 13.5 I explore this difference and propose
a framework that will help us characterize the unusual aspects of sign language
pronominal reference.

13.5 The modality/medium distinction


In Section 13.1 one of the questions posed was the following: how, and to what
degree, does the modality of a language affect the structure of that language?
This question has been a topic of discussion for as long as signed languages have
been a focus of linguistic research. While modality effects have received a great
deal of attention in the literature, the distinction between modality and medium
has not. In an attempt to provide a principled account for the differences between
pronominal reference in spoken and signed languages, I propose a preliminary
analysis that relies on a distinction between the modality of a language and the
medium of a language.
The “modality” of a language can be defined as the physical or biological
systems of transmission on which the phonetics of a language relies. There are
separate systems for production and perception. For spoken languages, pro-
duction relies upon the vocal system, while perception relies on the auditory
system. Spoken languages can be categorized, then, as being expressed in the
vocal-auditory modality. Signed languages, on the other hand, rely on the gestu-
ral system for production and the visual system for perception. As such, signed
languages are expressed in the visual–gestural modality.
The “medium” of a language I define as the channel (or channels) through
which a language is conveyed. More specifically, channel refers to the dimen-
sions of space and time that are available to a given language. Defined as such,
I suggest that the medium of spoken languages is “time,” which in turn can be
defined as “a nonspatial continuum, measured in terms of events that succeed
one another.”24 Indeed, all spoken languages unfold in time; speech segments,
morphemes, and words follow one another, and the order in which they appear is
temporally constrained. This is not to say that all aspects of spoken language are
entirely segmental in nature. Autosegmental approaches to phonology (in which
tiers comprised of linear arrangements of discrete segments are co-articulated)
have proven essential in accounting for certain phonological phenomena (tone
spreading and vowel harmony among them). However, the temporal character
of spoken languages is paramount, while “spatial” relations play no role (by this
I mean the segments of spoken languages have no inherent spatial–relational
value).25
Whereas spoken languages are limited to the temporal medium, signed lan-
guages are able to utilize an additional medium, that of “space”: “a boundless,
three-dimensional extent in which objects occur and have relative position and
direction.” It is certainly not the case that signed languages exist apart from time;
like spoken languages, the signs of signed languages are temporally ordered.
Additionally, although much of sign language phonology has been argued to

24 The definitions of time and space are taken from Webster’s online dictionary: www.m-w.com
25 In their paper on the evolution of the human language faculty, Pinker and Bloom (1990:712)
discuss the vocal-auditory channel and argue that “language shows signs of design for the
communication of propositional structures over a serial channel.” Although their use of the term
“channel” seems to cover both modality and medium (as defined in the present chapter), their
observations seem to fall in line with the observations made here.

be simultaneous (in the sense that the components of a sign – handshape, lo-
cation, movement, orientation, and nonmanual features – are simultaneously
articulated), research suggests that linear segments do exist, and that the or-
dering of these segments is an important aspect of phonological structure (for
an overview, see Corina and Sandler 1993). Nevertheless, signed languages
are unique in that they have access to the three dimensions of space; thus, the
medium of signed languages is space and time. Significantly, it is the spatial
medium, a medium not available to spoken languages, that affords a radically
increased potential for representing spatial relationships in an overt manner.26
Returning to the question of pronominal reference, the fact that signed lan-
guages are expressed through the visual–gestural modality does not preclude a
fully abstract, grammatical system of reference. Signed languages could have
developed systems of reference that utilize the modality-determined abstract
building blocks that are part of the language (handshapes, locations on the
body, internal movements, etc.) without using space. Instead of localizing ref-
erents at distinct locations in the signing space, reference might look something
like (2).
(2) Possible sign language pronoun system
M-A-R-Y [F handshape to left shoulder], B-I-L-L [F handshape
to chest]
[F handshape to left shoulder] LIKE [F handshape to chest]
‘Mary likes Bill.’
In principle, there is no reason this kind of system for referring to individuals
in the discourse would not work. However, there are no known natural signed
languages that are structured this way; all natural signed languages take full
advantage of the spatial medium to refer to referents within a discourse.
There are, in fact, artificial sign systems that do not use space, or use it in
a very restricted manner. One example is Signing Exact English (SEE 2), an
English-based system of manual communication developed by hearing educa-
tors to provide deaf students with visible, manual equivalents of English words
and affixes. For a discussion of the acquisition of these artificial sign systems
and the ways in which Deaf children adapt them, see Supalla and McKee (this
volume).27 The fact that deaf children learning artificial sign systems spon-
taneously use space in ways that are characteristic of ASL and other natural
signed languages is strong support for the extent to which space is an essential
and defining feature of signed languages (Supalla 1991).
26 One area in which we see this potential fully exploited is the classifier systems of signed
languages. See footnote 19 for a brief description of the way in which classifiers utilize space.
For an overview of classifier predicates in ASL, see Supalla (1986).
27 See also Quinto-Pozos (this volume) for a discussion of the more limited use of space by
Deaf-Blind signers; it appears that the tactile-gestural modality to which Deaf-Blind signers are
limited provides a more limited access to the medium of space.

13.6 Sign language pronouns revisited


Acknowledging the distinction between the modality of language and the med-
ium of language helps clarify certain aspects of pronominal reference in signed
languages. In particular, by viewing reference in signed languages as medium-
driven, the typologically unusual aspects of signed language pronominal sys-
tems receive some explanation. Each of the typological considerations discussed
in Section 13.3 (typological homogeneity, morphophonological exclusivity,
morphological paradigm, and referential specificity) has its roots in space and
the way it is used in signed language reference.
While an understanding of language medium helps clarify why reference in
signed languages is different, it does little to explain exactly how it is different.
In other words, what is the precise nature of this difference, and what impact
does this difference have on the formal structure of the language? I address these
issues by returning to the questions posed at the beginning of this chapter: are the
categories encoded within pronoun systems (e.g. person, number, gender, etc.)
the same across languages in the two modalities, and within these categories,
is the range of distinctions marked governed by similar principles?
In this section I argue that the category number is lexically marked in sign
language pronoun systems, but that the category of person is not. I also discuss
gender marking in sign language pronominal systems and suggest some possible
explanations for the fact that the category of gender does not appear to be an
integral component of any signed language pronominal system.

13.6.1 Number marking in signed language pronominal systems


As the data in Section 13.2.2 illustrate, number appears to be a category that
is formally (i.e. grammatically) marked in signed language pronominal sys-
tems. Two sets of facts support a grammatical analysis of number marking
in sign language pronouns. First, the distinctions that are marked within the
category of number are similar to those marked in spoken language pronomi-
nal systems (singular, plural, dual, inclusive/exclusive). Within the majority of
signed language pronoun systems, the distinction between singular and plural
is consistently carried by modulating the movement of the index (adding an
arc-shaped movement). This arc-shaped movement is a movement feature that
surfaces in other lexical items, for example, in the ASL signs COMMITTEE
and POWER. The exception to this is IPSL, where the distinction between sin-
gular and plural is formationally neutralized in most contexts; the transnumeral
form (index directed toward a point in space) is unspecified for number, and
is interpreted as either singular or plural based on the context of the utterance.
While IPSL is clearly the exception (no other signed languages studied have
patterned this way), the transnumeral form does constitute variation. Thus, with
respect to whether and how the singular/plural distinction is marked, we see at
least some variation among signed languages.28
A second fact that argues for number marking in signed languages has to do
with the dual form, at least as it exists in ASL. As discussed in Section 13.2.2.1,
there is evidence to suggest that number distinctions within ASL pronouns are
constrained in ways that are similar to spoken languages. The dual form of the
personal pronoun (WE-TWO/INCLUSIVE, WE-TWO/EXCLUSIVE, YOU-
TWO, THOSE-TWO, etc.) appears to be consistent and mandatory throughout
the system. While the trial, quadruple, and quintuple forms are used in some
situations, I have argued that they are not grammatical number marking but
rather are instances of numeral incorporation and are fully componential in their
morphology. Distinguishing between the dual as grammatical number marking
and the other forms as existing outside the core of the pronominal system results
in ASL conforming to the universal systems of number proposed by Ingram
(see discussion in Section 13.2.2.1). Thus, we see in ASL (and potentially other
signed languages) a system of number marking that is constrained in ways that
are similar to spoken language.
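The distinction drawn here between listed grammatical number forms and componential
numeral incorporation can be sketched schematically. The following toy model in Python
is illustrative only; the glosses and cutoffs are expository assumptions, not claims
about the ASL lexicon.

    # Toy model: the dual as a listed (grammatical) number form, versus trial
    # and higher forms built compositionally from a numeral plus a pronominal
    # base (numeral incorporation).
    LISTED_FORMS = {1: "PRO-1", 2: "WE-TWO"}        # grammatical number marking
    NUMERALS = {3: "THREE", 4: "FOUR", 5: "FIVE"}   # independent numeral signs

    def first_person_pronoun(n):
        if n in LISTED_FORMS:
            return LISTED_FORMS[n]                  # listed in the paradigm
        if n in NUMERALS:
            return NUMERALS[n] + "-OF-US"           # componential combination
        return "WE"                                 # general plural (arc movement)

    print(first_person_pronoun(2))  # WE-TWO
    print(first_person_pronoun(4))  # FOUR-OF-US

On this model, Ingram's universal constraints on number apply only to the listed
paradigm; the incorporated forms are generated outside it.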

13.6.2 Person marking in sign language pronominal systems


Do signed languages systematically encode a distinction with respect to per-
son? This issue has received considerable attention of late (for discussion, see
Meier 1990) and analyses fall at various points along a continuum of distinc-
tions that are marked.29 At one end are those who argue that there are no person
distinctions (Ahlgren 1990; Lillo-Martin and Klima 1990). Moving along the
continuum, Meier (1990) and Engberg-Pedersen (1993) argue for a grammat-
ical distinction between first and nonfirst person, while the standard analysis
(Friedman 1975 and others) suggests a three-way distinction: first, second, and
third. Finally, representing the other end of the continuum, Neidle et al. (2000)
argue that nonfirst person can be subdivided into many distinct persons. The use
of space for referential purposes, they write, “allows for finer person distinc-
tions than are traditionally made in languages that distinguish grammatically
only among first, second, and third person” (p.36).
The wide range of analyses set forth to explain person distinctions is, I be-
lieve, a reflection of the complex and (crossmodally) typologically unusual
nature of sign language pronominals. The high degree of referential specificity

28 Indeed, as the number of signed languages studied increases, it is quite likely that other types
of variation in number marking will be found.
29 Although my comments are framed with respect to person distinctions in ASL, the typological
homogeneity that characterizes pronominal reference across signed languages makes it possible
to extend the analysis to other signed languages. Where appropriate, I have included references
to works on other signed languages.

(the unambiguous reference that the medium, space, allows) makes the appli-
cation of standard models of person distinction challenging, if not potentially
problematic. In the remainder of this section I discuss some of these problems.
Ingram (1978) approaches the category of person by examining the lexical
marking of deictic features (speaker, hearer, other). In evaluating languages with
respect to this, Ingram asks, “what are the roles or combination of roles in the
speech act that each language considers to be of sufficient importance to mark by
a separate lexical form?” (p.215). Approached in this manner, English would be
analyzed as having a five-way system: I, we, you, he, they; see Table 13.1 above.
Gender and case distinctions do not come into consideration here because they
are not pertinent to the roles of individuals in speech acts. Thus, English can be
said to mark a lexical distinction between first, second, and third persons.
Using this framework as a guideline, what roles or combination of roles does
ASL mark with a separate lexical form? Are the individual pronoun forms in
ASL separate lexical forms? Addressing first the question of separate, if we
interpret “separate” to mean distinct, the answer would be yes. The various
pronouns in ASL are both formationally distinct (in the sense that distinct loca-
tions in the signing space are utilized for all pronouns) as well as semantically
or referentially distinct (reference to individuals within the discourse is unam-
biguous). But do these individual pronouns constitute separate lexical forms?
Considering only singular reference for a moment, the standard analysis of
pronominal reference argues for a three-way distinction in person: first person
(index directed toward signer), second person (index directed toward a locus
in front of the addressee), and third person (index directed toward a point in
space previously associated with nonpresent referent). On this view, person
distinctions in ASL, as well as in the other signed languages reviewed here, are
based on distinct locations in the signing space.30
In order to evaluate the question of which person distinctions (if any) exist in
ASL we must ask whether these locations in space are lexical; in other words,
are the locations in space that are used for pronominal reference, that are used to
distinguish individual referents in a discourse, specified in the lexicon? For the
purposes of this discussion, I define the lexicon as the component of a grammar
that contains information about the structural properties of the lexical items in
a language. As such, the lexicon contains semantic, syntactic, and phonological
specifications for the individual lexical items within the language.
In a grammatical analysis of spatial locations, spatial features must have
phonological substance. In other words, the locations toward which pronouns
are directed must be part of a phonological system, and must be describable
using a set of discrete phonological features. Liddell (2000a) addresses this

30 The proposed first person pronoun in ASL is not, in fact, articulated at a location in the signing
space. Rather, it contacts the signer’s chest. This fact is addressed below.

issue with respect to ASL pronouns and agreement verbs by reviewing the liter-
ature on spatial loci and pointing out the lack of adequate explanation concern-
ing phonological implementation. One system of phonological representation
(Liddell and Johnson 1989) attempted to specify locations in space by means of
seven possible vectors radiating away from the signer, four possible distances
away from the signer along that vector, and several possible height features.
Combinations of vector, distance, and height could result in a large number of
possible locations (loci) in the signing space. Although this is the most com-
plete attempt at providing phonological specification for locations in the signing
space, Liddell (2000a: 309) points out that it remains fundamentally inadequate:
signers do not select from a predetermined number of vectors or heights if the person or
thing is present. . . the system of directing signs is based on directing signs at physically
present entities, regardless of where the entity is with respect to the signer.

He concludes that there is “no adequate means of giving phonological substance
[to these structures]” (p.310). In other words, these locations in the signing space
cannot be phonologically specified.
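To make concrete what such a feature system would enumerate (and hence the finite
inventory that, on Liddell's argument, actual pointing behavior outstrips), the locus
inventory can be computed as the cross-product of the proposed features. This is a
hypothetical sketch: only the counts (seven vectors, four distances, and "several"
heights, assumed here to be three) follow the description above; the labels are
placeholders.

    from itertools import product

    # Hypothetical enumeration of loci under a Liddell and Johnson (1989)-style
    # feature system: each locus is a (vector, distance, height) combination.
    VECTORS = [f"v{i}" for i in range(1, 8)]    # seven vectors from the signer
    DISTANCES = [f"d{i}" for i in range(1, 5)]  # four distances along a vector
    HEIGHTS = ["low", "mid", "high"]            # "several" heights; three assumed

    loci = list(product(VECTORS, DISTANCES, HEIGHTS))
    print(len(loci))  # 7 * 4 * 3 = 84 discrete loci

However fine-grained such a grid is made, it remains a finite, predetermined list,
which is precisely what directing signs at physically present entities does not
respect.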
I find Liddell’s arguments convincing, and the ramifications of his findings are
significant. If, in fact, these locations cannot be phonologically specified, then
we must conclude that the locations themselves cannot be part of the lexicon.
If these locations are not part of the lexicon, then they cannot be part of the
lexical marking of deictic features. Following this line of analysis, one would
have to conclude that person distinctions in ASL are not lexically marked. Thus,
although the various pronouns in a given discourse can be analyzed as separate
(or formationally distinct), there is no evidence for lexical marking of deictic
features. Again, because the spatial locations used for pronominal reference are
not phonologically specifiable, there is no basis for person distinctions in ASL
pronouns.
At this point one might argue that an adequate means of specifying the phono-
logical representation of locations in space is at least possible: sign language
phonologists simply have not come up with it yet. Allowing for this possibility,
let us assume for the moment that these locations in space can indeed be spec-
ified in the lexicon. If this is the case, then we are faced with a separate, more
serious, problem. Nonpresent referents can be localized at an infinite number of
distinct locations in space, “between any two points that have been associated
with various referents, another could in principle be established” (Lillo-Martin
and Klima 1990:194). This means that there are an infinite number of distinct
lexical forms within the pronoun systems of signed languages. According to
this analysis, signed languages would show not a three-way distinction in per-
son, but rather an infinite number of distinctions in person, each marked by a
separate lexical form. I would argue that such a “system” does not constitute
person at all.

13.6.2.1 An alternative analysis of pronouns in ASL. The typolog-
ical data discussed in this paper are consistent with an alternative analysis
of pronouns that has been proposed by Liddell (1994; 1995; 2000a; 2000b).31
Liddell’s analysis utilizes mental space theory, whereby “constructing the mean-
ings of sentences depends on the existence of mental spaces – a type of cognitive
structure distinct from linguistic representations” (Liddell 2000b:340–341, after
Fauconnier 1985; 1997). Mental spaces are conceptual structures that speakers
build up during discourse. A grounded mental space is a mental space whose
entities are conceived of as being present in the immediate environment. Non-
present referents (those that have been localized or established at locations in
signing space) are viewed as conceptually present entities within a grounded
mental space.32
Liddell argues that pronouns are directed toward elements of grounded men-
tal spaces. When a pronoun is directed toward a physically present referent
(such as the signer and the addressee), the direction is not lexically fixed, but
rather depends on the actual physical location of the referent. Because a present
referent can be in an unlimited number of physical locations, there are no lin-
guistic features or discrete morphemes that can specify the direction of the
sign. For nonpresent referents, pronouns are directed at elements (tokens and
surrogates) that are conceived of as present in a grounded mental space.
In contrast to the standard analysis of pronominal reference, Liddell argues
that pronouns are a combination of linguistic and gestural elements. The linguis-
tic elements (handshape, aspects of orientation, and some types of movement)
are describable using discrete linguistic features. The direction and goal (or
end point) of the movement, however, are not linguistic at all, but rather are
gestural.

13.6.2.2 Conceptual iconicity in signed language pronominal refer-
ence. At this point in the discussion it may be useful to step back and take
a look at language within the broader context of cognitive abilities. In partic-
ular, I would like to examine what Pederson and Nuyts (1997) have termed
“the relationship question”; namely, what is the relationship between linguistic
representation and conceptual representation? Although the precise nature of
this relationship is the subject of much debate within cognitive science, here I
assume that language and conceptualization have, at least to a certain degree,
separate systems of representation. In other words linguistic representations are
distinct from conceptual representations. Given this, and in light of the present
discussion of reference in signed languages, it is worthwhile to ask whether the
31 Liddell’s argument encompasses not only pronouns in ASL, but also agreement markers. Here
I discuss his analysis only as it relates to pronouns.
32 Liddell (1994) distinguishes between two types of nonpresent referents, tokens and surrogates.
Both are conceived of as being present in the grounded mental space.

modality and/or the medium of a language might in some way have an influence
on the interface between language and conceptualization.
As was discussed in Section 13.5, because signed languages are able to uti-
lize the spatial medium, they are uniquely equipped to convey spatial–relational
information in a very direct, non-abstract manner. As a result, the interface
between certain conceptual structures and their linguistic representations is
qualitatively different. Specifically, signed languages enjoy a high degree of
“conceptual iconicity” in certain subsystems of the languages (pronominal ref-
erence and classifier predicates being the two most evident). This conceptual
iconicity is, I believe, an affordance of the medium.
Participant roles (speaker/signer, addressee, other) are pragmatic constructs
that exist within all languages, spoken and signed. However, the manner in
which these roles are encoded is, at a very fundamental level, medium-dependent.
In order to encode participant roles, spoken languages require a level of abstrac-
tion; a formal (i.e. purely linguistic) device is necessary to systematically encode
distinctions in person. The resulting systems of grammatical person utilize dis-
tinctly linguistic forms (separate lexical forms, in Ingram’s framework) to refer
to speaker, addressee, and other. These forms are lexically specifiable, and are
internal to the grammar of the language.
Signed languages are unique in that they do not require this level of ab-
straction. Because signed languages are conveyed in the spatial medium (and
because reference to individuals within a discourse is unambiguous), formal,
language-internal, marking of person is unnecessary. The coding of participant
roles is accomplished not through linguistic devices, but rather through ges-
tural deixis. The participants within a discourse are unambiguously identified
through these deictic gestures, which are systematically incorporated into the
system of pronominal reference.
It is definitely not the case that the entire pronominal system (in ASL, for
example) is devoid of grammatical or linguistic features. Case information is
carried by the handshape components of referential signs; person pronouns
are articulated with the 1 handshape, possessive pronouns with the B hand-
shape, and reflexives with the “open A” handshape. In addition, as argued in
Section 13.6.1, number appears to be a category that is grammatically marked
in ASL. Crucially, however, locations in space, as they are used for reference
across signed languages, do not constitute grammatical person distinctions. I
am arguing (in support of Lillo-Martin and Klima 1990; Ahlgren 1990) that
there are no formal person distinctions in signed languages. Rather “gestural
deixis” (along the lines of Liddell’s analysis) serves to identify referents within a
discourse.

13.6.2.3 Referential specificity revisited. In light of the distinction
between modality and medium for which I have argued above, it may be useful to
revisit the issue of referential specificity in pronominal reference. In particular,
it seems that the continuum of referential specificity proposed in Section 13.3.4
can be further analyzed as being composed of two separate continua: semantic
specificity and indexic specificity. In any given language, the identity of a
referent may be determined either semantically or indexically. Spoken language
pronoun systems are relatively rich semantically in that they rely on the use of
formal semantic features to convey distinctions of person, number, and gender.
The fact that the spoken languages examined above fall at various points along a
continuum of referential specificity (see Figure 13.2) is a reflection of the degree
to which the richness of semantic marking varies across spoken languages.
Signed language pronoun systems, on the other hand, are relatively impover-
ished semantically; with the exception of numerosity, signed language pronouns
rank very low on a continuum of semantic specificity. However, whereas they
have a low degree of semantic specificity, signed languages as a whole demon-
strate a very high degree of indexic specificity. Because signed languages have
access to and fully utilize the three dimensions of space, reference to single
individuals within a discourse is fully indexic. Significantly, it is when pronoun
forms are semantically marked for number that the system is pushed in less
indexic directions (discussed at greater length below).
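The two continua can also be illustrated with a toy resolution procedure. In the
Python sketch below, the discourse data are invented and no processing claim is
intended: a feature-based pronoun narrows the set of referents by semantic features
and may remain ambiguous, whereas a locus-directed index picks out exactly one
referent.

    # Toy contrast between semantic and indexic specificity in pronoun resolution.
    referents = [
        {"name": "Mary", "gender": "f", "number": "sg", "locus": "a"},
        {"name": "Sue",  "gender": "f", "number": "sg", "locus": "b"},
    ]

    def resolve_semantic(gender, number):
        # spoken-language style: filter by formal semantic features
        return [r["name"] for r in referents
                if r["gender"] == gender and r["number"] == number]

    def resolve_indexic(locus):
        # signed-language style: follow the index to a spatial locus
        return [r["name"] for r in referents if r["locus"] == locus]

    print(resolve_semantic("f", "sg"))  # ['Mary', 'Sue']: still ambiguous
    print(resolve_indexic("a"))         # ['Mary']: fully indexic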

13.6.2.4 Arguments for a first/nonfirst distinction. Before leaving
this topic, I address some of the arguments that have been set forth as evidence
for a distinction between first person and nonfirst person in sign language pro-
nouns. Engberg-Pedersen (1993) has argued for a distinction between first and
nonfirst person in Danish Sign Language (DSL). As evidence in support of this
distinction, she points out that the first person pronoun differs formally from
other pronouns in two ways. First, it is “the only form in which the manual artic-
ulator makes contact with something, namely the signer’s body as representing
the referent” (Engberg-Pedersen 1993:134). Second, the first person pronoun is
the only pronoun that is not always articulated with an index handshape; other
handshapes used include a loose index handshape, loose flat hand, and hand-
shapes identical to the handshape used in the verb that follows the first person
pronoun. These same arguments could be made with respect to the posited first
person pronoun in ASL (as well as the other signed languages discussed). As I
am not familiar with DSL, my arguments against Engberg-Pedersen’s analysis
are framed with respect to the facts of ASL.
The formational differences that form the basis of Engberg-Pedersen’s anal-
ysis can be explained by other factors. With respect to the claim that the first
person pronoun is distinct because it contacts something (the signer’s chest),
an alternative explanation is available. Namely, that the form of this index
is determined by the phonology, by the phonological rules that are active in
the language. Various locations on the signer’s body can be used as places of
articulation for well-formed signs; in ASL, these include locations on the neck,
upper arm, elbow, forearm, as well as several distinct locations on the face,
chest, and nondominant hand. The center of the chest is, without question, one
of these locations, as evidenced by the fact that there are numerous lexical items
in ASL whose specification for location is the center of the chest (LIKE, FEEL,
EXCITED, WHITE). To my knowledge, however, there are no signs whose
specification for location is the area just in front of the chest. I would argue that
the first person pronoun contacts the chest because the well-formedness con-
straints that are active in the phonology of the language require that it do so.33 In
other words, an index directed toward the chest but not actually contacting the
chest could be argued to be in violation of well-formedness constraints that ex-
clude the area in front of the chest as a permissible place of articulation for a sign.
The fact that the pronoun referring to the addressee (second person in the stan-
dard analysis) does not contact the chest of the addressee is also due to phonolog-
ical well-formedness constraints; in signed languages, locations on other peo-
ple’s bodies are not permissible places of articulation for well-formed signs.34
Engberg-Pedersen’s second argument for distinguishing the category of first
person is based on handshape variation that occurs with the first person pronoun
forms in DSL. While the data Engberg-Pedersen provides with respect to this
issue are incomplete, observations of similar variation in ASL “first person”
pronouns suggest that the variation might be due in some instances to surface
phonetic variation and in others to morphophonological processes, in particular
handshape assimilation. The data in (3) from ASL illustrate the first type of
variation.35
(3) MY NEIGHBOR TEND TALK+++, PRO-1 1HATE3 HER-3 GOSSIP.
‘My neighbor, she tends to talk a lot. I hate her gossiping!’
In this particular utterance, the phonological form of the “first person” pronoun
(PRO-1) is a loose index handshape (index finger is partially extended, other
three fingers are loosely closed). Whereas the citation form of this pronoun is
a clearly articulated index, I would argue that what surfaces here is an instance
of phonetic variation.
A second example (4) illustrates handshape assimilation.
(4) DOG STUBBORN. PRO-1 FEED PRO-3, REBEL, REFUSE EAT
‘The dog is stubborn. I feed it, but it rebels, refuses to eat.’

33 This is perhaps an overstatement; phonetic variation may lead to an index directed toward, but
not actually contacting, the chest.
34 An exception to this might be found in infant-directed or child-directed signing, during which
mothers (or other caregivers) sometimes produce pointing signs that contact a child.
35 Data in (3) and (4) are from a corpus of ASL sentences being used as stimuli for a neurolinguistic
experiment currently under way (Brain Development Lab, University of Oregon; Helen Neville,
Director).

In this utterance, the handshape of the “first person” pronoun is not the citation
form handshape (clearly articulated index), but rather something that more
closely resembles the handshape of the following verb, FEED (four outstretched
fingers together, and the thumb touching the middle of the fingers). In other
words, the handshape of the pronoun I has assimilated to the handshape of the
following sign, FEED.
Returning to Engberg-Pedersen’s posited distinction between first and non-
first person in DSL, I have shown that an alternative analysis is possible. Though
my comments are based not on DSL data but on similar data from ASL, I have
illustrated that the two formational differences she claims support a first person
distinction (contact with body and varying handshape) can, in fact, be inter-
preted as resulting from phonological factors.
Like Engberg-Pedersen, Meier (1990) has argued that ASL distinguishes
between first and non-first person in its pronouns. Meier’s arguments against
a formal distinction between second and third person pronouns in ASL are
convincing, and I fully agree with this aspect of his analysis.36 However, his
arguments for distinguishing between first and nonfirst pronouns are less clearly
convincing, and an alternative analysis is possible. Here I address two of his
arguments.
Analyzing data from role-playing in ASL, Meier states that “deictic points in
role-playing do mark grammatical person, as is indicated by the interpretation
of deictic points to the signer in role-playing” (Meier 1990:185). In role-playing
situations, he argues, the ASL pronoun INDEXs (an index to the signer) behaves
just like the English first-person pronoun, I, does in direct quotation. Although
Meier takes this as evidence of the category first person in ASL, an alternative
analysis exists. Couched within Liddell's framework (discussed above in
Section 13.6.2.1), each “deictic point” in a discourse, regardless of whether or
not role-playing is involved, is a point to an entity within a grounded mental
space. These entities are either physically present (in the case of the signer and
the addressee) or conceived of as present (in the case of nonpresent referents).
When role-playing occurs, the conceptual maps on which the mental spaces are
based shift. In other words, the conceptual layout of referents within a discourse
shifts in the context of role-playing. Role-playing or not, indexes still point to
entities within a grounded mental space, and referents are identified not through
abstract person features, but through gestural deixis.
36 Meier’s arguments are threefold. First, with respect to the ways in which points in space are
actually used in discourse, “the set of pointing signs we might identify as second person largely,
if not completely, overlaps with the set we would identify as third person” (Meier 1990:186).
Second, although eye gaze at the addressee is an important component of sign conversations, it
does not appear to be a grammatical marker of second person in ASL. Finally, Meier notes that,
while there exist gaps in the paradigms of agreement verbs that appear to be motivated by the
existence of a first person object, there are no gaps that arise with respect to either the addressee
(second person) or a non-addressed participant (third person).

In addition to these arguments from role-playing, Meier suggests that the first
person plural pronouns WE, OUR, and OURSELVES provide further evidence
for person distinctions in ASL. The place of articulation of these signs, he argues,
is only partially motivated; they share the same general place of articulation as
the singular first person forms (the signer’s chest) but the place of articulation
does not indicate the real world locations of those other than the signer.
Although pronominal reference is unambiguous for singular pronouns, it is
not the case that the plural forms of pronouns are always unambiguous. I agree
with Meier on this point. Some plural pronouns are unable to take advantage
of spatial locations in the same way that singular pronouns are; articulatory
constraints can limit the ability to identify and coindicate plural referents that
are located at non-adjacent locations in the signing space. Take, for example,
the sign WE, which is normally articulated with the index handshape contacting
the right side of the chest, then arcing over to the left side of the chest. As Meier
notes, the articulation of this plural pronoun does not indicate the locations of
any referents other than the signer.
While Meier argues that this is evidence for a distinction between first and
nonfirst person, this is not the only possible analysis. Like the plural form WE,
there are instances in which non-first plural forms (THEY, for example) do not
indicate the locations of referents. Example (5) serves to illustrate this.
(5) [Context: The signer is describing her experience working at a Deaf school.
The individuals for whom she worked, while the topic of conversation,
have not been established at distinct locations in the signing space.]
      t        head nod
RESEARCH WORK, REGULAR. SOMETIMES FIND INFORMATION FOR INDEX-PL.
‘I did research on a regular basis. Sometimes I found information for
them.’
In (5) the INDEX-PL serves as an unspecified general plural (a nonfirst person
plural in Meier’s terms) and is articulated by a small sweeping motion of the
index from left to right in neutral space. As none of the referents have been
established in the signing space, this plural pronoun is non-indexic.
A second set of nonsingular pronouns provides an additional example. Cormier
(1998) notes that the number-incorporated signs (THREE-OF-US, FOUR-OF-
US, FIVE-OF-US) do not always index the locations of the referents; “modu-
lations for inclusive/exclusive interfere with the default indexic properties” of
these pronouns (p.23). Thus we see that when pronouns are marked for plurality,
the indexical function is sometimes suppressed. These non-indexical plurals can
be taken as evidence for grammatical number marking within ASL;37 however,
37 Since I have argued that the numeral incorporated forms are not instances of grammatical number
marking, this statement pertains most directly to examples like (5) above.

the fact that they can surface in both “first” and “nonfirst” constructions sug-
gests that the non-indexical WE is insufficient evidence for the existence of a
first person category.
The fact that WE is more often non-indexical than the plural forms YOU-
ALL and THEY can be analyzed as resulting from the unusual semantics of WE
(speaker + other(s)). Generally speaking, the category traditionally referred to
as first person plural is anomalous across languages; as Benveniste (1971: 202)
and others point out, “ ‘we’ is not a multiplication of identical objects but a junc-
tion between ‘I’ and the ‘non-I’, no matter what the content of this ‘non-I’ may
be.” This anomaly is, one could argue, one of denotational semantics; whereas
the plurals of “second” and “third” persons readily denote multiple addressees
and multiple nonpresent referents, respectively, a “first” person plural does not
typically denote multiple speakers.38 In the case of sign language pronominal
reference, if we refer to the pragmatic constructs of speaker, hearer, and other
(as opposed to the purely linguistic notions of person, which I have argued are
unnecessary in languages expressed in the spatial medium), the non-indexical
WE can be analyzed as just one way of expressing the concept of signer +
unspecified others.
Before moving on to a discussion of gender marking in signed language
pronominal systems, one additional piece of evidence against a distinction be-
tween first and nonfirst person is discussed. Recall that IPSL has a transnumeral
form that is unspecified for number, where a single point with an index finger
can refer to any number of entities (Zeshan 1999; personal communication).
As discussed in Section 13.2.2.2, the transnumeral form surfaces across all
“persons,” first, second, and third. If there were a formal distinction between
first and nonfirst persons, we might expect that number marking, in this case
transnumerality, would reflect this distinction as well. The fact that first and
nonfirst person pronouns are treated identically with respect to transnumerality
suggests that the posited distinction is not well motivated.

13.6.3 Gender marking in signed languages


In Section 13.2.2.2, I suggested that, although gender handshapes may surface
in some pronoun forms in Japanese Sign Language, gender marking may not
be an integral component of the pronominal system of that language. Certainly,
38 ASL does, in fact, have pronominal forms that can (to varying degrees) clearly indicate which
referents other than the signer are included. One such form is the indexical plural WE-COMP
(‘composite we’) discussed in Cormier (1998), where individual pointing signs refer exhaus-
tively to each member of the set. Baker and Cokely (1980:208–209) discuss a separate collection
of forms that utilize an “arc-point”; with multiple present referents, “if the signer starts the arc
by pointing to him/herself and stops with the last person on the other end, the arc-point means
‘we’ or ‘all of us’.” These are all strategies for specifying which specific referents are to be
included in the semantically anomalous “signer and others” category of pronoun.

gender distinctions could be marked systematically throughout the pronominal
system of a signed language, for example through gender handshapes. As best
we know, this has not happened.
One possible explanation for the lack of gender marking in signed language
pronominal systems has to do with functional motivations. The fact that loca-
tions in the signing space unambiguously identify referents in a discourse may
render gender distinctions unnecessary. If Mary has been localized at locus ‘a,’
an index directed toward ‘a’ unambiguously identifies Mary as the referent;
additional information regarding her gender is simply unnecessary. The unam-
biguous status of locations, one could say, trumps gender. A similar argument
could be put forth to explain why no signed languages mark distinctions in
kinship within their pronominal systems.
Having reviewed person, number, and gender distinctions, I now return to a
question posed earlier: can signed language pronominal reference be accounted
for within a framework entirely based on grammatical distinctions? In other
words, does the standard analysis of pronominal reference in signed languages
adequately account for the data? Based on the discussion of person distinctions
above, I would argue that it does not account for the data. While distinctions
of number in signed language pronouns appear to be grammatically marked,
person distinctions do not. I would argue that an analysis of sign language
pronouns that relies on person distinctions is inadequate; the locations in space
that lie at the heart of person distinctions cannot be lexically specified, and
therefore cannot be considered lexical marking of person.

13.7 Conclusions
The present study, which examines pronominal reference across spoken and
signed languages, reveals that, from a typological perspective, signed languages
are unusual. Because signed languages have access to the spatial medium, there
is a qualitative difference in spatial marking between languages in the two
modalities. Because of their medium, signed languages have greater potential
for non-arbitrary form–meaning correspondences within pronoun systems. Re-
lationships can be expressed overtly in spatial terms, and as a result reference
to individuals within a discourse is unambiguous.
The data from signed languages indicate that number is a category that is
grammatically marked in signed language pronouns, but the category of person
is not. The fact that the location component of pronouns cannot be lexically
specified precludes an analysis of lexical distinctions in person. In addition, I
have argued that, although participant roles (as pragmatic constructs) exist in
all human languages, spoken and signed languages differ in how these roles
are encoded. Spoken languages require a formal linguistic device to systemat-
ically encode distinctions in person. Signed languages, on the other hand, do
not. The coding of participant roles is accomplished not through abstract lin-
guistic categories of person, but rather through gestural deixis. The participants
within a discourse are unambiguously identified through deictic gestures that
are incorporated into the system of pronominal reference.
This having been said, an important question arises: what is the precise
status of this class of signs in ASL? In other words, are the signs typically
glossed as pronouns in fact pronouns at all? I have argued that because signed
languages are deeply rooted in the spatial medium, they are able to convey
spatial–relational information in a very direct manner and reference to indi-
viduals within a discourse is unambiguous. The resulting conceptual iconicity
renders formal, language-internal, marking of person unnecessary. If it is the
case that formal person distinctions do not exist in signed languages, then there
may be no basis for analyzing these signs as personal pronouns.
Although further research is needed, the results from the present study sug-
gest that the class of signs traditionally referred to as personal pronouns may,
in fact, be demonstratives. Describing this word class, Diessel (1999:2) writes
that demonstratives generally serve pragmatic functions, in that they are pri-
marily used to focus the addressee’s attention on objects or locations in the
speech environment. Within the framework of mental space theory, the point-
ing signs that have been analyzed as pronouns behave very much like demon-
stratives. These pointing signs are directed toward entities that are present
within the signing environment; for the signer and addressee, toward physi-
cally present entities, and for nonpresent referents toward conceptually present
entities.
If, in fact, this class of signs turns out to be more accurately classified as
demonstratives, then the typologically unusual nature of sign language “pro-
nouns” takes on a new meaning. Signed languages would be typologically
unusual not because the pronouns all share some unusual characteristics, but
because, as a class of human languages, they have no pronouns at all. This would
most certainly be a significant typological finding, and a clear example of the
extent to which the medium of a language affects the structure of that language.
To be sure, additional research is needed in order to understand more fully
the complex nature of spatial locations and the central role they play in signed
language reference. By elucidating the role of space in signed languages, we
will gain insight into the factors that shape language as well as the effects of
modality and medium on the structure of language.

Acknowledgments
I have benefited greatly from two reviewers’ comments, questions, and criti-
cisms, and I would like to thank them for their valuable contributions to this
chapter. Special thanks to Richard Meier for his thoughtful responses, and to
David Corina, Fritz Newmeyer, and Soowon Kim for their assistance. An earlier
version of this chapter was presented at the Third Biennial Conference of the
Association for Linguistic Typology, University of Amsterdam, 1999. I would
like to thank the conference participants for their comments and suggestions.

13.8 References
Ahlgren, Inger. 1990. Deictic pronouns in Swedish and Swedish Sign Language. In
Theoretical issues in sign language research, Vol. I: Linguistics, ed. S. Fischer and
P. Siple, 167–174. Chicago, IL: The University of Chicago Press.
Anderson, Stephen R. and Edward L. Keenan. 1985. Deixis. In Language typology and
syntactic description, Vol. III: Grammatical categories and the lexicon, ed. Timothy
Shopen, 259–308. New York: Cambridge University Press.
Aronoff, Mark, Irit Meir, and Wendy Sandler. 2000. Universal and particular aspects of
sign language morphology. University of Maryland Working Papers in Linguistics,
10:1–33.
Baker, Charlotte and Dennis Cokely. 1980. American Sign Language: A teacher’s re-
source text on grammar and culture. Silver Spring, MD: T.J. Publishers.
Bellugi, Ursula and Edward Klima. 1982. From gesture to sign: Deixis in a visual–
gestural language. In Speech, place, and action, ed. Robert J. Jarvella and Wolfgang
Klein, 279–313. Chichester: John Wiley.
Benton, R.A. 1971. Pangasinan reference grammar. Honolulu, HI: University of Hawaii
Press.
Benveniste, Emile. 1971. Problems in general linguistics. Coral Gables, FL: University
of Miami Press.
Boas, Franz. 1947. Kwakiutl grammar. In Transactions of the American Philosophical
Society, Volume 37 (3). New York: AMS Press.
Chinchor, Nancy. 1979. Numeral incorporation in American Sign Language. Doctoral
dissertation, Brown University, Providence, Rhode Island.
Corina, David and Wendy Sandler. 1993. On the nature of phonological structure in sign
language. Phonology 10:165–207.
Cormier, Kearsy. 1998. How does modality contribute to linguistic diversity?
Manuscript, University of Texas, Austin, Texas.
Diessel, Holger. 1999. Demonstratives: Form, function, and grammaticalization. Typo-
logical Studies in Language, 42. Amsterdam: John Benjamins.
Engberg-Pedersen, Elisabeth. 1986. The use of space with verbs in Danish Sign Lan-
guage. In Signs of life: Proceedings of the 2nd European Congress of Sign Language
Research, ed. Bernard Tervoort, 32–51. Amsterdam: Institute of General Linguis-
tics of the University of Amsterdam.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language: The semantics and
morphosyntax of the use of space in a visual language. Hamburg: Signum.
Farnell, Brenda. 1995. Do you see what I mean? Plains Indian sign talk and the embod-
iment of action. Austin: University of Texas Press.
Fauconnier, Gilles. 1985. Mental spaces: Aspects of meaning construction in natural
language. Cambridge, MA: The MIT Press.
Fauconnier, Gilles. 1997. Mappings in thought and language. Cambridge: Cambridge
University Press.
Fischer, Susan D. 1996. The role of agreement and auxiliaries in sign language. Lingua
98:103–119.
Fischer, Susan D. and Yutaka Osugi. 2000. Thumbs up vs. giving the finger: Indexical
classifiers in NS and ASL. Paper presented at the Seventh International Conference
on Theoretical Issues in Sign Language Research, Amsterdam, July.
Forchheimer, Paul. 1953. The category of person in language. Berlin: Walter de Gruyter.
Friedman, Lynne A. 1975. On the semantics of space, time, and person reference in the
American Sign Language. Language 51:940–961.
Friedman, Victor A. 1994. Ga in Lak and the three “there”s: Deixis and markedness
in Daghestan. In NSL 7: Linguistic studies in the non-Slavic languages of the Com-
monwealth of Independent States and the Baltic Republics, ed. Howard I. Aronson,
79–93. Chicago, IL: Chicago Linguistic Society.
Hale, K. L. 1966. Kinship reflections in syntax. Word 22:318–324.
Ingram, David. 1978. Typology and universals of personal pronouns. In Universals
of human language, Vol. 3: Word structure, ed. Joseph H. Greenberg, 213–247.
Stanford, CA: Stanford University Press.
Johnston, Trevor. 1991. Spatial syntax and spatial semantics in the inflection of signs
for the marking of person and location in Auslan. International Journal of Sign
Linguistics 2:29–62.
Johnston, Trevor. 1998. Signs of Australia: A new dictionary of Auslan. North Rocks,
NSW: North Rocks Press.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Lane, Harlan. 1984. When the mind hears: A history of the Deaf. New York: Random
House.
Last, Marco. In preparation. Expressions of numerosity: A cognitive approach to
crosslinguistic variation in grammatical number marking and numeral systems.
Doctoral dissertation, University of Amsterdam.
Laycock, Donald C. 1965. The Ndu language family. Canberra: Australian National
University.
Liddell, Scott. 1994. Tokens and surrogates. In Perspectives on Sign Language Structure:
Papers from the 5th International Symposium on Sign Language Research, Vol. I,
ed. I. Ahlgren, B. Bergman, and M. Brennan, 105–119. Durham, England: The
International Sign Linguistics Association.
Liddell, Scott. 1995. Real, surrogate, and token space: Grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–41.
Hillsdale, NJ: Lawrence Erlbaum.
Liddell, Scott. 1996. Numeral incorporating roots and nonincorporating prefixes in
American Sign Language. Sign Language Studies 92:201–225.
Liddell, Scott. 2000a. Indicating verbs and pronouns: Pointing away from agreement. In
The signs of language revisited: An anthology to honor Ursula Bellugi and Edward
Klima, ed. Harlan Lane and Karen Emmorey, 303–320. Mahwah, NJ: Lawrence
Erlbaum.
Liddell, Scott. 2000b. Blended spaces and deixis in sign language discourse. In Lan-
guage and gesture: Window into thought and action, ed. David McNeill, 331–357.
Cambridge: Cambridge University Press.
Liddell, Scott and Robert Johnson. 1989. American Sign Language: The phonological
base. Sign Language Studies 64:195–277.
Lillo-Martin, Diane. 1986. Parameter setting: Evidence from use, acquisition, and break-
down in American Sign Language. Doctoral dissertation, University of California,
San Diego, CA.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language.
In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 155–170.
Hillsdale, NJ: Lawrence Erlbaum.
Lillo-Martin, Diane and Edward S. Klima. 1990. Pointing out differences: ASL pro-
nouns in syntactic theory. In Theoretical issues in sign language research, Vol. I:
Linguistics, ed. S. Fischer and P. Siple, 191–210. Chicago, IL: The University of
Chicago Press.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical issues
in sign language research, Vol. I: Linguistics, ed. S. Fischer and P. Siple, 175–190.
Chicago, IL: The University of Chicago Press.
Miller, G. A. 1956. The magical number seven, plus or minus two: Some limits on our
capacity for processing information. Psychological Review 63:81–97.
Mühlhäusler, Peter and Rom Harré. 1990. Pronouns and people: The linguistic con-
struction of social and personal identity. Oxford: Basil Blackwell.
Nagaraja, K. S. 1985. Khasi: A descriptive analysis. Pune, India: Deccan College.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: The MIT Press.
Padden, Carol. 1988. Interaction of morphology and syntax in American Sign Language.
New York: Garland.
Pederson, Eric and Jan Nuyts. 1997. On the relationship between language and concep-
tualization. In Language and conceptualization, ed. Jan Nuyts and Eric Pederson,
1–12. Cambridge: Cambridge University Press.
Pinker, Steven and Paul Bloom. 1990. Natural language and natural selection. Behavioral
and Brain Sciences 13:707–784.
Pizzuto, Elena. 1986. The verb system of Italian Sign Language (LIS). In Signs of
life: Proceedings of the Second European Congress of Sign Language Research,
ed. Bernard Tervoort, 17–31. Amsterdam: Institute of General Linguistics of the
University of Amsterdam.
Pizzuto, Elena, Enza Giuranna, and Giuseppe Gambino. 1990. Manual and nonmanual
morphology in Italian Sign Language: Grammatical constraints and discourse pro-
cesses. In Sign language research: Theoretical issues, ed. Ceil Lucas, 83–102.
Washington DC: Gallaudet University Press.
Rabel, L. 1961. Khasi: A language of Assam. Baton Rouge, LA: Louisiana State Uni-
versity Press.
Ray, Sidney Herbert. 1926. A comparative study of the Melanesian Island languages.
London: Cambridge University Press.
Reed, Judy and David L. Payne. 1986. Asheninca (Campa) pronominals. In Pronominal
systems, ed. Ursula Wiesmann, 323–331. Tübingen: Narr.
Siple, Patricia. 1982. Signed language and linguistic theory. In Exceptional language and
linguistics, ed. Loraine K. Obler and Lise Menn, 313–338. New York: Academic
Press.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwan Sign Language. In Theoretical
issues in sign language research, Vol. I: Linguistics, ed. Susan Fischer and Patricia
Siple, 211–228. Chicago: The University of Chicago Press.
Supalla, Samuel J. 1991. Manually Coded English: The modality question in sign lan-
guage development. In Theoretical issues in sign language research, Vol. II: Psy-
chology, ed. Patricia Siple and Susan Fischer, 85–110. Chicago: University of
Chicago Press.
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American
Sign Language. Doctoral dissertation, University of California, San Diego, CA.
Supalla, Ted. 1986. The classifier system in American Sign Language. In Noun classes
and categorization, ed. Colette Craig, 181–214. Amsterdam: John Benjamins.
Supalla, Ted and Yutaka Osugi. Unpublished. Gender handshapes in JSL (Japanese Sign
Language). Course lecture notes, University of Rochester.
Vasishta, Madan M., James C. Woodward, and Kirk L. Wilson. 1978. Sign language
in India: Regional variation within the deaf population. Indian Journal of Applied
Linguistics 4:66–74.
Weber, David J. 1986. Huallaga Quechua pronouns. In Pronominal systems, ed. Ursula
Wiesmann, 333–349. Tübingen: Narr.
Winston, Elizabeth A. 1995. Spatial mapping in comparative discourse frames. In Lan-
guage, gesture, and space, ed. Karen Emmorey and Judy Reilly, 87–114. Hillsdale
NJ: Lawrence Erlbaum.
Woodward, James C. 1978a. Historical basis of American Sign Language. In Under-
standing language through sign language research, ed. P. Siple, 333–348. New
York: Academic Press.
Woodward, James C. 1978b. All in the family: Kinship lexicalization across sign lan-
guages. Sign Language Studies 19:121–138.
Zeshan, Ulrike. 1998. Functions of the index in IPSL. Manuscript, University of Cologne,
Germany.
Zeshan, Ulrike. 1999. Indo-Pakistani Sign Language. Manuscript, Canberra, Australian
National University, Research Centre for Linguistic Typology.
14 Is verb agreement the same crossmodally?

Christian Rathmann and Gaurav Mathur

14.1 Introduction
One major question in linguistics is whether the universals among spoken lan-
guages are the same as those among signed languages. Two types of universals
have been distinguished: formal universals, which impose abstract conditions
on all languages, and substantive universals, which fix the choices that a lan-
guage makes for a particular aspect of grammar (Chomsky 1965; Greenberg
1966; Comrie 1981). It would be intriguing to see if there are modality dif-
ferences in both types of universals. Fischer (1974) has suggested that formal
universals, such as certain syntactic operations, apply in both modalities, while some
substantive universals are modality-specific.
(2000:112) have noted that signed and spoken languages may have some dif-
ferent universals due to the different modalities.
In this chapter we focus on verb agreement as it provides a window into some
of the universals within and across the two modalities. We start with a working
definition of agreement for spoken languages and illustrate the difficulty in
applying such a definition to signed languages. We then embark on two goals:
to investigate the linguistic status of verb agreement in signed language and
to understand the architecture of grammar with respect to verb agreement. We
explore possible modality differences and consider their effects on the nature
of the morphological processes involved in verb agreement. Finally, we return
to the formal and substantive universals that separate and/or group spoken and
signed languages.

14.2 A working definition of verb agreement


Agreement is a linguistic phenomenon whereby the presence of one element
in a sentence requires a particular form of another element that is grammati-
cally linked to the first element. In many documented spoken languages, the
particular form of the second element, usually the verb, depends on the phi-
features (person, number, and/or gender features) of the first element, typically
the subject of the sentence.

Table 14.1 Null agreement system

Yoruba: lo ‘go’                        Japanese: tazuneru ‘ask’

Person    Singular    Plural           Person    Singular    Plural
1st       lo-         lo-              1st       tazune-     tazune-
2nd       lo-         lo-              2nd       tazune-     tazune-
3rd       lo-         lo-              3rd       tazune-     tazune-

Spoken languages vary as to whether they show null, weak, or strong agree-
ment (e.g. Speas 1995). Null agreement languages do not show overt agreement
for any combination of person, number, and/or gender features (see Table 14.1).
Other languages like Brazilian Portuguese and English (Table 14.2) show overt
agreement for some feature combinations. If there is no overt agreement for
a certain combination, a phonetically null affix, represented here by ø, is at-
tached to the verb.1 Positing phonetically null affixes in languages like Brazilian
Portuguese or English is justified by the fact that they contrast with overt agree-
ment for other combinations within the paradigm. This is different from null
agreement languages, where not even a phonetically null affix is attached.
Languages that have strong agreement (Table 14.3) show overt agreement
for all feature combinations, even if the same form is used for two or more
combinations, e.g. -en for first person plural and third person plural in German.
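The three systems can be pictured as lookup tables over phi-features. The following minimal sketch (in Python; the representation is ours, purely for illustration) uses None for a null system, a mostly defaulted table for a weak system, and a fully specified table for a strong one:

    # Illustrative sketch, not from the chapter: agreement paradigms as
    # lookup tables over phi-features (person, number). "-ø" marks a
    # phonetically null affix; None marks a system with no agreement slot
    # at all (not even a null affix is attached).

    NULL_SYSTEM = None                      # e.g. Yoruba lo-, Japanese tazune-
    WEAK_SYSTEM = {("3rd", "sg"): "-s"}     # English: every other cell is -ø
    STRONG_SYSTEM = {                       # German fragen, present tense
        ("1st", "sg"): "-e",  ("1st", "pl"): "-en",
        ("2nd", "sg"): "-st", ("2nd", "pl"): "-t",
        ("3rd", "sg"): "-t",  ("3rd", "pl"): "-en",
    }

    def inflect(stem, person, number, system):
        if system is None:                  # null agreement: bare stem
            return stem
        return stem + system.get((person, number), "-ø")

    print(inflect("ask", "3rd", "sg", WEAK_SYSTEM))     # ask-s
    print(inflect("frag", "1st", "pl", STRONG_SYSTEM))  # frag-en
    print(inflect("tazune", "2nd", "pl", NULL_SYSTEM))  # tazune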
One characteristic common to all types of spoken languages showing overt
agreement is that they show subject agreement. In rare cases, the verb may agree
with the object – e.g. Huichol (Comrie 1982:68–70) and Itelmen (Bobaljik and
Wurmbrand 1997) – but these languages usually show subject agreement too,
which suggests that object agreement is more marked than subject agreement
in spoken languages.
There seems to be little controversy in the literature regarding the realization
of verb agreement in spoken languages. On the other hand, in the literature on
signed languages, the status of verb agreement is still under debate. Consider
Figure 14.1, which shows what many people mean by “verb agreement” in
signed languages. As the figure shows for American Sign Language (ASL), the
handshape for ASK (an index finger) bends as it moves from one location to an-
other in front of the signer. Each location can be understood as representing a ref-
erent: the first would be associated with the asker and the second with the askee.
While the forms for ask differ across the signed languages with respect
to hand configuration and other lexical properties, the forms undergo exactly
the same changes to mark the meaning of you ask me. See the examples in
Figure 14.2: FRAGEN in German Sign Language (Deutsche Gebärdensprache
or DGS), QUESTION in Australian Sign Language (Auslan), and TAZUNERU
in Japanese Sign Language (Nihon Syuwa or NS).

Table 14.2 Weak agreement system

Brazilian Portuguese: perguntar 'ask'       English: ask

Person   Singular    Plural                 Person   Singular   Plural
1st      pergunt-o   pergunt-amos           1st      ask-ø      ask-ø
2nd      pergunt-ø   pergunt-ø              2nd      ask-ø      ask-ø
3rd      pergunt-a   pergunt-am             3rd      ask-s      ask-ø

1 The agreement forms given for the spoken languages are valid in the present tense; they may
look different in other tenses, providing another source of variation.
Sandler (1993), Meir (1998), and Newport and Supalla (2000) have made
similar observations. In addition, Supalla (1997), Mathur (2000), and Mathur
and Rathmann (2001), who carried out systematic comparisons of verb agree-
ment across several signed languages, have confirmed that the shape of the
agreement form is the same in all the signed languages studied.2 We assume
that the generalizations hold for all signed languages. For more examples from
DGS, ASL, NS, and Auslan, see Table 14.4.
If we call agreement with the asker “subject agreement” and that with the
askee “object agreement,” one substantive universal for signed languages is that
subject agreement seems to be more marked than object agreement. If there is
one agreement slot available, it is with the object, not with the subject (Meier
1982).3 If there are two agreement slots, object agreement is obligatory while
subject agreement is optional (Meier 1982; Padden 1983; Supalla 1997). In
addition, the “multiple” morpheme consisting of an arc movement is available
only for object agreement (Padden 1983).
Two apparent exceptions are the signs HABEN ‘have’ in DGS and HAVE in
British Sign Language, which seem to require subject agreement only, but these
forms may be analyzed as ‘be-at-person’ as in Russian (e.g. u menja est kniga,
literally ‘at pronoun1st.sg.GENITIVE is book’) or as possessive pronouns (for DGS
data, see Ehrlenkamp 1999) and deserve further investigation.4
2 In particular, Supalla (1997) uses data from ASL, British Sign Language, Finnish Sign Language,
Italian Sign Language, NS, and Swedish Sign Language. Mathur and Rathmann (2001) examine
Auslan, DGS, ASL, and Russian Sign Language. Mathur (2000) looks at the same set of signed
languages, with NS replacing Russian Sign Language.
3 This is also true for reflexive forms, which have one agreement slot and therefore must agree
with the object (Janis 1992:338; Meir 1998:278).
4 The DGS sign HABEN, sometimes glossed as BESITZEN ‘own’ (e.g. in Keller 1998), is made
with a wiggling movement and a ‘sch’ gestural mouthing which means that someone is in
possession of something. This is distinguished from the form DA ‘there’ which has a straight
movement and a ‘da’ mouthing, meaning that something or someone is there. While DA has
been glossed as ‘there’, it seems to be capable of showing agreement with the location of the
subject, which is not otherwise possible with the ASL sign THERE.

Table 14.3 Strong agreement system

Spanish: preguntar 'ask'                    German: fragen 'ask'

Person   Singular     Plural                Person   Singular   Plural
1st      pregunt-o    pregunt-amos          1st      frag-e     frag-en
2nd      pregunt-as   pregunt-áis           2nd      frag-st    frag-t
3rd      pregunt-a    pregunt-an            3rd      frag-t     frag-en

How do we characterize the reference to the asker and to the askee, as well
as their relation to each other and to the verb? How do we account for the large
number of substantive universals across signed languages with respect to verb
agreement? We briefly review the sign language literature, which has mostly
focused on ASL, to understand how these issues have been addressed.

14.3 Literature review on verb agreement in signed language(s)

14.3.1 Classic view


Stokoe, Casterline, and Croneberg (1965:279–282) recognized that some verbs
move away from and toward the signer (e.g. ASL signs like ASK and TAKE)5
and that the change from one direction to another could be understood as a verbal
inflection for “personal reference.” Fischer (1973), Friedman (1975; 1976), and
Fischer and Gough (1978) analyzed these verbs in greater detail, calling them
“(multi)-directional verbs” (e.g. ASL GIVE). Fischer and Gough also noted that
some verbs may change their orientation (e.g. ASL TEASE) and/or location
(e.g. ASL OWE). The next question is whether the changes can be considered
an inflection process.

14.3.2 Simultaneity view


In the period ranging from the late 1970s to the early 1980s, the modulation
of the verb according to loci on a horizontal plane was considered to be an
inflectional process that reflects “indexic reference” to first, second, and third
person (Klima and Bellugi 1979; Padden 1983). The other point raised in this
period is that morphemes inside the verb correspond to the object (and the
subject), but these morphemes are expressed simultaneously with the verb.
Meier (1982) gives two arguments for this view: it is possible to identify
a verb's arguments from its direction of movement alone, and verbs like
TEASE and BOTHER do not move between loci but change only in their
orientation.6

Figure 14.1 ASK 'You ask me' in ASL

5 The sign language literature now calls them "regular" and "backwards" verbs (compare Padden
1983).

14.3.3 Sequentiality/simultaneity view


One issue that remains is how to implement this morphemic analysis of verb
agreement. Sandler (1986; 1989) and Liddell and Johnson (1989) develop
phonological models based on the observation that signs may have not only
simultaneous but also sequential structure, e.g. a sign may have two hand-
shapes or locations in a sequence. These models make it possible to view the
“agreement morpheme” consisting of location features as an independent affix
that is attached to a verb stem underspecified for location.
Independently, Gee and Kegl (1982; 1983) and Shepard-Kegl (1985) pursue a
locative hypothesis in analyzing signs in terms of several morphemes, including
LOCs which mean “the reference point of a movement or location” and which
are represented separately from the verb stem so that they precede or follow the
base. Similarly, Bahan (1996) and Neidle, Kegl, MacLaughlin, Bahan, and Lee
(2000) use the distribution of nonmanual features to argue for the independent
status of agreement. In particular, they argue that eye gaze corresponds to object
agreement and head tilt to subject agreement (see Neidle et al. 2000:33–35 for
manuals, and 63–70 for nonmanuals).

14.3.4 R-locus view


The R-locus view is inspired by Lacy (1974) and is articulated by Lillo-Martin
and Klima (1990). Ahlgren (1990) using data from Swedish Sign Language,
Keller (1998) using data from DGS, Meir (1998) using data from Israeli Sign
Language, and Janis (1992), Bahan (1996), and Cormier, Wechsler and Meier
(1998) using data from ASL have also followed this kind of approach, under
which the locus is represented as a variable in the linguistic system, whose
content comes from discourse.7 There is no need to represent the locus overtly
at the level of syntax. It is sufficient to use the referential indices that are
associated with loci during the discourse. Keller further suggests that once the
content is retrieved from discourse, it is cliticized onto the verb.

Figure 14.2 'You ask me' in DGS, NS, and Auslan

6 Prillwitz's (1986) analysis considers this to be subject and object incorporation, based on data
from DGS.

14.3.5 Liddell’s view


One question that the R-locus view leaves open is whether the locus needs to be
specified phonologically when it is retrieved from the discourse. Liddell (1990;
1995; 2000a; 2000b) reconsiders the status of the locus in signed languages, i.e.
whether each point in the signing space receives its own phonological descrip-
tion and is listed in the lexicon as a morpheme. In Liddell (1990) he observes that
‘GIVE-to-tall-person’ would be directed higher in the signing space, whereas
‘GIVE-to-child’ would be directed lower, as originally observed by Fischer
and Gough (1978). Rather than describing such verbs as being directed to-
ward a point, Liddell (2000b) suggests that they are best described as being
directed toward entities in mental spaces.8 In addition, Liddell (2000a) argues
that such entities cannot be a proper part of the linguistic system. Instead, using

7 Ahlgren (1990:167) argues that “in Swedish Sign Language pronominal reference to persons
is made through location deictic terms” rather than through personal pronouns. Assuming that
the use of location deictic terms is dependent on discourse structure, we have grouped this
publication under the R-locus view.
8 A related point is made by Liddell (1990, 1995) that many agreement verbs are articulated at
a specific height. For example, the ASL signs ESP, TELL, GIVE, and INVITE are articulated
respectively at the levels of the forehead, the chin, the chest, and the lower part of the torso.

Table 14.4 Verb classes according to the phonological manifestation of agreement

Class 1: Verbs that change only in orientation
  DGS:    BEEINFLUSSEN 'influence', LEHREN 'teach', KRITISIEREN 'criticise'
  ASL:    PITY, ANALYZE, FILM
  NS:     MITOMERU 'approve', OKORU 'be angry at', SETTOKU-SURU 'persuade'
  Auslan: FEED, TEASE, COPY

Class 2: Verbs that change only in direction of movement
  DGS:    FRAGEN 'ask', GEBEN 'give', HELFEN 'help'
  ASL:    PHONE, VISIT, SHOW
  NS:     CHUUI-SURU 'advise', KIRAI 'dislike', OKURU 'send by post'
  Auslan: ANSWER, QUESTION, HIRE

Class 3: Verbs that change both in orientation and direction of movement
  DGS:    ANTWORTEN 'answer', IGNORIEREN 'ignore', EINLADEN 'invite'
  ASL:    JOIN, SEND, BLAME
  NS:     IU 'tell', EIKYOO-SURU 'influence', HIHAN-SURU 'criticize'
  Auslan: CHOOSE, FAX, PAY

Class 4: Verbs that change in orientation, direction of movement, and the relative positions of the two hands with respect to the body
  DGS:    (signs in this category are usually used with PAM)
  ASL:    MEET, INFLUENCE, PICK-ON
  NS:     KOROSU 'kill', MANE-SURU 'imitate', YOBU 'call'
  Auslan: ATTACK, DECEIVE, SEND-MAIL

Class 5: Verbs that change in orientation and the relative positions of the two hands with respect to the body
  DGS:    (signs in this category are usually used with PAM)
  ASL:    CRITICIZE, TRAIN, DECEIVE
  NS:     TASUKERU 'help', HAGEMASU 'encourage', DAMASU 'deceive'
  Auslan: BLAME, FLATTER, FLIRT

Fauconnier’s (1985, 1997) model, Liddell (2000b:345) analyzes one exam-


ple of LOOK-AT as follows. There are three mental spaces: the “cartoon
space” where the interaction between the seated “Garfield” and his owner takes
place; a Real space containing mental representations of oneself and other en-
tities in the immediate physical environment; and a grounded blend, which
blends elements of the two spaces. In this blended space, the “owner” and

“Garfield” are mapped respectively from the “owner” and “Garfield” in the
cartoon space. From Real space, the “signer” is mapped onto “Garfield” in the
blended space.
Using entities in mental space removes the need to define a “locus” mor-
phologically or phonologically. It also follows from the account that verbs are
directed according to the height of the referent(s) rather than to a dot-like point
in space.

14.4 On the linguistic nature of verb agreement in signed languages


Comrie (1981:230), following Friedman (1976) and many others in the sign
linguistics field, says that the two modalities are different precisely because
signed languages use an “indefinite” number of points in the space in front
of the signer.9 We call this the “infinity view.” Liddell (2000a) also says that
the two modalities are different but for another reason: agreement verbs (or
“indicating verbs” in his terms) in signed languages indicate entities in mental
spaces. Before we can decide how to characterize the differences between the
two modalities, we discuss in greater detail:
• the infinity issue; and
• the representation of linguistic information.

14.4.1 Infinity issue = listability issue


There has been little explicit discussion of the morphemic status of the locus or
of the phonological implementation of loci. We explore both problems in light
of the infinity issue. To explore the infinity issue more, we distinguish between
“unboundedness” and “unlistability.”
We first illustrate with mathematical examples. The set of nonnegative integers
{0, 1, 2, ...} is unbounded because there is no endpoint (although it has a starting
point at zero), but it is listable because it is always possible to predict
what the "next" number is. In contrast, the set of rational numbers between
zero and one is bounded, due to the boundaries at zero and one, but it is
unlistable: it is not possible to find, for example, the number that appears
"next" to zero, because there will always be another number closer
to zero.
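The contrast can be made concrete with a minimal sketch (in Python, purely illustrative): the successor of a nonnegative integer is always computable, whereas halving any candidate "next" rational yields another rational still closer to zero.

    # Illustrative sketch, not from the chapter.
    from fractions import Fraction

    def next_integer(n: int) -> int:
        # The nonnegative integers are listable: every element has a
        # predictable successor.
        return n + 1

    def closer_to_zero(x: Fraction) -> Fraction:
        # The rationals in (0, 1) are dense: halving any candidate yields
        # another rational, so nothing counts as "next to" zero.
        return x / 2

    x = Fraction(1, 2)
    for _ in range(4):
        print(x)            # 1/2, 1/4, 1/8, 1/16, ...
        x = closer_to_zero(x)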
How do these notions apply to the lexicon and to the phonetics of a language?
The lexicon is listable, which seems to be a necessary criterion. The lexicon
is also bounded, although this is by no means a necessary criterion: while a

9 See, for example, Lillo-Martin (1991), Cormier et al. (1998), Bahan (1996:84) citing Neidle
et al. (1995) and Neidle et al. (2000).

speaker/signer knows a finite number of lexical items at any moment in time,
the lexicon can be theoretically expanded with an infinite number of lexical
items over time.10
Let us turn to the set of loci in signed languages. There are two levels of
phonetic variation. The first level is concerned with form only, is analogous
to the level of phonetic variation in spoken languages, and does not pose a
problem. Following Lindblom’s (1990) H & H theory, if we establish a point
on the contralateral side to refer to John and want to be as clear as possible, we
usually try to point back as closely as possible to the original point.11 If we
want to follow articulatory ease, we may point roughly in the direction of the
area around the point. This leads to phonetic variation that is unlistable.
There is another level of variation that does pose a challenge. Barring phonetic
variation, a different locus than John’s may have a different meaning. Since there
is theoretically a one-to-one correspondence between a locus and a referent,
each locus must be listed in the lexicon, even though the form looks the same
for each meaning, whereas in spoken languages such contrasts in meaning tend
to correspond to contrasts in form.
Such loci may be listable if we follow Meier (1990) in grouping them into
two categories: first person and nonfirst person. The first person category itself
is listable: it has only one member, namely the locus near (or on) the signer, and
the exact location is subject to crosslinguistic variation. It is the nonfirst person
category that is responsible for the unlistability of the set of loci, although it
can be understood as a bounded set falling within the signing space. Further-
more, the first person and nonfirst person categories are bounded by the distinct
handshapes that are used, e.g. the index finger for the nonpossessive and the
upright flat hand for the possessive form in ASL.
Then the set of loci is bounded but not listable in the nonfirst person category.
Yet the set of loci fails to meet the one criterion for a lexicon, which is listability,
not boundedness. The infinity issue is thus one of listability.

14.4.2 The representation of linguistic information in verb agreement


To address the listability issue, Liddell (2000b) has suggested that part of “verb
agreement” depends on entities in mental spaces. The only linguistic compo-
nent, he argues, comes from the verb itself, which has a lexical entry specifying
its meaning as well as its phonological shape, such as handshape and/or ori-
entation. Below, we present three arguments that the agreement part, separate
from the lexical entry of the verb, is linguistic in nature and that reference
10 See Jackendoff (1992:51), who says that there is theoretically an infinite number of concepts
to choose from that will be expressed through overt forms. One interesting question to be
investigated is whether languages in one modality choose to lexicalize more concepts than
languages in the other modality when it comes to the spatio-temporal domain.
11 See Cormier (in progress) for related work on this point.

to entities in mental spaces is not sufficient to predict some properties of verb
agreement.

14.4.2.1 Lehmann’s (1988) criteria. Liddell (2000a) uses Lehmann’s


(1988) criteria to argue that there is no “verb agreement” in signed languages.
The criteria are shown in (1).

(1) Lehmann’s (1988:55) criteria: Constituent B agrees with constituent


A (in category C) if and only if the following three conditions hold
true:
• There is a syntactic or anaphoric relation between A and B.
• A belongs to a subcategory c of a grammatical category C, and A's
  belonging to C is independent of the presence or the nature of B.
• c is expressed on B and forms a constituent with it.

This last criterion focuses on the morphological side of agreement and does
not conclusively determine whether there is agreement as a general linguistic
process, especially when some of the morphemes are null. To argue for the
presence of verb agreement in the English sentence I ask her (as opposed to he
ask-s her), it is common to assume that there is a phonetically null morpheme
for first person singular attached to the verb ask, but this assumption is not
required by the criteria.
The criteria can also be applied in such a way that signed languages exhibit
agreement if the spatial location of a noun referent is taken to be a grammat-
ical category. For example, in the DGS sentence MUTTER IXi VATER IXj
iFRAGENj 'the mother asks the father,' FRAGEN can be said to agree with the
object IXj VATER in its spatial location if and only if IXj VATER is syntactically
related to FRAGEN (as an object); IXj VATER has a particular spatial location
(notated by j), which is independent of the nature of FRAGEN; and this spatial
location is expressed as an endpoint of the verb.
Similarly, signed languages may pass the criteria if person is taken as the rele-
vant grammatical category. Verb forms for first person are distinctive from non-
first person forms (Meier 1990). The presence of a single distinctive form, like
the English third person singular form -s, is sufficient to pass Lehmann’s criteria.
Signed languages may also pass the criteria if number is the relevant gram-
matical category. Although number marking does not change the directionality
of the verb, and Liddell (2000a) is concerned with identifying the grammatical
category that drives the change in directionality, the presence of overt number
agreement in signed languages can be used to argue for the grammatical basis
of verb agreement. For instance, plural object agreement is expressed through
the “multiple” morpheme in a number of signed languages that we have studied
(DGS, ASL, Auslan, Russian Sign Language, and NS).

Additionally, we have found in our data that NS has an optional auxiliary-
like marker for plural subject agreement. The dominant hand is in the spread
5 handshape, while the nondominant hand is in the A handshape with the thumb
extended. To express we all like you, the verb for like is accompanied by the
following form: the dominant hand is placed closer to the signer’s body, palm
facing away, and behind the nondominant hand; the two hands then push forward
simultaneously. For the corresponding form in the context you all like me, the
dominant hand, palm facing the signer, is farther away from the signer’s body
than the nondominant hand, and the two hands move toward the signer. Thus,
we see some crosslinguistic diversity with respect to the marking of number that
can be explained only if we admit a linguistic component for verb agreement
with parametric variation.
Lehmann’s criteria, nor any other approach to morphology for that mat-
ter, do not help determine whether spatial locations constitute a grammatical
category, so that they do not address whether agreement in signed languages
involves a linguistic component. Below, we provide two theory-neutral argu-
ments that do suggest verb agreement has a linguistic component in signed
languages.

14.4.2.2 The role of animacy. Comrie (1981) has demonstrated that
many spoken languages define and express animacy in overlapping ways. For
this reason, it has been difficult to establish animacy as a grammatical category,
but it nonetheless appears to play an important role in the linguistic system.
We show similarly that, in signed languages, there are four kinds of verbs that
differ in their ability to take animate and/or inanimate arguments. They also
differ as to whether they show agreement. For a list of DGS and ASL examples
for each kind of verb, see Table 14.5. We now provide four tests to distinguish
them.
The first test applies only to DGS, which has an auxiliary-like element called
Person Agreement Marker (PAM) that is inserted to show agreement if a verb
does not show agreement overtly, whether due to phonetic or pragmatic reasons
(Rathmann 2001). The test is whether PAM may appear with the verb. The other
tests apply in both DGS and ASL. The second test is whether the verb can be
inflected for the “multiple” number which involves adding a horizontal arc to
the verb root. The third test is whether the verb can cause the locus of an object
to shift, as in the ASL sign GIVE-CUP which has an animate subject that causes
the theme (a cup, in this instance) to shift its locus from one (source) place to
another (goal), as described by Padden 1990. The fourth test is whether object
agreement can be optionally omitted.
Now consider one type of verb that always requires two animate arguments,
e.g. DGS FRAGEN ‘ask’ and ASL TEASE. These verbs pass only the first two

Table 14.5 Verb types according to whether they accept (in)animate arguments

Type 1: Verbs that appear only with two animate arguments
  DGS: BEEINFLUSSEN 'influence', FRAGEN 'ask', BESCHEID-SAGEN 'tell'
  ASL: BAWL-OUT, ADVISE, TEASE

Type 2: Verbs that appear with two animate arguments (or with one animate argument and one inanimate concrete argument)
  DGS: BEOBACHTEN 'observe', FILMEN 'film', VERNACHLÄSSIGEN 'ignore'
  ASL: LOOK-AT, FILM, LEAVE

Type 3: Verbs that appear with two animate arguments (or with one animate argument and one inanimate abstract argument)
  DGS: LEHREN 'teach', UNTERSTÜTZEN 'support', VERBESSERN 'improve'
  ASL: TEACH, SUPPORT, OFFER

Type 4: Verbs that always appear with one animate argument and one inanimate argument
  DGS: KAUFEN 'buy', KOCHEN 'cook', VORBEREITEN 'prepare'
  ASL: BUY, STUDY, MAKE

tests: PAM may appear with the uninflected forms of such DGS signs, and the
verbs of this type in both languages may be modulated for the “multiple” form.
Otherwise, they cannot shift the locus of the (animate) argument nor can they
omit object agreement optionally.
Within this set is a subtype of verbs that may take two animate arguments
or that may take a concrete, inanimate argument instead of an animate one.
By “concrete” we mean the referent of the argument is something that we
can see in the real world. Examples include DGS BEOBACHTEN ‘look at’
and ASL LEAVE. If these verbs appear with two animate arguments, they
behave like the first set of verbs with respect to the four tests. However, if
they appear with an inanimate argument, they pass only the third test: they
can shift the locus of the object. In its usual sense, ‘give to someone’ moves
from the subject locus to the object locus. However, it can also mean ‘hand
an object.’ If so, the verb moves from the source locus to the goal locus,
even though the subject argument remains animate with the theta role as the
CAUSER of the event. When these verbs appear with an inanimate argument,
they cannot appear with PAM, nor be inflected for “multiple” but, importantly,
like the first class of verbs, object agreement with the inanimate argument is
obligatory.
There is another set of verbs, which may take two animate arguments, but
in some cases, they may instead take a nonconcrete inanimate argument. For
example, DGS LEHREN ‘teach’ and ASL OFFER may appear with two animate
382 Christian Rathmann and Gaurav Mathur

arguments and behave exactly like the verbs in the first class or they may
appear with a nonconcrete inanimate argument, as in ‘teach mathematics,’ ‘offer
promotion,’ and ‘support a certain philosophy.’ In these cases, the verbs pass
only the fourth test: they optionally leave out object agreement even if the
argument has been set up at a location in the space in front of the signer.
Otherwise, they cannot be used with PAM, nor with the “multiple” inflection,
nor can they shift the locus of the object.
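The outcomes of the four tests for these three verb types can be tallied in a small summary. The sketch below (Python, illustrative only) simply restates the prose, with the second and third types shown in their inanimate-argument use:

    # Illustrative sketch, not from the chapter. Diagnostics from the text:
    # (1) PAM possible (DGS only), (2) "multiple" inflection,
    # (3) shifts the object's locus, (4) object agreement omissible.

    TESTS = ("PAM", "multiple", "locus shift", "omissible object agreement")

    RESULTS = {
        "two animate arguments (FRAGEN, TEASE)":      (True,  True,  False, False),
        "with concrete inanimate argument (LOOK-AT)": (False, False, True,  False),
        "with abstract inanimate argument (TEACH)":   (False, False, False, True),
    }

    for verb_type, outcome in RESULTS.items():
        passed = [t for t, ok in zip(TESTS, outcome) if ok]
        print(f"{verb_type}: passes {', '.join(passed)}")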
These three types of verbs differ from other verbs like DGS KOCHEN ‘cook’
and ASL BUY which always take inanimate object arguments. Other verbs that
do not show agreement are those that take only one animate argument (DGS
SCHWIMMEN ‘swim’) and verbs that take a sentential complement (DGS
DENKEN ‘think’).12
There are several psych-verbs that assign the theta role of an EXPERIENCER
to an external argument, some of which have been previously assumed to be
plain (or non-agreeing) verbs, e.g. ASL LIKE and LOVE (Padden 1983; for
similar data in Israeli Sign Language, see Meir 1998). Some that select for
one argument – like DGS ERSCHRECKEN ‘shock’ or ASL SURPRISE –
do not qualify for verb agreement since they do not have the required two
arguments. On the other hand, we argue that psych verbs with two animate
arguments are agreeing verbs. First, some verbs do show agreement, e.g. ASL
HATE, ADMIRE, PITY, and LOOK-DOWN-ON. Also, in DGS such verbs
may appear with PAM: MAG ‘like’ and SAUER ‘be mad at.’13 Other psych
verbs do not show agreement for the reason that they are articulated on the
body, e.g. DGS MAG ‘like’ or ASL LOVE.
The above characterizations seem to lead to the generalization, in line with
Janis (1992), that when verb agreement is present, it is either with an animate
direct object or with an indirect object if present, which itself tends to be
animate.

(2) a. NP(subject) V NP(direct object, inanimate)
    b. NP(subject) V NP(direct object, animate)
    c. NP(subject) V NP(indirect object, animate) NP(direct object, inanimate)

It is possible that the animate direct object in (2b) shares the same structural
position as the indirect object in (2c); similarly the (inanimate) direct object in
(2a) may share the same structural position as the direct object in (2c). If this is
correct, it would be straightforward to characterize verb agreement in terms of

12 This set is known as the class of “plain verbs” (Padden 1983).


13 See Meir (2000) who analyzes similar forms in Israeli Sign Language as instances of case
marking on the object.

the structural positions of the subject and the indirect object-like position.14 This
clustering of indirect objects with animate direct objects receives independent
evidence from Mohawk, where “noun incorporation is a property of inanimate
nouns that fill the direct object role” and where “noun incorporation of animate
direct objects is limited and noun incorporation of subjects and indirect objects
is completely impossible” yet visible agreement morphemes exist for these last
three categories (Baker 1996:20). In sum, defining verb agreement as referring
to entities in mental spaces alone does not seem to be sufficient for predicting
the different types of verbs with respect to agreement in terms of the animacy
properties of their arguments.

14.4.2.3 Asymmetries in the interaction of agreeing verbs with other
grammatical elements. Another piece of evidence for the linguistic nature of
verb agreement in signed languages comes from some asymmetries in interac-
tions between grammatical elements. We discuss several examples here.
First, in languages like DGS that use PAM, there is an asymmetry in syntac-
tic structure between sentences with PAM and sentences with agreeing verbs
(Rathmann 2001). In sentences with PAM, the object, which PAM cliticizes to,
may follow or precede negation, perfective aspect, and modals. In contrast, in
sentences with agreeing verbs, the object can only appear after these elements.
An example is provided below with negation.

(3) Sentences with PAM

          top    top
    a. HANSi MARIEj [NegP NOCH^NICHT [AgrP iPAMj [VP proj MAG ]]]
       Hans   Marie        not-yet          PAM            like
       'Hans does not yet like Marie.'

          top    top
    b. HANSi MARIEj [NegP iPAMj proj NOCH^NICHT [AgrP ti [VP MAG ]]]
       Hans   Marie        PAM        not-yet               like
       'Hans does not yet like Marie.'

14 Janis’s (1992) analysis is similar in that it uses grammatical relations to predict the presence of
verb agreement, but instead of characterizing the agreement exclusively in terms of grammatical
relations, this analysis uses a hierarchy of controller features that include case and semantic
relations.

(4) Sentences with agreeing verbs

          top    top
    a. HANSi MARIEj [NegP NOCH^NICHT [VP proj iFRAGENj ]]
       Hans   Marie        not-yet          ask
       'Hans has not yet asked Marie a question.'

          top    top
    b. *HANSi MARIEj [NegP proj iFRAGENj NOCH^NICHT [VP ti ]]
       Hans    Marie        ask           not-yet
       'Hans has not yet asked Marie a question.'

In all the examples, the noun phrases HANS and MARIE are topicalized with a
special facial expression marker. While the object may be an overt noun phrase,
which would be MARIE in the above cases, it is more common to have a null
pronoun, indicated by a small pro. Then the difference between (3) and (4) lies
in the fact that PAM may be raised before the negation, but the verb FRAGEN
cannot, even though the verb bears agreement just like PAM. These facts suggest
that PAM licenses an additional layer of structure that allows the object to shift
from its base-generated position and raise above negation. If PAM functions
to show agreement when a verb cannot show it and if PAM licenses object
raising, this constitutes strong syntactic evidence for the linguistic component
of agreement in signed languages.
Furthermore, the use of PAM is available not only in DGS but also in other
signed languages such as NS (Fischer 1996), Taiwanese Sign Language (Smith
1990) and Sign Language of the Netherlands (Bos 1994). However, not all
signed languages have an element like PAM, for example ASL, British Sign
Language, and Russian Sign Language. Thus, there may be parametric variation
across signed languages with respect to whether they can use PAM or not, which
may explain the differing basic word orders (e.g. SVO vs. SOV, i.e. subject–
verb–object vs. subject–object–verb) (Rathmann 2001). This syntactic variation
across signed languages constitutes another piece of evidence for the linguistic
aspect of verb agreement in signed languages.
Another example comes from binding principles. Lillo-Martin (1991:62–63)
argues that when there is verb agreement, a small pro may appear in the object
position, as in *STEVEi SEEi pro ‘Steve saw himself.’ Like overt pronouns,
small pro is constrained by a binding principle that says roughly that a pronoun
cannot be bound by an antecedent within the same clause (Chomsky 1981:188).
Thus, the sentence is ruled out. We see that verb agreement in signed languages
interacts with pronouns in ways similar to spoken languages at the level of
syntax.
In line with Aronoff et al. (2000), Meier (2002), and Lillo-Martin (this volume),
these examples make two points:
• There are syntactic constraints that reveal a linguistic component to verb
  agreement.
• These constraints show the need for a syntactic module, which will be im-
  portant later in the discussion of the architecture of grammar.

14.5 Reconciling the linguistic nature of verb agreement with the
listability issue
So far, we have shown that various properties of verb agreement can be predicted
only under the linguistic system. Given that reference to mental entities does not
provide the full story, the next question is how to reconcile the linguistic nature
of verb agreement with the listability issue. Some sign language researchers
have accepted unlistability as a modality difference, and let the manifestation
of the phi-features equal the locus, as has been suggested by Cormier et al.
(1998:227) and Neidle et al. (2000:31).
Even if we were to conclude that unlistability is unique to signed languages
and distinguishes them from spoken languages, we still have to go further in
addressing this listability issue. Recall that under the R-locus view, the locus
comes from discourse. While this view allows the syntactic component to re-
main autonomous from discourse structure and solves the listability issue by
moving it into the realm of discourse structure, there is the possibility that the
listability issue is brought back into the linguistic system by allowing the infor-
mation from discourse structure to enter the phonological structure and provide
phonetic content for the indices, since it must be ensured that distinct loci match
up with distinct referential indices.15
We sketch another scenario: all linguistic elements are handled first and
then bundled off together to the articulatory-perceptual interfaces, where they
are matched against some spatio-temporal conceptual structure that repre-
sents spatial relations among the loci. Like the above approach, there is no
need to provide a phonological specification for a locus. In addition, at the
articulatory-perceptual interfaces, linguistic elements, having followed linguis-
tic constraints, are matched with the spatio-temporal conceptual structure.
Recall from the previous section that agreement is generally with the indirect
object, otherwise with an animate direct object (and optionally with the subject).
These generally correspond respectively to the theta roles of the recipient or
animate patient for the object and to the theta role of agent for the subject.
Thus, argument structure can predict when verb agreement will be required
and, moreover, determine that two distinct loci are needed from the conceptual
structure. If the indices are distinct, the loci are automatically distinct and face
each other. This is one way to reconcile the linguistic nature of verb agreement
with the listability issue in signed languages.

15 Fiengo and May (1994:1) define the function of an index as affording "a definition of syntactic
identity: elements are the 'same' only if they bear occurrences of the same index, 'different' if
they bear occurrences of different indices."
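The generalization can be restated as a small decision procedure. The sketch below (Python; the relation names and the animacy flag are our own illustrative assumptions, not part of the theory's formal machinery) returns the index the verb must agree with, from which the requirement of distinct loci follows:

    # Illustrative sketch, not from the chapter.
    def agreement_target(args):
        """args maps a grammatical relation to (referential_index, animate).
        Returns the index the verb agrees with, or None for plain verbs."""
        if "indirect_object" in args:
            return args["indirect_object"][0]   # (2c): indirect object wins
        do = args.get("direct_object")
        if do is not None and do[1]:
            return do[0]                        # (2b): animate direct object
        return None                             # (2a): no object agreement

    # 'The mother (i) asks the father (j)': agreement with j is obligatory,
    # and distinct indices i, j demand distinct loci at the interfaces.
    clause = {"subject": ("i", True), "direct_object": ("j", True)}
    print(agreement_target(clause))             # j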
Now let us return to the possibility raised by the infinity view: are signed
languages still considered to be a separate group from spoken languages? In
spoken languages, the elements that determine verb agreement are argument
structure, the indices (or more precisely, the phi-features) of the noun phrases,
as well as the visibility condition that these noun phrases may be assigned theta
roles only if they are made “visible” either through the assignment of abstract
Case (Chomsky 1981:Chapter 6) or through coindexation with a morpheme
in the verb via agreement or movement (Morphological Visibility Condition;
Baker 1996:17). This is exactly the same as the scenario sketched for signed
languages, if we take the visibility condition to mean that in the case of signed
languages, a noun phrase is “visible” for theta role assignment through agree-
ment in the sense of Baker (1996).
It seems then that the visibility condition on argument structure is a candidate
for a formal universal applying to both spoken and signed languages. What
argument structure decides is not which argument the verb agrees with, since
that is subject to variation (recall that agreement tends to be with the subject
in spoken languages and with the object in signed languages). Rather, it is
the fact that argument structure predicts verb agreement that is universal
crossmodally.
While signed and spoken languages may be grouped together on the basis
of these formal universals, the infinity view is partly correct that there must be
some differences between the two modalities due to the listability issue. We
suggest that the differences lie at the articulatory-perceptual interfaces and we
flesh out the above scenario with an articulated architecture of grammar as it
pertains to verb agreement.

14.6 Modality differences in the application of the architecture of
grammar to verb agreement: A proposal

14.6.1 Adapting an architecture of grammar


To understand the modality differences regarding the listability issue, we need
to understand how the architecture of grammar interacts with the so-called
gestural space. We have chosen to adapt Jackendoff’s (1987; 1992; 1997) model,
because it is particularly suited to capturing the similar and different roles of the
gestural space in spoken and signed languages. In this model, there are several
modules, which have their own “primitives and principles of combination and
[their] own organization into subcomponents” (Jackendoff 1992:31). There are

Figure 14.3 An adaptation of Jackendoff's (1992) model: syntactic,
phonological, and conceptual structures linked through the articulatory-
perceptual interfaces (visual input, motor output), with the gestural space
as medium

also “correspondence rules” linking one module with another. We go over each
module in clockwise fashion, starting from the top of Figure 14.3.
In syntax, elements are taken from the numeration (a set of lexical items
chosen for a particular derivation, in the sense of Chomsky 1995), merged and
moved. Here syntactic constraints apply, and the noun phrases are themselves
linked to conceptualizations of referents.
Conceptual structure maps onto “other forms of mental representation that
encode, for instance, the output of the visual faculty and the input to the formu-
lation of action” (Jackendoff 1992:32).16 This is the domain of mental represen-
tations that may be subject to further “inference rules,” which include not just
logical inference but also rules of “invited inference, pragmatics and heuristics.”
We focus on one part of conceptual structure, the spatio-temporal conceptual
structure. Since this module is concerned with relations between entities, we
suggest it is this part that interfaces with the gestural space.
Next, phonological structure can be broken into subcomponents, such as seg-
mental phonology, intonation contour, and metrical grid. This is the component
where phonological constraints apply both within and across syllables, defined
canonically in terms of consonants and vowels.
The above architecture applies to both spoken and signed languages. We
have made two adaptations to the architecture. The first difference is in the A–P
systems. In Jackendoff’s original model, the A–P systems are obviously the
auditory input system and vocal motor output. In this adaptation, the systems
for signed languages are the visual input and motor output systems for the hands
and nonmanuals.
As outlined in a number of current sign language phonology theories (e.g.
Sandler 1989; van der Hulst 1993; Brentari 1998), phonological structure in
signed languages is concerned with defining the features of a sign such as hand-
shape, orientation, location, movement, and nonmanuals like facial expressions,
16 Jackendoff (1992:33) mentions that this is one point of similarity with the theoretical framework
of cognitive grammar by Fauconnier 1985; 1997; Lakoff 1987; Langacker 1987.

eye gaze, and head tilt. Phonological structure also encodes the constraints on
their combinations. While phonological structure may be modality-specific in
its content, it is a self-governing system that interacts in parallel ways with other
modules for both signed and spoken languages. We clarify the architecture by
inserting “A–P interfaces” between phonological structure and the input and
output systems.
We turn to the second difference in the above adaptation: there is a “ges-
tural space as medium” linking the conceptual structure with the articulatory–
perceptual interfaces (A–P interfaces).17 Here we have in mind representational
gestures. Such gestures include pointing out things (deixis) and indicating the
shape and/or size of an object as well as showing spatial relations. We do not
refer to other types of gesture such as pantomime or emblems like the F hand-
shape for ‘good’ (Kendon 2000) which uses an open hand with index finger
contacting the thumb. We assume that these gestures do not use the gestural
space; rather the emblems, for example, would come from a list of conven-
tionalized gestures which may vary crossculturally and which appear in both
spoken and signed languages.
The gestural space makes visible the relations encoded by the spatio-temporal
conceptual structure.18 The gestural space is a level of representation where a
given referent may be visualized as being on one side of that space. The spatio-
temporal conceptual structure is different in that it provides the referents and
their spatial relations, if any, but not necessarily where they are represented
in the space in front of the signer. Moreover, the form at the A–P interface
can be different from what is provided by the gestural space. For example,
the agreement rules in signed languages permit optional subject agreement
omission (Padden 1983). The referents for both subject and object may be
visualized within the gestural space in particular locations, but the location of
the subject does not have to be used at the A–P interface.
Figure 14.4 summarizes the role of each module in the architecture.
We now provide one example each from spoken and signed languages and
elaborate on these roles.

Figure 14.4 Making the conceptualization of referents visible: referential
indices (i, j) from syntactic structure are matched at the articulatory-
perceptual interfaces with the conceptualizations of referents, which the
gestural space makes visible

14.6.2 Modality differences in the use of the gestural space


14.6.2.1 Spoken languages. The gestural space as a medium is avail-
able to both modalities but is used differently with respect to verb agreement.
Several perspectives on gesture suggest that there is a fundamental distinction
between speech and gesture and that the role of gesture is to aid speech, but none
of them suggests that the role of gesture is directly correlated with verb agreement.

17 Johnston (1996) has proposed a similar idea: that the modality differences are due to the "medium"
of the gestural space.
18 See Aronoff et al. (2000) for a similar proposal that some morphological processes in signed
languages are "iconically based" because they can show "spatial cognitive categories and rela-
tions."
To understand how gesture is otherwise used in spoken languages, consider
McNeill’s (2000:144) Growth Point Hypothesis, which suggests that there is
“an analytic unit combining imagery and linguistic categorial content.” While
gesture and speech are considered separate, they are combined together under a
“growth point” so that they remain tightly synchronized. Another perspective on
gesture comes from Kita’s (2000:163) Information Packaging Hypothesis: “the
production of a representational gesture helps speakers organize rich spatio-
temporal information into packages suitable for speaking.” Thus, the role of
the gestural space can be seen as an addition to the architecture of grammar for
spoken languages.
This role of the gestural space as an addition is one reason that the gestural
space is placed as interacting with the A–P interfaces, not with phonological
structure. It is not desirable to admit any phonological representation of gesture
during speech, since they access different motor systems. Gesture accesses the
motor system for the hands and arms, while speech accesses the motor system
for the vocal cords. If the gestural space interacts directly with the hand and
arm motor system at the A–P interface, there will be no conflict with the use
of speech in phonological structure, which then interacts with the vocal motor
system at the A–P interface.

Let us see how this system works for a spoken word that is accompanied
by a gesture, such as the Spanish word for ‘go down’ bajar accompanied by a
spinning gesture to express the manner of rolling. This example is useful be-
cause Duncan (2001) has suggested that the manner gesture is not compensatory
but figures in “thinking-for-speaking” about motion events in verb-framed lan-
guages like Spanish, which express less path and manner information than
satellite-framed languages like English (Talmy 1985). In the syntactic struc-
ture, the verb bajar has an argument structure where only one theta role of the
THEME is assigned. The noun phrase in the example is cat, which receives
the theta role of the THEME as well as a referential index i. The conceptual
structure envisions a cat rolling down a pipe. At the same time, the phonological
structure provides the phonetic form of the subject and the correctly inflected
verb as determined in the syntax: el gato baja ‘the cat goes down.’ Optionally,
a spinning gesture is added from the gestural space, which makes visible the
manner of rolling that is present in the conceptual structure.
The addition of the gestural space depends on how much linguistic informa-
tion is provided. For example, if all the information from conceptual structure is
encoded in the linguistic form, there may be no need to add the use of gestural
space. If some information, such as the manner of the movement present in
the conceptual structure, is not encoded linguistically, gesture may be added to
make the information clear to the listener, as shown above. Gesture may also
help the speaker organize spatio-temporal information, for example, when giv-
ing directions over the phone (see Kita’s Information Packaging Hypothesis;
Kita 2000). However, gesture still does not directly aid in the expression of verb
agreement in spoken languages.

14.6.2.2 Signed languages. In signed languages, the use of gestural
space is significant in some contexts, e.g. verb agreement, but not in other
contexts, and is constrained by specific linguistic elements. Elements from the
conceptual structure are made visible through the gestural space and must go
through a matching at the A–P interfaces with the linguistic elements from
syntactic and phonological structures. We discuss in detail the different ways a
conceptualization may be made visible, e.g. through the establishment and the
use of loci because, unlike in spoken languages, the matching between loci and
linguistic elements is crucial for verb agreement in signed languages.
There are several strategies for the gestural space to make the conceptualiza-
tions of a referent visible:
• follow the location of a physically present referent;
• envision a scenario with imaginary referents; or
• assign points through nominal establishment.
For convenience, we refer to all these forms as “loci.” These different forms
correspond to Liddell’s (1995) real, surrogate, and token space, respectively.

However, the difference is that under a strong interpretation of Liddell's model,
there would be no "gestural space" or "locus" at the A–P interfaces to speak
of, and it would be sufficient for the conceptual structure alone to determine the
form of verb agreement. Since it has been shown that this last point is not the
case, the model presented here instead stresses that it is the linguistic system that
must license the use of loci that are provided by the spatio-temporal conceptual
structure via the gestural space.
Various other complex discourse factors can determine where to establish
a locus. See Aubry (2000), for example, who describes how a higher point is
selected when referring to one’s boss, or Emmorey (2000), who describes how
loci may be used from “shared space.” In this sense, the conceptual structure
can be understood as operating within discourse structure, which can use other
markers such as eye gaze, head tilt, and role shift (Bahan 1996).
The role of the phonological structure in verb agreement can be seen in the
different manifestations of agreement that depend on the phonetic form of the
verb (see Fischer and Gough 1978; Mathur 2000; for an analysis of Auslan, see
Johnston 1991). See again Table 14.4 above for a list of examples from ASL,
DGS, Auslan, and NS. The phonological structure must know the phonological
properties of the verb to yield the correct agreement form.
There are two reasons for treating the gestural space separately from the
phonetic module:
• The phonetic module is not sophisticated enough to handle matching with
  concepts/referents, so the referents must be mediated through the gestural
  space; and
• There is no substantive content behind the "locus" to articulate in the phonetic
  module: the locus can have different sizes, ranging from a dot-like point to a
  token to a full-scale representation within the gestural space.
Thus, the gestural space is not equal to the phonetic realization of loci but
rather is equal to the interface between the linguistic system and the conceptual
structure.
If the gestural space provides loci as licensed by the linguistic system, there is
no need to represent the loci phonologically. There is at any rate no substantive
content behind the loci to list in a lexicon and no phonological rule to determine
the choice of a particular locus. The phonology is interested in where the hands
move or are oriented, not in how or why the locus is set up. The gestural space
handles the “how” part, while the conceptual structure handles the “why” part.
We turn now to an example from DGS that illustrates the significance of
matching between gestural space and verb agreement. The verb FRAGEN ‘ask’
and two noun phrases, either null or overt, enter the syntax through numeration.
The verb assigns the theta roles AGENT and PATIENT to the subject and
object noun phrases respectively. Through the agreement rule, the verb's "back"
corresponds to the subject and its "front" to the object.

In the conceptual structure, there is a conceptualization of the referents, e.g.
the mother and the father. The conceptualization also includes the event of
the mother’s asking the father. These referents are then made “visible” in the
gestural space through assignment to particular locations: the mother on the
contralateral side and the father on the ipsilateral side.19
In the phonological structure, we see the phonetic properties of the verb
FRAGEN such as the lax F handshape with the palm facing the signer and a
straight movement. At the A–P systems, a form of FRAGEN is freely chosen. If
the form proceeds from the contralateral to the ipsilateral side, it passes because
the "back" and the "front" of the sign (corresponding to the subject and object
respectively) are correctly matched with the locations of the mother and the
father respectively.
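As a toy illustration of this matching step (the dictionary encoding of loci and of the sign's "back" and "front" is an assumption of the sketch, not a claim about the phonology):

    # Illustrative sketch, not from the chapter. Loci that the gestural
    # space supplies for the referents set up in discourse.
    gestural_space = {"i": "contralateral",   # the mother (subject)
                      "j": "ipsilateral"}     # the father (object)

    def passes(form, subj_index, obj_index):
        """A freely chosen form passes at the A-P interfaces iff its 'back'
        sits at the subject's locus and its 'front' at the object's locus."""
        return (form["back"] == gestural_space[subj_index]
                and form["front"] == gestural_space[obj_index])

    candidate = {"back": "contralateral", "front": "ipsilateral"}
    print(passes(candidate, "i", "j"))   # True: 'the mother asks the father'
    print(passes(candidate, "j", "i"))   # False: wrong directionality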
Not all signs use the gestural space, e.g. DGS MUTTER ‘mother’ and VATER
‘father’ are articulated on the face. There are also signs that are articulated in
the space in front of the signer but still do not use gestural space, such as DGS
SCHWIMMEN ‘swim.’ For these signs, it is sufficient to specify in the lexicon
where they are articulated.

14.6.3 Phonetic gaps in verb agreement


So far, we have used the listability issue to motivate the need for separating
the linguistic component from the conceptual structure and gestural space, in
agreement with Liddell. The next question is, where in the architecture do
we place the gestural space? From the discussion of spoken languages, we
have seen that the gestural space interacts directly with the hand and arm motor
systems at the A–P interface. This avoids a potential clash with the phonological
representation of speech. In the case of signed languages, if the gestural space
interfaces with the phonological component, the listability problem reappears.
This provides a second reason to place the gestural space as interfacing with
the A–P systems instead of the phonological component. We provide one piece
of evidence for this choice from “phonetic gaps.”
In Mathur and Rathmann (2001), we discuss a couple of phonetic constraints
affecting verb agreement that lie at the A–P interfaces. If the matching with
gestural space occurs at the A–P interfaces rather than in phonological structure,
one prediction is that the matching will be subject to the phonetic constraints at
the A–P interfaces. A form may violate one of the phonetic constraints, leading
to a “crash.” We have called these crashes “phonetic gaps.”
For example, according to agreement rules, the you give us form of the ASL
sign GIVE requires that the arm rotate from the elbow in such a way that the
fingertips trace an arc against the signer's chest. However, this form does not
occur.20 This is due to a phonetic constraint that bans the elbow from rotating
inward (i.e. toward the body) while keeping the palm up and raising the shoulder
at the same time.

19 See Taub (2001), who argues that the direction of the path in verb agreement is predictable from
conceptual structure and other considerations. For work in a similar vein, we refer to Wilcox
(2000), who is interested in the metaphorical motivations behind verbs like GIVE.
There are many other phonetic constraints that may interact with the agree-
ment forms in signed languages to yield those phonetic gaps. In such cases,
there are several alternatives to the expected (but phonetically barred) agree-
ment forms, as described by Mathur and Rathmann (2001): distalization, con-
traction, arc deletion, and the use of auxiliary-like elements (PAM) or overt
pronouns, among several others.
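Schematically, the matching at the A–P interfaces can be pictured as a constraint filter with fallback repairs; the sketch below (Python; the feature names are our own shorthand for the constraint just described) only restates the prose:

    # Illustrative sketch, not from the chapter. One attested constraint
    # (the ASL you-give-us gap): the elbow may not rotate inward while the
    # palm stays up and the shoulder is raised.
    def violates_constraint(form):
        return (form.get("elbow") == "rotate-inward"
                and form.get("palm") == "up"
                and form.get("shoulder") == "raised")

    REPAIRS = ("distalization", "contraction", "arc deletion",
               "auxiliary-like element (PAM)", "overt pronoun")

    def realize(form):
        if violates_constraint(form):          # "crash": a phonetic gap
            return "gap -> repair, e.g. " + REPAIRS[0]
        return "form surfaces as generated"

    you_give_us = {"elbow": "rotate-inward", "palm": "up", "shoulder": "raised"}
    print(realize(you_give_us))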
On the other hand, such gaps do not appear in the verb agreement paradigm
of spoken languages, since they do not have to match what is provided by the
gestural space. Rather, as shown in Section 14.2, they vary as to whether they
show null, weak, or strong agreement. Agreement morphemes may also have
variants (allomorphs) that occur in certain contexts, and there will naturally be
phonetic constraints on the combination of the verb stem with the agreement
morpheme(s). However, these are not comparable to the “phonetic gaps” we
see in signed languages.

14.7 Modality effects: Morphological processes for verb agreement


We suggest that one effect of the different uses of the gestural space is that
the morphological processes for expressing verb agreement are different in the
two modalities. Specifically, affixation is the most common means in spoken
languages, while readjustment is the preferred means in signed languages.21

21 There is an alternative approach under which no modality effect is assumed in the morphological
process: the index-copying analysis by Meir (1998) and Aronoff et al. (2000), which is based
on Israeli Sign Language and ASL. The advantages of this analysis vs. those of the analysis
presented in this chapter deserve thorough discussion in the future.

14.7.1 Spoken languages


Spoken languages use morphological processes without regard to gestural
space, so that there are a variety of processes. For example, as we saw in
Section 14.2, there may be null agreement where nothing is added, weak agree-
ment where phonetically null morphemes may be added along with overt mor-
phemes for a few person–number feature combinations, and strong agreement
where there is an overt affix for each combination of phi-features. The most
common process for verb agreement remains one of affixation, as schematized
in Figure 14.5. Affixation can be described in different ways, but regardless
of the theoretical framework adopted, the point remains that some content is
added to the verb in order to express agreement. This property may constitute a
substantive universal for spoken languages. Otherwise, spoken languages may
vary in how the content is sequenced with respect to the verb: prefixation, suf-
fixation, circumfixation or, in rare cases, infixation; or even a combination of
these. The first three options for affixation are illustrated in Figure 14.5.

base + agreement  →  base-af  or  af-base  or  af-base-af  or  …

Figure 14.5 Affixation in spoken languages

14.7.2 Signed languages


The expression of verb agreement in signed languages can be described as
a readjustment process that changes the shape of the base, on analogy with
English stem-internal changes (Mathur 2000), as schematized in Figure 14.6.
The base consists of information about the handshape, orientation, location (if
lexically specified for height), and movement if it is different from a straight
movement away from the signer. The readjustment rotates the sign in such a
way that the base’s “back” matches the subject index while the sign’s “front”
matches the object index (Mathur 2000). The “back” and the “front” of the
sign are necessarily opposite each other on a line and cannot be split into two
independent affixes, so that the readjustment differs from an affixal approach
under the simultaneity/sequentiality view. The different options in Figure 14.6
reflect the possible outcomes of this readjustment process for a given verb in a
signed language.

[Figure 14.6 schematic: base + agreement, realized by reorienting the base
itself rather than by adding an affix.]

Figure 14.6 Readjustment in signed languages

This readjustment is a morphological process since it marks a change in
meaning, e.g. 1st.sgFRAGENnon1st.sg vs. non1st.sgFRAGEN1st.sg. It unifies differ-
ent processes, e.g. changing orientation only (as in the ASL sign TEASE) or
changing the direction of movement only (as in the ASL sign GIVE) or both,
under a single mechanism, in parallel with spoken languages that also use only
one mechanism for verb agreement.22 The readjustment then makes it possible
to represent any change in direction of movement at the same time as any change
in orientation. This process is not found in spoken languages for the expression
of verb agreement, and it is suggested that this is a true modality effect since
the (syntactic) agreement rule generates a form that must be matched with the
gestural space at the A–P interfaces. Evidence comes from the phonetic gaps
described in the previous section, which reveal the mismatches (and therefore
the interaction) between the A–P interface and the gestural space.

22 The same holds for backwards verbs as well. Meir's (1998) analysis can predict the direction of
the movement in backwards verbs; we further assume that once the direction of the movement
is predicted according to the insights of Meir (1998), this direction is lexicalized as part of the
lexical entry of the backwards verb.

Moreover, the matching must obey one constraint specific to agreement: the
loci are no more than two in number (and, thus, opposite each other on the line
formed by the loci). Verb agreement must also obey syntactic constraints based
on the argument structure of the verb: it must agree with animate and inanimate
concrete arguments (see Section 14.4.2.2).
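Stated a bit more formally (this is our own schematic gloss of the alignment
idea in Mathur 2000, not a notation used in the chapter): let S and O be the two
loci in gestural space matched with the subject and object indices. Readjustment
rotates the lexically specified base so that its back-front axis lies along the
line through the loci:
\[
\hat{u} \;=\; \frac{O - S}{\lVert O - S \rVert}, \qquad
\text{back of the sign} \mapsto S, \qquad
\text{front of the sign} \mapsto O.
\]
Because "back" and "front" are by definition opposite ends of a single axis, at
most these two loci can be matched, which restates the constraint above; on this
sketch, a phonetic gap arises when no articulatorily possible rotation of the
arm and hand realizes the required alignment.
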

14.7.3 Implications
14.7.3.1 Uniformity in form. Spoken languages may be relatively
uniform in that they use the process of affixation for verb agreement, even
though the content varies from one language to another, not just in the verb stem
but also in the affix. On the other hand, for the expression of other functional
categories like aspect and tense, spoken languages may employ a whole variety
of other morphological processes such as:
• reduplication (e.g. Classical Greek grapho 'I write' vs. gegrapha 'I have
written [perfective aspect]'; see also various languages discussed by Marantz
1982 and Broselow and McCarthy 1983);
• stem-internal changes (e.g. German er rennt vs. er rannte and English he runs
vs. he ran);
• templatic morphology (e.g. Arabic katab 'perfective active' vs. aktub 'im-
perfective active’ for write; McCarthy 1982).
It seems, then, that the relative uniformity of spoken languages with respect to
verb agreement does not extend to the expression of aspect and tense.
On the other hand, in signed languages, uniformity extends beyond verb
agreement to temporal aspect and tense. For example, iterative aspect is usually
expressed through reduplication and continuative aspect through lengthening
and reduplication (for ASL examples, see Klima and Bellugi 1979; for British
Sign Language [BSL] examples, see Sutton-Spence and Woll 1999). Moreover,
tense is usually not marked through a manual modulation of the verb. It is inter-
esting that the uniformity of verb agreement is co-present with the uniformity
of temporal aspect and tense in signed languages.

14.7.3.2 Crosscategorial predictions. The architecture outlined so
far also makes specific predictions about pronouns and classifiers, since they
use the gestural space in signed languages. Like verb agreement, pronouns and
classifiers retain their linguistic components, and it is their linguistic compo-
nents that ultimately decide how they use gestural/conceptual structure.
In spoken languages, pronouns do not depend on the gestural space so that
there is great crosslinguistic variation in the forms and in the categories they
express, such as person, number, and inclusive/exclusive (see Cormier 1998;
McBurney, this volume). In signed languages, the linguistic component of pro-
nouns varies crosslinguistically just as it does in spoken languages. In ASL,
DGS, and BSL the nonpossessive pronoun uses an index finger that points to
the middle of the chest; in NS, the pronoun points to the nose. Also, the pos-
sessive form takes the B hand in ASL, but takes the lax H handshape (with a
twisting movement and a ‘psh’ gestural mouthing) in the dialect of DGS used
in northern Germany, and the A handshape in BSL; in NS the possessive form
seems to be the same as the nonpossessive form. Moreover, the syntax decides
whether to use the possessive or nonpossessive form (MacLaughlin 1997). For
example, if there are two noun phrases next to one another, this context calls for
a possessive form for one of the noun phrases. As mentioned, pronouns must
also obey syntactic principles such as binding principles (Lillo-Martin 1991).
In addition, within a signed language like ASL there are various pronominal
forms that do not require the use of the gestural space, for example, the demon-
strative THERE. Other pronominals have special restrictions on how space may
be used. For instance, there are two forms of THREE. One form has the fin-
gertips facing up and the palm facing the signer. This form can appear with
inanimate or animate noun phrases and can have an indefinite, distributive or
collective reading. The other form has the palm facing up with a slight rotating
movement (Baker-Shenk and Cokely 1980:370–371). This form has only a col-
lective reading and is restricted to animate arguments.23 Another form glossed
as AREA is made with the flat hand, palm down, and a slight rotating move-
ment. This form can be used to refer to a group of people, not a single person,
and this sign may be articulated at different heights as discourse markers of
one's perceived status with respect to the referent. Specific pronouns then use
those spatial locations that match their linguistic restrictions.

23 Cormier (1998) has discussed this form with regard to inclusive/exclusive pronouns.

The same argument can be made with classifiers. Spoken languages vary
as to whether they have classifier constructions. If they have classifiers, they
mark different kinds of information through different morphological processes
such as affixation or incorporation (Allan 1977; Baker 1996; Grinevald 1999).
In contrast, there is greater uniformity among the classifier constructions in
signed languages. Yet they retain a linguistic component that is matched against
the gestural space as seen through various linguistic constraints that they are
subject to.
We understand a classifier construction as a combination of two parts: a
noun classifier and a MOV (movement) element (compare Supalla 1986; Schick
1990). The noun classifiers are specified for different handshapes, which vary
crosslinguistically. For example, ASL uses a 3-hand for all vehicles; DGS uses
the B-mid handshape for two-wheeled vehicles and the B-down handshape for
vehicles with four or more wheels; while Catalan Sign Language uses many
more handshapes to distinguish different types of vehicle (Fourestier 1998).24
Otherwise, the content of MOV is freely generated and matched against the
path and manner of motion encoded by spatio-temporal cognition.

24 The palm of the B-mid hand faces the contralateral side, while that of the B-down hand faces
downward.

There are also morphemic constraints on the possible combinations of figure
and ground semantic classifiers which vary crosslinguistically. In DGS and ASL
the bent-V handshape is used as a classifier for a seated person or an animal.
In DGS this classifier cannot be combined with the vehicle classifiers (B-mid
or B-down handshape) to show a person sitting on a bike or getting into a car.
In ASL the bent-V classifier may be combined with the ASL vehicle classifier
(the 3 handshape) but only in a specific way: the bent-V classifier must contact
the palm of the vehicle classifier.
There are also syntactic constraints on classifier predicates. Semantic classi-
fiers assign the sole thematic role of THEME, while handling classifiers assign
the thematic roles of THEME and AGENT and optionally SOURCE and GOAL.
Semantic classifiers cannot be combined with an AGENT argument whereas
handling classifiers require this argument.

14.7.4 Recreolization
There is another possible factor behind the uniformity of verb agreement in all
signed languages: recreolization (Fischer 1978; Gee and Kegl 1982; Gee and
Goodhart 1988). In our view the relevant factor is the short length of the cycle of
recreolization, namely one or two generations, which may slow the development
of verb agreement and restrict it to the use of the gestural space. The majority
of the Deaf community, who are born to hearing parents, lack native input
and take the process of creolization back to square one. Since they constantly
outnumber the multi-generation Deaf signers who could otherwise advance
the process of creolization (e.g. the case of Simon reported by Singleton and
Newport 1994; Newport 2000), the cycle of recreolization is perpetuated.

14.8 Summary
Having shown various differences and similarities between the two modalities,
we return to the discussion of formal and substantive universals. The two modal-
ities share the same architecture of grammar with respect to verb agreement,
with the exception that gestural space does not have the same function in spoken
languages and in signed languages. Other processes within the architecture are
the same crossmodally and would constitute formal universals:
• the theta criterion, which requires every theta role to be discharged from the
verb and every noun phrase to receive one; and
• the visibility condition that every noun phrase be made visible, e.g. through
case or agreement.
At the level of substantive universals, there seem to be modality-specific
universals. Within the spoken modality, languages vary as to whether they ex-
press agreement. When a language expresses agreement, it is usually with the
person, number, and/or gender features of the subject and is expressed through
affixation. Otherwise, the content of the affixation varies greatly, which adds
another layer of diversity. For signed languages, there seem to be a greater
number of substantive universals. All signed languages seem uniformly to ex-
press agreement, and this agreement is uniformly manifested in the same way,
through readjustment. Moreover, this agreement is restricted to animate (and
inanimate concrete) arguments. Finally, it is object agreement that is unmarked,
and number is one phi-feature that may be expressed through overt and separate
morphology.
While signed languages may be uniform with respect to the form of agree-
ment, they may vary in terms of whether an element like PAM is available
in that language and licenses an additional layer of structure in the sentence.
However, even when PAM is available in a signed language, PAM still under-
goes the same process of readjustment as a verb that shows overt agreement.
Thus, we stress that while signed languages may be uniform with respect to
the form of the agreement, they vary with respect to the surface structure of the
sentence.
The modality-specificity of the substantive universals with respect to the
form of agreement is hypothesized to arise from the different uses of the
spatio-temporal conceptual structure as made visible by the gestural space.
Since spoken languages do not require the use of the gestural space, there may
be greater crosslinguistic variation in the form and pattern of agreement. On
the other hand, since signed languages require the use of gestural space, there
is greater uniformity with respect to the realization of verb agreement, which
may in turn impact on other morphological processes and drive uniformity to a
higher level.
Why the gestural space is crucial in signed languages, but not in spoken
languages, is a question we do not attempt to resolve here. We do know that the
arms and hands permit the use of that space. Moreover, the properties of the
visual system – in particular, its capacities for object recognition and motion
perception – permit the hands to be tracked in space and permit handshapes
to be recognized. The properties of vision, and of what is visible, may have
consequences for the structure of signed languages. Recall that there seems
to be obligatory agreement whenever the referent of the argument is either an
animate or an inanimate concrete object, as opposed to an inanimate abstract
entity. This contrast suggests that, in their use of the visual modality, signed
languages may reflect whatever can be potentially visible in the world. This
includes imagined and/or nonpresent objects as long as they could be seen
in other situations. This is similar to what Talmy (2000) has suggested: in
representing spatial relations, a signed language “connects with aspects of the
visual system that govern scene-structure parsing.”
If it is the case that the gestural space provides a link between spatio-temporal
cognitive structure and the A–P interfaces, it seems that the conceptual and
phonological/phonetic structures are linked twice: once through the syntactic
component and again through the gestural space. There is a difference. The
syntactic links are the primary connection between the conceptual and phono-
logical: they are essential to language and cannot be omitted from either spoken
or signed languages. The gestural space is more a follow-up to the connections
that have been established by syntax, and its use during speech is optional.
Even in signed languages, not all grammatical processes use
this gestural space. For example, wh-question formation (Petronio and Lillo-
Martin 1997; Neidle et al. 2000), negation (Wood 1999; Pfau this volume)
and other derivational processes (Aronoff et al. 2000) do not use the gestural
space.
If verb agreement as well as pronouns and classifiers are the main gram-
matical processes that use the gestural space, are they different crossmodally?
Depending on where we are in the architecture of grammar, the answer can
be either yes or no. It seems that modality does not have an effect at the
levels of syntax and conceptual structure, whereas modality makes an im-
pact with respect to the matching between the A–P interfaces and gestural
space.

Acknowledgments
We are very grateful to the following for helpful comments on an earlier draft of
the chapter: Michel deGraff, Karen Emmorey, Morris Halle, and Diane Lillo-
Martin. Finally we thank Richard Meier for his insightful discussion and ex-
tensive comments on the chapter. All remaining errors are our own.

14.9 References
Ahlgren, Ingrid. 1990. Deictic pronouns in Swedish and Swedish Sign Language. In
Theoretical issues in sign language research, Vol. 1: Linguistics, ed. Susan Fischer
and Patricia Siple, 167–174. Chicago, IL: The University of Chicago Press.
Allan, Keith. 1977. Classifiers. Language 53:285–311.
Aronoff, Mark, Irit Meir, and Wendy Sandler. 2000. Universal and particular aspects
of sign language morphology. Manuscript, State University of New York at Stony
Brook and University of Haifa.
Aubry, Luce. 2000. Vertical manipulation of the manual and visual indices in the person
reference system of American Sign Language. Master's thesis, Harvard University.
Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Lan-
guage. Ph.D. dissertation, Boston University.
Baker, Mark. 1996. The polysynthesis parameter. New York: Oxford University Press.
Baker-Shenk, Charlotte and Dennis Cokely. 1980. American Sign Language: A teacher’s
resource text on grammar and culture. Silver Spring, MD: T.J. Publishers.
Bobaljik, Jonathan and Susanne Wurmbrand. 1997. Preliminary notes on agreement in
Itelmen. In PF: Papers at the Interface, ed. Benjamin Bruening, Yoon-Jung Kang
and Martha McGinnis, 395–423. MIT Working Papers in Linguistics, Vol. 30.
Bos, Heleen. 1994. An auxiliary verb in Sign Language of the Netherlands. In Perspec-
tives on Sign Language Structure: Papers from the 5th International Symposium
on Sign Language Research, Vol. 1, ed. Inger Ahlgren, Brita Bergman and Mary
Brennan, 37–53. Durham: ISLA.
Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA:
MIT Press.
Broselow, Ellen and John McCarthy. 1983. A theory of internal reduplication. The
Linguistic Review 3:25–88.
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, MA: MIT Press.
Chomsky, Noam. 1981. Lectures on government and binding. Dordrecht: Foris.
Chomsky, Noam. 1995. The Minimalist Program. Cambridge, MA: MIT Press.
Comrie, Bernard. 1981. Language universals and linguistic typology: Syntax and mor-
phology. Oxford: Blackwell.
Comrie, Bernard. 1982. Grammatical relations in Huichol. In Syntax and semantics,
Vol. 15: Studies in transitivity, ed. Paul J. Hopper and Sandra A. Thompson,
95–115. New York: Academic Press.
Cormier, Kearsy. 1998. How does modality contribute to linguistic diversity?
Manuscript, The University of Texas at Austin.
Cormier, Kearsy. 2002. Grammaticization of indexic signs: How American Sign Lan-
guage expresses numerosity. Doctoral dissertation, The University of Texas at
Austin.
Cormier, Kearsy, Steven Wechsler and Richard P. Meier. 1998. Locus agreement in
American Sign Language. In Lexical and constructional aspects of linguistic ex-
planation, ed. Gert Webelhuth, Jean-Pierre Koenig and Andreas Kathol, 215–229.
Stanford, CA: CSLI Publications.
Duncan, Sandra. 2001. Perspectives on the co-expressivity of speech and co-speech
gestures in three languages. Paper presented at the 27th annual meeting of the
Berkeley Linguistics Society.
Ehrlenkamp, Sonja. 1999. Possessivkonstruktionen in der Deutschen Gebärdensprache:
Warum die Gebärde ‘SCH’ kein Verb ist? Das Zeichen 48:274–279.
Fauconnier, Gilles. 1985. Mental spaces: aspects of meaning construction in natural
language. Cambridge, MA: MIT Press.
Fauconnier, Gilles. 1997. Mappings in thought and language. Cambridge: Cambridge
University Press.
Fiengo, Robert and Robert May. 1994. Indices and identity. Cambridge, MA: MIT Press.
Fischer, Susan. 1973. Two processes of reduplication in the American Sign Language.
Foundations of Language 9:469–480.
Fischer, Susan. 1974. Sign language and linguistic universals. In Actes de Colloque
Franco-Allemand de Grammaire Transformationelle, ed. Christian Rohrer and
Nicholas Ruwet, 187–204. Tübingen: Niemeyer.
Fischer, Susan. 1978. Sign language and creoles. In Understanding language through
sign language research, ed. Patricia Siple, 309–331. New York: Academic Press.
Fischer, Susan. 1996. The role of agreement and auxiliaries in sign language. Lingua
98:103–120.
Fischer, Susan and Bonnie Gough. 1978. Verbs in American Sign Language. Sign Lan-
guage Studies 18:17–48.
Fourestier, Simone. 1998. Verben der Bewegung und Position in der Katalanischen
Gebärdensprache. Magisterarbeit, Universität Hamburg.
Friedman, Lynn. 1975. Space, time, and person reference in ASL. Language 51:940–961.
Friedman, Lynn. 1976. The manifestation of subject, object and topic in the American
Sign Language. In Subject and topic, ed. Charles Li, 125–148. New York: Academic
Press.
Gee, James and Wendy Goodhart. 1988. American Sign Language and the human bi-
ological capacity for language. In Language, learning and deafness, ed. Michael
Strong, 49–74. Cambridge: Cambridge University Press.
Gee, James and Judy Kegl. 1982. Semantic perspicuity and the locative hypothesis:
implications for acquisition. Journal of Education 164:185–209.
Gee, James and Judy Kegl. 1983. Narrative/story structure, pausing and American Sign
Language. Discourse Processes 6:243–258.
Greenberg, Joseph. 1966. Universals of language. Cambridge, MA: MIT Press.
Grinevald, Colette. 1999. A typology of nominal classification systems. Faits de
Langues 14:101–122.
Jackendoff, Ray. 1987. Consciousness and the computational mind. Cambridge, MA:
MIT Press.
Jackendoff, Ray. 1992. Languages of the mind. Cambridge, MA: MIT Press.
Jackendoff, Ray. 1997. The architecture of the language faculty. Cambridge, MA: MIT
Press.
Janis, Wynne. 1992. Morphosyntax of the ASL verb phrase. Doctoral dissertation, State
University of New York at Buffalo.
Johnston, Trevor. 1991. Spatial syntax and spatial semantics in the inflection of signs
for the marking of person and location in Auslan. International Journal of Sign
Linguistics 2:29–62.
Johnston, Trevor. 1996. Function and medium in the forms of linguistic expression found
in a sign language. In International Review of Sign Linguistics, Vol. 1, ed. William
Edmondson and Ronnie B. Wilbur, 57–94. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Keller, Jörge. 1998. Aspekte der Raumnutzung in der deutschen Gebärdensprache.
Hamburg: Signum Press.
Kendon, Adam. 2000. Language and gesture: unity or duality? In Language and gesture,
ed. David McNeill, 47–63. Cambridge: Cambridge University Press.
Kita, Sotaro. 2000. How representation gestures help speaking. In Language and gesture,
ed. David McNeill, 162–185. Cambridge: Cambridge University Press.
Klima, Edward and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Lacy, Richard. 1974. Putting some of the syntax back into the semantics. Paper presented
at the annual meeting of the Linguistic Society of America, New York.
Lakoff, George. 1987. Women, fire and dangerous things. Chicago, IL: University of
Chicago Press.
Langacker, Ronald W. 1987. Foundations of cognitive grammar, Vol. 1: Theoretical
prerequisites. Stanford, CA: Stanford University Press.
Lehmann, Christian. 1988. On the function of agreement. In Agreement in natural
language: approaches, theories, descriptions, ed. Michael Barlow and Charles
Ferguson, 55–65. Stanford, CA: CSLI Publications.
Liddell, Scott. 1990. Four functions of a locus: re-examining the structure of space
in ASL. In Sign language research: theoretical issues, ed. Ceil Lucas, 176–198.
Washington, DC: Gallaudet University Press.
Liddell, Scott. 1995. Real, surrogate, and token space: grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–
42. Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott. 2000a. Indicating verbs and pronouns: pointing away from agreement. In
The signs of language revisited: an anthology to honor Ursula Bellugi and Edward
Klima, ed. Karen Emmorey and Harlan Lane, 303–320. Mahwah, NJ: Lawrence
Erlbaum Associates.
Liddell, Scott. 2000b. Blended spaces and deixis in sign language discourse. In
Language and gesture, ed. David McNeill, 331–357. Cambridge: Cambridge Uni-
versity Press.
Liddell, Scott and Robert Johnson. 1989. American Sign Language: the phonological
base. Sign Language Studies 18:195–277.
Lillo-Martin, Diane. 1991. Universal grammar and American Sign Language: setting
the null argument parameters. Dordrecht: Kluwer Academic.
Lillo-Martin, Diane and Edward Klima. 1990. Pointing out differences: ASL pronouns in
syntactic theory. In Theoretical issues in sign language research, Vol. 1: Linguistics,
ed. Susan Fischer and Patricia Siple, 191–210. Chicago: The University of Chicago
Press.
Lindblom, Björn. 1990. Explaining phonetic variation: a sketch of the H&H theory.
In Speech production and speech modeling, ed. William Hardcastle and Alain
Marchal, 403–439. Dordrecht: Kluwer Academic.
MacLaughlin, Dawn. 1997. The structure of determiner phrases: evidence from
American Sign Language. Ph.D. dissertation, Boston University.
Marantz, Alec. 1982. Re reduplication. Linguistic Inquiry 13:435–482.
Mathur, Gaurav. 2000. Verb agreement as alignment in signed languages. Ph.D. disser-
tation, Massachusetts Institute of Technology.
Mathur, Gaurav and Christian Rathmann. 2001. Why not GIVE-US: an articulatory
constraint in signed languages. In Signed languages: Discoveries from international
research, ed. Valerie Dively, Melanie Metzger, Sarah Taub and Anne-Marie Baer,
1–26. Washington, DC: Gallaudet University Press.
McCarthy, John. 1982. Prosodic templates, morphemic templates and morphemic tiers.
In The structure of phonological representation, Part One, ed. Harry van der Hulst
and Neil Smith, 191–223. Dordrecht: Foris.
McNeill, David. 2000. Catchments and contexts: non-modular factors in speech and
gesture production. In Language and gesture, ed. David McNeill, 312–328.
Cambridge: Cambridge University Press.
Meier, Richard P. 1982. Icons, analogues, and morphemes: the acquisition of verb agree-
ment in American Sign Language. Ph.D. dissertation, University of California at
San Diego, CA.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical issues
in sign language research, Vol. 1: Linguistics, ed. Susan Fischer and Patricia Siple,
175–190. Chicago, IL: The University of Chicago Press.
Meier, Richard P. 2002. The acquisition of verb agreement: pointing out arguments for
the linguistic status of agreement in signed languages. In Current developments
in the study of signed language acquisition, ed. Gary Morgan and Bencie Woll.
Amsterdam: John Benjamins.
Meir, Irit. 1995. Explaining backwards verbs in Israeli Sign Language: syntax-semantic
interaction. In Sign language research, ed. Helen Bos and Gertrude Schermer,
105–120. Hamburg: Signum.
Meir, Irit. 1998. Thematic structure and verb agreement in Israeli Sign Language. Ph.D.
dissertation, The University of Haifa, Israel.
Neidle, Carol, Dawn MacLaughlin, Judy Kegl, Benjamin Bahan and Debra Aarons.
1995. Overt realization of syntactic features in American Sign Language. Paper
presented at Syntax Seminar, University of Trondheim, Norway.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert Lee. 2000.
The syntax of American Sign Language: functional categories and hierarchical
structure. Cambridge, MA: MIT Press.
Newport, Elissa. 2000. Reduced input in the acquisition of signed languages: contribu-
tions to the study of creolization. In Language creation and language change, ed.
Michel DeGraff, 161–178. Cambridge, MA: MIT Press.
Newport, Elissa and Ted Supalla. 2000. Sign language research at the millennium. In
The signs of language revisited, ed. Karen Emmorey and Harlan Lane, 103–114.
Mahwah, NJ: Lawrence Erlbaum Associates.
Padden, Carol. 1983. Interaction of morphology and syntax in American Sign Language.
Ph.D. dissertation, University of California, San Diego, CA.
Padden, Carol. 1990. The relation between space and grammar in ASL verb morphology.
In Sign language research: theoretical issues, ed. Ceil Lucas, 118–132. Washington,
DC: Gallaudet University Press.
Petronio, Karen and Diane Lillo-Martin. 1997. Wh-movement and the position of Spec,
CP: evidence from American Sign Language. Language 73:18–57.
Prillwitz, Siegmund. 1986. Die Gebärde der Gehörlosen. Ein Beitrag zur Deutschen
Gebärdensprache und ihrer Grammatik. In Die Gebärde in Erziehung und Bil-
dung Gehörloser, ed. Siegmund Prillwitz, 55–78. Tagungsbericht. Hamburg: Signum
Verlag.
Rathmann, Christian. 2001. The optionality of Agreement Phrase: evidence from signed
languages. Unpublished manuscript, The University of Texas at Austin.
Sandler, Wendy. 1986. The spreading hand autosegment of ASL. Sign Language Studies
15:1–28.
Sandler, Wendy. 1989. Phonological representation of the sign. Dordrecht: Foris.
Sandler, Wendy. 1993. A sonority cycle in American Sign Language. Phonology
10:243–279.
Schick, Brenda. 1990. Classifier predicates in American Sign Language. International
Journal of Sign Linguistics 1:15–40.
Shepard-Kegl, Judy. 1985. Locative relations in American Sign Language word forma-
tion, syntax and discourse. Ph.D. dissertation, MIT.
Singleton, Jenny and Elissa Newport. 1994. When learners surpass their models: the
acquisition of American Sign Language from impoverished input. Manuscript,
University of Rochester.
Smith, Wayne. 1990. Evidence for auxiliaries in Taiwan Sign Language. In Theoretical
issues in sign language research, Vol. 1: Linguistics, ed. Susan Fischer and Patricia
Siple, 211–228. Chicago, IL: The University of Chicago Press.
Speas, Margaret. 1995. Economy, agreement, and the representation of null arguments.
Unpublished manuscript, University of Massachusetts, Amherst.
Stokoe, William, Dorothy Casterline and Carl Croneberg. 1965. A dictionary of
American Sign Language based on linguistic principles. Silver Spring, MD:
Linstok Press.
Supalla, Ted. 1986. The classifier system in American Sign Language. In Noun classes
and categorization, ed. Colette Craig, 181–214. Amsterdam: John Benjamins.
Supalla, Ted. 1997. An implicational hierarchy in the verb agreement of American Sign
Language. Unpublished manuscript, University of Rochester.
Sutton-Spence, Rachel and Bencie Woll. 1999. The linguistics of British Sign Language:
an introduction. Cambridge: Cambridge University Press.
Talmy, Leonard. 1985. Lexicalization patterns: semantic structure in lexical forms. In
Language typology and syntactic description, Vol. 3: Grammatical categories and
the lexicon, ed. T. Shopen, 57–149. Cambridge: Cambridge University Press.
Talmy, Leonard. 2000. Spatial structuring in spoken language and its relation to that
in sign language. Paper presented at the Third Workshop on Text Structure, The
University of Texas at Austin.
Taub, Sarah. 2001. Language from the body: iconicity and metaphor in American Sign
Language. Cambridge: Cambridge University Press.
van der Hulst, Harry. 1993. Units in the analysis of signs. Phonology 10:209–241.
Wilcox, Phyllis. 2000. Metaphor in American Sign Language. Washington, DC:
Gallaudet University Press.
Wood, Sandra. 1999. Syntactic and semantic aspects of negation in ASL. Master's thesis,
Purdue University.
15 The effects of modality on spatial language:
How signers and speakers talk about space

Karen Emmorey

15.1 Introduction
Most spoken languages encode spatial relations with prepositions or locative
affixes. Often there is a single grammatical element that denotes the spatial
relation between a figure and ground object; for example, the English spatial
preposition on indicates support and contact, as in The cup is on the table. The
prepositional phrase on the table defines a spatial region in terms of a ground
object (the table), and the figure (the cup) is located in that region (Talmy
2000). Spatial relations can also be expressed by compound phrases such as
to the left or in back of. Both simple and compound prepositions constitute a
closed class set of grammatical forms for English. In contrast, signed languages
convey spatial information using so-called classifier constructions in which
spatial relations are expressed by where the hands are placed in the signing space
or in relationship to the body (e.g. Supalla 1982; Engberg-Pedersen 1993).1 For
example, to indicate ‘The cup is on the table,’ an American Sign Language
(ASL) signer would place a C classifier handshape (referring to the cup) on
top of a B classifier handshape (referring to the table). There is no grammatical
element specifying the figure–ground relation; rather, there is a schematic and
isomorphic mapping between the location of the hands in signing space and the
location of the objects described (Emmorey and Herzig in press). This chapter
explores some of the ramifications of this spatialized form for how signers talk
about spatial environments in conversations.

1 The status of handshape as a classifier in these constructions has been recently called into question
(see chapters in Emmorey, in press).

15.2 Modality effects and the nature of addressee vs. speaker
perspective in spatial descriptions
Figure 15.1a provides a simple example of an ASL spatial description. An En-
glish translation of this example would be I entered the room. There was a table
to the left. In this type of narrative, the spatial description is from the point
of view of the speaker (for simplicity and clarity, "speaker" is used to refer to
the person who is signing in these conversational contexts). The addressee, if
facing the speaker, must perform a mental transformation of signing space. For
example, in Figure 15.1a the speaker indicates that the table is to the left by ar-
ticulating the appropriate classifier sign on his left in signing space. Because the
addressee is facing the speaker, the location of the classifier form representing
the table is in fact to the right for the addressee. There is a mismatch between
the location of the table in the room being described (the table is on the left
as seen from the entrance) and what the addressee observes in signing space
(the classifier handshape referring to the table is produced to the addressee’s
right). In this case, the addressee must perform what amounts to a 180° mental
rotation to correctly comprehend the description.

[Figure 15.1: (a) Speaker's perspective, glosses I-ENTER TABLE THERE, CL:C (2h);
(b) Addressee's perspective, glosses YOU-ENTER TABLE THERE, CL:C (2h);
(c) plan of the room, marking the entrance.]
Figure 15.1 Illustration of ASL descriptions of the location of a table within
a room, described from: 15.1a the speaker's perspective; or 15.1b the addressee's
perspective. Signers exhibit better comprehension for room descriptions presented
from the speaker's perspective, despite the mental transformation that this
description entails; 15.1c Position of the table described in (a) and (b)

Although spatial scenes are most commonly described from the speaker’s
point of view (as in Figure 15.1a), it is possible to indicate a different view-
point. ASL has a marked sign that can be glossed as YOU-ENTER, which
indicates that the scene should be understood as signed from the addressee’s
viewpoint (see Figure 15.1b). When this sign is used, the signing space in
which the room is described is, in effect, rotated 180° so that the addressee
is “at the entrance” of the room. In this case, the addressee does not need to
mentally transform locations within signing space. However, ASL descriptions
using YOU-ENTER are quite unusual and rarely found in natural discourse.
Furthermore, Emmorey, Klima, and Hickok (1998) found that ASL signers
comprehended spatial descriptions much better when they were produced from
the speaker’s point of view compared to the addressee’s viewpoint. In that study,
signers viewed a videotape of a room and then a signed description and were
asked to judge whether the room and the description matched. When the room
was described from the addressee’s perspective (using YOU-ENTER), the de-
scription spatially matched the room layout shown on the videotape, but when
signed from the speaker’s perspective (using I-ENTER), the description was
the reverse of the layout on the videotape (a simplified example is shown in
Figure 15.1). Emmorey et al. (1998) found that ASL signers were more accurate
when presented with descriptions from the speaker’s perspective, despite the
mental transformation that these descriptions entailed.
One might consider this situation analogous to that for English speakers who
must understand the terms left and right with respect to the speaker’s point of
view (as in on my left). The crucial difference, however, is that these relations
are encoded spatially in ASL, rather than lexically. The distinction becomes
particularly clear in situations where the speaker and the addressee are both in
the environment, observing the same scene. In this situation, English speakers
most often adopt their addressee’s point of view, for example giving directions
such as, pick the one on your right, or it’s in front of you, rather than pick the one
on my left or it’s farthest from me (Schober 1993; Mainwaring, Tversky, and
Schiano 1996). However, when jointly viewing an environment, ASL signers
do not adopt their addressee’s point of view but use what I term “shared space”
(Emmorey 2002). Figure 15.2 provides an illustration of what is meant by
shared space. In the situation depicted, the speaker and addressee are facing
each other, and between them are two boxes. Suppose the box on the speaker’s
left is the one that he (or she) wants shipped. If the speaker uses signing space
(rather than just pointing to the actual box), he would indicate the box to be
shipped by placing the appropriate classifier sign on the left side of signing
space. Note that, in this situation, no mental transformation is required by the
addressee. Instead, the speaker’s signing space is simply “mapped” onto the
jointly observed physical space: the left side of the speaker’s signing space
maps directly to the actual box on the right side of the addressee. In contrast, if
the speaker were to adopt the addressee’s point of view, producing the classifier
sign on his right, the location in signing space would conflict with the location
of the target box observed by the addressee.

[Figure 15.2: schematic half circles of signing space for panels (a) Shared
space and (b) *Addressee viewpoint.]
Figure 15.2 Illustration of a speaker using: 15.2a shared space; and 15.2b
using the addressee's spatial viewpoint to indicate the location of the box
marked with an "X" (the asterisk indicates that signers reject this type of
description). By convention, the half circle represents the signing space in
front of the signer. The "X" represents the location of the classifier sign used
to represent the target box (e.g., a hooked 5 handshape)

Note that it is not impossible to adopt the addressee’s viewpoint when physical
space is jointly observed by both interlocutors. For example, the speaker could
describe an action of the addressee. In this case, the speaker would indicate a
referential shift through a break in eye gaze, and within the referential shift the
signer could sign LIFT-BOX using a handling classifier construction articulated
toward the right of signing space. The signing space in this case would reflect
the addressee’s view of the environment (i.e. the box is to the addressee’s
right).
In general, however, for situations in which the signer and addressee are
both observing and discussing a jointly viewed physical environment, there
is no true speaker vs. addressee point of view in signed descriptions of that
environment (Emmorey 1998; Emmorey and Tversky, in press). The signing
space is “shared” in the sense that it maps to the physically observed space and
to both the speaker’s and addressee’s view of the physical space. Furthermore,
the signer’s description of the box would be the same regardless of where the
addressee happened to be standing (e.g. placing the addressee to the signer’s
left in Figure 15.2, would not alter the signer’s description or the nature of the
mapping from signed space to physical space). Thus, in this situation, ASL
signers do not need to take into account where their addressee is located, unlike
English speakers who tend to adopt their addressee’s viewpoint. This difference
between languages derives from the fact that signers use the actual space in front
of them to represent observed physical space.
In sum, language modality impacts the interpretation and nature of speaker
and addressee perspectives for spatial descriptions. For descriptions of nonpre-
sent environments in ASL, an addressee must mentally transform the locations
within a speaker’s signing space in order to correctly understand the left–right
arrangements of objects with respect to the speaker’s viewpoint. For speech,
spatial information is encoded in an acoustic signal, which bears no resem-
blance to the spatial scene described. An English speaker describing the room
in Figure 15.1 might say either You enter the room, and a table is to your left
or I enter the room, and a table is to my left. Neither description requires any
sort of mental transformation on the part of the addressee because the relevant
information is encoded in speech rather than in space.2 However, when English
speakers and addressees discuss a jointly viewed scene, an addressee may need
to perform a type of mental transformation if the speaker describes a spatial
location from his or her viewpoint. For example, if the speaker says Pick the
box on my left for the situation depicted in Figure 15.2, the addressee must un-
derstand that the desired box is on his or her right. Again, this situation differs
for signers because the speaker’s signing space maps to the observed physical
space and to the addressee’s view of that space. Signing space is shared, and
no mental transformation is required by the addressee.

2 If an English speaker describes a situation in which the addressee is placed within the room (e.g.
You are at the back of the room facing the door, and when I walked in I noticed a table on my
left), then the addressee would indeed need to perform some type of mental transformation to
understand the location of the table with respect to his or her viewpoint (i.e. the table is on the
addressee's right). However, for spatial descriptions that do not involve the addressee as part
of the environment, no such mental transformation would be required. For the description and
room matching task used by Emmorey et al. (1998), it would make no difference whether the
speaker described the room from her perspective (using I) or from the addressee's perspective
(using you). For ASL, however, the placement of classifier signs within signing space changes
depending upon whether the room description is introduced with YOU-ENTER or I-ENTER
(see Figure 15.1).

For the situations discussed thus far, the speaker produced monologue de-
scriptions of environments (e.g. describing room layouts, as illustrated in
Figure 15.1) or the speaker described a jointly viewed environment (as illus-
trated in Figure 15.2). In the study reported in Section 15.4, I explore the
situation in which two signers converse about a spatial scene that they are not
currently observing, focusing on how the addressee refers to and interprets loca-
tions within the speaker’s signing space. First, however, I examine the different
ways that signing space can be structured when describing a spatial environment
and how English speakers and ASL signers sometimes differ in their choice of
spatial perspective.

15.3 Spatial formats and route vs. survey perspective choice


When English speakers describe environments, they often take their listener
on a mental tour, adopting a “route” perspective, but they may also adopt a
“survey” perspective, describing the environment from a bird’s eye view, using
cardinal direction terms (Taylor and Tversky 1992; 1996). Route and survey
perspectives differ with respect to:
• point of view: moving within the scene vs. fixed above the scene;
• reference object: the addressee vs. another object/landmark; and
• reference terms: right–left–front–back vs. north–south–east–west.
Route and survey perspectives also correspond to two natural ways of experienc-
ing an environment (Emmorey, Tversky, and Taylor 2000). A route perspective
corresponds to experiencing an environment from within, by navigating it, and
a survey perspective corresponds to viewing an environment from a single out-
side position. The following are English examples from Linde and Labov’s
(1975) study of New Yorkers’ descriptions of their apartments:

Route perspective: As you open the door, you are in a small five-by-five room which
is a small closet. When you get past there, you’re in what we call the foyer . . . If you
keep walking in that same direction, you’re confronted by two rooms in front of you . . .
a large living room which is about twelve by twenty on the left side. And on the right
side, straight ahead of you again, is a dining room which is not too big . . . (p. 929).

Survey perspective: The main entrance opens into a medium-sized foyer. Leading off
the foyer is an archway to the living room which is the furthermost west room in the
apartment. It’s connected to a large dining room through double sliding doors. The dining
room also connects with the foyer and main hall through two small arches. The rest of
the rooms in the apartment all lead off this main hall which runs in an east–west direction
(p. 927).

Emmorey and Falgier (1999) found that ASL signers also adopt either a route
or survey perspective when describing environments, and that signers structure
signing space differentially, depending upon perspective choice. We found that
if signers adopted a survey perspective when describing an environment, they
most often used a diagrammatic spatial format, but when a route perspective was
adopted, they most often used a viewer spatial format. The term spatial format
refers to the topographic structure of signing space used to express locations
and spatial relations between objects. Table 15.1 summarizes the properties
associated with each spatial format.

Table 15.1 Properties associated with spatial formats in ASL

Diagrammatic space                             Viewer space

Signing space represents a map-like            Signing space reflects an individual's
model of the environment.                      view of the environment at a particular
                                               point in time and space.
Space can have either a 2-D "map" or           Signing space is 3-D.
a 3-D "model" format.
The vantage point does not change              The vantage point can change.
(generally a bird's eye view).
Relatively low horizontal signing space        Relatively high horizontal signing
or a vertical plane.                           space.

Source: Emmorey 2002

Diagrammatic space is somewhat analogous to Liddell’s notion of “token
space” (Liddell 1994; 1995) and to Schick’s (1990) “model space.” Model space
is characterized as “an abstract, Model scale in which all objects are construed
as miniatures of their actual referents” (Schick 1990:32). Liddell (1995:33) de-
scribes tokens as “conceptual entities given a manifestation in physical space,”
and states that “the space tokens inhabit is limited to the size of physical space
ahead of the signer in which the hands may be located while signing.” Dia-
grammatic space is also so limited, and under Liddell’s analysis signers could
conceptualize tokens as representing objects and landmarks within a description
of an environment. However, tokens are hypothesized to be three-dimensional
entities, whereas the data from Emmorey and Falgier (1999) contained some
examples in which the spatial format was two dimensional, representing a map
with points and lines.
Viewer space is similar to “surrogate space” described by Liddell (1994;
1995) and “real-world space” described by Schick (1990). The term “real-world
space” is problematic because it implies the actual physical space surrounding
the signer. It is important to distinguish between “real” space and viewer space
because in the first case the signer actually sees the environment being de-
scribed, and in the second the environment is conceptualized as present and
observable (see also Liddell 1995). According to Liddell (1994) surrogates are
characterized as invisible, normal-sized entities with body features (head to toe),
and they are conceptualized as in the environment. When signers adopt a route
perspective to describe an environment, the signer describes the environment
as if he or she were actually moving through it. Under Liddell’s analysis, the
surrogate within this type of description coincides with the signer’s body (i.e. it
occupies the same physical space as the signer’s body). The term viewer space,
rather than surrogate space, may be preferred for the type of spatial descriptions
discussed here because it is the environment, rather than a surrogate, which is
conceptualized as present.
Spatial formats can be determined independently of the type of perspective
chosen to describe an environment. For example, route perspectives are charac-
terized by movement through an environment, but motion verbs can be produced
within a viewer spatial format (e.g. DRIVE-TO) or within a diagrammatic spa-
tial format (e.g. a classifier construction meaning ‘vehicle moves straight and
turns’). Survey perspectives are characterized by the use of cardinal direction
terms, but these terms can also be produced within either a viewer spatial for-
mat (e.g. the sign EAST produced outward from the signer at eye level to
indicate ‘you go straight east’) or within a diagrammatic spatial format (e.g.
the sign NORTH produced along a path that coincides with a road traced in
signing space, indicating that the road runs to the north). Although perspective
choice can be determined independently of spatial format, diagrammatic space
is clearly preferred for survey descriptions, and viewer space is clearly preferred
for route descriptions.
Emmorey and Falgier (1999) found that ASL signers did not make the same
perspective choices as English speakers when describing spatial environments
learned from a map. ASL signers were much more likely to adopt a survey
perspective compared to English speakers. We hypothesized that ASL signers
may have been more affected by the way the spatial information was acquired,
i.e. via a map rather than via navigation. A mental representation of a map
is more easily expressed using diagrammatic space, and this spatial format is
more compatible with a survey perspective of the environment. English speakers
were not subject to such linguistic influences and preferred to adopt a route
perspective when describing environments with a single path and landmarks of
similar size (specifically, the layout of a convention center).
Thus, language modality appears to influence the choice of spatial perspec-
tive. For signers, diagrammatic signing space can be used effectively to repre-
sent a map, thus biasing signers toward adopting a survey perspective where
English speakers would prefer a route perspective. However, signers do not al-
ways adopt a survey perspective for spatial descriptions. Pilot data suggest that
signers often produce route descriptions for environments they have learned
by navigation. Language modality may have its strongest impact on the nature
of spatial perspective choice when knowledge of that environment is acquired
from a map.
We now turn from studies of narrative spatial descriptions to a study that
explores the nature of spatial conversations in ASL.

15.4 How speakers and addressees interpret signing space: Reversed
space, mirrored space, and shared space
The study reported in this chapter investigates how addressees and speak-
ers interpret signing space when conversing about an environment. Pairs of
signers were recruited, and one participant was given the town map used by
Emmorey and Falgier (1999); see Figure 15.3. This participant was asked to
describe the town such that his or her partner could subsequently reproduce the
map.

[Figure 15.3: map of the town, titled TOWN, showing the White Mountains,
Mountain Road, the White River, a store, a park, the Town Hall, a school,
a gazebo, Maple Street, a gas station, and River Highway, with a compass
rose (N, S, E, W).]
Figure 15.3 Map of the town (from Tversky and Taylor 1992)

Eleven pairs of fluent ASL signers participated in the study. For three of
the pairs, the signer who described the town had previously described it to
another participant in a separate session. Since we were primarily concerned
with how the addressee structured signing space, it was not critical that the speaker
be naive to the task of explaining the layout of the town. The addressee could
ask questions throughout the speaker’s description, and the participants faced
each other during the task. Subjects were tested either at Gallaudet University
in Washington, DC or at The Salk Institute in San Diego, CA. Sixteen subjects
had Deaf parents, two subjects had hearing parents and were exposed to ASL
prior to the age of three, and one subject learned ASL in junior high school (her
earlier exposure was to SEE).
As noted, the previous research by Emmorey and Falgier (1999) indicates that
ASL signers tend to produce descriptions of the town from a survey perspective,
rather than from a route perspective, and this was true for the speakers in this
study: nine speakers produced a survey description, two speakers produced a
mixed description (part of the description was from a survey perspective and part
was from a route perspective), and no speaker produced a pure route description
of the town. Given that the addressee’s task was to draw a map, the speaker’s
use of diagrammatic space was sensible because this spatial format allows a
large number of landmarks to be schematically mapped onto signing space,
and diagrammatic space can easily be transformed into a map representation.
In what follows, I focus on how the addressee referred to the speaker’s signing
space when asking a question or when commenting upon the description. All
addressees used a diagrammatic spatial format when re-describing the town.
The results revealed that all but one of the addressees performed a mental
reversal of observed signing space when re-describing the town or asking a ques-
tion. For example, if the speaker indicated that Maple Street looped to the left
(observed as motion to the right for the addressee), the addressee would trace the
Maple Street loop to his or her left in signing space. It was rare for an addressee
to mirror the speaker’s space, e.g. by tracing Maple Street to the right (see be-
low). Addressees also used a type of shared space by pointing toward a location
within the speaker’s space to ask a question or comment about the landmark
associated with that location. Figure 15.4 illustrates these different possibilities.
As discussed earlier, when the addressee reverses the speaker’s signing
space, he or she must perform what amounts to a 180° mental rotation (see
Figure 15.4a). However, the nature of this mental transformation is not com-
pletely clear. The intuitions of native signers suggest that they probably do not
mentally rotate locations within the speaker’s signing space. Rather, addressees
report that they “instantly” (intuitively) know how to interpret locations in the
speaker’s signing space. They do not experience a sensation of rotating a men-
tal image of the scene or landmarks within the scene. How, then, do addressees
transform observed signing space into a reversed mental representation of that space?
[Figure 15.4 here: schematic diagrams of (a) reversed space, (b) mirrored space, and (c) two examples of shared space, each showing the speaker's and addressee's signing spaces]
Figure 15.4 Illustration of: 15.4a reversed space; 15.4b mirrored space; and
15.4c two examples of the use of shared space for non-present referents. The
half circles represent signing space, the solid arrow represents the direction of
the Maple Street loop, and the “X” represents the location of the Town Hall
with respect to Maple Street. The dotted arrow in example (i) of 15.4c indicates
the direction of a pointing sign used by the addressee to refer to the Town Hall

One possibility is that addressees comprehend ASL spatial descriptions
as if they were producing the description themselves. One mechanism for this
transformation might be that addressees encode spatial relations by mentally
imagining themselves at the speaker’s position, perhaps a form of self-rotation.
Another mechanism might involve a “motor theory of sign perception” at the
sentence level. Under this explanation, signers perform a transformation of the
perceived articulation into a reversed representation of their own production
(assuming both speaker and addressee are right-handed).
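The left-right component of these two interpretations can be made concrete with a small sketch. The following Python fragment is my own illustration, not part of the study: it models a location as a signed coordinate on a signer's own left-right axis, and shows that reversal recovers the speaker's egocentric layout (the 180° rotation described above, restricted to one axis), whereas mirroring simply re-articulates the location on the visual side where it was observed.

# A minimal sketch (my own construction, not the chapter's): a location is a
# signed number on a signer's own left-right axis; negative = that signer's
# left, positive = that signer's right.

def as_seen_by_addressee(x_speaker):
    """Facing interlocutors: the speaker's left falls on the addressee's
    right, so the observed coordinate is sign-flipped."""
    return -x_speaker

def reversed_space(x_observed):
    """Figure 15.4a: the addressee undoes the flip, so a street the speaker
    placed on her left is traced to the addressee's own left."""
    return -x_observed

def mirrored_space(x_observed):
    """Figure 15.4b: the addressee re-articulates the location on the visual
    side where it was observed, applying no transformation."""
    return x_observed

maple_street = -1.0                        # speaker loops Maple Street to her left
seen = as_seen_by_addressee(maple_street)  # +1.0: seen on the addressee's right
assert reversed_space(seen) == -1.0        # traced to the addressee's left (correct)
assert mirrored_space(seen) == +1.0        # traced to the right: the left-right
                                           # comprehension error discussed below

In the full two-dimensional case, composing this left-right flip with the corresponding near-far flip yields the 180° rotation discussed above.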
Evidence for signers’ superior ability to reverse perceived articulation is suggested
by the results of Masataka (1995). He found that native signing Japanese
children exhibited an enhanced ability to conduct perception-to-production
transformations involving mirror reversals, compared to their hearing peers.
Masataka’s study was based on the common finding that a figure (such as a let-
ter) drawn on the forehead or back of the hand is perceived tactilely by subjects
as a mirror reversal of the experimenter-defined stimulus. Masataka found that
Deaf signing Japanese children did not show such a mirror-reversal tendency,
unlike their hearing peers. That is, when a “p” was written on a particular body
surface, signing children overwhelmingly chose “p” as a matching response,
whereas hearing children were three times more likely to choose the mirror-
reversed “q” response. The signing children were apparently more skilled at
mentally transforming a tactilely perceived pattern into the movement pattern
that was used to create it. Furthermore, Masataka presented evidence that this
ability was language linked: the larger the child’s vocabulary, the better the
performance. Further research will help to determine whether signers perform
a type of self rotation (“putting themselves in the speaker’s shoes”) or conduct
a form of “analysis-by-synthesis” in which perceived articulations are trans-
formed into sign production.
Although most addressees reversed signing space during the interactions,
we found two examples of mirrored space (see Figure 15.4b). In one example,
mirroring the speaker’s signing space led to a left–right error in comprehension.
The addressee mirrored the direction of Maple Street when re-describing that
part of the town, and then drew the map with Maple Street incorrectly looping
to the right. In a different type of example, the speaker specifically told his
addressee that he would describe the map of the town from the addressee’s
point of view (as if the addressee were looking at the map on a blackboard).3
Thus, the speaker reversed the left–right locations of landmarks, which was very
difficult for him, and he made several errors. When the addressee re-described
the town, he did not reverse the speaker’s signing space, but correctly mirrored
the speaker’s space. Mirrored space was correct in this case because the speaker
had already reversed the spatial locations for his addressee.
Figure 15.4c provides two illustrations of the use of shared space produced
by three of the participants. Example (i) illustrates the situation in which the
addressee used a pronoun or classifier sign to refer to a location (or associated
referent) within the speaker’s signing space (indicated by the dotted arrow in
the figure). Such shared space is common for nonspatial discourse when an
addressee points toward a location in the speaker’s space in order to refer to
the referent associated with that location. For example, if the referent “John”

3 Such a mirrored description is akin to producing a room description using YOU-ENTER, as illustrated in Figure 15.1b.
has been associated with a location on the speaker’s right, then the addressee
may direct a pronoun toward this location, which is on his or her left, to refer
to John (compare Figure 3.4 in Neidle, Kegl, MacLaughlin, Bahan, and Lee
2000). However, when the speaker’s signing space is structured topographically
to represent the location of several landmarks, the addressee generally reverses
the speaker’s space, as shown in Figure 15.4a. Such reversals do not occur
for nonspatial conversations because the topography of signing space is not
generally complex and does not convey a spatial viewpoint.
Example (ii) in Figure 15.4c illustrates an example of shared space in which
the signing spaces for the addressee and speaker overlap. For example, in one
situation, two signers sat across from each other at a table, and the speaker
described the layout of the town by signing on the tabletop, e.g. tracing the
location of streets on the table. The addressee then used classifier constructions
and pronouns articulated on the table to refer to locations and landmarks in the
town with the same spatial locations on the table: thus, the signing spaces of the
speaker and addressee physically overlapped. In another similar example, two
signers (who were best friends) swiveled their chairs during the conversation
so that they were seated side-by-side. The addressee then moved her hands into
the speaker’s space in order to refer to landmarks and locations, even while her
partner was still signing! This last example was rare, with both signers finding
such “very shared space” amusing. Physically overlapping shared space may be
possible only when there is an object, such as a table, to ground signing space
in the real world or when signers know each other very well. For one pair of
signers, the speaker clearly attempted to use overlapping shared space with his
addressee, but she adamantly maintained a separate signing space.
Özyürek (2000) uses the term “shared space” to refer to the gesture space
shared between spoken language users. However, Özyürek focuses on how
the speaker changes his or her gestures depending upon the location of the ad-
dressee. When narrators described “in” or “out” spatial relations (e.g. ‘Sylvester
flies out of the window’), their gestures moved along a front–back axis when
the addressee was facing the speaker, but speakers moved their gestures later-
ally when the addressee was to the side. Özyürek argues that speakers prefer
gestures along these particular axes so that they can move their gestures into or
out of a space shared with the addressee. In contrast, shared space as defined
here for ASL is not affected by the spatial position of the addressee, and signers
do not alter the directionality of signs depending upon where their addressee is
located. For example, OUT is signed with motion along the horizontal axis (out-
ward from the signer), regardless of the location of the addressee. The direction
of motion can be altered to refer explicitly to the addressee (e.g. to express ‘the
two of us are going out’) or to refer to a specific location within signing space
(e.g. to indicate the location of an exit). The direction of motion for OUT (or for
other directional signs) is not affected by the location of the addressee, unless
the signs specifically refer to the addressee. The use of shared space in ASL
occurs when the speaker’s signing space maps to his or her view of the spatial
layout of physically present objects (as in Figure 15.2) or to a mental image of
the locations of nonpresent objects (as in Figure 15.4c). The addressee shares
the speaker’s signing space either because it maps to the addressee’s view of
present objects or because the addressee uses the same locations within the
signer’s space to refer to nonpresent objects.
Furuyama (2000) presents a study in which hearing subjects produced col-
laborative gestures within a shared gesture space, which appears to be similar
to shared space in ASL. In this study, a speaker (the instructor) explained how
to create an origami figure to a listener (the learner), but the instructor had to
describe the paper-folding steps without using a piece of origami paper for
demonstration. Similar to the use of shared space in signed conversations, Fu-
ruyama found that learners pointed toward the gestures of their instructor or
toward “an ‘object’ seemingly set up in the air by the instructor with a gesture”
(Furuyama 2000:105). Furthermore, the gesture spaces of the two participants
could physically overlap. For example, learners sometimes referred to the in-
structor’s gesture by producing a gesture near (or even touching) the instructor’s
hand. In addition, the surface of a table could ground the gesture space, such
that instructors and learners produced gestures within the same physical space
on the table top. These examples all parallel the use of shared space in ASL
depicted in Figure 15.4c.
Finally, the use of shared space is independent of the spatial format used to
describe an environment. All of the examples in this study involved diagram-
matic space, but it is also possible for viewer space to be shared. For example,
suppose a speaker uses viewer space to describe where she wants to place a
new sofa in her living room (i.e. the spatial description is as if she were in the
room). Her addressee may refer to the sofa by pointing to its associated location
in the speaker’s space, for example, signing the equivalent of, ‘No, move it over
toward that side of the room.’

15.5 Summary and conclusions


The results of the studies discussed here reveal the effects of modality on spatial
language in several ways. First, the spatialization of linguistic expression in
ASL affects the nature of language comprehension by requiring an addressee to
perform a mental transformation of the linguistic space under certain conditions.
Specifically, when a speaker describes a nonpresent environment, an addressee
facing the speaker needs to understand that a location observed on his or her
right is actually located to the left in the described environment. The spatial
transformation in which locations in the speaker’s signing space are “reversed”
by an addressee is simply not required for understanding spoken language
descriptions. Second, results from Emmorey and Falgier (1999) suggest that
language modality may influence choice of spatial perspective. The ease with
which diagrammatic space can express information learned from a map may
explain the preference of ASL signers for spatial descriptions with a survey
perspective. In contrast, nothing about the linguistic structure of English leads
to a preference for a route or a survey perspective when the environment has
been learned from a map. Rather, English speakers were more influenced by the
nature of the environment, preferring a route description for the environment
that contained a single path and landmarks of similar size (Taylor and Tversky
1992; Emmorey, Tversky, and Taylor 2000).
Finally, unlike English speakers, ASL signers can use shared space, rather
than adopt their addressee’s point of view. English speakers generally need to
take into account where their addressee is located because spatial descriptions
are most often given from the addressee’s point of view (‘It’s on your left’
or ‘It’s in front of you’). In ASL, there may be no true speaker or addressee
viewpoint, particularly when the signers are discussing a jointly viewed scene
(as illustrated in Figure 15.2). Furthermore, addressees can refer directly to
locations in the speaker’s space (as illustrated in Figure 15.4c). When shared
space is used, speakers and addressees can refer to the same locations in signing
space, regardless of the position of the addressee. Thus, the interface between
language and visual perception (how we talk about what we see) has an added
dimension for signers (they also see what they talk about). That is, signers see
(rather than hear) spatial descriptions, and there is a schematic isomorphism
between aspects of the linguistic signal (the location of the hands in signing
space) and aspects of the spatial scene described (the location of objects in the
described space). Signers must integrate a visually observed linguistic signal
with a visually observed environment or a visual image of the described envi-
ronment. The studies discussed here represent an attempt to understand how
signers accomplish this task.

Acknowledgments
This work was supported in part by a grant from the National Institutes of
Health (NICHD RO1-13249) and from the National Science Foundation (SBR-
9809002). I thank Robin Thompson and Melissa Herzig for help in data analysis,
and I thank Richard Meier, Elisabeth Engberg-Pedersen, and an anonymous
reviewer for helpful comments on this chapter.

15.6 References
Emmorey, Karen. 1998. Some consequences of using signing space to represent phys-
ical space. Keynote address at the Theoretical Issues in Sign Language Research
meeting, November, Washington, DC.
Emmorey, Karen. 2002. Language, cognition, and brain: Insights from sign language
research. Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, Karen. In press. Perspectives on classifier constructions in signed languages.
Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, Karen, and Brenda Falgier. 1999. Talking about space with space: Describing
environments in ASL. In Storytelling and conversations: Discourse in Deaf com-
munities, ed. Elizabeth A. Winston, 3–26. Washington, DC: Gallaudet University
Press.
Emmorey, Karen, and Melissa Herzig. In press. Categorical versus analogue properties
of classifier constructions in ASL. In Perspectives on classifier constructions in
sign languages, ed. Karen Emmorey. Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, Karen, Edward S. Klima, and Gregory Hickok. 1998. Mental rotation within
linguistic and nonlinguistic domains in users of American Sign Language. Cogni-
tion 68:221–246.
Emmorey, Karen, and Barbara Tversky. In press. Spatial perspective in ASL. Sign Language and Linguistics.
Emmorey, Karen, Barbara Tversky, and Holly A. Taylor. 2000. Using space to describe
space: Perspective in speech, sign, and gesture. Spatial Cognition and Computation
2:157–180.
Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language: The semantics
and morphosyntax of the use of space in a visual language. International studies on
sign language research and communication of the deaf, Vol. 19. Hamburg: Signum-
Verlag.
Furuyama, Nobuhiro. 2000. Gestural interaction between the instructor and the learner
in origami instruction. In Language and gesture, ed. David McNeill, 99–117.
Cambridge: Cambridge University Press.
Liddell, Scott. 1994. Tokens and surrogates. In Perspectives on sign language struc-
ture: Papers from the 5th International Symposium on Sign Language Research,
Vol. 1, ed. Inger Ahlgren, Brita Bergman, and Mary Brennan, 105–19. Durham:
The International Sign Language Association, University of Durham.
Liddell, Scott. 1995. Real, surrogate, and token space: Grammatical consequences in
ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly, 19–41.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Linde, Charlotte, and William Labov. 1975. Spatial networks as a site for the study of
language and thought. Language 51:924–939.
Mainwaring, Scott, Barbara Tversky, and Diane J. Schiano. 1996. Perspective choice
in spatial descriptions. IRC Technical Report, 1996–06. Palo Alto, CA: Interval
Research Corporation.
Masataka, Nobuo. 1995. Absence of mirror-reversal tendency in cutaneous pattern per-
ception and acquisition of a signed language in deaf children. Journal of Develop-
mental Psychology 13:97–106.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.
Özyürek, Asli. 2000. The influence of addressee location on speaker’s spatial language
and representational gestures of direction. In Language and gesture, ed. David
McNeill, 64–83. Cambridge: Cambridge University Press.
Schick, Brenda. 1990. Classifier predicates in American Sign Language. International Journal of Sign Linguistics 1:15–40.
Schober, Michael F. 1993. Spatial perspective-taking in conversation. Cognition 47:
1–24.
Supalla, Ted. 1982. Structure and acquisition of verbs of motion and location in American
Sign Language. Doctoral dissertation, University of California, San Diego, CA.
Talmy, Leonard. 2000. How language structures space. Toward a cognitive semantics,
Vol. 1: Concept structuring systems. Cambridge, MA: MIT Press.
Taylor, Holly A., and Barbara Tversky. 1992. Spatial mental models derived from survey
and route descriptions. Journal of Memory and Language 31:261–292.
Taylor, Holly A., and Barbara Tversky. 1996. Perspective in spatial descriptions. Journal
of Memory and Language 35:371–391.
16 The effects of modality on BSL development in an exceptional learner

Gary Morgan, Neil Smith, Ianthi Tsimpli, and Bencie Woll

16.1 Introduction
This chapter reports on the findings of an experiment into the learning of British
Sign Language (BSL) by Christopher, a linguistic savant, and a control group
of talented second language learners. The results from tests of comprehension
and production of morphology and syntax, together with observations of his
conversational abilities and judgments of grammaticality, indicate that despite
his dyspraxia and visuo-spatial impairments, Christopher approaches the task
of learning BSL in a way largely comparable to that in which he has learned
spoken languages. However, his learning of BSL is not uniformly successful.
Although Christopher approaches BSL as linguistic input, rather than purely
visuo-spatial information, he fails to learn completely those parts of BSL for
which an intact nonlinguistic visuo-spatial domain is required (e.g. the BSL
classifier system). The unevenness of his learning supports the view that only
some parts of language are modality-free.
Accordingly, this case illuminates crossmodality issues, in particular, the
relationship of sign language structures and visuo-spatial skills. By exploring
features of Christopher’s signing and comparing it to normal sign learners,
new insights can be gained into linguistic structures on the one hand and the
cognitive prerequisites for the processing of signed language on the other.
In earlier work (see Smith and Tsimpli 1995 and references therein; also
Tsimpli and Smith 1995; 1998; Smith 1996; Smith and Tsimpli 1996; 1997;
Morgan, Smith, Tsimpli, and Woll 2002), we have documented the unique
language learning abilities of a polyglot savant, Christopher (date of birth:
January 6, 1962). Christopher exhibits a striking dissociation between his lin-
guistic and nonlinguistic abilities. Despite living in sheltered accommodation
because his limited cognitive abilities make him unable to look after himself,
Christopher can read, write, translate, and speak (with varying degrees of flu-
ency) some 20 to 25 languages. This linguistic ability is in sharp contrast with
his general intellectual and physical impairments. Due to a limb apraxia (a mo-
tor disorder which makes the articulation of planned movements of the arms and
hands difficult or impossible), he has difficulty with everyday activities such as


shaving, doing up buttons, cutting his fingernails, or hanging cups on hooks.


Apraxia is tied to damage to cortical regions that send input to the primary
motor cortex (Kimura 1993).
Additionally, Christopher has a visuo-spatial deficit, which makes finding
his way around difficult. Although Christopher is quite shortsighted and (prob-
ably) astigmatic, his prowess at comprehension of BSL fingerspelling shows
that this condition has minimal effect on his ability to understand sign. Fin-
gerspelling is made up of small, rapid movements of the fingers and hands, in
a relatively restricted space. Christopher was almost perfect in his recognition
of fingerspelled names produced at normal signing speed, indicating that he
should be able to see the details of normal signing without difficulty. Lastly,
Christopher presents some of the key features of social communication deficit
associated with autism: he avoids engagement with his interlocutor, preferring
instead to use single words or prepared sentences, avoids eye contact and of-
ten understands only the “literal” meaning of a conversational exchange (Smith
and Tsimpli 1995; Tsimpli and Smith 1998). In this chapter we deal specifically
with the linguistic aspects of Christopher’s learning of BSL while making note
of the influence of his limb apraxia and autism. We explore in more detail the
role of apraxia and autism in his learning of BSL in Morgan, Smith, Tsimpli
and Woll (in preparation a).
Apart from the dissociation between his “verbal” and “performance” abili-
ties, Christopher also shows marked dissociations within his linguistic talent.
His acquisition of the morphology and lexicon of new languages is extremely
rapid and proficient, whereas his acquisition of syntactic patterns different from
his first language appears to reach a plateau beyond which he is unable to pro-
ceed. Smith and Tsimpli (1995) have argued that this asymmetry reflects the
distinction between those aspects of language acquisition which involve param-
eter setting and those which are dependent on either nonparametrized parts of
Universal Grammar (UG) or on the central system(s). In a Fodorian framework
(see Fodor 1983), human cognition is divided among a number of modular input
systems, corresponding to the senses and language, and the nonmodular central
system, responsible for rational behavior, puzzle-solving and the “fixation of
belief.” Whereas Fodor himself is sceptical about the possibility of any scientific
investigation of the central system, we have argued (Tsimpli and Smith 1998)
that it too is structured, consisting of a number of “quasi-modules,” for theory
of mind, music, moral judgment, etc. The language faculty has both modular
and quasi-modular properties. Parameter re-setting is taken to be impossible
(see also Tsimpli and Smith 1991), but the role of UG in second language
acquisition is otherwise pervasive.
The dissociations already documented suggest that BSL should provide an
interesting test arena for Christopher: will his linguistic prowess compensate
for his visuo-motor deficits in the context of a signed language, or will these
disabilities preclude his acquisition of BSL? Assuming that he displays some


ability to learn BSL, will his mastery of the language show the same linguistic
asymmetries as are seen in his spoken languages?

16.2 The challenge for Christopher


The most obvious difference between BSL and the other languages Christopher
has encountered is the modality in which it is produced. Signs are articulated
through co-ordinated limb, torso, head, and facial movements in complex spa-
tial arrays and, as communication is necessarily face to face, looking at the
interlocutor while he or she is signing is the only means of access to linguistic
information. In both production and perception, signers have to make use of
configurations of movements and spatial information, and they have to be aware
of their interlocutor’s visual attention.
As we shall see, basic perceptual and articulatory processes, as well as higher-
order ones (morphological, syntactic, and semantic and even paralinguistic),
are integrated in the performance of normal signers of BSL, in that all of them
involve the necessity of looking at the face and movements of the interlocutor to
receive linguistic information (for a comparable description of American Sign
Language, see Neidle et al. 2000). Accordingly, BSL provides Christopher with
a new challenge, as it combines several aspects of behavior with which he has
severe problems in the nonlinguistic domain with these behaviors now recruited
for linguistic and communicative functions.
A less obvious, but crucial, consideration is that learners of BSL (or any
signed language) are faced with the fact that it has no commonly used written
script. Except for his native first language, English, all of Christopher’s previous
languages have been taught and acquired, at least in part, on the basis of a written
input, using books, newspapers, and grammars. Even in English, the written
word constitutes a major part of the input to him, and it is clear that he is obsessed
with the written word, sometimes to the exclusion of spoken language. This lack
of a written system constituted a major hurdle for Christopher to clear, before
he could get properly to grips with the intricacies of the new grammar.1
Against this background we made the following predictions. It is clear that
BSL combines properties that should make it simultaneously both congenial and
uncongenial for him. On the one hand, it exemplifies the domain of Christopher’s
obsessional talent: it is a natural language with all the usual properties of natural
languages. On the other hand, it exploits the visuo-spatial medium which causes
Christopher such difficulty in performing everyday tasks. On the basis of the
1 We have more recently attempted to teach BSL to Christopher through the Sutton Sign Writing
system (see Gangel-Vasquez 1997). Up to the writing of this paper he has looked favorably
at this method of recording signs, but has found it difficult to reproduce the necessary spatial
organization of symbols. We are continuing in this endeavor.
past work looking at Christopher, we expected that his linguistic talent would
outweigh the disadvantages of the medium, and that his ability in BSL would
mirror his mixed abilities in spoken languages: that is, he would make extremely
rapid initial progress, his mastery of the morphology and vocabulary would be
excellent, and that he would have significant difficulty with those syntactic
properties that differentiate BSL from spoken English.
As well as teaching BSL to Christopher we also taught BSL to a comparator
group of 40 talented second language learners, volunteer undergraduate students
at UCL and City University, London. Their ages ranged between 18 and 30 years
and there were 30 females and 10 males. They were assessed as having a level
of fluency in a second language (learnt after 11 years of age) sufficient to begin
a first-year degree course at university in one of French, Spanish, or German.
The group was taught the same BSL curriculum as Christopher using the same
teaching methods. We do not discuss this comparison in depth here (for more
details, see Morgan, Smith, Tsimpli and Woll 2002) but occasionally refer to
test scores as a guide to the degree to which Christopher can be regarded as a
normal sign learner.

16.3 Christopher’s psycholinguistic profile


Christopher scores relatively low on measures of nonverbal (performance) in-
telligence, as opposed to measures of verbal intelligence. This is indicated
explicitly in Table 16.1, where the different figures show his performance on
different occasions (the average normal score is in each case 100). There is no
consensus on what exactly these tests of nonverbal intelligence actually tap,

Table 16.1 Christopher's performance in five nonverbal (performance) intelligence tests

Test                                                                  Score (average normal score: 100)
Raven's Matrices (administered at ages 14 and 32)                     75; 76
Wechsler Scale: WISC-R, UK (administered at age 13.8)                 42 (performance); 89 (verbal)
Wechsler Adult Intelligence Scale (administered at age 27.2)          52 (performance); 98 (verbal)
Columbia Greystone Mental Maturity Scale (administered at age 29.2)   56
Goodenough Draw a Man Test (administered at ages 14 and 32)           40; 63

Source: Morgan et al. 2002



Table 16.2 Christopher's performance in two face recognition tests

Test                                                             Score
Benton Facial Recognition Test (administered at age 33)          Corrected Long Form Score: 27
Warrington Face/Word Recognition Test (administered at age 34)   Faces: 27/50; Words: 48/50

but common skills across these tests involve the ability to visualize how ab-
stract spatial patterns change from different perspectives, to co-ordinate spatial
locations in topographic maps, and to hold these abstract spatial patterns in
nonverbal short-term memory.
Unlike, for instance, individuals with Williams syndrome, Christopher is ex-
tremely poor at face recognition, as shown by the results in Table 16.2. On
the Benton test (Benton et al. 1983), a normal score would be between 41 and
54, and anything below 37 is “severely impaired.” On the Warrington (1984)
face/word recognition test, he scored at the 75th percentile on words, with 48 out
of 50 correct responses, but on faces his performance was too poor to be eval-
uated in comparison with any of the established norms.
The preference for the “verbal” manifest in these data is reinforced by two
other sets of results. First, in a multilingual version of the Peabody Picture Vo-
cabulary Test, administered at age 28 (O’Connor and Hermelin 1991), Christo-
pher scored as shown in (1).
(1) English 121; German 114; French 110; Spanish 89
Second, in a variant of the Gollin figures test (Smith and Tsimpli 1995:8–
12) he was strikingly better at identifying words than objects. In this test, the
subject is presented with approximations to different kinds of representation:
either words or objects. The stimuli were presented in the form of a computer
print-out over about 20 stages. At the first stage there was minimal information
(approximately 6 percent), rendering the stimulus essentially unrecognizable.
Succeeding stimuli increased the amount of information monotonically until,
at the final stage, the representation was complete. The test was administered
to Christopher and 15 controls. Christopher was by far the worst on object
recognition, but second best on word recognition (for details, see Smith and
Tsimpli 1995:Appendix 1).
While no formal diagnosis has been made clinically, it is reasonably clear that
Christopher is on the autistic continuum: he fails some, but not all, false-belief
tasks, and he has some of the characteristic social manifestations of autism.
He typically avoids eye contact, fails to initiate conversational exchanges, and
is generally monosyllabic in spontaneous conversation. (For discussion, see


Smith and Tsimpli 1995, and especially Tsimpli and Smith 1998.)

16.4 Apraxia
On the basis of two initial apraxia batteries (Kimura 1982) and an adapta-
tion of the Boston Diagnostic Aphasia Examination (Goodglass and Kaplan
1983) it appears that Christopher has a severe apraxia involving the production
of planned movements of the limbs when copying nonrepresentational move-
ments. He scored 29 percent correct on the Kimura 3-movement copying test,
where anything below 70 percent is considered apraxic.
This limb apraxia contrasts with his normal performance in the comprehen-
sion and production of meaningful gestures. A version of the BDAE designed
for signing subjects (described in Poizner et al. 1987) was carried out during the
second period of Christopher’s exposure to BSL (after four formal classes), and
he correctly produced 12 of 13 test items: that is, he is within normal limits for
controls (as reported in Poizner et al. 1987:168). When requested to demon-
strate a sneeze, or how to wave ‘goodbye,’ or how to cut meat, Christopher
responded without difficulty, although some of his responses were somewhat
strange. For example, he indicated ‘attracting a dog’ by beckoning with his
finger; for ‘starting a car’ and ‘cleaning out a dish’ he used the BSL signs for
CAR and COOK, instead of imitating the turning of an ignition key or the
wiping of an imaginary vessel with an imaginary cloth. Christopher produced
more conventional gestures for these items when told not to sign. Apart from
this interference, the only test item Christopher failed outright was ‘move your
eyes up.’ As well as producing simple gestures he has normal comprehension
of these gestures when produced by another person.

16.5 BSL learning

16.5.1 Input
A deaf native signing BSL tutor taught Christopher a conventional BSL class
once a month, concentrating on the core grammatical properties of the lan-
guage: the lexicon, negation, verb agreement, questions, topicalization, as well
as aspectual morphology, classifier constructions, nonmanual modifiers, and
spatial location setting. Over eight months there were about 12 hours of formal
teaching. This formal teaching was supplemented by conversation with a deaf
native signer, who went over the same material in a less pedagogic context
between classes. The total amount of BSL contact was therefore 24 hours. All
classes and conversation classes were filmed on video tape.
The 24 hours of BSL exposure were divided for the purposes of analysis
into five periods: four of 5 hours each and a fifth of 4 hours. Each period was
approximately 6–7 weeks in duration. After each subject area of BSL had been
taught we assessed Christopher’s progress before increasing the complexity of
the material he was exposed to.
Christopher’s uptake of BSL was assessed in each area, using translation
tasks from BSL to English and from English to BSL, as well as analysis of
spontaneous and elicited use of sign. In addition, we carried out a variety of
tests of Christopher’s general cognitive abilities. This battery of assessment and
observational data are used to describe the development of his communicative
behavior, on the one hand, and his acquisition of linguistic knowledge, on the
other.

16.6 Results of Christopher’s learning of BSL2


At the beginning of the learning period, Christopher reported that he knew
some signing. When questioned further, this turned out to be letters from the
manual alphabet, which he claimed to have learnt from deaf people. On his
first exposure to BSL in the study, Christopher already manifested a number
of behaviors in his production and reception of signs that mark him out as an
atypical learner. The most striking of these were his imitation of signs without
understanding them and avoidance of direct eye contact with the signers around
him. This sign imitation reduced over the learning period but did not disappear.
As mentioned above, Christopher’s conversation in spoken languages tends
to be brief, indeed monosyllabic, and somewhat inconsequential, but there is
rarely if ever any imitation of meaningless spoken or oral gesture. Nor does
Christopher manifest echopraxia of speech.
In the first hours of exposure to BSL an interesting anomaly appeared.
Christopher was very keen to communicate with the BSL tutor through sponta-
neously produced non-BSL gestures to describe objects and concepts presented
to him in spoken English. For example, in attempting to represent the word ‘live’
(dwell) he tried to trace the outline of a large house. For the word ‘speak’ he
touched his own teeth. His spontaneous attempt to mime or gesture is surprising,

2 Signed sentences that appear in the text follow standard notation conventions. Signs are repre-
sented by upper-case English glosses. Where more than one English word is needed to gloss
a sign, this is indicated through hyphens e.g. FALL-FROM-HEIGHT ‘the person fell all the
way down.’ When the verb is inflected for person agreement, subject and indirect object are
marked with subscripted numbers indicating person e.g. 3 EXPLAIN2 ‘he explains it to you.’
Lower-case hyphenated glosses indicate a fingerspelled word e.g. g-a-r-y. Repetition of signs is
marked by ‘+,’ and ‘IX’ is a pointing sign. Subscripted letters indicate locations in sign space.
Nonmanual markers such as headshakes (hs) or brow-raised (br), and topics (t) are indicated
by a horizontal line above the manual segment(s). When specific handshapes are referred to we
use standard Stokoe notation e.g. ‘5 hand’ or ‘bent V.’
as it contrasts markedly with the absence of gestural behavior when he speaks. It


also contrasts with his later difficulty in inference-making when learning iconic
signs (see below).

16.6.1 Lexical development


Christopher made significant progress in his comprehension and production of
signs throughout the investigation. Unlike subjects with psychological profiles
comparable to Christopher’s (e.g. the autistic signer, Judith M., reported in
Poizner et al. 1987), Christopher showed no preference for particular classes of
lexical items. Like Judith M., however, Christopher used ‘fillers’ or nonsense
articulations, consisting of openings and closings of the hands, in his first period
of exposure to sign.
As well as comprehension and production tests, we carried out sign-recall
tests to enable the evaluation of Christopher’s memory for new signs. His sign
tutor showed him vocabulary items along with corresponding pictures or written
words. The following week, he was asked to recall the signs by pointing correctly
to the corresponding picture, and he was generally successful. Christopher’s
comprehension of signs in connected discourse, however, was less successful.
Compared to the comparator group, Christopher was as good at recalling single
signs as several other subjects, but performed significantly worse than the other
learners in his general comprehension of signed sentences. This single sign
comprehension ability was quite striking, especially in comparison with his
general disability in producing the fine details of signs. In contrast with his
relatively intelligible gross gestures (e.g. holding his palm out to produce the
sign for FIVE, or moving his arms apart in a horizontal arc with the palms facing
down to produce the sign TABLE), his articulation of small movements of the
hands and wrists was impaired, presumably due to his limb apraxia. Across the
learning period his developing ability to recognize and produce single signs
was matched by a significant increase in the internal complexity of the signs
he could use, where this complexity is defined in terms of the formational
properties of the signs concerned. For example, gross handshapes became finer
(e.g. distinctions appeared between the signs for the numbers ONE, THREE,
and FIVE), and movements became more constrained (initially his sign for
BOOK was produced with open arms, seemingly producing a newspaper sized
book, but subsequently this sign became smaller with his greater distalization
of movement).
Across the learning period, idiosyncrasies in his signs became more intelligi-
ble (e.g. his sign for WOMAN was produced by moving the index finger down
his contralateral cheek, rather than on the ipsilateral side). These movement dif-
ficulties were of a greater degree than the articulation difficulties experienced
by normal sign learners in hand co-ordination. Part of Christopher’s difficulties
may be attributable to the difficulty he experiences in integrating linguistic and


encyclopaedic knowledge. In learning new vocabulary, adult students of BSL
may be helped if there is a potential inferential link between a sign and its
meaning, where this link could be based on some visual property, such as the
size and shape of an object, or a gestural/facial expression linked to an emo-
tion or activity. Such linking, however, would require access to intact world
knowledge, and presuppose some access to iconicity.
In order to test whether iconicity might be a significant determinant of
Christopher’s ability to master signs, we tested his identification of iconic
vs. semi-iconic and non-iconic signs. During the second period of exposure to
BSL Christopher and the comparator subjects were presented with
30 signs, repeated once, and asked to write an equivalent in English. The
signs had been rated in previous research as “iconic” (transparent), “semi-
iconic” (translucent), and “non-iconic” (opaque). None of the signs had been
used in previous sign classes. Although their overall performance as shown
in Table 16.3 is comparable, Christopher’s incorrect responses to the iconic
and semi-iconic signs were markedly different to those of the normal
learners.
Some non-iconic signs were translated by Christopher as nonsymbolic equiv-
alents. For example, he translated SISTER (made by a curved index finger
touching the bridge of the nose) as ‘rub your nose’; and he translated the
semi-iconic sign MIRROR (made by an open hand twisted quickly with the
palm facing the face) as ‘wave your hand.’ It seems then that Christopher
was in some sense tied to a nonsymbolic interpretation when confronted by
abstract form–meaning relations (for a discussion of his interpretation of pre-
tend play, see Smith and Tsimpli 1996). This had subsequent effects in his
late learning of more complex sign constructions. Confronted with a consid-
erable amount of iconicity in BSL, adult learners of BSL and other signed
languages characteristically use visual inference in their learning of sign vo-
cabulary (see Pizzuto and Volterra 2000), but Christopher, in comparison, seems
not to.

Table 16.3 Test of identification of iconic vs. semi-iconic and non-iconic signs

                          Christopher   Comparator group (mean score)
Iconic                    5/10          8/10
Semi-iconic               2/10          0/10
Non-iconic                0/10          0/10
Mean (as a percentage)    23.3          26.7

16.6.2 Morphosyntax
16.6.2.1 Negation. There are four main markers of negation in BSL:
• facial action;
• head movement;
• manual negation signs; and
• signs with negation incorporated in them (Sutton-Spence and Woll 1999).
Each marker can occur in conjunction with the others, and facial action can
vary in intensity. Christopher identified the use of headshakes early on in his
exposure to BSL, but he had extreme difficulty in producing a headshake in
combination with a sign. In Period 1 of exposure Christopher separated the two
components out and often produced a headshake at the end of the sign utterance.
In fact, as was ascertained in the apraxia tests, Christopher has major difficulty
in producing a headshake at all. A typical early example of his use of negation
is given in (2) and (3).
t br
(2) Target: NIGHT SIGN CAN YOU
‘Can you sign in the dark?’
hs
(3) Christopher: NIGHT SIGN ME
‘I sign in the dark no’

Christopher became increasingly more able to produce a headshake while using


a manual sign, but we observed in Period 3 that he often used a sequential marker
of negation when signing spontaneously. Rather than shaking his head at the
same time as the negated sign, the headshake was mostly produced at the end
of the sentence after the manual components. Occasionally Christopher was
observed to use the marker between the subject and the verb:
hs
(4) Christopher: ME LIKE
‘I do not like’
These patterns can also be argued to represent a re-analysis of the negation sign
into a linguistic category which is not BSL-like, but is part of UG. If Christopher
had assigned the negation morpheme morphological status, he would tend to
use a sequential rather than a concurrent representation. In experimental tests
of his understanding of negation, Christopher performed at a level comparable
with that of the other learners of BSL as shown in the first two columns of
Figure 16.1. The figure shows the results of six tests of BSL comprehension.
Negation 1 and Agreement 1 are tests of signed sentence comprehension through
Sign to English sentence matching, while Negation 2 and Agreement 2 are tests
of grammaticality judgment. Classifier 1 is a signed sentence to picture match
test, and Classifier 2 is a signed sentence to written English sentence match. Comparator group scores are also included.

[Figure 16.1 here: bar chart of percent correct scores (0–100%) for Christopher and the mean comparator group in each of six test domains: Negation 1, Agreement 1, Classifier 1, Negation 2, Agreement 2, and Classifier 2]
Figure 16.1 Assessments of comprehension across BSL grammar tests: Christopher and mean comparator scores
In the test of comprehension of the headshake marker (Negation 1), Christo-
pher scored 93 percent correct (chance = 50 percent). The comparator group
scored between 86 percent and 100 percent, mean = 97 percent, SD = 4.8
percent. These scores were significantly above chance for both groups. There
was no statistical difference between Christopher's and the comparator group's scores.
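The chapter does not name the statistical procedure behind these single-case versus group comparisons. A standard option for testing whether an individual's score departs reliably from a small control sample is Crawford and Howell's (1998) modified t-test; the sketch below is my own illustration of how the Negation 1 figures could be checked, with both the choice of test and the control sample size of 40 assumed rather than taken from the text.

from math import sqrt
from scipy import stats

def single_case_t(case_score, ctrl_mean, ctrl_sd, n_ctrl):
    """Crawford-Howell modified t-test: compares one case's score with a
    small control sample of size n_ctrl, treating the controls as a sample
    rather than a population."""
    t = (case_score - ctrl_mean) / (ctrl_sd * sqrt((n_ctrl + 1) / n_ctrl))
    p = 2 * stats.t.sf(abs(t), df=n_ctrl - 1)  # two-tailed p-value
    return t, p

# Negation 1 figures from the text; n_ctrl = 40 is an assumption (the chapter
# does not report how many of the 40 learners completed each test).
t, p = single_case_t(case_score=93, ctrl_mean=97, ctrl_sd=4.8, n_ctrl=40)
print("t(39) = %.2f, p = %.3f" % (t, p))  # about t = -0.82, p = .41: no reliable difference

On these assumed figures the case-control difference is far from significance, consistent with the chapter's report of no statistical difference.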
A grammaticality judgment test of comprehension of negation through mor-
phological incorporation (Negation 2) was also carried out. BSL, like ASL, has
a set of verbs that can be negated through a regular morphological modifica-
tion (Sutton-Spence and Woll 1999; Neidle, Kegl, MacLaughlin, Bahan, and
Lee 2000). Signs with stative meaning such as WANT, HAVE, KNOW, and
BELIEVE can be negated through movement and opening of the hand away
from the body, while the location of the sign stays the same. In order to recog-
nize the ungrammatical element, subjects had to identify a sign that does not
take incorporated negation (e.g. EAT) in a short signed sentence. The ungram-
matical signs were produced with the regular morphological modification of
negation. On this test Christopher scored 60 percent correct (chance 50 percent),
the comparator group between 30 percent and 80 percent, mean = 57 percent,
SD = 15.3 percent. There was no statistical difference between Christopher's and the comparator group's scores.
The overall use of negation across the exposure period in Christopher’s spon-
taneous signing is summarized in Table 16.4. Across the five learning periods
Table 16.4 Use of negation markers across learning period: Types, tokens, and ungrammatical use

Negation                                   Period 1   Period 2   Period 3   Period 4   Period 5
Types of negation                          4          4          2          4          3
Total tokens                               13         29         6          57         28
Percentage ungrammatical (occurrences)     7.7 (1)    24 (7)     50 (3)     1.7 (1)    7 (2)

Christopher displayed productive knowledge of the negation system in BSL,


producing many more grammatical than ungrammatical tokens, with several
different types of negation markers.

16.6.2.2 Verb agreement morphology. There are three basic classes of verbs in BSL:
• plain verbs, which can be modified to show manner, aspect, and the class of direct object;
• agreement verbs, which can be modified to show manner, aspect, person, number, and class of direct object; and
• spatial verbs, which can be modified to show manner, aspect, and location (Sutton-Spence and Woll 1999).
Here we concentrate on Christopher’s mastery of the rules of verb agreement
morphology. Verbs such as ASK, GIVE, TELL, TELEPHONE, and TEASE
in BSL can include morphosyntactic information either through movement
between indexed locations in sign space or between the signer and shifted
reference points in the context of role shift. In Figure 16.2 the signer moves
the sign ASK from a location on her right, previously indexed for the NP
‘a man,’ toward a location on her left, previously indexed for the NP ‘a woman’

Figure 16.2 ‘(He) asks (her)’



Figure 16.3 ‘I like (her)’

(the signer is left-handed). Moving a plain verb between indexed locations is
ungrammatical, as in Figure 16.3 where the signer moves the sign LIKE toward
a location previously indexed for the NP ‘a woman.’
Verb agreement morphology in BSL is fairly restricted, being used only with
transitive verbs that express an event. When Christopher first observed signers
using indexed locations he seemed to treat this as deictic reference. He looked
in the direction of the point for something that the point had referred to. He did
not use indexing or spatial locations himself; whenever possible, he used a real
world location. In Period 1 he used uninflected verb forms when copying
sentences.

(5) Target: g-a-r-y 3 EXPLAIN2 YOU


(verb inflection moves from third person to second person)
‘Gary explains to you’
(6) Christopher: g-a-r-y EXPLAIN YOU
(no verb inflection; the sign is the citation form)
‘Gary explain you’
When he first used agreement verbs he had persistent problems in reversing
the direction of the verb’s movement to preserve the meaning, copying the real
world trajectory of the verb. Thus, when asked to repeat the sentence:
(7) 1 TELEPHONE2 ‘I telephone you’
Christopher at first moved the verb inflection in the same direction as he had
just seen it move, i.e. toward himself. His repetition therefore looked like:
(8) 2 TELEPHONE1 ‘you telephone me’.
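The logic of this error can be sketched in a few lines of code (my own simplification, not the chapter's analysis): if an agreement verb token is modeled as a pair of person loci, copying the observed real-world path from a facing model swaps those loci and thereby inverts who does what to whom.

# Sketch (my own simplification): an agreement verb token is a
# (subject_locus, object_locus) pair, with 1 = the signer, 2 = the addressee.

def copy_observed_path(verb_token):
    """Reproduce the movement along the same real-world path. For facing
    signers, the model's locus 1 is the copier's locus 2 and vice versa,
    so a verbatim copy exchanges the grammatical roles."""
    swap = {1: 2, 2: 1}
    subject, object_ = verb_token
    return (swap[subject], swap[object_])

def reverse_to_preserve_meaning(verb_token):
    """The target repetition keeps the same person loci, which requires
    reversing the movement's real-world direction."""
    return verb_token

model = (1, 2)                              # 1TELEPHONE2, 'I telephone you'
assert copy_observed_path(model) == (2, 1)  # 2TELEPHONE1, 'you telephone me'
assert reverse_to_preserve_meaning(model) == (1, 2)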
These reversal errors have been described in 3- to 5-year-old children acquiring
signed language (e.g. Petitto 1987). In contrast, errors in copying the direction of
verb agreement were minimal in the comparator group. Christopher’s difficulty


was largely resolved by Period 5 although there were still occasional examples
of the error in his spontaneous productions.
By Period 5 (after eight months of exposure), Christopher spontaneously
produced correct simple directional affixes on verbs for present referents, in-
dicating that he could reverse the direction of verb movements to preserve
meanings.
brow raise
(9) Target: 2 HELP 1 ‘Will you help me?’
(10) Christopher: 1 HELP 2 ‘Yes I’ll help you.’
However, throughout the learning period, he was unable to use sign space to
set up stable spatial locations for nonpresent subjects and objects. Instead, he
used the real locations of present persons and objects, and avoided any use of
sign space to assign syntactic locations for nonpresent referents.
When real world referents were used as locations for the start or end points
of verb signs, Christopher managed to produce some inflections e.g. the third to
second person location, indicating that in his production he was at least aware
of the distinction between a plain verb and one inflected to agree with a location
at the side of sign space. Although in Christopher’s spontaneous signing there
were very few examples of verb agreement for nonpresent referents, he did
display a level of comprehension comparable to that of the comparator group.
In the tests of comprehension of verb agreement, Christopher scored 60
percent correct (chance was 50 percent) in the simpler of the two tests (Agree-
ment 1), while the comparator group scores were between 60 percent and
100 percent, mean = 79 percent, SD = 13.3 percent. Neither Christopher’s
nor the comparator group’s scores were significantly above chance. In the more
complex grammaticality judgment test (Agreement 2), he answered by alter-
nating between grammatical and ungrammatical replies, indicating that he did
not understand the task. He scored at chance (50 percent) while the compara-
tor group scored between 40 percent and 100 percent, mean = 58.3 percent,
SD = 16.6 percent. Again, neither set of scores was significantly above chance.
In a separate translation test he failed to translate any of six BSL sentences
using person agreement morphology into English. The errors were characteristic
of his translations as reported for spoken language (Smith and Tsimpli 1995)
when trying to deal with a task involving high cognitive load (online consecutive
translation). He characteristically made errors based on phonological similarity;
e.g. in one sentence he substituted the verb ASK for the sign TEA as the signs
share place of articulation and handshape.
Overall Christopher’s errors in using verb agreement arise either from omit-
ting agreement (using a citation form of the verb plus pointing to the subject
and object) or from articulating the verb inflection in the wrong direction. These
are typical developmental errors in young children exposed to signed language
from infancy (e.g. Bellugi et al. 1990).

16.6.2.3 Classifiers.3 Christopher was able to copy correctly some


classifiers from his tutors’ examples, but because he also produced many errors
with classifiers it was not clear if this correct usage was productive or unana-
lyzed. For example, when copying the sentence in (11) Christopher used the
same handshape (5 hand) with both hands rather than using one hand to sign a
tall flat object (with a flat palm) and the other hand to sign a jumping person
(with a bent V).

pursed lips
(11) Target: BOY CL-bent-V-person-JUMP-OVER-CL-B-WALL
‘the boy just managed to clear the surface of the high wall’
Christopher signed only the general movement of the sentence by crossing his
hands in space; nor did he sign the ‘effortful’ manner of the jump through facial
action.

(12) Christopher: BOY hands-cross-in-space4


This difficulty may be a result of his apraxia. However, he did not attempt
to replace the marked bent V handshape with another, easier-to-produce
handshape to distinguish between the wall and the person. This error indicates
that Christopher was not using the classifier as a polymorphemic sign, and
that his correct copies were unanalyzed whole forms. Even after substantial
exposure to classifiers, Christopher preferred in his spontaneous signing to act
out some verbs like WALK, SIT, and JUMP rather than to exploit a classifier:
e.g. CL-bent-V-person.
Although Christopher found classifiers difficult in his own signing, he ap-
peared to show some understanding of their use. He was occasionally able
to pick out pictures for sentences signed to him such as ‘a person falling,’ ‘a
person walking,’ and ‘a small animal jumping.’ In order to quantify this we
carried out two tests of Christopher and the comparator group. The first test
(Classifier 1) required subjects to watch 10 signed sentences involving a clas-
sifier and then choose one of three written English sentences. For example in
one item the BSL target was ‘a line of telephones’ produced with a Y hand-
shape articulated several times in a straight line in sign space. The choices
were:
3 This part of the research is the subject of a separate paper detailing the spatial aspects of BSL
and the role of mapping in Christopher’s learning (Morgan et al. in preparation b).
4 Text in lower case in the sign gloss tier indicates that a gesture was used with a sign.
• a line of horses;
• a line of cars;
• a line of telephones.
In the second test (Classifier 2) subjects watched 10 signed sentences and then
picked a corresponding picture from four picture alternatives.
Christopher performed significantly worse on the Classifier 1 test than the
comparator group; he scored 20 percent correct (chance was 33 percent). Com-
parator group scores were between 80 percent and 100 percent, mean 89 percent,
SD = 9.9 percent. On the Classifier 2 test Christopher scored 10 percent correct
(chance was 25 percent). Comparator group scores were between 50 percent
and 100 percent, mean 72 percent, SD = 13.8 percent. There was no significant
difference between the comparator group’s scores on the Classifier 1 and Clas-
sifier 2 tests. Christopher and the mean comparator group’s scores are presented
in Figure 16.1.
The results presented in Figure 16.1 suggest that in the domains of negation
and agreement Christopher’s general comprehension and judgments of gram-
maticality are similar to other learners, but he does markedly less well than
the comparator group on the classifier tasks. Many of Christopher’s errors in
the classifier tests appeared to be random, while members of the comparator
group (when making wrong choices) seemed to use a visual similarity strategy.
For example, when matching a picture with a classifier, the comparator subjects often made choices based on a salient perceptual characteristic (roundedness, thinness, etc.), even though the movement or spatial location of the sign was not accurately processed. Christopher, on the other hand, made several choices with no such apparent strategy.

16.7 Discussion
By the final period of exposure to BSL, Christopher’s signing has greatly im-
proved, and it is at a level where he can conduct a simple conversation. In
this respect he has supported our prediction that he would find the language
accessible and satisfying in linguistic terms. From the beginning of BSL expo-
sure he has shown interest and a motivation to learn, despite the physical and
psychological hurdles he had to overcome. Christopher has learnt to use sin-
gle signs and short sentences as well as normal learners do. This supports part
of our first prediction, that he would find vocabulary learning relatively easy.
His understanding of verb morphology is comparable to that of the comparator
group, but in production the complexity of manipulating locations in sign space
is still beyond him. After eight months’ exposure he continues to use real world
objects and locations (including himself and his conversation partner) to map
out sign locations. Thus, verb morphology in BSL is markedly less well devel-
oped than in his other second languages (for example, in his learning of Berber)
438 G. Morgan, N. Smith, I. Tsimpli, and B. Woll

to which he had comparable exposure. These findings do not support our pre-
diction that he would learn BSL verb morphology quickly and easily, at least
insofar as his sign production is concerned.
In his spontaneous signing, utterance length is limited, yet he does not use
English syntax. He understands negation as well as other normal learners, al-
though in production we have seen an impact of his apraxia on the correct
co-ordination of manual and nonmanual markers. In general, in his production
there is an absence of facial grammar. We have not observed the same extent
of influence of English syntax on his BSL as we originally predicted. How-
ever, there is one domain where a difference in syntactic structure between
English and BSL may have influenced his learning. Christopher’s compre-
hension and production of classifier constructions was very limited. Although
the comparator group performed less well in classifier comprehension than in
the other linguistic tests, Christopher’s scores were significantly worse than the
comparator group only in this domain.

16.7.1 Modality effects


Christopher has learnt a new language in the signed modality less well than in the spoken modality. Christopher’s apraxia impinges on his production, but not his comprehension, of several domains of BSL. The pattern of strengths and weaknesses in his learning of BSL is similar to, as well as different from, that found in his learning of spoken languages. Our research has shown that the modality (including the use of simultaneous articulation of linguistic information in BSL) is responsible for results that partly support and partly falsify our original predictions.
His vocabulary learning was good but his mastery of verb morphology was
not. The restricted nature of verb-agreement morphology in BSL may have
made patterns harder to internalize. We believe that the absence of a written
version of BSL reduced his ability to retain in memory abstract morphological
regularities. The persistent nonpermanence of the input increased the cognitive
load for Christopher. We also suggest that exposure to a language that relies
on many visual links between form and meaning increases the importance of
iconically-based inference-making in adult learning. Christopher’s difficulty in
making these inferences based on intact world knowledge may have affected
his progress significantly.
In his rather limited sentence production we observed less of an influence of
his first language than was the case in his acquisition of other, spoken, languages
such as Berber. The contrast between the output modalities of signed and spoken language may have an inhibitory effect on transfer strategies. His difficulties in sign articulation caused him to slow his production down. In

general, the greatest influence of English in his other spoken languages is shown
when he is speaking or reading quickly.
In one domain of Christopher’s learning, there may have been a direct in-
fluence of modality. Christopher avoided the use of classifier constructions
and performed very poorly in tests of their comprehension. This may either
be attributable to the complexity of the use of space in the formation of BSL
classifiers (a modality effect), or to the inherent linguistic complexity of clas-
sifiers (a nonmodality effect). On this latter view, Christopher’s difficulty with
classifiers is simply that they encode semantic contrasts (like shape) that none
of his other languages do.
Support for the former view – that there is a modality effect – comes from his
poor performance in using sign space to map out verb agreement morphology.
Although the use of spatial locations for linguistic encoding was comprehended
to the same general degree as in the comparator group, in his sign production
the use of sign space was absent. Thus, if it is the use of sign space that is the problem, and classifiers rely on a particularly high level of sign space processing, his visuo-spatial deficits appear to impinge most on the use of this set of structures.
The analysis of Christopher’s learning of BSL reveals a dissociation be-
tween spatial mapping abilities and the use of grammatical devices that do
not exploit spatial relations. We have attempted to relate this dissociation to
the asymmetry Christopher demonstrates between his verbal and nonverbal
IQ. The general abilities needed to map spatial locations in memory, recog-
nize abstract patterns of spatial contrasts and visualize spatial relations from
different perspectives are called upon in the use of classifiers in sign space.
Christopher’s unequal achievements in different parts of his BSL learning can
then be attributed to his apraxia and visuo-spatial problems. It is clear that
certain cognitive prerequisites outside the linguistic domain are required for
spatialized aspects of BSL but there are no comparable demands in spoken
languages.
The fact that the aspects of signed language that are absent in Christopher’s signing are those that depend on spatial relations (e.g. the classifier system) suggests that the deficit is actually a generalized one, arising outside the language faculty.
In this case it might be said that underlying grammatical abilities are preserved,
but they are obscured by impairments in cognitive functions needed to encode
and decode a visuo-spatial language.
In conclusion, the dissociations between Christopher’s abilities in different
parts of the grammar provide the opportunity to explore which areas of lan-
guage are modality-free and which areas are modality-dependent, and the
extent to which signed languages differ from spoken languages in their require-
ments for access to intact, nonlinguistic processing capabilities. Differences in

Christopher’s abilities and the unevenness of his learning support the view that only some parts of language are modality-free.

Acknowledgments
Aspects of this research have been presented at the conferences for Theo-
retical Issues in Sign Language Research, Washington, DC, November 1998,
Amsterdam, July 2000 and the Linguistics Association of Great Britain meeting
at UCL, London, April 2000. We are indebted to the audiences at these venues
for their contribution. We are grateful to Frances Elton and Ann Sturdy for their
invaluable help with this project. We would also like to express our thanks to
the Leverhulme Trust who, under grant F.134AS, have supported our work on
Christopher for a number of years, and to John Carlile for helping to make it
possible. Our deepest debt is to Christopher himself and his family, who have
been unstinting in their support and co-operation.

16.8 References
Bellugi, Ursula, Diane Lillo-Martin, Lucinda O’Grady, and Karen van Hoek. 1990. The
development of spatialized syntactic mechanisms in American Sign Language. In
The Fourth International Symposium on Sign Language Research, ed. William H.
Edmonson and Fred Karlsson, 16–25. Hamburg: Signum-Verlag.
Benton, Arthur L., Kerry Hamsher, Nils R. Varney, and Otfried Spreen. 1983. Contri-
butions to neuropsychological assessment. Oxford: Oxford University Press.
Fodor, Jerry. 1983. The modularity of mind. Cambridge, MA: MIT Press.
Gangel-Vasquez, J. 1997. Literacy in Nicaraguan Sign Language: Assessing word recog-
nition skills at the Escuelita de Bluefields. Manuscript, University of California,
San Diego, CA.
Goodglass, Harold, and Edith Kaplan. 1983. The assessment of aphasia and related
disorders. Philadelphia, PA: Lea and Febiger.
Kimura, Doreen. 1982. Left-hemisphere control of oral and brachial movements and
their relation to communication. Philosophical Transactions of the Royal Society
of London ser. B, 298:135–149.
Kimura, Doreen. 1993. Neuromotor mechanisms in human communication. New York:
Oxford University Press.
Morgan, Gary, Neil V. Smith, Ianthi-Maria Tsimpli, and Bencie Woll. 2002. Language
against the odds: The learning of British Sign Language by a polyglot savant.
Journal of Linguistics 39:1–41.
Morgan, Gary, Neil V. Smith, Ianthi-Maria Tsimpli, and Bencie Woll. In preparation a.
Autism in signed language learning. Manuscript, University College London.
Morgan, Gary, Neil V. Smith, Ianthi-Maria Tsimpli, and Bencie Woll. In preparation b.
Learning to talk about space with space. Manuscript, University College
London.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan, and Robert G. Lee.
2000. The syntax of American Sign Language: Functional categories and hierar-
chical structure. Cambridge, MA: MIT Press.

O’Connor, Neil and B. Hermelin. 1991. A specific linguistic ability. American Journal
of Mental Retardation 95:673–680.
Petitto, Laura A. 1987. On the autonomy of language and gesture: Evidence from the
acquisition of personal pronouns in American Sign Language. Cognition 27:1–52.
Pizzuto, Elena, and Virginia Volterra. 2000. Iconicity and transparency in sign lan-
guages: A cross-linguistic cross-cultural view. In The signs of language revisited:
An anthology to honor Ursula Bellugi and Edward Klima, ed. Karen Emmorey and
Harlan Lane, 261–286. Mahwah, NJ: Lawrence Erlbaum Associates.
Poizner, Howard, Edward S. Klima, and Ursula Bellugi. 1987. What the hands reveal
about the brain. Cambridge, MA: MIT Press.
Smith, Neil V. 1996. A polyglot perspective on dissociation. Behavioural and Brain
Sciences 19:648.
Smith, Neil V., and Ianthi-Maria Tsimpli. 1995. The mind of a savant: Language learning
and modularity. Oxford: Blackwell.
Smith, Neil V., and Ianthi-Maria Tsimpli. 1996. Putting a banana in your ear. Glot
International 2:28.
Smith, Neil V., and Ianthi-Maria Tsimpli. 1997. Reply to Bates. International Journal
of Bilingualism 2:180–186.
Sutton-Spence, Rachel L. and Bencie Woll. 1999. The linguistics of BSL: An introduc-
tion. Cambridge: Cambridge University Press.
Tsimpli, Ianthi-Maria and Neil V. Smith. 1991. Second language learning: Evidence from
a polyglot savant. In UCL Working Papers in Linguistics 3:171–183. Department
of Phonetics and Linguistics, University College London.
Tsimpli, Ianthi-Maria and Neil V. Smith. 1995. Minds, maps and modules: Evidence
from a polyglot savant. In Working Papers in English and Applied Linguistics, 2,
1–25. Research Centre for English and Applied Linguistics, University of
Cambridge.
Tsimpli, Ianthi-Maria and Neil V. Smith. 1998. Modules and quasi-modules: Language
and theory of mind in a polyglot savant. Learning and Individual Differences
10:193–215.
Warrington, E.K. 1984. Recognition memory test. Windsor: NFER Nelson.
17 Deictic points in the visual–gestural and
tactile–gestural modalities

David Quinto-Pozos

17.1 Introduction
A Deaf-Blind person has only one channel through which conventional lan-
guage can be communicated, and that channel is touch.1 Thus, if a Deaf-Blind
person uses signed language for communication, he must place his hands on top
of the signer’s hands and follow that signer’s hands as they form various hand-
shapes and move through the signing space.2 A sign language such as American
Sign Language (ASL) that is generally perceived through vision must, in this
case, be perceived through touch.
Given that contact between the signer’s hands and the receiver’s hands is nec-
essary for the Deaf-Blind person to perceive a signed language, we may wonder
about the absence of the nonmanual signals of visual–gestural language (e.g.
eyebrow shifts, head orientation, eye gaze). These elements play a significant
role in the grammar of signed languages, often allowing for the occurrence
of various word orders and syntactic structures. One of the central questions
motivating this study was how the absence of such nonmanual elements might
influence the form that tactile-gestural language takes.
Thus, this study began as an effort to describe the signed language production
of Deaf-Blind individuals with a focus on areas where nonmanual signals would
normally be used in visual–gestural language. However, after reviewing the
narrative data from this study, it quickly became evident that the Deaf-Blind
subjects did not utilize nonmanual signals in their signed language production.
In addition, they differed from the sighted Deaf subjects in another key area: in

1 Throughout this work, the term “Deaf-Blind” is used to refer to people who have neither the hearing necessary to perceive spoken language nor the sight necessary to perceive signed language through the visual channel.
2 As mentioned, some Deaf-Blind individuals perceive tactile signed language with both hands,
but some individuals use only one hand (usually the hand used as the dominant hand in the
production of signed language), especially when conversing with interlocutors whom they know
well. Additionally, there are times when only one hand can be used for reception because of
events that are occurring in the immediate environment (i.e. the placement of individuals around
the signer and receiver(s), movement from one location to another by walking, etc.).


the use of deictic points as a referencing strategy. After further investigation, it
seemed likely that this referencing strategy is linked to the use of the nonmanual
signal eye gaze.
This study describes the use of deictic points in narratives produced by two
Deaf-Blind adults vis-à-vis the use of deictic points in narratives produced by
two Deaf sighted adults. The narratives (one per subject) were signed by each subject to each of the three other subjects in the study, which means that each subject recounted her or his narrative three times: to both sighted Deaf and Deaf-Blind interlocutors. This design made it possible to examine language production and the manner in which it may vary as a function of the interlocutor.

17.2 Signed language for the blind and sighted: Reviewing
similarities and differences
One of the most common methods of communication for Deaf-Blind individuals
in the USA involves the use of signed language that is perceived by touch
(Yarnall 1980), which I refer to as tactile signed language. In many cases –
such as those in which Deaf-Blind individuals experience the loss of sight
after the loss of hearing – tactile signed language is the adaptation of a visual
signed language to the tactile modality (Reed, Durlach, and Delhorne 1992).
The documented adaptations will be reviewed in Section 17.2.1. However,
there are also cases in which congenitally deaf and blind individuals use tactile
signed language for communication, but it is not clear if they use it in the same
manner as those who became blind after having learned conventional signed
language. Cases of congenitally Deaf-Blind individuals who use tactile signed
language are interesting because their language acquisition would necessarily
take place tactually, rather than visually. The tactile acquisition of language may
influence the structure and/or form of a language that is learned, and cause it to
differ from visual–gestural and auditory–oral languages, at least in some areas.
Research on this topic may be of fundamental importance to our understanding
of language acquisition generally, but that is not the primary focus of this
study.
One point from this section must be emphasized: The use of tactile signed
language for communication – whether by the congenitally deaf and blind or
by later-blinded individuals – is quite successful. A brief discussion of the
literature on the signed language used by Deaf-Blind people makes this clear.
This portion of the literature review accomplishes two goals: it familiarizes the
reader with comprehension studies that have been conducted on Deaf-Blind
users of signed language, and it explains several points about the unique form
of tactile signed language.

17.2.1 Communication in tactile sign language


17.2.1.1 A look at Deaf-Blind communication. Studies of tactile
signed language reception have been conducted in which subjects’ accuracy in
perceiving language tactually has been measured. The tactile modality yields
highly accurate perception of fingerspelling at naturally produced rates of about two to six letters per second (Reed, Delhorne, Durlach, and Fischer 1990).
In addition, the tactual reception of signs – like that of fingerspelling – is highly
accurate, but there are some areas where errors tend to occur. For instance,
in a study of the reception of 122 isolated signs by nine Deaf-Blind subjects,
Reed, Delhorne, Durlach, and Fischer (1995) showed that the phonological
parameter of location – which accounted for 45 percent of the one-parameter
errors of perception in their study – appears to be more difficult to perceive in
isolated signs than movement, handshape, and orientation. Yet, despite the ob-
served errors of perception, the authors reported that the nine Deaf-Blind subjects
identified isolated signs accurately between 75 percent and 98 percent of the
time.3
In the same publication, Reed et al. (1995) also examined the perception of
signs in ASL and PSE (Pidgin Sign English) sentences.4 This part of the study
yielded a different result from their examination of the perception of isolated
signs. When presented with sentences as stimuli the Deaf-Blind subjects made
more errors in the phonological parameter of handshape than in location, move-
ment, and orientation. In the end, the subjects’ accuracy for perceiving signs in
sentences ranged from 60 percent to 85 percent.5 Regarding the different types

3 It may be the case that non Deaf-Blind users of visual signed language would also fail to
reach 100 percent accuracy if they were asked to perform a similar identification task. In other
words, sighted Deaf individuals would likely fail to reach 100 percent identification accuracy on
the identification of isolated signs and signs in sentences in visual signed language. However,
sighted Deaf individuals did not participate in the Reed et al. (1995) study in order to compare
such figures.
4 In this portion of the Reed et al. (1995) study, 10 subjects – rather than nine as in part one –
were presented with the sentence stimuli; five subjects were given ASL sentences to repeat and
five were given PSE (Pidgin Sign English) sentences to repeat. The term PSE was introduced
by Woodward (1973) as the language use among the deaf that displays grammatical elements
of other pidgins, with elements of both ASL and English. Later work (Cokely 1983) made the
claim that PSE is really not a pidgin, but rather, among other things, a type of foreigner talk
with influence from judgments of proficiency. These issues are beyond the scope of this chapter.
Regarding the Reed et al. (1995) study, there were some differences between the two groups in the degree of reception accuracy, but both groups made the most errors in the parameter of handshape when presented with sentence stimuli.
5 Subjects in the Reed et al. study who were presented with PSE sentences fared better in the
reception of signs in sentences than those subjects who were presented with ASL sentences.
This may be due to the subjects in the study and their preferences rather than the “forms” of the
message. However, it is worth remarking that several of the PSE subjects in the study learned
language tactually (in one form or another) because they were born deaf and blind, or they
became so at a very early age. In fact, Reed et al. (1995:487) mentioned that “the subjects in
the PSE group may be regarded as closer to native signers of tactual sign language than the

of errors encountered in the reception of isolated signs versus signs in sentences,
Reed et al. (1995:485) suggested that the “better reception of handshape in ci-
tation form may result from the fact that handshape is the most redundant part
of the sign in frozen lexical forms . . . [while] in continuous signing, handshape
is less redundant than in isolated signs.” Even though the Deaf-Blind subjects
in this study made minimal perception errors in isolated signs and signs in
sentences, Reed et al. (1995:488) asserted that “the tactual reception of sign
language is an effective means of communication for deaf-blind individuals
who are experienced in its use.”

17.2.1.2 The form of tactile signed language. Recent work has
claimed that there are various ways in which tactile signed language differs from
visual ASL. Specifically, Collins and Petronio (1998) described the changes that
visual ASL undergoes when signed in a tactile modality, and they referred to
the language production of Deaf-Blind individuals as “Tactile ASL.”6 One dif-
ference relates to back-channel feedback strategies that exist in Tactile ASL and
not in visual ASL (e.g. Tactile ASL utilizes finger taps and hand squeezes for
back-channel feedback), and another difference is that the signing space in Tac-
tile ASL is generally smaller than in visual ASL because of the close proximity
of the signer and interlocutor. Also, ASL signs articulated with contact on the
body/torso/head are produced somewhat differently in Tactile ASL because the
signer’s body/torso/head commonly moves to contact the hand as it articulates
the sign in the space in front of the signer rather than the hand(s) moving to
contact the body/torso/head.

subjects in the ASL group.” On the other hand, most of the “ASL subjects” were Deaf-Blind
individuals who had lost their sight after having acquired visual ASL. Clearly, more research
is needed on the sign language production of congenitally Deaf-Blind individuals in order to
determine if tactual language acquisition has a unique effect on the form and/or structure of the
tactile signed language used.
6 The terms “visual ASL” and “Tactile ASL” were used by Collins and Petronio (1998) to refer
to traditional ASL as it is signed in North America and ASL as it is signed by Deaf-Blind
individuals, respectively. The term “visual ASL” is used in the same manner in this chapter,
although the label “Tactile ASL” can be somewhat misleading, since tactile signed language in
the USA, like visual sign language, can have the characteristics of ASL or Signed English, as well
as variations that contain elements of both varieties. The basic claim that Collins and Petronio
make is that the Deaf-Blind subjects in their studies signed ASL with some accommodations
for the tactile modality, hence the term “Tactile ASL.” I refer to the signed language used by
the Deaf-Blind subjects in the study described in this chapter as “tactile signed language,”
and avoid terms such as Tactile ASL or Tactile Signed English (which is also a possibility if
referring to the signed language of Deaf-Blind individuals). Also, Collins and Petronio referred
to differences between Tactile ASL and visual ASL as “sociolinguistic changes ASL undergoes
as it is adapted to the tactile mode” (1998:18). It seems that these “changes” could be better
described as linguistic accommodations or simply adaptations made to ASL (or sign language in
general) when signed in a tactile modality. The term “sociolinguistic changes” implies diachronic
change for many researchers, and the direct claim of diachronic change of visual ASL to tactile
ASL has not been advanced by any known researcher.

In addition to the differences mentioned above, Collins and Petronio observed
that Tactile ASL regularly contains overt elements that are often covert or are
represented nonmanually in visual ASL. For example, in visual ASL a signer can
use a nonmanual signal to mark a yes–no question (in this case, raised eyebrows)
without articulating the lexical sign QUESTION. However, Collins and Petronio
reported that the Deaf-Blind subjects who they observed all used the lexical sign
QUESTION at the end of a yes–no question. In addition, visual ASL signers
can sometimes use nonmanual signals in place of lexical wh-question signs
whereas Tactile ASL signers – as reported by Collins and Petronio (1998) –
produce lexical signs to accomplish the same function. Thus, what appears to
differ between visual ASL and Tactile ASL regarding the use of yes–no and
wh-question lexical signs is that in visual ASL such signs are optional whereas
in Tactile ASL the same signs appear to be obligatory.
Collins and Petronio (1998) also claimed that the ASL second person singular
sign YOU can be used to introduce questions that are directed at a Deaf-Blind
interlocutor. This observation – which is of particular relevance to the current
study – was also made by Petronio (1988), who observed that Deaf-Blind peo-
ple report being confused when a question is put to them. In order to avoid
confusion, it appears that the use of a sentence-initial point to the interlocutor
has been adopted for communicating that the subsequent sentence is indeed a
question directed at the receiver.
As one can see, tactile signed language is an effective tool for communication
among Deaf-Blind individuals in the USA, but there exist several ways in which
it differs from conventional ASL as used by Deaf sighted individuals. One of
the differences – the function and use of the deictic point – is focused upon in
this chapter. A brief review of the use and function of deictic pointing (both
manually and nonmanually) in visual ASL is in order first.

17.2.2 The deictic point in visual ASL


The deictic point7 is claimed to carry out several functions in visual ASL.
It has been described as a specifier of pronominal reference (Lillo-Martin and
Klima 1990; Meier 1990; Engberg-Pedersen 1993; among others), a determiner
(Bahan, Kegl, MacLaughlin, and Neidle 1995; Bahan 1996), a syntactic tag
(Padden 1983), or a pronoun clitic (Kegl 1986; 1995; Padden 1990), as well as
other functions that have not been included here. Some recent accounts claim
that the deictic point in visual ASL is not entirely linguistic, but also includes
a gestural component (Liddell 1995; Liddell and Metzger 1998).
In the present study, I refer to deictic points as instances of indexation, and
different types of indexation are described based on their semantic functions.
7 In this chapter I primarily address instances of a point (G handshape, palm facing downward or
toward the imaginary mid-sagittal line in the signing space) directed to locations other than the
signer himself or herself.

I do not, however, address the deictic point used for first person singular –
whether used to communicate the perspective of the narrator or the perspective
of a character in a narrative – because the first person singular deictic point
differs considerably in phonetic form from the types of deictic points presented
above (for example, those points are generally directed away from the signer’s
body). As mentioned in the introduction, the goal of this work is to address the
use of (or lack of) deictic points that have the potential of being referentially
ambiguous – especially without the support of the nonmanual signal eye gaze –
in tactile signed language. Determining the referent of a first person point is
generally not problematic.8

17.2.3 Nonmanual signals that function as referential devices in visual ASL


One of the functions of nonmanual signals in visual ASL is that of referential
indicator, and one way that this referential function is realized is through eye
gaze. For instance, based on work by Bendixen (1975) and Baker (1976a;
1976b), Liddell (1980:5) noted that “the eyes are directed initially toward the
location to be established, apparently a normal part of the location establishment
process.” Additionally, eye gaze can function as pronominal reference (Baker
and Padden 1978; Liddell 1980) once an association or a link is established
between a location in space and a nominal argument. Sometimes eye gaze
accompanies a deictic point that is directed at a location that was previously
established in the signing space and that was linked to a particular referent
(Petronio 1993). In addition, in an account of eye gaze as a syntactic agreement
marker in ASL, Bahan (1996:270) claimed that eye gaze can co-occur with
a manual deictic point, but cannot occur alone, except in a “special ‘whisper’
register.” As can be noted from these accounts, eye gaze is widely used in visual
ASL for referential purposes.
One other type of nonmanual signal in visual ASL that is pertinent to this
study is torso/head/eye-gaze orientation (often termed a “role-shift”), and one
function of such orientation is to mark “direct” or “reported” speech in ASL
(Padden 1990). For example, if a signer is referring to a statement made by a
third person who is not present, the signer can change his or her torso/head/eye-
gaze orientation, and the subsequent statement is understood to have been made
by a third person who is not present.

17.2.4 The focus of this study


This study focuses on tactile signed language and the absence of several vi-
sual signed language nonmanual signals (e.g. eyebrow shifts, head orientation,
8 In certain circumstances, a deictic point to the signer (first person singular) can also be referen-
tially ambiguous, possibly referring to the signer or possibly referring to another character in
ASL discourse.

and eye gaze). Does the signed language production of Deaf-Blind individuals
differ substantially in form from that of Deaf sighted individuals in the con-
text of recounting narratives? If so, how does such production differ, and can
the absence of visual signed language nonmanual signals be implicated in the
difference(s)?

17.3 Method of this study

17.3.1 Subjects
Four subjects participated in this study of tactile signed language in narrative
form. Of the four, two are Deaf and sighted, and two are Deaf and blind.
Information about the background of these subjects appears in (1).
(1) Description of subjects
• DB1: male, congenitally deaf and blind (Rubella); began to learn sign language at age four; began to learn Braille at age six; age at time of study: 25
• DB2: male, born with hearing and sight, deaf at age five; diagnosed with Usher Syndrome at age 11;9 fully blind at age 19; age at time of study: 34
• D1: female, congenitally deaf, fourth generation Deaf; age at time of study: 28
• D2: male, congenitally deaf, second generation Deaf; age at time of study: 26
The Deaf sighted subjects (D1 and D2) were chosen because they were born
deaf, are both children of Deaf parents, and also because both had previously
interacted with Deaf-Blind individuals. Thus, D1 and D2 had some experience
using signed language tactually, and they were both relatively comfortable
communicating with the Deaf-Blind subjects in this study. As a consequence of
D1 and D2 having Deaf parents, it is assumed that ASL was learned by each of
them in environments that fostered normal language development. Furthermore,
they both attended residential schools for the Deaf for their elementary and
secondary education, and they both socialize frequently with family and friends
in the Deaf community.
Different criteria were used for the selection of the Deaf-Blind subjects for the
study. DB1 and DB2 were chosen based on their vision and hearing impairments
as well as their language competence. Language competence was an important
9 While DB2 reported being diagnosed with Usher Syndrome at age 11, these ages do not correspond with the normal onset of blindness in the several standard types of Usher Syndrome patients. It is possible that DB2 was misdiagnosed with this condition, which would account for the ages in question. I thank an anonymous reviewer for pointing out this peculiarity to me.

criterion for inclusion in this study because the subjects needed to be able to
read the story that was presented to them. Both of them had graduated from high
school and, at the time of this study, were enrolled in a community college. They
appear to be highly functioning individuals as evidenced by their participation
in an independent living program that encourages each of them to live in his
own apartment and to hold at least part-time employment in the community.
Both DB1 and DB2 are nonnative signers inasmuch as their parents are hearing
and did not use signed language in the household. Regarding primary and
secondary education, there were several similarities between DB1 and DB2.
DB1 was educated in schools that used Signed English for communication.
However, he reports that he started to learn ASL in high school from a hearing
teacher. From the age of five to 19, DB2 also attended schools where Signed
English was the primary mode of communication. At the age of 19, he entered
a school for the Deaf and began to learn ASL tactually because he was fully
blind by this point. DB2 spent three years at this school for the Deaf. Upon
graduating, he attended Gallaudet University, but was enrolled for only about
one year. Currently, both DB1 and DB2 read Braille and use it daily. In fact,
they read periodicals, books from the public library, and other materials written
in Braille on a regular basis. Because of this, it is assumed that their reading
skills are at least at the level needed to understand the simple narratives that
were presented to them. The narratives are discussed below.

17.3.2 Materials
Each subject was presented with a short narrative consisting of 225–275 words.
The narratives were written in English for the Deaf sighted subjects and tran-
scribed into Braille for the Deaf-Blind subjects. Each narrative contains dia-
logue between at least two characters and describes an interaction between these
characters. Several yes–no and wh-questions were included in each of these nar-
ratives. In an effort to diminish the influence that English structure would have
on the signed form of each story, the narratives were presented to all four sub-
jects 24 hours before the videotaping was conducted. Each subject was allowed
30 minutes to read his or her story as many times as he or she wished and was
instructed that the story would have to be signed from memory the following
day. Each subject only read his or her own story prior to the videotaping.

17.3.3 Procedure
For the videotaping of the narratives, each subject signed his or her story to
each of the other subjects in the study one at a time. If a Deaf-Blind subject
was the recipient of a narrative, he placed his hand(s) on top of the signer’s hand(s). However, the sighted Deaf subjects did not place their hands on the signer’s hands when they were recipients of narratives. The 12 narratives were videotaped in random order and followed the format in Table 17.1.

Table 17.1 Narrative and subject order for data collection

Segment numbers  1    2    3    4    5    6    7    8    9    10   11   12
Storyteller      DB2  D1   DB2  D2   DB1  DB1  D2   D2   D1   D1   DB1  DB2
Receiver         DB1  DB1  D2   D1   DB2  D1   DB2  DB1  DB2  D2   D2   D1


17.4 Results/discussion
This presentation of the results begins with general comments regarding the
narratives presented by the subjects and then addresses the specific differences
between the subjects. Given this format, the reader will see the ways in which
the 12 narratives were similar as well as ways in which the narratives differed,
presumably as a result of the particular narrator and recipient pairing.

17.4.1 Length and numbers of signs in the narratives


The narratives produced by the four subjects during the videotaping session
were similar in length and number of signs produced. The average length of
the 12 videotaped narratives was approximately three minutes with the shortest
narrative lasting two minutes and the longest lasting four minutes. The narrative
with the most lexical signs included 256 signs and the narrative with the fewest lexical signs included 163 signs. Table 17.2 shows the length and total number of lexical signs produced in each narrative.
The similarity of the lengths of the 12 narratives and numbers of signs used
in those narratives demonstrates that all the subjects produced relatively similar
amounts of text. That is, there were not major differences between the subjects
such as one subject producing one minute of text and another subject producing
five minutes of text. Nor were there major differences in the number of signs
produced, such as one subject producing 300 signs and another producing only
50 signs. The similarity of the narratives in terms of length and number of
signs produced allows for the quantitative comparison of the use of specific
referencing strategies.
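The summary figures quoted above can be checked directly against the values in Table 17.2, which follows. The minimal sketch below is illustrative only – the study reports these statistics in prose, not via software – and simply recomputes them from the table:

# Lengths (in seconds) and sign counts of the 12 narratives,
# in the column order of Table 17.2.
lengths = [240, 210, 205, 190, 180, 165, 180, 135, 120, 210, 138, 150]
signs = [163, 167, 169, 216, 256, 237, 206, 246, 228, 176, 169, 201]

# Shortest and longest narratives: 120 s (two minutes) and 240 s
# (four minutes); mean length about 177 s, i.e. roughly three minutes.
print(min(lengths), max(lengths), round(sum(lengths) / len(lengths), 1))

# Fewest and most lexical signs: 163 and 256.
print(min(signs), max(signs))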

Table 17.2 Signed narratives: Length (in seconds) and number of signs

Narrative         DB1 to  DB1 to  DB1 to  DB2 to  DB2 to  DB2 to  D1 to  D1 to  D1 to  D2 to  D2 to  D2 to
                  DB2     D1      D2      DB1     D1      D2      DB1    DB2    D2     DB1    DB2    D1
Length (seconds)  240     210     205     190     180     165     180    135    120    210    138    150
Number of signs   163     167     169     216     256     237     206    246    228    176    169    201

17.4.2 Differences between the Deaf sighted and Deaf-Blind narratives

17.4.2.1 The use of deictic points for referential purposes. Each instance of the index finger of the dominant hand pointing at something (other than first person singular) during the recounting of the narratives was coded as an indexation. Later, the different functions of indexation in the narratives were determined from context, and four categories of indexation surfaced in the data.
The Deaf sighted subjects used four different types of indexation, which fulfilled three semantic functions. The four types of indexation are shown in (2); a schematic tally of this coding is sketched after the list.
(2) Types of indexation in the narratives
• Type I (third person): the use of a point to the left or right side of the signing space to establish/indicate an arbitrary location in space that is linked to a human referent who is not physically present.
• Type II (person CL)10: the use of a point to a person classifier (G handshape or V handshape) on the nondominant hand. This, too, was used to establish/indicate an arbitrary location in space that is linked to a human referent (or referents in the case of the V handshape) who is not physically present.
• Type III (locative/inanimate): the use of a point to establish/indicate an arbitrary location in space that is linked to a locative referent or object referent that is not physically present.
• Type IV (second person singular): the use of a point to the recipient of the narrative for asking a question that is directed to a character in the narrative.
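As a concrete illustration of this coding scheme, the short sketch below tallies a hypothetical stream of coded tokens by type. The token list is invented for illustration only (the study itself coded tokens by hand from videotape), but the aggregation mirrors how the counts in Table 17.3 are organized:

from collections import Counter

# Hypothetical coded indexation tokens for one narrative; each
# element is the type assigned from context (I, II, III, or IV).
coded_tokens = ["IV", "I", "I", "II", "I", "III"]

# Tally tokens by type; types that never occur default to zero,
# mirroring the zero-filled cells in Table 17.3.
tally = Counter(coded_tokens)
for ix_type in ("I", "II", "III", "IV"):
    print(f"Type {ix_type}: {tally[ix_type]} token(s)")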
While the Deaf sighted subjects used the four types of indexation listed in (2),
the Deaf-Blind subjects used indexation (other than first person singular) exclu-
sively in the following environment: the case of one character in the narrative
asking a question directed to another character in the narrative (Type IV). As
mentioned above, each narrative contained two characters who engaged in di-
alogue regarding an event that they were planning.11 Within the dialogue, the
characters asked yes–no and content wh-questions of each other. In all four
narratives, the situation of one character asking another character a question
created the environment for the use of indexation. Thus, each subject, while
signing his or her narrative, would take on the role of one of the characters
and would ask the other character in the narrative a question. In this case,
each subject in the study did indeed use indexation, and this point in each of
these interrogative phrases can be interpreted as second person singular (YOU).
However, the Deaf sighted subjects exhibited a nonmanual “role shift” (either

10 Most tokens of Type II indexation were realized with the 1-handshape CL. However, when the
V-classifier was used, the signer still pointed to each finger of the V handshape individually.
There were tokens of a V handshape occurring with the deictic point involving two fingers
(glossed as THEY-TWO), but those tokens were not included in these data. While this study
focuses on the use of a single deictic point, further research must be conducted which addresses
the use of other deictic pointing devices such as the pronoun THEY-TWO.
11 These questions were designed to elicit the use of nonmanual signals (especially furrowed
and raised eyebrows, grammatically significant signals in visual ASL) by the Deaf sighted
subjects and to determine what strategies the Deaf-Blind subjects would use to communicate
such questions.

torso or head re-orientation and eye gaze shift) for specifying the message
of a different speaker or character, whereas there was no role shifting or eye
gaze shifting in the signing of the Deaf-Blind subjects. In fact, the Deaf-Blind
subjects’ torsos did not deviate much – if at all – from the default position of
facing the receiver of the narrative. As a result, the deictic points that the Deaf-
Blind subjects produced were directed straight out, essentially in the direction
of the interlocutor. As an example of the only type of indexation produced by
the Deaf-Blind subjects, DB1 used indexation when a character in his narrative
asked another character if she wanted to eat fish. The passage is as follows:
“D-O IX-forward WANT EAT FISH” (here, IX-forward can also be glossed as
YOU or second person singular). Table 17.3 presents the total use of indexation
for the functions described in (2) by each subject in the study.
What is most evident from Table 17.3 is the use of Types I, II, and III
indexation by the Deaf subjects, but not the Deaf-Blind subjects. Figure 17.1
shows the total use of each type of indexation by each subject (summed across
all three instances of each narrative). Note that there are no examples of third
person singular or of locative deictic points (either to a point in space or to a
classifier on the nondominant hand) in the data from the Deaf-Blind subjects
in this study, and the only examples of indexation by the same subjects are
in the form of second person singular reference in questions addressed to a
character in the narrative. As reported in Section 17.2.1.2, Petronio (1988) and
Collins and Petronio (1998) have claimed that indexation is used by Deaf-Blind
signers to signal that a question is about to be posed to the interlocutor. Based
on these claims and the data from the current study, perhaps indexation in Deaf-
Blind signing is used primarily for referring to addressees, either in the context
of an interrogative as described previously or presumably in the context of a
declarative statement (e.g. I LIKE YOU, YOU MY FRIEND, etc.).
Since the Deaf-Blind subjects did not utilize Type I and Type II indexation
for pronominal reference in the signed narratives, I now describe how such pro-
nouns were realized by the Deaf-Blind subjects (or whether they used pronouns
at all). One of the Deaf-Blind subjects (DB1) used the strategy of fingerspelling
the name of the character who would subsequently perform an action. This strat-
egy served a function similar to that of Types I and II indexation, which was favored by subjects D1 and D2. Not surprisingly, the sighted Deaf subjects also used
fingerspelling as a strategy for identifying characters in their narratives. In fact,
they often used fingerspelling in conjunction with a deictic point (sometimes
following the fingerspelling, sometimes preceding it, and sometimes articulated
simultaneously – on the other hand – with the fingerspelling). Table 17.4 shows
the use, by subject, of proper names (realized through fingerspelling) in the
narratives.
It can be seen that DB1, in order to refer to characters in his narratives, fingerspelled the names of the characters, and he used that strategy more times than any other subject did.
Table 17.3 Use of indexation in the narratives

Type of deictic   DB1 to  DB1 to  DB1 to  DB2 to  DB2 to  DB2 to  D1 to  D1 to  D1 to  D2 to  D2 to  D2 to
reference         DB2     D1      D2      DB1     D1      D2      DB1    DB2    D2     DB1    DB2    D1

Person (I)        0       0       0       0       0       0       0      2      13     7      12     18
Person CL (II)    0       0       0       0       0       0       6      2      2      0      0      4
Location (III)    0       0       0       0       0       0       0      0      2      0      0      3
2sg (IV)          1       1       1       7       6       6       1      0      0      1      2      3
Figure 17.1 Total use of indexation by each subject. [Bar chart: x-axis, subjects and type of indexation (DB1, DB2, D1, D2); y-axis, number of tokens (0–40); series: Type I (third person), Type II (person CL), Type III (locative/inanimate), Type IV (2sg in narrative).]

However, DB2 never used fingerspelling of proper nouns to make reference to his characters. Rather, DB2 used such signs as GIRL, SHE, MOTHER, and FATHER. Table 17.5 shows the number of times DB2 used the sign GIRL and the Signed English sign SHE.12 Thus, DB2 did not use indexation for the function of specifying third person singular pronouns, nor did he use fingerspelling; instead he referred to his characters with common nouns or Signed English pronouns. The use of SHE by DB2 raises another issue that must be addressed: the use of varying amounts of Signed English by the Deaf-Blind subjects. A discussion of the use of Signed English by the subjects in this study and the implications of such use follows.

17.4.2.2 ASL or Signed English in the narratives? Certain features of Signed English appear in the tactile signed language narratives. Both Deaf-Blind subjects produced varying degrees of Signed English, as evidenced by their use of D-O (ASL does not use English ‘do’), the conjunction AND (which is infrequent in ASL), and – in the case of DB2 – articles and the copula (which do not exist in ASL).

12 SHE is not an ASL sign but rather an invented sign to represent the English pronoun ‘she.’
Several invented signed systems are used in deaf education throughout the USA in an effort
to teach deaf students English; the sign SHE as used by DB2 likely originated in one of these
signed systems. In the interest of space I refer to this type of signed language production simply
as Signed English.
Table 17.4 The use of proper names realized by fingerspelling the name of the character being referred to

Subjects          DB1 to  DB1 to  DB1 to  DB2 to  DB2 to  DB2 to  D1 to  D1 to  D1 to  D2 to  D2 to  D2 to
                  DB2     D1      D2      DB1     D1      D2      DB1    DB2    D2     DB1    DB2    D1

Number of tokens  23      36      29      0       0       0       19     15     16     15     18     9

Table 17.5 The use of GIRL and SHE by DB2

Subjects  DB2 to DB1  DB2 to D1  DB2 to D2

GIRL      14          7          7
SHE       0           11         12

Table 17.6 displays the number of times each subject produced the copula, articles, AND, and fingerspelled D-O in each narrative.
In contrast to the Deaf-Blind subjects, the Deaf subjects, for the most part, did not use features of Signed English. They did not utilize the copula, articles, or Signed English D-O, and they used the conjunction AND only minimally (D1: four tokens; D2: two tokens). Rather, the sighted Deaf subjects signed ASL with nonmanual signals such as eye gaze shifts, head/torso shifts, and grammatical information displayed with the raising or lowering of the eyebrows. In fact, the Deaf signers did not discontinue their use of ASL nonmanual signals despite the presumed knowledge that their interlocutors were not receiving those cues.

17.4.2.3 The use of the signing space in signed languages. Users of
signed languages utilize the signing space in front of their bodies in several
significant ways (Padden 1990). One way is for the production of lexical signs.
Many signs – especially verbs – contain movements in the signing space that are
part of their citation forms. For example, the ASL sign FOLLOW is produced
with an A handshape and movement that begins in front of the torso about chest
level and ends about the middle of the signing space directly in front of the
signer. Yet another way the signing space is used is to establish, refer to, and/or
show relationships between present and nonpresent objects and/or persons in the
signing space (Klima and Bellugi 1979). For example, a signer can point to the
right hand side of the signing space and then sign a noun such as WOMAN,
then the sign FOLLOW-rt13 can be articulated with the movement ending in
the direction of the point that was established. This differs from production
of the sign FOLLOW in citation form as described above, and this form of
the verb FOLLOW exhibits information about the subject and object of the
verb: the object is interpreted as the third person form (‘the woman’) that was
established on the right side of the signing space. Possible translations of this
sequence of signs would be ‘I will follow the woman’ or ‘I followed the woman’
(depending on whether or not a time adverbial had been used previously). By
using the signing space to inflect verbs in this manner, a signer can use a number
of word orders and is not confined to following strict word order patterns as in English or varieties of Signed English.
13 In this glossing convention, the “-rt” segment indicates that the sign is articulated with movement
to the signer’s right side of the signing space.
Table 17.6 English features in each narrative

Narrative  DB1 to  DB1 to  DB1 to  DB2 to  DB2 to  DB2 to  D1 to  D1 to  D1 to  D2 to  D2 to  D2 to
           DB2     D1      D2      DB1     D1      D2      DB1    DB2    D2     DB1    DB2    D1

Copula     3       2       0       4       9       10      0      0      0      0      0      0
Articles   0       0       0       1       6       2       0      0      0      0      0      0
AND        16      14      16      12      9       11      1      1      2      1      1      0
D-O        1       1       1       10      12      11      0      0      0      0      0      0

The signing space can also be used
in a grammatical manner in other ways (e.g. aspectual verb modulation; Klima
and Bellugi 1979) or to compare and contrast two or more abstract or concrete
entities (Winston 1991).
Naturally, both Deaf-Blind subjects utilized the signing space for the produc-
tion of uninflected signs. However, DB1, who is congenitally deaf and blind,
also consistently used the signing space for the production of inflected verbs,
but that was not the case for DB2.
Throughout the narratives, DB1 produced several verbs that have the possi-
bility of undergoing some type of inflection that utilizes the signing space. In
the three narratives produced by DB1, I identified 53 instances of a verb that
may be inflected, and DB1 executed 45 of those verb tokens with what appears
to be inflection that utilized the signing space to specify information about the
subject and/or object of the verb. For example, DB1 signed MEET with the
right hand held close to his chest while the left hand (in the same handshape)
moved from the area to the left and in front of the signer toward the right hand
in order to make contact with it. This manner of signing MEET occurred twice
across the three narratives. The inflected form has the meaning ‘He/She came
from somewhere in that direction, to the left of me, and met me here,’ whereas
the uninflected form does not specify location or a category of person.
DB1 inflected other verbs as well. The verb SEE was produced nine times
in his three narratives; in eight of those instances it was executed with some
reference to a target that was being “looked at.” For example, several times DB1
signed SEE with hand and arm movement in an upward fashion. He did this in
the context of referring to a mountain. Thus, the sign can be taken to mean that
he was ‘looking up the side of the mountain.’ The sign GO was often inflected
as well, giving reference to the general location of the action.
In contrast to DB1, DB2 did not use the signing space for the inflection
of verbs. Rather, strings of signs in his narratives resemble English, which
primarily relies on word order for the determination of subject and object in
a sentence or phrase. Remember, too, that DB2 used several Signed English
elements throughout his narratives.
Rather than signing only some verbs with inflection (like DB1), the two
sighted Deaf subjects signed almost all their verbs with some type of inflection.
That is, they inflected most (if not all) of the verbs they produced that can be inflected for location. Furthermore, ASL nonmanual signals such as eye gaze
and body/torso movement were common in conjunction with the production of
verbs, and these nonmanual signals were often used to indicate role shifts.
Several facts have been presented above. First, the Deaf sighted subjects
produced ASL syntax (along with appropriate nonmanual signals) throughout,
while the Deaf-Blind subjects produced versions of Signed English, specifically
English word order and some lexical signs that do not exist in ASL. DB2 fol-
lowed Signed English word order more than DB1, who inflected most verbs in

his narratives. Thus, at least one of the Deaf-Blind subjects (DB1) used the sign-
ing space in a spatially distinctive manner (thus resembling visual ASL), but he
still failed to use the deictic point for third person singular or locative reference,
which is unlike what would be expected in visual ASL. From these facts it is
clear that it is not necessarily the use of Signed English elements or word order
that influences the non-occurrence of certain deictic points (specifically, third
person singular and location references), but rather something else. It appears
that the tactile medium does not support substantial use of deictic points, and
perhaps we can hypothesize why this is so.

17.4.3 Accounting for the lack of indexation in the Deaf-Blind narratives


17.4.3.1 Eye gaze as a factor. One explanation for the scarcity of indexation in the Deaf-Blind narratives is that the lack of visual eye gaze, which is claimed to function as an agreement marker in visual ASL (Bahan 1996), does not allow for a grammatical pointing system in tactile
signed language. Because eye gaze functions as such an important referencing
tool in visual ASL, its absence in tactile signed language presumably influences
the forms that referencing strategies take in that modality. Perhaps use of the
signing space for reference to / establishment of a nominal with a deictic point
is not significant for Deaf-Blind users because they are presumably not able to
visually perceive the end point of the deictic reference.14
The results of studies of gestural communication in hearing congenitally blind children support the suggestion that eye gaze may be necessary for the development of a deictic pointing system. In studies of the gestures produced
by congenitally blind children while using spoken language to communicate,
Iverson et al. (2000) found that deictic points were used infrequently or not at
all for referencing objects and locations. Yet, the same blind children used other
gestures for deictic purposes despite the fact that they had not had any exposure
to manual gestures at all.15 The sighted children in the same study used deictic
points frequently while gesturing. The authors give the following account for
the lack of production of deictic points by the blind subjects in their study:
Blind children used gesture to call attention to specific objects in the environment, but
they did so using Palm points rather than Index points. Why might this be the case?
When sighted children produce an Index point, they in effect establish a “visual line
of regard” extending from the pointer’s eyes along the length of the arm and pointing
finger toward the referent in gesture. Index points localize the indicated referent with
considerable precision – much more precision than the Palm point. It may be that blind
children, who cannot use vision to set up a line between the eyes, the index finger, and the
gestural referent in distant space, are not able to achieve the kind of precise localization
that the Index point affords (indeed, demands). They may therefore make use of the less
precise Palm point. (p. 119)

14 I must emphasize that the suggestions offered to explain the lack of deictic points in the Deaf-Blind narratives are all based on production data. Comprehension studies of deictic points must be conducted in order to confirm these suggestions.
15 The “other gestures” that I refer to here were defined in Iverson et al. (2000:111) as the following: “showing, or holding up an object in the listener’s potential line of sight,” and “palm points, or extensions of a flat hand in the direction of the referent.”

In addition to the Iverson et al. (2000) study, Urwin (1979) and Iverson and
Goldin-Meadow (1997) reported that the blind subjects in their studies failed
to utilize deictic points for referencing purposes. These studies support the
suggestion that the ability to perceive eye gaze may be a necessary prerequisite
for the communicative use of deictic points.

17.4.3.2 Displacement as a factor. The Deaf-Blind subjects in the
current study used deictic points only when a character in their narratives would
ask another character a question, while strategies other than deictic pointing
were used for all other pronominal references. One explanation for this sparse
use of pointing may have to do with “displacement” (the characteristic of lan-
guage that allows reference to things that exist in places and times other than the
present; Hockett 1966). Perhaps Deaf-Blind individuals reserve deictic points
for reference to people and places within the immediate environment, while
the use of points to locations in the signing space for linguistic purposes is
limited. A rationale for this assertion is given below, but the minimal use of
deictic points to characters in the Deaf-Blind narratives must first be accounted for.
As described in Section 17.2.2, deictic points in ASL serve various seman-
tic functions, some that are claimed to be linguistic and others that have been
analyzed as gestural or nonlinguistic. The sighted Deaf subjects in this study
used points for various purposes, but the Deaf-Blind subjects used points only
when showing that a question was being asked of a character in the narratives.
There is only one way to express second person singular reference in signed
language, and that is by pointing to the location (real or imagined) of second person
singular. There is no commonly used nondeictic Signed English
pronoun for second person singular, and signers do not normally fingerspell
Y-O-U. Thus, the Deaf-Blind subjects had no choice but to use the deictic point
in this manner. However, there are alternatives in signed language to using a
point for third person singular reference. Such strategies include (but may not
be limited to) fingerspelling the name of the person being referred to, using
a sign name, or using a Signed English pronoun. All of these forms of third
person reference were used by the Deaf-Blind subjects in this study. Perhaps
Deaf-Blind individuals prefer to use strategies other than pointing for reference
to third person singular characters because such points are potentially
ambiguous: a point to third person singular can have the same phonetic form as
a point to an object in a narrative, to a location in a narrative,
or even to a third person singular entity that is physically present in the immediate
environment but not in the narrative.
However, it is likely that Deaf-Blind signers do use points when
referring to the location of people and objects in the immediate environment.
I have learned – based on discussions with several sighted professionals who
work with Deaf-Blind individuals – that indexation is indeed used frequently
by sighted signers when describing to Deaf-Blind signers the location of people
and objects in the immediate environment. After such a locative reference has
been established, a Deaf-Blind individual can then point to that location to refer
to the specific person or object. Yet in the case of the narratives in this study,
where displacement was involved, perhaps the Deaf-Blind signers chose not to
use deictic points because of the ambiguous nature of such points.

17.4.3.3 Reception in the tactile modality. Another explanation for
the lack of deictic points in the tactile medium can be posited by referring to
claims made by Reed et al. (1995) regarding perception. As was reviewed in
Section 17.2.1.1, Reed et al. found that handshape errors were most prevalent in
the identification of signs presented in sentences. In other words, when tested
to see if they could reproduce signs that had been presented to them within
a sentence (both ASL and PSE sentences), the subjects made the most errors
in the phonological parameter of handshape. Perhaps Deaf-Blind signers find
it difficult to perceive and interpret the handshape of a deictic point within a
sentence. This could be due to several factors: the speed at which the signs
are produced, the direction of the point, the limited size of the signing space
in tactile signed language,16 and/or the fact that points serve various semantic
functions, as outlined in Section 17.2.2.

16 As mentioned in Section 17.2.1.2, Collins and Petronio (1998) reported that the signing space for Deaf-Blind individuals is smaller because of the close proximity of the signer and interlocutor. The same can be said of the signing of the Deaf-Blind subjects in this study. In general, there were no significant displacements of the hands/arms from the signing space other than movement to contact the head/face area.

17.4.4 Putting it all together


This study of deictic points by Deaf-Blind individuals has reinforced the
description presented in Section 17.4.2.3: the signing space is used, and can be
defined, in various ways. It is partly phonological in nature, allowing a signer
to articulate the phonological parameters of movement and location in space;
partly based on movement of the sign through various contrastive locations in
the signing space; and partly grammatical in nature. A Deaf-Blind signer can
perceive the movement of verbs through the signing space as the verbs specify
subject and object in a phrase, because perception simply requires that a Deaf-Blind
receiver follow the signer’s hands as he or she moves through contrastive
locations in the signing space. Following this line of reasoning, a Deaf-Blind
signer would presumably also use the signing space for the production and reception
of aspectual modification, which likewise involves specialized movement of the
hands through the signing space. However, there were no cases of aspectual
modification of verbs in the Deaf-Blind narratives.
Yet the data from this study suggest that at least one use of the signing
space may have a visual component that influences the manner in which
a Deaf-Blind person uses sign language. Specifically, the lack of deictic
points for referencing purposes in the Deaf-Blind narratives suggests that eye
gaze may play a significant role in the realization of deictic points. In other words,
some uses of the signing space can be carried out without eye gaze support,
while others likely rely upon eye gaze support to be executed.

17.5 Questions to consider


While the two Deaf-Blind subjects in the present study produced versions
of Signed English, Collins and Petronio (1998) claimed that Deaf-Blind
signers can and do sign ASL in the tactile modality. If so, how does deictic
reference manifest itself in that type of signing? What, if any, modifications
are made to visual ASL that disambiguate the intended referent of a deictic
sign? Is Signed English a substitute for ASL in ambiguous structures in the
tactile signed language used in North America? As mentioned before, is there a
deictic system of indexation in tactile sign language that is akin to that of visual
ASL, or are deictic points primarily used to refer to people and locations in the
immediate environment? More data on casual conversations among Deaf-Blind
people are needed to address questions such as these.
In the data from this study, we have seen that the Deaf-Blind subjects used
versions of Signed English that contained no deictic points for pronoun or
location reference. Perhaps we also need to determine whether the same
phenomenon occurs in the use of visual signed language. That is, do deictic
points occur less frequently when sighted Deaf signers use Signed English as
opposed to ASL?
Lastly, do congenitally deaf and blind individuals use the signing space, especially
syntactically, in a unique manner because of their language acquisition
experience and the sensory tools that are available to them? Moreover, can
we theorize about the form that a signed language in the tactile modality would
take if it were allowed to evolve without significant influence from visual
signed language? There are many interesting questions in this area of
study.
One limitation of the current study is that the Deaf subjects are native signers
while the Deaf-Blind subjects are late learners of signed language. It would
be ideal to also investigate the signed language production of Deaf-Blind sign-
ers who acquired signed language following a regular acquisition process and
timeline. However, the incidence of children who are congenitally deaf and
blind and who also have Deaf parents is quite low. Alternatively, future in-
vestigations could include Deaf sighted subjects who are late learners of language
in order to make matched comparisons with Deaf-Blind late learners of
language.

17.6 Conclusions
This chapter has examined signed language that is perceived by touch and has compared
it to signed language that is perceived by vision. Integral to visual–gestural
language is the use of nonmanual signals (e.g. eyebrow shifts, head and torso
movement, and eye gaze). What, then, are the consequences of
the presumed inability of Deaf-Blind users of signed language to perceive such
nonmanual signals? This study has begun to address this issue.
Based on the narrative data presented here, the signed language production of
Deaf-Blind individuals does differ in form from that of sighted Deaf individuals.
Specifically, sighted Deaf signers utilize nonmanual signals (such as eyebrow
shifts, head orientation, and eye gaze) extensively, while Deaf-Blind signers
do not. In addition, sighted Deaf signers utilize deictic points for referential
purposes while Deaf-Blind signers use other strategies to accomplish the same
task. It appears that the ability to perceive eye gaze is a crucial component in
the realization of deictic points for referential purposes.
Regarding the use of the deictic point, the Deaf sighted subjects in this study
used such points in four general ways in order to fulfill three semantic functions
(reference to third person singular, to a location or object at a location, and to
second person singular). On the other hand, the Deaf-Blind subjects used deictic
points exclusively to fulfill one function (second person singular reference).
In addition, the Deaf sighted subjects produced ASL, while the Deaf-Blind
subjects each produced a unique version of Signed English. One Deaf-Blind
subject (DB1) used the signing space to inflect verbs for location, whereas the
other Deaf-Blind subject (DB2) did not. This shows that the signing space can
be used contrastively in tactile signed language, but some uses of the signing
space in visual signed language – such as the use of deictic points – do not seem
to be as robust in the tactile modality.
As mentioned above, the difficulty of perceiving eye gaze presumably restricts
the manner in which Deaf-Blind signers use deictic points. This suggestion
is consistent with findings regarding congenitally blind children who have
normal hearing: they rarely utilize deictic points for gestural purposes. The
manner in which blind individuals (both hearing and Deaf) – especially those
who are congenitally blind – conceive of the space around them may also
differ from that of sighted individuals. More research is certainly needed to
understand the language use of blind and Deaf-Blind individuals more fully; there
are many more insights to be gained from research on the role of vision in
language.

Acknowledgments
I would like to thank Carol Padden and an anonymous reviewer for their insight-
ful comments on an earlier draft of this chapter. This study was supported in part
by a Graduate Opportunity Fellowship from the University of Texas at Austin
to the author; by a grant (F31 DC00352-01) from the National Institute on Deafness
and Other Communication Disorders (NIDCD) and the National Institutes of
Health (NIH) to the author; and by an NIDCD/NIH grant (R01 DC01691-04) to
Richard P. Meier.

17.7 References
Bahan, Benjamin. 1996. Non-manual realization of agreement in American Sign Lan-
guage. Doctoral dissertation, Boston University.
Bahan, Benjamin, Judy Kegl, Dawn MacLaughlin, and Carol Neidle. 1995. Convergent
evidence for the structure of determiner phrases in American Sign Language. In
FLSM VI. Proceedings of the Sixth Annual Meeting of the Formal Linguistics
Society of Mid-America, Vol. Two: Syntax II & Semantics/Pragmatics, ed. Leslie
Gabriele, Debra Hardison, and Robert Westmoreland, 1–12. Bloomington, IN:
Indiana University Linguistics Club.
Baker, Charlotte. 1976a. Eye-openers in American Sign Language. California Linguis-
tics Association Conference Proceedings.
Baker, Charlotte. 1976b. What’s not on the other hand in American Sign Language.
In Papers from the Twelfth Regional Meeting of the Chicago Linguistic Society.
Chicago, IL: University of Chicago Press.
Baker, Charlotte, and Carol A. Padden. 1978. Focusing on the nonmanual components
of American Sign Language. In Understanding language through sign language
research, ed. Patricia Siple, 27–57. New York: Academic Press.
Bendixen, B. 1975. Eye behaviors functioning in American Sign Language. Unpublished
manuscript, Salk Institute and University of California, San Diego, CA.
Cokely, Dennis. 1983. When is a pidgin not a pidgin? An alternative analysis of the
ASL-English contact situation. Sign Language Studies 38:1–24.
Collins, Steve, and Karen Petronio. 1998. What happens in Tactile ASL? In Pinky
extension and eyegaze: Language in deaf communities, ed. Ceil Lucas, 18–37.
Washington, DC: Gallaudet University Press.
Engberg-Pedersen, Elisabeth. 1993. The ubiquitous point. Sign 6:2–10.
Hockett, Charles F. 1966. The problem of universals in language. In Universals of
language, ed. Joseph H. Greenberg, 1–29. Cambridge, MA: MIT Press.
Iverson, Jana M., and Susan Goldin-Meadow. 1997. What’s communication got to do
with it? Gesture in children blind from birth. Developmental Psychology 33:453–
467.
Iverson, Jana M., Heather L. Tencer, Jill Lany, and Susan Goldin-Meadow. 2000. The
relation between gesture and speech in congenitally blind and sighted language-
learners. Journal of Nonverbal Behavior 24:105–130.
Kegl, Judy A. 1986. Clitics in American Sign Language. In The syntax of pronominal
clitics, ed. Hagit Borer, 285–309. New York: Academic Press.
Kegl, Judy A. 1995. The manifestation and grammatical analysis of clitics in American
Sign Language. Papers from the Regional Meetings, Chicago Linguistic Society
31:140–167.
Klima, Edward S., and Ursula Bellugi. 1979. The signs of language. Cambridge, MA:
Harvard University Press.
Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton.
Liddell, Scott K. 1995. Real, surrogate, and token space: Grammatical consequences
in ASL. In Language, gesture, and space, ed. Karen Emmorey and Judy Reilly,
19–41. Hillsdale, NJ: Lawrence Erlbaum Associates.
Liddell, Scott K., and Melanie Metzger. 1998. Gesture in sign language discourse.
Journal of Pragmatics 30:657–697.
Lillo-Martin, Diane, and Edward S. Klima. 1990. Pointing out differences: ASL pro-
nouns in syntactic theory. In Theoretical issues in sign language research, Vol. 1:
Linguistics, ed. Susan Fischer and Patricia Siple, 191–210. Chicago, IL: University
of Chicago Press.
Meier, Richard P. 1990. Person deixis in American Sign Language. In Theoretical issues
in sign language research, Vol. 1: Linguistics, ed. Susan Fischer and Patricia Siple,
175–190. Chicago, IL: University of Chicago Press.
Padden, Carol. 1983. Interaction of morphology and syntax in American Sign Language.
Doctoral dissertation, University of California, San Diego, CA.
Padden, Carol. 1990. The relation between space and grammar in ASL verb mor-
phology. In Sign language research: Theoretical issues, ed. Ceil Lucas, 118–132.
Washington, DC: Gallaudet University Press.
Petronio, Karen. 1988. Interpreting for Deaf-Blind students: Factors to consider.
American Annals of the Deaf 133:226–229.
Petronio, Karen. 1993. Clause structure in American Sign Language. Doctoral disserta-
tion, University of Washington.
Reed, Charlotte M., Lorraine A. Delhorne, Nathaniel I. Durlach, and Susan D. Fischer.
1990. A study of the tactual and visual reception of fingerspelling. Journal of Speech
and Hearing Research 33:786–797.
Reed, Charlotte M., Nathaniel I. Durlach, and Lorraine A. Delhorne. 1992. Natural
methods of tactual communication. In Tactile aids for the hearing impaired, ed.
Ian R. Summers, 218–230. London: Whurr.
Reed, Charlotte M., Lorraine A. Delhorne, Nathaniel I. Durlach, and Susan D. Fischer.
1995. A study of the tactual reception of Sign Language. Journal of Speech and
Hearing Research 38:477–489.
Urwin, Cathy. 1979. Preverbal communication and early language development in
blind children. Papers and Reports on Child Language Development 17:119–127.
Winston, Elizabeth A. 1991. Spatial referencing and cohesion in an American Sign
Language text. Sign Language Studies 73:397–409.
Woodward, James. 1973. Some characteristics of Pidgin Sign English. Sign Language
Studies 3:39–46.
Yarnall, Gary D. 1980. Preferred methods of communication of four Deaf-Blind adults:
A field report of four selected case studies. Journal of Rehabilitation of the Deaf
13:1–8.
Index

Aarons, Debra 175, 196, 403 constraints (see Battison’s Constraints)


Abd-El-Jawad, Hassan 128, 139 149–155; phonological structure 286;
Ablorh-Odjidja, J.R. 271, 292 phonological units & patterns 27, 321, 134;
Abney, Steven P. 305, 308, 318 possessives 306–308; pronouns and 18,
Abu-Salim, Issam 128, 139 245–246, 250, 299, 306–307, 323–324, 328,
acquisition: see language acquisition 333–340, 345, 350, 354, 355, 365, 396, 446;
Ades, Anthony 93, 108 rate of sign production 8, 132, 158;
Ahlgren, Inger 253, 259, 312, 318, 354, 358, repetition in 68, 73, 74; role shift in 255,
366, 374, 400 447; slips of the hand 3, 28–29, 116,
Akamatsu, C. Tane 147, 163 126–129, 133; split negation in 238, 276;
Allan, Keith 297, 318, 397, 400 syllable structure of 106; syntax of 2–4,
Alphei, H. 37, 62 237, 275–278, 297, 317; tactile signed
Ameka, Felix K. 69, 84 language and 445, 459–460, 462, 463; topic
American Sign Language (ASL): acquisition marking in 201, 212–218; use of space in
of 4, 12, 143, 256; auxiliaries and 19, 252, 245–246, 321–323, 326, 447, 457, 460; verb
324; categorical perception 10; classifiers agreement 3–4, 51, 245–246, 248–252, 322,
and 19, 397; compounds in 3, 72–73, 154; 324, 325, 371, 376, 380–382; verb classes
deaf education and 32, 161–162; 19, 51, 246, 380; wh questions in 446, 449;
definiteness in 238; deictic points 446, 461, yes/no questions in 201, 212–218, 446, 449
463; dictionaries of 69, 72; Anderson, Roger 160, 162
English-influenced signing vs. 444, 445, Anderson, Stephen R. 62, 68, 84, 346, 350,
455; grammaticization and 199–202, 171, 366
207, 210–212, 214; history of 12, 201, 230; Anderson-Forbes, Meribeth 69, 86
iconicity 172; indefinite determiners 301, animacy 380–383
303; joints of the arm and hand 9, 43, 205; aphasia 124, 256
lexicalization of gesture 168, 207, 210, 171; apraxia 422, 427, 429, 436, 438–439
MCE systems 17, 32, 146–148, 150, 160; Arabic 128, 228
mimetic devices 169–170; modals in 13, Aranda 34–45, 332–33
201, 207–209, 212, 220; morphological Arbib, Michael A. 200, 222
processes (inflectional & derivational) 3, 16, arbitrariness 14, 15
30, 48–50, 151–153; narratives/poetry argument structure 385–386, 395
102–103; negation 275–276, 284, 432; Armstrong, David F. 167, 173, 200, 220
non-manual behaviors and 19, 446, 447, Arnheim, Rudolph 184, 196
459; null arguments 4, 114; perception and Arnold, Richard 69, 85
94–95, 101, 462; person 18, 247–248, 254, Aronoff, Mark 16, 21, 252, 256–257, 259,
323–324, 355–356, 359–362; phonetic 344, 366, 384, 399–400
constraints in 258, 392–395; phonological articulatory-perceptual interface 15, 392,
assimilation and 123; phonological 395, 399, 242–244, 385–389

Asheninca 331–32, 344–45 Bloom, Paul 10–11, 24, 323, 327, 351, 368
ASL: see American Sign Language Bloomfield, Leonard 1–2, 21
assimilation 123, 154–56, 158–159 Boas, Franz 349, 366
Athabascan languages 322 Bobaljik, Jonathan 371, 400
Aubry, Luce 391, 400 Bornstein, Harry 157, 162
Auslan 231, 233, 345, 372, 376, 338–40, 6 borrowings, lexical 2, 3, 224, 230–235
Australian Sign Language: see Auslan Bos, Heleen 19, 21, 176, 197, 237, 252, 259,
auxiliary verbs 19, 176, 252, 324, 383–384 384, 400
Boyes-Braem, Penelope 272, 292
Baars, Bernard 117, 139, 141 Bradley, Lynette 102, 109
babbling: manual 7, 43; vocal 9 Braun, Allen 4, 23
Bahan, Benjamin 23, 175, 196, 251, 259, Brauner, Siegmund 281, 293
261, 294, 301, 303, 306, 307, 308, 309, 311, Brazilian Portuguese 371
318, 319, 354, 368, 374, 375, 391, 400, 403, Brazilian Sign Language (LSB) 4, 19–20,
417, 420, 424, 432, 440, 445, 446, 462, 463, 231, 233, 251, 324
465 Bregman, Albert S. 36, 38, 61
Bainouk 257, 259 Brentari, Diane 3, 5, 10, 21–22, 27, 30, 33,
Baker, Charlotte 213, 220, 338, 363, 366, 38, 39, 41, 42, 44, 52, 60, 61, 68, 81, 84, 88,
446, 447, 460, 465 93, 109, 130, 132, 139, 205, 221, 263, 285,
Baker, Mark 383, 386, 397, 400 286, 292, 293, 296, 318, 387, 400
Baker-Shenk, Charlotte 396, 400 Brinkley, Jim 108, 109
Battison, Robbin 3, 21, 32, 33, 39, 61, 68, British Sign Language (BSL) 6, 19, 213,
84, 95, 108, 149–51, 154, 157, 162 231, 233, 325, 372, 384, 396, 422–441
Battison’s constraints 32, 149 Broselow, Ellen 400, 395
Baumbach, E.J.M. 289, 292 Brouland, Josephine 202, 204, 209, 211,
Bavelier, Daphne 4, 21, 23 221
Bebian, Roch-Ambroise 145, 146, 162 Brown, Roger 156, 162
Becker-Donner, Etta 284, 292 Bryant, Peter 102, 109
Bella Bella 347–49 Butterworth, Brian 112, 139
Bellugi, Ursula 2–4, 8, 14, 19, 21–22, 27, 28, Bybee, Joan L. 199, 202, 206–210, 219,
29, 32, 33, 62, 72, 85, 102, 103, 108, 110, 221
116, 124, 126, 127, 128, 129, 131, 132, 133,
140, 141, 151, 158, 162, 163, 169, 170, 171, Campbell, Ruth 10, 21
172, 173, 175, 197, 234, 236, 261, 296, 311, Cantonese 297, 298, 303, 306, 309–310,
318, 319, 323, 325, 333, 366, 367, 373, 396, 311, 315
402, 427, 429, 436, 441, 457, 459, 466 Capell, A. 280, 293
Bendixen, B. 447, 465 Caramazza, Alfonso 117, 139
Benson, Philip J. 10, 21 Caron, B. 268, 293
Bentin, Shlomo 46, 62 Cassell, Justine 198
Benton, Arthur L. 426, 440 Casterline, Dorothy C. 2, 24, 27, 33, 69, 86,
Benton, R.A. 347, 366 88, 111, 147, 164, 373, 404
Benveniste, Emile 363, 366 categorical perception 10
Berber 325, 437, 438 categorical structure 177, 178, 179, 191,
Berg, Thomas 115, 117, 133, 139 195
Bergman, Brita 213, 220, 272, 292, 312, 318 Channon, Rachel; see also Crain, Rachel
Bever, Thomas G. 93, 97, 111, 145, 162 Channon 81, 83, 84, 85
Bickerton, Derek 12, 21 Chao, Chien-Min 69, 85
Bickford, Albert 226, 235 Chao, Y.R. 56, 61
Blake, Joanna 210, 221 Charlier, M. 295
Blondel, Marion 103, 109 Chase, C. 37, 61
Cheek, Adrianne 7, 21 Croft, William 219, 221


Cheng, Lisa 297, 305, 318 Croneberg, Carl G. 2, 24, 27, 33, 69, 86, 88,
Cherry-Shuman, Mary Margaret 69, 73, 111, 147, 164, 373, 404
86 crosslinguistic comparisons, lexical similarity
Chinchor, Nancy 337, 366 across signed languages 172, 232–235
Chinese 56, 57, 61, 114, 131, 191–195, 303, Cutler, Anne 93, 97, 109, 111, 115, 139
309–310
Chinese Sign Language 19, 172, 234 Dahl, Östen 219, 263, 293
Chomsky, Noam 36, 61, 112, 114, 137, 138, Danish Sign Language 18, 19, 123, 172, 234,
139, 144, 145, 162, 169, 241, 242, 254, 259, 248, 324, 338–340, 345, 359–361
291, 293, 370, 384, 386, 387, 400 Davis, Barbara L. 9, 22
Christopher; see linguistic savant de Haan, Ferdinand 209, 221
Chu, His-Hsiung 69, 85 de l’Epée, Abbé Charles M. 62, 144–146,
Clark, Vince 4, 23 162
classifiers 19, 322, 325, 342, 352, 405, deaf education: Brazil 224; France 67, 144,
436–437, 438, 452, 453 145, 146, 224, 234; Mexico 224–225, 235;
Clements, George N. 36, 39, 43, 61 USA 145, 146–147, 159–161
Coerts, Jane 213, 221, 272, 293 Deaf-Blind signers 326, 442–465
Cohn, Jim 103, 109 definiteness 32, 238, 298–305, 307–309
Cokely, Dennis 213, 220, 338, 363, 366, deixis 345–50, 358, 361, 446–447, 460–461
396, 400, 443, 465 DeJorio, Andrea 168, 173
Collett, Peter 173 Del Viso, Susana 128, 140
Collins, Steve 445, 446, 462, 463, 465 Delhorne, Lorraine 443, 444, 466
Colombo, Lucia 89, 93, 110 Dell, Gary 112, 115, 128, 140
compounds 2, 3, 66, 72–74, 79, 80, 81, 83, DeMateo, Asa 245, 260
84, 85, 154 demonstratives 345–50, 365
Comrie, Bernard 370, 371, 377, 380, 400 design features of language 2, 3, 4, 14
conceptual iconicity 357–58, 365 determiners 298–305, 314–315
conceptual structure 385–389, 392, 399 Deuchar, Margaret 272, 293
Conlin, Kimberly E. 7, 23, 63 Deutsche Gebärdensprache (DGS): see
consonants 45–47, 93, 94, 97, 102, 106, 133 German Sign Language
convention, linguistic 15, 178–179, 188–190 Diessel, Holger 345, 347–48, 365, 366
Cooper, Franklin 93, 110 Distributed Morphology 258, 264, 285
Coppola, Marie 12, 19, 22, 262 Dixon, R. M. W. 56, 62
Corina, David P. 4, 21, 23, 27, 29, 30, 33, 39, Dodrill, Carl 108, 109
62, 80, 82, 85, 88, 89, 92, 94, 97, 102, 104, Dommergues, Jean 107, 110
108, 109, 110, 124, 133, 139, 352, 366, Dowd, Dorothy 29, 33
368 Duncan, Susan 182, 196, 197, 401
Cormier, Kearsy 237, 336, 362–63, 366, Durlach, Nathaniel 443, 444, 466
375, 385, 396, 400, 401 Dutch 137
Costello, Elaine 66, 69, 72, 73, 85
Coulter, Geoffrey R. 56, 62, 63, 68, 85, 290, Efron, David 170, 173
293 Ehrlenkamp, Sonja 372, 401
Crain, Rachel Channon; see also Channon, Ekman, Paul 170, 173
Rachel 81, 82, 85 Elman, Jeffrey 92, 109
Crain, Stephen 112, 113, 139, 296, 318 emblem: see gesture, conventional
Crasborn, Onno 46, 62 Emmorey, Karen 10, 21, 22, 62, 63, 64, 89,
Creissels, D. 288, 289, 293 92, 102, 108, 109, 169, 170, 171, 173, 183,
creole languages 12, 13 184, 197, 322, 325, 342, 391, 405, 414, 419,
critical period 4 420, 407–408, 410–412
Engberg-Pedersen, Elisabeth 18–19, 22, 237, Frost, Ram 46, 62


248, 255, 260, 324, 326, 354, 366, 405, 419, Furuyama, Nobuhiro 418, 420
420, 446, 465, 338–39, 359–61
English 15, 17, 93, 97–102, 107–108, 112, Gã 270, 284
114, 128, 134, 147, 156–158, 161, 202, Gambino, Giuseppe 338, 368
207–208, 210, 253–254, 309, 330–31, Gangel-Vasquez, J. 424, 440
345–46, 349, 371, 379, 424, 426, 428, Garcı́a-Albea, José 128, 140
431–432, 435, 438, 459 Garrett, Merrill F. 112, 115, 121, 125, 128,
Eriksson, Per 202–203, 221 133, 140
Estonian 278, 284 Gaustad, Martha 156, 162
Evans, Alan 102, 111 Gee, James 113, 132, 140, 160, 162, 374,
eyegaze 170, 447, 453, 459, 460, 461, 463, 397, 401
464 gender, morphological 324, 329–333,
340–341, 344, 353, 363–64
generics 297, 308–309
Falgier, Brenda 410–412, 414, 419, 420
Gerken, LouAnn 156, 160
Fant, Gunnar 62
German 113, 119, 127, 131, 133, 371, 426
Fant, Lou 69, 85
German Sign Language (DGS) 3, 5, 6, 20,
Farnell, Brenda 342, 366
29, 31, 112–138, 238, 248, 252, 272, 277,
Fauconnier, Gilles 185, 197, 310, 318, 357,
284, 286, 290, 324, 372, 376, 379–384,
366, 376, 401
391–392, 396–397
Feinberg, Tod 29, 33 gesture 13, 167, 171, 196, 361, 183–184,
Feldman, Heidi 12, 16, 22 200–201; abstract deixis and 176, 182, 187;
Ferber, Rosa 129, 140 blind children and 169, 460, 465; child
Fiengo, Robert 385, 401 language development 167, 168, 176;
fingerspelling 3, 83, 325, 423, 444, 453, conventional 168, 170, 179, 180, 186,
461 206–207, 210–214; evolution of language
Fischer, Susan D. 3–4, 8, 21–22, 51, 62, 68, 167, 200, 210–214; gesticulation and 168,
85, 158, 162, 175, 197, 237, 244, 252, 260, 169, 11–12, 180–182; in a linguistic savant
338, 341, 367, 370, 373, 375, 384, 391, 397, 428–429; lexicalized 168; linguistic vs.
401, 444, 466 non-linguistic 169, 249, 325, 170–71,
Fodor, Jerry A. 112, 140, 423, 440 175–176, 253–254, 257–259, 357–58;
Forchheimer, Paul 332, 367 modality-free definition of 177, 187;
Fortescue, Michael 56, 62 Neopolitan 168, 207, 210; non-manual
Fourestier, Simone 397, 401 Neapolitan 168, 207, 210; non-manual
Fox, Peter 102, 110 (see also: gesture, gesticulation and);
Frackowiak, Richard 102, 110 spoken 187–188
Frajzyngier, Zygmunt 280, 293 Gilman, Leslie 156, 162
French 145, 146, 206–207, 266, 284, 426 Giurana, Enza 338, 368
French Sign Language (LSF) 5, 13, 19, 21, Givón, Talmy 199, 221
168, 172, 201–211, 224–235 Gjedde, Albert 102, 111
Freuenfelder, Uli 107, 110 Glück, Susanne 273, 274, 278, 293, 294
Freyd, Jennifer 179, 197 Goldinger, Stephen 89, 92, 110
Friedman, Lynne 333, 367, 373, 377, 401 Goldin-Meadow, Susan 11–12, 16, 22, 168,
Friedman, Victor A. 348–49, 367 169, 173, 200, 222, 461, 465
Friel-Patti, Sandy 160, 164 Goldsmith, John 39, 47, 54, 61, 62, 284, 290,
Friesen, Wallace V. 170, 173 293
Frishberg, Nancy 202, 221, 230, 235 Goodglass, Harold 427, 440
Frith, Christopher 102, 110 Goodhart, Wendy 113, 132, 140, 160, 162,
Fromkin, Victoria A. 3, 22, 112, 115, 133, 163, 397, 401
140, 296, 318 Gough, Bonnie 175, 197, 373, 375, 401
gradient structure 179, 186–187, 189 iconicity: 11, 12, 14, 15, 16, 83, 167, 225,
grammaticization 199–202, 205–208, 171–72, 233–235, 357–358, 365, 430, 438;
210–212, 216–220 shared symbolism 224, 229
Green, David M. 37, 62 Idioma de Señas de Nicaragua (ISN): see
Greenberg, Joseph 224, 233, 235, 370 Nicaraguan Sign Language
Grinevald, Collette 397, 401 Igoa, José 128, 140
Groce, Nora E. 12, 22, 225, 235 indexation: deictic points 443, 446, 452,
Grosjean, Francois 158, 163 453, 455, 460, 461, 462, 463; indexic signs
Guerra Currie, Anne-Marie P. 226, 235 224, 233
Gustason, Gerilee 147, 163 indices (referential) 253–257
indigenous signed languages: Mexico 225,
Haegeman, Liliane 266, 293 71–72; Southeast Asia 21
Haiman, John 57, 62, 213–214, 221 Indo-European 130, 131
Haitian Creole 13 Indo-Pakistan Sign Language 72, 73,
Hale, Kenneth 333, 367 338–40, 345, 353, 363
Halle, Morris 36, 41, 61, 258, 260, 264, 265, Ingram, David 338, 355, 367
279, 293 initialized signs 227, 235
Hamburger, Marybeth 89, 111 Inkelas, Sharon 50, 64
Hamilton, Lillian 157, 162 International Phonetic Association 70, 85
Hamsher, Kerry 426, 440 Israeli Sign Language 16, 19, 69, 72, 73, 74,
Happ, Daniela 114, 117, 127, 140 248
Harley, Heidi 264, 294 Italian 3, 114
Harré, Rom 330, 332–33, 368 Italian Sign Language (LIS) 203, 338–40,
Hartmann, Katharina 268, 269, 270, 294 345
Háusá 268, 284 Itô, Junko 39, 42
Henrot, F. 295 Iverson, Jana M. 168, 169, 173, 460, 461,
Hermelin, B. 426, 441 465
Herzig, Melissa 405, 419, 420
Hewes, Gordon W. 200, 221 Jackendoff, Ray 242, 260, 386, 387, 401
Hickock, Gregory 4, 22, 108, 110, 407, Jakobson, Roman 41, 62
420 Janis, Wynne 51, 62, 250, 260, 372, 375,
Hildebrandt, Ursula 104, 108, 110 382, 401
Hinch, H.E. 280, 293 Janzen, Terry 200, 205, 212, 214–218, 220,
Hinshaw, Kevin 108, 109 222
Hirsh, Ira J. 62 Japanese 93, 371
Hockett, Charles 2–3, 15, 22, 461, 465 Japanese Federation of the Deaf 69, 71, 85
Hohenberger, Annette 114, 117, 127, 140 Japanese Sign Language (NS) 5, 6, 20, 72,
Holzrichter, Amanda S. 42, 62, 68, 85, 226, 73, 172, 224–229, 232, 234, 245, 252,
235 363, 344–45, 338–42, 372, 376, 380, 384,
home signs 12, 168, 225 396
Hong Kong Sign Language (HKSL) 5, 238, Jenkins, J. 46, 62
299, 300, 301, 302, 303, 304, 305, 306, 307, Jenner, A.R. 37, 61
309, 312, 314, 315, 316, 317 Jescheniak, Jörg 115, 140
Hopper, Paul 199, 205–206, 212, 221 Jezzard, Peter 4, 23
Hua 57, 62, 214 Johnson, Mark 186, 197
Hulst, Harry van der; see van der Hulst, Harry Johnson, Robert E. 27, 33, 44, 46, 58, 63, 68,
Huet, Eduardo 224, 225, 235 86, 88, 110, 154, 163, 175, 197, 225, 235,
Hume, Elizabeth V. 39, 43, 61 356, 367, 374, 402
Humphries, Tom 204, 209, 211, 221 Johnston, Trevor 231, 232, 236, 338, 367,
Hyman, L.M. 288, 289, 294 391, 402
Kaplan, Edith 427, 440 language contact and change: cognate signs
Karni, Avi 4, 23 231; lexical borrowing 224, 230–235
Kaufman, Terrence 231, 236 language contact and change normal
Kayne, Richard 291, 294 transmission 231–232
Kean, Mary-Louise 130, 140 language faculty 241–242, 423
Keenan, Edward L. 18, 346, 350, 366 language planning 144–47, 160, 161
Kegl, Judy A. 12–13, 19, 22–23, 173, 175, language processing 31, 89, 92, 145, 156
196, 261, 294, 296, 301, 303, 306, 307, 308, language production 112, 115, 132, 135, 138
309, 311, 318, 319, 354, 368, 374, 397, 401, language typology 56–57, 114, 129, 131, 139
403, 404, 417, 420, 424, 432, 440, 446, 465 language universals: formal 370, 396, 398;
Keller, Jörge 372, 375, 402 substantive 370, 373, 394, 398, 399
Kendon, Adam 69, 85, 170, 173, 207, 210, Langue de Signes Française (LSF): see French
222, 388, 402 Sign Language
Kennedy, Graeme D. 69, 85, 226, 231, 233, Lany, Jill 173, 466
236 LaSasso, Carol 161, 163
Kenstowicz, Michael 65, 69, 70, 85, 288, 294 Last, Marco 337, 367
Kettrick, Catherine 69, 85 lateralization 2, 4
Khasi 346–48 Latin 207
Kimura, Doreen 423, 427, 440 Launer, Patricia 151, 163
Kinande 288 Laycock, Donald C. 332, 367
Kinyarwanda 282, 284 Lee, Robert G. 23, 261, 294, 301, 303, 306,
Kipare 290 307, 308, 309, 311, 318, 319, 354, 368, 374,
Kiparsky, Paul 177, 197 403, 417, 420, 424, 432, 440
Kita, Sotaro 184, 197, 200, 222, 389, 390, Lehmann, Christian 379, 380, 402
402 lengthening 44, 52, 53, 54
Klima, Edward S. 2–4, 8, 14, 19, 22, 27, 28, Lengua de Señas Mexicana (LSM): see
29, 32, 33, 52, 58, 62, 73, 85, 102, 103, 108, Mexican Sign Language
110, 116, 123, 126, 127, 128, 129, 131, 132, Lengua de Signos Española (LSE): see
133, 140, 141, 151, 158, 163, 169, 170, 171, Spanish Sign Language
172, 173, 175, 197, 234, 236, 237, 245, 247, Lentz, Ella Mae 117, 140, 142
252, 253, 255, 261, 296, 311, 318, 319, 323, Leuninger, Helen 114, 115, 117, 127, 128,
325, 330, 333, 335, 349, 354, 356, 358, 366, 132, 135, 139, 140
367, 368, 373, 374, 396, 402, 407, 420, 427, Levelt, Willem J.M. 98, 101, 110, 111, 112,
429, 441, 446, 457, 459, 466 113, 115, 116, 121, 128, 133, 134, 135, 136,
Kohlrausch, A. 37, 62 137, 140, 141
Kyle, Jim G. 69, 85, 232, 233, 234, 236 Levelt’s model of language production 113,
115, 116, 121, 125, 128, 133, 134, 135, 138
Labov, William 410, 420 Levesque, Michael 102, 111
Lacy, Richard 374, 402 Levy, Elena 198
Ladd, Robert 177, 197 lexicon: 2, 15, 375, 377, 378, 387, 391, 392;
Lak 347–49 comparative studies of borrowing, lexical
Lakoff, George 186, 197, 402 224, 230–235; comparative studies of
Lalwani, Anil 4, 23 equivalent variants 227–235; comparative
Landau, Barbara 156, 162 studies of similarly-articulated signs
Lane, Harlan 62, 63, 67, 86, 103, 111, 144, 227–235
146, 147, 163, 202, 222, 333, 350–353, 367 Liben, Lynn 143, 163
Langacker, Ronald W. 179, 185, 197, 402 Liberman, Alvin M. 28, 33, 93, 108, 110
language acquisition 2, 4, 12, 16, 17, Liddell, Scott K. 3, 18, 22, 27, 33, 44, 46, 58,
143–45, 151, 156–58, 160, 161, 250, 256, 63, 68, 86, 88, 110, 154, 163, 170, 171, 173,
423, 425, 429, 434 175, 188, 190, 197, 213, 222, 245, 248, 249,
250, 252, 253, 254, 255, 257, 260, 272, 294, Masataka, Nobuo 416, 420
297, 310, 311, 314, 318, 319, 324–326, 337, Mathangwane, J.T. 289, 294
343, 355–58, 361, 367, 368, 374, 375–377, Mathur, Gaurav 250, 252, 257, 258, 261,
379, 390, 391, 402, 411–412, 420, 446, 447, 372, 391–394, 403
466 Matthews, Stephen 309, 319
Lillo-Martin, Diane 4, 19, 22, 112, 113, 114, Mattingly, Ignatius 108, 110
139, 141, 175, 197, 237, 243, 244, 245, 247, Mauk, Claude 7, 23, 63
251, 252, 253, 255, 260, 261, 296, 318, 319, Maung 280, 284
323–324, 330, 335, 349, 354, 356, 358, 368, Maxwell, Madeline 156, 163
374, 384, 396, 399, 402, 404, 436, 440, 446, May, Robert 395, 401
466 Mayberry, Rachel 4, 22
Lindblom, Björn 378, 402 Mayer, Connie 147, 163
Linde, Charlotte 410, 420 Mayer, Karl 115, 141
Lingua de Sinais Brasileira (LSB): see McAnally, Patricia 158, 163
Brazilian Sign Language McBurney, Susan 108, 109
Lingua Italiana dei Segni (LIS): see McCarthy, John 39, 63, 114, 141, 290, 294,
Sign Language 400, 403
linguistic savant 6, 325, 422–440 McClelland, James 92, 109
literacy 147, 161 McCullough, Karl-Erik 198
Liu, Chao-Chung 69, 85 McCullough, Stephen 10, 22
Livingston, Sue 160, 163 McGarvin, Lynn 7, 23
Llogoori 47, 62 McKee, Cecile 156, 161, 163, 164
loci (referential) 245–249, 252–257 McKee, David 226, 231, 233, 236
Loew, Ruth 256, 261 McNeill, David 11, 22, 168, 169, 170, 173,
Logical Form (LF) 114, 264–265 176, 177, 180, 184, 186, 196, 197, 198, 200,
Longobardi, Giuseppe 308, 319 222, 389, 403
Luce, Paul 89, 110 Meadow, Kathryn P. 156, 164
Luetke-Stahman, Barbara 144, 163 Mehler, Jacques 107, 110
Lundberg, Ingvar 102, 110 Meier, Richard P. 3–7, 9–10, 13, 16–20,
Lupker, Stephen 89, 92, 110 23–24, 35, 37, 38, 42, 62, 63, 68, 85, 143,
Lyovin, Anatole V. 279, 294 151, 164, 176, 198, 244, 245, 247, 249, 250,
252, 253, 254, 255, 261, 296, 310, 319,
MacDonald, Brennan 102, 111 323–324, 330, 339, 354, 361, 368, 372, 373,
MacKay, Donald G. 112, 115, 117, 139, 375, 378, 379, 384, 401, 403, 446
141 Meir, Irit 16, 19, 21, 237, 248, 256, 259, 261,
MacKay, Ian R.A. 136, 141 344, 366, 372, 375, 382, 400, 403
MacLaughlin, Dawn 23, 261, 294, 298, 299, Meissner, Martin 8, 23
300, 301, 303, 305, 306, 307, 308, 309, 311, memory, short term 14, 29, 426
318, 319, 354, 368, 374, 396, 403, 417, 420, mental spaces 296–297, 310–317
424, 432, 440, 446 Meringer, Rudolf 115, 141
MacNeilage, Peter F. 9, 22, 133, 134, 141 Methodical Sign 145–47, 160
Mainwaring, Scott 407, 420 Metzger, Melanie 161, 163, 170, 171, 173,
Mano 284 175, 184, 198, 311, 319, 446, 466
Manually Coded English (MCE) 32, 323, Mexican Sign Language (LSM) 5, 172,
143–162, 17 224–235
Marantz, Alec 260, 264, 265, 293, 403 Meyer, Antje 98, 101, 110, 111
Marcario, Joanne 89, 110 Meyer, Antje 112, 115, 117, 121, 129, 133,
Marentette, Paula 4, 7, 24, 43, 63 141
Marschark, Marc 167, 171, 173, 183, 197 Meyer, Ernst 102, 111
Marsh, P. 173 Mikos, Ken 117, 140, 142
Miller, Christopher Ray 49, 63, 68, 86, 103, Mühlhäusler, Peter 330, 332–33, 368
109, 261 Myers, Scott 56, 63, 288, 294
Miller, George A. 334, 368 Mylander, Carolyn 12, 22
Mills, Anne 39, 64
Mintun, Mark 102, 110 Nagala 331–32, 344–45
Mirus, Gene R. 7, 23, 63 Nagaraja, K.S. 348, 368
modal: 13, 201, 212, 220, 207–209; Nanai 281, 284
epistemic 208–212; obligation 202, 210, natural language 143, 161, 162
212; permission 207–210; possibility 201, Natural Sign 145, 146
207–210 Navaho 8
modality: 35, 145, 241, 259, 350–53, negation: 5, 20, 251, 325, 431–433, 437,
364–65; influence on phonology 60, 113, 438; morphological 263, 274, 281, 283,
253; medium versus 11, 322–323, 329, 284; split 238, 266, 274, 284
358–59, 364–65; noneffects of 2, 14–15, negation phrase (NegP) 266, 269, 271, 273,
113, 237–238, 243–244 275
modality effects: classification of, rules Neidle, Carol 5, 18, 23, 175, 196, 237, 244,
unique to signed or spoken languages 13, 247, 248, 251, 261, 272, 275, 276, 294, 301,
17–18; classification of, statistical 13, 301, 303, 306, 307, 308, 309, 311, 318, 319,
15–16; classification of, typological 13, 16, 354, 368, 374, 403, 417, 420, 424, 432, 440,
56–57, 114; classification of, uniformity of 446, 465
signed languages 13, 18–20, 57, 113–114, Nespor, Marina 39, 63
324, 395–397, 399; iconicity 233–235; Neville, Helen J. 4, 21, 23
New Zealand Sign Language (NZSL) 158,
lexical similarity across signed languages
159, 231
159–61, 232–235; sequentiality vs.
Newkirk, Don 67, 86, 116, 124, 126, 133,
simultaneity 27–28, 113–114, 134, 438;
134, 141
sources of articulators 6, 7, 8, 9, 36,
Newport, Elissa L. 3–5, 10, 12, 16, 19–21,
107–108, 125, 132; sources of, iconicity 11,
48, 64, 67, 68, 87, 116, 124, 126, 134, 143,
12, 15, 357–358; sources of, indexicality 11,
151, 159, 164, 176, 198, 262, 322, 326, 370,
12, 245, 359; sources of, nonmanual
372, 398, 403, 404
behaviors 237, 238; sources of, perception
Nicaraguan Sign Language (ISN) 12, 13, 168
10, 11, 36, 107; sources of, signing space
Nihon Syuwa (NS): see Japanese Sign
132, 237, 238, 245, 344, 348–352, 399, 409,
Language
439; sources of, youth of 6, 12, 13, 20 Nogogu 331–32, 333, 345
Moeller, Mary P. 144, 163 Nolen, Susan Bobbit 8, 25, 68, 87
Moody, Bill 19, 23, 237 nominals 5, 238, 297, 308–310, 312
Moores, Donald F. 144, 147, 164 non-manual behaviors: 19, 113, 119, 124,
Morford, Jill P. 168, 173 167, 170, 171, 237, 238, 274, 284–285, 289,
Morgan, Gary 422, 423, 425, 436, 440 442, 447, 457, 459, 464, 431, 438; as
morpheme 117, 130, 131, 138, 120, 126, gestures 217–220; eyebrow raise 213–214,
128, 129 217–218; eyegaze 170, 447, 453, 459, 460,
morphology: 2, 3, 13, 16, 19, 20, 32, 48–50, 461, 463, 464; negative headshake 263, 275,
57, 113, 138, 148, 151, 152, 156–60, 277, 272, 286
177–178; affixation 16, 17, 150–55, 157–60, Norris, Dennis 93, 109, 111
393, 394; and language typology (see noun phrases (see nominals)
language typology); and slips of the hand Noyer, Rolf 264, 265, 279, 294
128–131 null arguments 19, 114, 325
Morris, Desmond 170, 173 number 329–33, 335–40, 344, 353–54,
Motley, Michael 117, 139, 141 362–65
Mounty, Judith 160, 162 Nusbaum, Howard 89, 111
Mowry, Richard 136, 141 Nuyts, Jan 357, 368
O’Rourke, Terence 204, 209, 211, 221 Petersen, Lesa 68, 87


O’Connor, Neil 426, 441 Petersen, Steven 102, 110
Odden, David 114, 141, 290, 294 Petitto, Laura A. 4, 7, 24, 43, 63, 176, 198,
Ogilvy-Foreman, Dale 69, 86 200, 222, 434, 441
O’Grady, Lucinda 436, 440 Petronio, Karen 399, 404, 445, 446, 447,
O’Grady-Batch, Lucinda 33, 29 462, 463, 465, 466
Ojemann, George 108, 109 Pfau, Roland 126, 141, 273, 274, 278, 293,
Old LSF 171 294
Olofsson, Ake 102, 110 Pfetzing, Donna 147, 163
onomatopoeia 172, 178 Philpott, Stuart B 8, 23
O’Seaghdha, Padraigh 115, 140 Phonetic Form (PF) 114, 137, 242–244,
O’Shaughnessy, M. 173 264–265
Osugi, Yutaka 80, 81, 86, 338, 341, 367, 369 phonology: 19, 27–33, 35, 36, 38, 39, 41, 43,
Otake, Takashi 93, 109 44, 45, 46, 54, 55, 56, 60, 61, 62, 63, 64,
Ouhalla, Jamal 267, 268, 294 241–244, 253, 257–259, 321, 325; canonical
Overdulve, C.M. 283, 294 word shape 57; features 39–42, 44–45, 52,
Owusu, N. 283, 295 127, 243, 263, 286; modality and 35, 60–61,
Ozyurek, Asli 200, 222, 417, 420 65–66, 84, 107–108, 243; minimal pairs 28,
57, 58, 67; parameters of sign formation 28,
Padden, Carol 3, 19, 24, 49, 50, 63, 83, 86, 31, 32, 38–39, 94, 119, 121, 123, 124, 125,
175, 198, 204, 209, 211, 221, 237, 244, 246, 126, 127, 133, 136, 170, 172, 227–228, 444,
247, 248, 250, 252, 255, 261, 296, 319, 333, 462; phoneme 30, 128; phonological
368, 372, 373, 380, 382, 388, 403, 446, 447, awareness 102, 103; phonological priming
457, 465, 466 89–93; phonological similarity 91, 102–107;
Pagliuca, William 199, 202, 206–207, psychological reality of 88, 108; root node
209–210, 219, 221 36, 40, 43, 51, 54, 55, 56, 57, 60; timing unit
Pakistani Sign Language 278 43, 44, 52, 60, 81, 82; weight unit 43, 45, 50
PAM (see auxiliary verbs) Pidgin Sign English (PSE) 444, 462
Pangasinan 346–47 Pilleux, Mauricio 272, 295
pantomime 169, 170, 171 Pinker, Steven 10–11, 24, 323, 327, 351, 368
parts of speech 2, 3 Pisoni, David 89, 110, 111
Pizzuto, Elena 237, 338, 368, 430, 441
Patschke, Cynthia 215, 222, 285, 295, 298,
Plains Indian Sign Language 342
319
Poeppel, David 102, 110
Paulescu, Eraldo 102, 110
Poizner, Howard 4, 24, 29, 33, 42, 61, 103,
Payne, David L. 331, 368
108, 110, 111, 176, 198, 256, 261, 296, 311,
Payne, John R. 263, 281, 294
319, 427, 429, 441
Pèlissier, P. 203, 211, 222
Polich, Laura G. 12, 24
Pedelty, Laura 176, 198
Pollock, Jean-Yves 267, 268, 295
Pedersen, Carlene 116, 141
Posner, Michael 102, 110
Pederson, Eric 357, 368 possessives 306–308, 316–317, 323
Penn, Claire 69, 86 Poulin, Christine 255, 261
Perkins, Revere 199, 202, 206–207, Poulisse, Nanda 115, 133, 141
209–210, 219, 221 Poulos, George 282, 295
Perlmutter, David M. 27, 30, 33, 35, 44, 49, Powell, Frank 368
53, 63, 67, 68, 79, 80, 81, 83, 86, 88, 93, Prillwitz, Siegmund 374, 404
110, 134, 137, 141, 290, 294 pronouns 18, 226, 241, 245, 247–250,
person 18, 247–250, 253–256, 323–324, 252–256, 305, 306–308, 315, 316–317,
329–336, 339–340, 342, 347–350, 353–365, 322–323, 326, 329–65
370, 371, 373, 378, 379, 393, 396, 398 prosody 27, 30, 31, 42
Pertama, Edisi 69, 86 Püschel, D. 62
Quadros, Ronice Müller de 4, 20, 24, 251, Savir, Hava 69, 71, 86
261 Schade, Ulrich 115, 141
Quechua 346–47 Schein, Jerome 144, 164
questions: wh- 399, 446, 449, 452; yes/no Schiano, Diane J. 407, 420
201, 212–218, 446, 449, 452 Schick, Brenda 411, 421, 404
Quigley, Stephen 158, 163 Schlesinger, I.M. 156, 164
Schober, Michael F. 407, 421
Rabel, L. 348, 368 Schreifers, Herbert 98, 99, 101, 110, 111
Raffin, Michael 156, 162, 164 segments: 35–36, 39, 42–45, 51–59, 65–68,
Raichle, Marcus 102, 110 69, 76–84, 93, 97, 98; repetition of 31,
Ramsey, Claire 143, 164, 368 65–81, 84, 85
rate of signing 8, 32, 132, 138 Segui, Juan 107, 110
Rathmann, Christian 20, 24, 248, 252, 261, Seidenberg, Mark 46, 63
273, 295, 372, 380, 383, 384, 392–393, 403, Semitic languages 16, 57, 114, 160
404 Senghas, Ann 12, 19, 22, 200, 222, 237, 251,
Rauschecker, Josef 4, 23 262
Ray, Sidney Herbert 332, 368 Sergent, Justine 102, 111
Rayman, Janice 102, 111 Setswana 288
Readjustment rules 265, 272, 274, 285, 394, Shaffer, Barbara 200, 202, 208–210, 212,
395, 278 220, 222
Redden, J.E. 283, 295 Shankweiler, Donald 93, 110
reduplication 48, 49, 50, 69, 74 Shattuck-Hufnagel, Stephanie 125, 142
Reed, Charlotte 443, 444, 445, 462, 466 Shepard-Kegl, Judy; also see Kegl, Judy A.
Reed, Judy 368, 331 374, 404
referential specificity 358–359 Sherrick, Carl E. 37, 62
Reich, Peter 112, 115, 140 Shónà 56, 63, 284, 288, 281
Reikhehof, Lottie 69, 86 Shroyer, Edgar H. 69, 86
Remez, Robert E. 156, 162 Shroyer, Susan P. 69, 86
Repp, Bruno 93, 111 Shuman, Malcolm K. 69, 71, 86
Rizzolatti, Giacomo 200, 222 Sierra, Ignacio 224, 236
Roelofs, Ardi 112, 115, 121, 141 Sign Language of the Netherlands (NGT)
role shift 248, 255, 452, 453 19, 176, 213, 252
Romance languages 114 sign poetry 102–103
Romano, Christine 237 Signed English 326, 449, 455, 457, 459, 460,
Rondal, J.-A. 272, 294 461, 463, 464
Rose, Heidi 103, 111 Signing Exact English (SEE 2) 8, 12, 17,
Rose, Susan 158, 163 146–50, 152–54, 158–60, 323, 352
Ross, J.R. 237 signing space: 205, 237–238, 321, 326, 435,
Russian 178, 279, 284 439, 457, 462, 463; gestural 387–393,
395–397, 399; interpretation of, mirrored
Salvatore, M. 62 413–416; interpretation of, reversed
Sanchez-Casas, Rosa 93, 109 413–416, 418; interpretation of, shared
Sandler, Wendy 16, 21, 27, 30, 33, 35, 44, 407–409, 413–419; spatial formats,
60, 63, 80, 83, 86, 88, 106, 109, 111, 116, diagrammatic 411–412, 414, 418–419;
123, 141, 243, 244, 246, 247, 259, 261, spatial formats, viewer space 411–412, 418;
261, 344, 352, 366, 373, 374, 387, 400, 404 410–412, 414, 418
Sapir, Edward 2, 24 Simmons, David 69, 86
Sauliner, Karen 157, 162 Singleton, Jenny L. 12, 24, 169, 173, 200,
Saussure, Ferdinand de 15, 24 222, 404
Savin, H.B. 93, 97, 111 Siple, Patricia 29, 33, 350, 368
slips of the hand: 2, 3, 5, 14, 29, 116, 138, Sutton-Spence, Rachel 19, 25, 237, 404,
117; morphological structure and 128–131; 431, 432, 433, 441
phonological parameters and 29, 123–124, Suty, Karen 160, 164
126–127; self-corrections (repairs) 29, 117, Swedish Sign Language 213, 253
119, 122, 125, 136, 135; types 117, 124, Sweetser, Eve 199, 222
128, 133, 134, 138, 119, 120, 122, 125, 126, Swisher, M. Virginia 156, 158, 164
127 Sybesma, Rint R. 297, 305, 318
slips of the tongue 29, 116, 119, 121, 127, syllable 27, 30, 35, 43, 44, 45, 46, 50, 51, 56,
129, 138 57, 93, 94, 97, 98, 106–107, 108, 124, 132,
Slobin, Dan I. 132, 142, 156, 164 133, 137, 290
Slowiaczek, Louisa M. 89, 111 syntax: 2–4, 113, 258, 259, 237–238,
Smith, Neil 422, 423, 425, 426, 427, 430, 241–244, 251–255; autonomy of 237–238,
435, 436, 440, 441 241–244; modality and 243–244, 296–297
Smith, Cheri 117, 140, 142 Sze, Felix Y.B. 309, 319
Smith, Wayne 18–19, 248, 252, 262, 324,
327, 329, 340–41, 368, 384, 404, 24 tactile signed language 442, 443, 445,
Smith Stark, Thomas C. 225, 231, 232, 233, 446
236 tactile-gestural modality 4, 442–465
Son, Won-Jae 69, 86 Taiwanese Sign Language 18, 248, 252, 324
sonority 31, 43, 98, 106 Talmy, Leonard 390, 399, 404, 405, 421
Spanish 3, 15, 128, 207, 373, 390, 426 Tang, Gladys 297, 319
Spanish Sign Language (LSE) 5, 172, Taub, Sarah F. 83, 87, 178, 198, 392, 404
224–235 Taylor, Holly A. 410, 413, 419, 421
spatial language 18, 322, 405, 439 temporal aspect: continuative 151–152;
spatial locations: 17, 244–50, 252–253, delayed completive 83
255–258, 297, 322, 373–375, 377–381, 385, Tencer, Heather L. 173, 445, 446, 462, 463,
390–392, 395; non-listablity of 175–176, 466
245, 356, 377, 378, 385, 386, 392 Thelen, Esther 7, 25
spatial marking 333–34, 344–50, 353, 358 Thomason, Sarah G. 231, 232, 236
Speas, Margaret 371, 404 tone languages 114, 268, 281, 287
specificity 298–305, 308–309, 312 topic marking 19, 201, 212–218
Spreen, Otfried 426, 440 Traugott, Elizabeth Closs 206, 217, 222
Stack, Kelly 44, 64 Tsimpli, Ianthi-Maria 422, 423, 425, 426,
Stedt, Joe D. 144, 147, 164 427, 430, 435, 436, 440, 441
Stemberger, Joseph P. 112, 115, 117, 125, Tsonga 289
128, 130, 133, 134, 142 Tuldava, J. 279, 295
Sternberg, Martin L. A. 69, 86 Turkish 131
Stokoe, William C. 2, 5, 24, 27, 28, 33, 39, Turner, Robert 4, 23
58, 64, 69, 80, 86, 88, 111, 147, 164, 167, Tversky, Barbara 407–408, 410, 413, 419,
173, 174, 200, 220, 231, 236, 373 420, 421
Strange, Winifred 46, 62, 64 Twi 283, 284
Studdert-Kennedy, Michael 28, 33, 93, 108, typological homogeneity 114–115, 348–50,
110 352–54, 358–59, 364–65
Stungis, Jim 103, 111
Supalla, Samuel J. 12, 17, 24, 69, 87, 148, Universal Grammar (UG) 27, 38, 112, 114,
151, 158–161, 155, 164, 323, 327, 352, 369 139, 243–244, 423, 431
Supalla, Ted 3, 5, 19–21, 24, 48, 64, 67, 68, universals 241, 256, 370
87, 151, 159, 160, 164, 169, 174, 245, 262, Uno, Yoshio 69, 87
338, 342–45, 369, 370, 372, 397, 403, 404, Urwin, Cathy 461, 466
405, 421 Uyechi, Linda 44, 46, 64, 68, 73, 80, 87
Valli, Clayton 103, 111 Wilcox, Phyllis Perrin 68, 87, 200, 208, 212,
van der Hulst, Harry 52, 60, 64, 80, 87, 243, 223, 392, 404
262, 387, 404 Wilcox, Sherman E. 167, 173, 200, 208, 212,
van Hoek, Karen 311, 319, 436, 440 220, 223
van Ooijen, Brit 93, 97, 109, 111 Willerman, Raquel 7, 23
variation, sources of (see also modality effects) Wilson, Kirk L. 338–39, 369
114 Winston, Elizabeth A. 330, 369, 459,
Varney, Nils R. 426, 440 467
Vasishta, Madan M. 338–39, 369 Wismann, Lynn 69, 87
Veinberg, Silvana C. 272, 295 Wix, Tina 161, 164
Venda 282, 284 Wodlinger-Cohen, R. 157, 164
verb agreement: 2, 3, 5, 6, 12, 17–19, 51, Woll, Bencie 10, 19, 21, 25, 69, 85, 172,
175–177, 241, 244–259, 322–326, 342, 350, 174, 213, 223, 232, 233, 234, 235, 236, 237,
356, 371–372, 374, 379–381, 388, 393, 398, 404, 422, 423, 425, 431, 432, 433, 436, 440,
433–439, 457, 459; phonetic constraints on 441
258, 392, 393, 395 Wood, Sandra K. 277, 295, 399, 404
visual perception 28, 29 Woodbury, Anthony 177, 198
Vogel, Irene 39, 63 Woodward, James C. 19, 21, 25, 147, 165,
Vogt-Svendsen, Marit 272, 295 202, 223, 226, 230, 232, 236, 333, 338–39,
Volterra, Virginia 430, 441 369, 444, 467
Vorberg, Dirk 101, 110 word order 3, 19, 20, 51, 322, 324–325, 459,
vowels 45–47, 93, 94, 97, 102, 106, 133, 187 460
Wurmbrand, Susanne 371, 400, 404
Wall, Stig 102, 110 Wylie, Laurence 206, 223
Wallace, Simon B. 10, 21
Walsh, Margaret 69, 87 Yarnall, Gary 443, 467
Ward, Jill 69, 87 Yidin 56, 62
Warren, D.H. 38, 64 Yip, Virginia 309, 319
Warrington, E.K. 426, 441 Yoruba 371
Webb, Rebecca 159, 160, 164, 186, 198 Yucatec Maya Sign Language 72, 73
Weber, David J. 347, 369
Wechsler, Stephen 375, 401 Zaidel, Eran 102, 111
Weinreich, Uriel 234, 236 Zakia, Renée A. E. 7, 23
Welch, R.B. 38, 64 Zanuttini, Raffaella 266, 275, 293, 295
Wells, G. 147, 163 Zattore, Robert 102, 111
West Greenlandic 56, 57, 62 Zawlkow, Esther 147, 163
Whittemore, Gregory 116, 142 Zec, Draga 50, 64
Wiese, Richard 116, 142 Zeshan, Ulrike 69, 72, 73, 87, 237, 272, 278,
Wilbur, Ronnie B. 8, 25, 27, 30, 33, 42, 44, 295, 338–40, 363, 369
64, 68, 73, 80, 87, 200, 215, 218, 222, 223, Zimmer, June 123, 142, 298, 319
272, 285, 290, 291, 295, 298, 319 Zuck, Eric 102, 111
