Lecture 07
LANGUAGE
Although language is often used for the transmission of information (“turn right at the next light
and then go straight,” “Place tab A into slot B”), this is only its most mundane function.
Language also allows us to access existing knowledge, to draw conclusions, to set and
accomplish goals, and to understand and communicate complex social relationships. Language is
fundamental to our ability to think, and without it we would be nowhere near as intelligent as we
are.
Human language is the most complex behavior on the planet and, at least as far as we know, in
the universe. Language involves both the ability to comprehend spoken and written words and to
create communication in real time when we speak or write. Most languages are oral, generated
through speaking. Speaking involves a variety of complex cognitive, social, and biological
processes including operation of the vocal cords, and the coordination of breath with movements
of the throat, mouth, and tongue.
Other languages are sign languages, in which communication is expressed by movements of the hands. The most widely used sign language is American Sign Language (ASL), which has been adapted for use in many countries around the world.
Language can be conceptualized in terms of sounds, meaning, and the environmental factors that
help us understand it. Phonemes are the elementary sounds of our language, morphemes are the
smallest units of meaning in a language, syntax is the set of grammatical rules that control how
words are put together, and contextual information is the elements of communication that are not
part of the content of language but that help us understand its meaning.
A phoneme is the smallest unit of sound that makes a meaningful difference in a language. The
word “bit” has three phonemes, /b/, /i/, and /t/ (in transcription, phonemes are placed between
slashes), and the word “pit” also has three: /p/, /i/, and /t/. In spoken languages, phonemes are
produced by the positions and movements of the vocal tract, including our lips, teeth, tongue,
vocal cords, and throat, whereas in sign languages phonemes are defined by the shapes and
movement of the hands.
There are hundreds of unique phonemes that can be made by human speakers, but most
languages only use a small subset of the possibilities. English contains about 45 phonemes,
whereas other languages have as few as 15 and others more than 60. The Hawaiian language
contains only about a dozen phonemes, including five vowels (a, e, i, o, and u) and seven
consonants (h, k, l, m, n, p, and w).
In addition to using different sets of phonemes, languages differ in which sound contrasts their speakers can perceive. Because a phoneme is actually a category of sounds that are treated alike within the language, speakers of different languages are able to hear the difference between some phonemes but not others. This is known as the categorical
perception of speech sounds. English speakers can differentiate the /r/ phoneme from the /l/
phoneme, and thus “rake” and “lake” are heard as different words. In Japanese, however, /r/
and /l/ are the same phoneme, and thus speakers of that language cannot tell the difference
between the word “rake” and the word “lake.” Try saying the words “cool” and “keep” out loud.
Can you hear the difference between the two /k/ sounds? To English speakers they both sound
the same, but to speakers of Arabic these represent two different phonemes (Figure 10.9,
“Speech Sounds and Adults”). Infants are born able to understand all phonemes, but they lose
their ability to do so as they get older; by 10 months of age a child’s ability to recognize
phonemes becomes very similar to that of the adult speakers of the native language. Phonemes
that were initially differentiated come to be treated as equivalent (Werker & Tees,
2002).

Whereas phonemes are the smallest units of sound in language, a morpheme is a string of one or more phonemes that makes up the smallest unit of meaning in a language. Some morphemes, such as one-letter words like “I” and “a,” are also phonemes, but most morphemes are made up of combinations of phonemes. Some morphemes are prefixes and suffixes used to
modify other words. For example, the syllable “re-” as in “rewrite” or “repay” means “to do
again,” and the suffix “-est” as in “happiest” or “coolest” means “to the maximum.”
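As a rough illustration of how affixes carry meaning, morpheme segmentation can be sketched with a toy affix-stripping function. The prefix and suffix lists below are a small, invented sample, not a real morphological analyzer:

```python
# Toy morpheme segmentation: strip known prefixes and suffixes.
# The affix inventories below are a tiny illustrative sample, not
# a complete list of English morphemes.
PREFIXES = ["re", "un", "pre"]
SUFFIXES = ["est", "ing", "ed", "s"]

def split_morphemes(word):
    """Return a list of candidate morphemes for a word."""
    parts = []
    for p in PREFIXES:
        # Require a stem of at least three letters so that words like
        # "rest" are not wrongly split into "re-" + "st".
        if word.startswith(p) and len(word) > len(p) + 2:
            parts.append(p + "-")
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) > len(s) + 2:
            # Note: the splitter does not undo spelling changes, so
            # "happiest" yields the stem "happi", not "happy".
            parts.append(word[:-len(s)])
            parts.append("-" + s)
            return parts
    parts.append(word)
    return parts

print(split_morphemes("rewrite"))   # ['re-', 'write']
print(split_morphemes("happiest"))  # ['happi', '-est']
```

Real morphological analysis is far harder than this sketch suggests, precisely because morphemes interact with spelling and sound changes, but the example shows how a word's meaning can be composed from smaller meaningful units.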
Syntax is the set of rules of a language by which we construct sentences. Each language has a
different syntax. The syntax of the English language requires that each sentence have a noun and
a verb, each of which may be modified by adjectives and adverbs. Some syntaxes make use of
the order in which words appear, while others do not. In English, “The man bites the dog” is
different from “The dog bites the man.” In German, however, only the article endings before the
noun matter. “Der Hund beisst den Mann” means “The dog bites the man” but so does “Den
Mann beisst der Hund.”
Semantics is the meaning of words and sentences. Languages denote, refer to, and represent
things. Words do not possess fixed meanings but change their interpretation as a function of the
context in which they are spoken. We use contextual information, the information surrounding language, to help us interpret it. Examples of contextual information include the
knowledge that we have and that we know that other people have, and nonverbal expressions
such as facial expressions, postures, gestures, and tone of voice. Misunderstandings can easily
arise if people aren’t attentive to contextual information or if some of it is missing, as it may be in newspaper headlines or in text messages.
EXAMPLES IN WHICH THE SYNTAX IS CORRECT BUT THE INTERPRETATION CAN BE AMBIGUOUS
SENTENCE PROCESSING
Sentence processing takes place whenever a reader or listener processes a language utterance,
either in isolation or in the context of a conversation or a text. Parsing is the analysis of the meaning of a sentence according to the rules of syntax, drawing on inferences made from each word in the sentence.
MODELS
There are a number of influential models of human sentence processing that draw on different
combinations of architectural choices.
The garden path model (Frazier, 1987) is a serial modular parsing model. It proposes that a
single parse is constructed by a syntactic module. Contextual and semantic factors influence
processing at a later stage and can induce re-analysis of the syntactic parse. Re-analysis is costly
and leads to an observable slowdown in reading. When the parser encounters an ambiguity, it is
guided by two principles: late closure and minimal attachment.
Late closure causes new words or phrases to be attached to the current clause. For example,
“John said he would leave yesterday” would be parsed as John said (he would leave yesterday),
and not as John said (he would leave) yesterday (i.e., he spoke yesterday).
Minimal attachment is a strategy of parsimony: The parser builds the simplest syntactic
structure possible (that is, the one with the fewest phrasal nodes).
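The two attachment options for the ambiguous adverb in the earlier example can be written down as hand-bracketed parse trees and compared by counting phrasal nodes. The bracketings below are simplified illustrations, not the output of a real parser:

```python
# Two candidate parses for "John said he would leave yesterday",
# represented as nested lists (a simplified, hand-written bracketing).
# Late closure attaches "yesterday" inside the most recent clause.
late_closure = ["John", "said", ["he", "would", "leave", "yesterday"]]
# The alternative attaches "yesterday" high, to the main clause.
high_attachment = ["John", "said", ["he", "would", "leave"], "yesterday"]

def count_nodes(tree):
    """Count phrasal (list) nodes in a bracketed parse."""
    return 1 + sum(count_nodes(t) for t in tree if isinstance(t, list))

# Minimal attachment prefers the structure with fewer phrasal nodes.
# In this simplified bracketing both parses have the same node count,
# so the late closure principle decides between them.
print(count_nodes(late_closure), count_nodes(high_attachment))  # 2 2
```

The point of the sketch is only that the parser must choose one structure from several grammatical options, and that the garden path model makes that choice with fixed structural preferences rather than meaning.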
Constraint-based theories of language comprehension emphasize how people make use of the
vast amount of probabilistic information available in the linguistic signal. Through statistical learning, the frequencies and distributions of events in linguistic environments can be picked up on, and these inform language comprehension. As such, language users are said to arrive at
a particular interpretation over another during the comprehension of an ambiguous sentence by
rapidly integrating these probabilistic constraints.
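One way to picture this is as each interpretation of an ambiguous sentence accumulating support from several probabilistic constraints at once. A minimal sketch, with the interpretation labels and all probabilities invented purely for illustration, might look like this:

```python
# Toy constraint-based disambiguation: each candidate interpretation
# of an ambiguous sentence is scored by multiplying the strengths of
# the probabilistic constraints that support it. All numbers here
# are invented for illustration.
from math import prod

candidates = {
    # interpretation: constraint strengths (frequency, plausibility, context)
    "leave-happened-yesterday": [0.6, 0.7, 0.8],
    "saying-happened-yesterday": [0.4, 0.3, 0.2],
}

def best_interpretation(candidates):
    """Return the interpretation with the highest combined support."""
    return max(candidates, key=lambda c: prod(candidates[c]))

print(best_interpretation(candidates))  # leave-happened-yesterday
```

Unlike the serial garden path model, a constraint-based comprehender evaluates the competing interpretations in parallel, with the winner determined by the combined weight of the evidence rather than by fixed structural principles.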
Children who grow up without exposure to language, such as the cases described by Rymer (1993), have made some progress in socialization after being rescued, but none of them ever developed language. This is also why it is important to determine
quickly if a child is deaf and to begin immediately to communicate in sign language. Deaf
children who are not exposed to sign language during their early years will likely never learn it
(Mayberry, Lock, & Kazmi, 2002).
For the 90% of people who are right-handed, language is stored and controlled by the left
cerebral cortex, although for some left-handers this pattern is reversed. These differences can
easily be seen in the results of neuroimaging studies that show that listening to and producing
language creates greater activity in the left hemisphere than in the right. Broca’s area, an area in the frontal lobe of the left hemisphere near the motor cortex, is responsible for language production
(Figure 1, “Drawing of Brain Showing Broca’s and
Wernicke’s Areas”). This area was first localized in the 1860s by the French physician Paul
Broca, who studied patients with lesions to various parts of the brain. Wernicke’s area, an area
of the brain next to the auditory cortex, is responsible for language comprehension.
Evidence for the importance of Broca’s and Wernicke’s areas in language is seen in patients who
experience aphasia, a condition in which language functions are severely impaired. People with
Broca’s aphasia have difficulty producing speech, whereas people with damage to Wernicke’s
area can produce speech, but what they say makes no sense and they have trouble understanding
language.
Psychological theories of language learning differ in terms of the importance they place on
nature versus nurture. Yet it is clear that both matter. Children are not born knowing language;
they learn to speak by hearing what happens around them. On the other hand, human brains,
unlike those of any other animal, are prewired in a way that leads them, almost effortlessly, to
learn language.
Perhaps the most straightforward explanation of language development is that it occurs through
principles of learning, including association, reinforcement, and the observation of others
(Skinner, 1965). There must be at least some truth to the idea that language is learned, because
children learn the language that they hear spoken around them rather than some other language.
Also supporting this idea is the gradual improvement of language skills with time. It seems that
children modify their language through imitation, reinforcement, and shaping, as would be
predicted by learning theories.
But language cannot be entirely learned. For one, children learn words too fast for them to be
learned through reinforcement. Between the ages of 18 months and five years, children learn up
to 10 new words every day (Anglin, 1993). More importantly, language is more generative than
it is imitative. Generativity refers to the fact that speakers of a language can compose sentences
to represent new ideas that they have never before been exposed to. Language is not a predefined
set of ideas and sentences that we choose when we need them, but rather a system of rules and
procedures that allows us to create an infinite number of statements, thoughts, and ideas,
including those that have never previously occurred. When a child says that she “swimmed” in
the pool, for instance, she is showing generativity. No adult speaker of English would ever say
“swimmed,” yet it is easily generated from the normal system of producing language.
Other evidence that refutes the idea that all language is learned through experience comes from
the observation that children may learn languages better than they ever hear them. Deaf children
whose parents do not speak ASL very well nevertheless are able to learn it perfectly on their
own, and may even make up their own language if they need to (Goldin-Meadow & Mylander,
1998). A group of deaf children in a school in Nicaragua, whose teachers could not sign,
invented a way to communicate through made-up signs (Senghas, Senghas, & Pyers, 2005). The
development of this new Nicaraguan Sign Language has continued and changed as new
generations of students have come to the school and started using the language. Although the
original system was not a real language, it is becoming closer and closer every year, showing the
development of a new language in modern times.
The linguist Noam Chomsky is a believer in the nature approach to language, arguing that
human brains contain a language acquisition device that includes a universal grammar that
underlies all human language (Chomsky, 1965, 1972). According to this approach, each of the
many languages spoken around the world (there are between 6,000 and 8,000) is an individual
example of the same underlying set of procedures that are hardwired into human brains.
Chomsky’s account proposes that children are born with a knowledge of general rules of syntax
that determine how sentences are constructed. Chomsky differentiates between the deep structure of an idea (how the idea is represented in the fundamental universal grammar that is common to all languages) and the surface structure of the idea (how it is expressed in any one language).
Once we hear or express a thought in surface structure, we generally forget exactly how it
happened. At the end of a lecture, you will remember a lot of the deep structure (i.e., the ideas
expressed by the instructor), but you cannot reproduce the surface structure (the exact words that
the instructor used to communicate the ideas).
Although there is general agreement among psychologists that babies are genetically
programmed to learn language, there is still debate about Chomsky’s idea that there is a universal
grammar that can account for all language learning. Evans and Levinson (2009) surveyed the
world’s languages and found that none of the presumed underlying features of the language
acquisition device were entirely universal. In their search they found languages that did not have
noun or verb phrases, that did not have tenses (e.g., past, present, future), and even some that did
not have nouns or verbs at all, even though a basic assumption of a universal grammar is that all
languages should share these features.
Bilingualism (the ability to speak two languages) is becoming more and more frequent in the
modern world. Nearly one-half of the world’s population, including 17% of Canadian citizens,
grows up bilingual.
In Canada, education is under provincial jurisdiction; however, the federal government has been
a strong supporter of establishing Canada as a bilingual country and has helped pioneer the
French immersion programs in the public education systems throughout the country. In contrast,
many U.S. states have passed laws outlawing bilingual education in schools based on the idea
that students will have a stronger identity with the school, the culture, and the government if they
speak only English, and in part based on the idea that speaking two languages may interfere with
cognitive development.
A variety of minority language immersion programs are now offered across the country
depending on need and interest. In British Columbia, for instance, the city of Vancouver
established a new bilingual Mandarin Chinese-English immersion program in 2002 at the elementary school level in order to accommodate Vancouver’s strong historic and present-day ties to the Chinese-speaking world. Similar programs have been developed for both Hindi and
Punjabi to serve the large South Asian cultural community in the city of Surrey. By default, most
schools in British Columbia teach in English, with French immersion options available. In both
English and French schools, one can study and take government exams in Japanese, Punjabi,
Mandarin Chinese, French, Spanish, and German at the secondary level.
Some early psychological research showed that, when compared with monolingual children,
bilingual children performed more slowly when processing language, and their verbal scores
were lower. But these tests were frequently given in English, even when this was not the child’s
first language, and the children tested were often of lower socioeconomic status than the
monolingual children (Andrews, 1982).
More current research that has controlled for these factors has found that, although bilingual
children may, in some cases, learn language somewhat more slowly than monolingual children (Oller & Pearson, 2002), bilingual and monolingual children do not significantly differ in the
final depth of language learning, nor do they generally confuse the two languages (Nicoladis &
Genesee, 1997). In fact, participants who speak two languages have been found to have better
cognitive functioning, cognitive flexibility, and analytic skills in comparison to monolinguals
(Bialystok, 2009). Research has also found that learning a second language produces changes in
the area of the brain in the left hemisphere that is involved in language, such that this area is
denser and contains more neurons (Mechelli et al., 2004). Furthermore, the increased density is
stronger in those individuals who are most proficient in their second language and who learned
the second language earlier. Thus, rather than slowing language development, learning a second
language seems to increase cognitive abilities.