Guide Book: Language and Mind
1st Year Second Semester
Resource: - Harshani Dissanayake (Instructor in English, English Language Training Center, Police Training College Kundasale)
For classes, contact WhatsApp 0772551029
After studying Language and Mind, you will be able to understand the scientific study of how we learn a language.
How do we acquire a language? First, our sensory organs deliver language input to the brain; next, the brain processes the language; finally, language output is produced by the brain.
Our five sense organs: ears (hearing), eyes (sight), nose (smell), tongue (taste), and skin (touch).
Basic components of a language: phonology (the study of sound patterns), morphology (the study of word forms and structure), syntax (the arrangement of words and phrases), semantics (the study of the meanings of words and sentences), and pragmatics (the study of how language is used in communication).
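To make these five components concrete, here is a minimal Python sketch; the example sentence and the one-line glosses are our own, purely for illustration.

```python
# Illustrative only: one English utterance annotated at each of the five
# levels described above. The sentence and glosses are our own examples.
utterance = "The dogs barked."

analysis = {
    "phonology":  "sound pattern, e.g. 'dogs' ends in the voiced /z/",
    "morphology": "word structure, e.g. dog + -s (plural), bark + -ed (past)",
    "syntax":     "word order: determiner + noun + verb",
    "semantics":  "literal meaning: more than one dog produced barking sounds",
    "pragmatics": "use in context, e.g. as a complaint to a noisy neighbour",
}

for level, gloss in analysis.items():
    print(f"{level:>10}: {gloss}")
```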
Why do we learn a language?
We learn a language for communication.
Why do we study the acquisition of language?
This question arises from metacognitive thinking, that is, thinking about how we learn.
Learning a language is a conscious, deliberate process, as in second language learning: we learn the language according to our needs in a classroom.
Acquiring a language, by contrast, happens without conscious effort. We acquire our first language in childhood: we did not deliberately study its tenses or vocabulary.
Lessons…
1st Lesson Design Features of Language
2nd Lesson Language and human biology
3rd Lesson Children's acquisition of Language
4th Lesson 1st Language vs 2nd Language acquisition
5th Lesson Storing and Processing Language
6th Lesson Language Processing and Language Disorders
An Introduction to Psycholinguistics
Psycholinguistics is an interdisciplinary field that investigates the psychological
and cognitive processes underlying language comprehension, production, and
acquisition. It explores how humans perceive, understand, and use language,
aiming to uncover the mental mechanisms and structures that enable effective
communication.
In psycholinguistics, researchers delve into various aspects of language processing.
They examine how individuals comprehend spoken and written language,
unraveling the processes involved in recognizing and interpreting words, phrases,
and sentences. This involves understanding how meaning is assigned and how
context influences comprehension.
Language production is another key area of study in psycholinguistics.
Researchers explore the processes behind generating language, including word
selection, syntactic organization, and articulation. They investigate how speakers
plan and execute speech, considering factors such as word retrieval, sentence
construction, and the physical aspects of producing sounds.
The field also focuses on language acquisition, studying how children develop
language skills and how adults acquire second or additional languages.
Psycholinguists explore the cognitive processes and mechanisms underlying
language learning, including the development of phonological awareness,
grammatical understanding, and semantic comprehension.
Understanding how language is mentally represented is a crucial aspect of
psycholinguistics. Researchers investigate the neural networks and brain regions
involved in language processing, aiming to uncover how linguistic knowledge is
organized and stored in the brain. This involves studying the relationship between
language and other cognitive processes, such as memory and attention.
Psycholinguistics also encompasses the study of language disorders. Researchers
investigate conditions such as aphasia, which involves language loss due to brain
damage, as well as developmental language disorders. They examine the cognitive
deficits associated with these disorders, aiming to develop interventions to help
individuals with language difficulties.
Psycholinguistics draws upon research and methodologies from multiple fields,
including psychology, linguistics, cognitive science, neuroscience, and computer
science. Researchers employ experimental methods such as reaction time studies,
eye-tracking, brain imaging techniques (e.g., fMRI), and computational modeling
to explore the complex interactions between language and cognition.
By studying psycholinguistics, researchers aim to enhance our understanding of
how language is processed in the mind, leading to insights into human cognition,
language development, communication disorders, and the fundamental nature of
language itself.
Design features of Language
Language is a system of communication uniquely associated with humans
and distinguished by its capacity to express complex ideas. Notably, studies
analyzing the various features of human language have informed our
understanding of language as a distinctly human trait. Specifically, language
is thought to possess a highly structured system of encoding and
representing concepts through either speech sounds or manual gestures,
depending on whether they are spoken or signed.
Likewise, studies which have attempted to methodically dissect this system
of human communication into various parts, or components, have informed
our understanding of the reasons for the immense expressive power of
language.
We’ll start off with a feature analysis of what defines a language, followed by
an evaluation of these features with a specific focus on how human language
can be compared to animal communication.
1.2 Hockett’s design features
In 1960, the linguistic anthropologist Charles Francis Hockett conducted a
pioneering feature study of language. In the study, he listed 13 design
features that he deemed to be universal across the world’s languages. More
importantly, these features distinguished human language from animal
communication. While the first 9 features could also match primate
communications, the last 4 were solely reserved for human language. Later
on, Hockett added another 3 features that he saw as unique to human
language. Thus, it can be said that human language shares a general set of
features that help set it apart from communication among animals.
1.2.1 Vocal-Auditory Channel
With the exception of signed languages, natural language is vocally
transmitted by speakers as speech sounds and auditorily received by listeners
as speech waves. Although writing and sign language both utilize the
manual-visual channel, the expression of human language primarily occurs
in the vocal-auditory channel.
1.2.2 Broadcast Transmission and Directional Reception
Language signals (i.e. speech sounds) are emitted as waveforms, which are
projected in all directions (‘broadcasted into auditory space’), but are
perceived by receiving listeners as emanating from a particular direction and
point of origin (the vocalizing speaker).
1.2.3 Transitoriness
Language signals are considered temporal as sound waves rapidly fade after
they are uttered; this characteristic is also known as rapid fading. In other
words, this temporal nature of language signals requires humans to receive
and interpret speech sounds at their time of utterance, since they are not
subsequently recoverable.
1.2.4 Interchangeability
Humans can transmit and receive identical linguistic signals, and so are able
to reproduce any linguistic message they understand. This allows for the
interlocutory roles of ‘speaker’ and ‘listener’ to alternate between the
conversation’s participants via turn taking within the context of linguistic
communication.
1.2.5 Total Feedback
Humans have the ability to perceive the linguistic signals they transmit, i.e.
they have understanding of what they are communicating to others. This
allows them to continuously monitor their actions and output to ensure they
are relaying what they are trying to express.
1.2.6 Specialization
Language signals are emitted for the sole purpose of communication, and
not any other biological functions such as eating. In other words, language
signals are intentional, and not just a side effect of another behavior.
Contrasting example: Biological functions which may have a communicative
side effect: such as a panting dog which hangs out its tongue to cool off
(biological), may simultaneously indicate to its owner that it is feeling hot or
thirsty (communicative).
1.2.7 Semanticity
Specific language signals represent specific meanings; the associations are
‘relatively fixed’. An example is how a single object is represented by
different language signals i.e. words in different languages. In French, the
word sel represents a white, crystalline substance consisting of sodium and
chlorine atoms. Yet in English, this same substance is represented by the
word salt.
Likewise, the crying of a baby may, depending on circumstance, convey to its parent that it requires milk, rest or a change of clothes.
1.2.8 Arbitrariness
There is no intrinsic or logical connection between the form of specific
language signals and the nature of the specific meanings they represent.
Instead, the signal and the meaning are linked by either convention or
instinct.
Contrasting example: Conveyance of aggression in crabs – strongly
threatened crabs express their potential intention to fight by raising their
front claw, which is partially iconic given that crabs use their claw pincers to
attack prey and defend against predators.
1.2.9 Discreteness
Language signals are composed of basic units and are perceived as distinct
and individuated. These units may be further classified into distinct
categories. These basic units can be put in varying order to represent
different meanings. The change in meaning is abrupt, and rarely continuous.
1.2.10 Displacement
Language signals may be used to convey ideas about things not physically or temporally present at the time of the communicative event, such as topics linked to the past or future. Displacement also includes prevarication: the ability to lie, or to produce utterances that do not correspond with reality.
1.2.11 Productivity
Productivity is also called openness or creativity. It entails reflexiveness, the
ability of language to be used to talk about language. Humans can use
language to understand and produce an indefinite number of novel
utterances.
1.2.12 Cultural Transmission
Although humans are born with the innate ability to learn language, they
learn (a) particular linguistic system(s) as their native language(s) from
elders in their community. In other words, language is socially transmitted
from one generation to the next, and a child reared in isolation does not
acquire language.
1.2.13 Duality of Patterning
The discrete speech sounds of a language, which carry no meaning in themselves, combine to form discrete morphological units; these morphemes are further combined to form meaningful words and sentences.
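As a toy illustration of duality of patterning, the sketch below shows meaningless sound units recombining into meaningful units; the mini-inventory is invented, not a real phonological analysis.

```python
# Toy sketch: individually meaningless sound units combine into meaningful
# morphemes, which combine again into words. The inventory is invented.
phonemes = ["k", "ae", "t", "s"]            # no meaning on their own

morphemes = {
    "kaet": "CAT (a small domesticated feline)",   # k + ae + t
    "s": "PLURAL",                                  # the -s suffix
}

word = "kaet" + "s"                          # morphemes combined -> 'cats'
print(word, "=", morphemes["kaet"], "+", morphemes["s"])
```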
1.3 Evaluation of Hockett’s design features
While Hockett’s list of design features may appear comprehensive, it
contains three key limitations.
Firstly, Hockett’s list is drawn up from the narrow perspective of spoken
language. However, human language can be expressed in both the audio-
vocal (spoken) and visual-manual (sign) modes: sign languages are equally
complex and fully grammatical linguistic systems (Stokoe, 2005). As Corballis
(2009, p. 22) notes, there is:
growing evidence that the signed languages of the deaf have all of the grammatical and semantic
sophistication of spoken languages, as exemplified by the fact that university-level instruction at
Gallaudet University in Washington, DC, is conducted entirely in American Sign Language
(ASL).
Therefore, his first two features, the vocal-auditory channel and broadcast
transmission and directional reception, are only relevant to the auditory
nature of spoken language, and cannot be strictly considered necessary to
human language.
Secondly, Hockett’s list is a plain compilation of all discernible features of
human language; it does not indicate which features are critical to the
linguistic system of communication. For example, the sixth and seventh
features of specialization and semanticity are likely to be properties of all
natural systems that have developed for communication, rather than human
language per se.
Thirdly, Hockett’s list includes many features that relate to the physical
characteristics and production of linguistic signs (either via speech or
gestures), rather than language as a communicative tool per se. For
example, the third and fifth features of transitoriness and total feedback
appear to be more relevant to the physical, rather than semiotic, properties
of the speech sounds and gestures used in spoken and sign languages. In
other words, the fact that sound waves and physical gestures are spatially
transmitted makes them necessarily transitory (feature 3) and perceptible to
the producer at the same time (feature 5). Also, the fourth feature of
interchangeability similarly appears more relevant to the physical ability of
language users to imitate or reproduce the speech sounds or gestural signs
used in spoken and sign languages, rather than their cognitive ability to use
these signs communicatively.
This therefore leaves only six main features: arbitrariness, discreteness,
displacement, productivity, cultural transmission and duality of patterning.
However, in order to understand how these features are crucial to human
language as a system of communication, a componential analysis – the focus
of our next section – is in order.
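The whittling-down argument above can be restated as a short Python sketch: start from the thirteen features, set aside the seven the critique identifies, and six remain. The feature names follow the section headings; the grouping comments paraphrase the three limitations.

```python
# Hockett's thirteen features, in the order introduced above.
hockett_13 = [
    "vocal-auditory channel",
    "broadcast transmission and directional reception",
    "transitoriness", "interchangeability", "total feedback",
    "specialization", "semanticity", "arbitrariness", "discreteness",
    "displacement", "productivity", "cultural transmission",
    "duality of patterning",
]

set_aside = {
    "vocal-auditory channel",                           # spoken mode only
    "broadcast transmission and directional reception", # spoken mode only
    "specialization", "semanticity",      # true of many natural systems
    "transitoriness", "total feedback",   # physical, not semiotic
    "interchangeability",                 # physical reproduction ability
}

core = [f for f in hockett_13 if f not in set_aside]
print(core)   # the six features named in the closing paragraph
```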
Brain and Language
Your Brain…
The brain consists of multiple interconnected regions that collectively contribute to our cognitive abilities and functions. Here's a breakdown of some key brain regions and their roles:
1. Parietal Lobe: The parietal lobe is involved in processing sensory information
and spatial awareness. It plays a role in perception, attention, and integration of
sensory inputs. The somatosensory cortex, located in the parietal lobe,
processes touch, temperature, and pain sensations.
2. Occipital Lobe: The occipital lobe is primarily responsible for processing visual
information. It houses the primary visual cortex, which receives and interprets
visual stimuli from the eyes, enabling us to perceive and understand the visual
world.
3. Temporal Lobe: The temporal lobe plays a crucial role in auditory processing,
language comprehension, and memory. It houses the primary auditory cortex
and is involved in recognizing and interpreting sounds, including speech.
Additionally, the temporal lobe contains the hippocampus, which is essential for
forming and retrieving memories.
4. Frontal Lobe: The frontal lobe is involved in executive functions, decision-
making, problem-solving, and planning. It includes the prefrontal cortex, which
plays a key role in higher-order cognitive processes such as attention, working
memory, and cognitive flexibility. The motor cortex, located in the frontal lobe,
is responsible for planning and executing voluntary movements.
5. Hippocampus: The hippocampus is a structure within the temporal lobe that
plays a vital role in memory formation and retrieval. It helps us acquire new
information and recall past experiences, contributing to learning and memory
processes.
6. Amygdala: The amygdala, located within the temporal lobe, is involved in the
processing and regulation of emotions. It plays a role in detecting and
responding to emotional stimuli, influencing our emotional responses and
memory encoding.
7. Cerebellum: The cerebellum, located at the back of the brain, is traditionally
associated with motor coordination and balance. However, it also contributes
to cognitive functions such as attention, timing, and error detection. The
cerebellum assists in coordinating smooth and precise movements and has been
implicated in various cognitive processes.
8. Motor Cortex: The motor cortex, located in the frontal lobe, is responsible for
planning and executing voluntary movements. It helps us control and
coordinate our muscles, enabling us to perform motor skills necessary for
various tasks.
9. Prefrontal Cortex: The prefrontal cortex, a region in the frontal lobe, is involved
in executive functions, including decision-making, planning, working memory,
and cognitive control. It plays a critical role in goal-directed behavior,
attentional regulation, and higher-level cognitive processes.
Understanding the functions of these brain regions provides insights into how
different cognitive processes, such as perception, memory, language, and
executive functions, are supported by specific brain areas. However, it's
important to note that these regions often work in collaboration with one
another, and complex cognitive tasks typically involve the integration and
coordination of multiple brain regions.
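For revision, the regions above can be condensed into a simple lookup table; the one-line glosses below compress the fuller descriptions and are not exhaustive.

```python
# Quick-reference table of the brain regions described above.
brain_regions = {
    "parietal lobe":     "sensory integration, spatial awareness; somatosensory cortex",
    "occipital lobe":    "visual processing; primary visual cortex",
    "temporal lobe":     "hearing, language comprehension, memory; hippocampus",
    "frontal lobe":      "executive functions, planning; motor cortex",
    "hippocampus":       "memory formation and retrieval",
    "amygdala":          "processing and regulation of emotions",
    "cerebellum":        "motor coordination, balance, timing",
    "motor cortex":      "planning and executing voluntary movements",
    "prefrontal cortex": "decision-making, working memory, cognitive control",
}

print(brain_regions["temporal lobe"])
```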
Language Processing
1. Language Processing Areas: Several brain regions are involved in language
processing. The two key areas commonly associated with language are Broca's area
and Wernicke's area, located in the left hemisphere for most individuals. Broca's
area, in the frontal cortex, is primarily involved in language production and speech
planning. Wernicke's area, located in the temporal cortex, is associated with
language comprehension and understanding.
2. Neural Networks: Language processing relies on a network of interconnected brain
regions, often referred to as the "language network" or "language system." This
network includes regions beyond Broca's and Wernicke's areas, such as the angular
gyrus, superior temporal gyrus, and various connections between frontal, temporal,
and parietal regions. These regions work together to support different aspects of
language, including phonological processing, semantic understanding, syntactic
analysis, and the integration of meaning.
3. Hemispheric Lateralization: In most individuals, language processing is
predominantly lateralized to the left hemisphere of the brain. This left hemisphere
dominance is known as the "left lateralization of language." While the left
hemisphere is more specialized for language, the right hemisphere also contributes
to certain aspects of language processing, such as prosody (intonation and rhythm)
and some aspects of discourse comprehension.
4. Plasticity and Language Recovery: The brain exhibits a remarkable capacity for
plasticity, especially in relation to language. Following a brain injury or stroke that
affects language abilities (aphasia), the brain can sometimes reorganize and
compensate for the damaged areas. Other regions, often in the corresponding areas
of the right hemisphere or adjacent regions, may assume some language functions,
allowing for language recovery or rehabilitation.
5. Developmental Language Processes: Language development in children involves
dynamic changes in the brain. As children acquire language skills, there are
structural and functional changes in the brain regions associated with language. For
example, the density and connectivity of neural networks related to language
gradually increase during development, reflecting the growing proficiency in
language abilities.
6. Neuroimaging Techniques: Advances in neuroimaging techniques, such as
functional magnetic resonance imaging (fMRI), electroencephalography (EEG),
and magnetoencephalography (MEG), have provided insights into the neural
mechanisms underlying language processing. These techniques allow researchers
to examine brain activity and connectivity patterns during various language tasks,
providing valuable information about the brain's involvement in language.
Understanding the intricate relationship between the human brain and language
is a central focus of psycholinguistics, neuroscience, and related fields.
Studying the neural basis of language processing helps unravel the cognitive
processes, mechanisms, and neural networks that underlie our ability to
comprehend, produce, and acquire language.
Language processing in the brain involves a complex network of regions
working together to comprehend, produce, and acquire language. Here is a
general overview of how language works in the brain:
1. Language Comprehension: When we listen to or read language, it undergoes
several stages of processing. The primary auditory cortex in the temporal lobe
receives the auditory input, which is then further processed in regions such as
Wernicke's area. Wernicke's area, located in the left hemisphere for most
individuals, is involved in the comprehension and interpretation of language. It
helps assign meaning to words and phrases by accessing semantic knowledge
stored in other parts of the brain.
2. Syntactic Analysis: The brain also analyzes the grammatical structure of sentences
during language comprehension. The left inferior frontal gyrus, including Broca's
area, is crucial for syntactic processing. It helps us identify the relationships
between words and phrases, and it plays a role in constructing and interpreting
sentence structures.
3. Semantic Processing: The meaning of words and sentences is processed in various
regions distributed throughout the brain. The left temporal lobe, including the
middle temporal gyrus and angular gyrus, is involved in semantic processing.
These regions access and integrate semantic information, allowing us to understand
the meaning of individual words and the overall message conveyed by a sentence.
4. Language Production: When we speak or write, the brain initiates processes for
language production. The prefrontal cortex, including Broca's area, is instrumental
in planning and organizing speech. It coordinates the motor movements necessary
for
articulating words and generating grammatically correct sentences. The motor
cortex, located in the frontal lobe, then sends signals to the muscles involved in
speech production.
5. Language Acquisition: Language acquisition involves learning and developing
language skills. Children and adults acquiring a second language undergo similar
processes. The left hemisphere, especially the language-related areas, plays a
crucial role in language acquisition. During language learning, various brain
regions, including the hippocampus and cortical areas, work together to form and
consolidate linguistic representations and neural connections associated with
language.
It's important to note that language processing is not limited to specific regions
but involves distributed networks throughout the brain. Moreover,
neuroplasticity allows the brain to adapt and reorganize in response to language
learning, recovery from language impairments, or changes in language
demands.
Advancements in brain imaging techniques such as functional magnetic
resonance imaging (fMRI) and electroencephalography (EEG) have provided
insights into the neural mechanisms underlying language processing. By
studying the brain's responses to language tasks, researchers gain a deeper
understanding of the intricate processes involved in language comprehension,
production, and acquisition.
Children’s Acquisition of Language
The term language acquisition refers to the development of language in children.
By age 6, children have usually mastered most of the basic vocabulary and
grammar of their first language.
Second language acquisition (also known as second language
learning or sequential language acquisition) refers to the process by which a
person learns a "foreign" language—that is, a language other than their mother
tongue.
Examples and Observations
"For children, acquiring a language is an effortless achievement that occurs:
Without explicit teaching,
On the basis of positive evidence (i.e., what they hear),
Under varying circumstances, and in a limited amount of time,
In identical ways across different languages.
... Children achieve linguistic milestones in parallel fashion, regardless of the
specific language they are exposed to. For example, at about 6-8 months, all
children start to babble ... that is, to produce repetitive syllables like bababa. At
about 10-12 months they speak their first words, and between 20 and 24 months
they begin to put words together. It has been shown that children between 2 and 3
years speaking a wide variety of languages use infinitive verbs in main clauses ...
or omit sentential subjects ... although the language they are exposed to may not
have this option. Across languages young children also over-regularize the past
tense or other tenses of irregular verbs. Interestingly, similarities in language
acquisition are observed not only across spoken languages, but also between
spoken and signed languages." (María Teresa Guasti, Language Acquisition: The
Growth of Grammar. MIT Press, 2002)
Typical Speech Timetable for an English-Speaking Child
Week 0: Crying
Week 6: Cooing (goo-goo)
Month 6: Babbling (ma-ma)
Month 8: Intonation patterns
Month 12: Single words
Month 18: Two-word utterances
Year 2: Word endings
Year 2¼: Questions
Year 2½: Negatives
Year 5: Complex constructions
Year 10: Mature speech patterns
(Jean Aitchison, The Language Web: The Power and Problem of Words. Cambridge University Press, 1997)
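The timetable can also be expressed as data. The sketch below normalizes all the ages to months (an assumption for illustration) and defines a small helper that lists the milestones typically reached by a given age.

```python
# Aitchison's timetable as (age in months, milestone) pairs.
milestones = [
    (0,   "crying"),
    (1.5, "cooing"),              # week 6
    (6,   "babbling"),
    (8,   "intonation patterns"),
    (12,  "single words"),
    (18,  "two-word utterances"),
    (24,  "word endings"),
    (27,  "questions"),           # year 2 1/4
    (30,  "negatives"),           # year 2 1/2
    (60,  "complex constructions"),
    (120, "mature speech patterns"),
]

def reached_by(age_in_months):
    """Milestones a typical child has reached by the given age."""
    return [name for age, name in milestones if age <= age_in_months]

print(reached_by(24))   # what to expect of a typical two-year-old
```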
The Rhythms of Language
"At around nine months of age, then, babies start to give their utterances a
bit of a beat, reflecting the rhythm of the language they're learning. The
utterances of English babies start to sound like 'te-tum-te-tum.' The
utterances of French babies start to sound like 'rat-a-tat-a-tat.' And the
utterances of Chinese babies start to sound like sing-song. ... We get the
feeling that language is just around the corner.
"This feeling is reinforced by [an]other feature of language..: intonation.
Intonation is the melody or music of language. It refers to the way the voice
rises and falls as we speak." (David Crystal, A Little Book of Language. Yale
University Press, 2010)
Vocabulary
"Vocabulary and grammar grow hand in hand; as toddlers learn more words,
they use them in combination to express more complex ideas. The kinds of
objects and relationships that are central to daily life influence the content
and complexity of a child's early language." (Barbara M. Newman and
Philip R. Newman, Development Through Life: A Psychosocial Approach,
10th ed. Wadsworth, 2009)
"Humans mop up words like sponges. By the age of five, most English-
speaking children can actively use around 3,000 words, and more are added
fast, often quite long and complex ones. This total rises to 20,000 around the
age of thirteen, and to 50,000 or more by the age of about twenty." (Jean
Aitchison, The Language Web: The Power and Problem of
Words. Cambridge University Press, 1997)
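Aitchison's figures invite a little back-of-the-envelope arithmetic: averaged over each span, how many new words per day is that? Real vocabulary growth is of course not linear; this is only a rough illustration.

```python
# (age in years, approximate active vocabulary) from the passage above.
points = [(5, 3_000), (13, 20_000), (20, 50_000)]

for (a1, w1), (a2, w2) in zip(points, points[1:]):
    words_per_day = (w2 - w1) / ((a2 - a1) * 365)
    print(f"ages {a1}-{a2}: about {words_per_day:.1f} new words per day")
# ages 5-13: about 5.8 per day; ages 13-20: about 11.7 per day
```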
The Lighter Side of Language Acquisition
Child: Want other one spoon, Daddy.
Father: You mean, you want the other spoon.
Child: Yes, I want other one spoon, please, Daddy.
Father: Can you say "the other spoon"?
Child: Other ... one ... spoon.
Father: Say "other."
Child: Other.
Father: "Spoon."
Child: Spoon.
Father: "Other spoon."
Child: Other ... spoon. Now give me other one spoon. (Martin Braine, 1971;
quoted by George Yule in The Study of Language, 4th ed. Cambridge
University Press, 2010)
Resource: - https://www.pnas.org/doi/10.1073/pnas.231498898
Second Language Acquisition
Stephen Krashen’s theory
Introduction
Stephen Krashen (University of Southern California) is an expert in the field of
linguistics, specializing in theories of language acquisition and development. Much
of his recent research has involved the study of non-English and bilingual language
acquisition. Since 1980, he has published well over 100 books and articles and has
been invited to deliver over 300 lectures at universities throughout the United
States and Canada.
This is a brief description of Krashen's widely known and well-accepted theory of
second language acquisition, which has had a large impact in all areas of second
language research and teaching.
The 5 hypotheses of Krashen's Theory of Second Language Acquisition
Krashen's theory of second language acquisition consists of five main hypotheses:
the Acquisition-Learning hypothesis;
the Monitor hypothesis;
the Input hypothesis;
the Affective Filter hypothesis;
and the Natural Order hypothesis.
The Acquisition-Learning hypothesis
The Acquisition-Learning distinction is the most fundamental of the five
hypotheses in Krashen's theory and the most widely known among linguists and
language teachers. According to Krashen there are two independent systems of
foreign language performance: 'the acquired system' and 'the learned system'. The
'acquired system' or 'acquisition' is the product of a subconscious process very
similar to the process children undergo when they acquire their first language. It
requires meaningful interaction in the target language - natural communication - in which speakers concentrate not on the form of their utterances but on the communicative act.
The "learned system" or "learning" is the product of formal instruction and it
comprises a conscious process which results in conscious knowledge 'about' the
language, for example knowledge of grammar rules. A deductive approach in a
teacher-centered setting produces "learning", while an inductive approach in a
student-centered setting leads to "acquisition".
The Monitor hypothesis
The Monitor hypothesis explains the relationship between acquisition and learning
and defines the influence of the latter on the former. The monitoring function is the
practical result of the learned grammar. According to Krashen, the acquisition
system is the utterance initiator, while the learning system performs the role of the
'monitor' or the 'editor'. The 'monitor' acts in a planning, editing and correcting
function when three specific conditions are met:
The second language learner has sufficient time at their disposal.
They focus on form or think about correctness.
They know the rule.
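Because all three conditions must hold at once, the monitor can be pictured as a simple conjunction, as in this sketch; the function and argument names are our own, not Krashen's notation.

```python
# The 'monitor' can operate only when all three conditions above are met.
def monitor_can_operate(has_time, focused_on_form, knows_rule):
    return has_time and focused_on_form and knows_rule

print(monitor_can_operate(True, True, True))    # careful writing -> True
print(monitor_can_operate(False, True, True))   # rapid conversation -> False
```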
It appears that the role of conscious learning is somewhat limited in second
language performance. According to Krashen, the role of the monitor is minor,
being used only to correct deviations from "normal" speech and to give speech a
more 'polished' appearance.
Krashen also suggests that there is individual variation among language learners
with regard to 'monitor' use. He distinguishes those learners that use the 'monitor'
all the time (over-users); those learners who have not learned or who prefer not to
use their conscious knowledge (under-users); and those learners that use the
'monitor' appropriately (optimal users). An evaluation of the person's psychological
profile can help to determine to what group they belong. Usually extroverts are
under-users, while introverts and perfectionists are over-users. Lack of self-
confidence is frequently related to the over-use of the "monitor".
The Input hypothesis
The Input hypothesis is Krashen's attempt to explain how the learner acquires a
second language – how second language acquisition takes place. The Input
hypothesis is only concerned with 'acquisition', not 'learning'. According to this
hypothesis, the learner improves and progresses along the 'natural order' when
he/she receives second language 'input' that is one step beyond his/her current stage
of linguistic competence. For example, if a learner is at a stage 'i', then acquisition
takes place when he/she is exposed to 'Comprehensible Input' that belongs to
level 'i + 1'. Since not all of the learners can be at the same level of linguistic
competence at the same time, Krashen suggests that natural communicative
input is the key to designing a syllabus, ensuring in this way that each learner will
receive some 'i + 1' input that is appropriate for his/her current stage of linguistic
competence.
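A toy sketch of 'i + 1' follows. Krashen's hypothesis concerns comprehensibility rather than a literal numeric scale, so the levels and their descriptions below are invented purely to illustrate the idea of input one step beyond the current stage.

```python
# Invented difficulty levels standing in for stages of competence.
graded_input = {
    1: "simple present tense, high-frequency vocabulary",
    2: "past-tense narratives",
    3: "relative clauses and conditionals",
    4: "abstract academic prose",
}

def comprehensible_input(i):
    """Return input one step beyond the learner's current stage i."""
    return graded_input.get(i + 1, "no higher-level input defined")

print(comprehensible_input(2))   # a stage-2 learner receives level-3 input
```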
The Affective Filter hypothesis
The Affective Filter hypothesis embodies Krashen's view that a number of
'affective variables' play a facilitative, but non-causal, role in second language
acquisition. These variables include: motivation, self-confidence, anxiety and
personality traits. Krashen claims that learners with high motivation, self-
confidence, a good self-image, a low level of anxiety and extroversion are better
equipped for success in second language acquisition. Low motivation, low self-
esteem, anxiety, introversion and inhibition can raise the affective filter and form a
'mental block' that prevents comprehensible input from being used for acquisition.
In other words, when the filter is 'up' it impedes language acquisition. On the other
hand, positive affect is necessary, but not sufficient on its own, for acquisition to
take place.
The Natural Order hypothesis
Finally, the less important Natural Order hypothesis is based on research findings
(Dulay & Burt, 1974; Fathman, 1975; Makino, 1980 cited in Krashen, 1987) which
suggested that the acquisition of grammatical structures follows a 'natural order'
which is predictable. For a given language, some grammatical structures tend to be acquired early while others are acquired late. This order seemed to be independent of the learners' age, L1 background, and conditions of exposure, and although the agreement
between individual acquirers was not always 100% in the studies, there were
statistically significant similarities that reinforced the existence of a Natural Order
of language acquisition. Krashen however points out that the implication of the
natural order hypothesis is not that a language program syllabus should be based
on the order found in the studies. In fact, he rejects grammatical sequencing when
the goal is language acquisition.
Resource: - https://www.sk.com.br/sk-krash-english.html
Speech Mechanism
The field of phonetics studies the sounds of human speech. When we study speech
sounds we can consider them from two angles. Acoustic phonetics, in addition to
being part of linguistics, is also a branch of physics. It’s concerned with the
physical, acoustic properties of the sound waves that we produce. We’ll talk some
about the acoustics of speech sounds, but we’re primarily interested
in articulatory phonetics, that is, how we humans use our bodies to produce
speech sounds. Producing speech needs three mechanisms.
The first is a source of energy. Anything that makes a sound needs a source of
energy. For human speech sounds, the air flowing from our lungs provides energy.
The second is a source of the sound: air flowing from the lungs arrives at the
larynx. Put your hand on the front of your throat and gently feel the bony part
under your skin. That’s the front of your larynx. It’s not actually made of bone;
it’s cartilage and muscle. This picture shows what the larynx looks like from the
front.
[Figure: the larynx seen from the front. Image by Olek Remesz (wiki-pl: Orem, commons: Orem) [CC BY-SA 2.5-2.0-1.0 (https://creativecommons.org/licenses/by-sa/2.5-2.0-1.0)], via Wikimedia Commons]
This next picture is a view down a person’s throat.
[Figure: a view down the throat. Image by OpenStax College [CC BY 3.0 (http://creativecommons.org/licenses/by/3.0)], via Wikimedia Commons]
What you see here is that the opening of the larynx can be covered by two triangle-
shaped pieces of skin. These are often called “vocal cords” but they’re not really
like cords or strings. A better name for them is vocal folds.
The opening between the vocal folds is called the glottis.
We can control our vocal folds to make a sound. I want you to try this out so take
a moment and close your door or make sure there’s no one around that you might
disturb.
First I want you to say the word “uh-oh”. Now say it again, but stop half-way
through, “Uh-”. When you do that, you’ve closed your vocal folds by bringing
them together. This stops the air flowing through your vocal tract. That little
silence in the middle of “uh-oh” is called a glottal stop because the air is stopped
completely when the vocal folds close off the glottis.
Now I want you to open your mouth and breathe out quietly, “haaaaaaah”. When
you do this, your vocal folds are open and the air is passing freely through the
glottis.
Now breathe out again and say “aaah”, as if the doctor is looking down your
throat. To make that “aaaah” sound, you’re holding your vocal folds close together
and vibrating them rapidly.
When we speak, we make some sounds with vocal folds open, and some with
vocal folds vibrating. Put your hand on the front of your larynx again and make a
long “SSSSS” sound. Now switch and make a “ZZZZZ” sound. You can feel your
larynx vibrate on “ZZZZZ” but not on “SSSSS”. That’s because [s] is
a voiceless sound, made with the vocal folds held open, and [z] is a voiced sound,
where we vibrate the vocal folds. Do it again and feel the difference between
voiced and voiceless.
Now take your hand off your larynx and plug your ears and make the two sounds
again with your ears plugged. You can hear the difference between voiceless and
voiced sounds inside your head.
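English has several consonant pairs that differ only in voicing, just like [s] and [z]. The small table below lists a few of them so you can repeat the larynx experiment with each pair.

```python
# Voiceless/voiced consonant pairs: same articulation, different larynx.
voicing_pairs = {
    "s": "z",
    "f": "v",
    "p": "b",
    "t": "d",
    "k": "g",
}

for voiceless, voiced in voicing_pairs.items():
    print(f"[{voiceless}] vocal folds open  vs  [{voiced}] vocal folds vibrating")
```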
I said at the beginning that there are three crucial mechanisms involved in producing speech, and so far we've looked at only two:
Energy comes from the air supplied by the lungs.
The vocal folds produce sound at the larynx.
The third mechanism is the filter: the sound is then shaped by the articulators.
The oral cavity is the space in your mouth. The nasal cavity, obviously, is the
space inside and behind your nose. And of course, we use our tongues, lips, teeth
and jaws to articulate speech as well. In the next unit, we’ll look in more detail at
how we use our articulators.
So to sum up, the three mechanisms that we use to produce speech are:
respiration at the lungs,
phonation at the larynx, and
articulation in the mouth.
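The three mechanisms form a pipeline, each stage feeding the next. The sketch below simply restates the summary as three chained placeholder functions; the strings are our own, for illustration only.

```python
# The speech chain as a pipeline: respiration -> phonation -> articulation.
def respiration():
    return "airflow from the lungs"                 # the energy source

def phonation(air):
    return f"{air}, voiced at the larynx"           # the sound source

def articulation(sound):
    return f"{sound}, shaped in the mouth"          # the filter

print(articulation(phonation(respiration())))
```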
Language Disorders…
What is a communication disorder?
A communication disorder is a developmental or acquired impairment to an
individual’s speech, language, or hearing—and some individuals face deficits in more
than one area. Communication disorders can make it mildly or profoundly difficult for
someone to receive, send, or understand various forms of communication. Continue
reading to learn more about the patients a speech-language pathologist (SLP) typically works with on the job.
Communication Disorders vs. Voice Disorders
While communication disorders may include voice disorders, they are not synonymous.
Communication disorders cover speech, language, and hearing disorders. Voice
disorders fall under speech disorders but are in their own category and refer specifically
to an individual’s voice quality when it is abnormal for their age or gender. Voice
disorders can either be organic or nonorganic.
Organic voice disorders are physiological, typically resulting from laryngitis, paralyzed
vocal cords, or another issue with the vocal cords. Nonorganic voice disorders, also
known as functional voice disorders, occur when there are no abnormalities in one's physical vocal structure, but the vocal system is used ineffectively. Some voice disorders are vocal fatigue, muscle tension dysphonia, diplophonia, and ventricular phonation.
In rare instances, a voice disorder can be caused by psychological stressors. In this
case, an SLP might recommend a patient to a psychologist or psychiatrist and may
even work with them cross-functionally. There are other disorders that still impact one’s
vocal capabilities that aren’t categorized as voice disorders, such as paradoxical vocal
fold movement (PVFM), that an SLP will diagnose and treat with vocal and breathing
exercises to improve laryngeal and respiratory control.
Types of Communication Disorders
Communication disorders are grouped into four main categories: speech disorders,
language disorders, hearing disorders, and central auditory processing disorders.
1. Speech Disorders
A speech disorder causes an individual to have difficulty with creating or forming
speech sounds. Speech disorders include articulation, fluency, and voice.
Articulation and phonological disorders are caused by structural changes in the
muscles and bones used to make speech sounds, and as a result, produce atypical
speech.
Fluency disorders are caused by genetic or neurophysiological disruptions in the
flow of an individual’s dialogue that produce atypical rhythm in sounds, such as
stuttering or cluttering.
Voice disorders are caused by abnormalities in pitch or resonance that don’t align
with a person’s age or gender.
2. Language Disorders
A language disorder is defined as an impairment to an individual’s use or
understanding of verbal, written, or other language systems. Different from speech
disorders, language disorders refer to one’s expressive and receptive language.
Someone with a language disorder may have challenges using or understanding
language—or a combination of both.
A language disorder doesn’t have to do with speech or hearing but rather the form,
content, or function of language.
The form of language refers to the sound, structure, and word combination within
a language system. These are also referred to as phonology, morphology,
and syntax.
The content of language or semantics refers to the meaning of words and phrases.
The function of language or pragmatics refers to how an individual uses the
elements from phonology, morphology, syntax, and semantics to communicate.
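This form/content/function breakdown maps onto the five components introduced at the start of this guide, as the small table below shows.

```python
# The three dimensions of language and the components each one covers.
language_dimensions = {
    "form":     ["phonology", "morphology", "syntax"],
    "content":  ["semantics"],
    "function": ["pragmatics"],
}

print(language_dimensions["form"])   # ['phonology', 'morphology', 'syntax']
```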
Most commonly, language disorders are developmental, but in other scenarios,
they can be caused by brain injury or illness.
3. Hearing disorders
Hearing disorders prevent an individual from hearing at total capacity, limiting an
individual’s ability to produce, comprehend, or maintain speech. Someone with a
hearing disorder can have difficulty recognizing and understanding auditory
material. Someone with a hearing disorder can be described as deaf or hard of
hearing.
Deaf individuals suffer from auditory nerve damage that causes severe hearing loss
with little to no functional hearing. Deafness can limit oral communication, and
therefore a deaf individual may receive sensory input from somewhere other than
the auditory channel.
Those who are hard of hearing will still receive sensory input from an auditory
channel but may have difficulty hearing and, therefore, communicating. Someone
hard of hearing could experience a hearing impairment that’s either fluctuating or
permanent due to a disease or eardrum infection. These impairments can typically
be treated so the person can regain partial hearing.
4. Central auditory processing disorders (CAPD)
Central auditory processing disorders are caused by deficits in how individuals process auditory information in the central auditory nervous system (CANS). A central auditory processing disorder is diagnosed when an individual has difficulty processing audible signals, and these challenges are not due to peripheral or intellectual impairments. CAPD involves limitations in how an individual receives, analyzes, and stores information from audible signals.
The role of SLPs in treating communication disorders
SLPs are instrumental in improving their patients’ communication ability to help
give them a better quality of life. In general, SLPs:
Diagnose communication disorders
Create treatment plans
Recommend and reevaluate strategies to tailor individualized treatment plans
aligned with a patient’s strengths
Deliver therapy and ongoing support to those impacted by communication
disorders
If you’re passionate about working with a particular type of disorder, you can look
for and enroll in a program where faculty are engaged in research on a specific
communication disorder. If you’re interested in working as a speech-language
pathologist someday so that you can treat individuals with communication
disorders, learn more about the Department of Communication Sciences and
Disorders at Northeastern University.
Aphasia
What is aphasia?
Aphasia is a disorder that results from damage to portions of the brain that are
responsible for language. For most people, these areas are on the left side of the
brain. Aphasia usually occurs suddenly, often following a stroke or head injury, but
it may also develop slowly, as the result of a brain tumor or a progressive
neurological disease. The disorder impairs the expression and understanding of
language as well as reading and writing. Aphasia may co-occur with speech
disorders, such as dysarthria or apraxia of speech, which also result from brain
damage.
Who can acquire aphasia?
Most people who have aphasia are middle-aged or older, but anyone can acquire
it, including young children.
What causes aphasia?
Aphasia is caused by damage to one or more of the language areas of the brain.
Most often, the cause of the brain injury is a stroke. A stroke occurs when a blood
clot or a leaking or burst vessel cuts off blood flow to part of the brain. Brain cells
die when they do not receive their normal supply of blood, which carries oxygen
and important nutrients. Other causes of brain injury are severe blows to the head,
brain tumors, gunshot wounds, brain infections, and progressive neurological
disorders, such as Alzheimer's disease.
[Figure: areas of the brain affected by Broca's and Wernicke's aphasia]
What types of aphasia are there?
There are two broad categories of aphasia: fluent and nonfluent, and there are
several types within these groups.
Damage to the temporal lobe of the brain may result in Wernicke's aphasia
(see figure), the most common type of fluent aphasia. People with Wernicke's
aphasia may speak in long, complete sentences that have no meaning, adding
unnecessary words and even creating made-up words.
As a result, it is often difficult to follow what the person is trying to say. People
with Wernicke's aphasia are often unaware of their spoken mistakes. Another
hallmark of this type of aphasia is difficulty understanding speech.
The most common type of nonfluent aphasia is Broca's aphasia (see figure). People
with Broca's aphasia have damage that primarily affects the frontal lobe of the
brain. They often have right-sided weakness or paralysis of the arm and leg
because the frontal lobe is also important for motor movements. People with
Broca's aphasia may understand speech and know what they want to say, but they
frequently speak in short phrases that are produced with great effort. They often
omit small words, such as "is," "and" and "the."
For example, a person with Broca's aphasia may say, "Walk dog," meaning, "I will
take the dog for a walk," or "book book two table," for "There are two books on
the table." People with Broca's aphasia typically understand the speech of others
fairly well. Because of this, they are often aware of their difficulties and can
become easily frustrated.
Another type of aphasia, global aphasia, results from damage to extensive portions
of the language areas of the brain. Individuals with global aphasia have severe
communication difficulties and may be extremely limited in their ability to speak
or comprehend language. They may be unable to say even a few words or may
repeat the same words or phrases over and over again. They may have trouble
understanding even simple words and sentences.
There are other types of aphasia, each of which results from damage to different
language areas in the brain. Some people may have difficulty repeating words and
sentences even though they understand them and can speak fluently (conduction
aphasia). Others may have difficulty naming objects even though they know what
the object is and what it may be used for (anomic aphasia).
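The contrasts described above can be caricatured as a rule of thumb. The sketch below is a deliberately crude teaching aid, not a diagnostic tool: real assessment is clinical and far more nuanced.

```python
# A crude rule of thumb restating the contrasts in the text above.
def aphasia_profile(fluent_speech, good_comprehension):
    if not fluent_speech and not good_comprehension:
        return "suggests global aphasia (extensive damage to language areas)"
    if fluent_speech and not good_comprehension:
        return "suggests Wernicke's aphasia (fluent but often meaningless speech)"
    if not fluent_speech and good_comprehension:
        return "suggests Broca's aphasia (effortful, telegraphic speech)"
    return "not captured by this contrast (e.g. conduction or anomic aphasia)"

print(aphasia_profile(fluent_speech=False, good_comprehension=True))
```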
Sometimes, blood flow to the brain is temporarily interrupted and quickly restored.
When this type of injury occurs, which is called a transient ischemic attack,
language abilities may return in a few hours or days.
Language for special purpose
English for Specific Purposes (ESP) is a language-teaching approach whose goal is to provide learners who have narrowly defined goals with the language elements they need to function as professionals. Diane Belcher says,
“For those who are at all familiar with the approach to English language teaching
known as English for specific purposes, or ESP (also known as LSP), the
descriptors likely to spring to mind probably include such terms as needs-
based, pragmatic, efficient, cost-effective, and functional: a view of ESP
encapsulated by Hutchinson and Waters (1987) in the statement, ‘Tell me what you
need English for and I will tell you the English that you need’ (p. 8)” (134).
The existence of ESP courses does not diminish the value of more general English-learning courses, any more than a technical or vocational school threatens the value or existence of a traditional university. It merely acknowledges that not every individual or group has the same motivations for learning a language.
Good Luck….