Group 6 - Phonetics and Phonology
Group 6 members:
Afni Sofia Nursafni (20227470186)
Azlia Saraswati P (20227470038)
The writers would like to express our deepest gratitude to Allah SWT for the blessings,
guidance, and opportunity given to us so that we could complete this paper for the Advanced
Linguistics subject, entitled "Phonetics and Phonology".
We realize that this paper is far from perfect. Criticism and suggestions from readers
will be highly appreciated and will help us improve our work in future opportunities.
We would also like to extend our gratitude to our lecturer, Dr. Supeno, M.Hum., for
the endless encouragement and support.
We hope that this paper will be useful for readers and can serve as a reference
for further studies in the same field. Thank you.
The Authors,
TABLE OF CONTENTS
PREFACE
TABLE OF CONTENTS
B. Form of problems
A. Phonetics
B. Phonology
CONCLUSION
BIBLIOGRAPHY
CHAPTER I INTRODUCTION
A. Background of Study
Phonetics and phonology are two branches of linguistics that study the sounds of human
language. Phonetics is the study of the physical properties of speech sounds: how they are
produced, how they are perceived, and their acoustic characteristics. Phonology is the study of the sound system of
a language, including how sounds are combined to form words and how they are used to convey
meaning.
The study of phonetics has a long history, dating back to the ancient Greeks. The first
systematic study of phonetics was conducted by the Indian grammarian Panini in the 4th
century BC. Panini's work was later translated into Arabic and Latin, and it had a major
influence on the development of phonetics in Europe.
The study of phonology began in the 19th century, with the work of the Danish linguist
Rasmus Rask and the German linguist Jacob Grimm. Rask and Grimm were the first to propose
that the sounds of a language are organized into a system, and they developed the first
systematic methods for describing the sound systems of languages.
In the 20th century, the study of phonetics and phonology was revolutionized by the
development of new technologies, such as the spectrograph and the computer. These
technologies made it possible to study speech sounds in much greater detail than ever before,
and they led to a number of new insights into the nature of speech sounds and the sound systems
of languages.
Today, phonetics and phonology are two of the most active areas of research in linguistics.
Phoneticians and phonologists are working to understand the physical properties of speech
sounds, the sound systems of languages, and the relationship between speech sounds and
meaning. Their work has important implications for a wide range of fields, including speech
recognition, speech synthesis, language learning, and language pathology.
B. Form of problems
Based on the background of the study above, the problems can be formulated as follows:
CHAPTER II DISCUSSION
A. Phonetics
'Phonetics' is the study of pronunciation. Other designations for this field of inquiry
include 'speech science' or the 'phonetic sciences' (the plural is important) and 'phonology.'
Some prefer to reserve the term 'phonology' for the study of the more abstract, the more
functional, or the more psychological aspects of the underpinnings of speech and apply
'phonetics' only to the physical, including physiological, aspects of speech. In fact, the
boundaries are blurred and some would insist that the assignment of labels to different
domains of study is less important than seeking answers to questions.
Phonetics attempts to provide answers to such questions as: What is the physical nature
and structure of speech? How is speech produced and perceived? How can one best learn to
pronounce the sounds of another language? How do children first learn the sounds of their
mother tongue? How can one find the cause and the therapy for defects of speech and hearing?
How and why do speech sounds vary—in different styles of speaking, in different phonetic
contexts, over time, over geographical regions? How can one design optimal mechanical
systems to code, transmit, synthesize, and recognize speech? What is the character and the
explanation for the universal constraints on the structure of speech sound inventories and
speech sound sequences? Answers to these and related questions may be sought anywhere
in the 'speech chain,' i.e., the path between the phonological encoding of the linguistic
message by the speaker and its decoding by the listener.
The speech chain is conceived to start with the phonological encoding of the targeted
message, conceivably into a string of units like the phoneme although there need be no firm
commitment on the nature of the units. These units are translated into an orchestrated set of
motor commands which control the movements of the separate organs involved in speech.
Movements of the speech articulators produce slow pressure changes inside the airways of
the vocal tract (lungs, pharynx, oral and nasal cavities) and when released these pressure
differentials create audible sound. The sound resonates inside the continuously changing
vocal tract and radiates to the outside air through the mouth and nostrils. At the receiving
end of the speech chain, the acoustic speech signal is detected by the ears of the listener
and transformed and encoded into a sensory signal that can be interpreted by the brain.
Although often viewed as an encoding process that involves simple unidirectional
translation or transduction of speech from one form into another (e.g., from movements of
the vocal organs into sound, from sound into an auditory representation), it is well established
that feedback loops exist at many stages. Thus, what the speaker does may be continuously
modulated by feedback obtained from tactile and kinesthetic sensation as well as from the
acoustic signal via auditory decoding of his speech.
In addition to the speech chain itself, which is the domain where speech is
implemented, some of the above questions in phonetics require an examination of the
environment in which speech is produced, that is, the social situation and the functional or
task constraints, e.g., that it may have evolved out of other forms of behavior, that it must be
capable of conveying messages in the presence of noise, and that its information is often
integrated with signals conveyed by other channels.
The endpoints of the speech chain in the brains of the speaker (transmitter) and the
listener (receiver) are effectively hidden and very little is known about what goes on there.
For practical reasons, then, most research is done on the more accessible links in the chain:
neuromuscular, aerodynamic, articulatory, and acoustic. The articulatory phase of speech is
perhaps most immediately accessible to examination by direct visual inspection and (to the
speaker himself) via tactile and kinesthetic sensation. Thus, it is at this level that speech was
first studied—supplemented
by less precise auditory analysis—in several ancient scientific traditions. This history of
phonetics—going back some 2.5 millennia—makes it perhaps the oldest of the behavioral
sciences and, given the longevity and applicability of some of the early findings from these
times, one of the most successful.
In the second half of the nineteenth century the instrumental study of speech both
physiologically and acoustically was initiated and this has developed continuously until now
some very advanced methods are available, especially as these involve on-line control and
rapid analysis of signals by computers. One of the most useful tools in the development of
phonetics has been phonetic transcription, especially the near-universally used International
Phonetic Alphabet (IPA). Based on the principle of 'one sound, one symbol' it surpasses
culturally-maintained spelling systems and permits work in comparative phonetics and in
phonetic universals (Maddieson 1984).
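To make the 'one sound, one symbol' principle concrete, the short Python sketch below (our own illustration, not part of the source) contrasts English spelling with broad IPA-style transcriptions for a few words whose shared spelling <ough> hides four different pronunciations; the tiny lexicon is assumed for the example.

# Toy illustration of 'one sound, one symbol': the spelling <ough> is shared
# by all four words below, yet each corresponds to a different sequence of
# sounds.  The IPA-style transcriptions are broad, British-style forms and
# serve only to illustrate the principle.
LEXICON = {
    "though":  "ðəʊ",
    "through": "θruː",
    "tough":   "tʌf",
    "cough":   "kɒf",
}

def spelling_vs_ipa(word: str) -> str:
    """Contrast a word's orthography with its (assumed) IPA-style transcription."""
    return f"<{word}> -> /{LEXICON.get(word, '?')}/"

if __name__ == "__main__":
    for w in LEXICON:
        print(spelling_vs_ipa(w))

Whatever the exact transcriptions, the point of the sketch is that the IPA assigns each distinct sound its own symbol, whereas orthography does not.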
In addition to classifying some of the subareas of phonetics according to the point in
the speech chain on which they focus, research is often divided up according to the particular
problem attacked or to a functional division of aspects of the speech signal itself.
One of the overriding problems in phonetics is the extreme variability in the physical
manifestation of functionally identical units, whether these be phonemes, syllables or
words. Theories of coarticulation, i.e., the overlap or partially simultaneous production of
two or more units, have been developed to account for some of this variation. Other
proposed solutions to this problem emphasize that if properly analyzed there is less variation
in speech than appears at first: more global, higher-order patterns in the acoustic speech
signal may be less variably associated with given speech units than are the more detailed
acoustic parameters. Other approaches lay emphasis on the cognitive capacity of speakers
and hearers to anticipate each other's abilities and limitations and to cooperate in the
conveyance and reception of pronunciation norms. Another major problem is how the motor
control of speech is accomplished by the brain when there are so many different structures
and movements to be carefully coordinated and orchestrated, in biophysical terms, where
there are an inordinate number of 'degrees of freedom.' A proposed solution is the positing of
coordinative structures which reduce the degrees of freedom to a manageable few (see
Action Theory and the Production of Speech). Related to this issue is the meta-question:
what is the immediate goal of the speaker? What is the strategy of the listener? Is the
common currency of the speaker and hearer a sequence of articulatory shapes—made
audible so they can be transmitted? Or are the articulatory shapes secondary, the common
coin being acoustic-auditory images? It is in this context that atypical modes of speech, i.e.,
substitute articulations adopted for purposes of amusement, as in ventriloquism, or
necessitated by organic defects in the normal articulatory apparatus, are of particular interest.
In spite of the breadth of its scope and the diversity of its approaches,
phonetics remains a remarkably unified discipline.
In its development as a discipline phonetics has drawn from a variety of fields and
pursuits: medicine and physiology (including especially the area of communication
disorders), physics, engineering, philology, anthropology, psychology, language teaching,
voice (singing and oratory) instruction, stenography and spelling reform, and translation,
among others.
But in spite of the many seemingly diverse paths taken by phonetics, it has proven
itself a remarkably unified field. Reports on work in all of these areas are welcome at such
international meetings as the International Congress of Phonetic Sciences (a series begun in
1928, the last one being the 13th, held at Aix-en-Provence in 1991), Eurospeech (the most
recent being the 5th, held in Berlin) and the International Conference on Spoken Language
Processing (a series started in 1990, the most recent being the 2nd, held in Banff, Canada).
Likewise in several journals a quite inter-disciplinary approach to phonetics may be found:
Journal of the Acoustical Society of America, Journal of the Acoustical Society of Japan,
Phonetica, Journal of Phonetics, Language and Speech, Speech Communication.
What this author thinks keeps the field together is this: On the one hand we see speech
as a powerful but uniquely human instrument for conveying and propagating information
and yet because of its immediacy and ubiquity, it seems so simple and commonplace. But
on the other hand, we realize how little we know about its structure and its workings. 'It is
one of the grand scientific and intellectual puzzles of all ages. And we do not know where
the answer is to be found. Therefore, we cannot afford to neglect clues from any possibly
relevant domain.' This is the spirit behind what may be called 'unifying theories' in phonetics:
empirically based attempts to relate to and to link concerns in several of phonetics' domains:
traditional phonology, clinical practice, as well as in the other applied areas. In an earlier era
Zipf's 'principle of least effort' exemplified such a unifying theory: the claim that all human
behavior, including that in speech, attempts to achieve its purposes in a way that minimizes
the expenditure of energy. Zipf applied his theory to language change, phonetic universals,
syntax, as well as other domains of behavior. In the late twentieth century there are unifying
theories known by the labels 'motor theory of speech perception' (Liberman, et al. 1967;
Liberman and Mattingly 1985), 'quantal theory,' 'action theory,' 'direct realist theory of
speech perception,' 'biological basis of speech,' among others. They address questions in
phonetic universals, motor control, perception, cognition, and language and speech
evolution. Needless to say, one of the principal values of such theories—including the ones just
mentioned—is not that they be 'true' (the history of science, if not our philosophy of science,
tells us that what we regard as 'true' in the late twentieth century may be replaced by other
theories in the future) but rather that they be interesting, ultimately useful, testable, and
that they force us to constantly enlarge the domain of inquiry—in other words that they
present a challenge to conventional wisdom.
See also: Neuromuscular Aspects of Speech; Action Theory and the Production of Speech;
Phonetics, Articulatory; Speech Development: Acoustic/Phonetic Studies; Phonetics,
Descriptive Acoustic; Speech Processing: Auditory Models; Speech Perception: Direct
Realist Theory; Speech Perception; Speech: Biological Basis; Speech Production: Atypical
Aspects; Voice Quality; Quantal Theory of Speech; Phonetics: Precursors of Modern
Approaches; Arab and Persian Phonetics; Phonetics, East Asian: History; Phonetics:
Instrumental, Early Modern; Phonetic Transcription: History; Phonetic Pedagogy; Whistles
and Whistled Speech.
B. Phonology
Phonology comes from the Ancient Greek "phone" (voice, sound) and "logos" (word, speech,
subject of discussion). Phonology is the study of the systems of sounds and sound
combinations in a language. It is concerned with how these sounds are systematically
organized in a language: how they are combined to form words, how they are categorized,
and how they are interpreted in the minds of speakers. The study of phonology in the Western
tradition goes back almost 200 years, to the early 1800s, when European linguists began
studying sound change by comparing the speech sounds in a variety of related languages.
However, the emphasis in modern phonology, as it has developed over the last 30 years, has
been primarily on the psychological system that underlies pronunciation, and only secondarily
on the actual physical articulation of speech. So, phonology is the study of the sound system
of a language: how the particular sounds used in each language form an integrated system for
encoding information and how such systems differ from one language to another.
The scope of phonology extends beyond individual sounds and examines larger units
such as syllables, stress patterns, intonation, and phonotactics (the permissible combinations
of sounds). It also encompasses the study of phonological variation and change across
different dialects, languages, and historical periods. Phonology analyzes how sounds interact
with each other and with other aspects of language structure, such as morphology and syntax.
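As a concrete illustration of phonotactics (not taken from the source), the minimal Python sketch below checks candidate syllable onsets against a small, hand-picked whitelist of English-like two-consonant clusters; the whitelist is an assumption made for the example, not a full description of English.

# Minimal phonotactic check: is a given consonant cluster a permissible
# syllable onset?  The whitelist is deliberately tiny and illustrative.
ALLOWED_ONSETS = {
    ("p", "l"), ("p", "r"), ("b", "l"), ("b", "r"),
    ("t", "r"), ("d", "r"), ("k", "l"), ("k", "r"),
    ("s", "p"), ("s", "t"), ("s", "k"), ("f", "l"), ("f", "r"),
}

def onset_is_legal(onset: tuple) -> bool:
    """Single consonants (or empty onsets) are fine; clusters must be whitelisted."""
    return len(onset) <= 1 or onset in ALLOWED_ONSETS

print(onset_is_legal(("p", "l")))  # True:  'play' is a possible English word
print(onset_is_legal(("t", "l")))  # False: *'tlay' violates the constraint
print(onset_is_legal(("b",)))      # True:  a single-consonant onset

A real phonotactic grammar would state such restrictions in terms of sound classes rather than listing clusters, but the sketch shows the basic idea of permissible versus impermissible combinations.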
Phonology is typically divided into two levels: segmental phonology and
suprasegmental phonology.
Segmental phonology is concerned with individual sound segments, such as the following:
a. Vowels
Vowels are sounds that are produced with the vocal cords vibrating and with a relatively
open vocal tract. They are characterized by their relative openness or closeness (vowel height)
and by their frontness or backness. For example, the English vowels in the words "eat", "bit",
and "bait" are all pronounced with the vocal cords vibrating, but they differ in height and
frontness.
b. Consonants
Consonants are sounds that are produced with a complete or partial obstruction of the
airflow in the vocal tract; they may be voiced or voiceless. They are characterized by the way
that the airflow is blocked or constricted. For example, the English consonants in the words
"stop", "spit", and "spot" are all pronounced without vibrating the vocal cords (they are
voiceless), but they differ in the way that the airflow is blocked or constricted in the vocal
tract.
c. Tones
Tones are variations in pitch that are used to distinguish between words or word
meanings. For example, in Mandarin Chinese, the syllable "ma" can have four different
meanings, depending on the tone that is used. With a high level tone, "ma" (mā) means
"mother"; with a rising tone, "ma" (má) means "hemp"; with a falling-rising tone, "ma" (mǎ)
means "horse"; and with a falling tone, "ma" (mà) means "scold".
Suprasegmental phonology is concerned with features that extend over more than one
segment, such as the following:
a. Syllables
Syllables are important for understanding how human languages work. They are the
building blocks of words, and they are used to distinguish between different words. Syllables
are also important for the study of language acquisition, language change, and language
processing.
b. Stress
Stress is the relative prominence given to certain syllables within a word or to certain words
within a phrase; in English, for example, the noun "record" is stressed on the first syllable,
while the verb "record" is stressed on the second.
c. Intonation
Intonation is the pattern of pitch movement over a phrase or sentence; in English, for example,
rising intonation can signal a yes/no question, while falling intonation typically signals a
statement.
Phonological features are distinctive properties or attributes of speech sounds that are
used to differentiate and classify them. These features serve as the basic building blocks of
phonological analysis and play a crucial role in describing the sound systems of languages.
Here are some commonly recognized phonological features:
1. Voicing: This feature distinguishes between sounds produced with vocal cord
vibration (voiced) and without vocal cord vibration (voiceless). For example, the /b/ sound
in "bat" is voiced, while the /p/ sound in "pat" is voiceless.
2. Place of Articulation: This feature represents the location in the vocal tract where a
sound is produced. Examples of places of articulation include bilabial (sounds produced with
the lips), alveolar (sounds produced with the tongue against the alveolar ridge behind the
upper front teeth), and velar (sounds produced with the back of the tongue against the soft
part of the roof of the mouth).
3. Manner of Articulation: This feature refers to how the airflow is modified or restricted
during the production of a sound. It includes manners such as stops (complete closure and
release of airflow, e.g., /p/), fricatives (partial closure creating turbulent airflow, e.g., /s/),
and approximants (narrow constriction allowing smooth airflow, e.g., /r/).
4. Nasality: This feature indicates whether the airflow passes through the nasal cavity
(nasal sounds) or is exclusively through the oral cavity (non-nasal sounds).
5. Tenseness: This feature differentiates between tense sounds, which are produced with
greater muscular tension and longer duration (e.g., long vowels), and lax sounds, which are
produced with less muscular tension and shorter duration (e.g., short vowels).
6. Height: This feature refers to the relative position of the tongue in producing vowel
sounds, such as high (tongue raised close to the roof of the mouth), mid (tongue in a middle
position), and low (tongue lowered).
7. Backness: This feature describes the relative position of the tongue in the horizontal
dimension during vowel production, such as front (tongue moved forward), central (tongue
in a central position), and back (tongue moved backward).
Phonological features allow linguists to analyze and describe the phonetic and
phonemic distinctions within a language. By examining the presence or absence of specific
features in different phonological contexts, researchers can uncover the underlying rules and
patterns that govern the distribution and behavior of speech sounds, contributing to our
understanding of language structure and sound patterns.
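One way to see how such features classify sounds is to treat each phoneme as a bundle of feature values. The Python sketch below (our own simplified illustration, covering only a handful of English consonants and a reduced feature set) reports the features on which two sounds differ, mirroring the /p/ versus /b/ voicing contrast mentioned above.

# Phonemes as feature bundles: a few English consonants described with a
# reduced set of features.  The values are simplified for illustration.
FEATURES = {
    "p": {"voicing": "voiceless", "place": "bilabial", "manner": "stop",      "nasal": False},
    "b": {"voicing": "voiced",    "place": "bilabial", "manner": "stop",      "nasal": False},
    "m": {"voicing": "voiced",    "place": "bilabial", "manner": "stop",      "nasal": True},
    "s": {"voicing": "voiceless", "place": "alveolar", "manner": "fricative", "nasal": False},
}

def contrast(a: str, b: str) -> dict:
    """Return the features on which phonemes a and b take different values."""
    return {feat: (FEATURES[a][feat], FEATURES[b][feat])
            for feat in FEATURES[a] if FEATURES[a][feat] != FEATURES[b][feat]}

print(contrast("p", "b"))  # {'voicing': ('voiceless', 'voiced')}
print(contrast("b", "m"))  # {'nasal': (False, True)}
print(contrast("p", "s"))  # {'place': ('bilabial', 'alveolar'), 'manner': ('stop', 'fricative')}

Sounds that differ in a single feature, such as /p/ and /b/, are exactly the minimal contrasts that phonological analysis relies on.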
1. Assimilation:
Assimilation is a phonological process in which a sound becomes more similar to a
neighboring sound.
Example: In English, the nasal /n/ often assimilates in place of articulation to a following
consonant: "input" (/ˈɪnpʊt/) is frequently pronounced /ˈɪmpʊt/, with /n/ becoming /m/ before
the bilabial /p/.
2. Dissimilation:
Dissimilation is a phonological process where sounds become less like each other, often to
avoid the repetition of very similar articulations.
Example: In casual English speech, "February" (/ˈfɛb.ru.ɛr.i/) is often pronounced
/ˈfɛb.ju.ɛr.i/, with the first /r/ changing to /j/ so that the word no longer contains two /r/ sounds
in close succession.
3. Insertion/Deletion:
Insertion and deletion are phonological processes where sounds are added or removed to
simplify pronunciation or accommodate the phonotactic rules of a language.
Example (Insertion): In English, the word "athlete" (/æθ.liːt/) undergoes insertion when a
schwa /ə/ sound is added between the /θ/ and /l/ sounds, resulting in the pronunciation
/æθ.ə.liːt/.
Example (Deletion): In informal speech, "going to" (/ˈɡoʊ.ɪŋ tuː/) is often reduced: the /ŋ/
may become /n/ (/ˈɡoʊ.ɪn tuː/), and with further deletion of the /t/ and reduction of the vowels
it becomes /ˈɡoʊ.ɪnə/ ("gonna").
4. Metathesis:
Example: In English, the word "ask" (/æsk/) undergoes metathesis when the /s/ and /k/
sounds switch places, resulting in the pronunciation /æks/.
It's important to note that these processes may vary across languages and dialects. The
examples provided offer a simplified understanding of these phonological processes.
Additionally, there are many other types of phonological processes that can occur in different
languages, contributing to the complexity and richness of their phonological systems.
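Phonological processes like these can be modelled as small rewrite rules over sequences of phonemes. The Python sketch below (our own illustration, not from the source) implements two toy rules: nasal place assimilation (/n/ becomes /m/ before a bilabial) and casual-speech deletion of /t/ between consonants; both are deliberately simplified.

# Two toy phonological rewrite rules operating on lists of phoneme symbols.
BILABIALS = {"p", "b", "m"}
CONSONANTS = {"p", "b", "m", "t", "d", "k", "g", "s", "z", "f", "v", "n", "l", "r"}

def assimilate_nasal(segments):
    """Nasal place assimilation: /n/ -> /m/ before a bilabial consonant."""
    out = list(segments)
    for i in range(len(out) - 1):
        if out[i] == "n" and out[i + 1] in BILABIALS:
            out[i] = "m"
    return out

def delete_t_between_consonants(segments):
    """Casual-speech cluster reduction: drop /t/ when flanked by consonants."""
    return [seg for i, seg in enumerate(segments)
            if not (seg == "t"
                    and 0 < i < len(segments) - 1
                    and segments[i - 1] in CONSONANTS
                    and segments[i + 1] in CONSONANTS)]

# 'input'   /ɪnpʊt/    -> [ɪmpʊt]   (assimilation)
print(assimilate_nasal(["ɪ", "n", "p", "ʊ", "t"]))
# 'exactly' /ɪgzæktli/ -> [ɪgzækli] (deletion)
print(delete_t_between_consonants(["ɪ", "g", "z", "æ", "k", "t", "l", "i"]))

Real phonological rules are stated over feature classes and are sensitive to syllable and word structure, but the sketch conveys the idea of a systematic, context-dependent change.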
CONCLUSION
Phonetics deals with the physical aspects of sounds produced in human language. It
focuses on the study of the physical properties of speech sounds, including their articulation
(how they are produced by the vocal organs), acoustic properties (how they are transmitted
as sound waves), and auditory perception (how they are perceived by listeners). Phonetics
aims to describe and classify the range of sounds that can be found in languages worldwide,
including vowels, consonants, and suprasegmental features like stress and intonation.
Phonetics and phonology are vital branches of linguistics that enhance our
understanding of the sounds of human language, their organization, and their role in
communication. They contribute to various fields, including language acquisition, speech
pathology, second language learning, and linguistic research.
BIBLIOGRAPHY
Brentari, Diane; Fenlon, Jordan; Cormier, Kearsy (2018). "Sign Language Phonology".
Oxford Research Encyclopedia of Linguistics.
Ladefoged, P. (1993). A Course in Phonetics (3rd ed.). Fort Worth, TX: Harcourt Brace
Jovanovich College Publishers.
Lass, Roger (1998). Phonology: An Introduction to Basic Concepts. Cambridge: Cambridge
University Press.
Liberman, A. M.; Mattingly, I. G. (1985). "The motor theory of speech perception revised".
Cognition, 21, 1–36.
Stokoe, William C. (1978) [1960]. Sign Language Structure: An Outline of the Visual
Communication Systems of the American Deaf. Studies in Linguistics, Occasional
Papers, Vol. 8 (2nd ed.). Department of Anthropology and Linguistics, University at
Buffalo; Silver Spring, MD: Linstok Press.