Short Notes - Mega File
In phonetics and phonology, speech sounds (segments) are described in terms of basic units of contrast defined as gestures.
All consonants have a fixed articulatory target which is realised at a single precise place of articulation. The articulatory target for a particular consonant is known as its articulatory locus.
The traditional terms which are used for all the places of articulation are not just names for particular
locations (on the roof of the mouth). They should be thought of as names for articulatory targets.
i. Bilabial gesture – (e.g., stops and nasals: [p, b, m]). The symbols for the voiceless and voiced bilabial fricatives are [ɸ, β]. These sounds are pronounced by bringing the two lips nearly together, so that there is only a slit between them.
ii. Labiodental fricatives – [f, v]. In English, a labiodental nasal, [ɱ], may occur when /m/ occurs
before /f/, as in emphasis or symphony.
iii. Dental – e.g. the dental fricatives [θ, ð]. In English there are no dental stops, nasals, or laterals except as allophonic realizations (before [θ, ð], as in eighth, tenth, wealth). Many speakers of French, Italian, and other languages (such as Urdu, Pashto and Sindhi) typically have dental stops such as [t̪, d̪].
iv. Alveolar is a very common target: stops, nasals, and fricatives all occur in English and in many other languages with the alveolar ridge as the target of the articulatory gesture (e.g., [t, d, n, l, r], etc.).
v. Retroflex is a very common articulation in many Pakistani languages, made by curling the tip of the tongue up and back so that the tongue tip moves during retroflex sounds such as [ʈ, ɖ, ɳ, ɽ].
vi. Palato-alveolar and palatal are also possible articulatory gestures, commonly found in the world's languages. Similarly, velar sounds found in Urdu and other Pakistani languages need to be mentioned here, including the velar fricatives [x, ɣ]. The gestures for pharyngeal sounds (such as the Arabic pharyngeal fricative [ʕ]) and epiglottal sounds (such as the epiglottal fricative [ʢ]) involve pulling the root of the tongue or the epiglottis back toward the back wall of the pharynx.
Describe nasals.
Like stops, nasals can also occur voiced or voiceless (for example, in Burmese, Ukrainian and French), though in English and most other languages nasals are voiced. As voiceless nasals are comparatively rare, they are symbolized simply by adding the voiceless diacritic [ ̥ ] under the symbol for the voiced sound. There are no special symbols for voiceless nasals: a voiceless bilabial nasal is written [m̥] – a combination of the letter for the voiced bilabial nasal and a diacritic indicating voicelessness.
Q. Explain Fricatives
Fricatives as an articulatory gesture may be divided into voiced and voiceless sounds, but we can also subdivide fricatives in accordance with other aspects of the gestures that produce them. For
example, some authorities have divided fricatives into sounds such as [s], in which the tongue is
grooved so that the airstream comes out through a narrow channel, and those such as [θ], in which the
tongue is flat and forms a wide slit through which the air flows. On the other hand, a slightly better
way of dividing fricatives is to separate them into groups on a purely auditory basis.
The fricatives [s, z, ʒ, ʃ] are called sibilant sounds. They have more acoustic energy—that is, greater
loudness—at a higher pitch than the other fricative sounds.
Approximants, trills, taps and flaps are commonly found, with different articulatory gestures, in the world's languages. These languages vary not only in terms of the nature of the sounds but also in terms of the length of the sound.
The following section covers various types of these sounds found in the world's languages:
Laterals are important articulatory gestures. Laterals are usually presumed to be voiced approximants
unless a specific statement to the contrary is made.
- a lateral stop
- a lateral fricative
- a lateral approximant
There are many types of stop sounds (e.g., b, p, pʰ, bʱ, ɓ, kʼ, dn, nd, tɬ, tɬʼ, ts, tsʼ) and nine types of trill, tap and flap (together one category of approximant-like sounds, i.e., r, ɾ, ɽ, ɹ, ɻ, ʀ, ʁ, ʙ, etc.), with a range of similarities in terms of their articulatory gestures.
Acoustics is the study of the physics of the speech signal (i.e., of speech sounds travelling through the air in wave form from the speaker's mouth to the hearer's ear as vibrations). In acoustics, it is possible to measure and analyze these vibrations (physical properties) by mathematical techniques, usually by using specially developed computer software to produce spectrograms of speech.
Acoustic phonetics also studies the relationship between activity in the speaker's vocal tract and the resulting sounds, drawing on physics, computing, statistics and a range of laboratory experiments. Thus, the analysis of speech using the expertise available in acoustic phonetics is claimed to be more objective and scientific than the traditional auditory method, which mostly depends on the reliability of the trained human ear.
It also involves advanced speech-analysis software for analyzing sound differences (in terms of pitch, loudness and quality) and for distinguishing among speech sounds by giving the detailed composition of energy (e.g., frequency on a spectrum).
According to the experts of speech sounds (phoneticians), acoustic analysis can provide a clear, objective datum for the investigation of speech – the physical 'facts' of utterance. In other words, acoustic evidence is often referred to when one wants to support an analysis made in articulatory or auditory phonetics. Acoustic analysis not only gives us the features of a sound but also tells us about the duration or length of a speech sound. For such an analysis, we need to know the recording materials and procedure carefully. Thus, acoustic analysis describes durational characteristics, articulatory properties and phonetic differences through physical measurement.
The experts of phonetics are particularly interested in analyzing vowels acoustically. They describe vowels in terms of numbers (how many vowels are possible in a language). It is also possible to analyze vowel sounds so that the actual frequencies of the formants (the formant structure of the vowels of a language) can be measured.
The source-filter theory is a model of speech (e.g., vowel) production. According to this theory, the source refers to the
waveform of the vibrating larynx. Its spectrum is rich in harmonics, which gradually decrease in
amplitude as their frequency increases. The various resonance chambers of the vocal tract, especially
the movements of the tongue and lips, act on the laryngeal source in the manner of a filter, reinforcing
certain harmonics relative to others. Thus the combination of these two elements (larynx as source
and cavity as filter) is known as the source-filter model of speech (e.g., vowel) production.
Q. Define formants.
The overtones are called formants, and the lowest three formants distinguish vowels from each other.
Q. Explain in detail the Source – Filter Mechanism.
In this theory, the tract is represented using a source-filter model and several devices have been
devised to synthesize speech in this way. The idea is that the air in the vocal tract acts like the air in an
organ pipe, or in a bottle. Sound travels from a noise-making source (i.e., the vocal fold vibration) to
the lips. Then, at the lips, most of the sound energy radiates away from the lips for a listener to hear,
while some of the sound energy reflects back into the vocal tract. The addition of the reflected sound
energy with the source energy tends to amplify energy at some frequencies and damp energy at others,
depending on the length and shape of the vocal tract. The vocal folds (at larynx) are then a source of
sound energy, and the cavity (vocal tract - due to the interaction of the reflected sound waves in it) is a
frequency filter altering the timbre of the vocal fold sound. Thus this same source-filter mechanism is
at work in many musical instruments. In the brass instruments, for example, the noise source is the
vibrating lips in the mouthpiece of the instrument, and the filter is provided by the long brass tube.
The formants that characterize different vowels are the result of the different shapes of the vocal tract.
Any particle of air, such as that in the vocal tract or that in a bottle, will vibrate in a way that depends
on its size and shape. Remember that the air in the vocal tract is set in vibration by the action of the
vocal folds (in larynx). Every time the vocal folds open and close, there is a pulse of acoustic energy
(activation). Irrespective of the rate of vibration at source (of the vocal folds), the air in the vocal tract
will resonate at these frequencies as long as the position of the vocal organs remains the same.
Because of the complex shape of the filter (tract), the air will vibrate in more than one way at once.
So, the relationship between resonant frequencies and vocal tract shape is actually much more
complicated than the air in the back part of the vocal tract vibrating in one way and the air in other
parts vibrating in another. In most voiced sounds, three formants are produced every time the vocal
folds (source) vibrate. The resonance in the vocal tract (filter) is independent of the rate of vibration
of the vocal folds (source). In other words, the vocal folds may vibrate faster or slower, giving the
sound a higher or lower pitch, but the formants will be the same as long as the position of the tube
(vocal tract) is the same.
The theory of perturbation says that with the acoustic effect of constriction at the lips, we can predict
the formant frequency differences between rounded and unrounded vowels. Keeping in mind this
modification in the size and nature of vocal tract (for specific vowel sounds), we can estimate how
this perturbation theory works. So for each formant, there are locations in the vocal tract where
constriction will cause the formant frequency to rise, and locations where constriction will cause the
frequency to fall.
According to perturbation theory, resonance occurs in a uniform tube where one end is closed and the other end is open. This theory tells us whether each resonance frequency increases or decreases when a
small modification occurs in the diameter at a local region of the tube (tract). As a result:
-the resonance frequency of the particular resonance mode decreases when a constriction is located at
an anti-node of that resonance mode; and
-the resonance frequency of the particular resonance mode increases when a constriction is located at
a node of that resonance mode.
Keeping in mind this idea of perturbation theory, we can derive that the resonance frequencies
will change (decrease or increase) as per the position (modification in the size and nature) of the vocal
tract (tube).
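As a rough numerical illustration of the uniform-tube idea (a standard textbook approximation, not part of the original notes), the resonances of a tube closed at one end fall at odd multiples of c/4L. The short Python sketch below uses an assumed tract length of 17.5 cm and a speed of sound of 35,000 cm/s, giving the familiar neutral-vowel resonances near 500, 1500 and 2500 Hz, which constrictions then perturb up or down as described above.

# Quarter-wavelength resonances of a uniform tube closed at one end (an
# idealization of the neutral vocal tract); the values below are assumptions.
SPEED_OF_SOUND = 35000.0   # cm/s, approximate speed of sound in warm, moist air
TRACT_LENGTH = 17.5        # cm, a typical vocal-tract length (assumed)

for n in range(1, 4):      # first three resonance modes
    frequency = (2 * n - 1) * SPEED_OF_SOUND / (4 * TRACT_LENGTH)
    print(f"Resonance {n}: {frequency:.0f} Hz")   # ~500, 1500, 2500 Hz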
Using computer programs, we can analyze vowel sounds by showing their components through the
display (spectrogram). In spectrograms, time runs from left to right, the frequency of the components
is shown on the vertical scale, and the intensity of each component is shown by the degree of
darkness. It is thus a display that shows, roughly speaking, dark bands for concentrations of energy at
particular frequencies—showing the source and filter characteristics of speech. The first two
frequencies are important here. The first formant (F1) is inversely related to the height of a vowel
whereas the second formant (F2) is related to the frontness of a vowel sound. When the first two
formants are taken, the vowels of a language can be plotted on a chart and the structure is very much
related to the traditional description of vowel sounds.
The acoustic properties (structure) of consonantal sounds are usually more complicated than those of vowels. Usually, a consonant can be said to be a particular way of beginning or ending a vowel sound, because during the production of the consonant itself there is often no prominently visible distinguishing feature.
There is virtually no difference in the sounds during the actual closures of voiced stops [b, d, g], and
absolutely none during the closures of voiceless stops [p, t, k], because there is only silence at these
points. Each of the stop sounds conveys its quality by its effect on the adjacent vowel. There are some
consonantal sounds which have vowel like structure; therefore, their acoustic features are somehow
similar to vowels (in the case of nasal consonants, approximants and glides) but most of the
consonants have totally different acoustic features.
1. Using Praat (or any other software) and spectrogram is particularly useful when a researcher
is working on a problem related to the nature (physical properties) of a sound (e.g., is it a
phoneme or allophone?).
2. It increases our understanding of speech sounds and their behavior in different forms (in isolation or as part of connected speech).
3. Practice on spectrogram gives us the opportunity to learn about the characteristics of speech
sounds.
4. It is also important for experts who are working on phonetic aspects of speech as signal
processing.
5. Spectrograms are also used as part of the techniques in speech recognition.
6. Spectrograms enable us to explore the complex nature of speech structure (as part of spoken language).
7. Spectrograms are part of the techniques used in machine translation.
1. Start analyzing sounds one by one by keeping in mind the individual characteristics of sounds
as a class.
2. Carefully see the overall structure, especially the frequency scale.
3. While interpreting consonants, also analyze the behavior of the adjacent vowels.
4. Pay more attention to the first two formants (especially for vowels).
5. Watch for a burst and aspiration in stop sounds.
6. Remember that in vowels the first formant is inversely related to the height of a vowel (the
lower is F1, the higher is the vowel) and F2 is related to the degree of backness of the vowel.
7. It is, of course, also possible to tell many other things about the manner of articulation from
the spectrograms of various sounds (e.g., one can usually see whether a stop has been
weakened to a fricative, or even to an approximant in some cases). Similarly, the process of
affrication (of a stop) can also be seen on many occasions. Trills can be separated from taps
and flaps, and voiced sounds from voiceless ones.
1. Spectrograms show relative quality (e.g., a particular speaker may have a higher level of
vowels).
2. The formant plots (of an average speaker) may be compared with the formant plots of a
particular speaker.
3. Similarly, when two different speakers record their sets of vowels with the same phonetic
quality, their relative positions on a formant chart will be similar, but the absolute values of
the formant frequencies will differ.
4. In such case, the absolute values of the vowels will be important. But remember that it is a
complex issue and no simple (or single) technique is useful.
5. One very important strategy may be to use the values of F4 (which may be the indicator of
individual head size). Thus F4 may be studied in connection with first three formants for
further evaluation. Remember that F4 for other sounds will also be required for such a
comparison.
The fundamental distinction between consonant and vowel sounds is that vowels make the
least obstruction to the flow of air. In addition to this, vowels are almost always found at the center of
a syllable, and it is very rare to find any sound, other than a vowel which can stand alone as a whole
syllable. Phonetically, each vowel has a number of features (properties) that distinguish it from other
vowels. These include, firstly, the shape of the lips (lip-rounding): rounded (for sounds like the /u:/ vowel), neutral (as for the schwa /ə/) or spread (as in the /i:/ sound in a word like sea – the vowel photographers traditionally ask for when they tell you to say "cheese" /tʃi:z/ in order to make you look as if you are smiling). Secondly, the part of the tongue: the front, the middle or the back of the tongue may be raised, giving different vowel qualities: compare the /æ/ vowel (as in the word 'cat'), which is a front vowel, with the /ɑ:/ vowel (as in 'cart'), which is a back vowel. Thirdly, the tongue (and the lower jaw) may be raised 'close' to the roof of the mouth (for close vowels, e.g. /i:/ or /u:/), or the tongue may be left 'low' in the mouth with the jaw comparatively 'open' (as for open vowels, e.g., /ɑ:/ and /æ/).
In order to classify vowels independently of the vowel system of a particular language, the English phonetician Daniel Jones introduced a system in the early 20th century and worked out a set of vowels called the "cardinal vowels", comprising eight vowels to be used as reference points (so that other vowels could be related to them, like the corners and sides of a map).
The cardinal vowel system is a chart or four-sided figure (the exact shape of which has been changed from time to time), with eight peripheral reference points, as can be seen on the IPA chart from the IPA website. It is a diagram to be used both for rounded and unrounded vowels, and Jones proposed that there should be a primary and a secondary set of cardinal vowels. The primary set includes eight vowels in total (from 1 to 8): the front unrounded vowels [i, e, ε, a], the back unrounded vowel [ɑ] and the rounded back vowels [ɔ, o, u].
Secondary cardinal vowels 9 to 13, [y, ø, œ, ɶ, ɒ], are the rounded counterparts of primary cardinal vowels 1 to 5 [i, e, ε, a, ɑ]. Similarly, secondary cardinal vowels 14 to 16 [ʌ, ɤ, ɯ] are the unrounded equivalents of primary cardinal vowels 6 to 8 [ɔ, o, u] respectively. Two further cardinal vowels (17 and 18), symbolized by [ɨ] and [ʉ], represent the highest point at the center
where the tongue can possibly reach. The entire vowel system (of human languages) is usually shown in the form of the cardinal vowel diagram (roughly resembling the tongue space, with eight peripheral points) or the cardinal vowel quadrilateral. The aim is to give an approximate configuration of the degree and
direction of tongue movement involved in the vowel production. These diagrams have been
successfully used for the description of the vowel systems found in the dialects and languages of the world.
Vowel sounds in various accents of English are interesting for a number of reasons. They provide a solid basis for comparisons and contrasts. For example, the accent of American newscasters' English is fairly conservative and differs (in the first two formants) from Californian English. Californian English does not maintain a contrast between the vowels in cot and caught (they are both spoken with the same vowel [ɑ]). Moreover, Californians have a higher vowel (a lower first formant) in [eɪ] than in [ɪ]. Their high back vowels seem more fronted, as they have a higher second formant. Among other differences, the vowel /ʊ/ is often pronounced with spread lips in this variety of English. Similarly, in a number of northern cities in the United States (e.g., Pittsburgh and Detroit), [æ] is spoken very close to [ɛ] (raised, with a decreased first formant). These are some of the examples of differences found in various varieties of English.
While discussing vowels in English, we can explore the vowels of the BBC accent for a number of reasons. The first and foremost comparison may be with American English (or varieties of American English). The vowels of the BBC accent are different in both number (20 in total – both pure vowels and diphthongs) and quality. British English speakers distinguish the vowel [ʌ] in cut from the vowel [ɜ] in curt; the latter does not have any r-coloring (rhotacization), and the two are distinguished mainly by the frequency of the first formant. Moreover, a main feature of BBC English (to be noted here) is the distinction between the three back vowels: [ɑ] as in father and cart, [ɒ] as in bother and cot, and [ɔ] as in author and caught. BBC vowels are of particular interest because this accent is often treated as a standard variety of English.
Variations among human languages are not limited only to the number of vowels but also to the
quality and features. Spanish has a very simple system contrasting only five vowel sounds [i, e, a, o,
u]. It is important to remember that these symbols do not have the same values as in English, or as in the descriptions of the cardinal vowels. Japanese also has a set of five vowels [i, e, a, o, u], but in a narrower transcription they are very different from those of Spanish. Similarly, Danish has different vowels and vowel qualities, as it contrasts three front rounded vowels which may occur in both long and short forms.
Focusing on Pakistani regional languages, Urdu and Punjabi have very different nasal vowels (the nasality is phonemic here). Urdu has some seven nasal vowels, in contrasts such as /he/ vs. /hẽ/ or /hi:/ vs. /hĩ:/, as in the word /nahĩ:/ meaning 'no'. Nasalization in vowels is a common feature of Indo-Aryan languages.
Normally, while discussing the degree of variation in vowel sounds, three types of features
are given (i.e., height of tongue, backness of tongue and lip rounding) which cover the major variation
in the world's languages. But this description does not cover all types of variation in vowel quality. One such variation is advanced tongue root (ATR), which is found in the Akan language spoken in
Ghana. Actually, vowels produced with ATR involve the furthest-back part of the tongue, opposite to
the pharyngeal wall, which is not normally involved in the production of speech sounds - also called
the radix (articulations of this type may, therefore, be described as radical). ATR (a kind of
articulation in which the movement of the root of tongue expands the front–back diameter of the
pharynx) is used phonologically in Akan (and some other African languages) as a factor in contrast of
vowel harmony. The opposite direction of movement is called retracted tongue root (RTR). ATR is
thus related to the size of pharynx – making the pharyngeal cavity different: creating comparatively
large (+ATR: root forward and larynx lowered) and small pharyngeal cavity (-ATR: no advanced
tongue root). Akan contrasts between two sets of vowels +ATR and –ATR.
In the description of vowel quality, rhotacization (or a rhotacized vowel) is a term used in English phonology referring to dialects or accents where /r/ is pronounced following a vowel, as in the words 'car' and 'cart'. Thus varieties of English are divided on the basis of this feature: varieties having this feature are rhotic (in which /r/ is found in all phonological contexts), while others (not having this feature) are non-rhotic (such as Received Pronunciation, where /r/ is only found before vowels, as in 'red' and 'around'). Similarly, vowels which occur after retroflex consonants are sometimes called rhotacized vowels (they display rhotacization).
The speakers of Urdu, Punjabi and many other Pakistani regional languages learn to produce a variety of nasal vowels as part of their mother tongue and face no difficulty in learning nasalization in vowels. However, speakers of other languages (such as English, which does not have phonemic nasal vowels) have to learn this feature of vowels, for instance by starting to say the low vowel /æ/ as in man while keeping the soft palate lowered. Many languages have contrasts between nasal and oral vowels, including French and Urdu. Urdu and Punjabi have many nasal vowels; Urdu has seven nasal vowels, as in /he/ (meaning 'is') vs. /hẽ/ (meaning 'are'). Nasalization in vowels is a common feature of Indo-Aryan languages. In the IPA chart, the diacritic used for nasalization is the tilde [ ̃ ], placed above the phonetic symbol to show nasality.
Most of the world's languages contain a class of sounds that functions in a way similar to consonants but is phonetically similar to vowels (e.g., in English, /w/ and /j/ as in 'wet' and 'yet'). When they are used in the first part of a syllable (at the onset), they function as consonants. But if they are pronounced slowly, they resemble (in quality) the vowels [u] and [i] respectively. These sounds are called semivowels, and are also termed approximants today. French has three semivowels: in addition to /j/ and /w/ there is another sound, symbolized /ɥ/, which is found in initial position in words like 'huit' /ɥit/ ('eight') and in consonant clusters such as /frɥ/ in /frɥi/ ('fruit'). The IPA chart also lists a semivowel, [ɰ], corresponding to the close back unrounded vowel /ɯ/. Like the others, this is classed as an approximant.
Secondary articulations can usually be described as added vowel-like articulations, including 'palatalization' (adding a high front tongue gesture, as in the vowel /i/), velarization (raising the back of the tongue), pharyngealization (superimposing a narrowing of the pharynx) and labialization (the addition of lip-rounding).
'Supra' means 'above' or 'beyond', and segments are sounds (phonemes). Suprasegmental is a term used in phonetics and phonology to refer to a vocal effect (such as tone, intonation, stress, etc.) which extends over more than one sound (segment) in an utterance.
Major suprasegmental features include pitch, stress, tone, intonation or juncture. Phonological studies
can be divided into two fields: segmental phonology and suprasegmental phonology.
Q. Explain Syllable
In a simple way of defining the term, syllables are the parts into which a word can be further divided, for example, mi-ni-mi-za-tion or sup-ra-seg-men-tal. Phonetically, we can observe
that the flow of speech typically consists of an alternation between vowel-like states (where the vocal
tract is comparatively open and unobstructed) and consonant-like states where some obstruction to the
airflow is made (thus altering speech between the two natural kinds of sounds). So, from the speech
production point of view, a syllable consists of a movement from a constricted or silent state to a
vowel-like state and then back to constricted or silent state. From the acoustic point of view, this
means that the speech signal shows a series of peaks of energy corresponding to vowel-like states
separated by troughs of lower energy (sonority).
Q. Explain Syllable Structure
It can be divided into three possible parts as phonemes may occur at the beginning (onset), in the
middle (nucleus or peak) and at the end (coda) of syllables - the combination of nucleus (peak) and
coda is called the rhyme. The beginning (onset) and ending (coda) are optional while a syllable must
have a nucleus (at least one phoneme). Thus, the study of the sequences of phonemes is called
phonotactics, and it seems that the phonotactic possibilities of a language are determined by its
syllabic structure (sequences of sounds that a native speaker produces can be broken down into
syllables).
A syllable structure could be of three types: 'simple' (CV), 'moderate' (CVC) and 'complex' (with consonant clusters at the edges), such as CCVCC and CCCVCC (where V means vowel and C stands for consonant). Moreover, words can have one syllable (monosyllabic), two syllables (bisyllabic or disyllabic), three syllables (trisyllabic) or many syllables (polysyllabic).
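As a small illustration of this three-way division (an illustrative sketch only, not part of the original notes), the following Python snippet classifies a syllable written as a C/V template:

def classify_syllable(template: str) -> str:
    """Classify a C/V syllable template (e.g. 'CV', 'CVC', 'CCVCC')."""
    if "V" not in template:
        raise ValueError("A syllable must have a nucleus (V).")
    onset, _, coda = template.partition("V")
    if len(onset) > 1 or len(coda) > 1:   # consonant cluster at an edge
        return "complex"
    if coda:                              # a single consonant in the coda
        return "moderate"
    return "simple"                       # V or CV

for t in ["CV", "CVC", "CCVCC", "CCCVCC"]:
    print(t, "->", classify_syllable(t))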
One of the areas in which a little agreement is found is related to the levels of stress. Some
descriptions of languages manage with just two levels (stressed and unstressed), while others use
more than two. In English, one can argue that if one takes the word 'in-di-ca-tor' as an example, the
first syllable is the most strongly (primarily) stressed one, the third syllable is the next most strongly
(secondarily) stressed and the second and fourth syllables are weakly stressed or unstressed
accordingly. This gives us three levels (primary, secondary and tertiary) and it is possible to argue for
more, though this rarely seems to give any practical benefit.
In terms of its linguistic function, stress is often treated under two different headings: word
(lexical) stress and sentence (emphatic) stress. Lexical stress is basically related to the primary stress
applied at syllable level (when only one syllable is stressed) that has the ability to change the meaning
and the grammatical category of a word, as in the case of 'IMport' (noun) and 'imPORT' (verb).
Sentence level stress, on the other hand, is applied on one word (rather than a syllable) in a sentence
thus making that word more prominent (stressed) than the rest of the words in the sentence. This type
of stress has its role in intonation patterns and rhythmic features of the language showing specific
emphasis on the stressed word (which may be highlighting some information in the typical context).
Languages of the world are, therefore, divided into two broad categories: stress-timed languages and syllable-timed languages.
Stress-timed languages have stress as their dominant rhythmic feature, meaning that these languages seem to be timed according to their stress patterns (the division among the syllables is made on the basis of stressed and unstressed patterns, e.g., in English and German). In other words, in stress-timed languages, stressed syllables occur at regular intervals and their units of timing are perceived
accordingly. Stress-timed rhythm is one of these rhythmical types, and is said to be characterized by a
tendency for stressed syllables to occur at equal intervals of time. This idea is further clarified in the
next topic.
In syllable timed languages, all syllables tend to have an equal time value (for example, their
length or duration) and the rhythm of the language is said to be syllable-timed. In these languages,
syllables tend to occur at regular intervals of time with fixed word stress. A classic example is
Japanese in which all morae have approximately the same duration. This tendency is contrasted with
stress-timing where the time between stressed syllables is said to tend to be equal irrespective of the
number of unstressed syllables in between. Czech, Polish, Swahili and Romance languages (e.g.,
Spanish and French) are often claimed to be syllable-timed.
As a suprasegmental feature, pitch is an auditory sensation - when we hear a regularly vibrating sound
such as a note played on a musical instrument (or a vowel produced by the human voice), we hear a
high pitch (when the rate of vibration is high) and a low pitch (when the rate of vibration is low).
There are some speech sounds that are voiceless (e.g. /s/), and cannot give rise to a sensation of pitch
in this way but the voiced sounds can. Thus the pitch sensation that we receive from a voiced sound
corresponds quite closely to the frequency of vibration of the vocal folds. However, we usually refer
to the vibration frequency as fundamental frequency in order to keep the two things distinct. In tonal
languages, pitch is used as an essential component of the pronunciation of a word and a change of
pitch may cause a change in meaning. In most languages (whether or not they are tone languages)
pitch plays a central role in intonation. In very simple words, pitch is the auditory sensation corresponding to the rate of vibration of the vocal folds.
In a simple sense, 'intonation' refers to the variations in the pitch of a speaker's voice used to convey or alter meaning (at sentence level). In its broader and more popular sense, it is used to cover much the same field as 'prosody', where various features such as voice quality, tempo and loudness
are also included. It is a term frequently used in the study of suprasegmental phonology, referring to
the distinctive use of patterns of pitch, or melody and the study of intonation is sometimes called
intonology. In some approaches, the pitch patterns are described as contours and analyzed in terms of
levels of pitch as pitch phonemes and morphemes while in others, the patterns are described as tone
units or tone groups which are further analyzed as contrasts of nuclear tone, tonicity, etc. The three
variables of pitch range, height and direction are generally distinguished.
Intonation as a suprasegmental feature performs several functions in a language. Its most important
function is to act as a signal of grammatical structure (e.g., creating patterns to distinguish among
grammatical categories), where it performs a role similar to punctuation (in written language). It may
furnish far more contrasts (for conveying meaning). Intonation also gives an idea about the syntactic
boundaries (sentence, clause and phrase level boundaries). It also provides the contrast between some
grammatical structures (such as questions and statements).
The role of intonation in the communication is quite important as it also conveys personal attitude
(e.g., sarcasm, puzzlement, anger, etc.). Finally, it can signal contrasts in pitch along with other
prosodic and paralinguistic features. It can also bring variation in meaning and can prove an important
signal of the social background of the speakers.
Linguistic phonetics is an approach which is embodied in the principles of the International Phonetic
Association (IPA) and in a hierarchical phonetic descriptive framework that provides a certain basis for formal phonological theory. Speech, being a very complex phenomenon with multiple levels of organization, needs to be explored from different angles. Linguistic phonetics addresses questions about the possible ways of unifying articulatory phonetics and phonology and, from the perspective of cognitive phonetics, it focuses on speech production and perception and how they shape languages as sound systems. The idea is mainly related to the overall ability of human beings to
produce sounds (as a community and irrespective of their specific languages) and then the
representation of their shared knowledge (as considered by the IPA in its charts) for formal phonetic
and phonological theories.
The description of the phonetics of the individual involves describing the phonetic knowledge and
skills related to the performance of language. It is possible that certain aspects of the phonetics of the
individual can be captured using IPA transcription but others are not compatible with it (such as his
private knowledge and its performance and the role of memory and experience). Secondly, the
phonetics of the individual is usually not the focus of the linguist in speech elicitation, and it is
difficult to describe even with spectrograms of the person's speech. Although the phonetics of the individual is the focus of much of the explanatory power of phonetic theory, for general phonetic description we need to focus on the phonetics of the community.
The IPA (International Phonetic Alphabet) is the set of symbols and diacritics that have been officially approved by the International Phonetic Association. The Association publishes a chart comprising a number of separate charts. At the top, inside the front cover, you will
find the main consonant chart. Below it is a table showing the symbols for nonpulmonic consonants,
and below that is the vowel chart. Inside the back cover is a list of diacritics and other symbols, and a
set of symbols for suprasegmental features (events) such as tone, intonation, stress, and length.
Feature hierarchy is an important concept in phonetics and phonology which is based on the
properties and features of sounds. In a very general sense, a feature may be tied to a particular
articulatory maneuver or acoustic property. For example, the feature [bilabial] indicates not only that
the segment is produced with lips but also that it involves both of them. Such features (in phonetics
and phonology) are listed in a hierarchy with nodes in the hierarchy defining ever more specific
phonetic properties. For example, sounds are divided in terms of their supra-laryngeal and laryngeal
characteristics, and their airstream mechanism. The supra-laryngeal characteristics can be further
divided into those for place (of articulation), manner (of articulation), the possibility of nasality, and
the possibility of being lateral. Thus, these features are used for classifying speech sounds and
describing them formally.
The four features (i.e., Stop, Fricative, Approximant, and Vowel) depend on the degree of
closure of the articulators.
The manner category 'Stop' has only one possible value (i.e., [stop]), but 'Fricative' has two (i.e., [sibilant] and [nonsibilant]). Similarly, 'Approximant' and 'Vowel' have five principal features: Height (with five possible values: [high], [mid-high], [mid], [mid-low], and [low]), Backness (with three values: [front], [center], and [back]) and two kinds of 'Rounding' (i.e., Protrusion, with possible values [protruded] and [retracted], and Compression, with possible values [compressed] and [separated]). The fourth feature for vowels and approximants is 'Tongue Root', which has two possible values: [+ATR] and [−ATR]. Finally, the feature 'Rhotic' has only one possible value, [rhotacized]. It is also important to remember that the 'Laryngeal' possibilities involve mainly five features ([voiceless], [breathy voice], [modal voice], [creaky voice], and [closed] – forming a glottal stop). Airstream features have three values: Pulmonic, Velaric and Glottalic.
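One way to visualize this hierarchy is to write it out as nested data. The Python sketch below is illustrative only: the category and value names follow the notes above, but the data-structure design itself is an assumption, not part of the course material.

# The feature hierarchy described above, written as a nested mapping.
FEATURE_HIERARCHY = {
    "Manner": {
        "Stop": ["stop"],
        "Fricative": ["sibilant", "nonsibilant"],
    },
    "Vowel/Approximant": {
        "Height": ["high", "mid-high", "mid", "mid-low", "low"],
        "Backness": ["front", "center", "back"],
        "Rounding": {
            "Protrusion": ["protruded", "retracted"],
            "Compression": ["compressed", "separated"],
        },
        "Tongue Root": ["+ATR", "-ATR"],
        "Rhotic": ["rhotacized"],
    },
    "Laryngeal": ["voiceless", "breathy voice", "modal voice",
                  "creaky voice", "closed"],
    "Airstream": ["pulmonic", "velaric", "glottalic"],
}

# Example query: list the possible Height values.
print(FEATURE_HIERARCHY["Vowel/Approximant"]["Height"])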
Focusing on the phonetics of the individual, we can explore the control of articulatory movement. For example, underlying our linguistic description of [p], as an example of speech motor control, is an array of muscular complexity involving dozens of muscles in the chest, abdomen, larynx, tongue, throat, and face. Interestingly, all of these are contracted with varying degrees of tension and in a specific sequence and duration of contraction. For this sound (i.e., [p]), in order to produce a lip closure movement, two main muscles (depressor labii inferior and incisivus inferior) are activated with enough, but not too much, tension. At the same time the jaw muscles are activated, so that jaw movement may trade with the lip muscles (together with lower lip movement) in achieving the closing and the opening. This structure specifies an overall task, "close the lips", at the top node, and subtasks such as "raise the lower lip" and "lower the upper lip" are coordinated with each other to accomplish the overall task. Some subtasks also require further reduction of the goal into smaller subtasks. In addition, to create a voiceless bilabial (i.e., [p]), the glottis needs to be wide open (for free air passage). So managing the air passage, keeping the larynx voiceless and creating a closure with the lips and jaw, along with many subtasks, are achieved for [p] mainly through the control of articulatory movements, and this enables us to understand individual variation in the production of speech.
The role of the memory for speech under the exemplar theory suggests that many instances of each
word are stored in memory and their phonetic variability is memorized rather than computed. The
main postulates of the concepts are given here:
To produce sounds with maximum ease of articulation, only similar sounds are affected. The focus of
the speakers is always on maintaining a sufficient perceptual distance between the sounds that occur
in a contrasting set (e.g., vowels in stressed-monosyllabic words beat, bit, bet, and bat). This principle
of perceptual separation does not usually result in one sound affecting an adjacent sound (as
explained in the principle of maximum ease of articulation). Instead, perceptual separation affects the
set of sounds that potentially can occur at a given position in a word, such as in the position that must
be occupied by a vowel in a stressed monosyllable as in words beat, bit, bet, bat so that the perceptual
separation is maximized. The principle of 'maximum perceptual separation' also accounts for some of
the differences between languages. All these examples illustrate how languages maintain a balance
between the requirements of the speaker and those of the listener.
Q. Define syllable
A syllable is a unit of pronunciation typically larger than a single sound and smaller than a word. A word may be divided into such parts, as in ne-ver-the-less.
Q. Define syllabification
Syllabification is the term which refers to the division of a word into syllables.
Q. Define resyllabification
Resyllabification is the reassignment of a sound to a different syllable, as when a word-final consonant comes to be pronounced as the onset of the following syllable in connected speech.
Q. Define polysyllabic
A polysyllabic word is a word consisting of many syllables, e.g., ne-ver-the-less.
Q. Explain Syllabification
There are different models and structures for syllables, and languages are labelled according to their syllabic templates. Consonant sequences are called clusters (e.g., CC – two consonants or CCC –
three consonants). Most of the phonotactic analyses are based on the syllable structures and syllabic
templates. On the basis of these consonant clusters, mainly three types of syllabic patterns are
considered among languages; simple – moderate – complex (on the basis of consonants clusters at
edges: onset and coda).
Q. Define phonotactics
The study of the phonemes and their order found in the syllables (the study of sound sequences) of a
language is called the phonotactics.
Q. Explain Phonotactics
Phonotactics is a term used in phonology to refer to the order (sequential arrangements or tactic
behavior) of segments (sounds or phonological units) which occur in a language. It shows us what
counts as a phonologically well-formed structure of a word. The allowed sound patterns and restricted
sound patterns of language are found through phonotactics. For example, in English, consonant
sequences such as /fs/ and /spm/ do not occur initially in an English word, and there are many other
restrictions on the possible consonant+vowel combinations which may occur. By thoroughly
analyzing the data, the 'sequential constraints' of a language can be stated in terms of phonotactic
rules.
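As a toy illustration of such a phonotactic ('sequential') constraint (a hypothetical sketch only; the onset list below is a small, incomplete sample and not an authoritative description of English), a checker for word-initial clusters might look like this in Python:

# A small, incomplete sample of permitted English onsets (illustrative only).
ALLOWED_ONSETS = {
    "", "p", "b", "t", "d", "k", "g", "f", "s", "m", "n", "l", "r", "w", "h",
    "pr", "br", "tr", "dr", "kr", "gr", "fr", "pl", "bl", "kl", "gl", "fl",
    "sp", "st", "sk", "sm", "sn", "sl", "sw", "spr", "str", "skr", "spl",
}

def is_possible_onset(cluster: str) -> bool:
    """Return True if the (romanized) consonant sequence is a permitted onset."""
    return cluster in ALLOWED_ONSETS

for onset in ["str", "spl", "fs", "spm"]:   # /fs/ and /spm/ are the examples above
    print(onset, "->", "allowed" if is_possible_onset(onset) else "not allowed")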
Nowadays most of the research works in phonetics and phonology are based on software like Praat
and WaveSurfer. Praat is a freeware created by Paul Boersma and David Weenink at the Institute of
Phonetic Sciences of the University of Amsterdam. It is freely downloadable, with regularly updated versions, and its guides and discussions are also available. One of the active platforms for Praat-related discussion and blogs is the Yahoo Praat group. There is also an introductory tutorial available on the Praat homepage. In short, Praat is a computer program with which you can analyze, synthesize, and manipulate speech, and create high-quality pictures for your articles and theses.
The manual which we are going to use in this course was developed by the faculty (Sonya Bird and
Qian Wang) at University of Victoria Canada. It is an excellent manual with worksheets and carefully
designed ten labs. Praat requires only a very modest level of computer experience, and you can easily conduct experiments on a personal computer even at a very early stage. It is very important for
learning phonetics and phonology in general and acoustic analysis of speech in particular and will
help you a lot in your future research work in the area of phonetics and phonology especially acoustic
phonetics.
'Segmenting and labelling' is an experiment in Praat which is particularly helpful in acoustic analysis, because it allows us to place segmental symbols (label and add TextGrids to the spectrogram) and annotate the sound file. The sound file needs accompanying text for a number of reasons, including keeping a record of your measurements from the file by annotating the TextGrid tiers; by learning segmenting and labelling, we also learn how to open both files together and attach phonetic symbols to our recordings.
Follow these steps for learning segmenting and labelling in Praat:
1. Create a TextGrid:
In the Praat Objects window, highlight the required (subject) file.
Annotate > To TextGrid.
Create two tiers (this will be enough for our purposes). Write 'word segment' (these are the two tier names) in the field named 'All tier names' in the small window.
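For those who prefer scripting, the same step can be sketched with the third-party parselmouth library (a Python interface to Praat). This is a minimal sketch under assumptions: the file name 'recording.wav' is a placeholder, and the exact command strings should be checked against the Praat/parselmouth documentation.

import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("recording.wav")        # load the recording (placeholder name)
# Mirrors Annotate > To TextGrid: two interval tiers named "word" and "segment",
# and no point tiers (empty second argument).
textgrid = call(snd, "To TextGrid", "word segment", "")
print(call(textgrid, "Get number of tiers"))    # should report 2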
Exporting your visual displays to a Word document is part of the 'write-up' of the experiment. It is an important step, as we need to report every experiment in our Word file for reporting purposes. There is quite an easy way to do it: maximize the Edit window, hit the 'Prt Scr' (print screen) button and, subsequently, paste the image into the Word document.
Source-filter theory is particularly important to understand the basic components of speech sounds
and the nature of the acoustic signals (it is the physics of speech sounds). It is, therefore, very crucial
to understand the acoustic analysis of speech sounds particularly the vowels and vowel-like (sonorous
sounds). The major goal of this lab is to understand and explore the basic acoustic components of
(sonorous) speech sounds such as fundamental frequency, harmonics, and formants. Basically, these
components create the acoustic signals associated with speech – understanding them is crucial to
understanding what we actually hear, when we hear speech sounds. The labs based on the source-filter
theory are conducted in the next few sessions.
1. Displaying the pitch track and allowing Praat to measure the pitch automatically:
Display the pitch track: Pitch > Show pitch.
Place your cursor in the middle – a stable portion of the vowel.
Go to Pitch > Get pitch – a box will appear with the pitch value in it (note it down)
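The same measurement can be scripted. The sketch below assumes the parselmouth library and a placeholder file name 'vowel.wav'; it simply mirrors the Pitch > Get pitch step above at the vowel's midpoint.

import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("vowel.wav")                 # placeholder file name
pitch = snd.to_pitch()                               # Praat's pitch analysis
midpoint = call(snd, "Get total duration") / 2       # a stable portion near the middle
f0 = call(pitch, "Get value at time", midpoint, "Hertz", "Linear")
print(f"F0 at vowel midpoint: {f0:.1f} Hz")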
Harmonics are whole-number multiples of the fundamental frequency, and are basically the result of vocal fold vibration (a complex wave). We need the 'narrow-band spectrogram' for measuring the harmonics (which we can set by fixing the window length in the spectrogram settings at 0.025 s). Starting by measuring the frequency of the first three harmonics, we will go up to H10 (H1, H2, H3 ... H10). Finally, we will compare these with the pitch measurement already taken. It is important to note that when our vocal folds vibrate, the result is a complex wave consisting of the fundamental frequency plus other, higher frequencies called harmonics. Let's now measure the harmonics.
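Because harmonics are simply whole-number multiples of F0, the expected values of H1 to H10 can be predicted from the pitch measurement and then compared with what the narrow-band spectrogram shows. A tiny sketch (the F0 value below is an assumed example, not a real measurement):

f0 = 120.0                              # Hz, assumed example pitch value
for n in range(1, 11):                  # H1 .. H10
    print(f"H{n}: {n * f0:.0f} Hz")     # H1 = F0, H2 = 2*F0, and so on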
Formants are the overtone resonances. Acoustically, in order to plot vowels on a chart, F1 and F2 are very important. We need the wide-band spectrogram for measuring the formants (which are the important characteristics of sonorant speech sounds, such as vowels). On a spectrogram, formants appear as thick dark bands (darkness corresponds to loudness; i.e. the darkest harmonics are the ones that are the most amplified). These amplified harmonics form the formants that are characteristic of sonorant speech sounds. Now, let's measure the first and second formants (F1 and F2) from the middle of each vowel using the three techniques outlined below and note down your measurements:
3. Measuring the frequency without displaying Praat formants – the easiest way if Praat's formant tracking goes wonky:
Get rid of Praat's formant tracking: Formant > Show formants (untick).
Place your cursor in the center of each formant, in the middle of the vowel.
A red horizontal bar should appear with the frequency value on the left (in red).
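A script-level counterpart of this measurement, again a minimal sketch assuming parselmouth and a placeholder file 'vowel.wav' (check the command strings against the documentation before relying on them):

import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("vowel.wav")                    # placeholder file name
formant = snd.to_formant_burg()                         # Burg formant analysis
t = call(snd, "Get total duration") / 2                 # middle of the vowel
f1 = call(formant, "Get value at time", 1, t, "hertz", "Linear")
f2 = call(formant, "Get value at time", 2, t, "hertz", "Linear")
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")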
It is now clear (from the comparison of the two sets of values – for harmonics and for formants) that the harmonics differ when the pitch differs, but the formants of a given vowel stay the same. The relationship between the harmonics and the formants is captured in the source-filter model of speech production: harmonics are related to the laryngeal activity (the source) and formants are the output of the vocal tract (the filter).
The eight vowels of American English are to be recorded for this purpose. These vowels occur in: heed, hid, head, had, hod, hawed, hood and who'd. Measure the following: intrinsic pitch and the spectral make-up (formants), plot them in an Excel sheet and finally export them to your Word document. Now, record yourself saying the words. Take a quick look at your vowels in the Edit
window, and make sure you can clearly see the vowel formants. If you have trouble seeing them, you
can go back to the previous labs and learn it again. While doing this, please make a note of it on your
worksheet.
Q.Discuss in detail intrinsic pitch
Explore the eight vowels recorded in the last session and measure the pitch (F0) for each of them. You
can measure pitch by using any of the three ways. Having measured the pitch (F0) in each of the
vowels, note down your measurements. There is one more way to confirm your pitch measurement by
looking at the spectral slice (which gives the component frequencies and their amplitudes).
Use the confirmed pitch values and plot the pitch of each vowel on your excel sheet. Make sure
you label your y-axis using a scale that allows you to spread out your measurements as much as you
can. Now draw the cluster chart from the excel sheet and export to Word document and give the
figure number and title.
In order to have the spectral make-up of the vowels, we need to take the formant values (for first two
formants i.e., F1 and F2) of the eight vowels already recorded. Calculate the first two formants for
each vowel. This you can do by using the automatic formant tracking or the manual measurement.
Having taken the values for all of the vowels, we will subsequently plot them on a chart. We need the
default wide-band spectrogram for measuring the first and second formants (F1 and F2) of each
vowel. You can also use Praat's automatic formant tracking to help you if you want. Also note and try to answer the following questions.
a. Why do formants (F1 and F2) differ across vowels?
b. What does F1 seem to correspond to, in terms of articulation?
c. What about F2? To what does it correspond?
Putting F1 and F2 in separate columns, write the formant values associated with the different vowels (giving the vowels in the first column, the difference between F2 and F1 in the second column and F1 in the third). After putting the data in the Excel sheet, we will use the Scatter chart from the same spreadsheet. Further, in order to make it correspond with the required orientation for F1 and F2, we will reverse the values for both formants (on both axes – Y and X). Now the zero for both F1 and F2 is at the upper right corner. Once completed, export the chart to your Word document and give it a figure number and title accordingly.
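If you prefer Python to Excel, the same reversed-axis vowel plot can be sketched with matplotlib. The formant values below are placeholders, not real measurements:

import matplotlib.pyplot as plt

vowels = {"i": (300, 2300), "e": (450, 2000), "a": (750, 1300), "u": (320, 900)}  # vowel: (F1, F2) in Hz

fig, ax = plt.subplots()
for symbol, (f1, f2) in vowels.items():
    ax.scatter(f2, f1)
    ax.annotate(symbol, (f2, f1))
ax.invert_xaxis()                 # high F2 (front vowels) on the left
ax.invert_yaxis()                 # low F1 (high vowels) at the top
ax.set_xlabel("F2 (Hz)")
ax.set_ylabel("F1 (Hz)")
plt.savefig("vowel_chart.png")    # export the figure for your Word document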
Sonorants are vowel-like sounds (nasals and glides). These sounds are called sonorants because they
have formants. But they are different from vowels because they generally have lower amplitude;
therefore, they behave like consonants.
Formants for nasal sounds are also important for acoustic analysis. Measure the first three (F1, F2 and
F3) formants of nasals from the file. Nasals have very distinctive waveforms (different than that of
vowels) as they have distinctive forms of anti-formants (bands of frequencies damped) and formant
transition.
There are three important acoustic correlates of voicing in stops: the voice bar, VOT, and the duration of the preceding vowel. Record /apa/, /aba/, /ata/, /ada/, /apʰa/ and /atʰa/ and, for each of the stops in the file, take the three measurements. See the voicing or the voice bar by exploring the features of each stop. Also check the duration of the preceding vowels. Note down the presence of voicing.
VOT is a characteristic of voiced, voiceless and aspirated stop sounds, and there are very easy steps to calculate it. Record /apa/, /aba/, /ata/, /ada/, /apʰa/ and /atʰa/. Zoom in on your stop sounds so that you can analyze their patterns and find the difference among the three types of VOT (negative, zero and positive). Measure the VOT of each stop and compare voiced/voiceless counterparts (p/b, t/d, k/g). Similarly, zoom in so that you can clearly see the stop closure followed by the beginning of the vowel. You can measure the time between the end of the stop closure (the beginning of the release burst) and the onset of voicing in the following vowel (the onset of regular pitch pulses in the waveform). This is the voice onset time, or VOT.
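Once the two time points have been read off the display, the VOT itself is simple arithmetic. The times below are assumed examples, and the 20 ms cut-off is only a rough rule of thumb for separating 'roughly zero' from clearly positive VOT:

burst_release = 0.512      # s, time of the release burst (assumed example)
voicing_onset = 0.575      # s, onset of regular pitch pulses (assumed example)

vot_ms = (voicing_onset - burst_release) * 1000
if vot_ms < 0:
    category = "negative VOT (voicing starts before the release, e.g. [b])"
elif vot_ms <= 20:
    category = "roughly zero VOT (unaspirated voiceless, e.g. [p])"
else:
    category = "positive VOT (aspirated, e.g. [pʰ])"
print(f"VOT = {vot_ms:.0f} ms -> {category}")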
Phonetics and phonology is a promising area for research in the Pakistani context. In applied phonology, many areas can be explored; for example, issues faced by Pakistani learners of English may be studied. Similarly, the pronunciation problems of Pakistani learners are a potential area through which the difficulties faced by Pakistani students may be addressed. Also, researchers can explore and document the features of Pakistani English based on its phonological features, in order to get the Pakistani variety of English recognized. Other problematic areas may also include segmental and suprasegmental features (such as stress placement, intonation patterns, and syllabification and resyllabification of English words by Pakistani learners). Contrastive analysis between English phonology and the sound systems of the regional languages of Pakistan (Urdu, Punjabi, Sindhi, Balochi and Pashto) can also be carried out by researchers. We can also think
about exploring the consonant clusters and interlanguage phonology from second language acquisition
point of view. While focusing on ELT as the part of applied linguistics, studies may also be carried
out on Pakistani variety of English (development of its corpora, deviation from the standard variety
(RP), its specific features, etc.). Moreover, IPA resources and their application in ELT in the Pakistani context can also be studied.
Regional languages of Pakistan may also be documented and studied. In this context, Pakistani researchers can get their work published (on the sounds – IPA illustrations) in reputable international journals such as the Journal of the International Phonetic Association (Cambridge University Press). Pakistani regional languages are part of a linguistically rich region: the Hindu Kush–Himalaya (HKH) region, one of the richest regions in the world linguistically and culturally, may be a very promising area for research in the fields of areal and typological linguistics (the description of linguistic features cross-linguistically). While working on Pakistani regional languages, one may apply for funding from international organizations (e.g., organizations for endangered languages and UNESCO).
There are many other areas which may be explored. The distinctive features, for example, can be
studied as the part of the phonetic studies of sounds (in applied phonology). Such a study would
discover facts about the features of English pronunciation (and, of course, about the sounds of other
languages). While working on this aspect of sound systems, the phonological analysis (theoretical)
may include the description of a phoneme as a combination of different features (e.g., /d/ as a phoneme and its features – alveolar, stop, voiced, oral and central) in binary (+/−) notation, which is an important component of phonology. Moreover, the feature analysis may also include aspects of the target language as part of ELT (English Language Teaching), from a learning point of view. We need to include
three principles for feature analysis: contrastive function (how it is different), descriptive function
(what it is) and classificatory function (based on broader classes of sounds). Features may also be
studied further as a part of language universals and then their role as language specific sub sets.
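The binary feature description of /d/ mentioned above can be written out explicitly; the little Python sketch below is only an illustration of the notation (the representation itself is an assumption, not a standard from the course):

# /d/ described as a bundle of binary features, following the example above.
D_FEATURES = {
    "alveolar": "+",
    "stop":     "+",
    "voiced":   "+",
    "oral":     "+",    # i.e. not nasal
    "central":  "+",    # i.e. not lateral
}

def describe(segment: str, features: dict) -> str:
    """Descriptive function: state what the segment is, feature by feature."""
    return f"/{segment}/ = " + ", ".join(f"[{value}{name}]" for name, value in features.items())

print(describe("d", D_FEATURES))   # /d/ = [+alveolar], [+stop], [+voiced], [+oral], [+central]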
In experimental phonetics and phonology, the studies of sounds include various latest experimental
techniques and computer software that are used under carefully designed lab experimentation. It is an
important aspect of the application of the latest technology by going beyond the simple acoustics and
by working in sophisticated phonetic labs in order to discover the hidden aspects of human speech.
For example, questions such as 'How is speech produced and processed?' are the focus of experimental phonetics. The latest trends under experimental phonetics include brain functions in
speech production and processing (by using the latest equipment – many special instruments such as
x-ray techniques), speech errors, neurolinguistics and the topics related to the developments through
computers – for speech analysis and synthesis.
The varieties of English (or of other major languages such as Urdu, Punjabi and Pashto, for that matter) may also be viewed as potential areas of research in the domain of phonetics and phonology. Such a study would include comparisons and contrasts (similarities and differences) among the accents (varieties) of the language(s) in question – e.g., differences at phonetic and phonological levels and the study of segmental and suprasegmental features – in different varieties. Differences among accents of English have already been discussed in this course under various headings (such as vowels in other accents of English). Similarly, English dialectology has already been explored by a number of studies with particular focus on geographic differences (in the recognition of various forms of English – or Englishes). There are now many well-known data-gathering techniques used in field linguistics (such as the sociolinguistics of English varieties), like the variationist paradigm. Field workers are specially trained in data gathering, and their expertise is developed for large-scale studies of varieties.
Phonetics and phonology is an integral part of English Language Teaching (ELT), and its teaching needs to be further integrated into ELT. For this purpose, teachers are expected to work on their skills related to the pronunciation of English and to sensitize their students to the topic – they may use self-initiated procedures for carrying out phonological contrastive analysis (CA) (e.g., of their mother tongues and English) at segmental and suprasegmental levels for enhancing their skills and completing their research. They are also expected to take part in the phonology-based ELT activities on the TESOL home page (available online) and to participate in the English Language Teaching Reforms (ELTR) projects of the Higher Education Commission (HEC) of Pakistan and in activities planned and sponsored by the British Council Pakistan. Students are also expected to be part of these platforms through their social groups and online learning opportunities.
ENG507 (Finals) Solved Fall 2019
DIFFERENCE (distinctive)
Q. Sonorants are different from vowels because they have ____ amplitude. (lower)
Q. When confirming the pitch of a spectral slice, one should ignore ___ at the beginning. (small spikes)
Q. IPA is the set of symbols and diacritics that have been officially approved by the International Phonetic Association.
V ✅   CV   CCVC   CVC
Q. Which of the following sounds is represented by the symbol [t̪] (i.e., [t] with a diacritic mark underneath)? DENTAL STOP
Q. The formants that characterize different vowels are the result of the different shapes of the
VOCAL TRACT.
Q. One should watch for the burst and aspiration in the STOP SOUNDS.
Q. Which of the following vowel sounds (in the given words) will be uttered with neutral lips? SET
Q. Spanish has a very simple system contrasting only FIVE vowel sounds.
Q. In syllable timed languages, all syllables tend to have a/an EQUAL time value.
Q. In Mandarin, the word [ma] means “mother” when it is said with HIGH PITCH
Q. Which of the following types of phonetics is considered for specific purposes only?
INDIVIDUAL PHONETICS
Q. Two native speakers of a language will always speak WITH SOME VARIATION
Q. The features [voiceless] and [breathy voice] are studied under the cover term ‘Laryngeal’.
Q. ‘Radical’ is a cover term for [pharyngeal] and [epiglottal] articulations made with the ROOT OF THE TONGUE.
Q. In the production of a plosive like [p], which of the following is not a sub-task? CLOSE THE
TEETH
Q. Speech is quite diverse and complex, particularly when it comes to the phonetics of the INDIVIDUAL.
Q. Which of the following words uses a seven phoneme pattern of syllable? STRENGTHS
Q. The measurements are taken from the middle of a vowel sound because it is the NUCLEUS
portion.
Q. Which of the following features (place of articulation) best describes the stop /p/? BILABIAL
Q. Which of the following features (place of articulation) best describes the stop /g/? VELAR
Q. The question that is mainly answered by the contrastive function of distinctive feature theory is ‘How is it different?’
Q. Which of the following functions of the distinctive feature theory answers the question, ‘What is
it’? DESCRIPTIVE
Q. Which of the following is considered a GOLDEN method of SLA? Task Based Learning and
Teaching (TBLT)
Q. The activities of ELTR are planned and sponsored by British Council Pakistan
Q. English Language Teaching Reforms (ELTR) are the projects of the Higher Education
Commission (HEC) of Pakistan
Q. In spectrograms, time runs from left to right, and the frequency of the components is shown on
the vertical scale.
Q. One should observe a gap in the pattern, with a burst for voiceless stops and a sharp formant beginning for voiced stops.
Q. How does a spectrogram help in identifying a bilabial sound? (2) [The most favourite of VU; very important for 2-mark questions. They may ask about stops, bilabials or approximants.]
Stop - gap in pattern (with burst for voiceless and sharp formant beginning for voiced stops)
Nasal - formant structure similar to that of vowels (with formants at 250, 2500, and 3250)
Lateral - formant structure similar to that of vowels (with formants at 250, 1200, and 2400)
Q. Memory for Speech [Another VU favourite; very important for 2-mark questions. They may ask it for speaking styles, sound change or any one of them.]
The role of memory for speech under exemplar theory is that many instances of each word are stored in memory and their phonetic variability is memorized rather than computed. The main postulates of the concept are given here:
Language universal features: Broad phonetic classes (e.g., aspirated vs. unaspirated) derive
from physiological constraints on speaking or hearing, but their detailed phonetic definitions are
arbitrary—a matter of community norms.
Speaking styles: No one style is basic (from which others are derived), because all are stored in
memory. Bilingual speakers store two systems.
Generalization and productivity: Exemplar theory says that generalization is also possible
within productivity. Interestingly, productivity—the hallmark of linguistic knowledge in the
phonetic implementation approach—is the least developed aspect of the exemplar theory.
Sound change: Sound change is phonetically gradual and operates across the whole lexicon. It is a gradual shift, as new instances keep being added.
The primary set includes eight vowels in total (numbered 1 to 8): the front unrounded vowels [i, e, ɛ, a], the back unrounded vowel [ɑ] and the rounded back vowels [ɔ, o, u]. (In the exam, a symbol was given and we had to write the lip, jaw and tongue position.)
Q. What specific terms are used for the consonant clusters CC and CCC in a syllable? (2)
Consonant sequences are called clusters (e.g., CC – two consonants, or CCC – three consonants). Most phonotactic analyses are based on syllable structures and syllabic templates.
Q. Define syllable.
A unit of pronunciation having one vowel sound, with or without surrounding consonants, forming the
whole or a part of a word; e.g., there are two syllables in water and three in inferno.
In order to understand VOT, three types of plosive sounds need to be distinguished – voiced, voiceless unaspirated and voiceless aspirated. They range from the most aspirated (largest positive VOT) at one extreme to the most voiced (largest negative VOT) at the other. The Navajo aspirated stops have a very large VOT value that is quite exceptional (about 150 ms).
Zoom into a small piece of the waveform in the middle of the vowel and measure the period by
highlighting one complete cycle and noting the time associated with it (in the panel above the waveform).
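As a small arithmetic aside (the numbers below are made up for illustration), the fundamental frequency is simply the reciprocal of the period you have just measured:

# Fundamental frequency from a measured period (illustrative values).
period_s = 0.008          # e.g., one highlighted cycle lasts 8 ms
f0_hz = 1.0 / period_s    # f0 is the reciprocal of the period
print(round(f0_hz, 1))    # 125.0 Hz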
To remove a boundary that you have made - Highlight the boundary - Go to Boundary > Remove OR
click Alt+backspace.
Languages of the world are divided into two broad categories: stress-timed languages and syllable-timed languages.
Q. Which three formants help in distinguishing vowels from each other? (The lowest three formants: F1, F2 and F3.)
Q. How could one get funding while doing research on Pakistani regional languages? (2)
Pakistani regional languages are part of linguistically rich regions: the Himalaya Hindu Kush (HKH) region, one of the richest regions in the world linguistically and culturally, is a very promising area for research in the fields of areal and typological linguistics (the description of linguistic features cross-linguistically). While working on Pakistani regional languages, one may apply for funding from international organizations (e.g., organizations for endangered languages and UNESCO).
The latest trends under experimental phonetics include brain functions in speech production and
processing (by using the latest equipment – many special instruments such as x-ray techniques)
Acoustically, vowels are mainly distinguished by the first two formant frequencies, F1 and F2. F1 is inversely related to vowel height (a lower F1 frequency corresponds to a higher vowel), and F2 is related to vowel backness (a lower F2 frequency corresponds to a more back vowel).
Gap in pattern (with burst for voiceless and sharp formant beginning for voiced stops)
In spectrograms, time runs from left to right, the frequency of the components is shown on the vertical
scale, and the intensity of each component is shown by the degree of darkness. It is thus a display that
shows, roughly speaking, dark bands for concentrations of energy at particular frequencies—showing the
source and filter characteristics of speech.
Varieties having this feature are rhotic (in which /r/ is found in all phonological contexts)
Q. How does tone (high vs. low) change the meaning of the word [ma] in Mandarin? (2)
In Mandarin Chinese, [ma] said with a high pitch means ‘mother’ while [ma] said on a low rising tone means ‘hemp’. In other (non-tonal) languages, tone forms the central part of intonation, and the difference between, for example, a rising and a falling tone on a particular word may cause a different interpretation of the sentence in which it occurs. In the case of tone languages, it is usual to identify tones as being a property of individual syllables, whereas an intonational tone may be spread over many syllables.
The main pitch movements (tones) are:
1. fall,
2. rise,
3. fall–rise and
4. rise–fall.
Source-filter theory is an important concept in acoustic phonetics. It is a model of speech (e.g., vowel)
production. According to this theory, source refers to the waveform of the vibrating larynx. Its spectrum
is rich in harmonics, which gradually decrease in amplitude as their frequency increases. The various
resonance chambers of the vocal tract, especially the movements of the tongue and lips, act on the
laryngeal source in the manner of a filter, reinforcing certain harmonics relative to others. Thus the
combination of these two elements (larynx as source and cavity as filter) is known as the source-filter
model of speech (e.g., vowel) production.
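To make the source-filter idea concrete, here is a rough numerical sketch in Python (my own illustrative addition, not part of the course material): an impulse train stands in for the vibrating larynx, and a cascade of second-order resonators stands in for the vocal-tract formants. The formant frequencies and bandwidths are placeholder values.

import numpy as np
from scipy.signal import lfilter

fs = 16000                               # sampling rate in Hz
f0 = 120                                 # pulse rate of the "larynx" source
source = np.zeros(fs // 2)               # half a second of samples
source[::fs // f0] = 1.0                 # impulse train = idealized glottal pulses

def formant_filter(x, freq, bw):
    # Second-order resonator approximating one formant (freq and bandwidth in Hz).
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    return lfilter([1 - r], [1.0, -2 * r * np.cos(theta), r * r], x)

# Pass the source through three cascaded "formants" (placeholder values).
signal = source
for freq, bw in [(500, 80), (1500, 90), (2500, 120)]:
    signal = formant_filter(signal, freq, bw)
# 'signal' is now a crude vowel-like waveform: it keeps the pulse rate (pitch)
# of the source, with energy reinforced near 500, 1500 and 2500 Hz (the formants).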
When discussing differences in quality, we noted that the quality of a vowel depends on its overtone
structure (i.e., formants). Now putting this idea another way, we can say that a sound (e.g., vowel)
contains a number of different pitches simultaneously. There is the pitch at which it is actually spoken,
and there are the various overtone pitches that give it its distinctive quality. We distinguish one vowel
from another by the differences in these overtones. The overtones are called formants, and the lowest
three formants distinguish vowels from each other.
All voiced sounds are distinguishable from one another by their formant structure (frequencies). This idea can be understood by considering the vocal tract as a tube: when the vocal fold pulses are produced at a steady rate, the “utterance” is on a monotone. In other words, what you hear as changes in pitch are actually changes in the overtones of this monotone “voice.” These overtone pitch variations convey a great deal of the quality of the voiced sounds. The rhythm of the sentence is apparent because the overtone pitches occur only when the vocal folds would have been vibrating.
Tone (in phonetics and phonology) as a suprasegmental feature refers to an identifiable movement
(variation) or level of pitch that is used in a linguistically contrastive way. In tone (tonal) languages, the
linguistic function of tone is to change the meaning of a word. For example, in Mandarin Chinese, [ma] said with a high pitch means ‘mother’ while [ma] said on a low rising tone means ‘hemp’. In other (non-
tonal) languages, tone forms the central part of intonation, and the difference between, for example, a
rising and a falling tone on a particular word may cause a different interpretation of the sentence in which
it occurs. In the case of tone languages, it is usual to identify tones as being a property of individual
syllables, whereas an intonational tone may be spread over many syllables. In the analysis of English
intonation, tone refers to one of the pitch possibilities for the tonic (or nuclear) syllable.
Q. Tube model
The formants that characterize different vowels are the result of the different shapes of the vocal tract.
The air in the vocal tract is set in vibration by the action of the vocal folds (in the larynx). Every time the vocal folds open and close, there is a pulse of acoustic energy (activation). Irrespective of the rate of vibration at the source (the vocal folds), the air in the vocal tract will resonate at its own resonant frequencies as long as the position of the vocal organs remains the same. Because of the complex shape of the filter (tract),
the air will vibrate in more than one way at once. So, the relationship between resonant frequencies and
vocal tract shape is actually much more complicated than the air in the back part of the vocal tract
vibrating in one way and the air in other parts vibrating in another. The vocal folds may vibrate faster or
slower, giving the sound a higher or lower pitch, but the formants will be the same as long as the position
of the tube (vocal tract) is the same.
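A common first approximation (added here as background, not taken from the text above) treats a neutral vocal tract as a uniform tube closed at the glottis and open at the lips, whose resonances fall at odd quarter-wavelength frequencies Fn = (2n − 1)c / 4L:

# Resonances of a uniform tube closed at one end (a standard first approximation
# of a neutral vocal tract); the constants are approximate, not measurements.
c = 35000.0     # speed of sound in warm moist air, cm/s (approximate)
L = 17.5        # vocal tract length in cm (typical adult value, approximate)

formants = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
print([round(f) for f in formants])   # roughly [500, 1500, 2500] Hz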
Q. Describe the cardinal vowels according to lip, tongue and jaw position, e.g. [æ] and [œ]. (3 marks)
Firstly, the shape of the lips (lip-rounding) may be rounded (for sounds like the /u:/ vowel), neutral (as for the schwa /ə/) or spread (as for the /i:/ sound in a word like ‘sea’ – or when photographers traditionally ask you to say “cheese” /tʃi:z/ in order to make you look as if you are smiling). Secondly, a part of the tongue – the front, the middle or the back – may be raised, giving different vowel qualities: compare the /æ/ vowel (as in ‘cat’), a front vowel, with the /ɑ:/ vowel (as in ‘cart’), which is a back vowel. Thirdly, the tongue (and the lower jaw) may be raised ‘close’ to the roof of the mouth (for close vowels, e.g. /i:/ or /u:/), or the tongue may be left ‘low’ in the mouth with the jaw comparatively ‘open’ (as for open vowels, e.g. /ɑ:/ and /æ/).
Q. Cardinal vowels
The English phonetician Daniel Jones introduced a system in the early 20th century: he worked out a set of vowels called the “cardinal vowels”, comprising eight vowels to be used as reference points (so that other vowels could be related to them, like the corners and sides of a map). His idea of cardinal vowels became a success, and it is still used by experts and students for vowel description. The cardinal vowel system is a chart or four-sided figure (the exact shape of which has been changed from time to time). It is a diagram to be used both for rounded and unrounded vowels, and Jones proposed that there should be a primary and a secondary set of cardinal vowels.
Q. Secondary cardinal vowels (cardinal vowels were given, and their articulation with respect to lips, jaw and tongue was asked)
They are easy to understand in connection with the primary cardinal vowel system. The secondary cardinal vowels (with their numerical codes and features) were also pointed out by Daniel Jones. The main difference between primary and secondary cardinal vowels is related to lip-rounding, as in some languages the feature of lip-rounding is possible for front vowels. By reversing the lip position (in comparison with the primary cardinal vowels), the secondary series of vowel types is produced (e.g., rounding the lips for the front vowels).
Q. Semivowels
Most of the world’s languages contain a class of sounds that functions in a way similar to consonants but is phonetically similar to vowels (e.g., in English, /w/ and /j/ as in ‘wet’ and ‘yet’). When they are used in the first part of a syllable (at the onset), they function as consonants, but if they are pronounced slowly, they resemble (in quality) the vowels [u] and [i] respectively. These sounds are called semivowels and are also termed approximants today. French has three semivowels: in addition to /j/ and /w/, there is another sound, symbolized /ɥ/, which is found in initial position in a word like ‘huit’ /ɥit/ (‘eight’) and in consonant clusters such as /frɥ/ in /frɥi/ (‘fruit’). The IPA chart also lists a semivowel corresponding to the close back unrounded vowel /ɯ/. Like the others, this is classed as an approximant.
Syllable structure could be of three types: ‘simple’ (CV), ‘moderate’ (CVC) and ‘complex’ (with
consonant clusters at edges) such as CCVCC and CCCVCC (where V means vowel and C stands for
consonant).
Secondary articulatory gestures: a ‘secondary’ articulation is an articulatory gesture with a lesser degree of closure occurring at approximately the same time as another (primary) gesture. It is different from co-articulation, in which the gestures take place at the same time and with the same value (as equal-level gestures).
Types of SAGs
i. Palatalization (can come as a short or long question too) is the addition of a high front tongue gesture (as in a sound like [i]) to another main (primary) gesture. The diacritic used for palatalization is the small raised [ʲ] written after the symbol for the primary gesture (e.g., [tʲ]). The terms palatalization (a process whereby
the place of an articulation is shifted nearer to the center of the hard palate) and palatalized (when the
front of the tongue is raised close to the palate while an articulatory closure is made at another point in the
vocal tract) are sometimes used in a slightly different way. A palatalized consonant has a typical /j/-like
(similar to /i/ vowel) quality.
ii. Velarization involves raising the back of the tongue (adding the /u/ vowel like quality). It can be
considered as the addition of an [u]-like tongue position (but remember that it is without the addition of
the lip rounding). A typical English example of velarization is the /l/ sound at the end of a syllable (as in
words like kill, pill, sell and will) called velarized or dark /l/ and may be written as [l̴]. The diacritics for
velarization are both [ˠ] and [ ̴].
iii. Pharyngealization is the superimposition of a narrowing of the pharynx. The IPA diacritics for symbolizing pharyngealization are [ ̴] (as for velarization) and [ˤ] (a superscript version of the symbol for the pharyngeal sound).
iv. Labialization is the addition of lip rounding (written as [ʷ]) to another primary articulation, as in Arabic /tʷ/ and /sʷ/. Nearly all kinds of consonants can have added lip rounding, including those that already have one of the other secondary articulations (such as velarization and palatalization).
As a suprasegmental feature, pitch is an auditory sensation - when we hear a regularly vibrating sound
such as a note played on a musical instrument (or a vowel produced by the human voice), we hear a high
pitch (when the rate of vibration is high) and a low pitch (when the rate of vibration is low). There are
some speech sounds that are voiceless (e.g. /s/), and cannot give rise to a sensation of pitch in this way but
the voiced sounds can. Thus the pitch sensation that we receive from a voiced sound corresponds quite
closely to the frequency of vibration of the vocal folds.
Pitch can be described along three dimensions:
1. pitch range,
2. height and
3. direction.
Intonation refers (very) simply to the variations in the pitch of a speaker’s voice (f0) used to convey or
alter meaning but in its broader and more popular sense intonation covers much of the same field as
‘prosody’ where variations in such things as voice quality, tempo and loudness are included. Intonation as
a suprasegmental feature performs several functions in a language. Its most important function is to act as
a signal of grammatical structure (e.g. creating patterns to distinguish among grammatical categories),
where it performs a role similar to punctuation (in written language). It may furnish far more contrasts
(for conveying meaning). Intonation also gives an idea about the syntactic boundaries (sentence, clause
and phrase level boundaries).
The role of intonation in communication is quite important, as it also conveys personal attitude
(e.g., sarcasm, puzzlement, anger, etc.). Finally, it can signal contrasts in pitch along with other prosodic
and paralinguistic features. It can also bring variation in meaning and can prove an important signal of the
social background of the speakers.
Intonation according to grammatical structure: Its most important function is to act as a signal of
grammatical structure (e.g., creating patterns to distinguish among grammatical categories), where it
performs a role similar to punctuation (in written language). It may furnish far more contrasts (for
conveying meaning). Intonation also gives an idea about the syntactic boundaries (sentence, clause and
phrase level boundaries). It also provides the contrast between some grammatical structures (such as
questions and statements).
For example, the change in meaning illustrated by ‘Are you asking me or telling me?’ is regularly
signaled by a contrast between rising and falling pitch. Note the role of intonation in sentences like
‘He’s going, isn’t he?’ (= I’m asking you) opposed to ‘He’s going, isn’t he!’ (= I’m telling you)
Q. What is praat?
Praat is a computer program with which you can analyze, synthesize, and manipulate speech, and create
high-quality pictures for your articles and thesis. Nowadays most research work in phonetics and phonology is based on software like Praat and WaveSurfer.
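For those who prefer scripting to the Praat GUI, the sketch below uses the third-party praat-parselmouth Python package (an assumption on my part; the course itself only describes the Praat program). The file name is a placeholder.

# Sketch using the praat-parselmouth package (pip install praat-parselmouth).
# "recording.wav" is a placeholder file name.
import parselmouth

snd = parselmouth.Sound("recording.wav")
pitch = snd.to_pitch()                     # pitch (f0) track
formants = snd.to_formant_burg()           # formant track (Burg method)

t = snd.duration / 2                       # query the midpoint of the file
print("f0 at midpoint:", pitch.get_value_at_time(t))
print("F1 at midpoint:", formants.get_value_at_time(1, t))
print("F2 at midpoint:", formants.get_value_at_time(2, t))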
Create two tiers (this will be enough for our purposes). Write ‘word segment’ (these are two tiers)
on the cell named ‘All tier names’ on the small window.
VOT values can be of three types:
a. negative
b. zero
c. positive
In a voiceless unaspirated plosive (such as /p/), there is a short delay (or lag) before voicing starts; in a voiceless aspirated plosive (e.g., /pʰ/), the delay is much longer, depending on the amount of aspiration. The amount of this delay is called Voice Onset Time (VOT), which, in relation to the type of plosive, varies from language to language.
To calculate the VOT, record /apa/, /aba/, /ata/, /ada/, /apʰa/ and /atʰa/. Zoom in on your stop sounds so that you can analyze their patterns and find the difference among the three types of VOT (negative, zero and positive). Measure the VOT of each stop and compare voiced/voiceless counterparts (p/b, t/d, k/g). Similarly, zoom in so that you can clearly see the stop closure followed by the beginning of the vowel. You can measure the time between the end of the stop closure (the beginning of the release burst) and the onset of voicing in the following vowel (the onset of regular pitch pulses in the waveform).
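The arithmetic behind the measurement is simple, as the hedged sketch below shows (the times are invented examples of what you might read off the waveform or a TextGrid):

# VOT = voicing onset time minus release-burst time (illustrative values).
def vot_ms(burst_time_s, voicing_onset_s):
    return (voicing_onset_s - burst_time_s) * 1000.0

print(vot_ms(0.512, 0.572))   #  60.0 ms -> positive VOT (aspirated-like)
print(vot_ms(0.512, 0.512))   #   0.0 ms -> zero VOT (voiceless unaspirated)
print(vot_ms(0.512, 0.430))   # -82.0 ms -> negative VOT (prevoiced)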
Make sure the volume bar is fluctuating as you record – if it isn’t, you’re not recording; if you don’t see
the volume bar at all, you’re not speaking loudly enough.
Watch out for clipping. If your recording level is too high and you go into the red on the volume bar,
you’ll end up with what is called a “clipped” signal; this is very bad for speech analysis!
Q. While recording in Praat, one has to be careful about clipping. Explain. (3)
Watch out for clipping. If your recording level is too high and you go into the red on the volume bar,
you’ll end up with what is called a “clipped” signal; this is very bad for speech analysis.
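If you want to check a finished recording for clipping in a script rather than by eye, a rough test (my own illustrative addition, assuming the soundfile package and a placeholder file name) is to count samples sitting at or very near full scale:

# Flag possible clipping: many samples at (or very near) full scale.
import numpy as np
import soundfile as sf          # pip install soundfile

samples, rate = sf.read("recording.wav")        # placeholder file name
clipped = int(np.sum(np.abs(samples) >= 0.999))
if clipped > 0:
    print(f"Warning: {clipped} samples look clipped - re-record at a lower level.")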
Q. Difference between acoustic and auditory phonetics. (3) / OR: Which is more authentic, auditory or acoustic analysis?
Acoustic phonetics is the study of the detailed physical properties of the sounds we produce. It generally uses tools which read the changes in air pressure that a sound creates. Each sound has its own quality, which depends on the source and filter, that is, our speech organs. Each sound has its own f0, F1, F2 and F3 (formants), depending on the source modifier: f0 is the fundamental frequency, F1 gives information about the pharyngeal cavity, F2 about the oral cavity and F3 about the position of the lips while the sound was produced.
Auditory phonetics is the other side of the coin: it deals with the study of these articulated sound characteristics from the perception perspective. Auditory phonetics deals with the listener in a broader sense. The perceived f0 is measured in terms of pitch and calculated on the mel or Bark scales.
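As background on the mel scale mentioned above (this particular formula is a common convention, not something stated in the course text), frequency in Hz can be converted to mel as follows:

# Hz-to-mel conversion using one common formula.
import math

def hz_to_mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

print(round(hz_to_mel(1000)))   # ~1000 mel, by construction of the scale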
A nasal consonant is a consonant whose production involves a lowered velum and a closure in the oral
cavity, so that air flows out through the nose. Examples of nasal consonants are [m], [n], and [ŋ] (as in
think and sing).
Q. Feature hierarchy....3
Feature hierarchy is an important concept in phonetics and phonology which is based on the properties
and features of sounds. In a very general sense, a feature may be tied to a particular articulatory maneuver
or acoustic property. For example, the feature [bilabial] indicates not only that the segment is produced
with lips but also that it involves both of them. Such features (in phonetics and phonology) are listed in a
hierarchy, with nodes in the hierarchy defining ever more specific phonetic properties.
Q. Consonant gestures
In phonetics and phonology, speech sounds (segments) using basic units of contrast are defined as
gestures – they are treated as the abstract characterizations of articulatory events with an intrinsic time
dimension. Thus sounds (segments) are used to describe the phonological structure of specific languages
and account for phonological variation. In this type of description in phonetics and phonology, sounds are
the underlying units which are represented by classes of functionally equivalent movement patterns
(gestures).
Q. Sonorants: why are sonorants called vowel-like sounds, and how are they different from vowels?
Sonorants are vowel-like sounds (nasals and glides). These sounds are called sonorants because they have formants. But they are different from vowels because they generally have lower amplitude; therefore, they behave like consonants.
Retroflex: this sound is produced when the tongue tip curls up and back against the back of the alveolar ridge. Many speakers of English do not use retroflex sounds at all, but they are common in Pakistani languages such as Urdu, Sindhi, Pashto, Balochi and Punjabi. Acoustically, retroflex sounds show a general lowering of the third and fourth formants.
‘Stress-timed languages’ is a very general phrase used in phonetics to characterize the pronunciation of languages displaying a particular type of rhythmic pattern, opposed to that of syllable-timed languages. In stress-timed languages, it is claimed that stressed syllables recur at regular intervals of time (stress-timing), regardless of the number of intervening unstressed syllables, as in English. This characteristic is sometimes also referred to as ‘isochronism’, or isochrony.
In syllable-timed languages, all syllables tend to have an equal time value (for example, their length or duration), and the rhythm of the language is said to be syllable-timed. In these languages, syllables tend to occur at regular intervals of time with fixed word stress. A classic example is Japanese, in which all morae have approximately the same duration. This tendency is contrasted with stress-timing, where the time between stressed syllables is said to tend to be equal irrespective of the number of unstressed syllables in between. Languages often cited in this connection include Czech, Polish, Swahili and the Romance languages (e.g., Spanish and French).
Q. Many phoneticians disagree with the basic idea of stress-timing. Write the three dimensions they suggest.
Many phoneticians disagree with the basic idea of timing value. They are of the view that there are three dimensions:
Q. Glides:
Glides are also sonorant (vowel-like) sounds, as they have similar patterns (they have formants). Take the first three formants (F1, F2 and F3) from the middle of the glide sounds (both /w/ and /j/) and explore their acoustic correlates. Carefully judge the center of these sounds (the midpoint of [w] and [j]). Analyze how similar the formant structure of glides is to that of vowels and nasals. Draw lines to indicate F1, F2 and F3 and compare with vowels.
Q. Formants:
Formants come from the vocal tract. The air inside the vocal tract vibrates at different pitches depending
on its size and shape of opening. We call these pitches formants. You can change the formants in the
sound by changing the size and shape of the vocal tract. Formants filter the original sound source. After
harmonics go through the vocal tract some become louder and some become softer.
The harmonics may differ (for example, when the same sound is said at a different pitch), but the formants remain the same. The relationship
between the harmonics and the formants is captured in the source-filter model of speech production. The
point is that harmonics are related to the laryngeal activity (source) and formants are the output of the
vocal tract (filter).
The first two frequencies are important here. The first formant (F1) is inversely related to the height of a
vowel whereas the second formant (F2) is related to the frontness of a vowel sound. When the first two
formants are taken, the vowels of a language can be plotted on a chart and the structure is very much
related to the traditional description of vowel sounds.
Q. Formants f1 and f2
Formants are the overtone resonances. Acoustically, in order to plot vowels on a chart, F1 and F2 are very important. We need wide-band spectrograms for measuring the formants (which are the important characteristics of
sonorant speech sounds – vowels). On spectrogram, formants are thick bands (darkness corresponds to
loudness; i.e. the darkest harmonics are the ones that are the most amplified). These amplified harmonics
form the formants that are characteristic of sonorant speech sounds.
Q. Phonotactics:
In phonology, phonotactics is the study of the ways in which phonemes are allowed to combine in a
particular language. (A phoneme is the smallest unit of sound capable of conveying a distinct meaning.)
Over time, a language may undergo phonotactic variation and change. For example, as Daniel Schreier
points out, "Old English phonotactics admitted a variety of consonantal sequences that are no longer
found in contemporary varieties"
Q. Why do you think teachers are mostly expected to perform action research in ELT? (3)
Teachers are expected to carry out action research, which is the most rewarding and productive for their own profession. For example, the phonetics of phonological speech errors, if explored and shared by teachers (by investigating their own practices), may lead to a very positive discussion in academic circles (of research into ELT – SLA). Similarly, topics such as learners’ performance and development (e.g., what do good speakers do?) may yield useful results for the teachers’ community.
Stop voicing: There are three important acoustic correlates of voicing in stops: the voice bar, VOT, and
the duration of the preceding vowel. Record /apa/, /aba/, /ata/, /ada/, /apha/ and /atha/ and for each of the
stops in the file, take the three measurements according to the following instructions: See the voicing or
the voice bar by exploring features of stop. We can also explore the features related to the place of
articulation (any bilabial feature for /p/ or /b/ in comparison with non-bilabial). Also check the duration of
the preceding vowels. Note down the presence of voicing.
Part of the tongue - the front, the middle or the back of the tongue may be raised, giving different vowel
qualities: compare /æ/ vowel (as in word ‘cat’) as a front vowel, with the /ɑ:/ vowel (as in ‘cart’) which is
a back vowel. Thirdly, the tongue (and the lower jaw) may be raised ‘close’ to the roof of the mouth (for
close vowels. e.g. /i:/ or/u:/), or the tongue may be left ‘low’ in the mouth with the jaw comparatively
‘open’ (as for open vowels e.g., /ɑ:/ and /æ/).
Any particle of air, such as that in the vocal tract or that in a bottle, will vibrate in a way that depends on
its size and shape. Remember that the air in the vocal tract is set in vibration by the action of the vocal
folds (in larynx). Every time the vocal folds open and close, there is a pulse of acoustic energy
(activation). Irrespective of the rate of vibration at the source (the vocal folds), the air in the vocal tract will resonate at its own resonant frequencies as long as the position of the vocal organs remains the same. Because of
the complex shape of the filter (tract), the air will vibrate in more than one way at once.
Q. Ease of articulation
In order to explain the sound patterns of a language, the views of both speaker and listener are considered. Both of them like to use the least possible articulatory effort (except when they are trying to produce very clear speech), and there are a large number of assimilations, with some segments left out and others reduced to a minimum. Thus a speaker uses language with an ease of articulation (e.g., co-articulation and secondary articulation). This tendency to use language sounds with the maximum possible ease of articulation leads to change in the pronunciation of words.
One piece of evidence that the IPA chart is based on linguistic phonetics is the description of the blank cells on the
chart (those neither shaded nor containing a symbol) that indicate the combinations of categories that are
humanly possible but have not been observed so far to be distinctive in any language (e.g., a voiceless
retroflex lateral fricative is possible but has not been documented so far, so it is left blank). The shaded
cells, on the other hand, exhibit the sounds not possible at these places.
Further, below the consonant chart is a set of symbols for consonants made with different airstream
mechanisms (clicks, voiced implosives, and ejectives). All these descriptions reflect the potentialities of
human speech sounds (as a linguistic community) not only showing the possible segments but also the
suprasegmental features and points related to the possible airstream mechanisms, and even the diacritics for various types of co-articulations and secondary articulatory gestures. The IPA chart is carefully
documented (by experts) and is continuously revised and updated.
Developing relevant material for the teaching of phonetics and phonology is an important task for
aspiring teachers of English language. For example, you can develop your material related to the
pronunciation teaching to the learners of English. You can incorporate material related to the IPA text –
transcription of the audio (listening) based activities – by involving students on using dictionaries (ideally
the phonetic dictionaries) in the classroom.
In experimental phonetics and phonology, the studies of sounds include various latest experimental
techniques and computer software that are used under carefully designed lab experimentation. It is an
important aspect of the application of the latest technology by going beyond the simple acoustics and by
working in sophisticated phonetic labs in order to discover the hidden aspects of human speech. For
example, questions such as ‘How is speech produced and processed?’ are the focus of experimental
phonetics (explore the speech chain as the beginning of experimental phonetics as mentioned in Chapter-
20 by Peter Roach). The latest trends under experimental phonetics include brain functions in speech
production and processing (by using the latest equipment – many special instruments such as x-ray
techniques), speech errors, neurolinguistics and the topics related to the developments through computers
– for speech analysis and synthesis.
Q. Why is the English lateral /l/ called an approximant? (3) / Difference between the lateral sounds of American and British English.
The only English lateral phoneme, at least in British English, is /l/, with allophones [l] as in led [lɛd] and [ɫ] as in bell [bɛɫ]. In most forms of American English, initial [l] has more velarization than is typically
heard in British English initial [l]. In all forms of English, the air flows freely without audible friction,
making this sound a voiced alveolar lateral approximant. It may be compared with the sound [ɹ] in red
[ɹɛd], which is for many people a voiced alveolar central approximant. Laterals are usually presumed to
be voiced approximants unless a specific statement to the contrary is made.
Q. What formant position would you observe for a retroflex sound on its spectrogram? (A general lowering of the third and fourth formants.)
Vowels produced with ATR involve the furthest-back part of the tongue, opposite to the pharyngeal wall,
which is not normally involved in the production of speech sounds - also called the radix (articulations of
this type may, therefore, be described as radical). ATR (a kind of articulation in which the movement of
the root of tongue expands the front–back diameter of the pharynx) is used phonologically in Akan (and
some other African languages) as a factor in vowel harmony contrasts. The opposite direction of movement is called retracted tongue root (RTR). ATR is thus related to the size of the pharynx – making the
pharyngeal cavity different: creating comparatively large (+ATR: root forward and larynx lowered) and
small pharyngeal cavity (-ATR: no advanced tongue root). Akan contrasts between two sets of vowels
+ATR and –ATR.
Non-rhotic varieties (such as Received Pronunciation) are those in which /r/ is only found before vowels, as in ‘red’ and ‘around’. Most American English speakers speak with a rhotic accent, but there are non-rhotic areas.
In the production of a trill, the articulator is set in motion by the current of air, as in [r]. It is a typical sound of Scottish English, as in words like ‘rye’ and ‘row’.
A flap is a front-and-back movement of the tongue tip, with the underside of the tongue curling behind so that the tip strikes as it moves forward. It is found in abundance in Indo-Aryan (IA) languages, e.g. [ɽ]. The typical flap found in IA languages is a retroflex sound, and related retroflex examples are [ɽ], [ɖ] and [ɳ].
Q. What is neurolinguistics?
Neurolinguistics studies human language and communication (speech, hearing, reading, writing, or nonverbal modalities) in relation to any aspect of the brain or brain function. It is a field of interdisciplinary study which does not have a formal existence; its subject matter is the relationship between the human nervous system and language. "The primary goal of the field of neurolinguistics is to understand and explicate the neurological bases of language and speech, and to characterize the mechanisms and processes involved in language use. The study of neurolinguistics is broad-based; it includes language and speech impairments in the adult aphasias and in children, as well as reading disabilities and the lateralization of function as it relates to language and speech processing." It also extends to computer modeling.
Features may also be studied further as a part of language universals and then their role as language
specific subsets.
Good teachers are expected to be active researchers, and therefore busy updating themselves about the latest research and teaching methodologies around the world. It is also a pedagogical challenge for teachers to keep themselves updated by exploring the pedagogical and technological challenges facing ELT experts (in their own contexts and internationally). For example, aspects of Task Based Learning and Teaching (TBLT), regarded as a golden method for second language acquisition (SLA), may prove effective in the Pakistani context if explored by ELT practitioners. Teachers are agents of change, and they must read research studies, carry out research themselves, and explore their own issues and solutions. A good way is to keep reading teachers’ digests and journals and to participate in the online discussions run by teaching associations.
Take the measurement of the first two formants and plot those values on a chart using an Excel spreadsheet. Putting F1 and F2 in separate columns, write the formant values associated with different vowels (giving the vowels in the first column, the difference between F2 and F1 in the second column and F1 in the third). After putting the data in the Excel sheet, use the Scatter chart from the same spreadsheet. Further, in order to make it correspond with the required values for F1 and F2, reverse the values for both formants (on both axes – Y and X). Now the zero for both F1 and F2 is at the right corner. Watch the video and you will see how F1 is inversely related to the height of the vowel and the difference between F2 and F1 to the frontness of the vowel. Once completed, export the chart to your Word document and give it a number and title accordingly.
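If you prefer Python to Excel, the same recipe (plot F2 − F1 against F1 with both axes reversed) can be sketched with matplotlib; the formant values below are placeholders for your own measurements.

# Vowel chart following the Excel recipe above:
# x = F2 - F1 (frontness), y = F1 (height), both axes reversed.
import matplotlib.pyplot as plt

vowels = {"i": (300, 2300), "e": (450, 2000), "a": (750, 1300), "u": (320, 900)}  # placeholder Hz values

for label, (f1, f2) in vowels.items():
    plt.scatter(f2 - f1, f1)
    plt.annotate(label, (f2 - f1, f1))

plt.gca().invert_xaxis()     # zero at the right, as described above
plt.gca().invert_yaxis()     # high vowels (low F1) at the top
plt.xlabel("F2 - F1 (Hz)")
plt.ylabel("F1 (Hz)")
plt.show()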
Q. Mechanism of source filter/ Role of vocal folds and vocal tract in Source filter theory.
In this theory, the tract is represented using a source-filter model and several devices have been devised to
synthesize speech in this way. The idea is that the air in the vocal tract acts like the air in an organ pipe, or
in a bottle. Sound travels from a noise-making source (i.e., the vocal fold vibration) to the lips. Then, at
the lips, most of the sound energy radiates away from the lips for a listener to hear, while some of the
sound energy reflects back into the vocal tract. The addition of the reflected sound energy with the source
energy tends to amplify energy at some frequencies and damp energy at others, depending on the length
and shape of the vocal tract. The vocal folds (at larynx) are then a source of sound energy, and the cavity
(vocal tract - due to the interaction of the reflected sound waves in it) is a frequency filter altering the
timbre of the vocal fold sound. Thus this same source-filter mechanism is at work in many musical
instruments. In the brass instruments, for example, the noise source is the vibrating lips in the mouthpiece
of the instrument, and the filter is provided by the long brass tube.
The CV pattern (where one consonant is found at the onset followed by a vowel as its peak) of syllable is
found in all languages of the world. It is the universal pattern of syllable (Max Onset C) and is
encouraged by all human languages in abundance. There are languages which only allow CV templates of
syllables (e.g., Honolulu - CVCVCVCV and Waikiki - CVCVCV). Interestingly, it is also found in the nicknames of almost all languages of the world: kami, nana, baba, papa, mani, rani, etc. As part of their L1 acquisition, children first acquire the CV pattern of their mother tongue.
Also possible: CCV (try) CCCVC (stroke), CCCV (straw), VCC (eggs) CVCC
(risk), CVCCC (risks).
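As a toy illustration of such templates (the simple letter-to-C/V mapping is my own simplification and ignores diphthongs, digraphs and length), a string of segments can be reduced to a CV skeleton like this:

# Reduce a sequence of segments to a CV skeleton (toy example).
VOWELS = set("aeiou")

def cv_skeleton(segments):
    return "".join("V" if s in VOWELS else "C" for s in segments)

print(cv_skeleton("kami"))   # CVCV
print(cv_skeleton("risk"))   # CVCC
print(cv_skeleton("tri"))    # CCV (cf. 'try' /trai/, simplified)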
Supra means ‘above’ or ‘beyond’ and segments are sounds (phonemes). Suprasegmental is a term used in
phonetics and phonology to refer to a vocal effect (such as tone, intonation, stress, etc.) which extends
over more than one sound (segment) in an utterance. Major suprasegmental features include pitch, stress,
tone, intonation or juncture. These features are meaningful when they are applied above segmental level
(on more than one segment). Phonological studies can be divided into two fields: segmental phonology
and suprasegmental phonology. Suprasegmental features have been extensively explored in the recent
decades and many theories have been constituted related to the application and description of these
features.
Syllables constitute words, phrases and sentences through the combination of their prosodic features:
loudness — stress, pitch — tone, duration — length and tempo. Syllables may be stressed, unstressed, high, mid, low, rising, falling, long or short. OR From the speech production point of view, a syllable
consists of a movement from a constricted or silent state to a vowel-like state and then back to constricted
or silent state. From the acoustic point of view, this means that the speech signal shows a series of peaks
of energy corresponding to vowel-like states separated by troughs of lower energy (sonority).
The acoustic properties (structure) of consonantal sounds are usually more complicated than that of
vowels. Usually, a consonant can be said to be a particular way of beginning or ending a vowel sound
because during the production of a consonant there is no distinguishing feature prominently visible. There
is virtually no difference in the sounds during the actual closures of voiced stops [b, d, g], and absolutely
none during the closures of voiceless stops [p, t, k], because there is only silence at these points. Each of
the stop sounds conveys its quality by its effect on the adjacent vowel. We have seen that during a vowel
such as [u], there will be formants corresponding to the particular shape of the vocal tract. In the case of consonants, these changes are not really distinguishable (particularly for obstruents). Some consonantal sounds do have a vowel-like structure, so their acoustic features are somewhat similar to those of vowels (as in the case of nasal consonants, approximants and glides), but most consonants have quite different acoustic features.
Like stops, nasals can also occur voiced or voiceless (for example, in Burmese, Ukrainian and French), though in English and most other languages nasals are voiced. As voiceless nasals are comparatively rare, they are symbolized simply by adding the voiceless diacritic [ ̥] under the symbol for the voiced sound. There are no special symbols for voiceless nasals; a voiceless bilabial nasal is written as [m̥] – a combination of the letter for the voiced bilabial nasal and a diacritic indicating voicelessness.
For exploring the acoustics of vowels, we need to record vowels and explore their properties. Record the eight vowels of American English as in the words heed, hid, head, had, hod, hawed, hood and who’d. When you are done with the recording, get ready for measuring the following three things: intrinsic pitch, spectral make-up (formants) and plotting them in an Excel sheet (and finally exporting them to your Word document). Now, record yourself saying the words. Take a quick look at your vowels in the Edit window,
and make sure you can clearly see the vowel formants. If you have trouble seeing them, you can go back
to the previous labs and learn it again. While doing this, please make a note of it on your worksheet.
Q. Harmonics....5
Harmonics are the multiple integers of the fundamental frequency which are basically the result of vocal
fold vibration (complex wave). It is important to note that when our vocal folds vibrate, the result is
a complex wave, consisting of the fundamental frequency plus other higher frequencies, called harmonics.
As already mentioned, to see harmonics, we need to look at a narrow-band spectrogram, which is more
precise along the frequency domain than the default wide-band spectrogram.
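Since harmonics are whole-number multiples of f0, they can be listed directly; the f0 value below is only illustrative.

# Harmonics are integer multiples of the fundamental frequency.
f0 = 120                                    # illustrative f0 in Hz
harmonics = [n * f0 for n in range(1, 9)]
print(harmonics)   # [120, 240, 360, 480, 600, 720, 840, 960]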
There are different models of syllable structure, and languages are labeled as per their
syllabic templates. Consonant sequences are called clusters (e.g., CC – two consonants or CCC – three
consonants). Most of the phonotactic analyses are based on the syllable structures and syllabic templates.
On the basis of these consonant clusters, mainly three types of syllabic patterns are considered among
languages; simple – moderate – complex (on the basis of consonants clusters at edges: onset and coda).
Examples: Simple – CV; Moderate – CVC(G)(N) (G for glide and N for nasal – specific Cs).
Q. lexical stress....5
Lexical stress, or word stress, is the stress placed on a given syllable in a word. The position of lexical
stress in a word may depend on certain general rules applicable in the language or dialect, but in other
languages, it must be learned for each word, as it is largely unpredictable. In some cases, classes of words
in a language differ in their stress properties. Lexical stress is basically related to the primary stress
applied at syllable level (when only one syllable is stressed) that has the ability to change the meaning and
the grammatical category of a word as in the case of ‘IMport’ (noun) and ‘imPORT’ (verb).
Sentence stress is applied on one word (rather than a syllable) in a sentence thus making that word more
prominent (stressed) than the rest of the words in the sentence. This type of stress has its role in intonation
patterns and rhythmic features of the language showing specific emphasis on the stressed word (which
may be highlighting some information in the typical context). In order to perceive the nature of sentence-level stress, read the following sentences, shifting the stress accordingly, and judge the shift in emphasis (and its role in the context):
Q. Vowel qualities....5
There are two features of vowel quality (i.e., height and backness of the tongue) that are used to contrast
one vowel with another in nearly all languages of the world. But there are four other features that are used
less frequently and not all languages exhibit them. They include ‘lip-rounding’, rhotacization, nasalization
and advanced tongue root (ATR).
Q. Nasal Formants
Formants for nasal sounds are also important for acoustic analysis. Measure the first three (F1, F2 and F3)
formants of nasals from the file (use the already learnt way of measuring formants). Remember that
nasals have very distinctive waveforms (different than that of vowels) as they have distinctive forms of
anti-formants (bands of frequencies damped) and formant transition. When you are done with the
measurement, try to answer the following questions:
Q. In order to measure formants automatically, what three steps will you follow? Elaborate. (5)
Linguistic phonetics is an approach which is embodied in the principles of the International Phonetic Association (IPA) and in a hierarchical phonetic descriptive framework that provides a basis for formal phonological theory. Linguistic phonetics answers a number of questions related to the possible ways of unifying articulatory phonetics and phonology and, from the perspective of cognitive phonetics, it focuses on speech production and perception and how they shape languages as sound systems. The idea is mainly related to the overall ability of human beings to produce sounds (as a community and irrespective of their specific languages) and then to the representation of their shared knowledge (as captured by the IPA in its charts) for formal phonetic and phonological theories.
It has often been found that languages do not allow all phonemes to appear in any order (e.g., a native
speaker of English can figure out fairly easily that the sequence of phonemes /streŋθs/ makes an English
word (‘strengths’) and that the sequence /bleidg/ would be acceptable as an English word ‘blage’,
although that word does not happen to exist, but the sequence /lvm/ could not possibly be part of an English word).
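A toy sketch of such a phonotactic check (the onset inventory below is a tiny illustrative subset, not a real description of English):

# Toy phonotactic check: is the initial cluster among some permitted onsets?
PERMITTED_ONSETS = {"", "b", "s", "t", "st", "tr", "bl", "str", "spr"}

def onset(word, vowels="aeiou"):
    cluster = ""
    for segment in word:
        if segment in vowels:
            break
        cluster += segment
    return cluster

for w in ["strengths", "bleidg", "lvm"]:
    print(w, "-> onset", repr(onset(w)), "permitted:", onset(w) in PERMITTED_ONSETS)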
The research and application of speech perception must deal with several problems which result from
what has been termed the lack of invariance. Reliable constant relations between a phoneme of a language
and its acoustic manifestation in speech are difficult to find. This ‘lack of phonetic invariance’ has posed an important problem for phonetic theory, as we try to reconcile the fact that shared phonetic knowledge can be described using IPA symbols and phonological features with the fact that the individual phonetic forms that speakers produce and hear on a daily basis span a very great range (of varieties).
Q. Name some materials that can assist in the teaching of the language skills. 5
Developing relevant material for the teaching of phonetics and phonology is an important task for
aspiring teachers of the English language.
1. Explore already developed material available online from various sources (such as British Council and
other teacher resource centers); however, you must also be able to develop your own material (as
specifically required by your students).
2. You can develop your material related to the pronunciation teaching to the learners of English.
3. You can incorporate material related to the IPA text – transcription of the audio (listening) based
activities – by involving students on using dictionaries (ideally the phonetic dictionaries) in the
classroom.
4. Movies and documentaries (such as from BBC - CNN - National Geographic channels) may also serve
as very effective resources for the teaching of pronunciation.
5. Finally, the real life material (for listening) and writing interaction from everyday language may also
yield tremendous results. The focus of material development should always be the enhancement of the
proficiency level of students.
Phonetics and phonology is a very promising area for research in the Pakistani context. In applied phonology, many areas can be explored; for example, issues faced by Pakistani learners of English may be studied. Similarly, the pronunciation issues of Pakistani learners are a potential area through which the difficulties faced by Pakistani students may be addressed. Also, researchers can explore and document the features of Pakistani English on the basis of its phonological features in order to get the Pakistani variety of English recognized. Other problematic areas may include segmental and suprasegmental features (such as stress placement, intonation patterns, and the syllabification and resyllabification of English words by Pakistani learners). Contrastive analysis (between English phonology and the sound systems of the regional languages of Pakistan) can also be carried out by researchers.
We can also think about exploring the consonant clusters and interlanguage phonology from second
language acquisition point of view. While focusing on ELT as the part of applied linguistics, studies may
also be carried out on Pakistani variety of English (development of its corpora, deviation from the
standard variety (RP), its specific features, etc.).
Q. Speaker styles
A complete range of a speaker’s vowel qualities may be considered as representative of the speaker’s
personal features which, in turn, may be compared with the formant frequency of each vowel (with the
total range of that formant in that speaker’s voice). It is true that phoneticians are still working on comparing the acoustic data of one individual with another’s in order to further improve systems of speech recognition. Experts in applied phonetics and computer speech technology are trying to understand the complexity of speech synthesis systems and improve them.
Q. VOT (5)
Voicing is a feature of some of the sounds we make. If we hold our fingers lightly against the front of our
throat and make the sound ssssssss, and then go zzzzzzzz – we will feel buzz for the second one. That’s
our vocal folds vibrating really quickly. There are lots of minimal pairs like this in English - s/z are
fricatives, but there are also stops, like t/d and p/b. For stops, the voice onset time (VOT) is the
relationship between when you open your articulators and when those vocal folds start buzzing. Some stops have voicing start before the release of the closure, known as a negative VOT; aspirated consonants (with a bit of air after the release) followed by a voiced sound have a positive VOT; and those cases where the voicing and the opening occur at the same time are known as tenuis (zero VOT), just to sound fancy. The VOT of sounds varies across languages.
Q. Explain with reference to the division of sounds into supra-laryngeal and laryngeal characteristics. (5)
Sounds are divided in terms of their supra-laryngeal and laryngeal characteristics, and their airstream
mechanism. The supra-laryngeal characteristics can be further divided into those for place (of
articulation), manner (of articulation), the possibility of nasality, and the possibility of being lateral. Thus,
these features are used for classifying speech sounds and describing them formally.
Q. Importance of Spectrograms
1. Using Praat (or any other software) and spectrogram is particularly useful when a researcher is
working on a problem related to the nature (physical properties) of a sound (e.g., is it a phoneme
or allophone?).
2. It increases our understanding of the speech sounds and their behavior in different forms (in
isolation or as the part of connected speech).
3. Practice on spectrogram gives us the opportunity to learn about the characteristics of speech
sounds.
4. It is also important for experts who are working on phonetic aspects of speech as signal
processing.
5. These are also used as the part of techniques in speech recognition.
In the description of vowel quality, rhotacization (or rhotacized vowel) is a term which is used in English
phonology referring to dialects or accents where /r/ is pronounced following a vowel, as in words ‘car’
and ‘cart’. Thus varieties of English are divided on the basis of this feature - varieties having this feature
are rhotic (in which /r/ is found in all phonological contexts) while others (not having this feature) are
non-rhotic (such as Received Pronunciation, where /r/ is only found before vowels as in ‘red’ and ‘around’). Similarly, vowels which occur after retroflex consonants are sometimes called rhotacized vowels (they display rhotacization). It is important to mention that while BBC pronunciation is non-rhotic, many accents of the British Isles are rhotic, including most of the south and west of England, much of Wales, and all of Scotland and Ireland. Most American English speakers speak with a rhotic accent, but there are non-rhotic areas (e.g., the Boston area, lower-class New York speech and the Deep South).
Q. Describe two reasons why the phonetics of the community is considered for phonetic description. 5
Firstly, individual speakers differ in interesting ways (two native speakers of a language will always
speak with some variations). The description of the phonetics of the individual involves describing the
phonetic knowledge and skills related to the performance of language. It is possible that certain aspects of
the phonetics of the individual can be captured using IPA transcription, but others are not compatible with
it (such as private knowledge and its performance, and the role of memory and experience). Secondly,
the phonetics of the individual is usually not the focus of the linguist in speech elicitation, and it is
difficult to describe even with spectrograms of the person’s speech. Although the phonetics of the
individual is the focus of much of the explanatory power of phonetic theory, for general phonetic
description we need to focus on the phonetics of the community.
Q. How are the oral stops produced? Provide IPA symbols for any three English oral stops. 5
In phonetics, a stop, also known as a plosive or oral occlusive, is a consonant in which the vocal tract is
blocked so that all airflow ceases.
The occlusion may be made with the tongue blade ([t], [d]), tongue body ([k], [ɡ]), lips ([p], [b]), or glottis
([ʔ]).
Stops contrast with nasals, where the vocal tract is blocked but airflow continues through the nose, as in
/m/ and /n/, and with fricatives, where partial occlusion impedes but does not block airflow in the vocal
tract.
Q. Name the five major features based on the major regions of vocal tract. 5
There are five features in total (i.e., Labial, Coronal, Dorsal, Radical, and Glottal). ‘Labial’ refers to
articulations made with the lips, ‘Coronal’ and ‘Dorsal’ are related to tongue position, and ‘Radical’ is a
cover term for [pharyngeal] and [epiglottal] articulations made with the root of the tongue. The feature
‘Glottal’, on the other hand, is based on being [glottal], to cover various articulations such as [h]. If we
are to have a convenient grouping of the features for consonants, we have to recognize that Supra-Laryngeal
features must allow for the dual nature of the actions of the larynx and include Glottal as a place of
articulation. Remember that a sound may be articulated at more than one of the regions Labial, Coronal,
Dorsal, Radical, and Glottal. Within the five general regions, ‘Coronal’ articulations can be split into three
mutually exclusive possibilities: Laminal (i.e., the blade of the tongue), Apical (i.e., the tip of the tongue),
and Sub-apical (i.e., the under part of the blade of the tongue). Thus the major regions may be subdivided
into sub-regions on the basis of their features.
Q. In syllable timed languages, all syllables tend to have an equal time value (for example, their length or duration)
and the rhythm of the language is said to be syllable-timed
Q. In spectrograms, time runs from left to right, the frequency of the components is shown on the vertical
scale.
Q. One should observe a gap in the pattern (with a burst for voiceless stops and a sharp formant beginning for voiced stops).
Q. How does a spectrogram help to identify a bilabial sound? ..... 2 [The most favourite of VU; V.V.IMP for 2 marks
questions. They ask about stops, bilabials or approximants.]
Stop - a gap in the pattern (with a burst for voiceless stops and a sharp formant beginning for voiced stops)
Lateral - formant structure similar to that of vowels (with formants at roughly 250, 1200, and 2400 Hz)
Q. Memory for Speech [Another favourite of VU; V.V.IMP for 2 marks questions. They ask it for speaking
style, sound change or any one of them.]
The role of the memory for speech under the exemplar theory suggests that many instances of each word are stored
in memory and their phonetic variability is memorized rather than computed. The main postulates of the concept
are given here:
Language universal features: Broad phonetic classes (e.g., aspirated vs. unaspirated) derive from
physiological constraints on speaking or hearing, but their detailed phonetic definitions are arbitrary—a
matter of community norms.
Speaking styles: No one style is basic (from which others are derived), because all are stored in memory.
Bilingual speakers store two systems.
Generalization and productivity: Exemplar theory says that generalization is also possible within
productivity. Interestingly, productivity—the hallmark of linguistic knowledge in the phonetic
implementation approach—is the least developed aspect of the exemplar theory.
Sound change: Sound change is phonetically gradual and operates across the whole lexicon. It is a gradual
shift as new instances keep on adding.
The primary set includes eight vowels in total (from 1 to 8); the front unrounded vowels [i, e, ε, a], the back
unrounded vowel [ɑ] and the rounded back vowels [ɔ, o, u].
A set of secondary cardinal vowels (as a precise set of references) was introduced by the same British phonetician
Daniel Jones (1881-1967). Secondary cardinal vowels are easy to understand in connection with the primary
cardinal vowel system. The main difference between primary and secondary cardinal vowels is related to lip-
rounding, as in some languages lip-rounding is also possible for front vowels.
When speakers aim at producing sounds with maximum ease of articulation, only similar sounds are affected. The focus of
the speakers is always on maintaining a sufficient perceptual distance between the sounds that occur in a contrasting
set (e.g., vowels in stressed monosyllabic words beat, bit, bet, and bat). This principle of perceptual separation does
not usually result in one sound affecting an adjacent sound (as explained in the principle of maximum ease of
articulation). Instead, perceptual separation affects the set of sounds that potentially can occur at a given position in
a word, such as in the position that must be occupied by a vowel in a stressed monosyllable as in words beat, bit, bet,
bat so that the perceptual separation is maximized. The principle of ‘maximum perceptual separation’ also accounts
for some of the differences between languages. All these examples illustrate how languages maintain a balance
between the requirements of the speaker and those of the listener. On the one hand, there is the pressure to make
changes that would result in easier articulations from a speaker’s point of view and, then, from the listener’s point of
view that there should be sufficient perceptual contrast between sounds that affect the meaning of an utterance.
Q. IPA
While discussing the key elements of linguistic phonetic description, we need to consider the International Phonetic
Alphabet (abbreviated as IPA). The IPA is the set of symbols and diacritics that have been officially approved by the
International Phonetic Association. The association publishes a chart comprising a number of separate charts. At the
top inside the front cover, you
will find the main consonant chart. Below it is a table showing the symbols for nonpulmonic consonants, and below
that is the vowel chart. Inside the back cover is a list of diacritics and other symbols, and a set of symbols for
suprasegmental features (events) such as tone, intonation, stress, and length. Remember that the IPA chart does not
try to cover all possible types of phonetic descriptions (e.g., all the individual strategies for realizing linguistic
phonological contrasts, or gradations in the degree of co-articulation between adjacent segments, etc.). Instead, it is
limited to those possible sounds that can have linguistic significance in that they can change the meaning of a word
in some languages. So the description of IPA is based on the linguistic phonetics of the community.
The fundamental distinction between consonant and vowel sounds is that vowels make the least obstruction to the
flow of air. In addition to this, vowels are almost always found at the center of a syllable, and it is very rare to find
any sound, other than a vowel which can stand alone as a whole syllable. Phonetically, each vowel has a number of
features (properties) that distinguish it from other vowels. These include, firstly, the shape of the lips (lip-rounding):
rounded (for sounds like the /u:/ vowel), neutral (as for the ə - schwa sound) or spread (as for the /i:/ sound in a word like sea,
or when photographers traditionally ask you to say “cheese” /tʃi:z/ in order to make you look as if you are smiling). Secondly,
the part of the tongue - the front, the middle or the back of the tongue may be raised, giving different vowel qualities: compare
the /æ/ vowel (as in the word ‘cat’) as a front vowel, with the /ɑ:/ vowel (as in ‘cart’) which is a back vowel. Thirdly, the
tongue (and the lower jaw) may be raised ‘close’ to the roof of the mouth (for close vowels, e.g. /i:/ or /u:/), or the
tongue may be left ‘low’ in the mouth with the jaw comparatively ‘open’ (as for open vowels, e.g., /a:/ and /æ/). In
British phonetics, terms such as ‘close’ and ‘open’ are used for vowels, whereas in American phonetics ‘high’ and
‘low’ are used for vowel description. So, generally, these three aspects are described in the case of vowels; lip-
rounding, the part of the tongue and the height of the tongue. In addition to these three features, some other
characteristics of vowels are also used in various languages of the world (e.g., nasality – whether a vowel is nasal or
not).
Q. Phonotactics
The study of the phonemes and the orders in which they are found in the syllables of a language (the study of sound
sequences) is called phonotactics. It has often been found that languages do not allow all phonemes to appear in any order
(e.g., a native speaker of English can figure out fairly easily that the sequence of phonemes /streŋθs/ makes an
English word (‘strengths’) and that the sequence /bleidʒ/ would be acceptable as an English word ‘blage’, although
that word does not happen to exist, but the sequence /lvm/ could not possibly be part of an English word).
Phonotactic analyses of English come up with some interesting findings. For example, why should ‘bump’, ‘lump’,
‘hump’, ‘rump’, ‘mump(s)’, ‘clump’ and others all be associated with large blunt shapes? Why should there be a
whole family of words ending with a plosive and a syllabic /l/ all having meanings to do with clumsy, awkward or
difficult action (e.g., ‘muddle’, ‘fumble’, ‘straddle’, ‘cuddle’, ‘fiddle’, ‘buckle’, ‘struggle’, ‘wriggle’)? Why can’t
English syllables begin with /pw/, /bw/, /tl/, /dl/ when /pl/, /bl/, /tw/, /dw/ are acceptable? All such discussion is
called the phonotactics of the language.
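The kind of restriction just described can be illustrated with a small Python sketch; the onset lists below are a tiny, hand-picked subset used only for illustration, not a full account of English phonotactics.

# Minimal sketch: checking word-initial consonant clusters against a tiny,
# illustrative subset of English onset phonotactics (not a full grammar).
ALLOWED_ONSETS = {"pl", "bl", "tw", "dw", "pr", "br", "tr", "dr", "str"}
DISALLOWED_ONSETS = {"pw", "bw", "tl", "dl", "lvm"}

def onset_status(cluster):
    """Classify a (romanised) onset cluster for this toy example."""
    if cluster in ALLOWED_ONSETS:
        return "possible English onset"
    if cluster in DISALLOWED_ONSETS:
        return "not a possible English onset"
    return "not covered by this toy list"

for cluster in ["pl", "tw", "pw", "tl", "str"]:
    print(cluster, "->", onset_status(cluster))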
In tonal languages, pitch is used as an essential component of the pronunciation of a word and a change of pitch may
cause a change in meaning. In most languages (whether or not they are tone languages) pitch plays a central role in
intonation. In very simple words, pitch is the variation in the vibration of vocal folds.
2. Is there one formant with a similar frequency for all places of articulation?
3. Is there one formant that has much higher amplitude than the others across nasals?
4. Do you see any overall differences between the nasals on the one hand and [a] on the other?
Q. nasal formants
Formants for nasal sounds are also important for acoustic analysis. Measure the first three (F1, F2 and F3) formants
of nasals from the file (use the already learnt way of measuring formants). Remember that nasals have very
distinctive waveforms (different from those of vowels) as they have distinctive anti-formants (bands of
damped frequencies) and formant transitions.
To calculate the VOT, record /apa/, /aba/, /ata/, /ada/, /apha/ and /atha/. Zoom in on your stop sounds so that
you can analyze the patterns of the stop sounds and find the difference among the three types of VOT (negative, zero
and positive). Measure the VOT of each stop and compare voiced/voiceless counterparts (p/b, t/d, k/g). Similarly,
zoom in so that you can clearly see the stop closure followed by the beginning of the vowel. You can measure the
time between the end of the stop closure (the beginning of the release burst) and the onset of voicing in the
following vowel (the onset of regular pitch pulses in the waveform).
Q. Lack of invariance
The ‘lack of phonetic invariance’ poses an important problem for phonetic theory: we have to reconcile the fact
that shared phonetic knowledge can be described using the IPA
symbols and phonological features with the fact that the individual phonetic forms that speakers produce and hear on
a daily basis span a very great range (of varieties). This lack of invariance as a problem also has great practical
significance for language engineers who try to get computers to produce and recognize speech.
The first two frequencies are important here. The first formant (F1) is inversely related to the height of a vowel
whereas the second formant (F2) is related to the frontness of a vowel sound. When the first two formants are taken,
the vowels of a language can be plotted on a chart and the structure is very much related to the traditional
description of vowel sounds.
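A minimal Python sketch of such an F1/F2 plot is given below; the formant values are rough, made-up averages used only to show the plotting idea (both axes are inverted so that the chart resembles the traditional vowel quadrilateral).

# Minimal sketch: plotting vowels on an F1/F2 chart. The formant values
# below are rough illustrative figures, not measurements from any speaker.
import matplotlib.pyplot as plt

vowels = {            # vowel: (F1 Hz, F2 Hz) -- hypothetical values
    "i:": (280, 2250),
    "æ": (690, 1660),
    "ɑ:": (710, 1100),
    "u:": (310, 940),
}

for label, (f1, f2) in vowels.items():
    plt.scatter(f2, f1)
    plt.annotate(label, (f2, f1))

# Invert both axes so high vowels (low F1) sit at the top and
# front vowels (high F2) sit on the left, as on the IPA vowel chart.
plt.gca().invert_xaxis()
plt.gca().invert_yaxis()
plt.xlabel("F2 (Hz) ~ backness")
plt.ylabel("F1 (Hz) ~ height")
plt.title("Vowels plotted by their first two formants")
plt.show()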
There are three important acoustic correlates of voicing in stops: the voice bar, VOT, and the duration of the
preceding vowel. Record /apa/, /aba/, /ata/, /ada/, /apha/ and /atha/ and for each of the stops in the file, take the three
measurements according to the following instructions: observe the voicing (the voice bar) by exploring the features of the stop.
We can also explore the features related to the place of articulation (e.g., any bilabial feature for /p/ or /b/ in comparison
with a non-bilabial stop). Also check the duration of the preceding vowels and note down the presence of voicing.
Create a textgrid:
• Create two tiers (this will be enough for our purposes). Write ‘word segment’ (these are two tiers) on the cell
named ‘All tier names’ on the small window
Q. dental stop.....5
Dental Sounds are present both in British and American English, e.g. dental fricatives [θ, ð] but there are no dental
stops, nasals, or laterals except allophonically realized (before [θ, ð] as in eighth, tenth, wealth). Many speakers of
French, Italian, and other languages (such as Urdu, Pashto and Sindhi) typically have dental stops such as [t̪ d̪].
However, there is a great deal of individual variation in the pronunciation of these consonants in all these languages.
Q. Name following
Spectral slices (or cross-sections) show the amplitude/frequency spectrum at a selected moment in the signal. They
are useful as aids to comparing local spectral events or measuring spectral properties such as formant frequencies,
levels or bandwidths.
Linguistic phonetic descriptions (of speech sounds) are, by and large, descriptions of the phonetics of the
community (excluding the individual properties and considering the shared properties of the sound system of a
language by its native speakers). The representations that experts write and use in the IPA, and analyze in a formal
phonological theory, are intended to show the community’s shared knowledge of how to say the words of a
language. It is important to note that this shared phonetic knowledge is perceptible to other speakers (and thus to the
phonetician as well) and is mainly related to the aggregate behavior of the linguistic group, in the sense that it
captures what community members accept as the correct pronunciation system.
Q. Phonology fields?
Phonological studies can be divided into two fields: segmental phonology and suprasegmental phonology.
‘Coronal’ articulations can be split into three mutually exclusive possibilities: Laminal (i.e., blade of the tongue),
Apical (i.e., tip of the tongue), and Sub-apical (i.e., the under part of the blade of the tongue)
CV.CV.CV.CV.CVC
Q: Describe the cardinal vowels according to lip, tongue and jaw position: æ and œ. (3 marks)
Firstly, the shape of the lips (lip-rounding): rounded (for sounds like the /u:/ vowel), neutral (as for the ə - schwa sound) or
spread (as for the /i:/ sound in a word like sea, or when photographers traditionally ask you to say “cheese” /tʃi:z/ in order
to make you look as if you are smiling). Secondly, the part of the tongue - the front, the middle or the back of the tongue may be
raised, giving different vowel qualities: compare the /æ/ vowel (as in the word ‘cat’) as a front vowel, with the /ɑ:/ vowel
(as in ‘cart’) which is a back vowel. Thirdly, the tongue (and the lower jaw) may be raised ‘close’ to the roof of the
mouth (for close vowels, e.g. /i:/ or /u:/), or the tongue may be left ‘low’ in the mouth with the jaw comparatively
‘open’ (as for open vowels, e.g., /a:/ and /æ/).
In the last two topics, we measured the harmonics as well as the formants of the sonorant sounds (vowels). Having
taken the measurements for both formants and harmonics, we need to compare them and explore a possible
relationship between the two. It is clear from the comparison of the two sets of values (for formants and harmonics)
that the harmonics change with the source while the formants stay the same for a given articulation. The relationship
between the harmonics and the formants is captured in the source-filter model of speech production: harmonics are
related to the laryngeal activity (source) and formants are the output of the vocal tract (filter).
Q. 3 types of VOT
The three types of VOT are negative VOT (voicing starts before the release of the closure), zero VOT (voicing begins at roughly the same time as the release) and positive VOT (voicing lags behind the release, as in aspirated stops).
Q. Japanese as a syllable-timed language
A classic example is Japanese in which all morae have approximately the same duration. This tendency is contrasted
with stress-timing where the time between stressed syllables is said to tend to be equal irrespective of the number of
unstressed syllables in between.
In a voiceless unaspirated plosive (such as /p/) there is a delay (or lag) before voicing starts; and, in a voiceless
aspirated plosive (e.g., /pʰ/), the delay is much longer, depending on the amount of aspiration. The amount of this
delay is called Voice Onset Time (VOT) which in relation to the types of plosive varies from language to language.
Q. What is praat?
Nowadays most of the research works in phonetics and phonology are based on software like Praat and WaveSurfer.
So it is appropriate to include some beginners’ level introductory sessions to one of the mostly used software Praat.
Praat is a computer program with which you can analyze, synthesize, and manipulate speech, and create high-quality
pictures for your articles and thesis.
In this theory, the tract is represented using a source-filter model and several devices have been devised to
synthesize speech in this way. The idea is that the air in the vocal tract acts like the air in an organ pipe, or in a
bottle. Sound travels from a noise-making source (i.e., the vocal fold vibration) to the lips. Then, at the lips, most of
the sound energy radiates away from the lips for a listener to hear, while some of the sound energy reflects back into
the vocal tract. The addition of the reflected sound energy with the source energy tends to amplify energy at some
frequencies and damp energy at others, depending on the length and shape of the vocal tract. The vocal folds (at
larynx) are then a source of sound energy, and the cavity (vocal tract - due to the interaction of the reflected sound
waves in it) is a frequency filter altering the timbre of the vocal fold sound. This idea can make it very easy for us to
understand the formants of a vowel sound. Thus this same source-filter mechanism is at work in many musical
instruments. In the brass instruments, for example, the noise source is the vibrating lips in the mouthpiece of the
instrument, and the filter is provided by the long brass tube.
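The source-filter idea can be sketched in a few lines of Python: an impulse-train ‘source’ is passed through simple second-order resonators standing in for two formants. This is only a toy illustration under assumed formant frequencies and bandwidths, not a full formant synthesizer.

# Toy sketch of the source-filter idea: a glottal-pulse-like source filtered
# through two resonators placed at hypothetical formant values for an [a]-like vowel.
import numpy as np
from scipy.signal import lfilter

fs = 16000                       # sampling rate (Hz)
f0 = 120                         # fundamental frequency of the source (Hz)
duration = 0.5                   # seconds

# Source: a train of impulses, one per glottal cycle.
n = int(fs * duration)
source = np.zeros(n)
source[::fs // f0] = 1.0

def resonator(signal, freq, bandwidth, fs):
    """Pass the signal through a second-order resonator (one formant)."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r ** 2]   # pole pair at the formant
    b = [1.0 - r]                                 # simple gain term
    return lfilter(b, a, signal)

# Filter: a cascade of resonators at rough formant values for [a].
output = resonator(source, 700, 90, fs)    # F1
output = resonator(output, 1100, 110, fs)  # F2

print("Synthesised", len(output), "samples; peak amplitude",
      round(float(np.max(np.abs(output))), 3))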
Syllable structure could be of three types: ‘simple’ (CV), ‘moderate’ (CVC) and ‘complex’ (with consonant clusters
at edges) such as CCVCC and CCCVCC (where V means vowel and C stands for consonant).
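A small Python sketch of this classification, using a toy romanised spelling and a made-up vowel list, is given below.

# Minimal sketch: reducing a (romanised) syllable to its CV skeleton and
# classifying it as simple, moderate or complex in the sense used above.
VOWELS = set("aeiou")   # toy vowel inventory for romanised examples

def cv_skeleton(syllable):
    return "".join("V" if ch in VOWELS else "C" for ch in syllable)

def syllable_type(skeleton):
    if skeleton == "CV":
        return "simple"
    if skeleton == "CVC":
        return "moderate"
    if "CC" in skeleton:
        return "complex (consonant cluster at an edge)"
    return "other"

for syl in ["ta", "tap", "stamp"]:
    skel = cv_skeleton(syl)
    print(syl, "->", skel, "->", syllable_type(skel))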
Any particle of air, such as that in the vocal tract or that in a bottle, will vibrate in a way that depends on its size and
shape. Remember that the air in the vocal tract is set in vibration by the action of the vocal folds (in larynx). Every
time the vocal folds open and close, there is a pulse of acoustic energy (activation). Irrespective of the rate of
vibration at source (of the vocal folds), the air in the vocal tract will resonate at its own resonant frequencies as long as the
position of the vocal organs remains the same. Because of the complex shape of the filter (tract), the air will vibrate
in more than one way at once.
Make sure the volume bar is fluctuating as you record – if it isn’t, you’re not recording; if you don’t see the volume
bar at all, you’re not speaking loudly enough.
• Watch out for clipping. If your recording level is too high and you go into the red on the volume bar, you’ll end up
with what is called a “clipped” signal; this is very bad for speech analysis!
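A quick programmatic check for clipping can be sketched as follows; the file name recording.wav is hypothetical and a 16-bit mono WAV file is assumed.

# Minimal sketch: checking a recording for clipping before analysis.
# Assumes an integer-encoded (e.g. 16-bit) mono WAV named "recording.wav".
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("recording.wav")
max_possible = np.iinfo(samples.dtype).max      # 32767 for 16-bit audio

# Samples sitting at (or very near) the maximum level suggest clipping.
clipped = int(np.sum(np.abs(samples.astype(np.int64)) >= max_possible - 1))

if clipped > 0:
    print(f"Warning: {clipped} samples appear clipped - re-record at a lower level.")
else:
    print("No obvious clipping detected.")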
Q. Semivowels
Most of the world’s languages contain a class of sounds that functions in a way similar to consonants but is
phonetically similar to vowels (e.g., in English, /w/ and /j/ as in ‘wet’ and ‘yet’). When they are used in the first part
of syllables (at the onset), they function as consonants. But if they are pronounced slowly, they resemble (in quality)
the vowels [u] and [i] respectively. These sounds are called semivowels, which are also termed approximants
today. In French there are three semivowels (i.e., in addition to /j/ and /w/ there is another sound, symbolized /ɥ/, which is
found in initial position in a word like ‘huit’ /ɥit/ (eight) and in consonant clusters such as /frɥ/ in /frɥi/ (‘fruit’)).
The IPA chart also lists a semivowel corresponding to the back close unrounded vowel /ɯ/. Like the others, this is
classed as an approximant.
Q. Palatalization
Palatalization is the addition of a high front tongue gesture (as in sound like [i]) to another main (primary) gesture.
The diacritic used for palatalization is the small superscript [ʲ] written after the symbol for the primary gesture. The
terms palatalization (a process whereby the place of an articulation is shifted nearer to the center of the hard palate)
and palatalized (when the front of the tongue is raised close to the palate while an articulatory closure is made at
another point in the vocal tract) are sometimes used in a slightly different way. A palatalized consonant has a typical
/j/-like (similar to /i/ vowel) quality.
Harmonics are integer multiples of the fundamental frequency, and they are basically the result of vocal fold
vibration (a complex wave). We need the ‘narrow-band spectrogram’ for measuring the harmonics (which we can get by
setting the spectrum window length to 0.025 s). We start by measuring the frequency of the first three harmonics and then go
on to H10 (H1, H2, H3 – H10). Finally, we compare these with the pitch measurement already taken. It is important to note
that when our vocal folds vibrate, the result is a complex wave, consisting of the fundamental frequency
(which you have measured in Topic 187) plus other higher frequencies, called harmonics. As already mentioned, to
see harmonics we need to look at a narrow-band spectrogram, which is more precise along the frequency domain
than the default wide-band spectrogram. Let’s now take the harmonics (a small numeric sketch follows the steps below):
• Change the window length to 0.025s – the default window length is 0.005s (wide-band spectrogram) - this changes
the spectrogram dramatically!
• Looking at each vowel, notice the grey horizontal bands: these correspond to harmonics. For each vowel, measure
the frequencies of the first 3 harmonics (H1-H3) and the 10th harmonic (H10).
• Click on the center (horizontally) of each harmonic in the center of each vowel.
• A red horizontal bar should appear with the frequency value on the left side of the window in red.
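Since harmonics are simply whole-number multiples of F0, the expected harmonic frequencies can be computed directly once F0 is known; the sketch below uses a hypothetical F0 of 120 Hz.

# Minimal sketch: expected harmonic frequencies given a measured F0.
f0 = 120.0                                   # hypothetical fundamental (Hz)
harmonics = {f"H{n}": n * f0 for n in range(1, 11)}

for name, freq in harmonics.items():
    print(f"{name}: {freq:.0f} Hz")
# H1 equals F0 itself; H10 here is 1200 Hz, which is roughly where the
# tenth grey band should sit on the narrow-band spectrogram.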
Q. Ease of articulation
In order to explain the sound patterns of a language, the views of both speaker and listener are considered. Speakers
like to use the least possible articulatory effort (except when they are trying to produce very clear speech), and
there are a large number of assimilations, with some segments left out and others reduced to a minimum. Thus a
speaker uses language with an ease of articulation (e.g., coarticulation and secondary articulation). This tendency to
use language sounds with maximum possible ease of articulation leads to change in the pronunciation of words.
Q. features hierarchy
Feature hierarchy is an important concept in phonetics and phonology which is based on the properties and features
of sounds. In a very general sense, a feature may be tied to a particular articulatory maneuver or acoustic property.
For example, the feature [bilabial] indicates not only that the segment is produced with lips but also that it involves
both of them. Such features (in phonetics and phonology) are listed in a hierarchy with nodes in the hierarchy
defining ever more specific phonetic properties. For example, sounds are divided in terms of their supra-laryngeal
and laryngeal characteristics, and their airstream mechanism. The supra-laryngeal characteristics can be further
divided into those for place (of articulation), manner (of articulation), the possibility of nasality, and the possibility
of being lateral. Thus, these features are used for classifying speech sounds and describing them formally.
As a suprasegmental feature, pitch is an auditory sensation - when we hear a regularly vibrating sound such as a note
played on a musical instrument (or a vowel produced by the human voice), we hear a high pitch (when the rate of
vibration is high) and a low pitch (when the rate of vibration is low). There are some speech sounds that are
voiceless (e.g. /s/), and cannot give rise to a sensation of pitch in this way but the voiced sounds can. Thus the pitch
sensation that we receive from a voiced sound corresponds quite closely to the frequency of vibration of the vocal
folds. However, we usually refer to the vibration frequency as fundamental frequency in order to keep the two things
distinct. In tonal languages, pitch is used as an essential component of the pronunciation of a word and a change of
pitch may cause a change in meaning. In most languages (whether or not they are tone languages) pitch plays a
central role in intonation. In very simple words, pitch is the variation in the vibration of vocal folds.
One piece of evidence that the IPA chart is based on linguistic phonetics is the description of the blank cells on the chart
(those neither shaded nor containing a symbol) that indicate the combinations of categories that are humanly
possible but have not been observed so far to be distinctive in any language (e.g., a voiceless retroflex lateral
fricative is possible but has not been documented so far, so it is left blank). The shaded cells, on the other hand,
exhibit the sounds not possible at these places.
Q. Syllable
In a simple way of defining the term, syllables are the parts of a word (into which a word is further divided),
for example, mi-ni-mi-za-tion or sup-ra-seg-men-tal. Phonetically, we can observe that the flow of speech typically
consists of an alternation between vowel-like states (where the vocal tract is comparatively open and unobstructed)
and consonant-like states where some obstruction to the airflow is made (thus alternating speech between the two
natural kinds of sounds). So, from the speech production point of view, a syllable consists of a movement from a
constricted or silent state to a vowel-like state and then back to constricted or silent state. From the acoustic point of
view, this means that the speech signal shows a series of peaks of energy corresponding to vowel-like states
separated by troughs of lower energy (sonority).
Q. Explain stress timed languages
Languages of the world are, therefore, divided into two broad categories: stress-timed languages and syllable-timed
languages. Stress-timed languages have stress as their dominating rhythmic feature, meaning that these languages
seem to be timed according to the stressed patterns (the division among the syllables is made on the basis of stressed
and unstressed patterns, e.g., English and German). In other words, in stress-timed languages, stressed
syllables occur with regular intervals and their units of timing are perceived accordingly. Stress-timed rhythm is one
of these rhythmical types, and is said to be characterized by a tendency for stressed syllables to occur at equal
intervals of time.
Sonorants are vowel-like sounds (nasals and glides). These sounds are called sonorants because they have formants
(remember their acoustic correlates). But they are different from vowels because they generally have lower amplitude;
therefore, they behave like consonants.
Q. What specific terms are used for the consonant clusters CC and CCC in a syllable? (2)
Consonant sequences are called clusters (e.g., CC – two consonants or CCC – three consonants). Most of the
phonotactic analyses are based on the syllable structures and syllabic templates.
Q. How do formants help in distinguishing vowels from each other?
Acoustically, vowels are mainly distinguished by the first two formant frequencies F1 and F2; F1 is inversely related
to vowel height (which means that a lower F1 frequency = a higher vowel), and F2 is related to the frontness or backness
of the vowel (a lower F2 frequency = a more back vowel).
Q. While recording in Praat, one has to be careful about clipping. Explain. (3)
Watch out for clipping. If your recording level is too high and you go into the red on the volume bar, you’ll end up
with what is called a “clipped” signal; this is very bad for speech analysis.
Phonologists are interested in the structure of a syllable. It can be divided into three possible parts as phonemes may
occur at the beginning (onset), in the middle (nucleus or peak) and at the end (coda) of syllables - the combination of
nucleus (peak) and coda is called the rhyme. The beginning (onset) and ending (coda) are optional while a syllable
must have a nucleus (at least one phoneme). Thus, the study of the sequences of phonemes is called phonotactics,
and it seems that the phonotactic possibilities of a language are determined by its syllabic structure (sequences of
sounds that a native speaker produces can be broken down into syllables).
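The onset-nucleus-coda-rhyme division can be sketched with a toy Python function; it assumes a romanised spelling and a single vowel group per syllable, so it is for illustration only.

# Minimal sketch: splitting a single (romanised) syllable into onset,
# nucleus and coda, with the rhyme as nucleus + coda.
VOWELS = set("aeiou")

def split_syllable(syllable):
    first_v = next(i for i, ch in enumerate(syllable) if ch in VOWELS)
    last_v = max(i for i, ch in enumerate(syllable) if ch in VOWELS)
    onset = syllable[:first_v]
    nucleus = syllable[first_v:last_v + 1]
    coda = syllable[last_v + 1:]
    return onset, nucleus, coda, nucleus + coda   # the last item is the rhyme

for syl in ["strengths", "bit", "a"]:
    onset, nucleus, coda, rhyme = split_syllable(syl)
    print(f"{syl}: onset={onset or '-'} nucleus={nucleus} "
          f"coda={coda or '-'} rhyme={rhyme}")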
The role of the memory for speech under the exemplar theory suggests that many instances of each word are stored
in memory and their phonetic variability is memorized rather than computed. No one style is basic (from which
others are derived), because all are stored in memory.
Q. In order to measure formants automatically, what three steps will you follow? Elaborate. (5)
• Go to Formant > Formant listing: a box will appear with the time point at which the measurement was taken, and
the first four formants (a scripted alternative is sketched after these steps).
To remove a boundary that you have made - Highlight the boundary - Go to Boundary > Remove OR click
Alt+backspace.
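The Formant > Formant listing step can also be scripted. The sketch below assumes the praat-parselmouth Python package is installed and that its Sound.to_formant_burg and Formant.get_value_at_time calls behave as sketched (check the package documentation before relying on it); the file name and the time point are hypothetical.

# Minimal sketch: a formant listing via praat-parselmouth (assumed installed:
# pip install praat-parselmouth). File name and time point are hypothetical.
import parselmouth

snd = parselmouth.Sound("vowel.wav")       # hypothetical recording of a vowel
formant = snd.to_formant_burg()            # Burg formant analysis, default settings

time_point = 0.5                           # time of interest, in seconds
print("Formant listing at", time_point, "s:")
for n in range(1, 5):
    # Formant frequency in Hz at the chosen time (NaN if undefined there).
    print(f"F{n}:", formant.get_value_at_time(n, time_point))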
Q. stop voicing
There are three important acoustic correlates of voicing in stops: the voice bar, VOT, and the duration of the
preceding vowel. Record /apa/, /aba/, /ata/, /ada/, /apha/ and /atha/ and for each of the stops in the file, take the three
measurements. See the voicing or the voice bar by exploring features of stop. Also check the duration of the
preceding vowels. Note down the presence of voicing.
1. Palatalization [ʲ] is the raising of the front of the tongue (such as for the /i/ vowel).
2. Velarization (written as [ˠ] and [ ̴]) is the raising of the back of the tongue (such as an [u]-like sound).
3. Pharyngealization [ˤ] is the retracting of the root of the tongue towards the back wall of the pharynx.
4. Labialization [ʷ] is the rounding of the lips, such as Arabic [sʷ] and [tʷ].
One piece of evidence that the IPA chart is based on linguistic phonetics is the description of the blank cells on the chart
(those neither shaded nor containing a symbol) that indicate the combinations of categories that are humanly
possible but have not been observed so far to be distinctive in any language (e.g., a voiceless retroflex lateral
fricative is possible but has not been documented so far, so it is left blank). The shaded cells, on the other hand,
exhibit the sounds not possible at these places. Further, below the consonant chart is a set of symbols for consonants
made with different airstream mechanisms (clicks, voiced implosives, and ejectives). All these descriptions reflect
the potentialities of human speech sounds (as a linguistic community) not only showing the possible segments but
also the suprasegmental features and points related to the possible airstream mechanisms and even the diacritics for
various types of coarticulation and secondary articulatory gestures. The IPA chart is carefully documented (by
experts) and is continuously revised and updated.
In experimental phonetics and phonology, the studies of sounds include various latest experimental techniques and
computer software that are used under carefully designed lab experimentation. It is an important aspect of the
application of the latest technology by going beyond the simple acoustics and by working in sophisticated phonetic
labs in order to discover the hidden aspects of human speech. For example, questions such as ‘How is speech
produced and processed?’ are the focus of experimental phonetics (explore the speech chain as the beginning of
experimental phonetics as mentioned in Chapter-20 by Peter Roach). The latest trends under experimental phonetics
include brain functions in speech production and processing (by using the latest equipment – many special
instruments such as x-ray techniques), speech errors, neurolinguistics and the topics related to the developments
through computers – for speech analysis and synthesis.
Formants are the overtone resonances. Acoustically, in order to plot vowels on a chart, F1 and F2 are very important. We
need a wide-band spectrogram for measuring the formants (which are the important characteristics of sonorant speech sounds
– vowels). On a spectrogram, formants are thick bands (darkness corresponds to loudness; i.e. the darkest harmonics
are the ones that are the most amplified). These amplified harmonics form the formants that are characteristic of
sonorant speech sounds.
Q. Provide a word in which a bilabial changes to a labiodental. (RECHECK THIS ON YOUR OWN)
Bilabial sounds are very common in English (e.g., stops and the nasal: p, b, m). In some languages (such as Ewe of West Africa),
bilabial fricatives contrast with labiodental fricatives. The symbols for the voiceless and voiced bilabial fricatives are [ɸ, β].
These sounds are pronounced by bringing the two lips nearly together, so that there is only a slit between them. Ewe
also contrasts voiceless bilabial and labiodental fricatives.
‘Stress timed languages’ is a very general phrase used in phonetics to characterize the pronunciation of languages
displaying a particular type of rhythmic pattern that is opposed to that of syllable-timed languages. In stress-timed
languages, it is claimed that the stressed syllables recur at regular intervals of time (stress-timing) regardless of the
number of intervening unstressed syllables as in English. This characteristic is sometimes also referred to as
‘isochronism’, or isochrony. However, it is clear that this regularity is the case only under certain conditions, and the
extent to which the tendency towards regularity in English is similar to that in, say, other Germanic languages
remains unclear. In short, the division among the syllables is made on the basis of stressed and unstressed patterns. In
such languages, stress is realized both at word and sentence levels approximately changing the rhythmic patterns
(particularly at sentence level).
The formants that characterize different vowels are the result of the different shapes of the vocal tract. Any particle of
air, such as that in the vocal tract or that in a bottle, will vibrate in a way that depends on its size and shape.
Remember that the air in the vocal tract is set in vibration by the action of the vocal folds (in larynx). Every time the
vocal folds open and close, there is a pulse of acoustic energy (activation). Irrespective of the rate of vibration at
source (of the vocal folds), the air in the vocal tract will resonate at its own resonant frequencies as long as the position of the
vocal organs remains the same. Because of the complex shape of the filter (tract), the air will vibrate in more than
one way at once. So, the relationship between resonant frequencies and vocal tract shape is actually much more
complicated than the air in the back part of the vocal tract vibrating in one way and the air in other parts vibrating in
another. Here we will just remember the fact that in most voiced sounds, three formants are produced every time the
vocal folds (source) vibrate. Note an interesting point here that the resonance in the vocal tract (filter) is independent
of the rate of vibration of the vocal folds (source). In other words, the vocal folds may vibrate faster or slower,
giving the sound a higher or lower pitch, but the formants will be the same as long as the position of the tube (vocal
tract) is the same.
1. Create a textgrid:
• Hold down Ctrl and click on each file to highlight them both.
• Edit (in your display you should now see the waveform (top), the spectrogram (middle) and the textgrid (bottom)
corresponding to your sound file).
• Place the cursor at the beginning of the name on the spectrogram/waveform; a boundary line will show up.
• Click in the little circle at the top of the word tier in the Textgrid to create a boundary.
• To remove a boundary that you have made - Highlight the boundary - Go to Boundary > Remove OR click
Alt+backspace.
Q. Glides
Glides are also the sonorants (vowel-like) sounds as they have similar patterns (have formants). Take the first three
formants (F1, F2 and F3) from the middle of the sounds for glides (both for /w/ and /j/) and explore their acoustic
correlates. Carefully judge the center of these sounds (the midpoint of [w] and [j]). Analyze how similar the
formant structure of glides is to that of vowels and nasals. Draw lines to indicate F1, F2 and F3 and compare with vowels.
Q. Cardinal vowels:
In order to classify vowels (independently of the vowel system of a particular language), the English phonetician
Daniel Jones introduced a system in the early 20th century and worked out a set of vowels called the ‘cardinal
vowels’, comprising eight vowels to be used as reference points (so that other vowels could be related to them like
the corners and sides of a map). The cardinal vowel system is a chart or four-sided figure (the exact shape of which has
been changed from time to time), with eight corners, as can be seen on the IPA chart from the IPA website. It is a
diagram to be used both for rounded and unrounded vowels, and Jones proposed that there should be a primary and a
secondary set of cardinal vowels. The primary set includes eight vowels in total (from 1 to 8): the front unrounded
vowels [i, e, ε, a], the back unrounded vowel [ɑ] and the rounded back vowels [ɔ, o, u].
In spectrograms, time runs from left to right, the frequency of the components is shown on the vertical scale, and the
intensity of each component is shown by the degree of darkness. It is thus a display that shows, roughly speaking,
dark bands for concentrations of energy at particular frequencies—showing the source and filter characteristics of
speech.
Varieties having this feature are rhotic (in which /r/ is found in all phonological contexts)
Q. How tone (high vs low)change the meaning of the word [ma] in Mandarin? 2
For example, in Mandarin Chinese, [mā] said with a high level pitch means ‘mother’, while [má] said on a rising tone
means ‘hemp’. In other (non-tonal) languages, tone forms the central part of intonation, and the difference between,
for example, a rising and a falling tone on a particular word may cause a different interpretation of the sentence in
which it occurs. In the case of tone languages, it is usual to identify tones as being a property of individual syllables,
whereas an intonational tone may be spread over many syllables.
In order to understand VOT, the three types of plosive sounds are to be explained – voiced, voiceless and voiceless
aspirated. Plosives can be ranked from the most aspirated (largest positive VOT) at the top to the most voiced (largest
negative VOT) at the bottom. The Navajo aspirated stops have a very large VOT value that is quite exceptional (about 150 ms).
The only English lateral phoneme, at least in British English, is /l/ with allophones [l] as in led [lɛd] and [ɫ] as in bell
[bɛɫ]. In most forms of American English, initial [l] has more velarization than is typically heard in British English
initial [l]. In all forms of English, the air flows freely without audible friction, making this sound a voiced alveolar
lateral approximant. It may be compared with the sound [ɹ] in red [ɹɛd], which is for many people a voiced alveolar
central approximant. Laterals are usually presumed to be voiced approximants unless a specific statement to the
contrary is made.
Q. Name some materials that can assist in the teaching of language skills. 5
1. Explore already developed material available online from various sources (such as British Council and
other teacher resource centers); however, you must also be able to develop your own material (as
specifically required by your students).
2. You can develop your material related to the pronunciation teaching to the learners of English.
3. You can incorporate material related to the IPA text – transcription of the audio (listening) based activities
– by involving students on using dictionaries (ideally the phonetic dictionaries) in the classroom.
4. Movies and documentaries (such as from BBC - CNN - National Geographic channels) may also serve as
very effective resources for the teaching of pronunciation.
5. Finally, the real life material (for listening) and writing interaction from everyday language may also yield
tremendous results. The focus of material development should always be the enhancement of the
proficiency level of students.
In terms of its linguistic function, stress is often treated under two different headings: word (lexical) stress and
sentence (emphatic) stress. Lexical stress is basically related to the primary stress applied at syllable level (when
only one syllable is stressed) that has the ability to change the meaning and the grammatical category of a word, as in
the case of ‘IMport’ (noun) and ‘imPORT’ (verb).
Q. Why do you think teachers are mostly expected to perform action research in ELT? 3
Teachers are expected to facilitate action research which is the most rewarding and productive for their own
profession. For example, the phonetics of phonological speech errors if explored and shared by teachers (by
investigating their own practices) may lead to a very positive discussion in the academic circles (of research into
ELT – SLA). Similarly, topics such as learners’ performance and development (e.g., what do good speakers do?)
may yield useful results for teachers’ fraternity. Having said this, it is required from teachers (and student teachers)
to facilitate action research related to reading/listening issues, English reading strategies (e.g., in primary schools) –
(and their effectiveness), impact on pronunciation and many more. Research in the fields of phonetic theory and the
description with phonological, typological and broader implications may also be included in phonetics and phonology
specific action research.
Q. Phonotactics
In phonology, phonotactics is the study of the ways in which phonemes are allowed to combine in a particular
language. (A phoneme is the smallest unit of sound capable of conveying a distinct meaning.) Over time, a language
may undergo phonotactic variation and change. For example, as Daniel Schreier points out, "Old English
phonotactics admitted a variety of consonantal sequences that are no longer found in contemporary varieties".
They are different from vowels because they generally have lower amplitude; therefore, they behave like
consonants. Record the following sequences for our experimentation on sonorant sounds /ama/ - /ana/ - /aŋa/ - /wi/ -
/ju/. Having recorded these sequences, now start exploring the features of these sounds like the measurement of F1,
F2, F3 and also try to compare them with vowels.
Sound change is phonetically gradual and operates across the whole lexicon. It is a gradual shift as new instances
keep on adding.
Non-rhotic (such as Received Pronunciation where /r/ is only found before vowels as in ‘red’ and ‘around’).
Most American English speakers speak with a rhotic accent, but there are non-rhotic areas (e.g., the Boston area,
lower-class of New York and the Deep South).
1. Pitch range,
2. height and
3. direction
cry
create
crazy
price
practice
private
Q. Define the term "supra-segmental". 5
‘Supra’ means ‘above’ or ‘beyond’ and segments are sounds (phonemes). Suprasegmental is a term used in phonetics
and phonology to refer to a vocal effect (such as tone, intonation, stress, etc.) which extends over more than one
sound (segment) in an utterance.
Major suprasegmental features include pitch, stress, tone, intonation or juncture. Phonological studies can be divided
into two fields: segmental phonology and suprasegmental phonology.
Q. VOT (5)
It is a characteristic of voiced, voiceless and aspirated stop sounds, and there are very easy steps to calculate the
VOT. Record /apa/, /aba/, /ata/, /ada/, /apha/ and /atha/. Zoom in on your stop sounds so that you can analyze
the patterns of the stop sounds and find the difference among the three types of VOT (negative, zero and positive).
Measure the VOT of each stop and compare voiced/voiceless counterparts (p/b, t/d, k/g). Similarly, zoom in so that
you can clearly see the stop closure followed by the beginning of the vowel. You can measure the time between the
end of the stop closure (the beginning of the release burst) and the onset of voicing in the following vowel (the onset
of regular pitch pulses in the waveform). This is voice onset time or VOT.
Its most important function is to act as a signal of grammatical structure (e.g., creating patterns to distinguish among
grammatical categories), where it performs a role similar to punctuation (in written language). It may furnish far
more contrasts (for conveying meaning). Intonation also gives an idea about the syntactic boundaries (sentence,
clause and phrase level boundaries). It also provides the contrast between some grammatical structures (such as
questions and statements). For example, the change in meaning illustrated by ‘Are you asking me or telling me?’ is
regularly signaled by a contrast between rising and falling pitch. Note the role of intonation in sentences like ‘He’s
going, isn’t he?’ (= I’m asking you) opposed to ‘He’s going, isn’t he!’ (= I’m telling you) (These examples are given
by Peter Roach).
Like stops, nasal can also occur voiced or voiceless (for example, in Burmese, Ukrainian and French) though in
English and other most languages nasals are voiced. As voiceless nasals are comparatively rare, they are symbolized
simply by adding the voiceless diacritic [ ] under the symbol for the voiced sound. There are no special symbols for
voiceless nasals and it is written as /m / - a combination of the letter for the voiced bilabial nasal and a diacritic
indicating voicelessness.
Developing relevant material for the teaching of phonetics and phonology is an important task for aspiring teachers
of English language. For example, you can develop your material related to the pronunciation teaching to the
learners of English. You can incorporate material related to the IPA text – transcription of the audio (listening) based
activities – by involving students on using dictionaries (ideally the phonetic dictionaries) in the classroom.
The role of the memory for speech under the exemplar theory suggests that many instances of each word are stored
in memory and their phonetic variability is memorized rather than computed. Exemplar theory says that
generalization is also possible within productivity. Interestingly, productivity—the hallmark of linguistic knowledge
in the phonetic implementation approach—is the least developed aspect of the exemplar theory.
Q. How could I get findings while doing research on Pakistani regional languages? (2)
Pakistani regional languages are part of a rich linguistic region (the Himalaya Hindu Kush (HKH) region, one of
the richest regions in the world linguistically and culturally), which may be a very promising area for research in
the fields of areal and typological linguistics (description of linguistic features cross-linguistically). While working
on Pakistani regional languages, one may apply for funding from international organizations (e.g., organizations
for endangered languages and UNESCO).
The four types of tone usually used in the analysis of English intonation are:
1. fall,
2. rise,
3. fall–rise and
4. rise–fall
ATR (Advanced Tongue Root): a kind of articulation in which the movement of the root of the tongue expands the
front–back diameter of the pharynx.
Sound change is phonetically gradual and operates across the whole lexicon. It is a gradual shift as new instances
keep on adding.
F1 is inversely related to vowel height (which means that a smaller F1 frequency = a higher vowel)
AND F2 is related to the frontness or backness of a vowel (a smaller F2 frequency = a more back vowel).
There is one more way to confirm your pitch measurement by looking at the spectral slice (which gives the
component frequencies and their amplitudes).
• Click on the first (big) peak = H1 = F0 (Ignore any small spikes at the beginning; this might be noise).
Now note down the frequency of this peak (at the top of the vertical bar).
Use the confirmed pitch values and plot the pitch of each vowel on your excel sheet. Make sure you label your y-
axis using a scale that allows you to spread out your measurements as much as you can. Now draw the cluster chart
from the excel sheet and export to Word document and give the figure number and title.
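If you would rather script this step than use Excel, the sketch below (Python with matplotlib assumed to be installed; the period values are hypothetical stand-ins for your own measurements) uses the fact that f0 is simply the reciprocal of the period measured in the waveform:

import matplotlib.pyplot as plt

# Hypothetical periods (seconds) measured from one complete cycle of each vowel's waveform.
periods = {"heed": 0.0047, "hid": 0.0049, "head": 0.0051, "had": 0.0053}

# f0 = 1 / period (Hz); this should agree with the H1 peak read from the spectral slice.
f0_values = {vowel: 1.0 / period for vowel, period in periods.items()}

plt.plot(list(f0_values.keys()), list(f0_values.values()), "o-")
plt.ylabel("f0 (Hz)")
plt.title("Intrinsic pitch of each vowel")
plt.savefig("vowel_pitch.png")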
It has often been found that languages do not allow all phonemes to appear in any order (e.g., a native speaker of
English can figure out fairly easily that the sequence of phonemes /streŋθs/ makes an English word ('strengths') and
that the sequence /bleidg/ would be acceptable as an English word 'blage', although that word does not happen to
exist, but the sequence /lvm/ could not possibly be part of an English word).
In syllable timed languages, all syllables tend to have an equal time value (for example, their length or duration) and
the rhythm of the language is said to be syllable-timed. In these languages, syllables tend to occur at regular
intervals of time with fixed word stress. A classic example is Japanese in which all morae have approximately the
same duration. This tendency is contrasted with stress-timing, where the time between stressed syllables is said to
tend to be equal irrespective of the number of unstressed syllables in between. Czech, Polish, Swahili and Romance
languages (e.g., Spanish and French) are commonly cited as syllable-timed languages.
Velarization involves raising the back of the tongue (adding the /u/ vowel like quality). It can be considered as the
addition of an [u]-like tongue position (but remember that it is without the addition of the lip rounding). A typical
English example of velarization is the /l/ sound at the end of a syllable (as in words like kill, pill, sell and will) called
velarized or dark /l/ and may be written as [l̴]. The diacritics for velarization are both [ˠ] and [ ̴].
Ease of articulation (e.g., coarticulation and secondary articulation). This tendency to use language sounds with
maximum possible ease of articulation leads to change in the pronunciation of words. In co-articulations, for
example, a change in the place of the nasal and the following stop occurred in words such as improper and
impossible before these words came into English through Norman French. In words such as these, the [n] that occurs
in the prefix in- (as in intolerable and indecent) has changed to [m]. These changes are even reflected in the spelling.
In all this and in many similar historical changes, one or more segments are affected by adjacent segments so that
there is an economy of articulation.
We have discussed that the vocal tract as a tube with a uniform diameter has simultaneous resonance frequencies—
several different pitches at the same time. We have also discussed that these resonance frequencies change in a
predictable way when the tube is squeezed at various locations. This means that we can model the acoustics of
vowels in terms of perturbations of the uniform tube. For example, when the lips are rounded, the diameter of the
vocal tract is smaller at the lips than at other locations in the vocal tract. The theory of perturbation says that with the
acoustic effect of constriction at the lips, we can predict the formant frequency differences between rounded and
unrounded vowels. Keeping in mind this modification in the size and nature of vocal tract (for specific vowel
sounds), we can estimate how this perturbation theory works. So for each formant, there are locations in the vocal
tract where constriction will cause the formant frequency to rise, and locations where constriction will cause the
frequency to fall.
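Before any perturbation is applied, the resonances of the uniform tube itself can be estimated with the standard quarter-wavelength formula for a tube closed at the glottis and open at the lips, Fn = (2n − 1)c / 4L. The short sketch below only illustrates that textbook formula; the tract length and speed of sound are assumed round figures, not measured values.

# Resonances of a uniform tube closed at one end (glottis) and open at the other (lips).
SPEED_OF_SOUND = 35000   # cm/s, an approximate value for warm, moist air
TRACT_LENGTH = 17.5      # cm, an assumed typical adult vocal-tract length

for n in range(1, 4):
    resonance = (2 * n - 1) * SPEED_OF_SOUND / (4 * TRACT_LENGTH)
    print(f"F{n} of the neutral tube: {resonance:.0f} Hz")
# Roughly 500, 1500 and 2500 Hz; a constriction then raises or lowers each formant
# depending on where it falls along the tube.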
Q. How can an ELT teacher develop material for phonetics and phonology?
Developing relevant material for the teaching of phonetics and phonology is an important task for aspiring teachers
of English language. One should identify the specific needs of ELT activities in one's own context and explore already
developed material available online from various sources (such as the British Council and other teacher resource
centers); however, one must also be able to develop one's own material (as specifically required by students). For example, the
teacher can develop material related to the pronunciation teaching to the learners of English. She/ he can incorporate
material related to the IPA text – transcription of the audio (listening) based activities – by involving students on
using dictionaries (ideally the phonetic dictionaries) in the classroom. Movies and documentaries (such as from
BBC - CNN - National Geographic channels) may also serve as very effective resources for the teaching of
pronunciation. Finally, the real life material (for listening) and writing interaction from everyday language may also
yield tremendous results. The focus of material development should always be the enhancement of the proficiency
level of students.
In the production of a trill, the articulator is set in motion by the current of air, as in [r]. It is a typical sound of Scottish
English in words like 'rye' and 'row'.
A flap involves a front-and-back movement of the tongue tip, with the tongue curling up and back and then striking the
roof of the mouth as it moves forward. It is found in abundance in Indo-Aryan (IA) languages, e.g., [ɽ]. The typical flap
found in IA languages is a retroflex sound, and related retroflex examples are [ɽ], [ɖ] and [ɳ].
Acoustic Phonetics is the study of detailed physical properties of sound we produce. It generally uses tools which
read the changes in air pressure that the sound creates. Each sound has its different sound quality, which depends on
the source filter, that is our speech organs. Each sound has its own f0, f1, f2 and f3 (formants), depending on the
source modifier. f0 deals with the fundamental frequency, f1 gives the information about the pharyngeal cavity, f2
about the oral cavity and f3 about the position of the lips while the sound was produced.
Neurolinguistics is the study of the neural mechanisms in the human brain that control the comprehension,
production, and acquisition of language. As an interdisciplinary field, neurolinguistics draws methods and theories
from fields such as neuroscience, linguistics, cognitive science, communication disorders and neuropsychology.
Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental
techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models
in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the
processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language.
Neurolinguists study the physiological mechanisms by which the brain processes information related to language,
and evaluate linguistic and psycholinguistic theories, using aphasiology, brain imaging, electrophysiology,
and computer modeling.
In this theory, the tract is represented using a source-filter model and several devices have been devised to
synthesize speech in this way. The idea is that the air in the vocal tract acts like the air in an organ pipe, or in a
bottle. Sound travels from a noise-making source (i.e., the vocal fold vibration) to the lips. Then, at the lips, most of
the sound energy radiates away from the lips for a listener to hear, while some of the sound energy reflects back into
the vocal tract. The addition of the reflected sound energy with the source energy tends to amplify energy at some
frequencies and damp energy at others, depending on the length and shape of the vocal tract. The vocal folds (at
larynx) are then a source of sound energy, and the cavity (vocal tract - due to the interaction of the reflected sound
waves in it) is a frequency filter altering the timbre of the vocal fold sound. Thus this same source-filter mechanism
is at work in many musical instruments. In the brass instruments, for example, the noise source is the vibrating lips
in the mouthpiece of the instrument, and the filter is provided by the long brass tube.
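A very rough source-filter sketch in Python (numpy and scipy assumed to be installed; the formant frequencies and bandwidths are illustrative, not taken from the course): an impulse train at f0 stands in for the vocal-fold source, and each formant is modelled as a simple second-order resonator that filters that source.

import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

fs = 16000            # sampling rate (Hz)
f0 = 120              # fundamental frequency of the source (Hz)
duration = 0.5        # seconds
formants = [(500, 80), (1500, 90), (2500, 120)]   # (frequency, bandwidth) pairs in Hz

# Source: a train of glottal impulses, one every 1/f0 seconds.
source = np.zeros(int(fs * duration))
source[::int(fs / f0)] = 1.0

# Filter: a cascade of second-order resonators, one per formant.
signal = source
for freq, bw in formants:
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]   # resonator poles set the formant peak
    b = [sum(a)]                                 # scale for unity gain at 0 Hz
    signal = lfilter(b, a, signal)

signal /= np.abs(signal).max()
wavfile.write("synthesized_vowel.wav", fs, (signal * 32767).astype(np.int16))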
Intonation refers (very) simply to the variations in the pitch of a speaker’s voice (f0) used to convey or alter meaning
but in its broader and more popular sense intonation covers much of the same field as ‘prosody’ where variations in
such things as voice quality, tempo and loudness are included. Intonation as a suprasegmental feature performs
several functions in a language. Its most important function is to act as a signal of grammatical structure (e.g.,
creating patterns to distinguish among grammatical categories), where it performs a role similar to punctuation (in
written language). It may furnish far more contrasts (for conveying meaning). Intonation also gives an idea about the
syntactic boundaries (sentence, clause and phrase level boundaries).
Intonation also gives an idea about the syntactic boundaries (sentence, clause and phrase level boundaries). It also
provides the contrast between some grammatical structures (such as questions and statements). For example, the
change in meaning illustrated by ‘Are you asking me or telling me?’ is regularly signaled by a contrast between
rising and falling pitch. Note the role of intonation in sentences like ‘He’s going, isn’t he?’ (= I’m asking you)
opposed to ‘He’s going, isn’t he!’ (= I’m telling you) (These examples are given by Peter Roach).
Q. When formants are not steady in Praat, what will you do?
• Get rid of Praat's formant tracking: Formant > Show formants (unclick).
• Place your cursor in the center of each formant, in the middle of the vowel.
• A red horizontal bar should appear with the frequency value on the left (in red). (A scripted alternative is sketched below.)
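As a scripted alternative, the praat-parselmouth package (a Python wrapper around Praat; assumed to be installed, and the file name here is hypothetical) can run the same Burg formant analysis and report the values at the vowel midpoint:

import parselmouth

snd = parselmouth.Sound("vowel.wav")            # a recording containing a single vowel
formant_track = snd.to_formant_burg(maximum_formant=5500)

midpoint = snd.duration / 2                     # measure in the middle of the vowel
for i in (1, 2, 3):
    value = formant_track.get_value_at_time(i, midpoint)
    print(f"F{i} at the vowel midpoint: {value:.0f} Hz")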
Features may also be studied further as a part of language universals and then their role as language specific sub
sets.
'Secondary' articulation is an articulatory gesture with a lesser degree of closure occurring at approximately the
same time as another (primary) gesture. It is different from co-articulation, which takes place at the same time and
with the same value (as an equal-level gesture). Thus it is appropriate to consider four types of secondary articulations
in conjunction with vowels because they can usually be described as added vowel-like articulations, including:
'palatalization' (adding a high front tongue gesture as in the sound /i/), velarization (raising of the back of the tongue),
pharyngealization (the superimposition of a narrowing of the pharynx) and labialization (the addition of lip-rounding).
Acoustic Phonetics is the study of detailed physical properties of sound we produce. It generally uses tools which
read the changes in air pressure that the sound creates. Each sound has its different sound quality, which depends on
the source filter, that is our speech organs. Each sound has its own f0, f1, f2 and f3 (formants), depending on the
source modifier. f0 deals with the fundamental frequency, f1 gives the information about the pharyngeal cavity, f2
about the oral cavity and f3 about the position of the lips while the sound was produced. Auditory phonetics is just
the other side of the coin for this study. It deals with the study of these articulated sound characteristics from the
perception perspective. Auditory phonetics deals with the listener at a broader aspect. The perceived f0 is measured
in terms of pitch and calculated in Mel or bark scales.
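For reference, one common textbook formulation of the Hz-to-Mel and Hz-to-Bark conversions mentioned above is sketched below (the exact constants vary slightly between authors, so treat these as illustrative rather than as the course's official formulas):

import math

def hz_to_mel(f):
    # O'Shaughnessy's widely cited mel formula.
    return 2595.0 * math.log10(1.0 + f / 700.0)

def hz_to_bark(f):
    # A Zwicker & Terhardt style approximation of the Bark scale.
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

for f in (120, 240, 1000, 3000):
    print(f"{f} Hz -> {hz_to_mel(f):.1f} mel, {hz_to_bark(f):.2f} Bark")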
Good teachers are expected to be active researchers and therefore busy in updating themselves about the latest
researches and teaching methodologies around the world. It is also a pedagogical challenge for teachers to keep
themselves updated by exploring pedagogical and technological challenges for ELT experts (in their own contexts
and internationally). For example, the aspects of Task Based Learning and Teaching (TBLT) as a golden method for
second language acquisition (SLA) may be effective in the Pakistani context if explored by ELT practitioners. Teachers
are agents of change; they must read research studies, carry out research, and explore their issues and solutions. A
good way is to keep reading teachers' digests and journals and to participate in the online discussions
by teaching associations.
Q. filter theory
It is a model of speech (e.g., vowel) production. According to this theory, source refers to the waveform of the
vibrating larynx. Its spectrum is rich in harmonics, which gradually decrease in amplitude as their frequency
increases. The various resonance chambers of the vocal tract, especially the movements of the tongue and lips, act
on the laryngeal source in the manner of a filter, reinforcing certain harmonics relative to others. Thus the
combination of these two elements (larynx as source and cavity as filter) is known as the source-filter model of
speech (e.g., vowel) production.
Q. articulatory gestures:
1. Bilabial gesture – (e.g., stops and nasal: p, b, m). The symbols for the voiceless and voiced bilabial
fricatives are [ɸβ]. These sounds are pronounced by bringing the two lips nearly together, so that there is
only a slit between them.
2. Labiodental fricatives – [f, v]. In English, a labiodental nasal, [ɱ], may occur when /m/ occurs before /f/,
as in emphasis or symphony.
3. Dental - e.g. dental fricatives [θ, ð] but there are no dental stops, nasals, or laterals except allophonically
realized (before [θ, ð] as in eighth, tenth, wealth). Many speakers of French, Italian, and other languages
(such as Urdu, Pashto and Sindhi) typically have dental stops such as [t̪, d̪].
4. Alveolar are very common targets and stops, nasals, and fricatives all occur in English and in many other
languages at alveolar as a target of articulatory gestures (e.g., t, d, n, l, r., etc.).
5. Retroflex is a very common sound type in many Pakistani languages which is made by curling the tip of the
tongue up and back so that the tongue tip moves during the retroflex sounds such as [ʈ, ɖ, ɳ, ɽ].
6. Palato-alveolar and palatal are also possible articulatory gestures commonly found in world languages.
Similarly, velar sounds found in Urdu and other Pakistani languages need to be mentioned here including
[x, ɣ], which are velar fricatives. The gestures for pharyngeal sounds (such as the Arabic pharyngeal fricative [ʕ]) and
epiglottal sounds (such as the epiglottal fricative [ʢ]) involve pulling the root of the tongue or the epiglottis back
toward the back wall of the pharynx.
Take the measurement of the first two formants and plot those values on a chart using the Excel spreadsheet. By
putting F1 and F2 in separate columns, write the formant values associated with different vowels (giving vowels in
the first column, the difference between F2 and F1 in the second column and F1 in the third). After putting the data
in the Excel sheet, use the Scatter chart from the same spreadsheet. Further, in order to make it correspond with the
required values for F1 and F2, reverse the values for both formants (on both axes – Y and X). Now the zero for both
F1 and F2 is at the right corner. Watch the video and you will find how F1 is inversely related to the height of the
vowel and the difference between F2 and F1 to the frontness of the vowels. Once completed, export the chart to your
Word document and give it the number and title accordingly.
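The same chart can be produced without Excel. The matplotlib sketch below (the formant values are illustrative, not measured data) plots F2 − F1 against F1 and reverses both axes so that the result lines up with the traditional vowel quadrilateral:

import matplotlib.pyplot as plt

# Illustrative F1/F2 values (Hz) for a few vowels; replace with your own measurements.
vowels = {"i": (280, 2250), "ɪ": (400, 1920), "ɛ": (550, 1770),
          "æ": (690, 1660), "ɑ": (710, 1100), "u": (310, 870)}

for symbol, (f1, f2) in vowels.items():
    plt.scatter(f2 - f1, f1)
    plt.annotate(symbol, (f2 - f1, f1))

plt.gca().invert_xaxis()   # back vowels (small F2 - F1) end up on the right
plt.gca().invert_yaxis()   # low vowels (large F1) end up at the bottom
plt.xlabel("F2 - F1 (Hz)")
plt.ylabel("F1 (Hz)")
plt.title("Vowel chart from formant measurements")
plt.savefig("vowel_chart.png")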
Speech is quite diverse and complex, particularly when it comes to the phonetics of individuals. It is understandable
that different speakers of the same language will have somewhat different productions of speech depending upon
their vocal tract physiology and their own habits of speech motor coordination and more importantly due to their
memory of speech. As per the phonetic implementation view, words are stored in speech memory in their most basic
phonetic form and used when needed.
Q. Tonal language:
Tone (in phonetics and phonology) as a suprasegmental feature refers to an identifiable movement (variation) or
level of pitch that is used in a linguistically contrastive way. In tone (tonal) languages, the linguistic function of tone
is to change the meaning of a word. For example, in Mandarin Chinese, [mā] said with a high pitch means 'mother'
while [má] said on a rising tone means 'hemp'. In other (non-tonal) languages, tone forms the central part of
intonation, and the difference between, for example, a rising and a falling tone on a particular word may cause a
different interpretation of the sentence in which it occurs. In the case of tone languages, it is usual to identify tones
as being a property of individual syllables, whereas an intonational tone may be spread over many syllables. In the
analysis of English intonation, tone refers to one of the pitch possibilities for the tonic (or nuclear) syllable. For
further analysis, a set of four types of tone is usually used (fall, rise, fall–rise and rise–fall) though others are also
suggested by various experts.
It is sometimes claimed by the experts that different languages (and dialects) have different types of rhythmic
patterns. Languages of the world are, therefore, divided into two broad categories: stress-timed languages and
syllable timed languages. But this is basically the division of languages on the basis of their modes of timing (i.e.,
stress vs. syllable timed languages). Stress timed languages have stress as their dominating rhythmic feature
meaning that these languages seem to be timed according to the stressed patterns (the division among the syllables is
made on the basis of stressed and unstressed patterns e.g., English and German languages). In other words, in stress
timed languages, stressed syllables occur with regular intervals and their units of timing are perceived accordingly.
Stress-timed rhythm is one of these rhythmical types, and is said to be characterized by a tendency for stressed
syllables to occur at equal intervals of time.
Q. Nasalization:
The speakers of Urdu, Punjabi and many other Pakistani regional languages learn to produce a variety of nasal
vowels as part of their mother tongue and face no issue in learning nasalization in vowels. However, the speakers of
other languages (such as English, which does not have nasal vowels) have to learn this feature of vowels by starting
to say the low vowel /æ/ as in man while keeping the soft palate lowered. Many languages have contrasts between
nasal and oral vowels, including French and Urdu. Urdu and Punjabi have many nasal vowels. Urdu has seven nasal
vowels, as in /he/ (meaning 'is') vs. /hẽ/ (meaning 'are'). Nasalization in vowels is a common feature of Indo-Aryan
languages. In the IPA chart, the diacritic used for nasalization is the symbol [ ̃], called tilde (used above the
phonetic symbol to show the nasality).
Explaining vowels (particularly the vowels of non-European languages), a set of secondary cardinal vowels (as a
precise set of references) was introduced by the same British phonetician, Daniel Jones (1881-1967). Secondary
cardinal vowels are easy to understand in connection with the primary cardinal vowel system. The main difference
between primary and secondary cardinal vowels is related to lip-rounding as in some languages the feature of lip-
rounding is possible for front vowels. By reversing the lip position (in comparison with primary cardinal vowels),
the secondary series of vowel types is produced (e.g., rounding the lips for the front vowels).
Man From Nowhere
36. One should observe gap in pattern with burst for Voiceless and sharp formant beginning for
voiced stops
37. Which of the following vowel is uttered by rounding lips? /u/
38. Which of the following features (place of articulation) best describes the stop /p/? BILABIAL
39. Which of the following features (place of articulation) best describes the stop /g/? VELAR
40. The activities of ELTR are planned and sponsored by the Higher Education Commission (HEC) of Pakistan
41. Two native speakers of a language will always speak WITH SOME VARIATION
42. Spanish has a very simple system contrasting only FIVE vowel sounds.
43. In English, sounds /w/ and /j/ are considered SEMI VOWELS
44. Which of the following symbols represents labialization as a secondary gesture? [ʷ]
45. Speech is quite diverse and complex particularly when it comes to the phonetics of
INDIVIDUAL
46. Phonotactically speaking, which of the following sequences of phonemes will be acceptable in
English bleidg.
47. Which of the following words uses a seven phoneme pattern of syllable? STRENGTHS
48. PRAAT software is particularly useful for the ACOUSTIC analysis.
49. The measurements are taken from the middle of a vowel sound because it is the NUCLEUS portion.
50. F1 is inversely related to the HEIGHT of the vowel.
51. The difference between F2 and F1 is related to the FRONTNESS of the vowel.
52. The features [voiceless] and [breathy voice] are studied under the cover term 'Laryngeal'.
53. Radical is a cover term for [pharyngeal] and [epiglottal] articulations made with the ROOT OF THE TONGUE.
54. In the production of a plosive like [p], which of the following is not a sub-task? CLOSE THE TEETH
55. Sonorants are VOWEL-like sounds.
56. The question that is mainly answered by the contrastive function of distinctive feature theory is "how is it different?"
57. Which of the following functions of the distinctive feature theory answers the question "what is it"? DESCRIPTIVE
58. Which of the following is considered a GOLDEN method of SLA? Task Based Learning and
Teaching (TBLT)
59. Sonorants are sounds that basically consist of nasals and glides.
60. English Language Teaching Reforms (ELTR) are the projects of the Higher Education
Commission (HEC) of Pakistan
61. In spectrograms, time runs from left to right, and the frequency of the components is shown on the vertical scale.
Q. How does tone (high vs. low) change the meaning of the word [ma] in Mandarin? 2 Marks
For example, in Mandarin Chinese, [mā] said with a high pitch means 'mother' while [má] said on a rising tone
means 'hemp'. In other (non-tonal) languages, tone forms the central part of
intonation, and the difference between, for example, a rising and a falling tone on a particular word
may cause a different interpretation of the sentence in which it occurs. In the case of tone
languages, it is usual to identify tones as being a property of individual syllables, whereas an
intonational tone may be spread over many syllables.
Q. Types of SAGs
The four types of possible secondary articulatory gestures related to vowel quality
• Palatalization (can come as short or long question too) is the addition of a high
front tongue gesture (as in sound like [i]) to another main (primary) gesture. The
diacritic used for palatalization is the small [ʲ] superimposed above another symbol
(for primary gesture). The terms palatalization (a process whereby the place of an
articulation is shifted nearer to the center of the hard palate) and palatalized (when
the front of the tongue is raised close to the palate while an articulatory closure is
made at another point in the vocal tract) are sometimes used in a slightly different
way. A palatalized consonant has a typical /j/-like (similar to /i/ vowel) quality.
• Velarization involves raising the back of the tongue (adding the /u/ vowel like
quality). It can be considered as the addition of an [u]-like tongue position (but
remember that it is without the addition of the lip rounding). A typical English
example of velarization is the /l/ sound at the end of a syllable (as in words like
kill, pill, sell and will) called velarized or dark /l/ and may be written as [l̴]. The
diacritics for velarization are both [ˠ] and [ ̴].
• Pharyngealization, which is the superimposition of a narrowing of the pharynx. The
IPA diacritics for symbolizing pharyngealization are [ ̴] (as for velarization) and
[ˤ] (the superimposition of the symbol for a pharyngeal sound).
• Labialization, which is the addition of lip rounding (written as [ʷ]) to another
primary articulation, such as Arabic /tʷ/ and /sʷ/. Nearly all kinds of consonants can
have added lip rounding, including those that already have one of the other
secondary articulations (such as velarization and palatalization).
Q. Phonology fields?
Phonological studies can be divided into two fields: segmental phonology and suprasegmental
phonology. Suprasegmental features have been extensively explored in the recent decades and
many theories have been constituted related to the application and description of these features.
Q. lexical stress....5
Lexical stress, or word stress, is the stress placed on a given syllable in a word. The position of
lexical stress in a word may depend on certain general rules applicable in the language or dialect ,
but in other languages, it must be learned for each word, as it is largely unpredictable. In some
cases, classes of words in a language differ in their stress properties. Lexical stress is basically
related to the primary stress applied at syllable level (when only one syllable is stressed) that has
the ability to change the meaning and the grammatical category of a word as in the case of 'IMport'
(noun) and 'imPORT' (verb).
Sentence stress is applied on one word (rather than a syllable) in a sentence thus making that word
more prominent (stressed) than the rest of the words in the sentence. This type of stress has its role
in intonation patterns and rhythmic features of the language showing specific emphasis on the
stressed word (which may be highlighting some information in the typical context). In order to
perceive the nature of sentence level stress, read the following sentences with shifting the stress
accordingly and judge the shift in emphasis (and its role in the context):
• Did YOU drive to Peshawar last weekend?
• Did you DRIVE to Peshawar last weekend?
• Did you drive to PESHAWAR last weekend?
• Did you drive to Peshawar LAST weekend?
Q. IPA
While discussing the key elements of linguistic phonetic description, we need to consider the
International Phonetic Alphabet (abbreviated as IPA). IPA is the set of symbols and diacritics that
have been officially approved by the International Phonetic Association. The association publishes a chart comprising a number
of separate charts. At the top inside the front cover, you will find the main consonant chart. Below
it is a table showing the symbols for non-pulmonic consonants, and below that is the vowel chart.
Inside the back cover is a list of diacritics and other symbols, and a set of symbols for
suprasegmental features (events) such as tone, intonation, stress, and length. Remember that the
IPA chart does not try to cover all possible types of phonetic descriptions (e.g., all the individual
strategies for realizing linguistic phonological contrasts, or gradations in the degree of co-
articulation between adjacent segments, etc.). Instead, it is limited to those possible sounds that
can have linguistic significance in that they can change the meaning of a word in some languages.
So the description of IPA is based on the linguistic phonetics of the community.
Many phoneticians disagree with the basic idea of timing value. They are of the view that there
are three dimensions: a. fixed word stress (mainly found in Romance languages), b. variable word
stress (mainly found in languages such as English and German) c. fixed phrase stress (phrase as a
third possibility as exhibited by Japanese) and they want to categorize languages on the basis of
these three patterns.
Q. Importance of Spectrograms
1. Using Praat (or any other software) and spectrograms is particularly useful when a
researcher is working on a problem related to the nature (physical properties) of a
sound (e.g., is it a phoneme or an allophone?).
2. It increases our understanding of speech sounds and their behavior in different forms (in
isolation or as part of connected speech).
3. Practice on spectrograms gives us the opportunity to learn about
the characteristics of speech sounds.
4. It is also important for experts who are working on phonetic aspects of speech as signal
processing.
5. These are also used as the part of techniques in speech recognition.
Q. Stop/retroflex spectrogram
Retroflex – general lowering of the third and fourth formants. Stop – gap in pattern (with burst for
voiceless and sharp formant beginning for voiced stops).
Q. Explain Phonotactics
In phonology, phonotactics is the study of the ways in which phonemes are allowed to combine in a
particular language. Phonotactic patterns can refer to morphological structure; and phonological
patterns which are sensitive to morphology (e.g. affixation, etc.) are represented only in the
morphological component of the grammar (not in the phonology).
The three types of VOT:
1. Negative
2. Zero
3. Positive
Q. Define bilabial gesture and give some examples. What are consonantal gestures?
Bilabial gestures are very common in English (e.g., stops and nasal: p, b, m). In some languages (such
as Ewe of West Africa), bilabial fricatives contrast with labiodental fricatives. The symbols for the
voiceless and voiced bilabial fricatives are [ɸ, β]; these sounds are pronounced by bringing the two
lips nearly together, so that there is only a slit between them. Ewe also
contrasts voiceless bilabial and labiodental fricatives. In phonetics and phonology,
speech sounds (segments) using basic units of contrast are defined as gestures – they
are treated as the abstract characterizations of articulatory events with an intrinsic time
dimension. Thus sounds (segments) are used to describe the phonological structure of
specific languages and account for phonological variation. In this type of description in
phonetics and phonology, sounds are the underlying units which are represented by
classes of functionally equivalent movement patterns (gestures).
Q.explain VOT
Voice Onset Time (VOT) is a term used in phonetics referring to the point in time at
which vocal fold vibration starts in relation to the release of a closure (during the
production of plosive sounds). In order to understand VOT, the three types of plosive
sounds are to be explained – voiced, voiceless and a voiceless aspirated sound.
Major suprasegmental features include pitch, stress, tone, intonation or juncture.
These features are meaningful when they are applied above segmental level (on more
than one segment).
A set of four types of tone is usually used (fall, rise, fall–rise and rise–fall).
Many of the American vowels are essentially different from those of British English – and
that is why it is a different English (compare Standard American Newscaster English
with British English as spoken by BBC newscasters). When you carefully listen to
American vowels [i, ɪ, ɛ, æ] as in words heed, hid, head, had (spoken by a native
speaker of English) these vowels sound as if they differ by a series of equal steps. Even
some Eastern American speakers would make a distinct diphthong in heed so that their
[i] is really a glide (diphthong) starting from almost the same vowel as that in hid.
Similarly, the back vowels also vary considerably in both forms of English (e.g., many
Californians do not distinguish between the vowels in words father and author).
Similarly, the vowels [ʊ, u] as in good and food also vary considerably as they have a
very unrounded vowel in good and a rounded but central vowel in food. In short,
American English is in some ways distinct from British English, and as students of
phonetics and phonology we should try to explore these differences.
Q. Feature hierarchy....3
Feature hierarchy is an important concept in phonetics and phonology which is based on the
properties and features of sounds. In a very general sense, a feature may be tied to a particular
articulatory maneuver or acoustic property. For example, the feature [bilabial] indicates not only
that the segment is produced with lips but also that it involves both of them. Such features (in
phonetics and phonology) are listed in a hierarchy with nodes in the hierarchy defining ever more
specific phonetic properties.
Sounds are divided in terms of their supra-laryngeal and laryngeal characteristics, and their
airstream mechanism. The supra-laryngeal characteristics can be further divided into those for
place (of articulation), manner (of articulation), the possibility of nasality, and the possibility of
being lateral. Thus these features are used for classifying speech sounds and describing them
formally.
Q. Name the five major features based on the major regions of vocal tract. 5
There are five features in total (i.e., Labial, Coronal, Dorsal, Radical, and Glottal). The first three of
these features are related to tongue position, whereas Radical is a cover term for [pharyngeal] and
[epiglottal] articulations made with the root of the tongue. The feature 'Glottal', on the other
hand, is based on being [glottal], to cover various articulations such as [h]. If we are to have a
convenient grouping of the features for consonants, we have to recognize that Supra-Laryngeal
features must allow for the dual nature of the actions of the larynx and include Glottal as a place
of articulation. Remember that a sound may be articulated at more than one of the regions Labial,
Coronal, Dorsal, Radical, and Glottal. Within the five general regions, 'Coronal' articulations can
be split into three mutually exclusive possibilities: Laminal (i.e., blade of the tongue), Apical (i.e.,
tip of the tongue), and Sub-apical (i.e., the under part of the blade of the tongue). Thus the major
regions may be subdivided into subregions on the basis of their features.
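One informal way to picture this grouping is as a nested structure. The Python dictionary below is only a toy sketch of the regions and sub-regions listed above, not a formal feature geometry:

# Toy representation of the five place regions and their sub-regions described above.
place_features = {
    "Labial": [],
    "Coronal": ["Laminal", "Apical", "Sub-apical"],   # three mutually exclusive sub-features
    "Dorsal": [],
    "Radical": ["pharyngeal", "epiglottal"],          # articulations made with the tongue root
    "Glottal": [],
}

for region, subregions in place_features.items():
    detail = ", ".join(subregions) if subregions else "no further subdivision listed here"
    print(f"{region}: {detail}")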
• Language universal features: Broad phonetic classes (e.g., aspirated vs. unaspirated) derive
from physiological constraints on speaking or hearing, but their detailed phonetic definitions are
arbitrary—a matter of community norms.
• Speaking styles: No one style is basic (from which others are derived), because all are stored in
memory. Bilingual speakers store two systems.
• Generalization and productivity: Exemplar theory says that generalization is also possible
within productivity. Interestingly, productivity—the hallmark of linguistic knowledge in the
phonetic implementation approach—is the least developed aspect of the exemplar theory.
• Sound change: Sound change is phonetically gradual and operates across the whole lexicon. It
is a gradual shift as new instances keep on adding.
Q. Ease of articulation
In order to explain the sound patterns of a language, the views of both speaker and listener are
considered. Both of them like to use the least possible articulatory effort (except when they are
trying to produce very clear speech), and there are a large number of assimilations, with some
segments left out, and others reduced to a minimum. Thus a speaker uses language with ease of
articulation (e.g., co-articulation and secondary articulation). This tendency to use language sounds
with maximum possible ease of articulation leads to change in the pronunciation of words.
Q. What specific terms are used for the consonant clusters CC and CCC in a syllable?
Pitch can vary in:
• Pitch range
• Height
• Direction
Syllable structure can be of three types: 'simple' (CV), 'moderate' (CVC) and
'complex' (with consonant clusters at the edges) such as CCVCC and CCCVCC (where V
means vowel and C stands for consonant). Two-consonant (CC) clusters are termed bipartite and
three-consonant (CCC) clusters tripartite.
Q.Semivowels
Most of the world's languages contain a class of sounds that functions in a way
similar to consonants but is phonetically similar to vowels (e.g., in English, /w/ and /j/ as
in 'wet' and 'yet'). When they are used in the first part of syllables (at onset), they
function as consonants. But if they are pronounced slowly, they resemble (in quality)
the vowels [u] and [i] respectively. These sounds are called semivowels, which are
also termed as approximants today.
Q. Lexical stress
Lexical stress, or word stress, is the stress placed on a given syllable in a word.
The position of lexical stress in a word may depend on certain general rules applicable
in the language or dialect , but in other languages, it must be learned for each word, as
it is largely unpredictable. In some cases, classes of words in a language differ in their
stress properties. Lexical stress is basically related to the primary stress applied at
syllable level (when only one syllable is stressed) that has the ability to change the
meaning and the grammatical category of a word as in the case of 'IMport' (noun) and 'imPORT' (verb).
In Mandarin Chinese, [mā] said with a high pitch means 'mother' while [má] said on a rising tone means 'hemp'.
Q. Nasal stops.
Like stops, nasals can also occur voiced or voiceless (for example, in Burmese,
Ukrainian and French) though in English and most other languages nasals are voiced.
As voiceless nasals are comparatively rare, they are symbolized simply by adding the
voiceless diacritic [ ̥] under the symbol for the voiced sound. There are no special
symbols for voiceless nasals, and a voiceless bilabial nasal is written as [m̥] – a combination
of the letter for the voiced bilabial nasal and a diacritic indicating voicelessness.
Zoom into a small piece of the waveform in the middle of the vowel and measure
the period by highlighting one complete cycle and noting the time associated with it (in
the panel above the waveform).
Q. Radical hierarchy
Radical is a cover term for [pharyngeal] and [epiglottal] articulations made with the
root of the tongue.
Vowel Features
• high: the tongue is raised towards the hard or soft palate
• low: the tongue is lowered away from the hard or soft palate
• front: made with the front (blade) of the tongue raised
• back: made with the body of the tongue (dorsum) raised
Q.Formant frequencies
The first two frequencies are important here. The first formant (F1) is inversely
related to the height of a vowel whereas the second formant (F2) is related to the
frontness of a vowel sound.
When the first two formants are taken, the vowels of a language can be plotted on a
chart and the structure is very much related to the traditional description of vowel
sounds.
Q. How are the oral stops produced? Provide IPA symbols for any three English oral stops. (5)
In phonetics, a stop, also known as a plosive or oral occlusive, is a consonant in which the vocal
tract is blocked so that all airflow ceases. The occlusion may be made with the tongue blade ([t],
[d]), tongue body ([k], [g]), lips ([p], [b]), or glottis ([ʔ]). Stops contrast with nasals, where the vocal
tract is blocked but airflow continues through the nose, as in /m/ and /n/, and with fricatives, where
partial occlusion impedes but does not block airflow in the vocal tract.
Q.Phonotactics:
In phonology, phonotactics is the study of the ways in which phonemes are allowed to combine in
a particular language. (A phoneme is the smallest unit of sound capable of conveying a distinct
meaning.) Over time, a language may undergo phonotactic variation and change. For example, as
Daniel Schreier points out, "Old English phonotactics admitted a variety of consonantal sequences
that are no longer found in contemporary varieties."
Non-rhotic accents (such as Received Pronunciation) are those in which /r/ is only found before vowels,
as in 'red' and 'around'. Most American English speakers speak with a rhotic accent, but there are non-rhotic
varieties as well.
Q. What is Neurolinguistics.
Neurolinguistics does not have a formal existence as a separate discipline. Its subject matter is the
relationship between the human nervous system and language. The primary goal of the field of
neurolinguistics is to understand and explicate the neurological bases of language and speech, and to
characterize the mechanisms and processes involved in language use. The study of neurolinguistics is
broad-based; it includes language and speech impairments in the adult aphasias and in children, as
well as reading disabilities and the lateralization of function as it relates to language and speech
processing.
Q.Name following.
Spectral slices (or cross-sections) show the amplitude/frequency spectrum at a selected moment in
the signal. They are useful as aids to comparing local spectral events or measuring spectral features.
Gap in pattern (with burst for voiceless and sharp formant beginning for voiced stops)
A nasal consonant is a consonant whose production involves a lowered velum and a closure in the
oral cavity, so that air flows out through the nose. Examples of nasal consonants are [m], [n], and
[ŋ] (as in think and sing).
Q why do you think teacher are mostly expected to facilitate action research in
ELT.
Teachers are expected to facilitate action research which is the most rewarding and productive for
their own profession. For example, the phonetics of phonological speech errors if explored and
shared by teachers (by investigating their own practices) may lead to a very positive discussion in
the academic circles (of research into ELT - SLA). Similarly, topics such as learners' performance
and development (c.g., what do good speakers do?) may yield useful results for teachers'
community.
A unit of pronunciation having one vowel sound, with or without surrounding consonants, forming
the whole or a part of a word; for example, there are two syllables in water and three in inferno.
In experimental phonetics and phonology, the studies of sounds include various latest experimental
techniques and computer software that are used under carefully designed lab experimentation. It
is an important aspect of the application of the latest technology by going beyond the simple
acoustics and by working in sophisticated phonetic labs in order to discover the hidden aspects of
human speech. For example, questions such as 'How is speech produced and processed?' are the
focus of experimental phonetics (explore the speech chain as the beginning of experimental
phonetics as mentioned in Chapter-20 by Peter Roach). The latest trends under experimental
phonetics include brain functions in speech production and processing (by using the latest
equipment – many special instruments such as x-ray techniques), speech errors, neurolinguistics
and the topics related to the developments through computers – for speech analysis and synthesis.
Q. Name some materials that can assist in the teaching of the language skills.
Developing relevant material for the teaching of phonetics and phonology is an important task for
aspiring teachers of English language.
1. Explore already developed material available online from various sources (such as British
Council and other teacher resource centers); however, you must also be able to develop your own
material (as specifically required by your students).
2. You can develop your material related to the pronunciation teaching to the learners of English.
3. You can incorporate material related to the IPA text- transcription of the audio (listening) based
activities - by involving students on using dictionaries (ideally the phonetic dictionaries) in the
classroom.
4. Movies and documentaries (such as from BBC - CNN - National Geographic channels) may also
serve as very effective resources for the teaching of pronunciation.
5. Finally, the real-life material (for listening) and writing interaction from everyday language may
also yield tremendous results. The focus of material development should always be the enhancement
of the proficiency level of students.
To calculate the VOT, record /apa/, /aba/, /ata/, /ada/, /apha/ and /atha/. Zoom in through your stop
sounds so that you can analyze the patterns of the stop sounds and find the difference among the
three types of VOT (negative, zero and positive). Measure the VOT of each stop and compare
voiced/voiceless counterparts (p/b, t/d, k/g). Similarly, zoom in so that you can clearly see the stop
closure followed by the beginning of the vowel. You can measure the time between the end of the
stop closure (the beginning of the release burst) and the onset of voicing in the following vowel
(the onset of regular pitch pulses in the waveform).
Phonetics and phonology is a very potential area for research to be carried out in Pakistani context.
In applied phonology, many areas can be explored; for example, issues faced by Pakistani learners
of English may be studied. Similarly, the pronunciation issues of Pakistani learners are a potential
area through which the difficulties faced by Pakistani students may be addressed. Also, researchers
can explore and document the features of Pakistani English based on their phonological features
in order to get the Pakistani variety of English recognized. Other problematic areas may also
include segmental and suprasegmental features (such as stress placement, intonation patterns, and
syllabification and re-syllabification of English words by Pakistani learners). Contrastive analysis
(between English phonology and the sound systems of the regional languages of Pakistan) can also
be carried out by the researchers. We can also think about exploring the consonant clusters and
interlanguage phonology from second language acquisition point of view. While focusing on ELT
as the part of applied linguistics, studies may also be carried out on Pakistani variety of English
(development of its corpora, deviation from the standard variety (RP), its specific features, etc.).
Pakistani regional languages are part of a rich linguistic region (the Himalaya Hindu Kush
(HKH) region, one of the richest regions in the world linguistically and culturally), which may be a very
promising area for research in the fields of areal and typological linguistics (description of linguistic
features cross-linguistically). While working on Pakistani regional languages, one may apply for
funding from international organizations (e.g., organizations for endangered languages and
UNESCO).
Remember that we need to include three principles for feature analysis: contrastive function (how
it is different), descriptive function (what it is) and classificatory function (based on broader classes
of sounds). Features may also be studied further as a part of language universals and then their role
as language specific sub sets.
Q. Why sonorants called vowel like sound and how they differ from vowels?
Sonorants are vowel-like sounds (nasals and glides). These sounds are called sonorants because
they have formants (remember their acoustic correlates). But they are different from vowels
because they generally have lower amplitude; therefore, they behave like consonants. Record the
following sequences for our experimentation on sonorant sounds /ama/ - /ana/ - /aŋa/ - /wi/ - /ju/.
Having recorded these sequences, now start exploring the features of these sounds like the
measurement of F1, F2, F3 and also try to compare them with vowels.
Q. Sonorants / why are sonorants called vowel-like sounds and how do they differ from vowels?
Sonorants are vowel-like sounds (nasals and glides). These sounds are called sonorants because
they have formants. But they are different from vowels because they generally have lower
amplitude; therefore, they behave like consonants.
Q. Nasal Formants
Formants for nasal sounds are also important for acoustic analysis. Measure the first three (F1, F2
and F3) formants of nasals from the file (use the already learnt way of measuring formants).
Remember that nasals have very distinctive waveforms (different than that of vowels) as they have
distinctive forms of anti-formants (bands of frequencies damped) and formant transition.
When you are done with the measurement, try to answer the following questions:
1. Are there any systematic patterns across nasals?
2. Is there one formant with a similar frequency for all places of articulation?
3. Is there one formant that has much higher amplitude than the others across nasals?
4. Do you see any overall differences between the nasals on the one hand and [a] on the other?
Stop voicing: There are three important acoustic correlates of voicing in stops: the voice bar, VOT,
and the duration of the preceding vowel. Record /apa/, /aba/, /ata/, /ada/, /apha/ and /atha/ and for
each of the stops in the file, take the three measurements according to the following instructions:
See the voicing or the voice bar by exploring features of stop. We can also explore the features
related to the place of articulation (any bilabial feature for /p/ or /b/ in comparison with non-
bilabial). Also check the duration of the preceding vowels. Note down the presence of voicing.
Q. Harmonics....5
Harmonics come from the vocal folds. You can change the harmonics present in the sound by
changing the shape of the vocal folds and therefore the pitch being created. More closure in the
vocal folds will create stronger, higher harmonics. Harmonics are considered the source of the
sound. During an exhale, air comes up from the lungs and passes through the larynx. If the vocal
folds are closed inside the larynx during the exhale, they will begin to vibrate at multiple different
frequencies. The strongest and slowest vibration is the fundamental pitch being sung. The faster
vibrations that occur simultaneously are called overtones or harmonics.
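A tiny numerical illustration of that point (the f0 value is hypothetical): the harmonics are whole-number multiples of the fundamental, so changing the rate of vocal-fold vibration shifts the whole series.

f0 = 120  # hypothetical fundamental frequency of vocal-fold vibration (Hz)

harmonics = [n * f0 for n in range(1, 11)]
print("First ten harmonics (Hz):", harmonics)
# The fundamental (120 Hz) is the slowest, strongest vibration; the overtones at
# 240, 360, ... Hz are the faster simultaneous vibrations described above.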
Q. Formants:
Formants come from the vocal tract. The air inside the vocal tract vibrates at different pitches
depending on its size and shape of opening. We call these pitches formants. You can change the
formants in the sound by changing the size and shape of the vocal tract. Formants filter the original
sound source. After harmonics go through the vocal tract some become louder and some become
softer.
For exploring the acoustics of vowels, we need to record vowels and explore their properties. The
eight vowels from American English (given in your book (Chapter 8) are to be recorded for the
purpose (by now, you should know how to record them). These vowels are: heed, hid, head, had,
hod, hawed, hood and who'd. When you are done with the recording, get ready for measuring the
following three things: intrinsic pitch, spectral make up (formants) and plotting them in excel sheet
(and finally exporting them to your Word document). Now, record yourself saying the words. Take
a quick look at your vowels in the Edit window, and make sure you can clearly see the vowel
formants. If you have trouble seeing them, you can go back to the previous labs and learn it again.
While doing this, please make a note of it on your worksheet.
Take the measurement of the first two formants and plot those values on a chart using the Excel
spreadsheet. By putting F1 and F2 in separate columns, write the formant values associated with
different vowels (giving vowels in the first column, the difference between F2 and F1 in the second
column and F1 in the third). After putting the data in Excel sheet, use the Scatter chart from the
same spreadsheet. Further, in order to make it correspond with the required values for F1 and
F2, reverse the values for both formants (on both axes – Y and X). Now the zero for both F1 and F2
is at the right corner. Watch the video and you will find how F1 is inversely related to the height
of the vowel and the difference between F2 and F1 to the frontness of the vowels. Once completed,
export the chart to your Word document and give it the number and title accordingly.
To remove a boundary that you have made - Highlight the boundary - Go to Boundary > Remove
OR click Alt+backspace.
Languages of the world are divided into two broad categories: stress-timed languages and syllable-timed languages.
Q. What are the three formants that help in distinguishing vowels from each other?
Zoom into a small piece of the waveform in the middle of the vowel and measure the period by
highlighting one complete cycle and noting the time associated with it (in the panel above the
waveform).
Make sure the volume bar is fluctuating as you record – if it isn't, you're not recording; if you
don't see the volume bar at all, you're not speaking loudly enough. Watch out for clipping: if
your recording level is too high and you go into the red on the volume bar, you'll end up with
what is called a "clipped" signal, and this is very bad for speech analysis!
When producing sounds with maximum ease of articulation, only similar sounds are affected. The focus of speakers is always on maintaining a sufficient perceptual distance between the sounds that occur in a contrasting set (e.g., vowels in stressed monosyllabic words beat, bit, bet, and bat). This principle of perceptual separation does not usually result in one sound affecting an adjacent sound (as explained in the principle of maximum ease of articulation). Instead, perceptual separation affects the set of sounds that potentially can occur
at a given position in a word, such as in the position that must be occupied by a vowel in a
stressed monosyllable as in words beat, bit, bet, bat so that the perceptual separation is
maximized. The principle of ‘maximum perceptual separation’ also accounts for some of the
differences between languages. All these examples illustrate how languages maintain a balance
between the requirements of the speaker and those of the listener. On the one hand, there is the
pressure to make changes that would result in easier articulations from a speaker’s point of
view and, then, from the listener’s point of view that there should be sufficient perceptual
contrast between sounds that affect the meaning of an utterance.
The CV pattern of syllable (where one consonant is found at the onset followed by a vowel as its peak) is found in all languages of the world. It is the universal pattern of the syllable (Max Onset C) and is used by all human languages in abundance. There are languages which only allow CV templates of syllables (e.g., Honolulu - CVCVCVCV and Waikiki - CVCVCV). Interestingly, it is also found in nicknames in almost all languages of the world: kami, nana, baba, papa, mani, rani, etc. As a part of their L1 acquisition, children first acquire the CV pattern of their mother tongue.
There are different models and structures of the syllable, and languages are labelled as per their syllabic templates. Consonant sequences are called clusters (e.g., CC – two consonants or CCC – three consonants). Most phonotactic analyses are based on syllable structures and syllabic templates. On the basis of these consonant clusters, mainly three types of syllabic patterns are considered among languages: simple – moderate – complex (on the basis of consonant clusters at the edges: onset and coda).
Examples:
Simple CV
Moderate CVC(G)(N) (G for Glide and N for Nasal – specific Cs)
Complex CCVCC – CCCVCC (bipartite CC and tripartite CCC)
It has often been found that languages do not allow all phonemes to appear in any order (e.g., a native speaker of English can figure out fairly easily that the sequence of phonemes /streŋθs/ makes an English word ('strengths') and that the sequence /bleɪdʒ/ would be acceptable as an English word 'blage', although that word does not happen to exist, but the sequence /lvm/ could not possibly be part of an English word).
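The idea of syllabic templates and phonotactic restrictions can be made concrete with a toy sketch: map a string of phoneme letters to its C/V template and check the material before the first vowel against a small list of permitted onsets. The crude vowel set and the tiny onset inventory below are illustrative assumptions, not a description of English phonotactics.

```python
# Toy sketch of CV templates and an onset check (illustrative assumptions only).
VOWELS = set('aeiou')                    # crude vowel set for the toy example
ALLOWED_ONSETS = {'', 'b', 'bl', 'str'}  # tiny illustrative onset inventory

def template(word: str) -> str:
    """Map each letter to C or V, e.g. 'blag' -> 'CCVC'."""
    return ''.join('V' if ch in VOWELS else 'C' for ch in word)

def onset_ok(word: str) -> bool:
    """Check whether the consonants before the first vowel form an allowed onset."""
    onset = ''
    for ch in word:
        if ch in VOWELS:
            break
        onset += ch
    return onset in ALLOWED_ONSETS

print(template('blag'), onset_ok('blag'))  # CCVC True
print(template('lvm'), onset_ok('lvm'))    # CCC False  (no vowel, illegal onset)
```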
Retroflex: This sound is produced when the tongue tip curls against the back of the alveolar ridge. Many speakers of English do not use retroflex sounds at all, but they are common in Pakistani languages such as Urdu, Sindhi, Pashto, Balochi and Punjabi. Acoustically, retroflex sounds are characterized by a general lowering of the third and fourth formants.
In the production of a trill, the articulator is set in motion by the current of air, as in [r]. It is a typical sound of Scottish English in words like 'rye' and 'row'. A flap involves a forward and backward movement of the tongue tip, with the underside of the tongue curled back. Flaps are found in abundance in Indo-Aryan (IA) languages. The typical flap found in IA languages is a retroflex sound, for example the retroflex flap [ɽ].
Acoustically, vowels are mainly distinguished by the first two formant frequencies, F1 and F2. F1 is inversely related to vowel height (which means that a lower F1 frequency corresponds to a higher vowel), and F2 is related to the frontness or backness of the vowel (a lower F2 frequency corresponds to a more back vowel).
Intonation refers (very) simply to the variations in the pitch of a speaker's voice (f0) used to
convey or alter meaning but in its broader and more popular sense intonation covers much of
the same field as "prosody' where variations in such things as voice quality, tempo and loudness
are included. Intonation as a suprasegmental feature performs several functions in a language.
Its most important function is to act as a signal of grammatical structure (e.g., creating patterns to distinguish among grammatical categories), where it performs a role similar to punctuation in written language, though it may furnish far more contrasts for conveying meaning. Intonation also gives an idea about syntactic boundaries (sentence, clause and phrase level boundaries). The role of intonation in communication is quite important, as it also conveys personal attitude (e.g., sarcasm, puzzlement, anger, etc.). Finally, it can signal
contrasts in pitch along with other prosodic and paralinguistic features. It can also bring
variation in meaning and can prove an important signal of the social background of the
speakers.
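Since intonation is carried mainly by variations in f0, one practical way to inspect it is to extract a pitch track from a recording. The sketch below assumes the optional parselmouth library (a Python interface to Praat) and a hypothetical file name 'utterance.wav' that you would replace with your own recording.

```python
import parselmouth  # Python interface to Praat (assumed to be installed)

snd = parselmouth.Sound("utterance.wav")  # hypothetical file name
pitch = snd.to_pitch()                    # f0 analysis with Praat's default settings

times = pitch.xs()                        # analysis frame times in seconds
f0 = pitch.selected_array['frequency']    # f0 in Hz; 0 where the frame is unvoiced

# Print a rough intonation contour: time and f0 for the voiced frames only
for t, hz in zip(times, f0):
    if hz > 0:
        print(f"{t:.2f} s  {hz:.1f} Hz")
```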
In a voiceless unaspirated plosive there is a short delay (or lag) before voicing starts, and in a voiceless aspirated plosive (e.g., /p/) the delay is much longer, depending on the amount of aspiration. The amount of this delay is called Voice Onset Time (VOT), which, in relation to the types of plosive, varies from language to language.
When discussing differences in quality, we noted that the quality of a vowel depends on its
overtone structure (i.e., formants). Now putting this idea another way, we can say that a sound
(e.g., vowel) contains a number of different pitches simultaneously. There is the pitch at which
it is actually spoken, and there are the various overtone pitches that give it its distinctive
quality. We distinguish one vowel from another by the differences in these overtones. The
overtones are called formants, and the lowest three formants distinguish vowels from each
other.
Q. Mechanism of source-filter / role of vocal folds and vocal tract in source-filter theory
In this theory, the tract is represented using a source-filter model and several devices have been
devised to synthesize speech in this way. The idea is that the air in the vocal tract acts like the
air in an organ pipe, or in a bottle. Sound travels from a noise-making source (i.e., the vocal
fold vibration) to the lips. Then, at the lips, most of the sound energy radiates away from the
lips for a listener to hear, while some of the sound energy reflects back into the vocal tract. The
addition of the reflected sound energy with the source energy tends to amplify energy at some
frequencies and damp energy at others, depending on the length and shape of the vocal tract.
The vocal folds (at larynx) are then a source of sound energy, and the cavity (vocal tract - due
to the interaction of the reflected sound waves in it) is a frequency filter altering the timbre of
the vocal fold sound. Thus this same source-filter mechanism is at work in many musical
instruments. In the brass instruments, for example, the noise source is the vibrating lips in the
mouthpiece of the instrument, and the filter is provided by the long brass tube.
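A minimal computational sketch of the source-filter idea: a pulse train stands in for the vocal-fold source and a single two-pole resonator stands in for the vocal-tract filter. The sampling rate, pulse rate, resonance frequency and bandwidth are all illustrative assumptions, and a real vocal tract would of course involve several resonances rather than one.

```python
import numpy as np
from scipy.signal import lfilter

fs = 16000                                 # sampling rate in Hz (assumed)
f0, dur = 120, 0.5                         # source pulse rate and duration (illustrative)

# Source: a train of glottal-like pulses at f0
source = np.zeros(int(fs * dur))
source[::int(fs / f0)] = 1.0

# Filter: one two-pole resonator standing in for a single formant
F, B = 500.0, 80.0                         # centre frequency and bandwidth in Hz (illustrative)
r = np.exp(-np.pi * B / fs)
theta = 2 * np.pi * F / fs
a = [1.0, -2 * r * np.cos(theta), r ** 2]  # resonator denominator coefficients
b = [sum(a)]                               # scaled for unity gain at 0 Hz

output = lfilter(b, a, source)             # "speech-like" output = source passed through filter
print(output.shape)                        # (8000,)
```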
Any particle of air, such as that in the vocal tract or that in a bottle, will vibrate in a way that
depends on its size and shape. Remember that the air in the vocal tract is set in vibration by the
action of the vocal folds (in the larynx). Every time the vocal folds open and close, there is a pulse of acoustic energy (activation). Irrespective of the rate of vibration at the source (the vocal folds), the air in the vocal tract will resonate at these frequencies as long as the position of the vocal organs remains the same. Because of the complex shape of the filter (tract), the air will vibrate in more than one way at once.
Q. Consonant gestures
In phonetics and phonology, speech sounds (segments) using basic units of contrast are
defined as gestures – they are treated as the abstract characterizations of articulatory events
with an intrinsic time dimension. Thus sounds (segments) are used to describe the phonological
structure of specific languages and account for phonological variation. In this type of
description in phonetics and phonology, sounds are the underlying units which are represented
by classes of functionally equivalent movement patterns (gestures).
The only English lateral phoneme, at least in British English, is /l/ with allophones [l] as in led
[lɛd] and [ɫ] as in bell [bɛɫ]. In most forms of American English, initial [l] has more velarization than is typically heard in British English initial [l]. In all forms of English, the air flows freely
without audible friction, making this sound a voiced alveolar lateral approximant. It may be
compared with the sound [ɹ] in red [ɹɛd], which is for many people a voiced alveolar central
approximant. Laterals are usually presumed to be voiced approximants unless a specific
statement to the contrary is made.
Dental Sounds are present both in British and American English, e.g. dental fricatives [θ,ð] but
there are no dental stops, nasals, or laterals except allophonically realized (before [θ, ð] as in
eighth, tenth, wealth). Many speakers of French, Italian, and other languages (such as Urdu,
Pashto and Sindhi) typically have dental stops such as [t, d]. However, there is a great deal of
individual variation in the pronunciation of these consonants in all these languages.
Q. Nasal Stop
Like stops, nasals can also occur voiced or voiceless (for example, in Burmese, Ukrainian and French) though in English and most other languages nasals are voiced. As voiceless nasals are comparatively rare, they are symbolized simply by adding the voiceless diacritic [ ̥ ] under the symbol for the voiced sound. There is no special symbol for a voiceless nasal; it is written as [m̥] – a combination of the letter for the voiced bilabial nasal and a diacritic indicating
voicelessness.
Q. Tube model
The formants that characterize different vowels are the result of the different shapes of the
vocal tract. The air in the vocal tract is set in vibration by the action of the vocal folds (in
larynx). Every time the vocal folds open and close, there is a pulse of acoustic energy
(activation). Irrespective of the rate of vibration at source (of the vocal folds), the air in the
vocal tract will resonate at these frequencies as long as the position of the vocal organs remains
the same. Because of the complex shape of the filter (tract), the air will vibrate in more than
one way at once. So, the relationship between resonant frequencies and vocal tract shape is
actually much more complicated than the air in the back part of the vocal tract vibrating in one
way and the air in other parts vibrating in another. The vocal folds may vibrate faster or slower,
giving the sound a higher or lower pitch, but the formants will be the same as long as the
position of the tube (vocal tract) is the same.
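The tube idea also lets us estimate where the formants of a neutral vowel should lie. For a uniform tube closed at one end (the glottis) and open at the other (the lips), the resonances fall at odd multiples of c/4L. Taking a speed of sound of about 35,000 cm/s and a tract length of about 17.5 cm (commonly cited approximations, used here as assumptions) gives resonances near 500, 1500 and 2500 Hz, the familiar formant values for a schwa-like vowel.

```python
# Resonances of a uniform tube closed at one end: F_n = (2n - 1) * c / (4 * L)
c = 35000.0   # speed of sound in cm/s (approximate)
L = 17.5      # vocal tract length in cm (approximate adult value)

formants = [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)]
print(formants)   # [500.0, 1500.0, 2500.0]
```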
All voiced sounds are distinguishable from one another by their formant structure
(frequencies). This idea could be understood by considering the vocal tract as a tube and thus
the concept is when the vocal fold pulses have been produced at a steady rate, the "utterance"
is on a monotone. In other words, what you hear as the changes in pitch are actually the changes
in the overtones of this monotone "voice". These overtone pitch variations convey a great deal
of the quality of the voiced sounds. The rhythm of the sentence is apparent because the
overtone pitches occur only when the vocal folds would have been vibrating.
Perturbation theory says that, given the acoustic effect of a constriction at the lips, we can predict the formant frequency differences between rounded and unrounded vowels. Keeping in mind this modification in the size and shape of the vocal tract (for specific vowel sounds), we can estimate how perturbation theory works. So for each formant, there are locations in
the vocal tract where constriction will cause the formant frequency to rise, and locations where
constriction will cause the frequency to fall.
In spectrograms, time runs from left to right, the frequency of the components is shown on the
vertical scale, and the intensity of each component is shown by the degree of darkness. It is
thus a display that shows, roughly speaking, dark bands for concentrations of energy at
particular frequencies-showing the source and filter characteristics of speech.
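The same kind of display can be produced in Python. The sketch below assumes a mono WAV file with the hypothetical name 'vowel.wav'; it reads the signal and plots a spectrogram with time from left to right, frequency on the vertical scale and intensity as darkness, mirroring the description above.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, x = wavfile.read("vowel.wav")   # hypothetical mono recording
f, t, Sxx = spectrogram(x, fs)      # power at each (frequency, time) point

# Dark bands = concentrations of energy at particular frequencies (the formants)
plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), cmap="Greys")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```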
The first two frequencies are important here. The first formant (F1) is inversely related to the
height of a vowel whereas the second formant (F2) is related to the frontness of a vowel sound.
When the first two formants are taken, the vowels of a language can be plotted on a chart and
the structure is very much related to the traditional description of vowel sounds.
Q. Describe the cardinal vowels according to lips, tongue and jaw position.
The fundamental distinction between consonant and vowel sounds is that vowels make the
least obstruction to the flow of air. In addition to this, vowels are almost always found at the
center of a syllable, and it is very rare to find any sound, other than a vowel which can stand
alone as a whole syllable. Phonetically, each vowel has a number of features (properties) that
distinguish it from other vowels. These include; firstly, the shape of the lips (lip-rounding),
rounded (for sounds like /u:/ vowel), neutral (as for ə - schwa sound) or spread (as in /i:/ sound
in word like sea or –when photographers traditionally ask you to say “cheese” /tʃi:z/ in order
to make you look smiling). Secondly, part of the tongue - the front, the middle or the back of
the tongue may be raised, giving different vowel qualities: compare /æ/ vowel (as in word
‘cat’) as a front vowel, with the /ɑ:/ vowel (as in ‘cart’) which is a back vowel. Thirdly, the
tongue (and the lower jaw) may be raised ‘close’ to the roof of the mouth (for close vowels.
e.g. /i:/ or /u:/), or the tongue may be left ‘low’ in the mouth with the jaw comparatively ‘open’
(as for open vowels e.g., /a:/ and /æ/). In British phonetics, terms such as ‘close’ and ‘open’ are
used for vowels, whereas in American phonetics ‘high’ and ‘low’ are used for vowel
description. So, generally, these three aspects are described in the case of vowels; lip-rounding,
the part of the tongue and the height of the tongue. In addition to these three features, some
other characteristics of vowels are also used in various languages of the world (e.g., nasality –
whether a vowel is nasal or not).
Q. Cardinal vowels
The English phonetician Daniel Jones introduced a system in the early 20th century and worked out a set of vowels called the 'cardinal vowels', comprising eight vowels to be used as reference points (so that other vowels could be related to them like the corners and sides of a map). His idea of cardinal vowels became a success and it is still used by experts and students for vowel description. The cardinal vowel system is a chart or four-sided figure (the exact shape
of which has been changed from time to time). It is a diagram to be used both for rounded and
unrounded vowels, and Jones proposed that there should be a primary and a secondary set of
cardinal vowels.
The acoustic properties (structure) of consonantal sounds are usually more complicated than
that of vowels. Usually, a consonant can be said to be a particular way of beginning or ending
a vowel sound because during the production of a consonant there is no distinguishing feature
prominently visible. There is virtually no difference in the sounds during the actual closures of
voiced stops [b, d, g], and absolutely none during the closures of voiceless stops [p, t, k],
because there is only silence at these points. Each of the stop sounds conveys its quality by its
effect on the adjacent vowel. We have seen that during a vowel such as [u], there will be
formants corresponding to the particular shape of the vocal tract. In the case of consonants,
these changes are not really distinguishable (particularly for obstruents). There are some consonantal sounds which have a vowel-like structure, so their acoustic features are somewhat similar to those of vowels (in the case of nasal consonants, approximants and glides), but most consonants have quite different acoustic features.
Q. Importance of Spectrograms
1. Using Praat (or any other software) and a spectrogram is particularly useful when a researcher is working on a problem related to the nature (physical properties) of a sound (e.g., is it a phoneme or an allophone?).
2. It increases our understanding of speech sounds and their behavior in different forms (in isolation or as part of connected speech).
3. Practice on spectrograms gives us the opportunity to learn about the characteristics of speech sounds.
4. It is also important for experts who are working on phonetic aspects of speech as signal
processing.
Q. Primary and secondary cardinal vowels
The primary set includes eight vowels in total (numbered 1 to 8): the front unrounded vowels [i, e, ε, a], the back unrounded vowel [ɑ] and the rounded back vowels [ɔ, o, u]. (In the exam, a symbol was given and we had to write the lip, jaw and tongue position.)
The secondary cardinal vowels are easy to understand in connection with the primary cardinal vowel system. Following are the secondary cardinal vowels (their numerical codes and features) as pointed out by Daniel Jones:
Close (high) front rounded vowel [y]
Close-mid front rounded vowel [ø]
Open-mid front rounded vowel [œ]
Open (low) front rounded vowel [ɶ]
Open (low) back rounded vowel [ɒ]
Open-mid back unrounded vowel [ʌ]
Close-mid back unrounded vowel [ɤ]
Close (high) back unrounded vowel [ɯ]
(There was a question where the symbols were given and we were asked what they stand for.)
The main difference between primary and secondary cardinal vowels is related to lip-rounding
as in some languages the feature of lip-rounding is possible for front vowels. By reversing the
lip position (in comparison with primary cardinal vowels), the secondary series of vowel types
is produced (e.g., rounding the lips for the front vowels).
Normally, while discussing the degree of variation in vowel sounds, three types of features are
given (i.e., height of tongue, backness of tongue and lip rounding) which cover the major
variation in world’s languages. But this description does not cover all types of variation in
vowel quality. One out of such variations is advanced tongue root (ATR) which is found in
Akan language spoken in Ghana. Actually, vowels produced with ATR involve the furthest-
back part of the tongue, opposite to the pharyngeal wall, which is not normally involved in the
production of speech sounds - also called the radix (articulations of this type may, therefore,
be described as radical). ATR (a kind of articulation in which the movement of the root of
tongue expands the front–back diameter of the pharynx) is used phonologically in Akan (and
some other African languages) as a factor in contrast of vowel harmony. The opposite direction
of movement is called retracted tongue root (RTR). ATR is thus related to the size of pharynx
– making the pharyngeal cavity different: creating comparatively large (+ATR: root forward
and larynx lowered) and small pharyngeal cavity (-ATR: no advanced tongue root). Akan
contrasts between two sets of vowels +ATR and –ATR.
Q. Vowel qualities (3 marks)
There are two features of vowel quality (i.e., height and backness of the tongue) that are used to contrast one vowel with another in nearly all languages of the world. But there are four other features that are used less frequently, and not all languages exhibit them. They include lip-rounding, rhotacization, nasalization and advanced tongue root (ATR).
The research and application of speech perception must deal with several problems which
result from what has been termed the lack of invariance. Reliable constant relations between a
phoneme of a language and its acoustic manifestation in speech are difficult to find. This 'lack of phonetic invariance' has posed an important problem for phonetic theory, as we try to reconcile the fact that shared phonetic
knowledge can be described using the IPA symbols and phonological features with the fact
that the individual phonetic forms that speakers produce and hear on a daily basis span a very
great range (of varieties).
In the description of vowel quality, rhotacization (or rhotacized vowel) is a term which is used
in English phonology referring to dialects or accents where /r/ is pronounced following a
vowel, as in words ‘car’ and ‘cart’. Thus varieties of English are divided on the basis of this
feature - varieties having this feature are rhotic (in which /r/ is found in all phonological
contexts) while others (not having this feature) are non-rhotic (such as Received Pronunciation
where /r/ is only found before vowels as in ‘red’ and ‘around’). Similarly, vowels which occur
after retroflex consonants are sometimes called rhotacized vowels (they display rhotacization).
It is important to mention that while BBC pronunciation is non-rhotic, many accents of the
British Isles are rhotic, including most of the south and west of England, much of Wales, and
all of Scotland and Ireland. Most American English speakers speak with a rhotic accent, but
there are non-rhotic areas (e.g., the Boston area, lower-class New York speech and the Deep South).
Q. How can an ELT teacher develop material for phonetics and phonology?
Developing relevant material for the teaching of phonetics and phonology is an important task for aspiring teachers of English language. One should identify the specific needs of ELT activities in one's own context and explore already developed material available online from various sources (such as the British Council and other teacher resource centers); however, one must also be able to develop one's own material (as specifically required by students).
Q. Why does air vibrate in more than one way in the vocal tract? (Topic#120)
Any particle of air, such as that in the vocal tract or that in a bottle, will vibrate in a way that
depends on its size and shape. Remember that the air in the vocal tract is set in vibration by the
action of the vocal folds (in the larynx). Every time the vocal folds open and close, there is a pulse of acoustic energy (activation). Irrespective of the rate of vibration at the source (the vocal folds), the air in the vocal tract will resonate at these frequencies as long as the position of the vocal organs remains the same. Because of the complex shape of the filter (tract), the air will vibrate in more than one way at once.
Q. Semivowels (topic#142)
Most of the world's languages contain a class of sounds that functions in a way similar to consonants but is phonetically similar to vowels (e.g., in English, /w/ and /j/ as in 'wet' and 'yet'). When they are used in the first part of syllables (at onset), they function as consonants. But if they are pronounced slowly, they resemble (in quality) the vowels [u] and [i] respectively. These sounds are called semivowels and are also termed approximants today. In French there are three semivowels: in addition to [j] and [w] there is another sound, symbolized [ɥ], found in initial position in a word like 'huit' [ɥit] ('eight') and in consonant clusters such as [fɥi] in 'fruit'. The IPA chart also lists a semivowel, [ɰ], corresponding to the close back unrounded vowel [ɯ]. Like the others, this is classed as an approximant.
ii. Velarization involves raising the back of the tongue (adding an [u]-like vowel quality). It can be considered as the addition of an [u]-like tongue position (but remember that it is without the addition of lip rounding). A typical English example of velarization is the /l/ sound at the end of a syllable (as in words like kill, pill, sell and will), called velarized or dark /l/, which may be written as [ɫ]. The diacritics for velarization are the superscript [ˠ] and the superimposed tilde, as in [ɫ].
iv. Labialization which is the addition of lip rounding (written as [ʷ]) to other primary
articulation such as Arabic /tʷ/ and /sʷ/. Nearly all kinds of consonants can have added lip
rounding, including those that already have one of the other secondary articulations (such as
velarization and palatalization).
Supra means 'above' or 'beyond' and segments are sounds (phonemes). Suprasegmental is a
term used in phonetics and phonology to refer to a vocal effect (such as tone, intonation, stress,
etc.) which extends over more than one sound (segment) in an utterance. Major suprasegmental
features include pitch, stress, tone, intonation or juncture. These features are meaningful when
they are applied above segmental level (on more than one segment). Phonological studies can be
divided into two fields: segmental phonology and suprasegmental phonology. Suprasegmental
features have been extensively explored in the recent decades and many theories have been
constituted related to the application and description of these features.
Q. Many phoneticians disagree with the basic idea of stress timing. Write the three dimensions they suggest. (topic 155)
Many phoneticians disagree with the basic idea of stress/syllable timing. They are of the view that there are three dimensions:
a. fixed word stress (mainly found in Romance languages),
b. variable word stress (mainly found in languages such as English and German)
c. fixed phrase stress (phrase as a third possibility as exhibited by Japanese) and they want to
categorize languages on the basis of these three patterns.
The pitch that we perceive from a voiced sound corresponds quite closely to the frequency of vibration of the vocal folds.
Q. IPA (topic#164)
While discussing the key elements of linguistic phonetic description, we need to consider the
International Phonetic Alphabet (abbreviated as IPA). The IPA is the set of symbols and diacritics that have been officially approved by the International Phonetic Association. The association publishes a chart comprising a number of separate
charts. At the top inside the front cover, you will find the main consonant chart. Below it is a table
showing the symbols for non-pulmonic consonants, and below that is the vowel chart. Inside the back
cover is a list of diacritics and other symbols, and a set of symbols for suprasegmental features (events)
such as tone, intonation, stress, and length. Remember that the IPA chart does not try to cover all possible
types of phonetic descriptions (e.g., all the individual strategies for realizing linguistic phonological
contrasts, or gradations in the degree of co-articulation between adjacent segments, etc.). Instead, it is
limited to those possible sounds that can have linguistic significance in that they can change the meaning
of a word in some languages. So the description of IPA is based on the linguistic phonetics of the
community.
The blank cells in the consonant chart represent sounds that are humanly possible but have not been observed so far to be distinctive in any language (e.g., a voiceless retroflex lateral fricative is possible but has not been documented so far, so it is left blank). The shaded cells, on the other hand, exhibit the sounds not possible at these places. Further, below the consonant
chart is a set of symbols for consonants made with different airstream mechanisms (clicks, voiced
implosives, and ejectives). All these descriptions reflect the potentialities of human speech sounds (as a
linguistic community) not only showing the possible segments but also the suprasegmental features and
points related to the possible airstream mechanisms and even the diacritics for various types of co-articulation and secondary articulatory gestures. The IPA chart is carefully documented (by experts) and
is continuously revised and updated.
Q. Name the five major features based on the major regions of vocal
tract. 5 (Topic#167)
There are five features in total (i.e., Labial, Coronal, Dorsal, Radical, and Glottal). The first three of these features are related to tongue position, whereas Radical is a cover term for [pharyngeal] and [epiglottal] articulations made with the root of the tongue. The feature 'Glottal', on the other hand, is based on being [glottal], to cover various articulations such as [h]. If we are to have a convenient grouping of the features for consonants, we have to recognize that Supra-Laryngeal features must allow for the dual nature of the actions of the larynx and include Glottal as a place of articulation. Remember that a sound may be articulated at more than one of the regions Labial, Coronal, Dorsal, Radical, and Glottal. Within the five general regions, Coronal articulations can be split into three mutually exclusive possibilities: Laminal (i.e., the blade of the tongue), Apical (i.e., the tip of the tongue), and Sub-apical (i.e., the underside of the blade of the tongue). Thus the major regions may be subdivided into sub-regions on the basis of their features.
• Speaking styles:
No one style is basic (from which others are derived), because all are stored in memory.
Bilingual speakers store two systems.
• Sound change: Sound change is phonetically gradual and operates across the whole
lexicon. It is a gradual shift as new instances keep on adding.
Q. Ease of articulation (Topic#174)
In order to explain the sound patterns of a language, the views of both speaker and listener are considered. Both of them like to use the least possible articulatory effort (except when they are trying to produce very clear speech), and there are a large number of assimilations, with some segments left out and others reduced to a minimum. Thus a speaker uses language with ease of articulation (e.g., co-articulation and secondary articulation). This tendency to use language sounds with the maximum possible ease of articulation leads to change in the pronunciation of words.
Q. How to insert a TextGrid in Praat? Write down the steps (topic 184)
Create a TextGrid:
• In the Praat Objects window, highlight the required sound file.
Q. Why are sonorants called vowel-like sounds and how do they differ from vowels? (Topic#195)
Sonorants are vowel-like sounds (nasals and glides). These sounds are called sonorants because they
have formants (remember their acoustic correlates). But they are different from vowels because they
generally have lower amplitude; therefore, they behave like consonants. Record the following sequences
for our experimentation on sonorant sounds /ama/ - /ana/ - /aŋa/ - /wi/ - /ju/. Having recorded these
sequences, now start exploring the features of these sounds like the measurement of F1, F2, F3 and also
try to compare them with vowels.
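To compare the formant structure of these sonorant sequences with vowels, the first three formants can also be pulled out of a recording programmatically. The sketch below again assumes the optional parselmouth library and a hypothetical file name 'ama.wav'; it reports F1-F3 at the temporal midpoint of the recording.

```python
import parselmouth  # Python interface to Praat (assumed to be installed)

snd = parselmouth.Sound("ama.wav")  # hypothetical recording of /ama/
formants = snd.to_formant_burg()    # Praat's standard Burg formant analysis
mid = snd.duration / 2              # measure at the temporal midpoint

for n in (1, 2, 3):
    value = formants.get_value_at_time(n, mid)  # formant frequency in Hz
    print(f"F{n} at midpoint: {value:.0f} Hz")
```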
Features may also be studied further as a part of language universals and then in their role as language-specific subsets.
Q. Name some materials that can assist in the teaching of the language
skills. (Topic#206)
Developing relevant material for the teaching of phonetics and phonology is an important task
for aspiring teachers of English language.
1. Explore already developed material available online from various sources (such as the British Council and other teacher resource centers); however, you must also be able to develop your own material (as specifically required by your students).
2. You can develop your own material related to pronunciation teaching for learners of English.
3. You can incorporate material related to the IPA text- transcription of the audio (listening)
based activities - by involving students on using dictionaries (ideally the phonetic dictionaries) in
the classroom.
4. Movies and documentaries (such as from BBC - CNN - National Geographic channels) may
also serve as very effective resources for the teaching of pronunciation
5. Finally, the real life material (for listening) and writing interaction from everyday language
may also yield tremendous results. The focus of material development should always be the
enhancement of the proficiency level of students.
The perceived f0 is measured in terms of pitch and calculated on the mel or Bark scales.
Q. VOT (5)
Voicing is a feature of some of the sounds we make. If we hold our fingers lightly against the front of our throat and make the sound sss, and then go zzzzzz, we will feel a buzz for the second one: that is our vocal folds vibrating really quickly. There are lots of minimal pairs like this in English – /s/ and /z/ are fricatives, but there are also stops, like /t/–/d/ and /p/–/b/. For stops, the voice onset time (VOT) is the relationship between when you open your articulators and when those vocal folds start buzzing. Some stops have the voicing start before the release of the closure, known as a negative VOT; aspirated consonants (a bit of air after the release) with a voiced sound after result in a positive VOT; and those situations where the voicing and opening occur at the same time are known as tenuis VOT, just to sound fancy. The VOT of sounds varies across languages.
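A small sketch of the VOT arithmetic described above: VOT is simply the voicing-onset time minus the release time, and its sign gives the three categories. The millisecond values in the usage lines are invented for illustration.

```python
def vot_category(release_ms: float, voicing_onset_ms: float) -> str:
    """Classify a stop by Voice Onset Time (voicing onset minus release)."""
    vot = voicing_onset_ms - release_ms
    if vot < 0:
        return f"negative VOT ({vot:.0f} ms): voicing starts before the release"
    if vot == 0:
        return "tenuis (zero VOT): voicing and release are simultaneous"
    return f"positive VOT (+{vot:.0f} ms): voicing lags the release (aspiration)"

print(vot_category(100, 60))    # negative VOT
print(vot_category(100, 100))   # tenuis
print(vot_category(100, 170))   # positive VOT (aspirated)
```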
Q. How are the oral stops produced? Provide IPA symbols for English oral stops (any three). (5)
In phonetics, a stop, also known as a plosive or oral occlusive, is a consonant in which the vocal tract is blocked so that all airflow ceases. The occlusion may be made with the tongue blade ([t], [d]), the tongue body ([k], [g]), the lips ([p], [b]), or the glottis ([ʔ]). Stops contrast with nasals, where the vocal tract is blocked but airflow continues through the nose, as in /m/ and /n/, and with fricatives, where partial occlusion impedes but does not block airflow in the vocal tract.
Q. Phonotactics:
In phonology, phonotactics is the study of the ways in which phonemes are allowed to combine
in a particular language. (A phoneme is the smallest unit of sound capable of conveying a
distinct meaning.) Over time, a language may undergo phonotactic variation and change. For
example, as Daniel Schreier points out, "Old English phonotactics admitted a variety of consonantal sequences that are no longer found in contemporary varieties."
Q. What is Neurolinguistics?
Neurolinguistics is the study of human language or communication (speech, hearing, reading, writing, or nonverbal modalities) in relation to any aspect of the brain or brain function. It is a field of interdisciplinary study which does not have a formal existence. Its subject matter is the relationship between the human nervous system and language. The primary goal of the field of neurolinguistics is to understand and explicate the neurological bases of language and speech, and to characterize the mechanisms and processes involved in language use. The study of neurolinguistics is broad-based; it includes language and speech impairments in the adult aphasias and in children, as well as reading disabilities, the lateralization of function as it relates to language and speech processing, and computer modeling.
Q. Name the following
1. [f, v] – Labiodental fricatives
2. [p, b, m] – Bilabials (the stops [p, b] and the nasal [m])
ENG507 IMPORTANT Topics (VU FINAL TERM)
MCQS (1 MARK)
Radical' is a cover term for [pharyngeal] and [epiglottal] articulation made with the Root of
the tongue.
Which language has fixed phrase stress? Japanese
IPA is the set of symbols and diacritics that have been officially approved by IPA.
Nasalization diacritic is called Tilde.
The shape of the lips: spread as in /i/ vowel.
Sonorants are called sonorants because they have formants.
The acoustic correlate of intonation is pitch.
In Japanese, all morae have approximately the same duration.
Which language has no nasal vowels Saraiki. Urdu
Nasalization is common feature of Indo Aryan Languages.
Formant structure of approximants matches with that of vowels.
The words 'I' and 'owe' have the syllable structure: V.
Cardinal vowels were proposed by Daniel Jones.
ELT stands for English Language Teaching.
Words of three syllables are called trisyllabic (words of more than three syllables are polysyllabic).
Assimilation is particularly studied in the phonetics of the individual.
The symbol that represents the voiced alveolar tap is [ɾ].
In source-filter theory, the filter represents the cavity (vocal tract).
The waveform of nasals is similar to the waveform of vowels (but with lower amplitude).
Sonorants are different from vowels because they have _____ amplitude Lower.
When confirming the pitch of a spectral slice, one should ignore small spikes at the beginning.
Difference between F2 and F1 is related to ___ of vowels. Backness
The formants that characterize different vowels are the result of the different shapes of the
Vocal Tract.
The last part of the syllable is called Coda.
One should watch for the burst and aspiration in the Stop Sounds.
Which of the following vowel sounds (in the given word) will be uttered with neutral lips? Set
Frontness or backness of vowel is determined by the position of Tongue.
R-coloring is also termed rhotacization.
The study of the sequences of phonemes is called Phonotactics.
In syllable timed languages, all syllables tend to have a/an Equal time value.
In tonal languages, which of the following is an essential component of the pronunciation of
word? Pitch
In Mandarin, the word ma means "mother" when it is said with High Pitch.
The study of intonation is sometimes called Intonology.
Which of the following types of phonetics is considered for specific purposes only? Individual
Phonetics.
Acoustic is the study of the physics of the Speech Signal.
One should observe a gap in the pattern with a burst for voiceless stops, and a sharp formant beginning for voiced stops.
Which of the following vowel is uttered by rounding lips? /U/
Which of the following features (place of articulation) best describes the stop /p/? Bilabial.
Which of the following features (place of articulation) best describes the stop /g/? Velar.
Two native speakers of a language will always speak With Some Variation.
Spanish has a very simple system contrasting only Five vowel sounds.
In English, sounds /w/ and /j/ are considered Semi Vowels.
Which of the following symbols represents labialization as a secondary gesture? [ʷ].
Speech is quite diverse and complex, particularly when it comes to the phonetics of the individual.
Which of the following words uses a seven phoneme pattern of syllable? Strengths.
PRAAT software is particularly useful for the Acoustic analysis.
The measurements are taken from the middle of a vowel sound because it is the Nucleus
portion.
F1 is inversely related to the Height of the vowel.
The difference between F2 and F1 is related to the Frontness of the vowel.
The features [voiceless] and [breathy voice] are studied under the cover term Laryngeal.
Radical is a cover term for [pharyngeal] and [epiglottal] articulations made with the root of the tongue.
In the production of a plosive like [p], which of the following is not a sub-task? Close the teeth.
Sonorants are vowel-like sounds.
The question that is mainly answered by the contrastive function of distinctive feature theory
is How is it Different?
Which of the following functions of the distinctive feature theory answers the question, what
it is? Descriptive.
Which of the following is considered a GOLDEN method of SLA? Task Based Learning and
Teaching (TBLT).
Sonorants are sounds that basically consist of nasals and glides.
English Language Teaching Reforms (ELTR) are the projects of the Higher Education
Commission (HEC) of Pakistan
In spectrograms, time runs from left to right, and the frequency of the components is shown
on the vertical scale.
SHORT QUESTIONS (2 & 3 MARKS)
Trills and Flap (Topic 31)
Spectrograms (Topic 124)
Lateral Sound Difference (Topic 111)
Intonations (Topic 86)
Gestures and its Types(Topic 104)
VOT and its Types
Types of Hierarchy
Fricatives
Speaking Style
PRAAT (Definition)
Varieties of English
Source Filter Theory
Coronal Types
Tube Model
Tones and its Types
Phonology and Its Types
Acoustic Correlate (Topic 34)
Type of Stop Sounds (Topic 106, 108 etc.)
IPA Chart
Vowels, Types and Features
Mandarin Chinese Role
Formants
Nasal Sounds
LONG QUESTIONS (5 MARKS)
Linguistic Phonetics
Air Vibration in Vocal Tract (Topic 120)
Supra Segmental Features (Topic 146)
Retroflex (Topic 25)
Syllable (Topic 148)
SAG (Topic 133,134)
IPA Chart
Primary Cardinal Sets
Source Filter Theory (Topic 118,119)
Different Types of Stress (Topic 151)
Lexical Stress
PRAAT Sound Recording Steps
Phonetics of Community
Vocal Tract Regions
ENG507 {Short Notes covering Topic (21-42)}
In phonetics and phonology, speech sounds (segments) using basic units of contrast are defined as gestures –
they are treated as the abstract characterizations of articulatory events with an intrinsic time dimension. Thus
sounds (segments) are used to describe the phonological structure of specific languages and account for
phonological variation. In this type of description in phonetics and phonology, sounds are the underlying units
which are represented by classes of functionally equivalent movement patterns (gestures).
Nasal manners of articulation are commonly found in the languages of the world. Like stops, nasal can also
occur voiced or voiceless (for example, in Burmese, Ukrainian and French) though in English and most other
languages nasals are voiced. As voiceless nasals are comparatively rare, they are symbolized simply by adding
the voiceless diacritic [ ̥ ] under the symbol for the voiced sound. There are no special symbols for voiceless
nasals and it is written as /m ̥ / - a combination of the letter for the voiced bilabial nasal and a diacritic
indicating voicelessness.
Fricative as an articulatory gesture may be divided into voiced or voiceless sounds but we can also subdivide
fricatives in accordance with other aspects of the gestures that produce them. For example, some authorities
have divided fricatives into sounds such as [s], in which the tongue is grooved so that the airstream comes out
through a narrow channel, and those such as [θ], in which the tongue is flat and forms a wide slit through
which the air flows. On the other hand, a slightly better way of dividing fricatives is to separate them into
groups on a purely auditory basis.
The only English lateral phoneme, at least in British English, is /l/ with allophones [l] as in led [lɛd] and [ɫ] as in
bell [bɛɫ]. In most forms of American English, initial [l] has more velarization than is typically heard in British
English initial [l]. In all forms of English, the air flows freely without audible friction, making this sound a voiced
alveolar lateral approximant. It may be compared with the sound [ɹ] in red [ɹɛd], which is for many people a
voiced alveolar central approximant. Laterals are usually presumed to be voiced approximants unless a specific
statement to the contrary is made.
Source-filter theory is an important concept in acoustic phonetics. It is a model of speech (e.g., vowel)
production. According to this theory, source refers to the waveform of the vibrating larynx. Its spectrum is rich
in harmonics, which gradually decrease in amplitude as their frequency increases. The various resonance
chambers of the vocal tract, especially the movements of the tongue and lips, act on the laryngeal source in
the manner of a filter (see filtered speech), reinforcing certain harmonics relative to others. Thus the
combination of these two elements (larynx as source and cavity as filter) is known as the source-filter model of
speech (e.g., vowel) production. We have already discussed that speech sounds can differ in pitch, loudness,
and quality. Now if we understand the idea of source-filter we would be able to analyze these changes as
possible variation in speech sounds. When discussing differences in quality, we noted that the quality of a
vowel depends on its overtone structure (i.e., formants). Now putting this idea another way, we can say that a
sound (e.g., vowel) contains a number of different pitches simultaneously. There is the pitch at which it is
actually spoken, and there are the various overtone pitches that give it its distinctive quality. We distinguish
one vowel from another by the differences in these overtones. The overtones are called formants, and the
lowest three formants distinguish vowels from each other.
In this theory, the tract is represented using a source-filter model and several devices have been devised to
synthesize speech in this way. The idea is that the air in the vocal tract acts like the air in an organ pipe, or in a
bottle. Sound travels from a noise-making source (i.e., the vocal fold vibration) to the lips. Then, at the lips,
most of the sound energy radiates away from the lips for a listener to hear, while some of the sound energy
reflects back into the vocal tract. The addition of the reflected sound energy with the source energy tends to
amplify energy at some frequencies and damp energy at others, depending on the length and shape of the
vocal tract. The vocal folds (at larynx) are then a source of sound energy, and the cavity (vocal tract - due to
the interaction of the reflected sound waves in it) is a frequency filter altering the timbre of the vocal fold
sound. This idea can make it very easy for us to understand the formants of a vowel sound. Thus this same
source-filter mechanism is at work in many musical instruments. In the brass instruments, for example, the
noise source is the vibrating lips in the mouthpiece of the instrument, and the filter is provided by the long
brass tube.
The formants that characterize different vowels are the result of the different shapes of the vocal tract. Any
particle of air, such as that in the vocal tract or that in a bottle, will vibrate in a way that depends on its size
and shape. Remember that the air in the vocal tract is set in vibration by the action of the vocal folds (in
larynx). Every time the vocal folds open and close, there is a pulse of acoustic energy (activation). Irrespective
of the rate of vibration at source (of the vocal folds), the air in the vocal tract will resonate at these frequencies
as long as the position of the vocal organs remains the same.
All voiced sounds are distinguishable from one another by their formant structure (frequencies). This idea
could be understood by considering the vocal tract as a tube and thus the concept is when the vocal fold
pulses have been produced at a steady rate, the “utterance” is on a monotone. In other words, what you hear
as the changes in pitch are actually the changes in the overtones of this monotone “voice.” These overtone
pitch variations convey a great deal of the quality of the voiced sounds. The rhythm of the sentence is
apparent because the overtone pitches occur only when the vocal folds would have been vibrating.
Using computer programs, we can analyze vowel sounds by showing their components through the display
(spectrogram). In spectrograms, time runs from left to right, the frequency of the components is shown on the
vertical scale, and the intensity of each component is shown by the degree of darkness. It is thus a display that
shows, roughly speaking, dark bands for concentrations of energy at particular frequencies—showing the
source and filter characteristics of speech. Remember that the traditional articulatory descriptions of vowels
are related to the formant frequencies, and the first two formants are the important ones here. The first
formant (F1) is inversely related to the height of a vowel, whereas the second formant (F2) is related to the
frontness of a vowel. When the first two formants are plotted, the vowels of a language form a chart whose
layout corresponds closely to the traditional description of vowel sounds.
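As a purely illustrative aid (the figures below are classic approximate values for an adult male voice and are assumptions, not data from this course), the following sketch shows the inverse relation between F1 and vowel height and the link between F2 and frontness.

# Illustrative (approximate) first and second formant values in Hz.
# The exact numbers vary by speaker; only the pattern matters here.
vowels = {
    "i": {"F1": 270, "F2": 2290},   # high front vowel: low F1, high F2
    "a": {"F1": 730, "F2": 1090},   # low vowel: high F1
    "u": {"F1": 300, "F2": 870},    # high back vowel: low F1, low F2
}

for v, f in vowels.items():
    print(f"[{v}]  F1={f['F1']:>4} Hz  F2={f['F2']:>4} Hz")
# The higher the vowel, the lower its F1; the further front the vowel,
# the higher its F2.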
The acoustic structure of consonantal sounds is usually more complicated than that of vowels. A consonant
can often be described as a particular way of beginning or ending a vowel, because during the production of
the consonant itself there is little that is acoustically distinctive. There is virtually no difference in the sound
during the actual closures of the voiced stops [b, d, g], and absolutely none during the closures of the
voiceless stops [p, t, k], because there is only silence at these points. Each stop conveys its quality mainly by
its effect on the adjacent vowel. We have seen that during a vowel such as [u] there are formants
corresponding to the particular shape of the vocal tract; for consonants (particularly obstruents), the
distinguishing information lies largely in the transitions of these formants into and out of the neighbouring vowels.
1. Using Praat (or similar software) to produce spectrograms is particularly useful when a researcher is
working on a problem related to the nature (physical properties) of a sound (e.g., is it a phoneme or an
allophone?).
2. It increases our understanding of speech sounds and their behaviour in different forms (in isolation
or as part of connected speech).
3. Practice with spectrograms gives us the opportunity to learn about the characteristics of speech sounds.
4. It is also important for experts working on phonetic aspects of speech such as signal processing.
5. Spectrograms are also used as part of speech-recognition techniques.
A speaker’s complete range of vowel qualities may be taken as representative of that speaker’s personal
features, and each vowel’s formant frequencies may be compared with the total range of that formant in the
speaker’s voice. Phoneticians are still working on comparing the acoustic data of one individual with another
in order to further improve speech-recognition systems, and experts in applied phonetics and computer
speech technology are trying to understand, and improve, the complexity of speech-synthesis systems.
The fundamental distinction between consonant and vowel sounds is that vowels make the least obstruction
to the flow of air. In addition to this, vowels are almost always found at the center of a syllable, and it is very
rare to find any sound, other than a vowel which can stand alone as a whole syllable. Phonetically, each vowel
has a number of features (properties) that distinguish it from other vowels. These include, firstly, the shape of
the lips (lip-rounding): rounded (for sounds like the /u:/ vowel), neutral (as for the schwa, /ə/) or spread (as for
the /i:/ sound in a word like sea, which is why photographers traditionally ask you to say “cheese” /tʃi:z/ to
make you look as if you are smiling). Secondly, the part of the tongue involved: the front, the middle or the
back of the tongue may be raised, giving different vowel qualities; compare the /æ/ vowel (as in ‘cat’), a front
vowel, with the /ɑ:/ vowel (as in ‘cart’), which is a back vowel. Thirdly, the tongue (and the lower jaw) may be
raised ‘close’ to the roof of the mouth (for close vowels, e.g. /i:/ or /u:/), or the tongue may be left ‘low’ in the
mouth with the jaw comparatively ‘open’ (as for open vowels, e.g. /ɑ:/ and /æ/). In British phonetics, terms such as ‘close’ and
‘open’ are used for vowels, whereas in American phonetics ‘high’ and ‘low’ are used for vowel description. So,
generally, these three aspects are described in the case of vowels; lip-rounding, the part of the tongue and the
height of the tongue. In addition to these three features, some other characteristics of vowels are also used in
various languages of the world (e.g., nasality – whether a vowel is nasal or not).
In order to classify vowels (independently of the vowel system of a particular language), the English phonetician
Daniel Jones introduced a system in the early 20th century and worked out a set of vowels called the “cardinal
vowels”, comprising eight vowels to be used as reference points (so that other vowels could be related to
them, like the corners and sides of a map). Jones’ idea of cardinal vowels became a success, and it is still used by
experts and students for vowel description. He was strongly influenced by the French phonetician Paul Passy,
and it has been claimed that the set of ‘cardinal vowels’ is quite similar to the vowels of educated Parisian
French of his time. The cardinal vowel system is a chart or four-sided figure (the exact shape of which has
changed from time to time) with eight corners, as can be seen on the IPA chart from the IPA website. It is a
diagram to be used both for rounded and unrounded vowels, and Jones proposed that there should be a
primary and a secondary set of cardinal vowels. The primary set includes eight vowels in total (numbered 1 to 8): the
front unrounded vowels [i, e, ɛ, a], the back unrounded vowel [ɑ] and the rounded back vowels [ɔ, o, u].
The main difference between primary and secondary cardinal vowels is related to lip-rounding as in some
languages the feature of lip-rounding is possible for front vowels. By reversing the lip position (in comparison
with primary cardinal vowels), the secondary series of vowel types is produced (e.g., rounding the lips for the
front vowels).
In the description of vowel quality, rhotacization (or rhotacized vowel) is a term which is used in English
phonology referring to dialects or accents where /r/ is pronounced following a vowel, as in words ‘car’ and
‘cart’. Thus varieties of English are divided on the basis of this feature - varieties having this feature are rhotic
(in which /r/ is found in all phonological contexts) while others (not having this feature) are non-rhotic (such as
Received Pronunciation where /r/ is only found before vowels as in ‘red’ and ‘around’). Similarly, vowels which
occur after retroflex consonants are sometimes called rhotacized vowels (they display rhotacization). It is
important to mention that while BBC pronunciation is nonrhotic, many accents of the British Isles are rhotic,
including most of the south and west of England, much of Wales, and all of Scotland and Ireland. Most
American English speakers speak with a rhotic accent, but there are non-rhotic areas (e.g., the Boston area,
parts of New York City, and the Deep South).
There are two features of vowel quality (i.e., height and backness of the tongue) that are used to contrast one
vowel with another in nearly all languages of the world. But there are four other features that are used less
frequently and not all languages exhibit them. They include ‘lip-rounding’, rhotacization, nasalization and
advanced tongue root (ATR). Each of these six features has its own acoustic correlates.
Most of the world’s languages contain a class of sounds that functions in a way similar to consonants but is
phonetically similar to vowels (e.g., in English, /w/ and /j/ as in ‘wet’ and ‘yet’). When they are used in the first
part of a syllable (at the onset), they function as consonants, but if they are pronounced slowly they resemble
the vowels [u] and [i] respectively in quality. These sounds are called semivowels, and today they are also
termed approximants. French has three semivowels: in addition to /j/ and /w/ there is another sound,
symbolized /ɥ/, found in initial position in a word like ‘huit’ /ɥit/ (‘eight’) and in consonant clusters such
as /frɥ/ in /frɥi/ (‘fruit’). The IPA chart also lists a semivowel, [ɰ], corresponding to the close back unrounded
vowel /ɯ/. Like the others, this is classed as an approximant.
‘Secondary’ articulation is an articulatory gesture with a lesser degree of closure occurring at approximately
the same time as another (primary) gesture. It is different from coarticulation, in which two gestures of equal
status overlap in time.
The secondary articulatory gestures, which add a vowel-like quality to a sound, are summarized here for
further clarification. A sound may or may not have any of the four secondary articulations: palatalization,
velarization, pharyngealization and labialization. Labialization can co-occur with any of the other three
secondary gestures, and even a sound that is itself labial can be labialized (e.g., [mʷ]). The main features of
these secondary articulatory gestures are briefly mentioned here for your understanding:
1. Palatalization [ʲ] is the raising of the front of the tongue (towards the position for an [i]-like vowel).
2. Velarization (written [ˠ] or [ ̴]) is the raising of the back of the tongue (towards a [u]-like position).
3. Pharyngealization [ˤ] is the retracting of the root of the tongue.
4. Labialization [ʷ] is the rounding of the lips, as in Arabic [sʷ] and [tʷ].
Supra means ‘above’ or ‘beyond’ and segments are sounds (phonemes). Suprasegmental is a term used in
phonetics and phonology to refer to a vocal effect (such as tone, intonation, stress, etc.) which extends over
more than one sound (segment) in an utterance. Major suprasegmental features include pitch, stress, tone,
intonation or juncture. Remember that these features are meaningful when they are applied above segmental
level (on more than one segment). Phonological studies can be divided into two fields: segmental phonology
and suprasegmental phonology. Suprasegmental features have been extensively explored in recent decades,
and many theories have been developed concerning the application and description of these features.
Syllables are the parts into which a word can be divided, for example, mi-ni-mi-za-tion or
su-pra-seg-men-tal. Phonetically, we can observe that the flow of speech typically consists of an alternation
between vowel-like states (where the vocal tract is comparatively open and unobstructed) and consonant-like
states (where some obstruction to the airflow is made), so that speech alternates between these two natural
kinds of sounds. From the speech-production point of view, a syllable consists of a movement from a
constricted or silent state to a vowel-like state and then back to a constricted or silent state. From the acoustic
point of view, this means that the speech signal shows a series of peaks of energy (sonority), corresponding to
the vowel-like states, separated by troughs of lower energy.
Syllable structure could be of three types: ‘simple’ (CV), ‘moderate’ (CVC) and ‘complex’ (with consonant
clusters at edges) such as CCVCC and CCCVCC (where V means vowel and C stands for consonant). Moreover,
words can have one syllable (monosyllabic), two syllables (bisyllabic or disyllabic), three syllables (trisyllabic) or
many syllables (polysyllabic).
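As a small illustration of these templates, the following sketch (using a deliberately tiny, assumed vowel inventory and hand-segmented phoneme lists) maps a transcription onto a C/V pattern such as CCCVC or CVCCC.

# Minimal sketch: derive a C/V template from a pre-segmented phonemic
# transcription. The vowel inventory is deliberately small and illustrative.
VOWELS = {"ɪ", "e", "æ", "ɒ", "ʌ", "ʊ", "ə", "iː", "uː", "ɔː", "aɪ", "eɪ", "əʊ"}

def cv_template(phonemes):
    """Map each phoneme to C or V, e.g. ['t','r','aɪ'] -> 'CCV'."""
    return "".join("V" if p in VOWELS else "C" for p in phonemes)

print(cv_template(["s", "t", "r", "əʊ", "k"]))   # CCCVC  ('stroke')
print(cv_template(["r", "ɪ", "s", "k", "s"]))    # CVCCC  ('risks')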
Sentence-level stress, on the other hand, is applied to one word (rather than to a single syllable) in a sentence,
thus making that word more prominent (stressed) than the rest of the words in the sentence. This type of stress has
its role in intonation patterns and rhythmic features of the language showing specific emphasis on the stressed
word (which may be highlighting some information in the typical context).
Languages of the world are, therefore, divided into two broad categories: stress-timed languages and syllable-timed
languages.
‘Stress-timed language’ is a general phrase used in phonetics to characterize the pronunciation of
languages displaying a particular type of rhythmic pattern, opposed to that of syllable-timed languages.
In stress-timed languages, it is claimed that the stressed syllables recur at regular intervals of time (stress-
timing), regardless of the number of intervening unstressed syllables, as in English. This characteristic is
sometimes also referred to as ‘isochronism’, or isochrony.
Many phoneticians disagree with the basic idea of stress- versus syllable-timing. They are of the view that there are three
dimensions: fixed word stress (mainly found in Romance languages), variable word stress (mainly found in
languages such as English and German) and fixed phrase stress (a third possibility, exhibited by
Japanese), and they would rather categorize languages on the basis of these three patterns.
As a suprasegmental feature, pitch is an auditory sensation - when we hear a regularly vibrating sound such as
a note played on a musical instrument (or a vowel produced by the human voice), we hear a high pitch (when
the rate of vibration is high) and a low pitch (when the rate of vibration is low). There are some speech sounds
that are voiceless (e.g. /s/), and cannot give rise to a sensation of pitch in this way but the voiced sounds can.
Thus the pitch sensation that we receive from a voiced sound corresponds quite closely to the frequency of
vibration of the vocal folds. However, we usually refer to the vibration frequency as fundamental frequency in
order to keep the two things distinct. In tonal languages, pitch is used as an essential component of the
pronunciation of a word and a change of pitch may cause a change in meaning. In most languages (whether or
not they are tone languages) pitch plays a central role in intonation. In very simple terms, pitch is the hearer’s
perception of how fast the vocal folds are vibrating.
Tone (in phonetics and phonology) as a suprasegmental feature refers to an identifiable movement (variation)
or level of pitch that is used in a linguistically contrastive way. In tone (tonal) languages, the linguistic function
of tone is to change the meaning of a word. For example, in Mandarin Chinese, [mā] said with a high level
pitch means ‘mother’, while [má] said with a rising tone means ‘hemp’. In other (non-tonal) languages, tone forms
the central part of intonation, and the difference between, for example, a rising and a falling tone on a
particular word may cause a different interpretation of the sentence in which it occurs. In the case of tone
languages, it is usual to identify tones as being a property of individual syllables, whereas an intonational tone
may be spread over many syllables. In the analysis of English intonation, tone refers to one of the pitch
possibilities for the tonic (or nuclear) syllable. For further analysis, a set of four types of tone is usually used
(fall, rise, fall–rise and rise–fall) though others are also suggested by various experts.
The three variables of pitch range, height and direction are generally distinguished.
Intonation as a suprasegmental feature performs several functions in a language. Its most important function
is to act as a signal of grammatical structure (e.g., creating patterns to distinguish among grammatical
categories), where it performs a role similar to punctuation (in written language), though it can furnish far
more contrasts for conveying meaning than punctuation can. Intonation also gives an idea about syntactic boundaries (sentence-,
clause and phrase level boundaries). It also provides the contrast between some grammatical structures (such
as questions and statements). For example, the change in meaning illustrated by ‘Are you asking me or telling
me?’ is regularly signaled by a contrast between rising and falling pitch. Note the role of intonation in
sentences like ‘He’s going, isn’t he?’ (= I’m asking you) opposed to ‘He’s going, isn’t he!’ (= I’m telling you)
(These examples are given by Peter Roach.) The role of intonation in communication is quite important, as
it also conveys personal attitude (e.g., sarcasm, puzzlement, anger, etc.). Finally, it can signal contrasts in pitch
along with other prosodic and paralinguistic features. It can also bring variation in meaning and can prove an
important signal of the social background of the speakers.
Linguistic phonetics is an approach which is embodied in the principles of the International Phonetic
Association (IPA) and in a hierarchical phonetic descriptive framework that provides certain basis for formal
phonological theory. Speech, being a very complex phenomenon with multiple levels of organization, needs
to be explored from different angles. Linguistic phonetics addresses questions about the possible ways of
unifying articulatory phonetics and phonology and, from the perspective of cognitive phonetics, about how
speech production and perception shape languages as sound systems. The idea is mainly related to the overall
ability of human beings to produce sounds (as a community, irrespective of their specific languages) and to
the representation of their shared knowledge (as captured by the IPA in its charts) for formal phonetic and
phonological theories.
The major reason why the phonetics of the community is considered for phonetic descriptions is that, firstly,
individual speakers differ in interesting ways (two native speakers of a language will always speak with some
variations). The description of the phonetics of the individual involves describing the phonetic knowledge and
skills related to the performance of language. It is possible that certain aspects of the phonetics of the
individual can be captured using IPA transcription but others are not compatible with it (such as his private
knowledge and its performance and the role of memory and experience). Secondly, the phonetics of the
individual is usually not the focus of the linguist in speech elicitation, and it is difficult to describe even with
spectrograms of the person’s speech. Although the phonetics of the individual is the focus of much of the
explanatory power of phonetic theory, for general phonetic description we need to focus on the phonetics
of the community.
One piece of evidence that the IPA chart is based on linguistic phonetics is its treatment of the blank cells on the
chart (those neither shaded nor containing a symbol), which indicate combinations of categories that are
humanly possible but have not so far been observed to be distinctive in any language (e.g., a voiceless retroflex
lateral fricative is articulatorily possible but has not been given a symbol, so its cell is left blank). The shaded
cells, on the other hand, represent articulations judged to be impossible. Further, below the consonant chart is a set of
symbols for consonants made with different airstream mechanisms (clicks, voiced implosives, and ejectives).
All these descriptions reflect the potential of human speech (viewed as the capacity of the linguistic
community), covering not only the possible segments but also the suprasegmental features, the possible
airstream mechanisms, and the diacritics for various types of coarticulation and secondary articulatory
gestures. The IPA chart is carefully documented (by experts) and is continuously revised and updated.
Feature hierarchy is an important concept in phonetics and phonology which is based on the properties and
features of sounds. In a very general sense, a feature may be tied to a particular articulatory maneuver or
acoustic property.
Sounds are divided in terms of their supra-laryngeal and laryngeal characteristics, and their airstream
mechanism. The supra-laryngeal characteristics can be further divided into those for place (of articulation),
manner (of articulation), the possibility of nasality, and the possibility of being lateral. Thus, these features are
used for classifying speech sounds and describing them formally.
For dividing speech sounds through the feature hierarchy, the first division is made on the basis of the major
regions of the vocal tract, giving us five place features in total (i.e., Labial, Coronal, Dorsal, Radical, and Glottal).
‘Labial’ refers to the lips, while ‘Coronal’ and ‘Dorsal’ are related to tongue position; ‘Radical’ is a cover term for
[pharyngeal] and [epiglottal] articulations made with the root of the tongue. The feature ‘Glottal’, on the
other hand, is based on being [glottal], covering articulations such as [h]. If we are to have a convenient
grouping of the features for consonants, we have to recognize that the Supra-laryngeal features must allow for the
dual nature of the actions of the larynx and include Glottal as a place of articulation. Remember that a sound
may be articulated at more than one of the regions Labial, Coronal, Dorsal, Radical, and Glottal. Within the five
general regions, ‘Coronal’ articulations can be split into three mutually exclusive possibilities: Laminal (i.e., made
with the blade of the tongue), Apical (i.e., with the tip of the tongue), and Sub-apical (i.e., with the underside of the
tip/blade of the tongue). Thus the major regions may be subdivided into sub-regions on the basis of their features.
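One way to visualize this hierarchy is as a nested structure. The sketch below is only an illustrative arrangement of the features named above; the exact grouping in formal feature theories varies.

# Sketch of the feature hierarchy described above as a nested mapping.
# Only the place sub-hierarchy for Coronal and Radical is filled in, as in
# the text; the other branches are left open deliberately.
FEATURE_HIERARCHY = {
    "Laryngeal": {},                 # states of the glottis (voicing etc.)
    "Supra-laryngeal": {
        "Place": {
            "Labial": {},
            "Coronal": {"Laminal": {}, "Apical": {}, "Sub-apical": {}},
            "Dorsal": {},
            "Radical": {"pharyngeal": {}, "epiglottal": {}},
            "Glottal": {},
        },
        "Manner": {},
        "Nasal": {},
        "Lateral": {},
    },
    "Airstream": {},
}

def subregions(region):
    """Return the sub-regions listed under a place feature, if any."""
    return list(FEATURE_HIERARCHY["Supra-laryngeal"]["Place"].get(region, {}))

print(subregions("Coronal"))   # ['Laminal', 'Apical', 'Sub-apical']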
Speech is quite diverse and complex, particularly when it comes to the phonetics of the individual. It is
understandable that different speakers of the same language will produce speech somewhat differently,
depending on their vocal tract physiology, their own habits of speech motor coordination and, more
importantly, their memory of speech. Sociolinguistic factors also play a role: we are exposed to a
variety of speech styles, ranging from very careful pronunciations in various types of public speaking to the
quite casual style that is typical between friends. All this leads to a lack of phonetic invariants (the problem of
variability). This ‘lack of phonetic invariance’ poses an important problem for phonetic theory, as we try to
reconcile the fact that shared phonetic knowledge can be described using IPA symbols and phonological
features with the fact that the individual phonetic forms that speakers produce and hear on a daily basis span
a very great range of varieties. This lack of invariance also has great practical significance for language
engineers who try to get computers to produce and recognize speech.
Language universal features: Broad phonetic classes (e.g., aspirated vs. unaspirated) derive from
physiological constraints on speaking or hearing, but their detailed phonetic definitions are
arbitrary—a matter of community norms.
Speaking styles: No one style is basic (from which others are derived), because all are stored in
memory. Bilingual speakers store two systems.
Generalization and productivity: Exemplar theory holds that generalization is possible over the stored
exemplars. Interestingly, productivity, the hallmark of linguistic knowledge in the phonetic
implementation approach, is the least developed aspect of exemplar theory.
Sound change: Sound change is phonetically gradual and operates across the whole lexicon. It is a
gradual shift as new instances keep on adding.
In order to explain the sound patterns of a language, the views of both the speaker and the listener are considered.
Speakers tend to use the least possible articulatory effort (except when they are trying to produce very
clear speech for the listener), and so there are a large number of assimilations, with some segments left out and
others reduced to a minimum. A speaker thus uses language with an ease of articulation (e.g., coarticulation and
secondary articulation). This tendency to produce speech sounds with the maximum possible ease of articulation
leads to change in the pronunciation of words.
Consonant sequences are called clusters (e.g., CC – two consonants or CCC – three consonants). Most of the
phonotactic analyses are based on the syllable structures and syllabic templates.
The study of the phonemes and their order found in the syllables (the study of sound sequences) of a language
is called the phonotactics. It has often been found that languages do not allow all phonemes to appear in any
order (e.g., a native speaker of English can figure out fairly easily that the sequence of phonemes /streŋθs/
makes an English word (‘strengths’), that the sequence /bleɪdʒ/ would be acceptable as an English word
‘blage’ although that word does not happen to exist, and that the sequence /lvm/ could not possibly be part of
an English word). Phonotactic analyses of English come up with some interesting findings. For example, why
should ‘bump’, ‘lump’, ‘hump’, ‘rump’, ‘mump(s)’, ‘clump’ and others all be associated with large blunt
shapes? Why should there be a whole family of words ending with a plosive and a syllabic /l/ all having
meanings to do with clumsy, awkward or difficult action (e.g., ‘muddle’, ‘fumble’, ‘straddle’, ‘cuddle’, ‘fiddle’,
‘buckle’, ‘struggle’, ‘wriggle’)? Why can’t English syllables begin with /pw/, /bw/, /tl/, /dl/ when /pl/, /bl/, /tw/,
/dw/ are acceptable? All such discussion is called the phonotactics of the language.
Phonotactics is a term used in phonology to refer to the order (sequential arrangements or tactic behavior) of
segments (sounds or phonological units) which occur in a language. It shows us what counts as a
phonologically well-formed structure of a word. The allowed sound patterns and restricted sound patterns of
language are found through phonotactics. For example, in English, consonant sequences such as /fs/ and
/spm/ do not occur initially in an English word, and there are many other restrictions on the possible
consonant+vowel combinations which may occur. By thoroughly analyzing the data, the ‘sequential
constraints’ of a language can be stated in terms of phonotactic rules. According to Generative
phonotactics, no phonological principles can refer to morphological structure; and phonological patterns which
are sensitive to morphology (e.g. affixation, etc.) are represented only in the morphological component of the
grammar (not in the phonology). Some examples from the English phonotactics are given for your
understanding:
Also possible: CCV (try), CCCVC (stroke), CCCV (straw), VCC (eggs), CVCC (risk), CVCCC (risks).
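To make the idea of a phonotactic constraint concrete, here is a minimal sketch that licenses only a tiny, assumed subset of English onsets (taken from the clusters mentioned above); a real phonotactic grammar would of course be far larger.

# Minimal sketch of a phonotactic onset check. The set of permitted onsets
# is a small, illustrative subset based on the examples in the text.
PERMITTED_ONSETS = {"pl", "bl", "tw", "dw", "tr", "str", "spr"}

def onset_is_well_formed(onset):
    """Single consonants pass; clusters must be explicitly licensed."""
    return len(onset) <= 1 or onset in PERMITTED_ONSETS

for onset in ["str", "pl", "pw", "tl", "fs"]:
    verdict = "OK" if onset_is_well_formed(onset) else "not a possible English onset"
    print(f"/{onset}/ -> {verdict}")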
Praat is a computer program with which you can analyze, synthesize, and manipulate speech, and create high-
quality pictures for your articles and thesis. You are advised to start going through the tutorial from its
homepage.
1. Create a TextGrid: • In the Praat Objects window, highlight the required sound file. • Annotate > To
TextGrid. • Create two tiers (this will be enough for our purposes) by writing ‘word segment’ (these are
the two tier names) in the field named ‘All tier names’ in the small window.
2. Open the sound file and TextGrid together: • Hold down Ctrl and click on each file to highlight them
both. • Edit (in your display you should now see the waveform (top), the spectrogram (middle) and
the TextGrid (bottom) corresponding to your sound file).
3. Segment the file: • Place the cursor at the beginning of the name on the spectrogram/waveform; a
boundary line will show up. • Click in the little circle at the top of the word tier in the TextGrid to
create a boundary. • To remove a boundary that you have made, highlight the boundary and go to
Boundary > Remove, or press Alt+Backspace.
4. Label the intervals: • Select/highlight the target interval by clicking between two boundaries; the
selected interval should turn yellow. • To input or change the text in an interval, edit in the text box
above the spectrogram. • Give a name to each interval you create ([first name] or [last name]). A
scripted version of this workflow is sketched below.
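For readers who prefer scripting, the same workflow can be sketched with the praat-parselmouth Python package; the file name, tier names, boundary time and label below are placeholders, and the GUI steps above remain the procedure to follow in the lab.

# Scripted sketch of the TextGrid workflow, assuming praat-parselmouth
# (pip install praat-parselmouth). All names and times are placeholders.
import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("recording.wav")                  # your recorded file
tg = call(snd, "To TextGrid", "word segment", "")         # two interval tiers

# Insert a boundary on tier 1 (the 'word' tier) at 0.25 s and label the
# first interval, mirroring steps 3 and 4 above.
call(tg, "Insert boundary", 1, 0.25)
call(tg, "Set interval text", 1, 1, "first name")

call(tg, "Save as text file", "recording.TextGrid")       # reopen later in Praat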
1) Displaying the pitch track and letting Praat measure the pitch automatically: • Display the pitch
track: Pitch > Show pitch. • Place your cursor in the middle, a stable portion of the vowel. • Go to
Pitch > Get pitch; a box will appear with the pitch value in it (note it down).
2) Displaying the pitch track and measuring the pitch manually: • Display the pitch track: Pitch > Show
pitch. • Click on the blue pitch track in the middle of the vowel. • A red horizontal bar should appear
with the pitch value (in dark blue) on the right side of the window (take the measurement from here).
3) Using the waveform (top of the display): • Zoom into a small piece of the waveform in the
middle of the vowel and measure the period by highlighting one complete cycle and noting the time
associated with it (in the panel above the waveform); the fundamental frequency is the reciprocal of this
period. A scripted version of these measurements is sketched below.
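A scripted sketch of these measurements, assuming the praat-parselmouth package (the file name and the example period are placeholders):

import numpy as np
import parselmouth

snd = parselmouth.Sound("vowel.wav")          # placeholder file name

# (1)-(2) Automatic pitch track: take the value nearest the vowel midpoint.
pitch = snd.to_pitch()
times = pitch.xs()
f0_track = pitch.selected_array["frequency"]  # 0 where Praat found no pitch
mid = snd.duration / 2
f0_mid = f0_track[np.argmin(np.abs(times - mid))]
print(f"f0 near the midpoint: {f0_mid:.1f} Hz")

# (3) Manual method: fundamental frequency is the reciprocal of the period.
period = 0.008                                  # e.g. one highlighted cycle = 8 ms
print(f"1 / {period} s = {1 / period:.0f} Hz")  # 125 Hz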
Change the window length to 0.025 s; the default window length is 0.005 s (a wide-band spectrogram), so a
0.025 s window gives a narrow-band spectrogram and changes the display dramatically.
Looking at each vowel, notice the grey horizontal bands: these correspond to harmonics. For each
vowel, measure the frequencies of the first three harmonics (H1–H3) and the 10th harmonic (H10).
Click on the centre (horizontally) of each harmonic in the centre of each vowel.
A red horizontal bar should appear with the frequency value on the left side of the window in red.
Formants are the overtone resonances. Acoustically, F1 and F2 are the most important values for plotting
vowels on a chart. By now, we know that narrow-band spectrograms are needed for measuring harmonics,
whereas wide-band spectrograms are needed for measuring formants (which are the important characteristics of
sonorant speech sounds such as vowels). On a spectrogram, formants appear as thick dark bands (darkness
corresponds to loudness; i.e., the darkest harmonics are the ones that are most amplified). These amplified
harmonics form the formants that are characteristic of sonorant speech sounds. Now, let us measure the first and
second formants (F1 and F2) from the middle of each vowel using the three techniques outlined below and note
down the measurements:
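As an optional, scripted alternative to reading formants off the wide-band spectrogram, the sketch below uses praat-parselmouth with Praat's default Burg settings; the file name is a placeholder and your manual measurements remain the reference for the worksheet.

import parselmouth

snd = parselmouth.Sound("heed.wav")           # placeholder file name
formant = snd.to_formant_burg()               # Burg algorithm, Praat defaults
t = snd.duration / 2                          # middle of the vowel

f1 = formant.get_value_at_time(1, t)
f2 = formant.get_value_at_time(2, t)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")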
From the comparison of the two sets of values (harmonics and formants), it should now be clear that the
harmonics change with the pitch of the voice, while the formants of a given type of sound stay essentially the
same. The relationship between the harmonics and the formants is captured in the source-filter
model of speech production: harmonics are related to the laryngeal activity (the source), and
formants are the output of the vocal tract (the filter).
For exploring the acoustics of vowels in this session, we need to record vowels and examine their properties.
The eight American English vowels given in your book (Chapter 8) are to be recorded for this purpose (by
now, you should know how to record them). These are the vowels in the words heed, hid, head, had, hod, hawed,
hood and who’d. When you are done with the recording, get ready to measure the following three things: intrinsic
pitch and spectral make-up (formants), and then plot the values in an Excel sheet (finally exporting them to your Word
document). Now, record yourself saying the words. Take a quick look at your vowels in the Edit window, and
make sure you can clearly see the vowel formants. If you have trouble seeing them, go back to the
previous labs and revise. While doing this, please make a note of it on your worksheet.
Finally, we are at the last step, related to the spectral make-up of vowel sounds. We have already taken the
measurements of the first two formants, and we are going to plot those values on a chart using an Excel
spreadsheet. Enter the formant values associated with the different vowels in separate columns: the vowels in
the first column, the difference between F2 and F1 in the second column, and F1 in the third. After putting the
data in the Excel sheet, we will use the Scatter chart from the same spreadsheet. In order to make the chart
correspond to the required orientation for F1 and F2, we will reverse the values on both axes (Y and X), so that
zero for both F1 and F2−F1 is now at the top right corner. Watch the video and you will see how F1 is inversely
related to the height of the vowel and the difference between F2 and F1 to the frontness of the vowel. Once
completed, export the chart to your Word document and give it a number and title accordingly.
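If Excel is not available, the same chart can be sketched in Python with matplotlib; the formant values below are placeholders to be replaced by your own measurements.

# Sketch of the vowel chart with reversed axes (front vowels at the left,
# high vowels at the top). Values are illustrative placeholders.
import matplotlib.pyplot as plt

vowels = {"i": (280, 2250), "æ": (660, 1700), "ɑ": (730, 1090), "u": (300, 870)}

f1 = [v[0] for v in vowels.values()]
f2_minus_f1 = [v[1] - v[0] for v in vowels.values()]

fig, ax = plt.subplots()
ax.scatter(f2_minus_f1, f1)
for label, (v1, v2) in vowels.items():
    ax.annotate(label, (v2 - v1, v1))

ax.set_xlabel("F2 - F1 (Hz)")   # frontness
ax.set_ylabel("F1 (Hz)")        # height
ax.invert_xaxis()               # front vowels on the left
ax.invert_yaxis()               # high vowels at the top
plt.show()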
Formants for nasal sounds are also important for acoustic analysis. Measure the first three formants (F1, F2 and
F3) of the nasals from the file (using the way of measuring formants already learnt). Remember that nasals
have very distinctive waveforms (different from those of vowels), as they show distinctive anti-formants
(bands of frequencies that are damped) and formant transitions.
Glides are also sonorant (vowel-like) sounds, as they show similar patterns (they have formants). Read from our
recorded file and take the first three formants (F1, F2 and F3) from the middle of the glides (both
/w/ and /j/) and explore their acoustic correlates. Carefully judge the centre of these sounds (the midpoint
of [w] and [j]). Analyse how similar the formant structure of glides is to that of vowels and nasals. Draw lines to
indicate F1, F2 and F3 and compare with vowels.
There are three important acoustic correlates of voicing in stops: the voice bar, VOT, and the duration of the
preceding vowel. Record /apa/, /aba/, /ata/, /ada/, /apʰa/ and /atʰa/ and, for each of the stops in the file,
take the three measurements according to the following instructions. Look for the voice bar during the stop
closure. We can also explore features related to the place of articulation (comparing any bilabial feature for
/p/ or /b/ with a non-bilabial one). Also check the duration of the preceding vowels.
To calculate the VOT: record /apa/, /aba/, /ata/, /ada/, /apʰa/ and /atʰa/. Zoom in on your stop sounds
so that you can analyse their patterns and find the difference among the three types of VOT
(negative, zero and positive). Measure the VOT of each stop and compare the voiced/voiceless counterparts (p/b,
t/d, k/g). Similarly, zoom in so that you can clearly see the stop closure followed by the beginning of the vowel.
You can then measure the time between the end of the stop closure (the beginning of the release burst) and the
onset of voicing in the following vowel (the onset of regular pitch pulses in the waveform).
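The arithmetic behind the measurement is simple, as the sketch below shows; the burst and voicing-onset times are invented purely for illustration.

# VOT is simply the time of voicing onset minus the time of the release
# burst. Read your own times off the waveform or a TextGrid.
def vot(burst_time, voicing_onset_time):
    """Return VOT in milliseconds (negative = voicing starts before the release)."""
    return (voicing_onset_time - burst_time) * 1000

print(vot(0.512, 0.524))   #  +12 ms -> voiceless unaspirated, e.g. [p]
print(vot(0.512, 0.580))   #  +68 ms -> voiceless aspirated, e.g. [pʰ]
print(vot(0.512, 0.430))   #  -82 ms -> voiced (voicing lead), e.g. [b]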
Phonetics and phonology is a very promising area for research in the Pakistani context. In applied
phonology, many areas can be explored; for example, the issues faced by Pakistani learners of English may be
studied. Similarly, the pronunciation problems of Pakistani learners are a potential area through which the
difficulties faced by Pakistani students may be addressed. Researchers can also explore and document the
phonological features of Pakistani English in order to get the Pakistani variety of English recognized. Other
potential areas include segmental and suprasegmental features (such as stress placement, intonation patterns,
and syllabification and resyllabification of English words by Pakistani learners). Contrastive analysis between
English phonology and the sound systems of the regional languages of Pakistan (Urdu, Punjabi, Sindhi, Balochi
and Pashto) can also be carried out. We can also think about exploring consonant clusters and interlanguage
phonology from a second language acquisition point of view. While focusing on ELT as a part of applied
linguistics, studies may also be carried out on the Pakistani variety of English (development of its corpora,
deviation from the standard variety (RP), its specific features, etc.). Moreover, IPA resources and their
application to ELT in the Pakistani context can also be studied.
Pakistani regional languages are part of a linguistically rich area (the Hindu Kush Himalaya (HKH) region, one
of the richest regions in the world linguistically and culturally), which makes them a very promising area for
research in areal and typological linguistics (the description of linguistic features cross-linguistically). While
working on Pakistani regional languages, one may apply for funding from international organizations (e.g.,
organizations for endangered languages and UNESCO).
Three principles for feature analysis: contrastive function (how it is different), descriptive function (what it is)
and classificatory function (based on broader classes of sounds). Features may also be studied further as part
of language universals, and then in their role as language-specific subsets.
In experimental phonetics and phonology, the study of sounds involves various up-to-date experimental
techniques and computer software used under carefully designed laboratory conditions. It is an
important application of the latest technology, going beyond simple acoustics and working in sophisticated
phonetics labs in order to discover the hidden aspects of human speech. For example,
questions such as ‘How is speech produced and processed?’ are the focus of experimental phonetics (explore
the speech chain as a starting point for experimental phonetics, as discussed in Chapter 20 by Peter Roach). The
latest trends in experimental phonetics include brain functions in speech production and processing (using
specialized equipment, including many special instruments such as X-ray techniques), speech errors,
neurolinguistics, and topics related to developments in computing for speech analysis and synthesis.
Developing relevant material for the teaching of phonetics and phonology is an important task for aspiring
teachers of English language. Keep in mind the specific needs of ELT activities in your own context and explore
already developed material available online from various sources (such as the British Council and other teacher
resource centres); however, you must also be able to develop your own material (as specifically required by
your students). For example, you can develop material related to teaching pronunciation to learners of English.
You can incorporate material related to IPA text, transcription of audio (listening-based) activities, and
activities that involve students in using dictionaries (ideally pronouncing dictionaries) in the classroom.
Good teachers are expected to be active researchers and are therefore busy updating themselves about the
latest research and teaching methodologies around the world. It is a pedagogical challenge for teachers
to keep themselves updated by exploring pedagogical and technological developments relevant to ELT experts
(in their own contexts and internationally). For example, the aspects of Task-Based Learning and Teaching
(TBLT), widely regarded as a highly effective approach within second language acquisition (SLA), may be
effective in the Pakistani context if explored by ELT practitioners. Teachers are agents of change, and they
should read research studies, carry out research of their own, and explore issues and their solutions. A good
way is to keep reading teachers’ digests and journals and to participate in online discussions run by teaching
associations.
As already discussed, teachers are expected to facilitate action research, which is the most rewarding and
productive for their own profession. For example, the phonetics of phonological speech errors, if explored and
shared by teachers (investigating their own practices), may lead to a very positive discussion in the academic
circles of research into ELT and SLA. Similarly, topics such as learners’ performance and development (e.g., what
do good speakers do?) may yield useful results for the teaching community. Teachers (and student teachers)
are therefore expected to facilitate action research related to reading/listening issues, English reading
strategies (e.g., in primary schools) and their effectiveness, impact on pronunciation, and more.
Research in phonetic theory and description with phonological, typological and broader implications may also
be included in action research specific to phonetics and phonology.
Tap: A tap is a rapid up-and-down movement of the tip of the tongue, for example when the
middle sound in the word ‘pity’ is pronounced with a typical American accent as [ɾ]. It is very brief and is
produced by a sharp upward throw of the tongue blade, so that the tongue makes a single tap against the alveolar ridge.
Flap: A flap is produced by curling the tongue tip up and back and then letting it strike the ridge as it returns
forward, so it involves a front-and-back movement of the tongue tip. Flaps are found in abundance in
Indo-Aryan (IA) languages; the typical flap in IA languages is the retroflex [ɽ], and related retroflex sounds include [ɖ] and [ɳ].
Trill: In the production of a trill, the articulator is set in motion by the current of air, as in [r]. It is a typical
sound of Scottish English in words like ‘rye’ and ‘row’.
Bilabial: This sound is made with two lips (for example, /p/ and /b/). The lips come together for these
sounds.
Labiodental: This sound is made when the lower lip is raised to touch the upper front teeth (for
example, /f/ and /v/).
Dental: This sound is made with the tongue tip or blade and the upper front teeth. For example, say the
words thigh and thy, and you will find the first sound in each of these words to be dental.
Alveolar: This sound is made with the tongue tip or blade and the alveolar ridge. You may pronounce
words such as tie, die, nigh, sigh, zeal, lie using the tip of the tongue or the blade of the tongue for the
first sound in each of these words (which are alveolar sounds).
Retroflex: This sound is produced when the tongue tip curls against the back of the alveolar ridge.
Many speakers of English do not use retroflex sounds at all but it is a common sound in Pakistani
languages such as Urdu, Sindhi, Pashto, Balochi and Punjabi.
Palato-alveolar: This sound is produced with the tongue blade and the back of the alveolar ridge (for
example, the first sound in words like shy, she, show).
Palatal: This sound is produced with the front of the tongue and the hard palate (such as the first sound
in ‘yes’).
Velar: This sound is produced with back of the tongue and the soft palate (such as /k/ and /g/).
ɜː as in burn
Q. Define allophone.
An allophone is a definable, systematic variant of a phoneme. Compare the following sets:
1. The ‘s’ sound in words like sill, still and spill, or in words like seed, steed and speed
2. The ‘k’ sound in words like key and car
3. The ‘t’ sound in words like true and tea
4. The ‘n’ sound in words like tenth and ten
(Note that a phone is simply a speech sound, a sound pattern with certain acoustic features, regardless of its phonemic status.)
Q. What are the three areas of experimental phonetics explored by Peter Ladefoged in 1967?
The following three areas were explored by Peter Ladefoged:
Stress and respiratory activity
The nature of vowel quality
Perception and production of speech
A sound wave is the pattern of disturbance caused by the movement of energy traveling through
air (sound always travels in the shape of waves in the air). Sound basically consists of small variations in
air pressure that occur very rapidly one after another. These variations are caused by actions of the
speaker’s vocal organs that are (for the most part) superimposed on the outgoing flow of lung air.
Acoustically, vowels are mainly distinguished by the first two formant frequencies, F1 and F2. F1
is inversely related to vowel height (the lower the F1 frequency, the higher the vowel), and F2 is related to
the frontness or backness of the vowel (the lower the F2 frequency, the further back the vowel).
Q. Give a broad transcription of ‘The North Wind and the Sun’ story
ðə ˈnoɹθ ˌwɪnd ən (ð)ə ˈsʌn wɚ dɪsˈpjutɪŋ ˈwɪtʃ wəz ðə ˈstɹɑŋɡɚ, wɛn ə ˈtɹævəlɚ ˌkem əˈlɑŋ ˈɹæpt
ɪn ə ˈwoɹm ˈklok.
ðe əˈɡɹid ðət ðə ˈwʌn hu ˈfɚst səkˈsidəd ɪn ˈmekɪŋ ðə ˈtɹævəlɚ ˈtek ɪz ˈklok ˌɑf ʃʊd bi kənˈsɪdɚd ˈstɹɑŋɡɚ
ðən ðɪ ˈəðɚ.
ðɛn ðə ˈnoɹθ ˌwɪnd ˈblu əz ˈhɑɹd əz i ˈkʊd, bət ðə ˈmoɹ hi ˈblu ðə ˈmoɹ ˈklosli dɪd ðə ˈtɹævlɚ ˈfold hɪz
ˈklok əˈɹaʊnd ɪm;
ˌæn ət ˈlæst ðə ˈnoɹθ ˌwɪnd ˌɡev ˈʌp ði əˈtɛmpt. ˈðɛn ðə ˈsʌn ˈʃaɪnd ˌaʊt ˈwoɹmli ənd ɪˈmidiətli ðə ˈtɹævlɚ
ˈtʊk ˌɑf ɪz ˈklok.
ən ˈso ðə ˈnoɹθ ˌwɪnd wəz əˈblaɪʒ tɪ kənˈfɛs ðət ðə ˈsʌn wəz ðə ˈstɹɑŋɡɚ əv ðə ˈtu.
Q. Give a narrow transcription of ‘The North Wind and the Sun’ story
ðə ˈnɔɹθ ˌwɪnd ən ə ˈsʌn wɚ dɪsˈpjuɾɪŋ ˈwɪtʃ wəz ðə ˈstɹɑŋɡɚ, wɛn ə ˈtɹævlɚ ˌkem əˈlɑŋ ˈɹæpt ɪn
ə ˈwɔɹm ˈklok.
ðe əˈɡɹid ðət ðə ˈwʌn hu ˈfɚst səkˈsidəd ɪn ˈmekɪŋ ðə ˈtɹævlɚ ˈtek ɪz ˈklok ˌɑf ʃʊd bi kənˈsɪdɚd ˈstɹɑŋɡɚ
ðən ðɪ ˈʌðɚ.
ðɛn ðə ˈnɔɹθ ˌwɪnd ˈblu əz ˈhɑɹd əz hi ˈkʊd, bət ðə ˈmɔɹ hi ˈblu ðə ˈmɔɹ ˈklosli dɪd ðə ˈtɹævlɚ ˈfold hɪz
ˈklok əˈɹaʊnd hɪm;
ˌæn ət ˈlæst ðə ˈnɔɹθ ˌwɪnd ˌɡev ˈʌp ði əˈtɛmpt. ˈðɛn ðə ˈsʌn ˈʃaɪnd ˌaʊt ˈwɔɹmli ənd ɪˈmidiətli ðə ˈtɹævlɚ
ˈtʊk ˌɑf ɪz klok.
ən ˈso ðə ˈnɔɹθ ˌwɪnd wəz əˈblaɪʒ tɪ kənˈfɛs ðət ðə ˈsʌn wəz ðə ˈstɹɑŋɡɚ əv ðə ˈtu.
Q. What is stress?
In phonetics, stress refers to the degree of force used in producing a syllable. The usual distinction
is between stressed and unstressed syllables, the former being more prominent than the latter (and marked
in transcription with a raised vertical line, [ˈ]).
Apart from the above stops, in some varieties of English the glottal stop /ʔ/ is found, as in
beaten [ˈbiːʔn̩]. English voiceless stops (p, t, k) are also aspirated at the beginning of
words, as in [pʰaɪ, tʰaɪ, kʰaɪ].
Very common fricative sounds are /f, v, s, z, θ, ʃ, ð, h/, whereas /ʒ/ is a less common fricative.
English fricatives are also divided into two categories (a distinction made on the basis of the energy
used in their production): fortis /f, s, θ, ʃ, h/ and lenis /v, z, ð, ʒ/. Stops and fricatives are together called
‘obstruents’, and they are similar in three ways: (1) they influence vowel length (vowels are shorter before
voiceless obstruents), (2) voiceless obstruents in final position are longer than their voiced counterparts
(e.g., race vs. rays), and (3) obstruents are voiced only if the adjacent segments are also voiced (e.g.,
dogs).
Remember that although it may seem strange to call the combination of a plosive and a fricative a single
sound (an affricate), a matter that has been debated for quite some time, experts argue that an affricate is a
single segment and accordingly it should be treated as a single unit. There are two affricates in English: /tʃ/
and /dʒ/ (the first voiceless, the second voiced), as at the beginning and end of the
English words church and judge. Both of them are post-alveolar by place of articulation.
For a detailed transcription, diacritics are added to a symbol in order to narrow its meaning. The following
six diacritics are quite important for attempting the detailed transcription exercises:
Q. Define Aspiration.
Aspiration is a puff of noise made when a consonantal constriction is released and air is allowed
to escape relatively freely (e.g., in English /p t k/ at the beginning of a syllable are aspirated).
Phonetically, aspiration is the result of the vocal cords being widely parted at the time of the articulatory
release.
Q. Define Nasalization.
Nasalization is an articulatory process whereby a sound is made ‘nasal’ (with air passing
through the nasal cavity) due to an adjacent nasal sound (it is an articulatory influence of an adjacent nasal
consonant, as in words like mat or hand). A vowel can also be nasalized, as in a word like man (where /æ/ may
be articulated with the soft palate lowered throughout) because of the influence of the nasal consonants (this is
called anticipatory coarticulation).
Q. Define velarisation.
Velarization, in phonetics, is a secondary articulation in the pronunciation of consonants in which
the tongue is drawn far up and back in the mouth (toward the velum, or soft palate), as if to pronounce a
back vowel such as o or u. Velarization is not phonemic in English, although for most English speakers
the l in “feel” is velarized while the l in “leaf” is not. It is distinctive in some languages (e.g., Arabic).
Velarized consonants should be distinguished from velar consonants, in which the primary articulation
involves the back of the tongue and the velum; in velarized consonants there must always be some other
primary articulation.
A diphthong is a single vowel combining the features of two vowels. Its most important characteristic is
the glide from one vowel quality to another (so basically it is a glide). The BBC accent of English
contains a large number of diphthongs (eight in total), including three ending in /ɪ/ (eɪ, aɪ, ɔɪ, as in the words
bay, buy and boy), two ending in /ʊ/ (əʊ, aʊ, as in no and now) and three ending in /ə/ (ɪə, eə, ʊə,
as in peer, pair and poor).
Q. Define stress.
Stress is basically a prominence of syllable in terms of loudness, length, pitch and quality and all
of them work together in order to make a syllable stressed. Stress is a term used in phonetics to refer to
the degree of force (for making it louder and longer) used in producing a syllable. In terms of its
linguistic function, stress is often treated under two different headings: word stress (lexical stress) and
sentence stress (emphatic stress).
Q. Define Tone.
Although ‘tone’ as a word has a very wide range of meanings and uses in ordinary languages, its
meaning in phonetics and phonology is quite restricted. It refers to an identifiable movement or level of
pitch that is used in a linguistically contrastive way. In typical ‘tone’ languages, the linguistic function of
tone is to change the meaning of a word. In the case of tone languages, it is usual to identify tones as
being a property of individual syllables, whereas an intonational tone may be spread over many syllables.
Similarly, in the analysis of English intonation, tone refers to one of the pitch possibilities for the tonic (or
nuclear) syllable, a set usually including fall, rise, fall–rise and rise–fall, though others are also suggested
by various experts.
Q. What is intonation?
Intonation, in phonetics, is the melodic pattern of an utterance. It is primarily a matter of
variation in the pitch level of the voice (see also tone), but in such languages as English, stress and
rhythm are also involved. Intonation conveys differences of expressive meaning (e.g., surprise, anger,
wariness).
In many languages, including English, intonation serves a grammatical function, distinguishing
one type of phrase or sentence from another. Thus, “Your name is John,” beginning with a medium pitch
and ending with a lower one (falling intonation), is a simple assertion; “Your name is John?”, with a
rising intonation (high final pitch), indicates a question.
Most commonly, the air is moved outwards from the body (creating an egressive airstream) but,
more rarely, speech sounds are also made by drawing air inward (into the body, an ingressive
airstream). There are various mechanisms for initiating the airflow. The most common is
the one in which the air is moved inwards or outwards by the lungs (the pulmonic
airstream mechanism), which is used for producing the majority of human speech sounds. The ‘glottalic’ airstream
mechanism, as its name suggests, uses the movement of the glottis (the aperture between the vocal folds)
as the source of energy. The third is the ‘velaric’ airstream mechanism, which involves an airflow
produced by a movement of the back of the tongue against the velum.
Some sounds are produced without the direct involvement of air from the lungs. Air is compressed in the mouth or pharynx
above a glottal closure and released while the breath is still held; the resulting sounds, produced with
this glottalic airstream mechanism, are known as ejective sounds. They are also called ‘glottalic’ or
glottalized sounds (though the latter term is often restricted to sounds where the glottal feature is a
secondary articulation). In languages like Quechua and Hausa, ejective consonants are used as phonemes.
A further category of sounds involving a glottalic airstream mechanism is known as implosive.
In order to understand VOT, three types of plosive need to be distinguished: voiced, voiceless unaspirated
and voiceless aspirated. During the production of a fully voiced plosive
(e.g., /b/ or /g/), the vocal folds vibrate throughout; in a voiceless unaspirated plosive (such as /p/ or /t/),
there is a short delay (or lag) before voicing starts; and in a voiceless aspirated plosive (e.g., /pʰ/ or /tʰ/), the
delay is much longer, depending on the amount of aspiration. The length of this delay is called Voice
Onset Time (VOT), which, for each type of plosive, varies from language to language.