General Notes

Sound is caused by pressure variations in a medium that propagate in wave patterns away from a source of vibration. Sound waves can be longitudinal, involving changes in pressure, or transverse, involving shear stress perpendicular to the direction of propagation. The human ear detects these pressure variations and converts them into electrical signals that are sent to the brain. These notes cover the key characteristics of sound (frequency, wavelength, pitch, harmonics), the perceptual differences between speech and music, and the three main parts of the ear (outer, middle, and inner) that work together to detect sound waves and facilitate hearing.

Uploaded by

AhmedBIowar
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
38 views

General Notes

Sound is caused by pressure variations in a medium that propagate in wave patterns away from a source of vibration. Sound waves can be longitudinal waves, which involve changes in pressure, or transverse waves, which involve shear stress perpendicular to the direction of propagation. The human ear detects these pressure variations and converts them into electrical signals that are sent to the brain. Key characteristics of sound include frequency, wavelength, pitch, harmonics, and the differences between speech and music frequencies from a perceptual standpoint. The ear has three main parts - outer, middle, and inner ear - that work together to detect sound waves and facilitate hearing.

Uploaded by

AhmedBIowar
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 19

WHAT IS SOUND?

Sound is an aural sensation caused by pressure variations in the air, which are always produced by some source of vibration. The source may be a solid object or turbulence in a liquid or gas.

These pressure fluctuations may take place very slowly, such as those caused by atmospheric changes, or very rapidly, reaching into the ultrasonic frequency range.

The velocity of sound is independent of the rate at which these pressure changes take place; it depends solely on the properties of the air in which the sound wave is travelling.
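The dependence on the properties of the air can be made concrete with the standard ideal-gas formula c = sqrt(gamma * R * T / M). The following Python sketch (an illustration using textbook constants, not part of the original notes) shows how temperature, rather than the rate of the pressure fluctuations, sets the velocity:

```python
import math

def speed_of_sound_air(temp_celsius):
    """Ideal-gas estimate of the speed of sound in dry air, in m/s:
    c = sqrt(gamma * R * T / M)."""
    gamma = 1.4                 # adiabatic index of air (diatomic gas)
    R = 8.314                   # universal gas constant, J/(mol*K)
    M = 0.02897                 # molar mass of dry air, kg/mol
    T = temp_celsius + 273.15   # absolute temperature, K
    return math.sqrt(gamma * R * T / M)

for t in (0, 20, 40):
    print(f"{t:3d} degC -> {speed_of_sound_air(t):6.1f} m/s")
# 0 degC -> ~331 m/s; 20 degC -> ~343 m/s; 40 degC -> ~355 m/s
```

Note that frequency does not appear anywhere in the formula, which is exactly the point made above.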

WHAT IS SOUND WAVE?

A sound wave is the pattern of disturbance caused by the movement of energy travelling through a medium (such as air, water, or any other liquid or solid matter) as it propagates away from the source of the sound.

The source is some object that causes a vibration, such as a ringing telephone or a person's vocal cords.

The vibration disturbs the particles in the surrounding medium; those particles disturb those next to them, and so on.

The pattern of the disturbance creates outward movement in a wave pattern, like waves of seawater on the ocean. The wave carries the sound energy through the medium, usually in all directions and less intensely as it moves farther from the source.

SOUND WAVES: LONGITUDINAL AND TRANSVERSE

Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves.

Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angles to the direction of propagation.

PROPAGATION OF SOUND

Sound is a sequence of pressure waves which propagates through compressible media such as air or water. (Sound can propagate through solids as well, but there are additional modes of propagation.) During their propagation, waves can be reflected, refracted, or attenuated by the medium, so the characteristics of the medium have a direct effect on the sound.

All media have three properties which affect the behavior of sound propagation:

1. A relationship between density and pressure. This relationship, affected by temperature, determines the speed of sound within the medium.

2. The motion of the medium itself, e.g., wind. Independent of the motion of sound through the medium, if the medium is moving, the sound is further transported.

3. The viscosity of the medium. This determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible.

WHAT IS PITCH?
Pitch is a perceptual property that allows the ordering of sounds on a frequency-related scale. Pitches are compared as "higher" and "lower" in the sense associated with musical melodies, which require sound whose frequency is clear and stable enough to distinguish from noise. Pitch is a major auditory attribute of musical tones, along with duration, loudness, and timbre.
WHAT IS AN OCTAVE?
In music, an octave (Latin octavus: eighth) or perfect octave is the interval between one musical pitch and another with half or double its frequency. An octave is defined as a 2:1 ratio of two frequencies.
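Since each octave is a doubling, frequencies n octaves away from a reference are f0 * 2^n. A minimal sketch (the A4 = 440 Hz reference pitch is an assumption for illustration, not from the notes):

```python
def octaves(f0, n):
    """Frequencies from n octaves below to n octaves above f0."""
    return [f0 * 2 ** k for k in range(-n, n + 1)]

# Assumed reference pitch: A4 = 440 Hz.
print(octaves(440.0, 2))  # [110.0, 220.0, 440.0, 880.0, 1760.0]
```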
FREQUENCY
This is the number of vibrations or pressure fluctuations per second, measured in hertz (Hz). The frequency can be expressed as

f = 1 / T (1)

where

f = frequency (s-1, Hz)
T = time for completing one cycle (s)

Example - Frequency

The time for completing one cycle of a 500 Hz tone can be calculated using (1) as

T = 1 / (500 Hz) = 0.002 s

The range of human hearing is 20 to 20,000 Hz. With age, 12,000-13,000 Hz becomes the upper limit for many people.
WAVELENGTH:

This is the distance travelled by the sound during the period of one complete vibration. It is related to frequency by wavelength = velocity / frequency.
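Combining this definition with the period relation above gives a two-line computation. A short sketch (the value c = 343 m/s for air at about 20 degC is an assumed textbook figure, not from the notes):

```python
def period_s(f_hz):
    """Period T = 1 / f, in seconds (Eq. 1 above)."""
    return 1.0 / f_hz

def wavelength_m(f_hz, c=343.0):
    """Wavelength = c / f; c defaults to ~343 m/s (air at ~20 degC)."""
    return c / f_hz

f = 500.0                         # the 500 Hz tone from the example above
print(period_s(f))                # 0.002 s
print(round(wavelength_m(f), 3))  # ~0.686 m
```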

Velocity of sound in air, plane waves, energy density, sound intensity, decibel, sound pressure level: refer Xerox (photocopied notes).
WHAT ARE HARMONICS?

A harmonic is an overtone accompanying a fundamental tone at a fixed interval, produced by vibration of a string, column of air, etc., in an exact fraction of its length. Harmonics occur at integer multiples of the fundamental frequency.
SPEECH AND MUSIC FREQUENCIES
Just as there are similarities and differences between speech and music spectra, there are also similarities and differences between the perceptual requirements for speech and for music. Compared with music, speech tends to have a well-controlled spectrum with well-established and predictable perceptual characteristics.

In contrast, musical spectra are highly variable, and the perceptual requirements can vary based on the musician and the instrument being played. This is an overview of five salient differences between speech and music that have direct ramifications for hearing aid fittings.
1) SPEECH VS. MUSIC SPECTRA:
Speech:

Speech, regardless of language, has to be generated by a rather uniform set of tubes and cavities.

The human vocal tract is approximately 17 cm from larynx (vocal cords) to lips. The vocal tract can be either a single tube, as is the case for oral consonants and vowels, or a pair of parallel tubes when the nasal cavity is open, as in [m] and [n].

The frequencies of the resonances of the vocal tract (called formants) are governed primarily by constrictions in the mouth and the length of the vocal tract tube. Vocal tract lengths cannot change significantly.
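A common textbook idealization (not spelled out in these notes) models the neutral vocal tract as a uniform tube closed at the glottis and open at the lips, whose resonances fall at odd quarter-wavelength multiples. The sketch below uses the 17 cm length quoted above and an assumed speed of sound of 343 m/s:

```python
def formants_hz(tract_len_m=0.17, c=343.0, count=3):
    """Resonances of a uniform tube closed at one end (glottis) and open
    at the other (lips): F_k = (2k - 1) * c / (4 * L).
    A textbook idealization of the neutral vowel, not a full vocal model."""
    return [(2 * k - 1) * c / (4 * tract_len_m) for k in range(1, count + 1)]

print([round(f) for f in formants_hz()])  # ~[504, 1513, 2522] Hz
```

For the neutral vowel this predicts formants near 500, 1500, and 2500 Hz; constrictions in the mouth shift them, but the fixed tube length keeps them in this general range.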

Music:

In contrast to the relatively well-defined human vocal tract output, there is no consistent, well-defined, long-term music spectrum.

The outputs of various musical instruments are highly variable, ranging from a low-frequency preponderance to a high-frequency emphasis.

2) PHYSICAL OUTPUT VS. PERCEPTUAL REQUIREMENTS OF THE LISTENER:

Speech:

In speech, there are slight differences between various languages in the proportion of audible cues that are important for speech perception.

The important frequencies vary slightly from language to language, but studies generally show that, for speech, most of the important sounds for clarity derive from bands above 1000 Hz, whereas most of the loudness perception of speech comes from bands below 1000 Hz.

Clarity in speech (which has more to do with auditory perception) is derived from the higher frequencies. Speech is physically more dominant in the lower frequencies. That is, the auditory perception of speech has a significantly different weighting than does the physical output from a speaker's mouth. Despite the differences between the physical output of speech and the frequency requirements for optimal speech understanding, the differences are constant and predictable: low-frequency loudness cues and high-frequency clarity cues.

Music:

Regardless of the physical output of the musical instrument, the perceptual needs of the musician or listener may vary depending on the instrument. A stringed-instrument musician needs to be able to hear the exact relationship between the lower-frequency fundamental energy and the higher-frequency harmonic structure. Not only does a violinist generate a wide range of frequencies, but the violinist needs to be able to hear those frequencies.

In contrast, a woodwind player such as a clarinetist needs to be able to hear the lower-frequency inter-resonant breathiness. When a clarinet player says "that is a good sound", they are saying that the lower-frequency noise in between the resonances of their instrument has a certain level. High-frequency information is not very important to a clarinet player (other than for loudness perception).

One can, therefore, say that a clarinet player has a low-frequency phonemic requirement, despite the fact that the clarinet player can generate as many higher-frequency sounds as can the violinist.

3) LOUDNESS SUMMATION, LOUDNESS, AND INTENSITY:

Speech:

The "source" of sound in the human vocal tract is the vibration of the vocal cords. This simply means that not only is there the fundamental energy (typically 120-130 Hz for men and 180-220 Hz for women), but there are evenly spaced harmonics at integer multiples of the fundamental.

For a man's voice with a fundamental frequency of 125 Hz, there are harmonics at 250 Hz, 375 Hz, 500 Hz, and so on. Therefore, the minimal spacing between harmonics in speech is on the order of at least 100 Hz. In other words, no two harmonics fall within the same critical band, with the result that there is minimal loudness summation: soft-sounding speech is less intense and loud-sounding speech is more intense.
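The spacing argument can be checked directly: harmonics of a 125 Hz fundamental are always 125 Hz apart, comfortably wider than the roughly 100 Hz critical bands mentioned above. A small sketch (the critical-band width is the rough figure from the text, not a precise model):

```python
def harmonics_hz(f0, fmax=1000.0):
    """Harmonics at integer multiples of the fundamental, up to fmax."""
    return [k * f0 for k in range(1, int(fmax // f0) + 1)]

hs = harmonics_hz(125.0)                 # male voice example from the text
print(hs)                                # [125.0, 250.0, 375.0, ..., 1000.0]
spacings = {b - a for a, b in zip(hs, hs[1:])}
print(spacings)                          # {125.0} -- one harmonic per critical band
```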

Music:

Some musical instruments are speech-like in the sense that they generate mid-frequency fundamental energy with evenly spaced harmonics; oboes, saxophones, and violins are in this category. Instruments with much lower fundamentals, however, have harmonics spaced so closely that several can fall within a single critical band, producing loudness summation. That is, for the bass and cello, there is a poor correlation between measured intensity and perceived loudness.
4) THE "CREST FACTOR" OF SPEECH AND MUSIC:

The crest factor is a measure of the difference in decibels between the peaks in a spectrum and the average or RMS (root mean square) value. A typical crest factor for speech is about 12 dB; that is, the peaks of speech are about 12 dB more intense than the average values. Typical crest factors for musical instruments are on the order of 18-20 dB.
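As a hedged illustration of the definition (a toy computation on synthetic samples, not a speech measurement), the crest factor of any waveform can be computed as the peak level minus the RMS level in dB:

```python
import math

def crest_factor_db(samples):
    """Crest factor = 20 * log10(peak / rms), in dB."""
    peak = max(abs(x) for x in samples)
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A pure sine has a crest factor of 20*log10(sqrt(2)) ~ 3 dB;
# speech (~12 dB) and music (~18-20 dB) are far "peakier" signals.
sine = [math.sin(2 * math.pi * k / 100) for k in range(100)]
print(round(crest_factor_db(sine), 1))  # ~3.0
```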
5) DIFFERENT INTENSITIES FOR SPEECH AND MUSIC:

Typical outputs for normal-intensity speech can range from 53 dB SPL for the [th] as in 'think' to about 77 dB SPL for the [a] in 'father'. Shouted speech can reach 83 dB SPL. Music can be on the order of 100 dB SPL, with peaks and valleys in the spectrum of +/- 18 dB.
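These dB SPL figures are relative to the standard reference pressure of 20 micropascals. A small sketch converting the quoted levels back to pressures (the reference value is the standard one, not stated in the notes):

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals (threshold of hearing)

def pressure_pa(spl_db):
    """RMS sound pressure for a given level: p = p_ref * 10^(SPL / 20)."""
    return P_REF * 10 ** (spl_db / 20)

def spl_db(pressure):
    """Inverse: SPL = 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure / P_REF)

for level in (53, 77, 83, 100):   # [th], [a], shouted speech, loud music
    print(f"{level} dB SPL -> {pressure_pa(level) * 1000:7.2f} mPa")
# The 47 dB span from 53 to 100 dB SPL is a ~224-fold pressure ratio.
```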
HUMAN EAR CHARACTERISTICS:

The ear is a transducer, converting sound pressure waves into signals which are sent to the brain. The three principal parts of the human auditory system are the outer ear, the middle ear, and the inner ear.

The outer ear is composed of the pinna and the auditory canal, or auditory meatus. The auditory canal is terminated by the tympanic membrane, or eardrum.

The middle ear is an air-filled cavity spanned by three tiny bones, collectively called the ossicles: the malleus (hammer), the incus (anvil), and the stapes (stirrup). The malleus is attached to the eardrum and the stapes is attached to the oval window of the inner ear. Together these three bones form a mechanical, lever-action connection between the air-actuated eardrum and the fluid-filled cochlea of the inner ear.

The inner ear is terminated in the auditory nerve, which sends impulses to the brain. The inner ear consists of the semicircular canals, which serve as the balance organ of the body, and the cochlea, which contains the basilar membrane and organ of Corti; together these form the complicated mechanisms that transduce vibrations into neural signal codes. The organ of Corti contains four rows of hair cells. There are some 16,000-20,000 hair cells distributed along the basilar membrane, which follows the spiral of the cochlea and can resolve about 1,500 separate pitches.

FUNCTIONING OF EACH PART OF THE EAR:

Pinna: Sound first reaches the outer and visible part of the ear, known as the pinna. A concave shape of a certain size will act as a focusing device for certain wavelengths. The pinna tends to scatter the longer wavelengths while reflecting shorter ones into the meatus.

Meatus or auditory canal: The meatus is the tube connecting the outer ear to the eardrum; because of its size it resonates at a frequency of about 3 kHz.
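The ~3 kHz figure is consistent with treating the canal as a tube closed at one end (the eardrum) and open at the other, which resonates at f = c / (4L). A sketch, assuming a typical adult canal length of about 2.5 cm (the length is an assumed figure, not given in the notes):

```python
def closed_tube_resonance_hz(length_m, c=343.0):
    """Fundamental resonance of a tube closed at one end and open at the
    other: f = c / (4 * L)."""
    return c / (4 * length_m)

print(round(closed_tube_resonance_hz(0.025)))  # ~3430 Hz, near 3 kHz
```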

Eardrum & Eustachian tube: Major atmospheric pressure changes can be equalized on either side of the eardrum through the Eustachian tube by the act of swallowing.

Eardrum and ossicles: The eardrum begins to vibrate as sound waves strike it. These vibrations pass through to the three ossicles of the middle ear (hammer, anvil, and stapes), where they are amplified. As the transmission proceeds, the vibrations first hit the hammer, then the hammer pushes the anvil, and the anvil hits the stapes.

Inner ear: The vibrations are finally interpreted as sound in the brain after being transmitted and transformed into nerve signals by the cochlea (the snail-shaped component of the inner ear). This is possible because the oval window of the inner ear is connected to the edge of the stapes: when the stapes vibrates, it transmits the sound vibrations to the inner ear.

The vibrations set the liquid contained in the inner ear in motion. The ciliate (hair) cells located in the liquid amplify the sound vibrations and categorize them by frequency. They transform mechanical energy into nerve signals. The auditory nerve collects all the pulses emitted by the ciliate cells, and the brain registers, analyses, and interprets all the information.

In summary:
1. The sound arrives in the auditory canal.
2. The sound causes the eardrum to vibrate.
3. The malleus and incus transmit the vibrations.
4. The inner ear decodes the sound and sends it to the auditory nerve.
5. The auditory nerve conveys the sound to the brain.
6. The brain analyses and interprets the sound.

SOUND TRANSMISSION AND ABSORPTION
SONOMETER

A sonometer is a device for demonstrating the relationship between the frequency of the sound produced by a plucked string and the tension, length, and mass per unit length of the string. It was invented by Pythagoras. These relationships are usually called Mersenne's laws after Marin Mersenne (1588-1648), who investigated and codified them. For small-amplitude vibration, the frequency is proportional to:

a. the square root of the tension of the string,
b. the reciprocal of the square root of the linear density of the string,
c. the reciprocal of the length of the string.

It is a simple instrument used to verify the laws of stretched strings and to determine the frequency of a tuning fork. It consists of a long hollow rectangular wooden box (W), called the sound box, having three openings on one of its sides. A metal hook or peg P1 is rigidly fixed at one end, with a frictionless pulley P2 attached to the other end. One end of a long metal wire of uniform cross-section is tied firmly to the peg, passes over two wedge-shaped bridges A and B, and then over the pulley. The wire can be stretched by adding a suitable load to the weight hanger H, which is attached to its other end. C is a movable bridge whose position can be adjusted between A and B so that any desired length of the wire can be set into vibration.
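Mersenne's three laws combine into the single formula f = (1 / 2L) * sqrt(T / mu) for the fundamental of a stretched string. The sketch below verifies the proportionalities with illustrative values (the wire length, density, and load are assumed numbers, not measurements from this apparatus):

```python
import math

def string_frequency_hz(tension_n, length_m, mu_kg_per_m):
    """Fundamental of a stretched string (Mersenne's laws combined):
    f = (1 / (2 * L)) * sqrt(T / mu)."""
    return math.sqrt(tension_n / mu_kg_per_m) / (2 * length_m)

# Assumed example: 0.5 m vibrating length, 1 g/m wire, ~5 kg load (49 N).
print(round(string_frequency_hz(49.0, 0.5, 0.001)))   # ~221 Hz
print(round(string_frequency_hz(98.0, 0.5, 0.001)))   # x sqrt(2): ~313 Hz
print(round(string_frequency_hz(49.0, 0.25, 0.001)))  # half length: ~443 Hz
```

In the sonometer, the movable bridge C varies L and the weight hanger H varies T, so each law can be checked in turn against a tuning fork.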
