Acoustics
Acoustics is the science that studies sound: its production, transmission, and effects. Sound is often pleasant, as with music; a major application in that case is room acoustics, since the purpose of room-acoustical design and optimisation is to make a room sound as good as possible. Other sounds are unpleasant and make people uncomfortable, and noise reduction is a major challenge, particularly in the transportation industry, where expectations for quiet keep rising. Ultrasound also has applications in detection, such as sonar systems and non-destructive material testing. The articles in this wikibook describe the fundamentals of acoustics and some of the major applications.
Table of contents
Fundamentals
1. Fundamentals of Acoustics
2. Fundamentals of Room Acoustics
3. Fundamentals of Psychoacoustics
4. Sound Speed
5. Filter Design and Implementation
6. Flow-induced oscillations of a Helmholtz resonator
7. Active Control
Applications
Applications in Psychoacoustics
1. Human Vocal Fold
2. Threshold of Hearing/Pain
Miscellaneous Applications
1. Bass-Reflex Enclosure Design
2. Polymer-Film Acoustic Filters
3. Noise in Hydraulic Systems
4. Noise from Cooling Fans
5. Piezoelectric Transducers
Introduction
Sound is an oscillation of pressure transmitted through a gas, liquid, or solid in the form of a traveling wave, and can
be generated by any localized pressure variation in a medium. An easy way to understand how sound propagates is
to consider that space can be divided into thin layers. The vibration (the successive compression and relaxation) of
these layers, at a certain velocity, enables the sound to propagate, hence producing a wave. The speed of sound
depends on the compressibility and density of the medium.
In this chapter, we will only consider the propagation of sound waves in an area without any acoustic source, in a
homogeneous fluid.
Equation of waves
Sound waves consist of the propagation of a scalar quantity, the acoustic over-pressure p. The propagation of sound waves in a stationary medium (e.g. still air or water) is governed by the wave equation:

∂²p/∂t² − c²∇²p = 0

This equation is obtained from the conservation equations (mass, momentum and energy) and the thermodynamic equation of state of an ideal gas (or of an ideally compressible solid or liquid), assuming that the pressure variations are small and neglecting viscosity and thermal conduction, which would add further terms accounting for sound attenuation.

In this propagation equation, c is the propagation velocity of the sound wave (which has nothing to do with the vibration velocity of the air layers). This propagation velocity is given by

c = 1/√(ρχ)

where ρ is the density and χ is the compressibility coefficient of the propagation medium.
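As a quick numerical check of this relation, the sketch below evaluates c = 1/√(ρχ) for air; the density and the adiabatic compressibility used here are typical sea-level values assumed for illustration, not values from the text.

```python
import math

def speed_of_sound(density, compressibility):
    """Propagation speed c = 1 / sqrt(rho * chi)."""
    return 1.0 / math.sqrt(density * compressibility)

# Illustrative values for air at about 20 °C (assumed):
rho_air = 1.204                       # density, kg/m^3
chi_air = 1.0 / (1.402 * 101325.0)    # adiabatic compressibility 1/(kappa*p), 1/Pa

print(f"c_air ≈ {speed_of_sound(rho_air, chi_air):.1f} m/s")   # ~343 m/s
```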
Helmholtz equation
Since the velocity field for acoustic waves is irrotational, we can define an acoustic potential φ by:

v = ∇φ

Using the propagation equation of the previous paragraph, it is easy to obtain the corresponding equation for the potential:

∂²φ/∂t² − c²∇²φ = 0

Applying the Fourier transform, we get the widely used Helmholtz equation:

∇²φ̂ + k²φ̂ = 0

where k = ω/c is the wave number associated with the angular frequency ω. Using this equation is often the easiest way to solve acoustical problems.

Acoustic intensity alone does not give a good idea of the perceived sound level, since the sensitivity of our ears is logarithmic. Therefore we define levels in decibels, using either the acoustic over-pressure or the average acoustic intensity:

L_p = 20 log₁₀(p/p₀), with the reference pressure p₀ = 2×10⁻⁵ Pa
L_I = 10 log₁₀(I/I₀), with the reference intensity I₀ = 10⁻¹² W/m²
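A short sketch of these decibel definitions, using the standard reference values (assumed here rather than quoted from the text above):

```python
import math

P_REF = 2e-5      # reference pressure, Pa
I_REF = 1e-12     # reference intensity, W/m^2

def spl_from_pressure(p_rms):
    """Sound pressure level in dB from an RMS over-pressure."""
    return 20.0 * math.log10(p_rms / P_REF)

def level_from_intensity(intensity):
    """Sound intensity level in dB from an average intensity."""
    return 10.0 * math.log10(intensity / I_REF)

print(spl_from_pressure(1.0))       # 1 Pa RMS    -> ~94 dB
print(level_from_intensity(1e-5))   # 1e-5 W/m^2  -> 70 dB
```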
Plane waves
If we study the propagation of a sound wave far from the acoustic source, it can be considered a plane one-dimensional wave. If the direction of propagation is along the x axis, the solution is:

p(x, t) = f(x − ct) + g(x + ct)

where f and g can be any functions; f describes the wave motion toward increasing x, whereas g describes the motion toward decreasing x.

The momentum equation provides a relation between the over-pressure p and the particle velocity v, which leads to the expression of the specific impedance, defined as follows:

Z = p/v = ρc

Still in the case of a plane wave, the acoustic intensity is:

I = p_rms² / (ρc)
Spherical waves
More generally, waves propagate in all directions from the source and are spherical waves. In this case, the solution for the acoustic potential is:

φ(r, t) = (1/r) f(t − r/c) + (1/r) g(t + r/c)

The fact that the potential decreases as the inverse of the distance to the source is simply a consequence of the conservation of energy. For spherical waves, the specific impedance and the acoustic intensity can also be calculated easily.
Boundary conditions
Concerning the boundary conditions which are used for solving the wave equation, we can distinguish two
situations. If the medium is not absorptive, the boundary conditions are established using the usual equations for
mechanics. But in the situation of an absorptive material, it is simpler to use the concept of acoustic impedance.
Non-absorptive material
In that case, we get explicit boundary conditions on stresses and velocities at the interface. These conditions depend on whether the media are solids, inviscid fluids, or viscous fluids.
Absorptive material
Here, we use the acoustic impedance as the boundary condition. This impedance, which is often obtained from experimental measurements, depends on the material, the fluid and the frequency of the sound wave.
Introduction
Three theories are used to understand room acoustics:
1. The modal theory
2. The geometric theory
3. The theory of Sabine
In the modal theory, the room is treated as a rectangular cavity of dimensions L1, L2 and L3 with rigid walls. With the boundary condition of zero normal velocity at x = 0 and x = L1 (and likewise in the other directions), the pressure takes the form of a sum of modes:

p(x, y, z) = P₀ cos(n₁πx/L1) cos(n₂πy/L2) cos(n₃πz/L3)

with modal frequencies f = (c/2)·√((n₁/L1)² + (n₂/L2)² + (n₃/L3)²).
The modal density is the number of modal frequencies contained in a band of 1 Hz; it depends on the frequency and, for a room of volume V, is approximately dN/df ≈ 4πV f²/c³. Since the modal density grows with the square of the frequency, it increases rapidly with frequency. Above a certain frequency the modes can no longer be distinguished and the modal theory is no longer relevant.
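As an illustration of the modal description, the sketch below lists the lowest modal frequencies of a rigid-walled rectangular room from the relation quoted above; the room dimensions are arbitrary example values.

```python
import itertools, math

def room_modes(L1, L2, L3, c=343.0, n_max=4):
    """Modal frequencies (Hz) of a rigid-walled rectangular room."""
    modes = []
    for n in itertools.product(range(n_max + 1), repeat=3):
        if n == (0, 0, 0):
            continue
        f = (c / 2.0) * math.sqrt((n[0] / L1) ** 2 + (n[1] / L2) ** 2 + (n[2] / L3) ** 2)
        modes.append((f, n))
    return sorted(modes)

# Example room of 5 m x 4 m x 2.8 m (assumed dimensions)
for f, n in room_modes(5.0, 4.0, 2.8)[:8]:
    print(f"{f:6.1f} Hz  mode {n}")
```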
In Sabine's theory, the energy balance of the room is written as

V de/dt = W_s − W_a

where W_s and W_a are respectively the power generated by the acoustical source and the power absorbed by the walls.
The power absorbed is related to the volumetric sound energy density e in the room by:

W_a = (a c / 4) e

where a is the equivalent absorption area, defined as the sum of the products of the absorption coefficient and the area of each material in the room:

a = Σ αᵢ Sᵢ
Reverberation time
With this theory described, the reverberation time can be defined. It is the time required for the sound energy level to decrease by 60 dB. It depends on the volume of the room V (in m³) and the equivalent absorption area a (in m²):

T₆₀ = 0.16 V / a
Sabine formula
This reverberation time is the fundamental parameter in room acoustics and depends, through the equivalent absorption area and the absorption coefficients, on the frequency. It is used for several measurements:
• Measurement of an absorption coefficient of a material
• Measurement of the power of a source
• Measurement of the transmission of a wall
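As a small numerical sketch of the Sabine estimate T₆₀ ≈ 0.16 V / a used in these measurements (the surface data below are invented example values):

```python
def sabine_rt60(volume, surfaces):
    """Sabine reverberation time T60 = 0.16 * V / sum(alpha_i * S_i)."""
    a = sum(alpha * area for alpha, area in surfaces)
    return 0.16 * volume / a

# Example room of 100 m^3 with assumed absorption coefficients and areas:
surfaces = [
    (0.02, 60.0),   # bare walls and ceiling
    (0.30, 25.0),   # carpeted floor
    (0.70, 10.0),   # absorbing panels
]
print(f"T60 ≈ {sabine_rt60(100.0, surfaces):.2f} s")   # ~1.0 s
```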
According to the famous principle enunciated by Gustav Theodor Fechner, the sensation of perception does not follow a linear law, but a logarithmic one. The perception of the intensity of light, or the sensation of weight, follows this law as well. This observation legitimates the use of logarithmic scales in the field of acoustics. An 80 dB (10⁻⁴ W/m²) sound seems to be twice as loud as a 70 dB (10⁻⁵ W/m²) sound, although there is a factor of 10 between the two acoustic powers. This is quite a naïve law, but it led to a new way of thinking about acoustics, by trying to describe auditory sensations. That is the aim of psychoacoustics. Since the neurophysiological mechanisms of human hearing have not yet been successfully modelled, the only way of dealing with psychoacoustics is to find metrics that best describe the different aspects of sound.
Perception of sound
The study of sound perception is limited by the complexity of the mechanisms of the human ear. The figure below represents the domain of perception and the thresholds of pain and hearing. The pain threshold is not frequency-dependent (around 120 dB across the audible bandwidth). In contrast, the hearing threshold, like all the equal-loudness curves, is frequency-dependent.
Phons
Two sounds of equal intensity do not have the same loudness, because of the frequency sensitivity of the human ear. An 80 dB sound at 100 Hz is not as loud as an 80 dB sound at 3 kHz. A new unit, the phon, is used to describe the loudness of a harmonic sound: X phons means "as loud as X dB at 1000 Hz". Another tool is used as well: the equal-loudness curves, also known as Fletcher curves.
Sones
Another scale currently used is the sone, based upon the rule of thumb for loudness. This rule states that the sound intensity must be increased by a factor of 10 to be perceived as twice as loud. On the decibel (or phon) scale, this corresponds to a 10 dB (or 10 phon) increase. The purpose of the sone scale is to translate those scales into a linear one:

S = 2^((P − 40)/10)

where S is the sone level and P is the phon level. The conversion table is as follows:
Phons Sones
100 64
90 32
80 16
70 8
60 4
50 2
40 1
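The same conversion can be written directly in code; this is simply the relation S = 2^((P − 40)/10) given above and reproduces the table:

```python
def phons_to_sones(phons):
    """Loudness in sones: doubles for every 10 phon increase above 40 phons."""
    return 2.0 ** ((phons - 40.0) / 10.0)

for p in range(40, 101, 10):
    print(f"{p:3d} phons -> {phons_to_sones(p):5.1f} sones")
```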
Metrics
We will now present five psychoacoustic parameters that provide a way to predict the subjective human sensation.
dB A
The measurement of noise perception with the sone or phon scale is not easy. A widely used measurement method is a weighting of the sound pressure level according to the frequency distribution. For each frequency of the density spectrum, a level correction is made. Different kinds of weightings (dB A, dB B, dB C) exist in order to approximate the response of the human ear at different sound intensities, but the most commonly used is the dB A filter. Its curve is made to match the equal-loudness curve of the ear for 40 phons, and as a consequence it is a good approximation of the phon scale.
Example: for a harmonic 40 dB sound at 200 Hz, the correction is −10 dB, so this sound is 30 dB A.
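The dB A correction can be computed from the standard A-weighting curve (IEC 61672 analytic form); the formula below is the standard one, assumed here rather than taken from the text, and it gives roughly −11 dB at 200 Hz, which the text rounds to −10 dB.

```python
import math

def a_weighting_db(f):
    """A-weighting correction in dB at frequency f (IEC 61672 analytic form)."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00

print(f"A(1000 Hz) = {a_weighting_db(1000):+.2f} dB")   # ~0 dB by definition
print(f"A(200 Hz)  = {a_weighting_db(200):+.2f} dB")    # ~-10.9 dB
```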
Loudness
It measures the sound strength. Loudness can be measured in sone, and is a dominant metric in psychoacoustics.
Tonality
As the human ear is very sensitive to pure tones, this metric is a very important one. It measures the number of pure tones in the noise spectrum. A broadband sound, for example, has a very low tonality.
Roughness
It describes the human perception of temporal variations of sounds. This metric is measured in asper.
Sharpness
Sharpness is linked to the spectral characteristics of the sound. A high-frequency signal has a high value of
sharpness. This metric is measured in acum.
Blocking effect
A sinusoidal sound can be masked by white noise within a narrow frequency band around it. White noise is a random signal with a flat power spectral density; in other words, its power spectral density has equal power in any band of a given bandwidth, at any centre frequency. If the intensity of the white noise is high enough, the sinusoidal sound will not be heard. For example, in a noisy environment (in the street, in a workshop), a great effort has to be made to make out what someone is saying.
The speed of sound c (from Latin celeritas, "velocity") varies depending on the medium through which the sound
waves pass. It is usually quoted when describing the properties of substances (e.g. see the article on sodium). In conventional use and in the scientific literature, "sound velocity" and "sound speed" refer to the same quantity c. Neither should be confused with the sound particle velocity v, which is the velocity of the individual particles of the medium.
More commonly the term refers to the speed of sound in air. The speed varies depending on atmospheric conditions;
the most important factor is the temperature. The humidity has very little effect on the speed of sound, while the
static sound pressure (air pressure) has none. Sound travels slower with an increased altitude (elevation if you are on
solid earth), primarily as a result of temperature and humidity changes. An approximate speed (in metres per second) can be calculated from:

c_air ≈ 331.5 + 0.6·θ

where θ is the temperature in degrees Celsius.
Details
A more accurate expression for the speed of sound is

c = √(κ R T)

where
• R (287.05 J/(kg·K) for air) is the specific gas constant for air: the universal gas constant, which has units of J/(mol·K), divided by the molar mass of air, as is common practice in aerodynamics;
• κ (kappa) is the adiabatic index (1.402 for air), sometimes written γ;
• T is the absolute temperature in kelvins.
In the standard atmosphere:
T0 is 273.15 K (= 0 °C = 32 °F), giving a value of 331.5 m/s (= 1087.6 ft/s = 1193 km/h = 741.5 mph = 643.9 knots).
T20 is 293.15 K (= 20 °C = 68 °F), giving a value of 343.4 m/s (= 1126.6 ft/s = 1236 km/h = 768.2 mph = 667.1
knots).
T25 is 298.15 K (= 25 °C = 77 °F), giving a value of 346.3 m/s (= 1136.2 ft/s = 1246 km/h = 774.7 mph = 672.7
knots).
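A short sketch evaluating c = √(κRT) at the temperatures listed above:

```python
import math

R_AIR = 287.05   # specific gas constant for air, J/(kg*K)
KAPPA = 1.402    # adiabatic index of air

def speed_of_sound_air(temp_celsius):
    """Speed of sound in dry air from c = sqrt(kappa * R * T)."""
    return math.sqrt(KAPPA * R_AIR * (temp_celsius + 273.15))

for t in (0.0, 20.0, 25.0):
    print(f"{t:5.1f} °C -> {speed_of_sound_air(t):6.1f} m/s")   # ~331.5, 343.4, 346.3 m/s
```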
In fact, assuming an ideal gas, the speed of sound c depends on temperature only, not on the pressure. Air is almost
an ideal gas. The temperature of the air varies with altitude, giving the following variations in the speed of sound
using the standard atmosphere - actual conditions may vary. Any qualification of the speed of sound being "at sea
level" is also irrelevant.
In a non-dispersive medium, the sound speed is independent of frequency; therefore the speed of energy transport and the speed of sound propagation are the same. In the audio frequency range, air is a non-dispersive medium. We should also note that air contains CO2, which is a dispersive medium and introduces dispersion in air at ultrasonic frequencies (around 28 kHz).
In a dispersive medium, the sound speed is a function of frequency. The spatial and temporal distribution of a propagating disturbance will continually change. Each frequency component propagates at its own phase speed, while the energy of the disturbance propagates at the group velocity. Water is an example of a dispersive medium.
In general, the speed of sound c is given by

c = √(C/ρ)

where
C is a coefficient of stiffness
ρ is the density
Thus the speed of sound increases with the stiffness of the material and decreases with the density.
In a fluid the only non-zero stiffness is to volumetric deformation (a fluid does not sustain shear forces).
Hence the speed of sound in a fluid is given by

c = √(K/ρ)
where
K is the adiabatic bulk modulus.
For a gas, K is approximately given by

K = κ·p

where
κ is the adiabatic index, sometimes called γ
p is the pressure.
Thus, for a gas the speed of sound can be calculated using:

c = √(κp/ρ)
(Newton famously considered the speed of sound before most of the development of thermodynamics and so
incorrectly used isothermal calculations instead of adiabatic. His result was missing the factor of κ but was otherwise
correct.)
In a solid, there is a non-zero stiffness both for volumetric and shear deformations. Hence, in a solid it is possible to
generate sound waves with different velocities dependent on the deformation mode.
In a solid rod (with thickness much smaller than the wavelength) the speed of sound is given by:

c = √(E/ρ)

where
E is Young's modulus
ρ (rho) is the density
Thus in steel the speed of sound is approximately 5100 m/s.
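As a quick check of this relation, the sketch below evaluates c = √(E/ρ) using typical handbook values for steel (the specific material constants are assumptions of this example):

```python
import math

def rod_speed(youngs_modulus, density):
    """Longitudinal speed of sound in a thin rod, c = sqrt(E / rho)."""
    return math.sqrt(youngs_modulus / density)

E_steel = 200e9      # Young's modulus of steel, Pa (typical value, assumed)
rho_steel = 7850.0   # density of steel, kg/m^3 (typical value, assumed)
print(f"c_rod ≈ {rod_speed(E_steel, rho_steel):.0f} m/s")   # ~5000 m/s
```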
In a solid with lateral dimensions much larger than the wavelength, the sound velocity is higher. It is found by replacing Young's modulus with the plane wave modulus, which can be expressed in terms of Young's modulus E and Poisson's ratio ν as:

M = E(1 − ν) / ((1 + ν)(1 − 2ν))
Mach number is the ratio of the object's speed to the speed of sound in air (medium).
Sound in solids
In solids, the speed of sound is determined by the stiffness and density of the material rather than by its temperature. Solid materials, such as steel, conduct sound much faster than air.
Experimental methods
In air, a range of different methods exist for measuring the speed of sound.
Other methods
In these methods the time measurement has been replaced by a measurement of the inverse of time (frequency).
Kundt's tube is an example of an experiment which can be used to measure the speed of sound in a small volume; it has the advantage of being able to measure the speed of sound in any gas. This method uses a powder to make the nodes and antinodes visible to the human eye. It is an example of a compact experimental setup.
A tuning fork can be held near the mouth of a long pipe which is dipping into a barrel of water. In this system the pipe can be brought to resonance if the length of the air column in the pipe is equal to (2n + 1)λ/4, where n is an integer. As the antinodal point for the pipe at the open end is slightly outside the mouth of the pipe, it is best to find two or more points of resonance and then measure half a wavelength between these.
Here v = fλ.
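A small sketch of that calculation: the spacing between two adjacent resonances of the air column is half a wavelength, so λ = 2(L₂ − L₁) and v = fλ. The tuning-fork frequency and resonance lengths below are invented for illustration.

```python
def speed_from_resonances(fork_frequency, length_1, length_2):
    """Speed of sound from two adjacent resonance lengths of a closed tube."""
    wavelength = 2.0 * abs(length_2 - length_1)   # adjacent resonances are half a wavelength apart
    return fork_frequency * wavelength

# Example: 440 Hz tuning fork, resonances at air-column lengths of 0.19 m and 0.58 m
print(f"v ≈ {speed_from_resonances(440.0, 0.19, 0.58):.0f} m/s")   # ~343 m/s
```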
External links
• Calculation: Speed of sound in air and the temperature [1]
• The speed of sound, the temperature, and ... not the air pressure [2]
• Properties Of The U.S. Standard Atmosphere 1976 [3]
Introduction
Acoustic filters, or mufflers, are used in a number of applications requiring the suppression or attenuation of sound.
Although the idea might not be familiar to many people, acoustic mufflers make everyday life much more pleasant.
Many common appliances, such as refrigerators and air conditioners, use acoustic mufflers to keep their operating noise to a minimum. Acoustic mufflers are mostly applied to machine components or areas where there
is a large amount of radiated sound such as high pressure exhaust pipes, gas turbines, and rotary pumps.
Although there are a number of applications for acoustic mufflers, there are really only two main types which are
used. These are absorptive and reactive mufflers. Absorptive mufflers incorporate sound absorbing materials to
attenuate the radiated energy in gas flow. Reactive mufflers use a series of complex passages to maximize sound
attenuation while meeting set specifications, such as pressure drop, volume flow, etc. Many of the more complex
mufflers today incorporate both methods to optimize sound attenuation and provide realistic specifications.
In order to fully understand how acoustic filters attenuate radiated sound, it is first necessary to briefly cover some
basic background topics. For more information on wave theory and other material necessary to study acoustic filters
please refer to the references below.
The sound pressure in a pipe can be written as the sum of an incident and a reflected travelling wave,

p(x, t) = Pi e^{j(ωt − kx)} + Pr e^{j(ωt + kx)}

where Pi and Pr are the incident and reflected wave amplitudes respectively. Also note that bold notation is used to indicate the possibility of complex terms. The first term represents a wave travelling in the +x direction and the second term a wave travelling in the −x direction.
Since acoustic filters or mufflers typically aim to attenuate the radiated sound power as much as possible, it is logical to assume that if we can find a way to maximize the ratio between reflected and incident wave amplitudes, then we will effectively attenuate the radiated noise at certain frequencies. This ratio is called the reflection coefficient and is given by:

R = Pr / Pi

It is important to point out that wave reflection only occurs where the impedance of a pipe changes. It is possible to match the end impedance of a pipe with the characteristic impedance of the pipe to get no wave reflection. For more information see [1] or [2].
Although the reflection coefficient isn't very useful in its current form, since we want a relation describing sound power, a more useful form can be derived by recognizing that the power reflection coefficient is simply the squared magnitude of the reflection coefficient [1]:

Rπ = |R|²

As one would expect, the power reflection coefficient must be less than or equal to one. Therefore, it is useful to define the power transmission coefficient as:

Tπ = 1 − Rπ

which is the fraction of power transmitted. This relation comes directly from conservation of energy. When talking about the performance of mufflers, typically the power transmission coefficient is specified.
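As an illustration, the sketch below evaluates the power coefficients at an abrupt change of pipe cross-section in the long-wavelength limit, where the pressure reflection coefficient reduces to R = (S₁ − S₂)/(S₁ + S₂); this particular junction formula is a standard textbook result assumed here, not one stated in the text above.

```python
def junction_coefficients(area_1, area_2):
    """Power reflection/transmission at an abrupt area change (low-frequency limit)."""
    r = (area_1 - area_2) / (area_1 + area_2)   # pressure reflection coefficient
    r_pi = r ** 2                               # power reflection coefficient
    t_pi = 1.0 - r_pi                           # power transmission coefficient
    return r_pi, t_pi

r_pi, t_pi = junction_coefficients(area_1=0.01, area_2=0.04)   # 1:4 expansion (assumed areas)
print(f"R_pi = {r_pi:.2f}, T_pi = {t_pi:.2f}")                 # 0.36 reflected, 0.64 transmitted
```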
Low-pass filter
These are devices that attenuate the radiated sound power at higher frequencies. This means the power transmission coefficient is approximately 1 across the pass band at low frequencies (see figure to the right).
This is equivalent to an expansion in a pipe, with the volume of gas located in the expansion having an acoustic compliance (see figure to the right). Continuity of acoustic impedance (see Java Applet at: Acoustic Impedance Visualization [5]) at the junction, see [1], gives a power transmission coefficient (plotted in the figure "Tpi for Low-Pass Filter") expressed in terms of k, the wavenumber (see [Wave Properties [6]]), L and S1, the length and cross-sectional area of the expansion, and S, the area of the pipe.
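As a hedged numerical sketch of this behaviour, the code below uses the classic single-expansion-chamber transmission coefficient from Kinsler et al. [1], Tπ = [1 + ¼(S1/S − S/S1)² sin²(kL)]⁻¹, which is one standard way to realise such a low-pass filter; the chamber dimensions are assumed example values.

```python
import math

def expansion_chamber_tpi(frequency, length, area_chamber, area_pipe, c=343.0):
    """Power transmission coefficient of a single expansion chamber."""
    k = 2.0 * math.pi * frequency / c
    m = area_chamber / area_pipe
    return 1.0 / (1.0 + 0.25 * (m - 1.0 / m) ** 2 * math.sin(k * length) ** 2)

# Example chamber: 30 cm long with a 4:1 area ratio (assumed dimensions)
for f in (50, 100, 200, 400, 800):
    tpi = expansion_chamber_tpi(f, length=0.30, area_chamber=0.02, area_pipe=0.005)
    print(f"{f:4d} Hz  T_pi = {tpi:.2f}  TL = {-10.0 * math.log10(tpi):4.1f} dB")
```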
The cut-off frequency is given by:
High-pass filter
These are devices that attenuate the radiated sound power at lower frequencies. As before, this means the power transmission coefficient is approximately 1 across the pass band at high frequencies (see figure to the right).
This is equivalent to a short side branch (see figure to the right) with a radius and length much smaller than the wavelength (lumped element assumption). This side branch acts like an acoustic mass and presents a different acoustic impedance to the system than the low-pass filter does. Again, using continuity of acoustic impedance at the junction yields a power transmission coefficient (plotted in the figure "Tpi for High-Pass Filter") of the form [1], expressed in terms of a and L, the area and effective length of the small tube, and S, the area of the pipe.
The cut-off frequency is given by:
Band-stop filter
These are devices that attenuate the radiated sound power over a certain frequency range (see figure to the right). As before, the power transmission coefficient is approximately 1 in the pass band.
Since the band-stop filter is essentially a cross between a low-pass and a high-pass filter, one might expect to create one by using a combination of both techniques. This is true in that the combination of a lumped acoustic mass and compliance gives a band-stop filter. This can be realized as a Helmholtz resonator (see figure to the right). Again, since the impedance of the Helmholtz resonator can be easily determined, continuity of acoustic impedance at the junction gives the power transmission coefficient (plotted in the figure "Tpi for Band-Stop Filter") as [1]:
where Sb is the area of the neck, L is the effective length of the neck, V is the volume of the Helmholtz resonator, and S is the area of the pipe. It is interesting to note that the power transmission coefficient is zero when the frequency equals the resonance frequency of the Helmholtz resonator. This can be explained by the fact that at resonance the volume velocity in the neck is large, with a phase such that all of the incident wave is reflected back to the source [1]. The frequency of zero power transmission is given by:

f₀ = (c / 2π) √(Sb / (L V))

This frequency value has powerful implications. If a system has the majority of its noise at one frequency component, the system can be "tuned" using the above equation, with a Helmholtz resonator, to perfectly attenuate the transmitted power (see examples below).
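A small sketch of this tuning relation (the resonator dimensions below are assumed example values):

```python
import math

def helmholtz_frequency(neck_area, neck_length, volume, c=343.0):
    """Resonance frequency of a Helmholtz resonator, f0 = (c / 2*pi) * sqrt(S / (L * V))."""
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (neck_length * volume))

# Example side-branch resonator (assumed dimensions): 10 cm^2 neck, 5 cm long, 4 litre volume
f0 = helmholtz_frequency(neck_area=1.0e-3, neck_length=0.05, volume=4.0e-3)
print(f"f0 ≈ {f0:.0f} Hz")   # ~122 Hz; transmission through the main pipe is zero there
```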
Design
If the long-wavelength assumption is valid, a combination of the methods described above is typically used to design a filter. A specific design procedure is outlined for a Helmholtz resonator; other basic filters follow a similar procedure (see [1 [7]]).
Two main metrics need to be identified when designing a Helmholtz resonator [3]: the target resonance frequency, and the transmission loss, based on the required TL level (this constant is found from a TL graph, see HR [8] pp. 6).
This results in two equations with two unknowns, which can be solved for the unknown dimensions of the Helmholtz resonator. It is important to note that flow velocities degrade the amount of transmission loss at resonance and tend to move the resonance location upwards [3].
In many situations, the long-wavelength approximation is not valid and alternative methods must be examined. These are much more mathematically rigorous and require a complete understanding of the acoustics involved. Although the mathematics involved are not shown, common filters used are given in the section that follows.
Absorptive
These are mufflers which incorporate sound absorbing materials to transform acoustic energy into heat. Unlike
reactive mufflers which use destructive interference to minimize radiated sound power, absorptive mufflers are
typically straight through pipes lined with multiple layers of absorptive materials to reduce radiated sound power.
The most important property of absorptive mufflers is the attenuation constant. Higher attenuation constants lead to
more energy dissipation and lower radiated sound power.
Advantages:
(1) High amount of absorption at higher frequencies.
(2) Good for applications involving broadband (constant across the spectrum) and narrowband (see [1 [10]]) noise.
(3) Reduced amount of back pressure compared to reactive mufflers.
Disadvantages:
(1) Poor performance at low frequencies.
(2) Material can degrade under certain circumstances (high heat, etc.).
Examples
(Figure: Absorptive Muffler)
Reactive
Reactive mufflers use a series of complex passages (or lumped elements) to attenuate the radiated sound power.
Advantages:
(1) High performance at low frequencies.
(2) Typically give high insertion loss, IL, for stationary tones.
(3) Useful in harsh conditions.
Disadvantages:
(1) Poor performance at high frequencies.
(2) Not desirable characteristics for broadband noise.
Examples
Performance
There are three main metrics used to describe the performance of mufflers: noise reduction (NR), insertion loss (IL), and transmission loss (TL). Typically when designing a muffler, one or two of these metrics are given as desired values.
Noise reduction is the difference between the sound pressure levels at the source and at the receiver:

NR = SPL_source − SPL_receiver

Although NR is easy to measure, the pressure typically varies on the source side due to standing waves [3].
Insertion loss is the difference between the sound pressure levels at the receiver without and with the muffler system in place:

IL = SPL_without − SPL_with

The main problem with measuring IL is that the barrier or sound attenuating system needs to be removed without changing the source [3].
Transmission loss is defined from the transmitted and incident sound powers:

TL = 10 log₁₀(W_i / W_t)

where W_t and W_i are the transmitted and incident wave powers respectively. From this expression, the obvious problem with measuring TL is decomposing the sound field into incident and transmitted waves, which can be difficult to do analytically for complex systems.
Examples
(1) For a plenum chamber (see figure below), the transmission loss is expressed in dB.
(Figures: Plenum Chamber; Transmission Loss vs. Theta)
Links
1. Muffler/silencer applications and descriptions of performance criteria [Exhaust Silencers [7]]
2. Engineering Acoustics, Purdue University - ME 513 [14].
3. Sound Propagation Animations [15]
4. Exhaust Muffler Design [16]
5. Project Proposal & Outline
References
1. Fundamentals of Acoustics; Kinsler et al, John Wiley & Sons, 2000
2. Acoustics; Pierce, Acoustical Society of America, 1989
3. - ME 413 Noise Control, Dr. Mongeau, Purdue University
Introduction
The principle of active noise control is to create destructive interference using a secondary source of noise. Thus, any noise can theoretically be made to disappear. But as we will see in the following sections, in practice only low-frequency noise can be reduced, since the number of secondary sources required increases very quickly with frequency. Moreover, predictable noises are much easier to control than unpredictable ones. The reduction can reach up to 20 dB in the best cases. But since good reduction can only be achieved at low frequencies, our perception of the resulting sound is not necessarily as good as the theoretical reduction suggests. This is due to psychoacoustic considerations, which will be discussed later on.
It is now obvious that if we choose the secondary source so that its field has the same amplitude and opposite phase at the point M, there is no more noise at M. This is the simplest example of active noise control. But it is also obvious that if the pressure is zero at M, there is no reason why it should also be zero at any other point N. This solution only allows noise to be reduced in one very small area.
However, it is possible to reduce noise in a larger area far from the source, as we will see in this section. In fact, the expression for the acoustic pressure far from the primary source can be approximated by:
This means that if you want no noise in a sphere one metre in diameter at a frequency below 340 Hz, you will need 30 secondary sources. This is the reason why active noise control works better at low frequencies.
Ducts
For an infinite, straight duct with a constant cross-section, the pressure in regions without sources can be written as an infinite sum of propagation modes:

p = Σₙ aₙ φₙ

where the φₙ are the eigenfunctions of the Helmholtz equation and the aₙ are the amplitudes of the modes. The eigenfunctions can be obtained either analytically, for some specific duct shapes, or numerically. By placing pressure sensors in the duct and using the previous equation, we get a relation between the pressure matrix P (pressures at the various frequencies) and the matrix A of the modal amplitudes. Furthermore, for linear sources, there is a relation, through a matrix K, between the A matrix and the matrix U of the signals sent to the secondary sources, and hence between P and U.
Our purpose is to obtain A = 0, that is, to find control signals U that cancel the modal amplitudes. This is possible whenever the rank of the K matrix is larger than the number of propagation modes in the duct.
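A minimal numerical sketch of this idea, assuming a known coupling matrix K between the secondary-source signals U and the modal amplitudes they produce (all matrices below are invented example data):

```python
import numpy as np

rng = np.random.default_rng(0)

n_modes, n_sources = 3, 4                      # more sources than propagating modes
K = rng.normal(size=(n_modes, n_sources))      # modal amplitude produced per unit source signal
A_primary = rng.normal(size=n_modes)           # modal amplitudes due to the primary source

# Choose U so that K @ U = -A_primary (least-squares / pseudo-inverse solution)
U = np.linalg.lstsq(K, -A_primary, rcond=None)[0]

residual = A_primary + K @ U
print("residual modal amplitudes:", np.round(residual, 12))   # ~0 when rank(K) >= n_modes
```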
Thus, it is theoretically possible to have no noise in a very large area of the duct, not too close to the primary sources, if there are more secondary sources than propagation modes in the duct. Therefore, it is clear that active noise control is more appropriate for low frequencies: the lower the frequency, the fewer propagation modes there are in the duct. Experiments show that it is in fact possible to reduce the noise by more than 60 dB.
Enclosures
The principle is rather similar to the one described above, except that the resonance phenomenon has a major influence on the acoustic pressure in the cavity. In fact, every mode that is not resonant in the considered frequency range can be neglected. In a cavity or enclosure, the number of these modes rises very quickly as the frequency rises, so once again low frequencies are more appropriate. Above a critical frequency, the acoustic field can be considered diffuse. In that case, active noise control is still possible, but it is theoretically much more complicated to set up.
Feedforward
In the case of a feedforward system, two sensors and one secondary source are required. The sensors measure the sound pressure at the primary source (detection sensor) and at the place where we want the noise to be reduced (control sensor). Furthermore, we should have an idea of what the noise from the primary source will become as it reaches the control sensor. Thus we approximately know what correction should be made before the sound wave reaches the control sensor (hence "feedforward"). The control sensor only corrects any residual error. The feedforward technique makes it possible to reduce one specific noise (an aircraft engine, for example) without reducing every other sound (conversations, …). The main issue with this technique is that the location of the primary source has to be known, and we have to be sure that this sound will be detected beforehand. Portable systems based on feedforward are therefore impractical, since they would require sensors all around the head.
Feedback
In that case, we do not exactly know where the sound comes from; hence there is only one sensor. The sensor and the secondary source are very close to each other and the correction is done in real time: as soon as the sensor gets the information, the signal is processed by a filter which sends the corrected signal to the secondary source. The main issue with feedback is that every noise is reduced, and it is even theoretically impossible to have a normal conversation.
Applications
Motor noise
This noise is rather predictable, since it is a consequence of the rotation of the pistons in the motor. Its frequency is not exactly the motor's rotational speed, though. However, the frequency of this noise is between 20 Hz and 200 Hz, which means that active control is theoretically possible. The following pictures show the result of active control for both low and high engine speeds.
Even though these results show a significant reduction of the acoustic pressure, the perception inside the car is not really better with this active control system, mainly for the psychoacoustic reasons mentioned above. Moreover, such a system is rather expensive and is therefore not used in production cars.
(Figure: results at low engine speed)
Tire noise
This noise is created by the contact between the tires and the road. It is a broadband noise which is rather unpredictable, since the mechanisms are very complex. For example, different types of road surface can have a significant impact on the resulting noise. Furthermore, there is a cavity inside the tires, which generates a resonance phenomenon; its first frequency is usually around 200 Hz. Considering the multiple causes of this noise and its unpredictability, even the low frequencies become hard to reduce. And since this noise is broadband, reducing the low frequencies is not enough to reduce the overall noise. In fact, an active control system would mainly be useful in the case of an unfortunate amplification of a specific mode.
Aerodynamic noise
This noise is a consequence of the interaction between the air flow around the car and its various appendages, such as the rear-view mirrors. Once again, it is an unpredictable broadband noise, which makes it difficult to reduce with an active control system. However, this solution can become interesting in cases where an annoying, predictable resonance appears.
Further reading
• "Active Noise Control" [17] at Dirac delta.
Introduction
Acoustic experiments often require measurements to be made in rooms with special characteristics. Two types of rooms can be distinguished: anechoic rooms and reverberation rooms.
Anechoic room
The principle of this room is to simulate a free field. In free space, acoustic waves propagate from the source out to infinity. In a room, the reflections of the sound on the walls produce waves which propagate in the opposite direction and come back towards the source. In anechoic rooms, the walls are very absorbent in order to eliminate these reflections, so the sound seems to die away rapidly. The materials used on the walls are rockwool, glasswool or foams, which absorb sound over relatively wide frequency bands. Cavities are dug into the wool so that the long wavelengths corresponding to low frequencies are absorbed too. Ideally, the sound pressure level of a point source decreases by about 6 dB per doubling of distance.
Anechoic rooms are used in the following experiments:
Reverberation room
The walls of a reverberation room mostly consist of concrete and are covered with reflective paint. Alternative designs consist of sandwich panels with metal surfaces. The sound reflects many times off the walls before dying away, giving an impression similar to sound in a cathedral. Ideally, the only absorption of sound energy is by the air itself. Because of all these reflections, a lot of plane waves with different directions of propagation interfere at each point of the room. Considering all the waves individually is very complicated, so the acoustic field is simplified by the diffuse-field hypothesis: the field is homogeneous and isotropic, and the pressure level is then uniform in the room. The validity of this hypothesis increases with frequency, which results in a lower limiting frequency for each reverberation room, above which the density of standing waves is sufficient.
Several conditions are required for this approximation: the absorption coefficient of the walls must be very low (α < 0.2), and the room must have geometrical irregularities (non-parallel walls, diffusing objects) to avoid pressure nodes of the resonance modes.
With this hypothesis, the theory of Sabine can be applied. It deals with the reverberation time, which is the time required for the sound level to decrease by 60 dB. T depends on the volume of the room V, and on the absorption coefficient αi and area Si of the different materials in the room:

T₆₀ = 0.16 V / Σ αᵢ Sᵢ
Reverberation rooms are used in the following experiments:
• measurement of the ability of a material to absorb sound
• measurement of the ability of a partition to transmit sound
• intensimetry
• measurement of sound power
Introduction
Many people use one or two rooms in their living space as "theater" rooms, where home theater or music listening takes place. It is a common misconception that adding speakers to the room will enhance the quality of the room acoustics. There are other simple things that can be done to improve the room's acoustics and produce sound that is similar to "theater" sound. This section provides some simple background knowledge on acoustics and then explains some solutions that will help improve the sound quality of a room.
The direct sound travels straight from the TV to the listener, as shown by the heavy black arrow in the figure. All of the other sound is reflected off surfaces before it reaches the listener.
Reflected sound
Reflected sound waves, good and bad, affect the sound you hear, where it comes from, and the quality of the sound
when it gets to you. The bad news when it comes to reflected sound is standing waves.
These waves are created when sound is reflected back and forth between any two parallel surfaces in your room,
ceiling and floor or wall to wall.
Standing waves can distort sounds at about 300 Hz and below, which includes the lower mid-frequency and bass ranges. Standing waves tend to collect near the walls and in the corners of a room; these collected standing waves are called room resonance modes.
Some room dimensions produce the largest number of standing waves:
1. Cube
2. Room with 2 out of the three dimensions equal
3. Rooms with dimensions that are multiples of each other
Absorbed
The sound that humans hear is actually a form of acoustic energy. Different materials absorb different amounts of
this energy at different frequencies. When considering room acoustics, there should be a good mix of high frequency
absorbing materials and low frequency absorbing materials. A table with information on how different common household materials absorb sound can be found here [18].
Diffused sound
Using devices that diffuse sound is a fairly new way of improving the acoustic performance of a room. It is a means of creating sound that appears to be "live". Diffusers can break up echo-like reflections without absorbing too much sound.
1. If you have carpet or drapes already in your room, use diffusion to control side wall reflections.
2. A bookcase filled with odd-sized books makes an effective diffuser.
3. Use absorptive material on room surfaces between your listening position and your front speakers, and treat the
back wall with diffusive material to re-distribute the reflections.
References
• Acoustic Room Treatment Articles [20]
• Room Acoustics: Acoustic Treatments [21]
• Home Improvement: Acoustic Treatments [22]
• Crutchfield Advisor [23]
Voice production
Although the science behind sound production by the vocal folds is complex, it can be thought of as similar to a brass player's lips, or a whistle made from a blade of grass. Basically, the vocal folds (or lips, or blades of grass) form a constriction in the airflow, and as the air is forced through the narrow opening, the vocal folds oscillate. This causes a periodic change in the air pressure, which is perceived as sound.
Vocal Folds Video [26]
When the airflow is introduced to the vocal folds, it forces open the two vocal folds which are nearly closed initially.
Due to the stiffness of the folds, they will then try to close the opening again. And now the airflow will try to force
the folds open etc... This creates an oscillation of the vocal folds, which in turn, as I stated above, creates sound.
However, this is a damped oscillation, meaning it will eventually achieve an equilibrium position and stop
oscillating. So how are we able to "sustain" sound?
As it will be shown later, the answer seems to be in the changing shape of vocal folds. In the opening and the closing
stages of the oscillation, the vocal folds have different shapes. This affects the pressure in the opening, and creates
the extra pressure needed to push the vocal folds open and sustain oscillation. This part is explained in more detail in
the "Model" section.
This flow-induced oscillation, as with many fluid mechanics problems, is not an easy problem to model. Numerous
attempts to model the oscillation of vocal folds have been made, ranging from a single mass-spring-damper system
to finite element models. In this page I would like to use my single-mass model to explain the basic physics behind
the oscillation of a vocal fold.
Information on vocal fold models: National Center for Voice and Speech [27]
Model
The simplest way of simulating the motion of the vocal folds is to use a single mass-spring-damper system as shown above. The mass represents one vocal fold, and the second vocal fold is assumed to be its mirror image about the axis of symmetry. Position 3 represents a location immediately past the exit (the end of the mass), and position 2 represents the glottis (the region between the two vocal folds).
-----EQN 1
-----EQN 2
Note that the pressure and the velocity at position 3 cannot change. This makes the right-hand side of EQN 2 constant. Observation of EQN 2 reveals that in order to have an oscillating pressure at 2, we must have an oscillating velocity at 2. The flow velocity inside the glottis can be studied through the theory of orifice flow.
The constriction of airflow at the vocal folds is much like an orifice flow with one major difference: with vocal folds,
the orifice profile is continuously changing. The orifice profile for the vocal folds can open or close, as well as
change the shape of the opening. In Figure 1, the profile is converging, but in another stage of oscillation it takes a
diverging shape.
The orifice flow is described by Blevins as:
-----EQN 3
Where the constant C is the orifice coefficient, governed by the shape and the opening size of the orifice. This
number is determined experimentally, and it changes throughout the different stages of oscillation.
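As a rough numerical sketch, the standard orifice relation from fluid mechanics, u = C·√(2Δp/ρ), can be evaluated as below; whether this matches the exact form of EQN 3 in Blevins is an assumption of this sketch, and the orifice coefficient C and driving pressure are illustrative inputs only.

```python
import math

def orifice_velocity(delta_p, density, orifice_coefficient):
    """Flow velocity through an orifice, u = C * sqrt(2 * dp / rho)."""
    return orifice_coefficient * math.sqrt(2.0 * delta_p / density)

rho_air = 1.2        # kg/m^3
dp_drive = 800.0     # driving (subglottal) pressure in Pa, order-of-magnitude assumption
for C in (0.6, 0.8, 1.0):    # the coefficient changes as the glottal profile changes
    print(f"C = {C:.1f}  ->  u ≈ {orifice_velocity(dp_drive, rho_air, C):.1f} m/s")
```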
Solving equations 2 and 3, the pressure force throughout the glottal region can be determined.
-----EQN 4
where
Here delta is the penetration distance of the vocal fold past the line of symmetry.
Past the vocal folds, the produced sound enters the vocal tract. Basically this is the cavity of the mouth as well as the nasal cavity. These cavities act as acoustic filters, modifying the character of the sound, and these characteristics define the unique voice each person produces.
Related links
• FEA Model [28]
• Two Mass Model [29]
References
1. Fundamentals of Acoustics; Kinsler et al, John Wiley & Sons, 2000
2. Acoustics: An introduction to its Physical Principles and Applications; Pierce, Allan D., Acoustical Society of
America, 1989.
3. Blevins, R.D. (1984). Applied Fluid Dynamics Handbook. Van Nostrand Reinhold Co. 81-82.
4. Titze, I. R. (1994). Principles of Voice Production. Prentice-Hall, Englewood Cliffs, NJ.
5. Lucero, J. C., and Koenig, L. L. (2005). Simulations of temporal patterns of oral airflow in men and women using a two-mass model of the vocal folds under dynamic control, Journal of the Acoustical Society of America 117, 1362-1372.
6. Titze, I.R. (1988). The physics of small-amplitude oscillation of the vocal folds. Journal of the Acoustical Society
of America 83, 1536-1552
Threshold of pain
120 dB SPL ≈ 20 Pa
130 dB SPL ≈ 63 Pa
The Threshold of hearing is frequency dependent, and typically shows a minimum (indicating the ear's maximum
sensitivity) at frequencies between 1 kHz and 5 kHz. A typical ATH curve is pictured in Fig. 1. The absolute
threshold of hearing represents the lowest curve amongst the set of equal-loudness contours, with the highest curve
representing the threshold of pain.
In psychoacoustic audio compression, the ATH is used, often in combination with masking curves, to calculate
which spectral components are inaudible and may thus be ignored in the coding process; any part of an audio
spectrum which has an amplitude (level or strength) below the ATH may be removed from an audio signal without
any audible change to the signal.
The ATH curve rises with age as the human ear becomes less sensitive to sound, with the greatest changes occurring at frequencies higher than 2 kHz. Curves for subjects of various age groups are illustrated in Fig. 2. The data are from the United States Occupational Health and Environment Control, Standard Number 1910.95 App F.
General technique
1. A microphone should be used whose frequency response will suit the frequency range of the voice or instrument
being recorded.
2. Vary microphone positions and distances until you achieve the monitored sound that you desire.
3. In the case of poor room acoustics, place the microphone very close to the loudest part of the instrument being
recorded or isolate the instrument.
4. Personal taste is the most important component of microphone technique. Whatever sounds right to you, is right.
Types of microphones
Dynamic microphones
These are the most common general-purpose microphones. They do not require power to operate. If you have a
microphone that is used for live performance, it is probably a dynamic mic.
They have the advantage that they can withstand very high sound pressure levels (high volume) without damage or
distortion, and tend to provide a richer, more intense sound than other types. Traditionally, these mics did not
provide as good a response on the highest frequencies (particularly above 10 kHz), but some recent models have
come out that attempt to overcome this limitation.
In the studio, dynamic mics are often used for high sound pressure level instruments such as drums, guitar amps and
brass instruments. Models that are often used in recording include the Shure SM57 and the Sennheiser MD421.
Condenser microphones
These microphones are often the most expensive microphones a studio owns. They require power to operate, either
from a battery or phantom power, provided using the mic cable from an external mixer or pre-amp. These mics have
a built-in pre-amplifier that uses the power. Some vintage microphones have a tube amplifier, and are referred to as
tube condensers.
While they cannot withstand the very high sound pressure levels that dynamic mics can, they provide a flatter
frequency response, and often the best response at the highest frequencies. Not as good at conveying intensity, they
are much better at providing a balanced accurate sound.
Condenser mics come with a variety of sizes of transducers. They are usually grouped into smaller format
condensers, which often are long cylinders about the size of a nickel coin in diameter, and larger format condensers,
the transducers of which are often about an inch in diameter or slightly larger.
In the studio, condenser mics are often used for instruments with a wide frequency range, such as an acoustic piano,
acoustic guitar, voice, violin, cymbals, or an entire band or chorus. Close miking with condensers is generally avoided on louder instruments. Models that are often used in recording include the Shure SM81 (small format), AKG C414
(large format) and Neumann U87 (large format).
Ribbon microphones
Ribbon microphones are often used as an alternative to condenser microphones. Some modern ribbon microphones
do not require power, and some do. The first ribbon microphones, developed at RCA in the 1930s, required no
power, were quite fragile and could be destroyed simply by blowing air through them. Modern ribbon mics are much more resilient, and can be used with the same level of caution as condenser mics.
Ribbon microphones provide a warmer sound than a condenser mic, with a less brittle top end. Some vocalists (including Paul McCartney) prefer them to condenser mics. In the studio they are used on vocals, violins, and even drums. Popular models for recording include the Royer R121 and the AEA R84.
Working distance
Close miking
Miking at a distance of 1 inch to about 1 foot from the sound source is considered close miking. This
technique generally provides a tight, present sound quality and does an effective job of isolating the signal and
excluding other sounds in the acoustic environment.
Bleed
Bleeding occurs when the signal is not properly isolated and the microphone picks up another nearby instrument.
This can make the mixdown process difficult if there are multiple voices on one track. Use the following methods to
prevent leakage:
• Place the microphones closer to the instruments.
• Move the instruments farther apart.
• Put some sort of acoustic barrier between the instruments.
• Use directional microphones.
A B miking
The A-B miking distance rule (the 3:1 ratio) is a general rule of thumb for close miking. To prevent phase anomalies and bleed, the microphones should be placed at least three times as far apart as the distance between the instrument and the microphone.
Distant miking
Distant miking refers to the placement of microphones at a distance of 3 feet or more from the sound source. This
technique allows the full range and balance of the instrument to develop and it captures the room sound. This tends
to add a live, open feeling to the recorded sound, but careful consideration needs to be given to the acoustic
environment.
Accent miking
Accent miking is a technique used for solo passages when miking an ensemble. A soloist needs to stand out from the ensemble, but placing a microphone too close will sound unnaturally present compared to the distant miking technique used for the rest of the ensemble. Therefore, the microphone should be placed just close enough to the soloist that the signal can be mixed effectively without the soloist sounding detached from the ensemble.
Ambient miking
Ambient miking is placing the microphones at such a distance that the room sound is more prominent than the direct
signal. This technique is used to capture audience sound or the natural reverberation of a room or concert hall.
Stereo
Stereo miking is simply using two microphones to obtain a stereo left-right image of the sound. A simple method is
the use of a spaced pair, which is placing two identical microphones several feet apart and using the difference in
time and amplitude to create the image. Great care should be taken in the method as phase anomalies can occur due
to the signal delay. This risk of phase anomaly can be reduced by using the X/Y method, where the two microphones
are placed with the grilles as close together as possible without touching. There should be an angle of 90 to 135 degrees between the mics. This technique uses only amplitude, not time, to create the image, so phase discrepancies are unlikely.
Surround
To take advantage of 5.1 sound or some other surround setup, microphones may be placed to capture the surround
sound of a room. This technique essentially stems from stereo technique with the addition of more microphones.
Because every acoustic environment is different, it is difficult to define a general rule for surround miking, so
placement becomes dependent on experimentation. Careful attention must be paid to the distance between
microphones and potential phase anomalies.
Amplifiers
When miking an amplified speaker, such as for electric guitars, the mic should be placed 2 to 12 inches from the
speaker. Exact placement becomes more critical at a distance of less than 4 inches. A brighter sound is achieved
when the mic faces directly into the center of the speaker cone and a more mellow sound is produced when placed
slightly off-center. Placing off-center also reduces amplifier noise.
A bigger sound can often be achieved by using two mics. The first mic should be a dynamic mic, placed as described
in the previous paragraph. Add to this a condenser mic placed at least 3 times further back (remember the 3:1 rule),
which will pick up the blended sound of all the speakers, as well as some room ambience. Run the mics into separate
channels and combine them to your taste.
Brass instruments
Brass instruments such as trumpets, trombones, and tubas produce high sound-pressure levels, and their mid to mid-high frequencies are quite directional. Microphones should therefore face slightly off the centre of the bell, at a distance of one foot or more, to prevent overloading from wind blasts.
Guitars
Technique for acoustic guitars is dependent on the desired sound. Placing a microphone close to the sound hole will
achieve the highest output possible, but the sound may be bottom-heavy because of how the sound hole resonates at
low frequencies. Placing the mic slightly off-center at 6 to 12 inches from the hole will provide a more balanced
pickup. Placing the mic closer to the bridge with the same working distance will ensure that the full range of the
instrument is captured.
A technique that some engineers use places a large-format condenser mic 12-18 inches away from the 12th fret of
the guitar, and a small-format condenser very close to the strings nearby. Combining the two signals can produce a
rich tone.
Pianos
Ideally, microphones would be placed 4 to 6 feet from the piano to allow the full range of the instrument to develop
before it is captured. This isn't always possible due to room noise, so the next best option is to place the microphone
just inside the open lid. This applies to both grand and upright pianos.
Percussion
One overhead microphone can be used for a drum set, although two are preferable. If possible, each component of
the drum set should be miked individually at a distance of 1 to 2 inches as if they were their own instrument. This
also applies to other drums such as congas and bongos. For large, tuned instruments such as xylophones, multiple
mics can be used as long as they are spaced according to the 3:1 rule. Typically, dynamic mics are used for
individual drum miking, while small-format condensers are used for the overheads.
Voice
Standard technique is to put the microphone directly in front of the vocalist's mouth, although placing slightly
off-center can alleviate harsh consonant sounds (such as "p") and prevent overloading due to excessive dynamic
range.
Woodwinds
A general rule for woodwinds is to place the microphone around the middle of the instrument at a distance of 6
inches to 2 feet. The microphone should be tilted slightly towards the bell or sound hole, but not directly in front of
it.
Sound Propagation
It is important to understand how sound propagates due to the nature of the acoustic environment so that microphone
technique can be adjusted accordingly. There are four basic ways that this occurs:
Reflection
Sound waves are reflected by surfaces if the object is at least as large as the wavelength of the sound. Reflection is the cause of echo (a simple delay), reverberation (many reflections causing the sound to continue after the source has stopped), and standing waves (when the distance between two parallel walls is such that the original and reflected waves are in phase and reinforce one another).
Absorption
Sound waves are absorbed by materials rather than reflected. This can have both positive and negative effects
depending on whether you desire to reduce reverberation or retain a live sound.
Diffraction
Objects that may be between sound sources and microphones must be considered due to diffraction. Sound will be
stopped by obstacles that are larger than its wavelength. Therefore, higher frequencies will be blocked more easily
than lower frequencies.
Refraction
Sound waves bend as they pass through media with varying density. Wind or temperature changes can cause sound to seem as if it is moving in a different direction than expected.
Sources
• Huber, Dave Miles, and Robert E. Runstein. Modern Recording Techniques. Sixth Edition. Burlington: Elsevier,
Inc., 2005.
• Shure, Inc. (2003). Shure Product Literature. Retrieved November 28, 2005, from http://www.shure.com/
scripts/literature/literature.aspx.
Introduction
Microphones are devices which convert pressure fluctuations into electrical signals. There are two main methods of
accomplishing this task that are used in the mainstream entertainment industry. They are known as dynamic
microphones and condenser microphones. Piezoelectric crystals can also be used as microphones but are not
commonly used in the entertainment industry. For further information on piezoelectric transducers Click Here [30].
Dynamic microphones
This type of microphone converts pressure fluctuations into an electrical current. These microphones work by means of the principle known as Faraday's Law. The principle states that when an electrical conductor is moved through a
magnetic field, an electrical current is induced within the conductor. The magnetic field within the microphone is
created using permanent magnets and the conductor is produced in two common arrangements.
The first conductor arrangement is made of
a coil of wire. The wire is typically copper
and is attached to a circular membrane or
piston usually made from lightweight plastic
or occasionally aluminum. The impinging
pressure fluctuation on the piston causes it
to move in the magnetic field and thus
creates the desired electrical current. Figure
1 provides a sectional view of a moving-coil
microphone.
The second common arrangement replaces the coil and piston with a thin corrugated metal ribbon suspended directly in the magnetic field; Figure 2 shows a microphone of this type.

Figure 2: Dynamic Ribbon Microphone

Condenser microphones
This type of microphone converts pressure fluctuations into electrical potentials through the use of a changing electrical capacitance. This is why condenser microphones are also known as capacitor
microphones. An electrical capacitor is created when two charged electrical conductors are placed at a finite distance
from each other. The basic relation that describes capacitors is:
Q = C·V
where Q is the electrical charge of the capacitor’s conductors, C is the capacitance, and V is the electric potential
between the capacitor’s conductors. If the electrical charge of the conductors is held at a constant value, then the
voltage between the conductors will be inversely proportional to the capacitance. Also, the capacitance is inversely
proportional to the distance between the conductors. Condenser microphones utilize these two concepts.
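To see how these two concepts combine, the sketch below treats the capsule as a parallel-plate capacitor held at constant charge, so the output voltage tracks the diaphragm-to-back-plate spacing. All numbers are hypothetical, chosen only for illustration.

```python
# Constant-charge capsule: C = eps0 * A / d and V = Q / C, so V = Q * d / (eps0 * A).
EPS0 = 8.854e-12  # permittivity of free space, F/m

def capsule_voltage(charge_c, area_m2, gap_m):
    capacitance = EPS0 * area_m2 / gap_m   # parallel-plate approximation
    return charge_c / capacitance

q, area = 1.7e-9, 1e-4   # hypothetical: ~1.7 nC stored on a 1 cm^2 diaphragm
for gap in (24e-6, 25e-6, 26e-6):   # diaphragm moving +/- 1 um around a 25 um gap
    print(f"gap = {gap * 1e6:.0f} um -> V = {capsule_voltage(q, area, gap):.1f} V")
```

With these values the voltage swings by roughly 2 V per micrometer of diaphragm motion, which is the change in electric potential the following paragraphs describe.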
The capacitor in a condenser microphone is made of two parts: the diaphragm and the back plate. Figure 3 shows a sectional view of a condenser microphone. The diaphragm is what moves due to impinging pressure fluctuations and the back plate is held in a stationary position. When the diaphragm moves closer to the back plate, the capacitance increases and therefore a change in electric potential is produced. The diaphragm is typically made of metallic-coated Mylar. The assembly that houses both the back plate and the diaphragm is commonly referred to as a capsule.

Figure 3: Sectional View of Condenser Microphone
To keep the diaphragm and back plate at a constant charge, an electric potential must be presented to the capsule.
There are various ways of performing this operation. The first of which is by simply using a battery to supply the
needed DC potential to the capsule. A simplified schematic of this technique is displayed in figure 4. The resistor
across the leads of the capsule is very high, in the range of 10 mega ohms, to keep the charge on the capsule close to
constant.
Figure 4: Internal Battery Powered Condenser Microphone

Another technique for providing a constant charge on the capacitor is to supply a DC electric potential through the microphone cable that carries the microphone's output signal. Standard microphone cable is known as XLR cable and is terminated by three-pin connectors. Pin one connects to the shield around the cable. The microphone signal is transmitted between pins two and three. Figure 5 displays the layout of a dynamic microphone attached to a mixing console via XLR cable.
Phantom Supply/Powering (Audio Engineering Society, DIN 45596): The first and most popular method of providing a DC potential through a microphone cable is to supply +48 V to both of the microphone output leads, pins 2 and 3, and use the shield of the cable, pin 1, as the ground of the circuit. Because pins 2 and 3 see the same potential, any fluctuation of the microphone powering potential will not affect the microphone signal seen by the attached audio equipment. This configuration can be seen in figure 6. The +48 V is stepped down at the microphone using a transformer and provides the potential to the back plate and diaphragm in a similar fashion as the battery solution. In fact, 9, 12, 24, 48 or 52 V can be supplied, but 48 V is the most frequent.

Figure 5: Dynamic Microphone Connection to Mixing Console via XLR Cable
The second method of running the potential through the cable is to supply 12 V between pins 2 and 3. This method is referred to as T-powering (also known as Tonaderspeisung, or AB powering; DIN 45595). The main problem with T-powering is that the supply potential appears directly across the two signal pins, so any fluctuation of the powering potential is superimposed on the microphone signal.

Figure 6: Condenser Microphone Powering Techniques

Finally, the diaphragm and back plate can be manufactured from a material that maintains a fixed charge. These microphones are termed electrets. In early electret designs, the charge on the material tended to become unstable over time. Recent advances in science and manufacturing have allowed this problem to be eliminated in present designs.
Conclusion
Two branches of microphones exist in the entertainment industry. Dynamic microphones are found in the
moving-coil and ribbon configurations. The movement of the conductor in dynamic microphones induces an electric
current which is then transformed into the reproduction of sound. Condenser microphones utilize the properties of
capacitors. Creating the charge on the capsule of condenser microphones can be accomplished by battery, phantom
powering, T-powering, and by using fixed charge materials in manufacturing.
References
• Sound Recording Handbook. Woram, John M. 1989.
• Handbook of Recording Engineering Fourth Edition. Eargle, John. 2003.
• Wharfedale [44]
The purpose of the acoustic transducer is to convert electrical energy into acoustic energy. Many variations of
acoustic transducers exist, although the most common is the moving coil-permanent magnet transducer. The classic
loudspeaker is of the moving coil-permanent magnet type.
The classic electrodynamic loudspeaker driver can be divided into three key components:
1. The Magnet Motor Drive System
2. The Loudspeaker Cone System
3. The Loudspeaker Suspension
The magnet motor drive system is built around a permanent magnet whose field is guided by a front focusing plate and the yoke. The magnetic field is coupled through the air gap. The magnetic field strength (B) of the air gap is typically optimized for uniformity across the gap. [1]

Figure 2: Permanent Magnet Structure
When a coil of wire carrying a current is placed inside the permanent magnetic field, a force is produced. B is the magnetic field strength, l is the length of wire in the coil, and i is the current flowing through the coil. The electromagnetic force is given by the expression of Laplace:

F = B·i·l

The field and the current are orthogonal, so the force is obtained by integrating along the length of the wire. With Re the radius of a turn and n the number of turns, the force is directed along the axis of the coil and becomes:

F = 2·π·Re·n·B·i
This force is directly proportional to the current flowing through the coil.
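A small numerical sketch of this relationship (coil dimensions and field strength are hypothetical, chosen only to show the proportionality):

```python
# Laplace force on a voice coil: wire length l = 2*pi*Re*n, force F = B * l * i.
import math

def voice_coil_force(B_tesla, coil_radius_m, n_turns, current_a):
    wire_length = 2.0 * math.pi * coil_radius_m * n_turns
    return B_tesla * wire_length * current_a

# Example: B = 1 T in the gap, 12.5 mm coil radius, 100 turns, 1 A of current.
print(f"F = {voice_coil_force(1.0, 0.0125, 100, 1.0):.2f} N")   # ~7.9 N, doubling i doubles F
```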
The cone is the radiating element of the driver, and its material and geometry strongly influence the sound quality and frequency response of the loudspeaker. When the cone is attached to the voice coil, a large gap above the voice coil is left exposed. This could be a problem if foreign particles make their way into the air gap between the voice coil and the permanent magnet structure. The solution to this problem is to place what is known as a dust cap on the cone to cover the air gap. A figure of the cone and dust cap is shown below.
The current intensity i and the cone velocity v can also be related by the following equation (U is the voltage, R the electrical resistance and L the inductance of the voice coil):

U = R·i + L·(di/dt) + B·l·v

The electrical impedance can then be determined as the ratio of the voltage to the current intensity:

Z = U/i = R + jωL + (B·l)²/Zm

where Zm is the mechanical impedance of the moving assembly, including the radiation load of the surrounding fluid.
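Putting the electrical and mechanical sides together gives the familiar impedance curve with a peak at the driver resonance. The sketch below uses this standard lumped-parameter model with hypothetical parameter values.

```python
# Loudspeaker electrical input impedance Z(w) = Re + j*w*Le + (B*l)^2 / Zm,
# with mechanical impedance Zm = Rm + j*(w*Mm - 1/(w*Cm)).  Values are illustrative.
import numpy as np

def electrical_impedance(freq_hz, Re=6.0, Le=0.5e-3, Bl=7.5, Mm=0.012, Cm=1.2e-3, Rm=1.5):
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    Zm = Rm + 1j * (w * Mm - 1.0 / (w * Cm))        # mechanical (motional) impedance
    return Re + 1j * w * Le + Bl**2 / Zm

freqs = [20, 42, 100, 1000]   # the motional term peaks near the free-air resonance (~42 Hz here)
for f, Z in zip(freqs, electrical_impedance(freqs)):
    print(f"{f:5d} Hz : |Z| = {abs(Z):5.1f} ohm")
```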
When the fluid is air, this additional mass and damping from the radiation load are negligible. For example, at a frequency of 1000 Hz, the additional mass is 3 g.
References
1. The Loudspeaker Design Cookbook 5th Edition; Dickason, Vance., Audio Amateur Press, 1997.
2. Beranek, L. L. Acoustics. 2nd ed. Acoustical Society of America, Woodbridge, NY. 1993.
Introduction
A sealed or closed box baffle is the most basic but often the cleanest sounding sub-woofer box design. The
sub-woofer box in its most simple form, serves to isolate the back of the speaker from the front, much like the
theoretical infinite baffle. The sealed box provides simple construction and controlled response for most sub-woofer
applications. The slow low end roll-off provides a clean transition into the extreme frequency range. Unlike ported
boxes, the cone excursion is reduced below the resonant frequency of the box and driver due to the added stiffness
provided by the sealed box baffle.
Closed baffle boxes are typically constructed of a very rigid material such as MDF (medium-density fiberboard) or plywood 0.75 to 1 inch thick. Depending on the size of the box and the material used, internal bracing may be necessary to maintain a rigid box. A rigid box is important in order to prevent unwanted box resonances.
As with any acoustics application, the box must be matched to the loudspeaker driver for maximum performance.
The following will outline the procedure to tune the box or maximize the output of the sub-woofer box and driver
combination.
where all of the following parameters are in the mechanical mobility analog
Ve - voltage supply
Re - electrical resistance
Mm - driver mass
Cm - driver compliance
Rm - driver mechanical resistance
RAf - front cone radiation resistance into the air
XAf - front cone radiation reactance into the air
RBr - rear cone radiation resistance into the box
XBr - rear cone radiation reactance into the box
Driver parameters
In order to tune a sealed box to a driver, the driver parameters must be known. Some of the parameters are provided
by the manufacturer, some are found experimentally, and some are found from general tables. For ease of
calculations, all parameters will be represented in the SI units meter/kilogram/second. The parameters that must be
known to determine the size of the box are as follows:
f0 - driver free-air resonance
CMS - mechanical compliance of the driver
SD - effective area of the driver
Mechanical compliance
By definition compliance is the inverse of stiffness or what is commonly referred to as the spring constant. The
compliance of a driver can be found by measuring the displacement of the cone when known masses are place on the
cone when the driver is facing up. The compliance would then be the displacement of the cone in meters divided by
the added weight in newtons.
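A small sketch of that measurement (the numbers are hypothetical):

```python
# Compliance CMS = displacement / force, where the force is the weight of the added mass.
G = 9.81  # m/s^2

def mechanical_compliance(displacement_m, added_mass_kg):
    return displacement_m / (added_mass_kg * G)

# e.g. a 100 g mass deflecting the cone by 1 mm gives CMS of about 1.0e-3 m/N
print(f"CMS = {mechanical_compliance(1.0e-3, 0.100):.2e} m/N")
```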
The effective area SD is determined from the effective diameter of the cone; from this diameter, the area is found from the basic area-of-a-circle equation.
Acoustic compliance
From the known mechanical compliance of the cone, the acoustic compliance can be found from the following
equation:
CAS = CMS · SD²
From the driver acoustic compliance, the box acoustic compliance is found. This is where the final application of the sub-woofer is considered. The acoustic compliance of the box will determine the percentage shift upwards of the resonant frequency. If a large shift is desired for high-SPL applications, then a large ratio of driver to box acoustic compliance is required. If a flatter response is desired for high-fidelity applications, then a lower ratio of driver to box acoustic compliance is required. Specifically, the ratios can be found in the following figure using line (b) as reference.

CAS = CAB · r

r - driver-to-box acoustic compliance ratio
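The sizing chain from driver compliance to box volume can be strung together as in the sketch below. The driver values and compliance ratio are hypothetical; the final step uses the standard relation that the acoustic compliance of an air volume is V/(ρ·c²), so V = C·ρ·c².

```python
# CAS = CMS * SD^2, CAB = CAS / r, and the sealed-box volume follows from VB = CAB * rho * c^2.
RHO_AIR = 1.21    # kg/m^3
C_AIR = 343.0     # m/s

def sealed_box_volume(cms_m_per_n, sd_m2, compliance_ratio):
    cas = cms_m_per_n * sd_m2**2          # driver acoustic compliance
    cab = cas / compliance_ratio          # required box acoustic compliance
    return cab * RHO_AIR * C_AIR**2       # box volume in m^3

# Example: CMS = 1.0e-3 m/N, a driver with SD ~ 0.033 m^2, ratio r = 3
vb = sealed_box_volume(1.0e-3, 0.033, 3.0)
print(f"VB = {vb * 1000:.1f} litres")     # roughly 52 litres for these assumed values
```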
Box dimensions
From the calculated box volume, the dimensions of the box can then be designed. There is no set formula for finding the dimensions of the box, but there are general guidelines to be followed. If the driver were mounted in the center of a square face, the waves generated by the cone would reach the edges of the box at the same time and, when combined, would create a strong diffracted wave in the listening space. To best prevent this, the driver should either be mounted offset from the center of a square face, or the face should be rectangular. In general, the face of the box in which the driver is mounted should not be square.
Bass-reflex enclosures improve the low-frequency response of loudspeaker systems. Bass-reflex enclosures are also
called "vented-box design" or "ported-cabinet design". A bass-reflex enclosure includes a vent or port between the
cabinet and the ambient environment. This type of design, as one may observe by looking at contemporary
loudspeaker products, is still widely used today. Although the construction of bass-reflex enclosures is fairly simple,
their design is not simple, and requires proper tuning. This reference focuses on the technical details of bass-reflex
design. General loudspeaker information can be found here [45].
In this figure, represents the radiation impedance of the outside environment on the loudspeaker diaphragm.
The loading on the rear of the diaphragm has changed when compared to the sealed enclosure case. If one visualizes
the movement of air within the enclosure, some of the air is compressed and rarefied by the compliance of the
enclosure, some leaks out of the enclosure, and some flows out of the port. This explains the parallel combination of
, , and . A truly realistic model would incorporate a radiation impedance of the port in series
with , but for now it is ignored. Finally, , the acoustical mass of the enclosure, is included as discussed
in the sealed enclosure case. The formulas which calculate the enclosure parameters are listed in Appendix B.
It is important to note the parallel combination of and . This forms a Helmholtz resonator (click here
for more information). Physically, the port functions as the “neck” of the resonator and the enclosure functions as the
“cavity.” In this case, the resonator is driven from the piston directly on the cavity instead of the typical Helmholtz
case where it is driven at the “neck.” However, the same resonant behavior still occurs at the enclosure resonance
frequency, . At this frequency, the impedance seen by the loudspeaker diaphragm is large (see Figure 3 below).
Thus, the load on the loudspeaker reduces the velocity flowing through its mechanical parameters, causing an
anti-resonance condition where the displacement of the diaphragm is a minimum. Instead, the majority of the volume velocity is emitted by the port itself rather than by the loudspeaker. When this large acoustic impedance is reflected to the electrical circuit, it appears inverted (proportional to its reciprocal), so the impedance seen by the voice coil exhibits a minimum at the enclosure resonance. Figure 3 shows a plot of the impedance seen at the terminals of the loudspeaker. In this example, the enclosure resonance was found to be about 40 Hz, which corresponds to the null in the voice-coil impedance.
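The enclosure resonance can be estimated with the usual Helmholtz relation. The sketch below uses hypothetical box and port dimensions (not the example in the text) and a common approximation for the port end correction.

```python
# Helmholtz resonance of the vented box: fB = (c / 2*pi) * sqrt(S_port / (V_box * L_eff)).
import math

def helmholtz_frequency(box_volume_m3, port_area_m2, port_length_m, c=343.0):
    port_radius = math.sqrt(port_area_m2 / math.pi)
    l_eff = port_length_m + 1.7 * port_radius   # approximate end correction for the port
    return (c / (2 * math.pi)) * math.sqrt(port_area_m2 / (box_volume_m3 * l_eff))

# Example: 50-litre box, 7.5 cm diameter port, 10 cm long -> roughly 40 Hz
print(f"fB = {helmholtz_frequency(0.050, math.pi * 0.0375**2, 0.10):.1f} Hz")
```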
where J1 is a Bessel function and H1 is a Struve function. For small values of ka the radiation load is dominated by its reactive (mass-like) part. Hence, the low-frequency impedance on the loudspeaker is represented with an acoustic mass [1]. For a
simple analysis, , , , and (the transducer parameters, or Thiele-Small parameters) are
converted to their acoustical equivalents. All conversions for all parameters are given in Appendix A. Then, the
series masses, , , and , are lumped together to create . This new circuit is shown below.
Based on the nature of the derivation, it is convenient to define the parameter α, the system compliance ratio, and h, the Helmholtz tuning ratio:
Other parameters are combined to form what are known as quality factors:
This notation allows for a simpler expression for the resulting transfer function [1]:
where
It is possible to simply let r = 1 m for this analysis without loss of generality, because distance is only a function of the surroundings, not the loudspeaker. Also, because the transfer function magnitude is of primary interest, the exponential term, which has unity magnitude, is omitted. Hence, the pressure response of the system is given by [1]:
Where . In the following sections, design methods will focus on rather than ,
which is given by:
This also implicitly ignores the constants in front of since they simply scale the response and do not affect
the shape of the frequency response curve.
Alignments
A popular way to determine the ideal parameters has been through the use of alignments. The concept of alignments
is based upon filter theory. Filter development is a method of selecting the poles (and possibly zeros) of a transfer
function to meet a particular design criterion. The criteria are the desired properties of a magnitude-squared transfer
function, which in this case is . From any of the design criteria, the poles (and possibly zeros) of
are found, which can then be used to calculate the numerator and denominator. This is the “optimal” transfer
function, which has coefficients that are matched to the parameters of to compute the appropriate values
that will yield a design that meets the criteria.
There are many different types of filter designs, each of which has trade-offs associated with it. However, this design is limited because of the structure of the transfer function. In particular, it has the structure of a fourth-order high-pass filter with all zeros at s = 0. Therefore, only those filter design methods which produce a low-pass filter with only poles will be acceptable methods to use. From the traditional set of algorithms, only Butterworth and Chebyshev low-pass filters have only poles. In addition, another type of filter called a quasi-Butterworth filter can also be used, which has similar properties to a Butterworth filter. These three algorithms are fairly simple, thus they are the most popular. When these low-pass filters are converted to high-pass filters, the transformation produces an s⁴ term in the numerator.
More details regarding filter theory and these relationships can be found in numerous resources, including [5].
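As a concrete illustration of the kind of target response an alignment aims for, the sketch below evaluates the magnitude of a 4th-order Butterworth high-pass; the cutoff frequency used is illustrative only.

```python
# 4th-order Butterworth high-pass target: |G(f)|^2 = (f/f3)^8 / (1 + (f/f3)^8),
# where f3 is the -3 dB cutoff frequency.
import math

def butterworth4_highpass_db(f_hz, f3_hz):
    ratio8 = (f_hz / f3_hz) ** 8
    return 10.0 * math.log10(ratio8 / (1.0 + ratio8))

for f in (20, 30, 40, 80, 160):     # assuming an illustrative cutoff of 40 Hz
    print(f"{f:4d} Hz : {butterworth4_highpass_db(f, 40.0):6.1f} dB")
# -24 dB one octave below cutoff, -3 dB at cutoff, essentially flat above: maximally flat pass band.
```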
Butterworth Alignment
The Butterworth algorithm is designed to have a maximally flat pass band. Since the slope of a function corresponds to its derivatives, a flat function will have derivatives equal to zero. Since the flattest possible pass band is optimal, the ideal function will have as many derivatives equal to zero as possible at s = 0. Of course, if all derivatives were equal to zero, then the function would be a constant, which performs no filtering.
Often, it is better to examine what is called the loss function. Loss is the reciprocal of gain, thus
The loss function can be used to achieve the desired properties, then the desired gain function is recovered from the
loss function.
Now, applying the desired Butterworth property of maximal pass-band flatness, the loss function is simply a
polynomial with derivatives equal to zero at s = 0. At the same time, the original polynomial must be of degree eight
(yielding a fourth-order function). However, derivatives one through seven can be equal to zero if [3]
Quasi-Butterworth Alignment
The quasi-Butterworth alignments do not have as well-defined an algorithm as the Butterworth alignment. The name "quasi-Butterworth" comes from the fact that the transfer functions for these responses appear similar to the Butterworth ones, with (in general) the addition of terms in the denominator. This is illustrated below. While there are many types of quasi-Butterworth alignments, the simplest and most popular is the 3rd-order alignment (QB3). The comparison of the QB3 magnitude-squared response against the 4th-order Butterworth is shown below.
Notice that the case B = 0 is the Butterworth alignment. The reason that this QB alignment is called 3rd order is that, as B increases, the slope approaches 3 dec/dec instead of 4 dec/dec, as in the 4th-order Butterworth.
This phenomenon can be seen in Figure 5.
Chebyshev Alignment
The Chebyshev algorithm is an alternative to the Butterworth algorithm. For the Chebyshev response, the
maximally-flat passband restriction is abandoned. Now, a ripple, or fluctuation is allowed in the pass band. This
allows a steeper transition or roll-off to occur. In this type of application, the low-frequency response of the
loudspeaker can be extended beyond what can be achieved by Butterworth-type filters. An example plot of a
Chebyshev high-pass response with 0.5 dB of ripple against a Butterworth high-pass response for the same is
shown below.
For more information on Chebyshev polynomials, see the Wolfram Mathworld: Chebyshev Polynomials [46] page.
When applying the high-pass transformation to the 4th order form of , the desired response has the form
[1]:
The parameter determines the ripple. In particular, the magnitude of the ripple is dB and can be
chosen by the designer, similar to B in the quasi-Butterworth case. Using the recursion formula for ,
References
[1] Leach, W. Marshall, Jr. Introduction to Electroacoustics and Audio Amplifier Design. 2nd ed. Kendall/Hunt,
Dubuque, IA. 2001.
[2] Beranek, L. L. Acoustics. 2nd ed. Acoustical Society of America, Woodbridge, NY. 1993.
[3] DeCarlo, Raymond A. “The Butterworth Approximation.” Notes from ECE 445. Purdue University. 2004.
[4] DeCarlo, Raymond A. “The Chebyshev Approximation.” Notes from ECE 445. Purdue University. 2004.
[5] VanValkenburg, M. E. Analog Filter Design. Holt, Rinehart and Winston, Inc. Chicago, IL. 1982.
[6] Kreutz, Joseph and Panzer, Joerg. "Derivation of the Quasi-Butterworth 5 Alignments." Journal of the Audio
Engineering Society. Vol. 42, No. 5, May 1994.
[7] Rutt, Thomas E. "Root-Locus Technique for Vented-Box Loudspeaker Design." Journal of the Audio
Engineering Society. Vol. 33, No. 9, September 1985.
[8] Simeonov, Lubomir B. and Shopova-Simeonova, Elena. "Passive-Radiator Loudspeaker System Design Software
Including Optimization Algorithm." Journal of the Audio Engineering Society. Vol. 47, No. 4, April 1999.
Voice-coil resistance
Driver (speaker) suspension compliance
Driver (speaker) suspension resistance
Enclosure compliance
Enclosure air-leak losses
Inside enclosure volume; inside area of the side the speaker is mounted on
Specific heat of air at constant volume; specific heat of filling at constant volume
Ratio of specific heats for air (1.4); speed of sound in air (about 344 m/s)
Effective density of the enclosure; if there is little or no filling (an acceptable assumption in a bass-reflex system but not for sealed enclosures), the effective density is simply that of air
Introduction
Acoustic filters are used in many devices such as mufflers, noise control materials (absorptive and reactive), and
loudspeaker systems to name a few. Although the waves in simple (single-medium) acoustic filters usually travel in
gases such as air and carbon monoxide (in the case of automobile mufflers) or in materials such as fiberglass, polyvinylidene fluoride (PVDF) film, or polyethylene (Saran Wrap), there are also filters that couple two or three distinct media together to achieve a desired acoustic response. General information about basic acoustic filter design can be perused at the following wikibook page [Acoustic Filter Design & Implementation [47]]. The focus of this article is on acoustic filters that use multilayer air/polymer-film coupled media as the acoustic medium through which sound waves propagate; it concludes with an example of how these filters can be used to detect and extract audio-frequency information from high-frequency "carrier" waves that carry an audio signal. However, before getting into these specific types of acoustic filters, we need to briefly discuss how sound waves interact with the medium (or media) in which they travel and how these factors can play a role when designing acoustic filters.
where
is interpreted as the (time-averaged) rate of energy transmission of a sound wave through a unit area normal to
the direction of propagation, and this parameter is also an important factor in acoustic filter design because the
characteristic properties of the given medium can change relative to intensity of the sound wave traveling through it.
In other words, the particles (atoms or molecules) that make up the medium will respond differently when the intensity of the sound wave is very high or very low relative to the size of the control area (i.e.
dimensions of the filter, in this case). Other properties such as the elasticity and mean propagation velocity (of a
sound wave) can change in the acoustic medium as well, but focusing on frequency, impedance, and/or intensity in
the design process usually takes care of these other parameters because most of them will inevitably be dependent on
the aforementioned properties of the medium.
and
are the tangible values that tell how much of the incident wave is being reflected from and transmitted through the
junction where the media meet. Note that is the (total) input impedance seen by the incident sound wave upon
just entering an air-solid acoustic media layer. In the case of multiple air-columns as shown in Fig. 2, is the
aggregate impedance of each air-column layer seen by the incident wave at the input. Below, in Fig. 1, a simple illustration explains what happens when an incident sound wave propagating in medium (1) comes in contact with medium (2) at the junction of the two media (x = 0), where the sound waves are represented by vectors.
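For normal incidence, the reflected and transmitted pressure amplitudes follow directly from the two impedances. A minimal sketch (standard plane-wave relations; the impedance values are illustrative, not taken from the article):

```python
# Pressure reflection and transmission coefficients at a junction between media
# of characteristic impedance z1 and z2, plus the fraction of power transmitted.
def interface_coefficients(z1, z2):
    r = (z2 - z1) / (z2 + z1)        # pressure reflection coefficient
    t = 2.0 * z2 / (z2 + z1)         # pressure transmission coefficient
    t_power = 4.0 * z1 * z2 / (z1 + z2) ** 2
    return r, t, t_power

# Air (about 415 rayl) meeting a dense polymer film (illustrative 3e6 rayl):
r, t, t_power = interface_coefficients(415.0, 3.0e6)
print(f"pressure: R = {r:.4f}, T = {t:.4f}; power transmitted = {t_power:.2e}")
# Almost all the incident power is reflected, even though the pressure T is near 2.
```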
As mentioned above, an example of three such successive air-solid acoustic media layers is shown in Fig. 2 and the
electroacoustic equivalent circuit for Fig. 2 is shown in Fig. 3 where = (density of solid
material)(thickness of solid material) = unit-area (or volume) mass, characteristic acoustic impedance of
medium, and wavenumber. Note that in the case of a multilayer, coupled acoustic medium in an
acoustic filter, the impedance of each air-solid section is calculated by using the following general purpose
impedance ratio equation (also referred to as transfer matrices)...
where is the (known) impedance at the edge of the solid of an air-solid layer (on the right) and is the
(unknown) impedance at the edge of the air column of an air-solid layer.
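A sketch of how such a cascade could be evaluated numerically is given below. It uses the standard transmission-line expression for an air column and treats the thin film as a lumped unit-area mass, with illustrative material values rather than ones from the cited reference.

```python
# Cascading one air column backed by a thin solid film: the film of unit-area mass m
# adds roughly j*w*m, and the air column of length L transforms the impedance behind it.
import cmath, math

def air_column_input_impedance(z_load, length_m, freq_hz, c=343.0, rho=1.21):
    z0 = rho * c
    k = 2 * math.pi * freq_hz / c
    tan_kl = cmath.tan(k * length_m)
    return z0 * (z_load + 1j * z0 * tan_kl) / (z0 + 1j * z_load * tan_kl)

def film_input_impedance(z_load, surface_density_kg_m2, freq_hz):
    return 1j * 2 * math.pi * freq_hz * surface_density_kg_m2 + z_load

# One air (5 mm) / film (25 um, ~1780 kg/m^3, PVDF-like) layer terminated by open air:
f = 2.0e4
z = air_column_input_impedance(415.0, 5e-3, f)
z = film_input_impedance(z, 1780.0 * 25e-6, f)
print(f"|Z_in| at {f / 1e3:.0f} kHz: {abs(z):.0f} rayl")
```

Repeating the two function calls per layer walks the impedance back from the last (known) termination to the input, which is the transfer-matrix idea mentioned above.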
References
[1] Minoru Todo, "New Type of Acoustic Filter Using Periodic Polymer Layers for Measuring Audio Signal
Components Excited by Amplitude-Modulated High-Intensity Ultrasonic Waves," Journal of Audio Engineering
Society, Vol. 53, pp. 930-41 (2005 October)
[2] Fundamentals of Acoustics; Kinsler et al, John Wiley & Sons, 2000
[3] ME 513 Course Notes, Dr. Luc Mongeau, Purdue University
[4] http://www.ieee-uffc.org/archive/uffc/trans/Toc/abs/02/t0270972.htm
Created by Valdez L. Gant
Sound in fluids
The speed of sound in fluids can be determined using the following relation:

c = sqrt(B/ρ)

where B is the adiabatic bulk modulus and ρ is the density of the fluid. Typical values of the bulk modulus range from 2e9 to 2.5e9 N/m2. For a particular oil with a density of 889 kg/m3 and a bulk modulus of 2e9 N/m2, the speed of sound is c = sqrt(2e9 / 889) ≈ 1500 m/s.
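As a quick check of the numbers quoted above:

```python
# Speed of sound in a fluid: c = sqrt(B / rho), using the oil values from the text.
import math
bulk_modulus = 2.0e9   # N/m^2
density = 889.0        # kg/m^3
print(f"c = {math.sqrt(bulk_modulus / density):.0f} m/s")   # about 1500 m/s
```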
Source of Noise
The main source of noise in hydraulic systems is the pump which supplies the flow. Most of the pumps used are positive displacement pumps. Of the positive displacement pumps, the axial piston swash-plate type is usually preferred due to its reliability and efficiency.
The noise generated in an axial piston pump can be classified under two categories: (i) fluidborne noise (FBN) and (ii) structureborne noise (SBN).
In such a pump, the discharge at the pump outlet is the sum of the discharges from the individual displacement chambers. The discontinuity in flow between adjacent chambers results in a kinematic flow ripple. The amplitude of the kinematic ripple can be theoretically determined given the size of the pump and the number of displacement chambers. The kinematic ripple is the main cause of the fluidborne noise. The kinematic ripple is a theoretical value; the actual flow ripple at the pump outlet is much larger because the kinematic ripple is combined with a compressibility component due to the fluid compressibility. These ripples (also referred to as flow pulsations) generated at the pump are transmitted through the pipe or flexible hose connected to the pump and travel to all parts of the hydraulic circuit.
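A rough sketch of how the kinematic ripple arises can be written down from the swash-plate kinematics. The pump dimensions below are hypothetical and the model ignores compressibility, so it shows only the kinematic component discussed above.

```python
# Each piston on the delivery stroke contributes Ap * R * tan(beta) * w * sin(theta);
# summing the delivering pistons gives the rippled outlet flow.
import math

def outlet_flow(theta, n_pistons=9, piston_area=2.0e-4, pitch_radius=0.04,
                swash_angle_deg=15.0, speed_rad_s=2 * math.pi * 1000 / 60):
    tan_b = math.tan(math.radians(swash_angle_deg))
    flow = 0.0
    for i in range(n_pistons):
        phase = theta + 2 * math.pi * i / n_pistons
        piston_velocity = pitch_radius * tan_b * speed_rad_s * math.sin(phase)
        if piston_velocity > 0:          # only pistons currently delivering to the outlet
            flow += piston_area * piston_velocity
    return flow

samples = [outlet_flow(2 * math.pi * k / 360) for k in range(360)]
ripple = (max(samples) - min(samples)) / (sum(samples) / len(samples))
print(f"kinematic flow ripple ~ {ripple * 100:.1f} % of mean flow")   # a few percent for 9 pistons
```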
The pump is considered an ideal flow source. The pressure in the system is determined by the resistance to the flow, otherwise known as the system load. The flow pulsations result in pressure pulsations, which are superimposed on the mean system pressure. Both the flow and pressure pulsations easily travel to all parts of the circuit and affect the performance of components such as control valves and actuators, making the components vibrate and sometimes even resonate. This vibration of system components adds to the noise generated by the flow pulsations. The transmission of FBN in the circuit is discussed under transmission below.
A typical axial piston pump with 9 pistons running at 1000 rpm can produce a sound pressure level of more than 70 dB.
Fig. 1 shows an exploded view of an axial piston pump. The flow pulsations and the oscillating forces on the swash plate, which cause FBN and SBN respectively, are also shown for one revolution of the pump.
Transmission
FBN
The transmission of FBN is a complex phenomenon. Over the past few decades, a considerable amount of research has gone into the mathematical modeling of pressure and flow transients in the circuit. This involves the solution of wave equations, with the piping treated as a distributed parameter system known as a transmission line [1] & [3].
Let us consider a simple pump-pipe-loading valve circuit as shown in Fig. 2. Neglecting losses, the pressure and flow ripple at any location x in the pipe can be described by the relations:

P(x, ω) = A(ω) e^(−jkx) + B(ω) e^(jkx) .........(1)

Q(x, ω) = [A(ω) e^(−jkx) − B(ω) e^(jkx)] / Z0 .....(2)

where A and B are frequency-dependent complex coefficients which are directly proportional to the pump (source) flow ripple, but are also functions of the source impedance Zs, the characteristic impedance of the pipe Z0 and the termination impedance Zt. These impedances, which usually vary as the system operating pressure and flow rate change, can be determined experimentally.
For complex systems with several system components, the pressure and flow ripples are estimated using the
transformation matrix approach. For this, the system components can be treated as lumped impedances (a throttle
valve or accumulator), or distributed impedances (flexible hose or silencer). Various software packages are available
today to predict the pressure pulsations.
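Below is a minimal, lossless sketch of equations (1) and (2) in code form; the impedance and ripple values are illustrative placeholders, not measured data.

```python
# Pressure ripple along a pump-pipe-valve circuit: the source flow ripple Qs, the source
# impedance Zs, the pipe characteristic impedance Z0 and the termination Zt fix A and B.
import cmath, math

def pressure_ripple(x, freq, pipe_len, Qs, Zs, Z0, Zt, c=1400.0):
    k = 2 * math.pi * freq / c
    rt = (Zt - Z0) / (Zt + Z0)                        # reflection at the loading valve
    zin = Z0 * (Zt + 1j * Z0 * math.tan(k * pipe_len)) / (Z0 + 1j * Zt * math.tan(k * pipe_len))
    p_in = Qs * (Zs * zin) / (Zs + zin)               # source ripple divides between Zs and the pipe
    A = p_in / (1 + rt * cmath.exp(-2j * k * pipe_len))
    B = A * rt * cmath.exp(-2j * k * pipe_len)
    return abs(A * cmath.exp(-1j * k * x) + B * cmath.exp(1j * k * x))

# 300 Hz ripple component, 2 m pipe, throttle-valve termination (all values illustrative):
for x in (0.0, 1.0, 2.0):
    p = pressure_ripple(x, 300.0, 2.0, Qs=1e-5, Zs=1e10, Z0=6e9, Zt=2e10)
    print(f"x = {x:.1f} m : |P| = {p / 1e5:.2f} bar")
```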
SBN
The transmission of SBN follows the classic source-path-noise model. The vibrations of the swash plate, the main cause of SBN, are transferred to the pump casing, which encloses the entire rotating group of the pump, including the displacement chambers (also known as the cylinder block), the pistons, and the swash plate. The pump casing, apart from vibrating itself, transfers the vibration down to the mount on which the pump is installed. The mount then passes the vibrations down to the main mounting structure or the vehicle. Thus the SBN is transferred from the swash plate to the main structure or vehicle via the pump casing and the mount.
Some of the machine structures along the path of transmission are good at transmitting this vibrational energy, and they may even resonate and reinforce it. By converting only a fraction of 1% of the pump's structureborne noise into sound, a member in the transmission path could radiate more airborne noise (ABN) than the pump itself [4].
Noise reduction
The reduction of the noise radiated from the hydraulic system can be approached in two ways.
(i) Reduction at source - the reduction of noise at the pump. A large amount of open literature is available on reduction techniques, with some focusing on reducing FBN at the source and others on SBN. Reduction of FBN and SBN at the source has a large influence on the ABN that is radiated. Even though a lot of progress has been made in reducing FBN and SBN separately, the problem of noise in hydraulic systems is not fully solved, and much remains to be done. The reason is that FBN and SBN are interrelated, in the sense that trying to reduce the FBN at the pump tends to affect the SBN characteristics. Currently, one of the main research directions in pump noise reduction is a systematic approach to understanding the coupling between FBN and SBN and targeting them simultaneously, instead of treating them as two separate sources. Such a unified approach demands not only well-trained researchers but also sophisticated computer-based mathematical models of the pump which can accurately output the results needed to optimize the pump design. The amplitude of the fluid pulsations can also be reduced at the source with the use of a hydraulic attenuator [5].
(ii) Reduction at component level - which focuses on the reduction of noise from individual components such as hoses, control valves, pump mounts and fixtures. This can be accomplished by suitable design modifications of the component so that it radiates the least amount of noise. Optimization using computer-based models can be one of the ways.
Fig.3 Domain of hydraulic system noise generation and transmission (Figure recreated from [1])
References
1. Designing Quieter Hydraulic Systems - Some Recent Developments and Contributions, Kevin Edge, 1999, Fluid
Power: Forth JHPS International Symposium.
2. Fundamentals of Acoustics L.E. Kinsler, A.R. Frey, A.B.Coppens, J.V. Sanders. Fourth Edition. John Wiley &
Sons Inc.
3. Reduction of Axial Piston Pump Pressure Ripple A.M. Harrison. PhD thesis, University of Bath. 1997
4. Noise Control of Hydraulic Machinery Stan Skaistis, 1988. MARCEL DEKKER , INC.
5. Hydraulic Power System Analysis, A. Akers, M. Gassman, & R. Smith, Taylor & Francis, New York, 2006, ISBN 0-8247-9956-9
Proposal
As electric and electronic devices get smaller and more capable, the noise of their cooling devices becomes more important. This page explains the origins of noise generation in the small axial cooling fans used in electronic goods such as desktop and laptop computers. The sources of fan noise include aerodynamic noise as well as the operating sound of the fan itself. This page focuses on the aerodynamic noise generation mechanisms.
Introduction
If one opens a desktop computer, one may find three (or more) fans. For example, a fan is typically found on the heat sink of the CPU, in the back panel of the power supply unit, on the case ventilation hole, on the graphics card, and even on the motherboard chipset if it is a recent one. Computer noise, which annoys many people, is mostly due to cooling fans, provided the hard drive(s) are fairly quiet. When Intel Pentium processors were first introduced, there was no need to have a fan on the CPU; however, contemporary CPUs cannot function even for several seconds without a cooling fan. As CPU densities increase, the heat transfer required for nominal operation demands increased airflow, which causes more and more noise. The fans commonly used in desktop computers are axial fans; laptop computers typically use centrifugal blowers. Several fan types are shown here (pdf format) [50]. Different fan types have different noise generation and performance characteristics. The axial flow fan is mainly considered in this page.
Rotor-Casing interaction
If the fan blades are very close to a structure which is not symmetric, unsteady interaction forces on the blades are generated. The fan then experiences running conditions similar to operating in a non-uniform flow field. See Acoustics/Rotor Stator Interactions for details.
Rotating Stall
Click here to read the definition and an aerodynamic description of stall.
The noise due to stall is a complex phenomenon that occurs at low flow rates. If, for some reason, the flow is locally disturbed, it can cause stall on one of the blades. As a result, the upstream passage of this blade is partially blocked and the mean flow is diverted away from it. This increases the angle of attack on the closest blade on the upstream side of the originally stalled blade, so the flow stalls there as well. On the other side of the first blade, meanwhile, the flow is un-stalled because of the reduced flow angle. As this process repeats, the stall cell travels around the blades at about 30-50% of the running frequency, in the direction opposite to the blade rotation. This series of phenomena causes unsteady blade forces, and consequently generates noise and vibrations.
Incident Turbulence
Velocity fluctuations of the intake flow with a stochastic time history generate random forces on the blades, and hence broadband noise.
Vortex Shedding
For some reason, a vortex can separate from a blade. The circulating flow around the blade then starts to change. This causes non-uniform forces on the blades, and hence noise. A classical example of this phenomenon is the 'Karman vortex street' [51] (some images and animations [52]). The vortex shedding mechanism can occur in the laminar boundary layer of a low-speed fan and also in the turbulent boundary layer of a high-speed fan.
Flow Separation
Flow separation causes the stall explained above. This phenomenon can cause random noise, which spreads out the discrete spectral peaks and turns the noise into broadband noise.
Tip Vortex
Since cooling fans are ducted axial flow machines, the annular gap between the blade tips and the casing is an important parameter for noise generation. While the fan rotates, there is a secondary flow through the annular gap driven by the pressure difference between the upstream and downstream sides of the fan. Because of this flow, a tip vortex is generated in the gap, and the broadband noise increases as the annular gap gets bigger.
Installation Effects
Once a fan is installed, unexpected noise problems can come up even if the fan is well designed acoustically. These are called installation effects, and two types are applicable to cooling fans.
Closing Comment
Noise reduction of cooling fans has some restrictions:
1. Active noise control is not economically effective: 80 mm cooling fans cost only 5-10 US dollars, so it is only applicable to high-end electronic products.
2. Restricting certain aerodynamic phenomena for noise reduction can cause a serious performance reduction of the fan. Increasing the RPM of the fan is, of course, a much more dominant factor for noise.
Other aspects of fan noise, such as active RPM control or the noise of the various bearings used in fans, are covered at some of the linked sites below.
References
[1] Neise, W., and Michel, U., "Aerodynamic Noise of Turbomachines"
[2] Anderson, J., "Fundamentals of Aerodynamics", 3rd edition, 2001, McGrawHill
[3] Hoppe, G., and Neise, W., "Vergleich verschiedener Gerauschmessnerfahren fur Ventilatoren. Forschungsbericht
FLT 3/1/31/87, Forschungsvereinigung fur Luft- und Trocknungstechnik e. V., Frankfurt/Main, Germany
Introduction
Piezoelectricity, from the Greek "piezein" (to press), literally means pressure electricity. Certain crystalline substances generate electric charges under mechanical stress and, conversely, experience a mechanical strain in the presence of an electric field. The piezoelectric effect describes a situation where the transducing material senses input mechanical vibrations and produces a charge at the frequency of the vibration. Conversely, an AC voltage causes the piezoelectric material to vibrate in an oscillatory fashion at the same frequency as the applied signal.
Quartz is the best-known single-crystal material with piezoelectric properties. Strong piezoelectric effects can be induced in materials with an ABO3 (perovskite) crystalline structure, where 'A' denotes a large divalent metal ion such as lead and 'B' denotes a smaller tetravalent ion such as titanium or zirconium.
For any crystal to exhibit the piezoelectric effect, its structure must have no center of symmetry. Either a tensile or a compressive stress applied to the crystal alters the separation between the positive and negative charge sites in the cell, causing a net polarization at the surface of the crystal. The polarization varies directly with the applied stress and is direction dependent, so that compressive and tensile stresses produce electric fields, and hence voltages, of opposite polarity.
Dynamic Performance
The dynamic performance of a piezoelectric material relates to how it behaves under alternating stresses near its mechanical resonance. The parallel combination of C2 with L1, C1, and R1 in the equivalent circuit below controls the transducer's reactance, which is a function of frequency.
Frequency Response
The graph below shows the impedance of a piezoelectric transducer as a function of frequency. The minimum value at fn corresponds to the resonance, while the maximum value at fm corresponds to the anti-resonance.
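The behaviour just described can be reproduced from the equivalent circuit of the previous section (C2 in parallel with the series branch L1, C1, R1). The component values below are illustrative only, chosen so that the resonance falls near 40 kHz; with them the anti-resonance lies slightly higher, near 40.8 kHz.

```python
# Impedance of the piezoelectric equivalent circuit: the series (motional) branch
# L1-C1-R1 in parallel with the static capacitance C2.
import numpy as np

def equivalent_circuit_impedance(freq_hz, C2=4e-9, L1=0.1, C1=158e-12, R1=50.0):
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_series = R1 + 1j * (w * L1 - 1.0 / (w * C1))   # motional branch
    z_shunt = 1.0 / (1j * w * C2)                    # static capacitance
    return z_series * z_shunt / (z_series + z_shunt)

freqs = np.linspace(39e3, 41.5e3, 6)
for f, z in zip(freqs, equivalent_circuit_impedance(freqs)):
    print(f"{f / 1e3:6.2f} kHz : |Z| = {abs(z):9.1f} ohm")
# |Z| dips near the resonance (~40 kHz) and peaks near the anti-resonance (~40.8 kHz).
```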
Resonant Devices
Non-resonant devices may be modeled by a capacitor representing the capacitance of the piezoelectric element, with an impedance modeling the mechanically vibrating system as a shunt in the circuit. The impedance may be modeled as a capacitor in the non-resonant case, which allows the circuit to reduce to a single capacitor replacing the parallel combination.
For resonant devices the impedance becomes a resistance or static capacitance at resonance. This is an undesirable effect. In mechanically driven systems this effect acts as a load on the transducer and decreases the electrical output. In electrically driven systems this effect shunts the driver, requiring a larger input current. The adverse effect of the static capacitance experienced at resonant operation may be counteracted by using a shunt or series inductor that resonates with the static capacitance at the operating frequency.
Applications
Mechanical Measurement
Because of the dielectric leakage current of piezoelectrics, they are poorly suited for applications where the force or pressure changes slowly. They are, however, very well suited for the highly dynamic measurements needed in blast gauges and accelerometers.
Ultrasonic
High-intensity ultrasound applications utilize half-wavelength transducers with resonant frequencies between 18 kHz and 45 kHz. Large blocks of transducer material are needed to generate high intensities, which makes manufacturing difficult and economically impractical. Also, since half-wavelength transducers have the highest stress amplitude in the center, the end sections act as inert masses. The end sections are therefore often replaced with metal plates possessing a much higher mechanical quality factor, giving the composite transducer a higher mechanical quality factor than a single-piece transducer.
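For orientation, the half-wavelength condition fixes the resonant length of such a transducer, L = c/(2·f). The sound speed used below is an assumed ballpark figure for a piezoceramic, not a value from the text.

```python
# Half-wavelength resonant length L = c / (2 * f) over the frequency range quoted above.
c_ceramic = 4000.0   # m/s, assumed longitudinal sound speed in the transducer material
for f in (18e3, 45e3):
    print(f"f = {f / 1e3:.0f} kHz -> half-wave length = {c_ceramic / (2 * f) * 100:.1f} cm")
```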
The overall electro-acoustic efficiency is:
The second term on the right hand side is the dielectric loss and the third term is the mechanical loss.
Efficiency is maximized when:
then:
Welding of plastics
Atomization of liquids
Ultrasonic drilling
Ultrasonic cleaning
Ultrasonic foils in the paper machine wet end for more uniform fibre distribution
Ultrasound
Non-destructive testing
etc.
References
[1] http://www.sengpielaudio.com/calculator-speedsound.htm
[2] http://www.sengpielaudio.com/SpeedOfSoundPressure.pdf
[3] http://www.pdas.com/atmos.htm
[4] http://mathworld.wolfram.com/WaveEquation1-Dimensional.html
[5] http://www.ndt-ed.org/EducationResources/CommunityCollege/Ultrasonics/Physics/acousticimpedance.htm
[6] http://en.wikibooks.org/wiki/Acoustic:Boundary_Conditions_and_Forced_Vibrations#Wave_Properties
[7] http://www.silex.com/pdfs/Exhaust%20Silencers.pdf
[8] http://mecheng.osu.edu/~selamet/docs/2003_JASA_113(4)_1975-1985_helmholtz_ext_neck.pdf
[9] http://en.wikibooks.org/wiki/Acoustic:specific_application-automobile_muffler#Absorptive_muffler
[10] http://en.wikipedia.org/wiki/Narrowband
[11] http://en.wikibooks.org/wiki/Acoustic:Car_Mufflers#The_reflector_muffler
[12] http://www.eiwilliams.com/steel/index.php?p=EngineSilencers#All
[13] http://freespace.virgin.net/mark.davidson3/TL/TL.html
[14] http://widget.ecn.purdue.edu/~me513/
[15] http://widget.ecn.purdue.edu/~me513/animate.html
[16] http://myfwc.com/boating/airboat/Section3.pdf
[17] http://www.diracdelta.co.uk/science/source/a/c/active%20noise%20control/source.html
[18] http://www.crutchfieldadvisor.com/learningcenter/home/speakers_roomacoustics.html?page=2#materials_table
[19] http://www.crutchfieldadvisor.com/S-hpU9sw2hgbG/learningcenter/home/speakers_roomacoustics.html?page=4:
[20] http://www.ecoustics.com/Home/Accessories/Acoustic_Room_Treatments/Acoustic_Room_Treatment_Articles/
[21] http://www.audioholics.com/techtips/roomacoustics/roomacoustictreatments.php
[22] http://www.diynetwork.com/diy/hi_family_room/article/0,2037,DIY_13912_3471072,00.html
[23] http://www.crutchfieldadvisor.com/S-hpU9sw2hgbG/learningcenter/home/speakers_roomacoustics.html?page=1
[24] http://en.wikipedia.org/wiki/Larynx
[25] http://sprojects.mmi.mcgill.ca/larynx/notes/n_frames.htm
[26] http://www.entusa.com/normal_larynx.htm
[27] http://www.ncvs.org/ncvs/tutorials/voiceprod/tutorial/model.html
[28] http://biorobotics.harvard.edu/pubs/gunter-jasa-pub.pdf
[29] http://www.mat.unb.br/~lucero/JAS001362.pdf
[30] http://en.wikibooks.org/wiki/Acoustic:Piezoelectric_Transducers
[31] http://en.wikibooks.org/wiki/Acoustic:Acoustic_Transducers_-_The_Loudspeaker
[32] http://www.akgusa.com/
[33] http://www.audio-technica.com/cms/site/c35da94027e94819/index.html
[34] http://www.audixusa.com/
[35] http://www.bkhome.com/bk_home.asp
[36] http://www.dpamicrophones.com/
[37] http://www.electrovoice.com/
[38] http://www.josephson.com/
[39] http://www.neumannusa.com/mat_dev/FLift/open.asp
[40] http://www.rode.com.au/
[41] http://www.schoeps.de/
[42] http://www.sennheiser.com/
[43] http://www.shure.com/
[44] http://www.wharfedalepro.com/
[45] http://en.wikipedia.org/wiki/Loudspeaker
[46] http://mathworld.wolfram.com/ChebyshevPolynomialoftheFirstKind.html
[47] http://en.wikibooks.org/wiki/Acoustic_Filter_Design_%26_Implementation
[48] http://en.wikipedia.org/wiki/Parametric_array
[49] http://web.ics.purdue.edu/~vgant/AcousticFilterPaper.pdf
[50] http://www.etrinet.com/tech/pdf/aerodynamics.pdf
[51] http://www.galleryoffluidmechanics.com/vortex/karman.htm
[52] http://www2.icfd.co.jp/menu1/karmanvortex/karman.html
[53] http://www.silentpcreview.com/files/ball_vs_sleeve_bearing.pdf
[54] http://www.jmcproducts.com/cooling_info/noise.shtml
[55] http://www.ansys.com/industries/tm-fan-noise.htm
[56] http://www.directron.com/noise.html
[57] http://www.xlr8yourmac.com/G4ZONE/G4_fan_noise.html
[58] http://www.xlr8yourmac.com/systems/quicksilver_noise/quieting_quicksilver_noise.html
[59] http://www.roberthancock.com/dell/fannoise.htm
[60] http://www.cpemma.co.uk/
[61] http://www.tomshardware.com/2004/06/15/fighting_fan_noise_pollution/index.html
[62] http://www.diracdelta.co.uk/science/source/f/a/fan%20noise/source.html
[63] http://www.bkhome.com/
[64] http://cpemma.co.uk/
[65] http://morganelectroceramics.com
[66] http://www.ultratechnology.se
License
Creative Commons Attribution-Share Alike 3.0 Unported
http://creativecommons.org/licenses/by-sa/3.0/