William M. Hartmann - How We Localize Sound
broadband and relatively benign in their spectral variation, so that listeners can both localize the source and identify it on the basis of the spectrum. It is still not entirely clear how this localization process works. Early models of the process that focused on particular spectral features (such as the peak at 7000 Hz for a source overhead) have given way, under the pressure of recent research, to models that employ the entire spectrum.

The experimental art
Most of what we know about sound localization has been learned from experiments using headphones. With headphones, the experimenter can precisely control the stimulus heard by the listener. Even experiments done on cats, birds, and rodents have these creatures wearing miniature earphones.

In the beginning, much was learned about fundamental binaural capabilities from headphone experiments with simple differences in level and arrival time for tones of various frequencies and noises of various compositions.7 However, work on the larger question of sound localization had to await several technological developments to achieve an accurate rendering of the ATF in each ear. First were the acoustical measurements themselves, done with tiny probe microphones inserted in the listener's ear canals to within a few millimeters of the eardrums. Transfer functions measured with these microphones allowed experimenters to create accurate simulations of the real world using headphones, once the transfer functions of the microphones and headphones themselves had been compensated by inverse filtering.

Adequate filtering requires fast, dedicated digital signal processors linked to the computer that runs experiments. The motion of the listener's head can be taken into account by means of an electromagnetic head tracker. The head tracker consists of a stationary transmitter, whose three coils produce low-frequency magnetic fields, and a receiver, also with three coils, that is mounted on the listener's head. The tracker gives a reading of all six degrees of freedom in the head motion, 60 times per second. Based on the motion of the head, the controlling computer directs the fast digital processor to refilter the signals to the ears so that the auditory scene is stable and realistic. This virtual reality technology is capable of synthesizing a convincing acoustical environment. Starting with a simple monaural recording of a conversation, the experimenter can place the individual talkers in space. If the listener's head turns to face a talker, the auditory image remains constant, as it does in real life. What is most important for the psychoacoustician, this technology has opened a large new territory for controlled experiments.

Making it wrong
With headphones, the experimenter can create conditions not found in nature to try to understand the role of different localization mechanisms. For instance, by introducing
[Figure: (a) Gain in decibels (0 to -30 dB) versus frequency (0.2 to 20 kHz, logarithmic scale). The curves show the spectrum of a small loudspeaker as heard in the left ear of a manikin when the speaker is in front (red), overhead (blue), and in back (green). A comparison of the curves reveals the relative gains of the anatomical transfer function. (b) The KEMAR manikin is, in every gross anatomical detail, a typical American. It has silicone outer ears and microphones in its head. The coupler between the ear canal and the microphone is a cavity tuned to have the input acoustical impedance of the middle ear. The KEMAR shown here is in an anechoic room accompanied by Tim, an undergraduate physics major at Michigan State.]
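The headphone simulation described in the text can be sketched numerically: a monaural signal is filtered with each ear's transfer function, and the headphone's own response is compensated by a regularized inverse filter so that only the intended ear filtering reaches the eardrum. This is a minimal illustration, not the laboratory implementation; all impulse responses below are made-up placeholders, not measured data.

```python
import numpy as np

def inverse_filter(h, n_fft, eps=1e-3):
    """Regularized frequency-domain inverse of impulse response h.
    The small constant eps keeps the gain bounded where |H| is tiny."""
    H = np.fft.rfft(h, n_fft)
    return np.conj(H) / (np.abs(H) ** 2 + eps)

def render_binaural(mono, hrir_l, hrir_r, headphone_ir):
    """Filter a monaural signal with left/right ear transfer functions,
    then compensate the headphone response by inverse filtering."""
    n = len(mono) + max(len(hrir_l), len(hrir_r)) + len(headphone_ir) - 2
    n_fft = 1 << (n - 1).bit_length()  # power of two, long enough to avoid wrap
    X = np.fft.rfft(mono, n_fft)
    Hp_inv = inverse_filter(headphone_ir, n_fft)
    left = np.fft.irfft(X * np.fft.rfft(hrir_l, n_fft) * Hp_inv, n_fft)[:n]
    right = np.fft.irfft(X * np.fft.rfft(hrir_r, n_fft) * Hp_inv, n_fft)[:n]
    return left, right

# Toy demonstration with invented impulse responses:
rng = np.random.default_rng(0)
mono = rng.standard_normal(1024)
hrir_l = np.array([1.0, 0.3, 0.1])          # placeholder near-ear response
hrir_r = np.array([0.0, 0.8, 0.25, 0.08])   # extra delay for the far ear
hp_ir = np.array([1.0, -0.2])               # placeholder headphone response
left, right = render_binaural(mono, hrir_l, hrir_r, hp_ir)
```

After compensation, the left channel is close to the monaural signal convolved with the left-ear response alone; the headphone coloration has been divided out, up to the small regularization error.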
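The head-tracked update loop described in the text can also be sketched: at each tracker reading, the computer recomputes the source direction relative to the head and selects the appropriate ear filters, so the auditory image stays fixed in the room as the head turns. The function and parameter names here are hypothetical, and a real system would interpolate between measured directions and handle all six degrees of freedom rather than yaw alone.

```python
import numpy as np

def relative_azimuth(source_az_deg, head_yaw_deg):
    """Source direction relative to the head, wrapped to (-180, 180].
    When the head rotates toward the source, this goes to zero and the
    rendered image stays put in the room."""
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def nearest_filter_index(rel_az_deg, measured_az_deg):
    """Pick the measured transfer-function direction closest to the
    current relative azimuth (called on each tracker update)."""
    d = (np.asarray(measured_az_deg) - rel_az_deg + 180.0) % 360.0 - 180.0
    return int(np.argmin(np.abs(d)))
```

For example, a source at 10 degrees azimuth heard with the head yawed to 350 degrees lies 20 degrees to the listener's right, so the filters measured nearest 20 degrees would be used for the next block of audio.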