SOUND IN THE STAGES OF AN AUDIOVISUAL PRODUCTION PROCESS
By Rosa María Oliart
In our country this arrangement is still common in both film and television: the sound engineer responsible for the audiovisual work participates directly in all stages of the process.
We are going to define each of these stages and the creative and
technical function of the sound engineer in them.
Traditionally, directors began to think about the sound of a film only when it was being edited. Today, sound is understood as an integral part of the film and of any audiovisual work, and it can influence the writing of the script, the shooting, and the editing; in this way it interacts with the other areas.
This is why the creative work of the sound engineer begins with the script, in the pre-production stage. Here the sound engineer must read and study the script to propose the sound climate of the work, design its overall sound, and create a sound concept: a set of effects, atmospheres, and sound transformations that seeks to reinforce the meaning of the work.
To this end, dialogue with the scriptwriter, the director, and the composer is very useful, since they are the ones best placed to propose and discuss the narrative contributions the sound area can offer, as well as aesthetic qualities for the sound of the work.
At this stage, with the script as a working tool, the sound engineer must also make a breakdown by sequences, based on the shooting plan drawn up by the production area, in order to prepare adequately on the technical side. In this way he will have the necessary equipment for the sequences scheduled on each recording day. If on a given day we only have dialogue indoors, we will need the equipment and accessories suited to that type of recording; if we have dialogue on a boat on the high seas that will be filmed from land, we will need wireless microphones; if we have a live rock group, we will need a console to mix several microphones; and so on.
In large churches, priests sing the psalms on a single note to lessen the resonant effect that would cause different notes to blend into one another. Similarly, experienced speakers in large halls speak slowly and avoid sudden changes in tone to improve the intelligibility of speech. In earlier times, the tempo of a musical performance was something the musician decided according to the size of the room where he was going to play: a fast tempo in a large room blurs the sounds together.
With all this historical experience in mind, it is essential that the sound engineer who is going to record direct sound visit and study the acoustic characteristics of the locations, in order to control them and/or get the most out of them, depending on the scene to be recorded there. Let us analyze the acoustic properties of venues from the point of view of their use as filming locations.
When we are going to record direct sound indoors, we must evaluate two basic aspects: the sound insulation of the place, to avoid external parasitic noises, and the acoustic conditions of the premises.
Regarding sound insulation, when we arrive at an interior location we must try to minimize outside noises as much as possible. These can enter through doors and windows, which we should close if the filming conditions allow it. Parasitic noise also enters through the structure of the building itself; we cannot change this, but we should know that the low frequencies of parasitic sounds pass through this route more readily than the high ones.
This creative work necessarily runs up against limits that are not only human or technical but also physical: while a director of photography can modulate the level of light at will, the sound engineer has very limited power over the sound levels of the scenes; he can hardly ask the actor to speak louder, to bang the cutlery on the plate less, or to slam the door more violently during filming.
If a recording turns out too dry, we can add electronic reverberation to the sound; but what we cannot do is remove acoustic reverberation that is present in the original. At most we can moderate it with treatments such as slight compression or an equalization cut in the frequencies where the reverberation is concentrated, but we will not be able to separate it from the original sound.
It also happens that some sound effects, such as dice on a table, energetic footsteps, or the cocking of a weapon, produce an unpleasant sensation, since the high instantaneous energy they release is amplified by reverberation and superimposed on subsequent sounds, disturbing the natural proportion of the recorded sound elements.
Microphones also pick up reflections from surfaces within their pickup range; it is preferable to avoid these unless the place has been acoustically treated or the reverberation interests us aesthetically.
Among the available microphones, we must choose the one that achieves the best ratio of direct to reflected sound at the same distance. Another important aspect is the placement of the microphone: the closer it is to the sound source, the greater the proportion of direct sound it will capture and the less reverberation. If the microphone is further from the source, it will capture the reverberation in greater proportion than the sound of the source, producing a distortion of the sound reality.
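As a rough illustration of this distance trade-off: in a diffuse room, direct sound falls by about 6 dB for each doubling of the microphone's distance from the source, while the reverberant field stays roughly constant, so the direct-to-reverberant balance can be sketched from the room's critical distance. The sketch below is only an illustration under a simple point-source model; the function name and the model itself are my own assumptions, not something from the text:

```python
import math

def direct_to_reverberant_db(distance_m, critical_distance_m):
    """Approximate direct-to-reverberant ratio in dB.

    Assumed model (for illustration only): direct sound obeys the
    inverse-square law, while the diffuse reverberant field is roughly
    constant throughout the room. At the critical distance the two are
    equal, so the ratio there is 0 dB.
    """
    return 20 * math.log10(critical_distance_m / distance_m)

# Halving the mic distance gains about +6 dB of direct sound;
# doubling it loses about 6 dB, burying the source in reverberation.
print(round(direct_to_reverberant_db(1.0, 2.0), 1))  # mic at half the critical distance
print(round(direct_to_reverberant_db(4.0, 2.0), 1))  # mic at twice the critical distance
```

The practical reading matches the paragraph above: every step the boom takes away from the actor trades direct sound for room sound.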
Another frequent problem outdoors is the wind. For this there are foams, zeppelins, and other windscreens that are essential for outdoor recording. In windy locations it is important to study the wind's direction, since the wind tends to carry the sound waves with it, which forces us to choose microphone positions that take this factor into account.
During rehearsals it is also advisable to listen to the actors in order to detect any flaw in vocal technique that could affect intelligibility, such as weak pronunciation of the vowels, which robs the voice of projection; poor articulation of consonants; words run too close together; little use of pauses; or a drop in intensity at the end of sentences. All these details must be worked out with the actors, and some of them resolved during recording.
THE PRODUCTION
In the production stage we face the shoot itself, where all the images and sounds of the audiovisual work will be captured. At this stage the sound engineer will make various types of sound recordings. With good pre-production work, the acoustic problems of the sets will already be solved, and he will only have to deal with unforeseen events, generally parasitic noises entering the location occasionally, such as construction work or a party in the surrounding area.
DIRECT SOUND
It is the sound recorded simultaneously with the image, at the very moment the scene is filmed, in synchrony. In the case of video, the direct sound is recorded on the same tape as the image, on one or both of the channels available for audio: either directly, with one or two microphones into the camera or VTR, or by line if we use a mixer to work with more than two microphones; image and sound are synchronized automatically. In the case of cinema, direct sound is recorded on a DAT, using the clapperboard or time code to maintain synchronization between the camera and the DAT, and image and sound are synchronized before editing. Direct sound can include any type of sound element, depending on the scene to be recorded: dialogue, effects, ambiences, or music.
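Time-code synchronization works because camera and recorder share a frame-accurate clock. As a generic illustration of how a frame count maps to the familiar HH:MM:SS:FF display (24 fps non-drop is assumed here; nothing in this sketch is specific to DAT workflows):

```python
def frames_to_timecode(frame_count, fps=24):
    """Convert an absolute frame count to HH:MM:SS:FF timecode (non-drop).

    fps=24 is an assumption for this example; film and video systems
    use various frame rates.
    """
    ff = frame_count % fps                 # leftover frames within the second
    total_seconds = frame_count // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(frames_to_timecode(90000))  # 90000 frames at 24 fps -> "01:02:30:00"
```

Two devices that agree on this count at every instant can have their material aligned automatically before editing, which is exactly what the clapperboard provides manually when time code is absent.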
The sound engineer must know which lenses the camera is working with, in order to estimate the framing and the space available for moving the microphone on the boom. Nowadays, with the ease of video, the sound engineer watches the rehearsals to learn the limits of the frame.
For this, it must be borne in mind that the direct sound comes straight from the source, but there is also the reflected sound, produced by reflections of that same direct sound off walls and solid bodies, which must be watched during recording; and the ambient sound, which is part of the direct recording and arrives diffusely from different points, must also be taken into account. For ideal intelligibility, priority must be given to the direct sound and the reflected sound must be filtered as much as possible. Yet the latter is essential to give the sensation of a living sound perspective. The solution is to respect the balance between direct and reflected sound through careful placement of the microphone, because if the reflected sound is reduced excessively it would give us the dry sensation of a studio, and the direct sound would lose credibility.
A difficulty that often appears during the recording of direct sound is that, in sound, what is real does not necessarily correspond to what is credible. Take as an example a motorcycle crossing the frame in a fixed shot. Since the engine and the exhaust pipe are located at the back of the motorcycle, its most powerful sound will come when it has already left the frame; when the motorcycle is in the center of the frame, its sound will only be beginning to build, which will seem false to the viewer. To make this sound shot believable, the microphone must be placed or moved according to how the shot should sound or, failing that, an offset must be made in the montage between the sound and the image so that the sound corresponds to what the viewer sees and expects.
In the case of sequences that include diegetic music as part of the action, such as a dialogue between two characters dancing in a nightclub, the actors must rehearse with the music so that they grasp the rhythm of the dance and the levels at which they must speak to hear each other with the music playing. The shots must then be recorded without music, so as to have clean dialogue and lay the music in continuously in the montage.
WILD SOUND
Also known as a wild track, this is sound recorded not at the moment the image is shot but after the scene, at the same filming location. It is a free sound whose function is to replace or reinforce the original sound for whatever reasons make this necessary.
For example, suppose a gunshot in close-up occurs in a scene, for which the special-effects area has achieved a visual effect of the bullet leaving the gun. The take is filmed, but its direct sound is of no use to us: the effect is purely visual, so the gun will make no sound. Once the take is finished, we record a wild sound of the gunshot in that same environment, which will maintain the continuity of the background and will be synchronized to the image later.
In this case we have made a wild sound because we do not have the original sound in the take. Another case can occur in a scene where two people drinking in a bar get into an argument that ends with one of them breaking a bottle against the table and attacking the other. The take had direct sound and was very good visually and in performance, but in that take the sound of the bottle breaking was not very impressive; so we freely record a wild sound of a bottle breaking, to reinforce the effect later in the montage. Direct sound can also present a case where a character on a balcony speaks loudly to another character down on the street. The camera shot and the audio recording are taken from the balcony while the character below shouts. The recording from above works very well, as it conveys the distance suggested visually; but suddenly, in part of the dialogue, the engine noise of a car passing on the street prevents us from understanding the text of the character below, so we make a “clean” wild sound of that text to replace it in the montage. The advantage of doing it at that moment, rather than in later dubbing, is that besides preserving the original atmosphere and distance, we have the actor “hot”, and his performance can be much closer to that of the original take. Dialogue that was unsatisfactory during filming for other reasons can also be repeated this way.
There are also wild sounds recorded to be placed as off-screen sound, such as the isolated effects characteristic of a cafeteria, carefully laid into pauses or precise places to enrich the general atmosphere with details.
CONTINUITY ENVIRONMENTS
When a sequence is recorded with direct sound and has different shots, there will always be variations in the background between takes. These may be natural variations even when recording in the same unit of time and space, such as a traffic-light change that alters the background traffic and becomes evident when the shots are edited. It is even worse when, out of some production necessity, some shots of a sequence are recorded in one location and others in another; this will leave us with different backgrounds when the shots are cut together. For this reason it is an unavoidable obligation of the sound engineer to record a continuity environment in each setting, immediately after finishing the recording of each sequence, so as to have them in the montage and be able to “patch” any background “jumps” that may arise in the edit. It is essential to have continuous, uncut environments for each sequence, matching the sound backgrounds that appear in the direct takes. These environments will also be used if it is decided to dub some text, to maintain the continuity of the background.
MUSICAL MASTERS
When a musical number is filmed, the complete piece is usually not recorded continuously in the image; there will be visual cuts. In sound, however, we do need to record a continuous master of the complete song to be used in the edit, against which the desired images are placed. If during editing we plan to return to images of the musical source, be it a Creole ensemble, a singer, an orchestra, or a band, we must record direct reference sound for each of the shots of the musical source, to facilitate synchronization and support the editor.
POST PRODUCTION
Formats and technologies change, but there are methodological procedures for working in post-production that remain constant across all formats.
This is the peak stage of the sound engineer's work, where the narrative and aesthetic approaches of the audiovisual work take concrete shape from the point of view of sound. The points we present below follow a methodological order in the sound work, not an order of importance.
In the case of video, where image and sound are already synchronized, editing begins with the direct sound. In cinema, the image must first be synchronized with the direct sound, and only then does editing begin.
In both cases, at least two tracks should be used to edit the direct sound, placing the sound of successive takes alternately on the two tracks, so that the sound takes are not cut exactly at the picture cut but run slightly longer than the image they correspond to; they can then fade in and out, allowing greater fluidity in the sound continuity. This “tail” of sound placed at the beginning and end of each take is known as an overlap.
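The overlap described above amounts to a crossfade between the tail of one take and the head of the next. The sketch below illustrates the idea with a simple linear fade over plain sample lists; the function name and the linear fade law are my own assumptions for the example, not anything the text prescribes:

```python
def overlap_crossfade(take_a, take_b, overlap):
    """Join two takes, fading take_a out while take_b fades in.

    take_a, take_b : lists of audio samples (floats)
    overlap        : number of samples the two takes share
    """
    if overlap <= 0 or overlap > min(len(take_a), len(take_b)):
        raise ValueError("overlap must fit inside both takes")
    mixed = []
    for i in range(overlap):
        fade_in = (i + 1) / (overlap + 1)       # ramps 0 -> 1 across the tail
        a = take_a[len(take_a) - overlap + i]   # outgoing tail of take A
        b = take_b[i]                           # incoming head of take B
        mixed.append(a * (1 - fade_in) + b * fade_in)
    return take_a[:-overlap] + mixed + take_b[overlap:]

# Two 6-sample "takes" joined over a 2-sample overlap:
joined = overlap_crossfade([1.0] * 6, [0.0] * 6, 2)
print(len(joined))  # 10 samples: 6 + 6 - 2
```

The point of the overlap is precisely what the fade provides: no hard cut in the background sound, so the ear glides from one take to the next.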
When editing the direct sound, it is advisable to work not only with the dialogue but with all the direct takes, whether they contain dialogue or not, in order to build the sound continuity between the different shots.
Once the edit with the direct sound is ready, we proceed to place the dialogue replacements that we recorded as wild sounds, to replace or enrich direct sounds that, as we have already explained, are defective or unusable for various artistic or technical reasons.
We also place the wild sounds corresponding to sound effects that for some reason could not be recorded in sync; in some cases they function as replacements and in others as reinforcements. We have already discussed the advantage of wild sounds for sound continuity, since they are recorded in the same settings and with the actors' dialogue and actions still “fresh”. It is at this stage that we confirm the importance of good wild sound recorded with forethought, with the sound edit in mind.
By placing the continuity environments recorded during filming for each sequence, we unify the sequences and give them sound continuity. At this stage we also build the sound atmosphere of the work, laying in the dramatic atmospheres we have recorded and beginning to give a connotative charge to the sound of the different sequences.
THE SOUND ARCHIVE
The sound archive is a fundamental tool for the sound engineer. There are collections of sound effects and ambiences on the market, sold for use in audiovisual media such as radio, film, and television. They are very useful for composing ambiences and effects of a particular character that could not be achieved on the set where filming took place, or in cases where the work is not filmed with direct sound and its entire soundtrack has to be built in the studio. This happens a great deal in television advertising, where direct sound is not recorded except for testimonials: the voices have to be dubbed and the archive used to compose the ambiences and effects. The same applies to period or futuristic films that lack an appropriate sound environment during shooting; the direct sound then serves only as a reference, the dialogue has to be dubbed, and the entire environment is rebuilt from the archive.
The sound archive is also a source of material for constructing off-screen space. As we have already mentioned, it is often necessary to build it with sound elements that enrich the story, lend verisimilitude to the visually recreated spaces, or make an aesthetic contribution to the sequence; and these are sounds that do not exist in the settings where the sequence is filmed.
DUBBING
Dubbing is a post-production technique widely used for both voices and some effects, and there are several reasons for choosing it. In some cases the noisy conditions of a location make the direct sound unusable, so the dialogue of that sequence will be dubbed by the same actors. For this it is important to have recorded the reference sound, even if it is noisy: it preserves the guideline of the actors' interpretation during the shooting of the scene, which can motivate them in the dubbing, and it also serves them as a guide for synchronization.
There are many methods and formats for dubbing, but the basic principle is the same: working in an audio post-production studio, in front of a microphone and facing a screen on which the scene to be dubbed is projected. The voice actors watch the image, listen to the reference sound through headphones, and record several takes in sync until one is approved by the director.
In the case of effects, the direct sounds are sometimes enriched through effects dubbing, to mark certain footsteps, frictions, or struggles that interest us for the particular dramatic charge they give the scenes. Furthermore, films designed to be dubbed will require dubbing not only of the voices but also of the effects that are part of the scene.
Effects dubbing, also known as foley, is a highly creative specialty, performed in front of the microphone, in sync, giving each particular sound its texture. It collaborates in the construction of the characters: footsteps, for example, can be light or heavy, movements nimble or strained. Off-screen effects must be clearly identifiable by the viewer, because they are apprehended through their sound alone; on-screen effects, by contrast, are identified with what is seen from the moment a sound is heard in sync with an image, more because of the synchronism than because of their sonic verisimilitude.
Many of the sounds dubbed in the studio do not come from an element similar to what is seen: the bubbles of a diver can come from blowing through a straw into a glass of water, later given the appropriate processing; the fight of two swordsmen may be reinforced with the metallic clashes of two projector reels. The important thing, more than realistic reproduction of a visual element, is that the sound works with the image, even if it comes from a different source.
In one film I had to build the sound of a giant wave from several recordings. First I composed the rise of the wave and its curl with a very bass-heavy wave recorded in Trujillo; then, after the central double explosion, came the run of the burst wave, composed with a wave from Paracas. Next came the crash of the wave against the rocks, which I made with the splash of a wave on rocks at “La Herradura”, and then the surf with foam, recorded at La Chira in rather high tones, because it was foam. With all this “monstrosity” of a wave created, which in the context of the film was to be the wave of the end of the world, the core of the scene was still missing: the central double explosion.
I tried many options of bursting waves, but given the magnitude of the composition none of them worked, so I used two gunpowder explosions recorded in the mines of Cerro de Pasco, in sync, and they worked perfectly within the total sound.
MUSICALIZATION
This is the stage in which the music is laid into the montage, sometimes with a utilitarian purpose, to “lift” a sequence that was weak in some respect, although this is not its primary function. Through the music, the action can be made to feel faster or slower where real time is shorter or longer than psychological time.
In the case of television series, the opening music has the function of
characterizing the spirit of the series. Series usually also have musical
spots that identify the program and are placed at the end and beginning of
each block, to exit and return from commercials.
In Peru we have an institution, APDAYC, where musical works are registered; it is the entity from which the rights to use songs are requested, in exchange for a fee it sets depending on the intended use. For advertising, the rates are higher than for film and video.
Television channels, for their part, hold annual contracts with record labels that grant them, for a fee, the rights to use their songs in productions broadcast nationally. If the programs are going to be exported, they must use original music or pay the international rights for the musical themes involved.
FINAL MIX
This is the culminating stage of sound production, where all the sound elements are given the appropriate balance of levels, equalization, reverb, compression, or other effects, depending on how the simultaneous sounds combine with one another.
It is at this stage that we define the sound character of each sequence, according to the proportions we assign to each particular sound.
The mix consists of the re-selection and dosing of the sounds on the tracks. It is a comprehensive, highly articulated, harmonious organization of sound elements. It can be understood as the orchestration of the sounds of the audiovisual work, in which hierarchies must be established, generally in favor of the dialogue, and in which crescendos, progressions, and gradations are created for each sound and for the whole. The aesthetic achievement of this process is known as sonoplasty.
It should be noted that sounds from many origins arrive on the tracks for the mix: from the shoot, from dubbing, from wild recordings, from the studio, from the archive, and so on. In this process special care must be taken in the treatment of the different sources, to achieve a balance between them, raising one sound level or another according to the expressive needs of the narration, so that they form part of a fluid, plastic, coherent whole.
There are notable mixes, like the one made by Graham V. Hartstone and Gerry Humphreys for Ridley Scott's “Blade Runner”: a torrent of electronic sounds, city rumors, and polyglot voices magnificently organized and blended.
Mixing is in itself a creative process. As the previous chapters suggest, the same set of tracks admits countless mixing options: prioritizing some elements over others, eliminating some, or giving each particular sound a different treatment.
How do chaotic moments appear in a mix, and how do we deal with them when they happen? How do we choose which sounds to prioritize when they cannot all be in the mix simultaneously? Which sounds should play the role of support and service to the most important ones?
Although these are difficult questions whose answers depend on each particular sequence, we will attempt some general approaches to resolving these chaotic moments or, better still, to preventing them.
The spoken word, for example, belongs to a structured code that the viewer must decode conceptually, which makes perfect intelligibility necessary; it should therefore have a certain priority in the mix whenever understanding what is said is considered important.
Music, for its part, is often called the “universal language”, since it is apprehended more on a sensory than an intellectual level; to that extent it can play more versatile roles of intensity within the mix than the dialogue can.
The effects that make up an atmosphere are sometimes so numerous that they become a paste in which some cover others, no detail is perceived, and the atmosphere turns into noise. Chaos in the mix occurs when more than two elements of the same species overlap, and it is sometimes better to remove some of them for the sake of the intelligibility of the whole.
We have already spoken of Walter Murch's “two and a half” principle: the human brain can simultaneously perceive sound elements of different species, but if it receives more than “two and a half” elements of the same species, they merge into a group and become individually incomprehensible.
To manage all this, the mixer has a few basic tools.
First.-
The level of each sound and the balance of the sounds against each other. What matters is not the absolute level of each isolated sound but its relationship to the levels of the sounds that precede or accompany it. The entries and exits of sounds in the mix must also be decided, whether by cut or by fade.
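Since balance is worked in relative decibels rather than absolute levels, the conversion between a dB change and an amplitude factor is worth keeping at hand. This is a generic illustration, not a procedure from the text; the function name is my own:

```python
def apply_gain_db(samples, gain_db):
    """Scale a list of audio samples by a gain expressed in decibels.

    A gain of -6 dB roughly halves the amplitude; +6 dB roughly doubles it.
    """
    factor = 10 ** (gain_db / 20)  # dB -> linear amplitude factor
    return [s * factor for s in samples]

quiet = apply_gain_db([0.8, -0.4], -6.0)   # pull a sound back in the balance
print([round(s, 3) for s in quiet])        # about half the amplitude
```

The same relative arithmetic underlies fades: a fade is simply this gain swept smoothly over time from one value to another.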
Second.-
Equalization, which gives the mixer power over the frequency spectrum of sounds: bass, mids, and treble. Certain frequencies of each sound can be cut or boosted, whether to recreate a space or to aid the intelligibility of particular sounds. For example, if instrumental music is playing and a dialogue enters, we need not lower the whole level of the music for the dialogue to be intelligible; we can instead cut the frequencies that compete with it, while those that do not compete remain at a good level. A heavy traffic background can have its bass cut and thus be perceived as cleaner and more detailed.
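Cutting the bass from a traffic background amounts to high-pass filtering. Below is an illustrative first-order high-pass in plain Python; the function name, the one-pole RC design, and the parameter values are my own assumptions for the sketch, not anything specified in the text:

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate):
    """First-order high-pass filter: attenuates rumble below cutoff_hz.

    Classic RC high-pass recurrence: y[n] = a * (y[n-1] + x[n] - x[n-1]).
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

# A constant offset (pure "rumble" at 0 Hz) decays toward silence,
# while mid and high content would pass through largely untouched.
filtered = one_pole_highpass([1.0] * 2000, cutoff_hz=120.0, sample_rate=8000)
print(abs(filtered[-1]) < 0.01)
```

In a real mix this job is done by the console's shelving or parametric EQ, but the principle is the same: remove the band that competes with the dialogue and leave the rest.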
Third.-
Effects processors, including reverberation, which can create effects on sounds, such as treating the voices of robots or unreal animals, as well as generating special atmospheres.
INTERNATIONAL BAND
There are cases in which, when the original dialogue is removed from the direct sound, it turns out to have been recorded together with certain effects or ambiences from the take, from which it is inseparable. In such cases additional tracks must be created to reproduce the effects and ambiences that will disappear along with the dialogue, and these are mixed into the international band so that they are present in versions dubbed into another language, since only the voices will be dubbed.