
SOUND IN THE STAGES OF AN AUDIOVISUAL PRODUCTION PROCESS
By Rosa María Oliart

I consider sound a privileged area within the audiovisual production process, since it participates technically and creatively in every stage of that process.

In countries with more developed film industries there are usually specializations within the sound area: the sound engineers who record on the shoot often finish their work there, while others specialize in sound editing and still others are mixers.

Historically this was not always the case; rather, the sound engineer participated in every stage of the process, changing technical functions for each one.

In our country this is still often the case in both film and television: the sound engineer responsible for the audiovisual work participates directly in all stages of the process.
We are going to define each of these stages and the creative and technical function of the sound engineer in them.

PRE-PRODUCTION

In any audiovisual production process, every approach, whether economic, technical or artistic, begins with reading and studying the script, which must be analyzed by every area that participates in the production.

Traditionally, directors began to think about the sound of a film only when it was being edited. Today, sound is understood as an integral part of the film or of any audiovisual work: it can influence the writing of the script, the shooting and the editing, and in this way it interacts with the other areas.

This is why the creative work of the sound engineer begins with the script, in the pre-production stage. Here the sound engineer must read and study the script in order to propose the sound climate of the work, design its overall sound, and create a sound concept: a set of effects, atmospheres and sound transformations that seeks to reinforce the meaning of the work.

For this purpose, dialogue with the scriptwriter, the director and the musician is very useful, since they are the right people with whom to propose and discuss the narrative contributions the sound area can offer, as well as the aesthetic qualities of the sound of the work.

We must find sound elements that can fulfill narrative, emotional or metaphorical functions within the soundtrack. For example, a film set in a port may have the sea as a permanent element, so we use this sound element dramatically, giving it various nuances even when it appears only in the background, to create connotative meaning for the different events that occur. Thus, in sequences of dramatic tension the sea can be rough or explosive; in sequences of elliptical transition it can present soft, rhythmic waves.

At this stage, with the script as a working tool, the sound engineer must also make a breakdown by sequences based on the shooting plan prepared by the production area, in order to carry out adequate technical preparation. In this way he will have the necessary equipment for the sequences programmed on each recording day. If on a given day we have only interior dialogues, the equipment and accessories we need will be those suited to that type of recording; if we have dialogue on a boat on the high seas, filmed from land, we will need wireless microphones; if we have a live rock group, we will need a console to mix several microphones, and so on.

Another essential task in pre-production is visiting the locations to get to know the sets acoustically. The acoustics of spaces have existed as long as sound itself. Since the time of the caves, man has had to face mysterious echoes and the persistence of sounds within their spaces, and since then he has learned empirically to apply certain rules to avoid the harmful influence of certain places on particular sound information. Later, with scientific progress, the phenomena of sound reflection and reverberation came to be understood.

In large churches the priests sing the psalms on a single note to lessen the resonant effect that would cause different notes to blend into one another. Similarly, experienced speakers in large halls speak slowly and avoid sudden changes in tone to improve the intelligibility of their words. In ancient times, the tempo of a musical performance was something the musician decided according to the size of the room where he was going to play: a fast tempo in a large room confuses the sounds.

With all this historical experience in mind, it is essential that the sound engineer who is going to record direct sound visit and study the acoustic characteristics of the locations in order to control them and/or get the most out of them, depending on the scene to be recorded there. We will now analyze the acoustic properties of venues from the point of view of their use as filming locations.

When we are going to record direct sound indoors, we must evaluate two basic aspects: the sound insulation of the place, to avoid external parasitic noise, and the acoustic conditions of the room itself.
Regarding sound insulation, on arriving at an interior location we must try to minimize outside noise as much as possible. Noise can enter through doors and windows, which we should close if the filming conditions allow it. Parasitic noise also enters through the architecture of the building itself; we cannot change this, but we should know that low frequencies penetrate by this route more than high ones.

In cinema there is a tendency to use real locations such as houses, offices, restaurants, etc. With prior coordination with the production team, neighboring noise can be reduced: construction machinery or tools, for example, or apparently harmless sources such as televisions or radios left on. The latter are doubly dangerous because, when they appear in the background of the direct sound, the continuity breaks will be heard in the background once the takes are edited. Equally dangerous for editing continuity are exterior settings with a strong, continuous background nearby, such as a market with loudspeakers or a highway.

This creative research necessarily runs up against limits that are not only human or technical but also physical: while a director of photography can modulate the level of light at will, the sound engineer has very limited power over the sound levels of the sets. He can hardly ask the actor to speak louder, to stop banging the cutlery against the plate, or to slam the door more violently during filming.

With respect to the acoustic properties of an interior location, when working on real sets it is neither economical nor practical to make radical architectural modifications, since the use of these settings will be limited to a few hours or days of filming.
With this in mind, we will analyze an acoustic phenomenon that carries decisive weight in sound recording: reverberation. The acoustic reverberation captured in an original direct sound is irreversible. In the sound studio, when we process the direct sound, we can add electronic reverberation, but we cannot remove the acoustic reverberation recorded in the original. At most we can moderate it with treatments such as slight compression or an equalization cut in the frequencies where the reverberation sits, but we will never separate it from the original sound.

On arriving at an indoor location, we make a percussive sound such as a hand clap. If we notice that the sound persists after the clap, we are facing a reverberation problem. This phenomenon occurs to a greater or lesser degree depending on the dimensions of the room (the larger the space, the greater the reverberation) and the type of surfaces in it, which cause the sound waves to keep bouncing, losing energy at each reflection until they die out.
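The relationship between room size, surface absorption and decay time can be quantified with Sabine's classic formula, RT60 = 0.161·V/A. The formula is standard acoustics rather than anything stated in the text, and the room dimensions and absorption coefficients below are purely illustrative assumptions:

```python
def rt60_sabine(volume_m3, surfaces):
    """Sabine estimate of reverberation time in seconds.

    volume_m3 -- room volume in cubic metres
    surfaces  -- list of (area_m2, absorption_coefficient) pairs
    """
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 6 x 5 x 3 m location (90 m3): tiled floor, hard plaster walls and ceiling
hard = rt60_sabine(90, [(30, 0.02), (30, 0.02), (66, 0.03)])
# The same room "dried" with rugs on the floor and sponge sheets on the walls
treated = rt60_sabine(90, [(30, 0.40), (30, 0.02), (66, 0.30)])
print(round(hard, 2), round(treated, 2))
```

Under these assumed coefficients the hard room rings for several seconds while the treated one decays in under half a second, which is why covering smooth surfaces matters so much.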

Smooth surfaces such as marble or tile are highly reverberant; porous surfaces such as wood or adobe, on the contrary, tend to absorb sound waves. The latter are clearly more favorable for direct sound recording, as they allow pure sounds to be captured, without reverberation. In a location with very strong reverberation, the intelligibility of the human voice is ruined: as the reverberated sonority of one word persists, it merges with the attack of the next, especially in the vowels, which carry the greatest energy.

It also happens that some sound effects, such as dice on a table, energetic footsteps or the racking of a gun, produce an unpleasant sensation: the high instantaneous energy they release is amplified by the reverberation and superimposed on subsequent sounds, disturbing the natural proportion of the recorded sound elements.

How should we proceed if we find a location with reverberant characteristics that we want to dampen? In these circumstances we must acoustically "dry" the place with absorbent materials. If the scene allows it, we introduce absorbent set materials within the frame; if not, we place them outside the frame. Among the most suitable absorbent materials are wool rugs and sheets of teknopor (expanded polystyrene) or sponge, which should be distributed over the floors, ceilings or walls of the location, trying to cover all smooth surfaces. Other resources can also get us out of trouble in a reverberant location: furniture, tapestries, books, wood, plants and even people, since increasing the number of people in a place increases absorption.

The choice of microphones and their placement is also important in a reverberant location. Directional microphones amplify reflections from surfaces within their pickup range; it is preferable to avoid them unless the place has been treated or the reverberation interests us aesthetically.

Among the available microphones, we must choose the one that achieves the best direct-to-reflected sound ratio at the same distance. Placement matters just as much: the closer the microphone is to the sound source, the greater the proportion of sound it captures directly from the source and the smaller the proportion of reverberation. If the microphone is farther from the source, it will capture the reverberation to a greater degree than the source itself, producing a distorted image of the sound reality.
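Acousticians capture this trade-off with the notion of critical distance: the radius around the source at which direct and reverberant energy are equal. A common estimate is d_c ≈ 0.057·√(Q·V/RT60) in SI units. This formula and the directivity values below are standard-acoustics assumptions, not figures from the text:

```python
import math

def critical_distance(volume_m3, rt60_s, directivity=1.0):
    """Radius in metres at which direct and reverberant energy are equal.

    Inside this radius the microphone hears mostly the source; beyond it,
    mostly the room. directivity models the source/microphone combination:
    1.0 approximates an omnidirectional pickup, higher values a more
    directional one.
    """
    return 0.057 * math.sqrt(directivity * volume_m3 / rt60_s)

# A 90 m3 location with a 1.2 s reverberation time
omni = critical_distance(90, 1.2)                    # roughly half a metre
shotgun = critical_distance(90, 1.2, directivity=5)  # roughly a metre
```

Under these assumed figures the usable radius is well under two metres either way, which is why a few tens of centimetres of microphone placement matter far more than any later studio treatment.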

Another type of location is the exterior, where the fundamental problems are places with high background noise levels, such as heavy traffic or bustling crowds. But in these natural settings we generally "see" the source of the sound background, and for that reason its presence in the soundtrack is justified.

If we work with a sequence shot over a continuous background there will be no problem; but if we work in cuts, with shots and reverse shots, we should try to use wireless microphones that separate the main sound source, such as the dialogue, from the sound background as much as possible, to avoid continuity breaks.

Another problem that frequently occurs outdoors is wind. For this there are foam covers, zeppelins and other wind protectors that are essential for exterior recording. In windy settings it is important to study the wind's directionality, since it tends to carry the sound waves away, forcing us to choose microphone positions that take this factor into account.

Another important task in pre-production, once the casting has been defined, is to take part in the rehearsals the director holds with the actors, in order to get to know the types of voices and the diction of the actors with whom we are going to work. Some actors tend to speak explosively, which can cause "pop" problems, especially on letters such as "p" or "f"; others may have rather introverted diction.

Depending on these characteristics of the actors' voices and of the interpretations assigned to the characters, they must be handled in the sound recording in a way that suits the purposes of the story, which the sound engineer must study before filming. It is important to detect any flaw in vocal technique that could affect intelligibility: weak pronunciation of the vowels, which robs the voice of projection; incorrect articulation of consonants; words run too closely together; little use of pauses; a drop in intensity at the end of sentences, and so on. All these details must be worked out with the actors, and some of them resolved during recording.

These deformations may be insignificant in ordinary conversation, where the psychoacoustic mechanisms of hearing support understanding. However, when this same information suffers the inevitable losses of recording, editing, processing and mixing, intelligibility for the viewer may be damaged.

PRODUCTION

In the production stage we face the shoot itself, where all the images and sounds of the audiovisual work will be captured. At this stage the sound engineer makes various types of sound recordings. With good pre-production work, the acoustic problems of the sets will already be resolved, and he will only have to deal with unforeseen events, generally parasitic noises that enter the location occasionally, such as nearby construction or a party in the surrounding area.

As for the technical equipment, we will likewise be prepared according to the technical breakdown of the script and the shooting plans. In the documentary genre, and mainly in reportage, the settings cannot be anticipated in as much detail as in fiction. We will know in general whether we will record indoors or outdoors, in the mountains or in the center of a city, but complex sound situations become challenges to solve on the spot, and a good sound engineer must be prepared for these complex and unpredictable acoustic situations.

DIRECT SOUND

Direct sound is the sound recorded simultaneously with the image, at the very moment the scene is filmed, in synchrony. In video, the direct sound is recorded on the same tape as the image, on one or both of the available audio channels, either directly, with one or two microphones into the camera or VTR, or by line if we use a mixer to work with more than two microphones; image and sound are then automatically synchronized. In cinema, direct sound is recorded on a DAT, using the clapperboard or time code to maintain synchronization between camera and DAT so that image and sound can be synchronized before editing. Direct sound can include any type of sound element, depending on the scene to be recorded: dialogues, effects, ambiences or music.

It is during the recording of direct sound, during filming, that the relationship between the sound plane and the visual plane must be established through the use of microphones; the distance effect corresponding to the one proposed by the image must be constructed. It is important in direct sound to respect the sound perspective proposed by the camera, but this is not always possible given the conditions of the frame or of the scene itself, so that perspective must sometimes be recreated in post-production. For example, in a very wide shot where the characters must speak intimately, in a closer sound plane, a boom, however directional, could not reach past the limits of the frame, and wireless microphones would produce an implausible close-up. The scene should therefore be recorded with the option that offers the most possibilities in post-production, in this case the wireless microphones, which can be given some effect in the studio or, at the least, serve as the best reference for a dubbing.

There is also a creative use of not respecting the visual perspective in the sound, as when Orson Welles in "Othello" opposes wide shots to sound close-ups, creating a feeling of intimacy in the sound even though the characters are seen from a distance.

The sound engineer must know the lenses the camera is working with, in order to calculate the framing and the space available for moving the microphone on the boom. Nowadays, with the convenience of video, the sound engineer watches the rehearsals to learn the limits of the frame.

Another important factor in the placement and movement of the microphone on the boom is the design of the lighting and the light sources, indoors and outdoors, since when the boom rises it can cast shadows inside the frame. For this reason it is advisable to do the sound tests once the lighting has been set and switched on.

What is required of a sound engineer during filming is the perfect intelligibility of the privileged elements of the sound ensemble, such as the dialogues, and a good sonic rendering of the space that surrounds them.

To achieve this, bear in mind that the direct sound comes straight from the source, but there is also the reflected sound, arriving from reflections of that same direct sound off walls and solid bodies, which must be watched during recording; and the ambient sound, part of the direct recording, which arrives diffusely from different points and must also be taken into account. For ideal intelligibility, priority must be given to the direct sound, filtering out the reflected sound as much as possible. The latter, however, is essential to give the sensation of a living sound perspective. The solution is to respect the balance between direct and reflected sound through careful microphone placement, because if the reflected sound is reduced excessively the result is the dry sensation of a studio, and the direct sound loses credibility.

A difficulty that usually appears when recording direct sound is that, in sound, what is real does not necessarily correspond to what is credible. Take as an example a motorcycle crossing the frame in a fixed shot. Since the engine and exhaust pipe are located at the rear of the motorcycle, the most powerful sound arrives when the motorcycle has already left the frame; when the motorcycle is at the center of the frame, its sound is only beginning to build, which would seem false to the viewer. To make this sound shot believable, the microphone must be placed or moved according to how the shot should sound or, failing that, the sound and the image must be offset in the edit so that the sound corresponds to what the viewer expects to see.

As for level regulation when recording direct sound, it should be set during rehearsals, taking into account the highest and lowest peaks of the shot, and then left fixed at one point so that there are no background changes from shot to shot. This holds both for wireless microphones, which sit at a fixed distance from the sound source, and for the boom, where any variation must be achieved by moving the microphone, never by riding the recording input level during the take. When shooting with direct sound in a loud place, such as a pool hall, a restaurant or a gym, it is advisable to silence the audible elements so that the background is even in all shots and we avoid background jumps in the edit, which would break the sound continuity. One of the most controlled ways to achieve this in fiction is to record the shot asking the extras to mime the movements of the action, such as playing pool, eating or talking, in "mute", and then, once the sequence is finished, to record a continuous ambience of all the corresponding actions, to be placed on another track in the edit.
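The fixed-level rule can be sketched numerically: measure the loudest rehearsal peak, choose one gain that leaves safe headroom, and never touch it during the take. The -6 dBFS target and the sample values below are illustrative assumptions, not prescriptions from the text:

```python
import math

def fixed_gain(rehearsal_peak, target_dbfs=-6.0):
    """Linear gain that places the loudest rehearsal peak at target_dbfs.

    rehearsal_peak -- highest absolute sample value heard in rehearsal,
                      on a 0..1 full-scale range.
    Applying this single gain to the whole take keeps the background
    level consistent from shot to shot, instead of riding the fader.
    """
    peak_dbfs = 20 * math.log10(rehearsal_peak)
    return 10 ** ((target_dbfs - peak_dbfs) / 20)

g = fixed_gain(0.9)   # the loudest shout nearly clipped in rehearsal
new_peak = 0.9 * g    # that peak now lands at -6 dBFS, about 0.5 full scale
```

Everything quieter than the rehearsal peak simply sits lower under the same gain, so shot-to-shot backgrounds stay matched.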

In sequences that include diegetic music as part of the action, such as a dialogue between two characters dancing in a nightclub, the actors must rehearse with the music so that they can grasp the rhythm of the dance and the levels at which they must speak to hear each other over it. The shots are then recorded without music, to obtain clean dialogue, and the music is laid in continuously in the edit.

There is a type of sound recorded during filming called reference sound. It is not a sound that will be used, like the direct sound; it merely takes note, as a sound sketchpad, of the words spoken and of the actor's dramatic intentions and diction, so that they serve as a guideline for a later dubbing. This sound does not require any particular quality, but a good reference sound, one that respects the sound plane relative to the visual plane, can be very useful for dubbing, since it gives the sound engineer in charge of post-production a very close idea of the sound of the setting and of the visual correspondence of the sound planes he will have to reproduce in the dubbing. This matters all the more if the setting reappears at another moment of the plot and its sound has to be respected.

WILD SOUND.

Known as a wild track, this is sound recorded not at the moment the image is shot but after the scene, though still at the filming location. It is a free sound whose function is to replace or reinforce the original sound for whatever reasons make that necessary.

For example, suppose a gunshot occurs in a scene, for which the special-effects area has achieved the visual effect of the bullet leaving the gun. The take is filmed, but it is useless for direct sound, since the effect is purely visual and the gun makes no sound. Once the take is finished, we record a wild sound of the gunshot in that same environment, which will maintain the continuity of the background and will be synchronized to the image later.
In this case we have made a wild sound because the original sound does not exist in the take. Another case: in a scene where two people are drinking in a bar, an argument breaks out that ends with one of them breaking a bottle against the table and attacking the other. The take had direct sound and was very good visually and in performance, but in that particular take the sound of the bottle breaking was not very impressive, so we freely record a wild sound of a bottle breaking to reinforce the effect later in the edit. Yet another case: a character on a balcony speaks loudly with another character down on the street. The camera shot and the sound recording are taken from the balcony as the character below shouts. The recording from above serves us very well, since it conveys the distance suggested visually; but at one point in the dialogue the engine noise of a passing car prevents us from understanding the text of the character below, so we record a "clean" wild sound of that text to replace it in the edit. The advantage of doing this on the spot rather than in a later dubbing is that, besides preserving the original atmosphere and distance, we have the actor still "hot", so his performance can be much closer to that of the original take. Dialogues that for various reasons were unsatisfactory during filming can also be repeated this way.

There are also wild sounds recorded with the purpose of being placed off-screen, such as the isolated effects characteristic of a cafeteria, to be carefully laid into pauses or precise spots, enriching the general atmosphere with details.

CONTINUITY ENVIRONMENTS

When a sequence is recorded with direct sound across different shots, there will always be variations in the background between takes. These may be natural variations even when recorded in the same unit of time and space, such as a traffic-light change that alters the traffic background and becomes evident when the shots are edited. Worse still are cases where, for some production reason, some shots are recorded in one location and others in another even though they belong to the same sequence, leaving us with different backgrounds when the shots are cut together. For this reason it is an unavoidable obligation of the sound engineer to record a continuity environment in each setting, immediately after finishing the recording of each sequence, to have on hand in the edit for "patching" any background "jumps" that may arise. It is essential to have continuous, uncut environments for each sequence that match the sound backgrounds appearing in the direct sound. These environments will also be used if it is decided to dub some text, to maintain the continuity of the background.

MUSICAL MASTERS

When recording live music, a complete master of the musical performance is often filmed, especially in video. In cinema, for economic reasons, the musical shots are planned and no complete master of the song is filmed. Even if a full master of the piece is not going to be recorded on image, and there will instead be visual cuts, in sound we do need to record a continuous master of the complete song to be used in the edit, over which the desired images are placed. If during editing we plan to return to images of the musical source, be it a criollo ensemble, a singer, an orchestra or a band, we must record direct reference sound for each shot of the musical source, to facilitate synchronization and support the editor.

ATMOSPHERES AND DRAMATIC EFFECTS

In the pre-production stage, when we designed the overall sound of the work, a series of possible sounds and/or atmospheres emerged that we wanted to integrate into the composition of the soundtrack: not exactly the natural ambiences of the settings where the sequences are filmed, but sound atmospheres capable of generating a metaphorical effect on the sequences. During the filming stage we must take advantage of the sound possibilities the locations and their surroundings offer, to record a stock of sounds that may interest us for the edit. If we are filming on an island, for example, we will find various places where the sound of the sea has particular nuances: a calm area by the pier, an area with cliffs, a cove or a cave. This gives us several variants of the sea atmosphere to use dramatically in the composition of the soundtrack. We can also collect effects such as different kinds of birds or the bellowing of sea lions, which, incorporated into the soundtrack, can connotatively charge the sequences.

During filming we also have to think about the later construction of the off-screen space. To do this we must make the most of the locations, recording in them specific effects or atmospheres that can denotatively or connotatively enrich that off-screen space.

POST-PRODUCTION

Just as in pre-production the sound engineer stood before a script, and in the production stage before a shoot, stages in which he had specific creative and technical functions, in post-production his work proceeds from the edit: from the work finished as a narration in images, and it is facing this that his post-production work begins. Whatever the technical format in which the editing of an audiovisual product is carried out, certain methodological procedures of post-production work remain constant across all formats.

This is the peak stage of the sound engineer's work, where the narrative and aesthetic approaches of the audiovisual work are given concrete form from the point of view of sound. The points presented below follow a methodological order of sound work, not an order of importance.

DIRECT SOUND EDITING

In video, where image and sound are already synchronized, editing begins with the direct sound. In cinema, the image must first be synchronized with the direct sound, and only then does editing begin.
In both cases, at least two tracks should be used to edit the direct sound, placing the sound of each take alternately on the two tracks, so that the sound takes are not cut exactly at the picture cut but run slightly longer than the image to which they correspond; they can then fade in and out, allowing greater fluidity in the sound continuity. This "tail" of sound placed at the beginning and end of each take is known as an overlap. When editing the direct sound it is advisable to work not only with the dialogues but with all the direct takes, whether they contain dialogue or not, in order to build the sound continuity between the different shots.
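The overlap described above is what makes a crossfade possible at the cut. A minimal sketch of an equal-power crossfade over the overlap region, with made-up sample values (the fade curve is an assumption; editors choose it case by case):

```python
import math

def crossfade(tail_a, head_b):
    """Equal-power crossfade across the overlap between two takes.

    tail_a -- samples of take A running past the picture cut (its "tail")
    head_b -- samples of take B starting before its own picture cut
    Both lists must have the same length; returns the blended overlap.
    """
    n = len(tail_a)
    out = []
    for i in range(n):
        t = i / max(n - 1, 1)
        fade_out = math.cos(t * math.pi / 2)  # take A fades out
        fade_in = math.sin(t * math.pi / 2)   # take B fades in
        out.append(tail_a[i] * fade_out + head_b[i] * fade_in)
    return out

# Two takes whose room tones sit at slightly different levels
mixed = crossfade([0.8] * 5, [0.5] * 5)
```

Because each take carries its tail past the cut, the fade hides the background jump that a hard cut would expose.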

WILD SOUNDS EDITING

Once the edit of the direct sound is ready, we proceed to lay in the dialogue replacements we recorded as wild sounds, to replace or enrich direct sounds which, as already explained, are defective or unusable for artistic or technical reasons.

We also place the wild sounds corresponding to sound effects that for some reason could not be recorded synchronously; in some cases they act as replacements and in others as reinforcements. We have already discussed the sound continuity wild sounds provide, being recorded in the same settings and with the actors' dialogue and actions still "fresh". It is at this stage that we confirm the value of a good wild sound recorded with forethought, with the sound edit in mind.

ENVIRONMENTS AND EFFECTS TRACK

By placing the continuity environments recorded during filming for each sequence, we even out the sequences, giving them sound continuity. Furthermore, at this stage we build the sound atmosphere of the work, also laying in the dramatic atmospheres we have recorded and beginning to give a connotative charge to the sound of the different sequences.

At this stage, since the dialogues are already in place, we begin to lay sound details into the dialogue pauses, the backgrounds or the cut points, providing aesthetic touches that can refine the grammar between the sequences, and we must begin the construction of the off-screen space. The environments are usually edited with an overlap, or sound tail, starting before the scene that carries the environment begins and ending after it is over. This is done so that the transition from one environment to another, between scene and scene, can be decided in the final mix, by cut or by fade as appropriate.

SOUND ARCHIVE

The sound archive is a fundamental tool for the sound engineer. There are collections of sound effects and environments on the market, sold for use in audiovisual media such as radio, film and television, which are very useful for composing environments and effects of a particular character that could not be captured on the set where filming took place, or in cases where the work is not filmed with direct sound and its entire soundtrack has to be built in the studio. This happens frequently in television advertising, where direct sound is not recorded except for testimonial pieces: the voices have to be dubbed and the archive used to compose the environments and effects. It also happens with period or futuristic films whose settings lack an appropriate sound environment during filming; the direct sound then serves only as a reference, the dialogues have to be dubbed and the entire environment recomposed from the archive.

The sound archive is also a source for the construction of the off-screen
space. As we have already mentioned, it is often necessary to build it
with sound elements that enrich the story, lend verisimilitude to the
visually recreated spaces or make an aesthetic contribution to the
sequence; and these are sounds that do not exist in the settings where
the sequence is filmed.

DUBBING

Dubbing is a post-production technique widely used for both voices and
some effects. There are several reasons to opt for dubbing. In some
cases, the noisy conditions of the locations render the direct sound
unusable, so the dialogues of that sequence are dubbed by the same
actors. For this purpose it is important to have recorded the reference
sound, even if it is noisy, so that the actors have a guideline to the
interpretation they gave while shooting the scene, which can motivate
them in the dubbing; it also serves them as a guide for synchronization.

Many directors give artistic reasons for choosing to dub dialogues, since
the technique lets them recompose the combination of images and
sounds to their liking. Others give practical reasons: it allows them to
intervene during filming without respecting the silence that direct
sound requires.

There are cases in which it is necessary to dub from one language to
another; this is done trying to respect the synchronization and the
interpretive intentions of the original sound, again making use of the
reference sound.

There are many methods and formats for dubbing, but the basic principle
is the same: working in an audio post-production studio, in front of a
microphone and facing a screen on which the scene to be dubbed is
projected. The voice actors watch the image, listen to the reference
sound through headphones and record several takes in sync until one is
approved by the director.

The result of dubbing is a close-up recording, so the sound engineer
must intervene to generate sounds that correspond to both the visual
plane and the atmosphere being recreated. This is achieved with certain
tricks in the placement of the microphones with respect to the voice
source, sometimes using sound reflected off solid surfaces of various
kinds brought into the studio, and by making use of equalization and
other audio processors.

In the case of effects, the direct sounds are sometimes enriched with
effects dubbing, to emphasize certain footsteps, frictions or struggles
that may interest us in giving a particular dramatic charge to the
scenes. Furthermore, films designed to be dubbed will require dubbing
not only of the voices but also of the effects that are part of the scene.

Effects dubbing, also known as foley, is a highly creative specialty,
performed in front of the microphone, in synchronization, giving each
particular sound its texture. It collaborates in the construction of the
characters: footsteps, for example, can be light or heavy, movements
gentle or forced, and so on. Off-screen effects must be clearly
identifiable by the viewer, since they are apprehended only through
their sound; on-screen effects, on the other hand, from the moment a
sound is heard in sync with an image, are identified with what is seen,
more because of the synchronism than because of their sonic
verisimilitude.

We have already discussed in a previous chapter how the viewer
receives sound information synchronized to an image as if the sound
emanated from the image, thanks to the sync effect, which corresponds
to our habitual way of associating sounds with the things they come
from. Thanks to this, effects dubbing offers great scope for sound
invention.

Many of the sounds dubbed in the studio do not come from an element
similar to what is seen: the bubbles of an underwater diver can come
from blowing through a straw into a glass of water, with the appropriate
effects applied afterwards, and the duel of two swordsmen may be
reinforced with the metallic blows of two projector reels.
What matters, more than reproducing the real sound of a visual element,
is that the sound works with the image, even if it comes from a different
source.

An interesting case I worked on was the sound construction of the wave
in the film "At Midnight and a Half" (Marité Ugaz and Mariana Rondón,
Peru-Venezuela, 1998), where the magnitude of a wave, filmed moreover
in slow motion, made it necessary to compose its sound from many
sounds.

First I composed the rise and curl of the wave with a wave recorded in
Trujillo, with a lot of bass; then, after the central double explosion, came
the run of the broken wave, composed with a wave from Paracas. Then
came the crash of the wave against the rocks, which I made with a wave
splashing on rocks at "La Herradura", followed by the foamy surf
recorded at La Chira, in rather high tones because it was foam.

Given all this "monstrosity" of the created wave, which in the context of
the film was meant to be the wave of the end of the world, the core of
the scene was still missing: the central double explosion.

I tried many options of breaking waves, but given the magnitude of the
composition none of them worked for me, so I placed in sync two
gunpowder explosions recorded in the mines of Cerro de Pasco, and they
worked perfectly within the overall sound.

MUSICALIZATION

The musicalization stage takes place in post-production, but it is work
that has already been designed and advanced since pre-production.

This is the stage in which the music is placed in the montage, sometimes
with a utilitarian purpose, to "lift" a sequence that turned out weak in
some respect, although this is not its primary function.
Through musicalization one can speed up or slow down an action whose
real time is shorter or longer than its psychological time.

What the musicalization stage generates above all is the narrative and
grammatical structuring that the music exerts through its intermittent
appearance throughout the work. By placing the music, the dramatic
weight of the key moments of the story or documentary is marked, and
the role of continuity or independence of the sequences is defined:
placing, changing or removing music can lend dramatic cohesion to
situations occurring in different spaces or times, or make the sequences
independent of one another.

When "placing" the music, the decision is sometimes made to anticipate
the music of the next scene over the end of the previous one, or to let
the music of the previous scene carry over into the beginning of the
next. This allows various mixing options, with the music entering or
exiting by fade or by cut. It is very common for films and documentaries
to have opening music that identifies the character of the work, as well
as closing music that acquires a summarizing value.

In television series, the opening music characterizes the spirit of the
series. Series usually also have short musical stings that identify the
program, placed at the end and beginning of each block, to lead out to
and back from the commercials.

From my point of view, the ideal in musicalization is the composition of
an original theme or themes for the audiovisual work, since these give
the work a particular "personality". It also happens that pre-existing
themes are used in audiovisual products and become somehow
associated with the work. In these cases, the rights to use the theme
must be paid.

In Peru we have an institution, APDAYC, where musical works are
registered; it is the entity to which one applies for the rights to use
songs, in exchange for a price it establishes according to the intended
use. For advertising, the rates are higher than for film and video.

Television channels, for their part, have annual contracts with record
labels that grant them, in exchange for a fee, the rights to use their
songs in productions broadcast nationally. If television programs are to
be exported, they must have original music or pay the international
rights for the musical themes used.

FINAL MIX

This is the culminating stage of sound production, where all the sound
elements are given the appropriate level balance, equalization, reverb,
compression or other effects, depending on how the simultaneous
sounds combine with one another.
It is at this stage that we define the sound character of each sequence,
according to the proportions we assign to each particular sound.

The mix consists of the re-selection and dosage of the sounds laid out
in the bands. It is a comprehensive, highly articulated and harmonious
organization of sound elements. It could be understood as the
orchestration of the sounds of the audiovisual work, in which hierarchies
must be established, generally in favor of the dialogues, and in which
crescendos, progressions and gradations are created for each sound and
for the whole. The aesthetic achievement of this process is known as
sonoplasty.
It should be noted that the sounds arriving in the bands for the mix come
from various sources: from the shoot, from dubbing, from wild
recordings, from the studio, from the archive, and so on. Special care
must be taken in treating these different sources so as to achieve a
balance between them, raising or lowering one sound or another
according to the expressive needs of the narration, so that they form
part of a fluid, plastic and coherent whole.

There are notable mixes, like the one made by Graham V. Hartstone and
Gerry Humphreys for Ridley Scott's "Blade Runner": a torrent of
electronic sounds, city rumors and polyglot voices, magnificently
organized and blended.

Mixing is in itself a creative process. As the previous chapters make
clear, the same set of soundtracks offers infinite mixing options:
prioritizing some elements over others, eliminating some, or giving a
different effect to each particular sound.

In countries with more developed cinematography there is a specialized
mixer, who receives the assembled soundtracks and applies a mixing
criterion that is not necessarily the same as the one the sound editor
used during the edit. As a result, during the final mix of almost any film
or audiovisual product there are moments when the balance between
dialogue, music, atmospheres and effects suddenly, and sometimes
unexpectedly, turns into chaos, and even the most experienced mixers
and directors can be overwhelmed by the number of options.

How do these moments arise, and how do we deal with them when they
happen?

How do we choose which sounds we should prioritize when they can't all
be in the mix simultaneously?

Which sounds should play the supporting role, serving the most
important ones?

What sound, if any, should be removed from the mix?

Although these are difficult questions and the answers will depend on
each particular sequence, let us attempt some general approaches to
resolving these chaotic moments or, better yet, to preventing them.

As we saw when classifying sounds, although there is no hierarchical
relationship between them, there is a differentiation by species: we have
words, music, atmospheres, effects and silence, each of which has its
own way of being apprehended by the viewer.

The word, for example, belongs to a structured code that the viewer
must decode conceptually, which makes perfect intelligibility necessary;
it should therefore be given a certain priority in the mix whenever it is
considered important to understand what is being said.

Music, for its part, is often called the "universal language", since it is
apprehended more on a sensitive than on an intellectual level; to that
extent it can play more versatile roles of intensity within the mix than
the dialogues.

The effects that make up an atmosphere are sometimes so numerous
that they become a paste in which some cover the others, no detail is
perceived and the atmosphere turns into noise. Chaos in the mix occurs
when more than two elements of the same species overlap; sometimes it
is better to remove some of them for the sake of the intelligibility of the
whole.

We have already discussed Walter Murch's "two and a half" principle:
the human brain can simultaneously perceive sound elements of
different species, but if it receives more than "two and a half" elements
of the same species, they merge into a group and become individually
incomprehensible.

Just as a well-balanced painting has an interesting and proportionate
color palette, the sound of an audiovisual work will feel balanced and
interesting if it is built from a well-proportioned palette of our spectrum
of sound species.
A practical consideration for the final mix is that when certain sounds
are combined simultaneously, some of them may be perceived as
transformed. Some sounds overlap transparently and effectively, while
others tend to interfere destructively with one another, blocking the
clarity of the mix.

Four fundamental points are worked on in the mix:

First.-

The level of each sound and the balance of the sounds against one
another. What matters is not the absolute level of each sound in
isolation, but its relationship to the level of the sounds that precede or
accompany it. In addition, the entries and exits of sounds into and out of
the mix must be decided, whether by cut or by fade.
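Since levels are relative, mixers reason in decibels rather than in absolute amplitudes. A small Python sketch of the conversion and of rebalancing one track against the others (the function names are illustrative, not a real mixing API):

```python
def db_to_gain(db):
    """Convert a relative level change in decibels to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def rebalance(samples, db):
    """Raise or lower one track by `db` decibels relative to the others."""
    gain = db_to_gain(db)
    return [s * gain for s in samples]

# Lowering a track by 6 dB roughly halves its amplitude:
rebalance([1.0, -1.0], -6.0)
```

The logarithmic scale mirrors how we hear: equal steps in decibels are perceived as roughly equal steps in loudness.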

Second.-

Equalization, which gives the mixer power over the frequency spectrum
of sounds: the bass, mids and treble. Certain frequencies of each sound
can be cut or boosted, whether to recreate a space or to preserve the
intelligibility of particular sounds. For example, if instrumental music is
playing and a dialogue enters, we do not have to lower the entire level
of the music for the dialogue to be intelligible; we can instead cut the
frequencies that compete with it, while those that do not compete
remain at a good level. Heavy background traffic can have its bass cut
and thus be perceived as cleaner and more detailed.
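The idea of carving out competing frequencies rather than lowering a whole track can be illustrated with the simplest possible filter: a first-order high-pass that strips low rumble, such as the bass of the traffic mentioned above, while largely leaving the rest intact. This is a pure-Python sketch with invented names; real mixes use parametric equalizers with far more precise control:

```python
def highpass(samples, alpha):
    """First-order high-pass filter: attenuates the low end of a track.

    alpha is between 0 and 1; values close to 1 keep more of the signal.
    """
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)   # passes changes, rejects constants
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant (zero-frequency) input, like steady rumble, decays toward silence:
highpass([1.0] * 8, 0.5)
```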

Third.-

Reverb serves to give the sound an environmental space, whether
realistic, as when recreating a large interior, or fantastic, as with the
sounds that come from a character's dreams or memories, which are
treated with echo. Discreetly dosed, reverb lends roundness and
naturalness to sounds made in the studio, which are generally very dry.
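Artificial reverb can be reduced, in its most elementary form, to a feedback delay: each sample is repeated, ever more quietly, after a fixed interval. A minimal Python sketch of a single comb filter (real reverb units combine many such delays plus filtering; the names here are my own):

```python
def comb_reverb(dry, delay_samples, feedback):
    """Feedback comb filter, the simplest building block of artificial reverb.

    Each output sample is the dry signal plus a decaying echo of the
    output `delay_samples` earlier. Keep `feedback` well below 1.0,
    dosing the effect discreetly.
    """
    wet = []
    for n, x in enumerate(dry):
        echo = wet[n - delay_samples] if n >= delay_samples else 0.0
        wet.append(x + feedback * echo)
    return wet

# A single dry click acquires a decaying tail of echoes:
comb_reverb([1.0, 0.0, 0.0, 0.0, 0.0], delay_samples=2, feedback=0.5)
```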

Fourth.-

Processors that can create effects on sounds, such as treating the
voices of robots or unreal animals, or generating special atmospheres.

INTERNATIONAL BAND

When audiovisual products are to be exported and dubbed into other
languages, a particular type of mix must be made, known as the
international band. It consists of making the mix in two separate but
synchronous units: in one, only the voices are mixed; in the other, all
the remaining sounds, the effects, environments and music that carry no
dialogue. In this way, when the purchasing country wants to dub into its
own language, it discards the voice mix, using it only as a reference for
the dubbing, while the mix of effects, environments and music remains
intact, ready to be coupled to the new version in the other language.
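The two synchronous units can be pictured as stem mixing: the full domestic mix is the sum of a dialogue stem and a music-and-effects stem, and the buyer keeps only the latter. A minimal Python sketch with invented sample values:

```python
def mix_stems(*stems):
    """Sum synchronous stems sample by sample into a single band."""
    length = max(len(s) for s in stems)
    return [sum(s[i] for s in stems if i < len(s)) for i in range(length)]

dialogue = [0.5, 0.5, 0.0]     # voices only
music    = [0.25, 0.25, 0.25]
effects  = [0.0, 0.25, 0.25]

full_mix = mix_stems(dialogue, music, effects)  # domestic version
m_and_e  = mix_stems(music, effects)            # the international band

# A purchasing country records its own dubbed dialogue stem and mixes it
# against the untouched m_and_e, leaving effects, environments and music intact.
```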

There are cases in which the original dialogues were recorded together
with some effects or environments of the direct sound, making them
inseparable from it. In that case, an additional band must be created to
reproduce the effects and environments that will disappear along with
the direct sound, and mixed into the international band so that they are
present in the versions dubbed into other languages, since only the
voices will be dubbed.
