
LAB SEVEN

REVIEW
Know how to use the plug-ins in both your audio editor and
multitrack editor.
Start to predict how a particular process might affect a
particular sound, and see if you are right.

AMPLITUDE CHANGE AS A PROCESS


Altering a sound's amplitude envelope is a fundamental way to
change its recognizability. In the classic analogue tape studio,
envelope change was limited to longer sustained sounds, since it
involved splicing the tape at different angles:

Using tape splicing to change a sound's envelope.


In the example above, the aural result would be a gradual
increase in the original sound's amplitude as the tape recorder's
playback head moved from the blank tape across the splice to the
original sound. Different angles of splicing would result in
different attacks. In all cases, the results would be approximate,
leaving very limited control over attack duration or shape.
A second method was to use the mixer, raising or lowering the
sound's level as it was played. Although the mixer gave more
control over the envelope shape, only slow envelope changes on
longer sounds were possible. This method worked well for fading in
or fading out material, but not for truly altering a sound's envelope.
Most audio editors give you the ability to alter a sound's
amplitude. Normalizing, which we discussed in the previous lab, is
one such process. Changing a sound's gain (another term for
amplitude) by either a ratio or by decibels is also fairly standard.
Changing a sound's gain over time constitutes creating an
envelope, which is a dynamically varying gain change.
Digital Amplitude Change
In digital audio, changing amplitude is a simple matter of altering
the sample values, most often by multiplying. For example, to
double a sound's amplitude, multiply every sample by two; to
halve a sound's amplitude, multiply every sample by .5 (or divide
by two). To fade in a sound over one second, change the number
by which you are multiplying each sample over the course of one
second (or 44,100 samples) from zero (no sound, completely faded
out) to one (the sound's current full amplitude).
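These sample-by-sample operations are easy to sketch. Below is a minimal illustration in Python with NumPy; the 440 Hz test tone and the variable names are mine, not part of the lab:

```python
import numpy as np

SR = 44100  # samples per second

# A one-second 440 Hz test tone standing in for any recorded sound.
t = np.arange(SR) / SR
sound = np.sin(2 * np.pi * 440 * t)

louder = sound * 2.0    # double the amplitude
quieter = sound * 0.5   # halve the amplitude

# A one-second fade-in: the multiplier ramps from 0 (silence)
# to 1 (full amplitude) across all 44,100 samples.
fade = np.linspace(0.0, 1.0, SR)
faded = sound * fade
```

Real editors apply this same arithmetic, usually with safeguards against clipping when a multiplier pushes sample values past the maximum the file format can store.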
Like all other processes in audio editors, amplitude change is
destructive: it changes the actual sample values stored in the audio
file. In most cases, this change is beneficial; however, amplitude
changes often depend upon a specific situation, since a given sound's
amplitude should be relative to the other sounds around it. Creating
ten different versions of an audio file (one very quiet, one a little
louder, one louder still, etc.) is not a very efficient way of working,
particularly if you don't know all the possible ways the sound will be used.
Multitrack audio editors, such as ProTools, Audacity, and
Audition, can create amplitude change in real time (that is, while it is
playing). The user can specify a particular amplitude for a region
(even a dynamically changing one), and the program will multiply
the sample data by that amplitude as it is playing. As a result,
creating amplitude envelopes within multitrack programs is a
dynamic process (one that we can change while we are composing)
as opposed to the static process within the audio editor (one that
we cannot change while composing).
Balancing Levels
Approaching the topic from another point of view, one can
consider how multitrack programs are modeled after the classic
analogue studio. Once a composer had his or her sounds on the
various tracks of the tape (perhaps eight or sixteen separate tracks
of audio), he or she would mix these sounds through the mixer to a
two-track stereo master. During the mixing process, the composer
would be continually changing the levels of the various tracks
using the mixer faders, balancing the sounds while listening, so
that the foreground material was louder than the background
material, and important sound events could be heard, and so forth.
This was a very active process, involving as many fingers, elbows,
and forearms as the composer could manipulate successfully.
Unfortunately, such an active process is not available in
software without additional hardware (MIDI-controlled sliders, for
example). Like most computer applications, the current computer
interface reduces all interactions with the program to a mouse,
which allows dynamic change to only one parameter at a time.
Obviously, this is a major limitation.
Automation
By the 1980s, improvements in tape recorder technology made it
possible to reduce components, such as the tape heads, to such a
point that magnetic tape could successfully be divided into eight,
sixteen, and even twenty-four tracks. Such an increase allowed for
greater complexity through multiple layering; however, it also
increased the complexity of the final mixing process.
Because digital technology was becoming available at the
same time, some studios (particularly high-end commercial
recording studios) were able to exercise digital control over their
analogue mixers, recording the fader movements over the course of
a song and having the computer control tiny motors in these faders
during playback. This process was known as automated mixing,
and it was extremely expensive. A twenty-four track analogue
mixing board with motorized faders could cost between half a
million and two million dollars.
The concept of automated mixing has been copied in
multitrack programs. The process of recording fader movements
will be discussed in detail in Lab Nine. We will now discuss
manually setting fader movements.

BOUNCING
Bouncing is a technique borrowed from the analogue tape studio,
in which several tracks are mixed together and recorded onto
another track. Bouncing allows for more tracks and layers to be
recorded than are physically available on the tape. Bouncing
allowed George Martin to create multiple layers in the Beatles'
famous Sergeant Pepper's Lonely Hearts Club Band LP on a four-track
tape recorder. This recording contained two guitars, bass, drums,
multiple vocal tracks, and even a symphony orchestra, and it was
recorded in typical studio fashion, one separate track at a time
(rather than recording everything live, at once).

Bouncing tape tracks. Four discrete tape tracks can yield many more separate layers.

Unfortunately, there were two major limitations to bouncing
in the analogue tape studio. First, each time a track was recorded
on the tape, it added noise, an unavoidable consequence of the
limits of magnetic tape. Usually three bounces were possible
before the signal-to-noise ratio made the result unusable. This limit
is eliminated in digital audio because the bouncing process is in fact
mixing, which is entirely digital, so no extra noise is introduced.
The second limitation to bouncing is the loss of control over
the individual tracks once they are mixed. For example, using the
above diagram, if the material a, formerly on track one, was not
loud enough after the second bounce, there was nothing that could
be done, particularly after d was recorded over the original
material on track one. This limitation remains in digital audio
because the bounced signal becomes a single audio file, and there is
no way to separate the material once it is mixed.
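Because a digital bounce is simply sample-by-sample addition, it can be sketched in a few lines. This is an illustrative NumPy example (the test tones and the scaling choice are mine), not the actual code of any multitrack program:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR

# Four separate tracks; here, simple test tones at different pitches.
tracks = [np.sin(2 * np.pi * f * t) for f in (220, 330, 440, 550)]

# A digital "bounce" is sample-by-sample addition of the tracks,
# scaled here so the mix cannot clip. No noise is added, because
# the operation is purely arithmetic.
bounce = sum(tracks) / len(tracks)
```

As the text notes, once the tracks are summed there is no general way to recover the individual layers from the single bounced file.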

PITCH SHIFTING WITH TIME CORRECTION


In Project Two, you experimented with pitch shifting; specifically,
pitch shifting without time correction. In other words, when you
lowered the pitch of the sound, the sound also got longer: its
duration (time) did not stay the same.
Time correction is a digital process that allows frequency
transposition without a change in time. As has been mentioned
several times, the relationship between time and frequency is
normally fixed: given a waveform of a specific frequency, lowering
it by one octave (dividing the frequency in half) will make the
waveform twice as long, and therefore it will take twice as long to
complete. Through special analysis of the waveform (most often
using a Fourier Transform), it is possible to determine which
frequencies are present in a sound at periodic times (called frames).
Once the frequency content of a sound has been divided into these
frames, it is possible to resynthesize the sound by altering the
frequencies in these frames (for example, doubling them to make
the sound an octave higher) while keeping the number of frames
the same, thereby maintaining the same duration for the sound.
There are two essential differences between the two results,
and they affect how you can use the output of the process.
First, without the change in duration that normally accompanies
lowering the pitch, the ensuing sound may not be heard as a gesture.
Second, this is offset by the ability to layer the original and the
transformed sound side by side: because they are the same length,
they will maintain the same internal rhythm.
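The normally fixed time/frequency relationship described above can be demonstrated with crude resampling, which amounts to pitch shifting without time correction. A hypothetical NumPy sketch:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR
sound = np.sin(2 * np.pi * 440 * t)   # one second of A440

# Reading every second sample plays the waveform back twice as
# fast: the pitch rises by an octave AND the duration is halved.
octave_up = sound[::2]
```

Time-corrected pitch shifting avoids exactly this halving of duration by resynthesizing the sound from its analysis frames rather than simply reading the samples faster.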

SPEED CHANGE
Like time correction in the pitch shift process, Time Compression/
Expansion is a digital process that requires analysis and
resynthesis. In fact, it uses the very same algorithm, but instead of
altering the frequencies within the individual analysis frames, it
alters them when the frames are recombined.
And, like pitch shifting, time compression (making a sound
shorter in duration without affecting its pitch) and expansion
(making a sound longer in duration without affecting its pitch) are
very effective within a small range; excessive or extreme use will
create unwanted and obvious digital artifacts.
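A rough idea of frame-based time compression/expansion can be given with a naive overlap-add sketch in NumPy. The function below is hypothetical and omits the phase alignment that real analysis/resynthesis tools perform, which is one reason extreme settings expose artifacts:

```python
import numpy as np

def ola_stretch(x, rate, frame=2048):
    # Naive overlap-add time stretch: rate > 1 shortens the sound,
    # rate < 1 lengthens it. Frames are windowed and recombined at
    # a different spacing than they were taken, so duration changes
    # while pitch is (roughly) preserved. Assumes len(x) >= frame.
    hop_out = frame // 2                 # synthesis hop
    hop_in = int(round(hop_out * rate))  # analysis hop
    win = np.hanning(frame)
    n = (len(x) - frame) // hop_in + 1   # number of frames
    out = np.zeros((n - 1) * hop_out + frame)
    for i in range(n):
        out[i * hop_out : i * hop_out + frame] += (
            x[i * hop_in : i * hop_in + frame] * win
        )
    return out

# Example: stretch a one-second test tone.
tone = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)
shorter = ola_stretch(tone, 2.0)   # roughly half a second
longer = ola_stretch(tone, 0.5)   # roughly two seconds
```

Pushing `rate` to extremes makes the windowing artifacts obvious, mirroring the behaviour described above.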
Origins of the Concept
Both these processes were perfected with commercial results in
mind. Although they were originally created in research situations
(explained in the Study Guide units on computer music), potential
commercial applications created the tools we are using today. Pitch
shifting is marketed towards correcting out-of-tune singers: taking
a single word or syllable and raising or lowering it to the correct
frequency. (Now you know the origins of the phrase "fix it in the
mix.") Time compression was originally marketed to radio stations
because it made it possible to take a thirty-second radio commercial
and compress it into twenty-eight seconds, for example. It is now
used mainly to alter the tempos of drum loops within popular
music.
Note that in both cases, the use of these processes in the
commercial world is subtle, correcting mistakes through small
parameter adjustments. In these cases, any artifacts created by the
process are not noticeable. When we use these processes in
electroacoustic music, we are often more interested in their extreme
use; sometimes the resulting artifacts are useful and interesting. In
all cases, even though we are experimenting, we must be aware of
these vestiges.

DELAY
In the classic analogue tape studio, delay was created exclusively
through the use of tape machines. Short delays were created by
playing back the sound just recorded on the tape, using the
difference in space between the record head and playback head and
the speed of the tape as the determining factor for the delay time.
Although some tape recorders were created with moveable
playback heads, which facilitated different delay times, most often
the delay time could be varied only by changing the tape speed. Longer
delays used two tape machines, and the difference in position
between the machines determined the delay time, which could be
freely varied. In both cases, the amount of signal that was taken
from the playback and returned to the record head would
determine the amount of feedback.
By the 1980s, the appearance of digital delay lines (DDLs) allowed
for more flexible delay times, including times that were much
shorter than physically possible with tape machines (less than
100 milliseconds, for example). These very short delay times made
possible other time-based effects, including phasing, chorusing, and flanging.
These are all possible in software using time-based processes.
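In software, a delay reduces to a recursive buffer: each output sample is the input plus a scaled copy of the output from some samples earlier, with the scaling acting as the feedback amount. A minimal sketch in Python/NumPy (the function name and parameters are mine):

```python
import numpy as np

def feedback_delay(x, sr, delay_s, feedback=0.5, tail_s=1.0):
    # A basic digital delay line (a feedback comb filter):
    #   y[n] = x[n] + feedback * y[n - d]
    # feedback must stay below 1.0 or the echoes grow without bound.
    d = int(sr * delay_s)
    n = len(x) + int(sr * tail_s)   # leave room for the echo tail
    y = np.zeros(n)
    y[:len(x)] = x
    for i in range(d, n):
        y[i] += feedback * y[i - d]
    return y

# Example: a single click echoing every 250 ms, each echo quieter.
click = np.zeros(22050)
click[0] = 1.0
echoed = feedback_delay(click, 44100, 0.25, feedback=0.6)
```

With delay times below roughly 100 milliseconds, and with the delay time slowly modulated, this same structure is the basis of the flanging and chorusing effects mentioned above.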

TASKS
There are several tasks to work on this week. Because the working
methods are different in the various multitrack programs, these
tasks are in the individual appendices.

TO DO THIS WEEK
After completing all of the tasks found in the appendices for this
week, together with the second practical project, you should have a
more thorough understanding of various processes available in EA.
How can these processes be applied constructively to your sounds?
You should be turning in your Sound Journal for Weeks 6
through 9 this week.
