Notes: Sound Editing

Sounds are transmitted through the air as changes in air pressure. These pressure
changes are smooth and continuous. This is analog data. Computers and most other
modern audio processing equipment can only use digital data.

Creating digital data from analog data


Analog signals are sampled to create digital data. The ‘sampling rate’ (how many
times a second the analog signal is measured) determines the ‘fidelity’ of the sound and
the size of the digital file.
Higher sample rates mean bigger file sizes, but it is not just the number of samples
that matters. Each sample has a ‘bit depth’, which is the number of bits of
information in each sample.
This directly corresponds to the resolution of each sample. Examples of bit depth
include Compact Disc Digital Audio, which uses 16 bits per sample, and DVD-Audio
and Blu-ray Disc, which support up to 24 bits per sample.
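To make the relationship between sample rate, bit depth and file size concrete, here is a minimal Python/NumPy sketch. It is not part of the practical exercises, and the 440 Hz tone, 44 100 Hz sample rate and 16-bit depth are just illustrative assumptions.

```python
import numpy as np

# A minimal sketch: sample one second of a 440 Hz sine wave (an "analog"
# signal simulated mathematically) at a chosen sample rate and bit depth.
# The sample rate, bit depth and tone are illustrative values only.
sample_rate = 44100          # samples per second (CD quality)
bit_depth = 16               # bits of information in each sample
duration = 1.0               # seconds

t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)
analog = np.sin(2 * np.pi * 440 * t)              # values between -1.0 and 1.0

# Quantise each sample to one of 2**bit_depth possible levels.
max_level = 2 ** (bit_depth - 1) - 1              # 32767 for 16-bit audio
digital = np.round(analog * max_level).astype(np.int16)

# Rough uncompressed size: number of samples x bytes per sample (mono).
size_bytes = digital.size * (bit_depth // 8)
print(f"{digital.size} samples, about {size_bytes / 1024:.0f} KB per second of mono audio")
```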
This digitization is carried out by analog-to-digital conversion (ADC). After you have
edited a digital file, the computer will use digital-to-analog conversion (DAC) to
convert it to a (relatively) smooth analog waveform that drives your speakers.
There is some more important theory work on sampling but these ideas are enough to get us
started with the practical work.

For all practical work on this topic you will be using digital audio files. Before you start,
there are a few other terms like frequency, pitch and amplitude you need to consider.

Frequency: The frequency is the number of peaks and troughs per second and is given in
cycles per second, or hertz (Hz). The average human ear can hear sounds as low
as 20 Hz and as high as 20 kHz (20 000 Hz). This is known as the audible range.
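As an illustration, the short sketch below (using an assumed tone of roughly middle C, 261.63 Hz) estimates frequency by counting how many complete cycles occur in one second of a sampled signal.

```python
import numpy as np

# A small sketch with assumed values: estimate the frequency of a sampled tone
# by counting how many complete cycles occur in one second.
sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
tone = np.sin(2 * np.pi * 261.63 * t)            # roughly middle C

# Each upward zero crossing marks the start of a new cycle.
upward_crossings = np.sum((tone[:-1] < 0) & (tone[1:] >= 0))
print(f"About {upward_crossings} cycles per second, i.e. {upward_crossings} Hz")
print("Within the audible range?", 20 <= upward_crossings <= 20000)
```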

Pitch: We use the word pitch to compare sounds. We perceive higher frequency sounds as
higher in pitch. When we work with audio editing software we usually refer to changing the
pitch rather than altering the frequency.
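Real audio editors use more sophisticated algorithms for pitch changes, but a naive sketch shows the basic idea: playing the samples back faster raises the perceived pitch. The one-octave shift below is an illustrative assumption, and this simple approach also changes the clip’s duration.

```python
import numpy as np

# Naive pitch shift by resampling: keeping every second sample and playing the
# result at the original sample rate doubles the frequency (one octave up),
# but also halves the duration. Real editors use time-stretching to avoid that.
sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
tone = np.sin(2 * np.pi * 220 * t)       # 220 Hz tone (assumed example)

octave_up = tone[::2]                    # heard as 440 Hz when played at 44100 Hz
print(len(tone), "samples ->", len(octave_up), "samples")
```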

Amplitude: The height of the sound wave reflects the strength or power of the sound. We
call this the amplitude. Higher amplitudes result in higher volumes. This is why we call a
device that increases volume an amplifier.

When an amplifier tries to increase the amplitude beyond its limits, the waveform is
‘clipped’. Clipping produces a distortion, which is one of the ‘effects’ that rock guitarists
often use. We will use several ‘effects’ in our audio editing practice.
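A short sketch can show clipping in action; the gain factor and tone below are illustrative assumptions, not settings from any particular amplifier or editor.

```python
import numpy as np

# A minimal sketch of amplification and clipping (illustrative values only).
sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)      # a tone at half of full scale

amplified = signal * 4                          # gain pushes peaks to +/-2.0
clipped = np.clip(amplified, -1.0, 1.0)         # anything beyond full scale is flattened

# The flattened peaks add new frequencies, heard as distortion.
print("Peak before clipping:", amplified.max(), "after clipping:", clipped.max())
```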

Audio editing: What we’ve seen in the diagrams so far are very simple ‘waveforms’. A
waveform is a representation of an audio signal or recording. It lets you see the changes in
amplitude against time.

Audio editing applications, like Audacity®, use waveforms to show how sound was
recorded. If the waveform is low, the volume was probably low. If the waveform fills the
track, the volume was probably high. Changes in the waveform can be used to spot when
certain parts of the sound occur. In this screenshot of one of the practice files, we can see
the clock ticks and when the alarm rings.
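The sketch below mimics that kind of waveform display: amplitude plotted against time. The quiet and loud segments are synthetic stand-ins for a real recording such as the clock-and-alarm practice file, and all the values are illustrative only.

```python
import numpy as np
import matplotlib.pyplot as plt

# A minimal sketch of a waveform display: amplitude plotted against time.
sample_rate = 8000
t = np.linspace(0, 2.0, 2 * sample_rate, endpoint=False)
quiet = 0.1 * np.sin(2 * np.pi * 440 * t[:sample_rate])     # low amplitude: quiet ticking
loud = 0.9 * np.sin(2 * np.pi * 880 * t[sample_rate:])      # high amplitude: the alarm
recording = np.concatenate([quiet, loud])

plt.plot(t, recording)
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.title("Waveform: amplitude against time")
plt.show()
```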

When we save the file we’ve been editing, we need to decide which format to use. In
theory this should be a simple trade-off between the quality of the recording and the size of
the file. It is sometimes more important, however, to consider the eventual use and
distribution of the recording.

Audio formats
There are three major groups of audio formats:
• Uncompressed
• Lossless compression
• Lossy compression.

Uncompressed formats like .WAV give pure digital sound as recorded. However, the files
can be very large. Five minutes of WAV audio can need 40 to 50 MB of storage.
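A quick back-of-the-envelope check of that figure, assuming CD-quality settings (44 100 Hz, 16 bits per sample, two channels):

```python
# Rough uncompressed size for five minutes of CD-quality stereo audio.
sample_rate = 44100      # samples per second
bit_depth = 16           # bits per sample
channels = 2             # stereo
duration = 5 * 60        # five minutes, in seconds

size_bytes = sample_rate * (bit_depth // 8) * channels * duration
print(f"About {size_bytes / 1_000_000:.0f} MB of uncompressed audio")   # roughly 53 MB
```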

Files saved with lossless compression, like .FLAC files, are still the sound as recorded, but
the compression algorithm allows some parts of the file to be stored as coded data, saving
some file space.

Files saved with lossy compression like .MP3 have some audio information removed and
the data is simplified. This does result in some loss of quality but clever algorithms only
remove the parts of the sound that have the least effect on the sound we can hear.

The compression of a recording and the decompression for playing it back are done
by CODECs (coder–decoders). CODECs for the common audio formats are usually
automatically installed in media applications and extra CODECs for other file types can be
downloaded and installed as required.
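As a sketch of exporting the same audio in different formats (not a feature of any particular editor described here), the third-party Python library soundfile can write WAV and FLAC files through libsndfile; the tone, file names and 16-bit subtype below are assumed for illustration, and MP3 export usually needs extra tooling so it is left out.

```python
import os
import numpy as np
import soundfile as sf   # third-party library; assumes libsndfile is available

# Write the same one-second tone as uncompressed WAV and as lossless FLAC,
# then compare the sizes of the two files on disk.
sample_rate = 44100
t = np.linspace(0, 1.0, sample_rate, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

sf.write("tone.wav", tone, sample_rate, subtype="PCM_16")    # uncompressed
sf.write("tone.flac", tone, sample_rate, subtype="PCM_16")   # lossless compression

for path in ("tone.wav", "tone.flac"):
    print(path, os.path.getsize(path), "bytes")
```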

Learning objectives
After reading this guide you should be able to teach the following learning objectives:
• trim a sound clip to remove unwanted material
• join together two sound clips
• fade in and fade out a sound clip
• alter the speed of a sound clip
• change the pitch of a sound clip
• add or adjust reverberation
• overdub a sound clip to include a voice over
• export a sound clip in different file formats.

Key terms
Sampling: The conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal).
Waveform: A visual representation of an audio signal or recording.
Frequency: The number of complete waves, oscillations or cycles occurring in unit time (usually one second).
Pitch: The frequency of a sound as perceived by the ear. A high pitch sound corresponds to a high frequency sound wave and a low pitch sound corresponds to a low frequency sound wave.
Amplitude: The size of the vibration; this determines how loud the sound is.
Fidelity: How close the digitised sound is to the original analog recording.
Monophonic recording: Single channel sound.
Stereophonic recording: Two (or more) channel sound.

You will also need to become familiar with some commonly used effects like:
• Fading in and out
• Changing the pitch
• Changing the speed
• Changing the volume (amplification)
• Adding reverberation.
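As a preview, the sketch below applies three of these effects (amplification, fade in and fade out) to an array of samples; the gain and fade lengths are illustrative assumptions, and editors such as Audacity provide the same ideas as menu commands.

```python
import numpy as np

# A minimal sketch of three common effects applied to an array of samples.
sample_rate = 44100
t = np.linspace(0, 2.0, 2 * sample_rate, endpoint=False)
clip = 0.5 * np.sin(2 * np.pi * 440 * t)

# Amplification: multiply every sample by a gain factor, then guard against clipping.
louder = np.clip(clip * 1.5, -1.0, 1.0)

# Fade in over the first half second, fade out over the last half second.
fade_len = sample_rate // 2
envelope = np.ones_like(clip)
envelope[:fade_len] = np.linspace(0.0, 1.0, fade_len)    # fade in
envelope[-fade_len:] = np.linspace(1.0, 0.0, fade_len)   # fade out
faded = louder * envelope
print("Start amplitude:", faded[0], "peak:", faded.max())
```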
