
Module Code and Title: MMDDA501: AUDIO PRODUCTION

Learning Units: 1. Prepare for editing sound

1.1 Setup digital voice recorder and audio interface


Audio is sound within the acoustic range audible to humans.

1.1.1: Voice recorder hardware setup

➢ Installation of the digital recorder to the computer


✓ A digital voice recorder is a device that converts sound, such as speech and other sounds, into a
digital file that can be moved from one electronic device to another, played back by a computer,
tablet or smartphone, and stored like any other digital file.
Digital voice recorders no longer rely on a cassette tape. Instead, the voice is recorded digitally
to internal memory inside the recorder or to a removable memory card, and the recorder or card is
then connected to a computer so the recording can be transferred as a media file.
A digital voice recorder blends the portability of a cassette recorder with the convenience of
digital storage, making it easy to transfer recordings from the device onto a computer for
immediate use in applications or on the Internet. Depending on your digital voice recorder
model, you may be able to connect the device to your computer via a standard USB cable, while
other recorders use an SD or microSD card instead.

WAYS TO CONNECT DIGITAL VOICE RECORDER TO THE COMPUTER


❖ USB CONNECTION
1. Plug the data cable included with your digital voice recorder into the device, then plug the other
end into one of the USB ports on your computer.
2. Turn on the digital voice recorder by pressing its power button or sliding its power switch.
3. Click "Open Folder to View Files" when the connection prompt appears on the screen. A
window will open that contains the audio files stored on the device.
4. Click and drag the audio files from the folder onto your computer's desktop to transfer them
from the digital recording device to the computer (a scripted version of this transfer is sketched
after these steps).
5. Unplug the USB cable, then power off the digital voice recorder.
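For bulk transfers, the click-and-drag in step 4 can also be scripted. The sketch below is only an illustration in Python: the recorder path is hypothetical and should be replaced by whatever drive letter or mount point your recorder (or its memory card) appears as on your computer.

from pathlib import Path
import shutil

RECORDER = Path("E:/RECORDER/VOICE")        # hypothetical mount point of the device
DESKTOP = Path.home() / "Desktop" / "voice_recordings"
DESKTOP.mkdir(parents=True, exist_ok=True)

for audio_file in RECORDER.glob("*"):
    # Copy only common audio formats; recorders usually produce WAV, MP3, WMA or M4A files.
    if audio_file.suffix.lower() in {".wav", ".mp3", ".wma", ".m4a"}:
        shutil.copy2(audio_file, DESKTOP / audio_file.name)
        print(f"Copied {audio_file.name}")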

❖ MULTIMEDIA CARD CONNECTION


1. Remove the SD or microSD card from your digital voice recorder if it uses that instead of a USB
connection.
2. Insert the microSD card into a microSD card adapter if this is the type of card used by your
digital voice recorder.
3. Insert the SD card or microSD card adapter into your computer's multimedia card slot. If your
computer does not have a multimedia card slot, insert the card into a multimedia card-to-USB
adapter, then plug the adapter into one of your computer's USB ports.
4. Click "Open Folder to View Files" when prompted. A window will open that contains the files on
the multimedia card. Click and drag the files from the folder to your desktop to transfer them
from the card to the computer.
5. Unplug the adapter from the USB port or remove the SD card from the multimedia card slot. Put
the card back into the digital voice recorder so that it is ready the next time you need to record.
Note: Some digital voice recorders have a built-in USB connector that you can reveal by sliding a lever on
the device. Plug the device directly into the computer's USB port, then follow the instructions in the USB
connection section above.

➢ INSTALLATION OF AUDIO INTERFACES TO THE COMPUTER


An audio interface is the hardware that connects your microphones and other audio gear to your
computer. A typical audio interface converts analog signals into the digital audio information that your
computer can process.
The same audio interface also performs this process in reverse, receiving digital audio information
from your computer and converting it into an analog signal that you can hear through your studio
monitors or headphones.
It sends that digital audio to your computer via some kind of connection (e.g., Thunderbolt, USB,
FireWire, or a special PCI/PCIe card).

An audio interface is used to make good-quality home studio recordings. It is an external sound card
with inputs for mics and instruments; audio interfaces let you plug professional mics, headphones,
instruments and other signals into a computer.
When an audio interface is used with a computer, it can act as the computer's sound card.
When choosing an audio interface, it's important to determine the specific port that's available on your
computer for its use.
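To make the analog-to-digital conversion concrete, the short sketch below simulates what the interface's converter does: a continuous 440 Hz tone is sampled 48,000 times per second and each sample is rounded to one of the 65,536 values available at 16-bit depth. This is only an illustration with NumPy; it does not talk to real hardware.

import numpy as np

SAMPLE_RATE = 48000          # samples per second taken by the converter
BIT_DEPTH = 16               # bits used to store each sample
DURATION = 0.01              # seconds of audio to simulate

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE
analog = np.sin(2 * np.pi * 440 * t)                 # "continuous" 440 Hz signal

levels = 2 ** (BIT_DEPTH - 1)                        # 32768 steps per polarity
digital = np.round(analog * (levels - 1)).astype(np.int16)  # quantized samples

print(digital[:10])          # the integer values a 16-bit interface would deliver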

WHY DO YOU NEED AN AUDIO INTERFACE?


There are several reasons to use a dedicated audio interface, rather than the sound card built into your
computer. Technically speaking, a sound card is an audio interface, but its limited sound quality and
minimal I/O make it less than ideal for recording. Many sound cards only have a consumer-grade stereo
line level input, a headphone output, and possibly also a consumer-grade stereo line level output.

COMPUTER CONNECTIVITY OPTIONS


A few audio interface connection types are considered standard:
i. Thunderbolt
ii. USB
iii. FireWire and
iv. PCIe.
Most PC and Mac computers come equipped with USB ports (either USB 2 or USB 3), whereas FireWire
(either 400 or 800) is mostly found on Macs. USB 2 and FireWire 400 offer broadly similar bandwidth
(roughly 400 to 480 Mbps), which is fast enough to record up to 64 tracks at once under ideal conditions.
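As a rough sanity check on those figures, the arithmetic below estimates how many raw 24-bit/48 kHz streams fit into a 480 Mbps USB 2 link. Real-world track counts are far lower because of protocol overhead, driver buffering, and disk speed, which is why 64 simultaneous tracks is quoted as the practical ceiling.

bus_speed = 480_000_000            # bits per second on USB 2 (theoretical maximum)
stream = 48_000 * 24               # bits per second for one 24-bit / 48 kHz channel

print(bus_speed / stream)          # ~416 raw streams before any overhead
print(64 * stream / bus_speed)     # 64 tracks use only ~15% of the raw bandwidth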

ESSENTIAL TOOLS/EQUIPMENT USED IN SOUND RECORDING ENGINEERING


• Recorder
• Audio interface
• Studio monitors
• Pop filter
• Microphone
• Sound card
• Computer
1.2 Configure digital recorder and digital audio workstation

1.2.1: Installation and configuration of the audio hardware driver

A driver is the piece of software that lets the operating system and applications access a hardware component.

The driver acts as the bridge between hardware and software: it is what allows the operating system to
access the hardware's functions and make them available to the programs that need them.

To install a driver, you need a driver setup file, and not just any driver file: you need the one made
for your particular sound hardware and operating system.

Put simply, driver compatibility means the ability to run the hardware on a particular system with a
particular OS.

You can't, for example, install Windows 7 sound drivers on Windows 10 or on Linux, so always check the
driver's compatibility before downloading and installing it.

TWO MAIN TYPES OF SOUND DRIVERS THAT USERS NEED TO INSTALL

1. Motherboard Sound Drivers – programs read by the operating system that are required for system
stability and for the basic functionality of the motherboard's built-in sound system. Motherboard
sound drivers usually come with a new laptop or computer, typically as a setup file on a CD or DVD
that you run on a supported operating system such as Windows or Linux to use all of the motherboard's
audio features.
2. Sound Card Drivers – drivers required to interact with a particular piece of computer sound
hardware. They are a set of small programs written for that card's chipset. (A quick way to confirm
that a sound driver is installed is sketched below.)
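A quick way to confirm that a sound driver is installed and working is to list the audio devices the operating system exposes. The sketch below uses the third-party sounddevice package (an assumption; installed with pip install sounddevice); if your interface or sound card does not appear in the list, its driver is missing or not loaded.

import sounddevice as sd

# Every entry here is a device the OS driver layer has registered.
for index, device in enumerate(sd.query_devices()):
    print(index, device["name"],
          "in:", device["max_input_channels"],
          "out:", device["max_output_channels"])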

1.2.2: CONFIGURATION OF THE AUDIO DEVICE HARDWARE WITH THE DIGITAL AUDIO WORKSTATIONS

A digital audio workstation (DAW) is a hardware device or software app used for composing, producing,
recording, mixing and editing audio like music, speech and sound effects. DAWs facilitate the mixing of
multiple sound sources (tracks) on a time-based grid.
Most audio cards provide one or more small applications that allow you to customize your hardware.

The settings are normally gathered on a control panel that can be opened from within Cubase or
separately, when Cubase is not running. For details, refer to the audio hardware documentation.

Settings include:

• Selecting which inputs/outputs are active.


• Setting up word clock synchronization.
• Turning on/off monitoring via the hardware.
• Setting levels for each input.
• Setting levels for the outputs so that they match the equipment that you use for monitoring.
• Selecting digital input and output formats.
• Making settings for the audio buffers.

Latency refers to a short period of delay (usually measured in milliseconds) between when an audio
signal enters a system and when it emerges. In recording software, it is the delay between the time
the audio enters the computer and the time the application is able to record it to a track. For
example, if you are recording a keyboard track, latency is the delay between the time you strike a
key and the time that note is recorded.

WHAT CAUSES IT?

Audio latency is caused by delays processing the audio data as it travels from the outside world (or from
the triggered note on a keyboard), to the computer’s processor and back out again.
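The delay can be estimated from the audio buffer settings mentioned earlier. The sketch below gives the usual back-of-the-envelope figure for one 512-sample buffer at 48 kHz; real round-trip latency is higher because the signal is buffered on both the input and the output side, plus converter and driver overhead.

buffer_size = 512        # samples per audio buffer (the I/O Buffer Size setting)
sample_rate = 48_000     # samples per second

one_way_ms = buffer_size / sample_rate * 1000
print(f"{one_way_ms:.1f} ms per buffer")        # ~10.7 ms
print(f"{2 * one_way_ms:.1f} ms round trip")    # ~21.3 ms in + out, before overhead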

CREATION OF AUDIO RECORDING TRACK

Tracking is essentially the process of recording audio. The name comes from the fact that each
instrument is recorded individually and given its own “track” in the mix, so that the balance and sound
of each can be controlled later. Originally, “track” referred to a thin width of analogue tape; today it
usually means a file on a hard drive.

Learning Unit 2: Edit audio


Audio editing generally refers to working with a track using editing software.
Steps involved in audio editing:
1. Select: the first step in audio editing, where you identify the portion of the audio
that you want to edit.
2. Editing: the next step, where you make changes to the selected audio, depending on
the software used.
3. Processing: the third step, where you apply effects or other processing to the audio.
Different plugins can be used in this step.
4. Output: the last step, where you save the edited file in a chosen file format (WAV,
MP3, ...).
TYPES OF AUDIO EDITING
1. Cutting: the most basic type of audio editing. To cut an audio file, select the
portion of the file you want to remove and then delete it. This is often used to remove
unwanted sections from a recorded audio file, such as pauses or mistakes.
2. Fading: used to smooth out sudden changes in volume. For example, if you have a
recording of someone speaking and there is a sudden loud noise, you can use a fade to
gradually reduce the recording volume until the noise is gone.
3. Mixing: used to combine multiple audio files into one. This is often used to
create background music for a video or podcast. (A short code sketch illustrating all
three types follows this list.)
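The three edit types above can also be reproduced in code. The sketch below is only an illustration: it uses the third-party pydub package (assumed installed, with FFmpeg available for MP3 export) and hypothetical files voice.wav and music.wav to cut a section, fade the music, mix the two, and write the result.

from pydub import AudioSegment   # assumption: pip install pydub, FFmpeg on PATH

voice = AudioSegment.from_file("voice.wav")   # hypothetical narration take
music = AudioSegment.from_file("music.wav")   # hypothetical background music

# Cutting: keep everything except the section between 5 s and 8 s.
voice = voice[:5000] + voice[8000:]

# Fading: 2-second fade-in and fade-out on the music bed.
music = music.fade_in(2000).fade_out(2000)

# Mixing: drop the music by 12 dB, then lay the voice on top of it.
mix = (music - 12).overlay(voice)

# Output: save the edited result in a chosen format.
mix.export("episode.mp3", format="mp3")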

2.1 Basic Audio editing

2.1.1 Waveform adjustments

A waveform is an image that represents an audio signal or recording. It shows the changes in amplitude
over a certain amount of time.

2.1.2 Audio level: The unit used to express the sound pressure level is the decibel, abbreviated dB.
The sound pressure level of audible sounds ranges from 0 dB through 120 dB. Sounds in excess
of 120 dB may cause immediate irreversible hearing impairment, besides being quite painful for
most individuals.
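Decibels express a logarithmic ratio rather than an absolute quantity. For digital audio levels, the conversion between a linear sample amplitude (relative to full scale) and decibels is the standard 20 * log10 relationship sketched below.

import math

def amplitude_to_db(amplitude, reference=1.0):
    # 20 * log10 of the ratio: half amplitude is about -6 dB, a tenth is -20 dB.
    return 20 * math.log10(amplitude / reference)

print(amplitude_to_db(1.0))    #   0.0 dBFS (full scale)
print(amplitude_to_db(0.5))    #  ~-6.0 dB
print(amplitude_to_db(0.1))    # -20.0 dB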

To adjust the audio level:

1. Open an audio clip in Adobe Audition and select the section of audio you want to adjust.
2. In the menu above the audio channel, click and drag the decibel (dB) scale to adjust the volume.
3. Select another part of the track and repeat step 2 to adjust the volume levels on a different section.

2.1.3 Audio normalization: the application of a constant amount of gain to an audio recording to
bring the amplitude to a target level (the norm). Because the same amount of gain is applied
across the entire recording, the signal-to-noise ratio and relative dynamics are unchanged.

To normalize audio:

1. Navigate to Multitrack > Mixdown Session to New File > Entire Session.
2. Click-and-drag to select the portion that needs volume adjustment.
3. From the options that pop up, click-and-drag the knob to the left or right.
4. Navigate to Edit > Select > Select All.
5. Navigate to Favorites > Normalize to the dB level of your choice. (A code sketch of the
underlying idea follows these steps.)
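Conceptually, normalization just measures the loudest sample and applies one constant gain so the peak lands on the target level. A minimal NumPy sketch of that idea (an illustration, not what Audition runs internally), assuming the audio is already loaded as floating-point samples between -1 and 1:

import numpy as np

def normalize_peak(samples, target_db=-1.0):
    """Scale the whole signal so its loudest peak sits at target_db (dBFS)."""
    peak = np.max(np.abs(samples))
    target_linear = 10 ** (target_db / 20)          # e.g. -1 dB -> ~0.891
    gain = target_linear / peak                     # one constant gain factor
    return samples * gain                           # relative dynamics are unchanged

audio = np.array([0.02, -0.30, 0.45, -0.10])        # toy signal, peak at 0.45
print(normalize_peak(audio))                        # peak is now ~0.891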
2.1.4 Fades: In audio engineering, a fade is a gradual increase or decrease in the level of an audio
signal. A recorded song may be gradually reduced to silence at its end (fade-out), or may
gradually increase from silence at the beginning (fade-in).

To apply audio fades:

1. Open your audio file in Adobe Audition.
2. In the workspace window, locate the two boxes at the start and end of the audio.
3. To fade the audio in, click and drag the Fade In box along the audio timeline.
4. While still pressing down on the Fade In box, move the line up or down to adjust the rate of the
fade-in effect. (A code sketch of the underlying gain ramp follows these steps.)
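Under the hood, a fade is simply a gain ramp multiplied onto the samples. A minimal NumPy sketch of a linear fade-in and fade-out on a mono signal (an illustration only; Audition also offers curved fades):

import numpy as np

def apply_fades(samples, sample_rate, fade_seconds=1.0):
    out = samples.astype(float).copy()
    n = int(sample_rate * fade_seconds)             # samples covered by each fade
    ramp = np.linspace(0.0, 1.0, n)                 # gain rises from silence to full
    out[:n] *= ramp                                 # fade-in at the start
    out[-n:] *= ramp[::-1]                          # fade-out at the end
    return out

sr = 48_000
tone = np.sin(2 * np.pi * 440 * np.arange(sr * 3) / sr)   # 3-second test tone
faded = apply_fades(tone, sr, fade_seconds=0.5)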
AUDITION: INTERFACE OVERVIEW

1. Menu Bar: access general program operations such as save, effects, and help.
2. Waveform/Multitrack Toggle: switch between waveform and multitrack mode.
3. Tool Bar: tools used to interact with the project.
4. Files Panel: list of files associated with the project
5. Zoom Bar: displays the full Timeline. Can be used to zoom and change the focus of the Timeline.
6. Track Controls: modifies track properties and input settings
7. Play head: indicates the current time position of the playback audio.
8. Timeline: composed of audio tracks, where editing, arrangement, and recording take place
9. Transport Panel: play and recording controls
10. Zoom Panel: various zoom controls
11. Levels Panel: monitor recording and playback audio levels

Zoom & Focus

The Zoom panel has options for zooming and focusing on any area of the Timeline. Options include
zooming in/out of the amplitude, which increases/decreases the height of the waveform, as well as
the timing, which stretches/compresses the Timeline to see more/less of it.
Alternatively, the Zoom Bar at the top of the Timeline can be used to zoom and change the focus of
the Timeline.

To zoom in/out with the slider, click-and-drag the handles on the left and right inward/outward. To
shift the focus in the Timeline, click-and-drag the slider to the left or right.

The timeline

After audio has been recorded or imported, it can be placed on tracks in the Timeline. Tracks hold
the clips that will eventually be turned into a final audio file.

Most podcasting projects will have multiple tracks: vocals, background music, and perhaps sound
effects. The numbers at the top of the Timeline indicate the time position. The far left of each track
contains the track controls.

Track Controls

Track Title: Name each track to organize content.

Mute (M): Mute the track so that it can’t be heard.

Solo (S): Mutes all other tracks.

Arm to Record (R): Sets the track ready to record.

Volume: Adjusts the volume on the track.

Stereo Balance: Boosts or reduces volume levels in the right and left channels. Slide left to set output
to only the left headphone or speaker; slide right to set output to only the right headphone or
speaker. (A sketch of the underlying pan math follows this list.)

Input: Selects which device will be used to capture audio.

Output: Selects which device will be used to play audio.

Read: Leave this as the default setting.
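The Stereo Balance control is a pan law applied across the two output channels. A common implementation is constant-power panning, sketched below with NumPy; whether Audition uses exactly this curve is an assumption, but the behaviour is the same: moving toward one side raises that channel's gain and lowers the other's.

import numpy as np

def constant_power_pan(mono, position):
    """position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right."""
    angle = (position + 1) * np.pi / 4              # map position to 0 .. pi/2
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.column_stack([left, right])           # stereo pair

signal = np.sin(2 * np.pi * 440 * np.arange(4800) / 48_000)
stereo = constant_power_pan(signal, position=-0.5)  # pushed somewhat to the left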

Audition: Editing Modes

Audition has two editing modes: Multitrack and Waveform.

Click the buttons at the top left to switch between these modes.
Multitrack Mode

Multitrack mode is commonly used for podcasting, radio show creation, and musical composition.
Multitrack is a non-destructive clip-based workspace, where newly recorded and imported clips are
arranged on the Timeline. Users can modify timing and adjust volume levels for each clip, then
export a final audio file. This mode does not modify new or imported audio files.

Waveform Mode

Waveform is a destructive editing workspace capable of adding complex effects, such as noise
removal. Edits made in Waveform Mode are permanent and overwrite imported or newly recorded
audio. This mode can be useful to quickly edit audio to remove filler words (like “umm,” “uhh,” or
“like”) and silences, isolate volume issues, or reduce background noise. Accessing Waveform Mode
is also essential during the Export process. Whenever working in this mode, be sure to save backups
of original audio files as any edits will be permanent.

Audition: Setting up and saving Projects

To start a new Audition project:

1. Ensure any USB microphones or audio interfaces are plugged into the computer, and
launch Adobe Audition.
2. Click the Multitrack button at the top left. A new window will appear.
3. Adjust the settings as desired, such as:
a. Session Name
b. Folder Location
c. Template: None
d. Sample Rate: 48000
e. Bit Depth: Highest Possible
f. Master: Stereo
4. Click OK.

Saving (Audition Session File)

Always save work as an Audition Session file. This will back up any recordings as well as save any edits in
case they need to be adjusted. Without the Session file, the project can’t be re-opened for more editing
later. When moving the project to another location, be sure to include all associated audio files, such as
music and sound effects, because the .sesx Audition Session file contains only editing data. Depending on
the length and quality of the audio segments, this can add up to a large number of files and a lot of
storage space.

To Save an Adobe Audition Session File:

1) Navigate to File > Save As. A pop-up will appear.


2) Rename the file, select a storage location, and select “Audition Session (*.sesx)” from the
Format drop-down menu.
3) Click OK. The Session file will be saved.

Audition: Editing Tools & Techniques

1. Tools

The following tools, which are the most commonly used tools for editing, can be accessed from the
Toolbar at the top of the interface.

Move Tool
Moves clips in the Timeline. Clips can be moved left/right to change their timing, as well as up/down to
different tracks.

Razor Selected Clips Tool

Cuts clips into separate portions that can be independently moved and edited in the Timeline. This tool
won’t delete or modify audio from the original file or recording.

Time Selection Tool

Selects portions of clips in the Timeline. Click-and-drag within the Timeline to select.

2. Techniques

Trimming

To trim a clip, click-and-drag its left or right edge inward. Trimming hides unwanted material at the
start or end of the clip without deleting the underlying audio.

Moving

To move a clip within the Timeline, click-and-drag it with the Move Tool. Clips can be moved left/right to
change their timing, as well as up/down to different tracks.

Splitting

Splitting a clip separates it into smaller portions, which can be edited independently of each other. This
allows portions of long clips to be divided, isolated, moved, or deleted. When splitting a clip in the
Timeline, the original file or recording will remain unaltered and can be retrieved from the Files panel at
any time.

Deleting

To delete a clip, select it and press the Delete key. To delete a portion of the clip, use the Razor Tool to
cut out the section, select the new sub clip, and press the Delete key.

Audition: Recording & Importing

In addition to recording new audio, previously recorded files such as sound effects, voice files, and
music, can be imported into the project.
Recording

To record:

1. Navigate to Adobe Audition > Preferences > Audio Hardware.

Adjust the following settings:

• Default Input: Select the source of audio.


• Default Output: Select the audio playback device, such as Built-In Output or Headphones.
• I/O Buffer Size: 512 is selected by default. If there is a significant audio delay, or echo effect
heard in the headphones, lower this number to reduce the delay.

2. In the Timeline, click "Default Stereo Input" on a track, hover over "Mono", and select an option that
corresponds to the microphone source. Repeat this process to set up a track for each mic being used.
3. Click the Arm For Record button, labelled as an “R,” on each track being used. Audio level meters
will appear to the right for each activated track.
4. Click the Record button, which looks like a red circle, in the Transport panel. The track will start
recording in the Timeline. If you don’t see the Transport panel, navigate to Window > Transport and
ensure “Transport” is checked.
5. When done recording, click the Stop button, which looks like a white square, in the Transport panel.

The original recording will be saved in the Files panel to the left. See the Basic Editing and Advanced
Editing sections for editing instructions. Use the volume knob in track controls to increase or decrease
the volume of the track after recording.
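The same capture can also be done programmatically, which is a handy way to test an input outside Audition. The sketch below uses the third-party sounddevice and soundfile packages (assumptions, installed with pip) to record a few seconds from the system's default input and write the take to a WAV file.

import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 48_000
SECONDS = 5

# Record from the system's default input device.
take = sd.rec(int(SECONDS * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
sd.wait()                                     # block until recording finishes

sf.write("test_take.wav", take, SAMPLE_RATE)  # save the capture as a WAV file
print("Saved test_take.wav")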

Importing

To import audio:

1) Navigate to File > Import > File.


2) Locate and select the desired file(s).
3) Click Open. The files will now appear in the Files panel.
4) Click-and-drag an imported file to the Timeline at the desired location.
Learning Unit 3: MIX AUDIO

3.1 MULTITRACK EDITING

MULTITRACK SESSIONS

In the Multitrack Editor, you can mix together multiple audio tracks to create layered soundtracks and
elaborate musical compositions. The Multitrack Editor is an extremely flexible, real-time editing
environment, so you can change settings during playback and immediately hear the results.

Mixing is the process of blending all the individual tracks in a recording to create a version of the song
that sounds as good as possible – the “mix”.

Figure 1: Multitrack view with 2 tracks

The process can include:

• Balancing the levels of the tracks that have been recorded


• Fine-tuning the sound of each instrument or voice using equalization (EQ)
• “Panning” the tracks between speakers to create a stereo image
• Adding reverb, compression, and other effects to enhance the original recording

In this course we will use Adobe Audition as our digital audio workstation software.

3.2 Apply audio effects

3.2.1. Audio clean up

In the spectral view, also known as the spectral frequency display, the audio is analyzed by the
application and displayed as a colorful frequency map: the bottom represents the low bass register,
and the higher frequencies are represented at the top. One of the most useful features in Adobe
Audition is its ability to edit audio with spectral displays, giving you a lot of control over your
stereo audio file. With the spectral display, you can see which frequencies dominate in the audio.
This might sound confusing at first, so let's first look at the spectral view and then at how to use
it to edit your audio.
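The spectral display is essentially a spectrogram: the signal is cut into short overlapping windows, each window is Fourier-transformed, and the energy in each frequency band is drawn as colour. A minimal SciPy sketch of the same analysis, assuming a mono NumPy signal:

import numpy as np
from scipy import signal

sr = 48_000
audio = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)     # 1-second test tone

# frequencies (Hz), times (s), and the energy in each time/frequency cell
freqs, times, power = signal.spectrogram(audio, fs=sr, nperseg=1024)
print(freqs[np.argmax(power.mean(axis=1))])              # strongest band, close to 440 Hz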
Noise reduction is the process of removing noise from a signal. All signal processing devices, both
analogue and digital, have traits that make them susceptible to noise. The Noise
Reduction/Restoration > Noise Reduction effect dramatically reduces background and broadband
noise with a minimal reduction in signal quality. This effect can remove a combination of noise,
including tape hiss, microphone background noise, power-line hum, or any noise that is constant
throughout a waveform.

Sound Removal

The Sound Remover tool does a great job of lifting unwanted sounds away from the sound that you
want to retain. It handles dynamic sounds that change over time as well as prominent sounds in the
foreground. Open your project in Audition and select the area you want to adjust. Most of the time
you can significantly reduce or completely eliminate the unwanted sound by narrowing in on the
frequency range where you hear it.

List of some special effects used in audio editing:


• Adjust pitch
• Gate
• Pitch correction
• Access plugins
• Compression
• Modulation
• Filter and EQ
• Reverb
• Echo and delay

Here is the meaning of each effect:


➢ Adjust Pitch: pitch describes how low or high a sound is. All sounds are caused by
vibrations, and when those vibrations occur at fairly consistent frequencies, we perceive them as
musical tones.
Sounds which don't have a precise musical tone to them are referred to as having indefinite pitch
(i.e., noise).
➢ Noise gate: a gate used to remove unwanted sound by setting a threshold; all sounds
below the threshold are ignored.
➢ Access plugins: Plugins are self-contained pieces of code that can be "plugged in" to DAWs to
enhance their functionality. Generally, plugins fall into the categories of audio signal processing,
analysis, or sound synthesis.
➢ Compression: Compression is the process of lessening the dynamic range between the loudest and
quietest parts of an audio signal. This is done by attenuating the louder signals and then applying
make-up gain, which brings the quieter parts up relative to the peaks. The controls you are given to
set up a compressor are usually:

✓ Threshold - how loud the signal has to be before compression is applied.


✓ Ratio - how much compression is applied. For example, if the compression ratio is set to
6:1, the input signal will have to cross the threshold by 6 dB for the output level to increase
by 1 dB. (A numeric sketch of this appears after this list.)
✓ Attack - how quickly the compressor starts to work.
✓ Release - how soon after the signal dips below the threshold the compressor stops.
✓ Knee - sets how the compressor reacts to signals once the threshold is passed. Hard Knee
settings mean it clamps the signal straight away, and Soft Knee means the compression kicks
in more gently as the signal goes further past the threshold.
✓ Make-Up Gain - allows you to boost the compressed signal, as compression often attenuates
the signal significantly.
✓ Output - allows you to boost or attenuate the level of the signal output from the
compressor.

➢ Modulation: in music production, modulation means changing a property of a sound over time. The
modulation of sound requires a source signal called a modulator that controls another signal called a
carrier. Modulating sounds adds a sense of motion, dimension, and depth.

➢ Filter and EQ: A filter is a device that attenuates or removes a user-defined range of frequencies
from an audio waveform while passing other frequencies. Typical filters are low pass, high pass, and
band pass. An equalizer (EQ) is a type of filter that corrects for losses in the transmission of audio
signals, making the output equal to the input, or making an otherwise inconsistent frequency
response "flat," giving all frequencies equal energy.

➢ Reverb: Reverb (short for reverberation) is one of the oldest of all audio effects, and aims to
recreate the natural ambience of real rooms and spaces. Adding a softer edge and a sense of 'space'
to sounds, it is an essential tool when mixing a track, and commonly a bedrock of many guitarists'
pedal boards.

➢ Echo and delay: Echo and delay are created by copying the original signal in some way, then
replaying it a short time later. There's no exact natural counterpart, though the strong reflections
sometimes heard in valleys or tunnels appear as reasonably distinct echoes. Early echo units were
based on tape loops, before analogue charge-coupled devices eliminated the need for moving parts.
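The threshold-and-ratio arithmetic described under Compression can be made concrete. The sketch below computes the static output level of a simple hard-knee compressor; attack, release, knee, and make-up gain are left out for clarity.

def compressed_level(input_db, threshold_db=-18.0, ratio=6.0):
    """Static hard-knee compression: above the threshold, every `ratio` dB of
    input produces only 1 dB more output."""
    if input_db <= threshold_db:
        return input_db                          # below threshold: untouched
    return threshold_db + (input_db - threshold_db) / ratio

print(compressed_level(-12.0))   # 6 dB over the threshold -> only 1 dB over: -17.0
print(compressed_level(-24.0))   # below the threshold, passes through: -24.0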

Audio levels matching and adjustment


A much faster way to get your audio under control is to use Match Volume. This feature, called
“normalization” in other applications, raises the level of the entire clip by the same amount such that
the loudest portion of the clip does not exceed the level that you specify.
To match audio levels, choose Effects > Match Loudness to open the Match Loudness panel, drag in the
files whose levels you want to match, and click Run.

3.2.2 Multitrack special effect application


Adding effects in a multitrack session is similar to adding effects to a single file in the Waveform Editor,
but there are a few key differences:
• Each track in a multitrack session can have its own rack of 16 effects.
• Track effects apply to all files or clips in the track.
You can also apply effects to the individual files or clips in a track. When you do, the file or clip includes
both the clip effects and any overlying track effects.
✓ Effects you add to tracks or clips in a multitrack session appear in the Effects Rack panel, whether
you add them from the Effects menu or from an Effects pop-up menu in the Effects Rack panel.

✓ You don't need to click Apply: the effects are applied to the multitrack session file (.sesx) when
you save it.

✓ You cannot apply process effects to tracks or clips in a multitrack session. Process effects are
identified in the Effects menu by the word (process) in parentheses. To apply process effects, open
the file in the Waveform Editor view, apply the effect, save the file, and add the modified file to your
multitrack session.

Some special effects you can apply in a multitrack session are:


• Reverb and delay, Compression, Presets and favorites, Automation for mixer

MIX BUS
What is a mix bus?
A mix bus is a way of routing multiple tracks into one channel so they can be processed simultaneously.
This means you can use a single channel (the bus) to affect a group of tracks or instruments and adjust
their level and pan from one place.

Learning Unit 4: Finalize sound editing

4.1 Applying noise reduction

Noise reduction is the process of removing noise from a signal. All signal processing devices, both
analogue and digital, have traits that make them susceptible to noise. The Noise Reduction/Restoration
> Noise Reduction effect dramatically reduces background and broadband noise with a minimal
reduction in signal quality. This effect can remove a combination of noise, including tape hiss,
microphone background noise, power-line hum, or any noise that is constant throughout a waveform.

Steps to apply noise reduction:

1. Open the Effects menu.
2. Choose Noise Reduction/Restoration.
3. Choose Noise Reduction or DeNoise.
4. Capture a noise print (if you selected Noise Reduction).
5. Adjust the level and click Apply. (A scripted alternative is sketched below.)
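Outside Audition, the same capture-a-noise-print-then-subtract idea is available in libraries such as the third-party noisereduce package (an assumption, not part of this module; installed with pip along with soundfile). A minimal sketch on a hypothetical file interview.wav:

import soundfile as sf
import noisereduce as nr    # assumption: pip install noisereduce soundfile

audio, sr = sf.read("interview.wav")          # hypothetical noisy recording
if audio.ndim > 1:
    audio = audio.mean(axis=1)                # fold to mono for simplicity

# Treat the first half-second (assumed to be room tone only) as the noise print.
noise_print = audio[: int(0.5 * sr)]
cleaned = nr.reduce_noise(y=audio, sr=sr, y_noise=noise_print)

sf.write("interview_cleaned.wav", cleaned, sr)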

4.2 Export multitrack mix down


• File naming convention
After finalizing your project, you need to give it a name by which it can be identified and
distinguished from other projects.
• Location
Choose a safe location to export your project to, one that makes it easy to work with the final
audio you export.
• Format
An audio file format is a file format for storing digital audio data on a computer system. The bit
layout of the audio data (excluding metadata) is called the audio coding format and can be
uncompressed, or compressed to reduce the file size, often using lossy compression. The data can
be a raw bit stream in an audio coding format, but it is usually embedded in a container format or
an audio data format with a defined storage layer.

Audio file compression


File compression is a method in which the logical size of a file is reduced to save disk space and
to allow easier and faster transmission over a network or the internet.

Types of compression
Lossy compression: the compression algorithm reduces the size of the file by discarding less important
information, which can significantly reduce the file size but also affects quality.
Lossless compression: the compression algorithm reduces the size of the file without losing data/
information in the file.
The original, uncompressed data can be recreated from the compressed version.
Most formats offer a range of degrees of compression, generally measured in bit rate. The lower the
rate, the smaller the file and the more significant the quality loss.
Here are most common examples:
Uncompressed audio formats: WAV, AIFF & PCM
Lossless audio formats: FLAC, ALAC, WMA Lossless
Lossy audio formats: MP3 & AAC
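The practical effect of compression is easiest to see as arithmetic. The sketch below compares one minute of uncompressed CD-quality stereo audio with the same minute encoded at a 192 kbps lossy bit rate.

sample_rate = 44_100      # samples per second (CD quality)
bit_depth = 16            # bits per sample
channels = 2              # stereo
seconds = 60

uncompressed_bits = sample_rate * bit_depth * channels * seconds
lossy_bits = 192_000 * seconds                 # a 192 kbps MP3/AAC stream

print(uncompressed_bits / 8 / 1_000_000)       # ~10.6 MB per minute of WAV
print(lossy_bits / 8 / 1_000_000)              # ~1.4 MB per minute at 192 kbps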

Sample rate and bit rate


The sample rate is the number of times the audio is sampled per second. For example, CD audio has a
sample rate of 44100 Hz. This means that the audio is sampled 44100 times every second. The sample
rate is measured in “Hertz” – a unit of frequency describing cycles per second.
Standard CD-quality audio uses a sample rate of 44.1 kHz (k means times 1,000). Sample rates range
from 8,000 Hz (very low quality) up to 192,000 Hz (very high quality). The disadvantage of very high
sample rates is that they produce huge files, and most listeners cannot hear an improvement above
44,100 Hz.
BIT DEPTH and BIT RATE
Bit rate and bit depth are two more important aspects of digitized sound. From the export (or render)
window in your DAW, you will probably be able to choose from 16, 24, and 32 bits. Soundbite allows
these three options as well as a 32-bit float option. This refers to bit depth: the number of possible
values available to represent each sample of the signal. Professional studios usually work at bit
depths of 24 or 32. In digital multimedia, bit rate refers to the number of bits used per unit of
playback time to represent a continuous medium (such as audio).
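Bit depth translates directly into how many amplitude steps are available and, as a rule of thumb, into how much dynamic range the format can represent (roughly 6 dB per bit). A short sketch:

for bits in (16, 24, 32):
    levels = 2 ** bits                   # number of distinct sample values
    dynamic_range_db = 6.02 * bits       # rule-of-thumb dynamic range
    print(bits, "bits:", levels, "levels, ~", round(dynamic_range_db), "dB")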
Some common audio file extensions/formats:
AIF: Audio Interchange File Format
ASF: Advanced Streaming Format
AU: AU is short for audio
MP3: Files with the .mp3 extension are digitally encoded audio files formally based on MPEG-1 Audio
Layer III or MPEG-2 Audio Layer III. The format was developed by the Moving Picture Experts Group
(MPEG) and uses Layer 3 audio compression.
MPA: a lossy audio format that provides high-quality stereo sound at about 192 kbit/s
SND: Sound
FLAC (hi-res): Free Lossless Audio Codec
WAV (hi-res): Waveform Audio Format
WMA: Windows Media Audio
AAC: Advanced Audio Coding
