Record Sound

● Introduction to sound recording

✔ Sound recording equipment


✔ Audio recording materials
✔ Audio recording accessories
✔ Recorded audio signal factors
Signal to noise ratio
Dynamic range
Frequency response
Sound Quality
✔ Sound wave properties
Frequency
Amplitude
Time period
Velocity
Wavelength
● Selection of audio equipment
✔ Input device
Microphone
Music instrument
✔ Audio Processing devices
Audio mixer
Sound card/Audio interface
Compressor
Gate
Limiter
Expander
De-esser
✔ Output devices
Headphone
Studio monitor
● Selection of audio materials and accessories
✔ Microphone protection
✔ Memory cards
✔ Batteries
✔ External hard drive
✔ Acoustic panels
✔ Soundproofing insulation
✔ Microphone stand
✔ Wind and pop noise protection
✔ Shock and vibration
● Selection of digital audio workstation (DAW) Tools
✔ Operating system
Windows - Based Digital audio workstation (DAW)
Mac-Based Digital audio workstation (DAW)
Cross-platform software (DAW)
✔ Digital audio workstation features
Multitrack Recording
MIDI Support
✔ Hardware requirements
Processor (CPU)
RAM (Random access memory)
Storage
Graphic card

LEARNING OUTCOME 1: PREPARE AUDIO EQUIPMENT, TOOLS AND MATERIALS

1.1: Introduction to sound recording


✔ Sound recording equipment

Gather your equipment. This will vary depending on the specific audio project you are working on, but some
common items include:
Microphones
Microphone stands
Mixing boards
Monitors
Amps
Cables
Light boards (for live performances)
Live mixing DAWs (for live recording and production)

✔ Audio recording materials

Audio recording materials are the tools and supplies that are used to capture and store sound. This can include
microphones, recording interfaces, digital audio workstations (DAWs), and storage media.
Microphones are devices that convert sound waves into electrical signals. There are many different types of
microphones available, each with its own unique characteristics. Some common types of microphones include:

Dynamic microphones: Dynamic microphones use a moving coil suspended in a magnetic field to generate an
electrical signal. They are rugged, durable, and relatively inexpensive, which makes them the most common choice
for live sound reinforcement; they are also widely used in recording studios and broadcasting.

Prepared by ISHIMO Jean d’Amour from LYCEE DE KICUKIRO APADE 0785425426


Condenser microphones: Condenser microphones are more sensitive than dynamic microphones, making them
ideal for recording delicate sounds. However, they are also more fragile and require phantom power to operate.

Ribbon microphones: Ribbon microphones, also known as velocity microphones, use a thin ribbon of conductive
material suspended in a magnetic field to generate an electrical signal. They are known for their warm, natural
sound and their ability to capture subtle details, and they are often used to record vocals and acoustic
instruments.

Recording interfaces

Recording interfaces are devices that allow you to connect your microphones to your computer. They also provide
preamplification and A/D conversion, which are necessary for recording audio digitally.
Digital audio workstations (DAWs) are software programs that are used to record, edit, and mix audio. DAWs
provide a variety of features, such as track recording, editing, and mixing tools, as well as effects and plugins.

Storage media

Storage media is used to store your recorded audio files. Common types of storage media include hard drives, solid
state drives (SSDs), and optical discs.

 Hard disk drives (HDDs): HDDs are the most common type of storage media used in computers. They are
relatively inexpensive and can store a lot of data. However, they are also slower and less reliable than
other types of storage media.

Hard disk drive

 Solid-state drives (SSDs): SSDs are a newer type of storage media that is becoming increasingly popular.
They are faster and more reliable than HDDs, but they are also more expensive.

Solid-state drive

 Optical discs: Optical discs, such as CDs, DVDs, and Blu-ray discs, are a type of storage media that uses
lasers to read and write data. Optical discs are relatively inexpensive and can store a lot of data, but they
are also slower and less durable than other types of storage media.

 Flash drives: Flash drives are a type of portable storage media that uses NAND flash memory to store
data. Flash drives are small, lightweight, and durable, but they can be more expensive than other types of
storage media.

Flash drive

✔ Audio recording accessories

Audio recording accessories are devices and supplies that can be used to improve the sound quality of your
recordings, or make the recording process easier and more efficient.
Here are some common audio recording accessories:
Microphone stands: Microphone stands hold your microphone in place, and allow you to position it accurately.
There are many different types of microphone stands available, including floor stands, desktop stands, and boom
stands.

Microphone stand

Cables: Cables are used to connect your microphone, recording interface, and other audio equipment. There are
many different types of audio cables available, each with its own specific purpose. For example, XLR cables are
typically used to connect microphones to recording interfaces, while TRS cables are often used to connect monitors
to mixers.

Audio cable

Pop filters: Pop filters reduce plosives, which are the popping sounds that can occur when pronouncing certain
consonants, such as "p" and "b." Pop filters are typically attached to the microphone stand, and placed between the
microphone and the speaker's mouth.

Pop filter
Shock mounts: Shock mounts isolate your microphones from vibration, which can improve the sound quality of
your recordings. Shock mounts are especially important when using condenser microphones, which are more
sensitive to vibration than dynamic microphones.

Shock mount
Headphones: Headphones are used to monitor your audio during recording and mixing. Headphones allow you to
hear your recording in isolation, so that you can make adjustments as needed.

Prepared by ISHIMO Jean d’Amour from LYCEE DE KICUKIRO APADE 0785425426


Headphones
Speakers: Speakers are used to playback your recorded audio. Speakers allow you to hear your recording in its
entirety, and to share it with others.

Speakers
In addition to the above accessories, there are a number of other audio recording accessories that can be useful,
depending on your specific needs. For example, you may want to consider using a windscreen when recording
outdoors, or a Cloudlifter-style inline booster if you are using a low-output dynamic or ribbon microphone with a
low-gain preamp.
Here are some additional audio recording accessories that you may find useful:

Windscreen: A windscreen reduces wind noise when recording outdoors. Windscreens are typically made of fur or
foam, and they fit over the microphone.

Windscreen
Cloudlifter: A Cloudlifter is an inline device that boosts the signal level of a microphone before it reaches the
preamp. This can be useful if you are using a low-output dynamic or ribbon microphone with a low-gain preamp.
DI box: A DI box, or direct injection box, converts a high-impedance signal from an instrument, such as an electric
guitar, to a low-impedance signal that can be connected to a mixer or recording interface.

Audio recorder: A portable audio recorder is a device that can be used to record audio without the need for a
computer. Portable audio recorders are often used to record interviews, lectures, and other events.

Audio recorder
With the right audio recording accessories, you can improve the sound quality of your recordings, and make the
recording process easier and more efficient.

1.1.4 Recorded audio signal factors

Signal-to-noise ratio (SNR) is a measure of the strength of the desired signal relative to the background noise. A
higher SNR indicates that the desired signal is louder than the background noise, resulting in better sound quality.

Dynamic range is the difference between the loudest and softest sounds that can be accurately represented in an
audio signal. A higher dynamic range allows for more detail and nuance in the sound quality.

Frequency response is the range of frequencies that an audio signal can accurately reproduce. A wider frequency
response allows for a more complete and accurate representation of the sound.

Sound quality is a subjective measure of how good an audio signal sounds. It is influenced by a number of factors,
including SNR, dynamic range, frequency response, and other factors such as distortion and artifacts.
How these four factors relate to each other
SNR, dynamic range, and frequency response are all important factors in determining the sound quality of a recorded
audio signal. A high SNR, wide dynamic range, and accurate frequency response will generally result in better sound
quality.
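As a rough illustration of the SNR definition above, the ratio can be computed from the RMS levels of a signal and its noise floor. The helper names below (`rms`, `snr_db`) are illustrative, not from any particular library:

```python
import math

def rms(samples):
    """Root-mean-square level of a list of sample values."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 20 * log10(rms_signal / rms_noise)."""
    return 20 * math.log10(rms(signal) / rms(noise))

# A 'signal' 100 times stronger than the noise floor (illustrative values):
signal = [0.5, -0.5, 0.5, -0.5]
noise = [0.005, -0.005, 0.005, -0.005]
print(round(snr_db(signal, noise)))  # 40 (dB)
```

A 40 dB figure means the signal's RMS level is 100 times the noise's, which is why a higher SNR corresponds to a cleaner-sounding recording.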

1.1.5 Sound wave properties

1. Frequency: The frequency of a sound wave is the number of cycles that the wave completes per second. It is
measured in hertz (Hz). Higher frequency sound waves have shorter wavelengths and are perceived as higher pitched
sounds. Lower frequency sound waves have longer wavelengths and are perceived as lower pitched sounds.
2. Amplitude: The amplitude of a sound wave is the displacement of the wave from its equilibrium position. It
is measured in meters (m). Higher amplitude sound waves have more energy and are perceived as louder
sounds. Lower amplitude sound waves have less energy and are perceived as softer sounds.
3. Time period: The time period of a sound wave is the time it takes for the wave to complete one cycle. It is
measured in seconds (s). The time period is inversely proportional to the frequency of the wave. This means
that higher frequency sound waves have shorter time periods and lower frequency sound waves have longer
time periods.
4. Velocity: The velocity of a sound wave is the speed at which the wave travels through a medium. It is
measured in meters per second (m/s). The velocity of a sound wave is determined by the medium through
which it is traveling. For example, sound waves travel faster through air than through water.
5. Wavelength: The wavelength of a sound wave is the distance between two successive peaks of the wave. It
is measured in meters (m). The wavelength of a sound wave is inversely proportional to its frequency. This
means that higher frequency sound waves have shorter wavelengths and lower frequency sound waves have
longer wavelengths.

The relationship between frequency, wavelength, and velocity of a sound wave can be expressed by the following
equation:
v = fλ
where:
v is the velocity of the wave
f is the frequency of the wave
λ is the wavelength of the wave
This equation tells us that the velocity of a sound wave is equal to the product of its frequency and wavelength. This
means that if we know the frequency of a sound wave, we can calculate its wavelength by dividing the velocity of
sound by the frequency. Conversely, if we know the wavelength of a sound wave, we can calculate its frequency by
dividing the velocity of sound by the wavelength.
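The v = fλ and T = 1/f relationships can be sketched in a few lines of Python; the function names and the 343 m/s figure for air at about 20 °C are assumptions for illustration:

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s in air at roughly 20 degrees Celsius

def wavelength(frequency_hz, velocity=SPEED_OF_SOUND_AIR):
    """Wavelength from the wave equation: lambda = v / f."""
    return velocity / frequency_hz

def frequency(wavelength_m, velocity=SPEED_OF_SOUND_AIR):
    """Frequency from the wave equation: f = v / lambda."""
    return velocity / wavelength_m

def period(frequency_hz):
    """Time period is the reciprocal of frequency: T = 1 / f."""
    return 1.0 / frequency_hz

# Concert pitch A4 (440 Hz) travelling through air:
print(round(wavelength(440.0), 3))  # 0.78 (metres)
```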
The properties of sound waves are important for understanding how sound is produced, transmitted, and perceived.
By understanding these properties, we can design better sound systems and improve the quality of our recordings.

1.2: Selection of audio equipment

1.2.1 Input Devices

1. A microphone

A microphone is an input device that converts sound waves into electrical signals. Microphones are used in a
variety of applications, including recording, live sound reinforcement, and broadcasting.

2. Music instruments

Music instruments can also serve as input devices. Electric and electronic instruments convert mechanical
vibration into electrical signals through pickups or sensors, while acoustic instruments are captured with
microphones. Music instruments are used in a variety of applications, including live performance, recording, and
composition.

There are many different types of music instruments, each with its own unique sound and characteristics.
Some of the most common types of music instruments include:

 Stringed instruments: Stringed instruments, such as the guitar, violin, and cello, produce sound when the
strings are vibrated.
 Wind instruments: Wind instruments, such as the flute, trumpet, and trombone, produce sound when the
player blows into them.
 Percussion instruments: Percussion instruments, such as the drums and cymbals, produce sound when
they are struck.

1.2.2: Audio processing devices

Audio processing devices are used to modify the sound of audio signals. They can be used to improve
the sound quality, add effects, or change the volume of the signal.

Here are some of the most common audio processing devices:

 Audio mixer: An audio mixer is a device that allows you to combine and control multiple audio signals. It
typically has multiple inputs, each of which can be connected to a microphone, instrument, or other audio
source. The mixer also has multiple outputs, which can be connected to speakers, headphones, or other
recording equipment.

 Sound card/Audio interface: A sound card or audio interface is a device that allows you to connect your
audio equipment to your computer. It converts the analog signals from your audio equipment into digital
signals that your computer can understand and process.
 Compressor: A compressor is a device that reduces the dynamic range of an audio signal. It attenuates the
parts of the signal that exceed a set threshold, so that, with make-up gain applied, the soft parts end up
relatively louder. Compressors are often used to improve the intelligibility of vocals and to make drums sound
more punchy.
 Gate: A gate is a device that silences an audio signal below a certain threshold. This is useful for
removing background noise from recordings.
 Limiter: A limiter is a device that prevents an audio signal from exceeding a certain level. This is useful
for protecting speakers from damage and for preventing clipping.
 Expander: An expander is the opposite of a compressor. It increases the dynamic range of an audio signal,
typically by making the parts of the signal below a threshold even softer (downward expansion). Expanders are
often used to reduce low-level noise and to create a more spacious and open sound.
 De-esser: A de-esser is a device that reduces excessive sibilance (hissing sounds) in an audio signal.
De-essers are often used on vocal recordings.
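To make the dynamics processors above concrete, here is a toy per-sample sketch of a gate, a limiter, and a compressor operating on normalized sample values. Real units work in decibels and apply attack/release smoothing, which this deliberately omits; all function names are illustrative:

```python
def noise_gate(samples, threshold=0.02):
    """Gate: silence any sample whose magnitude falls below the threshold."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

def limiter(samples, ceiling=0.8):
    """Limiter: clamp samples so they never exceed the ceiling level."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def compressor(samples, threshold=0.5, ratio=4.0):
    """Compressor: above the threshold, reduce the excess level by the ratio."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

audio = [0.01, 0.3, 0.9, -1.2]
print(noise_gate(audio))                          # quiet 0.01 is silenced
print(limiter(audio))                             # peaks clamped to +/-0.8
print([round(x, 3) for x in compressor(audio)])   # [0.01, 0.3, 0.6, -0.675]
```

Note how the compressor leaves the quiet samples untouched and only turns down the loud ones; the "soft parts get louder" effect comes from adding make-up gain afterwards.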
1.2.3 Output devices

Headphones and studio monitors are two types of output devices that are used to listen to audio.
Headphones are worn over the ears and provide a personal listening experience. Studio monitors are
speakers that are designed to reproduce audio accurately.

1. Headphones

Headphones are a convenient and portable way to listen to audio. They can be used anywhere, such as
on the go, in the studio, or at home. Headphones come in a variety of different types, each with its own
unique design and sound characteristics.

2. Studio monitors

Studio monitors are designed to reproduce audio accurately. They are typically used in recording studios, but they
can also be used for home listening. Studio monitors come in a variety of different sizes and configurations, each
with its own unique sound characteristics.

1.2.4 Selection of audio materials and accessories

The selection of audio materials and accessories will depend on the specific needs of the user. However,
the following items are a good starting point:

 Microphone protection

 A foam windscreen can help to reduce wind noise and plosives (hard consonants such as "p" and "b").
 A metal pop filter can also help to reduce plosives, and can be used in conjunction with a foam
windscreen.
 A shock mount can help to reduce vibration and noise from handling the microphone.

 Memory cards

 Memory cards are used to store audio recordings on digital recorders and cameras.
 Choose a memory card with a fast enough write speed to handle the audio format you are recording in.
 It is also a good idea to have a backup memory card in case your primary card fails.

 Batteries

 Many audio devices are powered by batteries.


 Choose rechargeable batteries whenever possible, as this will save you money in the long run.
 Be sure to have a spare set of batteries on hand, especially if you are recording in a remote location.

 External hard drive

 An external hard drive can be used to store large amounts of audio recordings.
 Choose a hard drive with a fast enough transfer speed to handle the audio format you are recording in.
 It is also a good idea to back up your audio recordings to a second external hard drive or to a cloud
storage service.

 Acoustic panels

 Acoustic panels can help to reduce echoes and improve the sound quality in a room.
 Acoustic panels can be placed on the walls and ceiling of a room, or they can be used to create a vocal
booth.

 Soundproofing insulation

 Soundproofing insulation can help to reduce noise from outside entering a room.
 Soundproofing insulation can be placed in the walls and ceiling of a room, or it can be used to create a
vocal booth.

 Microphone stand

 A microphone stand can help to position the microphone at the correct height and distance from the
speaker.
 Microphone stands come in a variety of styles, so choose one that is appropriate for your needs.

 Wind and pop noise protection

 A foam windscreen can help to reduce wind noise and plosives (hard consonants such as "p" and "b").
 A metal pop filter can also help to reduce plosives, and can be used in conjunction with a foam
windscreen.

 Shock and vibration protection

 A shock mount can help to reduce vibration and noise from handling the microphone.

In addition to the above items, there are a number of other audio materials and accessories that may be
useful, such as:

 A mixer can be used to combine multiple audio signals into a single signal.
 A preamplifier can be used to boost the signal from a microphone or other audio device.
 A compressor can be used to reduce the dynamic range of an audio signal, making it more consistent in
volume.
 An equalizer can be used to adjust the frequency response of an audio signal.
 A limiter can be used to prevent an audio signal from exceeding a certain level.

1.3 Selection of digital audio workstation (DAW) Tools

The selection of digital audio workstation (DAW) tools is vast and varied, with options available for Windows, Mac,
and cross-platform users. Here is a list of some of the most popular DAWs, organized by operating system
compatibility:
✔ Operating system


Windows-Based DAWs

 Ableton Live
 FL Studio

 Pro Tools
 PreSonus Studio One
 Reaper
 Steinberg Cubase
 Bitwig Studio
 Propellerhead Reason
 Acoustica Mixcraft
 MOTU Digital Performer
 Audacity

Mac-Based DAWs

 GarageBand
 Logic Pro
 Ableton Live
 Cubase Pro
 Bitwig Studio
 Propellerhead Reason
 Digital Performer
 FL Studio (native macOS version available since FL Studio 20)

Cross-Platform DAWs

 Ardour
 Reaper (Cockos)
 Bitwig Studio
 Propellerhead Reason
 Tracktion Waveform
 LMMS (Linux MultiMedia Studio)
 Hydrogen (drum machine)
 MuseScore (notation software)

When choosing a DAW, it is important to consider your needs and budget. Some factors to think about
include:

 Operating system compatibility: Make sure to choose a DAW that is compatible with your operating
system.
 Feature set: Different DAWs have different feature sets. Consider which features are important to
you, such as multitrack recording, MIDI sequencing, audio editing, mixing, and mastering.
 Workflow: DAWs have different workflows. Some DAWs are more linear, while others are more non-
linear. Experiment with different DAWs to see which one has the workflow that you prefer.
 Price: DAWs can range in price from free to several hundred dollars. Choose a DAW that fits your budget.

If you are new to music production, I recommend starting with a free or low-cost DAW. Once you have
learned the basics of music production, you can then upgrade to a more powerful DAW if needed.

Here are a few specific DAW recommendations:

 For beginners: Audacity (free), Reaper (low-cost), GarageBand (free for Mac users)
 For intermediate users: FL Studio, Ableton Live, PreSonus Studio One
 For professional users: Pro Tools, Steinberg Cubase, Logic Pro

✔ Digital audio workstation features

 Multitrack recording: This allows you to record multiple audio tracks simultaneously, such as
vocals, guitar, bass, and drums. Once the tracks are recorded, you can edit, mix, and master them to
create a finished song.
 MIDI support: MIDI (Musical Instrument Digital Interface) is a protocol that allows you to connect
electronic musical instruments to computers. This allows you to control virtual instruments and
sequencers with your MIDI keyboard or other MIDI controller.
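As a small illustration of the MIDI protocol mentioned above, a channel-voice Note On message is three bytes: a status byte (0x90 plus the channel number), the note number, and the velocity. The helper functions below are illustrative sketches, not part of any particular MIDI library:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message: status 0x90 | channel, note, velocity."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Note Off message (status 0x80 | channel); release velocity 0 is common."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) at moderate velocity on MIDI channel 1 (index 0):
msg = note_on(0, 60, 100)
print(msg.hex())  # '903c64'
```

These three bytes are what a MIDI keyboard actually sends to the DAW; the DAW then uses them to trigger a note in a virtual instrument.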

Hardware Requirements

 Processor (CPU): The processor is the most important component of your computer for music
production. It is responsible for processing all of the audio and MIDI data. A faster processor will allow you
to run more tracks and plugins without experiencing any latency or performance issues.
 RAM (Random access memory): RAM is used to store the audio and MIDI data that is currently being
processed by your computer. The more RAM you have, the more tracks and plugins you can use.
 Storage: You will need enough storage space to store your audio files, MIDI files, and plugin libraries. If
you plan on recording high-resolution audio, you will need even more storage space.
 Graphic card: A dedicated graphics card is not essential for music production, but it can be helpful if you
plan on using video editing software or other graphics-intensive applications.
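Storage needs for uncompressed recording can be estimated from sample rate, bit depth, channel count, and duration; the helper below is a hypothetical sketch of that arithmetic:

```python
def wav_size_bytes(sample_rate, bit_depth, channels, seconds):
    """Uncompressed PCM audio size: rate * (bits / 8) * channels * duration."""
    return sample_rate * (bit_depth // 8) * channels * seconds

# One hour of stereo CD-quality audio (44.1 kHz, 16-bit):
print(wav_size_bytes(44_100, 16, 2, 3600) / 1_000_000)    # 635.04 (MB)

# One hour of stereo high-resolution audio (96 kHz, 24-bit):
print(wav_size_bytes(96_000, 24, 2, 3600) / 1_000_000_000)  # 2.0736 (GB)
```

This is why high-resolution multitrack sessions fill drives quickly: each doubling of sample rate, and each additional track, scales the storage requirement linearly.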

Recommended Hardware Requirements

 Processor: Intel Core i7 or AMD Ryzen 7 processor or higher


 RAM: 32GB or more
 Storage: 1TB SSD or more
 Graphic card: NVIDIA GeForce RTX 3060 or AMD Radeon RX 6600 or higher

Learning outcome 2: Setup Audio equipment

2.1 Set up of audio input device


1. Microphone placement

The placement of your microphone will have a big impact on the sound quality of your recordings. Here
are a few tips for placing your microphone:

 Place the microphone directly in front of your mouth, but slightly off-center. This will help to reduce
plosives (harsh consonant sounds).
 Keep the microphone at a distance of 6-12 inches from your mouth.
 Avoid placing the microphone in front of a hard surface, such as a wall or desk. This can cause reflections
and feedback.
 Experiment with different microphone placements to find the one that sounds best to you.
Microphone Polar pattern

A microphone polar pattern is a graphical representation of a microphone's directional sensitivity to sound.
In other words, it shows how well the microphone will pick up sound from different directions.

There are three main types of microphone polar patterns:

 Omnidirectional: Omnidirectional microphones pick up sound equally from all directions.


 Unidirectional: Unidirectional microphones are most sensitive to sound coming from directly in front of
them. They are less sensitive to sound coming from the sides and back.
 Bidirectional: Bidirectional microphones are equally sensitive to sound coming from the front and
back. They are less sensitive to sound coming from the sides.

Within the unidirectional category, there are three subcategories:

 Cardioid: Cardioid microphones are most sensitive to sound coming from directly in front of them and
least sensitive to sound coming from behind them.
 Super cardioid: Super cardioid microphones are even more directional than cardioid microphones. They
are very good at rejecting sound from the sides and back.
 Hyper cardioid: Hyper cardioid microphones are the most directional type of microphone. They are
extremely good at rejecting sound from the sides and back.
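These directional patterns are commonly modeled as first-order combinations of an omnidirectional and a bidirectional response, gain(θ) = a + (1 − a)·cos θ. The coefficient values below are the conventional textbook ones; the function name is illustrative:

```python
import math

def pattern_gain(theta_deg, a):
    """First-order polar pattern: gain = a + (1 - a) * cos(theta).
    a = 1.0 omni, 0.5 cardioid, ~0.37 supercardioid,
    0.25 hypercardioid, 0.0 bidirectional (figure-8)."""
    theta = math.radians(theta_deg)
    return a + (1 - a) * math.cos(theta)

# A cardioid fully rejects sound arriving from directly behind (180 degrees):
print(round(pattern_gain(180, 0.5), 6))  # 0.0

# A bidirectional (figure-8) mic rejects sound arriving from the side (90 degrees):
print(round(pattern_gain(90, 0.0), 6))   # ~0.0
```

Evaluating the formula at a few angles makes the trade-off visible: the more directional patterns reject more from the rear and sides, at the cost of narrower front pickup.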

Each type of microphone polar pattern has its own advantages and disadvantages.

Omnidirectional microphones are good for recording multiple people or instruments in a single room. They
are also good for recording ambient noise. However, they can pick up unwanted sounds from the sides
and back.

Unidirectional microphones are good for recording a single person or instrument in isolation. They are
also good for reducing feedback in live sound applications. However, they can make it difficult to record
ambient noise.

Bidirectional microphones are good for recording two people or instruments positioned on opposite sides of the
microphone, and they are often used for interviews and for mid-side or Blumlein stereo techniques. However,
because they are equally sensitive at the front and back, they will also pick up unwanted sounds from behind the
microphone.

When choosing a microphone, it is important to consider the polar pattern and how it will affect your
recordings.

Here are some examples of how to use different microphone polar patterns in different recording
situations:

 Record a single vocal: Use a unidirectional microphone, such as a cardioid or supercardioid microphone, to
focus on the vocal and reduce unwanted background noise.

 Record a band in a studio: Use a combination of unidirectional and omnidirectional microphones to
capture the sound of the band as well as the ambiance of the room.
 Record an interview: Use a bidirectional microphone placed between the interviewer and the interviewee to
capture both voices with a single microphone.
 Record a live performance: Use unidirectional microphones to focus on the sound of the performers and
reduce feedback.
2. Microphone Power

Microphone power is the electrical power that is used to operate a microphone. There are two main types
of microphone power: phantom power and battery power.

Phantom power is a type of DC power that is supplied to a microphone through the microphone cable.
Phantom power is typically used to power condenser microphones, which require a small amount of
power to operate.

To use phantom power, you will need a mixer or audio interface that has phantom power capability.
Simply connect the microphone to the mixer or audio interface using a microphone cable, and then enable
phantom power on the mixer or audio interface.

Battery power is used for microphones that operate without phantom power, such as some portable electret
condenser microphones; standard dynamic microphones are passive and require no power at all. Battery-powered
microphones typically have a built-in battery compartment. To install the battery, open the battery compartment
and insert the battery according to the polarity markings.

Some microphones, such as electret condenser microphones, can be powered by either phantom power
or battery power. This type of microphone typically has a switch that allows you to select the desired
power source.

Here are some additional things to keep in mind about microphone power:

 Phantom power is typically 48 volts DC. Some mixers and audio interfaces may also offer 24 volts DC or
12 volts DC phantom power.
 Make sure to match the voltage of the phantom power supply to the voltage requirement of the
microphone.
 Do not enable phantom power if you are using a microphone that does not require phantom power. This
can damage the microphone.
 If you are using a battery-powered microphone, make sure to replace the battery regularly to ensure
optimal performance.

2.2 Musical instrument

 Electric guitar:

Electric guitars are one of the most popular musical instruments in the world. They are used in a wide
variety of musical genres, including rock, pop, blues, and jazz. Electric guitars work by converting the
vibrations of the strings into an electrical signal, which is then amplified and sent to a speaker.

Electric guitars have a number of advantages over acoustic guitars. Their signal can be amplified to high volumes
and shaped with effects such as distortion, which makes them ideal for playing in loud bands. They are also more
versatile than acoustic guitars, as they can be used to create a wide range of different sounds.

 Piano:

Pianos are another popular musical instrument. They are used in a wide variety of musical genres,
including classical, jazz, pop, and rock. Pianos work by striking hammers against metal strings, which
produces a sound that is amplified by a soundboard.

Pianos are known for their rich and expressive sound. They are also very versatile instruments, as they
can be used to play a wide range of different styles of music.

 Electronic drum:

Electronic drums are a type of drum that uses electronic sensors to trigger sound samples. Electronic
drums are often used in pop, rock, and dance music.

Electronic drums have a number of advantages over acoustic drums. They are typically quieter than
acoustic drums, which makes them ideal for practicing in apartments or other small spaces. They are also
more versatile than acoustic drums, as they can be used to create a wide range of different sounds.

2.3 Audio interface


 Input and output

Audio interface inputs are used to connect audio sources to the interface. Common input types include:

 XLR: XLR inputs are typically used for microphones. They provide a balanced signal, which is less
susceptible to noise and interference.
 TRS: TRS (tip-ring-sleeve) inputs are typically used for balanced line-level signals, such as from keyboards
and synthesizers. Electric guitars output an unbalanced, high-impedance signal and are usually connected through a
dedicated instrument (Hi-Z) input or a DI box.
 RCA: RCA inputs are typically used for consumer-grade audio devices, such as CD players and MP3
players.
 Optical: Optical inputs are used for digital audio signals. They are less susceptible to noise and
interference than coaxial inputs.

Audio interface output



Audio interface outputs are used to connect the interface to speakers or headphones. Common
output types include:

 XLR: XLR outputs are typically used to connect to studio monitors. They provide a balanced signal, which
is less susceptible to noise and interference.
 TRS: TRS outputs are typically used to connect to headphones or consumer-grade speakers.
 RCA: RCA outputs are typically used to connect to consumer-grade speakers.
 Optical: Optical outputs are used to connect to digital audio devices, such as digital recorders and
converters.

Examples of audio interface input and output

 Microphone: XLR input


 Electric guitar: TRS input
 Keyboard: TRS input
 Synthesizer: TRS input
 CD player: RCA input
 MP3 player: RCA input
 Studio monitors: XLR output
 Headphones: TRS output
 Consumer-grade speakers: TRS output or RCA output
 Digital recorder: Optical output
 Converter: Optical output

 Audio interface latency



Audio interface latency is the delay between the time that an audio signal enters the audio interface and
the time that it is output from the audio interface. This delay is caused by the various stages of processing
that the audio signal goes through, such as analog-to-digital conversion, digital signal processing, and
digital-to-analog conversion.

The amount of latency that an audio interface introduces will vary depending on a number of factors,
including:

 Sample rate: The sample rate is the number of times per second that the audio signal is sampled. For a given buffer size in samples, a higher sample rate will result in lower latency.
 Buffer size: The buffer size is the amount of audio data that is processed at once by the audio interface. A
smaller buffer size will result in lower latency.
 CPU power: The CPU power of the computer that the audio interface is connected to will also affect
latency. A faster CPU will result in lower latency.
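The buffer latency described above can be computed directly. This is a simplified model that ignores converter and driver overhead; real round-trip latency is higher because the signal passes through at least one input and one output buffer:

```python
def buffer_latency_ms(buffer_size, sample_rate):
    """One-way latency introduced by a single buffer, in milliseconds."""
    return buffer_size / sample_rate * 1000

# Halving the buffer size halves the latency; raising the sample rate
# (with the buffer size fixed in samples) also lowers it.
# 256 samples at 44.1 kHz is about 5.8 ms; at 96 kHz, about 2.7 ms.
```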

2.4 Microphone protection set up

Setting up a microphone protection kit is essential to ensure the longevity and performance of your microphone.
Here's a breakdown of the components you've listed and their roles in protecting and maintaining a microphone:

1. Pop Filter: A pop filter is a screen or shield placed in front of the microphone to reduce plosive sounds like
"p" and "b" sounds that can cause unwanted distortion. It helps protect the microphone from moisture and
spit particles that can accumulate over time.
2. Shock Mount: A shock mount is a suspension system that holds the microphone and isolates it from
vibrations and physical shocks. This helps prevent handling noise, mechanical vibrations, and impact
damage from reaching the microphone.
3. Microphone Windshield: A microphone windshield, also known as a foam windscreen or a "dead cat" (for
larger microphones), is designed to protect the microphone from wind noise, plosives, and light physical
contact. It's particularly useful for outdoor recording or in situations with airflow.
4. Audio Cable Protection: Properly managing and protecting audio cables is essential for maintaining signal
integrity. Cable management techniques, such as cable clips, organizers, and strain relief connectors, help
prevent cable damage and interference.
5. Storage and Transport: Safely storing and transporting your microphone is crucial to protect it from
physical damage and environmental factors. Using a protective case or bag designed for microphones can
keep your equipment safe during transit and when not in use.

2.5 Audio processing devices set up

Setting up audio processing devices involves arranging and connecting various components to optimize the quality
and control of audio signals. Here's a breakdown of the components you've listed and their role in the audio
processing setup:

1. Signal Chain Placement: The signal chain refers to the order in which audio processing devices are
connected. It's crucial to place these devices in a logical sequence to achieve the desired audio effect and
ensure smooth signal flow.
2. Audio Connection: Use appropriate audio cables and connectors to establish connections between your
audio devices. Common cable types include XLR cables, 1/4-inch cables, RCA cables, and digital audio
cables like USB or HDMI, depending on your equipment.
3. Computer: The computer serves as the central control unit for audio processing when using digital audio
workstations (DAWs) or software-based effects. It's where you control and manipulate audio signals.
4. Sound Card: A sound card (or audio interface) is essential for converting analog audio signals into digital
data that can be processed by the computer. It also converts digital audio back to analog for output.
5. Equalizer (EQ): An equalizer allows you to adjust the frequency response of the audio signal. It can boost or
attenuate specific frequency ranges to shape the tonal quality of the sound.
6. Limiter: A limiter is used to set a maximum output level for an audio signal. It prevents signal peaks from
exceeding a certain threshold, helping to control dynamics and prevent clipping.
7. Gate: A gate is used to control the presence of audio signals below a set threshold. It can be helpful in
reducing background noise or unwanted sounds when the signal falls below the threshold.
8. Compressor: A compressor reduces the dynamic range of an audio signal by attenuating loud sounds and
boosting quiet sounds. This helps in achieving a more consistent audio level.
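The dynamics processors above can be sketched as static gain curves. This is a simplified model — real compressors and gates also apply attack and release smoothing — and the threshold and ratio values below are illustrative defaults, not standards. A limiter behaves like a compressor with a very high ratio:

```python
def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: above the threshold, the output level
    rises only 1/ratio dB for each 1 dB of input. Returns the gain
    change (in dB) applied to the signal; 0.0 means unity gain."""
    if level_db <= threshold_db:
        return 0.0  # below threshold: signal passes unchanged
    output_db = threshold_db + (level_db - threshold_db) / ratio
    return output_db - level_db  # negative: gain reduction

def gate_gain_db(level_db, threshold_db=-50.0, floor_db=-80.0):
    """Static gate curve: signals below the threshold are attenuated
    to the floor, which suppresses background noise between phrases."""
    return 0.0 if level_db >= threshold_db else floor_db
```

For example, with a -20 dB threshold and a 4:1 ratio, a -8 dB input is 12 dB over the threshold, so the output rises only 3 dB above it (-17 dB): 9 dB of gain reduction.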

2.6 Digital audio workstation set up

2.6.1 Digital audio workstation installation

Step 1: Download the DAW installer

Go to the website of the DAW you want to install and download the installer.

Step 2: Run the installer

Once the installer has downloaded, run it and follow the on-screen instructions.

Step 3: Authorize the DAW

If required, authorize the DAW using your license key.

Step 4: Create a new project

Once the DAW is installed and authorized, create a new project.

Step 5: Import your audio files

Import the audio files you want to work with into the DAW.

Step 6: Record audio

If you want to record new audio, connect your microphone or instrument to your computer and record
audio into the DAW.

Step 7: Mix and edit your audio


Use the DAW's tools to mix and edit your audio.

Step 8: Export your audio

Once you are finished mixing and editing your audio, export it from the DAW in a format of your choice.

2.6.2 Driver and ASIO settings

Drivers are software components that allow your computer to communicate with your audio hardware. It is
important to install the latest drivers for your audio hardware to ensure that it works properly with your
DAW.

ASIO is a low-latency audio protocol that allows your DAW to access your audio hardware directly. This
can reduce latency, which is the time it takes for your DAW to process audio and produce sound.

To set up ASIO in your DAW:

1. Go to the DAW's preferences or settings menu.


2. Select the audio tab.
3. Select ASIO as the audio driver.
4. Select your audio interface as the ASIO device.
5. Click OK to save your changes.
2.6.3 Plugin format

Plugins are software components that add new features and functionality to your DAW. There are many
different types of plugins available, including instruments, effects, and processors.

The most common plugin format is VST. VST plugins are supported by most DAWs.

Other popular plugin formats include:

 AU (Audio Units): Used by Apple Logic Pro and GarageBand.


 AAX (Avid Audio eXtension): Used by Avid Pro Tools.
 ReWire: Used to connect different DAWs together.



To install a plugin:

1. Download the plugin installer.


2. Run the installer and follow the on-screen instructions.
3. Once the plugin is installed, open your DAW and scan for new plugins.
4. Once the plugin has been scanned, you should be able to find it in the DAW's plugin menu.

2.7 Digital audio workstation settings

Input level

The input level is the amount of signal that is being sent to the DAW from your audio interface. It is
important to set the input level correctly to avoid clipping. Clipping occurs when the signal is too loud and
distorts.

To set the input level correctly, record a test signal and adjust the input gain until the signal is peaking at around -18 dBFS, leaving plenty of headroom below 0 dBFS.

Recording format

The recording format is the sample rate and bit depth of the audio that is being recorded. The sample rate
is the number of times per second that the audio signal is sampled. The bit depth is the number of bits
that are used to represent each sample.

The most common recording format is 16-bit/44.1 kHz. This format is CD quality and is suitable for most
applications. However, if you are recording high-quality audio, you may want to use a higher sample rate
and bit depth, such as 24-bit/96 kHz.
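The trade-off between these formats can be quantified: uncompressed PCM storage grows with sample rate and bit depth, and each extra bit adds about 6.02 dB of theoretical dynamic range. A minimal sketch:

```python
def wav_bytes(seconds, sample_rate=44100, bit_depth=16, channels=2):
    """Uncompressed PCM size: samples/sec x bytes/sample x channels x time."""
    return int(seconds * sample_rate * (bit_depth // 8) * channels)

def theoretical_dynamic_range_db(bit_depth):
    """Each bit of resolution adds roughly 6.02 dB of dynamic range."""
    return 6.02 * bit_depth

# One minute of 16-bit/44.1 kHz stereo is about 10 MB;
# 16-bit gives about 96 dB of range, 24-bit about 144 dB.
```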

Recording channels

The recording channels are the number of audio tracks that you want to record at the same time. For
example, if you are recording a band, you will need to set the recording channels to the number of
instruments in the band.

Buffer size

The buffer size is the amount of audio data that the DAW processes at once. A smaller buffer size will reduce latency, which is the time it takes for audio to be processed and played back. However, a smaller buffer size also increases the load on the CPU and can cause clicks and dropouts if the computer cannot keep up.
A good starting point for the buffer size is 256 samples. However, you may need to experiment with
different buffer sizes to find the one that works best for your system.

Monitor settings

The monitor settings control how you listen to the audio that is being played back from the DAW. You can
choose to listen to the audio through your computer's speakers, through headphones, or through a
monitor system.

2.8 Audio system testing

To test your audio system, follow these steps:

1. Connect all of your audio components. This includes your speakers, amplifier, and any other audio
devices you are using.
2. Turn on all of your audio components.
3. Play a test signal. You can do this by playing a song on your computer or using a test signal generator.
4. Listen to the audio from each speaker. Make sure that all of the speakers are working properly and that
the audio is balanced.

Channel testing

Channel testing is a way to verify that all of the channels in your audio system are working properly. To
test your channels, follow these steps:

1. Play a test signal through one channel at a time. You can do this by using a test signal generator or by
playing a song on your computer and panning it to one channel.
2. Listen to the audio from each speaker. Make sure that all of the speakers are working properly and that
the audio is balanced.
3. Repeat steps 1 and 2 for each channel.

If you notice any problems with the audio from any of the channels, check the connections between your
audio components. If the connections are correct, you may need to troubleshoot the audio component
itself.

Here are some additional tips for testing your audio system:

 Test your audio system at different volume levels. This will help you to identify any problems with the
system at different volumes.
 Test your audio system with different types of audio content. This will help you to identify any problems
with the system with different types of audio.
 Test your audio system in different environments. This will help you to identify any problems with the
system in different acoustic environments.

Learning outcome 3: Capture sound

3.1 Application of crane positioning techniques

Overhead Boom

Overhead booms are the most common type of crane boom. They are typically used for lifting and moving
loads over a large area. Overhead booms can be either fixed or articulated. Fixed overhead booms are
mounted in a fixed position and cannot move. Articulated overhead booms can be moved up and down,
as well as from side to side.

Overhead booms are used in a wide variety of applications, including:

 Construction: Overhead booms are used to lift and move building materials, such as steel
beams, concrete slabs, and prefabricated walls.
 Manufacturing: Overhead booms are used to lift and move heavy machinery and equipment.
 Shipping: Overhead booms are used to load and unload cargo ships.
 Mining: Overhead booms are used to lift and move mining equipment and materials.
Front Boom

Front booms are typically used for lifting and moving loads that are in front of the crane. They are often
used in construction applications to lift and move materials over walls or other obstacles. Front booms can
be either fixed or articulated.

Front booms are used in a variety of applications, including:

 Construction: Front booms are used to lift and move building materials, such as steel beams, concrete
slabs, and prefabricated walls.
 Demolition: Front booms are used to demolish buildings and other structures.
 Landscaping: Front booms are used to lift and move trees and other landscaping materials.
 Industrial: Front booms are used to lift and move heavy machinery and equipment.
Side Boom

Side booms are typically used for lifting and moving loads that are to the side of the crane. They are often
used in construction applications to lift and move materials into and out of tight spaces. Side booms can
be either fixed or articulated.



Side booms are used in a variety of applications, including:

 Construction: Side booms are used to lift and move building materials, such as steel beams, concrete
slabs, and prefabricated walls, into and out of tight spaces.
 Demolition: Side booms are used to demolish buildings and other structures in tight spaces.
 Landscaping: Side booms are used to lift and move trees and other landscaping materials into and out of
tight spaces.
 Industrial: Side booms are used to lift and move heavy machinery and equipment into and out of tight
spaces.
Follow Boom

Follow booms are typically used for lifting loads that must travel along with the crane as it moves. They are often used in construction applications to lift and place materials on roofs or other high places. Follow booms can be either fixed or articulated.

Follow booms are used in a variety of applications, including:

 Construction: Follow booms are used to lift and place building materials, such as steel beams, concrete
slabs, and prefabricated walls, on roofs and other high places.
 Roofing: Follow booms are used to lift and place roofing materials, such as shingles and tiles.
 HVAC: Follow booms are used to lift and place HVAC equipment, such as air conditioners and heaters.
 Industrial: Follow booms are used to lift and place heavy machinery and equipment on high places.
Telescoping Boom

Telescoping booms are booms that can be extended and retracted. This makes them very versatile and
allows them to be used in a wide variety of applications. Telescoping booms can be either fixed or
articulated.

Telescoping booms are used in a variety of applications, including:

 Construction: Telescoping booms are used to lift and move building materials, such as steel
beams, concrete slabs, and prefabricated walls, in a variety of locations.
 Manufacturing: Telescoping booms are used to lift and move heavy machinery and equipment in a variety
of locations.
 Shipping: Telescoping booms are used to load and unload cargo ships from a variety of distances.
 Mining: Telescoping booms are used to lift and move mining equipment and materials from a variety of
distances.
Stationary Boom
Stationary booms are booms that are fixed in a single position. They are often used in industrial
applications to lift and move materials within a limited area. Stationary booms can be either fixed or
articulated.

Stationary booms are used in a variety of applications, including:

 Manufacturing: Stationary booms are used to lift and move heavy machinery and equipment within a
limited area.
 Warehousing: Stationary booms are used to lift and move inventory within a limited area.
 Mining: Stationary booms are used to lift and move mining equipment and materials within a limited area.

3.2 Application of microphone movement techniques

Panning

Panning is the process of adjusting the volume of a sound source in each of two stereo channels. This
can be used to create the illusion that the sound source is coming from a specific direction. For example,
panning a guitar to the left channel will make it sound like the guitar is coming from the left side of the
room.

Application: Panning can be used to create a more realistic and immersive soundstage. For example, in a
recording of a live band, you might pan the drums to the center, the guitars to the left and right, and the
vocals to the center. This would create the illusion that the listener is standing in the middle of the band.
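Panning is usually implemented with a constant-power pan law, so the overall loudness stays the same at every pan position. A minimal sketch (the -1 to +1 pan range is an illustrative convention, not a standard):

```python
import math

def constant_power_pan(pan):
    """pan in [-1.0 (hard left), +1.0 (hard right)] -> (left, right) gains.
    Constant-power law: left^2 + right^2 == 1 at every position, so the
    perceived loudness does not dip in the middle."""
    angle = (pan + 1) * math.pi / 4  # maps [-1, +1] to [0, pi/2]
    return math.cos(angle), math.sin(angle)
```

At center (pan = 0) each channel gets about 0.707 (-3 dB), not 0.5, which is exactly why the sum of squares stays constant.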

Stereo imaging

Stereo imaging is the process of creating a sense of width and depth in a stereo recording. This can be
done by using different microphone techniques, such as XY stereo, ORTF stereo, and Blumlein stereo.

Application: Stereo imaging can be used to make a recording sound more realistic and immersive. For
example, in a recording of a classical orchestra, you might use a stereo microphone technique to capture
the width and depth of the orchestra. This would make the recording sound more like the listener is sitting
in the concert hall.

Tracking

Tracking is the process of recording multiple instruments or vocals onto separate tracks. This allows for
more control over the mixing process. For example, you can adjust the volume, panning, and EQ of each
track individually.

Application: Tracking is essential for multi-track recording. It allows you to create a more complex and
layered sound. For example, you might track a guitar part multiple times to create a thicker sound.
3.3 Application of audio recording techniques

Blumlein Pair

The Blumlein pair is a stereo microphone technique that uses two figure-eight microphones placed 90
degrees apart at the same height. This technique produces a very natural and realistic stereo image.

Application: The Blumlein pair is a good choice for recording a variety of sound sources, including
classical music, jazz, and acoustic music. It is also a good choice for recording in stereo in a reverberant
environment.

Mid-Side Technique

The mid-side technique is a stereo microphone technique that uses one cardioid (or omnidirectional) microphone facing the source and one figure-eight microphone, placed close together with the figure-eight facing sideways. The mid microphone captures the center of the soundstage, while the figure-eight microphone captures the sides. Because the stereo image is derived by summing and differencing the two signals, its width and panning can be adjusted freely in post-production.

Application: The mid-side technique is a good choice for recording a variety of sound sources, including
vocals, instruments, and ensembles. It is also a good choice for recording in stereo in a variety of
environments.



Spaced Omni

The spaced omni technique is a stereo microphone technique that uses two omnidirectional microphones
spaced apart. The spacing between the microphones determines the width of the stereo image.

Application: The spaced omni technique is a good choice for recording large ensembles and orchestras. It
can also be used to create a wide and spacious stereo image of a single sound source.

A-B Technique

The A-B technique is a stereo microphone technique that uses two cardioid microphones spaced apart.
The spacing between the microphones determines the width of the stereo image.

Application: The A-B technique is a good choice for recording a variety of sound sources, including
vocals, instruments, and ensembles. It is also a good choice for recording in stereo in a variety of
environments.

Close-Mid-Far Technique



The close-mid-far technique is a microphone placement technique that uses three microphones to capture
a sound source from close, mid, and far distances. This technique can be used to create a more detailed
and realistic recording of a sound source.

Application: The close-mid-far technique is a good choice for recording vocals, instruments, and
ensembles. It is also a good choice for recording in stereo in a variety of environments.

Baffled Omni

The baffled omni technique is a stereo microphone technique that uses two omnidirectional microphones separated by a sound-absorbing baffle (such as a Jecklin disk). The baffle increases the level and spectral differences between the left and right channels, producing a stable stereo image while keeping the natural low-frequency response of omnidirectional microphones.

Application: The baffled omni technique is a good choice for recording acoustic ensembles and soloists when a natural, headphone-friendly stereo image is desired.

Decca Tree



The Decca tree is a stereo microphone technique that uses three omnidirectional microphones arranged
in a triangle. This technique produces a very natural and realistic stereo image, with a wide soundstage
and good depth.

Application: The Decca tree is a good choice for recording large ensembles and orchestras. It is also a
good choice for recording in stereo in a reverberant environment.

3.4 Monitoring of audio signal parameters

3.4.1 Level monitoring

Level monitoring is the process of measuring and tracking the level of an audio signal. This is important
for ensuring that the signal is not too loud or too soft, and that it is within the dynamic range of the
recording or playback system.

Peak level is the highest instantaneous amplitude of an audio signal. It is measured in decibels (dB)
relative to full scale (0 dBFS). Peak levels are important to monitor to prevent clipping, which is a form of
distortion that occurs when the signal exceeds the maximum level of the system.

RMS level is the root mean square of the amplitude of an audio signal over a period of time. It is
measured in decibels relative to full scale (0 dBFS). RMS levels are a better measure of the overall
loudness of an audio signal than peak levels, as they take into account the duration of the signal.

VU level is a measurement of the average loudness of an audio signal over a period of time. It is
measured in volume units (VU). VU meters were originally developed for use in broadcast audio, but they
are now commonly used in a variety of other applications.
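The peak and RMS measurements described above can be sketched for a block of samples normalized to full scale (±1.0). This assumes a non-silent block:

```python
import math

def peak_dbfs(samples):
    """Peak level: the largest absolute sample, relative to full scale (1.0)."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_dbfs(samples):
    """RMS level: root mean square of the samples, relative to full scale.
    A better measure of perceived loudness than the instantaneous peak."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine wave peaks at 0 dBFS but has an RMS level of
# about -3 dBFS, illustrating why the two readings differ.
```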

Monitoring of audio signal parameters is important for a variety of reasons, including:

 Quality control: Ensuring that the audio signal is within the desired dynamic range and that it is free of
clipping and other distortion.
 Loudness control: Ensuring that the audio signal is not too loud or too soft.



 Audience protection: Preventing listeners from being exposed to excessive levels of sound.

Level monitoring can be performed using a variety of tools, including:

 Audio meters: Audio meters measure the level of an audio signal and display the results in a variety of
formats, such as peak level, RMS level, and VU level.
 Waveform editors: Waveform editors allow you to visualize the waveform of an audio signal. This can be
helpful for identifying clipping and other distortion.
 Integrated loudness meters (ILMs): ILMs measure the loudness of an audio signal over time. This can be
helpful for ensuring that the audio signal is within the desired dynamic range.

3.4.2 Frequency response monitoring

Frequency response monitoring is the process of measuring and evaluating the frequency response of an
audio signal. This can be done using a variety of methods, including:

Frequency range is the range of frequencies that an audio system can reproduce. It is typically
measured in hertz (Hz) and is expressed as a lower and upper limit. For example, a frequency range of 20
Hz to 20 kHz indicates that the system can reproduce all frequencies from 20 Hz (the lowest audible
frequency) to 20 kHz (the highest audible frequency).
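For digital systems, the upper end of the frequency range is bounded by the Nyquist limit — half the sample rate — which is why a 44.1 kHz sample rate comfortably covers the 20 Hz to 20 kHz audible range. A minimal check:

```python
def highest_representable_hz(sample_rate):
    """Nyquist limit: a digital system can only represent frequencies
    up to half its sample rate."""
    return sample_rate / 2

def covers_audible_range(sample_rate):
    """True if the sample rate can represent the full 20 Hz - 20 kHz range."""
    return highest_representable_hz(sample_rate) >= 20000
```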

Frequency balance is the distribution of energy across the frequency range. It is important to have a
good frequency balance in order to produce a natural and realistic sound. For example, too much bass
can make the sound muddy, while too little bass can make the sound thin.

Impulse response is a measure of how a system responds to a short pulse of sound. It can be used to
assess the quality of a system's transient response, which is its ability to reproduce sudden changes in
the signal.

Frequency response monitoring is important for a variety of reasons. It can be used to:

 Ensure that an audio system is reproducing the signal accurately.


 Identify and correct frequency response problems.
 Compare different audio systems.
 Tune an audio system for a specific room or environment.

Here are some specific examples of how frequency response monitoring can be used:

 A recording engineer might use frequency response monitoring to ensure that a recording is capturing the
full frequency range of the sound source.



 A live sound engineer might use frequency response monitoring to tune the PA system for a specific
venue.
 A car audio installer might use frequency response monitoring to ensure that a car audio system is
reproducing the signal accurately.
 A consumer electronics reviewer might use frequency response monitoring to compare different
headphones or speakers.

3.4.3 Phase monitoring

Phase monitoring is the process of measuring and evaluating the phase of an audio signal. Phase is a
measure of the time relationship between two signals. It is typically measured in degrees and is
expressed as a relative phase shift.

Phase shift

Phase shift is the difference in time between two signals. It can be caused by a variety of factors, such
as:

 Distance: Sound travels at a finite speed, so the sound from a distant source will arrive at a listener's ears
later than the sound from a closer source. This can cause a phase shift between the two signals.
 Microphones: Microphones can introduce phase shifts due to their design and placement.
 Electronics: Electronic devices can also introduce phase shifts due to their design and operation.

Phase difference

Phase difference is the difference in phase between two signals. It is typically measured in degrees. A
phase difference of 0 degrees indicates that the two signals are in phase, while a phase difference of 180
degrees indicates that the two signals are out of phase.
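The phase shift caused by a time delay — for example, the extra travel time to a more distant microphone — can be computed directly, since one full period of the signal corresponds to 360 degrees:

```python
def phase_shift_degrees(delay_seconds, frequency_hz):
    """Phase shift produced by a time delay: 360 degrees per full period."""
    return (delay_seconds * frequency_hz * 360) % 360

def delay_from_distance(meters, speed_of_sound=343.0):
    """Time for sound to travel a given distance (speed of sound in air
    at roughly room temperature is about 343 m/s)."""
    return meters / speed_of_sound

# A 1 kHz tone delayed by 0.5 ms arrives 180 degrees out of phase --
# the worst case for cancellation when the two signals are summed.
```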

Phase coherence

Phase coherence is a measure of how well the phase of an audio signal is preserved. It is important to
have good phase coherence in order to produce a natural and realistic sound. For example, poor phase
coherence can cause the sound to sound muddy or distant.

Phase monitoring is important for a variety of reasons. It can be used to:

 Identify and correct phase problems.


 Tune an audio system for a specific room or environment.
 Compare different audio systems.
 Assess the quality of a recording.
3.4.4 Stereo image

Stereo image is the perception of width and depth in a stereo recording. It is created by the difference in
arrival time and level of the sound signal at each ear. The wider the stereo image, the more spacious and
realistic the recording will sound.

There are a number of factors that contribute to stereo image, including:

 Microphone placement: The placement of the microphones used to record the sound source has a
significant impact on the stereo image. For example, a spaced pair of microphones will create a wider
stereo image than a single close-miked microphone.
 Panning: Panning is the process of adjusting the volume of each channel in a stereo recording. This can
be used to create the illusion that the sound source is coming from a specific direction. For
example, panning a guitar to the left channel will make it sound like the guitar is coming from the left side
of the room.
 Equalization: Equalization (EQ) is the process of adjusting the frequency content of an audio signal. This can be used to create the illusion of distance and depth. Because air absorbs high frequencies over distance, boosting the high frequencies of a vocal will make it sound closer and more present, while rolling them off will push it further back in the mix.
 Reverb and delay: Reverb and delay can be used to create the illusion of space and depth in a stereo
recording. Reverb simulates the reflections of sound off of surfaces in a room, while delay adds a delay to
the signal, which can create the illusion of distance.

3.4.5 Dynamic range

Dynamic range is the difference in loudness between the softest and loudest sounds in an audio
recording or performance. It is typically measured in decibels (dB).

A wide dynamic range is desirable because it allows the listener to hear all of the nuances of the music or
performance. A narrow dynamic range, on the other hand, can make the music sound flat and lifeless.
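Expressed in decibels, dynamic range is the ratio between the loudest and softest amplitudes in the signal. A minimal sketch:

```python
import math

def dynamic_range_db(loudest_amplitude, softest_amplitude):
    """Dynamic range: the ratio of loudest to softest, expressed in dB."""
    return 20 * math.log10(loudest_amplitude / softest_amplitude)

# A 1000:1 amplitude ratio corresponds to 60 dB of dynamic range.
```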

There are a number of factors that affect the dynamic range of an audio recording, including:

 Microphone placement: Microphones that are placed closer to the sound source will typically capture a
wider dynamic range.
 Recording level: Recording at a lower level will typically capture a wider dynamic range.
 Compression: Compression is a process that reduces the dynamic range of an audio signal. It is often
used to make recordings sound louder and more consistent.



 Limiting: Limiting is a process that prevents the peaks of an audio signal from exceeding a certain
level. It is often used to prevent clipping, which is a form of distortion that occurs when the signal is too
loud.

Dynamic range is an important concept to understand for anyone who works with audio. By understanding
the factors that affect dynamic range, engineers and musicians can create recordings that are both loud
and dynamic.

3.4.6 Distortion and artifacts

Distortion and artifacts are two of the most common audio signal parameters that need to be monitored.

Distortion

Distortion is any alteration of the audio signal that causes it to sound different from the original signal. It
can be caused by a variety of factors, including:

 Clipping
 Overloading
 Intermodulation
 Phase distortion
 Harmonic distortion

Clipping is the most common type of distortion. It occurs when the audio signal exceeds the maximum
level that the recording or playback system can handle. This causes the peaks of the signal to be
flattened, which can make the sound harsh and unpleasant.
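Hard clipping can be illustrated by flattening every sample that exceeds the system's ceiling — this produces the squared-off waveform described above:

```python
def hard_clip(samples, ceiling=1.0):
    """Flatten every sample whose magnitude exceeds the ceiling,
    squaring off the peaks of the waveform (the distortion heard
    when a signal exceeds the maximum level a system can handle)."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```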

Overloading occurs when the input level to a piece of audio equipment is too high. This can cause the
equipment to distort the signal.

Intermodulation distortion occurs when two or more signals interact with each other and produce new
frequencies that are not present in the original signals. This type of distortion is often caused by non-
linearity in the audio equipment.

Phase distortion occurs when the phase of the audio signal is altered. This can cause the sound to lose
its clarity and definition.

Harmonic distortion occurs when the harmonics of the audio signal are not reproduced accurately. This
can make the sound harsh and unnatural.

Artifacts
Artifacts are unwanted sounds that are introduced into the audio signal during recording, playback, or
processing. They can be caused by a variety of factors, including:

 Noise
 Interference
 Digital clipping
 Compression artifacts
 EQ artifacts

Noise is any unwanted sound that is present in the audio signal. It can be caused by a variety of factors,
such as electronic noise from equipment, environmental noise from traffic or aircraft, or acoustic noise
from the recording environment.

Interference is the unwanted interaction between two or more audio signals. It can cause a variety of
problems, such as noise, distortion, and dropouts.

Digital clipping occurs when the audio signal exceeds the maximum level that the digital recording system
can handle. This causes the peaks of the signal to be flattened, which can introduce harsh and
unpleasant artifacts into the recording.

Compression artifacts are introduced when the audio signal is compressed. Compression is a process
that reduces the dynamic range of an audio signal, which can make it sound louder and more consistent.
However, compression can also introduce artifacts into the signal, such as pumping and breathing.

EQ artifacts are introduced when the frequency content of the audio signal is altered. Equalization (EQ) is
a process that boosts or cuts specific frequencies in an audio signal. However, EQ can also introduce
artifacts into the signal, such as ringing and phase distortion.

Monitoring distortion and artifacts

There are a number of ways to monitor distortion and artifacts in an audio signal. Some common methods
include:

 Visual monitoring: This involves using a spectrum analyzer or oscilloscope to visualize the audio
signal. Distortion and artifacts can often be identified by looking for irregularities in the waveform or
spectrum.
 Auditory monitoring: This involves listening to the audio signal carefully. Distortion and artifacts can often
be identified by their unpleasant sound.
 Technical monitoring: This involves using specialized audio measurement equipment to measure the
distortion and artifact levels in the signal.

3.4.7 Time-based effects

Time-based effects are audio effects that manipulate the time domain of a signal. This can include effects
such as delay, reverb, chorus, and flanging. Monitoring of time-based effects is important to ensure that
these effects are being applied correctly and that they are not causing any unwanted artifacts.

One of the most important parameters to monitor for time-based effects is the delay time. This is the
amount of time that the signal is delayed before it is played back. Delay times can be very short (a few
milliseconds) or very long (several seconds). The delay time will affect the overall character of the effect.
For example, a short delay time will create a slapback effect, while a long delay time will create a
distinct echo.

Another important parameter to monitor is the feedback level. This is the amount of the delayed signal
that is fed back into the original signal. Feedback can be used to create a variety of effects, such as
fluttering echoes and sustained reverb. However, too much feedback can lead to instability and oscillation.

It is also important to monitor the wet/dry mix of time-based effects. This is the balance between the
original signal and the processed signal. A wet/dry mix of 100% wet will mean that only the processed
signal is heard. A wet/dry mix of 0% wet will mean that only the original signal is heard. Finding the right
wet/dry mix is essential to creating a natural and realistic sound.
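The three parameters discussed above (delay time, feedback level, and wet/dry mix) can be sketched in a toy delay line. This is a simplified illustration assuming NumPy, not production DSP code (no interpolation, smoothing, or stability safeguards):

```python
import numpy as np

def feedback_delay(signal, sample_rate, delay_ms, feedback, wet):
    """Minimal feedback delay: each repeat decays by the feedback factor,
    and the output blends the dry and processed (wet) signals."""
    delay_samples = int(sample_rate * delay_ms / 1000)
    out = np.zeros(len(signal))
    buf = np.zeros(len(signal))          # signal plus fed-back echoes
    for i in range(len(signal)):
        delayed = buf[i - delay_samples] if i >= delay_samples else 0.0
        buf[i] = signal[i] + feedback * delayed
        out[i] = (1 - wet) * signal[i] + wet * delayed
    return out

# A single click produces echoes every 100 ms, each one 50% quieter.
sr = 1000
click = np.zeros(sr)
click[0] = 1.0
echoed = feedback_delay(click, sr, delay_ms=100, feedback=0.5, wet=1.0)
print(echoed[100], echoed[200], echoed[300])  # 1.0 0.5 0.25
```

Raising `feedback` toward 1.0 makes the echoes decay more slowly; at or above 1.0 the loop becomes unstable, which is the oscillation problem mentioned above.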

3.5 Application of audio level adjustment techniques

Gain/Trim

Gain or trim is the first step in the audio mixing process. It is used to adjust the level of each individual
track so that they are all at a similar level. This is important for creating a balanced and cohesive mix.
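One simplified way to think about this first gain-staging pass is scaling each track so its peak lands at a common reference level. The sketch below assumes NumPy; `trim_to_peak` and the -6 dBFS target are illustrative choices, since in practice tracks are balanced by ear rather than by peaks alone:

```python
import numpy as np

def trim_to_peak(track, target_db=-6.0):
    """Scale a track so its peak lands at target_db dBFS.
    (A simplified gain-staging step, not a substitute for mixing by ear.)"""
    peak = np.max(np.abs(track))
    return track * (10 ** (target_db / 20) / peak)

rng = np.random.default_rng(0)
quiet_take = 0.1 * rng.standard_normal(1000)   # stand-in for a quiet recording
trimmed = trim_to_peak(quiet_take, target_db=-6.0)
print(round(20 * np.log10(np.max(np.abs(trimmed))), 2))  # -6.0
```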

EQ

EQ stands for equalization. It is used to adjust the frequency content of an audio signal. This can be used
to boost or cut specific frequencies to improve the sound of the signal. For example, EQ can be used to
boost the low frequencies of a bass guitar to make it sound fuller or to cut the high frequencies of a vocal
track to reduce harshness.

Fader riding

Fader riding is the process of manually adjusting the volume of each track during a mix. This is used to
create dynamics and interest in the mix. For example, the fader for a lead vocal track might be raised
during the chorus and lowered during the verses.

Compression

Compression is used to reduce the dynamic range of an audio signal. It lowers the level of the loudest
sounds, and makeup gain can then raise the overall level so the quieter parts sit higher in the mix.
Compression can be used to make a recording sound louder and more consistent, or to create a specific
effect, such as a punchy drum sound or a sustained guitar solo.
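The core of a compressor is a static gain curve: levels above the threshold are reduced according to the ratio. The sketch below shows only that curve; real compressors add attack and release smoothing, which is omitted here, and the function name and parameter values are illustrative:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static compressor curve: input levels above the threshold are
    reduced by the ratio; levels below it pass through unchanged."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A -8 dB peak over a -20 dB threshold at 4:1 comes out 9 dB quieter.
print(compress_db(-8.0))   # -17.0
print(compress_db(-30.0))  # -30.0 (below threshold, untouched)
```

Setting the ratio very high (e.g. 20:1 or more) turns this curve into the limiter described below: the output can barely exceed the threshold at all.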

Automation

Automation is used to record the changes made to the faders and other controls during a mix. This allows
the engineer to recreate the mix without having to do it manually each time. Automation can also be used
to create complex effects, such as a gradual fade-in or fade-out.

Limiting

Limiting is used to prevent the peaks of an audio signal from exceeding a certain level. This is important to
prevent clipping, which is a form of distortion that occurs when the signal is too loud. Limiting is often used
in mastering to make recordings sound louder and more consistent.

3.6 Verification of recorded audio signal factors

3.6.1 Sound quality

Tonal balance

Tonal balance is the distribution of energy across the frequency spectrum of an audio signal. A well-
balanced audio signal will have a good mix of low, mid, and high frequencies. A tonal imbalance can
make the audio sound muddy, harsh, or thin.
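Tonal balance can be checked numerically by measuring how spectral energy is distributed across frequency bands. The sketch below assumes NumPy and uses an FFT; `band_energy` and the band limits are illustrative, not a standard metering function:

```python
import numpy as np

def band_energy(signal, sample_rate, low_hz, high_hz):
    """Fraction of total spectral energy between low_hz and high_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    band = spectrum[(freqs >= low_hz) & (freqs < high_hz)]
    return band.sum() / spectrum.sum()

# A pure 100 Hz tone puts essentially all its energy in the low band.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 100 * t)
print(round(band_energy(tone, sr, 20, 250), 2))  # 1.0
```

Comparing low, mid, and high band fractions against a reference mix gives a coarse, objective complement to listening when judging whether a mix is muddy, harsh, or thin.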

Sonic characteristics

Sonic characteristics are the unique qualities of an audio signal, such as its warmth, brightness, and
spaciousness. Sonic characteristics can be affected by a variety of factors, including the recording
environment, the microphones used, and the post-production processing.

3.6.2 Clarity

Clarity is the ability of an audio signal to be easily understood and distinguished from background noise. It
is an important factor to consider when evaluating a recorded audio signal.

There are a number of ways to verify the clarity of a recorded audio signal. One way is to listen to the
signal on a variety of playback devices, including headphones, speakers, and car stereos. Another way is
to use a spectrum analyzer to measure the frequency content of the signal. A clear signal will have a well-
defined frequency response with no major peaks or dips.

Here are some specific things to listen for when verifying the clarity of a recorded audio signal:

 Frequency response: The frequency response should be smooth and even, with no major peaks or dips.
 Noise floor: The noise floor should be low enough that the signal can be easily distinguished from
background noise.
 Transients: Transients are the sharp attacks and decays of sounds. A clear signal will have well-defined
transients that are not obscured by noise.
 Timbre: Timbre is the characteristic sound of a particular instrument or voice. A clear signal will have a
well-defined timbre that is not distorted.
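The noise-floor check in the list above can be quantified as a signal-to-noise ratio when separate signal and noise captures (for example, a take and a few seconds of room tone) are available. A minimal sketch, assuming NumPy; `snr_db` is an illustrative name:

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio from the RMS levels of separate
    signal and noise recordings, in dB."""
    rms = lambda x: np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))
    return 20 * np.log10(rms(signal) / rms(noise))

# A signal 100x the amplitude of the noise floor gives a 40 dB SNR.
print(round(snr_db([1.0, -1.0, 1.0, -1.0], [0.01, -0.01, 0.01, -0.01]), 1))  # 40.0
```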

Learning outcome 4: Finalize sound recording operation

4.1 Transferring audio files

4.1.1 Storage medium

External drive

Pros:

 Portable and easy to carry
 Relatively inexpensive
 Large storage capacity

Cons:

 Can be damaged or lost if dropped or mishandled
 Not as fast as SSDs
 May not be compatible with all devices

SSD: Solid-state drive

Pros:

 Very fast read and write speeds
 Durable and less likely to be damaged than external hard drives
 Portable and easy to carry

Cons:

 More expensive than external hard drives
 Smaller storage capacity

NAS: Network-attached storage

Pros:

 Centralized storage for all of your audio files
 Accessible from any device on your network
 Scalable storage capacity

Cons:

 Can be expensive to set up and maintain
 Requires some technical knowledge to use
 May not be as fast as SSDs

Cloud storage services

Pros:

 Convenient and accessible from anywhere
 Automatic backup and synchronization
 Scalable storage capacity

Cons:

 Can be expensive for large storage capacities
 Requires a reliable internet connection
 May not be suitable for sensitive audio files due to security concerns

4.1.2 Use file transferring methods

USB cable

Transferring audio files via USB cable is a simple and reliable way to transfer large files quickly. However,
it requires a physical connection between the two devices.

Bluetooth

Bluetooth is a wireless technology that can be used to transfer audio files between devices. It is
convenient and easy to use, but it can be slow and unreliable, especially for large files.

Wi-Fi Direct

Wi-Fi Direct is a wireless technology that allows devices to connect directly to each other without the need
for a router. It is faster than Bluetooth, but it is not as widely supported.

Email attachments

Email attachments can be used to transfer audio files, but most email providers limit the size of
attachments, which makes email impractical for transferring large files.

Cloud storage service

Cloud storage services, such as Google Drive and Dropbox, allow you to store files online and access
them from anywhere. This can be a convenient way to transfer audio files between devices, but it can be
slow and expensive to upload and download large files.

File transfer apps

There are a number of file transfer apps available that can be used to transfer audio files between
devices. These apps typically use Wi-Fi Direct or Bluetooth to transfer files. They can be a convenient
way to transfer large files quickly and reliably.

The best way to transfer audio files will depend on your specific needs. If you need to transfer a large file
quickly, USB cable or a file transfer app is the best option. If you need to transfer a file wirelessly,
Bluetooth or Wi-Fi Direct is a good option. If you need to transfer a file to multiple devices, a cloud storage
service is a good option.

4.1.3 Delivering final audio

File formats
When delivering final audio, it is important to choose the right file format. The most common file formats
for audio delivery are:

 WAV (Waveform Audio File Format): WAV is an uncompressed audio format that is widely supported by
most audio software and playback devices. It is a good choice for delivering high-quality audio.
 AIFF (Audio Interchange File Format): AIFF is similar to WAV, but it is more common on Mac
computers. It is also a good choice for delivering high-quality audio.
 MP3 (MPEG-1 Audio Layer 3): MP3 is a compressed audio format that is very popular for online
streaming and music downloads. It is a good choice for delivering audio when file size is a concern.
 AAC (Advanced Audio Coding): AAC is another compressed audio format, similar to MP3, but it
generally offers better audio quality at the same bit rate. It is a good choice for delivering audio
when both file size and audio quality are important.

Sample rate

The sample rate is the number of times per second that the audio signal is sampled. A higher sample rate
means that the audio signal is sampled more often, which results in higher quality audio. However, a
higher sample rate also results in larger file sizes.

The standard sample rate for audio delivery is 44.1 kHz. However, higher sample rates, such as 48 kHz
and 96 kHz, are sometimes used for high-resolution audio.

Bit depth

The bit depth is the number of bits that are used to represent each sample of the audio signal. A higher bit
depth means that each sample is represented with more precision, which results in higher quality audio.
However, a higher bit depth also results in larger file sizes.

The standard bit depth for audio delivery is 16 bits. However, higher bit depths, such as 24 bits and 32
bits, are sometimes used for high-resolution audio.
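Sample rate, bit depth, and channel count together determine the size of an uncompressed file: bytes = duration x sample rate x (bit depth / 8) x channels. A small worked sketch (the function name is illustrative):

```python
def pcm_file_size_mb(seconds, sample_rate, bit_depth, channels):
    """Uncompressed PCM size: duration x rate x (depth/8) x channels."""
    bytes_total = seconds * sample_rate * (bit_depth // 8) * channels
    return bytes_total / 1_000_000

# One minute of CD-quality stereo (44.1 kHz, 16-bit, 2 channels):
print(round(pcm_file_size_mb(60, 44100, 16, 2), 2))  # 10.58
```

This is why doubling the sample rate or moving from 16-bit to 24-bit noticeably increases delivery file sizes.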

Channel configuration

The channel configuration refers to the number of audio channels in the file. The most common channel
configurations are:

 Mono: Mono audio has a single audio channel. This is the most basic type of audio and is typically used
for voice recordings and podcasts.
 Stereo: Stereo audio has two audio channels, one for the left ear and one for the right ear. This is the
most common type of audio for music and movies.

 Multichannel: Multichannel audio has three or more audio channels. This is often used for surround sound
systems.

The channel configuration that you choose will depend on the intended use of the audio. For example, if
you are delivering audio for a stereo music player, you would choose a stereo channel configuration. If
you are delivering audio for a surround sound system, you would choose a multichannel configuration.
