
UNIT 3 SOUND

MBA (FT) 402EG: MULTIMEDIA APPLICATIONS


Presentation slides by:
Lec. Deepika Mehrotra
WHAT IS SOUND ?

• Sound is a wave made of vibrations in the air.


• When something makes a sound, it vibrates the air molecules, sending a chain reaction through the air until it reaches our eardrums.
• When our ears pick up that sound, signals are
sent to our brain so that we can interpret
what we're hearing. 
INTRODUCTION

• When we plot air density, we see that sound is a longitudinal wave. This is different from a transverse wave (such as light), because the vibrations are parallel to the direction the wave is moving.
• Sound, like all longitudinal waves, requires a
medium (material) to move through.
• Sound cannot travel in space, for example.
• They have wavelengths (the distance peak to peak)
and frequencies (the number of waves per second).
High frequency sound waves are high pitch, and low
frequency sound waves are low pitch.
• Sound waves contain high
density compressions (peaks) and low
density rarefactions (troughs).
PROPERTIES OF SOUND

A sound wave can be described by five characteristics:
• Wavelength,
• Amplitude,
• Frequency,
• Time-Period,
• Velocity or Speed.
MULTIMEDIA SOUND SYSTEMS

• The Macintosh and Windows operating systems use sound to make alert signals.
• Beep and warning sounds for system alerts are provided with the operating system.
• In Windows, the system sounds are WAV files, found in the windows\Media subdirectory.
• Windows uses WAV as the default file format for audio, while Macintosh systems use SND as the default audio file format.
CAPTURE & PLAYBACK OF DIGITAL
AUDIO
SOUND WAVE PROPERTIES
• A sound wave has 5 properties:
– Wavelength,
– Amplitude,
– Time-Period,
– Frequency and
– Velocity or Speed.
WAVELENGTH
• Length of wave
• Distance between repeating units of a wave
pattern. 
AMPLITUDE

• The "height" of a wave when viewed as a


graph. The strength or power of a wave signal.
Higher amplitudes are interpreted as a higher
volume. 
FREQUENCY
• Frequency (perceived as pitch) is the number of times the wavelength occurs in one second.
• The greater the distance between peaks (the longer the wavelength), the lower the frequency and the lower the sound.
• Measured in Hertz (Hz).
• Example: singing in a high-pitched voice forces the vocal cords to vibrate quickly.
TIME-PERIOD
• The time required to produce one complete wave or cycle is called the time-period of the wave.
• It is denoted by letter T.
• The unit of measurement of time-period is
second (s).
VELOCITY OF WAVE (SPEED OF WAVE)

• The distance travelled by a wave in one second is called the velocity of the wave or the speed of the wave.
• It is represented by the letter v.
• The S.I unit for measuring the velocity is metres
per second (m/s or ms-1).
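These quantities are linked by v = f × λ (velocity = frequency × wavelength) and T = 1/f. A minimal Python sketch, using illustrative values (343 m/s for the speed of sound in air at room temperature, and a 440 Hz tone):

```python
SPEED_OF_SOUND = 343.0               # m/s in air at ~20 °C (illustrative)

freq = 440.0                         # Hz (the A4 tuning note)
wavelength = SPEED_OF_SOUND / freq   # v = f * wavelength  ->  ~0.78 m
period = 1.0 / freq                  # T = 1 / f           ->  ~0.00227 s
print(wavelength, period)
```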
• Period: the interval at which a periodic signal repeats regularly.
• Pitch: a perception of sound by human beings; it measures how ‘high’ the sound is as perceived by a listener.
• Volume: the height of each peak in the sound wave.
• Frequency: the number of wave peaks passing per second (perceived as pitch); the greater the distance between peaks, the lower the frequency and the lower the sound.
• Loudness: another important perceptual quality is loudness or volume.
• Amplitude: the measure of sound level; for a digital sound, amplitude is the sample value.
• Sounds have different loudness because they carry different amounts of power. The unit of power is the watt.
ACOUSTICS

• Acoustics is the branch of physics that studies sound.
• Sound pressure levels (loudness or volume) are measured in decibels (dB).
LOGARITHMIC SCALE
• In order to express levels of sound
meaningfully in numbers that are more
manageable, a logarithmic scale is used,
rather than a linear one. This scale is the
decibel scale.
WHAT IS A DECIBEL?

• (In general use) a degree of loudness.
• A unit used to measure the intensity of a sound or the power level of an electrical signal by comparing it with a given level on a logarithmic scale.
• 0 decibels (0 dB) is the quietest sound audible to a healthy human ear. From there, every increase of 3 dB represents a doubling of sound intensity, or acoustic power.
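A quick Python check of these claims; the decibel value of a power ratio is 10 · log10(P1/P0), so doubling the power adds about 3 dB:

```python
import math

def db_gain(p_out, p_in):
    """Express a power ratio in decibels: dB = 10 * log10(P_out / P_in)."""
    return 10 * math.log10(p_out / p_in)

print(db_gain(2, 1))    # doubling the power -> ~3.01 dB
print(db_gain(100, 1))  # 100x the power     -> 20.0 dB
```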
WHAT IS A DECIBEL?
• The decibel (abbreviated dB) is the unit used to
measure the intensity of a sound.
• The decibel scale is a little odd because
the human ear is incredibly sensitive.
• Your ears can hear everything from your fingertip
brushing lightly over your skin to a loud jet
engine.
• On the decibel scale, the smallest audible sound
(near total silence) is 0 dB.
• Here are some common sounds and their decibel
ratings:
160 dB Jet engine
130 dB Large orchestra
100 dB Car/bus on highway
70 dB Voice conversation
50 dB Quiet residential areas
30 dB Very soft whisper
20 dB Sound studio
TYPICAL SOUND LEVELS IN DECIBELS (dB) AND WATTS
IMPORTANCE OF SOUND IN MULTIMEDIA

 Sound is perhaps the most sensuous element of multimedia.
 It can provide the listening pleasure of music, the
startling accent of special effects, or the ambience
of a mood-setting background.
 It can turn an ordinary multimedia presentation into a professionally spectacular one.
 However, misuse of sound can wreck your project.
IMPORTANCE OF SOUND IN MULTIMEDIA
• It captures attention.
• It increases the associations the end-user makes
with the information in their minds.
• Sound adds an exciting dimension to an
otherwise flat presentation.
• Example usage of sound in multimedia
application.
– Background music
– Sound effects
– Voice over or narration
MULTIMEDIA SOUND SYSTEM

• The Macintosh and Windows operating systems use sound to make alert signals.
• The system uses beep and warning sounds for system alerts.
• Windows makes use of WAV files as the default file format for audio, and Macintosh systems use SND as the default audio file format.
SOUND RECORDERS FOR WINDOWS &
MACINTOSH
DIGITAL AUDIO
COMPUTER REPRESENTATION OF
SOUND
 Sound waves are continuous, while computers are good at handling discrete numbers.
 In order to store a sound wave in a computer, samples of the wave are taken.
 Each sample is represented by a number, the ‘code’.
 This process is known as digitisation.
 This method of digitising sound is known as pulse code modulation (PCM).
 The sampling rate must be at least twice the highest frequency in the sound, and human hearing extends to roughly 20 kHz; this is why one of the most popular sampling rates for high-quality sound is 44,100 Hz.
 Another aspect we need to consider is the resolution, i.e., the number of bits used to represent a sample.
 16 bits are used for each sample in high-quality sound.
 Different sound cards have different capabilities for processing digital sound.
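A minimal Python sketch of this digitisation (PCM): sampling a sine tone at 44.1 kHz and quantising each sample to a 16-bit code. The 440 Hz tone and 10 ms duration are chosen only for illustration:

```python
import math

SAMPLE_RATE = 44100                   # samples per second (CD quality)
MAX_CODE = 2 ** 15 - 1                # 32767: largest signed 16-bit code

def digitise(freq_hz, duration_s):
    """Sample a sine wave and quantise each sample to a 16-bit integer code."""
    n_samples = int(SAMPLE_RATE * duration_s)
    return [round(math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE) * MAX_CODE)
            for t in range(n_samples)]

samples = digitise(440, 0.01)         # 10 ms of a 440 Hz tone
print(len(samples), samples[:5])      # 441 samples
```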
COMPUTER REPRESENTATION OF
SOUND
Recording and Digitising sound:
• An analogue-to-digital converter (ADC) converts
the analogue sound signal into digital samples.
• A digital signal processor (DSP) processes the
sample, e.g. filtering, modulation, compression,
and so on.
Play back sound:
• A digital signal processor processes the
sample, e.g. decompression, demodulation,
and so on.
• A digital-to-analogue converter (DAC) converts the digital samples back into an analogue sound signal.
DIGITAL AUDIO
• Digital audio is created when you represent
the characteristics of a sound wave using
numbers—a process referred to as digitizing.
• You can digitize sound from a microphone, a synthesizer, existing recordings, live radio and television broadcasts, and popular CDs and DVDs.
• In fact, you can digitize sounds from any
natural or pre-recorded source.
USE DIGITAL AUDIO TO RECORD, PROCESS,
AND EDIT SOUND

• Digital audio data is the actual representation of a sound, stored in the form of thousands of individual samples that represent the amplitude (or loudness) of a sound at a discrete point in time.
SAMPLING RATE

• How often the samples are taken is the sampling rate. For example, the current sample rate for CD-quality audio is 44,100 samples per second.
• The three sampling frequencies most often used in multimedia are:
 44.1 kHz (CD quality),
 22.05 kHz, and
 11.025 kHz.
• The amount of information stored about each
sample is the sample size and is determined
by the number of bits used to describe the
amplitude of the sound wave when the
sample is taken.
• Sample sizes are either 8 bits or 16 bits.
• The preparation and programming required
for creating digital audio do not demand
knowledge of music theory.
DIGITAL V/S MIDI
• Digital audio is not device dependent, and
sounds the same every time it is played.
• For this reason digital audio is used far more
frequently than MIDI data for multimedia
sound tracks.
DIGITAL V/S MIDI
• Since the quality of your audio is based on the
quality of your recording and not the device on
which your end user will play the audio, digital
audio is said to be device independent.
• The three sampling rates most often used in
multimedia are 44.1 kHz (CD-quality), 22.05 kHz,
and 11.025 kHz. Sample sizes are either 8 bits or
16 bits.
SAMPLING RATE

• In audio production, a sample rate (or sampling rate) defines how many times per second a sound is sampled; in other words, it is the frequency of samples used in a digital recording.
• For example, the standard sample rate used for audio CDs is 44,100 hertz (44.1 kilohertz).
SAMPLE SIZE

• Sample size, or audio bit depth, is a measure of how many bits a sample contains, which is directly a measure of quality.
• Sample frequency (or sample rate) is the number of samples per second in a sound.
• For example, if the sample frequency is 44,100 hertz, a recording with a duration of 60 seconds will contain 44,100 × 60 = 2,646,000 samples.
• The larger the sample size, the more
accurately the data will describe the recorded
sound.
• An 8-bit sample size provides 256 equal
measurement units to describe the level and
frequency of the sound in that slice of time.
• A 16-bit sample size, on the other hand,
provides a staggering 65,536 equal units to
describe the sound in that same slice of time.
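A small Python sketch of what those units mean in practice: quantising the same value at 8 and at 16 bits and comparing the rounding error (0.123456 is an arbitrary example value):

```python
def levels(bits):
    """Number of equal measurement units available at a given bit depth."""
    return 2 ** bits

def quantise(x, bits):
    """Round a value in [-1.0, 1.0] to the nearest representable level."""
    max_code = 2 ** (bits - 1) - 1
    return round(x * max_code) / max_code

x = 0.123456
for bits in (8, 16):
    print(bits, levels(bits), abs(x - quantise(x, bits)))
# 8-bit:  256 levels,   error on the order of 1e-3
# 16-bit: 65536 levels, error on the order of 1e-5
```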
PCM
• Pulse-code modulation is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications.
2 COMMON FORMATS OF PCM

• The two common PCM formats are WAV and AIFF.
• By contrast, FLAC and ALAC are lossless compressed formats, and MP3 and AAC are lossy compressed formats.
RECORDING DIGITAL
AUDIO
• Plug a microphone into the microphone jack
of your computer. If you want to digitize
archived analog source materials—music or
sound effects that you have saved on
videotape, for example—simply plug the
“Line-Out” or “Headphone” jack of the device
into the “Line-In” jack on your computer.
HOW TO RECORD DIGITAL SOUND

Points to keep in mind:
– Setting Proper Recording Levels
– Editing Digital Recordings
– File Size vs. Quality
Setting Proper Recording Levels
• A distorted recording sounds terrible.
• Recordings that are made at too low a level are
often unusable because the amount of sound
recorded does not sufficiently exceed the residual
noise levels of the recording process itself.
• The trick is to set the right levels when you
record.
• Any good piece of digital audio recording and
editing software will display digital meters to let
you know how loud your sound is.
• Unlike analog meters, which usually have a 0 setting somewhere in the middle and extend up into ranges like +5, +8, or even higher, digital meters peak at 0. To avoid distortion, do not cross over this limit. If this happens, lower your volume (either by lowering the input level of the recording device or the output level of your source) and try again.
• Watch the meters closely during recording. In digital meter displays, if you see red, you are over the peak.
EDITING DIGITAL RECORDING
• The basic sound editing operations most commonly needed are:
– Trimming
– Splicing and Assembly
– Volume Adjustments
– Format Conversion
– Resampling or Downsampling
– Fade-ins and Fade-outs
– Equalization
– Time Stretching
– Digital Signal Processing (DSP)
– Reversing Sounds
Editing Digital Recordings
• There are many software tools, such as Adobe Audition, Sound Forge and Audacity, with which you can create sound tracks and digital mixes.
Trimming
• Removing “dead air” or silent space from the front of a recording to reduce file size.
Splicing and Assembly
• Cutting and pasting different recordings into one.
Volume adjustment
• If you are combining several recordings into one, there is a good chance that you won’t get a consistent volume level. It is best to use a sound editor to normalize the combined audio to about 80% to 90% of the maximum level. If the volume is set too high, you will hear distortion.
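A minimal normalization sketch in Python, assuming samples are floats in the range -1.0 to 1.0:

```python
def normalize(samples, target=0.9):
    """Scale samples so the loudest peak sits at `target` (e.g. 90%)
    of the maximum level, leaving headroom to avoid distortion."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples[:]           # pure silence: nothing to scale
    gain = target / peak
    return [s * gain for s in samples]

print(normalize([0.0, 0.1, -0.25, 0.3]))   # peak becomes 0.9
```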
Format Conversion
• Saving sounds into different file formats.

Resampling or Downsampling
• If you have recorded your sounds at a high sampling rate, you can downsample the file to a lower rate to reduce the file size.
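A naive downsampling sketch in Python; keeping every second sample halves both the rate and the file size (a production resampler would also low-pass filter first to avoid aliasing):

```python
def downsample(samples, factor):
    """Naive downsampling: keep every `factor`-th sample.
    Real resamplers low-pass filter first to prevent aliasing."""
    return samples[::factor]

second = list(range(44100))        # one second at 44.1 kHz (dummy data)
half = downsample(second, 2)       # ~22.05 kHz
print(len(second), len(half))      # 44100 22050
```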
Fade-ins and Fade-outs
• To smooth the beginning and the end of the
sound file by gradually increasing or decreasing
volume.
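A minimal linear fade-in sketch in Python (a fade-out is the same ramp applied in reverse at the end of the file):

```python
def fade_in(samples, n):
    """Ramp the first n samples linearly from silence to full volume."""
    out = samples[:]
    for i in range(min(n, len(out))):
        out[i] *= i / n
    return out

print(fade_in([1.0] * 8, 4))   # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
```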
Equalization
• Some programs offer digital equalization capabilities to modify the bass, treble or midrange frequencies to make the audio sound better.
Time stretching
• Alter the length (in seconds) of a sound file
without changing its pitch.
Reversing sound
• Spoken dialog can produce a surreal effect
when played backward. 
Digital Signal Processing (Special Effects)
• Used to raise pitch or add robot voices, echo, and other special effects.
File Size vs. Quality

• Audio resolution (such as 8- or 16-bit) determines the accuracy with which a sound can be digitized. Using more bits for the sample size yields a recording that sounds more like its original.
SOUND CHANNEL
• Whether you want mono or stereo sound will
affect the size of the file.
• Mono means sound will be playing from one
channel whereas stereo means two channels.
• Therefore, stereo sound will require larger
storage space than mono sound. 
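For uncompressed PCM audio, the effect of sampling rate, sample size and channel count on file size follows directly from one formula; a small Python sketch:

```python
def pcm_size_bytes(sample_rate, bit_depth, channels, seconds):
    """Uncompressed PCM size: samples/s * bytes per sample * channels * time."""
    return sample_rate * (bit_depth // 8) * channels * seconds

# One minute of CD-quality stereo: 44,100 Hz, 16-bit, 2 channels
print(pcm_size_bytes(44100, 16, 2, 60))   # 10,584,000 bytes, roughly 10 MB

# The same minute in 8-bit mono is a quarter of that
print(pcm_size_bytes(44100, 8, 1, 60))    # 2,646,000 bytes
```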
There are three major groups of audio file formats:
• Uncompressed audio formats, such
as WAV, AIFF, AU or raw header-less PCM;
• Formats with lossless compression, such
as FLAC, Monkey's Audio (filename
extension .ape), WavPack (filename
extension .wv), TTA, ATRAC Advanced
Lossless, ALAC (filename extension .m4a), MPEG-4
SLS, MPEG-4 ALS, MPEG-4 DST, Windows Media
Audio Lossless (WMA Lossless), and Shorten (SHN).
• Formats with lossy compression, such as Opus, MP3, Vorbis, Musepack, AAC, ATRAC and Windows Media Audio Lossy (WMA lossy).
BENEFITS OF USING DIGITAL AUDIO

• Sound can be permanently stored on an inexpensive CD.
• Consistent sound quality without noise or distortion.
• A duplicate will sound exactly the same as the master copy.
• Digital sound can be played at any point of the
sound track. (random access)
• It can also be integrated with other media.
• Can be edited without loss in quality. 
MIDI
• MIDI is short for Musical Instrument Digital
Interface.
• It’s a language that allows computers, musical
instruments and other hardware to
communicate.
• A MIDI setup includes the interface, the
language that MIDI data is transmitted in, and
the connections needed to communicate
between hardware.
WHO INVENTED MIDI?
• The MIDI standard was unveiled in 1982. Ikutaro Kakehashi and Dave Smith both later received Technical Grammy Awards in 2013 for their key roles in the development of MIDI.
MIDI 

• MIDI is a technical standard that describes a communications protocol, digital interface, and electrical connectors that connect a wide variety of electronic musical instruments, computers, and related audio devices for playing, editing and recording music.
BASIC MIDI HARDWARE SETUP

• There are still plenty of MIDI setups that work in the traditional way, with the computer just recording and playing MIDI messages, and the sound created by an external synthesizer.
• These are especially useful in live setups, where the reliability and faster response of hardware synthesizers are distinct advantages.
COMPONENTS OF A MIDI SYSTEM
Synthesizer:
• It is a sound generator (various pitch, loudness, tone
colour). A good (musician's) synthesizer often has a
microprocessor, keyboard, control panels, memory,
etc.
Sequencer:
• It can be a stand-alone unit or a software program
for a personal computer. (It used to be a storage
server for MIDI data. Nowadays it is more a
software music editor on the computer. It has one or
more MIDI INs and MIDI OUTs.
COMPONENTS OF A MIDI SYSTEM
• Track: a track in the sequencer is used to organize the recordings. Tracks can be turned on or off for recording or playback.
• Channel: MIDI channels are used to separate information in a MIDI system. There are 16 MIDI channels in one cable. Channel numbers are coded into each MIDI message.
• Timbre: The quality of the sound, e.g., flute sound,
cello sound, etc.
• Multitimbral - capable of playing many different
sounds at the same time (e.g., piano, brass, drums,
etc.)
COMPONENTS OF A MIDI SYSTEM

• Pitch: the musical note that the instrument plays.
• Voice: Voice is the portion of the synthesizer that
produces sound.
• Synthesizers can have many (12, 20, 24, 36, etc.)
voices. Each voice works independently and
simultaneously to produce sounds of different
timbre and pitch.
• Patch: the control settings that define a
particular timbre.
RECORDING MIDI FILES

• MIDI files can be generated:
– By recording the MIDI data from a MIDI
instrument (electronic keyboard) as it is
played.
– By using a MIDI sequencer software
application.
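As a sketch of the second approach, a one-note MIDI file can be written entirely in software. This example assumes the third-party Python library mido is installed (pip install mido):

```python
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# Press middle C (note 60), hold for 480 ticks (one beat), then release
track.append(mido.Message('note_on', note=60, velocity=64, time=0))
track.append(mido.Message('note_off', note=60, velocity=64, time=480))

mid.save('one_note.mid')   # a tiny Standard MIDI File, a few dozen bytes
```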
MIDI VERSUS DIGITAL AUDIO

• Advantages of MIDI over digital audio:
– MIDI files are smaller than digital audio files.
– Therefore MIDI files are embedded easily in web pages and load and play more quickly.
– MIDI files may sound better than digital audio if the playback device's sound source is of high quality.
– You can change the length of MIDI files without changing the pitch of the music or degrading the audio quality.
COMMUNICATION BY MESSAGE

• The most important thing to understand about MIDI is that it is based on the idea of message-passing between devices (pieces of equipment or software).
COMMUNICATION BY MESSAGE
• Imagine a common situation: you have a keyboard
synthesizer and would like to record a sequence
using the sounds that are in that synthesizer. You
connect the computer and synthesizer so that
they can communicate using the MIDI protocol,
and start recording. What happens?
• When you play notes on the synthesizer, all your
physical actions (except the dance moves) are
transmitted as MIDI messages to the computer
sequencing software, which records the messages. 
COMMUNICATION BY MESSAGE

• MIDI messages are brief numeric descriptions of an action.
• Keys you press, knobs you turn, the joystick
you wiggle — all these actions are encoded as
MIDI messages.
• You hear the sound you’re making, but that
sound comes out of the synthesizer, directly to
your speakers.
• The computer does not record the sound itself.
COMMUNICATION BY MESSAGE

• When you play your recorded sequence, the computer sends MIDI messages back to the synthesizer, which interprets them and creates audio in response.
• Because the music handled by the computer is in the form of encoded messages, rather than acoustic waveforms, it’s possible to change the sound of a track from a piano to a guitar after having recorded the track.
• That would not be possible if you were recording the sound that the synthesizer makes.
MIDI MESSAGES 

• A single MIDI link through a MIDI cable can carry up to sixteen channels of information, each of which can be routed to a separate device or instrument.
• It’s a way to connect devices that make and control sound, such as synthesizers, samplers, and computers, so that they can communicate with each other using MIDI messages.
MIDI NOTES AND MIDI EVENTS
• When using a MIDI instrument, each time you
press a key a MIDI note is created (sometimes
called a MIDI event).
• Each MIDI event carries instructions that
determine:
– Key ON and OFF: when the key is
pressed/released
– Pitches or notes played
– Velocity: how fast and hard the key is pressed
– Aftertouch: how hard the key is held down
– Tempo (or BPM)
– Panning
– Modulations
– Volume
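On the wire, each such event is only a few bytes. A minimal Python sketch encoding a Note On/Note Off pair as raw MIDI bytes (Note On status is 0x90 plus the channel number, Note Off is 0x80; note number and velocity each range from 0 to 127):

```python
def note_on(channel, note, velocity):
    """Encode a MIDI Note On message as its three raw bytes."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

def note_off(channel, note):
    """Encode the matching Note Off message."""
    return bytes([0x80 | channel, note, 0])

# Middle C (note 60) pressed fairly hard on the first channel (channel 0)
print(note_on(0, 60, 100).hex())   # '903c64'
print(note_off(0, 60).hex())       # '803c00'
```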
MIDI NOTES AND MIDI EVENTS

• MIDI also carries MIDI clock data between 2 or more instruments. This allows for perfect synchronization between your whole setup.
• MIDI clock data is dependent on the tempo of your main device, usually the sequencer. So if you change your main tempo, MIDI ensures that your setup stays synced. It’s like a tiny digital band leader for all your gear!
MIDI CHANNELS
• The concept of channels is central to how
most MIDI messages work.
• A channel is an independent path over which
messages travel to their destination.
MIDI CHANNELS
• There are 16 channels per
MIDI device.
• Each channel (marked
“Ch”) carries its own
instrumental part, and
has independent volume,
panning, and other
settings.
• A track in your sequencer
program plays one
instrument over a single
channel.
MIDI FILE FORMATS

• MIDI files may be converted to MP3, WAV, WMA, FLAC, OGG, AAC or MPC on any Windows platform using Total Audio Converter.
• Total Audio Converter supports the following conversions with MIDI format files: MIDI to MP3, MIDI to WMA.
SOFTWARE

MIDI software, sequencers and editors:
• Sibelius
• Pro Tools
• Ableton live : Recommended at all levels
• FL studio : Beginner level
• Apple Logic Pro X: Medium to expert level
• Avid Pro Tools
• Propellerhead Reason
• Anvil Studio
Important Questions
• What is the difference between MIDI and digital audio?
• How is MIDI used?
• Describe what MIDI is, what its benefits are, and
how it is best used in a multimedia project.
• List the steps you would go through to record,
edit, and process a set of sound files for inclusion
on a web site. How would you digitally process
the files to ensure they are consistent, have
minimum file size, and sound their best?
MCQs
• The maximum amount of receive or transmit
channels for a MIDI device is
• 18
• 32
 16
• 127
• 8
(Choose the right answer)
• What software program is a MIDI Set Up
Utility for Mac OS X?
• Opcode Music System (OMS)
• Open music status
• Free midi
 Audio MIDI Setup (AMS)
• Open music system
• The MIDI port that transmits MIDI data is
 Out port
• Patch port
• In port
• Audio port
• Omni port
• The value for MIDI velocity data ranges from 
• 1 to 127
 0 to 127
• 0 to 256
• 0 to 128
• 1 to 256
A MIDI interface is used:-
• For the transmission and reception of audio between MIDI devices
 For the transmission and reception of MIDI
signals in/out of a computer
• Translate MIDI code so that we can read it
• To route audio signals into the computer
DSP stands for:
a. dynamic sound programming
b. data structuring parameters
c. direct splicing and partitioning
d. delayed streaming playback
 e. digital signal processing
• Removing blank space or "dead air" at the
beginning or end of a recording is sometimes
called:
a. quieting
b. pre-rolling
c. quantizing
d. Trimming
e. flashing
• Audio recorded at 44.1 kHz (kilohertz), 16-bit
stereo is considered:
a. phone-quality
b. voice-quality
c. FM-quality
 d. CD-quality
e. AM-quality
• Each individual measurement of a sound that
is stored as digital information is called a:
• Buffer
• Stream
 Sample
• Capture
• byte
