Laboratory 5: ELECTIVE 2
I. Sampling Theory
Sequences, or discrete-time signals, are often generated by sampling analog, or continuous-time, signals. A digital
signal processor may process and alter a succession of samples, essentially a sequence of numbers, by sampling a
continuous-time signal. Discrete-time signals are represented mathematically as a sequence of numbers. The notation used
will denote a sequence, x, as x = {x[n]} where n is the index of the nth element in the sequence. In terms of notation, x[n]
represents both the nth sample in the sequence and the entire sequence that is a function of n. The index, n, can range
over all values -∞ to +∞. [1]
Sampling theory: The requirements for sampling are summarized by the Nyquist sampling theorem. Let xc(t) be a band-limited signal with Xc(jΩ) = 0 for |Ω| > ΩN. Then xc(t) is uniquely determined by its samples, x[n] = xc(nT), if ΩS = 2π/T ≥ 2ΩN. [1]
Figure 1: Final result of reconstructing the analog signal from the sampled signal
The frequency ΩN is referred to as the Nyquist frequency, and the frequency 2ΩN is
referred to as the Nyquist rate. This theory is significant because it states that as long as a continuous-time signal is band-
limited and sampled at least twice as fast as the highest frequency, then it can be exactly reproduced by the sampled
sequence. The sampling analysis can be extended to the frequency response of the discrete time sequence, x[n], by using
the relationships x[n]= xc(nT) and
X(ejw) is a frequency-scaled version of the continuous-time frequency response, Xs(jΩ), with the frequency scale
specified by ɯ= ΩT. This scaling is also known as normalizing the frequency axis by the sample rate, such that frequency
components that occurred at the sample rate now occur at 2. Because the sample period T has been used to equalize the
time axis, the sampling rate 1/T may be used to normalize the frequency axis.
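As a small illustrative sketch of the relationships above (the 1 kHz tone and 8 kHz sample rate are arbitrary example values, not taken from the text), sampling a cosine and computing its normalized frequency ω = ΩT might look like:

```python
import math

# Hypothetical example values: a 1 kHz cosine sampled at 8 kHz.
f_analog = 1000.0        # analog frequency in Hz, so Omega = 2*pi*f_analog rad/s
fs = 8000.0              # sample rate in Hz
T = 1.0 / fs             # sample period in seconds

# Discrete-time sequence x[n] = xc(nT)
x = [math.cos(2 * math.pi * f_analog * n * T) for n in range(8)]

# Normalized frequency: omega = Omega * T = 2*pi * (f_analog / fs) rad/sample
omega = 2 * math.pi * f_analog * T   # pi/4 rad/sample for these values
```

A component at the sample rate itself (f_analog = fs) would land at ω = 2π, matching the normalization described above.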
Sample or sampling rate: In digital audio, a factor in setting the frequency range. The sampling rate specifies how many audio signal samples (snapshots of the waveform) are collected in a one-second period. A typical sample rate of 44.1 kHz indicates that 44,100 samples are collected each second. We can calculate the maximum frequency that will be captured and reproduced by halving the sample rate. For instance, a CD's standard sample rate is 44.1 kHz, which can represent frequencies up to 22.05 kHz. Other common audio sampling rates are 48, 88.2, 96, and 192 kHz. [2]
The Nyquist Theorem: The sample rate of the signal must be at least twice the highest frequency to be reproduced. A sample rate of 44.1 kHz, for example, represents sound up to 22.05 kHz, whereas a sample rate of 48 kHz conveys sound up to 24 kHz.
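A brief sketch of what happens when the theorem is violated (the 23,050 Hz test tone is a hypothetical value chosen for illustration): a tone above the Nyquist frequency produces exactly the same samples as a lower-frequency tone, an effect known as aliasing.

```python
import math

fs = 44100                     # CD sample rate in Hz
nyquist = fs / 2               # highest representable frequency: 22,050 Hz

# A tone above Nyquist aliases: the samples of an f Hz tone and an (fs - f) Hz
# tone are identical, since cos(2*pi*(fs - f)*n/fs) = cos(2*pi*f*n/fs).
f_high = 23050                 # 1 kHz above Nyquist (hypothetical test tone)
f_alias = fs - f_high          # heard instead as 21,050 Hz

high = [math.cos(2 * math.pi * f_high * n / fs) for n in range(16)]
alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(16)]
```

The two sample lists agree term by term, which is why anti-aliasing filters remove content above fs/2 before sampling.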
BSECE 4
Quantization: The quantization process takes a sample from the continuous-to-discrete conversion and identifies the closest corresponding finite-precision value, which is then represented by a bit pattern. This bit-pattern code for the sample value is often a binary two's-complement code, allowing the sample to be used directly in arithmetic operations without conversion to another numerical format (which would require many instructions on a DSP processor). In essence, the continuous-time signal must be both quantized in time (i.e., sampled) and then quantized in amplitude. [1]
Bit depth is the amplitude component of digital audio that controls how many quantization steps are available. The more steps there are, the smaller the gap between each step and the smoother the depiction: 8 bit = 256 steps, 16 bit = 65,536 steps, and 24 bit = 16,777,216 steps. Basically, the higher the bit resolution, the more closely the reproduced sound wave will resemble the original sine wave. [2]
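A minimal sketch of a uniform quantizer along these lines (the function name and rounding scheme are illustrative choices, not from the text): it maps a sample in [-1.0, +1.0) to the nearest of 2^bits levels and returns a signed integer code in the two's-complement range.

```python
def quantize(x, bits):
    """Map x in [-1.0, +1.0) to the nearest of 2**bits uniform levels,
    returned as a signed integer code (two's-complement range)."""
    half = 2 ** (bits - 1)
    code = round(x * half)
    return max(-half, min(half - 1, code))   # clamp to the representable range

# Step counts quoted in the text: 8, 16, and 24 bits.
steps_8, steps_16, steps_24 = 2 ** 8, 2 ** 16, 2 ** 24
```

For 16-bit audio the codes run from -32,768 to +32,767, which is why full-scale positive input clamps at 32,767.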
Figure 4: Quantization
Bit depth: The dynamic range is determined by this parameter. The most common audio formats are 16- and 24-bit. Higher bit depths boost the audio's resolution. Consider bit depth to be the picture-sharpening knob on a pair of binoculars. In this scenario, greater bit depth helps a digital sample of square steps look smoother and more like an analog sine wave.
Bit rate: The rate at which digital audio is transmitted. It is expressed in bits per second, generally written as bps.
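For uncompressed PCM audio, the bit rate follows directly from the parameters above; a quick sketch of the arithmetic for CD-quality stereo:

```python
# Uncompressed (PCM) bit rate = sample rate * bit depth * number of channels.
sample_rate = 44100   # Hz
bit_depth = 16        # bits per sample
channels = 2          # stereo

bit_rate = sample_rate * bit_depth * channels   # bits per second
```

This gives 1,411,200 bps (about 1,411 kbps), the familiar figure quoted for audio CDs.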
Normalize: In digital audio, a gain-related procedure in which the loudness of the entire file is boosted to a predetermined standard. Unlike compression, normalizing a file does not alter the dynamic interaction between the tracks; it maintains consistent volume levels from song to song. If you're a DJ, making mixtapes, or attempting to bring up low levels in recordings, this is something you should know.
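A sketch of peak normalization as described above (function name and target level are illustrative): every sample is scaled by one common gain factor, so relative dynamics are preserved.

```python
def normalize(samples, target_peak=1.0):
    """Scale every sample by one gain factor so the loudest sample
    reaches target_peak; relative dynamics are left untouched."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)          # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]
```

For example, normalize([0.1, -0.5, 0.25]) applies a gain of 2.0 to every sample, since the loudest sample has magnitude 0.5.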
Figure 5: Digital Audio Comparisons according to bit depth, sample rate, bit rate, and file sizes
A standard audio CD is 16-bit with a 44.1 kHz sample rate. A blank 700 MB CD would hold about 80 min of stereo audio. As you can see from the chart, a 24-bit/96 kHz recording takes up at least three times the amount of storage space compared to a 16-bit/44.1 kHz recording.
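The "at least three times" figure can be checked with a quick calculation (the helper below counts raw PCM data only, ignoring any file-format overhead):

```python
def pcm_bytes(seconds, sample_rate, bit_depth, channels):
    """Uncompressed PCM storage in bytes (no file-format overhead)."""
    return seconds * sample_rate * bit_depth * channels // 8

cd = pcm_bytes(60, 44100, 16, 2)   # one minute of 16-bit/44.1 kHz stereo
hd = pcm_bytes(60, 96000, 24, 2)   # one minute of 24-bit/96 kHz stereo
ratio = hd / cd                    # about 3.27x more storage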
Applications of sampling are visible across a wide range of audio processing. One critical example is music production and the methods by which music is produced from sampled material.
The Synth. A synthesizer is a type of electronic instrument that employs several sound generators to create complex waveforms that may be combined (using various waveform-synthesis techniques) to produce an almost infinite number of audible variants. It creates sounds and percussion sets by utilizing a variety of technologies or software techniques. The first synthesizers were analog in nature, producing sounds by additive, subtractive, or FM (frequency modulation) synthesis. The FM procedure typically requires the use of at least two signal generators (also known as operators) to produce and change a voice.
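A minimal additive-synthesis sketch in the spirit of the above, assuming nothing beyond summing sine generators (the function name and the example frequencies are hypothetical):

```python
import math

def additive_tone(partials, fs, duration):
    """Sum several sine generators -- a minimal additive-synthesis sketch.
    partials is a list of (frequency_hz, amplitude) pairs."""
    n_samples = int(fs * duration)
    return [
        sum(a * math.sin(2 * math.pi * f * n / fs) for f, a in partials)
        for n in range(n_samples)
    ]

# A fundamental plus one weaker overtone (arbitrary example values).
tone = additive_tone([(440.0, 1.0), (880.0, 0.5)], fs=8000, duration=0.01)
```

Subtractive synthesis would instead start from a harmonically rich waveform and filter components away; FM would modulate one operator's frequency with another.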
Figure 7: Synthesizer
Software Synthesis and Sample Re-synthesis. Because wavetable synthesizers derive their sounds from prepared samples stored on a digital memory medium, it follows that these sounds may likewise be saved on a hard drive (or any other medium) and loaded into a personal computer's RAM. This method of loading wavetable samples into a computer and then altering them is used to construct a virtual or software synthesizer.
Samplers. A sampler is a device that uses the system's own random access memory (RAM) to convert audio into digital form and/or alter prepared sampled data. Once placed into RAM, the sampled audio can be edited, transposed, processed, and played polyphonically. A sampler is essentially a wavetable synth that allows you to record, load, and manipulate samples in RAM. Once loaded, these sounds (whose duration and complexity are frequently limited only by memory capacity and your imagination) may be looped, manipulated, filtered, and amplified (using user or manufacturer setup options), allowing the waveforms and envelopes to be adjusted. Basic editing, looping, gain adjusting, reverse, sample-rate conversion, pitch change, and digital mixing capabilities may all be tweaked and/or varied.
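Treating a loaded sample as a plain list of amplitude values, a few of the manipulations named above reduce to very small list operations; this is only a sketch (the function names are illustrative, not a real sampler's API):

```python
def reverse(samples):
    """Play the sample backwards."""
    return samples[::-1]

def apply_gain(samples, gain):
    """Simple gain adjustment applied to every sample."""
    return [s * gain for s in samples]

def loop(samples, repeats):
    """Repeat the whole sample end-to-end."""
    return list(samples) * repeats
```

Real samplers add envelope shaping, filtering, and interpolated sample-rate conversion on top of primitives like these.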
Figure 9: Steinberg’s HALion VST software sampler
Software samplers, like software synths, generate their sounds from recorded and/or imported audio data saved
as digital audio data within a personal computer. Most software samplers may store and retrieve samples within the
internal memory of a laptop or desktop computer by utilizing the DSP capabilities of today's computers (as well as the
recording, sequencing, processing, mixing, and signal routing capabilities of most digital audio workstations). These
sampling systems frequently provide the user with the following options via a graphical interface:
- Import previously recorded soundfiles (often in WAV, AIF, and other common formats)
- Edit and loop sounds into a usable form
- Vary envelope parameters (i.e., dynamics over time)
- Vary processing parameters
- Save the edited sample performance setup as a file for later recall
Now the user is able to arrange the samples and other musical components into a composition using the sequencer. The sequencer is a software or hardware device that lets the user generate and modify musical sequences, which may then be combined into patterns and loops to form a song.
When the composition is finished, it may be saved as an audio file. The audio file contains the digital signals that
have been processed to represent the musical composition, and it may be played back on a range of devices, from
headphones to professional sound systems.
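Saving such a composition as an audio file can be sketched with Python's standard-library wave module; here a one-second 440 Hz test tone (an assumed placeholder for the finished mix) is written as 16-bit mono PCM:

```python
import math
import struct
import wave

fs = 44100     # CD sample rate
freq = 440.0   # hypothetical test tone standing in for the finished mix

# One second of a 16-bit mono sine tone at roughly half scale.
samples = [int(16383 * math.sin(2 * math.pi * freq * n / fs)) for n in range(fs)]

with wave.open("tone.wav", "wb") as wf:
    wf.setnchannels(1)   # mono
    wf.setsampwidth(2)   # 2 bytes per sample = 16-bit
    wf.setframerate(fs)
    wf.writeframes(struct.pack("<%dh" % len(samples), *samples))
```

The resulting WAV file plays back on anything from headphones to a professional sound system, exactly as described above.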
III. REFERENCES