Digital Representation of Analog Signals
Gaurav S. Kasbekar
Dept. of Electrical Engineering
IIT Bombay
Introduction
• Recall: digital communication systems have several advantages
over analog communication systems
❑ the former have replaced, or are replacing, the latter in most contexts, e.g., cellular networks and TV
• “Analog communication” and “digital communication”:
❑ in practice, all communication is via continuous signals and hence
analog in nature
❑ the message signal that is to be transmitted is either analog or digital
❑ E.g., if the source is speech, then:
o In analog communication, it is directly used to modulate a high-frequency
carrier signal
o In digital communication, it is sampled and quantized to obtain a bit stream,
which is then used to modulate a high-frequency carrier signal
• First step in digital transmission of analog source (e.g., speech,
music) is conversion of source to digital representation
• We now study:
❑ this analog to digital conversion
❑ and representation of the analog information as a sequence of pulses
The Sampling Process
• Sampling is used to convert an analog signal to a sequence of samples that are usually spaced uniformly in time
• Sampling rate must be chosen carefully, so that:
❑the sequence of samples uniquely defines the original
analog signal
• Sampling theorem tells us how to choose sampling
rate
• We now briefly review the sampling process and
prove the sampling theorem
The Sampling Process (contd.)
• Consider an arbitrary signal 𝑔(𝑡) of finite energy, which is specified for all
time 𝑡
• Suppose 𝑔(𝑡) sampled at uniform rate:
❑ once every 𝑇𝑠 seconds
• Then we obtain an infinite sequence of samples spaced $T_s$ seconds apart:
❑ denoted by $g(nT_s)$, where $n$ takes on all possible integer values
• We refer to:
❑ 𝑇𝑠 as “sampling period”
❑ and 𝑓𝑠 = 1/𝑇𝑠 as “sampling rate”
• Let:
❑ $g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)$
• 𝑔(𝑡) and 𝑔𝛿 𝑡 shown in fig.
• We will show that the Fourier transform of the sampled signal $g_\delta(t)$ is:
1) $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$
❑ where 𝐺 𝑓 is Fourier transform of 𝑔(𝑡)
• 1) shows that process of uniformly sampling a
signal 𝑔(𝑡) results in a periodic spectrum with
period equal to the sampling rate
Ref: “Communication Systems” by Haykin and Moher, 5th ed
Proof of the Claim $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$
• First, consider a periodic signal $f_{T_0}(t)$ of period $T_0$
• We can represent it using a Fourier series:
❑ $f_{T_0}(t) = \sum_{n=-\infty}^{\infty} c_n \exp(j 2\pi n f_0 t)$, where
❑ $c_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} f_{T_0}(t) \exp(-j 2\pi n f_0 t)\, dt$ and $f_0 = \frac{1}{T_0}$
• Let $f(t) = \begin{cases} f_{T_0}(t), & -\frac{T_0}{2} \le t \le \frac{T_0}{2}, \\ 0, & \text{else.} \end{cases}$
❑ So $f_{T_0}(t) = \sum_{m=-\infty}^{\infty} f(t - mT_0)$
• Hence, $c_n$:
❑ $f_0\, F(n f_0)$, where
❑ $F(f)$ is the Fourier transform of $f(t)$
• Thus:
❑ $\sum_{m=-\infty}^{\infty} f(t - mT_0) = f_0 \sum_{n=-\infty}^{\infty} F(n f_0) \exp(j 2\pi n f_0 t)$
1) So the Fourier transform of $\sum_{m=-\infty}^{\infty} f(t - mT_0)$ is:
❑ $f_0 \sum_{n=-\infty}^{\infty} F(n f_0)\, \delta(f - n f_0)$
• Now, in the sampling context: $g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)$
• Fourier transform of $g_\delta(t)$ is $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$ by:
❑ the duality theorem and the fact that the $\delta(\cdot)$ function is an even function
The Sampling Process (contd.)
• Recall:
1) $g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)$
2) $G_\delta(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$
• Taking Fourier transforms on both sides of 1), we get:
3) $G_\delta(f) = \sum_{n=-\infty}^{\infty} g(nT_s) \exp(-j 2\pi n f T_s)$
❑ This relation is called:
o discrete-time Fourier transform
❑ Can be viewed as Fourier series representation of the periodic frequency function 𝐺𝛿 𝑓
• Next, suppose the signal $g(t)$ is strictly band-limited:
❑ $G(f) = 0$ for $|f| \ge W$
• Also, suppose we choose the sampling period $T_s = \frac{1}{2W}$
• Then by 3), we get:
4) $G_\delta(f) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right)$
• Also, by 2), we get:
5) $G(f) = \frac{1}{2W}\, G_\delta(f)$, for $-W < f < W$
• Substituting 4) into 5), we get:
6) $G(f) = \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi n f}{W}\right)$, for $-W < f < W$
• 6) shows that if the sample values $g\!\left(\frac{n}{2W}\right)$ of signal $g(t)$ are specified for all $n$, then signal $g(t)$ is completely determined for all values of $t$
• Taking inverse Fourier transform of 6), we get:
7) $g(t) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \mathrm{sinc}(2Wt - n)$ for $t \in (-\infty, \infty)$
• Equation 7) provides an interpolation formula for reconstructing the original signal $g(t)$ from the sequence of sample values $g\!\left(\frac{n}{2W}\right)$
• Thus, we have derived the “Sampling Theorem”, which states the following:
❑ A band-limited signal which only has frequency components in the range −𝑊 < 𝑓 < 𝑊 is completely described by
specifying the values of the signal at instants of time separated by 1/2𝑊 seconds
❑ Such a signal can be completely recovered from a knowledge of its samples taken at the rate of 2𝑊 samples per second
• Sampling rate of $2W$ samples per second, for a signal bandwidth of $W$ Hz, called Nyquist rate; its reciprocal $\frac{1}{2W}$ called Nyquist interval
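As a concrete illustration of the interpolation formula 7), here is a minimal Python sketch (not from the slides; the 3 Hz tone, bandwidth W = 4 Hz and time grid are arbitrary choices) that reconstructs a band-limited signal from its Nyquist-rate samples:

```python
import numpy as np

# Hypothetical test signal: a 3 Hz tone, so W = 4 Hz comfortably contains it
W = 4.0                    # assumed bandwidth (Hz)
fs = 2 * W                 # Nyquist rate: 2W samples per second
Ts = 1 / fs                # Nyquist interval

n = np.arange(-100, 101)                       # finite set of sample indices (ideally infinite)
samples = np.cos(2 * np.pi * 3.0 * n * Ts)     # g(n / 2W)

t = np.linspace(-1, 1, 2001)                   # dense time grid on which to reconstruct g(t)

# Interpolation formula 7): g(t) = sum_n g(n/2W) sinc(2Wt - n)
# np.sinc(x) = sin(pi x)/(pi x), the same convention as above
g_rec = samples @ np.sinc(2 * W * t[None, :] - n[:, None])

g_true = np.cos(2 * np.pi * 3.0 * t)
# Truncating the infinite sum leaves a small residual error
print("max reconstruction error:", np.max(np.abs(g_rec - g_true)))
```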
Aliasing
• In above derivation of sampling theorem, we assumed that signal
𝑔(𝑡) is strictly band-limited
• However, in practice, an information-bearing signal is not strictly
band-limited
❑ so some undersampling occurs
• So sampling process produces some “aliasing” as shown in fig
• To combat the effects of aliasing in
practice:
❑ Prior to sampling, a low-pass filter
used to attenuate those high-
frequency components that are not
essential to information being
conveyed by signal
❑ Filtered signal is sampled at a rate
slightly higher than Nyquist rate

Ref: “Communication Systems” by Haykin and Moher, 5th ed
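To see aliasing numerically, the short sketch below (an illustrative example with arbitrarily chosen frequencies, not from the slides) samples a tone above $f_s/2$ and shows that its samples coincide with those of the folded, lower-frequency tone:

```python
import numpy as np

fs = 8000.0                   # assumed sampling rate (Hz)
f_in = 5000.0                 # tone above fs/2, so it will alias
f_alias = abs(f_in - fs)      # folded frequency: |5000 - 8000| = 3000 Hz

n = np.arange(32)
x_high = np.cos(2 * np.pi * f_in * n / fs)
x_low = np.cos(2 * np.pi * f_alias * n / fs)

# The two sample sequences are indistinguishable: after sampling at 8 kHz,
# the 5 kHz tone "masquerades" as a 3 kHz tone
print(np.allclose(x_high, x_low))   # True
```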


Aliasing (contd.)
• What is the benefit of using a sampling rate that is slightly higher than (not equal
to) Nyquist rate?
❑ Eases the design of the reconstruction filter used to recover original signal from its
sampled version
• E.g., suppose a message signal with bandwidth 𝑊 is sampled at rate 𝑓𝑠 > 2𝑊
• Then reconstruction filter:
❑ can be low-pass filter with a passband
extending from −𝑊 to 𝑊 and
❑ transition band extending (for positive
frequencies) from 𝑊 to 𝑓𝑠 − 𝑊 (see
fig)
• Thus, reconstruction filter allowed to
have transition band of width 𝑓𝑠 −
2𝑊 > 0
❑ In contrast, if 𝑓𝑠 = 2𝑊, then ideal
reconstruction filter with zero width
of transition band would be required,
which is not practically realizable

Ref: “Communication Systems” by Haykin and Moher, 5th ed
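The design relief provided by a nonzero transition band can be quantified; the sketch below uses the standard Kaiser-window order estimate (the values of W, f_s and stopband attenuation are illustrative assumptions, not from the slides) to show how the required length of an FIR reconstruction filter falls as $f_s$ moves above $2W$:

```python
import numpy as np

def kaiser_order(atten_db, transition_hz, fs):
    """Approximate FIR filter order from the Kaiser estimate
    N ~ (A - 7.95) / (2.285 * delta_omega), delta_omega in rad/sample."""
    d_omega = 2 * np.pi * transition_hz / fs
    return int(np.ceil((atten_db - 7.95) / (2.285 * d_omega)))

W = 4000.0          # assumed message bandwidth (Hz)
atten = 60.0        # assumed stopband attenuation (dB)

for fs in (8000.0, 9000.0, 10000.0):        # fs = 2W and two slightly higher rates
    transition = fs - 2 * W                 # width of the allowed transition band
    if transition <= 0:
        print(f"fs = {fs:.0f} Hz: zero transition band -> ideal (unrealizable) filter")
    else:
        print(f"fs = {fs:.0f} Hz: transition {transition:.0f} Hz, "
              f"filter order ~{kaiser_order(atten, transition, fs)}")
```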


Practical Sampling
• So far, we have considered ideal sampling using an impulse train
• But this sampling process is physically
unrealizable
• So next, we consider a practical
implementation of sampling
• Called “Pulse Amplitude Modulation”
Pulse Amplitude Modulation (PAM)
• In PAM, amplitudes of regularly spaced pulses varied in proportion to
corresponding sample values of a continuous message signal 𝑚(𝑡) as
shown in fig.
❑ 𝑠(𝑡) is PAM signal obtained from 𝑚(𝑡)
• PAM signal 𝑠(𝑡) can be generated by following operations:
1) Instantaneous sampling of message signal 𝑚(𝑡) every 𝑇𝑠 seconds, where
sampling rate 𝑓𝑠 = 1/𝑇𝑠 chosen in accordance with sampling theorem
2) Lengthening duration of each sample to some constant value 𝑇
• Above two operations jointly referred to as “sample and hold”
• Reason for lengthening duration of each sample (step 2):
❑ To avoid use of excessive channel bandwidth
• PAM signal $s(t)$ can be expressed as:
❑ $s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s)$,
❑ where $h(t) = \begin{cases} 1, & 0 \le t \le T, \\ 0, & \text{else.} \end{cases}$
• Recall: $m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,\delta(t - nT_s)$
• $s(t)$ in terms of $m_\delta(t)$ and $h(t)$:
❑ $m_\delta(t) * h(t)$
• Taking Fourier transforms on both sides:
• $S(f) = M_\delta(f) H(f)$
Ref: “Communication Systems” by Haykin and Moher, 5th ed
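A minimal discrete-time sketch of the sample-and-hold (flat-top PAM) operation described above; the message, sampling period and pulse width T are arbitrary illustrative choices, and "continuous time" is represented by a fine simulation grid:

```python
import numpy as np

fs_grid = 100_000                   # fine "continuous-time" simulation grid (points/s)
t = np.arange(0, 0.01, 1 / fs_grid)
m = np.sin(2 * np.pi * 300 * t)     # assumed message m(t): a 300 Hz tone

Ts = 1 / 8000                       # sampling period (8 kHz >> Nyquist rate of 600 Hz)
T = 0.5 * Ts                        # pulse duration T of h(t)

# s(t) = sum_n m(nTs) h(t - nTs): hold each sample value m(nTs)
# for T seconds, and output 0 for the remainder of each period
s = np.zeros_like(t)
for nTs in np.arange(0, t[-1], Ts):
    idx = int(round(nTs * fs_grid))          # grid index of the sampling instant
    hold = (t >= nTs) & (t < nTs + T)        # interval of length T after nTs
    s[hold] = m[idx]

print("first few PAM pulse amplitudes:", s[:5])
```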
Pulse Amplitude Modulation (PAM) (contd.)
• Recall:
❑ $s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s) = m_\delta(t) * h(t)$
❑ $S(f) = M_\delta(f) H(f)$
❑ $M_\delta(f) = f_s \sum_{m=-\infty}^{\infty} M(f - mf_s)$
• So $S(f)$:
❑ $f_s \sum_{m=-\infty}^{\infty} M(f - mf_s)\, H(f)$
• Given a PAM signal 𝑠 𝑡 , how can we recover
message signal 𝑚(𝑡)?
• Assuming that sampling rate exceeds Nyquist rate,
i.e., 𝑓𝑠 > 2𝑊, we pass 𝑠(𝑡) through low-pass filter to
get signal with Fourier transform 𝑀 𝑓 𝐻(𝑓)
• Recall: $h(t) = \begin{cases} 1, & 0 \le t \le T, \\ 0, & \text{else.} \end{cases}$
• So $H(f)$:
❑ $T\,\mathrm{sinc}(fT)\, e^{-j\pi f T}$
• We can recover $m(t)$ by:
❑ passing the above signal with Fourier transform $M(f)H(f)$ through a filter with amplitude response $\frac{1}{|H(f)|} = \frac{1}{|T\,\mathrm{sinc}(fT)|}$
• Fig. shows relevant amplitude spectra

Ref: “Communication Systems” by Haykin and Moher, 5th ed
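The aperture effect and its equalization can be made concrete; the sketch below (illustrative values of W, T_s and T, not from the slides) evaluates $|H(f)| = T\,|\mathrm{sinc}(fT)|$ across the message band and the equalizer gain $1/|T\,\mathrm{sinc}(fT)|$ that the recovery filter must apply:

```python
import numpy as np

W = 4000.0          # assumed message bandwidth (Hz)
Ts = 1 / 10000.0    # assumed sampling period (fs = 10 kHz > 2W)
T = 0.8 * Ts        # assumed hold duration of the PAM pulse h(t)

f = np.linspace(0, W, 5)
# np.sinc(x) = sin(pi x)/(pi x), the same convention as sinc(fT) above
H_mag = T * np.abs(np.sinc(f * T))
equalizer_gain = 1 / H_mag

for fi, droop, gain in zip(f, H_mag / T, equalizer_gain * T):
    # report |H(f)|/T (droop relative to dc) and the normalized equalizer gain
    print(f"f = {fi:5.0f} Hz: |sinc(fT)| = {droop:.4f}, equalizer gain = {gain:.4f}")
```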


Communication Using Pulse Modulation
• Suppose a continuous-time message signal 𝑔(𝑡) needs to be sent over a baseband channel
• In “pulse modulation”:
❑ 𝑔(𝑡) is sampled
❑ sample values are used to modify certain parameters of a periodic pulse train
• Fig. shows:
❑ PAM signal, in which pulse amplitudes varied
❑ “Pulse Width Modulation (PWM)”, in which pulse widths varied
❑ “Pulse Position Modulation (PPM)”, in which pulse positions varied
• In all the above cases, instead of sending 𝑔(𝑡),
we transmit the corresponding pulse modulated
signal over channel
• Recall: previous slide shows that bandwidth of
PAM signal is larger than bandwidth of message
signal
• Advantage of pulse modulation over sending
message signal 𝑔(𝑡) itself:
❑ Pulse modulation allows simultaneous transmission
of several signals on a time-sharing basis, i.e., Time
Division Multiplexing (TDM), as shown in fig.

Ref: “Modern Digital and Analog Comm. Systems” by B.P. Lathi and Z. Ding, 4th ed
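A toy sketch of TDM interleaving (three hypothetical sample streams, one sample per channel per frame), illustrating the time-sharing idea above:

```python
import numpy as np

# Three hypothetical sampled message signals, one sample per channel per frame
ch1 = np.array([1.0, 1.1, 1.2, 1.3])
ch2 = np.array([2.0, 2.1, 2.2, 2.3])
ch3 = np.array([3.0, 3.1, 3.2, 3.3])

# Time Division Multiplexing: interleave the samples frame by frame
tdm = np.column_stack([ch1, ch2, ch3]).ravel()
print(tdm)                              # [1.  2.  3.  1.1 2.1 3.1 ...]

# Demultiplexing at the receiver: pick every third sample with the right offset
print(tdm[0::3], tdm[1::3], tdm[2::3])
```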

Quantization
• Samples of a continuous signal, such as voice, take real values
❑ Hence, infinite number of amplitude levels
• Not necessary to transmit exact amplitudes of the samples
❑ Since human eye or ear, as final receiver, can only detect finite intensity differences
• So original continuous signal can be approximated by a signal constructed of
discrete amplitudes selected from a finite set as shown in fig
• If the discrete amplitude levels are assigned with sufficiently close spacing, then:
❑ approximated signal can be made practically indistinguishable from original continuous
signal
• “Quantization” defined as process of transforming the sample amplitude 𝑚 𝑛𝑇𝑠 of
a message signal 𝑚 𝑡 into a discrete amplitude 𝑣 𝑛𝑇𝑠 taken from a finite set of
possible amplitudes
• When digital communication used to
transmit an analog message source
(e.g., voice), then:
❑ After sampling, message signal is
quantized and then converted into a
sequence of bits

Ref: “Communication Systems” by Haykin and Moher, 5th ed


Quantization (contd.)
• Recall: quantizer converts sample amplitude 𝑚 𝑛𝑇𝑠 of message signal
𝑚 𝑡 into amplitude 𝑣 𝑛𝑇𝑠 taken from a finite set of possible amplitudes
• For simplicity, drop time index and denote 𝑚 𝑛𝑇𝑠 and 𝑣 𝑛𝑇𝑠 by 𝑚 and 𝑣,
respectively
• For 𝑘 = 1, … , 𝐿 (note that 𝑚 and 𝑣 are distinct from the thresholds 𝑚𝑘 and levels 𝑣𝑘 defined below):
❑ if 𝑚 lies in the interval ℐ𝑘 = (𝑚𝑘 , 𝑚𝑘+1 ], then 𝑣 = 𝑣𝑘 as shown in fig
❑ where 𝐿 is total number of amplitude levels used in quantizer
❑ 𝑚𝑘 , 𝑘 = 1, … , 𝐿 + 1, called “decision thresholds”
❑ 𝑣𝑘 , 𝑘 = 1, … , 𝐿, called “representation levels”
• Spacing between adjacent representation levels called:
❑ “quantum” or “step-size”
• Quantizers can be of a uniform or
non-uniform type:
❑ in uniform quantizer,
representation levels are uniformly
spaced and in non-uniform
quantizer, they are not
• We first discuss uniform quantizers
and then non-uniform quantizers
Ref: “Communication Systems” by Haykin and Moher, 5th ed
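A minimal uniform (midrise) quantizer in Python; the range m_max and the number of levels L are arbitrary illustrative choices, and the mapping from intervals $\mathcal{I}_k$ to representation levels $v_k$ follows the description above:

```python
import numpy as np

def uniform_quantize(m, m_max=1.0, L=8):
    """Midrise uniform quantizer: L levels over [-m_max, m_max], step = 2*m_max/L."""
    step = 2 * m_max / L
    # index of the interval I_k containing m (clipped so overloads map to the end levels)
    k = np.clip(np.floor((m + m_max) / step), 0, L - 1)
    # representation level v_k = centre of the k-th interval
    return -m_max + (k + 0.5) * step

samples = np.array([-0.95, -0.2, 0.03, 0.49, 0.87])
print(uniform_quantize(samples))        # quantized amplitudes v(nTs)
```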
Quantization (contd.)
• Quantizer characteristic can
be of following types (see
fig.):
❑“midtread type”: origin lies in
middle of a tread of the
staircase graph
❑“midrise type”: origin lies in
middle of a rising part of the
staircase graph

Ref: “Communication Systems” by Haykin and Moher, 5th ed


Quantization Noise
• Use of quantization introduces an error, given by:
❑ $q = m - v$ (here $m$ and $v$ are the message sample and its quantized value, not the fixed thresholds $m_k$ and levels $v_k$, so $q$ depends on the message signal)
❑ called “quantization noise”
• Suppose $m$, $v$, and $q$ are realizations of random variables $M$, $V$, and $Q$, respectively, so that:
❑ $Q = M - V$
• For simplicity, assume that $M$ and $V$ are zero-mean (e.g., when the message distribution and the quantizer characteristic are symmetric about zero)
❑ so $Q$ is also zero-mean
• To find the signal-to-quantization-noise ratio, we need to find $E[Q^2]$:
❑ i.e., variance of $Q$, say $\sigma_Q^2$
• Consider an input $m$ of continuous amplitude in range $(-m_{max}, m_{max})$
• If $L$ is number of representation levels, then step-size:
❑ $\Delta = \frac{2 m_{max}}{L}$
• In what range does $Q$ lie?
❑ $\left(-\frac{\Delta}{2}, \frac{\Delta}{2}\right)$
• Assume that $Q$ is uniformly distributed in $\left(-\frac{\Delta}{2}, \frac{\Delta}{2}\right)$
• $\sigma_Q^2$:
❑ $\frac{\Delta^2}{12}$
• Typically, output of quantizer is transmitted to receiver in binary form
• Number of bits per sample, $R$:
❑ $\log_2 L$
• So $L = 2^R$
• $\sigma_Q^2$ in terms of $m_{max}$ and $R$:
❑ $\frac{1}{3} \frac{m_{max}^2}{2^{2R}}$
• If average power of message signal $m(t)$ is $P$, then output SNR of quantizer:
❑ $\frac{3P}{m_{max}^2}\, 2^{2R}$
• Thus, output SNR of quantizer increases exponentially with increasing number of bits per sample, 𝑅
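The $\Delta^2/12$ variance and the $\frac{3P}{m_{max}^2} 2^{2R}$ SNR expression can be checked numerically; the sketch below (illustrative parameters, with the message assumed uniformly distributed so that $Q$ is indeed uniform over $(-\Delta/2, \Delta/2)$) compares a simulation with the formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
m_max, R = 1.0, 8
L = 2 ** R
step = 2 * m_max / L

# Message: uniformly distributed in (-m_max, m_max)
m = rng.uniform(-m_max, m_max, size=1_000_000)
# Uniform quantization of each sample
v = np.clip(np.floor((m + m_max) / step), 0, L - 1) * step - m_max + step / 2
q = m - v                                         # quantization noise

print("simulated noise variance:", q.var(), "  Delta^2/12:", step**2 / 12)

P = m.var()
snr_sim = 10 * np.log10(P / q.var())
snr_formula = 10 * np.log10(3 * P / m_max**2 * 2 ** (2 * R))
print(f"simulated SNR: {snr_sim:.2f} dB   formula: {snr_formula:.2f} dB")
```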
Example
• Recall: if average power of message signal $m(t)$ is $P$, then output SNR of quantizer:
❑ $\frac{3P}{m_{max}^2}\, 2^{2R}$
• Consider sinusoidal message signal of amplitude $A_m$
❑ Then $P = \frac{A_m^2}{2}$ and $m_{max} = A_m$
• Output SNR:
❑ $\frac{3}{2} \times 2^{2R}$
• Output SNR in dB:
❑ $1.761 + 6.02R$
Pulse-Code Modulation (PCM)
• Message signal sampled and each sample is quantized and represented
using a set of bits
• Fig. shows transmitter of PCM system
• Low-pass filter used to:
❑ prevent aliasing
• Sampling (e.g., PAM) used with a sampling rate greater than Nyquist rate
• Quantization may be:
❑ uniform or
❑ non-uniform (discussed later)
• Encoding used to:
❑ convert the sequence of quantized values to a form that is suitable for
transmission over communication channel

Ref: “Communication Systems” by Haykin and Moher, 5th ed


Non-Uniform Quantization
• Recall: in a uniform quantizer, there is a constant separation
between each pair of neighboring representation levels
• However, in certain applications, it is better to use variable
separations between the representation levels
• E.g.: range of voltages covered by voice signals, from peaks of loud
talk to weak passages of weak talk, is on the order of 1000 to 1
• For voice signal, non-uniform quantizer with the feature that step-
size increases as the separation from the origin of the input-output
characteristic increases is used
• Using above process, nearly uniform percentage precision is
achieved over the amplitude range of input signal
• Also, above process beneficial since for voice signal, there is higher
probability of signal having smaller magnitudes than higher
magnitudes
• Fewer steps are needed than in the case of:
❑ a uniform quantizer that achieves the same maximum percentage error
• This reduces the bandwidth requirement of the quantized signal
Compander
• Use of a non-uniform quantizer equivalent to:
❑ passing the input sample through a compressor and
❑ applying the compressed sample to a uniform quantizer
• Examples of compression laws that are widely used in practice:
❑ 𝜇-law
❑ 𝐴-law
• $\mu$-law defined by:
1) $v = \frac{\log(1 + \mu|m|)}{\log(1 + \mu)}$,
❑ where $m$ and $v$ are the normalized input and output voltages of the compressor
❑ and $\mu$ is a positive constant
• The case 𝜇 = 0 corresponds to:
❑ uniform quantization
• 𝜇-law in 1) is approximately linear at
low values of |𝑚| and logarithmic at
high values of |𝑚|
• $\mu$-law shown in fig
• Practical values of $\mu$: around 100
Ref: “Communication Systems” by Haykin and Moher, 5th ed
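A small sketch of $\mu$-law companding on normalized inputs in $[-1, 1]$; the test samples are arbitrary, and $\mu = 255$ is used here only because that value appears in the T1 example later:

```python
import numpy as np

def mu_compress(m, mu=255.0):
    """mu-law compressor: v = sgn(m) * log(1 + mu|m|) / log(1 + mu), for |m| <= 1."""
    return np.sign(m) * np.log1p(mu * np.abs(m)) / np.log1p(mu)

def mu_expand(v, mu=255.0):
    """Matching expander (inverse law): |m| = ((1 + mu)**|v| - 1) / mu."""
    return np.sign(v) * ((1 + mu) ** np.abs(v) - 1) / mu

m = np.array([-0.8, -0.05, 0.001, 0.02, 0.5])
v = mu_compress(m)
print(v)                                  # small |m| are boosted before uniform quantization
print(np.allclose(mu_expand(v), m))       # True: compressor and expander are inverses
```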
Compander (contd.)
• $A$-law defined by:
1) $v = \begin{cases} \frac{A|m|}{1 + \log A}, & 0 \le |m| \le \frac{1}{A}, \\ \frac{1 + \log(A|m|)}{1 + \log A}, & \frac{1}{A} \le |m| \le 1. \end{cases}$
• The case 𝐴 = 1 corresponds to:
❑ uniform quantization
• 𝐴-law shown in fig
• 𝐴-law in 1) is:
❑ linear for small values of 𝑚
❑ logarithmic for large values of 𝑚
• Practical values of 𝐴 : around 100
• When compressor used in transmitter:
❑ in receiver, we need a device to restore the
signal samples to their correct relative levels
❑ such a device called “expander”
• Compression and expansion laws are
approximately inverse of each other
• Combination of a compressor and expander called a “compander”
Ref: “Communication Systems” by Haykin and Moher, 5th ed
Encoding
• Encoding used to:
❑convert the sequence of quantized values to a form
that is suitable for transmission over communication
channel
• Typically, each quantized value converted to a
binary code:
❑e.g., if there are 16 representation levels, then first
level denoted by code 0000, second by 0001, third by
0010, sixteenth by 1111
• More generally, ternary codes or codes in which
more than three values are taken by each symbol
may also be used
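A tiny illustration of the binary encoding step, using the 16-level example above (the particular sequence of level indices is hypothetical):

```python
# Map each quantizer level index (0..15) to a 4-bit binary code, as in the
# 16-level example above: level 0 -> 0000, level 1 -> 0001, ..., level 15 -> 1111
levels = [0, 1, 2, 15, 7]                      # hypothetical sequence of level indices
codes = [format(k, "04b") for k in levels]
print(codes)                                   # ['0000', '0001', '0010', '1111', '0111']
bitstream = "".join(codes)                     # serialized for transmission
print(bitstream)
```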
Line Codes
• In a “line code”, a binary stream of data takes on an electrical representation
• Fig. shows the waveforms of five common
line codes for the example data stream
01101001
• Line codes are of the nonreturn-to-zero
(NRZ) or return-to-zero (RZ) type
❑ RZ denotes that the pulse shape used to
represent the bit always returns to the 0
volts level by the end of the bit
❑ NRZ denotes that the pulse does not
necessarily return to the neutral level by
the end of the bit
a) Unipolar NRZ Signaling:
❑ Symbol 1 is represented by transmitting a
pulse of amplitude 𝐴 for the symbol
duration and symbol 0 by transmitting no
pulse
❑ Also referred to as “on-off signaling”
b) Polar NRZ Signaling:
❑ Symbols 1 and 0 represented by
transmitting pulses of amplitudes +𝐴 and
− 𝐴, respectively
Ref: “Communication Systems” by Haykin and Moher, 5th ed
Line Codes (contd.)
c) Unipolar RZ Signaling:
❑ Symbol 1 represented by a rectangular
pulse of amplitude 𝐴 and half-symbol width
and symbol 0 by transmitting no pulse
• Polar RZ Signaling (not shown in fig.):
❑ symbol 1 (respectively, 0) represented by rectangular pulse of amplitude 𝐴 (respectively, −𝐴) and half-symbol width
❑ Easy to extract clock signal at receiver for
timing synchronization:
o By taking absolute value of received signal
d) Bipolar RZ Signaling:
❑ Symbol 0 represented by transmitting no
pulse and symbol 1 alternately by
transmitting pulse of amplitude 𝐴 and half-
symbol width and pulse of amplitude −𝐴
and half-symbol width
• Bipolar RZ Signaling has the advantage over
Unipolar RZ signaling that:
❑ Former has 0 dc component and latter has
positive dc component
❑ DC component does not pass through some
components of a communication system
(e.g., transformer), which can cause
distortion of transmitted signal Ref: “Communication Systems” by Haykin and Moher, 5th ed
Line Codes (contd.)
e) Split-Phase (Manchester
Code):
❑ symbol 1 represented by
positive pulse of amplitude
𝐴 followed by negative
pulse of amplitude −𝐴,
each of half symbol-width
❑ for symbol 0, polarities of
above two pulses reversed
• Desirable feature of
Manchester code:
❑Zero dc component
Ref: “Communication Systems” by Haykin and Moher, 5th ed
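These line codes can be generated programmatically; the sketch below (a simplified model with two samples per bit and amplitude A = 1, not from the slides) produces the waveforms for the example stream 01101001:

```python
import numpy as np

bits = [0, 1, 1, 0, 1, 0, 0, 1]
A = 1.0

def unipolar_nrz(b):  return [A, A] if b else [0, 0]     # 1 -> pulse A, 0 -> no pulse
def polar_nrz(b):     return [A, A] if b else [-A, -A]   # 1 -> +A, 0 -> -A
def unipolar_rz(b):   return [A, 0] if b else [0, 0]     # 1 -> half-width pulse, 0 -> nothing
def manchester(b):    return [A, -A] if b else [-A, A]   # 1 -> +A then -A, 0 -> reversed

def bipolar_rz(bits):
    # 0 sends no pulse; successive 1s alternate in polarity
    wave, sign = [], A
    for b in bits:
        if b:
            wave += [sign, 0]
            sign = -sign
        else:
            wave += [0, 0]
    return wave

for name, enc in [("Unipolar NRZ", unipolar_nrz), ("Polar NRZ", polar_nrz),
                  ("Unipolar RZ", unipolar_rz), ("Manchester", manchester)]:
    print(f"{name:13s}", np.concatenate([enc(b) for b in bits]))
print("Bipolar RZ   ", np.array(bipolar_rz(bits)))
```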
Example: T1 System
• T1 carrier system designed to accommodate 24 voice channels for short-distance
communications
• Voice signal mostly limited to a band from 300 Hz to 3.1 kHz:
❑ so passed through a low-pass filter with cut-off frequency of 3.1 kHz before sampling
• Nyquist rate is 6.2 kHz
• Sampling rate of 8 kHz used
• For companding, T1 system uses a piecewise-linear characteristic (consisting of 15 linear
segments) to approximate the 𝜇-law with 𝜇 = 255
• There are a total of 255 possible amplitude levels
• So each quantized sample represented using an 8-bit codeword
• 24 voice channels combined using Time Division Multiplexing (TDM)
• Each frame of the multiplexed signal consists of:
❑ 24 8-bit words, one for each voice source,
❑ plus a single bit that is added to the end of the frame for synchronization (called “framing bit”)
• Hence, each frame consists of a total of 24 × 8 + 1 = 193 bits
• Each frame has a period of:
❑ 125 𝜇s, since sampling rate of 8 kHz used for each voice channel
• Hence, duration of each bit is 0.647 𝜇s
• Resultant transmission rate:
❑ 1.544 Mbps
• Framing bits allow receiver to find out where each frame begins
• Framing bits chosen such that a sequence of framing bits, one at the end of each frame,
forms a special pattern that is unlikely to be formed in a speech signal
❑ The special pattern is 100011011100
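A quick numeric check of the T1 frame arithmetic above:

```python
channels, bits_per_sample = 24, 8
framing_bits = 1
fs = 8000                                   # samples per second per voice channel

bits_per_frame = channels * bits_per_sample + framing_bits     # 24*8 + 1 = 193
frame_period = 1 / fs                                          # 125 us
bit_rate = bits_per_frame * fs                                 # 1.544 Mbps
bit_duration = frame_period / bits_per_frame                   # ~0.647 us

print(bits_per_frame, bit_rate, f"{bit_duration * 1e6:.4f} us")
```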
Differential PCM
• PCM not a very efficient system; it generates a large number of bits and requires a lot of
bandwidth to transmit
• “Differential PCM”: a more bandwidth-efficient scheme than PCM
• Basic idea: given an analog source (e.g., speech signal):
❑ sample values are not independent
❑ instead, we can make a good guess about a sample value from past sample values
• Let 𝑚[𝑘] denote 𝑘’th sample of signal 𝑚(𝑡)
• Suppose instead of transmitting 𝑚[𝑘], we transmit the difference 𝑑 𝑘 = 𝑚 𝑘 − 𝑚[𝑘 − 1]
• At receiver, we can reconstruct 𝑚 𝑘
• Advantage of this scheme over transmitting 𝑚[𝑘]:
❑ differences between successive samples are generally much smaller than sample values
❑ so fewer bits are required to represent 𝑑 𝑘 than 𝑚 𝑘 , for a given signal-to-quantization-noise target
❑ equivalently, for a fixed number of bits, SNR improves
❑ e.g., recall that for a uniform quantizer, the output SNR is $\frac{3P}{m_{max}^2}\, 2^{2R}$, which increases when $m_{max}$ decreases
• Consider the following improvement over the above scheme: at the transmitter, we estimate (predict) the value of $m[k]$ from knowledge of the previous several sample values
❑ let $\hat{m}[k]$ be the predicted value
❑ we transmit $d[k] = m[k] - \hat{m}[k]$ instead of $m[k]$
• If the prediction $\hat{m}[k]$ is good, then:
❑ $m[k] - \hat{m}[k]$ will have smaller magnitude than $m[k] - m[k-1]$
❑ which will lead to a further reduction in the number of bits that need to be sent for a given SNR target
• This scheme, wherein $d[k] = m[k] - \hat{m}[k]$ is transmitted, is called “Differential PCM”
❑ the first scheme described above is a special case of Differential PCM in which $\hat{m}[k] = m[k-1]$
Differential PCM (contd.)
• How can we predict the value of $m[k]$ from knowledge of previous several sample values?
• One way: using the Taylor series:
❑ $m(t + T_s) = m(t) + T_s \dot{m}(t) + \frac{T_s^2}{2} \ddot{m}(t) + \frac{T_s^3}{3!} \dddot{m}(t) + \cdots$
• For small $T_s$:
❑ $m(t + T_s) \approx m(t) + T_s \dot{m}(t)$
• How can we approximately find $\dot{m}(t)$?
❑ $\dot{m}(kT_s) \approx \frac{1}{T_s}\left[ m(kT_s) - m\big((k-1)T_s\big) \right]$
• So $m[k+1]$:
1) $\approx 2m[k] - m[k-1]$
• 1) provides a crude prediction for $m[k+1]$
• We can improve the prediction by using more terms from the Taylor series
• So more generally, we can express the prediction formula as:
❑ $m[k] \approx a_1 m[k-1] + a_2 m[k-2] + \cdots + a_N m[k-N]$
• So our prediction is:
❑ $\hat{m}[k] = a_1 m[k-1] + a_2 m[k-2] + \cdots + a_N m[k-N]$
• Transmitter quantizes and sends $d[k] = m[k] - \hat{m}[k]$
• Difficulty with the above scheme:
❑ Receiver does not know $m[k-1], m[k-2], \ldots, m[k-N]$, but instead knows only their quantized versions $m_q[k-1], m_q[k-2], \ldots, m_q[k-N]$
• So a better scheme is as follows
• Prediction at transmitter is:
❑ $\hat{m}[k] = a_1 m_q[k-1] + a_2 m_q[k-2] + \cdots + a_N m_q[k-N]$
• If $d[k] = m[k] - \hat{m}[k]$, then the transmitter sends its quantized version, say $d_q[k]$
• Receiver finds $m_q[k] = d_q[k] + \hat{m}[k]$
• Note that in above scheme, the difference 𝑑 𝑘 is quantized, and not 𝑚 𝑘
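A minimal DPCM sketch using a first-order predictor $\hat{m}[k] = m_q[k-1]$ (i.e., $N = 1$, $a_1 = 1$) and a simple uniform quantizer for $d[k]$; the message and step size are illustrative. It verifies that transmitter and receiver, both predicting from quantized values, stay in lockstep:

```python
import numpy as np

def quantize(d, step=0.02):
    """Uniform quantizer for the prediction error d[k] (illustrative step size)."""
    return step * np.round(d / step)

rng = np.random.default_rng(1)
t = np.arange(200)
m = np.sin(2 * np.pi * t / 50) + 0.01 * rng.standard_normal(200)   # slowly varying message

# --- Transmitter ---
m_q_tx, d_q = np.zeros_like(m), np.zeros_like(m)
pred = 0.0
for k in range(len(m)):
    d_q[k] = quantize(m[k] - pred)     # quantize d[k] = m[k] - m_hat[k]
    m_q_tx[k] = pred + d_q[k]          # transmitter tracks the receiver's reconstruction
    pred = m_q_tx[k]                   # first-order prediction: m_hat[k+1] = m_q[k]

# --- Receiver (sees only d_q) ---
m_q_rx = np.zeros_like(m)
pred = 0.0
for k in range(len(d_q)):
    m_q_rx[k] = pred + d_q[k]          # m_q[k] = d_q[k] + m_hat[k]
    pred = m_q_rx[k]

print(np.allclose(m_q_tx, m_q_rx))     # True: no drift between transmitter and receiver
# Reconstruction error equals the quantization error of d[k], bounded by half the step size
print("max reconstruction error:", np.max(np.abs(m - m_q_rx)))
```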
Example
• Amplitude of a signal $m(t)$ is in the range $-1$ V to $1$ V
• Max. frequency in the signal is 4 kHz
• It is transmitted using 8 bit/ sample PCM
• The same signal is transmitted using differential PCM, where the error signal 𝑑(𝑡)
ranges from −0.1 V to 0.1 V
• In both cases, uniform quantization is used
• Suppose step size in differential PCM case must be at most step size in PCM case
• Want to find transmission bit rate in each case and hence bit rate compression
ratio
• Bit rate in case of PCM:
❑ 64 kbps
• Step size in case of PCM:
❑ $\frac{2}{256} = 7.8125$ mV
• If same step size used in differential PCM case, then number of quantization levels:
❑ $\frac{0.2}{7.8125 \times 10^{-3}} = 25.6$
• Since number of quantization levels must be a power of 2, we use 5 bits/ sample in
differential PCM
• Bit rate in case of differential PCM:
❑ 40 kbps
• So bit rate compression ratio:
❑ 1.6
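A short check of the arithmetic in this example:

```python
import math

f_max = 4000                      # Hz
fs = 2 * f_max                    # sample at the Nyquist rate: 8 kHz

# PCM: 8 bits/sample over the range -1 V .. 1 V
R_pcm = 8
step_pcm = 2.0 / 2**R_pcm                         # 7.8125 mV
rate_pcm = fs * R_pcm                             # 64 kbps

# Differential PCM: d(t) in -0.1 V .. 0.1 V, step size at most the PCM step
levels = 0.2 / step_pcm                           # 25.6 -> round up to a power of 2
R_dpcm = math.ceil(math.log2(levels))             # 5 bits/sample
rate_dpcm = fs * R_dpcm                           # 40 kbps

print(step_pcm, levels, R_dpcm, rate_pcm, rate_dpcm, rate_pcm / rate_dpcm)  # ratio 1.6
```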
Delta Modulation
• A message signal 𝑚(𝑡) that needs to be transmitted is oversampled, i.e., sampled
at a rate much higher than Nyquist rate
• Benefit of oversampling:
❑ increases the correlation between adjacent samples of the signal
❑ this is done to permit use of a simple quantizing strategy for constructing the encoded
signal
• Delta modulation (DM) provides a staircase approximation to oversampled version
of message signal as shown in Fig.
• Difference between input and the approximation is quantized into only two levels,
viz., ±∆:
❑ If approximation falls below signal at any sampling epoch, it is increased by ∆
❑ If approximation lies above signal, it is decreased by ∆
• If signal does not change
too rapidly from sample to
sample, staircase
approximation remains
within ±∆ of input signal

Ref: “Communication Systems” by Haykin


and Moher, 5th ed
Delta Modulation (contd.)
• If $m(t)$ is the input signal and $m_q(t)$ is the staircase approximation, then:
❑ $e(nT_s) = m(nT_s) - m_q(nT_s - T_s)$
❑ $e_q(nT_s) = \Delta\,\mathrm{sgn}\big(e(nT_s)\big)$
❑ $m_q(nT_s) = m_q(nT_s - T_s) + e_q(nT_s)$
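These three recursions translate directly into code; the sketch below (illustrative message, step size $\Delta$ and sampling rate) implements the delta modulator and the receiver's accumulator (the final low-pass filtering is omitted):

```python
import numpy as np

fs = 64_000                       # oversampled rate, well above the Nyquist rate (illustrative)
Ts = 1 / fs
delta = 0.05                      # step size (illustrative)

t = np.arange(0, 0.005, Ts)
m = np.sin(2 * np.pi * 400 * t)   # message m(t): a 400 Hz tone

# --- Transmitter: e[n] = m[n] - m_q[n-1], e_q[n] = delta*sgn(e[n]), m_q[n] = m_q[n-1] + e_q[n]
bits = np.zeros(len(t), dtype=int)      # 1 bit per sample: the sign of e[n]
m_q = 0.0
for n in range(len(t)):
    e = m[n] - m_q
    bits[n] = 1 if e >= 0 else 0
    m_q += delta if e >= 0 else -delta

# --- Receiver: accumulate +/- delta to rebuild the staircase, then low-pass filter
staircase = np.cumsum(np.where(bits == 1, delta, -delta))

# Slope-overload check: delta/Ts should be at least max |dm/dt| = 2*pi*400
print("delta/Ts =", delta / Ts, "  max slope =", 2 * np.pi * 400)
print("max tracking error:", np.max(np.abs(m - staircase)))
```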

• Fig. shows transmitter and


receiver of delta
modulation system
• Function of low-pass filter
in receiver:
❑ It has bandwidth equal to
that of 𝑚(𝑡)
❑ Removes noise in the
quantized staircase
waveform

Ref: “Communication Systems” by Haykin


and Moher, 5th ed
Delta Modulation (contd.)
• Delta modulation is subject to two types of quantization error:
1) Slope Overload Distortion
2) Granular Noise
• $m_q(t)$ can change at the maximum rate:
❑ $\frac{\Delta}{T_s}$
• So if signal $m(t)$ varies too rapidly in comparison with the value $\frac{\Delta}{T_s}$, slope overload distortion (noise) occurs as shown in fig
• Sufficient condition to prevent slope overload distortion:
❑ $\frac{\Delta}{T_s} \ge \max\left|\frac{dm(t)}{dt}\right|$
• Granular noise occurs when (see fig):
❑ Step size ∆ is too large relative to local slope characteristics of 𝑚(𝑡)
❑ This causes the staircase approximation 𝑚𝑞 𝑡 to hunt around a relatively flat segment of 𝑚(𝑡)
• Optimal step size ∆ needs to be chosen taking into account the trade-off between small
slope overload distortion and small granular noise
• Adaptive delta modulation:
❑ improved version of delta modulation in which step size ∆ varies with time in accordance with 𝑚(𝑡)

Ref: “Communication Systems” by Haykin


and Moher, 5th ed
Example
• Delta modulation has smaller bandwidth requirement
than PCM as illustrated in this example
• For telephone applications, a typical PCM system uses
8 kHz sampling rate and an 8 bit representation of
each sample
• So bit rate:
❑64 kbps
• Sampling rates for delta modulation range from 16 to
32 kHz depending on the desired voice quality
• Corresponding bit rates:
❑16 to 32 kbps, since delta modulation requires 1 bit per
sample
• Bandwidth savings under delta modulation compared
with PCM:
❑50 to 75%
