Lecture 2

1) The first step in any digital communication system is to transform the information source into a form compatible with digital transmission. This involves sampling, quantizing, and encoding the analog signal. 2) Sampling converts a continuous-time analog signal into discrete-time samples by multiplying it with an impulse train. Quantization maps the sampled amplitudes to a finite set of discrete levels. Encoding assigns a digital code to each quantized sample using pulse code modulation. 3) Quantization introduces error as the sampled amplitudes are approximated. The error can be modeled as additive quantization noise. Uniform quantization uses fixed step sizes, while non-uniform quantization varies the step sizes based on the statistics of the input signal, such as speech.

Uploaded by

Amir Khan

Digital Communications

Lecture 2
Last time, we talked about:

 Important features of digital communication systems
 Some basic concepts and definitions, such as signal classification, spectral density, random processes, linear systems, and signal bandwidth

Lecture 2 2
Today, we are going to talk about:

 The first important step in any DCS:
  Transforming the information source to a form compatible with a digital system
Formatting and transmission of baseband signal

[Figure] Formatting and transmission of a baseband signal. Transmit side: textual and digital information enter the Format block directly, while analog information is formatted by Sample → Quantize → Encode; the resulting bit stream is pulse modulated into waveforms and sent over the channel. Receive side: the received waveform is demodulated/detected and decoded; the Format block delivers textual and digital information to the sink, and a low-pass filter recovers the analog information.
Format analog signals
 To transform an analog waveform into a form
that is compatible with a digital
communication system, the following steps
are taken:
1. Sampling
2. Quantization and encoding
3. Baseband transmission

Sampling

Time domain: x_s(t) = x_δ(t) · x(t)
Frequency domain: X_s(f) = X_δ(f) * X(f)

[Figure] The waveforms x(t), x_δ(t), and x_s(t), with the corresponding magnitude spectra |X(f)|, |X_δ(f)|, and |X_s(f)|: multiplication by the impulse train in time replicates the spectrum in frequency.
Aliasing effect

[Figure] The sampled spectrum and the LP (low-pass) reconstruction filter: when the sampling rate falls below the Nyquist rate, the spectral replicas overlap (aliasing) and the original spectrum can no longer be recovered.
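The aliasing effect can be seen numerically. In this sketch (the tone and sampling frequencies are illustrative choices), a 3 Hz tone sampled at 4 Hz, below its 6 Hz Nyquist rate, produces exactly the same sample sequence as a 1 Hz tone:

```python
import numpy as np

# A tone at f0 Hz sampled at fs < 2*f0 is indistinguishable from a tone
# at |f0 - fs| Hz: the sampler cannot tell the two apart.
f0, fs = 3.0, 4.0            # 3 Hz tone, 4 Hz sampling rate (Nyquist rate is 6 Hz)
n = np.arange(16)            # sample indices
t = n / fs                   # sampling instants

original = np.cos(2 * np.pi * f0 * t)          # samples of the 3 Hz tone
alias    = np.cos(2 * np.pi * (f0 - fs) * t)   # samples of the aliased 1 Hz tone

print(np.allclose(original, alias))  # True
```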
Sampling theorem

The sampling process converts the analog signal into a pulse amplitude modulated (PAM) signal.

 Sampling theorem: A bandlimited signal with no spectral components beyond f_m Hz can be uniquely determined from its values sampled at uniform intervals T_s ≤ 1/(2 f_m).
 The sampling rate f_s = 2 f_m is called the Nyquist rate.
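A numerical sketch of the theorem (the signal and rate are illustrative choices): a tone sampled above its Nyquist rate can be recovered at an arbitrary instant between samples by ideal (sinc) interpolation over the available samples:

```python
import numpy as np

# Ideal reconstruction: x(t) = sum_n x(nTs) * sinc((t - n*Ts)/Ts).
# The sum is truncated, so recovery is exact only up to a small tail error.
fm, fs = 2.0, 10.0              # 2 Hz tone sampled at 10 Hz > 2*fm
Ts = 1.0 / fs
n = np.arange(-2000, 2000)      # enough terms that truncation error is tiny
samples = np.sin(2 * np.pi * fm * n * Ts)

def reconstruct(t):
    # np.sinc is the normalized sinc, sin(pi*x)/(pi*x)
    return np.sum(samples * np.sinc((t - n * Ts) / Ts))

t0 = 0.137                      # an arbitrary instant between sampling instants
error = abs(reconstruct(t0) - np.sin(2 * np.pi * fm * t0))
print(error < 1e-2)  # True
```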
Quantization
 Amplitude quantizing: Mapping samples of a continuous
amplitude waveform to a finite set of amplitudes.

[Figure] Quantizer transfer characteristic (In vs. Out): a staircase mapping of the input amplitudes onto the quantized values.

 Average quantization noise power: σ_q² = q²/12, where q is the step size
 Signal peak power: V_p² = (Lq/2)², for L quantization levels
 Signal peak power to average quantization noise power: (S/N)_q = (Lq/2)² / (q²/12) = 3L²
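The q²/12 noise-power model can be checked by simulation. This sketch (all parameters are illustrative) applies a uniform mid-rise quantizer to a random input confined to the dynamic range:

```python
import numpy as np

# Uniform mid-rise quantizer with L levels over [-Vp, Vp]; the measured
# error power should approach the q^2/12 model.
rng = np.random.default_rng(0)
Vp, L = 1.0, 256
q = 2 * Vp / L                                # step size

x = rng.uniform(-Vp, Vp, 100_000)             # input inside the dynamic range
xq = (np.floor(x / q) + 0.5) * q              # mid-rise quantized values
noise_power = np.mean((x - xq) ** 2)

print(abs(noise_power / (q ** 2 / 12) - 1) < 0.05)  # True: close to q^2/12
```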
Encoding (PCM)

 A uniform (linear) quantizer with encoding is called Pulse Code Modulation (PCM).
 Pulse code modulation (PCM): encoding the quantized samples into a digital word (PCM word or codeword).
 Each quantized sample is digitally encoded into an l-bit codeword, where L is the number of quantization levels and l = log₂ L.
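The relation l = log₂ L can be illustrated directly; this small sketch (not part of the lecture) enumerates the codewords for L = 8 levels:

```python
import math

# With L quantization levels, each level index is encoded into an
# l-bit PCM codeword, where l = log2(L).
L = 8
l = int(math.log2(L))                        # bits per codeword
codewords = [format(i, f"0{l}b") for i in range(L)]
print(l)          # 3
print(codewords)  # ['000', '001', '010', '011', '100', '101', '110', '111']
```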
Quantization example
x(t): analog waveform; x(nT_s): sampled values; x_q(nT_s): quantized values; T_s: sampling time. The boundaries between quantization regions lie midway between the quantization levels.

Codeword   Quantization level
111         3.1867
110         2.2762
101         1.3657
100         0.4552
011        -0.4552
010        -1.3657
001        -2.2762
000        -3.1867

Resulting PCM sequence: 110 110 111 110 100 010 011 100 011 100
Quantization error
 Quantizing error: The difference between the input and
output of a quantizer
e(t) = x̂(t) − x(t)

[Figure] Process of quantizing noise: the AGC output x(t) is mapped by the quantizer characteristic y = q(x) to x̂(t). Model of quantizing noise: the quantizer is modeled as an additive noise source, x̂(t) = x(t) + e(t).
Quantization error …
 Quantizing error:
 Granular (or linear) errors occur for inputs within the dynamic range of the quantizer
 Saturation errors occur for inputs outside the dynamic range of the quantizer
 Saturation errors are larger than granular errors
 Saturation errors can be avoided by proper tuning of the AGC
 Quantization noise variance:

σ_q² = E{[x − q(x)]²} = ∫ e²(x) p(x) dx = σ²_Lin + σ²_Sat

σ²_Lin = Σ_{l=1}^{L} (q_l²/12) p(x_l) q_l, which for a uniform quantizer (q_l = q for all l) reduces to σ²_Lin = q²/12
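The decomposition σ_q² = σ²_Lin + σ²_Sat can be demonstrated by simulation. In this sketch (all parameters are illustrative), a Gaussian input occasionally exceeds the quantizer's dynamic range, and the measured error power splits exactly into granular and saturation parts:

```python
import numpy as np

# Uniform mid-rise quantizer that saturates outside [-Vp, Vp]; error power
# is split by whether each input sample was inside the dynamic range.
rng = np.random.default_rng(1)
Vp, L = 1.0, 16
q = 2 * Vp / L

x = rng.normal(0.0, 0.5, 200_000)     # Gaussian input; its tails exceed +/-Vp
xq = np.clip((np.floor(x / q) + 0.5) * q, -Vp + q / 2, Vp - q / 2)
e2 = (x - xq) ** 2

inside = np.abs(x) <= Vp
var_total = np.mean(e2)
var_lin = np.mean(e2 * inside)        # granular (linear) contribution
var_sat = np.mean(e2 * ~inside)       # saturation contribution

print(np.isclose(var_total, var_lin + var_sat))  # True
```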
Uniform and non-uniform quant.
 Uniform (linear) quantizing:
  No assumption about the amplitude statistics and correlation properties of the input
  Does not exploit user-related specifications
  Robust to small changes in the input statistics, since it is not finely tuned to a specific set of input parameters
  Simple implementation
 Applications of uniform quantizers:
  Signal processing, graphics and display applications, process control applications
 Non-uniform quantizing:
  Uses the input statistics to tune the quantizer parameters
  Larger SNR than uniform quantizing with the same number of levels
  Non-uniform intervals in the dynamic range, each with the same quantization noise variance
 Applications of non-uniform quantizers:
  Commonly used for speech
Non-uniform quantization
 It is achieved by uniformly quantizing the "compressed" signal.
 At the receiver, an inverse compression characteristic, called "expansion", is employed to avoid signal distortion.

compression + expansion → companding

[Figure] Transmitter: x(t) → Compress (y = C(x)) → Quantize → ŷ(t) → Channel. Receiver: ŷ(t) → Expand → x̂(t).
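A common compression characteristic (not derived in this lecture) is the μ-law used in North American telephony, with μ = 255. The sketch below shows that expansion inverts compression, so with no quantizer between them the round trip is the identity:

```python
import numpy as np

# mu-law companding pair: compression C(x) and its inverse ("expansion").
mu = 255.0

def compress(x):
    # C(x) = sgn(x) * ln(1 + mu*|x|) / ln(1 + mu), for |x| <= 1
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y):
    # Inverse characteristic: C^{-1}(y) = sgn(y) * ((1 + mu)^|y| - 1) / mu
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1, 1, 101)
print(np.allclose(expand(compress(x)), x))  # True
```

With a uniform quantizer inserted between compress and expand, the effective step size becomes small for weak amplitudes and large for strong ones, which is exactly the non-uniform quantization described above.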
Statistics of speech amplitudes
 In speech, weak signals are more frequent than strong
ones.
[Figure] Probability density function of the normalized magnitude of a speech signal (magnitudes 0.0 to 3.0): the density is concentrated at weak amplitudes, peaking near zero.

 Using equal step sizes (a uniform quantizer) gives a low (S/N)_q for weak signals and a high (S/N)_q for strong signals.
 Adjusting the step size of the quantizer by taking the speech statistics into account improves the SNR over the input range.
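The SNR benefit can be checked by simulation. This sketch uses a Laplacian input as a stand-in for speech (an assumption, as are all parameters) and compares a uniform quantizer against a μ-law companded quantizer with the same number of levels:

```python
import numpy as np

# Uniform vs. companded quantization of a speech-like (Laplacian) input.
rng = np.random.default_rng(2)
mu, Vp, L = 255.0, 1.0, 256
q = 2 * Vp / L

x = np.clip(rng.laplace(0.0, 0.05, 200_000), -Vp, Vp)   # mostly weak amplitudes

def quant(v):
    # Uniform mid-rise quantizer over [-Vp, Vp]
    return np.clip((np.floor(v / q) + 0.5) * q, -Vp + q / 2, Vp - q / 2)

def snr_db(sig, err):
    return 10 * np.log10(np.mean(sig ** 2) / np.mean(err ** 2))

uniform_error = x - quant(x)

y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)    # compress
yq = quant(y)                                                # quantize
companded = np.sign(yq) * ((1 + mu) ** np.abs(yq) - 1) / mu  # expand
companded_error = x - companded

print(snr_db(x, companded_error) > snr_db(x, uniform_error))  # True
```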
Baseband transmission

 To transmit information through physical channels, PCM sequences (codewords) are transformed into pulses (waveforms).
  Each waveform carries a symbol from a set of size M.
  Each transmitted symbol represents k = log₂ M bits of the PCM words.
  PCM waveforms (line codes) are used for binary symbols (M = 2).
  M-ary pulse modulation is used for non-binary symbols (M > 2).
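The grouping of PCM bits into M-ary symbols can be sketched as follows (the bit stream and amplitude levels are illustrative):

```python
# Group a PCM bit stream into k = log2(M) bit symbols and map each
# group to one of M allowable amplitude levels.
M = 4
k = M.bit_length() - 1                      # k = log2(M) = 2 bits per symbol
levels = [-3, -1, 1, 3]                     # M = 4 allowable amplitudes

bits = "11011000"                           # a PCM bit stream
symbols = [levels[int(bits[i:i + k], 2)] for i in range(0, len(bits), k)]
print(symbols)  # [3, -1, 1, -3]
```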
PCM waveforms

 PCM waveform categories:
  Nonreturn-to-zero (NRZ)
  Return-to-zero (RZ)
  Phase encoded
  Multilevel binary

[Figure] Example waveforms for the bit pattern 1 0 1 1 0 (amplitudes +V/−V, bit duration T): NRZ-L, Unipolar-RZ, and Bipolar-RZ on the left; Manchester, Miller, and Dicode NRZ on the right.
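Two of these waveforms can be generated directly. This sketch (with V = 1 and two chips per bit) uses one common Manchester convention, 1 → +V then −V; conventions differ between references:

```python
# Generate NRZ-L and Manchester line codes as chip sequences
# (two half-bit chips per bit, amplitudes +1/-1).
def nrz_l(bits):
    # NRZ-L: 1 -> +V for the whole bit duration, 0 -> -V
    return [v for b in bits for v in ((+1, +1) if b else (-1, -1))]

def manchester(bits):
    # Manchester: 1 -> +V then -V, 0 -> -V then +V (mid-bit transition)
    return [v for b in bits for v in ((+1, -1) if b else (-1, +1))]

print(nrz_l([1, 0, 1]))       # [1, 1, -1, -1, 1, 1]
print(manchester([1, 0, 1]))  # [1, -1, -1, 1, 1, -1]
```

The mid-bit transition in every Manchester bit is what gives it the bit synchronization capability discussed below, at the cost of roughly twice the bandwidth of NRZ-L.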
PCM waveforms …
 Criteria for comparing and selecting PCM waveforms:
  Spectral characteristics (power spectral density and bandwidth efficiency)
  Bit synchronization capability
  Error detection capability
  Interference and noise immunity
  Implementation cost and complexity
Spectra of PCM waveforms

M-ary pulse modulation

 M-ary pulse modulation categories:
  M-ary pulse-amplitude modulation (PAM)
  M-ary pulse-position modulation (PPM)
  M-ary pulse-duration modulation (PDM)
 M-ary PAM is a multi-level signaling scheme in which each symbol takes one of the M allowable amplitude levels, each representing k = log₂ M bits of the PCM words.
 For a given data rate, M-ary PAM (M > 2) requires less bandwidth than binary PCM.
 For a given average pulse power, binary PCM is easier to detect than M-ary PAM (M > 2).
PAM example
