
Waveform Coding

The document outlines the structure and requirements for a Digital Communications course, emphasizing the importance of academic integrity and outlining key topics such as waveform coding, source coding, channel coding, and digital modulation. It provides guidelines for success in the course, including attendance, completing assignments, and understanding key mathematical concepts related to continuous and discrete random variables. Additionally, it introduces the basic components of communication systems and the significance of digital communication in modern technology.


Digital Communications

Prof H Xu
Feb. 10, 2025
Textbook
• Communication Systems – Haykin
• Digital Communication – Proakis
Cheating
• No late summary/practical will be accepted.
• Cheating will not be tolerated and will be dealt with
strictly. This includes:
– Copying someone else's Prac codes and report.
– Copying someone's report/exam answers.
– etc.

• DP requirements: school rule

Copyright © 2001, S. K. Mitra


How to pass Digital Comms
• Attend all lectures
• Do all tutorial/assignment problems
• Summarize your understanding after each
chapter; each summary is treated as a quiz or homework.
• Ask questions about lectures, tutorials,
homework, tests and past exams
• Answer the easy questions first!!
• Do your best to get full marks on the exam
questions you know how to do!!
Contents

Part 1: waveform coding

Part 2: source coding

Part 3: channel coding

Part 4: digital modulation


Part 1 waveform coding
Useful Maths
--Theory of continuous random variables
• Let X be a random variable. Assume the probability density
function (pdf) of X is f(x), with LB ≤ x ≤ UB.
• The mean of X is defined as

μ = E[X] = ∫_{LB}^{UB} x · f(x) dx

• The variance of X is defined as

σ² = D[X] = ∫_{LB}^{UB} (x − μ)² · f(x) dx

• Relationship: D[X] = E[X²] − {E[X]}²

• Remember: ∫_{LB}^{UB} f(x) dx = 1
Useful Maths
--Applications of Continuous RVs
• Example of μ = E[X] = ∫_{−∞}^{∞} x · f(x) dx and σ² = D[X] = ∫_{−∞}^{∞} (x − μ)² · f(x) dx:

Let X be Gaussian distributed:

f(x) = (1/√(2πσ²)) · exp(−(x − a)²/(2σ²))

μ = E[X] = ∫_{−∞}^{∞} x · f(x) dx = a

σ_x² = D[X] = ∫_{−∞}^{∞} (x − μ)² · f(x) dx = σ²
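These Gaussian moments can be checked numerically; a minimal sketch using a midpoint rule, with illustrative values a = 2, σ = 1.5 (not from the slides):

```python
import math

def gauss_pdf(x, a, sigma):
    # f(x) = exp(-(x - a)^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)
    return math.exp(-(x - a) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def integrate(g, lo, hi, n=100000):
    # midpoint rule, accurate enough for this smooth integrand
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

a, sigma = 2.0, 1.5
mu  = integrate(lambda t: t * gauss_pdf(t, a, sigma), a - 12, a + 12)
var = integrate(lambda t: (t - mu) ** 2 * gauss_pdf(t, a, sigma), a - 12, a + 12)
print(round(mu, 4), round(var, 4))   # -> 2.0 2.25
```

The integration limits a ± 12 stand in for ±∞; the tails beyond that point are negligible.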
Useful Maths
--Theory of discrete random variables
Discrete time domain:
The mean of X is defined as:

μ = E[X] = Σ_{i=1}^{k} x_i · p_i

The variance of X is defined as

σ² = D[X] = Σ_{i=1}^{k} (x_i − μ)² · p_i

Relationship: D[X] = E[X²] − {E[X]}²

Remember: Σ_{i=1}^{k} p_i = 1
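A quick numerical illustration of these definitions; the values x_i and p_i below are assumed for the example:

```python
xs = [-3, -1, 1, 3]             # assumed values x_i
ps = [0.25, 0.25, 0.25, 0.25]   # probabilities p_i (sum to 1)

mu  = sum(x * p for x, p in zip(xs, ps))               # E[X]
var = sum((x - mu) ** 2 * p for x, p in zip(xs, ps))   # D[X]
ex2 = sum(x ** 2 * p for x, p in zip(xs, ps))          # E[X^2]
print(mu, var, ex2 - mu ** 2)   # -> 0.0 5.0 5.0  (D[X] = E[X^2] - E[X]^2)
```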
Useful Maths
--Application in Digital Communications

f(x) = 1/12 for −4 ≤ x < −1;  1/4 for −1 ≤ x < 1;  1/12 for 1 ≤ x < 4;  0 for x ∉ [−4, 4]

m(x) = −3 for −4 ≤ x < −2;  −1 for −2 ≤ x < 0;  1 for 0 ≤ x < 2;  3 for 2 ≤ x < 4
Useful Maths
--Application in Digital Communications

X ∈ {1 + j, 1 − j, −1 + j, −1 − j}

p(x = 1 + j) = p(1 − j) = p(−1 + j) = p(−1 − j) = 0.25

μ = Σ_{i=1}^{4} x_i p_i = 0.25[(1 + j) + (1 − j) + (−1 + j) + (−1 − j)] = 0

σ² = Σ_{i=1}^{4} |x_i − μ|² p_i = 0.25[|1 + j|² + |1 − j|² + |−1 + j|² + |−1 − j|²] = 2
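This constellation calculation can be verified directly; the sketch uses |x − μ|² = (x − μ)(x − μ)* for the complex variance:

```python
p = 0.25                                   # equiprobable symbols
X = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]     # the four constellation points

mu = sum(x * p for x in X)                 # mean
# variance: |x - mu|^2 = (x - mu)(x - mu)*
var = sum(((x - mu) * (x - mu).conjugate()).real * p for x in X)
print(mu, var)   # -> 0j 2.0
```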
Useful Maths
--Applications in Digital Communications

Assume that a random variable has a Poisson distribution:

P(X = k) = λ^k e^{−λ} / k!,  k = 0, 1, …,  λ > 0

E[X] = Σ_{k=0}^{∞} k · (λ^k e^{−λ} / k!) = λ

E[X²] = Σ_{k=0}^{∞} k² · (λ^k e^{−λ} / k!) = λ² + λ

D[X] = E[X²] − {E[X]}² = λ
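The Poisson moments can be checked by truncating the infinite sums; λ = 3 is an assumed example value:

```python
import math

lam = 3.0   # assumed rate parameter
# truncate the infinite sums: terms beyond k = 100 are negligible for lam = 3
pk = [lam ** k * math.exp(-lam) / math.factorial(k) for k in range(100)]

ex  = sum(k * p for k, p in enumerate(pk))       # E[X]   = lam
ex2 = sum(k * k * p for k, p in enumerate(pk))   # E[X^2] = lam^2 + lam
print(round(ex, 6), round(ex2, 6), round(ex2 - ex ** 2, 6))   # -> 3.0 12.0 3.0
```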
Introduction
→Communication systems are used to transport an information-
bearing signal from a source to a destination via a channel.
→The information-bearing signal can be:
(a) Analog: analog communication system;
(b) Digital: digital communication system.

→Digital communication is expanding because of:

(a) the impact of the computer;
(b) flexibility and compatibility;
(c) the possibility of improved reliability;
(d) the availability of wideband channels;
(e) integrated solid-state electronic technology.
Introduction
Basic communication system
Introduction
→Information Source

(a) Generates the message(s). Examples are voice,
television pictures, computer keyboard input, etc.
(b) If the message is not electrical, a transducer is used to
convert it into an electrical signal.
(c) Source can be analog or digital.
(d) A source can have memory or be memoryless.

What does memory mean?

What does memoryless mean?


Introduction
→Source encoder/decoder
(a) The source encoder maps the signal produced by the
source into a digital form (for both analog and digital).
(b) The mapping is done so as to remove redundancy in the
output signal and to represent the original signal as
efficiently as possible (using as few bits as possible).
(c) The mapping must be such that an inverse operation
(source decoding) can be easily done.
(d) Primary objective of source encoding/decoding is to reduce
bandwidth, while maintaining adequate signal fidelity.
Introduction
→Channel encoder/decoder
(a) Maps the input digital signal into another digital signal in
such a way that the effect of channel noise is minimized.
(b) Channel coding thus provides for reliable communication
over a noisy channel.
(c) Redundancy is introduced at the channel encoder and
exploited at the decoder to correct errors.
→Modulator
(a) Modulation provides for efficient transmission of the
signal over the channel.
(b) Most modulation schemes impress the information on
either the amplitude, phase or frequency of a sinusoid.
(c) Modulation and demodulation are done such that the
bit error rate is minimized and bandwidth is conserved.
Introduction
→Channel

Characteristics of a channel are:
(a) Bandwidth
(b) Power
(c) Amplitude and phase variations
(d) Linearity, etc.

Typical channel models are the additive white Gaussian noise (AWGN)
channel and the Rayleigh fading channel.
Waveform coding
-- introduction
→Techniques for converting analog signal into a digital bit stream
fall into the broad category "waveform coding". Examples are:
(a) Pulse code modulation (PCM)
(b) Differential PCM
(c) Delta modulation
(d) Linear prediction coding (LPC)
(e) Subband coding
→The basic operations in most waveform codes are:
(a) Sampling
(b) Quantization
(c) Encoding
Waveform coding
-- introduction
Example: PCM

→The Low Pass Filter (LPF) at the transmitter is used to attenuate
high frequency components.
→The sampling operation is performed in accordance with the
sampling theorem: a band-limited signal of finite energy, with
no frequency components higher than f_M, is completely
described by samples taken at a rate of 2 f_M or higher.
Waveform coding
-- introduction

→Aliasing results if the sampling frequency f_s < 2 f_M.

→Quantization produces a discrete-amplitude, discrete-time
signal from the discrete-time, continuous-amplitude signal.
→Encoding assigns binary codewords to the quantized signal.
Waveform coding
-- Quantization
Classification of quantization:

Uniform quantization: mid-tread type, mid-rise type
Nonuniform quantization: μ-law, A-law


Waveform coding
-- uniform quantization
→A uniform quantizer is one in which the step size
remains the same throughout the input range.
→No assumption is made about the amplitude statistics and correlation
properties of the input.

Mid-tread: zero is one of the output levels; M is odd.
Mid-rise: zero is not one of the output levels; M is even.
Waveform coding
-- uniform quantizing noise
→The quantizing error is the difference between the input
and output signals of the quantizer.

−Δ/2 ≤ input < Δ/2  →  output = 0
Δ/2 ≤ input < 3Δ/2  →  output = Δ
Waveform coding
-- uniform quantizing noise

The maximum instantaneous value of the quantization error is Δ/2.
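The mid-tread rule above can be sketched in a few lines; the step size and sample values here are illustrative assumptions:

```python
delta = 0.5   # assumed step size

def midtread(x):
    # mid-tread uniform quantizer: zero is one of the output levels
    return delta * round(x / delta)

samples = [-0.3, -0.24, 0.1, 0.26, 0.74, 1.0]   # illustrative inputs
outs = [midtread(x) for x in samples]
errs = [abs(x - q) for x, q in zip(samples, outs)]
print(outs)                     # -> [-0.5, 0.0, 0.0, 0.5, 0.5, 1.0]
print(max(errs) <= delta / 2)   # -> True: error never exceeds delta/2
```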
Waveform coding
-- uniform quantizing noise
Can you analyze the uniform quantizing noise for the mid-rise type?
Waveform coding
-- performance of a uniform quantizer
→The performance of a quantizer is measured in terms of the
signal to quantizing error ratio (SQER):

SQER = signal power / noise power = E[m²(kTs)] / (mean square quantizing noise)
Waveform coding
-- performance of a uniform quantizer

0 ≤ input < Δ  →  output = Δ/2
Δ ≤ input < 2Δ  →  output = 3Δ/2, i.e. 3Δ/2 − Δ/2 ≤ input ≤ 3Δ/2 + Δ/2
In general: m_i − Δ/2 ≤ input ≤ m_i + Δ/2

→For a signal with distribution p(m), the signal power is

E[m²(kTs)] = Σ_{i=1}^{Q} m_i² ∫_{m_i−Δ/2}^{m_i+Δ/2} p(m) dm
Waveform coding
-- Sampling, quantization and coding

For example: Q = 16 quantization steps;  Δ = step size = 0.5 V;  Ts = 1/(2 f_M)

Thresholds: Δ/2 = 0.25,  3Δ/2 = 0.75,  5Δ/2 = 1.25,  7Δ/2 = 1.75
Waveform coding
-- Sampling, quantization and coding
For example: Q = 16 quantization steps;  Δ = step size = 0.5 V;  Ts = 1/(2 f_M)

Sample codewords: 0100, 0101

Output coding: natural binary


Waveform coding
-- Sampling, quantization and coding
For example: Q = 16 quantization steps;  Δ = step size = 0.5 V;  Ts = 1/(2 f_M)

Output coding: natural binary

Bit rate = (log₂ Q) · 2 f_M


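A sketch of the bit-rate formula and natural binary coding; Q matches the slide's example, while f_M is an assumed value:

```python
import math

Q = 16          # quantization steps -> log2(Q) = 4 bits per sample
f_M = 4000.0    # assumed highest signal frequency (Hz); not given on the slide

bits_per_sample = int(math.log2(Q))
bit_rate = bits_per_sample * 2 * f_M     # bit rate = (log2 Q) * 2 f_M
print(bits_per_sample, bit_rate)         # -> 4 32000.0

# natural binary codeword for quantization level index 5, using 4 bits
print(format(5, '04b'))                  # -> 0101
```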
Waveform coding
-- Sampling, quantization and coding
MSQE = Σ_{i=1}^{Q} (MSQE)_i · P_i

where P_i is the probability that the signal falls in the ith interval, and
(MSQE)_i is the mean square quantization error in the ith interval:

(MSQE)_i = ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m | i) dm

p(m | i) = p(m, i)/P_i = p(m)/P_i,   P_i = ∫_{m_i−Δ/2}^{m_i+Δ/2} p(m) dm

So MSQE = Σ_{i=1}^{Q} [ ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m | i) dm ] P_i

Waveform coding
-- Sampling, quantization and coding
MSQE = Σ_{i=1}^{Q} [ ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m | i) dm ] P_i

With p(m | i) = p(m)/P_i:

MSQE = Σ_{i=1}^{Q} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m) dm
Waveform coding
-- Linear quantizer with larger Q
If the number of quantizing steps is large, then p(m) can be
considered constant over each quantization interval:

MSQE = Σ_{i=1}^{Q} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m) dm
     ≈ Σ_{i=1}^{Q} p(m_i) ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² dm
     = Σ_{i=1}^{Q} p(m_i) ∫_{−Δ/2}^{Δ/2} x² dx
     = Σ_{i=1}^{Q} p(m_i) · (2/3)(Δ/2)³ = Σ_{i=1}^{Q} p(m_i) · Δ³/12

Since Σ_{i=1}^{Q} P_i = Σ_{i=1}^{Q} p(m_i) Δ = 1, we have Σ_{i=1}^{Q} p(m_i) = 1/Δ, so

MSQE = Δ²/12
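The Δ²/12 result can be checked by Monte Carlo simulation with a uniformly distributed input; the step size and sample count below are arbitrary choices:

```python
import math, random

random.seed(1)
delta = 0.25          # assumed step size
N = 200000

def midrise(x):
    # mid-rise uniform quantizer with step size delta
    return delta * (math.floor(x / delta) + 0.5)

# input uniformly distributed over (-1, 1), which spans a whole number of steps
msqe = sum((x - midrise(x)) ** 2 for x in (random.uniform(-1, 1) for _ in range(N))) / N
print(round(msqe, 5), round(delta ** 2 / 12, 5))   # both close to 0.00521
```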
Waveform coding
-- Linear quantizer with larger Q
Example: the input to a Q-step uniform quantizer has a uniform
pdf over the interval (−a, a). Calculate the average signal to quantizer
noise power ratio at the output.

Solution:

MSQE = Σ_{i=1}^{Q} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m) dm

With QΔ = 2a and p(m) = 1/(2a):

MSQE = (1/(2a)) Σ_{i=1}^{Q} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² dm = Σ_{i=1}^{Q} (1/(2a)) · Δ³/12 = (Q/(2a)) · Δ³/12

Using QΔ = 2a:

MSQE = Δ²/12
Waveform coding
-- Linear quantizer with larger Q
Output signal power:

∫_{−a}^{a} x² (1/(2a)) dx = a²/3 = (1/3)(QΔ/2)² = Q²Δ²/12

Or, using E[m²(kTs)] = Σ_{i=1}^{Q} m_i² ∫_{m_i−Δ/2}^{m_i+Δ/2} p(m) dm = Σ_{i=1}^{Q} m_i² · Δ/(2a):

Assume Q is odd, so the output levels are 0, ±Δ, ±2Δ, …, ±((Q−1)/2)Δ. Then

E[m²(kTs)] = (Δ/(2a)) · 2Δ² [1² + 2² + … + ((Q−1)/2)²]

Using 1² + 2² + … + n² = n(n+1)(2n+1)/6 with n = (Q−1)/2, and QΔ = 2a:

E[m²(kTs)] = (Q² − 1)Δ²/12

(For large Q this agrees with Q²Δ²/12 above.)
Waveform coding
-- Linear quantizer with larger Q

SQER = signal power / noise power = (Q²Δ²/12) / (Δ²/12) = Q²

SQER|dB = 10 log₁₀ Q² = 20 log₁₀ 2ⁿ = 20 · n · log₁₀ 2 ≈ 6.02n dB

Every additional bit increases the SQER by about 6.02 dB.


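The 6.02 dB-per-bit rule can be tabulated directly:

```python
import math

for n in range(1, 9):                  # n-bit quantizer, Q = 2^n levels
    Q = 2 ** n
    sqer_db = 10 * math.log10(Q ** 2)  # = 20 n log10(2), about 6.02 n
    print(n, round(sqer_db, 2))        # first line: 1 6.02

# each extra bit adds about 6.02 dB
```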
Waveform coding
-- Linear quantizer with larger Q
Example: consider a zero-mean Gaussian signal with N-bit
binary coding, so Q = 2^N is the number of quantizing steps.
The signal power is equal to the variance of the Gaussian signal, σ².

Solution:

MSQE = Σ_{i=1}^{Q} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m) dm

where p(m) = (1/√(2πσ²)) exp(−m²/(2σ²))

Q = 2^N, so Q is an even number, and p(m) is symmetric.
Waveform coding
-- Linear quantizer with larger Q
MSQE = 2 Σ_{i=0}^{Q/2−1} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² (1/√(2πσ²)) exp(−m²/(2σ²)) dm,  with m_i = iΔ + Δ/2

Splitting off the outermost (overload) interval:

MSQE = 2 Σ_{i=0}^{Q/2−2} ∫_{iΔ}^{(i+1)Δ} (m − iΔ − Δ/2)² p(m) dm + 2 ∫_{(Q/2−1)Δ}^{∞} (m − (Q/2−1)Δ − Δ/2)² p(m) dm

SQER = signal power / MSQE = σ² / MSQE
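Evaluating these Gaussian integrals in closed form is tedious; a Monte Carlo estimate is a common shortcut. The ±4σ quantizer range below is an assumed loading choice, not fixed by the slides:

```python
import math, random

random.seed(7)
sigma = 1.0
N_bits = 6                      # assumed resolution
Q = 2 ** N_bits
x_max = 4 * sigma               # assumed quantizer loading: +/- 4 sigma
delta = 2 * x_max / Q

def quantize(x):
    # mid-rise uniform quantizer, clipped to the outermost levels
    x = max(-x_max, min(x_max - 1e-12, x))
    i = int((x + x_max) // delta)
    return -x_max + (i + 0.5) * delta

n = 200000
msqe = sum((x - quantize(x)) ** 2 for x in (random.gauss(0, sigma) for _ in range(n))) / n
sqer_db = 10 * math.log10(sigma ** 2 / msqe)   # SQER = signal power / MSQE
print(round(sqer_db, 1))        # roughly 29 dB for 6 bits at 4-sigma loading
```

Note the result is below the 6.02·N = 36 dB of a full-range uniform input, because the Gaussian signal rarely exercises the outer levels.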
Example:
Solution:

(a) Key point: ∫_{−∞}^{∞} f(v) dv = 1

(b) Key point: the peak-to-peak value is divided by Q.

The outputs of the uniform quantizer are −3, −1, 1, 3.

(c) The variance of the quantization error is given by

MSQE = Σ_{i=1}^{Q} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m) dm

E[e²] = 2∫_{2}^{4} (3 − v)² (1/12) dv + 2∫_{1}^{2} (1 − v)² (1/12) dv + 2∫_{0}^{1} (1 − v)² (1/4) dv

(d) The signal power is given by

E[m²(kTs)] = Σ_{i=1}^{Q} m_i² ∫_{m_i−Δ/2}^{m_i+Δ/2} p(m) dm
= (−3)² ∫_{−4}^{−2} f(v) dv + (−1)² ∫_{−2}^{0} f(v) dv + (1)² ∫_{0}^{2} f(v) dv + (3)² ∫_{2}^{4} f(v) dv

Optimal quantizer
[Figure: pdf f(v), a triangle of height 1/4 over −4 ≤ v ≤ 4]
Waveform coding
-- Nonuniform quantizing
→Problems with uniform quantization
– Only optimal for a uniformly distributed signal
– Real audio signals (speech and music) are concentrated
near zero
– The human ear is more sensitive to quantization errors at small
values

→Solution: use non-uniform quantization

– the quantization interval is smaller near zero
Waveform coding
-- Nonuniform quantizing
→uses variable step sizes;
→small steps in regions where the signal has a higher
probability;
→the quantizer steps (Δ_i) and the levels (m_i) are chosen to
maximize the SQER.
→in practice, a nonuniform quantizer is realized by signal
compression followed by a uniform quantizer: y = g(m)
→at the receiver an expander performs the
inverse operation: m = g⁻¹(y)
→the compressor and expander taken together
constitute a compander.
Waveform coding
-- Nonuniform quantizing
Two common laws are the μ-law and the A-law.

μ-law:

y = x_max · sign(x) · log(1 + μ|x|/x_max) / log(1 + μ)

A-law (for x normalized to [−1, 1]):

y = A|x| / (1 + log A),  for 0 ≤ |x| ≤ 1/A
y = (1 + log(A|x|)) / (1 + log A),  for 1/A ≤ |x| ≤ 1
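A sketch of μ-law compression and expansion; μ = 255 is an assumption here (the value used in North American telephony, not taken from the slide), and the two functions cancel exactly:

```python
import math

def mu_compress(x, x_max=1.0, mu=255):
    # y = x_max * sign(x) * log(1 + mu*|x|/x_max) / log(1 + mu)
    return x_max * math.copysign(1.0, x) * math.log(1 + mu * abs(x) / x_max) / math.log(1 + mu)

def mu_expand(y, x_max=1.0, mu=255):
    # inverse: x = (x_max/mu) * ((1 + mu)^(|y|/x_max) - 1) * sign(y)
    return math.copysign(1.0, y) * (x_max / mu) * ((1 + mu) ** (abs(y) / x_max) - 1)

x = 0.3
y = mu_compress(x)
print(round(mu_expand(y), 10))   # -> 0.3  (expander undoes the compressor)
```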
Waveform coding
-- Nonuniform quantizing
Waveform coding
-- implementation of μ-law
→(1) Transform the signal using the μ-law:

y = F(x) = x_max · sign(x) · log(1 + μ|x|/x_max) / log(1 + μ)

→(2) Quantize the transformed value using a uniform
quantizer.
→(3) Transform the quantized value back using the inverse μ-law:

x = F⁻¹(y) = (x_max/μ) · [(1 + μ)^{|y|/x_max} − 1] · sign(y)
Waveform coding
-- implementation of μ-law
Example:
For the sequence {1.2, −0.2, −0.5, 0.4, 0.89, 1.3},
quantize it using a μ-law quantizer on the range (−1.5, 1.5)
with 4 levels, and write down the quantized sequence.

Solution: suppose μ = 9; we also know x_max = 1.5.

Step 1: y = F(x) = x_max · sign(x) · log(1 + μ|x|/x_max) / log(1 + μ)

y = [1.3707, −0.5136, −0.9031, 0.7972, 1.2031, 1.4167]


Waveform coding
-- implementation of μ-law
Example:
Step 2: uniform quantization of (−1.5, 1.5)

Output levels: −1.125, −0.375, 0.375, 1.125
Decision boundaries: −1.5, −0.75, 0, 0.75, 1.5

y = [1.3707, −0.5136, −0.9031, 0.7972, 1.2031, 1.4167]

Q(y) = [1.125, −0.375, −1.125, 1.125, 1.125, 1.125]


Waveform coding
-- implementation of μ-law
Example:
→Step 3: inverse μ-law

x = F⁻¹(y) = (x_max/μ) · [(1 + μ)^{|y|/x_max} − 1] · sign(y)

F⁻¹(0.375) = 0.1297,  F⁻¹(1.125) = 0.7706

Quantized sequence:
[0.77, −0.13, −0.77, 0.77, 0.77, 0.77]
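The whole three-step example can be reproduced in a few lines; the 4-level mid-rise quantizer below is a sketch matching the levels ±0.375 and ±1.125 used above:

```python
import math

mu, x_max, levels = 9, 1.5, 4

def F(x):       # mu-law compressor
    return x_max * math.copysign(1.0, x) * math.log(1 + mu * abs(x) / x_max) / math.log(1 + mu)

def F_inv(y):   # inverse mu-law (expander)
    return math.copysign(1.0, y) * (x_max / mu) * ((1 + mu) ** (abs(y) / x_max) - 1)

def uniform_q(y):
    # 4-level mid-rise quantizer on (-1.5, 1.5): outputs -1.125, -0.375, 0.375, 1.125
    delta = 2 * x_max / levels
    i = min(levels - 1, max(0, int((y + x_max) // delta)))
    return -x_max + (i + 0.5) * delta

seq = [1.2, -0.2, -0.5, 0.4, 0.89, 1.3]
quantized = [round(F_inv(uniform_q(F(x))), 2) for x in seq]
print(quantized)   # -> [0.77, -0.13, -0.77, 0.77, 0.77, 0.77]
```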
Waveform coding
-- performance of a nonuniform quantizer
Recall:
→The performance of a quantizer is measured in terms of the
signal to quantizing error ratio:

SQER = E[m²(kTs)] / (mean square quantizing noise (MSQE))

→For a signal with distribution p(m), the signal power is

E[m²(kTs)] = Σ_{i=1}^{Q} m_i² ∫_{m_i−Δ/2}^{m_i+Δ/2} p(m) dm

MSQE = Σ_{i=1}^{Q} ∫_{m_i−Δ/2}^{m_i+Δ/2} (m − m_i)² p(m) dm

For a nonuniform quantization system, it is very difficult to
calculate the MSQE analytically.
Waveform coding
-- performance of a nonuniform quantizer
Normally we use mean square error (MSE) between original and
quantized samples or signal to noise ratio (SNR) to evaluate the
performance of nonuniform quantization system.
MSE = (1/N) Σ_{k=1}^{N} (x(k) − x̂(k))²

where N is the number of samples in the sequence.

SNR = σ_x² / MSE

where σ_x² is the variance of the original signal.


Waveform coding
-- performance of a nonuniform quantizer
example: in the above example,
the original sequence: {1.2, −0.2, −0.5, 0.4, 0.89, 1.3}
the quantized sequence: [0.77, −0.13, −0.77, 0.77, 0.77, 0.77]

SNR = σ_x² / MSE

MSE = (1/N) Σ_{k=1}^{N} (x(k) − x̂(k))²
    = (1/6)[(1.2 − 0.77)² + (−0.2 + 0.13)² + (−0.5 + 0.77)² + (0.4 − 0.77)² + (0.89 − 0.77)² + (1.3 − 0.77)²]

σ_x² = (1/6)[(1.2 − x̄)² + (−0.2 − x̄)² + (−0.5 − x̄)² + (0.4 − x̄)² + (0.89 − x̄)² + (1.3 − x̄)²]

x̄ = (1/6)(1.2 − 0.2 − 0.5 + 0.4 + 0.89 + 1.3)
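These numbers can be evaluated directly:

```python
x    = [1.2, -0.2, -0.5, 0.4, 0.89, 1.3]       # original sequence
xhat = [0.77, -0.13, -0.77, 0.77, 0.77, 0.77]  # quantized sequence
N = len(x)

mse   = sum((a - b) ** 2 for a, b in zip(x, xhat)) / N
xbar  = sum(x) / N                              # sample mean
var_x = sum((a - xbar) ** 2 for a in x) / N     # signal variance
snr   = var_x / mse

print(round(mse, 4), round(var_x, 4), round(snr, 2))   # -> 0.1158 0.4635 4.0
```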
Waveform coding
-- Sampling, quantization and coding
For example: Q = 16 quantization steps;  Δ = step size = 0.5 V;  Ts = 1/(2 f_M)

Output coding: natural binary (the PCM sequence)

Bit rate = (log₂ Q) · 2 f_M


Waveform coding
-- Differential PCM (DPCM)
PCM: Pulse Code Modulation
A uniform linear quantizer followed by binary encoding is called PCM.
PCM: encoding the quantized signal into digital bits.

Each quantized sample is encoded with l = log₂ L bits,
where L is the number of quantization levels.

For a correlated signal, is it possible to reduce l?


Waveform coding
-- Differential PCM
→speech and many other signals contain enough structure that
there is correlation among adjacent samples.
→this is most evident when sampling at higher than the Nyquist rate.
→if the samples are m(Ts), m(2Ts), m(3Ts), …, the first
difference is D_r = m(rTs) − m((r−1)Ts).

E[D_r²] = E[m²(rTs)] + E[m²((r−1)Ts)] − 2E[m(rTs)m((r−1)Ts)]

→For a zero-mean stationary process,

E[m²(rTs)] = E[m²((r−1)Ts)] = σ_m²

E[m(rTs)m((r−1)Ts)] = ρσ_m² = R_mm(Ts)

where ρ is the correlation coefficient.


Waveform coding
-- Differential PCM
E[D_r²] = E[m²(rTs)] + E[m²((r−1)Ts)] − 2E[m(rTs)m((r−1)Ts)] = 2σ_m²(1 − ρ)

For ρ > 1/2, E[D_r²] < σ_m².

That means the variance of D_r = m(rTs) − m((r−1)Ts) is
less than the variance of the sampled signal.

So for a given number of quantization steps, better performance
can be obtained by quantizing D_r = m(rTs) − m((r−1)Ts) rather
than the samples themselves.

(ρ > 1/2 → Differential PCM;  ρ < 1/2 → PCM)
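The variance reduction can be demonstrated on a simulated first-order autoregressive (AR(1)) signal; ρ = 0.9 is an assumed correlation, not a value from the slides:

```python
import math, random

random.seed(3)
rho, n = 0.9, 200000            # assumed correlation coefficient and length

# zero-mean, unit-variance AR(1) process: adjacent-sample correlation = rho
m = [random.gauss(0, 1)]
for _ in range(n - 1):
    m.append(rho * m[-1] + math.sqrt(1 - rho ** 2) * random.gauss(0, 1))

var_m = sum(v * v for v in m) / n              # sigma_m^2 (zero-mean signal)
d = [m[r] - m[r - 1] for r in range(1, n)]     # first differences D_r
var_d = sum(v * v for v in d) / len(d)

print(round(var_d / var_m, 2))   # close to 2*(1 - rho) = 0.2
```

Since ρ > 1/2 here, the differences have much smaller variance than the samples, which is exactly why DPCM can use fewer bits.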
Waveform coding
-- Differential PCM
→the procedure is to encode the
difference e(rTs) = m(rTs) − m̂[(r−1)Ts],
where m̂[(r−1)Ts] is predicted from previous values of the
reconstructed output.

Transmitter:
S(i) → [subtract Ŝ(i)] → e(i) → [Quantiser] → e(i) + q(i) → [DPCM encoder]
The predictor forms Ŝ(i) from the sum of its previous output and the
quantized difference e(i) + q(i).
Waveform coding
-- Delta Modulation

→uses single-bit quantization.
→oversampling is used to increase the correlation
between adjacent samples.
→it is a 1-bit version of DPCM.
→uses a staircase approximation to the oversampled signal.


Bit Rate of a Digital Sequence
→Nyquist sampling rate: f_S ≥ 2 f_m

→Quantization resolution: B bits/sample

→Bit rate: R = f_S × B bits/sec

For example:
A speech signal sampled at 8 kHz and quantized to 8 bits/sample gives
R = f_S × B = 8 × 8 = 64 kbits/sec
Summary of waveform coding
→Understand the general concept of quantization

→Can perform uniform quantization on a given signal and
calculate the SQER

→Understand the principle of non-uniform quantization, and can
perform μ-law quantization and calculate the SQER

→Can calculate the bit rate given the sampling rate and quantization
levels

→Know the advantages of digital representation

→Understand the difference between DPCM and PCM.
