UNIT II

Delta Modulation (DM):

Let mn  m(nTs ) , n  0,1,2,
w hereTs is the sampling period and m(nTs ) is a sample of m(t ).
The error signal is en
 mn  mq n  1 eq (3.52)

n   sgn(en ) (3.53)

mq nmq n 1 eq n (3.54)


w her e mq n is the quantizer output , e q n is
SCAD Engineering College Page 18

the quantized version of en , and  is the step size www.EasyEngineering.net



The modulator consists of a comparator, a quantizer, and an accumulator. The output of the accumulator is

    m_q[n] = Δ Σ_{i=1}^{n} sgn(e[i]) = Σ_{i=1}^{n} e_q[i]        (3.55)

There are two types of quantization errors: slope overload distortion and granular noise.


Slope Overload Distortion and Granular Noise:

Denote the quantization error by q[n]:

    m_q[n] = m[n] + q[n]                (3.56)

Recalling (3.52), we have

    e[n] = m[n] − m[n−1] − q[n−1]       (3.57)

Except for q[n−1], the quantizer input is a first backward difference of the input signal (a measure of its slope). To avoid slope-overload distortion, we require

    Δ/T_s ≥ max |dm(t)/dt|              (3.58)

On the other hand, granular noise occurs when the step size Δ is too large relative to the local slope of m(t).
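As an illustration of equations (3.52)–(3.54) and the slope-overload condition (3.58), here is a minimal Python sketch of a delta modulator; the sinusoidal test input, sampling rate, and step size are illustrative assumptions, not values from the notes.

```python
import numpy as np

def delta_modulate(m, delta):
    """Delta-modulate samples m[n] per (3.52)-(3.54); returns bits and staircase."""
    mq = np.zeros(len(m))               # m_q[n], staircase approximation
    bits = np.zeros(len(m), dtype=int)
    prev = 0.0                          # accumulator state m_q[n-1]
    for n, mn in enumerate(m):
        e = mn - prev                   # e[n] = m[n] - m_q[n-1]      (3.52)
        eq = delta * np.sign(e) if e != 0 else delta   # e_q[n]       (3.53)
        prev = prev + eq                # m_q[n] = m_q[n-1] + e_q[n]  (3.54)
        mq[n] = prev
        bits[n] = 1 if eq > 0 else 0
    return bits, mq

fs, f0 = 8000.0, 100.0                  # illustrative sampling and signal rates
t = np.arange(200) / fs
m = np.sin(2 * np.pi * f0 * t)
# Slope-overload check (3.58): delta/Ts must exceed max|dm/dt| = 2*pi*f0
delta_ok = 2 * np.pi * f0 / fs * 1.2    # chosen to satisfy the condition
bits, mq = delta_modulate(m, delta_ok)
print("max tracking error:", np.max(np.abs(m - mq)))
```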
Delta-Sigma modulation (sigma-delta modulation):

The Σ–Δ modulator, which contains an integrator, relieves the drawback of delta modulation (its differentiator). Beneficial effects of using the integrator:

1. It pre-emphasizes the low-frequency content of the message.
2. It increases the correlation between adjacent samples (reducing the variance of the error signal at the quantizer input).
3. It simplifies receiver design: because the transmitter has an integrator, the receiver consists simply of a low-pass filter. (The differentiator in the conventional DM receiver is cancelled by the integrator.)


Linear Prediction (to reduce the sampling rate):

Consider a finite-duration impulse response (FIR) discrete-time filter which consists of three blocks:
1. A set of p unit-delay elements (z⁻¹), where p is the prediction order
2. A set of multipliers with coefficients w1, w2, ..., wp
3. A set of adders (Σ)

The filter output (the linear prediction of the input) is

    x̂[n] = Σ_{k=1}^{p} w_k x[n−k]                      (3.59)

The prediction error is

    e[n] = x[n] − x̂[n]                                 (3.60)

Let the index of performance be the mean-square error

    J = E[e²[n]]                                       (3.61)

Find w1, w2, ..., wp to minimize J. From (3.59), (3.60) and (3.61) we have

    J = E[x²[n]] − 2 Σ_{k=1}^{p} w_k E[x[n] x[n−k]]
        + Σ_{j=1}^{p} Σ_{k=1}^{p} w_j w_k E[x[n−j] x[n−k]]    (3.62)

Assume X(t) is a stationary process with zero mean (E[x[n]] = 0), so

    σ_X² = E[x²[n]] − (E[x[n]])² = E[x²[n]]

The autocorrelation is

    R_X(kT_s) = R_X[k] = E[x[n] x[n−k]]

We may simplify J as

    J = σ_X² − 2 Σ_{k=1}^{p} w_k R_X[k] + Σ_{j=1}^{p} Σ_{k=1}^{p} w_j w_k R_X[k−j]    (3.63)

Setting the partial derivatives to zero,

    ∂J/∂w_k = −2 R_X[k] + 2 Σ_{j=1}^{p} w_j R_X[k−j] = 0

    Σ_{j=1}^{p} w_j R_X[k−j] = R_X[k] = R_X[−k],   k = 1, 2, ..., p    (3.64)

Equations (3.64) are called the Wiener-Hopf equations. In matrix form, if R_X⁻¹ exists,

    w0 = R_X⁻¹ r_X                                     (3.66)

where

    w0 = [w1, w2, ..., wp]^T
    r_X = [R_X[1], R_X[2], ..., R_X[p]]^T

and R_X is the p-by-p Toeplitz autocorrelation matrix built from R_X[0], R_X[1], ..., R_X[p−1]:

    R_X = | R_X[0]    R_X[1]    ...  R_X[p−1] |
          | R_X[1]    R_X[0]    ...  R_X[p−2] |
          | ...       ...       ...  ...      |
          | R_X[p−1]  R_X[p−2]  ...  R_X[0]   |

Substituting (3.64) into (3.63) yields

    J_min = σ_X² − 2 Σ_{k=1}^{p} w_k R_X[k] + Σ_{k=1}^{p} w_k R_X[k]
          = σ_X² − Σ_{k=1}^{p} w_k R_X[k]
          = σ_X² − r_X^T w0 = σ_X² − r_X^T R_X⁻¹ r_X    (3.67)

Since r_X^T R_X⁻¹ r_X ≥ 0, J_min is always less than σ_X².
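The following is a brief numerical sketch of (3.63)–(3.67), assuming the autocorrelation R_X[k] is estimated from a data record; the AR(1) test process and its coefficient 0.9 are illustrative assumptions.

```python
import numpy as np

def wiener_hopf(x, p):
    """Estimate R_X[k] from data and solve R_X w0 = r_X per (3.66)."""
    x = x - np.mean(x)                  # enforce the zero-mean assumption
    N = len(x)
    R = np.array([x[:N - k] @ x[k:] / N for k in range(p + 1)])  # R_X[0..p]
    RX = np.array([[R[abs(j - k)] for j in range(p)] for k in range(p)])
    rX = R[1:p + 1]
    w0 = np.linalg.solve(RX, rX)        # optimal predictor taps
    Jmin = R[0] - rX @ w0               # J_min from (3.67)
    return w0, Jmin

rng = np.random.default_rng(0)
v = rng.standard_normal(10000)
x = np.zeros_like(v)                    # AR(1) test process: x[n] = 0.9 x[n-1] + v[n]
for n in range(1, len(v)):
    x[n] = 0.9 * x[n - 1] + v[n]
w0, Jmin = wiener_hopf(x, p=3)
print(w0, Jmin)                         # first tap near 0.9; J_min < var(x)
```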


Linear adaptive prediction:

The predictor is adaptive in the following sense:
1. Compute w_k, k = 1, 2, ..., p, starting from arbitrary initial values.
2. Iterate using the method of steepest descent.

Define the gradient vector

    g_k = ∂J/∂w_k,   k = 1, 2, ..., p                  (3.68)

Let w_k[n] denote the value at iteration n. Then update w_k[n+1] as

    w_k[n+1] = w_k[n] − (1/2) μ g_k,   k = 1, 2, ..., p    (3.69)

where μ is a step-size parameter and the factor 1/2 is for convenience of presentation. From (3.63),

    g_k = ∂J/∂w_k = −2 R_X[k] + 2 Σ_{j=1}^{p} w_j R_X[k−j]
        = −2 E[x[n] x[n−k]] + 2 Σ_{j=1}^{p} w_j E[x[n−j] x[n−k]],   k = 1, 2, ..., p    (3.70)

To simplify the computation we use x[n] x[n−k] in place of E[x[n] x[n−k]] (ignoring the expectation):

    ĝ_k[n] = −2 x[n] x[n−k] + 2 Σ_{j=1}^{p} ŵ_j[n] x[n−j] x[n−k],   k = 1, 2, ..., p    (3.71)

    ŵ_k[n+1] = ŵ_k[n] + μ x[n−k] ( x[n] − Σ_{j=1}^{p} ŵ_j[n] x[n−j] )
             = ŵ_k[n] + μ x[n−k] e[n],   k = 1, 2, ..., p    (3.72)

where, by (3.59) and (3.60),

    e[n] = x[n] − Σ_{j=1}^{p} ŵ_j[n] x[n−j]            (3.73)

Figure 3.27 Block diagram illustrating the linear adaptive prediction process
Differential Pulse-Code Modulation (DPCM):

Usually PCM has a sampling rate higher than the Nyquist rate. The encoded signal then contains redundant information; DPCM can efficiently remove this redundancy.


Figure 3.28 DPCM system. (a) Transmitter. (b) Receiver.

The input signal to the quantizer is defined by

    e[n] = m[n] − m̂[n]                  (3.74)

where m̂[n] is a prediction value. The quantizer output is

    e_q[n] = e[n] + q[n]                (3.75)

where q[n] is the quantization error. The prediction-filter input is

    m_q[n] = m̂[n] + e_q[n]              (3.77)

From (3.74),

    m_q[n] = m[n] + q[n]                (3.78)


Processing Gain:

The output signal-to-noise ratio of the DPCM system is

    (SNR)_o = σ_M² / σ_Q²               (3.79)

where σ_M² and σ_Q² are the variances of m[n] (with E[m[n]] = 0) and q[n]. This can be factored as

    (SNR)_o = (σ_M² / σ_E²)(σ_E² / σ_Q²) = G_p (SNR)_Q    (3.80)

where σ_E² is the variance of the prediction error, and the signal-to-quantization-noise ratio is

    (SNR)_Q = σ_E² / σ_Q²               (3.81)

The processing gain is

    G_p = σ_M² / σ_E²                   (3.82)

Design the prediction filter to maximize G_p (i.e., minimize σ_E²).
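As a hedged numeric illustration of (3.79)–(3.82): for an AR(1) message with adjacent-sample correlation ρ, the optimal one-tap predictor gives σ_E² = σ_M²(1 − ρ²), so G_p = 1/(1 − ρ²). The value ρ = 0.95 below is an assumed example, not a figure from the notes.

```python
import numpy as np

rho = 0.95                    # illustrative adjacent-sample correlation
sigma_M2 = 1.0                # message variance (E[m[n]] = 0 assumed)
# One-tap optimal predictor: w1 = R[1]/R[0] = rho, hence
sigma_E2 = sigma_M2 * (1 - rho**2)    # prediction-error variance
Gp = sigma_M2 / sigma_E2              # processing gain (3.82)
print(f"G_p = {Gp:.1f}  ({10*np.log10(Gp):.1f} dB)")   # about 10.3, i.e. 10.1 dB
```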
Adaptive Differential Pulse-Code Modulation (ADPCM):

To code speech at low bit rates, we have two aims in mind:
1. Remove redundancies from the speech signal as far as possible.


2. Assign the available bits in a perceptually efficient manner.

Figure 3.29 Adaptive quantization with backward estimation (AQB).
Figure 3.30 Adaptive prediction with backward estimation (APB).


UNIT III

CORRELATIVE LEVEL CODING:

• Correlative-level coding (partial response signaling): adding ISI to the transmitted signal in a controlled manner
• Since the ISI introduced into the transmitted signal is known, its effect can be interpreted at the receiver
• A practical method of achieving the theoretical maximum signaling rate of 2W symbols per second in a bandwidth of W hertz
• Uses realizable and perturbation-tolerant filters

Duo-binary Signaling:

"Duo" implies doubling of the transmission capacity of a straight binary system.

• Binary input sequence {b_k}: uncorrelated binary symbols 1, 0

    a_k = +1 if symbol b_k is 1
    a_k = −1 if symbol b_k is 0

    c_k = a_k + a_{k−1}

    H_I(f) = H_Nyquist(f) [1 + exp(−j2πf T_b)]
           = H_Nyquist(f) [exp(jπf T_b) + exp(−jπf T_b)] exp(−jπf T_b)
           = 2 H_Nyquist(f) cos(πf T_b) exp(−jπf T_b)

    H_Nyquist(f) = 1 for |f| ≤ 1/(2T_b), 0 otherwise

    H_I(f) = 2 cos(πf T_b) exp(−jπf T_b) for |f| ≤ 1/(2T_b), 0 otherwise

    h_I(t) = sin(πt/T_b)/(πt/T_b) + sin[π(t − T_b)/T_b]/[π(t − T_b)/T_b]
           = T_b² sin(πt/T_b) / [πt(T_b − t)]
• The tails of h_I(t) decay as 1/|t|², a faster rate of decay than the 1/|t| encountered in the ideal Nyquist channel.
• Let â_k represent the estimate of the original pulse a_k as conceived by the receiver at time t = kT_b.
• Decision feedback: the technique of using a stored estimate of the previous symbol.
• Error propagation: a drawback; once errors are made, they tend to propagate through the output.
• Precoding: a practical means of avoiding the error-propagation phenomenon, applied before the duobinary coding:

    d_k = b_k ⊕ d_{k−1}

i.e., d_k is symbol 1 if exactly one of symbol b_k or d_{k−1} is 1, and symbol 0 otherwise.

• {d_k} is applied to a pulse-amplitude modulator, producing a corresponding two-level sequence of short pulses {a_k}, where a_k = +1 or −1 as before, and

    c_k = a_k + a_{k−1}

    c_k = 0   if data symbol b_k is 1
    c_k = ±2  if data symbol b_k is 0
• Decision rule at the receiver: if |c_k| < 1, say symbol b_k is 1; if |c_k| > 1, say symbol b_k is 0; |c_k| = 1 calls for a random guess in favor of symbol 1 or 0.
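A small Python sketch of the precoded duobinary chain above (d_k = b_k ⊕ d_{k−1}, two-level PAM, c_k = a_k + a_{k−1}, and the |c_k| decision rule), assuming ideal noise-free samples; the initial precoder state d₀ = 0 is an arbitrary assumption.

```python
import numpy as np

def duobinary_tx(bits, d0=0):
    """Precode (XOR), map to +/-1, and form c_k = a_k + a_{k-1}."""
    d = d0
    a_prev = 1 if d0 else -1
    c = []
    for b in bits:
        d = b ^ d                 # precoder: d_k = b_k XOR d_{k-1}
        a = 1 if d else -1        # two-level PAM
        c.append(a + a_prev)      # controlled ISI: c_k in {-2, 0, +2}
        a_prev = a
    return np.array(c)

def duobinary_rx(c):
    """Decision rule above: |c_k| < 1 -> 1, |c_k| > 1 -> 0."""
    return (np.abs(c) < 1).astype(int)

bits = np.array([0, 0, 1, 0, 1, 1, 0])
c = duobinary_tx(bits)
print(c, duobinary_rx(c))         # recovered bits match the input
```

Note that the decision operates symbol by symbol, so an isolated channel error does not propagate – which is the point of the precoder.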


Modified Duo-binary Signaling:

• A spectrum that is nonzero at the origin is undesirable.
• Modified duobinary subtracts amplitude-modulated pulses spaced 2T_b seconds apart:

    c_k = a_k − a_{k−2}

    H_IV(f) = H_Nyquist(f) [1 − exp(−j4πf T_b)]
            = 2j H_Nyquist(f) sin(2πf T_b) exp(−j2πf T_b)

    H_IV(f) = 2j sin(2πf T_b) exp(−j2πf T_b) for |f| ≤ 1/(2T_b), 0 elsewhere

    h_IV(t) = sin(πt/T_b)/(πt/T_b) − sin[π(t − 2T_b)/T_b]/[π(t − 2T_b)/T_b]
            = 2T_b² sin(πt/T_b) / [πt(2T_b − t)]

• Precoding:

    d_k = b_k ⊕ d_{k−2}

i.e., d_k is symbol 1 if exactly one of symbol b_k or d_{k−2} is 1, and symbol 0 otherwise.


• Decision rule at the receiver: if |c_k| > 1, say symbol b_k is 1; if |c_k| < 1, say symbol b_k is 0; |c_k| = 1 calls for a random guess in favor of symbol 1 or 0.


Generalized form of correlative-level coding:

The duobinary and modified-duobinary schemes are special cases of a generalized tapped-delay-line filter with impulse response

    h(t) = Σ_{n=0}^{N−1} w_n sinc(t/T_b − n)










Baseband M-ary PAM Transmission:

• Produces one of M possible amplitude levels
• T: symbol duration
• 1/T: signaling rate in symbols per second (bauds)
  – the corresponding data rate is R = (1/T) log2 M bits per second
• T_b: bit duration of the equivalent binary PAM (T = T_b log2 M)
• To realize the same average probability of symbol error, the transmitted power must be increased by a factor of M²/log2 M compared to binary PAM


Tapped-delay-line equalization :

• An approach to high-speed transmission that combines two basic signal-processing operations:
  – discrete PAM
  – a linear modulation scheme
• The number of detectable amplitude levels is often limited by ISI
• Residual distortion (ISI) is the limiting factor on the data rate of the system
 Equalization : to compensate for the residual distortion
 Equalizer : filter
– A device well-suited for the design of a linear equalizer
is the tapped-delay-line filter
– Total number of taps is chosen to be (2N+1)


    h(t) = Σ_{k=−N}^{N} w_k δ(t − kT)

• p(t) is equal to the convolution of c(t) and h(t):

    p(t) = c(t) ⋆ h(t) = c(t) ⋆ Σ_{k=−N}^{N} w_k δ(t − kT)
         = Σ_{k=−N}^{N} w_k c(t) ⋆ δ(t − kT) = Σ_{k=−N}^{N} w_k c(t − kT)

• Sampling at t = nT gives the discrete convolution sum

    p(nT) = Σ_{k=−N}^{N} w_k c((n − k)T)

• The Nyquist criterion for distortionless transmission, with T used in place of T_b and the normalized condition p(0) = 1, requires

    p(nT) = 1 for n = 0
    p(nT) = 0 for n = ±1, ±2, ..., ±N

• Zero-forcing equalizer:
  – optimum in the sense that it minimizes the peak distortion (ISI) – the worst case
  – simple to implement
  – the longer the equalizer, the more closely the ideal condition for distortionless transmission is approached
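A sketch of the zero-forcing design, assuming a known sampled channel response c(nT): the taps are obtained by forcing p(nT) = δ[n] at the 2N+1 sample points, i.e., by solving the discrete convolution system above. The channel values below are illustrative.

```python
import numpy as np

def zero_forcing_taps(c, N):
    """Solve sum_k w_k c((n-k)T) = delta[n] for n = -N..N.
    c: dict of channel samples c[m] for integer m (zero elsewhere)."""
    n_idx = range(-N, N + 1)
    A = np.array([[c.get(n - k, 0.0) for k in n_idx] for n in n_idx])
    b = np.zeros(2 * N + 1)
    b[N] = 1.0                        # p(0) = 1, p(nT) = 0 otherwise
    return np.linalg.solve(A, b)

# Illustrative channel with residual ISI around the main sample
channel = {-1: 0.1, 0: 1.0, 1: 0.2, 2: 0.05}
w = zero_forcing_taps(channel, N=3)
# Check: the equalized response at the sample points is ~delta[n]
p = [sum(w[k + 3] * channel.get(n - k, 0.0) for k in range(-3, 4))
     for n in range(-3, 4)]
print(np.round(p, 3))
```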

Adaptive Equalizer :

• The channel is usually time-varying:
  – differences in the transmission characteristics of the individual links that may be switched together
  – differences in the number of links in a connection

• Adaptive equalization: the equalizer adjusts itself by operating on the input signal
• Training sequence:
  – precall equalization
  – the channel changes little during an average data call
• Prechannel equalization requires a feedback channel
• Postchannel equalization
• Synchronous: the tap spacing is the same as the symbol duration of the transmitted signal

Least-Mean-Square Algorithm:

• Adaptation may be achieved by observing the error between the desired pulse shape and the actual pulse shape, and using this error to estimate the direction in which the tap weights should be changed
• Mean-square error criterion:
  – more general in application
  – less sensitive to timing perturbations
• Let d_n denote the desired response, y_n the actual response, and e_n = d_n − y_n the error signal
• The mean-square error is defined by the cost function

    ξ = E[e_n²]

• Differentiating gives the ensemble-averaged cross-correlation

    ∂ξ/∂w_k = 2 E[e_n ∂e_n/∂w_k] = −2 E[e_n ∂y_n/∂w_k] = −2 E[e_n x_{n−k}] = −2 R_ex(k)

    R_ex(k) = E[e_n x_{n−k}]



• Optimality condition for minimum mean-square error:

    ∂ξ/∂w_k = 0 for k = 0, ±1, ..., ±N

• The mean-square error is a second-order (parabolic) function of the tap weights – a multidimensional bowl-shaped surface
• The adaptive process makes successive adjustments of the tap weights, seeking the bottom of the bowl (the minimum value)
• Steepest-descent algorithm:
  – the successive adjustments to each tap weight are made in the direction opposite to the corresponding component of the gradient vector
  – recursive formula (μ: step-size parameter):

    w_k(n+1) = w_k(n) − (1/2) μ ∂ξ/∂w_k,   k = 0, ±1, ..., ±N
             = w_k(n) + μ R_ex(k),         k = 0, ±1, ..., ±N

• Least-mean-square (LMS) algorithm:
  – the steepest-descent algorithm is not available in an unknown environment
  – the LMS algorithm approximates it using instantaneous estimates:

    R̂_ex(k) = e_n x_{n−k}
    ŵ_k(n+1) = ŵ_k(n) + μ e_n x_{n−k}

• LMS is a feedback system


• In the case of small μ, the LMS algorithm behaves roughly like the steepest-descent algorithm
Operation of the equalizer:

• Training mode:
  – a known sequence is transmitted and a synchronized version of it is generated in the receiver
  – the training sequence is a so-called pseudo-noise (PN) sequence
• Decision-directed mode:
  – entered after the training sequence is completed
  – tracks relatively slow variations in the channel characteristic
• A large μ gives fast tracking but excess mean-square error
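A minimal sketch of the two operating modes, assuming BPSK (±1) symbols, a simple FIR channel, and an LMS tap update; the channel taps, filter length, and μ are illustrative assumptions.

```python
import numpy as np

def lms_equalizer(x, train, num_taps, mu):
    """Training mode while the known (PN-like) sequence lasts, then
    decision-directed mode using the slicer output as the reference."""
    w = np.zeros(num_taps)
    dec = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # tap inputs x[n], x[n-1], ...
        y = w @ u                              # equalizer output
        d = train[n] if n < len(train) else (1.0 if y >= 0 else -1.0)
        w = w + mu * (d - y) * u               # LMS update on e_n = d_n - y_n
        dec[n] = 1.0 if y >= 0 else -1.0
    return w, dec

rng = np.random.default_rng(3)
syms = rng.choice([-1.0, 1.0], size=4000)
x = np.convolve(syms, [1.0, 0.4, 0.2])[:4000]  # channel with mild ISI (illustrative)
w, dec = lms_equalizer(x, train=syms[:500], num_taps=9, mu=0.01)
print("symbol errors after convergence:", np.sum(dec[1000:] != syms[1000:]))
```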


Implementation Approaches:

ww Analog
– CCD, Tap-weight is stored in digital memory, analog
w.E
sample and multiplication
– Symbol rate is too high
 Digital
asy
– Sample is quantized and stored in shift register
En
– Tap weight is stored in shift register, digital
multiplication
 Programmable digital
– Microprocessor
gi nee
– Flexibility
– Same H/W may be time shared rin
Decision-Feedback Equalization:

• Baseband channel impulse response: {h_n}; input: {x_n}

    y_n = Σ_k h_k x_{n−k}
        = h_0 x_n + Σ_{k<0} h_k x_{n−k} + Σ_{k>0} h_k x_{n−k}

• The idea is to use data decisions made on the basis of the precursor (k < 0) part to take care of the postcursors (k > 0)
  – the decisions would obviously have to be correct

• Feedforward section: a tapped-delay-line equalizer
• Feedback section: the decision is made on previously detected symbols of the input sequence
  – a nonlinear feedback loop is introduced by the decision device
• Stacking the feedforward and feedback taps and their inputs,

    c_n = [w_n(1); w_n(2)],   v_n = [x_n; a_n]

the error and the LMS updates are

    e_n = a_n − c_n^T v_n
    w_{n+1}(1) = w_n(1) + μ1 e_n x_n
    w_{n+1}(2) = w_n(2) + μ2 e_n a_n

Eye Pattern:
• An experimental tool for evaluating such systems in an insightful manner:
  – the synchronized superposition of all the signals of interest viewed within a particular signaling interval
• Eye opening: the interior region of the eye pattern
• In the case of an M-ary system, the eye pattern contains (M − 1) eye openings, where M is the number of discrete amplitude levels



Interpretation of Eye Diagram:



UNIT IV

ASK, OOK, MASK:

• The amplitude (or height) of the sine wave varies to transmit the ones and zeros


• One amplitude encodes a 0 while another amplitude encodes a 1 (a form of amplitude modulation)
w.E
Binary amplitude shift keying, Bandwidth:
asy
• d ≥ 0 is a factor related to the condition of the line

    B = (1 + d) × S = (1 + d) × N × (1/r)


Implementation of binary ASK:

Frequency Shift Keying:
• One frequency encodes a 0 while another frequency encodes a 1 (a form of frequency modulation)


    s(t) = A cos(2πf1 t)  for binary 1
    s(t) = A cos(2πf2 t)  for binary 0

FSK Bandwidth:

• Limiting factor: the physical capabilities of the carrier
• Not as susceptible to noise as ASK

• Applications:
  – on voice-grade lines, used up to 1200 bps
  – used for high-frequency (3 to 30 MHz) radio transmission
  – used at higher frequencies on LANs that use coaxial cable

DBPSK:
• Differential BPSK
– 0 = same phase as last signal element
– 1 = 180º shift from last signal element


The four QPSK signal elements, one per pair of input bits (dibit), can be written as

    s(t) = A cos(2πf_c t + 3π/4)   for 11
    s(t) = A cos(2πf_c t + π/4)    for 01
    s(t) = A cos(2πf_c t − 3π/4)   for 00
    s(t) = A cos(2πf_c t − π/4)    for 10

Concept of a constellation:
M-ary PSK:

Using multiple phase angles, with each angle possibly having more than one amplitude, multiple signal elements can be achieved.

    D = R / L = R / log2 M

  – D = modulation rate, baud
  – R = data rate, bps
  – M = number of different signal elements = 2^L
  – L = number of bits per signal element

For example, with M = 8 (L = 3 bits per element), a data rate of R = 9600 bps requires a modulation rate of only D = 3200 baud.


QAM:
  – As an example of QAM, 12 different phases are combined with two different amplitudes
  – Since only 4 of the phase angles have 2 different amplitudes, there are a total of 16 combinations
  – With 16 signal combinations, each baud equals 4 bits of information (2⁴ = 16)
  – QAM combines ASK and PSK such that each signal element corresponds to multiple bits
  – More phases than amplitudes
  – Minimum bandwidth requirement is the same as for ASK or PSK

QAM and QPR:

• QAM is a combination of ASK and PSK
  – two different signals are sent simultaneously on the same carrier frequency
  – M = 4, 16, 32, 64, 128, 256
• Quadrature partial response (QPR)
  – 3 levels (+1, 0, −1), giving 9QPR and 49QPR variants


Offset quadrature phase-shift keying (OQPSK):

• QPSK can have a 180-degree phase jump, which causes amplitude fluctuation
• By offsetting the timing of the odd and even bits by one bit-period (half a symbol-period), the in-phase and quadrature components never change at the same time.



Generation and Detection of Coherent BPSK:

Figure 6.26 Block diagrams for (a) binary FSK transmitter and
(b) coherent binary FSK receiver.


Fig. 6.28

Figure 6.30 (a) Input binary sequence. (b) Waveform of scaled time function s1f1(t). (c) Waveform of scaled time function s2f2(t). (d) Waveform of the MSK signal s(t) obtained by adding s1f1(t) and s2f2(t).


Figure 6.29 Signal-space diagram for MSK system.


Generation and Detection of MSK Signals:

Figure 6.31 Block diagrams for (a) MSK transmitter and (b)
coherent MSK receiver.



UNIT V ERROR CONTROL CODING

• Forward Error Correction (FEC)


– Coding designed so that errors can be corrected at the
receiver
– Appropriate for delay sensitive and one-way
transmission (e.g., broadcast TV) of data
– Two main types, namely block codes and convolutional
codes. We will only look at block codes

Block Codes:
• We will consider only binary data
• Data is grouped into blocks of length k bits (the dataword)
• Each dataword is coded into a block of length n bits (the codeword), where in general n > k
• This is known as an (n,k) block code
• A vector notation is used for the datawords and codewords:
  – dataword d = (d1 d2 … dk)
  – codeword c = (c1 c2 … cn)
• The redundancy introduced by the code is quantified by the code rate:
  – code rate = k/n
  – i.e., the higher the redundancy, the lower the code rate
Hamming Distance: e t
• Error control capability is determined by the Hamming distance
• The Hamming distance between two codewords is equal to the number of differences between them, e.g.,
    10011011
    11010010  have a Hamming distance = 3
• Alternatively, one can add the codewords (mod 2):
    = 01001001 (now count up the ones)
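A one-function Python sketch of the computation just described:

```python
def hamming_distance(a, b):
    """Number of differing bits; equivalently the weight of the mod-2 sum."""
    return sum(x != y for x, y in zip(a, b))

c1 = [1, 0, 0, 1, 1, 0, 1, 1]
c2 = [1, 1, 0, 1, 0, 0, 1, 0]
print(hamming_distance(c1, c2))   # 3, matching the example above
```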


• The maximum number of detectable errors is

    d_min − 1

• The maximum number of correctable errors is given by

    t = ⌊(d_min − 1) / 2⌋

where d_min is the minimum Hamming distance between two codewords and ⌊·⌋ denotes the largest integer not exceeding the enclosed quantity.

Linear Block Codes:
• As seen from the second parity-code example, it is possible to use a table to hold all the codewords for a code and to look up the appropriate codeword based on the supplied dataword
• Alternatively, it is possible to create codewords by addition of other codewords. This has the advantage that there is no longer any need to hold every possible codeword in the table.
• If there are k data bits, all that is required is to hold k linearly independent codewords, i.e., a set of k codewords none of which can be produced by linear combinations of 2 or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose those which have '1' in just one of the first k positions and '0' in the other k−1 of the first k positions.

• For example, for a (7,4) code only four codewords are required, e.g.,

    1 0 0 0 1 1 0
    0 1 0 0 1 0 1
    0 0 1 0 0 1 1
    0 0 0 1 1 1 1

• So, to obtain the codeword for dataword 1011, the first, third
and fourth codewords in the list are added together, giving
1011010
• This process will now be described in more detail

• An (n,k) block code has code vectors

    d = (d1 d2 … dk) and c = (c1 c2 … cn)

• The block coding process can be written as c = dG, where G is the generator matrix

    G = | a11 a12 … a1n |   | a1 |
        | a21 a22 … a2n | = | a2 |
        | …   …   …  …  |   | …  |
        | ak1 ak2 … akn |   | ak |

• Thus,

    c = Σ_{i=1}^{k} d_i a_i

• The a_i must be linearly independent: since codewords are given by summations of the a_i vectors, to avoid two datawords having the same codeword the a_i vectors must be linearly independent.
• The sum (mod 2) of any two codewords is also a codeword: for datawords d1 and d2 with d3 = d1 + d2,

    c3 = Σ_i d3i a_i = Σ_i (d1i + d2i) a_i = Σ_i d1i a_i + Σ_i d2i a_i = c1 + c2
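A short sketch of the encoding operation c = dG in mod-2 arithmetic, using the four (7,4) codewords listed above as the rows of G:

```python
import numpy as np

# Generator matrix of the (7,4) code above (rows are the four
# linearly independent codewords)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(d, G):
    """c = dG with mod-2 arithmetic."""
    return np.mod(np.array(d) @ G, 2)

print(encode([1, 0, 1, 1], G))    # -> [1 0 1 1 0 1 0], as derived above
```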

Error Correcting Power of LBC:

• The Hamming distance of a linear block code (LBC) is simply the minimum Hamming weight (number of 1s, or equivalently the distance from the all-zero codeword) of the non-zero codewords
• Note d(c1, c2) = w(c1 + c2), as shown previously
• For an LBC, c1 + c2 = c3
• So min(d(c1, c2)) = min(w(c1 + c2)) = min(w(c3))
• Therefore, to find the minimum Hamming distance we just need to search among the 2^k codewords for the minimum Hamming weight – far simpler than doing a pairwise check over all possible codewords.
Linear Block Codes – example 1:

• For example, a (4,2) code with

    G = | 1 0 1 1 |
        | 0 1 0 1 |

    a1 = [1 0 1 1]
    a2 = [0 1 0 1]

• For d = [1 1],

    c = 1·[1 0 1 1] + 1·[0 1 0 1] = [1 1 1 0]

Linear Block Codes – example 2:

• A (6,5) code with

    G = | 1 0 0 0 0 1 |
        | 0 1 0 0 0 1 |
        | 0 0 1 0 0 1 |
        | 0 0 0 1 0 1 |
        | 0 0 0 0 1 1 |

is an even single-parity-check code.

Systematic Codes:

• For a systematic block code the dataword appears unaltered in the codeword – usually at the start
• The generator matrix has the structure

    G = | 1 0 … 0  p11 p12 … p1R |
        | 0 1 … 0  p21 p22 … p2R |  = [I | P]
        | … … … …  …   …   …  …  |
        | 0 0 … 1  pk1 pk2 … pkR |

where R = n − k
• P is often referred to as the parity bits. I is the k×k identity matrix, which ensures that the dataword appears at the beginning of the codeword; P is a k×R matrix.

Decoding Linear Codes:

• One possibility is a ROM look-up table
• In this case the received codeword is used as an address
• Example – even single-parity-check code:

    Address   Data
    000000    0
    000001    1
    000010    1
    000011    0
    ………       .

• The data output is the error flag, i.e., 0 – codeword OK, 1 – error detected
• If there is no error, the dataword is the first k bits of the codeword
• For an error-correcting code the ROM can also store datawords


• Another possibility is algebraic decoding, i.e., the error flag


is computed from the received codeword (as in the case of
simple parity codes)
• How can this method be extended to more complex error
detection and correction codes?

Parity Check Matrix:

• A linear block code is a linear subspace S_sub of the space S of all length-n vectors
• Consider the subset S_null of all length-n vectors in S that are orthogonal to all vectors in S_sub
• It can be shown that the dimensionality of S_null is n − k, where n is the dimensionality of S and k is the dimensionality of S_sub
• It can also be shown that S_null is a valid subspace of S and, consequently, S_sub is also the null space of S_null
• S_null can be represented by its basis vectors. In this case the generator basis vectors (the 'generator matrix' H) denote the generator matrix for S_null – of dimension n − k = R
• This matrix H is called the parity check matrix of the code defined by G, where G is the generator matrix for S_sub – of dimension k
• Note that the number of vectors in the basis defines the dimension of the subspace
• So the dimension of H is n − k (= R), and all vectors in the null space are orthogonal to all the vectors of the code
• Since the rows of H, namely the vectors b_i, are members of the null space, they are orthogonal to any code vector
• So a vector y is a codeword only if yH^T = 0
• Note that a linear block code can be specified by either G or H


Parity Check Matrix:

    H = | b11 b12 … b1n |   | b1 |
        | b21 b22 … b2n | = | b2 |
        | …   …   …  …  |   | …  |
        | bR1 bR2 … bRn |   | bR |

where R = n − k.

• So H is used to check if a codeword is valid
• The rows of H, namely the b_i, are chosen to be orthogonal to the rows of G, namely the a_i
• Consequently the dot product of any valid codeword with any b_i is zero. This is so since

    c = Σ_{i=1}^{k} d_i a_i

and so

    b_j · c = b_j · Σ_{i=1}^{k} d_i a_i = Σ_{i=1}^{k} d_i (a_i · b_j) = 0

• This means that a codeword is valid (but not necessarily correct) only if cH^T = 0. To ensure this it is required that the rows of H are independent and orthogonal to the rows of G
• That is, the b_i span the remaining R (= n − k) dimensions of the codespace

• For example, consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a plane) spanned by a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is orthogonal to the plane containing the rows of the G matrix, i.e., a1 and a2
• Any received codeword which is not in the plane containing a1 and a2 (i.e., an invalid codeword) will thus have a component in the direction of b1, yielding a non-zero dot product between itself and b1.

Error Syndrome:

• For error-correcting codes we need a method to compute the required correction
• To do this we use the error syndrome s of a received codeword c_r:

    s = c_r H^T

• If c_r is corrupted by the addition of an error vector e, then c_r = c + e and

    s = (c + e) H^T = cH^T + eH^T = 0 + eH^T

so the syndrome depends only on the error
• That is, we can add the same error pattern to different codewords and get the same syndrome:
  – there are 2^(n−k) syndromes but 2^n error patterns
  – for example, for a (3,2) code there are 2 syndromes and 8 error patterns; clearly no error correction is possible in this case
  – another example: a (7,4) code has 8 syndromes and 128 error patterns
  – with 8 syndromes we can provide a different value to indicate single errors in any of the 7 bit positions, as well as the zero value to indicate no errors
• We now need to determine which error pattern caused the syndrome


• For systematic linear block codes, H is constructed as follows:

    G = [I | P] and so H = [−P^T | I]

where I is the k×k identity for G and the R×R identity for H (and −P^T = P^T in mod-2 arithmetic)
• Example, (7,4) code with d_min = 3:

    G = [I | P] = | 1 0 0 0  0 1 1 |
                  | 0 1 0 0  1 0 1 |
                  | 0 0 1 0  1 1 0 |
                  | 0 0 0 1  1 1 1 |

    H = [−P^T | I] = | 0 1 1 1  1 0 0 |
                     | 1 0 1 1  0 1 0 |
                     | 1 1 0 1  0 0 1 |

Error Syndrome – Example:

• For a correctly received codeword c_r = [1 1 0 1 0 0 1],

    s = c_r H^T = [1 1 0 1 0 0 1] × | 0 1 1 |
                                    | 1 0 1 |
                                    | 1 1 0 |
                                    | 1 1 1 |
                                    | 1 0 0 |
                                    | 0 1 0 |
                                    | 0 0 1 |  = [0 0 0]






Standard Array:

• The standard array is constructed as follows:

    c1 (all zero)   c2       ……   cM       s0
    e1              c2+e1    ……   cM+e1    s1
    e2              c2+e2    ……   cM+e2    s2
    e3              c2+e3    ……   cM+e3    s3
    …               ……       ……   ……       …
    eN              c2+eN    ……   cM+eN    sN

• The array has 2^k columns (i.e., equal to the number of valid codewords) and 2^R rows (i.e., the number of syndromes)

Hamming Codes:

• We will consider a special class of SEC (single-error-correcting) codes (i.e., Hamming distance = 3) where:
  – the number of parity bits is R = n − k and n = 2^R − 1
  – the syndrome has R bits
  – a value of 0 implies zero errors
  – there are 2^R − 1 other syndrome values, i.e., one for each bit that might need to be corrected
  – this is achieved if each column of H is a different binary word – remember s = eH^T
• The systematic form of the (7,4) Hamming code is

    G = [I | P] = | 1 0 0 0  0 1 1 |
                  | 0 1 0 0  1 0 1 |
                  | 0 0 1 0  1 1 0 |
                  | 0 0 0 1  1 1 1 |

    H = [−P^T | I] = | 0 1 1 1  1 0 0 |
                     | 1 0 1 1  0 1 0 |
                     | 1 1 0 1  0 0 1 |







• The original form is non-systematic:

    G = | 1 1 1 0 0 0 0 |      H = | 0 0 0 1 1 1 1 |
        | 1 0 0 1 1 0 0 |          | 0 1 1 0 0 1 1 |
        | 0 1 0 1 0 1 0 |          | 1 0 1 0 1 0 1 |
        | 1 1 0 1 0 0 1 |

• Compared with the systematic code, the column orders of both G and H are swapped so that the columns of H form a binary count
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-systematic H is col. 7 in the systematic H.
Convolutional Code Introduction:

• Convolutional codes map information to code bits sequentially by convolving a sequence of information bits with "generator" sequences
• A convolutional encoder encodes K information bits to N > K code bits at one time step
• Convolutional codes can be regarded as block codes for which the encoder has a certain structure such that the encoding operation can be expressed as a convolution
• Convolutional codes are applied in applications that require good performance with low implementation cost. They operate on code streams (not on blocks)
• Convolutional codes have memory that utilizes previous bits to encode or decode following bits (block codes are memoryless)
• Convolutional codes achieve good performance by expanding their memory depth
• Convolutional codes are denoted by (n,k,L), where L is the code (or encoder) memory depth (number of register stages)
• The constraint length C = n(L + 1) is defined as the number of encoded bits a message bit can influence

• Convolutional encoder with k = 1, n = 2, L = 2:
  – the convolutional encoder is a finite state machine (FSM) processing information bits in a serial manner
  – thus the generated code is a function of the input and the state of the FSM
  – in this (n,k,L) = (2,1,2) encoder each message bit influences a span of C = n(L + 1) = 6 successive output bits (the constraint length C)
  – thus, for generation of the n-bit output, we require n shift registers in k = 1 convolutional encoders


As another example, consider the encoder defined by the mod-2 sums

    x'_j   = m_{j−3} + m_{j−2} + m_j
    x''_j  = m_{j−3} + m_{j−1} + m_j
    x'''_j = m_{j−2} + m_j

Here each message bit influences a span of C = n(L + 1) = 3(1 + 1) = 6 successive output bits.



Convolution point of view in encoding and generator matrix:

Example: using the generator sequences

    g(1) = [1 0 1 1]
    g(2) = [1 1 1 1]
Representing convolutional codes: Code tree:

For the (n,k,L) = (2,1,2) encoder:

    x'_j  = m_{j−2} + m_{j−1} + m_j
    x''_j = m_{j−2} + m_j
    x_out = x'_1 x''_1 x'_2 x''_2 x'_3 x''_3 …
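A minimal Python sketch of this (2,1,2) encoder as a finite state machine, assuming the tap connections in the equations above, an (x'_j, x''_j) output ordering, and a two-zero flush to return the register to the zero state (these are illustrative reading choices, not mandated by the notes):

```python
def conv_encode(msg, flush=True):
    """(n,k,L) = (2,1,2) encoder: x'_j = m_j ^ m_{j-1} ^ m_{j-2},
    x''_j = m_j ^ m_{j-2} (the mod-2 sums above)."""
    m1 = m2 = 0                      # shift-register contents m_{j-1}, m_{j-2}
    out = []
    bits = list(msg) + ([0, 0] if flush else [])   # drive encoder back to zero
    for m in bits:
        out += [m ^ m1 ^ m2, m ^ m2]   # output pair (x'_j, x''_j)
        m2, m1 = m1, m                 # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))       # 12 code bits: 4 message + 2 flush bits
```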



Representing convolutional codes compactly: code trellis and state diagram:

State diagram
Inspecting the state diagram: structural properties of convolutional codes:

• Each new block of k input bits causes a transition into a new state
• Hence there are 2^k branches leaving each state

• Assuming the encoder starts in the zero state, the encoded word for any input of k bits can thus be obtained. For instance, below for u = (1 1 1 0 1) the encoded word v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1) is produced:
  – encoder state diagram for the (n,k,L) = (2,1,2) code
  – note that the number of states is 2^(L+1) = 8
Distance for some convolutional codes:


THE VITERBI ALGORITHM:

• The problem of optimum decoding is to find the minimum-distance path from the initial state back to the initial state (below, from S0 to S0). The minimum distance is the sum of all path metrics

    ln p(y, x_m) = Σ_{j=0}^{∞} ln p(y_j | x_mj)

which is maximized by the correct path
• The exhaustive maximum-likelihood method must search all the paths in the phase trellis (2^k paths emerging from / entering each of the 2^(L+1) states for an (n,k,L) code)
• The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis
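A compact hard-decision sketch of the algorithm for the same (2,1,2) code, reusing conv_encode from the encoder sketch earlier; Hamming distance is used as the branch metric, a common hard-decision simplification of the log-likelihood path metric above:

```python
import itertools

def viterbi_decode(rx, L=2):
    """Hard-decision Viterbi decoding for the (2,1,2) encoder above.
    rx: flat list of received code bits; returns (message bits, distance)."""
    def branch(state, m):               # state = (m_{j-1}, m_{j-2})
        m1, m2 = state
        return (m, m1), (m ^ m1 ^ m2, m ^ m2)   # next state, output pair
    states = list(itertools.product((0, 1), repeat=L))
    INF = 10**9
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    paths = {s: [] for s in states}
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_metric = {s: INF for s in states}
        new_paths = {s: [] for s in states}
        for s in states:
            if metric[s] >= INF:
                continue
            for m in (0, 1):
                ns, out = branch(s, m)
                d = (out[0] != r[0]) + (out[1] != r[1])   # Hamming branch metric
                if metric[s] + d < new_metric[ns]:        # keep only the survivor
                    new_metric[ns] = metric[s] + d
                    new_paths[ns] = paths[s] + [m]
        metric, paths = new_metric, new_paths
    best = min(states, key=lambda s: metric[s])   # (0,0) if the tail was flushed
    return paths[best], metric[best]

coded = conv_encode([1, 0, 1, 1])       # encoder sketch from earlier
coded[3] ^= 1                           # inject a single channel error
msg, dist = viterbi_decode(coded)
print(msg[:-2], dist)                   # flush bits dropped; distance is 1
```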


THE SURVIVOR PATH:

• Assume for simplicity a convolutional code with k = 1; then up to 2^k = 2 branches can enter each state in the trellis diagram
• Assume the optimal path passes through S. Metric comparison is done by adding the metric of S to those of S1 and S2. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path)


• For this reason the non-surviving path can be discarded: all path alternatives need not be considered
• Note that in principle the whole transmitted sequence must be received before a decision can be made. However, in practice, storing states over an input length of 5L is quite adequate
The maximum likelihood path:


The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the respective decoded message sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum-distance path. (Black circles denote the deleted branches; dashed lines indicate that a '1' was applied.)
g.n
How to end up decoding?

• In the previous example it was assumed that the register was finally filled with zeros, thus finding the minimum-distance path
• In practice, with long code words, zeroing requires feeding a long sequence of zeros to the end of the message bits: this wastes channel capacity and introduces delay
• To avoid this, path memory truncation is applied:
  – trace all the surviving paths to the depth where they merge

  – the figure shows a common point at a memory depth J
  – J is a random variable whose applicable magnitude (5L, shown in the figure) has been experimentally tested for negligible error-rate increase
  – note that this also introduces a delay of 5L!

    J ≈ 5L stages of the trellis
Hamming Code Example:
• H(7,4)
• Generator matrix G: the first 4-by-4 block is the identity matrix


• Message information vector p
• Transmission vector x
• Received vector r and error vector e
• Parity check matrix H
Error Correction:
• If there is no error, syndrome vector z=zeros

• If there is one error at location 2

• New syndrome vector z is

Page 88

www.EasyEngineering.net
Download From: www.EasyEngineering.net

Example of CRC:

Example: Using generator matrix:

    g(1) = [1 0 1 1]
    g(2) = [1 1 1 1]

    11 01 00 11 01 11 10 01


    correct: 1+1+2+2+2 = 8;  8 × (−0.11) = −0.88
    false:   1+1+0+0+0 = 2;  2 × (−2.30) = −4.6
    total path metric: −5.48


Turbo Codes:

• Background
  – Turbo codes were proposed by Berrou and Glavieux at the 1993 International Conference on Communications.
  – Performance within 0.5 dB of the channel capacity limit for BPSK was demonstrated.
• Features of turbo codes
  – Parallel concatenated coding
  – Recursive convolutional encoders
  – Pseudo-random interleaving
  – Iterative decoding


Motivation: Performance of Turbo Codes

• Comparison:
  – rate 1/2 codes
  – K = 5 turbo code
  – K = 14 convolutional code
• The plot is from: L. Perez, "Turbo Codes", chapter 8 of Trellis Coding by C. Schlegel, IEEE Press, 1997
Pseudo-random Interleaving:

• The coding dilemma:
  – Shannon showed that large block-length random codes achieve channel capacity.
  – However, codes must have structure that permits decoding with reasonable complexity.
  – Codes with structure don't perform as well as random codes.
  – "Almost all codes are good, except those that we can think of."


• Solution:
– Make the code appear random, while maintaining
enough structure to permit decoding.
– This is the purpose of the pseudo-random interleaver.
– Turbo codes possess random-like properties.
– However, since the interleaving pattern is known,
decoding is possible.

Why Interleaving and Recursive Encoding?

• In a coded system:
  – performance is dominated by low-weight code words.
• A "good" code:
  – will produce low-weight outputs with very low probability.
• An RSC code:
  – produces low-weight outputs with fairly low probability.
  – however, some inputs still cause low-weight outputs.
• Because of the interleaver:
  – the probability that both encoders have inputs that cause low-weight outputs is very low.
  – therefore the parallel concatenation of both encoders will produce a "good" code.
Iterative Decoding:

• There is one decoder for each elementary encoder.
• Each decoder estimates the a posteriori probability (APP) of each data bit.
• The APPs are used as a priori information by the other decoder.
• Decoding continues for a set number of iterations.
