DC Digital Communication
MODULE IV Part II: ERROR DETECTION AND CORRECTION
Coding: Parity check bit coding for error detection -
Coding for error detection and correction - Block codes –
coding and decoding - Systematic and nonsystematic codes
- Cyclic codes:- generator polynomial, Generator and
Parity check matrices, Encoding and decoding of cyclic
codes, Syndrome computation and error detection,
Convolutional coding – Code generation - decoding - Code
tree - Sequential decoding – State and Trellis diagrams,
Viterbi algorithm, Burst error correction - Block and
convolutional interleaving - ARQ:- Types of ARQ,
Performance of ARQ. Comparison of error rates in coded &
uncoded system.
CLASSIFICATION OF ERROR CONTROL
METHODS
Error control methods may be classified as
(i) Error detection and retransmission: When an error is
detected at the receiver a retransmission request is sent
back to the transmitter.
(ii) Forward error correction: The receiver detects the errors
and corrects them using proper coding techniques.
This method is used when a single source transmits to a
number of receivers.
Error Control Techniques
Error detection and retransmission
When errors are detected by the receiver, it requests a
retransmission. This is known as automatic repeat request
(ARQ) and is used for delay-insensitive data.
Appropriate for low-delay channels.
c1 = [1 0 1 1 0 0 1]
c2 = [1 0 0 1 1 0 0]     Hamming distance d = 3

Even-parity encoding of ASCII characters (7 data bits + parity bit):
K   1 1 0 1 0 0 1 | 0    (data parity even -> parity bit 0, overall even)
O   1 1 1 1 0 0 1 | 1    (data parity odd  -> parity bit 1, overall even)
I   1 0 0 1 0 0 1 | 1    (data parity odd  -> parity bit 1, overall even)
K (ASCII with parity) = [1 1 0 1 0 0 1 0]
PARITY ENCODING
The parity bit is generated by EX-OR (modulo-2 addition) of the
message bits.
At the receiver the parity of the received code is checked. If a
single bit error has occurred it will make the parity of the
received code odd. The receiver detects this error.
The receiver can only detect errors; it cannot correct errors.
If there are two bit errors it again makes the parity of the
received code even and the receiver cannot detect this error.
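The single-bit parity check described above can be sketched in a few lines (a minimal illustration with hypothetical helper names, using even parity):

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """Return True if the codeword has even parity."""
    return sum(codeword) % 2 == 0

word = [1, 1, 0, 1, 0, 0, 1]          # 7 data bits
code = add_even_parity(word)          # parity bit 0 appended
assert parity_ok(code)

code[0] ^= 1                          # single-bit error
assert not parity_ok(code)            # detected

code[1] ^= 1                          # a second error restores even parity
assert parity_ok(code)                # two-bit error goes undetected
```

Flipping any two bits restores even parity, which is why a single parity bit detects only odd numbers of errors.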
ERROR
K -> transmitted [1 1 0 1 0 0 1 0]   received [1 0 0 1 0 0 1 0]
     Even parity                     Odd parity -> error detected
This type of parity encoding and decoding is called Vertical
Redundancy Check (VRC).
Longitudinal Redundancy Check (LRC)
In LRC a large message block is divided into several characters.
Parity checks are applied to both rows and columns.
It is possible to detect up to 3 errors and correct single errors.
For parity checking purposes, a complete character known as the
Block Check Character (BCC) is added at the end of the block of
information.
At the start of the block a Start of Text (STX) character is
sent, which indicates that an information block follows.
At the end of the block an End of Transmission Block (ETB)
character is transmitted, followed by the BCC.
Longitudinal Redundancy Check (LRC) - transmitted block
(columns: characters; rows b1-b7: character bits; row b8: row parity
bits; BCC column: column parity bits)

     STX  N  A  I  ~  I  S  O  K  ETB  BCC
b1    0   0  1  1  0  1  1  1  1   1    1
b2    1   1  0  0  0  0  1  1  1   1    0
b3    0   1  0  0  1  0  0  1  0   1    0
b4    0   1  0  1  0  1  0  1  1   0    1
b5    0   0  0  0  1  0  1  0  0   1    1
b6    0   0  0  0  0  0  0  0  0   0    0
b7    0   1  1  1  1  1  1  1  1   0    0
b8    1   0  0  1  1  1  0  1  0   0    1

Received block with a single-bit error in row b4, column "~": the
failing row parity (marked X at right) and failing column parity
(marked X below) locate the error.

     STX  N  A  I  ~  I  S  O  K  ETB  BCC
b1    0   0  1  1  0  1  1  1  1   1    1
b2    1   1  0  0  0  0  1  1  1   1    0
b3    0   1  0  0  1  0  0  1  0   1    0
b4    0   1  0  1  1  1  0  1  1   0    1   X
b5    0   0  0  0  1  0  1  0  0   1    1
b6    0   0  0  0  0  0  0  0  0   0    0
b7    0   1  1  1  1  1  1  1  1   0    0
b8    1   0  0  1  1  1  0  1  0   0    1
                  X
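The row-and-column scheme above can be sketched as follows (a minimal illustration with hypothetical helper names, assuming even parity on both rows and columns):

```python
def lrc_encode(rows):
    """Append a row-parity bit to each row and a column-parity row (BCC)."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    bcc = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [bcc]

def lrc_locate_error(block):
    """Return (row, col) of a single-bit error, or None if all parities hold."""
    bad_rows = [i for i, r in enumerate(block) if sum(r) % 2]
    bad_cols = [j for j, c in enumerate(zip(*block)) if sum(c) % 2]
    if not bad_rows and not bad_cols:
        return None
    return bad_rows[0], bad_cols[0]

block = lrc_encode([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]])
assert lrc_locate_error(block) is None
block[1][2] ^= 1                       # inject a single-bit error
print(lrc_locate_error(block))         # -> (1, 2): row and column located
```

The intersection of the failing row parity and failing column parity pinpoints a single-bit error, so it can be corrected by flipping that bit.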
LINEAR BLOCK CODES
To generate an (n,k) block code the channel encoder accepts
information in successive k-bit blocks.
For each block it adds (n-k) redundant bits that are algebraically
related to the k message bits, thereby producing an overall
encoded block of n bits, where n > k.
The n-bit block is called a code word and n is called the block
length of the code.
The ratio k/n, denoted by r, is called the rate of the code.
R0 = (n/k)Rs, where Rs is the bit rate of the information source
and R0 is the channel data rate.
A code is said to be linear if any two code words in the code can
be added in modulo-2 arithmetic to produce a third code word in
the code.
[Figure: code word structure - k message bits plus (n-k) parity
bits forming an n-bit code word]
In the case of (n,k) block codes there are (n-k) redundant bits
added to the k-bit message to produce the n-bit code word.
The (n-k) bits are referred to as generalized parity check bits,
or simply parity bits.
Block codes in which the message bits are transmitted in
unaltered form are called systematic codes.
Let m0, m1, ..., mk-1 constitute a block of k arbitrary message
bits; thus we can have 2^k distinct message blocks.
Let this sequence of message bits be applied to a linear block
encoder producing an n-bit code word whose elements are denoted
by c0, c1, ..., cn-1.
Let b0, b1, ..., bn-k-1 denote the (n-k) parity bits in the code word.
The code word can be divided into two parts: one part is occupied
by the message bits and the other by the parity bits.
The parity bits may be sent before or after the message bits.
The structure of a code word in which the parity bits are sent
before the message bits is illustrated below.

b0 b1 . . . bn-k-1 | m0 m1 . . . mk-1
(parity bits)        (message bits)
The (n-k) leftmost bits of a code word are identical to the
corresponding parity bits and the k rightmost bits are identical
to the corresponding message bits.
The (n-k) parity bits are linear sums of the k message bits, as
shown by the relation

b_i = p_i0 m0 + p_i1 m1 + p_i2 m2 + ... + p_i,k-1 mk-1,
      where i = 0, 1, ..., n-k-1

and p_ij = 1 if b_i depends on m_j, 0 otherwise.
The system of equations can be rewritten in compact matrix form.
Let the message vector [m], parity vector [b], and code vector [c]
be represented as row vectors, and define the coefficient matrix

[P] = [ p00      p10      . .  p_n-k-1,0
        p01      p11      . .  p_n-k-1,1
        .        .             .
        p_0,k-1  p_1,k-1  . .  p_n-k-1,k-1 ]

so that [b] = [m][P] ......(1)

The code vector [c] may be expressed as a partitioned row vector
in terms of [m] and [b]:

[c] = [ [b]  [m] ]
Substituting for [b] from equation (1),

[c] = [ [m][P]  [m] ]

Taking out the common factor [m],

[c] = [m][ [P]  [I_k] ] ......(2)

where [I_k] is the k-by-k identity matrix

[I_k] = [ 1 0 . . 0
          0 1 . . 0
          .       .
          0 0 . . 1 ]
It is convenient to define the k-by-n generator matrix

[G] = [ [P]  [I_k] ]

Now equation (2) may be simplified as

[c] = [m][G] ......(3)

The full set of code words is generated by letting the message
vector [m] range through the set of all 2^k combinations.
Now the sum of any two code words is another code word.
This basic property of linear block codes is called closure.
PROOF OF CLOSURE PROPERTY
Consider any pair of code vectors [ci] and [cj] corresponding to a
pair of message vectors [mi] and [mj] respectively. Since
[c] = [m][G],

ci + cj = mi[G] + mj[G] = (mi + mj)[G]

The modulo-2 sum of mi and mj represents a new message vector.
Correspondingly, the modulo-2 sum of ci and cj represents a new
code vector.
IMPORTANT RELATIONS
[G] = [ [P]  [I_k] ]
[c] = [m][G] = [m][ [P]  [I_k] ] = [ [m][P]  [m][I_k] ]

[P] is a k x (n-k) matrix
[I_k] is a k x k identity matrix
[G] is a k x n matrix
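These relations can be sketched directly in code (a minimal illustration of c = m·G with G = [P | I_k] in modulo-2 arithmetic; the P used here is the 4 x 3 coefficient matrix of the (7,4) Hamming code example that follows):

```python
P = [[1, 0, 1],
     [1, 1, 1],
     [1, 1, 0],
     [0, 1, 1]]                                       # k x (n-k)
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
G = [p_row + i_row for p_row, i_row in zip(P, I4)]    # G = [P | I_k], 4 x 7

def encode(m):
    """Modulo-2 row-vector/matrix product c = m.G."""
    return [sum(mi * gij for mi, gij in zip(m, col)) % 2
            for col in zip(*G)]

print(encode([1, 0, 1, 1]))     # -> [0, 0, 0, 1, 0, 1, 1]
```

The last k positions reproduce the message unchanged, which is exactly the systematic structure [b | m].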
HAMMING CODES
An (n,k) linear block code is called a Hamming code if it
satisfies the following conditions.
Number of parity bits: (n-k) >= 3
Block length: n = 2^(n-k) - 1
Minimum distance: dmin = 3
Since the minimum distance of a Hamming code is 3, it can be used
to detect double errors and correct single errors.
Example 1
Consider a Hamming code with n = 7, k = 4, and (7-4) = 3 redundant
bits.

Let [P] = [ 1 0 1
            1 1 1
            1 1 0
            0 1 1 ]

[G] is a k x n, i.e., 4 x 7 matrix; [I_k] is a k x k, i.e., 4 x 4
matrix; [P] is a k x (n-k), i.e., 4 x 3 matrix.
[G] = [ [P]  [I_k] ] = [ 1 0 1 1 0 0 0
                         1 1 1 0 1 0 0
                         1 1 0 0 0 1 0
                         0 1 1 0 0 0 1 ]
Example 1 (contd.)
[c] = [m][G]

[c0, c1, c2, c3, c4, c5, c6] = [m0, m1, m2, m3] [ 1 0 1 1 0 0 0
                                                  1 1 1 0 1 0 0
                                                  1 1 0 0 0 1 0
                                                  0 1 1 0 0 0 1 ]

c0 = m0 + m1 + m2        c3 = m0
c1 = m1 + m2 + m3        c4 = m1
c2 = m0 + m1 + m3        c5 = m2
                         c6 = m3
Alternate Method
[c] = [ [m][P]  [m][I_k] ]

[c0, c1, c2, c3, c4, c5, c6] =

[m0, m1, m2, m3] [ 1 0 1      [m0, m1, m2, m3] [ 1 0 0 0
                   1 1 1                         0 1 0 0
                   1 1 0                         0 0 1 0
                   0 1 1 ]                       0 0 0 1 ]

c0 = m0 + m1 + m2        c3 = m0    c5 = m2
c1 = m1 + m2 + m3        c4 = m1    c6 = m3
c2 = m0 + m1 + m3
[m] Code word [C] Weight
m0….m3 c0…….….c6
0000 000 0000 0
0001 011 0001 3
0010 110 0010 3
0011 101 0011 4
0100 111 0100 4
0101 100 0101 3
0110 001 0110 3
0111 010 0111 4
1000 101 1000 3
1001 110 1001 4
1010 011 1010 4
1011 000 1011 3
1100 010 1100 3
1101 001 1101 4
1110 100 1110 4
1111 111 1111 7
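The code table above can be regenerated programmatically, and with it the minimum distance, which for a linear code equals the minimum nonzero codeword weight (a sketch using the Example 1 matrices):

```python
from itertools import product

P = [[1, 0, 1], [1, 1, 1], [1, 1, 0], [0, 1, 1]]
G = [row + [1 if i == j else 0 for j in range(4)]   # G = [P | I_k]
     for i, row in enumerate(P)]

def encode(m):
    """Modulo-2 product c = m.G."""
    return [sum(mi * gij for mi, gij in zip(m, col)) % 2 for col in zip(*G)]

# Weights of all 15 nonzero codewords
weights = [sum(encode(m)) for m in product([0, 1], repeat=4) if any(m)]
print(min(weights))    # -> 3: detects double errors, corrects single errors
```

The minimum weight of 3 confirms dmin = 3, matching the Weight column of the table.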
Example 2
Consider a (6,3) code with n = 6, k = 3, n-k = 3.
Here the parity coefficient matrix [P] is a k by (n-k), i.e.,
3 x 3 matrix, given by
[P] = [ 0 1 1        [I_k] = [ 1 0 0
        1 1 0                  0 1 0
        1 0 1 ]                0 0 1 ]

[c] = [ [m][P]  [m][I_k] ]
Let m = [m0 m1 m2]
Example 2 (contd.)

[c] = [m0, m1, m2] [ 0 1 1      [m0, m1, m2] [ 1 0 0
                     1 1 0                     0 1 0
                     1 0 1 ]                   0 0 1 ]

[c] = [m1 + m2,  m0 + m1,  m0 + m2,  m0,  m1,  m2]
Example 3
n = 6, k = 3, (n-k) = 3

[P] = [ 1 0 1
        0 1 1
        1 1 0 ]

[G] = [ [P]  [I_k] ] = [ 1 0 1 1 0 0
                         0 1 1 0 1 0
                         1 1 0 0 0 1 ]

Let m = [m0 m1 m2]
Example 3 (contd.)
[c] = [m][G]

[c0, c1, c2, c3, c4, c5] = [m0, m1, m2] [ 1 0 1 1 0 0
                                          0 1 1 0 1 0
                                          1 1 0 0 0 1 ]

[c] = [m0 + m2,  m1 + m2,  m0 + m1,  m0,  m1,  m2]
PARITY CHECK MATRIX
Define the (n-k)-by-n matrix [H] = [ [I_n-k]  [P]^T ]. Then

[H][G]^T = [ [I_n-k]  [P]^T ] [ [P]^T
                                ...
                                [I_k] ]  = [P]^T + [P]^T = [0]

since multiplication of a rectangular matrix by an identity
matrix of compatible dimensions leaves the matrix unchanged, and
in modulo-2 arithmetic [P]^T + [P]^T = [0], where [0] denotes an
(n-k)-by-k null matrix.
EXAMPLE
Consider a (5,3) code: n = 5, k = 3, (n-k) = 2. In this case the
coefficient matrix [P] is k x (n-k) and is given by, say,

[P] = [ P00 P10        [P]^T = [ P00 P01 P02
        P01 P11                  P10 P11 P12 ]
        P02 P12 ]

The generator matrix [G] is a k x n (3 x 5) matrix given by

[G] = [ [P]  [I_3] ] = [ P00 P10 1 0 0
                         P01 P11 0 1 0
                         P02 P12 0 0 1 ]
EXAMPLE (contd.)

[G]^T = [ P00 P01 P02        = [ [P]^T
          P10 P11 P12            ......
          1   0   0              [I_k] ]
          0   1   0
          0   0   1 ]

[H] = [ [I_n-k]  [P]^T ] = [ [I_2]  [P]^T ] = [ 1 0 P00 P01 P02
                                                0 1 P10 P11 P12 ]
EXAMPLE (contd.)

[H][G]^T = [ 1 0 P00 P01 P02     [ P00 P01 P02
             0 1 P10 P11 P12 ]     P10 P11 P12
                                   1   0   0
                                   0   1   0
                                   0   0   1 ]

          = [ P00+P00  P01+P01  P02+P02
              P10+P10  P11+P11  P12+P12 ]  = [P]^T + [P]^T

Since the Pij are either 0s or 1s, and 1 + 1 = 0 and 0 + 0 = 0 in
modulo-2 arithmetic,

[P]^T + [P]^T = [0],  hence  [H][G]^T = [0]
PARITY CHECK MATRIX
[H][G]^T = [0], equivalently [G][H]^T = [0]
[c] = [m][G] ......(3)
[H] is called the parity check matrix of the code, and
[c][H]^T = [0] is called the parity check equation.
PARITY CHECK MATRIX
[Figure: the generator matrix maps the message vector [m] to the
code vector [c]; the parity check matrix maps any valid code
vector [c] to the null vector [0]]
SYNDROME
The generator matrix [G] is used in the encoding operation at the
transmitter, and the parity check matrix [H] is used in the
decoding operation at the receiver.
Let [r] denote the 1-by-n received vector that results from
sending the code vector [c] over a noisy channel.
We can express the vector [r] as the sum of the original code
vector [c] and a vector [e]:
[r] = [c] + [e]
The vector [e] is called the error vector or error pattern. The
ith element of [e] equals 0 if the corresponding element of [r] is
the same as that of [c], and equals 1 if the corresponding element
of [r] is different from that of [c]; in this case an error has
occurred in the ith position.
SYNDROME
Let [r] = r1, r2, ..., rn be a received vector (one of 2^n
n-tuples) resulting from the transmission of [c] = c1, c2, ..., cn
(one of the 2^k code words).
Then [r] = [c] + [e], where [e] = e1, e2, ..., en is the error
vector or error pattern introduced by the channel.
In the space of 2^n n-tuples there are a total of (2^n - 1)
potential non-zero error patterns.
The syndrome of [r] is defined as: [S] = [r][H]^T
The syndrome is the result of a parity check performed on [r] to
determine whether [r] is a valid member of the code word set.
If [r] contains detectable errors, the syndrome has some non-zero
value.
The syndrome of [r] is [S] = ([c] + [e])[H]^T = [c][H]^T + [e][H]^T
SYNDROME
Since [c][H]^T = [0] for all code words, [S] = [e][H]^T.
The syndrome is a 1-by-(n-k) row vector.
An important property of linear block codes, fundamental to the
decoding process, is that the mapping between correctable error
patterns and syndromes is one-to-one.
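The syndrome check S = r·H^T can be sketched as follows (a minimal illustration with H = [I_{n-k} | P^T], using the (6,3) coefficient matrix of the example that follows):

```python
P = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]                 # k x (n-k)
PT = [list(col) for col in zip(*P)]                   # transpose of P
H = [[1 if i == j else 0 for j in range(3)] + PT[i]   # H = [I_3 | P^T]
     for i in range(3)]

def syndrome(r):
    """S = r.H^T in modulo-2 arithmetic (one dot product per row of H)."""
    return [sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H]

assert syndrome([1, 0, 1, 1, 1, 0]) == [0, 0, 0]      # valid codeword
print(syndrome([0, 0, 1, 1, 1, 0]))                   # -> [1, 0, 0]
```

A valid codeword always yields the all-zero syndrome; a nonzero syndrome signals a detectable error, and it depends only on the error pattern, not on which codeword was sent.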
EXAMPLE
Suppose that the code vector [c] = [1 0 1 1 1 0] is transmitted
and the vector [r] = [0 0 1 1 1 0] is received; note that one bit
is in error. Find the syndrome vector [S], and verify that it is
equal to [e][H]^T. Assume that the (6,3) code has the coefficient
matrix

[P] = [ 1 1 0
        0 1 1
        1 0 1 ]

[G] = [ [P]  [I_k] ]
EXAMPLE (contd.)

[H] = [ [I_n-k]  [P]^T ] = [ 1 0 0 1 0 1        [H]^T = [ 1 0 0
                             0 1 0 1 1 0                  0 1 0
                             0 0 1 0 1 1 ]                0 0 1
                                                          1 1 0
                                                          0 1 1
                                                          1 0 1 ]

[S] = [r][H]^T = [0 0 1 1 1 0][H]^T = [1, 1+1, 1+1] = [1 0 0]

(the syndrome of the corrupted code vector is [1 0 0])
EXAMPLE (contd.)
Now we can verify that the syndrome of the corrupted code vector
is the same as the syndrome of the error pattern.
Syndrome of the error pattern:

[S] = [e][H]^T = [1 0 0 0 0 0][H]^T = [1 0 0]
STANDARD ARRAY

c1 = e1 = 0   c2            c3            .  ci            .  c_2^k
e2            c2 + e2       c3 + e2       .  ci + e2       .  c_2^k + e2
.             .             .                .                .
ej            c2 + ej       c3 + ej       .  ci + ej       .  c_2^k + ej
.             .             .                .                .
e_2^(n-k)     c2 + e_2^(n-k) c3 + e_2^(n-k) . ci + e_2^(n-k) . c_2^k + e_2^(n-k)
ERROR CORRECTION
Each row, called a coset, consists of an error pattern in the
first column, known as the coset leader, followed by the code
vectors perturbed by that error pattern.
The array contains all 2^n n-tuples as a whole.
Each coset consists of 2^k n-tuples, so there are
2^n / 2^k = 2^(n-k) cosets.
If the error pattern caused by the channel is a coset leader,
the received vector will be decoded correctly into the
transmitted code vector [ci].
If the error pattern is not a coset leader, the decoding will
produce an error.
STANDARD ARRAY
For a given channel the probability of decoding error is
minimized when the most likely error patterns (those with the
largest probability of occurrence) are chosen as the coset
leaders.
So the standard array should be constructed with each coset
leader having the minimum Hamming weight in its coset.
ERROR CORRECTION
If [ej] is the coset leader of the jth coset, then [ci] + [ej] is
an n-tuple in this coset.
The syndrome of this coset is:

[S] = ([ci] + [ej])[H]^T = [ci][H]^T + [ej][H]^T = [ej][H]^T
EXAMPLE
[c] = [m][G]

[c1, c2, c3, c4, c5, c6] = [m1, m2, m3] [ 1 1 0 1 0 0
                                          0 1 1 0 1 0
                                          1 0 1 0 0 1 ]

[m]    Code word    Weight
000    000000       0
001    101001       3
010    011010       3
011    110011       4
100    110100       3
101    011101       4
110    101110       4
111    000111       3
STANDARD ARRAY
Syndromes of the coset leaders, [S] = [e][H]^T:

[ 0 0 0 0 0 0       [ 1 0 0       [ 0 0 0
  0 0 0 0 0 1         0 1 0         1 0 1
  0 0 0 0 1 0         0 0 1         0 1 1
  0 0 0 1 0 0    x    1 1 0    =    1 1 0
  0 0 1 0 0 0         0 1 1         0 0 1
  0 1 0 0 0 0         1 0 1 ]       0 1 0
  1 0 0 0 0 0                       1 0 0
  0 1 0 0 0 1 ]                     1 1 1 ]
PROCEDURE FOR ERROR CORRECTION
We receive the vector [r] and calculate its syndrome [S].
We then use the syndrome look-up table to find the corresponding
error pattern.
This error pattern is an estimate of the error; we denote it [ê].
The decoder then adds [ê] to [r] to obtain an estimate of the
transmitted code vector [ĉ]:

[ĉ] = [r] + [ê] = ([c] + [e]) + [ê] = [c] + ([e] + [ê])
Error pattern    Syndrome
000000           000
000001           101
000010           011
000100           110
001000           001
010000           010
100000           100
010001           111
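The whole correction procedure, building the look-up table from the coset leaders and then adding the estimated error pattern, can be sketched as follows (assuming the same (6,3) code, with the all-zero pattern and the six single-bit patterns as coset leaders):

```python
P = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
PT = [list(col) for col in zip(*P)]
H = [[1 if i == j else 0 for j in range(3)] + PT[i] for i in range(3)]

def syndrome(r):
    """S = r.H^T in modulo-2 arithmetic, as a hashable tuple."""
    return tuple(sum(ri * hi for ri, hi in zip(r, row)) % 2 for row in H)

# Coset leaders: all-zero pattern plus the six single-bit error patterns
leaders = [[0] * 6] + [[1 if i == j else 0 for j in range(6)]
                       for i in range(6)]
table = {syndrome(e): e for e in leaders}      # syndrome -> error pattern

def decode(r):
    e_hat = table.get(syndrome(r), [0] * 6)    # estimated error pattern
    return [(ri + ei) % 2 for ri, ei in zip(r, e_hat)]   # c_hat = r + e_hat

print(decode([0, 0, 1, 1, 1, 0]))              # -> [1, 0, 1, 1, 1, 0]
```

Because the mapping between correctable error patterns and syndromes is one-to-one, each single-bit error lands on a distinct table entry and is corrected exactly.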
EXAMPLE
Assume code vector [c] = [1 0 1 1 1 0] is transmitted and the
vector [r] = [0 0 1 1 1 0] is received.
The syndrome of [r] is computed as:
[S] = [0 0 1 1 1 0][H]^T = [1 0 0]
From the look-up table, syndrome 100 has the corresponding error
pattern:
[ê] = [1 0 0 0 0 0]
The corrected vector is then
[ĉ] = [r] + [ê] = [0 0 1 1 1 0] + [1 0 0 0 0 0] = [1 0 1 1 1 0]
EXAMPLE - (7,4) Hamming code
[c] = [m0 + m2 + m3,  m0 + m1 + m2,  m1 + m2 + m3,  m0,  m1,  m2,  m3]

Message    Code word    Weight
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 0 1 0 0 0 1 3
0 0 1 0 1 1 1 0 0 1 0 4
0 0 1 1 0 1 0 0 0 1 1 3
0 1 0 0 0 1 1 0 1 0 0 3
0 1 0 1 1 1 0 0 1 0 1 4
0 1 1 0 1 0 0 0 1 1 0 3
0 1 1 1 0 0 1 0 1 1 1 4
1 0 0 0 1 1 0 1 0 0 0 3
1 0 0 1 0 1 1 1 0 0 1 4
1 0 1 0 0 0 1 1 0 1 0 3
1 0 1 1 1 0 0 1 0 1 1 4
1 1 0 0 1 0 1 1 1 0 0 4
1 1 0 1 0 0 0 1 1 0 1 3
1 1 1 0 0 1 0 1 1 1 0 4
1 1 1 1 1 1 1 1 1 1 1 7
EXAMPLE - Decoding of (7,4) Hamming Code
[S] = [e][H]^T

[H] = [ [I_n-k]  [P]^T ] = [ 1 0 0 1 0 1 1
                             0 1 0 1 1 1 0
                             0 0 1 0 1 1 1 ]

[e] = [ 0 0 0 0 0 0 0
        1 0 0 0 0 0 0
        0 1 0 0 0 0 0
        0 0 1 0 0 0 0
        0 0 0 1 0 0 0
        0 0 0 0 1 0 0
        0 0 0 0 0 1 0
        0 0 0 0 0 0 1 ]
EXAMPLE - Decoding of (7,4) Hamming Code (contd.)
[e][H]^T = [S]:

[ 0 0 0 0 0 0 0       [ 1 0 0       [ 0 0 0
  1 0 0 0 0 0 0         0 1 0         1 0 0
  0 1 0 0 0 0 0         0 0 1         0 1 0
  0 0 1 0 0 0 0    x    1 1 0    =    0 0 1
  0 0 0 1 0 0 0         0 1 1         1 1 0
  0 0 0 0 1 0 0         1 1 1         0 1 1
  0 0 0 0 0 1 0         1 0 1 ]       1 1 1
  0 0 0 0 0 0 1 ]                     1 0 1 ]
EXAMPLE - Decoding of (7,4) Hamming Code (contd.)

Error pattern      Syndrome
0 0 0 0 0 0 0      0 0 0
1 0 0 0 0 0 0      1 0 0
0 1 0 0 0 0 0      0 1 0
0 0 1 0 0 0 0      0 0 1
0 0 0 1 0 0 0      1 1 0
0 0 0 0 1 0 0      0 1 1
0 0 0 0 0 1 0      1 1 1
0 0 0 0 0 0 1      1 0 1
EXAMPLE - Decoding of (7,4) Hamming Code (contd.)
Let the transmitted code vector be [1 1 1 0 0 1 0] and the
received vector [1 1 0 0 0 1 0], with an error in the third bit.
The syndrome of the received vector is

[1 1 0 0 0 1 0] [ 1 0 0
                  0 1 0
                  0 0 1
                  1 1 0
                  0 1 1
                  1 1 1
                  1 0 1 ]  = [0 0 1]
EXAMPLE - Decoding of (7,4) Hamming Code (contd.)
Corresponding to this syndrome, the error pattern with the highest
probability of occurrence is [0 0 1 0 0 0 0]. This error pattern
is added to the received code word to regenerate the original code
word:
[0 0 1 0 0 0 0] + [1 1 0 0 0 1 0] = [1 1 1 0 0 1 0]

EXAMPLE 2 - Decoding of (6,3) Hamming code
For the (6,3) code generator matrix given below, the received word
is 100011. Find the transmitted information word.
n = 6, k = 3, (n-k) = 3
[P] = [ 1 0 1       [G] = [ [P]  [I_k] ] = [ 1 0 1 1 0 0
        0 1 1                                0 1 1 0 1 0
        1 1 0 ]                              1 1 0 0 0 1 ]
EXAMPLE 2 - Decoding of (6,3) Hamming code
[c] = [m][G]

[c0, c1, c2, c3, c4, c5] = [m0, m1, m2] [ 1 0 1 1 0 0
                                          0 1 1 0 1 0
                                          1 1 0 0 0 1 ]

[c] = [m0 + m2,  m1 + m2,  m0 + m1,  m0,  m1,  m2]
EXAMPLE 2 - Decoding of (6,3) Hamming code

[m]    Code word    Weight
000    000000       0
001    110001       3
010    011010       3
011    101011       4
100    101100       3
101    011101       4
110    110110       4
111    000111       3
EXAMPLE 2 - Decoding of (6,3) Hamming code
[S] = [e][H]^T

[H] = [ [I_n-k]  [P]^T ] = [ 1 0 0 1 0 1
                             0 1 0 0 1 1
                             0 0 1 1 1 0 ]
EXAMPLE 2 - Decoding of (6,3) Hamming code
[e][H]^T = [S]:

[ 0 0 0 0 0 0       [ 1 0 0       [ 0 0 0
  1 0 0 0 0 0         0 1 0         1 0 0
  0 1 0 0 0 0         0 0 1         0 1 0
  0 0 1 0 0 0    x    1 0 1    =    0 0 1
  0 0 0 1 0 0         0 1 1         1 0 1
  0 0 0 0 1 0         1 1 0 ]       0 1 1
  0 0 0 0 0 1                       1 1 0
  0 0 1 0 0 1 ]                     1 1 1 ]
EXAMPLE 2 - Decoding of (6,3) Hamming code
The syndrome of the received word is

[1 0 0 0 1 1] x [ 1 0 0
                  0 1 0
                  0 0 1
                  1 0 1
                  0 1 1
                  1 1 0 ]  = [0 0 1]
EXAMPLE 2 - Decoding of (6,3) Hamming code
The error pattern corresponding to this syndrome is [0 0 1 0 0 0].
So the original transmitted code word is
[0 0 1 0 0 0] + [1 0 0 0 1 1] = [1 0 1 0 1 1]
Since the code is systematic, the transmitted information word is
the last k = 3 bits: [0 1 1].
PROPERTIES OF SYNDROME
1. The syndrome depends only on the error pattern and not on the
transmitted code word.

[S] = [r][H]^T = ([c] + [e])[H]^T = [c][H]^T + [e][H]^T

Here [c][H]^T = [0], hence [S] = [e][H]^T.
Thus, using the parity check matrix [H] of a code, we can compute
the syndrome [S], which depends only on the error pattern [e].
PROPERTIES OF SYNDROME
2. All error patterns that differ by a code word have the same
syndrome.
For k message bits there are 2^k distinct code vectors, denoted
[ci], where i = 0, 1, ..., 2^k - 1.
Correspondingly, for any error pattern [e] there are 2^k distinct
vectors

[ei] = [e] + [ci],  where i = 0, 1, ..., 2^k - 1

The syndrome of each is

[ei][H]^T = [e][H]^T + [ci][H]^T = [e][H]^T

which is independent of the index i. Each coset of the code is
characterised by a unique syndrome.
CYCLIC CODING

CYCLIC CODES
Cyclic codes are a special type of linear block codes that exhibit
two fundamental properties:
(i) Linearity: The sum of any two code words is also a code word.
(ii) Cyclic property: Any cyclic shift of a code word produces
another code word.
Let {c0, c1, c2, ..., cn-1} be a code word of an (n,k) linear
block code. The code is a cyclic code if
{cn-1, c0, c1, c2, ..., cn-2}
{cn-2, cn-1, c0, c1, ..., cn-3}
{...}
{c1, c2, c3, ..., cn-1, c0}
are all code words of the code.
CYCLIC CODES
The code vector {c0, c1, c2, ..., cn-1} may be expressed in the
form of a polynomial called the code polynomial

c(x) = c0 + c1 x + c2 x^2 + ... + cn-1 x^(n-1)

where x is an indeterminate and the power of x represents the
position of the code word bits, i.e., (n-1) represents the MSB and
0 represents the LSB.
Let the code polynomial c(x) be multiplied by x^i:

x^i c(x) = c0 x^i + c1 x^(i+1) + ... + c_(n-i-1) x^(n-1)
           + c_(n-i) x^n + ... + c_(n-1) x^(n+i-1)
CYCLIC CODES
Since x^n = (x^n + 1) + 1 in modulo-2 arithmetic, each term
c_(n-i+j) x^(n+j) can be written as
c_(n-i+j) x^j (x^n + 1) + c_(n-i+j) x^j. Hence

x^i c(x) = q(x)(x^n + 1) + c^(i)(x)

where

q(x) = c_(n-i) + c_(n-i+1) x + ... + c_(n-1) x^(i-1)

c^(i)(x) = c_(n-i) + c_(n-i+1) x + ... + c_(n-1) x^(i-1)
           + c0 x^i + c1 x^(i+1) + ... + c_(n-i-1) x^(n-1)

and c^(i)(x) is the code polynomial of the code word cyclically
shifted i places.
CYCLIC CODES
Then x^i c(x) = q(x)(x^n + 1) + c^(i)(x) .................(1)

x^7 + 1 = (1 + x)(1 + x + x^3)(1 + x^2 + x^3)

For n = 7 and k = 4, the generator polynomial will be of degree
n-k = 3.
There are two factors of (x^n + 1) that are of degree 3; we can
choose either one of them. Let it be (1 + x + x^3).
For an information vector [1 0 0 0] the message polynomial would
be m(x) = 1.x^0 + 0.x^1 + 0.x^2 + 0.x^3 = 1
CYCLIC CODES
So the code polynomial would be c(x) = m(x) g(x):

c(x) = (1)(1 + x + x^3) = 1 + x + x^3
c = [1 1 0 1 0 0 0]

For an information vector [0 1 0 0] the message polynomial would
be m(x) = x, and c(x) = m(x) g(x):

c(x) = x(1 + x + x^3) = x + x^2 + x^4
c = [0 1 1 0 1 0 0]
CYCLIC CODES
For an information vector [0 0 1 0], m(x) = x^2:

c(x) = x^2 (1 + x + x^3) = x^2 + x^3 + x^5
c = [0 0 1 1 0 1 0]

For an information vector [0 0 0 1], m(x) = x^3:

c(x) = x^3 (1 + x + x^3) = x^3 + x^4 + x^6
c = [0 0 0 1 1 0 1]
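Nonsystematic encoding c(x) = m(x)·g(x) is just polynomial multiplication with modulo-2 coefficients, which can be sketched as follows (polynomials stored as coefficient lists, LSB i.e. the x^0 term first):

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as LSB-first coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj        # modulo-2 accumulation
    return out

g = [1, 1, 0, 1]                         # g(x) = 1 + x + x^3
m = [0, 1, 0, 0]                         # message 0100 -> m(x) = x
print(poly_mul_gf2(m, g))                # -> [0, 1, 1, 0, 1, 0, 0]
```

The result lists the coefficients of x + x^2 + x^4, i.e. the code word [0 1 1 0 1 0 0] obtained above.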
CYCLIC CODES - GENERATOR MATRIX
c(1000) = [1 1 0 1 0 0 0]
c(0100) = [0 1 1 0 1 0 0]
c(0010) = [0 0 1 1 0 1 0]
c(0001) = [0 0 0 1 1 0 1]

[G] = [ 1 1 0 1 0 0 0
        0 1 1 0 1 0 0
        0 0 1 1 0 1 0
        0 0 0 1 1 0 1 ]

(columns correspond to bit positions p0 ... p6)
SHORT CUT METHOD FOR GENERATOR MATRIX
g(x) = 1 + x + x^3
x g(x) = x + x^2 + x^4
x^2 g(x) = x^2 + x^3 + x^5
x^3 g(x) = x^3 + x^4 + x^6
Now construct the generator matrix using the coefficients of these
polynomials as the elements of the rows.
We get the same generator matrix as obtained above.
CYCLIC CODES
Message Code Word
0000 0000000
0001 1101000
0010 0110100
0011 0010111
0100 0011010
0101 0111001
0110 0101110
0111 0100011
1000 0001101
1001 1100101
1010 1110010
1011 1111111
1100 1011100
1101 1010001
1110 1000110
1111 1001011
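The cyclic property can be verified directly on the table above: every cyclic shift of a codeword stays inside the code (a quick sketch using the 16 codewords just tabulated):

```python
def shift_right(c):
    """One cyclic shift: (c0, ..., cn-1) -> (cn-1, c0, ..., cn-2)."""
    return c[-1:] + c[:-1]

codewords = {"0000000", "1101000", "0110100", "0010111",
             "0011010", "0111001", "0101110", "0100011",
             "0001101", "1100101", "1110010", "1111111",
             "1011100", "1010001", "1000110", "1001011"}

c = "1101000"
for _ in range(7):
    c = shift_right(c)
    assert c in codewords        # every shift is again a codeword
print(c)                         # -> "1101000": back where we started
```

After n = 7 shifts the codeword returns to itself, and every intermediate shift is another valid codeword, which is exactly property (ii) of cyclic codes.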
SYSTEMATIC CYCLIC CODES
If an (n,k) cyclic code is to be systematic, the code word must
have the structure [parity bits | message bits].
Let the message polynomial be
m(x) = m0 + m1 x + ... + mk-1 x^(k-1)
and the parity polynomial be
b(x) = b0 + b1 x + ... + bn-k-1 x^(n-k-1)
We want the code polynomial to be of the form
c(x) = b(x) + x^(n-k) m(x), i.e., m(x) shifted to the (n-k)th
position and added to b(x).
But every code polynomial is a multiple of g(x), say
c(x) = a(x) g(x), so

a(x) g(x) = b(x) + x^(n-k) m(x)
SYSTEMATIC CYCLIC CODES
a(x) g(x) = b(x) + x^(n-k) m(x)
x^(n-k) m(x) = a(x) g(x) - b(x)
x^(n-k) m(x) = a(x) g(x) + b(x) in modulo-2 arithmetic

x^(n-k) m(x) / g(x) = a(x) + b(x)/g(x)

i.e., b(x) is the remainder left over after dividing x^(n-k) m(x)
by g(x). Hence the encoding steps are:
(i) Multiply the message polynomial m(x) by x^(n-k).
(ii) Divide x^(n-k) m(x) by the generator polynomial and obtain
the remainder b(x).
(iii) Add b(x) to x^(n-k) m(x) to obtain the code polynomial c(x).
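The three steps can be sketched in code (a minimal illustration of GF(2) polynomial division with LSB-first coefficient lists, assuming g(x) = 1 + x + x^3 as in the examples that follow):

```python
def poly_mod_gf2(num, den):
    """Remainder of num(x) / den(x) over GF(2), coefficients LSB-first."""
    num = num[:]                          # work on a copy
    d = len(den) - 1                      # degree of g(x)
    for i in range(len(num) - 1, d - 1, -1):
        if num[i]:                        # cancel the current leading term
            for j, cj in enumerate(den):
                num[i - d + j] ^= cj
    return num[:d]                        # remainder has degree < d

g = [1, 1, 0, 1]                          # g(x) = 1 + x + x^3, n-k = 3
m = [1, 0, 0, 0]                          # message 1000 -> m(x) = 1
shifted = [0, 0, 0] + m                   # step (i): x^(n-k) m(x) = x^3
b = poly_mod_gf2(shifted, g)              # step (ii): remainder b(x) = 1 + x
print(b + m)                              # step (iii): codeword [b | m]
```

For m = [1 0 0 0] this prints [1, 1, 0, 1, 0, 0, 0], i.e. the code word 1101000 derived in Example 1 below.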
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
For m = [1 0 0 0], x^(n-k) m(x) = x^3. Dividing by g(x) = 1 + x + x^3:

x^3 / (x^3 + x + 1): quotient 1, remainder b(x) = x + 1

c(x) = b(x) + x^(n-k) m(x) = 1 + x + x^3
c = [1 1 0 1 0 0 0]
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
For m = [0 1 0 0], x^(n-k) m(x) = x^4. Dividing by g(x):

x^4 / (x^3 + x + 1): quotient x, remainder b(x) = x^2 + x

c(x) = b(x) + x^(n-k) m(x) = x + x^2 + x^4
c = [0 1 1 0 1 0 0]
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
For m = [0 0 1 0], x^(n-k) m(x) = x^5. Dividing by g(x):

x^5 / (x^3 + x + 1): quotient x^2 + 1,
remainder b(x) = x^2 + x + 1

c(x) = b(x) + x^(n-k) m(x) = 1 + x + x^2 + x^5
c = [1 1 1 0 0 1 0]
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
For m = [0 0 0 1], x^(n-k) m(x) = x^6. Dividing by g(x):

x^6 / (x^3 + x + 1): quotient x^3 + x + 1,
remainder b(x) = x^2 + 1

c(x) = b(x) + x^(n-k) m(x) = 1 + x^2 + x^6
c = [1 0 1 0 0 0 1]

Similarly we can find all the code words and tabulate them.
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
c(1000) = [1 1 0 1 0 0 0]
c(0100) = [0 1 1 0 1 0 0]
c(0010) = [1 1 1 0 0 1 0]
c(0001) = [1 0 1 0 0 0 1]

[G] = [ 1 1 0 1 0 0 0
        0 1 1 0 1 0 0
        1 1 1 0 0 1 0
        1 0 1 0 0 0 1 ]

The systematic generator matrix can also be obtained from the
nonsystematic one by modulo-2 row operations:

[ 1 1 0 1 0 0 0        [ 1 1 0 1 0 0 0
  0 1 1 0 1 0 0    =>    0 1 1 0 1 0 0
  0 0 1 1 0 1 0          1 1 1 0 0 1 0
  0 0 0 1 1 0 1 ]        1 0 1 0 0 0 1 ]
PARITY CHECK POLYNOMIAL OF CYCLIC CODES
Dividing 1 + x^6 by 1 + x^2 + x^3 (modulo-2 long division):

                  x^3 + x^2 + x
x^3 + x^2 + 1  )  x^6 + 1
                  x^6 + x^5 + x^3
                  ---------------
                  x^5 + x^3 + 1
                  x^5 + x^4 + x^2
                  ---------------
                  x^4 + x^3 + x^2 + 1
                  x^4 + x^3 + x
                  ---------------
remainder s(x) =  x^2 + x + 1
[G] = [ [P]  [I_k] ]

[G] = [ 1 1 0 1 0 0 0        [P] = [ 1 1 0       [P]^T = [ 1 0 1 1
        0 1 1 0 1 0 0                0 1 1                 1 1 1 0
        1 1 1 0 0 1 0                1 1 1                 0 1 1 1 ]
        1 0 1 0 0 0 1 ]              1 0 1 ]

PARITY CHECK MATRIX OF CYCLIC CODES

[H] = [ [I_n-k]  [P]^T ] = [ 1 0 0 1 0 1 1
                             0 1 0 1 1 1 0
                             0 0 1 0 1 1 1 ]
CONVOLUTIONAL CODING
In the case of block coding the encoder accepts a block of k
message bits and generates an n-bit code word.
The code words are produced on a block-by-block basis, so an
entire message block needs to be stored before generating the
associated code word.
In applications where the message bits come serially rather than
as blocks, convolutional coding is the preferred method.
A convolutional code is generated by combining the outputs of an
m-stage shift register using n EX-OR logic summers.
Such a convolutional coder is shown in figure (1).
The outputs v1 and v2 of the adders are:
v1 = S1 + S3
v2 = S1 + S2 + S3
CONVOLUTIONAL CODING
[Figure 1: convolutional encoder - input bi shifted through
register stages (taps S1, S2, S3) feeding adders v1 and v2; for
the input sequence 1011 the output stream bo is 11 01 00 10 10 11]
CONVOLUTIONAL CODING
Convolutional codes are commonly specified by three parameters
(n, k, m):
n = number of output bits
m = number of input bits
k = number of memory registers
The quantity m/n, called the code rate, is a measure of the
efficiency of the code. Commonly the m and n parameters range
from 1 to 8, k from 2 to 10, and the code rate from 1/8 to 7/8.
Often the manufacturers of convolutional code chips specify the
code by the parameters (n, k, L). The quantity L is called the
constraint length of the code and is defined by
Constraint Length, L = m(k-1)
The constraint length L represents the number of bits in the
encoder memory that affect the generation of the n output bits.
CONVOLUTIONAL CODING
The convolutional code structure is easy to draw from its
parameters. First draw k boxes representing the k memory
registers. Then draw n modulo-2 adders to represent the n output
bits. Now connect the memory registers to the adders using the
generator polynomials.
There are many choices of polynomials for any m-order code. They
do not all result in output sequences that have good error
protection properties.
Good polynomials are usually found from this list by computer
simulation. A list of good polynomials for rate-1/2 codes is
given in the table.
A 1 indicates the existence of a connection from the register to
the summer, and a 0 the absence of it.
Building blocks for a rate-1/n encoder
[Figure: memory registers M1 ... Mk feeding adders A1 ... An with
outputs v1 ... vn]
Building blocks for a rate-1/2 encoder
[Figure: memory registers M1 ... Mk feeding adders A1, A2 with
outputs v1, v2]
GENERATOR POLYNOMIALS
Optimum configurations for convolutional coders, rate 1/2:
k    v1    v2
3 1 1 0 1 1 1
4 1 1 0 1 1 1 1 0
5 1 1 0 1 0 1 1 1 0 1
6 1 1 0 1 0 1 1 1 1 0 1 1
7 1 1 0 1 0 1 1 1 0 1 0 1
8 1 1 0 1 1 1 1 1 1 0 0 1 1
Building blocks for a rate-1/3 encoder
[Figure: memory registers M1 ... Mk feeding adders A1, A2, A3 with
outputs v1, v2, v3]
EXAMPLE 1 - Encoding a single bit using a (2,4,1) encoder
[Figure: successive shift-register states and the adder outputs
v1, v2 as a single 1 propagates through the register]

EXAMPLE 2 - Rate-1/3 encoder
[Figure: encoder with stages M1 ... M4 (taps S1 ... S4), adders
v1, v2, v3, and a commutator sampling the outputs; for the input
stream bi the coded output is
bo: c = 111 010 100 110 001 000 011 000 000]
CONVOLUTIONAL CODING - Example 2
Initially the register is clear and the first bit of the input
data stream is entered into M1.
During this message bit, the commutator samples the adder outputs
v1, v2, v3.
A single message bit produces three coded output bits; the encoder
is then said to be of rate 1/3.
The next message bit then enters M1 while the bit initially in M1
is transferred to M2.
The commutator again samples the adder outputs.
This process continues until the last bit of the message has been
entered into M1.
Thereafter enough 0s are added to the message so that the entire
message may proceed through the shift register.
CONVOLUTIONAL CODING - Example 2
For the input bit stream m = 10110 the coded output bit stream is
c = 111 010 100 110 001 000 011 000 000.
The convolutional encoder operates continuously even if the input
message consists of millions of bits.
Each bit remains in the shift register for as many message bit
intervals as there are stages in the shift register.
Each input bit influences k groups of n bits.
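The continuous operation described above can be sketched for the rate-1/2 encoder of figure (1), with v1 = S1 + S3 and v2 = S1 + S2 + S3 and two memory stages flushed with zeros (a minimal simulation, not a general-purpose encoder):

```python
def conv_encode(bits, flush=2):
    """Rate-1/2 convolutional encoding: v1 = S1+S3, v2 = S1+S2+S3."""
    m1 = m2 = 0                          # shift-register contents (cleared)
    out = []
    for b in bits + [0] * flush:         # trailing zeros flush the register
        v1 = b ^ m2                      # S1 + S3
        v2 = b ^ m1 ^ m2                 # S1 + S2 + S3
        out.append((v1, v2))
        m1, m2 = b, m1                   # shift the register one stage
    return out

print(conv_encode([1, 0, 1, 1]))
# -> [(1, 1), (0, 1), (0, 0), (1, 0), (1, 0), (1, 1)]
```

The output pairs 11 01 00 10 10 11 reproduce the stream shown for input 1011 in figure (1), and each input bit influences three successive output pairs, matching the constraint length.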
CONVOLUTIONAL CODING - Example 3
[Figure: rate-1/2 encoder with register taps S1, S2, S3; for the
input bi = 10011 the output stream bo is 11 10 11 11 01 01 11]
CONVOLUTIONAL CODING - State table

Next bit | Present state (M1 M2) | Next state (M1 M2) | Code (v1 v2)
   0     |        0 0            |        0 0         |     0 0
   1     |        0 0            |        1 0         |     1 1
   0     |        1 0            |        0 1         |     1 0
   1     |        1 0            |        1 1         |     0 1
   0     |        0 1            |        0 0         |     1 1
   1     |        0 1            |        1 0         |     0 0
   0     |        1 1            |        0 1         |     0 1
   1     |        1 1            |        1 1         |     1 0
STATE DIAGRAM
[Figure: four states a = 00, b = 01, c = 10, d = 11 (states written
in M2 M1 sequence), with directed branches between states labelled
by the output pair (v1, v2) produced by each input bit]
CODE TREE REPRESENTATION
[Figure: code tree starting at state a(0,0); at each node an input
0 branches upward and an input 1 branches downward, each branch
labelled with the two output bits. States are written in M2 M1
sequence. The branch pattern repeats after the third bit.]
CODE TRELLIS
[Figure: trellis with current states a = 00, b = 01, c = 10,
d = 11 on the left and the corresponding next states on the right,
branches labelled with the output bits. States are written in
M2 M1 sequence.]
DIRECT DECODING
Consider the first message bit. It has an effect only on the first
kv bits in the code word.
With k=3 and v=2 the first digit has an effect only on the first 3
groups of 2 bits.
Hence to deduce the first digit we should examine the first 6
digits of the code.
We need not consider digits beyond the sixth, nor should we omit any
of the first 6 digits from the decision, since doing so would not take
full advantage of the redundancy of the code.
There are 8 possible combinations of the first 6 digits which
are acceptable code words.
These combinations correspond to 8 possible paths through the code
tree, each penetrating into the tree to the depth of three nodes.
DIRECT DECODING
Take a count, for each of the 8 paths, of the number of differences
between the bits of the received word and the acceptable code word
corresponding to that path.
If the path that yields the minimum number of discrepancies diverges
upward from the first node, we decide that the first message bit is 0.
If the path diverges downward from the first node, we decide that the
first bit is 1.
The second message bit is decoded in the same manner
starting from the new node obtained after decoding the first bit.
We compare the 6 bits corresponding to each of the 8 paths
with the 6 received code bits obtained after discarding the first
v = 2 received code bits.
This procedure is repeated for all message bits advancing node
by node in the process.
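The exhaustive comparison described above can be sketched as follows. The callback `encode_step` is a hypothetical stand-in for the encoder; the two-stage example below uses the same assumed taps as the Example 3 encoder.

```python
from itertools import product

def direct_decode_first_bit(received, encode_step, depth=3):
    """Compare the received bits with every path of the given depth
    through the code tree and decode the first message bit.
    encode_step(state, b) must return (next_state, output_bits)."""
    best_path, best_cost = None, None
    for path in product((0, 1), repeat=depth):      # 2**3 = 8 paths
        state, coded = (0, 0), []
        for b in path:
            state, out = encode_step(state, b)
            coded.extend(out)
        # count discrepancies between this path's code word and the input
        cost = sum(r != c for r, c in zip(received, coded))
        if best_cost is None or cost < best_cost:
            best_path, best_cost = path, cost
    # the first bit of the minimum-discrepancy path is the decision
    return best_path[0]

# Assumed two-stage rate-1/2 encoder (taps v1 = b^M1^M2, v2 = b^M2):
def step(state, b):
    m1, m2 = state
    return (b, m1), (b ^ m1 ^ m2, b ^ m2)

print(direct_decode_first_bit([1, 1, 1, 0, 1, 1], step))   # → 1
```

With the first 6 digits of the Example 3 output (11 10 11), the decision is 1, matching the first message bit.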
DIRECT DECODING
When a message is decoded in this manner the probability that
a bit in the decoded message is in error decreases
exponentially with k.
So ideally k should be as large as possible.
But the decoding of each bit requires an examination of the 2^k
branch sections of the code tree.
Hence for large k the decoding procedure becomes
impracticable.
DIRECT DECODING USING CODE TREE
[Figure sequence: the code tree traced step by step for the received stream 11 10 11 11 01 01 01. At each node the candidate 6-bit path segments are compared with the next 6 received bits, the minimum-discrepancy branch is marked as the correct path, and the decoder moves to the next node, until the decoding is complete. The branch pattern repeats after the third bit.]
SEQUENTIAL DECODING
In sequential decoding, at the arrival of the first v code bits,
the decoder compares these bits with the two branches which
diverge from the starting node.
If one of the branches agrees exactly with these code bits,
then the decoder follows this branch.
If, because of noise, there are errors in the received bits, the
decoder follows the branch with fewer discrepancies.
At the second node a similar comparison is made between the
diverging branches and the second set of v code bits, and a
decision is taken.
If more than half the bits of a group of v bits are in error the
decoder will make a mistake and it will follow the wrong path.
SEQUENTIAL DECODING
To avoid this problem, the decoder keeps a record of the total
number of discrepancies between the received code bits and
the corresponding bits encountered in its path.
If the decoder takes a wrong path, the number of discrepancies
grows much more rapidly than when it is following the correct path.
In such a situation the decoder may be programmed to
retrace its path to the node at which the error has occurred
and to follow the alternate path.
In this way the decoder will eventually find a path through the
k nodes.
If the decoder takes the correct turn at each node then it will
be able to make a decision on the basis of a single path.
SEQUENTIAL DECODING
When the decoder has retraced its path due to an error, it can
exclude from its search all the branches which diverge in the
wrong direction from that node.
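A sketch of this backtracking search follows. The discrepancy threshold and the `encode_step` callback are illustrative assumptions; the encoder below is the same assumed two-stage rate-1/2 encoder used earlier.

```python
def sequential_decode(received, encode_step, n_bits, threshold=2):
    """Sketch of sequential (tree) decoding with backtracking.

    At each node the branch with fewer local discrepancies is tried
    first; if the running discrepancy count exceeds `threshold`, the
    decoder retraces its path and tries the alternate branch.
    The threshold value here is illustrative only."""
    def search(state, pos, cost, bits):
        if pos == n_bits:
            return bits
        r = received[2 * pos: 2 * pos + 2]
        branches = []
        for b in (0, 1):
            ns, out = encode_step(state, b)
            d = sum(x != y for x, y in zip(out, r))
            branches.append((d, b, ns))
        for d, b, ns in sorted(branches):        # best branch first
            if cost + d <= threshold:            # prune unlikely paths
                found = search(ns, pos + 1, cost + d, bits + [b])
                if found is not None:
                    return found
        return None                              # backtrack

    return search((0, 0), 0, 0, [])

def step(state, b):          # assumed taps v1 = b^M1^M2, v2 = b^M2
    m1, m2 = state
    return (b, m1), (b ^ m1 ^ m2, b ^ m2)

# Example 3's code word 11 10 11 11 01 01 11 with one error in the last pair:
rx = [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 0,1]
print(sequential_decode(rx, step, 7))
# → [1, 0, 0, 1, 1, 0, 0]
```

The decoded bits are the message 10011 followed by the two flush zeros.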
SEQUENTIAL DECODING USING CODE TREE - ANOTHER EXAMPLE
[Figure sequence: the code tree traced for the received stream 11 10 11 11 01 01 01, with the running discrepancy count written at each visited node. When the count along a branch grows too quickly, the decoder retraces its path to the last reliable node and follows the alternate branch, eventually settling on the minimum-discrepancy path.]
ADDITIONAL EXAMPLES FOR
CONVOLUTIONAL CODING AND
DECODING USING STATE DIAGRAM
AND TREE
ADDITIONAL EXAMPLE-4 Encoding Three Digit Word Using (2,1,4) Encoder
[Figure: two encoder snapshots. Left: input state 010, input bit 1, output bits 01. Right: input state 001, input bit 1, output bits 11.]
EXAMPLE 4 Encoding Three Digit Word
[Figure: further encoder snapshots showing the adder outputs v1, v2 for successive shifts.]

INPUT BIT   PRESENT STATE   OUTPUT   NEXT STATE
0           000             00       000
1           000             11       100
0           001             11       000
1           001             00       100
0           010             10       001
1           010             01       101
0           011             01       001
1           011             10       101
0           100             11       010
1           100             00       110
0           101             00       010
1           101             11       110
0           110             01       011
1           110             10       111
0           111             10       011
1           111             01       111
[Figure 8: state diagram for the (2,1,4) code with the eight states 000-111. Solid lines mark the arrival of a 0 bit and dashed lines the arrival of a 1 bit, each labelled with its output pair.]
STATE DIAGRAM
A state diagram for the (2,1,4) code is shown in Fig 8. Each
circle represents a state.
At any one time, the encoder resides in one of these states.
The lines to and from each state show the transitions that are
possible as bits arrive.
Only two events can happen at each time, arrival of a 1 bit or
arrival of a 0 bit.
Each of these two events moves the encoder into a different state.
The state diagram does not have time as a dimension, and hence it
tends not to be intuitive.
The state diagram contains the same information that is in the
table lookup but it is a graphic representation.
The solid lines indicate the arrival of a 0 and the dashed lines
indicate the arrival of a 1. The output bits for each case are
shown on the line and the arrow indicates the state transition.
Encoding of the sequence 1011
Let's start at state 000. The arrival of a 1 bit outputs 11 and
puts us in state 100.
The arrival of the next 0 bit outputs 11 and put us in state 010.
The arrival of the next 1 bit outputs 01 and puts us in state
101.
The last bit 1 takes us to state 110 and outputs 11. So now
we have 11 11 01 11.
But this is not the end. We have to take the encoder back to the
all-zero state.
From state 110, go to state 011 outputting 01.
From state 011 we go to state 001 outputting 01, and then to
state 000 with a final output of 11.
The final answer is : 11 11 01 11 01 01 11
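The walk-through above can be reproduced programmatically. The generator taps g1 = 1111 and g2 = 1101 are inferred from the lookup table (the slides do not name them), and they reproduce every transition used above.

```python
def encode_214(msg):
    """(2,1,4) convolutional encoder from the state-diagram example.

    Generator taps g1 = 1111, g2 = 1101 are inferred from the lookup
    table; three flush zeros return the encoder to state 000."""
    s = [0, 0, 0]                         # three memory stages, all zero
    out = []
    for b in msg + [0, 0, 0]:
        v1 = b ^ s[0] ^ s[1] ^ s[2]       # g1 = 1111
        v2 = b ^ s[0] ^ s[2]              # g2 = 1101
        out.append(f"{v1}{v2}")
        s = [b] + s[:2]                   # shift the input bit in
    return " ".join(out)

print(encode_214([1, 0, 1, 1]))
# → 11 11 01 11 01 01 11
```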
TREE DIAGRAM
[Figure: code tree for the (2,1,4) encoder. Each branch is labelled with its two output bits followed by the resulting state in parentheses, e.g. 11 (100); from each node the upper branch is taken on a 0 bit and the lower branch on a 1 bit.]
TREE DIAGRAM
Here, instead of jumping from one state to another, we go down
branches of the tree depending on whether a 1 or a 0 is received.
The first branch indicates the arrival of a 0 or a 1 bit. The
starting state is assumed to be 000. If a 0 is received, we go
up, and if a 1 is received, we go downwards.
In the figure the solid lines show the arrival of a 0 bit and the
shaded lines the arrival of a 1 bit. The first 2 bits on each branch
show the output bits, and the number inside the parentheses is the
resulting state.
Let's code the sequence 1011 as before. At branch 1 we go down;
the output is 11 and we are now in state 100. Now we get a 0 bit,
so we go up; the output bits are 11 and the state is now 010.
The next incoming bit is 1. We go downwards and get an output of
01, and now the state is 101.
[Figure: two-stage encoder with adder outputs v1, v2 producing the coded stream bo = 11 01 00 10 10 11 used in the trellis examples below.]
STATE DIAGRAM
[Figure: state diagram with states a = 00, b = 01, c = 10, d = 11 (state written in M1 M2 order); each arrow between states is labelled with the output pair produced by that transition.]
TRELLIS DIAGRAM
[Figure: one trellis section. Current states a, b, c, d on the left connect to next states a, b, c, d on the right; each branch is labelled with its output pair (00, 11, 01, 10, ...).]
COMPLETE TRELLIS DIAGRAM
[Figure: the trellis section repeated over six bit intervals, with the same branch labels in every section.]
TRELLIS DIAGRAM - CODING & DIRECT DECODING
[Figure: the trellis with the path for the input sequence 001011 highlighted, producing the coded output 11 01 00 10 10 11.]
DECODING USING TRELLIS
There are several different approaches to decoding of
convolutional codes. These are grouped into two basic categories:
1. Sequential decoding: Fano algorithm
2. Maximum-likelihood decoding: Viterbi algorithm
Both of these methods represent two different approaches to the
same basic idea behind decoding.
Assume that 4 bits were sent via a rate 1/2 code. Counting the
flush bits, we receive 12 bits.
These 12 bits may or may not have errors.
We know from the encoding process that these bits map
uniquely: a 4-bit sequence has a unique 12-bit output. But due
to errors, we can receive any and all possible combinations of
the 12 bits.
DECODING USING TRELLIS
1. We can compare this received sequence to all permissible
sequences and pick the one with the smallest bit
disagreement: hard-decision decoding.
2. We can do a correlation and pick the sequence with the
best correlation: soft-decision decoding.
[Figure: the trellis with the received sequence 11 01 00 10 10 01 written beneath it.]
SEQUENTIAL DECODING
[Figure sequence: sequential decoding of the received stream 11 01 00 10 10 01 on the trellis. The running discrepancy count is written at each visited node; when the count grows too quickly, the decoder backtracks to an earlier node and follows the alternate branch.]
MAXIMUM LIKELIHOOD OR VITERBI DECODING
All paths are followed until two paths converge on one node.
Then the path with the lower metric is kept and the one with the
higher metric is discarded. The paths selected are called the
survivors.
For an N-bit sequence, the total number of possible received
sequences is 2^N. Of these only 2^(kL) are valid. The Viterbi
algorithm applies the maximum-likelihood principle to limit the
comparison to the 2^(kL) surviving paths instead of checking all paths.
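The survivor-selection idea can be sketched for the trellis example above. The taps v1 = b^M2, v2 = b^M1^M2 are an assumption chosen so that the message 1011 encodes to the slide's sequence 11 01 00 10 10 11; the received stream below has one bit error in the last pair.

```python
def viterbi_decode(received_pairs, n_flush=2):
    """Hard-decision Viterbi decoding for a rate-1/2 encoder with
    assumed taps v1 = b ^ M2, v2 = b ^ M1 ^ M2, state = (M1, M2)."""
    def step(state, b):
        m1, m2 = state
        return (b, m1), (b ^ m2, b ^ m1 ^ m2)   # next state, output pair
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    INF = float("inf")
    metric = {s: (INF, []) for s in states}     # (path metric, decoded bits)
    metric[(0, 0)] = (0, [])                    # start in the all-zero state
    for r in received_pairs:
        new = {s: (INF, []) for s in states}
        for s in states:
            cost, bits = metric[s]
            if cost == INF:
                continue
            for b in (0, 1):
                ns, out = step(s, b)
                d = (out[0] != r[0]) + (out[1] != r[1])
                # keep only the lower-metric path into each node (survivor)
                if cost + d < new[ns][0]:
                    new[ns] = (cost + d, bits + [b])
        metric = new
    # the trellis is terminated, so the survivor ending at state 00 wins
    cost, bits = metric[(0, 0)]
    return bits[:-n_flush] if n_flush else bits

rx = [(1, 1), (0, 1), (0, 0), (1, 0), (1, 0), (0, 1)]   # 11 01 00 10 10 01
print(viterbi_decode(rx))   # → [1, 0, 1, 1]
```

The single channel error is corrected: the surviving path is the code word 11 01 00 10 10 11, giving back the message 1011.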
COMPLETE TRELLIS DIAGRAM
[Figure: the full trellis over six bit intervals.]
REDUCED TRELLIS DIAGRAM
[Figure: the trellis redrawn with only the surviving paths retained.]
VITERBI DECODING - STEPS
[Figure sequence: Viterbi decoding of the received stream 11 01 00 10 10 01 (message 1011) on the trellis. Each branch is labelled with its branch metric in parentheses; wherever two paths merge at a node, the path with the larger accumulated metric is discarded, and the surviving minimum-metric path is traced back at the end.]
BURST ERROR
CORRECTION
BURST ERROR CORRECTION
The parity bits added in block codes will correct a limited
number of bit errors in each code word.
When the errors are clustered, error-correcting codes are
not very effective even if the average bit error rate is small.
The errors in this case are clustered: in one region a large
percentage of bits are in error, whereas in other regions the
average error rate is very small.
BLOCK INTERLEAVING
In block interleaving, the data goes through a process of
interleaving and error-correction coding before being applied
to the channel.
At the receiver, a decoding process followed by de-interleaving
is used to recover the original data.
[Figure: Data → Interleaving → Coding → CHANNEL → Decoding → De-interleaving → Recovered data.]
BLOCK INTERLEAVING
A block of kl data bits is loaded into a shift register which is
organized into k rows with l bits per row.
The data stream is entered in to the storage element at a11.
At each shift, each bit moves one position to the right while
the bit in the rightmost storage element moves in to the
leftmost storage element of the next row.
When kl data bits have been entered, the register is full, the
first bit being akl and the last bit in a11.
At this point the data stream is diverted to a similar shift
register.
A process of error control coding is now applied to the data
held stored in the first register.
BLOCK INTERLEAVING
In this coding process, the information bits in a column are
viewed as the bits of an uncoded word to which parity bits are
to be attached.
Thus the code word (a11, a21, a31, ..., ak1, c11, c21, c31, ..., cr1)
is formed, thereby generating a code word with k information
bits and r parity bits.
The information bits in this code word are l bits apart in
the original bit stream.
When the coding is completed, the entire contents of the (k x l)
information register as well as the (r x l) parity bits are transmitted
over the channel.
Generally, bit-by-bit serial transmission is carried out row
by row, i.e., (crl, ..., cr1, ..., c1l, ..., c11, akl, ..., ak1, ..., a1l, a11)
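The reordering that makes neighbouring input bits end up l positions apart can be sketched as below. This is a simplified view: only the information array is shown, not the parity rows that the slides also transmit, and integers stand in for bits so the reordering is visible.

```python
def block_interleave(bits, k, l):
    """Load k rows of l bits row by row, then read the array out
    column by column, so bits adjacent in the input are spread apart."""
    assert len(bits) == k * l
    rows = [bits[i * l:(i + 1) * l] for i in range(k)]
    return [rows[r][c] for c in range(l) for r in range(k)]

def block_deinterleave(bits, k, l):
    """Inverse operation at the receiver: read columns back into rows."""
    cols = [bits[i * k:(i + 1) * k] for i in range(l)]
    return [cols[c][r] for r in range(k) for c in range(l)]

data = list(range(12))                 # 12 stand-in "bits", k = 3, l = 4
tx = block_interleave(data, 3, 4)
print(tx)                              # → [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
assert block_deinterleave(tx, 3, 4) == data
```

A burst of up to k consecutive channel errors now lands in k different code words, each of which sees only a single error.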
CONVOLUTIONAL INTERLEAVING
The four switches operate in step and move from line to line
at the bit rate of the input bit stream d(k).
Each switch makes contact with line 1 at the same time,
moves to line 2 together, and returns to line 1 at the same
time.
The cascades of storage elements in the lines are shift
registers.
Starting with line 1, the number of elements increases by s as
we progress from line to line; the last line l has (l-1)s storage
elements.
The total number of storage elements per line, counting
transmitter and receiver together, is the same for every line: (l-1)s.
CONVOLUTIONAL INTERLEAVING
[Figure: transmitter lines 1..l with 0, s, 2s, ..., (l-1)s storage elements feeding the CHANNEL; receiver lines with (l-1)s, (l-2)s, ..., 0 elements, so that every line has the same total delay.]
CONVOLUTIONAL INTERLEAVING
Consider a single line li on the transmitter side.
During a particular bit interval of d(k) there is switch contact
at the input and output sides of line li.
At the end of the bit interval, a clock signal causes the shift
register of line li to enter the bit on the input side of the line
into its leftmost storage element, and to start moving the
contents of each of its storage elements one position to the right.
Then a synchronous clock advances the switches to the next
line li+1.
When the shift-register response is completed, there will be a
new bit at the output end of line li.
Because of the propagation delay through the storage
elements, the switch at the output end of the line will have
already lost contact with line li before the new bit has
appeared at the line output.
CONVOLUTIONAL INTERLEAVING
If bit d(k) occurs when all switches are on line li, then the
corresponding received bit d(k) will appear immediately.
The next input bit d(k+1) will be the next received bit, except
that it will be transmitted over line li+1.
The received sequence will be the same as the transmitted
sequence. With the shift registers in place in the transmitter
and receiver, each of the l lines will have the same delay
(l-1)s, and therefore the output sequence will still be identical
to the input sequence.
The sequence in which the bits are transmitted over the
channel, however, is different.
Suppose that two successive bits in the input bit stream are
d(k) and d(k+1).
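The transmitter side of this arrangement can be sketched as follows. Integers number the input bits for clarity, and the zeros that initially fill the registers (an assumption about the unspecified initial contents) appear at the start of the channel sequence.

```python
from collections import deque

class ConvInterleaver:
    """Sketch of the transmitter side of a convolutional interleaver.

    Line i (i = 0..l-1) is a shift register of i*s stages; a commutator
    feeds successive input bits to successive lines. Registers start
    filled with zeros (an assumed initial condition)."""
    def __init__(self, l, s):
        self.lines = [deque([0] * (i * s)) for i in range(l)]
        self.i = 0                        # commutator position
    def push(self, bit):
        line = self.lines[self.i]
        self.i = (self.i + 1) % len(self.lines)
        if not line:                      # line 0 has no storage: pass through
            return bit
        line.appendleft(bit)              # bit enters the leftmost element
        return line.pop()                 # rightmost element goes to channel

ilv = ConvInterleaver(l=4, s=1)
tx = [ilv.push(b) for b in range(1, 17)]  # input bits numbered 1..16
print(tx)
# → [1, 0, 0, 0, 5, 2, 0, 0, 9, 6, 3, 0, 13, 10, 7, 4]
```

Bits that were adjacent at the input (e.g. 1 and 2) are l = 4 positions apart on the channel, so a burst of up to l channel errors hits l different lines.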
CONVOLUTIONAL INTERLEAVING - EXAMPLE
[Figure sequence: a four-line interleaver with s = 1 shown at successive bit intervals. Input bits are numbered 1, 2, 3, ...; successive snapshots show each line's register contents and the channel sequence, in which bits that were adjacent at the input are spaced l = 4 positions apart.]
AUTOMATIC REPEAT
REQUEST
AUTOMATIC REPEAT REQUEST (ARQ)
There are basically two different techniques available for
controlling transmission errors:
(1) Forward Error Correction (FEC) schemes, in which
redundancy is deliberately introduced to detect and
correct errors.
(2) Automatic Repeat Request (ARQ) schemes, in which
errors in the received code word are detected and a
request for retransmission is sent back to the transmitter.
The FEC method has the limitation that if errors are too
numerous, the code will not be effective.
Also, to achieve low error rates, it is necessary to add a large
number of redundant bits.
As a result the efficiency of the code is very low.
AUTOMATIC REPEAT REQUEST (ARQ)
When the code is very long, we require complex and
expensive hardware to process the codes.
ARQ is used where extremely low error rates are required.
In this system the receiver needs only to detect errors, not
to correct them.
When an error is detected in a word, the receiver signals back
to the transmitter and the word is retransmitted.
So a feedback channel must be provided in ARQ systems.
Since coding allows more error detection than error
correction, ARQ makes more effective use of coding.
There are basically three ARQ systems: stop-and-wait ARQ,
go-back-N ARQ, and selective-repeat ARQ.
STOP AND WAIT ARQ
The transmitter sends a code word to the receiver during the
time Tw.
The receiver receives and processes the code word, and if it
detects no error, it sends back to the transmitter an
acknowledgement (ACK) signal.
When the ACK is received the transmitter sends the next
word.
If the receiver does detect an error, it returns a negative
acknowledgement (NAK) to the transmitter.
In this case the transmitter retransmits the same message
and again waits for an ACK or NAK response before
undertaking further transmission.
The elapsed time between the end of transmission of one
word and the start of transmission of the next word is TI.
STOP AND WAIT ARQ
[Figure: timing diagram. The transmitter sends words 1, 2, 3 with idle gaps TI between them; the receiver returns ACK, ACK, then NAK when an error is detected in word 3, and word 3 is retransmitted.]
The main drawback of such a system is that the transmitter must stand
idle while waiting for the ACK or NAK.
GO BACK N ARQ
The transmitter sends messages one after another without
delay and does not wait for an ACK signal.
When the receiver detects an error in message i, a NAK
signal is returned to the transmitter.
In response to the NAK, the transmitter returns to code word i
and starts all over again at that word.
In figure 2, the schematic is drawn on the assumption that the
propagation delay and the processing at the receiver occupy
an interval of such length that when an error is detected in
word i, the number N of words that are sent over again is N=5.
GO BACK N ARQ
[Figure: timing diagram. The transmitter sends words 1, 2, 3, ... continuously; when the receiver detects an error in word 2, words 2 through 6 are retransmitted (N = 5) and transmission then continues with word 7.]
SELECTIVE REPEAT ARQ
The transmitter sends messages in succession, without waiting
for an ACK after each message.
If the receiver detects that there is an error in code word i, the
transmitter is notified by NAK.
The transmitter retransmits the code word i and thereafter
returns immediately to its sequential transmission.
Selective-repeat ARQ has the highest efficiency of the three
systems, but it is the most costly to implement.
SELECTIVE REPEAT ARQ
[Figure: timing diagram. The transmitter sends words continuously; when the receiver reports an error in word 2, only word 2 is retransmitted and transmission then resumes with word 7.]
PERFORMANCE OF ARQ SYSTEMS
Let PA be the probability that a word is accepted, and put x = 1 - PA.
The average number of times a word must be transmitted in the
stop-and-wait system is

    N_SW = (1 - x)(1 + 2x + 3x^2 + ...) = (1 - x)(1 - x)^-2
         = 1/(1 - x) = 1/PA

The total time devoted to a single attempt to get the receiver
to accept a word is TW + TI.
Hence, on average, the time required to transmit one word is

    T_SW = (TW + TI)/PA
THROUGHPUT OF STOP AND WAIT ARQ
If ARQ were not used and no coding bits were added to the k
information bits, the time needed to transmit the k bits would
be

    Tk = (k/n) TW

The throughput efficiency of the stop-and-wait ARQ system is
then

    eta_SW = Tk / T_SW = (k/n) . PA / (1 + TI/TW)
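The throughput formula is easy to evaluate numerically. The sample numbers below (a (7,4) code word, 99% acceptance probability, idle time equal to the word time) are illustrative only.

```python
def throughput_stop_and_wait(k, n, p_accept, t_i_over_t_w):
    """Stop-and-wait ARQ throughput, eta = (k/n) * PA / (1 + TI/TW)."""
    return (k / n) * p_accept / (1 + t_i_over_t_w)

# e.g. k = 4, n = 7, PA = 0.99, TI = TW:
print(round(throughput_stop_and_wait(4, 7, 0.99, 1.0), 3))   # → 0.283
```

Even with almost every word accepted, the idle interval TI alone halves the throughput, which is the drawback noted earlier.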
THROUGHPUT OF GO-BACK-N ARQ
In this system, when the transmitter is informed that an error
has occurred in a particular word, retransmission is required of
that word and of the (N-1) words that follow.
Hence each retransmission involves N words. If a word is
received in error once, the total number of words transmitted is
N + 1; if the same word is in error again, the N words are repeated
once more, and so on.
The average number of word transmissions required for the
acceptance of a single word is

    N_GBN = 1.PA + (N + 1).PA(1 - PA) + (2N + 1).PA(1 - PA)^2 + ...
Put (1 - PA) = x, so PA = 1 - x:

    N_GBN = (1 - x)(1 + x + x^2 + ...) + Nx(1 - x)(1 + 2x + 3x^2 + ...)
          = (1 - x) . 1/(1 - x) + Nx(1 - x) . 1/(1 - x)^2
          = 1 + Nx/(1 - x)

    N_GBN = 1 + N(1 - PA)/PA
THROUGHPUT OF GO-BACK-N ARQ
Correspondingly, the average time to transmit one word is

    T_GBN = TW (1 + N(1 - PA)/PA)

    eta_GBN = Tk / T_GBN = (k/n) . 1 / (1 + N(1 - PA)/PA)
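The go-back-N formula can likewise be evaluated numerically; the word parameters and acceptance probabilities below are illustrative, chosen to show how throughput degrades as PA falls.

```python
def throughput_go_back_n(k, n, p_accept, N):
    """Go-back-N ARQ throughput, eta = (k/n) / (1 + N(1 - PA)/PA)."""
    return (k / n) / (1 + N * (1 - p_accept) / p_accept)

# k = 4, n = 7, N = 5 outstanding words:
for pa in (0.999, 0.99, 0.9):
    print(round(throughput_go_back_n(4, 7, pa, 5), 3))
# → 0.569, 0.544, 0.367
```

Because every NAK costs N word times, the penalty for a low acceptance probability is N times heavier than in a scheme that repeats only the failed word, which is why selective-repeat ARQ is the most efficient of the three.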
THROUGHPUT OF SELECTIVE REPEAT ARQ