DC Digital Communication – MODULE IV Part 2

The document discusses error detection and correction techniques. It describes parity check bit coding, which uses extra parity bits to detect errors. It discusses block codes and cyclic codes for encoding and decoding data with error correction. It also covers convolutional coding and techniques such as interleaving to correct burst errors. Forward error correction allows detection and correction of errors at the receiver without retransmission.

CODING FOR ERROR DETECTION AND CORRECTION

MODULE IV Part II

Coding: Parity check bit coding for error detection - Coding for error
detection and correction - Block codes: coding and decoding - Systematic
and nonsystematic codes - Cyclic codes: generator polynomial, generator
and parity check matrices, encoding and decoding of cyclic codes,
syndrome computation and error detection - Convolutional coding: code
generation, decoding, code tree, sequential decoding, state and trellis
diagrams, Viterbi algorithm - Burst error correction: block and
convolutional interleaving - ARQ: types of ARQ, performance of ARQ -
Comparison of error rates in coded and uncoded systems.
CLASSIFICATION OF ERROR CONTROL METHODS
„ Error control methods may be classified as
(i) Error detection and retransmission: When an error is detected at the
receiver, a retransmission request is sent back to the transmitter.
(ii) Forward error correction: The receiver detects errors and corrects
them by proper coding techniques. This method is used when a single
source transmits to a number of receivers.
Error Control Techniques
„ Error detection and retransmission
‰ When errors are detected by the receiver, it requests a
retransmission. This is known as automatic repeat request (ARQ) and is
used for delay-insensitive data.
‰ Appropriate for
„ Low delay channels
„ Channels with a return path
‰ Not appropriate for delay-sensitive data, e.g., real-time speech and data
Error Control Techniques
„ Forward Error Correction (FEC)
‰ Coding designed so that errors can be corrected at the receiver
‰ Appropriate for delay-sensitive and one-way transmission
(e.g., broadcast TV) of data
‰ There are two main types of FEC, namely block codes and
convolutional codes.
Some Properties of Codes - Hamming distance
„ The Hamming distance between two code words is defined as the number
of elements in which they differ.

    c1 = [1 0 1 1 0 0 1] ⎫  Hamming
    c2 = [1 0 0 1 1 0 0] ⎭  distance d = 3

„ The minimum distance dmin is the smallest Hamming distance between
valid code word vectors.
„ Error detection is possible only if the received vector is not equal
to some other code vector.
„ So the number of transmission errors in a received code vector should
be less than the minimum distance dmin.
Hamming distance
Find the minimum Hamming distance of the given coding scheme.

We first find all the Hamming distances.

The dmin in this case is 3.
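These two definitions translate directly into code. The following Python
sketch (illustrative helper names, not from the slides) computes the
Hamming distance of the pair c1, c2 used above and the minimum distance
of any small codebook:

    from itertools import combinations

    def hamming_distance(c1, c2):
        # Number of positions in which two equal-length code words differ.
        return sum(b1 != b2 for b1, b2 in zip(c1, c2))

    def minimum_distance(codebook):
        # Smallest Hamming distance over all pairs of distinct code words.
        return min(hamming_distance(a, b) for a, b in combinations(codebook, 2))

    c1 = [1, 0, 1, 1, 0, 0, 1]
    c2 = [1, 0, 0, 1, 1, 0, 0]
    print(hamming_distance(c1, c2))   # 3, as in the example above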


Distance requirement for error detection and correction

    DISTANCE        NUMBER OF ERRORS DETECTED/CORRECTED
    dmin ≥ s+1      Detects up to s errors per word
    dmin ≥ 2t+1     Corrects up to t errors per word
    dmin ≥ t+s+1    Detects s errors and corrects t errors per word, s > t

Code efficiency and Weight
„ Efficiency is the ratio of message bits in a block to the actual
transmitted bits after encoding:

    Code efficiency η = (message bits in a block) / (transmitted bits for the block) = k/n

If the code word is [1 0 1 1 0 0 1], with 3 parity bits followed by
4 message bits, then η = 4/7.

„ Weight is the number of non-zero elements in the transmitted code
vector.

    c1 = [1 0 1 1 0 0 1] → Weight = 4
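As a quick numerical check of the two definitions (a sketch, using the
slide's 7-bit code word):

    codeword = [1, 0, 1, 1, 0, 0, 1]      # 3 parity bits + 4 message bits
    k, n = 4, len(codeword)
    efficiency = k / n                     # eta = k/n = 4/7
    weight = sum(codeword)                 # number of non-zero elements
    print(efficiency, weight)              # 0.5714..., 4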
PARITY CODING
Parity Coding
„ Parity bits are added to the message bits at the time of transmission.
The receiver checks these parity bits. An error is detected if the
pattern of parity bits is not correct.
„ The simplest form of parity encoding is single-bit parity encoding.
„ Even parity: If the number of 1's in the message block is even, a zero
is added as the parity bit to make the parity of the transmitted code
even.
„ Odd parity: If the number of 1's is odd, a 1 is added as the parity
bit to make the parity of the transmitted code even.
„ The parity of the transmitted code in either case is always even.
„ ASCII code is an example where one parity bit is inserted as the
eighth bit along with the seven-bit code.
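A minimal sketch of single-bit even-parity encoding and checking,
assuming the convention described above (the parity bit is chosen so the
transmitted 8-bit code always has even parity); the function names are
illustrative:

    def add_even_parity(bits7):
        # Append a parity bit so the 8-bit result has an even number of 1's.
        return bits7 + [sum(bits7) % 2]

    def parity_ok(bits8):
        # True if the received 8-bit code still has even parity.
        return sum(bits8) % 2 == 0

    K = [1, 1, 0, 1, 0, 0, 1]              # 7-bit code for 'K', as in the table below
    tx = add_even_parity(K)                # -> [1, 1, 0, 1, 0, 0, 1, 0]
    rx = tx.copy(); rx[1] ^= 1             # introduce a single-bit error
    print(parity_ok(tx), parity_ok(rx))    # True False: the error is detected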
Parity Encoding

    Character   b0 b1 b2 b3 b4 b5 b6 b7   Parity of       Parity of
                                          message block   Txd code
    K           1  1  0  1  0  0  1  0    Even            Even
    O           1  1  1  1  0  0  1  1    Odd             Even
    I           1  0  0  1  0  0  1  1    Odd             Even

K ASCII = [1 1 0 1 0 0 1 0]; the parity bit is generated by EX-OR of the
message bits.
Parity Encoding
„ At the receiver the parity of the received code is checked. If a
single-bit error has occurred it will make the parity of the received
code odd. The receiver detects this error.
„ The receiver can only detect errors; it cannot correct errors.
„ If there are two bit errors, the parity of the received code is again
even and the receiver cannot detect this error.

                          ERROR
    K → [1 1 0 1 0 0 1 0]   →   [1 0 0 1 0 0 1 0]
         Even parity             Odd parity
                                 Error detected
„ This type of parity encoding and decoding is called Vertical
Redundancy Check (VRC)
Longitudinal Redundancy Check (LRC)
„ In LRC, a large message block is divided into several characters.
„ Parity checks are applied along both rows and columns.
„ It is possible to detect 3 errors and correct single errors.
„ For parity checking purposes, a complete character known as the Block
Check Character (BCC) is added at the end of the block of information.
„ At the start of the block a Start of Text (STX) character is sent,
which indicates that an information block follows.
„ At the end of the block an End of Transmission Block (ETB) character
is transmitted, followed by the BCC.
Longitudinal Redundancy Check (LRC)

    STX         INFORMATION BLOCK   ETB         BCC
    One         Many                One         One
    character   characters          character   character
Longitudinal Redundancy Check (LRC)

         STX  N  A  I  T  I  S  O  K  ETB  BCC (column parity bits)
    b1    0   0  1  1  0  1  1  1  1   1    1
    b2    1   1  0  0  0  0  1  1  1   1    0
    b3    0   1  0  0  1  0  0  1  0   1    0
    b4    0   1  0  1  0  1  0  1  1   0    1
    b5    0   0  0  0  1  0  1  0  0   1    1
    b6    0   0  0  0  0  0  0  0  0   0    0
    b7    0   1  1  1  1  1  1  1  1   0    0
    b8    1   0  0  1  1  1  0  1  0   0    1   (row parity bits)
Longitudinal Redundancy Check (LRC)

         STX  N  A  I  ~  I  S  O  K  ETB  BCC
    b1    0   0  1  1  0  1  1  1  1   1    1
    b2    1   1  0  0  0  0  1  1  1   1    0
    b3    0   1  0  0  1  0  0  1  0   1    0
    b4    0   1  0  1  1  1  0  1  1   0    1   X
    b5    0   0  0  0  1  0  1  0  0   1    1
    b6    0   0  0  0  0  0  0  0  0   0    0
    b7    0   1  1  1  1  1  1  1  1   0    0
    b8    1   0  0  1  1  1  0  1  0   0    1
                      X

The failing row check and failing column check (marked X) intersect at
the corrupted bit, so the single error can be corrected.
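The row and column checks in the tables above can be sketched in a few
lines of Python (illustrative names; each character is stored as its
column of seven data bits b1-b7):

    def char_parity(col):
        # b8 for one character: parity over its seven data bits.
        return sum(col) % 2

    def row_parity(block, r):
        # BCC bit for row r: parity across all characters of the block.
        return sum(col[r] for col in block) % 2

    # Two example columns (b1..b7) taken from the first table: STX and N.
    block = [[0, 1, 0, 0, 0, 0, 0],
             [0, 1, 1, 1, 0, 0, 1]]
    print([char_parity(c) for c in block])           # [1, 0], matching row b8
    print([row_parity(block, r) for r in range(7)])  # BCC over these two columns only
    # A single flipped bit fails exactly one row check and one column check;
    # their intersection locates (and so corrects) the erroneous bit.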
LINEAR BLOCK CODES

LINEAR BLOCK CODES
„ To generate an (n,k) block code the channel encoder accepts
information in successive k-bit blocks.
„ For each block it adds n-k redundant bits that are algebraically
related to the k message bits, thereby producing an overall encoded
block of n bits, where n > k.
„ The n-bit block is called a code word, and n is called the block
length of the code.
„ The ratio k/n is called the rate of the code.
„ R0 = (n/k)Rs, where Rs is the bit rate of the information source and
R0 is the channel data rate.
„ A code is said to be linear if any two code words in the code can be
added in modulo-2 arithmetic to produce a third code word in the code.
LINEAR BLOCK CODES

[Figure: structure of a code word - k message bits plus (n-k) parity
bits make up the n-bit encoded block.]
LINEAR BLOCK CODES
„ In the case of (n,k) block codes there are (n-k) redundant bits added
to the k-bit message to produce the n-bit code word.
„ The (n-k) bits are referred to as generalized parity check bits, or
simply parity bits.
„ Block codes in which the message bits are transmitted in unaltered
form are called systematic codes.
„ Let m0, m1, ..., mk-1 constitute a block of k arbitrary message bits.
Thus we can have 2^k distinct message blocks.
„ Let this sequence of message bits be applied to a linear block encoder
producing an n-bit code word whose elements are denoted by
c0, c1, ..., cn-1.
„ Let b0, b1, ..., bn-k-1 denote the (n-k) parity bits in the code word.
LINEAR BLOCK CODES
„ The code word can be divided into two parts; one part is occupied by
the message bits and the other by the parity bits.
„ The parity bits may be sent before or after the message bits.
„ The structure of a code word in which the message bits are sent after
the parity bits is illustrated below.

    b0 b1 . . . bn-k-1 | m0 m1 . . . mk-1
       Parity bits         Message bits
            Code bits (n in total)
LINEAR BLOCK CODES
„ The (n-k) leftmost bits of a code word are identical to the
corresponding parity bits, and the k rightmost bits are identical to the
corresponding message bits.
„ The (n-k) parity bits are linear sums of the k message bits, as shown
by the relation

    b_i = p_i0 m0 + p_i1 m1 + p_i2 m2 + ... + p_i,k-1 mk-1

where i = 0, 1, ..., n-k-1, and

    p_ij = 1 if b_i depends on m_j, and 0 otherwise.
LINEAR BLOCK CODES
„ This system of equations can be re-written in a compact form using
matrix notation.
„ Let the message vector, parity vector and code vector be represented
by the row vectors

    [m] = [m0, m1, m2, ..., mk-1]
    [b] = [b0, b1, b2, ..., bn-k-1]
    [c] = [c0, c1, c2, ..., cn-1]

„ The parity vector may be represented as

    [b] = [m][P]    ....... (1)
LINEAR BLOCK CODES
„ The matrix [P] is a k by (n-k) coefficient matrix defined by

          ⎡ p00      p10      . .  p(n-k-1),0   ⎤
          ⎢ p01      p11      . .  p(n-k-1),1   ⎥
    [P] = ⎢  .        .       . .       .       ⎥
          ⎢  .        .       . .       .       ⎥
          ⎣ p0,k-1   p1,k-1   . .  p(n-k-1),k-1 ⎦

„ The code vector [c] may be expressed as a partitioned row vector in
terms of [m] and [b]:

    [c] = [ [b] [m] ]
LINEAR BLOCK CODES
„ Substituting for [b] from equation (1):

    [c] = [ [m][P] [m] ]

„ Taking out the common factor [m]:

    [c] = [m][ [P] [Ik] ]    ...... (2)

where Ik is the k-by-k identity matrix

           ⎡ 1 0 . . 0 ⎤
           ⎢ 0 1 . . 0 ⎥
    [Ik] = ⎢ . . . . . ⎥
           ⎢ . . . . . ⎥
           ⎣ 0 0 . . 1 ⎦
LINEAR BLOCK CODES
„ It is convenient to define the k-by-n generator matrix

    [G] = [ [P] [Ik] ]

„ Now equation (2) may be simplified as

    [c] = [m][G]    ...... (3)

„ The full set of code words is generated by letting the message vector
[m] range through the set of all 2^k combinations.
„ The sum of any two code words is another code word. This basic
property of linear block codes is called closure.
PROOF OF CLOSURE PROPERTY
„ Consider any pair of code vectors ci and cj corresponding to a pair of
message vectors mi and mj respectively. Now

    [c] = [m][G]
    ci + cj = mi[G] + mj[G] = (mi + mj)[G]

„ The modulo-2 sum of mi and mj represents a new message vector.
Correspondingly, the modulo-2 sum of ci and cj represents a new code
vector.
IMPORTANT RELATIONS

    [G] = [ [P] [Ik] ]
    [c] = [m][G]
    [c] = [m][ [P] [Ik] ]
    [c] = [ [m][P]  [m][Ik] ]

[P] is a k × (n-k) matrix
[Ik] is a k × k identity matrix
[G] is a k × n matrix
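A minimal sketch of these relations in Python (helper names are ours):
building G = [P | Ik] and encoding c = mG in modulo-2 arithmetic, here
with the P matrix of Example 1 below:

    P = [[1, 0, 1],
         [1, 1, 1],
         [1, 1, 0],
         [0, 1, 1]]                       # k x (n-k) coefficient matrix
    k = len(P)
    G = [P[i] + [int(j == i) for j in range(k)] for i in range(k)]  # [P | Ik]

    def encode(m, G):
        # c = m G in modulo-2 arithmetic.
        return [sum(m[i] * G[i][j] for i in range(len(m))) % 2
                for j in range(len(G[0]))]

    print(encode([1, 0, 1, 1], G))        # [0, 0, 0, 1, 0, 1, 1], as in the table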
HAMMING CODES
„ An (n,k) linear block code is called a Hamming code if it satisfies
the following conditions:
‰ Number of parity bits (n-k) ≥ 3
‰ Block length n = 2^(n-k) - 1
‰ Minimum distance dmin = 3
„ Since the minimum distance of a Hamming code is 3, it can be used to
detect double errors and correct single errors.
Example 1
„ Consider a Hamming code with n = 7, k = 4 and (7 - 4) = 3 redundant
bits. Let

        ⎡ 1 0 1 ⎤
    P = ⎢ 1 1 1 ⎥
        ⎢ 1 1 0 ⎥
        ⎣ 0 1 1 ⎦

[G] is a k × n, i.e., 4 × 7 matrix; Ik is a k × k, i.e., 4 × 4 matrix;
[P] is a k × (n-k), i.e., 4 × 3 matrix.

                              ⎡ 1 0 1 1 0 0 0 ⎤
    [G] = [ [P] [Ik] ]    G = ⎢ 1 1 1 0 1 0 0 ⎥
                              ⎢ 1 1 0 0 0 1 0 ⎥
                              ⎣ 0 1 1 0 0 0 1 ⎦
Example 1

    [c] = [m][G]

                                                    ⎡ 1 0 1 1 0 0 0 ⎤
    [c0, c1, c2, c3, c4, c5, c6] = [m0, m1, m2, m3] ⎢ 1 1 1 0 1 0 0 ⎥
                                                    ⎢ 1 1 0 0 0 1 0 ⎥
                                                    ⎣ 0 1 1 0 0 0 1 ⎦

    c0 = m0 + m1 + m2        c3 = m0
    c1 = m1 + m2 + m3        c4 = m1
    c2 = m0 + m1 + m3        c5 = m2
                             c6 = m3
Alternate Method

    [c] = [ [m][P]  [m][Ik] ]

    [c0, c1, c2, c3, c4, c5, c6] =

                       ⎡ 1 0 1 ⎤                      ⎡ 1 0 0 0 ⎤
    [m0, m1, m2, m3]   ⎢ 1 1 1 ⎥    [m0, m1, m2, m3]  ⎢ 0 1 0 0 ⎥
                       ⎢ 1 1 0 ⎥                      ⎢ 0 0 1 0 ⎥
                       ⎣ 0 1 1 ⎦                      ⎣ 0 0 0 1 ⎦

    c0 = m0 + m1 + m2        c3 = m0    c5 = m2
    c1 = m1 + m2 + m3        c4 = m1    c6 = m3
    c2 = m0 + m1 + m3
[m] Code word [C] Weight
m0….m3 c0…….….c6
0000 000 0000 0
0001 011 0001 3
0010 110 0010 3
0011 101 0011 4
0100 111 0100 4
0101 100 0101 3
0110 001 0110 3
0111 010 0111 4
1000 101 1000 3
1001 110 1001 4
1010 011 1010 4
1011 000 1011 3
1100 010 1100 3
1101 001 1101 4
1110 100 1110 4
1111 111 1111 7
Example 2
„ Consider a (6, 3) code with n = 6, k = 3, n - k = 3.
„ In this case the parity coefficient matrix [P] is a k by (n-k), i.e.,
3 × 3 matrix, given by

          ⎡ 0 1 1 ⎤             ⎡ 1 0 0 ⎤
    [P] = ⎢ 1 1 0 ⎥      [Ik] = ⎢ 0 1 0 ⎥
          ⎣ 1 0 1 ⎦             ⎣ 0 0 1 ⎦

    [c] = [ [m][P]  [m][Ik] ]

Let m = [m0 m1 m2]. Then

                     ⎡ 0 1 1 ⎤                 ⎡ 1 0 0 ⎤
    [c] = [m0,m1,m2] ⎢ 1 1 0 ⎥     [m0,m1,m2]  ⎢ 0 1 0 ⎥
                     ⎣ 1 0 1 ⎦                 ⎣ 0 0 1 ⎦

    [c] = [m1 + m2, m0 + m1, m0 + m2, m0, m1, m2]
Example 2 (contd.)

    Data [m]     Code [c]       Weight
    (m0...m2)    (c0.......c5)

000 000 000 0


001 101 001 3
010 110 010 3
011 011 011 4
100 011 100 3
101 110 101 4
110 101 110 4
111 000 111 3
Example 3
„ Consider a (6, 3) code with [P] matrix given by

          ⎡ 1 0 1 ⎤       n = 6, k = 3, (n-k) = 3
    [P] = ⎢ 0 1 1 ⎥
          ⎣ 1 1 0 ⎦       [G] = [ [P] [Ik] ]

          ⎡ 1 0 1 1 0 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 ⎥
          ⎣ 1 1 0 0 0 1 ⎦

Let m = [m0 m1 m2]. Then

    [c] = [m][G]

                                            ⎡ 1 0 1 1 0 0 ⎤
    [c0, c1, c2, c3, c4, c5] = [m0, m1, m2] ⎢ 0 1 1 0 1 0 ⎥
                                            ⎣ 1 1 0 0 0 1 ⎦

    [c] = [m0 + m2, m1 + m2, m0 + m1, m0, m1, m2]
Example 3 (contd.)

    Data [m]     Code [c]       Weight
    (m0...m2)    (c0.......c5)

000 000 000 0


001 110 001 3
010 011 010 3
011 101 011 4
100 101 100 3
101 011 101 4
110 110 110 4
111 000 111 3
PARITY CHECK MATRIX
„ Let [H] denote an (n-k) by n matrix defined as

    [H] = [ [In-k] [P]^T ]

where [P]^T is the (n-k) by k transpose of the coefficient matrix [P]
and [In-k] is the (n-k) by (n-k) identity matrix.

                                 ⎡ [P]^T ⎤
    [H][G]^T = [ [In-k] [P]^T ]  ⎢  ...  ⎥ = [P]^T + [P]^T = 0
                                 ⎣ [Ik]  ⎦

„ Multiplication of a rectangular matrix by an identity matrix of
compatible dimensions leaves the matrix unchanged, and in modulo-2
arithmetic [P]^T + [P]^T = [0], where [0] denotes an (n-k) by k null
matrix.
EXAMPLE
„ Consider a (5,3) code: n = 5, k = 3, (n-k) = 2. In this case the
coefficient matrix [P] is k × (n-k) and is given by, say,

          ⎡ P00 P10 ⎤
    [P] = ⎢ P01 P11 ⎥      [P]^T = ⎡ P00 P01 P02 ⎤
          ⎣ P02 P12 ⎦              ⎣ P10 P11 P12 ⎦

The generator matrix [G] = [ [P] [I3] ] is a k × n (3 × 5) matrix
given by

          ⎡ P00 P10 1 0 0 ⎤
    [G] = ⎢ P01 P11 0 1 0 ⎥
          ⎣ P02 P12 0 0 1 ⎦
EXAMPLE

            ⎡ P00 P01 P02 ⎤
            ⎢ P10 P11 P12 ⎥   ⎡ [P]^T ⎤
    [G]^T = ⎢  1   0   0  ⎥ = ⎢  ...  ⎥
            ⎢  0   1   0  ⎥   ⎣ [Ik]  ⎦
            ⎣  0   0   1  ⎦

    [H] = [ [In-k] [P]^T ] = [ [I2] [P]^T ]

        = ⎡ 1 0 P00 P01 P02 ⎤
          ⎣ 0 1 P10 P11 P12 ⎦
EXAMPLE

                                        ⎡ P00 P01 P02 ⎤
                                        ⎢ P10 P11 P12 ⎥
    [H][G]^T = ⎡ 1 0 P00 P01 P02 ⎤      ⎢  1   0   0  ⎥
               ⎣ 0 1 P10 P11 P12 ⎦      ⎢  0   1   0  ⎥
                                        ⎣  0   0   1  ⎦

             = ⎡ P00+P00 P01+P01 P02+P02 ⎤ = ⎡ P00 P01 P02 ⎤ + ⎡ P00 P01 P02 ⎤
               ⎣ P10+P10 P11+P11 P12+P12 ⎦   ⎣ P10 P11 P12 ⎦   ⎣ P10 P11 P12 ⎦

             = [P]^T + [P]^T

Since the Pij are either 0's or 1's, and 1 + 1 = 0 and 0 + 0 = 0,

    [P]^T + [P]^T = 0, hence [H][G]^T = 0
PARITY CHECK MATRIX

    [H][G]^T = 0
    [c] = [m][G]                 ...... (3)
    [G][H]^T = 0                 ...... (8)

Post-multiplying Eq. (3) by [H]^T:

    [c][H]^T = [m][G][H]^T       ...... (9)

Using Eq. (8) in Eq. (9):

    [c][H]^T = 0                 ...... (10)

„ [H] is called the parity check matrix of the code and equation (10)
is called the parity check equation.
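A short sketch of the parity check equation in code, building
H = [In-k | P^T] from the same (7,4) coefficient matrix P and verifying
that c H^T = 0 for a valid code word:

    P = [[1, 0, 1], [1, 1, 1], [1, 1, 0], [0, 1, 1]]    # k x (n-k)
    nk = len(P[0])
    PT = [list(col) for col in zip(*P)]                 # P transpose, (n-k) x k
    H = [[int(i == j) for j in range(nk)] + PT[i] for i in range(nk)]

    def syndrome(v, H):
        # s = v H^T in modulo-2 arithmetic (one bit per row of H).
        return [sum(a * b for a, b in zip(v, row)) % 2 for row in H]

    c = [0, 0, 0, 1, 0, 1, 1]          # a valid code word of Example 1
    print(syndrome(c, H))              # [0, 0, 0]: c H^T = 0, as required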
PARITY CHECK MATRIX

    Message vector [m] → GENERATOR MATRIX → Code vector [c]

    Code vector [c] → PARITY CHECK MATRIX → Null vector [0]
SYNDROME
The generator matrix [G] is used in the encoding operation at the
transmitter and the parity check matrix [H] is used in the decoding
operation at the receiver.
Let [r] denote a 1-by-n received vector that results from sending the
code vector [c] over a noisy channel.
We can express the vector [r] as the sum of the original code vector [c]
and a vector [e], as shown by

    [r] = [c] + [e]

ƒ The vector [e] is called the error vector or error pattern. The ith
element of [e] equals 0 if the corresponding element of [r] is the same
as that of [c].
ƒ The ith element of [e] equals 1 if the corresponding element of [r] is
different from that of [c]. In this case an error has occurred in the
ith position.
SYNDROME
„ Let [r] = r1, r2, ..., rn be a received code vector (one of 2^n
n-tuples) resulting from the transmission of [c] = c1, c2, ..., cn
(one of the 2^k code vectors).
„ Then [r] = [c] + [e], where [e] = e1, e2, ..., en is the error vector
or error pattern introduced by the channel.
„ In the space of 2^n n-tuples there are a total of (2^n - 1) potential
non-zero error patterns.
„ The syndrome of [r] is defined as: [S] = [r][H]^T
„ The syndrome is the result of a parity check performed on [r] to
determine whether [r] is a valid member of the codeword set.
„ If [r] contains detectable errors, the syndrome has some non-zero
value.
„ The syndrome of [r] is [S] = ([c] + [e])[H]^T = [c][H]^T + [e][H]^T
SYNDROME
„ Since [c][H]^T = 0 for all code words, [S] = [e][H]^T.
„ The syndrome is a 1 by (n-k) row vector.
„ An important property of linear block codes, fundamental to the
decoding process, is that the mapping between correctable error patterns
and syndromes is one-to-one.
EXAMPLE
„ Suppose that the code vector c = [1 0 1 1 1 0] is transmitted and the
vector r = [0 0 1 1 1 0] is received. Note that one bit is in error.
Find the syndrome vector [S], and verify that it is equal to [e][H]^T.
Assume that the (6,3) code has the coefficient matrix P given below:

        ⎡ 1 1 0 ⎤
    P = ⎢ 0 1 1 ⎥        [G] = [ [P] [Ik] ]
        ⎣ 1 0 1 ⎦

    [H] = [ [In-k] [P]^T ]

                                       ⎡ 1 0 0 ⎤
          ⎡ 1 0 0 1 0 1 ⎤              ⎢ 0 1 0 ⎥
    [H] = ⎢ 0 1 0 1 1 0 ⎥      [H]^T = ⎢ 0 0 1 ⎥
          ⎣ 0 0 1 0 1 1 ⎦              ⎢ 1 1 0 ⎥
                                       ⎢ 0 1 1 ⎥
                                       ⎣ 1 0 1 ⎦

    [S] = [r][H]^T = [0 0 1 1 1 0][H]^T = [1, 1+1, 1+1] = [1 0 0]

(The syndrome of the corrupted code vector is [1 0 0].)
EXAMPLE
„ Now we can verify that the syndrome of the corrupted code vector is
the same as the syndrome of the error pattern.
„ Syndrome of the error pattern:

                                        ⎡ 1 0 0 ⎤
                                        ⎢ 0 1 0 ⎥
    [S] = [e][H]^T = [1 0 0 0 0 0]      ⎢ 0 0 1 ⎥ = [1 0 0]
                                        ⎢ 1 1 0 ⎥
                                        ⎢ 0 1 1 ⎥
                                        ⎣ 1 0 1 ⎦

„ It is the same as the syndrome of the corrupted code vector: the
syndrome of the corrupted code vector is the same as the syndrome of the
error pattern.
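The same computation can be checked in a few lines (a sketch for the
(6,3) code above): the syndrome of r = c + e equals the syndrome of the
error pattern e alone:

    P = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]               # the P used above
    nk = len(P[0])
    PT = [list(col) for col in zip(*P)]
    H = [[int(i == j) for j in range(nk)] + PT[i] for i in range(nk)]

    def syndrome(v):
        return [sum(a * b for a, b in zip(v, row)) % 2 for row in H]

    r = [0, 0, 1, 1, 1, 0]            # received vector (first bit corrupted)
    e = [1, 0, 0, 0, 0, 0]            # actual error pattern
    print(syndrome(r), syndrome(e))   # [1, 0, 0] [1, 0, 0] - identical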
ERROR CORRECTION
„ Since there is a one-to-one correspondence between correctable error
patterns and syndromes, we can correct such error patterns.
„ The 2^n n-tuples that represent possible received vectors can be
arranged in an array called the standard array.
„ There are 2^n possible error patterns and only 2^(n-k) syndromes, so
different error patterns may result in the same syndrome.
„ The standard array for an (n,k) code is constructed as below.
HOW TO CONSTRUCT THE STANDARD ARRAY
„ The 2^k code vectors are placed in a row with the all-zero code vector
c1 as the leftmost element.
„ The all-zero code word also represents the all-zero error pattern.
„ We fill the first column by listing first all n error patterns of
weight 1.
„ If n < 2^(n-k), we may then list all double error patterns, then all
triple error patterns, etc., until we have a total of 2^(n-k) entries in
the first column.
„ Thus the number of rows we can have is 2^(n-k), which is equal to the
number of syndromes.
„ Next we add each error pattern in the first column to the
corresponding code words.
„ Thus we fill the remainder of the 2^(n-k) × 2^k table as follows.
SYNDROME DECODING

    c1 (= e1)    c2           c3           .   ci           .   c2^k
    e2           c2 + e2      c3 + e2      .   ci + e2      .   c2^k + e2
    .            .            .            .   .            .   .
    ej           c2 + ej      c3 + ej      .   ci + ej      .   c2^k + ej
    .            .            .            .   .            .   .
    e2^(n-k)     c2 + e2^(n-k)  c3 + e2^(n-k)  .  ci + e2^(n-k)  .  c2^k + e2^(n-k)
ERROR CORRECTION
„ Each row, called a coset, consists of an error pattern in the first
column (also known as the coset leader), followed by the code vectors
perturbed by that error pattern.
„ The array contains 2^n n-tuples as a whole.
„ Each coset consists of 2^k n-tuples.
„ There are 2^n / 2^k = 2^(n-k) cosets.
„ If the error pattern caused by the channel is a coset leader, the
received vector will be decoded correctly into the transmitted code
vector [ci].
„ If the error pattern is not a coset leader, the decoding will produce
an error.
STANDARD ARRAY
„ For a given channel, the probability of decoding error is minimized
when the most likely error patterns (those with the largest probability
of occurrence) are chosen as the coset leaders.
„ So the standard array should be constructed with each coset leader
having the minimum Hamming weight in its coset.
ERROR CORRECTION
„ If [ej] is the coset leader of the jth coset, then [ci] + [ej] is an
n-tuple in this coset.
„ The syndrome of this coset is:

    [S] = ([ci] + [ej])[H]^T = [ci][H]^T + [ej][H]^T = [ej][H]^T

„ All members of a coset have the same syndrome.
ERROR CORRECTION
„ The procedure for error correction decoding is as follows:
1. Calculate the syndrome of [r] using [S] = [r][H]^T
2. Locate the coset leader (error pattern) [ej] whose syndrome equals
[r][H]^T
3. This error pattern is the corruption caused by the channel
4. We retrieve the valid code vector by subtracting out the identified
error (in modulo-2 arithmetic subtraction is identical to addition)
5. The corrected received vector is identified as [c] = [r] + [ej]
EXAMPLE: (6,3) HAMMING CODE
„ Consider a (6,3) code with the following values:

    n = 6, k = 3, n - k = 3

        ⎡ 1 1 0 ⎤
    P = ⎢ 0 1 1 ⎥
        ⎣ 1 0 1 ⎦

The generator matrix has the structure [G] = [ P  Ik ]:

          ⎡ 1 1 0 1 0 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 ⎥
          ⎣ 1 0 1 0 0 1 ⎦
EXAMPLE: (6,3) HAMMING CODE

    [c] = [m][G]

                                            ⎡ 1 1 0 1 0 0 ⎤
    [c1, c2, c3, c4, c5, c6] = [m1, m2, m3] ⎢ 0 1 1 0 1 0 ⎥
                                            ⎣ 1 0 1 0 0 1 ⎦

    [c] = [m1 + m3, m1 + m2, m2 + m3, m1, m2, m3]
EXAMPLE: (6,3) HAMMING CODE

    MESSAGE    CODE WORD    WEIGHT

000 000000 0
001 101001 3
010 011010 3
011 110011 4
100 110100 3
101 011101 4
110 101110 4
111 000111 3
STANDARD ARRAY

000000 101001 011010 110011 110100 011101 101110 000111


000001 101000 011011 110010 110101 011100 101111 000110
000010 101011 011000 110001 110110 011111 101100 000101
000100 101101 011110 110111 110000 011001 101010 000011
001000 100001 010010 111011 111100 010101 100110 001111
010000 111001 001010 100011 100100 001101 111110 010111
100000 001001 111010 010011 010100 111101 001110 100111
010001 111000 001011 100010 100101 001100 111111 010110
ERROR CORRECTION
„ The valid code vectors are the eight vectors in the first row, and the
correctable error patterns are the eight coset leaders in the first
column.
„ Decoding will be correct if and only if the error pattern caused by
the channel is one of the coset leaders.
„ We now compute the syndrome corresponding to each of the correctable
error sequences by computing [ej][H]^T for each coset leader.

    [H] = [ [In-k] [P]^T ]

                                       ⎡ 1 0 0 ⎤
          ⎡ 1 0 0 . 1 0 1 ⎤            ⎢ 0 1 0 ⎥
    [H] = ⎢ 0 1 0 . 1 1 0 ⎥    [H]^T = ⎢ 0 0 1 ⎥
          ⎣ 0 0 1 . 0 1 1 ⎦            ⎢ 1 1 0 ⎥
                                       ⎢ 0 1 1 ⎥
                                       ⎣ 1 0 1 ⎦
ERROR CORRECTION

    [S] = [ej][H]^T, computed for each coset leader:

    ej = 0 0 0 0 0 0  →  S = 0 0 0
    ej = 0 0 0 0 0 1  →  S = 1 0 1
    ej = 0 0 0 0 1 0  →  S = 0 1 1
    ej = 0 0 0 1 0 0  →  S = 1 1 0
    ej = 0 0 1 0 0 0  →  S = 0 0 1
    ej = 0 1 0 0 0 0  →  S = 0 1 0
    ej = 1 0 0 0 0 0  →  S = 1 0 0
    ej = 0 1 0 0 0 1  →  S = 1 1 1
PROCEDURE FOR ERROR CORRECTION
„ We receive the vector [r] and calculate its syndrome [S].
„ We then use the syndrome look-up table to find the corresponding error
pattern.
„ This error pattern is an estimate of the error; we denote it as [ê].
The decoder then adds [ê] to [r] to obtain an estimate of the
transmitted code vector [ĉ]:

    [ĉ] = [r] + [ê] = ([c] + [e]) + [ê] = [c] + ([e] + [ê])

„ If the estimated error pattern is the same as the actual error
pattern, that is if [ê] = [e], then [ĉ] = [c].
„ If [ê] ≠ [e], the decoder will estimate a code vector that was not
transmitted and hence we have an undetectable decoding error.
SYNDROME LOOKUP TABLE

ERROR PATTERN SYNDROME

000000 000
000001 101
000010 011
000100 110
001000 001
010000 010
100000 100
010001 111
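The whole decoding procedure for this (6,3) code fits in a short sketch:
build the syndrome look-up table from the coset leaders listed above and
correct a received word (names illustrative):

    P = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
    n, nk = 6, 3
    PT = [list(col) for col in zip(*P)]
    H = [[int(i == j) for j in range(nk)] + PT[i] for i in range(nk)]

    def syndrome(v):
        return tuple(sum(a * b for a, b in zip(v, row)) % 2 for row in H)

    # Coset leaders: the all-zero pattern, all single errors, plus 010001.
    leaders = [[0] * n] + [[int(j == i) for j in range(n)] for i in range(n)]
    leaders.append([0, 1, 0, 0, 0, 1])
    table = {syndrome(e): e for e in leaders}     # syndrome -> error pattern

    r = [0, 0, 1, 1, 1, 0]                        # received vector
    e_hat = table[syndrome(r)]                    # estimated error pattern
    c_hat = [(a + b) % 2 for a, b in zip(r, e_hat)]
    print(c_hat)                                  # [1, 0, 1, 1, 1, 0]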
EXAMPLE
„ Assume the code vector [c] = [1 0 1 1 1 0] is transmitted and the
vector [r] = [0 0 1 1 1 0] is received.
„ The syndrome of [r] is computed as:

    S = [0 0 1 1 1 0][H]^T = [1 0 0]

„ From the look-up table, 100 has the corresponding error pattern:

    [ê] = [1 0 0 0 0 0]

„ The corrected vector is then

    [ĉ] = [r] + [ê] = [0 0 1 1 1 0] + [1 0 0 0 0 0] = [1 0 1 1 1 0]

„ In this example the actual error pattern is the estimated error
pattern, hence [ĉ] = [c].
EXAMPLE - Decoding of a (7,4) Hamming Code
„ Consider a (7,4) Hamming code with n = 7, k = 4, corresponding to
n - k = 3.

          ⎡ 1 1 0 ⎤
    [P] = ⎢ 0 1 1 ⎥
          ⎢ 1 1 1 ⎥
          ⎣ 1 0 1 ⎦

„ The generator matrix has the structure [G] = [ P  Ik ]:

          ⎡ 1 1 0 1 0 0 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 0 ⎥
          ⎢ 1 1 1 0 0 1 0 ⎥
          ⎣ 1 0 1 0 0 0 1 ⎦
EXAMPLE - Decoding of a (7,4) Hamming Code

    [c] = [m][G]

                                                    ⎡ 1 1 0 1 0 0 0 ⎤
    [c0, c1, c2, c3, c4, c5, c6] = [m0, m1, m2, m3] ⎢ 0 1 1 0 1 0 0 ⎥
                                                    ⎢ 1 1 1 0 0 1 0 ⎥
                                                    ⎣ 1 0 1 0 0 0 1 ⎦

    [c] = [m0+m2+m3, m0+m1+m2, m1+m2+m3, m0, m1, m2, m3]

    Message    Code Word    Weight
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 0 1 0 0 0 1 3
0 0 1 0 1 1 1 0 0 1 0 4
0 0 1 1 0 1 0 0 0 1 1 3
0 1 0 0 0 1 1 0 1 0 0 3
0 1 0 1 1 1 0 0 1 0 1 4
0 1 1 0 1 0 0 0 1 1 0 3
0 1 1 1 0 0 1 0 1 1 1 4
1 0 0 0 1 1 0 1 0 0 0 3
1 0 0 1 0 1 1 1 0 0 1 4
1 0 1 0 0 0 1 1 0 1 0 3
1 0 1 1 1 0 0 1 0 1 1 4
1 1 0 0 1 0 1 1 1 0 0 4
1 1 0 1 0 0 0 1 1 0 1 3
1 1 1 0 0 1 0 1 1 1 0 4
1 1 1 1 1 1 1 1 1 1 1 7
EXAMPLE - Decoding of a (7,4) Hamming Code

    [s] = [e][H]^T

                               ⎡ 1 0 0 1 0 1 1 ⎤
    [H] = [ [In-k] [P]^T ]  =  ⎢ 0 1 0 1 1 1 0 ⎥
                               ⎣ 0 0 1 0 1 1 1 ⎦

          ⎡ 0 0 0 0 0 0 0 ⎤
          ⎢ 1 0 0 0 0 0 0 ⎥
          ⎢ 0 1 0 0 0 0 0 ⎥
    [e] = ⎢ 0 0 1 0 0 0 0 ⎥
          ⎢ 0 0 0 1 0 0 0 ⎥
          ⎢ 0 0 0 0 1 0 0 ⎥
          ⎢ 0 0 0 0 0 1 0 ⎥
          ⎣ 0 0 0 0 0 0 1 ⎦
EXAMPLE - Decoding of a (7,4) Hamming Code

    [e][H]^T = [s], computed for each error pattern:

    e = 0 0 0 0 0 0 0  →  s = 0 0 0
    e = 1 0 0 0 0 0 0  →  s = 1 0 0
    e = 0 1 0 0 0 0 0  →  s = 0 1 0
    e = 0 0 1 0 0 0 0  →  s = 0 0 1
    e = 0 0 0 1 0 0 0  →  s = 1 1 0
    e = 0 0 0 0 1 0 0  →  s = 0 1 1
    e = 0 0 0 0 0 1 0  →  s = 1 1 1
    e = 0 0 0 0 0 0 1  →  s = 1 0 1
EXAMPLE - Decoding of a (7,4) Hamming Code

    Error Pattern      Syndrome
0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 1 0 0
0 1 0 0 0 0 0 0 1 0
0 0 1 0 0 0 0 0 0 1
0 0 0 1 0 0 0 1 1 0
0 0 0 0 1 0 0 0 1 1
0 0 0 0 0 1 0 1 1 1
0 0 0 0 0 0 1 1 0 1
EXAMPLE - Decoding of a (7,4) Hamming Code
„ Let the transmitted code vector be [1 1 1 0 0 1 0] and the received
vector be [1 1 0 0 0 1 0], with an error in the third bit. The syndrome
of this received word is

                      ⎡ 1 0 0 ⎤
                      ⎢ 0 1 0 ⎥
                      ⎢ 0 0 1 ⎥
    [1 1 0 0 0 1 0]   ⎢ 1 1 0 ⎥ = [0 0 1]
                      ⎢ 0 1 1 ⎥
                      ⎢ 1 1 1 ⎥
                      ⎣ 1 0 1 ⎦
EXAMPLE - Decoding of a (7,4) Hamming Code
„ Corresponding to this syndrome, the error pattern with the highest
probability of occurrence is [0 0 1 0 0 0 0]. This error pattern is
added to the received code word to regenerate the original code word:

    [0 0 1 0 0 0 0] + [1 1 0 0 0 1 0] = [1 1 1 0 0 1 0]

The original transmitted vector is [1 1 1 0 0 1 0].
EXAMPLE 2 - Decoding of a (6,3) Hamming code
„ For the (6,3) code generator matrix given below, the received word is
100011. Find the transmitted information word.

    n = 6, k = 3, (n-k) = 3

          ⎡ 1 0 1 ⎤
    [P] = ⎢ 0 1 1 ⎥      [G] = [ P  Ik ]
          ⎣ 1 1 0 ⎦

          ⎡ 1 0 1 1 0 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 ⎥
          ⎣ 1 1 0 0 0 1 ⎦
EXAMPLE 2 - Decoding of a (6,3) Hamming code

    [c] = [m][G]

                                            ⎡ 1 0 1 1 0 0 ⎤
    [c0, c1, c2, c3, c4, c5] = [m0, m1, m2] ⎢ 0 1 1 0 1 0 ⎥
                                            ⎣ 1 1 0 0 0 1 ⎦

    [c] = [m0 + m2, m1 + m2, m0 + m1, m0, m1, m2]
EXAMPLE 2 - Decoding of a (6,3) Hamming code

    Data      Code           Weight
    0 0 0     0 0 0 0 0 0      0
    0 0 1     1 1 0 0 0 1      3
    0 1 0     0 1 1 0 1 0      3
    0 1 1     1 0 1 0 1 1      4
    1 0 0     1 0 1 1 0 0      3
    1 0 1     0 1 1 1 0 1      4
    1 1 0     1 1 0 1 1 0      4
    1 1 1     0 0 0 1 1 1      3
EXAMPLE 2 - Decoding of a (6,3) Hamming code
„ The received word is [r] = [1 0 0 0 1 1]. The syndrome of an error
pattern is

    [s] = [e][H]^T

                              ⎡ 1 0 0 1 0 1 ⎤
    [H] = [ In-k  P^T ]   =   ⎢ 0 1 0 0 1 1 ⎥
                              ⎣ 0 0 1 1 1 0 ⎦
EXAMPLE 2 - Decoding of a (6,3) Hamming code

    [e][H]^T = [s], computed for each coset leader:

    e = 0 0 0 0 0 0  →  s = 0 0 0
    e = 1 0 0 0 0 0  →  s = 1 0 0
    e = 0 1 0 0 0 0  →  s = 0 1 0
    e = 0 0 1 0 0 0  →  s = 0 0 1
    e = 0 0 0 1 0 0  →  s = 1 0 1
    e = 0 0 0 0 1 0  →  s = 0 1 1
    e = 0 0 0 0 0 1  →  s = 1 1 0
    e = 0 0 1 0 0 1  →  s = 1 1 1
EXAMPLE 2 - Decoding of a (6,3) Hamming code
„ The syndrome of the received word [1 0 0 0 1 1] is

                       ⎡ 1 0 0 ⎤
                       ⎢ 0 1 0 ⎥
    [1 0 0 0 1 1]  ×   ⎢ 0 0 1 ⎥ = [0 0 1]
                       ⎢ 1 0 1 ⎥
                       ⎢ 0 1 1 ⎥
                       ⎣ 1 1 0 ⎦
EXAMPLE 2 - Decoding of a (6,3) Hamming code
„ The error pattern corresponding to this syndrome is [0 0 1 0 0 0], so
the original transmitted word is

    [0 0 1 0 0 0] + [1 0 0 0 1 1] = [1 0 1 0 1 1]

The original transmitted vector is [1 0 1 0 1 1].
PROPERTIES OF SYNDROME
„ 1. The syndrome depends only on the error pattern and not on the
transmitted code word:

    [S] = [r][H]^T = ([c] + [e])[H]^T = [c][H]^T + [e][H]^T

Here [c][H]^T = 0, hence [S] = [e][H]^T.
Hence, using the parity check matrix [H] of a code, we can compute the
syndrome [S], which depends only on the error pattern [e].
PROPERTIES OF SYNDROME
„ 2. All error patterns that differ by a code word have the same
syndrome.
For k message bits there are 2^k distinct code vectors, denoted [ci],
where i = 0, 1, ..., 2^k - 1.
Correspondingly, for any error pattern [e] there are 2^k distinct
vectors [ei]:

    [ei] = [e] + [ci], where i = 0, 1, ..., 2^k - 1

„ The set of vectors [ei], i = 0, 1, ..., 2^k - 1, so defined is called
a coset of the code.
„ A coset has exactly 2^k elements that differ at most by a code vector.
Thus an (n,k) linear block code has 2^(n-k) possible cosets.
PROPERTIES OF SYNDROME

    [ei] = [e] + [ci]
    [ei][H]^T = [e][H]^T + [ci][H]^T = [e][H]^T

which is independent of the index i. Each coset of the code is
characterised by a unique syndrome.
CYCLIC CODING

CYCLIC CODES
„ Cyclic codes are a special type of linear block codes that exhibit two
fundamental properties:
(i) Linearity: The sum of any two code words is also a code word.
(ii) Cyclic property: Any cyclic shift of a code word produces another
code word.
„ Let {c0, c1, c2, ..., cn-1} be a code word of an (n,k) linear block
code. The code is a cyclic code if

    {cn-1, c0, c1, c2, ..., cn-2}
    {cn-2, cn-1, c0, c1, ..., cn-3}
    {...........................}
    {c1, c2, c3, ..., cn-1, c0}

are all code words of the code.
CYCLIC CODES
„ The code vector {c0, c1, c2, ..., cn-1} may be expressed in the form
of a polynomial called the code polynomial

    c(x) = c0 + c1 x + c2 x^2 + ... + cn-1 x^(n-1)

where x is an indeterminate and the power of x represents the position
of the code word bits, i.e., (n-1) represents the MSB and 0 represents
the LSB.
„ Let the code polynomial c(x) be multiplied by x^i:

    x^i c(x) = c0 x^i + c1 x^(i+1) + ... + c(n-i-1) x^(n-1)
               + c(n-i) x^n + ... + c(n-1) x^(n+i-1)
CYCLIC CODES
„ Adding the terms c(n-i) + ... + c(n-1) x^(i-1) to the right-hand side
twice (which changes nothing in modulo-2 arithmetic) and regrouping:

    x^i c(x) = c(n-i) + ... + c(n-1) x^(i-1) + c0 x^i + c1 x^(i+1)
               + ... + c(n-i-1) x^(n-1)
               + c(n-i)(x^n + 1) + ... + c(n-1) x^(i-1)(x^n + 1)

„ Let

    c^(i)(x) = c(n-i) + ... + c(n-1) x^(i-1) + c0 x^i + c1 x^(i+1)
               + ... + c(n-i-1) x^(n-1)
    q(x) = c(n-i) + c(n-i+1) x + ... + c(n-1) x^(i-1)
CYCLIC CODES
„ Then

    x^i c(x) = q(x)(x^n + 1) + c^(i)(x)    .......(1)

„ The polynomial c^(i)(x) is actually the code polynomial of the code
word

    {c(n-i), ..., c(n-1), c0, c1, ..., c(n-i-1)}

obtained by applying i cyclic shifts to the code word

    {c0, c1, ..., c(n-i-1), c(n-i), ..., c(n-1)}

„ From equation (1) we can see that c^(i)(x) is the remainder obtained
by dividing x^i c(x) by (x^n + 1):

    c^(i)(x) = x^i c(x) mod (x^n + 1)    .......(2)

„ If c(x) is a code polynomial, then c^(i)(x) = x^i c(x) mod (x^n + 1)
is also a code polynomial for any cyclic shift i in the case of a cyclic
code.
CYCLIC CODES - GENERATOR POLYNOMIAL
„ The polynomial x^n + 1 and its factors play an important role in the
generation of cyclic codes.
„ Let g(x) be a polynomial of degree (n-k) that is a factor of x^n + 1.
„ In general g(x) may be expressed as

    g(x) = 1 + Σ_{i=1}^{n-k-1} g_i x^i + x^(n-k)
         = 1 + g1 x + ... + g(n-k-1) x^(n-k-1) + x^(n-k)

„ g(x) is called the generator polynomial of the cyclic code.
„ A cyclic code is uniquely determined by the generator polynomial g(x).
„ Each code polynomial in the code can be expressed as

    c(x) = m(x) g(x)

where m(x) is the message polynomial of degree (k-1) or less.
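Non-systematic encoding c(x) = m(x) g(x) is just GF(2) polynomial
multiplication, as this sketch shows (coefficient lists, lowest power
first; names illustrative):

    def poly_mul_mod2(a, b):
        # Multiply two GF(2) polynomials given as coefficient lists (x^0 first).
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] ^= ai & bj
        return out

    g = [1, 1, 0, 1]              # g(x) = 1 + x + x^3 (Example 1 below)
    m = [0, 1, 0, 0]              # message 0100 -> m(x) = x
    print(poly_mul_mod2(m, g))    # [0, 1, 1, 0, 1, 0, 0] -> c(x) = x + x^2 + x^4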
CYCLIC CODES: EXAMPLE 1
„ Find the generator polynomial g(x) for a (7,4) cyclic code. Also find
all the code vectors.
„ g(x) is a factor of (x^n + 1), so we can obtain g(x) as follows:

    x^7 + 1 = (1 + x)(1 + x + x^3)(1 + x^2 + x^3)

„ For n = 7 and k = 4, the generator polynomial will be of degree
n - k = 3.
„ There are two factors of (x^n + 1) that are of degree 3. We can choose
either one; let it be (1 + x + x^3).
„ For the information vector [1 0 0 0] the message polynomial would be

    m(x) = 1x^0 + 0x^1 + 0x^2 + 0x^3 = 1
CYCLIC CODES
„ So the code polynomial would be c(x) = m(x) g(x):

    c(x) = (1x^0 + 0x^1 + 0x^2 + 0x^3)(1 + x + x^3) = 1 + x + x^3
    c = [1 1 0 1 0 0 0]

„ For the information vector [0 1 0 0] the message polynomial would be
m(x) = x, and c(x) = m(x) g(x):

    c(x) = (0x^0 + 1x^1 + 0x^2 + 0x^3)(1 + x + x^3) = x + x^2 + x^4
    c = [0 1 1 0 1 0 0]
CYCLIC CODES
„ For the information vector [0 0 1 0] the message polynomial would be
m(x) = x^2, and c(x) = m(x) g(x):

    c(x) = (0x^0 + 0x^1 + 1x^2 + 0x^3)(1 + x + x^3) = x^2 + x^3 + x^5
    c = [0 0 1 1 0 1 0]

„ For the information vector [0 0 0 1] the message polynomial would be
m(x) = x^3, and c(x) = m(x) g(x):

    c(x) = (0x^0 + 0x^1 + 0x^2 + 1x^3)(1 + x + x^3) = x^3 + x^4 + x^6
    c = [0 0 0 1 1 0 1]
CYCLIC CODES - GENERATOR MATRIX

    c(1000) = [1 1 0 1 0 0 0]
    c(0100) = [0 1 1 0 1 0 0]
    c(0010) = [0 0 1 1 0 1 0]
    c(0001) = [0 0 0 1 1 0 1]

          ⎡ 1 1 0 1 0 0 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 0 ⎥
          ⎢ 0 0 1 1 0 1 0 ⎥
          ⎣ 0 0 0 1 1 0 1 ⎦

(The columns correspond to bit positions p0 through p6.)
SHORTCUT METHOD FOR THE GENERATOR MATRIX
„ To find the generator matrix of a (7,4) cyclic code, take g(x) and
three cyclically shifted versions of it, and arrange them one below the
other as given below:

    g(x)      = 1 + x + x^3
    x g(x)    = x + x^2 + x^4
    x^2 g(x)  = x^2 + x^3 + x^5
    x^3 g(x)  = x^3 + x^4 + x^6

„ Now construct the generator matrix using the coefficients of the
polynomials as the elements of the rows.
„ We get the same generator matrix as obtained above.
CYCLIC CODES
Message Code Word
0000 0000000
0001 1101000
0010 0110100
0011 0010111
0100 0011010
0101 0111001
0110 0101110
0111 0100011
1000 0001101
1001 1100101
1010 1110010
1011 1111111
1100 1011100
1101 1010001
1110 1000110
1111 1001011
SYSTEMATIC CYCLIC CODES
„ If an (n,k) cyclic code is to be systematic it must have the structure

    {b0, b1, ..., b(n-k-1), m0, m1, ..., m(k-1)}
     (n-k) parity bits      k message bits

„ Let the message polynomial be m(x) = m0 + m1 x + ... + m(k-1) x^(k-1)
and the parity polynomial be b(x) = b0 + b1 x + ... + b(n-k-1) x^(n-k-1).
„ We want the code polynomial to be of the form
c(x) = b(x) + x^(n-k) m(x), i.e., m(x) should be shifted to the (n-k)th
position and added to b(x).
„ But c(x) must also be a multiple of g(x), say c(x) = a(x) g(x), so

    a(x) g(x) = b(x) + x^(n-k) m(x)
SYSTEMATIC CYCLIC CODES
„ From a(x) g(x) = b(x) + x^(n-k) m(x),

    x^(n-k) m(x) = a(x) g(x) - b(x)
    x^(n-k) m(x) = a(x) g(x) + b(x)   (in modulo-2 arithmetic)

    x^(n-k) m(x) / g(x) = a(x) + b(x)/g(x)

„ This equation has the form

    Numerator / Denominator = Quotient + Remainder / Denominator

„ The polynomial b(x) is therefore the remainder left over after
dividing x^(n-k) m(x) by g(x).
STEPS INVOLVED IN THE GENERATION OF CYCLIC CODES
„ (i) Multiply the message polynomial m(x) by x^(n-k).
„ (ii) Divide x^(n-k) m(x) by the generator polynomial and obtain the
remainder b(x).
„ (iii) Add b(x) to x^(n-k) m(x) to obtain the code polynomial c(x)
(a sketch of these steps in code follows).
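A compact sketch of these three steps (the division routine is an
illustrative GF(2) long division, coefficient lists with lowest power
first):

    def gf2_remainder(dividend, divisor):
        # Remainder of GF(2) polynomial division (coefficient lists, x^0 first).
        r = dividend[:]
        d = len(divisor) - 1                    # degree of g(x)
        for i in range(len(r) - 1, d - 1, -1):  # clear coefficients from the top
            if r[i]:
                for j, gj in enumerate(divisor):
                    r[i - d + j] ^= gj
        return r[:d]                            # b(x), of degree < n-k

    def systematic_encode(m, g, n):
        k = len(m)
        shifted = [0] * (n - k) + m             # step (i): x^(n-k) m(x)
        b = gf2_remainder(shifted, g)           # step (ii): remainder b(x)
        return b + m                            # step (iii): c = b + x^(n-k) m

    g = [1, 1, 0, 1]                            # g(x) = 1 + x + x^3
    print(systematic_encode([1, 0, 0, 0], g, 7))   # [1, 1, 0, 1, 0, 0, 0]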
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
„ Construct a systematic (7,4) cyclic code using the generator
polynomial g(x) = 1 + x + x^3.
„ Let the message vector be [1 0 0 0] and the message polynomial be
m(x) = 1.
„ Multiply m(x) by x^(n-k), i.e., x^3:

    x^(n-k) m(x) = 1 · x^3 = x^3

„ Divide x^(n-k) m(x) by g(x) and obtain the remainder b(x).
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

                      1
    1 + x + x^3  )  x^3
                    x^3 + x + 1
                    ___________
                    x + 1          ← REMAINDER b(x)
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

    b(x) = x + 1

„ Find c(x) = b(x) + x^(n-k) m(x):

    c(x) = 1 + x + x^3
    c = [1 1 0 1 0 0 0]
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
„ Let the message vector be [0 1 0 0] and the message polynomial be
m(x) = x.
„ Multiply m(x) by x^(n-k), i.e., x^3:

    x^(n-k) m(x) = x · x^3 = x^4

„ Divide x^(n-k) m(x) by g(x) and obtain the remainder b(x).
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

                      x
    1 + x + x^3  )  x^4
                    x^4 + x^2 + x
                    _____________
                    x^2 + x        ← REMAINDER b(x)
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

    b(x) = x^2 + x

„ Find c(x) = b(x) + x^(n-k) m(x):

    c(x) = x + x^2 + x^4
    c = [0 1 1 0 1 0 0]
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
„ Let the message vector be [0 0 1 0] and the message polynomial be
m(x) = x^2.
„ Multiply m(x) by x^(n-k), i.e., x^3:

    x^(n-k) m(x) = x^2 · x^3 = x^5

„ Divide x^(n-k) m(x) by g(x) and obtain the remainder b(x).
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

                      x^2 + 1
    1 + x + x^3  )  x^5
                    x^5 + x^3 + x^2
                    _______________
                    x^3 + x^2
                    x^3 + x + 1
                    ___________
                    x^2 + x + 1    ← REMAINDER b(x)
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

    b(x) = x^2 + x + 1

„ Find c(x) = b(x) + x^(n-k) m(x):

    c(x) = 1 + x + x^2 + x^5
    c = [1 1 1 0 0 1 0]
SYSTEMATIC CYCLIC CODES: EXAMPLE 1
„ Let the message vector be [0 0 0 1] and the message polynomial be
m(x) = x^3.
„ Multiply m(x) by x^(n-k), i.e., x^3:

    x^(n-k) m(x) = x^3 · x^3 = x^6

„ Divide x^(n-k) m(x) by g(x) and obtain the remainder b(x).
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

                      x^3 + x + 1
    1 + x + x^3  )  x^6
                    x^6 + x^4 + x^3
                    _______________
                    x^4 + x^3
                    x^4 + x^2 + x
                    _____________
                    x^3 + x^2 + x
                    x^3 + x + 1
                    ___________
                    x^2 + 1        ← REMAINDER b(x)
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

    b(x) = x^2 + 1

„ Find c(x) = b(x) + x^(n-k) m(x):

    c(x) = 1 + x^2 + x^6
    c = [1 0 1 0 0 0 1]

„ Similarly we can find all the code words and tabulate them.
SYSTEMATIC CYCLIC CODES: EXAMPLE 1

    c(1000) = [1 1 0 1 0 0 0]
    c(0100) = [0 1 1 0 1 0 0]
    c(0010) = [1 1 1 0 0 1 0]
    c(0001) = [1 0 1 0 0 0 1]

          ⎡ 1 1 0 1 0 0 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 0 ⎥
          ⎢ 1 1 1 0 0 1 0 ⎥
          ⎣ 1 0 1 0 0 0 1 ⎦

„ All the other code words can be generated using this generator matrix.
    Message    Code Word    Weight
0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 1 1 0 1 0 0 0 1 3
0 0 1 0 1 1 1 0 0 1 0 4
0 0 1 1 0 1 0 0 0 1 1 3
0 1 0 0 0 1 1 0 1 0 0 3
0 1 0 1 1 1 0 0 1 0 1 4
0 1 1 0 1 0 0 0 1 1 0 3
0 1 1 1 0 0 1 0 1 1 1 4
1 0 0 0 1 1 0 1 0 0 0 3
1 0 0 1 0 1 1 1 0 0 1 4
1 0 1 0 0 0 1 1 0 1 0 3
1 0 1 1 1 0 0 1 0 1 1 4
1 1 0 0 1 0 1 1 1 0 0 4
1 1 0 1 0 0 0 1 1 0 1 3
1 1 1 0 0 1 0 1 1 1 0 4
1 1 1 1 1 1 1 1 1 1 1 7
SYSTEMATIC GENERATOR MATRIX
„ We can manipulate the generator matrix in non-systematic form to
obtain the generator matrix in systematic form:
(i) add the first row to the third row;
(ii) add the sum of the first and second rows to the fourth row.

          ⎡ 1 1 0 1 0 0 0 ⎤     ⎡ 1 1 0 1 0 0 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 0 ⎥  ⇒  ⎢ 0 1 1 0 1 0 0 ⎥
          ⎢ 0 0 1 1 0 1 0 ⎥     ⎢ 1 1 1 0 0 1 0 ⎥
          ⎣ 0 0 0 1 1 0 1 ⎦     ⎣ 1 0 1 0 0 0 1 ⎦
PARITY CHECK POLYNOMIAL OF CYCLIC CODES
„ An (n,k) cyclic code is uniquely specified by its generator polynomial
g(x) of degree (n-k).
„ Such a code is also uniquely specified by another polynomial, of
degree k.
„ It is called the parity check polynomial.
„ The generator polynomial is an equivalent representation of the
generator matrix [G].
„ Correspondingly, the parity check polynomial h(x) is an equivalent
representation of the parity check matrix [H].
„ The matrix relation [H][G]^T = 0 of a linear block code corresponds to
the relation

    h(x) g(x) mod (x^n - 1) = 0

for cyclic codes.
PARITY CHECK POLYNOMIAL OF CYCLIC CODES
„ Any code word polynomial c(x) in the code satisfies the fundamental
relation

    h(x) g(x) mod (x^n - 1) = 0

„ In modulo-2 arithmetic (x^n - 1) = (1 - x^n) = (1 + x^n), and so

    h(x) g(x) mod (1 + x^n) = 0

„ The generator polynomial g(x) and the parity check polynomial h(x) are
factors of the polynomial (1 + x^n), since

    h(x) g(x) = (1 + x^n)

„ If g(x) is a polynomial of degree (n-k) and is also a factor of
(1 + x^n), then g(x) is a generator polynomial of an (n,k) cyclic code.
„ If h(x) is a polynomial of degree k and is also a factor of (1 + x^n),
then h(x) is the parity check polynomial of the (n,k) cyclic code.
SYNDROME OF CYCLIC CODES
„ Let the code word (c0, c1, ..., cn-1) be transmitted over a noisy
channel, resulting in the received word (y0, y1, ..., yn-1).
„ Let the received word be represented by a polynomial of degree (n-1)
or less:

    y(x) = y0 + y1 x + ... + y(n-1) x^(n-1)

„ Let a(x) denote the quotient and s(x) denote the remainder obtained by
dividing y(x) by the generator polynomial g(x). Then

    y(x) = a(x) g(x) + s(x)

„ The remainder is a polynomial of degree (n-k-1) or less.
„ It is called the syndrome polynomial.
SYNDROME OF CYCLIC CODES
„ When the syndrome polynomial s(x) is non-zero, transmission errors in
the received word are detected.
„ Once we know the syndrome s(x) we can determine the corresponding
error pattern and thereby make the appropriate correction.
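Syndrome computation is the same GF(2) division again, as in this sketch
(illustrative routine; the received word of the example that follows is
used):

    def gf2_remainder(dividend, divisor):
        # Remainder of GF(2) polynomial division (coefficient lists, x^0 first).
        r = dividend[:]
        d = len(divisor) - 1
        for i in range(len(r) - 1, d - 1, -1):
            if r[i]:
                for j, gj in enumerate(divisor):
                    r[i - d + j] ^= gj
        return r[:d]

    g = [1, 1, 0, 1]                      # g(x) = 1 + x + x^3
    y = [1, 0, 0, 0, 0, 0, 1]             # received word 1000001
    print(gf2_remainder(y, g))            # [0, 0, 1] -> s(x) = x^2: errors detected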
SYNDROME DECODING
„ Let the transmitted word of a (7,4) cyclic code be 1010001 and, due to
noise, let the received word be 1000001.
„ s(x) is obtained by dividing y(x) = 1 + x^6 by g(x) = 1 + x + x^3:

                      x^3 + x + 1
    1 + x + x^3  )  x^6 + 1
                    x^6 + x^4 + x^3
                    _______________
                    x^4 + x^3 + 1
                    x^4 + x^2 + x
                    _____________
                    x^3 + x^2 + x + 1
                    x^3 + x + 1
                    ___________
                    x^2              ← s(x) = x^2

The corresponding syndrome is [0 0 1]; it equals the error pattern
e(x) = x^2, since the third bit of the received word is in error and
deg e(x) < deg g(x).
PARITY CHECK MATRIX OF CYCLIC CODES

    [G] = [ P  Ik ]

          ⎡ 1 1 0 1 0 0 0 ⎤         ⎡ 1 1 0 ⎤
    [G] = ⎢ 0 1 1 0 1 0 0 ⎥   [P] = ⎢ 0 1 1 ⎥
          ⎢ 1 1 1 0 0 1 0 ⎥         ⎢ 1 1 1 ⎥
          ⎣ 1 0 1 0 0 0 1 ⎦         ⎣ 1 0 1 ⎦

    [H] = [ In-k  P^T ]

            ⎡ 1 0 1 1 ⎤
    [P]^T = ⎢ 1 1 1 0 ⎥
            ⎣ 0 1 1 1 ⎦
PARITY CHECK MATRIX OF CYCLIC CODES

          ⎡ 1 0 0 . 1 0 1 1 ⎤
    [H] = ⎢ 0 1 0 . 1 1 1 0 ⎥
          ⎣ 0 0 1 . 0 1 1 1 ⎦
CONVOLUTIONAL CODING

CONVOLUTIONAL CODING
„ In the case of block coding the encoder accepts a block of k message
bits and generates an n-bit code word.
„ The code words are produced on a block-by-block basis.
„ So an entire message block needs to be stored before generating the
associated code word.
„ In applications where the message bits come serially rather than as
blocks, convolutional coding is the preferred method.
„ A convolutional code is generated by combining the outputs of an
m-stage shift register using n EX-OR logic summers.
„ Such a convolutional coder is shown in figure (1).
„ The outputs v1 and v2 of the adders are:

    v1 = S1 + S3        v2 = S1 + S2 + S3
CONVOLUTIONAL CODING

[Figure (1): rate-1/2 convolutional encoder. The input stream bi (1011)
is shifted through a three-stage register S1 S2 S3; two modulo-2 adders
form v1 = S1 + S3 and v2 = S1 + S2 + S3, and a commutator interleaves
them into the output stream bo = 11 01 00 10 10 11.]
CONVOLUTIONAL CODING
„ Convolutional codes are commonly specified by three parameters
(n, k, m):
    n = number of output bits
    k = number of input bits
    m = number of memory registers
„ The quantity k/n, called the code rate, is a measure of the efficiency
of the code. Commonly the k and n parameters range from 1 to 8, m from 2
to 10, and the code rate from 1/8 to 7/8.
„ Often the manufacturers of convolutional code chips specify the code
by the parameters (n, k, L). The quantity L is called the constraint
length of the code and is defined by

    Constraint length L = k(m - 1)

„ The constraint length L represents the number of bits in the encoder
memory that affect the generation of the n output bits.
CONVOLUTIONAL CODING
„ The convolutional code structure is easy to draw from its parameters.
First draw m boxes representing the m memory registers. Then draw n
modulo-2 adders to represent the n output bits. Now connect the memory
registers to the adders using the generator polynomials.
„ There are many choices of polynomials for any order of code. They do
not all result in output sequences that have good error protection
properties.
„ Good polynomials are found from this list usually by computer
simulation. A list of good polynomials for rate-1/2 codes is given in
the table.
„ A 1 indicates the existence of a connection from the register to the
summer, and a 0 the absence of it.
Building blocks for a rate 1/n encoder

[Figure: registers M1 ... Mk feed n modulo-2 adders A1 ... An, whose
outputs are v1 ... vn.]
Building blocks for a rate 1/2 encoder

[Figure: registers M1 ... Mk feed two modulo-2 adders A1 and A2, whose
outputs are v1 and v2.]
GENERATOR POLYNOMIALS
OPTIMUM CONFIGURATIONS FOR CONVOLUTIONAL CODERS, RATE 1/2

    m    v1        v2
    3    110       111
    4    1101      1110
    5    11010     11101
    6    110101    111011
    7    110101110101
    8    1101111110011
Building blocks for a rate 1/3 encoder

[Figure: registers M1 ... Mk feed three modulo-2 adders A1, A2 and A3,
whose outputs are v1, v2 and v3.]
EXAMPLE 1: Encoding a single bit using the (2,1,4) encoder

    Input state = 000, input bit = 1:   register 1 0 0 0, output bits = 11
    Input state = 100, input bit = 0:   register 0 1 0 0, output bits = 11
EXAMPLE 1: Encoding a single bit

    Input state = 010, input bit = 0:   register 0 0 1 0, output bits = 10
    Input state = 001, input bit = 0:   register 0 0 0 1, output bits = 11
EXAMPLE 1: Encoding a single bit
„ A single 1 bit produces 8 coded bits: 11 11 10 11.
„ We would get an 8-bit all-zero sequence for a single 0.
„ The 1 bit has a response of 11 11 10 11, which is called the impulse
response. The 0 bit similarly has an impulse response of 00 00 00 00.
„ Suppose we have an input sequence of 1011 and we want to know what the
coded sequence would be. We can calculate the output by just adding
shifted versions of the individual impulse responses.
EXAMPLE 1

    INPUT BIT    OUTPUT BITS
    1            11 11 10 11
    0               00 00 00 00
    1                  11 11 10 11
    1                     11 11 10 11
    -------------------------------------
    1011         11 11 01 11 01 01 11
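The same output can be produced by simulating the shift register
directly. The sketch below assumes tap polynomials 1111 for v1 and 1101
for v2, which reproduce the impulse response 11 11 10 11 used above:

    def conv_encode(bits, g1=(1, 1, 1, 1), g2=(1, 1, 0, 1)):
        # Rate-1/2 encoder: shift each bit through a 4-stage register,
        # emit v1 and v2 per input bit, then flush with zeros.
        reg = [0, 0, 0, 0]
        out = []
        for b in bits + [0, 0, 0]:
            reg = [b] + reg[:-1]
            out.append(sum(r * g for r, g in zip(reg, g1)) % 2)  # v1
            out.append(sum(r * g for r, g in zip(reg, g2)) % 2)  # v2
        return out

    print(conv_encode([1, 0, 1, 1]))   # pairs: 11 11 01 11 01 01 11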
CONVOLUTIONAL CODING - Example 2

[Figure: rate-1/3 convolutional encoder. The input m = 10110 is shifted
through a four-stage register M1 M2 M3 M4 (S1 S2 S3 S4); three adders
form v1, v2 and v3, which a commutator samples to give
c = 111 010 100 110 001 000 011 000 000.]
CONVOLUTIONAL CODING - Example 2
„ Initially the register is clear and the first bit of the input data
stream is entered into M1.
„ During this message bit, the commutator samples the adder outputs
v1, v2, v3.
„ A single message bit produces three coded output bits.
„ The encoder is then said to be of rate 1/3.
„ The next message bit then enters M1 while the bit initially in M1 is
transferred to M2.
„ The commutator again samples the adder outputs.
„ This process continues until the last bit of the message has been
entered into M1.
„ Thereafter, enough 0's are added to the message so that the entire
message may proceed through the shift register.
CONVOLUTIONAL CODING - Example 2
„ For the input bit stream m = 10110 the coded output bit stream is
c = 111 010 100 110 001 000 011 000 000.
„ The convolutional encoder operates continuously, even if the input
message consists of millions of bits.
„ Each bit remains in the shift register for as many message bit
intervals as there are stages in the shift register.
„ Each input bit influences k groups of n bits.
CONVOLUTIONAL CODING - Example 3

[Figure: rate-1/2 encoder (register S1 S2 S3, outputs v1 and v2) with
input bi = 10011; output stream bo = 11 10 11 11 01 01 11.]
CONVOLUTIONAL CODING

    NEXT BIT   PRESENT STATE   NEXT STATE   CODE
               M1 M2           M1 M2        v1 v2
    0          0 0             0 0          0 0
    1          0 0             1 0          1 1
    0          1 0             0 1          1 0
    1          1 0             1 1          0 1
    0          0 1             0 0          1 1
    1          0 1             1 0          0 0
    0          1 1             0 1          0 1
    1          1 1             1 1          1 0
STATE DIAGRAM

[Figure: state diagram with four states a = 00, b = 01, c = 10 and
d = 11 (states written in M2 M1 sequence). Each state has two outgoing
branches, one per input bit, labeled with the output pair (v1, v2);
e.g., state a loops to itself with output (0,0).]
CODE TREE REPRESENTATION

[Figure: code tree for the encoder. Note: states are represented in
M2 M1 sequence. From each node the upper branch corresponds to input 0
and the lower branch to input 1; each branch is labeled with its output
pair and leads to the next state a, b, c or d. The branch pattern
repeats after the third bit.]
CODE TRELLIS

[Figure: trellis with current states a = 00, b = 01, c = 10, d = 11 on
the left and next states on the right (states in M2 M1 sequence); each
branch is labeled with the output pair produced by that transition,
e.g., a → a with output 00.]

    CURRENT STATE → OUTPUT → NEXT STATE
DIRECT DECODING
„ Consider the first message bit. It has an effect only on the first kv
bits of the code word.
„ With k = 3 and v = 2, the first digit has an effect only on the first
3 groups of 2 bits.
„ Hence, to deduce the first digit we should examine the first 6 digits
of the code.
„ We need not consider digits beyond 6, and we should not omit any of
the first 6 digits in our decision process, as we would not then be
taking full advantage of the redundancy of the code.
„ There are 8 possible combinations of the first 6 digits which are
acceptable code words.
„ These combinations correspond to the 8 possible paths through the code
tree which penetrate into the tree to the extent of three nodes.
DIRECT DECODING
„ Take a count, for each of the 8 paths, of the number of differences
between the bits of the received word and the acceptable coded word
corresponding to each path.
„ If the path that yields the minimum number of discrepancies is a path
which diverges upward from the first node, we decide that the first
message bit is 0.
„ If the path diverges downward from the first node, we decide that the
first bit is 1.
„ The second message bit is decoded in the same manner, starting from
the new node obtained after decoding the first bit.
„ We compare the 6 bits corresponding to each of the 8 paths with the
next 6 received code bits, after discarding the first v = 2 received
code bits.
„ This procedure is repeated for all message bits, advancing node by
node in the process.
DIRECT DECODING
„ When a message is decoded in this manner, the probability that a bit
in the decoded message is in error decreases exponentially with k.
„ So ideally k should be as large as possible.
„ But the decoding of each bit requires an examination of the 2^k branch
sections of the code tree.
„ Hence for large k the decoding procedure becomes impracticable.
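Because k is small here, the direct search is easy to write out. This
sketch assumes the tap assignment that matches the state table above
(outputs S1+S2+S3 and S1+S3); it enumerates the 2^3 = 8 three-bit
prefixes, encodes each, and decides the first message bit from the path
closest to the first 6 received bits:

    from itertools import product

    def encode_prefix(bits):
        # First 6 output bits of the rate-1/2 encoder for a 3-bit prefix.
        reg = [0, 0, 0]
        out = []
        for b in bits:
            reg = [b] + reg[:2]
            out += [(reg[0] + reg[1] + reg[2]) % 2, (reg[0] + reg[2]) % 2]
        return out

    received = [1, 1, 1, 0, 1, 1]        # first 6 code bits of Example 3
    best = min(product([0, 1], repeat=3),
               key=lambda p: sum(a != b
                                 for a, b in zip(encode_prefix(list(p)), received)))
    print(best[0])                       # 1: the first message bit of 10011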
DIRECT DECODING USING CODE TREE

[Figure series: step-by-step direct decoding of the received sequence
11 10 11 11 01 01 ... on the code tree. At each node the eight 6-bit
paths are compared with the received bits, the branch with the fewest
discrepancies is marked as the correct path, and the decoder moves to
the next node; the process repeats, node by node, until decoding is
complete.]
SEQUENTIAL DECODING
„ In sequential decoding, on the arrival of the first v code bits, the
decoder compares these bits with the two branches which diverge from the
starting node.
„ If one of the branches agrees exactly with these code bits, the
decoder follows this branch.
„ If, because of noise, there are errors in the received bits, the
decoder follows the branch with fewer discrepancies.
„ At the second node a similar comparison is made between the diverging
branches and the second group of v code bits, and a decision is taken.
„ If more than half the bits of a group of v bits are in error, the
decoder will make a mistake and follow the wrong path.
„ To avoid this problem, the decoder keeps a record of the total number
of discrepancies between the received code bits and the corresponding
bits encountered along its path.
„ If the decoder takes a wrong path, the number of discrepancies grows
much more rapidly than when it is following the correct path.
„ In such a situation the decoder may be programmed to retrace its path
to the node at which the error occurred and to follow the alternate
path.
„ In this way the decoder will eventually find a path through the k
nodes.
„ If the decoder takes the correct turn at each node, it will be able to
make a decision on the basis of a single path.
„ When the decoder has retraced its path due to an error, it can exclude
from its searches all the branches which diverge in the wrong direction
from that node.
SEQUENTIAL DECODING USING CODE TREE - ANOTHER EXAMPLE

[Figure series: the received sequence 11 10 11 11 01 01 ... is traced
through the code tree while the decoder keeps a running count (the small
numbers 1, 2, 3, 4 on the paths) of accumulated discrepancies. When the
count on the current path grows too quickly, the decoder retraces to an
earlier node and follows the alternate branch, eventually finding the
path through the tree.]
ADDITIONAL EXAMPLES FOR
CONVOLUTIONAL CODING AND
DECODING USING STATE DIAGRAM
AND TREE
ADDITIONAL EXAMPLE 4: Encoding the Word 1011 Using a (2,1,4) Encoder
(Figure: encoder register contents step by step. State 000, input 1, output 11; state 100, input 0, output 11; state 010, input 1, output 01; state 101, input 1, output 11; state 110, input 0, output 01; state 011, input 0, output 01; state 001, input 0, output 11; the registers are then flushed.)
Output Bits and the Encoder Bits through the (2,1,4) Code
Input bits: 1011000

Time   Input Bit   Output Bits   Encoder Bits
 0         1           11            000
 1         0           11            100
 2         1           01            010
 3         1           11            101
 4         0           01            110
 5         0           01            011
 6         0           11            001
Input Bit   Input State   Output Bits   Output State
I1          s1 s2 s3      O1 O2         s1 s2 s3
0 0 0 0 0 0 0 0 0
1 0 0 0 1 1 1 0 0
0 0 0 1 1 1 0 0 0
1 0 0 1 0 0 1 0 0
0 0 1 0 1 0 0 0 1
1 0 1 0 0 1 1 0 1
0 0 1 1 0 1 0 0 1
1 0 1 1 1 0 1 0 1
0 1 0 0 1 1 0 1 0
1 1 0 0 0 0 1 1 0
0 1 0 1 0 0 0 1 0
1 1 0 1 1 1 1 1 0
0 1 1 0 0 1 0 1 1
1 1 1 0 1 0 1 1 1
0 1 1 1 1 0 0 1 1
1 1 1 1 0 1 1 1 1
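As a cross-check of this table, here is a minimal Python sketch of the encoder. The tap equations v1 = u ⊕ s1 ⊕ s2 ⊕ s3 and v2 = u ⊕ s1 ⊕ s3 are inferred from the table rows (the slides give the table, not the taps):

```python
# Sketch of the (2,1,4) encoder; tap equations inferred from the lookup
# table above.
def encode_214(bits):
    s1 = s2 = s3 = 0                              # encoder starts in state 000
    out = []
    for u in bits:
        out.append(f'{u ^ s1 ^ s2 ^ s3}{u ^ s1 ^ s3}')   # (v1, v2)
        s1, s2, s3 = u, s1, s2                    # shift input into the register
    return out

# Message 1011 followed by the three flush zeros, as in the table:
print(' '.join(encode_214([1, 0, 1, 1, 0, 0, 0])))
# 11 11 01 11 01 01 11
```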
(Figure 8: state diagram of the (2,1,4) encoder with its eight states 000-111; solid lines mark the arrival of a 0, dashed lines a 1, and each transition is labeled with its output bit pair.)
STATE DIAGRAM
„ A state diagram for the (2,1,4) code is shown in Fig. 8. Each
circle represents a state.
„ At any one time, the encoder resides in one of these states.
The lines to and from it show the state transitions that are
possible as bits arrive.
„ Only two events can happen at each time: the arrival of a 1 bit or
the arrival of a 0 bit.
„ Each of these two events allows the encoder to jump into a
different state. The state diagram does not have time as a
dimension, and hence it tends not to be intuitive.
„ The state diagram contains the same information as the
lookup table, but in graphic form.
„ The solid lines indicate the arrival of a 0 and the dashed lines
indicate the arrival of a 1. The output bits for each case are
shown on the line, and the arrow indicates the state transition.
Encoding of the sequence 1011
„ Let’s start at state 000. The arrival of a 1 bit outputs 11 and
puts us in state 100.
„ The arrival of the next 0 bit outputs 11 and puts us in state 010.
„ The arrival of the next 1 bit outputs 01 and puts us in state 101.
„ The last bit, 1, takes us to state 110 and outputs 11. So now
we have 11 11 01 11.
„ But this is not the end. We have to take the encoder back to
the all-zero state.
„ From state 110, we go to state 011, outputting 01.
„ From state 011 we go to state 001, outputting 01, and then to
state 000 with a final output of 11.
„ The final answer is: 11 11 01 11 01 01 11
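A sketch of this walk in Python, with the state diagram held as a plain transition table (the dict below lists only the seven transitions this trace visits; the full diagram has 16):

```python
# State-diagram walk for the (2,1,4) code: (state, bit) -> (output, next).
STEP = {
    ('000', 1): ('11', '100'), ('100', 0): ('11', '010'),
    ('010', 1): ('01', '101'), ('101', 1): ('11', '110'),
    ('110', 0): ('01', '011'), ('011', 0): ('01', '001'),
    ('001', 0): ('11', '000'),
}
state, out = '000', []
for bit in [1, 0, 1, 1, 0, 0, 0]:                 # 1011 plus three flush zeros
    pair, state = STEP[(state, bit)]
    out.append(pair)
print(' '.join(out))                              # 11 11 01 11 01 01 11
```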
TREE DIAGRAM
(Figure: code tree for the (2,1,4) encoder. Each branch is labeled with its two output bits and, in parentheses, the resulting encoder state; an upward branch corresponds to a 0 input bit and a downward branch to a 1.)
TREE DIAGRAM
„ Here, instead of jumping from one state to another, we go
down branches of the tree depending on whether a 1 or 0 is
received.
„ The first branch indicates the arrival of a 0 or a 1 bit. The
starting state is assumed to be 000. If a 0 is received, we go
up, and if a 1 is received, we go down.
„ In the figure the solid lines show the arrival of a 0 bit and the
shaded lines the arrival of a 1 bit. The first 2 bits on each branch
are the output bits, and the number inside the parentheses is the
output state.
„ Let’s code the sequence 1011 as before. At branch 1, we go
down. The output is 11 and we are now in state 100. Now we
get a 0 bit, so we go up. The output bits are 11 and the state
is now 010.
„ The next incoming bit is 1. We go downwards, get an
output of 01, and the output state is now 101.
TREE DIAGRAM
„ The next incoming bit is 1, so we go downwards again and get
output bits 11. From this point, in response to a 0 bit input, we
get an output of 01 and an output state of 011.
„ We have run out of tree branches. The tree diagram now
repeats. In fact we need to flush the encoder, so our sequence
is actually 1011 000, with the last 3 bits being the flush bits.
„ We now jump to point 2 in the tree and go upwards for three
branches. Now we have the output for the complete sequence,
and it is 11 11 01 11 01 01 11.
CODING AND DECODING OF CONVOLUTIONAL CODES USING TRELLIS DIAGRAM
CONVOLUTIONAL CODES: CODING AND
DECODING USING TRELLIS DIAGRAM
(Figure: rate-1/2 convolutional encoder; the input sequence 1011 produces the output sequence 11 01 00 10 10 11.)
STATE DIAGRAM
(Figure: four-state diagram with states a = 00, b = 01, c = 10, d = 11 (the contents of M1 M2); each transition is labeled with its output bit pair.)
TRELLIS DIAGRAM
(Figure: one stage of the trellis. From a: 0/00 to a, 1/11 to c; from b: 0/11 to a, 1/00 to c; from c: 0/01 to b, 1/10 to d; from d: 0/10 to b, 1/01 to d.)
COMPLETE TRELLIS DIAGRAM
(Figure: the trellis extended over six stages; every stage repeats the same branch labels.)
TRELLIS DIAGRAM – CODING & DIRECT DECODING
(Figure: the path for the input 1011 followed by two flush zeros, traced through the trellis; it produces the code sequence 11 01 00 10 10 11.)
DECODING USING TRELLIS
„ There are several different approaches to decoding of
convolutional codes. These are grouped in two basic
categories:
1. Sequential decoding: the Fano algorithm
2. Maximum-likelihood decoding: the Viterbi algorithm
„ Both of these methods represent two different approaches to the
same basic idea behind decoding.
„ Assume that 4 bits were sent via a rate ½ code. We receive
12 bits (the extra bits come from flushing the encoder).
„ These 12 bits may or may not have errors.
„ We know from the encoding process that these bits map
uniquely.
„ So a 4 bit sequence will have a unique 12 bit output. But due
to errors, we can receive any and all possible combinations of
the 12 bits.
DECODING USING TRELLIS
„ 1. We can compare this received sequence to all permissible
sequences and pick the one with the smallest bit
disagreement: hard-decision decoding.
„ 2. We can do a correlation and pick the sequence with the
best correlation: soft-decision decoding.
„ If a message of length s bits is received, then the possible
number of codewords is 2^s.
„ How can we decode the sequence without checking each and
every one of these 2^s codewords? This is the basic idea
behind decoding.
SEQUENTIAL DECODING
„ Sequential decoding was one of the first methods proposed for
decoding a convolutionally coded bit stream. It was first
proposed by Wozencraft, and later a better version was
proposed by Fano.
„ In sequential decoding you are dealing with just one path at a
time. You may give up that path at any time and turn back to
follow another path, but the important thing is that only one
path is followed at any one time.
„ Sequential decoding allows both forwards and backwards
movement through the trellis.
„ The decoder keeps track of its decisions; each time it makes an
ambiguous decision, it records it.
„ If the bit disagreements accumulate beyond some threshold
value, the decoder gives up that path and retraces the path back to
the last fork where the tally was below the threshold.
SEQUENTIAL DECODING
„ Assume that the bit sequence 11 01 00 10 10 11 was sent but,
due to an error, we received 11 01 00 10 10 01.
„ Assume that the threshold value for the error count is set to 3.
„ The decoder looks at the first two bits, 01. Right away it sees that
an error has occurred, because the starting two bits can only be 00
or 11.
„ But which of the two bits was received in error, the first or the
second? The decoder randomly selects 00 as the starting choice.
To correspond to 00, it decodes the input bit as a 0. It puts a count
of 1 into its error counter. It is now at point 2.
SEQUENTIAL DECODING
„ The decoder now looks at the next set of bits, which are 10. It
sees that an error has occurred, because the next two bits can only be
00 or 11. It increments the error counter by 1 to make it 2 and
randomly chooses 11 as the next path, reaching point 3.
„ At decision point 3 the received bits are 10; there is no error,
the error counter remains the same, and we reach node 4.
„ At decision point 4 the decoder looks at the next set of
bits, which are 00. Whatever path we choose, the error count
increases to 3, which is our threshold value, and so we turn back
to node 3.
„ At point 3 the code is 10 and the only remaining choice is 01, which
increases the error count to 4, so we turn back to node 2.
„ Here the code is 10 and the only remaining choice is 00, which
increases the error count to 2, and so we reach node 6.
SEQUENTIAL DECODING
„ The decoder now looks at the next set of bits, which are 10. It
sees that an error has occurred, because the next two bits can only
be 00 or 11. It increments the error counter by 1 to make it 3. Since
the threshold value is reached, it returns to point 2.
„ All the possible paths from point 2 have already been exhausted, so
the decoder turns back to point 1 and follows the path to 7.
„ On this path the error count at each decision point stays at one, and
the decoder traces the correct path through the trellis, decoding the
correct sequence 1011.
SEQUENTIAL DECODING
(Figure: successive frames of the trellis for the received sequence 11 01 00 10 10 01. The running error count is marked at each numbered decision point as the decoder advances, reaches the threshold, retraces, and finally follows the surviving path.)
MAXIMUM-LIKELIHOOD OR VITERBI DECODING
„ Viterbi decoding is the best known implementation of
maximum-likelihood decoding. Here we narrow the options
systematically at each time tick. The principle used to reduce the
choices is this:
1. Errors occur infrequently; the probability of error is small.
2. The probability of two errors in a row is much smaller than that of a
single error; that is, the errors are distributed randomly.
„ The Viterbi decoder examines an entire received sequence of a
given length.
„ The decoder computes a metric for each path and makes a
decision based on this metric.
„ The number of discrepancies between the received code and
the path code may be taken as the metric.
MAXIMUM-LIKELIHOOD OR VITERBI DECODING
„ All paths are followed until two paths converge on one node.
Then the path with the lower metric is kept and the one with the
higher metric is discarded. The paths selected are called the
survivors.
„ For an N bit sequence, the total number of possible received
sequences is 2^N. Of these, only 2^(kL) are valid. The Viterbi
algorithm applies the maximum-likelihood principle to limit the
comparison to the 2^(kL) surviving paths instead of checking all paths.
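A minimal hard-decision Viterbi sketch for the 4-state code of these examples. The transition table matches the trellis branch labels; carrying the decoded bits along with each survivor and taking the best final state are simplifying assumptions that stand in for a full traceback:

```python
# Hard-decision Viterbi decoding for the 4-state rate-1/2 trellis.
# (state, input bit) -> (output bit pair, next state).
TRELLIS = {
    ('a', 0): ('00', 'a'), ('a', 1): ('11', 'c'),
    ('b', 0): ('11', 'a'), ('b', 1): ('00', 'c'),
    ('c', 0): ('01', 'b'), ('c', 1): ('10', 'd'),
    ('d', 0): ('10', 'b'), ('d', 1): ('01', 'd'),
}

def viterbi(received):
    pairs = [received[i:i + 2] for i in range(0, len(received), 2)]
    survivors = {'a': (0, [])}           # state -> (path metric, decoded bits)
    for seg in pairs:
        nxt = {}
        for state, (metric, bits) in survivors.items():
            for bit in (0, 1):
                out, ns = TRELLIS[(state, bit)]
                m = metric + sum(x != y for x, y in zip(out, seg))
                if ns not in nxt or m < nxt[ns][0]:   # keep the lower metric
                    nxt[ns] = (m, bits + [bit])
        survivors = nxt                  # merging paths are pruned here
    return min(survivors.values())       # best metric over the final states

metric, bits = viterbi('110100101001')   # received 11 01 00 10 10 01
print(metric, bits)                      # 1 [1, 0, 1, 1, 0, 0] -> message 1011
```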
COMPLETE TRELLIS DIAGRAM
(Figure: the complete six-stage trellis, redrawn before decoding begins.)
REDUCED TRELLIS DIAGRAM
(Figure: the same trellis with the non-surviving branches removed.)
VITERBI DECODING - STEPS
(Figure: successive frames of the trellis for the received sequence 11 01 00 10 10 01. Branch metrics are shown in parentheses next to each branch label, accumulated path metrics are marked at the nodes, and wherever two paths merge the higher-metric path is discarded, leaving the survivors; the final survivor corresponds to the message 1011.)
BURST ERROR CORRECTION
„ The parity bits added in block codes will correct a limited
number of bit errors in each code word.
„ When the errors are clustered, error-correcting codes are
not very effective even if the average bit error rate is small.
„ The errors in this case are clustered, i.e., in one region a large
percentage of bits are in error, whereas in other regions the
average error rate is very small.
BLOCK INTERLEAVING
„ In block interleaving, the data goes through a process of
interleaving and error correction coding before being applied
to the channel.
„ At the receiver, a decoding process followed by de-interleaving
is used to recover the original data.
(Figure: Data, Interleaving, Error control coding, Channel, Decoding, De-interleaving, Recovered data, in that order.)
BLOCK INTERLEAVING
„ A block of kl data bits is loaded into a shift register which is
organized into k rows with l bits per row.
„ The data stream is entered into the storage element at a11.
„ At each shift, each bit moves one position to the right while
the bit in the rightmost storage element moves into the
leftmost storage element of the next row.
„ When kl data bits have been entered, the register is full, the
first bit being in akl and the last bit in a11.
„ At this point the data stream is diverted to a similar shift
register.
„ A process of error control coding is now applied to the data
held in the first register.
BLOCK INTERLEAVING
„ In this coding process, the information bits in a column are
viewed as the bits of an uncoded word to which parity bits are
to be attached.
„ Thus the code word (a11, a21, a31, ..., ak1, c11, c21, c31, ..., cr1)
is formed, thereby generating a code word with k information
bits and r parity bits.
„ The information bits in this code word are l bits apart in
the original bit stream.
„ When the coding is completed, the entire contents of the (k x l)
information register as well as the (r x l) parity bits are transmitted
over the channel.
„ Generally, bit-by-bit serial transmission is carried out row by
row, i.e., (crl, ..., cr1, ..., c1l, ..., c11, akl, ..., ak1, ..., a1l, ..., a11).
BLOCK INTERLEAVING
The kl information bits occupy k rows of l bits each, with the r parity rows below them, so each column carries k information bits and r parity bits:

\[
\begin{bmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1l} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2l} \\
\vdots &        &        &        & \vdots \\
a_{k1} & a_{k2} & a_{k3} & \cdots & a_{kl} \\
c_{11} & c_{12} & c_{13} & \cdots & c_{1l} \\
c_{21} & c_{22} & c_{23} & \cdots & c_{2l} \\
\vdots &        &        &        & \vdots \\
c_{r1} & c_{r2} & c_{r3} & \cdots & c_{rl}
\end{bmatrix}
\]
BLOCK INTERLEAVING
„ Data are transmitted in exactly the same order in which they
entered the register, but now parity bits are also transmitted.
„ The received data are again stored in the same order as in
the transmitter, and error correction decoding is performed.
„ The parity bits are then discarded and the data bits are shifted
out of the register.
„ Consider that the code incorporated into a column is
adequate to correct a single error.
„ Suppose that in the transmitted data stream there occurs a burst
of noise lasting l consecutive coded bits.
„ Because of the arrangement of the registers, only one error
will appear in each column, and this single error will be
corrected.
BLOCK INTERLEAVING
„ If there are (l+1) consecutive errors, then one column will have
2 errors and correction will not be assured.
„ If the code is able to correct t errors, then the process of
interleaving will permit the correction of a burst of B bits with B ≤ tl.
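A small Python sketch of this burst-spreading property. The column code itself is omitted; the sketch only verifies that a burst of l consecutive channel errors hits each column at most once, so a single-error-correcting column code would remove it. The dimensions k = 4, l = 6, the bit pattern, and the burst position are illustrative values:

```python
# Block interleaving sketch: k rows of l bits, transmitted row by row.
k, l = 4, 6
tx = [(3 * i + 1) % 2 for i in range(k * l)]    # arbitrary bit pattern

rx = tx[:]
for pos in range(7, 7 + l):                     # burst of l consecutive errors
    rx[pos] ^= 1

# Bit r*l + c sits in row r, column c; count the errors in each column.
errors_per_column = [sum(rx[r * l + c] != tx[r * l + c] for r in range(k))
                     for c in range(l)]
print(errors_per_column)                        # [1, 1, 1, 1, 1, 1]
# No column sees more than one error, so a single-error-correcting code
# applied to each column corrects the whole burst (B <= t*l in general).
```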
CONVOLUTIONAL INTERLEAVING
„ The four switches operate in step and move from line to line
at the bit rate of the input bit stream d(k).
„ Each switch makes contact with line 1 at the same time,
moves to line 2 together, and returns to line 1 at the same
time.
„ The cascades of storage elements in the lines are shift
registers.
„ Starting with line 1, the number of elements increases by s as
we progress from line to line.
„ The last line, l, has (l-1)s storage elements.
„ The total number of storage elements in each path is the
same, (l-1)s.
CONVOLUTIONAL INTERLEAVING
(Figure: transmitter line i has (i-1)s storage elements and the matching receiver line has (l-i)s, so every path carries the same total delay of (l-1)s; the commutator switches on both sides of the channel step through lines 1 to l in unison.)
CONVOLUTIONAL INTERLEAVING
„ Consider a single line li on the transmitter side.
„ During a particular bit interval of d(k) there is switch contact
at the input and output sides of line li.
„ At the end of the bit interval, a clock signal causes the shift
register of line li to enter the bit on the input side of the line
into its leftmost storage element and to start moving the
contents of each of its storage elements one position to the right.
„ Then a synchronous clock advances the switches to the next
line, li+1.
„ When the shift register response is completed, there will be a
new bit at the output end of the line li.
„ Because of the propagation delay through the storage
elements, the switch at the output end of the line will have
already lost contact with line li before the new bit has
appeared at the line output.
CONVOLUTIONAL INTERLEAVING
„ During the interval of input d(k), during which the switches
were connected to line li, there is a one-bit shift of the shift
register on line li, which accepts bit d(k) into the register.
„ However, the fact that such a shift takes place is not noticed at
the output switch until the next time the switch makes contact
with line li.
„ While the clock that drives the switches has a clock rate fb, the
clock that drives the shift registers has a rate fb/l.
„ The shift registers are not driven in unison, but in sequence,
each register being activated as the switches are about to
break contact with its line.
„ Assume that initially all the shift registers in the transmitter
and receiver are short circuited.
CONVOLUTIONAL INTERLEAVING
„ If bit d(k) occurs when all switches are on line li, then the
corresponding received bit d(k) will appear immediately.
„ The next input bit d(k+1) will be the next received bit, except
that it will be transmitted over line li+1.
„ The received sequence will be the same as the transmitted
sequence. With the shift registers in place in the transmitter
and receiver, each of the l lines will have the same delay,
(l-1)s, and therefore the output sequence will still be identical
to the input sequence.
„ The sequence in which the bits are transmitted over the
channel, however, is different.
„ Suppose that two successive bits in the input bit stream are
d(k) and d(k+1).
„ Then d(k+1) no longer directly follows d(k) on the channel; other
bits are interposed between them.
„ Thus if l=5 and s=3, there will be ls=15 bits interposed
between two bits that were initially adjacent to one another.
„ The main advantage of the convolutional interleaver is that
fewer storage elements are required, and the structure can be
easily changed as per our requirements.
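A minimal Python sketch of the commutated delay lines; l = 4 and s = 1 match the worked example that follows, while the symbol values 1-16 are illustrative:

```python
from collections import deque

# Convolutional interleaver/de-interleaver sketch (l lines, step s).
# Transmitter line i delays by i*s symbols; the de-interleaver line i
# delays by (l-1-i)*s, so every path sees the same total delay (l-1)*s.
def make_lines(l, s, receiver=False):
    return [deque([0] * ((l - 1 - i if receiver else i) * s)) for i in range(l)]

def commutate(lines, stream):
    out = []
    for k, sym in enumerate(stream):
        line = lines[k % len(lines)]    # all switches step through the lines
        line.append(sym)                # enter at the input end ...
        out.append(line.popleft())      # ... and take the delayed symbol out
    return out

l, s = 4, 1
sent = commutate(make_lines(l, s), list(range(1, 17)))
print(sent)
# [1, 0, 0, 0, 5, 2, 0, 0, 9, 6, 3, 0, 13, 10, 7, 4]: symbols that were
# adjacent at the input now have l*s other slots interposed between them.
back = commutate(make_lines(l, s, receiver=True), sent)
print(back)                             # twelve zeros, then 1, 2, 3, 4
```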
CONVOLUTIONAL INTERLEAVING - EXAMPLE
(Figure: four snapshots of an l = 4, s = 1 interleaver as the symbols 1-16 enter; successive commutator cycles place 1, then 5 2, then 9 6 3, then 13 10 7 4 on the channel, so symbols that were adjacent at the input are separated on the channel.)
AUTOMATIC REPEAT REQUEST
AUTOMATIC REPEAT REQUEST (ARQ)
„ There are basically two different techniques available for
controlling transmission errors:
(1) Forward Error Correction (FEC) schemes, in which
redundancy is deliberately introduced to detect and
correct errors.
(2) Automatic Repeat Request (ARQ) schemes, in which
errors in the received code word are detected and a
request for retransmission is sent back to the transmitter.
„ The FEC method has the limitation that if errors are too
numerous, the code will not be effective.
„ Also, to achieve low error rates, it is necessary to add a large
number of redundant bits.
„ As a result the efficiency of the code is very low.
AUTOMATIC REPEAT REQUEST (ARQ)
„ When the code is very long, we require complex and
expensive hardware to process the codes.
„ ARQ is used where extremely low error rates are required.
„ In this system the receiver need only detect errors, not
correct them.
„ When an error is detected in a word, the receiver signals back
to the transmitter and the word is retransmitted.
„ So a feedback channel must be provided in ARQ systems.
„ Since a code can detect more errors than it can correct, ARQ
makes more effective use of coding.
„ There are basically three ARQ systems: stop-and-wait ARQ,
go-back-N ARQ, and selective-repeat ARQ.
STOP AND WAIT ARQ
„ The transmitter sends a code word to the receiver during the
time T_W.
„ The receiver receives the code word and processes it, and if
it detects no error, it sends back to the transmitter an
acknowledgement (ACK) signal.
„ When the ACK is received, the transmitter sends the next
word.
„ If the receiver does detect an error, it returns a negative
acknowledgement (NAK) to the transmitter.
„ In this case the transmitter retransmits the same message
and again waits for an ACK or NAK response before
undertaking further transmission.
„ The elapsed time between the end of transmission of one
word and the start of transmission of the next word is T_I.
STOP AND WAIT ARQ
(Figure: stop-and-wait timing. Each word occupies T_W and is followed by an idle interval T_I while the transmitter waits for the response; word 3 is received in error, a NAK is returned, and word 3 is retransmitted.)
„ The main drawback of such a system is that the transmitter must stand
idle while waiting for the ACK or NAK.
GO BACK N ARQ
„ The transmitter sends messages one after another, without
delay, and does not wait for an ACK signal.
„ When the receiver detects an error in message i, a NAK
signal is returned to the transmitter.
„ In response to the NAK, the transmitter returns to code word i
and starts all over again at that word.
„ In the figure, the schematic is drawn on the assumption that the
propagation delay and the processing at the receiver occupy an
interval of such length that, when an error is detected in
word i, the number N of words that are sent over again is N=5.
GO BACK N ARQ
(Figure: go-back-N timing with N = 5. Words 1-6 are sent back to back; word 2 is found in error, so transmission restarts from word 2 and continues with 2, 3, 4, 5, 6, 7, ...)
SELECTIVE REPEAT ARQ
„ The transmitter sends messages in succession, without waiting
for an ACK after each message.
„ If the receiver detects that there is an error in code word i, the
transmitter is notified by a NAK.
„ The transmitter retransmits the code word i and thereafter
returns immediately to its sequential transmission.
„ Selective-repeat ARQ has the highest efficiency of the
three systems, but it is the most costly to implement.
SELECTIVE REPEAT ARQ
(Figure: selective-repeat timing. Words are sent continuously; when word 2 is reported in error it alone is retransmitted, after which transmission continues from word 7 onwards.)
PERFORMANCE OF ARQ SYSTEMS
„ Throughput efficiency is defined as the ratio of the average
number of information bits accepted at the receiver per unit of
time to the number of information bits that would be accepted
per unit of time if ARQ were not used.
THROUGHPUT OF STOP AND WAIT ARQ
„ Let P_A be the probability that the receiver accepts the message on
any particular transmission.
„ Then the probability that only a single transmission is needed
for acceptance is P_A.
„ The probability that two transmissions will be required is the
product of two probabilities: (1-P_A), the probability that the first
transmission was rejected, and P_A, the probability that the
transmission was accepted on the second try, i.e., P_A(1-P_A).
„ The average number of transmissions required for acceptance
of a single word is the sum of the products of the number of
transmissions j and the probability of requiring j transmissions,
P_A(1-P_A)^{j-1}.
THROUGHPUT OF STOP AND WAIT ARQ
\[
\bar{N}_{SW} = 1 \cdot P_A + 2 \cdot P_A(1-P_A) + 3 \cdot P_A(1-P_A)^2 + \cdots
             = P_A\left(1 + 2(1-P_A) + 3(1-P_A)^2 + \cdots\right)
\]
Putting \((1-P_A) = x\), so that \(P_A = 1-x\):
\[
\bar{N}_{SW} = (1-x)\left(1 + 2x + 3x^2 + \cdots\right) = (1-x)(1-x)^{-2} = \frac{1}{1-x} = \frac{1}{P_A}
\]
„ The total time devoted to a single attempt to get the receiver
to accept a word is T_W + T_I.
„ Hence, on the average, the time required to transmit one word is
\[
\bar{T}_{SW} = \frac{T_W + T_I}{P_A}
\]
THROUGHPUT OF STOP AND WAIT ARQ
„ If ARQ were not used and no coding bits were added to the k
information bits, the time needed to transmit the k bits would be
\[
T_k = \frac{k}{n} T_W
\]
„ The throughput efficiency of the stop-and-wait ARQ system is then
\[
\eta_{SW} = \frac{T_k}{\bar{T}_{SW}} = \frac{k}{n} \cdot \frac{P_A}{1 + T_I/T_W}
\]
THROUGHPUT OF GO-BACK-N ARQ
„ In this system, when the transmitter is informed that an error
has been detected in a particular word, retransmission is required
of that word and of the (N-1) words that follow.
„ Hence the retransmission involves N words: if a word is
received in error, N words are retransmitted, so the total number
of word transmissions is N+1.
„ If the same word is again in error, the N words are repeated
once again, and so on.
„ The average number of word transmissions required for the
acceptance of a single word is
\[
\bar{N}_{GBN} = 1 \cdot P_A + (N+1) P_A(1-P_A) + (2N+1) P_A(1-P_A)^2 + \cdots
\]
\[
= P_A + N P_A(1-P_A) + P_A(1-P_A) + 2N P_A(1-P_A)^2 + P_A(1-P_A)^2 + \cdots
\]
THROUGHPUT OF GO-BACK-N ARQ
\[
\bar{N}_{GBN} = \left(P_A + P_A(1-P_A) + P_A(1-P_A)^2 + \cdots\right)
              + \left(N P_A(1-P_A) + 2N P_A(1-P_A)^2 + \cdots\right)
\]
Putting \((1-P_A) = x\), so that \(P_A = 1-x\):
\[
\bar{N}_{GBN} = (1-x)\left(1 + x + x^2 + \cdots\right) + Nx(1-x)\left(1 + 2x + 3x^2 + \cdots\right)
\]
\[
= (1-x)\frac{1}{1-x} + Nx(1-x)\frac{1}{(1-x)^2}
= 1 + \frac{Nx}{1-x}
= 1 + \frac{N(1-P_A)}{P_A}
\]
THROUGHPUT OF GO-BACK-N ARQ
„ Correspondingly, the average time to transmit one word is
\[
\bar{T}_{GBN} = T_W\left(1 + \frac{N(1-P_A)}{P_A}\right)
\]
„ and the throughput efficiency is
\[
\eta_{GBN} = \frac{T_k}{\bar{T}_{GBN}} = \frac{k}{n} \cdot \frac{1}{1 + \dfrac{N(1-P_A)}{P_A}}
\]
THROUGHPUT OF SELECTIVE REPEAT ARQ
„ The mean time for transmission of a word in selective-repeat
ARQ is calculated exactly as in the stop-and-wait case, except
that T_I is set to 0:
\[
\bar{T}_{SR} = \frac{T_W + 0}{P_A} = \frac{T_W}{P_A}
\qquad\Rightarrow\qquad
\eta_{SR} = \frac{T_k}{\bar{T}_{SR}} = \frac{k}{n} P_A
\]
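For a numeric feel of the three results, a short Python sketch; the values of k, n, P_A, T_I/T_W, and N are illustrative, not taken from these notes:

```python
# Throughput efficiency of the three ARQ schemes (illustrative values).
def eta_stop_and_wait(k, n, PA, TI_over_TW):
    return (k / n) * PA / (1 + TI_over_TW)

def eta_go_back_n(k, n, PA, N):
    return (k / n) / (1 + N * (1 - PA) / PA)

def eta_selective_repeat(k, n, PA):
    return (k / n) * PA

k, n, PA, TI_over_TW, N = 11, 15, 0.99, 1.0, 5
print(round(eta_stop_and_wait(k, n, PA, TI_over_TW), 3))  # 0.363
print(round(eta_go_back_n(k, n, PA, N), 3))               # 0.698
print(round(eta_selective_repeat(k, n, PA), 3))           # 0.726
```

As expected, selective repeat gives the highest throughput and stop-and-wait the lowest.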