
EEE436

DIGITAL COMMUNICATION
Coding

En. Mohd Nazri Mahmud


MPhil (Cambridge, UK)
BEng (Essex, UK)
[email protected]
Room 2.14

EE436 Lecture Notes 1


Convolutional Codes

Unlike block codes, which operate on a block-by-block basis, convolutional codes operate on message bits that arrive serially rather than in blocks.
The structure of a generic convolutional encoder can be written in the compact form (n, k, L): n = number of output streams (one per modulo-2 adder); k = number of message bits shifted into the encoder at a time; L = number of memory stages in the shift register (so the encoder can occupy one of 2^L states).
The encoder takes an M-bit message sequence and produces a coded output sequence of length n(L + M) bits, where M = number of message bits. For example, a (2, 1, 2) encoder turns a 5-bit message into 2(2 + 5) = 14 encoded bits.
The encoder is implemented by a tapped shift register with L + 1
stages.

EE436 Lecture Notes 2


Convolutional Codes

[Figure: tapped shift register encoder producing the encoded bit c_j]

The message bits in the register are combined by modulo-2 addition to form the encoded bit c_j:

$$c_j = m_{j-L}\, g_L + \cdots + m_{j-1}\, g_1 + m_j\, g_0 = \sum_{i=0}^{L} m_{j-i}\, g_i$$
EE436 Lecture Notes 3
Convolutional Codes – Example

A (2,1,2) convolutional encoder with n=2, k=1 and L=2

[Figure: (2, 1, 2) encoder; two modulo-2 adders produce output streams c'_j and c''_j from a two-stage shift register]

EE436 Lecture Notes 4


Convolutional Codes
To provide the extra check bits for error control, the encoded bits
from multiple streams are interleaved.
For example, consider a convolutional encoder with n = 2 (i.e., two streams of
encoded bits):

[Figure: n = 2 encoder; the same shift register feeds two modulo-2 adders producing streams c'_j and c''_j]

Encoded bits from stream 1: c'_j = m_{j-2} + m_{j-1} + m_j (modulo-2)
Encoded bits from stream 2: c''_j = m_{j-2} + m_j (modulo-2)

The two streams are interleaved to form the code sequence, as in the sketch below:

C = c'_1 c''_1 c'_2 c''_2 c'_3 c''_3 …
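A minimal Python sketch of this two-stream encoder (the function name conv_encode and its structure are our own, assuming zero initial conditions and L flushing zeros):

```python
# Sketch of the n = 2 encoder above: each output stream applies its own
# taps to the same shift register, and the streams are interleaved.

def conv_encode(message, taps_per_stream):
    """Encode `message` (list of 0/1 bits) and interleave the output streams.

    Each entry of `taps_per_stream` is a tap vector (g0, ..., gL).
    The register is flushed with L zeros, so the output has n(L + M) bits.
    """
    L = max(len(t) for t in taps_per_stream) - 1
    padded = list(message) + [0] * L          # flush the register
    register = [0] * L                        # zero initial condition
    out = []
    for m in padded:
        window = [m] + register               # (m_j, m_{j-1}, ..., m_{j-L})
        for taps in taps_per_stream:          # one bit per stream, interleaved
            out.append(sum(b & g for b, g in zip(window, taps)) % 2)
        register = window[:-1]                # shift the register
    return out

# Stream 1 taps (1,1,1): c'_j = m_{j-2}+m_{j-1}+m_j; stream 2 taps (1,0,1).
bits = conv_encode([1, 0, 0, 1, 1], [(1, 1, 1), (1, 0, 1)])
print(''.join(map(str, bits)))  # -> 11101111010111, as derived on later slides
```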

EE436 Lecture Notes 5


Convolutional Codes
Each stream may be characterised in terms of its impulse response
(the response of that stream to a symbol 1 with zero initial
conditions).
Every stream is also characterised in terms of a generator
polynomial, which is the unit-delay transform of the impulse
response.

Let the following generator sequence denote the impulse response of the i-th path:

$$\left(g_0^{(i)},\; g_1^{(i)},\; g_2^{(i)},\; \ldots,\; g_M^{(i)}\right)$$

The generator polynomial of the i-th path is given by

$$g^{(i)}(D) = g_0^{(i)} + g_1^{(i)} D + g_2^{(i)} D^2 + \cdots + g_M^{(i)} D^M$$

where D denotes the unit-delay variable and M is the memory of the path (equal to L for the encoders in these notes).

EE436 Lecture Notes 6


Convolutional Codes – Example
Consider a convolutional encoder with two streams of encoded bits, with the
message sequence 10011 as input.

[Figure: two-stream (2, 1, 2) convolutional encoder with outputs c'_j (stream 1) and c''_j (stream 2)]
First, we find the impulse response of both streams to a symbol 1.
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)
Then, write the corresponding generator polynomials of both streams:

$$g^{(i)}(D) = g_0^{(i)} + g_1^{(i)} D + g_2^{(i)} D^2 + \cdots + g_M^{(i)} D^M$$

EE436 Lecture Notes 7


Convolutional Codes – Example

First, we find the impulse response of both streams to a symbol 1.


Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)

Then, write the corresponding generator polynomials of both streams


 g 1D   1  D  D 2 
 
 

 g 2 D   1  D 2 
 
 

Then, write the message polynomial for input message (10011)


 mD   1  D 3  D 4 
 

Then, find the output polynomial for both streams by multiplying the
generator polynomial and the message polynomial

EE436 Lecture Notes 8


Convolutional Codes – Example

Then, find the output polynomial for both streams by multiplying the
generator polynomial and the message polynomial
 c 1D   g 1D .mD   c 2 D   g 2 D .mD 
   
   

= (1 + D + D2)(1 + D3 + D4) = (1 + D2)(1 + D3 + D4)

= 1 + D + D 2 + D3 + D6 = 1 + D 2 + D3 + D4 + D5 + D6

So the output sequence for stream 1 is 1111001, and the output sequence for stream 2 is 1011111.

Interleave
C = 11,10,11,11,01,01,11
Original message (10011)
Encoded sequence (11101111010111)
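The products can be checked mechanically; below is a small sketch of binary polynomial multiplication (the helper name poly_mul_gf2 is ours, with coefficients listed lowest power first):

```python
# Multiply two binary polynomials, reducing coefficients modulo 2.
# Polynomials are lists of 0/1 coefficients, lowest power first.

def poly_mul_gf2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj      # coefficient arithmetic mod 2
    return out

g1 = [1, 1, 1]          # g1(D) = 1 + D + D^2
g2 = [1, 0, 1]          # g2(D) = 1 + D^2
m  = [1, 0, 0, 1, 1]    # m(D)  = 1 + D^3 + D^4

print(poly_mul_gf2(g1, m))  # [1,1,1,1,0,0,1] -> stream 1 output 1111001
print(poly_mul_gf2(g2, m))  # [1,0,1,1,1,1,1] -> stream 2 output 1011111
```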
EE436 Lecture Notes 9
Convolutional Codes – Exercise
Consider a convolutional encoder with two streams of encoded bits, with the
message sequence 110111001 as input.

[Figure: the same (2, 1, 2) encoder with output streams c'_j (stream 1) and c''_j (stream 2)]

EE436 Lecture Notes 10


Convolutional Codes – Exercise

First, we find the impulse response of both streams to a symbol 1.


Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)

Then, write the corresponding generator polynomials of both streams


 g 1D   1  D  D 2 
 
 

 g 2 D   1  D 2 
 
 

Then, write the message polynomial for input message (110111001):

$$m(D) = 1 + D + D^3 + D^4 + D^5 + D^8$$

Then, find the output polynomial for both streams by multiplying the
generator polynomial and the message polynomial

EE436 Lecture Notes 11


Convolutional Codes – Exercise

Then, find the output polynomial for both streams by multiplying the
generator polynomial and the message polynomial
 c 1D   g 1D .mD   c 2 D   g 2 D .mD 
   
   

= 1 + D5 + D7 + D8 + D9 + D10 = 1 + D +D2 + D4 + D6 + D7 + D8 + D10

So the output sequence for stream 1 is 10000101111, and the output sequence for stream 2 is 11101011101.

Interleave
C = 11 01 01 00 01 10 01 11 11 10 11
Original message (110111001)
Encoded sequence (11 01 01 00 01 10 01 11 11 10 11 )
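Reusing the poly_mul_gf2 sketch from the previous example, the exercise can be checked the same way:

```python
# m(D) = 1 + D + D^3 + D^4 + D^5 + D^8
m = [1, 1, 0, 1, 1, 1, 0, 0, 1]
print(poly_mul_gf2([1, 1, 1], m))  # [1,0,0,0,0,1,0,1,1,1,1] -> 10000101111
print(poly_mul_gf2([1, 0, 1], m))  # [1,1,1,0,1,0,1,1,1,0,1] -> 11101011101
```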
EE436 Lecture Notes 12
Convolutional Codes – Code Tree, Trellis and State Diagram

Draw a code tree for the convolutional encoder below.

[Figure: the (2, 1, 2) encoder with output streams c'_j (stream 1) and c''_j (stream 2)]

EE436 Lecture Notes 13


Code Tree

EE436 Lecture Notes 14


Trellis

EE436 Lecture Notes 15


State Diagram

EE436 Lecture Notes 16


Maximum Likelihood Decoding of a Convolutional Code

Let m denote a message vector,
c the corresponding code vector, and
r the received vector (which may differ from c because of channel noise).

Given the received vector r, the decoder is required to make an estimate m̂
of the message vector.

Since there is a one-to-one correspondence between the message vector m and
the code vector c, estimating m̂ is equivalent to estimating ĉ:

m̂ = m if and only if ĉ = c; otherwise a decoding error is committed.

The decoding rule is said to be optimum when the probability of decoding error is
minimised.

EE436 Lecture Notes 17


Maximum Likelihood Decoding of a Convolutional Code

Suppose that both the transmitted code vector c and the received vector r represent
binary sequences of length N.

These two sequences may differ from each other in some locations because of
errors due to channel noise.

Let c_i and r_i denote the i-th elements of c and r, respectively.

Assuming a memoryless channel, we then have

$$p(\mathbf{r} \mid \mathbf{c}) = \prod_{i=1}^{N} p(r_i \mid c_i)$$

The decoding rule is said to be optimum when the probability of decoding error is
minimised

EE436 Lecture Notes 18


$$p(\mathbf{r} \mid \mathbf{c}) = \prod_{i=1}^{N} p(r_i \mid c_i)$$

Correspondingly, the log-likelihood is

$$\log p(\mathbf{r} \mid \mathbf{c}) = \sum_{i=1}^{N} \log p(r_i \mid c_i)$$

The probability of decoding error is minimised if the log-likelihood
function is maximised (the estimate ĉ is chosen to maximise the function).

Let the transition probability $p(r_i \mid c_i)$ be defined as

$$p(r_i \mid c_i) = \begin{cases} p & \text{if } r_i \neq c_i \\ 1 - p & \text{if } r_i = c_i \end{cases}$$
EE436 Lecture Notes 19
Suppose also that the received vector r differs from the transmitted code vector c in
exactly d positions (where d is the Hamming distance between vectors r and c)

We may rewrite the log-likelihood function

$$\log p(\mathbf{r} \mid \mathbf{c}) = \sum_{i=1}^{N} \log p(r_i \mid c_i)$$

as

$$\log p(\mathbf{r} \mid \mathbf{c}) = d \log p + (N - d) \log(1 - p) = d \log\!\left(\frac{p}{1-p}\right) + N \log(1 - p)$$

EE436 Lecture Notes 20


log p r c   d log p  N  d log1  p 
 p 
 d log   N log1  p 
 1  p 
In general, the probability of error is low enough for us to assume p < ½, which makes log(p/(1−p)) negative; also, N log(1−p) is constant for all c.

The function is therefore maximised when d is minimised (i.e., the smallest Hamming distance).

“Choose the estimate ĉ that minimises the Hamming distance between
the received vector r and the transmitted vector c.”

The maximum likelihood decoder is reduced to a minimum distance decoder.

The received vector r is compared with each possible transmitted code vector c
and the particular one “closest” to r is chosen as the correct transmitted code vector.
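As an illustration (a sketch of ours, not from the notes), minimum distance decoding can be written as a brute-force search; the number of candidate codewords grows exponentially with the message length, which is exactly what the Viterbi algorithm avoids:

```python
# Brute-force minimum distance decoding: compare r against every
# candidate codeword and keep the closest one.

def hamming(a, b):
    """Hamming distance between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b))

def min_distance_decode(r, codewords):
    """Return the codeword with the smallest Hamming distance to r."""
    return min(codewords, key=lambda c: hamming(r, c))
```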

EE436 Lecture Notes 21


The Viterbi Algorithm
The equivalence between maximum likelihood decoding and minimum distance
decoding implies that we may decode a convolutional code by choosing a path
in the code tree whose coded sequence differs from the received sequence in the
fewest number of places.

Since a code tree is equivalent to a trellis, we may limit our choice to the possible
paths in the trellis representation of the code.

The Viterbi algorithm operates by computing a metric or discrepancy for every possible
path in the trellis.

The metric for a particular path is defined as the Hamming distance between the
coded sequence represented by that path and the received sequence.

For each state in the trellis, the algorithm compares the two paths entering the node,
and the path with the lower metric is retained.

The retained paths are called survivors.

The sequence along the path with the smallest metric is the maximum likelihood
choice and represents the transmitted sequence.
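The following Python sketch (our own; it assumes the (2, 1, 2) encoder above, a zero starting state, and zero flushing bits) implements hard-decision Viterbi decoding with the Hamming branch metric:

```python
# Viterbi decoding for the (2, 1, 2) encoder. A trellis state is the
# register contents (m_{j-1}, m_{j-2}); helper names are our own.

def viterbi_decode(received, n_msg):
    """Decode a list of (c', c'') pairs; returns the n_msg message bits.

    Assumes the encoder starts in state (0, 0) and is flushed with zeros.
    """
    INF = float('inf')
    metric = {(0, 0): 0}                       # path metric per state
    paths = {(0, 0): []}                       # surviving input sequence
    for t, (r1, r2) in enumerate(received):
        new_metric, new_paths = {}, {}
        for (s1, s2), m_cost in metric.items():
            for bit in (0, 1):
                if t >= n_msg and bit == 1:    # flushing phase: zeros only
                    continue
                c1 = bit ^ s1 ^ s2             # stream 1: m + m_{j-1} + m_{j-2}
                c2 = bit ^ s2                  # stream 2: m + m_{j-2}
                branch = (c1 != r1) + (c2 != r2)   # Hamming branch metric
                nxt = (bit, s1)                # next register contents
                cost = m_cost + branch
                if cost < new_metric.get(nxt, INF):   # keep the survivor
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[(s1, s2)] + [bit]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)         # smallest final metric
    return paths[best][:n_msg]
```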
EE436 Lecture Notes 22
The Viterbi Algorithm – Example

In the circuit below, suppose that the encoder generates an all-zero sequence and
that the received sequence is (0100010000…), in which there are two errors due to
channel noise: one in the second bit and the other in the sixth bit.

We can show that this double-error pattern is correctable using the Viterbi algorithm.

[Figure: the (2, 1, 2) encoder with output streams c'_j (stream 1) and c''_j (stream 2)]
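Using the viterbi_decode sketch from the previous page, and assuming the ten received bits correspond to three message bits plus two flushing bits (the slide truncates the sequence, so this framing is our assumption):

```python
# Received sequence 01 00 01 00 00 (errors in bits 2 and 6):
received = [(0, 1), (0, 0), (0, 1), (0, 0), (0, 0)]
print(viterbi_decode(received, 3))   # -> [0, 0, 0]: both errors corrected
```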
EE436 Lecture Notes 23
The Viterbi Algorithm – Example

EE436 Lecture Notes 24


The Viterbi Algorithm – Exercise

Using the same circuit, suppose that the received sequence is 110111.
Using the Viterbi algorithm, what is the maximum likelihood estimate of the
transmitted code sequence? What are the original message bits?

EE436 Lecture Notes 25
