Digital Coding Lecture Slide 4
DIGITAL COMMUNICATION
Coding
[Figure: convolutional encoder with two output streams, c'_j (stream 1) and c''_j (stream 2), produced from the message bits]
The two encoded streams are interleaved to form the transmitted sequence:
C = c'_1 c''_1 c'_2 c''_2 c'_3 c''_3 ...
First, we find the impulse response of each stream to an input symbol 1.
Impulse response of stream 1 = (1 1 1)
Impulse response of stream 2 = (1 0 1)
Then, write the corresponding generator polynomial of each stream:
g^(i)(D) = g_0^(i) + g_1^(i) D + g_2^(i) D^2 + ... + g_M^(i) D^M
g^(1)(D) = 1 + D + D^2
g^(2)(D) = 1 + D^2
Then, find the output polynomial for each stream by multiplying the
generator polynomial and the message polynomial. For the message 10011,
the message polynomial is m(D) = 1 + D^3 + D^4.
c^(1)(D) = g^(1)(D) · m(D) = 1 + D + D^2 + D^3 + D^6
c^(2)(D) = g^(2)(D) · m(D) = 1 + D^2 + D^3 + D^4 + D^5 + D^6
So, the output sequence for stream 1 is 1111001, and the output sequence for
stream 2 is 1011111.
Interleave
C = 11,10,11,11,01,01,11
Original message (10011)
Encoded sequence (11101111010111)
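This worked example can be reproduced with a short script. The following is a minimal sketch (the helper names poly_mul_gf2 and conv_encode are illustrative, not from the notes): it multiplies the message polynomial by each generator polynomial over GF(2) and interleaves the two output streams, reproducing the encoded sequence above.

```python
# Minimal sketch: encode by multiplying the message polynomial with each
# generator polynomial over GF(2), then interleave the two streams c'_j, c''_j.

def poly_mul_gf2(a, b):
    """Multiply two binary polynomials given as coefficient lists (lowest power first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj            # addition over GF(2) is XOR
    return out

def conv_encode(message, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Encode with the two generators (impulse responses 111 and 101) and interleave."""
    s1 = poly_mul_gf2(message, list(g1))    # stream 1 output sequence
    s2 = poly_mul_gf2(message, list(g2))    # stream 2 output sequence
    return [bit for pair in zip(s1, s2) for bit in pair]

m = [1, 0, 0, 1, 1]                         # message 10011, lowest power first
print(conv_encode(m))                       # -> 1,1,1,0,1,1,1,1,0,1,0,1,1,1 = 11 10 11 11 01 01 11
```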
Convolutional Codes – Exercise
Consider a convolutional encoder with two streams of encoded bits, with the
message sequence 110111001 as its input.
[Figure: the two-stream convolutional encoder, with outputs c'_j (stream 1) and c''_j (stream 2)]
g^(1)(D) = 1 + D + D^2
g^(2)(D) = 1 + D^2
m(D) = 1 + D + D^3 + D^4 + D^5 + D^8
Then, find the output polynomial for each stream by multiplying the
generator polynomial and the message polynomial.
c^(1)(D) = g^(1)(D) · m(D) = 1 + D^5 + D^7 + D^8 + D^9 + D^10
c^(2)(D) = g^(2)(D) · m(D) = 1 + D + D^2 + D^4 + D^6 + D^7 + D^8 + D^10
So, the output sequence for stream 1 is 10000101111, and the output sequence
for stream 2 is 11101011101.
Interleave
C = 11 01 01 00 01 10 01 11 11 10 11
Original message (110111001)
Encoded sequence (11 01 01 00 01 10 01 11 11 10 11)
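As a quick check with the same conv_encode sketch as above (an illustrative helper, not from the notes), the exercise message gives the interleaved sequence quoted here.

```python
# Quick check of the exercise with the conv_encode sketch above.
m = [1, 1, 0, 1, 1, 1, 0, 0, 1]              # message 110111001
c = conv_encode(m)
print(''.join(map(str, c)))                  # -> 1101010001100111111011, i.e. 11 01 01 00 01 10 01 11 11 10 11
```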
Convolutional Codes – Code Tree, Trellis and State Diagram
[Figure: the two-stream convolutional encoder, with outputs c'_j (stream 1) and c''_j (stream 2)]
The decoding rule is said to be optimum when the probability of decoding error is
minimised
Let us say that both the transmitted code vector c and the received vector r
represent binary sequences of length N.
These two sequences may differ from each other in some locations because of
errors due to channel noise.
We then have
p(r|c) = Π_{i=1..N} p(r_i|c_i)
p(r_i|c_i) = p        if r_i ≠ c_i
p(r_i|c_i) = 1 - p    if r_i = c_i
Suppose also that the received vector r differs from the transmitted code vector c in
exactly d positions (where d is the Hamming distance between vectors r and c)
Taking the logarithm,
log p(r|c) = Σ_{i=1..N} log p(r_i|c_i)
which can be written as
log p(r|c) = d log p + (N - d) log(1 - p)
           = d log( p / (1 - p) ) + N log(1 - p)
Since the transition probability p < 1/2, the term log( p / (1 - p) ) is negative, so the
function is maximised when d is minimised (i.e. the smallest Hamming distance).
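A small numeric check of this claim (the values p = 0.01 and N = 14 are assumed purely for illustration, not taken from the notes):

```python
# Numeric check: for p < 1/2 the log-likelihood d*log(p/(1-p)) + N*log(1-p)
# falls as the Hamming distance d grows, so maximising it means choosing the
# smallest d.
import math

p, N = 0.01, 14

def log_likelihood(d):
    return d * math.log(p / (1 - p)) + N * math.log(1 - p)

for d in range(4):
    print(d, round(log_likelihood(d), 3))    # the printed values decrease with d
```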
The received vector r is compared with each possible transmitted code vector c
and the particular one “closest” to r is chosen as the correct transmitted code vector.
Since a code tree is equivalent to a trellis, we may limit our choice to the possible
paths in the trellis representation of the code.
The metric for a particular path is defined as the Hamming distance between the
coded sequence represented by that path and the received sequence.
For each state in the trellis, the algorithm compares the two paths entering the node,
and the path with the lower metric is retained.
The sequence along the path with the smallest metric is the maximum-likelihood
choice and represents the transmitted sequence.
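A compact hard-decision version of this rule can be sketched in code (the function viterbi_decode and its state convention are illustrative assumptions, not taken from the notes). For every trellis state it keeps the survivor path with the smallest accumulated Hamming distance, as described above.

```python
# Hard-decision Viterbi sketch for the rate-1/2 encoder with g1 = 1 + D + D^2
# and g2 = 1 + D^2 (four states).

def viterbi_decode(received_pairs):
    """Return (decoded bits, accumulated Hamming distance of the survivor)."""
    INF = float('inf')
    # state = (m_{j-1}, m_{j-2}); the encoder starts with a cleared register
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {s: [] for s in states}

    for r1, r2 in received_pairs:
        new_metric = {s: INF for s in states}
        new_path = {}
        for (m1, m2), cost in metric.items():
            if cost == INF:
                continue                      # state not yet reachable
            for bit in (0, 1):
                o1 = bit ^ m1 ^ m2            # stream 1 output: 1 + D + D^2
                o2 = bit ^ m2                 # stream 2 output: 1 + D^2
                ns = (bit, m1)                # next state after shifting in the new bit
                total = cost + (o1 != r1) + (o2 != r2)   # add branch Hamming distance
                if total < new_metric[ns]:    # keep the survivor with the lower metric
                    new_metric[ns] = total
                    new_path[ns] = path[(m1, m2)] + [bit]
        metric, path = new_metric, new_path

    best = min(metric, key=metric.get)        # state with the smallest final metric
    return path[best], metric[best]
```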
The Viterbi Algorithm – Example
In the circuit below, suppose that the encoder generates an all-zero sequence and
that the received sequence is (0100010000…), in which there are two errors due to
channel noise: one in the second bit and the other in the sixth bit.
We can show that this double-error pattern is correctable using the Viterbi algorithm.
[Figure: the two-stream convolutional encoder, with outputs c'_j (stream 1) and c''_j (stream 2)]
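Assuming the viterbi_decode sketch given earlier, this example can be checked numerically: the two channel errors are corrected and the all-zero message is recovered.

```python
# Check of the double-error example with the viterbi_decode sketch above.
rx = [(0, 1), (0, 0), (0, 1), (0, 0), (0, 0)]    # received 01 00 01 00 00
bits, distance = viterbi_decode(rx)
print(bits, distance)                            # -> [0, 0, 0, 0, 0] with metric 2
```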
The Viterbi Algorithm – Example
Using the same circuit, suppose that the received sequence is 110111.
Using the Viterbi algorithm, what is the corresponding encoded sequence that was
transmitted? What is the original message bit?
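One way to check your answer is to run the viterbi_decode sketch from above on the received pairs (the result itself is left for the exercise):

```python
# Check of the exercise using the viterbi_decode sketch above; the decoded
# bits and the surviving path metric are printed for comparison with your
# hand-worked trellis.
rx = [(1, 1), (0, 1), (1, 1)]    # received sequence 110111, taken in pairs
print(viterbi_decode(rx))
```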