• The information, consisting of individual bits, is fed into a shift register in order to be encoded.
• A convolutional encoder is generally characterized in the form (n, k, K) or (k/n, K), where:
– k is the number of inputs and n is the number of outputs (in practice, k = 1 is usually chosen)
– K is the constraint length of the convolutional code (the encoder has K − 1 memory elements)
Using the (7, 5)₈ NSC Code to Encode the Message 101000

[Figure: shift-register contents and encoder outputs at times t = 3 to t = 6]

c = [11 10 00 10 11 00]
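The encoding steps above are easy to reproduce in code. Below is a minimal Python sketch of the (7, 5)₈ NSC encoder (the function name encode_75 is ours, not from the slides); it assumes the message carries the trailing flushing zeros, as in the example.

```python
def encode_75(message_bits):
    """Encode a bit sequence with the (7, 5)_8 NSC code (K = 3, k = 1, n = 2).

    g1 = 111 -> c1 = u + s1 + s2, g2 = 101 -> c2 = u + s2 (addition mod 2),
    where u is the input bit and (s1, s2) are the two memory elements.
    """
    s1 = s2 = 0                     # K - 1 = 2 memory elements, initially zero
    codeword = []
    for u in message_bits:
        codeword.append((u ^ s1 ^ s2, u ^ s2))
        s1, s2 = u, s1              # shift the register
    return codeword

# Encoding the message 101000 (the trailing zeros flush the register):
print(encode_75([1, 0, 1, 0, 0, 0]))
# -> [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1), (0, 0)], i.e. 11 10 00 10 11 00
```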
State Table of a Convolutional Code
• The memory elements can contain four possible values: 00, 01, 10, 11
• These values are called states and only certain state transitions are allowed. For
example, we cannot go from state 00 to state 11 in one step or stay in state 10.
• All possible state transitions, along with their corresponding inputs and outputs, can be determined from the encoder and stored in a state table.
• The state table for the (7, 5)₈ NSC code is shown below:

Input | Initial State (S1 S2) | Next State (S1 S2) | Output (c1 c2)
  0   |         0 0           |        0 0         |      0 0
  1   |         0 0           |        1 0         |      1 1
  0   |         0 1           |        0 0         |      1 1
  1   |         0 1           |        1 0         |      0 0
  0   |         1 0           |        0 1         |      1 0
  1   |         1 0           |        1 1         |      0 1
  0   |         1 1           |        0 1         |      0 1
  1   |         1 1           |        1 1         |      1 0
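The table can also be generated mechanically from the generators rather than read off the encoder diagram. A minimal Python sketch (same bit conventions as the encoder sketch above; state_table is our name):

```python
def state_table():
    """List (input, initial state, next state, output) for the (7, 5)_8 code."""
    rows = []
    for s1, s2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:   # all four states
        for u in (0, 1):                              # both inputs
            output = (u ^ s1 ^ s2, u ^ s2)            # c1, c2
            rows.append((u, (s1, s2), (u, s1), output))
    return rows

for u, state, nxt, out in state_table():
    print(u, state, "->", nxt, "out", out)
```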
State Diagram of a Convolutional Code
• A more compact form of the state table is the state diagram
• It is much easier to determine a codeword corresponding to a message using
the state diagram
• Starting from state 00, trace the message through the state diagram and
record each output
[Figure: state diagram of the (7, 5)₈ NSC code; the four states 00, 01, 10, 11 are connected by arrows labelled Input/Output, e.g. 1/10]

What is the codeword for the message m = [1 0 0 1 1 1]?
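Tracing a message through the diagram is a lookup per input bit. A short Python sketch that answers the question (the transition map T is built from the state table rather than the drawing):

```python
# Transition map: (state, input) -> (next state, output) for the (7, 5)_8 code
T = {((s1, s2), u): ((u, s1), (u ^ s1 ^ s2, u ^ s2))
     for s1 in (0, 1) for s2 in (0, 1) for u in (0, 1)}

state, codeword = (0, 0), []          # start from state 00
for u in [1, 0, 0, 1, 1, 1]:          # the message m
    state, out = T[(state, u)]        # follow the transition, record its output
    codeword.append(out)
print(codeword)
# -> [(1,1), (1,0), (1,1), (1,1), (0,1), (1,0)], i.e. 11 10 11 11 01 10
```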
Tree Diagram
• A tree diagram is obtained from the state table and is one method of showing all possible code words up to a given time.
• An input of 0 is represented by moving to an upper branch in the tree and an input of 1 corresponds to moving to a lower branch.
• The states are denoted by a, b, c and d: a = 00, b = 10, c = 01, d = 11.
• The red line is the input 0, 1, 0, 1, which has the codeword 00, 11, 10, 00.

[Figure: tree diagram for times 1 to 4, with branches labelled by the states a, b, c, d and their outputs]
Trellis Diagrams

[Figure: trellis diagram of the (7, 5)₈ NSC code; states on the vertical axis, time on the horizontal axis, branches labelled Input/Output, e.g. 1/10]
Example 2: Consider the binary convolutional encoder with constraint length K = 3, k = 1, and n = 3. The generators are g1 = [100], g2 = [101], and g3 = [111]. The generators are more conveniently given in octal form as (4, 5, 7)₈.

[Figure: encoder with register stages Q1 Q2 Q3]

g1 = [1 0 0], C1 = Q1
g2 = [1 0 1], C2 = Q1 + Q3
g3 = [1 1 1], C3 = Q1 + Q2 + Q3
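A sketch of this rate-1/3 encoder in the same style as before (encode_457 is our name; Q1 is taken to be the incoming bit and Q2, Q3 the memory elements, matching the equations above):

```python
def encode_457(message_bits):
    """Encode with the (4, 5, 7)_8 code: K = 3, k = 1, n = 3."""
    q2 = q3 = 0                          # two memory elements
    codeword = []
    for q1 in message_bits:              # Q1 holds the incoming bit
        codeword.append((q1,             # g1 = 100: C1 = Q1
                         q1 ^ q3,        # g2 = 101: C2 = Q1 + Q3
                         q1 ^ q2 ^ q3))  # g3 = 111: C3 = Q1 + Q2 + Q3
        q2, q3 = q1, q2                  # shift the register
    return codeword

print(encode_457([1, 0, 1]))
# -> [(1, 1, 1), (0, 0, 1), (1, 0, 0)]
```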
State Table of a Convolutional Code

C1 = Q1 + Q3 + Q4
C2 = Q1 + Q2 + Q4
C3 = Q1 + Q3

[Table: state table for this encoder]

State Diagram of a Convolutional Code

[Figure: state diagram]

Trellis Diagram

[Figure: trellis diagram]
Viterbi’s Algorithm
• When two paths converge into a state, the less likely path is discarded.
• Finally, the path of length n with the smallest accumulated distance is the codeword that maximises P(r|c).
Viterbi’s Algorithm
• A branch metric λt(s, s′) is the distance between the received output and the branch output from a previous state s to a current state s′ at time t.
• A path metric Mt(s′) is the accumulated distance of a path at state s′.
• To obtain the path metrics at time t, we add the previous path metrics that are connected to each state to the corresponding branch metrics:

Mt(s′) = min{Mt−1(s1) + λt(s1, s′), Mt−1(s2) + λt(s2, s′)}

[Figure: previous states s1 and s2, with path metrics Mt−1(s1) and Mt−1(s2), connected by branches with metrics λt(s1, s′) and λt(s2, s′) to the current state s′]
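This update is the add-compare-select (ACS) step. A minimal Python sketch of it, using the Hamming distance as the branch metric as in the hard-decision examples below (hamming and acs are our names):

```python
def hamming(a, b):
    """Branch metric: Hamming distance between two bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def acs(prev_metrics, branches, received):
    """Add-compare-select for one state s'.

    branches: list of (previous state, branch output) pairs entering s'.
    Returns (surviving path metric, surviving previous state).
    """
    return min((prev_metrics[s] + hamming(out, received), s)
               for s, out in branches)

# State 00 is entered from 00 (branch 0/00) and from 01 (branch 0/11):
print(acs({(0, 0): 3, (0, 1): 0},
          [((0, 0), (0, 0)), ((0, 1), (1, 1))],
          received=(0, 0)))
# -> (2, (0, 1)): the path through state 01 survives with metric 0 + 2 = 2
```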
Viterbi Decoding Example with No Errors

Received word: 11 10 00 10 11

[Trellis figures for t = 1 to t = 5, one slide per time step. At each step the branch metric (the Hamming distance between the received pair and the branch output) is added to the previous path metric; for example, at t = 1 state 00 gets M1(00) = 0 + 2 = 2 via branch 0/00 and state 10 gets M1(10) = 0 + 0 = 0 via branch 1/11. The surviving path ends at state 00 with accumulated distance 0.]
Viterbi Decoding Example with No Errors
• At t = 3, each state has two branches entering it. We calculate the path metrics for each branch entering the state and keep the smallest one; the other branch is discarded, eliminating a competitor path.
• We can see that Viterbi’s algorithm has found a single path in the
trellis that matches the received word.
• In the next example, we add one error to the received word and see if
Viterbi’s algorithm is able to correct it.
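Putting the pieces together, here is a compact hard-decision Viterbi decoder sketch for the (7, 5)₈ code (viterbi_75 is our name). It reproduces this example and, as the second call shows, also corrects the single error introduced in the next example.

```python
def viterbi_75(received_pairs):
    """Hard-decision Viterbi decoding of the (7, 5)_8 NSC code."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metrics = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for r in received_pairs:
        new_metrics, new_paths = {}, {}
        for s1, s2 in states:
            for u in (0, 1):                     # both branches out of (s1, s2)
                nxt = (u, s1)
                out = (u ^ s1 ^ s2, u ^ s2)
                m = metrics[(s1, s2)] + sum(a != b for a, b in zip(out, r))
                if nxt not in new_metrics or m < new_metrics[nxt]:
                    new_metrics[nxt] = m         # add-compare-select
                    new_paths[nxt] = paths[(s1, s2)] + [u]
        metrics, paths = new_metrics, new_paths
    best = min(states, key=lambda s: metrics[s])
    return paths[best], metrics[best]

print(viterbi_75([(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]))  # no errors
print(viterbi_75([(1, 1), (1, 0), (0, 1), (1, 0), (1, 1)]))  # one error
# Both recover m = [1, 0, 1, 0, 0], with metrics 0 and 1 respectively.
```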
Viterbi Decoding Example with One Error

Received word: 11 10 01 10 11 (one error in the third pair, 01 instead of 00)

[Trellis figures for t = 3 to t = 5 and the completed trellis. The erroneous pair raises the branch metrics at t = 3, but the surviving path still ends at state 00 with the smallest accumulated distance, 1, so Viterbi's algorithm corrects the single bit error and again decodes m = [1 0 1 0 0].]
Draw the binary convolutional decoder for the code given in Example 2 (K = 3, k = 1, and n = 3).

Viterbi Decoding: K = 3, k = 1, n = 3 Convolutional Decoder

[Figure: decoder trellis]
Soft-Decision Decoding

[Figure: channel → demodulator → decoder chain. With hard decisions the demodulator passes bits to the decoder; with soft decisions it passes real values (e.g. 1.3, 0.7) to a decoder that uses Euclidean distance.]
• The only difference is that the branch metrics are now squared Euclidean distances and the trellis outputs are the modulated symbols (±1), not the coded bits.
• Now we will repeat the previous hard-decision Viterbi algorithm example, but with soft inputs.

Message: m = [1 0 1 0 0]
Codeword: c = [11 10 00 10 11]
After BPSK modulation (0 → −1, 1 → +1): x = [+1 +1, +1 −1, −1 −1, +1 −1, +1 +1]
Received values from the demodulator: r = [0.4, 0.6, 0.5, −0.3, −0.8, −0.7, 0.9, −0.2, 0.4, 0.3]
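The soft-decision decoder runs the same trellis search; only the branch metric changes. A minimal Python sketch (soft_viterbi_75 is our name; the bit-to-symbol map 0 → −1, 1 → +1 follows the slide):

```python
def soft_viterbi_75(received):
    """Soft-decision Viterbi decoding of the (7, 5)_8 code (BPSK symbols)."""
    bpsk = lambda bit: 1.0 if bit else -1.0
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metrics = {s: (0.0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for r1, r2 in received:
        new_metrics, new_paths = {}, {}
        for s1, s2 in states:
            for u in (0, 1):
                nxt = (u, s1)
                x1, x2 = bpsk(u ^ s1 ^ s2), bpsk(u ^ s2)   # branch symbols
                m = metrics[(s1, s2)] + (r1 - x1) ** 2 + (r2 - x2) ** 2
                if nxt not in new_metrics or m < new_metrics[nxt]:
                    new_metrics[nxt], new_paths[nxt] = m, paths[(s1, s2)] + [u]
        metrics, paths = new_metrics, new_paths
    return paths[min(states, key=lambda s: metrics[s])]

r = [(0.4, 0.6), (0.5, -0.3), (-0.8, -0.7), (0.9, -0.2), (0.4, 0.3)]
print(soft_viterbi_75(r))   # -> [1, 0, 1, 0, 0]
```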
Soft-Decision Viterbi Decoding Example

[Trellis figure: the branch metrics are squared Euclidean distances between the received pair and the branch symbols. For example, at t = 1 the branch 1/+1+1 has metric (0.4 − 1)² + (0.6 − 1)² = 0.52 and the branch 0/−1−1 has metric (0.4 + 1)² + (0.6 + 1)² = 4.52. The path with the smallest accumulated squared distance again corresponds to m = [1 0 1 0 0].]