The Viterbi algorithm is a maximum-likelihood decoder, which is optimal for any discrete memoryless channel. It proceeds in three basic steps. In computational terms,
the so-called add–compare–select (ACS) operation in Step 2 is at the heart of the
Viterbi algorithm.
Initialization
Set the metric of the all-zero state of the trellis to zero.
Computation Step 1: time-unit j
Start the computation at some time-unit j and determine the metric for the path that
enters each state of the trellis. Hence, identify the survivor and store the metric for each
one of the states.
Computation Step 2: time-unit j + 1
For the next time-unit j + 1, determine the metrics for all $2^{K-1}$ paths that enter a state, where $K$ is the constraint length of the convolutional encoder; hence do the following:
a. Add the metrics entering the state to the metric of the survivor at the preceding
time-unit j;
b. Compare the metrics of all $2^{K-1}$ paths entering the state;
c. Select the survivor with the largest metric, store it along with its metric, and
discard all other paths in the trellis.
Computation Step 3: continuation of the search to convergence
Repeat Step 2 for time-unit $j < L + L'$, where $L$ is the length of the message sequence and $L'$ is the length of the termination sequence.
Stop the computation once the time-unit $j = L + L'$ is reached.
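As a concrete illustration of the ACS recursion in Step 2, the following is a minimal Python sketch of hard-decision Viterbi decoding. The rate-1/2, constraint-length K = 3 encoder with generator sequences (1,1,1) and (1,0,1), the function names, and the bit-agreement metric are all assumptions for illustration; the agreement count is an affine transform of the log-likelihood metric for a binary symmetric channel, so the survivor with the largest metric is selected exactly as in Step 2c.

```python
# Sketch of ACS-based Viterbi decoding for an assumed rate-1/2, K = 3
# convolutional code with generators (1,1,1) and (1,0,1).
import math

K = 3                          # constraint length (assumed)
N_STATES = 2 ** (K - 1)        # number of trellis states

def branch(state, bit):
    """Encoder output pair and next state for input `bit` from `state`."""
    s1, s0 = (state >> 1) & 1, state & 1
    out = (bit ^ s1 ^ s0, bit ^ s0)      # generators (1,1,1) and (1,0,1)
    return out, (bit << 1) | s1

def viterbi_decode(received):
    """`received` is a list of hard-decision bit pairs, one per time-unit."""
    # Initialization: metric of the all-zero state is zero; all other
    # states are unreachable at time-unit j = 0.
    metric = [0.0] + [-math.inf] * (N_STATES - 1)
    survivor = [[] for _ in range(N_STATES)]
    for r in received:                   # time-units j = 1, 2, ...
        new_metric = [-math.inf] * N_STATES
        new_survivor = [None] * N_STATES
        for state in range(N_STATES):
            if metric[state] == -math.inf:
                continue
            for bit in (0, 1):
                out, nxt = branch(state, bit)
                # Add: survivor metric plus the branch (agreement) metric
                m = metric[state] + (out[0] == r[0]) + (out[1] == r[1])
                # Compare and select: keep the path with the largest metric
                if m > new_metric[nxt]:
                    new_metric[nxt] = m
                    new_survivor[nxt] = survivor[state] + [bit]
        metric, survivor = new_metric, new_survivor
    # For a terminated code, the maximum-likelihood path ends in state 0.
    return survivor[0]

def encode(bits):
    """Terminated encoding: append K - 1 zeros to flush the registers."""
    state, pairs = 0, []
    for b in bits + [0] * (K - 1):
        out, state = branch(state, b)
        pairs.append(out)
    return pairs

message = [1, 0, 1, 1, 0]
rx = [list(p) for p in encode(message)]
rx[1][0] ^= 1                            # first channel error
rx[3][1] ^= 1                            # second channel error
assert viterbi_decode(rx)[: len(message)] == message  # both errors corrected
```

With this code's free distance of 5, the two injected errors are corrected, consistent with the discussion of error-correcting capability later in the section.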
[Figure: Steps of the Viterbi algorithm applied to the received sequence 01 00 01 00 00; each panel shows the trellis at time-units j = 1 through j = 5, with the accumulated path metrics and the survivors marked.]
[Figure 10.19: Illustrating breakdown of the Viterbi algorithm in Example 6; each panel shows the trellis at time-units j = 1 through j = 4 for the received sequence 11 00 01 00, with the accumulated path metrics marked.]
[Figure 10.20: Signal-flow graph with nodes $a_0$, $b$, $c$, $d$, $a_1$; the branches are labeled by powers of $D$ and $L$ (e.g., $DL$, $D^2L$, $L$).]
3. The signal at a node is applied equally to all the branches outgoing from that node.
4. The transfer function of the graph is the ratio of the output signal to the input signal.
Returning to the signal-flow graph of Figure 10.20, the exponent of D on a branch in this graph describes the Hamming weight of the encoder output corresponding to that branch; the symbol D used here should not be confused with the unit-delay variable of Section 10.6, nor should the symbol L be confused with the length of the message sequence. The exponent of L is always equal to one, since the length of each branch is one.
Let T(D,L) denote the transfer function of the signal-flow graph, with D and L playing the
role of dummy variables. For the example of Figure 10.20, we may readily use rules 1, 2,
and 3 to obtain the following input-output relations:
$$
\begin{aligned}
b   &= D^2 L\, a_0 + L c \\
c   &= D L\, b + D L\, d \\
d   &= D L\, b + D L\, d \\
a_1 &= D^2 L\, c
\end{aligned}
\qquad (10.58)
$$
where $a_0$, $b$, $c$, $d$, and $a_1$ denote the node signals of the graph. Solving the system of four equations in (10.58) for the ratio $a_1 / a_0$, we obtain the transfer function
$$T(D, L) = \frac{D^5 L^3}{1 - DL(1 + L)} \qquad (10.59)$$
Using the binomial expansion, we may equivalently express T(D,L) as follows:
$$T(D, L) = D^5 L^3 \left[ 1 - DL(1 + L) \right]^{-1} = D^5 L^3 \sum_{i=0}^{\infty} \left[ DL(1 + L) \right]^i$$
Setting L = 1 in this formula, we thus get the distance transfer function expressed in the
form of a power series as follows:
$$T(D, 1) = D^5 + 2D^6 + 4D^7 + \cdots \qquad (10.60)$$
Since the free distance is the minimum Hamming distance between any two codewords in
the code and the distance transfer function T(D,1) enumerates the number of codewords
that are a given distance apart, it follows that the exponent of the first term in the
expansion of T(D,1) in (10.60) defines the free distance. Thus, on the basis of this equation, the convolutional code of Figure 10.13 has the free distance $d_{\text{free}} = 5$.
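The algebra leading from (10.58) to (10.60) is easy to cross-check symbolically. The following sympy sketch solves the node equations for the ratio $a_1/a_0$ and then expands $T(D,1)$ as a power series; it simply reproduces the results above.

```python
# Symbolic check of (10.58)-(10.60): solve the node equations for a1/a0
# and expand the distance transfer function T(D,1) as a power series.
import sympy as sp

D, L, a0, b, c, d, a1 = sp.symbols('D L a0 b c d a1')
solution = sp.solve(
    [sp.Eq(b, D**2 * L * a0 + L * c),
     sp.Eq(c, D * L * b + D * L * d),
     sp.Eq(d, D * L * b + D * L * d),
     sp.Eq(a1, D**2 * L * c)],
    [b, c, d, a1], dict=True)[0]

T = sp.cancel(solution[a1] / a0)
print(T)                                 # equals D**5*L**3/(1 - D*L*(1+L)), i.e., (10.59)
print(sp.series(T.subs(L, 1), D, 0, 8))  # D**5 + 2*D**6 + 4*D**7 + O(D**8), i.e., (10.60)
```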
This result indicates that up to two errors in the received sequence are correctable, as
two or fewer transmission errors will cause the received sequence to be at most at a
Hamming distance of 2 from the transmitted sequence but at least at a Hamming distance
of 3 from any other code sequence in the code. In other words, in spite of the presence of any pair of transmission errors, the received sequence remains closer to the transmitted sequence than to any other possible code sequence. However, this statement is no longer true
if there are three or more closely spaced transmission errors in the received sequence. The
observations made here reconfirm the results reported earlier in Examples 5 and 6.
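The two-error figure follows from the standard minimum-distance argument, which guarantees correction of up to

$$t = \left\lfloor \frac{d_{\text{free}} - 1}{2} \right\rfloor = \left\lfloor \frac{5 - 1}{2} \right\rfloor = 2$$

errors in the received sequence.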
Comparing (10.61) and (10.62) for cases 1 and 2, respectively, we see that the asymptotic
coding gain for the binary-input AWGN channel is greater than that for the binary
symmetric channel by 3 dB. In other words, for large $E_b/N_0$, the transmitter for a binary
symmetric channel must generate an additional 3 dB of signal energy (or power) over that
for a binary-input AWGN channel if we are to achieve the same error performance.
Clearly, there is an advantage to be gained by using an unquantized demodulator output in
place of making hard decisions. This improvement in performance, however, is attained at
the cost of increased decoder complexity due to the requirement for accepting analog
inputs.
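Equations (10.61) and (10.62) fall outside this excerpt; the 3 dB figure is, however, consistent with the standard asymptotic coding-gain expressions for a rate-$r$ code with free distance $d_{\text{free}}$ (assumed here to match the form of those two equations):

$$G_a^{\text{BSC}} = 10 \log_{10}\!\left( \frac{r\, d_{\text{free}}}{2} \right) \text{dB}, \qquad G_a^{\text{AWGN}} = 10 \log_{10}\!\left( r\, d_{\text{free}} \right) \text{dB}$$

whose difference is $10 \log_{10} 2 \approx 3$ dB.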
It turns out that the asymptotic coding gain for a binary-input AWGN channel is
approximated to within about 0.25 dB by a binary-input, Q-ary output discrete memoryless
channel with the number of representation levels Q = 8. This means that, for practical