A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in The Log Domain
Patrick Robertson, Emmanuelle Villebrun and Peter Hoeher
the sequence of information bits $x^s = (x_1, \ldots, x_N) = d$, since the encoder is systematic. The other is the 'parity information' sequence $x^p = (x_1^p, \ldots, x_N^p)$, with

$$x_k^p = \sum_{i=1}^{M} g_i^f \, a_{k-i} \pmod{2},$$

where $(g_1^f, \ldots, g_M^f)$ is the feed-forward generator and $a_k = d_k + \sum_{i=1}^{M} g_i^b \, a_{k-i} \pmod{2}$; similarly, $(g_1^b, \ldots, g_M^b)$ is the feed-back generator of the encoder.

The sequences $x^s$ and $x^p$ may be punctured, and are then modulated and transmitted over a channel. In this work, we have assumed BPSK modulation and an AWGN channel with one-sided noise power spectral density $N_0$. Let the corresponding received sequences be $y^s$ and $y^p$. For brevity, the sequence $y = (y^s, y^p)$ will refer to the sequence of pairs of received systematic and parity symbols.

2.3 The Max-Log-MAP Algorithm

As we already said in the introduction, the MAP algorithm is likely to be considered too complex for implementation in a real system [5]. To avoid the large number of complicated operations, and also number representation problems, one no longer calculates $\gamma_i((y_k^s, y_k^p), S_{k-1}, S_k)$, $\alpha_k(S_k)$ and $\beta_k(S_k)$. One computes and works with the logarithms of these values instead [3]-[5]. The log-domain branch metric is obtained by taking the logarithm of $\gamma_i((y_k^s, y_k^p), S_{k-1}, S_k)$ derived in (4).
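As a concrete illustration of this recursive systematic encoding, the following sketch implements the parity recursion above for a toy memory-2 code. The generator taps chosen here are illustrative assumptions, not the generators used in the paper's simulations:

```python
# Sketch of a recursive systematic convolutional (RSC) encoder:
#   a_k   = d_k XOR sum_i g_i^b * a_{k-i}   (feed-back recursion, mod 2)
#   x_k^p = sum_i g_i^f * a_{k-i}           (parity output, mod 2)
# The taps below (M = 2) are made-up for illustration only.

def rsc_encode(d):
    """Return (systematic, parity) bit sequences for info bits d."""
    g_fb = (1, 1)   # feed-back taps  (g_1^b, g_2^b) -- assumed values
    g_ff = (1, 1)   # feed-forward taps (g_1^f, g_2^f) -- assumed values
    a = [0, 0]      # shift register holding a_{k-1}, a_{k-2}
    xs, xp = [], []
    for dk in d:
        # feed-back recursion for a_k
        ak = dk ^ (g_fb[0] & a[0]) ^ (g_fb[1] & a[1])
        # parity bit from the past register contents
        pk = (g_ff[0] & a[0]) ^ (g_ff[1] & a[1])
        xs.append(dk)        # systematic output equals the info bit
        xp.append(pk)
        a = [ak, a[0]]       # shift the register
    return xs, xp

xs, xp = rsc_encode([1, 0, 1])
```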
$$\alpha_k(S_k) = \sum_{S_{k-1}} \sum_{i=0}^{1} \gamma_i((y_k^s, y_k^p), S_{k-1}, S_k) \cdot \alpha_{k-1}(S_{k-1}),$$

and the backward recursion as:

$$\beta_{k-1}(S_{k-1}) = \sum_{S_k} \sum_{i=0}^{1} \gamma_i((y_k^s, y_k^p), S_{k-1}, S_k) \cdot \beta_k(S_k). \qquad (18)$$

To obtain a simple solution, we use the following approximation:

$$\ln \sum_{j} e^{\delta_j} \approx \max_{j} \delta_j,$$

where $\max_{i \in \{1,\ldots,n\}} \delta_i$ can be calculated by successively using $n - 1$ maximum functions over only two values. From now on, we work with the log-domain quantities $\bar{\alpha}_k = \ln \alpha_k$, $\bar{\beta}_k = \ln \beta_k$ and $\bar{\gamma}_i = \ln \gamma_i$.
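The accuracy of this approximation is easy to probe numerically. The sketch below (illustrative, not from the paper) compares the exact log-sum, computed stably via the pairwise relation $\ln(e^a + e^b) = \max(a, b) + \ln(1 + e^{-|a-b|})$, with its max-log counterpart:

```python
import math

def log_sum_exact(deltas):
    """ln(sum_i e^{delta_i}), via the running pairwise Jacobian logarithm."""
    acc = deltas[0]
    for d in deltas[1:]:
        # ln(e^a + e^b) = max(a, b) + ln(1 + e^{-|a-b|})
        acc = max(acc, d) + math.log1p(math.exp(-abs(acc - d)))
    return acc

def log_sum_maxlog(deltas):
    """Max-Log approximation: ln(sum_i e^{delta_i}) ~ max_i delta_i."""
    return max(deltas)

# The gap equals ln(n) for n equal operands and shrinks quickly
# once one delta dominates the others.
gap_equal = log_sum_exact([0.0, 0.0]) - log_sum_maxlog([0.0, 0.0])
gap_dominant = log_sum_exact([5.0, 0.0]) - log_sum_maxlog([5.0, 0.0])
```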
hence $\ln \Pr\{S_k \mid S_{k-1}\} = -\ln(1 + e^{L(d_k)})$. Similarly, we can approximate $\ln \Pr\{S_k \mid S_{k-1}\} \approx -\max(0, L(d_k))$.

By applying the recursive definition of $\bar{\alpha}_{N-1}(S_{N-1})$ (without normalization) we obtain:

$$M(d_N) = \max_{S_{N-1}} \left\{ \bar{\gamma}(y_N, S_{N-1}, S_N = 0) + \bar{\alpha}_{N-1}(S_{N-1}) \right\}. \qquad (19)$$
We apply the recursive definitions of $\bar{\alpha}_{k-1}(S_{k-1})$ and $\bar{\beta}_{k-1}(S_{k-1})$, so that we obtain:

$$\max_{S_k} \left\{ \bar{\alpha}_k(S_k) + \bar{\beta}_k(S_k) \right\} = M(d_k). \qquad (24)$$

We can deduce from this recursion step and from the initial step, where $k = N$, that $\forall k \in \{1, \ldots, N\}$:

$$M(d_k) = M(d_N) = M + C. \qquad (25)$$

[Figure: trellis comparison of MAP and Max-Log-MAP decoding of $d_k$; in the MAP algorithm all paths are considered, whereas in the Max-Log-MAP only two paths are considered.]
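The log-domain forward recursion underlying these expressions can be sketched on a toy two-state trellis as follows; the branch metrics here are made-up numbers, not derived from the channel model above:

```python
# Sketch of the Max-Log-MAP forward recursion
#   alpha_k(S_k) = max over (S_{k-1}, i) of
#                  [ gamma_i(S_{k-1}, S_k) + alpha_{k-1}(S_{k-1}) ]
# on a toy 2-state trellis with illustrative branch metrics.

NEG_INF = float("-inf")

def forward_recursion(alpha_prev, branches):
    """One trellis step; branches is a list of (s_prev, s_next, gamma)."""
    alpha = [NEG_INF] * len(alpha_prev)
    for s_prev, s_next, gamma in branches:
        # the MAP sum over incoming branches is replaced by a maximization
        alpha[s_next] = max(alpha[s_next], alpha_prev[s_prev] + gamma)
    return alpha

alpha0 = [0.0, NEG_INF]                 # trellis starts in state 0
step = [(0, 0, -1.0), (0, 1, -0.2),     # branches for i = 0 and i = 1
        (1, 0, -0.5), (1, 1, -2.0)]
alpha1 = forward_recursion(alpha0, step)
```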
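The Log-MAP, in contrast, corrects the maximization with the Jacobian logarithm, $\ln(e^{\delta_1} + e^{\delta_2}) = \max(\delta_1, \delta_2) + f_c(|\delta_1 - \delta_2|)$, where the correction $f_c(x) = \ln(1 + e^{-x})$ can be stored in a small lookup table. A hedged sketch of such a table-based $\max^{*}$ operation; the eight-entry size matches the correction table mentioned in the simulation results, but the step spacing is an illustrative assumption:

```python
import math

# Jacobian logarithm: ln(e^a + e^b) = max(a, b) + f_c(|a - b|),
# with f_c(x) = ln(1 + e^{-x}) precomputed in a lookup table.
# Eight entries as in the paper's correction table; STEP is assumed.

STEP = 0.5
TABLE = [math.log1p(math.exp(-i * STEP)) for i in range(8)]

def max_star(a, b):
    """max* via max plus table-lookup correction (Log-MAP building block)."""
    diff = abs(a - b)
    idx = int(diff / STEP)
    corr = TABLE[idx] if idx < len(TABLE) else 0.0  # f_c ~ 0 for large |a-b|
    return max(a, b) + corr

# compare against the exact Jacobian logarithm
exact = math.log(math.exp(1.2) + math.exp(0.4))
approx = max_star(1.2, 0.4)
```

For large $|a - b|$ the correction vanishes and $\max^{*}$ degenerates to the plain maximum of the Max-Log-MAP, which is why the table can stay so small.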
5 Simulation Results for Application in a Turbo-Code System

Figure 3 shows the BER for decoding Turbo-Codes with the MAP, Log-MAP, Max-Log-MAP, and SOVA, respectively, taking no quantization effects into account. However, the Log-MAP is applied with an 8-value correction table; a loss due to the 3-bit quantization of the correction table is not visible. Results for the SOVA are taken from [7].

Figure 3: BER for Turbo-decoding with MAP, Log-MAP, Max-Log-MAP and SOVA (8 iterations). N = 1024, M = 4. [Plot: BER from 10^0 down to 10^-8 versus E_b/N_0 from 1.0 to 2.5 dB.]

Figure 4: BER for Turbo-decoding with MAP, Log-MAP, and quantized Log-MAP (8 iterations). [Plot axes as in Figure 3.]

6 Conclusions

We have demonstrated a Log-MAP algorithm that is equivalent to the (true) symbol-by-symbol MAP algorithm, i.e., is optimum for estimating the states or outputs of a Markov process. However, the novel implementation works exclusively in the logarithmic domain, thus avoiding the basic problems of the symbol-by-symbol MAP algorithm. The difference between our Log-MAP algorithm and the known Max-Log-MAP algorithm, which just approximates the MAP algorithm, is the substitution of logarithms by the Jacobian logarithm. This correction is simple to implement and quite insensitive to quantization; a loss due to quantization of the correction function is not visible. Further implementation issues concerning continuous data ("real-time implementation") and simplifications valid for feed-forward trellises [5] are still applicable.

We have compared the MAP, (Max-)Log-MAP and SOVA from a theoretical point of view to illuminate their commonalities and differences. As a practical example forming the basis for simulations, we considered Turbo-decoding, where recursive systematic convolutional (RSC) component codes have been decoded with the three algorithms. Quantization of the whole Log-MAP algorithm was also investigated: the loss is about 0.5 dB. This can probably be improved, but Turbo-Codes have a huge variation of variables' ranges as a result of iterative decoding. Finally, we have compared the complexity of the three algorithms. The number of operations of the Log-MAP is about twice the number of operations of the SOVA; however, the former is more suited to parallel processing [3].

We conclude that the Log-MAP is particularly suitable for decoding Turbo-Codes, since in this challenging application we have a very low signal-to-noise ratio but a small number of states, so that the additional complexity is less pronounced.

Acknowledgements

The authors would especially like to thank Dr. J. Hagenauer and Dr. J. Huber for valuable discussions and comments.

[5] ..., in Proc. ITG Tagung, Codierung für Quelle, Kanal und Übertragung, pp. 41-48, October 1994.

[6] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. ICC '93, pp. 1064-1070, May 1993.

[7] J. Hagenauer, P. Robertson, and L. Papke, "Iterative ("Turbo") decoding of systematic convolutional codes with the MAP and SOVA algorithms," in Proc. ITG Tagung, Codierung für Quelle, Kanal und Übertragung, pp. 21-29, October 1994.

[8] P. Robertson, "Illuminating the structure of code and decoder for parallel concatenated recursive systematic (turbo) codes," in Proc. GLOBECOM '94, pp. 1298-1303, December 1994.

[9] E. Villebrun, "Turbo-decoding with close-to-optimal MAP algorithms," Diploma thesis, TU Munich, September 1994.

[10] G. D. Forney, "The Viterbi algorithm," Proc. of the IEEE, vol. 61, pp. 268-278, March 1973.