A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in The Log Domain

The document presents a Log-MAP algorithm for estimating states of a Markov process, addressing the complexities of the traditional MAP algorithm and its approximations like Max-Log-MAP and SOVA. The Log-MAP algorithm is shown to be equivalent to the true MAP without its disadvantages, particularly in low SNR scenarios. The paper includes theoretical comparisons, practical applications in Turbo decoding, and discusses the computational complexities of the various algorithms.


A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in the Log Domain

Patrick Robertson, Emmanuelle Villebrun and Peter Hoeher

Institute for Communications Technology, German Aerospace Research Establishment (DLR)
D-82230 Oberpfaffenhofen, Germany, Tel.: ++49 8153 28 2808; Email: [email protected]

Abstract

For estimating the states or outputs of a Markov process, the symbol-by-symbol MAP algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of non-linear functions and a high number of additions and multiplications. MAP-like algorithms operating in the logarithmic domain presented in the past solve the numerical problem and reduce the computational complexity, but are suboptimal especially at low SNR (a common example is the Max-Log-MAP because of its use of the max function). A further simplification yields the soft-output Viterbi algorithm (SOVA).
In this paper, we present a Log-MAP algorithm that avoids the approximations in the Max-Log-MAP algorithm and hence is equivalent to the true MAP, but without its major disadvantages. We compare the (Log-)MAP, Max-Log-MAP and SOVA from a theoretical point of view to illuminate their commonalities and differences. As a practical example forming the basis for simulations, we consider Turbo decoding, where recursive systematic convolutional component codes are decoded with the three algorithms, and we also demonstrate the practical suitability of the Log-MAP by including quantization effects. The SOVA is, at a BER of 10^-4, approximately 0.7 dB inferior to the (Log-)MAP, the Max-Log-MAP lying roughly in between. We also present some complexity comparisons and conclude that the three algorithms increase in complexity in the order of their optimality.

1. Introduction

We will consider trellis-based soft-output decoding algorithms delivering additional reliability information together with hard decisions. The Bahl-Jelinek algorithm, also known as the symbol-by-symbol MAP algorithm (MAP algorithm for short), is optimal for estimating the states or outputs of a Markov process observed in white noise [1]. However, this algorithm is perhaps too difficult in practice, basically because of the numerical representation of probabilities, non-linear functions and because of mixed multiplications and additions of these values.
Some approximations of the MAP algorithm have been derived, such as the soft-output Viterbi algorithm (SOVA) [2] and the Max-Log-MAP algorithm [3, 4, 5]. In both algorithms, processing is exclusively in the logarithmic domain; values and operations (addition and max-function) are easier to handle. However, both algorithms are suboptimal at low signal-to-noise ratios, where we use Turbo-Codes [6, 7, 8, 9], for example.
In this paper, we will modify the Max-Log-MAP algorithm through the use of a simple correction function at each max-operation [9]. This algorithm, to be called the Log-MAP algorithm, is equivalent to the MAP algorithm in terms of performance, but without its problems of implementation. The correction needs just an additional one-dimensional table look-up and an addition per max-operation.
The organization of the paper is as follows: After reviewing the MAP and Max-Log-MAP algorithms, we will derive the Log-MAP algorithm in Section 2. In Section 3, we will compare these algorithms with the VA and the SOVA. Complexity comparisons and quantization issues are covered in Section 4. Finally, numerical results are presented in Section 5 by applying the addressed algorithms in a Turbo-Code system.

2 Definition of the MAP, Max-Log-MAP, and Log-MAP Algorithms

2.1 The Encoder and Notation

Since we will study the behaviour of the (Max-)Log-MAP algorithm applied to the decoding of convolutional codes (and in particular recursive systematic convolutional (RSC) codes), we will choose a notation that complies with such an encoder, for example one with four memory elements as shown in Fig. 1. Since the MAP algorithm is essentially block-oriented, we shall represent the binary input (information) data sequence by d = (d_1, ..., d_N).

Figure 1: Two identical recursive systematic convolutional encoders employed in a Turbo coding scheme.

0-7803-2486-2/95 $4.00 © 1995 IEEE
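As a minimal sketch of the rate-1/2 RSC encoding just described (a systematic output plus a parity output driven by feed-forward and feed-back generators), the following Python fragment encodes a block of information bits. The generator taps chosen here are illustrative placeholders, not necessarily those of the encoder in Fig. 1.

```python
def rsc_encode(d, g_f=(1, 1, 1, 1, 1), g_b=(0, 0, 0, 1), M=4):
    """Rate-1/2 RSC encoder sketch: returns (systematic, parity) bit lists.

    g_f = (g_0^f, ..., g_M^f): feed-forward taps, applied to a_k, ..., a_{k-M}.
    g_b = (g_1^b, ..., g_M^b): feed-back taps, applied to a_{k-1}, ..., a_{k-M}.
    Both tap sets are hypothetical example values.
    """
    reg = [0] * M                       # register contents a_{k-1}, ..., a_{k-M}
    xs, xp = [], []
    for dk in d:
        a_k = dk
        for i in range(M):              # a_k = d_k XOR sum_i g_i^b a_{k-i}
            a_k ^= g_b[i] & reg[i]
        taps = [a_k] + reg
        p = 0
        for i in range(M + 1):          # x_k^p = sum_i g_i^f a_{k-i} mod 2
            p ^= g_f[i] & taps[i]
        xs.append(dk)                   # systematic output equals d_k
        xp.append(p)
        reg = [a_k] + reg[:-1]          # shift the register
    return xs, xp
```

Because the encoder is systematic, the first output sequence always reproduces d; puncturing of the parity stream would be applied afterwards.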



The encoder has M memory elements. In our example (the RSC code is of rate 1/2) it has two outputs: one is the sequence of information bits x^s = (x_1^s, ..., x_N^s) = d, since the encoder is systematic. The other is the 'parity information' sequence x^p = (x_1^p, ..., x_N^p), with

x_k^p = \sum_{i=1}^{M} g_i^f a_{k-i} \bmod 2,

where (g_1^f, ..., g_M^f) is the feed-forward generator and a_k = d_k \oplus \sum_{i=1}^{M} g_i^b a_{k-i} \bmod 2; similarly, (g_1^b, ..., g_M^b) is the feed-back generator of the encoder.

The sequences x^s and x^p may be punctured, and are then modulated and transmitted over a channel. In this work, we have assumed BPSK modulation and an AWGN channel with one-sided noise power spectral density N_0. Let the corresponding received sequences be y^s and y^p. For brevity, the sequence y = (y^s, y^p) will refer to the sequence of pairs of received systematic and parity symbols.

2.2 The MAP Algorithm

We will not repeat the derivation of the MAP algorithm, but only state the results. For more detail see [1, 6, 8]. Let the state of the encoder at time k be S_k; it can take on values between 0 and 2^M - 1. The bit d_k is associated with the transition from step k-1 to step k. The goal of the MAP algorithm is to provide us with the logarithm of the ratio of the a posteriori probability (APP) of each information bit d_k being 1 to the APP of it being 0. We obtain:

\Lambda(d_k) = \ln \frac{\sum_{S_k} \sum_{S_{k-1}} \gamma_1(y_k, S_{k-1}, S_k)\, \alpha_{k-1}(S_{k-1})\, \beta_k(S_k)}{\sum_{S_k} \sum_{S_{k-1}} \gamma_0(y_k, S_{k-1}, S_k)\, \alpha_{k-1}(S_{k-1})\, \beta_k(S_k)},    (1)

where the forward recursion of the MAP can be expressed as:

\alpha_k(S_k) = \frac{\sum_{S_{k-1}} \sum_{i=0}^{1} \gamma_i(y_k, S_{k-1}, S_k)\, \alpha_{k-1}(S_{k-1})}{\sum_{S_k} \sum_{S_{k-1}} \sum_{i=0}^{1} \gamma_i(y_k, S_{k-1}, S_k)\, \alpha_{k-1}(S_{k-1})},    (2)

and the backward recursion as:

\beta_k(S_k) = \frac{\sum_{S_{k+1}} \sum_{i=0}^{1} \gamma_i(y_{k+1}, S_k, S_{k+1})\, \beta_{k+1}(S_{k+1})}{\sum_{S_k} \sum_{S_{k-1}} \sum_{i=0}^{1} \gamma_i(y_k, S_{k-1}, S_k)\, \alpha_{k-1}(S_{k-1})}.    (3)

The branch transition probabilities are given by

\gamma_i(y_k, S_{k-1}, S_k) = p(y_k \mid d_k = i, S_k, S_{k-1})\; q(d_k = i \mid S_k, S_{k-1})\; \Pr\{S_k \mid S_{k-1}\}.    (4)

The value of q(d_k = i | S_k, S_{k-1}) is either one or zero depending on whether bit i is associated with the transition from state S_{k-1} to S_k or not. It is in the last component that we use a priori information for bit d_k [10]: In our case of no parallel transitions, Pr{S_k | S_{k-1}} = Pr{d_k = 1} if q(d_k = 1 | S_k, S_{k-1}) = 1; and Pr{S_k | S_{k-1}} = Pr{d_k = 0} if q(d_k = 0 | S_k, S_{k-1}) = 1.

2.3 The Max-Log-MAP Algorithm

As we already said in the introduction, the MAP algorithm is likely to be considered too complex for implementation in a real system [5]. To avoid the number of complicated operations and also number representation problems, one no longer calculates \gamma_i((y_k^s, y_k^p), S_{k-1}, S_k), \alpha_k(S_k) and \beta_k(S_k). One computes and works with the logarithms of these values instead [3]-[5].
By taking the logarithm of \gamma_i((y_k^s, y_k^p), S_{k-1}, S_k) derived in (4) and inserting the Gaussian channel transition probability

p(y_k \mid d_k = i, S_k, S_{k-1}) \propto \exp\!\left(-\frac{(y_k^s - x_k^s)^2 + (y_k^p - x_k^p)^2}{N_0}\right),    (5)

we obtain the following expression for q(\cdot) = 1:

\bar\gamma_i(y_k, S_{k-1}, S_k) = K - \frac{(y_k^s - x_k^s)^2 + (y_k^p - x_k^p)^2}{N_0} + \ln \Pr\{S_k \mid S_{k-1}\}.    (6)

We can ignore the constant K; indeed, it cancels out in the calculation of \ln \alpha_k(S_k) and \ln \beta_k(S_k). We must remember that N_0 must be estimated to correctly weight the channel information with the a priori probability Pr{S_k | S_{k-1}}.
For \ln \alpha_k(S_k), we get:

\ln \alpha_k(S_k) = \ln \sum_{S_{k-1}} \sum_{i=0}^{1} e^{\bar\gamma_i(y_k, S_{k-1}, S_k) + \bar\alpha_{k-1}(S_{k-1})} - \ln \sum_{S_k} \sum_{S_{k-1}} \sum_{i=0}^{1} e^{\bar\gamma_i(y_k, S_{k-1}, S_k) + \bar\alpha_{k-1}(S_{k-1})}.    (7)

To obtain a simple solution, we use the following approximation:

\ln\left(\sum_i e^{\delta_i}\right) \approx \max_i \delta_i.    (8)

Note that \max_{i \in \{1,...,n\}} \delta_i can be calculated by successively using n - 1 maximum functions over only two values. From now on, we work with \bar\alpha_k(S_k) \approx \ln \alpha_k(S_k) and \bar\beta_k(S_k) \approx \ln \beta_k(S_k):

\bar\alpha_k(S_k) = \max_{(S_{k-1}, i)} \left( \bar\gamma_i(y_k, S_{k-1}, S_k) + \bar\alpha_{k-1}(S_{k-1}) \right) - \max_{(S_k, S_{k-1}, i)} \left( \bar\gamma_i(y_k, S_{k-1}, S_k) + \bar\alpha_{k-1}(S_{k-1}) \right),    (9)

and similarly,

\bar\beta_k(S_k) = \max_{(S_{k+1}, i)} \left( \bar\gamma_i((y_{k+1}^s, y_{k+1}^p), S_k, S_{k+1}) + \bar\beta_{k+1}(S_{k+1}) \right) - \max_{(S_{k+1}, S_k, i)} \left( \bar\gamma_i((y_{k+1}^s, y_{k+1}^p), S_k, S_{k+1}) + \bar\alpha_k(S_k) \right).    (10)

The second terms are a consequence of the derivation from (2) and (3); they are needed for numerical reasons. Omitting them has no effect on the value of the output of the Max-Log-MAP algorithm, since these normalization terms will cancel out in (11).
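A toy sketch of the forward recursion in the log domain under the max approximation may make the structure concrete. The trellis connectivity, branch metric values and state count below are invented for illustration, and the per-step normalization mirrors the subtracted second term of the recursion.

```python
import math

def maxlog_forward(gamma, n_states):
    """Log-domain forward recursion with the ln-sum replaced by max.

    gamma: list over trellis steps; each entry maps (s_prev, s) -> branch
    metric ln(gamma) for that transition (the bit index i is folded into
    the transition for brevity). Returns the per-step forward metrics.
    """
    alpha = [0.0] + [-math.inf] * (n_states - 1)      # trellis starts in state 0
    history = [alpha]
    for branches in gamma:
        new = [-math.inf] * n_states
        for (s_prev, s), g in branches.items():
            new[s] = max(new[s], g + alpha[s_prev])   # max replaces ln-sum-exp
        norm = max(new)                               # normalization term
        alpha = [a - norm for a in new]
        history.append(alpha)
    return history
```

Dropping `norm` would leave the soft outputs unchanged, since the same constant enters both maximizations of the output; it is kept purely to control the numerical range.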
In the same way, we can give an approximation of the log-likelihood reliability of each bit d_k:

\Lambda(d_k) \approx \max_{(S_{k-1}, S_k) \mid d_k = 1} \left( \bar\gamma_1(y_k, S_{k-1}, S_k) + \bar\alpha_{k-1}(S_{k-1}) + \bar\beta_k(S_k) \right) - \max_{(S_{k-1}, S_k) \mid d_k = 0} \left( \bar\gamma_0(y_k, S_{k-1}, S_k) + \bar\alpha_{k-1}(S_{k-1}) + \bar\beta_k(S_k) \right).    (11)

To be used in a Turbo decoder, we want the output of the Max-Log-MAP algorithm, \Lambda(d_k), to be split into three terms (extrinsic, a priori and systematic components) as shown in [6, 7, 8]; it can easily be shown that this is possible here. The extrinsic component will be used as a priori information in the next decoding stage. This a priori log-likelihood ratio (LLR) for bit d_k is called L(d_k) for short. We need to determine the a priori information in (6). If q(d_k = 1 | S_k, S_{k-1}) = 1, then

\Pr\{S_k \mid S_{k-1}\} = \Pr\{d_k = 1\} = \frac{e^{L(d_k)}}{1 + e^{L(d_k)}},    (12)

hence \ln \Pr\{S_k \mid S_{k-1}\} = L(d_k) - \ln(1 + e^{L(d_k)}). An approximation for Pr{S_k | S_{k-1}} can easily be found using (8):

\ln \Pr\{S_k \mid S_{k-1}\} \approx L(d_k) - \max(0, L(d_k)).    (13)

If q(d_k = 0 | S_k, S_{k-1}) = 1, then

\Pr\{S_k \mid S_{k-1}\} = \Pr\{d_k = 0\} = \frac{1}{1 + e^{L(d_k)}},    (14)

hence \ln \Pr\{S_k \mid S_{k-1}\} = -\ln(1 + e^{L(d_k)}). Similarly, we can approximate \ln \Pr\{S_k \mid S_{k-1}\} \approx -\max(0, L(d_k)).

2.4 Correction of the Approximation: The Log-MAP Algorithm

Because of the approximation (8) we applied, the Max-Log-MAP algorithm is suboptimal and yields an inferior soft output compared to the MAP algorithm. The problem is to exactly calculate \ln(\exp \delta_1 + \cdots + \exp \delta_n). This problem can be solved by using the Jacobian logarithm [3, 4]:

\ln(e^{\delta_1} + e^{\delta_2}) = \max(\delta_1, \delta_2) + \ln(1 + e^{-|\delta_2 - \delta_1|}) = \max(\delta_1, \delta_2) + f_c(|\delta_2 - \delta_1|),    (15)

where f_c(\cdot) is a correction function. Let us now prove recursively that the expression \ln(\exp \delta_1 + \cdots + \exp \delta_n) can be computed exactly. The recursion is initialized with (15). Suppose that \delta = \ln(e^{\delta_1} + \cdots + e^{\delta_{n-1}}) is known. Hence,

\ln(e^{\delta_1} + \cdots + e^{\delta_n}) = \ln(\Delta + e^{\delta_n}) \quad \text{with } \Delta = e^{\delta_1} + \cdots + e^{\delta_{n-1}} = e^{\delta}
= \max(\ln \Delta, \delta_n) + f_c(|\ln \Delta - \delta_n|)
= \max(\delta, \delta_n) + f_c(|\delta - \delta_n|), \quad \text{q.e.d.}    (16)

When deriving the Log-MAP algorithm, we now augment all maximizations over two values with the correction function. As a consequence, by correcting at each step the approximation made by the Max-Log-MAP, we have preserved the original MAP algorithm.
By calculating f_c(\cdot), we lose some of the lower complexity of the Max-Log-MAP algorithm. That is why we approximate f_c(\cdot) by a pre-computed table. Since the correction only depends on |\delta_2 - \delta_1|, this table is one-dimensional. We shall see that only very few values need to be stored.

3 Comparison of the (Max-)Log-MAP and Soft-Output Viterbi Algorithms

3.1 Hard Decisions

In [5], it was claimed that the hard decision of the Max-Log-MAP provides exactly the same hard decision as the Viterbi algorithm. We now present a mathematical proof of this result for our example of rate 1/2 RSC codes; extensions are rudimentary. We assume for simplicity that the definitions of \bar\alpha_k(S_k) and \bar\beta_k(S_k) do not include the normalization term. Remember that the Viterbi algorithm selects the path with the largest metric:

M = \max_{\forall \text{paths}} \sum_{k=1}^{N} \left[ -\frac{(y_k^s - x_k^s)^2 + (y_k^p - x_k^p)^2}{N_0} + \ln \Pr\{S_k \mid S_{k-1}\} \right].    (17)

The Max-Log-MAP output (11) for the last bit d_N can be written as \Lambda(d_N) = M_1(d_N) - M_0(d_N), where

M_i(d_N) = \max_{S_{N-1}} \left( \bar\gamma_i(y_N, S_{N-1}, S_N = 0) + \bar\alpha_{N-1}(S_{N-1}) \right).    (18)

By applying the recursive definition of \bar\alpha_{N-1}(S_{N-1}) (without normalization) we obtain:

M_i(d_N) = \max_{S_{N-1}} \Big\{ \bar\gamma_i(y_N, S_{N-1}, S_N = 0) + \max_{(S_{N-2}, j)} \left( \bar\gamma_j(y_{N-1}, S_{N-2}, S_{N-1}) + \bar\alpha_{N-2}(S_{N-2}) \right) \Big\}.    (19)

There exists an S_{N-2,max} and a j_{N-1,max} such that

M_i(d_N) = \max_{S_{N-1}} \Big\{ \bar\gamma_i(y_N, S_{N-1}, S_N = 0) + \bar\gamma_{j_{N-1,max}}(y_{N-1}, S_{N-2,max}, S_{N-1}) + \bar\alpha_{N-2}(S_{N-2,max}) \Big\}.    (20)

This is repeated N - 2 times, yielding:

M_i(d_N) = \max_{S_{N-1}} \left\{ \bar\gamma_i(y_N, S_{N-1}, S_N = 0) \right\} + \sum_{k=1}^{N-1} \bar\gamma_{j_{k,max}}(y_k, S_{k-1,max}, S_{k,max}),    (21)

since we assume \bar\alpha_0(0) = 0. By making a decision for the bit d_N, the Max-Log-MAP algorithm selects the largest M_i(d_N). By inserting (6) into the maximum over i of (21) and comparing with (17), we can easily see that the Max-Log-MAP and Viterbi algorithm make the same decision for bit d_N, since \max_{i \in \{0,1\}} M_i(d_N) = M + C = M(d_N), where C is a constant.
To continue, let us define a metric for the Max-Log-MAP as:

M(d_k) = \max_{i \in \{0,1\}} \max_{(S_{k-1}, S_k)} \left( \bar\alpha_{k-1}(S_{k-1}) + \bar\gamma_i(y_k, S_{k-1}, S_k) + \bar\beta_k(S_k) \right).    (22)

The Max-Log-MAP will choose that bit d_k that maximizes (22). We now suppose that M(d_k) = M + C. Let us first prove that M(d_k) = M(d_{k-1}):
We apply the recursive definitions of \bar\alpha_{k-1}(S_{k-1}) and \bar\beta_{k-1}(S_{k-1}), so that we obtain:

M(d_{k-1}) = \max_{i} \max_{(S_{k-2}, S_{k-1})} \left( \bar\alpha_{k-2}(S_{k-2}) + \bar\gamma_i(y_{k-1}, S_{k-2}, S_{k-1}) + \bar\beta_{k-1}(S_{k-1}) \right)    (23)
= \max_{j} \max_{(S_{k-1}, S_k)} \left( \bar\alpha_{k-1}(S_{k-1}) + \bar\gamma_j(y_k, S_{k-1}, S_k) + \bar\beta_k(S_k) \right) = M(d_k).    (24)

We can deduce from this recursion step and from the initial step where k = N that for all k \in \{1, ..., N\}:

M(d_k) = M(d_N) = M + C.    (25)

For each bit d_k, the Max-Log-MAP algorithm calculates two Viterbi metrics and takes the largest one. This proves that the Max-Log-MAP algorithm makes the same hard decisions as the Viterbi algorithm.

Figure 2: Comparison between (Log-)MAP, Max-Log-MAP and SOVA. The MAP uses all paths in the trellis to optimally determine the reliability of bit d_j. The Max-Log-MAP makes its decision (and soft output) based on the best two paths with different d_j. The SOVA also takes two paths, but not necessarily both the same as for the Max-Log-MAP: the competing path that determines reliability must survive to merge with the ML path, otherwise it is eliminated.

3.2 Soft Outputs

As we have already explained, the Max-Log-MAP algorithm and the SOVA work with the same metric. If we consider only the hard decisions, they are identical. But they behave in different ways in computing the information returned about the reliability of decoded bit d_k. The SOVA considers only one competing path per decoding step. That is to say, for each bit d_j it does not consider all the competing paths but only the survivors of the Viterbi algorithm. To be taken into account in the reliability estimation, a competing path must join the path chosen by the Viterbi algorithm without being eliminated.
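The practical effect of replacing the sum over each set of paths by its best member can be seen on a toy example. The path metric values below are invented, and `logsumexp` evaluates the exact ln-sum pairwise via the Jacobian logarithm of (15).

```python
import math

def logsumexp(xs):
    """Exact ln(sum of exponentials), accumulated pairwise (Jacobian logarithm)."""
    acc = xs[0]
    for x in xs[1:]:
        acc = max(acc, x) + math.log1p(math.exp(-abs(acc - x)))
    return acc

# Invented log-domain metrics of the paths with d_j = 1 and with d_j = 0.
paths_one = [-1.0, -1.2, -3.0]
paths_zero = [-1.1, -4.0, -4.5]

llr_map = logsumexp(paths_one) - logsumexp(paths_zero)   # (Log-)MAP-style output
llr_maxlog = max(paths_one) - max(paths_zero)            # Max-Log-MAP-style output
```

With several near-best paths in the d_j = 1 set, the two soft outputs differ noticeably; this is exactly the low-SNR regime in which the Max-Log-MAP loses ground to the (Log-)MAP.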
The differences between the (Log-)MAP, Max-Log-MAP and SOVA are illustrated in Fig. 2. The MAP takes all paths into its calculation, but splits them into two sets: those that have an information bit one at step j and those that have a zero; it returns the LLR of these two sets. All that changes from step to step is the classification of the paths into the respective sets. Due to the Markov properties of the trellis, the computation can be done relatively easily. In contrast, the Max-Log-MAP looks at only two paths per step: the best with bit zero and the best with bit one at transition j; it then outputs the difference of the log-likelihoods. However, from step to step one of these paths can change, but one will always be the maximum-likelihood (ML) path. The SOVA will always correctly find one of these two paths (the ML path), but not necessarily the other, since it may have been eliminated before merging with the ML path. There is no bias on the SOVA output when compared to that of the Max-Log-MAP algorithm, only the former will be more noisy.

4 Complexity and Quantization

4.1 Complexity Comparisons

As mentioned earlier, the correction function in (15) used by the Log-MAP can be implemented using a look-up table. We found that excellent results can be obtained with 8 stored values and |\delta_1 - \delta_2| ranging between 0 and 5. No improvement is achieved when using a finer representation. We now present the result of complexity analyses in the following table, taking into account the additional complexity of including a priori information:

Operation    | Max-Log-MAP   | Log-MAP      | SOVA
max ops      | 5 x 2^M - 2   | 5 x 2^M - 2  | 3(M + 1) + 2^M
additions    | 10 x 2^M + 11 | 15 x 2^M + 9 | 2 x 2^M + 8
mult. by ±1  | 8             | 8            | 8

If we assume that one bit comparison costs as much as one addition, this table allows us to conclude that the Max-Log-MAP algorithm is more than twice as complex as the SOVA for memory M = 4, and less than two times as complex for M = 2.

4.2 Quantization

We shall now present the quantization ranges that were used in simulations of the Log-MAP; they are based on observations of the distributions of the pertinent variables. We have attempted to take into account the large variations that are the result of the iterative decoding process in a Turbo decoder.
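The tabulated correction of (15) can be sketched as follows. The uniform 8-entry spacing over the reported argument range 0 to 5 and the midpoint sampling are assumptions for illustration, not necessarily the paper's exact table layout.

```python
import math

N_ENTRIES, MAX_ARG = 8, 5.0
STEP = MAX_ARG / N_ENTRIES                      # assumed uniform spacing
# f_c(x) = ln(1 + e^{-x}), pre-computed at the midpoint of each cell.
FC_TABLE = [math.log1p(math.exp(-(i + 0.5) * STEP)) for i in range(N_ENTRIES)]

def max_star(d1, d2):
    """Log-MAP basic operation: max plus one table look-up and one addition."""
    diff = abs(d1 - d2)
    if diff >= MAX_ARG:                         # correction is negligible here
        return max(d1, d2)
    return max(d1, d2) + FC_TABLE[int(diff / STEP)]
```

`max_star(d1, d2)` approximates ln(e^{d1} + e^{d2}) within the table's resolution; chaining it over n values reproduces the exact recursion (16) up to that resolution.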

5 Simulation Results for Application in a Turbo-Code System

Figure 3 shows the BER for decoding Turbo-Codes with the MAP, Log-MAP, Max-Log-MAP, and SOVA, respectively, taking no quantization effects into account. However, the Log-MAP is applied with an 8-value correction table; a loss due to the 3-bit quantization of the correction table is not visible. Results for the SOVA are taken from [7].

Figure 3: BER for Turbo-decoding with MAP, Log-MAP, Max-Log-MAP and SOVA (8 iterations). N = 1024, M = 4. (BER plotted from 10^0 down to 10^-8 over E_b/N_0 from 1.0 to 2.5 dB.)

Finally, Figure 4 shows the corresponding BER for the Log-MAP with and without quantization. We set N_0 to 2, and used an 8-value table. The step-size of the correction table is equal to 1/8.

Figure 4: BER for Turbo-decoding with MAP, Log-MAP, and quantized Log-MAP (8 iterations).

6 Conclusions

We have demonstrated a Log-MAP algorithm that is equivalent to the (true) symbol-by-symbol MAP algorithm, i.e., is optimum for estimating the states or outputs of a Markov process. However, the novel implementation works exclusively in the logarithmic domain, thus avoiding the basic problems of the symbol-by-symbol MAP algorithm. The difference between our Log-MAP algorithm and the known Max-Log-MAP algorithm, which just approximates the MAP algorithm, is the substitution of the max-operations by the Jacobian logarithm. This correction is simple to implement and quite insensitive to quantization; a loss due to quantization of the correction function is not visible. Further implementation issues concerning continuous data ("real-time implementation") and simplifications valid for feed-forward trellises [5] are still applicable. We have compared the MAP, (Max-)Log-MAP and SOVA from a theoretical point of view to illuminate their commonalities and differences. As a practical example forming the basis for simulations, we considered Turbo decoding, where recursive systematic convolutional (RSC) component codes have been decoded with the three algorithms. Quantization of the whole Log-MAP algorithm was also investigated: the loss is about 0.5 dB. This can probably be improved, but Turbo-Codes have a huge variation of variables' ranges as a result of iterative decoding. Finally, we have compared the complexity of the three algorithms. The number of operations of the Log-MAP is about twice the number of operations of the SOVA; however, the former is more suited to parallel processing [3].
We conclude that the Log-MAP is particularly suitable for decoding Turbo-Codes, since in this challenging application we have a very low signal-to-noise ratio but a small number of states, so that the additional complexity is less pronounced.

Acknowledgements

The authors would especially like to thank Dr. J. Hagenauer and Dr. J. Huber for valuable discussions and comments.

References

[1] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, March 1974.
[2] J. Hagenauer and P. Hoeher, "A Viterbi algorithm with soft-decision outputs and its applications," in Proc. GLOBECOM '89, pp. 1680-1686, November 1989.
[3] J. A. Erfanian, S. Pasupathy, and G. Gulak, "Reduced complexity symbol detectors with parallel structures for ISI channels," IEEE Trans. Commun., vol. 42, pp. 1661-1671, February/March/April 1994.
[4] W. Koch and A. Baier, "Optimum and sub-optimum detection of coded data disturbed by time-varying intersymbol interference," in Proc. GLOBECOM '90, pp. 1679-1684, December 1990.
[5] J. Petersen, "Implementierungsaspekte zur Symbol-by-Symbol MAP Decodierung von Faltungscodes," in Proc. ITG Tagung, Codierung für Quelle, Kanal und Übertragung, pp. 41-48, October 1994.
[6] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. ICC '93, pp. 1064-1070, May 1993.
[7] J. Hagenauer, P. Robertson, and L. Papke, "Iterative ("Turbo") decoding of systematic convolutional codes with the MAP and SOVA algorithms," in Proc. ITG Tagung, Codierung für Quelle, Kanal und Übertragung, pp. 21-29, October 1994.
[8] P. Robertson, "Illuminating the structure of code and decoder for parallel concatenated recursive systematic (turbo) codes," in Proc. GLOBECOM '94, pp. 1298-1303, December 1994.
[9] E. Villebrun, "Turbo-decoding with close-to-optimal MAP algorithms," Diploma thesis, TU Munich, September 1994.
[10] G. D. Forney, "The Viterbi algorithm," Proc. of the IEEE, vol. 61, pp. 268-278, March 1973.
