Chapter 3
Forward Error Correction Codes: Turbo Codes
Consider two discrete memoryless sources X and Y with possible outcomes
x_1, x_2, ..., x_n and y_1, y_2, ..., y_m respectively. Let the probability of X = x_i be denoted
P(x_i). The mutual information between the events x_i and y_j is then defined as

I(x_i ; y_j) = \log \frac{P(x_i \mid y_j)}{P(x_i)}

If the occurrence of y_j completely determines x_i, then P(x_i \mid y_j) = 1 and therefore

I(x_i ; y_j) = \log \frac{1}{P(x_i)} = -\log P(x_i)

which is simply the information contained in the event x_i; thus the self-information
of x_i is given as

I(x_i) = -\log P(x_i)
From the self-information of individual events one can derive the average self-information
of the source. This average self-information per source symbol is represented by H(X)
and is called the entropy of the source:

H(X) = \sum_{i=1}^{n} P(x_i) I(x_i) = -\sum_{i=1}^{n} P(x_i) \log P(x_i)
where X represents the alphabet of possible output symbols from the source. The
average conditional self-information is called the conditional entropy and is defined as

H(X \mid Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} P(x_i, y_j) \log \frac{1}{P(x_i \mid y_j)}
H(X \mid Y) represents the uncertainty remaining in X after Y has been observed. The concept of
average mutual information carries over from discrete random variables to
continuous random variables with joint probability density function (PDF) p(x, y)
and marginal PDFs p(x) and p(y).
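As a concrete illustration of the discrete definitions above, the following Python sketch evaluates H(X), H(X|Y) and the average mutual information for a small joint distribution; the probabilities are illustrative assumptions, and the standard identity I(X;Y) = H(X) - H(X|Y) is used.

    import math

    # Joint distribution P(x_i, y_j) of a toy source pair.
    # The values are illustrative assumptions, not taken from the text.
    P_xy = [[0.25, 0.25],
            [0.10, 0.40]]

    P_x = [sum(row) for row in P_xy]        # marginal P(x_i)
    P_y = [sum(col) for col in zip(*P_xy)]  # marginal P(y_j)

    # Entropy H(X) = -sum_i P(x_i) log2 P(x_i)
    H_X = -sum(p * math.log2(p) for p in P_x if p > 0)

    # Conditional entropy H(X|Y) = sum_{i,j} P(x_i, y_j) log2(1 / P(x_i|y_j))
    H_X_given_Y = 0.0
    for i, row in enumerate(P_xy):
        for j, p_xy in enumerate(row):
            if p_xy > 0:
                p_x_given_y = p_xy / P_y[j]
                H_X_given_Y += p_xy * math.log2(1.0 / p_x_given_y)

    # Average mutual information via the identity I(X;Y) = H(X) - H(X|Y)
    I_XY = H_X - H_X_given_Y
    print(f"H(X) = {H_X:.4f}  H(X|Y) = {H_X_given_Y:.4f}  I(X;Y) = {I_XY:.4f} bits")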
The channel capacity C is defined as "the maximum mutual information I(X; Y) in any single use of the channel, where the
maximization is over all possible input probability distributions {P(x_i)} on X":

C = \max_{\{P(x_i)\}} I(X ; Y)
Most real-world communication systems are modeled, in part, as additive
white Gaussian noise (AWGN) channels. The capacity of the AWGN channel was
derived by Shannon in his landmark paper in 1948 and is given by

C = W \log_2 \left( 1 + \frac{P_{av}}{W N_0} \right) \text{ bits/sec}

where W represents the bandwidth, P_{av} is the average signal power, and N_0 is
the power spectral density of the additive noise.
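A minimal sketch of evaluating this formula; the bandwidth, signal power, and noise density below are assumed values, chosen so that the signal-to-noise ratio P_av/(W N_0) is 15:

    import math

    def awgn_capacity(W_hz, P_av_w, N0_w_per_hz):
        # Shannon capacity of the AWGN channel: C = W log2(1 + Pav / (W N0))
        return W_hz * math.log2(1 + P_av_w / (W_hz * N0_w_per_hz))

    # Assumed example: W = 1 MHz, Pav/(W N0) = 15, so C = 1e6 * log2(16) = 4 Mbit/s
    print(awgn_capacity(W_hz=1e6, P_av_w=15e-6, N0_w_per_hz=1e-12))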
Shannon's source-channel coding theorem can be stated as follows:
1. Let a source with an alphabet S and an entropy H(S) produce symbols once every T_s seconds.
2. Let a channel have capacity C and be used once every T_c seconds.
3. Then if

\frac{H(S)}{T_s} \le \frac{C}{T_c}

there exists a coding scheme for which the source output can be transmitted over the
channel and be reconstructed with an arbitrarily small probability of error. The
parameter \frac{C}{T_c} is called the critical rate.
4. Conversely, if

\frac{H(S)}{T_s} > \frac{C}{T_c}

it is not possible to transmit information over the channel and reconstruct it with an
arbitrarily small probability of error.
Thus once the capacity for any channel over which one would like to communicate
has been derived, there are clear bounds on what is achievable and clear goals for the
design of the code.
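The condition of the theorem is easy to check numerically; all values in the following sketch are assumed for illustration:

    # Source-channel coding condition H(S)/Ts <= C/Tc (assumed example values)
    H_S = 1.5    # source entropy, bits per source symbol
    T_s = 1e-3   # one source symbol every millisecond
    C   = 2.0    # channel capacity, bits per channel use
    T_c = 1e-3   # one channel use every millisecond

    source_rate   = H_S / T_s  # information rate of the source, bits/sec
    critical_rate = C / T_c    # critical rate of the channel, bits/sec

    if source_rate <= critical_rate:
        print("a coding scheme with arbitrarily small error probability exists")
    else:
        print("reliable transmission is impossible at this rate")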
Two error-control methodologies exist for channel coding: automatic repeat-request (ARQ) and forward error correction (FEC).
FEC could be said to work by "averaging noise": since each data bit affects many
transmitted symbols, the corruption of some symbols by noise usually allows the
original user data to be extracted from the other, uncorrupted received symbols that
also depend on the same user data.
Most telecommunication systems use a fixed channel code designed to tolerate the
expected worst-case bit error rate, and then fail to work at all if the bit error rate is
ever worse. However, some systems adapt to the given channel error conditions:
hybrid automatic repeat-request uses a fixed FEC method as long as the FEC can
handle the error rate, then switches to ARQ when the error rate gets too high; adaptive
modulation and coding uses a variety of FEC rates, adding more error-correction bits
per packet when there are higher error rates in the channel, or taking them out when
they are not needed.
The two main categories of FEC codes are block codes and convolutional codes.
While the encoding and decoding algorithms differ for convolutional and block
codes, most of the results hold for both types.
Convolutional codes are now briefly discussed. Convolutional codes differ from block
codes in the sense that they do not break the message stream into fixed-size blocks.
Instead, redundancy is added continuously to the whole stream. The encoder keeps M
previous input bits in memory. Each output bit of the encoder then depends on the
current input bit as well as the M stored bits. Figure 2.1 depicts a sample
convolutional encoder. The encoder produces two output bits for every input bit,
each defined as a modulo-2 sum of the current input bit and some of the stored bits.
For this encoder, M = 3, since the i-th output bits depend on input bit i, as well as the
three previous bits i - 1, i - 2, i - 3. The encoder is nonsystematic, since the input bits
do not appear explicitly in its output.
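Since the defining equations of the Figure 2.1 encoder are not reproduced here, the following Python sketch uses assumed generator taps (octal 15 and 17); it illustrates the shift-register structure of a nonsystematic rate-1/2 encoder with M = 3 rather than the exact encoder of the figure:

    def conv_encode(bits, g1=(1, 1, 0, 1), g2=(1, 1, 1, 1)):
        # Rate-1/2 nonsystematic convolutional encoder with memory M = 3.
        # g1, g2 are assumed generator taps (octal 15 and 17), applied to
        # the window [x_i, x_{i-1}, x_{i-2}, x_{i-3}].
        state = [0, 0, 0]             # the M = 3 previously seen input bits
        out = []
        for b in bits:
            window = [b] + state
            y1 = sum(g * x for g, x in zip(g1, window)) % 2
            y2 = sum(g * x for g, x in zip(g2, window)) % 2
            out += [y1, y2]           # two output bits per input bit
            state = [b] + state[:-1]  # shift-register update
        return out

    print(conv_encode([1, 0, 1, 1]))  # 4 input bits -> 8 output bits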
An important parameter of a channel code is the code rate. If the input size (or
message size) of the encoder is k bits and the output size (the code word size) is n
bits, then the ratio k/n is called the code rate r. Since our sample convolutional encoder
produces two output bits for every input bit, its rate is 1/2. Code rate expresses the
amount of redundancy in the code: the lower the rate, the more redundant the code.
Turbo codes are a class of high-performance FEC codes developed in 1993, which
were the first practical codes to closely approach the channel capacity, a theoretical
maximum for the code rate at which reliable communication is still possible given a
specific noise level []. Turbo codes are finding use in (deep space) satellite
communications and other applications where designers seek to achieve reliable
information transfer over bandwidth- or latency-constrained communication links in
the presence of data-corrupting noise. Turbo codes are nowadays competing with
LDPC codes, which provide similar performance.
The first turbo code, based on convolutional encoding, was introduced in 1993 by
Berrou et al. [ ]. Since then, several schemes have been proposed and the term “turbo
codes” has been generalized to cover block codes as well as convolutional
codes.
The generic design of a turbo code is depicted in Figure 2.2. Although the general
concept allows for free choice of the encoders and the interleaver, most designs
follow the ideas presented in [ ]:
[Figure 2.2: generic turbo encoder; the data stream feeds Encoder 1 directly and Encoder 2 through the interleaver.]
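The data flow of this parallel concatenation can be sketched in Python as follows. The trivial parity_encoder stand-in and the fixed interleaver seed are assumptions for illustration; actual turbo codes use recursive systematic convolutional constituent encoders:

    import random

    def parity_encoder(bits):
        # Stand-in for a constituent encoder: a running modulo-2 accumulator.
        # Only the data flow matters here, not the strength of the code.
        acc, out = 0, []
        for b in bits:
            acc ^= b
            out.append(acc)
        return out

    def turbo_encode(bits, perm):
        # Parallel concatenation as in Figure 2.2: the same block feeds
        # Encoder 1 directly and Encoder 2 through the interleaver.
        parity1 = parity_encoder(bits)
        parity2 = parity_encoder([bits[p] for p in perm])
        return bits + parity1 + parity2  # systematic bits + two parity streams

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    perm = list(range(len(data)))
    random.seed(1)     # fixed seed: pseudo-random but predetermined scrambling
    random.shuffle(perm)
    print(turbo_encode(data, perm))      # overall rate 1/3 in this sketch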
The choice of the interleaver is a crucial part in the turbo code design. The task of the
interleaver is to “scramble” bits in a (pseudo-)random, albeit predetermined fashion.
This serves two purposes. Firstly, if the input to the second encoder is interleaved, its
output is usually quite different from the output of the first encoder.
This means that even if one of the output code words has low weight, the other
usually does not, and there is a smaller chance of producing an output with very low
weight. Higher weight, as we saw above, is beneficial for the performance of the
decoder. Secondly, since the code is a parallel concatenation of two codes, the divide-
and-conquer strategy can be employed for decoding. If the input to the second
decoder is scrambled, its output will also be different from, or "uncorrelated" with, the
output of the first decoder. This means that the corresponding two decoders will gain
more from information exchange. Now some interleaver design ideas are briefly
reviewed, stressing that the list is by no means complete.
There is no such thing as a universally best interleaver. For short block sizes, the odd-
even interleaver has been found to outperform the pseudo-random interleaver, while
for larger blocks the pseudo-random interleaver is the better choice. The choice of the
interleaver plays a key part in the success of the code, and the best choice depends on
the code design.
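A pseudo-random interleaver of the kind discussed above can be sketched as follows; the block size and seed are arbitrary choices, and the inverse permutation is what the decoders need when exchanging reordered soft values:

    import random

    def make_interleaver(n, seed=42):
        # Pseudo-random but predetermined permutation: both ends of the link
        # derive the same scrambling from the shared seed.
        perm = list(range(n))
        random.Random(seed).shuffle(perm)
        return perm

    def deinterleave(values, perm):
        # Inverse permutation, undoing the scrambling.
        out = [None] * len(values)
        for i, p in enumerate(perm):
            out[p] = values[i]
        return out

    perm = make_interleaver(8)
    block = [1, 1, 0, 0, 0, 0, 0, 0]
    scrambled = [block[p] for p in perm]
    assert deinterleave(scrambled, perm) == block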
2.4.1 Decoding
In the traditional decoding approach, the demodulator makes a “hard” decision of the
received symbol, and passes to the error control decoder a discrete value, either a 0 or
a 1. The disadvantage of this approach is that while the value of some bits is
determined with greater certainty than that of others, the decoder cannot make use of
this information.
A soft-in-soft-out (SISO) decoder receives as input a “soft” (i.e. real) value of the
signal. The decoder then outputs for each data bit an estimate expressing the
probability that the transmitted data bit was equal to one. In the case of turbo codes,
there are two decoders, one for the output of each constituent encoder. Both decoders provide
estimates of the same set of data bits, albeit in a different order. If all intermediate
values in the decoding process are soft values, the decoders can gain greatly from
exchanging information, after appropriate reordering of values. Information exchange
can be iterated a number of times to enhance performance. At each round, decoders
re-evaluate their estimates, using information from the other decoder, and only in the
final stage will hard decisions be made, i.e. each bit is assigned the value 1 or 0. Such
decoders, although more difficult to implement, are essential in the design of turbo
codes.
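As an illustration of what such a "soft" value looks like, the sketch below computes the log-likelihood ratio (LLR) seen by a SISO decoder for BPSK transmission over an AWGN channel; the mapping (bit 1 sent as +1, bit 0 as -1) and the resulting formula LLR = 2y/sigma^2 are standard textbook assumptions, not taken from this text:

    def bpsk_llr(y, noise_var):
        # LLR = log P(bit=1 | y) / P(bit=0 | y) = 2y / sigma^2 for BPSK over
        # AWGN, assuming bit 1 is sent as +1 and bit 0 as -1.
        return 2.0 * y / noise_var

    received = [0.9, -1.1, 0.1, -0.2]   # illustrative noisy channel outputs
    for y in received:
        llr = bpsk_llr(y, noise_var=0.5)
        hard = 1 if llr > 0 else 0
        # |LLR| is exactly the certainty that a hard decision throws away
        print(f"y = {y:+.1f}  LLR = {llr:+.1f}  hard decision = {hard}")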
Already the first rate-1/3 code proposed in 1993 made a huge improvement: the gap
between Shannon's limit and implementation practice was only 0.7 dB, giving a less
than 1.2-fold overhead. In [2], a thorough comparison between convolutional codes
and turbo codes is given. In practice, the code rate usually varies between 1/2 and 1/6.
Let the allowed bit error rate be 10^{-6}. For code rate 1/2, the relative increase in energy
consumption is then 4.80 dB for convolutional codes, and 0.98 dB for turbo codes. For
code rate 1/6, the respective numbers are 4.28 dB and -0.12 dB. It can also be noticed
that turbo codes gain significantly more from lowering the code rate than
conventional convolutional codes.
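The quoted dB figures translate into linear power ratios as follows; the sketch below reproduces them (for instance, a 0.7 dB gap corresponds to a factor of about 1.17, i.e. the less than 1.2-fold overhead mentioned above):

    def db_to_factor(db):
        # Convert a power gap in dB to a linear power ratio.
        return 10 ** (db / 10)

    print(f"0.7 dB -> {db_to_factor(0.7):.2f}x")   # ~1.17, under 1.2-fold

    # Energy penalties at bit error rate 10^-6 from the comparison cited above
    for rate, conv_db, turbo_db in [("1/2", 4.80, 0.98), ("1/6", 4.28, -0.12)]:
        print(f"rate {rate}: conv {db_to_factor(conv_db):.2f}x, "
              f"turbo {db_to_factor(turbo_db):.2f}x")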