
CHAPTER THREE

FORWARD ERROR CORRECTION CODES: TURBO CODES

A simple communication model consists of a transmitter, a channel and a receiver. The transmitter is the entity that has the information, the channel is the path through which the information must travel, and the receiver is the entity that wants to receive the information sent by the transmitter. If the channel were perfect, communication would be easy and one could send unlimited amounts of information without error. However, the channels used by current communication systems are far from perfect, and in digital communication systems these imperfections result in errors in the received signal.

In telecommunication and information theory, FEC (also called channel coding) is a system of error control for data transmission whereby the sender adds systematically generated redundant data to its messages, also known as an error-correcting code. The carefully designed redundancy allows the receiver to detect and correct a limited number of errors occurring anywhere in the message without the need to ask the sender for additional data. FEC gives the receiver the ability to correct errors without needing a reverse channel to request retransmission, but this advantage comes at the cost of a fixed, higher forward-channel bandwidth. FEC is therefore applied in situations where retransmissions are relatively costly or impossible, such as when broadcasting to multiple receivers. FEC information is also commonly added to mass storage devices to enable recovery of corrupted data.

The amount of redundancy added can be measured as the ratio of the number of symbols before encoding to the number of symbols after encoding. If $k$ represents the number of symbols before encoding and $n$ represents the number of encoded symbols, then the rate of the code is defined as $r = k/n$. For example, a code that maps every four message symbols into seven encoded symbols has rate $r = 4/7$.

2.1 INFORMATION THEORY


Information theory is a branch of applied mathematics and electrical engineering involving the quantification of information. Information theory was developed by Claude E. Shannon to find fundamental limits on signal processing operations such as reliably communicating, compressing and storing data.

To determine mathematical bounds on what is achievable in a communication system, one first needs a mathematical definition of information.

Consider two discrete memoryless sources $X$ and $Y$ with possible outcomes $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_m$ respectively. Let the probability of $X = x_i$ be denoted by $P(x_i)$ and the conditional probability that $X = x_i$ given that $Y = y_j$ be denoted by $P(x_i \mid y_j)$. The information provided by the occurrence of $y_j$ about $x_i$ is given by

$$I(x_i ; y_j) = \log \frac{P(x_i \mid y_j)}{P(x_i)}$$

$I(x_i ; y_j)$ is called the mutual information between $x_i$ and $y_j$. The units of information depend upon the base of the logarithm; for base 2 the units are bits. When $x_i$ and $y_j$ are perfectly correlated, $P(x_i \mid y_j) = 1$, and therefore

$$I(x_i ; y_j) = \log \frac{1}{P(x_i)} = -\log P(x_i)$$

which is simply the information contained in the event $x_i$. Thus the self-information of $x_i$ is given as

$$I(x_i) = -\log P(x_i)$$
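
As a concrete illustration of these two quantities, here is a minimal Python sketch (the probability values are invented for illustration, not taken from the text):

```python
import math

def self_information(p_x: float) -> float:
    """Self-information I(x) = -log2 P(x) of an event, in bits."""
    return -math.log2(p_x)

def mutual_information(p_x: float, p_x_given_y: float) -> float:
    """Mutual information I(x; y) = log2( P(x|y) / P(x) ), in bits."""
    return math.log2(p_x_given_y / p_x)

p_x = 0.125          # assumed P(x_i)
p_x_given_y = 0.5    # assumed P(x_i | y_j): observing y_j makes x_i four times likelier

print(self_information(p_x))                  # 3.0 bits
print(mutual_information(p_x, p_x_given_y))   # 2.0 bits
# Perfect correlation, P(x_i | y_j) = 1, recovers the self-information:
print(mutual_information(p_x, 1.0))           # 3.0 bits
```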

A key measure of information is known as entropy, which is usually expressed as the average number of bits needed for communication or storage. Entropy quantifies the uncertainty involved in predicting the value of a random variable. For example, specifying the outcome of a fair coin flip (two equally likely outcomes) provides less information (lower entropy) than specifying the outcome of a roll of a die (six equally likely outcomes).

From the self-information of individual events one can derive the average self-information of the source. This average self-information per source symbol is represented by $H(X)$ and is called the entropy of the source:

$$H(X) = \sum_{i=1}^{n} P(x_i) I(x_i) = -\sum_{i=1}^{n} P(x_i) \log P(x_i)$$

where $X$ represents the alphabet of possible output symbols from the source. The average conditional self-information is called the conditional entropy and is defined as

$$H(X \mid Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} P(x_i, y_j) \log \frac{1}{P(x_i \mid y_j)}$$

$H(X \mid Y)$ represents the uncertainty remaining about $X$ after $Y$ has been observed. The concepts of average mutual information carry over from discrete random variables to continuous random variables with joint probability density function (PDF) $p(x, y)$ and marginal PDFs $p(x)$ and $p(y)$.
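
A short sketch of the entropy computation, reproducing the coin and die comparison from the text:

```python
import math

def entropy(probs):
    """Entropy H(X) = -sum_i P(x_i) log2 P(x_i), in bits; zero-probability terms contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(entropy([1/6] * 6))    # fair die: ~2.585 bits
print(entropy([1.0]))        # certain outcome: 0.0 bits (no uncertainty)
```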

2.2 CHANNEL CODING

Channel capacity is defined as "the maximum mutual information $I(X;Y)$ in any single use of the channel, where the maximization is over all possible input probability distributions $\{P(x_i)\}$ on $X$":

$$C = \max_{\{P(x_i)\}} I(X;Y)$$

$C$ is measured in bits/channel-use, or bits/transmission.
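
As a small worked example of this maximization (an aside, not from the text): for the binary symmetric channel with crossover probability $p$, the maximizing input distribution is uniform and the capacity reduces to the known closed form $C = 1 - H(p)$:

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy function H(p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel: the uniform input
    distribution achieves the maximum, giving C = 1 - H(p)."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))    # 1.0 bit/channel-use (noiseless channel)
print(bsc_capacity(0.11))   # ~0.5 bits/channel-use
print(bsc_capacity(0.5))    # 0.0 (output carries no information about the input)
```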

Most real-world communication systems are modeled, in part, as additive white Gaussian noise (AWGN) channels. The capacity of the AWGN channel was derived by Shannon in his landmark paper in 1948 and is given by

$$C = W \log_2\!\left(1 + \frac{P_{av}}{W N_0}\right) \text{ bits/sec}$$

where $W$ represents the bandwidth, $P_{av}$ is the average signal power and $N_0$ is the power spectral density of the additive noise.
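
A direct evaluation of this formula; the numbers are illustrative only:

```python
import math

def awgn_capacity(w_hz: float, p_av_watts: float, n0_watts_per_hz: float) -> float:
    """Shannon capacity C = W log2(1 + Pav / (W N0)) of an AWGN channel, in bits/sec."""
    return w_hz * math.log2(1 + p_av_watts / (w_hz * n0_watts_per_hz))

# Illustrative values: a 1 MHz channel with SNR = Pav / (W N0) = 15 (about 11.8 dB).
print(awgn_capacity(1e6, 15.0, 1e-6))   # 4,000,000 bits/sec
```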

2.2.1 Channel Coding Theorem

The goal of a channel code is to increase the resistance of a digital communication system to noise. The channel coding theorem states the following:

1. Let a discrete memoryless source

 with an alphabet $S$
 with an entropy $H(S)$
 produce symbols once every $T_s$ seconds.

2. Let a discrete memoryless channel

 have capacity $C$
 be used once every $T_c$ seconds.

3. Then if

$$\frac{H(S)}{T_s} \le \frac{C}{T_c}$$

there exists a coding scheme for which the source output can be transmitted over the channel and be reconstructed with an arbitrarily small probability of error. The parameter $C/T_c$ is called the critical rate.

4. Conversely, if

$$\frac{H(S)}{T_s} > \frac{C}{T_c}$$

it is not possible to transmit information over the channel and reconstruct it with an arbitrarily small probability of error.

Thus, once the capacity of any channel over which one would like to communicate has been derived, there are clear bounds on what is achievable and clear goals for the design of the code.
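
The theorem's condition is a simple rate comparison, as this toy check shows (the source and channel parameters are invented for illustration):

```python
def reliable_transmission_possible(h_s: float, t_s: float, c: float, t_c: float) -> bool:
    """Channel coding theorem condition: H(S)/Ts <= C/Tc means a suitable
    coding scheme exists; H(S)/Ts > C/Tc means none does."""
    return h_s / t_s <= c / t_c

# A source emitting 2 bits of entropy every 1 ms (2000 bits/s) over a channel
# offering 0.5 bits per use, used every 0.2 ms (critical rate 2500 bits/s):
print(reliable_transmission_possible(2.0, 1e-3, 0.5, 0.2e-3))   # True
```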

There are two coding methodologies for channel codes:

 Backward error correction (BEC) requires only error detection: if an error is detected, the sender is requested to retransmit the message. While this method is simple and sets lower requirements on the code's error-correcting properties, it requires duplex communication and causes undesirable delays in transmission.
 Forward error correction (FEC) requires that the decoder also be capable of correcting a certain number of errors, i.e. that it be capable of locating the positions where the errors occurred. Since FEC codes require only simplex communication, they are especially attractive in wireless communication systems, helping to improve the energy efficiency of the system. FEC codes are discussed next.

2.3 FEC CODES

FEC is accomplished by adding redundancy to the transmitted information using a predetermined algorithm. A redundant bit may be a complex function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic.

FEC could be said to work by "averaging noise": since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.

 Because of this "risk-pooling" effect, digital communication systems that use FEC tend to work well above a certain minimum signal-to-noise ratio and not at all below it.
 This all-or-nothing tendency (the cliff effect) becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit.
 Interleaving FEC-coded data can reduce the all-or-nothing properties of transmitted FEC codes when the channel errors tend to occur in bursts. However, this method has limits; it is best used on narrowband data.

Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse. However, some systems adapt to the given channel error conditions: hybrid automatic repeat-request uses a fixed FEC method as long as the FEC can handle the error rate, then switches to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of FEC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.

The two main categories of FEC codes are block codes and convolutional codes.

 Block codes work on fixed-size blocks (packets) of bits or symbols. Practical block codes can generally be decoded in time polynomial in their block length.
 Convolutional codes work on bit or symbol streams of arbitrary length. They are most often decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolutional code can be turned into a block code, if desired, by "tail-biting".

However, in real-world applications the streams of data encoded by a convolutional encoder are generally of finite length, effectively resulting in a block code. Thus, while the encoding and decoding algorithms differ for convolutional and block codes, most of the results hold for both types.

Convolutional codes are now briefly discussed. They differ from block codes in that they do not break the message stream into fixed-size blocks; instead, redundancy is added continuously to the whole stream. The encoder keeps the $M$ previous input bits in memory. Each output bit of the encoder then depends on the current input bit as well as the $M$ stored bits. Figure 2.1 depicts a sample convolutional encoder, which produces two output bits per input bit, defined by the equations given below.

Fig. 2.1. A convolutional encoder

For this encoder $M = 3$, since the $i$-th output bits depend on input bit $i$ as well as the three previous bits $i-1$, $i-2$, $i-3$. The encoder is nonsystematic, since the input bits do not appear explicitly in its output. The outputs are

$$y_{1,i} = x_i + x_{i-1} + x_{i-2}$$

$$y_{2,i} = x_i + x_{i-2} + x_{i-3}$$

where addition is modulo 2.
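
A minimal sketch of this encoder in Python; the taps implement exactly the two equations above, and the all-zero initial state is an assumption (the text does not specify one):

```python
def conv_encode(bits):
    """Rate-1/2 nonsystematic convolutional encoder with M = 3 memory bits.
    Implements y1_i = x_i + x_(i-1) + x_(i-2) and y2_i = x_i + x_(i-2) + x_(i-3),
    with addition modulo 2 (XOR) and an assumed all-zero initial state."""
    x1 = x2 = x3 = 0                 # x_(i-1), x_(i-2), x_(i-3)
    out = []
    for x in bits:
        out.append(x ^ x1 ^ x2)      # y_(1,i)
        out.append(x ^ x2 ^ x3)      # y_(2,i)
        x1, x2, x3 = x, x1, x2       # shift the encoder memory
    return out

print(conv_encode([1, 0, 1, 1]))     # 8 output bits for 4 input bits (rate 1/2)
```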

An important parameter of a channel code is the code rate. If the input size (or message size) of the encoder is $k$ bits and the output size (the code word size) is $n$ bits, then the ratio $k/n$ is called the code rate $r$. Since our sample convolutional encoder produces two output bits for every input bit, its rate is $1/2$. The code rate expresses the amount of redundancy in the code: the lower the rate, the more redundant the code.

2.4 TURBO CODES: ENCODING WITH INTERLEAVING

Turbo codes are a class of high-performance FEC codes developed in 1993, which were the first practical codes to closely approach the channel capacity, the theoretical maximum code rate at which reliable communication is still possible at a given noise level []. Turbo codes find use in (deep-space) satellite communications and other applications where designers seek to achieve reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise. Turbo codes nowadays compete with LDPC codes, which provide similar performance.

The first turbo code, based on convolutional encoding, was introduced in 1993 by
Berrou et al. [ ]. Since then, several schemes have been proposed and the term “turbo
codes” has been generalized to cover block codes as well as convolutional
codes.

The generic design of a turbo code is depicted in Figure 2.2. Although the general concept allows for a free choice of the encoders and the interleaver, most designs follow the ideas presented in [ ]:

 The two encoders used are normally identical;
 The code is in a systematic form, i.e. the input bits also occur in the output (see Figure 2.2);
 The interleaver reads in the bits in a pseudo-random order.

Fig. 2.2. Generic turbo encoder (two constituent encoders in parallel; the second is fed the input bits through an interleaver)
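
A sketch of this structure, reusing conv_encode from the Figure 2.1 example as both constituent encoders. This illustrates the parallel-concatenation idea only: practical turbo designs use recursive systematic constituent encoders and puncture the parity streams, which this sketch does not reproduce:

```python
import random

def turbo_encode(bits, seed=42):
    """Parallel concatenation in the spirit of Figure 2.2: output the systematic
    (input) bits, encoder 1's output for the input, and encoder 2's output for
    an interleaved copy of the input. The fixed pseudo-random interleaver and
    the choice of conv_encode as constituent encoder are assumptions."""
    perm = list(range(len(bits)))
    random.Random(seed).shuffle(perm)       # fixed pseudo-random interleaver
    interleaved = [bits[p] for p in perm]
    return bits + conv_encode(bits) + conv_encode(interleaved)

print(turbo_encode([1, 0, 1, 1]))   # 4 systematic bits followed by 8 + 8 coded bits
```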

The choice of the interleaver is a crucial part of the turbo code design. The task of the interleaver is to "scramble" bits in a (pseudo-)random, albeit predetermined, fashion. This serves two purposes. Firstly, if the input to the second encoder is interleaved, its output is usually quite different from the output of the first encoder. This means that even if one of the output code words has low weight, the other usually does not, and there is a smaller chance of producing an output with very low weight. Higher weight, as we saw above, is beneficial for the performance of the decoder. Secondly, since the code is a parallel concatenation of two codes, a divide-and-conquer strategy can be employed for decoding. If the input to the second decoder is scrambled, its output will also be different from, and largely uncorrelated with, the output of the first decoder. This means that the two decoders will gain more from exchanging information. Some interleaver design ideas are now briefly reviewed, stressing that the list is by no means complete.

 A "row-column" interleaver: data is written row-wise and read column-wise (see the sketch after this list). While very simple, it also provides little randomness.
 A "helical" interleaver: data is written row-wise and read diagonally.
 An "odd-even" interleaver: first, the bits are left uninterleaved and encoded, but only the odd-positioned coded bits are stored. Then, the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even encoders can be used when the second encoder produces one output bit per input bit.
 A pseudo-random interleaver defined by a pseudo-random number generator or a look-up table.
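
A small sketch of the first of these, the row-column interleaver (the 3 x 4 block size is illustrative):

```python
def row_column_interleave(bits, rows, cols):
    """Write bits row-wise into a rows x cols block, then read column-wise.
    Requires len(bits) == rows * cols."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

data = list(range(1, 13))                    # 1..12, labelled to expose the permutation
print(row_column_interleave(data, 3, 4))     # [1, 5, 9, 2, 6, 10, 3, 7, 11, 4, 8, 12]
```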

There is no such thing as a universally best interleaver. For short block sizes, the odd-even interleaver has been found to outperform the pseudo-random interleaver, while for larger blocks the pseudo-random interleaver is the better choice. The choice of the interleaver plays a key part in the success of the code, and the best choice depends on the code design.

2.4.1 Decoding

In the traditional decoding approach, the demodulator makes a "hard" decision on each received symbol and passes the error control decoder a discrete value, either a 0 or a 1. The disadvantage of this approach is that while the value of some bits is determined with greater certainty than that of others, the decoder cannot make use of this information.

A soft-in soft-out (SISO) decoder receives as input a "soft" (i.e. real-valued) version of the signal. The decoder then outputs, for each data bit, an estimate expressing the probability that the transmitted data bit was equal to one. In the case of turbo codes, there are two decoders, one for the output of each encoder. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information, after appropriate reordering of values. The information exchange can be iterated a number of times to enhance performance. At each round, the decoders re-evaluate their estimates using information from the other decoder, and only in the final stage are hard decisions made, i.e. each bit is assigned the value 1 or 0. Such decoders, although more difficult to implement, are essential in the design of turbo codes.
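
As an illustration of what a "soft" value looks like, the sketch below computes the standard log-likelihood ratio (LLR) for BPSK over an AWGN channel; the modulation and noise model are assumptions, since the text does not fix them:

```python
import math

def bpsk_llr(y: float, noise_var: float) -> float:
    """LLR = ln( P(bit=1 | y) / P(bit=0 | y) ) for BPSK (bit 1 -> +1, bit 0 -> -1)
    over AWGN with equiprobable bits; this reduces to 2*y / noise_var."""
    return 2.0 * y / noise_var

def prob_of_one(llr: float) -> float:
    """Convert an LLR back into P(bit = 1)."""
    return 1.0 / (1.0 + math.exp(-llr))

for y in (1.1, 0.1, -0.9):                   # illustrative received samples
    llr = bpsk_llr(y, noise_var=0.5)
    print(f"y={y:+.1f}  LLR={llr:+.1f}  P(bit=1)={prob_of_one(llr):.3f}")
# A hard decision maps every positive y to 1, discarding the confidence that
# distinguishes y = 1.1 from y = 0.1.
```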

2.4.2 Performance of Turbo Codes

Already the first rate-$1/3$ code proposed in 1993 made a huge improvement: the gap between Shannon's limit and implementation practice was only 0.7 dB, giving a less than 1.2-fold overhead. In [2], a thorough comparison between convolutional codes and turbo codes is given. In practice, the code rate usually varies between $1/2$ and $1/6$. Let the allowed bit error rate be $10^{-6}$. For code rate $1/2$, the relative increase in energy consumption is then 4.80 dB for convolutional codes and 0.98 dB for turbo codes. For code rate $1/6$, the respective numbers are 4.28 dB and -0.12 dB. It can also be noticed that turbo codes gain significantly more from lowering the code rate than conventional convolutional codes do.
