Turbo Codes
Farah Ilyas Vohra (TC-28), Shafaq Mustafa (TC-13), Sabieka Rizvi (TC-61) and Zainab Qazi (TC-11)
Telecommunications Department, NED University of Engineering and Technology
Abstract: In information theory, turbo codes are a class of high-performance forward error correction (FEC) codes developed in 1993; they were the first practical codes to closely approach the channel capacity, the theoretical maximum code rate at which reliable communication is still possible at a given noise level. Turbo codes are used in 3G mobile communications and deep-space satellite communications, as well as in other applications where designers seek reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise. Backward error correction (BEC) requires only error detection: if an error is detected, the sender is asked to retransmit the message. While this method is simple and places lower demands on the code's error-correcting properties, it requires duplex communication and causes undesirable transmission delays. Forward error correction (FEC) requires that the decoder also be capable of correcting a certain number of errors, i.e. of locating the positions where the errors occurred. Since FEC codes require only simplex communication, they are especially attractive in wireless communication systems, where they help improve the energy efficiency of the system.
I. INTRODUCTION
This paper is about turbo codes, an error correction coding technique developed in the 1990s. It starts with a short overview of channel coding and the concept of convolutional encoding. Bottlenecks of the traditional approach are described and the motivation behind turbo codes is explained. After examining the design of turbo codes in more detail, the reasons behind their efficiency are discussed.
II. BACKGROUND

A. Channel Coding

The task of channel coding is to encode the information sent over a communication channel in such a way that, in the presence of channel noise, errors can be detected and/or corrected. We distinguish between two coding methods: backward error correction (BEC), which only detects errors and requests retransmission, and forward error correction (FEC), which also corrects them.

This leads to the fundamental problem of channel coding: encoding is easy, but decoding is hard. For every combination of bandwidth W, channel type, signal power S and received noise power N, there is a theoretical upper limit on the data transmission rate R for which error-free data transmission is possible. This limit is called the channel capacity, or Shannon capacity (after Claude Shannon, who introduced the notion in 1948). For additive white Gaussian noise (AWGN) channels, the formula is

R < W log2(1 + S/N) [bits/second].

In practical settings, there is of course no such thing as an ideal error-free channel. Instead, error-free data transmission is interpreted to mean that the bit error probability can be brought down to an arbitrarily small constant. The bit error probability, or bit error rate (BER), used in benchmarking is often chosen to be 10^-5 or 10^-6. Now, if the transmission rate, the bandwidth and the noise power are fixed, we get a lower bound on the amount of energy that must be expended to convey one bit of information. Hence, Shannon capacity sets a limit on the energy efficiency of a code.

Although Shannon developed his theory in the 1940s, code designs remained unable to come close to the theoretical bound for several decades. Even at the beginning of the 1990s, the gap between the theoretical bound and practical implementations was still at best about 3 dB. This means that practical codes required about twice as much energy as the theoretically predicted minimum. Hence, new codes were sought that would allow for easier decoding. One way of making the decoder's task easier is to use a code with mostly high-weight code words. High-weight code words, i.e. code words containing more ones and fewer zeros, can be distinguished more easily. Another strategy involves combining simple codes in a parallel fashion, so that each part of the code can be decoded separately with a less complex decoder, and each decoder can gain from information exchange with the others. This is called the divide-and-conquer strategy.

B. Convolutional Codes

Convolutional codes differ from block codes in that they do not break the message stream into fixed-size blocks. Instead, redundancy is added continuously to the whole stream. The encoder keeps the M previous input bits in memory; each output bit then depends on the current input bit as well as on the M stored bits.

Figure 1 depicts a sample convolutional encoder. The encoder produces two output bits per input bit, defined by the equations

y1,i = x_i + x_{i-1} + x_{i-3},
y2,i = x_i + x_{i-2} + x_{i-3} (addition modulo 2).

For this encoder M = 3, since the i-th output bits depend on input bit i as well as on the three previous bits i-1, i-2 and i-3. The encoder is nonsystematic, since the input bits do not appear explicitly in its output.
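As an illustration, the Figure 1 encoder can be sketched in a few lines of Python. This is a minimal sketch, not taken from the paper; the function name is ours, and the shift register is assumed to start in the all-zero state:

```python
def conv_encode(bits):
    """Rate-1/2 nonsystematic convolutional encoder with memory M = 3.

    Implements the Figure 1 equations (addition modulo 2, i.e. XOR):
        y1_i = x_i ^ x_{i-1} ^ x_{i-3}
        y2_i = x_i ^ x_{i-2} ^ x_{i-3}
    Bits before the start of the stream are taken as 0.
    """
    m = [0, 0, 0]            # shift register: [x_{i-1}, x_{i-2}, x_{i-3}]
    out = []
    for x in bits:
        y1 = x ^ m[0] ^ m[2]
        y2 = x ^ m[1] ^ m[2]
        out.extend([y1, y2])
        m = [x, m[0], m[1]]  # shift the current bit into the register
    return out
```

Feeding in a single 1 followed by zeros reproduces the generator taps: y1 over time is 1, 1, 0, 1 (the coefficients of 1 + D + D^3) and y2 is 1, 0, 1, 1 (the coefficients of 1 + D^2 + D^3).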
C. Code Rate
An important parameter of a channel code is the code rate. If the input (message) size of the encoder is k bits and the output (code word) size is n bits, then the ratio r = k/n is called the code rate. Since our sample convolutional encoder produces two output bits for every input bit, its rate is 1/2. The code rate expresses the amount of redundancy in the code: the lower the rate, the more redundant the code.
D. Hamming Weight
Finally, the Hamming weight or simply the weight of a code word is the number of non-zero symbols in the code word. In the case of binary codes, dealt with in this paper, the weight of a code word is the number of ones in the word.
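The definition translates directly into code. A minimal sketch (our own, for codewords given as strings of '0' and '1'):

```python
def hamming_weight(codeword):
    """Hamming weight of a binary codeword: the number of ones in it."""
    return codeword.count("1")
```

For example, hamming_weight("1011001") is 4, since the word contains four ones.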
The choice of interleaver is a crucial part of turbo code design. The task of the interleaver is to scramble bits in a (pseudo-)random, albeit predetermined, fashion. This serves two purposes. Firstly, if the input to the second encoder is interleaved, its output is usually quite different from the output of the first encoder. This means that even if one of the output code words has low weight, the other usually does not, so there is a smaller chance of producing an output with very low weight. Higher weight, as we saw above, is beneficial for the performance of the decoder. Secondly, since the code is a parallel concatenation of two codes, the divide-and-conquer strategy can be employed for decoding. If the input to the second decoder is scrambled, its output will be different from, and uncorrelated with, the output of the first decoder. This means that the corresponding two decoders gain more from information exchange.

We now briefly review some interleaver design ideas, stressing that the list is by no means complete. The first three designs are illustrated in Figure 3 with a sample input size of 15 bits.

1. Row-column interleaver: data is written row-wise and read column-wise. While very simple, it also provides little randomness.
2. Helical interleaver: data is written row-wise and read diagonally.
3. Odd-even interleaver: first, the bits are left uninterleaved and encoded, but only the odd-positioned coded bits are stored. Then the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even interleavers can be used when the second encoder produces one output bit per input bit.
4. Pseudo-random interleaver: defined by a pseudo-random number generator or a look-up table.

There is no such thing as a universally best interleaver. For short block sizes, the odd-even interleaver has been found to outperform the pseudo-random interleaver, and vice versa for large block sizes. The choice of interleaver plays a key part in the success of the code, and the best choice depends on the code design.
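The row-column design in particular is easy to sketch. A minimal illustration (our own; the 15-bit example of Figure 3 is assumed to fill a 3 x 5 array):

```python
def row_column_interleave(bits, rows, cols):
    """Row-column block interleaver: write row-wise, read column-wise.

    len(bits) must equal rows * cols, e.g. 15 bits in a 3 x 5 array.
    """
    assert len(bits) == rows * cols
    # bits[r * cols + c] is the bit written to row r, column c;
    # reading column-wise emits column 0 top-to-bottom, then column 1, ...
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]
```

Deinterleaving is the same operation with rows and cols swapped, which is one reason this design is popular despite its limited randomness.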
V. RESEARCH PAPERS

A. First Research Paper:

"A Simple Interleaver Design for Variable-Length Turbo Codes" (K. Enokizono and H. Ochiai, Dept. of Electrical & Computer Engineering, Yokohama National University, Yokohama, Japan, October 2010).

Since the performance of turbo codes is largely determined by the interleaver structure, this paper proposes a new interleaver design based on an extension of S-random interleavers. The proposed interleaver makes use of several short S-random interleavers to construct an interleaver of longer length. The overall complexity of constructing the longer interleaver is relatively low, while its performance remains comparable to that of a regular full-length S-random interleaver. The major advantage of the proposed interleaver is its robustness against pruning of the interleaver length, which makes it suitable for systems that require variable-length turbo codes.

The S-random interleaver imposes additional constraints on the pseudo-random interleaver and is constructed as follows. Each randomly and uniquely selected integer from [0, N-1] is compared to the S1 previously selected integers. If the distance between the current candidate and each of the S1 previously selected integers is greater than or equal to the integer parameter S2, the candidate is accepted; otherwise it is rejected and another integer is tested. The process is iterated until all N integers are accepted.
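The S-random selection procedure described in this section can be sketched as follows. This is our own illustrative implementation, not code from the paper; in particular, the greedy scan with a full restart when no candidate qualifies is a common practical workaround, since the construction can otherwise get stuck:

```python
import random

def s_random_interleaver(n, s1, s2, seed=None):
    """Build an S-random permutation of [0, n-1].

    A candidate index is accepted only if it differs by at least s2 from
    each of the s1 most recently accepted indices; rejected candidates are
    retried later. If no remaining candidate qualifies, the construction
    restarts with a fresh random ordering.
    """
    rng = random.Random(seed)
    while True:  # restart if the greedy construction gets stuck
        remaining = list(range(n))
        rng.shuffle(remaining)
        perm = []
        while remaining:
            for k, cand in enumerate(remaining):
                if all(abs(cand - p) >= s2 for p in perm[-s1:]):
                    perm.append(remaining.pop(k))
                    break
            else:
                break  # no candidate qualifies: restart from scratch
        if len(perm) == n:
            return perm
```

Note that for large s1 and s2 relative to n a valid permutation may not exist, so parameters have to be chosen with care.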
B. Second Research Paper:

This paper compares the bit error rate (BER) performance of turbo decoding algorithms. SOVA has the least computational complexity but the worst BER performance; the log-MAP algorithm has the best BER performance but high computational complexity. Simulation results for BER performance over an AWGN channel show an improvement of 0.4 dB for log-MAP over SOVA at a BER of 10^-4.
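Much of the complexity difference between log-MAP and max-based approximations such as max-log-MAP comes down to the max* operation used when combining path metrics. As a standalone illustration (not code from the paper), log-MAP evaluates the exact Jacobian logarithm, while the cheaper approximation simply drops the correction term:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used in log-MAP:
    ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a - b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_approx(a, b):
    """Max-log-MAP approximation: drop the correction term."""
    return max(a, b)
```

The correction term is largest when the two metrics are close and vanishes as they diverge, which is why the approximation costs a fraction of a decibel in BER performance.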
VI. CONCLUSION
Turbo codes are a recent development in the field of forward error correction channel coding. The codes make use of three simple ideas: parallel concatenation of codes to allow simpler decoding; interleaving to provide a better weight distribution; and soft decoding to enhance decoder decisions and maximize the gain from decoder interaction. Whereas earlier, conventional codes performed, in terms of energy efficiency or, equivalently, channel capacity, at least twice as badly as the theoretical bound suggested, turbo codes immediately achieved performance close to the theoretically best values, with less than a 1.2-fold overhead. Since the first proposed design in 1993, research in the field of turbo codes has produced even better results. Nowadays, turbo codes are used in many commercial applications, including both mobile (3G) and deep-space satellite communications.