Analysis of Iterative Decoding For Turbo Codes Using Maximum A Posteriori Algorithm
CHAPTER 1
INTRODUCTION
2009-2011
1.1 Introduction
The main aim of any communication scheme is to provide error-free data transmission. In a communication system, information can be transmitted by analog or digital signals. In the analog case, the amplitude of the signal reflects the information of the source, whereas in the digital case the information is first translated into a stream of 0s and 1s, and two different signals are then used to represent 0 and 1 respectively. The main advantage of using digital signals is that errors introduced by noise during transmission can be detected and possibly corrected. For communication over cables, the random motion of charges in conductors (e.g. resistors), known as thermal noise, is the major source of noise. For wireless communication channels, noise can be introduced in various ways.
Digital Communications: In the last two decades, there has been an explosion of interest in the transmission of digital information, mainly due to its low cost, simplicity, higher reliability and the possibility of transmitting many services in digital form. The term digital communications broadly refers to the transmission of information using digital messages or bit streams. There are notable advantages to transmitting data using discrete messages. It allows for enhanced signal processing and quality control. In particular, errors caused by noise and interference can be detected and corrected systematically. Digital communications also makes the networking of heterogeneous systems possible, with the Internet being the most obvious example. These advantages, and many more, explain the widespread adoption and constantly increasing popularity of digital communication systems.
The above figure shows a simple block diagram of a digital transmission system.
Digital Source: At the source, information suitable for transmission is produced. The input to this block is either analog or discrete. In the case of an analog input, appropriate processes, i.e. sampling, quantization and coding, are performed so as to form a discrete signal.
Source Encoder: Discrete information obtained from the source block at a certain sampling rate is then input to the source encoder block. In this block, symbol sequences are converted to binary sequences by assigning code words to the input symbols according to a specified rule, aimed at reducing the redundancy of the encoded data. Since redundancy has been removed from the information source, the encoded information is sensitive to noise in the transmission medium. Hence, a channel encoder inserts redundancy into the source-encoded data so as to protect the required signal against channel errors.
Channel Encoder: The purpose of channel coding is to introduce some redundancy into the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the signal through the channel.
Modulator: The modulator converts the input binary stream to a waveform compatible with the channel characteristics and provides suitable conditions for transmission of the signal. The primary purpose of the digital modulator is to map the binary information sequence into signal waveforms.
Channel: The communication channel is the physical medium that is used to send the signal from transmitter to receiver. Here noise of some type is added. The remaining blocks in this figure perform the inverse operations of their corresponding blocks at the transmitter to finalize extraction of the required signal at the destination, as seen below.
Demodulator: The digital demodulator reduces the waveforms to a sequence of numbers.
Channel Decoder: This sequence of numbers is passed to the channel decoder, which attempts to reconstruct the original information signal from knowledge of the code used by the channel encoder and the redundancy contained in the received data.
Source Decoder: The source decoder accepts the output of the channel decoder and, from knowledge of the source encoding method used, attempts to reconstruct the original information from the source.
In this project we concentrate on the channel coding concept.
Reduce the occurrence of undetected errors. This was one of the first uses of error-control coding. Today's error detection codes are so effective that the occurrence of undetected errors is, for all practical purposes, eliminated.
Reduce the cost of communications systems. Transmitter power is expensive, especially on satellite transponders. Coding can reduce the satellite's power needs because messages received at close to the thermal noise level can still be recovered correctly.
Overcome jamming. Error-control coding is one of the most effective techniques for reducing the effects of the enemy's jamming. In the presence of pulse jamming, for example, coding can achieve coding gains of over 35 dB [8].
Eliminate interference. As the electromagnetic spectrum becomes more crowded with man-made signals, error-control coding will mitigate the effects of unintentional interference.
Despite all the new uses of error-control coding, there are limits to what coding can do. On the Gaussian noise channel, for example, Shannon's capacity formula sets a lower limit on the signal-to-noise ratio that we must achieve to maintain reliable communications. Shannon's lower limit depends on whether the channel is power-limited or bandwidth-limited. The deep-space channel is an example of a power-limited channel, because bandwidth is an abundant resource compared to transmitter power. Telephone channels, on the other hand, are considered bandwidth-limited because the telephone company adheres to a strict 3.1 kHz channel bandwidth.
also prone to duping by the enemy. A pulse jammer can optimize its duty cycle to increase its chances of causing one or more errors in each codeword. Ideally (from the jammer's point of view), the jammer forces the communicator to retransmit the same codeword over and over, rendering the channel useless. There are two types of ARQ: stop-and-wait ARQ and continuous ARQ.
Stop-and-wait ARQ. With stop-and-wait ARQ, the transmitter sends a single codeword and waits for a positive acknowledgement (ACK) or negative acknowledgement (NAK) before sending any more code words. The advantage of stop-and-wait ARQ is that it only requires a half-duplex channel. The main disadvantage is that it wastes time waiting for ACKs, resulting in low throughput.
Continuous ARQ. Continuous ARQ requires a full-duplex channel because code words are sent continuously until a NAK is received. A NAK is handled in one of two ways. With go-back-N ARQ, the transmitter retransmits the errored codeword plus all code words that followed it until the NAK was received. The parameter N is determined from the round-trip channel delay. For geosynchronous satellite channels, N can be very large because of the 540 millisecond round-trip delay. The transmitter must store N code words at a time, and large values of N result in expensive memory requirements. With selective-repeat ARQ, only the errored codeword is retransmitted, thus increasing the throughput over go-back-N ARQ. Both types of continuous ARQ offer greater throughput efficiency than stop-and-wait ARQ at the cost of greater memory requirements.
B. Forward Error Correction (FEC): Forward error correction is appropriate for applications where the user must get the message right the first time. The one-way or broadcast channel is one example. Today's error correction codes fall into two categories: block codes and convolutional codes.
Block codes. The operation of binary block codes was described in Section 4.0 of this paper. All we need to add here is that not all block codes are binary. In fact, one of the most popular block codes is the Reed-Solomon code, which operates on m-bit symbols, not bits. The traditional role of error-control coding was to make a troublesome channel acceptable by lowering the frequency of error events. The error events could be bit errors, message errors, or undetected errors. The codes are produced in the channel encoding schemes.
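The go-back-N buffering requirement mentioned above can be sketched numerically. The window rule below (one frame time plus one round trip must fit in the window) is the standard sliding-window result; the link rate and frame size are illustrative assumptions, not values from the text.

```python
import math

def go_back_n_window(rtt_s, link_rate_bps, frame_bits):
    # Minimum N so the sender never stalls waiting for an ACK:
    # frames sent during one frame time plus one round trip.
    frame_time = frame_bits / link_rate_bps
    return math.ceil((frame_time + rtt_s) / frame_time)

# Geosynchronous satellite: 540 ms round trip.
# The 1 Mbit/s link rate and 1000-bit frames are hypothetical.
print(go_back_n_window(0.540, 1_000_000, 1000))  # 541 code words buffered
```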
S/N = (Eb/N0)(R/W)    (1.2)
In the case of an infinite channel bandwidth (W → ∞), the Shannon bound is defined by
Eb/N0 = 1/(log2 e) = ln 2 ≈ 0.693 (i.e. −1.59 dB)
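The bound can be checked numerically:

```python
import math

# Shannon bound for infinite bandwidth: Eb/N0 = 1/log2(e) = ln 2
eb_n0 = 1 / math.log2(math.e)
eb_n0_db = 10 * math.log10(eb_n0)
print(round(eb_n0, 3))     # 0.693
print(round(eb_n0_db, 2))  # -1.59
```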
In order to achieve this bound, i.e. the Eb/N0 = −1.59 dB value, it would be necessary to use a code with such a long length that encoding and decoding would be practically impossible. However, the most significant step towards this target was made by Forney, who found that a long code length could be achieved by the concatenation of two simple component codes with short lengths, linked by an interleaver. Figure 1.2 shows the basic structures of the serial, parallel and hybrid concatenated codes. Unlike serial and hybrid concatenated codes, turbo codes, which are basically implemented by parallel concatenation of two similar Recursive Systematic Convolutional (RSC) component codes, create a balance between the component codes, with performance close to the Shannon bound. On the basis of these properties, in the recent decade this type of coding has been the subject of much research and has been used in several applications.
Thus, a big, powerful code with high BER performance, but of impractical complexity, can be constructed in an equivalent concatenated form by combining two or more constituent codes to provide the same performance at a lower cost in terms of complexity. The reduced complexity is especially important for the decoding of these codes, which can take advantage of the combined structure of a concatenated code. Decoding is done by combining two or more relatively low-complexity decoders, thus effectively decomposing the problem of decoding a big code. If these partial decoders properly share the decoded information, by using an iterative technique for example, then there need be no loss in performance. There are essentially two ways of concatenating codes: traditionally, by using the so-called serial concatenation, and more recently, by using the parallel concatenated structure of the first turbo coding schemes. Both concatenation techniques allow the use of iterative decoding.
Figure (a): Serial concatenation, in which Component Code 1 feeds an interleaver, which in turn feeds Component Code 2.
Figure (b): Parallel concatenation, in which the input feeds Component Code 1 (output C1) directly and Component Code 2 (output C2) through an interleaver.
The general structure of a parallel concatenated encoder is seen in Figure 2.6, which shows the encoder of a turbo code of code rate Rc = 1/3.
A block or sequence of message bits m is input to the encoder of one of the constituent codes, C1, generating an output sequence c1. In addition, the same input sequence is first interleaved and then input to the encoder of the second code, C2, generating an output sequence c2. The outputs of both encoders, c1 and c2, are multiplexed with the input m to form the output or code sequence c, so that the concatenated code is of code rate Rc = 1/3 if all the multiplexed sequences are of the same length.
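The rate-1/3 multiplexing just described can be sketched as follows. The constituent encoders here are placeholder stubs (identity maps), not real RSC encoders, and the permutation is a toy example.

```python
# Sketch of the rate-1/3 parallel concatenation described above.

def interleave(bits, perm):
    return [bits[p] for p in perm]

def concatenated_encode(m, perm, enc1, enc2):
    c1 = enc1(m)                     # parity sequence from code C1
    c2 = enc2(interleave(m, perm))   # parity from C2, on the interleaved input
    out = []
    for mk, c1k, c2k in zip(m, c1, c2):
        out += [mk, c1k, c2k]        # multiplex m, c1, c2 -> rate 1/3
    return out

m = [1, 0, 1, 1]
perm = [2, 0, 3, 1]                  # toy interleaver permutation
c = concatenated_encode(m, perm, enc1=lambda b: b, enc2=lambda b: b)
print(len(c) // len(m))  # 3 -> code rate 1/3
```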
2.1 Introduction
Turbo codes were introduced as one of the most powerful error control codes. They are basically constructed from two parallel Recursive Systematic Convolutional (RSC) codes, which are linked by an interleaver. Generally, turbo encoded data are decoded by iterative decoding techniques. Due to the feedback connection from the output to the input of the RSC encoder, it is possible to find bit streams that automatically return the RSC encoders to the zero state. These generate code words with low weight for the turbo code. As one of the effective solutions to this drawback, the application of good interleavers is suggested. The interleavers are designed in such a way as to prevent the generation of bad bit streams for the second RSC code. This chapter reviews the structure of turbo codes. Among several suggested algorithms, iterative turbo decoding by the Maximum A Posteriori (MAP) algorithm is discussed and some modifications improving its performance are presented.
A turbo encoder is constructed by a parallel concatenation of two identical component codes, which are linked by an interleaver. Generally, Recursive Systematic Convolutional (RSC) codes with rate 1/2 are applied as the component codes. However, it is possible to utilize block codes instead of convolutional codes as component codes. Also, the structure of the component codes can differ from each other, resulting in asymmetric turbo codes. For an RSC encoder with rate 1/2, the input bit stream is directly transferred to the encoder output to form the systematic data part of the encoder. Through the feedback connection from the encoder output to the encoder input, the systematic bits are encoded, providing the parity bits.
Sir CRR COE.ELURU, Dept of ECE
Figure 2: Turbo encoder. The input xs forms the systematic output and feeds RSC Encoder 1 (parity xp1) directly, and RSC Encoder 2 (parity xp2) through the interleaver.
The typical turbo encoder is shown in Figure (2). It consists of two parallel recursive systematic convolutional encoders separated by an interleaver, as shown. Without the puncturer, the coding rate is 1/3 because of the two parity sequences. If this rate is too low, the parity sequences can be punctured to give a higher rate. For example, by puncturing out the even symbols of xp1 and the odd symbols of xp2, the coding rate is increased to 1/2. By convention in digital communications, we assume that the binary 0 and 1 are mapped to physical values of +1 and -1 by the modulator. As with the convolutional code, it is possible to define the generator matrix for the RSC code. This matrix can be represented as G(D) = (1, g0/g1), where 1, g0 and g1 introduce the systematic, feed-forward and feedback connections of the RSC encoder, respectively. For the proposed RSC code, the polynomials g0 = 1 + D^2 and g1 = 1 + D + D^2 are obtained, represented by the values 5 and 7 in octal. Figures 2.1 and 2.5(b) show block diagrams of the turbo encoder and a simple structure of the RSC encoder (1, 5/7) with rate 1/2, respectively.
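A minimal sketch of the RSC (1, 5/7) encoder just described, assuming the convention above that g1 = 7 (1 + D + D^2) is the feedback polynomial and g0 = 5 (1 + D^2) the feed-forward polynomial:

```python
# Sketch of the RSC (1, 5/7) encoder with two memory cells.

def rsc_encode(bits):
    s1 = s2 = 0                     # shift-register state
    systematic, parity = [], []
    for u in bits:
        a = u ^ s1 ^ s2             # recursion: feedback taps at D and D^2
        p = a ^ s2                  # feed-forward taps at 1 and D^2
        systematic.append(u)
        parity.append(p)
        s1, s2 = a, s1              # shift the register
    return systematic, parity

xs, xp = rsc_encode([1, 0, 0, 1, 0, 0])
print(xs, xp)  # [1, 0, 0, 1, 0, 0] [1, 1, 1, 1, 0, 0]
```

Note that this particular input returns the register to the zero state, a point the text returns to when discussing self-terminating patterns.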
In most applications, channel codes are designed with rate 1/2. In order to achieve this rate for a turbo code, it is necessary to puncture half of the bits of each RSC encoder's output. Since puncturing of the systematic bits dramatically reduces the code performance, instead of sending two half-punctured systematic streams from the two RSC encoders, only the systematic bits of the first RSC encoder are fully transmitted, and puncturing is performed only on the parity bits to form the desired code rate. Therefore, in the case of non-puncturing, a turbo code with rate 1/3 is constructed. It has been confirmed that puncturing distributed equally between the two parity streams gives the best performance for the turbo code. We first review convolutional encoders.
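The puncturing scheme described above can be sketched as follows: the systematic stream is kept intact, and the two parity streams are taken alternately, raising the rate from 1/3 to 1/2.

```python
# Sketch of rate-1/3 -> rate-1/2 puncturing with equally distributed parity.

def puncture(xs, xp1, xp2):
    out = []
    for k in range(len(xs)):
        out.append(xs[k])                        # systematic bits never punctured
        out.append(xp1[k] if k % 2 == 0 else xp2[k])  # alternate the parities
    return out

xs  = [1, 0, 1, 1]
xp1 = [1, 1, 0, 0]
xp2 = [0, 1, 1, 0]
print(puncture(xs, xp1, xp2))  # [1, 1, 0, 1, 1, 0, 1, 0] -> 8 bits for 4, rate 1/2
```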
2.2.1 Convolutional Encoder
A convolutional encoder with rate R = k/n is constructed on the basis of k input bits, n output bits and m memory units. The memory outputs and input data are combined in the required pattern by Exclusive OR (XOR) operators, which generate the output bits. Figure 2.1(a) shows the structure of the convolutional encoder (n = 2, k = 1, m = 2). In a convolutional encoder, one bit entering the encoder affects the output for m + 1 time slots, which is the constraint length of the code. Since XOR is a linear operation, the convolutional encoder is a linear feed-forward circuit. Based on this property, the encoder outputs can be obtained by convolution of the input bits with n impulse responses. The impulse responses are obtained by considering the input bit stream (100...0) and observing the output sequences. Generally, these impulse responses are called generator sequences, having lengths equal to the constraint length of the code. The generator sequences determine the connections between the encoder memories, its input and its output. For the encoder illustrated in Figure 2.1(a), the generator matrix G(D) is of the form:
G(D) = [g1, g2], with g1 = (111)_2 = (7)_8 and g2 = (101)_2 = (5)_8.
A convolutional encoder can also be considered as a sequential circuit. Based on this approach it is possible to illustrate its behavior by the state diagram. This diagram has 2^m
distinct states, corresponding to the possible memory states of the encoder. In the state diagram, 2^k branches leave each state and enter new states, representing the state transitions of the memories and the encoder outputs for each combination of input data. Figure 2.1(b) shows the state diagram of the convolutional encoder (2,1,2) in the above example. In this figure, dotted and solid lines represent input bits of 0 and 1, respectively, while the values on top of each line indicate the first and the second encoder output bits, respectively. Here u1 and u2 represent the outputs of the encoder.
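The state diagram just described can be generated programmatically. A sketch for the (2,1,2) encoder with g1 = (111) and g2 = (101):

```python
# State-transition table for the (2,1,2) convolutional encoder:
# 2^m = 4 states, 2^k = 2 branches leaving each state.

def step(state, u):
    s1, s2 = state
    v1 = u ^ s1 ^ s2        # g1 = 1 + D + D^2
    v2 = u ^ s2             # g2 = 1 + D^2
    return (u, s1), (v1, v2)

for s1 in (0, 1):
    for s2 in (0, 1):
        for u in (0, 1):
            nxt, out = step((s1, s2), u)
            print(f"state {s1}{s2} --u={u}/out={out[0]}{out[1]}--> state {nxt[0]}{nxt[1]}")
```

Stepping this encoder through the impulse input (1000...) reproduces the impulse-response codeword 11 10 11 00 ... used later in the text.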
Systematic Convolutional Encoder: An encoder in which the input directly forms one part of the encoder output is called a systematic convolutional encoder.
Fig 2.2: The RSC encoder (1, 5/7) obtained from the previous figure.
The RSC encoder is obtained from the non-recursive, non-systematic (conventional) convolutional encoder by feeding back one of its encoded outputs to its input. Suppose the conventional encoder is represented by the generator sequences g1 = [1 1 1] and g2 = [1 0 1], written more compactly as g = [g1, g2]. The RSC encoder is then represented as G = [1, g2/g1], where the first output (represented by g1) is fed back to the input. In this representation, 1 denotes the systematic output, g2 denotes the feed-forward output, and g1 is the feedback to the input of the RSC encoder. Figure 3 shows the schematic diagram of a constituent (RSC) encoder. In this example, G(z) = N(z)/D(z) with D(z) = 1 + z^-1 + z^-3 and N(z) = 1 + z^-2 + z^-3. Since only binary
coefficients are used, it is common to use the octal representation of the coefficients of G(z). For this example, the binary coefficient sequences for D(z) and N(z) are 1101 and 1011, respectively. Their octal representations are 15 and 13. Therefore, it is common to denote G(z) = (15, 13).
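The octal shorthand can be checked with a couple of lines of Python:

```python
# Octal representation of the coefficient sequences above:
# D(z) -> 1101 -> 15 (octal), N(z) -> 1011 -> 13 (octal).
for name, bits in [("D(z)", "1101"), ("N(z)", "1011")]:
    print(name, oct(int(bits, 2))[2:])  # 15, 13
```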
We now try to understand the design of the constituent encoders. A codeword consists of three sequences, xs, xp1 and xp2. Recall that a bad codeword is one of low weight, i.e. one for which xs, xp1 and xp2 all have low weights. This is a powerful observation, and it also suggests that turbo codes are most effective when the block size is large. The full diagram of the turbo encoder with the RSC encoder (1, 5/7) is shown in the following figure.
We see from the above that the interleaver plays a crucial role in turbo code design. For a small to medium block size, the interleaver needs to be carefully designed to ensure that the minimum weight of the code is maximized. For a large block size, the choice of the interleaver is not crucial, as long as it provides sufficient randomization. Consequently, it usually suffices to use a pseudo-random interleaver.
2.2.4 Interleaving
Interleaving generally refers to a process which permutes symbols of an input sequence. It is especially utilized in forward error correction coding to reduce the effect of impulse noise and burst errors in fading and multipath channels. For the same reason it is also applied in magnetic recording systems.
2.2.4.1 Interleaver
Interleaving is a widely used technique in digital communication and storage systems. An interleaver takes a given sequence of symbols and permutes their positions, arranging them in a different temporal order. The basic goal of an interleaver is to randomize the data sequence. When used against burst errors, interleavers are designed to convert error patterns that contain long sequences of serial erroneous data into a more random error pattern, thus distributing errors
among many code vectors. Burst errors are characteristic of some channels, like the wireless channel, and they also occur in concatenated codes, where an inner decoder overloaded with errors can pass a burst of errors to the outer decoder. In general, data interleavers can be classified into block, convolutional, random, and linear interleavers. The effect of the error floor can be reduced by applying an interleaver tailored to the RSC code structure to prevent the generation of self-terminating patterns for the second RSC code. Since the free distance value approximately determines the code performance in the error floor region, and it is usually obtained from weight-2 input bit streams, interleavers are particularly designed to improve the weight-2 distribution of the code. The free distance obtained from the weight-2 distribution is called the effective free distance of the code. An interleaver is ideal when it breaks all self-terminating patterns, generating higher weight for the code. In fact, the main function of the interleaver is the generation of a high weight for the second RSC code from a low-weight input bit stream.
This confirms that the errors are separated by four positions. In a given application of error-control coding, a block interleaver is selected to have a number of rows that ideally should be larger than the longest burst of errors expected, and in practice at least as large as the length of most expected bursts. The other parameter of a block interleaver is the number of columns of the permutation matrix, NI, which is normally selected to be equal to or larger than the block or decoding length of the code that is used. In this way, a burst of NI errors will produce only one error per code vector.
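A minimal sketch of such a block interleaver (written in row by row, read out column by column); the 3 x 4 dimensions are illustrative, not taken from the text.

```python
# Sketch of a block interleaver: each row holds one code vector, and the
# transmitted order is column-wise, so a burst of up to `rows` consecutive
# channel errors hits each row (each code vector) at most once.

def block_interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

seq = list(range(12))                   # positions 0..11 as stand-in symbols
print(block_interleave(seq, rows=3, cols=4))
# [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
```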
interleaving. It can be seen that the backward interleaver has increased the distance of the last bit of the proposed pattern from the end part of the interleaver. Application of these interleavers improves turbo code performance, especially at medium signal-to-noise ratios for interleavers with short block lengths.
In the semi-random interleaver, any two input bit positions within distance S cannot be permuted to two bit positions whose distance is less than S. Figure 2.9 shows the permutation process for the semi-random interleaver with length L = 9 and threshold value S = 3. It has been verified that the best turbo code performance is achieved with a threshold value S ≈ √(L/2). Although the obtained distance is shorter in comparison with other interleavers, such as the circular-shift and row-column interleavers, the semi-random interleaver efficiently breaks self-terminating patterns, providing a suitable pattern for the second RSC encoder.
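One common way to construct such an interleaver is by random draws with rejection; a sketch follows. The length (16) and threshold (S = 2, near √(L/2)) are illustrative, and this greedy retry strategy is one possible construction, not necessarily the one used in the text.

```python
import random

def s_random_interleaver(length, s, seed=1, max_tries=1000):
    rng = random.Random(seed)
    for _ in range(max_tries):
        pool = list(range(length))
        rng.shuffle(pool)
        perm = []
        while pool:
            # accept the first pooled position at least s away from each of
            # the s most recently chosen positions
            cand = next((c for c in pool
                         if all(abs(c - p) >= s for p in perm[-s:])), None)
            if cand is None:
                break                        # dead end: reshuffle and retry
            perm.append(cand)
            pool.remove(cand)
        if len(perm) == length:
            return perm
    raise RuntimeError("no S-random permutation found; lower s")

perm = s_random_interleaver(16, 2)
print(perm)
```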
Figure 4.1: Iterative turbo decoder. Decoder 1 receives the systematic stream ys, the parity stream yp1 and the deinterleaved extrinsic information Le21; Decoder 2 receives the interleaved systematic stream, the parity stream yp2 and the interleaved extrinsic information Le12.
There are many algorithms that operate using LLRs and perform decoding using soft-input soft-output values. One of these algorithms is the BCJR algorithm. Some background on the measures and probabilities involved in this algorithm is presented next, in order to then introduce the BCJR algorithm. In the traditional decoding approach, the demodulator makes a hard decision on the received symbol and passes to the error control decoder a discrete value, either a 0 or a 1. The disadvantage of this approach is that while the value of some bits is determined with greater certainty than that of others, the decoder cannot make use of this information. A soft-in soft-out (SISO) decoder receives as input a soft (i.e. real) value of the signal. The decoder then outputs, for each data bit, an estimate expressing the probability that the transmitted data bit was equal to one. In the case of turbo codes, there are two decoders, for the outputs of the two encoders. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information, after appropriate reordering of values. Information exchange can be
repeated: each decoder refines its estimates using information from the other decoder, and only in the final stage are hard decisions made, i.e. each bit is assigned the value 1 or 0. Such decoders, although more difficult to implement, are essential in the design of turbo codes. Turbo encoded data is conventionally decoded by iterative decoding techniques, as shown in Figure 4.1. In this technique, two component decoders are linked by an interleaver. Each component decoder provides soft output decoded information usable by the other component decoder at the next iteration. This recursive soft output information is called a-priori information. In addition to the a-priori information, the component decoders accept the systematic and parity information from the channel output. The soft information obtained at each component decoder's output has the a-priori and systematic information subtracted from it to produce the extrinsic information. The extrinsic information from each component decoder is interleaved or deinterleaved to create the a-priori information for the other decoder at the next iteration step. At each iteration, the interleaver and deinterleaver rearrange the extrinsic information, generating a new combination of soft information as a-priori information for the other component decoder, so as to provide decoded information having maximum likelihood with respect to the original bit stream.
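The extrinsic-information bookkeeping described above can be sketched as follows. The constituent SISO "decoder" here is a deliberately simplified stand-in (it treats its parity stream as a repetition of the data), so only the exchange of extrinsic values between the two decoders is illustrated, not real BCJR decoding.

```python
# Toy scaffold of the iterative extrinsic exchange between two decoders.

def siso_stub(ys, yp, la, lc=2.0):
    # posterior LLR = channel (systematic) + a-priori + "parity evidence";
    # a real SISO decoder would run BCJR over the code trellis instead
    l_post = [lc * s + a + lc * p for s, a, p in zip(ys, la, yp)]
    # extrinsic = posterior minus channel-systematic minus a-priori
    l_ext = [lp - lc * s - a for lp, s, a in zip(l_post, ys, la)]
    return l_post, l_ext

def turbo_decode(ys, yp1, yp2, perm, iters=4):
    n = len(ys)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i                              # inverse permutation
    le21 = [0.0] * n                            # extrinsic from decoder 2
    for _ in range(iters):
        _, le12 = siso_stub(ys, yp1, la=le21)
        ys_i = [ys[p] for p in perm]            # interleave for decoder 2
        la2 = [le12[p] for p in perm]           # extrinsic 1 -> a-priori 2
        l2, le2 = siso_stub(ys_i, yp2, la=la2)
        le21 = [le2[inv[k]] for k in range(n)]  # deinterleave for decoder 1
    l_final = [l2[inv[k]] for k in range(n)]    # hard decisions last
    return [0 if l > 0 else 1 for l in l_final]

perm = [2, 0, 3, 1]                             # toy interleaver
ys  = [+1, -1, +1, -1]                          # noiseless BPSK: 0 -> +1, 1 -> -1
yp1 = ys[:]                                     # stub parity streams
yp2 = [ys[p] for p in perm]
print(turbo_decode(ys, yp1, yp2, perm))         # [0, 1, 0, 1]
```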
error probability. The information bits returned by the MAP algorithm need not form a connected path through the trellis, while for SOVA they form a connected path. All of the above algorithms provide soft decoded information based on Log-Likelihood Ratios (LLR). The polarity of the LLR determines the sign of the decoded bit, and its amplitude corresponds to the probability of a correct decision. In this project, the concept of the LLR and its application to iterative turbo decoding by MAP is explained. Finally, some methods to improve SOVA performance are presented. The BCJR, or MAP, algorithm suffers from an important disadvantage: it needs to perform many multiplications. In order to reduce this computational complexity, several simplified versions have been proposed, namely the max-log-MAP algorithm in 1990-1994 [3] [4] and the log-MAP algorithm in 1995 [5]. Both substitute additions for multiplications. The log-MAP algorithm uses exact formulas, so its performance equals that of the BCJR algorithm although it is simpler, which means it is preferred in implementations. In turn, the max-log-MAP algorithm uses approximations, so its performance is slightly worse. Hence the MAP algorithm, in its log-MAP form, is the better algorithm.
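The difference between log-MAP and max-log-MAP comes down to how they evaluate the Jacobian logarithm (the max* operation); a sketch:

```python
import math

# Jacobian logarithm behind the log-domain algorithms:
#   log-MAP:     max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)
#   max-log-MAP: max*(a, b) ~ max(a, b)   (the correction term is dropped)

def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

a, b = 2.0, 1.0
exact = math.log(math.exp(a) + math.exp(b))
print(abs(max_star(a, b) - exact) < 1e-12)  # True: log-MAP is exact
print(round(exact - max(a, b), 4))          # 0.3133: the term max-log-MAP drops
```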
Figure 3.3: Complexity comparison of various decoding algorithms (Log-MAP, Max-Log-MAP and SOVA).
So in this project we are using the MAP decoding algorithm.
The upper bound equation specifies that a code with a higher free distance value has better performance in terms of error reduction. Because of the feedback connection between the RSC encoder output and its input, the effect of a 1 bit in the impulse response of the code lasts until certain external bits are inserted into the RSC encoders, returning their memories to the zero state. Therefore, a higher weight, and consequently improved performance, is expected for the code compared to a non-recursive convolutional code. For example, for the input bit stream (10000000...0) the codeword obtained from the convolutional code (2,1,2) illustrated in Figure 2.1(a) is (11101100000000...0), with weight 5, while its equivalent RSC code (1, 5/7) gives the codeword (11101101...) of infinite length, whose weight is periodically increased by 2 units for every 3 inserted zero bits. Finite weights for this code are obtained when input bit streams with weights higher than 1 are applied in such a way that they return the RSC code to the zero state without inserting any external bits into the memories. These patterns are called self-terminating patterns. Depending on the weight of the self-terminating patterns and their combinations, low weights can be obtained for the code. For example, the weight-2 self-terminating pattern (100100) for the RSC code (1, 5/7) generates a codeword (11010111) with weight 6. The same pattern generates weight 10, corresponding to the codeword (111011111011), for the trellis-terminated convolutional code (2,1,2). Similar conditions can occur for self-terminating patterns with weights higher than 2. For example, the weight-6 self-terminating pattern (11100111) generates a codeword (1110110000111011) with weight 10 for the RSC code (1, 5/7), while this pattern produces a codeword (11101001111101100111) with weight 14 for the trellis-terminated convolutional code (2,1,2).
In addition, if trellis termination is considered for the RSC code, the existence of a weight-1 input bit stream whose 1 is positioned at the end of the bit stream can generate a codeword with a lower weight than the codeword obtained from the convolutional code. For the RSC code (1, 5/7), the weight-1 bit stream (000...001) generates a codeword (101110) with weight 5, which is lower than the weight 6 obtained from the codeword (111011) for the trellis-terminated convolutional code (2,1,2). The above analysis can be extended to turbo codes, which incorporate similar RSC codes. The results obtained from iterative decoding indicate that the turbo code has very good error-reduction performance at low signal-to-noise ratios. This region is named the waterfall. At
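The quoted weights for the RSC (1, 5/7) code can be checked with a short simulation, using the same encoder convention as before (feedback 1 + D + D^2, feed-forward 1 + D^2):

```python
# Codeword weights of self-terminating patterns for the RSC (1, 5/7) code.

def rsc_codeword(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        a = u ^ s1 ^ s2             # recursion (feedback taps D, D^2)
        out += [u, a ^ s2]          # systematic bit, then parity bit
        s1, s2 = a, s1
    return out, (s1, s2)

for pattern in ([1, 0, 0, 1, 0, 0], [1, 1, 1, 0, 0, 1, 1, 1]):
    cw, state = rsc_codeword(pattern)
    print(pattern, "-> weight", sum(cw), "final state", state)
# weight 6 and weight 10; both patterns return the encoder to state (0, 0)
```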
2009-2011
34 ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM
medium to high signal to noise ratios, which is named error floor region, BER slope of the iterative decoder is reduced significantly. At very low signal to noise ratios, BER stays high and at an almost constant value. This area is named non-convergence. Figure 2.6 specifies the mentioned regions of turbo codes performance with the maximum likelihood iterative decoding methods. Analysis of the turbo code indicates that at low signal to noise ratios, a large number of codeword with medium weight determines the code performance. The results confirm that by increasing the length of the interleaver the influence of those weights on the code performance can be reduced. At medium to high signal to noise ratios, only the first items of the weight distributions contribute to the code performance. This indicates that the interleaver type has the essential influence on the code performance.
Figure 2.6: Performance regions of the turbo code with iterative decoding
CHAPTER - 4
In turbo codes, the MAP algorithm is used iteratively to improve performance. It is like the game of twenty questions, where each previous answer helps to improve your knowledge of the hidden information. The number of iterations is often preset, as in twenty questions. More iterations are done when the SNR is low; when the SNR is high, fewer iterations are required since the results converge quickly. Doing 20 iterations may be a waste if the signal quality is good. Instead of making this decision ad hoc, the algorithm is often preset with a number of iterations. On average, seven iterations give adequate results, and no more than 20 are ever required. These numbers have a relationship to the Central Limit Theorem.
Although used together, MAP and iterative decoding are separate concepts. The MAP algorithm refers to specific mathematics; the iterative process, on the other hand, can be applied to any type of coding, including block coding, which is not trellis based and may not use the MAP algorithm.
After some analysis it can be shown that the LLR can be expressed as a sum of three values, as follows:

L(uk) = L_apriori + L_channel + L_extrinsic

The channel value does not change from iteration to iteration, so let us call it K. The only two terms that change are the a priori and extrinsic L-values:

L(uk) = L_apriori + K + L_extrinsic

The a priori value goes in, is used to compute the new a posteriori probability, and can then be used to compute L(uk) or be passed to the next decoder.
Only the a posteriori metric is computed, and the decoders keep doing this either a fixed number of times or until the result converges. The part of the L-value passed between the decoders is called the extrinsic information.
This process is shown above. The whole objective of the calculations is to compute the extrinsic L-value, and eventually a decision is made about the bit in question by looking at the sign of the L-value:

u^k = sign{L(uk|y)}
Let us consider a rate 1/n systematic Convolutional encoder in which the first coded bit, xk1, is equal to the information bit uk. In that case the a posteriori log-likelihood ratio L(uk|y) can be decomposed into a sum of three elements:

L(uk|y) = L(uk) + Lc yk1 + Le(uk) ------------------------ (4.1)

The first two terms on the right-hand side are related to the information bit uk. The third term, Le(uk), on the contrary, depends only on the codeword parity bits; that is why Le(uk) is called extrinsic information. This extrinsic information is an estimate of the a priori LLR L(uk). How? It is easy: we provide L(uk) and Lc yk1 as inputs to a MAP (or other) decoder and in turn get L(uk|y) at its output. Then, by subtraction, we get the estimate of L(uk):
Le(uk) = L(uk|y) - L(uk) - Lc yk1 ------------------------ (4.2)

This estimate of L(uk) is presumably a more accurate value of the unknown a priori LLR, so it should replace the former value of L(uk). If we repeat the previous procedure in an iterative way, providing Lc yk1 (again) and the new L(uk) = Le(uk) as inputs to another decoder, we expect to get a more accurate L(uk|y) at its output. This fact is exploited in turbo decoding, our next subject.
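As a minimal sketch of Eq. (4.2) — in Python rather than the MATLAB of the appendix, and with made-up numbers — the extrinsic information is simply what remains of the a posteriori LLR once the a priori and channel terms are subtracted:

```python
# Eq. (4.2) as code: Le(uk) = L(uk|y) - L(uk) - Lc*yk1, element-wise over a
# block.  All names and values here are illustrative placeholders.

def extrinsic(L_posteriori, L_apriori, Lc, y_sys):
    """Subtract the a priori and channel contributions from the output LLR."""
    return [Lp - La - Lc * ys
            for Lp, La, ys in zip(L_posteriori, L_apriori, y_sys)]

# One bit: strong positive a posteriori LLR, weak prior, channel value 1.6.
print(extrinsic([2.5], [0.3], 2.0, [0.8]))  # ~0.6 = 2.5 - 0.3 - 2.0*0.8
```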
The turbo code inventors [6] worked with two parallel concatenated and interleaved rate-1/2 recursive systematic Convolutional codes decoded iteratively with two MAP decoders. Eq. (4.2) is the basis for iterative decoding. In the first iteration, the a priori LLR L(uk) is zero if we consider equally likely input bits. The extrinsic information Le(uk) that each decoder delivers is used to update L(uk) from iteration to iteration and from one decoder to the other. In this way the turbo decoder progressively gains more confidence in the hard decisions the decision device will have to make at the end of the iterative process. Fig. 9 shows a simplified turbo decoder block diagram that helps illuminate the iterative procedure.
The iterative decoding proceeds as follows. In the first iteration we arbitrarily assume L(uk) = 0; then decoder 1 outputs the extrinsic information Le1(uk) on the systematic, or message, bit, gathered from the first parity bit (so note that decoder 2 does not actually need the LLR L1(uk|y)!). After appropriate interleaving, the extrinsic information Le1(uk) from decoder 1, computed from Eq. (4.2), is delivered to decoder 2 as L1(uk), a new, more educated guess of L(uk). Then decoder 2 outputs Le2(uk), its own extrinsic information on the systematic bit, based on the other parity bit (note again that we keep disregarding the LLR!). After suitable deinterleaving, this information is delivered to decoder 1 as L2(uk), a newer, even more educated guess of L(uk). A new iteration then begins.
After a prescribed number of iterations, or when a stopping criterion is reached, the log-likelihood L2(uk|y) at the output of decoder 2 is deinterleaved and delivered as L(uk|y) to the hard-decision device, which in turn estimates the information bit based only on the sign of the deinterleaved LLR.
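The half-iterations just described can be summarized structurally. In the Python sketch below the two MAP decoders are replaced by a trivial stub, so only the information flow is shown: extrinsic exchange through the interleaver, deinterleaving back, and the final sign decision. Every name and number is illustrative, not the report's own program.

```python
def stub_map_decoder(apriori, channel):
    # A real decoder would run BCJR over its trellis; this stub merely
    # reinforces the channel observation a little on each pass.
    posteriori = [a + 1.5 * c for a, c in zip(apriori, channel)]
    extrinsic = [p - a - c for p, a, c in zip(posteriori, apriori, channel)]
    return posteriori, extrinsic

def turbo_decode(channel_llr, perm, n_iters=5):
    # perm[i] = original position feeding position i of the interleaved stream
    n = len(channel_llr)
    La = [0.0] * n                      # L(uk) = 0 in the first iteration
    for _ in range(n_iters):
        _, Le1 = stub_map_decoder(La, channel_llr)
        La2 = [Le1[p] for p in perm]    # interleave extrinsic -> decoder 2
        ch2 = [channel_llr[p] for p in perm]
        L2, Le2 = stub_map_decoder(La2, ch2)
        La = [0.0] * n
        for i, p in enumerate(perm):    # deinterleave back to decoder 1
            La[p] = Le2[i]
    Lfinal = [0.0] * n
    for i, p in enumerate(perm):        # deinterleave decoder 2's output LLR
        Lfinal[p] = L2[i]
    return [1 if L > 0 else 0 for L in Lfinal]  # hard decision on the sign

print(turbo_decode([-2.0, 1.0, -0.5, 3.0], [2, 0, 3, 1]))  # [0, 1, 0, 1]
```

With this stub the decisions simply follow the channel signs; the point is the plumbing, which is the same once real MAP decoders are substituted.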
Here we are using the BCJR algorithm. First we discuss the BCJR algorithm in detail.
In addition to the MAP algorithm, another algorithm called SOVA, based on Viterbi decoding, is also used. SOVA uses the Viterbi decoding method but with soft outputs instead of hard ones. SOVA maximizes the probability of the sequence, whereas MAP maximizes the bit probabilities at each time, even if that makes the sequence invalid. MAP produces near-optimal decoding. Here we concentrate only on PCCC decoding using the iterative MAP algorithm. We describe MAP decoding using the turbo code shown in Figure 5, with two RSC encoders. Each RSC encoder has two memory registers, so the trellis has four states and the constraint length is 3.
The information bits are called uk. The coded bits are referred to by the vector c. The coded bits are then transformed into an analog symbol x and transmitted. On the receive side, a noisy version of x is received. By looking at how far the received symbol is from the decision regions, a confidence metric is attached to each of the three bits in the symbol. Often Gray coding is used, which means that not all bits in the symbol have the same level of confidence for decoding purposes. There are special algorithms for mapping the symbols (one received voltage value to M soft decisions, with M being the M in M-PSK). Let us assume that after the mapping and creation of soft metrics the vector y is received. One pair of these soft bits is sent to the first decoder, and another set, using an interleaved version of the systematic bit and the second parity bit, is sent to the second decoder. Each decoder works only on its bits of information, and the decoders pass their confidence scores to each other until both agree within a certain threshold. Then the process restarts with the next symbol in a sequence or block consisting of N symbols (or bits).
Consider the codeword xk output by the encoder at time k. The corresponding information or message input bit, uk, can take on the values -1 or +1 with an a priori probability P(uk), from which we can define the so-called log-likelihood ratio (LLR)

L(uk) = ln [ P(uk = +1) / P(uk = -1) ] ------------------------ (1)

This log-likelihood ratio is zero with equally likely input bits. Suppose the coded sequence x is transmitted over a memoryless additive white Gaussian noise (AWGN) channel and received as the sequence of nN real numbers y = y1 y2 ... yN, as shown in Fig. 4.1. This sequence is delivered to the decoder and used by the BCJR [1], or any other, algorithm to estimate the original bit sequence uk, for which the algorithm computes the a posteriori log-likelihood ratio L(uk|y), a real number defined by the ratio

L(uk|y) = ln [ P(uk = +1 | y) / P(uk = -1 | y) ] ------------------------ (2)

The numerator and denominator of Eq. (2) contain a posteriori conditional probabilities, that is, probabilities computed after we know y. The positive or negative sign of L(uk|y) is an indication of which bit, +1 or -1, was coded at time k. Its magnitude is a measure of the confidence we have in that indication.
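Eqs. (1) and (2) in miniature: an LLR is just the logarithm of a probability ratio, so its sign picks the more likely bit and its magnitude measures confidence. The probabilities below are made up for illustration.

```python
import math

def llr_from_prob(p_plus):
    """L = ln( P(u = +1) / P(u = -1) ), with P(u = -1) = 1 - P(u = +1)."""
    return math.log(p_plus / (1.0 - p_plus))

print(llr_from_prob(0.5))    # 0.0: equally likely bits carry no information
print(llr_from_prob(0.9) > 0)            # True: +1 is the more likely bit
print(llr_from_prob(0.999) > llr_from_prob(0.9))  # larger magnitude = more confidence
```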
It is convenient to work with trellises. Let us suppose we have a rate-1/2 (n = 2) Convolutional code, with M = 4 states S = {0, 1, 2, 3} and a trellis section like the one presented in Fig. 2. Here a dashed line means the branch was generated by a -1 message bit, and a solid line means the opposite. Each branch is labeled with the associated two-bit codeword xk, where, for simplicity, 0 and 1 correspond to -1 and +1, respectively.
Let us suppose we are at time k. The current state is Sk = s, the previous state is Sk-1 = s', and the symbol received by the decoder is yk. Before this time, k-1 symbols have already been received, and after it N-k symbols will be received. That is, the complete sequence y can be divided into three subsequences, one representing the past, another the present, and another the future:
P(s', s, y) represents the joint probability of receiving the N-bit sequence y and being in state s' at time k-1 and in state s at the current time k. In the numerator, R1 means the summation is computed over all the state transitions from s' to s that are due to message bits uk = +1 (i.e., solid branches). Likewise, in the denominator, R0 is the set of all branches originated by message bits uk = -1 (dashed branches). The variables α, γ and β represent probabilities to be defined next.
They are defined as:

αk-1(s') = P(s', y<k)
γk(s', s) = P(yk, s | s')
βk(s) = P(y>k | s)

At time k the probabilities α, γ and β are associated with the past, the present and the future of the sequence y, respectively. Let us see how to compute them, starting with γ.
4.2.2.1 Calculation of γ:
The probability γk(s', s) = P(yk, s | s') is the probability that the received symbol is yk at time k and the current state is Sk = s, knowing that the state from which the connecting branch came was Sk-1 = s'. It turns out that γ is given by

γk(s', s) = P(yk | xk) P(uk)

In the special case of an AWGN channel the previous expression becomes
where Ck is a quantity that will cancel out when we compute the conditional LLR L(uk|y), as it appears both in the numerator and the denominator of Eq. (4), and Lc, the channel reliability measure, or value, is equal to

Lc = 4a Ec/N0 = 4a Rc Eb/N0

Here N0/2 is the bilateral noise power spectral density, a is the fading amplitude (for non-fading channels a = 1), Ec and Eb are the transmitted energies per coded bit and message bit, respectively, and Rc is the code rate.
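A hedged sketch of this AWGN branch metric: only the factors that survive the LLR ratio are kept (the constant Ck is dropped), Lc plays the role of 4a·Ec/N0, and every value below is illustrative.

```python
import math

def branch_metric(y, x, u, L_apriori, Lc):
    """gamma_k(s', s) ∝ exp(u*L(uk)/2) * exp((Lc/2) * sum(yi*xi))."""
    corr = sum(yi * xi for yi, xi in zip(y, x))      # correlation of y with the branch codeword
    return math.exp(0.5 * u * L_apriori) * math.exp(0.5 * Lc * corr)

# The branch whose codeword agrees with the received pair gets the larger metric.
g_match = branch_metric([0.8, -0.6], [+1, -1], +1, 0.0, 2.0)
g_other = branch_metric([0.8, -0.6], [-1, +1], -1, 0.0, 2.0)
print(g_match > g_other)  # True
```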
α0(s) = 1 if s = 0
α0(s) = 0 if s ≠ 0
In both cases we need the same quantity, γk(s', s); it will have to be computed first. In the computation the states Sk = s are linked to the states Sk-1 = s' and are reached through the branches stemming from s'. These summations contain only two elements with binary codes. The probability α is computed as the sequence y is received; that is, when computing α we go forward from the beginning to the end of the trellis. The probability β can only be computed after the whole sequence y has been received; that is, when computing β we come backward from the end to the beginning of the trellis. We will see that α and β are associated with the encoder states and that γ is associated with the branches, or transitions, between states.
The initial values α0(s) and βN(s) mean the trellis is terminated, that is, it begins and ends in the all-zero state. Therefore it is necessary to add some tail bits to the message so that the trellis path is forced to return to the initial state. It is now self-evident why the BCJR algorithm is also known as the forward-backward algorithm.
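The forward-backward structure can be sketched in a few dozen lines. The following Python fragment (illustrative only, not the report's MATLAB program) builds the trellis of the RSC (1, 5/7) code used in this report, runs the α and β recursions with per-step normalization, and returns L(uk|y). The trellis is assumed to start in the all-zero state, the a priori term is omitted (equally likely bits), and for simplicity β is initialized uniformly instead of enforcing termination.

```python
import math

def build_trellis():
    # RSC (1, 5/7): feedback 1+D+D^2, parity 1+D^2.
    # branches[(state, u)] = (next_state, (x_sys, x_par)) with bits in {-1, +1}
    branches = {}
    for s1 in (0, 1):
        for s2 in (0, 1):
            for u in (0, 1):
                a = u ^ s1 ^ s2                      # feedback bit
                par = a ^ s2                         # parity bit (1 + D^2)
                branches[((s1, s2), u)] = ((a, s1), (2 * u - 1, 2 * par - 1))
    return branches

def encode(bits):
    trellis, s, out = build_trellis(), (0, 0), []
    for u in bits:
        s, x = trellis[(s, u)]
        out.append(x)                                # (systematic, parity) pair
    return out

def bcjr_llr(y_pairs, Lc=2.0):
    trellis = build_trellis()
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    N = len(y_pairs)

    def gamma(k, s, u):                              # branch metric exp((Lc/2)<y, x>)
        s_next, (xs, xp) = trellis[(s, u)]
        ys, yp = y_pairs[k]
        return s_next, math.exp(0.5 * Lc * (ys * xs + yp * xp))

    # forward recursion with per-step normalization
    alpha = [{s: 0.0 for s in states} for _ in range(N + 1)]
    alpha[0][(0, 0)] = 1.0                           # trellis starts in the zero state
    for k in range(N):
        for s in states:
            for u in (0, 1):
                s_next, g = gamma(k, s, u)
                alpha[k + 1][s_next] += alpha[k][s] * g
        z = sum(alpha[k + 1].values())
        for s in states:
            alpha[k + 1][s] /= z

    # backward recursion; uniform at the end (termination not enforced here)
    beta = [{s: 1.0 / len(states) for s in states} for _ in range(N + 1)]
    for k in range(N - 1, -1, -1):
        b = {s: 0.0 for s in states}
        for s in states:
            for u in (0, 1):
                s_next, g = gamma(k, s, u)
                b[s] += g * beta[k + 1][s_next]
        z = sum(b.values())
        beta[k] = {s: b[s] / z for s in states}

    # L(uk|y) = ln( sum over R1 / sum over R0 ) of alpha * gamma * beta
    llrs = []
    for k in range(N):
        num = den = 0.0
        for s in states:
            for u in (0, 1):
                s_next, g = gamma(k, s, u)
                p = alpha[k][s] * g * beta[k + 1][s_next]
                num, den = (num + p, den) if u == 1 else (num, den + p)
        llrs.append(math.log(num / den))
    return llrs

msg = [1, 0, 1, 1, 0, 0]
decoded = [1 if L > 0 else 0 for L in bcjr_llr(encode(msg))]
print(decoded)  # recovers msg for a noiseless channel
```

In practice y_pairs would be the noisy channel outputs, and an a priori factor exp(u·L(uk)/2) would multiply each branch metric.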
After all M values of αk(s) and βk(s) have been computed at time k, sum them all; each value is then divided by the corresponding sum. Likewise, after all 2M products αk-1(s') γk(s', s) βk(s) over all the trellis branches have been computed at time k, their sum can serve the same normalizing purpose.
None of these normalization summations affects the final log-likelihood ratio L(uk|y), as they all appear both in its numerator and denominator:
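This cancellation is easy to check numerically. The sketch below evaluates a one-step LLR ratio with and without arbitrary scale factors on α and β; all numbers are made up.

```python
import math

# Toy two-state step: gamma1 holds transitions caused by uk = +1,
# gamma0 those caused by uk = -1, keyed by (previous state, next state).
alpha = {0: 0.6, 1: 0.4}
beta = {0: 0.7, 1: 0.3}
gamma1 = {(0, 0): 0.9, (1, 1): 0.2}
gamma0 = {(0, 1): 0.1, (1, 0): 0.8}

def llr_ratio(scale_a=1.0, scale_b=1.0):
    # The same scale factors multiply every term of numerator and denominator,
    # so they cancel in the ratio.
    num = sum(scale_a * alpha[s] * g * scale_b * beta[t] for (s, t), g in gamma1.items())
    den = sum(scale_a * alpha[s] * g * scale_b * beta[t] for (s, t), g in gamma0.items())
    return math.log(num / den)

print(abs(llr_ratio() - llr_ratio(0.01, 7.3)) < 1e-12)  # True: normalization cancels
```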
4.2.5.1 Calculation of α:
Let us suppose then that we know αk-1(s') for all states s'. The value of αk(s) is obtained by summing the products αk-1(s') γk(s', s) over the branches that converge into s. For example, at time k two branches arrive at state Sk = 2, one coming from state 0 and the other coming from state 1. After all M values of αk(s) have been computed as explained, they should be divided by their sum. The procedure is repeated until we reach the end of the received sequence and have computed αN(0) (remember our trellis terminates at the all-zero state).
4.2.5.2 Calculation of β:
The quantity β can be calculated recursively only after the complete sequence y has been received. Knowing βk(s), the value of βk-1(s') is computed in a similar fashion to αk(s): we look for the branches that leave state Sk-1 = s', sum the corresponding products γk(s', s) βk(s), and divide by the sum over s' of βk-1(s'). For example, in Fig. 3 we see that two branches leave the state Sk-1 = 1, one directed to state Sk = 0 and the other directed to state Sk = 2, as Fig. 4b shows. The procedure is repeated until the calculation of β0(0).
caused, of course, by an input bit -1. These are the R0 transitions. Therefore, the first four transitions are associated with the numerator of Eq. (4) and the remaining ones with the denominator:
Figure 4.9: The unnormalized probability P(s', s, y) as the three-factor product αk-1(s') γk(s', s) βk(s)
L(uk|y) = ln [ ΣR1 P(s', s, y) / ΣR0 P(s', s, y) ]
A summary of the unnormalized expressions needed to compute the conditional LLR is presented in Fig. 4.10
The encoder input bits, uk = ±1, are equally likely, and the trellis path associated with the coded sequence x begins and ends in the all-zero state, for which two tail bits have to be added to the message. The AWGN channel is such that Ec/N0 = 1 and a = 1.
Figure 5.1: Symbols received and sent at time k = 1 and associated probabilities

And therefore,
Figure 5.2: Values of α, γ and β after all six received symbols have been processed
Calculation of P(s', s, y):
For instance (see Fig. 12), at time k = 3, Pnorm(2,3,y) = P(2,3,y)/P3 is equal to Pnorm(2,3,y) = (0.01 × 0.47 × 0.001)/0.56 ≈ 1.1 × 10^-5, where the sum of all eight branch products is P3 = 0.56.
We are on the brink of getting the wanted values of L(uk|y) given by Eq. (14): we collect the values of the last two rows of Table 1.
The two last LLR values result from the enforced trellis termination. In view of all the values obtained, the decoder's hard decision about the message sequence will be u^ = +1 +1 -1 +1 -1 -1, or 110100.
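The decision rule u^k = sign{L(uk|y)} maps directly to code. The LLR values below are placeholders carrying only the signs of the example above; +1 maps to bit 1 and -1 to bit 0.

```python
# Hard decision from LLR signs: positive LLR -> bit 1, negative LLR -> bit 0.

def hard_decide(llrs):
    return ''.join('1' if L > 0 else '0' for L in llrs)

print(hard_decide([2.1, 3.0, -1.2, 0.4, -2.2, -0.7]))  # 110100
```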
Figure 5.4: Values of Pnorm(s', s, y) and L(uk|y), and the estimate of uk
The 18-element received sequence is y = 0.3 -4.0 -1.9 -2.0 -2.4 -1.3 1.2 -1.1 0.7 -2.0 -1.0 -2.1 -0.2 -1.4 -0.3 -0.1 -1.1 0.3. The initial a priori log-likelihood ratio is assumed to be L(uk) = 0.
Therefore, we will have the following input sequences in the turbo decoder
In the first decoding iteration we would get the following values of the a posteriori log-likelihood ratio L1(uk|y) and of the extrinsic information Le1(uk), both at the output of decoder 1, when we have L(uk) and Lc yk1 at its input:
The values of the extrinsic information (fourth row) were computed by subtracting the second and third rows from the first (Eq. (18)). As we can see, the soft information L1(uk|y) provided by decoder 1 would result in four erroneous bits, those occurring when L1(uk|y) is positive (at times k = 4, 5, 7 and 8). This same decoder passes the extrinsic information Le1(uk) to decoder 2 after convenient interleaving, that is, L1(uk) in the last row. Note that after this half-iteration we have gained strong confidence in the decision about the first bit, due to the large negative values of L1(u1|y) and Le1(u1): most probably u1 = -1 (well, we already know that). We are not so certain about the other bits, especially those with small absolute values of L1(uk|y) and Le1(uk). Decoder 2 will now work with the interleaved systematic values Lc yk1, the second encoder's parity values Lc ykp, and the new a priori LLRs L1(uk). The following table presents the decoding results
according to Eq. (18), where the sequence L2(uk) represents the deinterleaved version of Le2(uk), which will act as a more educated guess of the a priori LLR L(uk). So L2(uk) will be used as one of decoder 1's inputs in the next iteration.
How was L2(uk) computed? As said before, the P(i)-th element of L2(uk) is the i-th element of the original sequence Le2(uk). For example, the fourth element of L2(uk) is equal to
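The deinterleaving step just described can be sketched directly: if position i of the interleaved stream came from position perm[i] of the original order, the extrinsic values are put back by inverting that map. The permutation and values below are made up for illustration.

```python
# Deinterleaving: element i of the interleaved sequence is returned to
# original position perm[i].  perm is an illustrative example pattern.

def deinterleave(values, perm):
    out = [0.0] * len(values)
    for i, p in enumerate(perm):
        out[p] = values[i]
    return out

perm = [3, 0, 4, 1, 2]
Le2 = [0.5, -1.2, 0.3, 2.0, -0.7]
print(deinterleave(Le2, perm))  # [-1.2, 2.0, -0.7, 0.5, 0.3]
```

Interleaving is the inverse operation, `[values[p] for p in perm]`, so applying one after the other restores the original order.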
We can see again that we would still get four wrong bits if we made a hard decision on the sequence L2(uk|y) after it is reorganized in the correct order through deinterleaving. That sequence, -3.90 -3.04 -3.65 0.25 1.23 -0.72 0.18 0.04 -1.44, yields errors in positions 4, 5, 7 and 8. The previous procedure should be repeated iteration after iteration. For example, we would get Table 4 during five iterations; shaded (positive) values would yield wrong hard decisions. All four initially wrong bits have been corrected just after the third iteration. From then on it is a waste of time to proceed further: the values of the a posteriori log-likelihood ratios stabilize very quickly, and we gain nothing by continuing the decoding.

Table 4: Outputs of the turbo decoders during five iterations
K    L1(uk|y), iterations 1-5                L(uk|y), iterations 1-5
1    -4.74   …     …     …     …             -3.90   …     …     …     …
2    -3.20   …     …     …     …             -3.04   …     …     …     …
3    -3.66 -3.28 -3.35 -3.49 -3.64           -3.65 -3.29 -3.35 -3.50 -3.65
4     1.59  0.11 -0.58 -1.02 -1.35            0.25 -0.41 -0.87 -1.22 -1.51
5     1.45  0.27 -0.34 -0.74 -1.05            1.23  0.13 -0.45 -0.85 -1.15
6    -0.74   …     …     …     …             -0.72 -0.97 -1.08 -1.21 -1.33
7     0.04   …     …     …     …              0.18   …     …     …     …
8     0.04   …     …     …     …              0.04   …     …     …     …
9    -1.63   …     …     …     …             -1.44   …     …     …     …
SIMULATION RESULTS
The figure below shows BER curves for an interleaver of length 100, an amount of data of 1500 bits, and 5 iterations.
The figures below (1, 2, 3 and 4) show BER curves for an amount of data of 500 bits, signal-to-noise ratios up to 4 dB, and 5 iterations, with interleaver lengths of 50, 100, 250 and 500, respectively.
Fig. (1): Effect of the number of iterations (1 to 5) on the performance; amount of data 500; BER vs Eb/No
Fig. (2): Effect of the number of iterations (1 to 5) on the performance; BER vs Eb/No
Fig. (3): Effect of the number of iterations (1 to 5) on the performance; BER vs Eb/No
Fig. (4): Effect of the number of iterations (1 to 5) on the performance; BER vs Eb/No
The figures below (5, 6, 7 and 8) show BER curves for an amount of data of 500 bits, 5 iterations, and an interleaver length of 50. Only the signal-to-noise ratio (S/N) is changed, ranging up to 2, 3, 4 and 6 dB, respectively.
Fig. (5): BER vs Eb/No
Fig. (6): BER vs Eb/No
Fig. (7): BER vs Eb/No
Fig. (8): BER vs Eb/No
The figures below (9, 10, 11 and 12) show BER curves for an amount of data of 500 bits, signal-to-noise ratios up to 4 dB, and an interleaver length of 50, with the number of iterations taken as 3, 4, 5 and 6, respectively.
Fig. (9): BER vs Eb/No; interleaver length 50
Fig. (10): BER vs Eb/No; S/N ratio 5 dB, interleaver length 50
Fig. (11): Effect of the number of iterations (1 to 5) on the performance; BER vs Eb/No
Fig. (12): Effect of the number of iterations (1 to 6) on the performance; BER vs Eb/No
1. The bit error probability decreases as the number of iterations goes up. This means that as the iterations increase, the reliability of the decisions taken increases.
2. The interleaver length also affects the performance: as the interleaver length increases, the bit error probability decreases.

As the BCJR algorithm is very complex, we are trying to modify the algorithm to save memory and to reduce complexity. The basic idea is as follows.
It is usually assumed that all state metric values are necessary in the maximum a posteriori (MAP) algorithm in order to compute the a posteriori probability (APP) values. This paper extends the mathematical derivation of the original MAP algorithm and shows that the log likelihood values can be computed using only partial state metric values. By processing N stages in a trellis concurrently, the proposed algorithm results in savings in the required memory size and leads to a power efficient implementation of the MAP algorithm in channel decoding. The computational complexity analysis for the proposed algorithm is presented. Especially for the N = 2 case, we show that the proposed algorithm halves the memory requirement without increasing the computational complexity.
CHAPTER - 7 REFERENCES
REFERENCES
[1] C. Berrou and A. Glavieux, "Near Optimum Error Correcting Coding and Decoding: Turbo Codes," IEEE Trans. Comm., pp. 1261-1271, Oct. 1996.
[2] S. Benedetto, G. Montorsi, D. Divsalar and F. Pollara, "Serial Concatenation of Interleaved Codes: Performance Analysis, Design and Iterative Decoding," TDA Progress Rep. 42-126, Apr.-June 1996, Jet Propulsion Lab., Pasadena, CA, pp. 1-26, Aug. 15, 1996.
[3] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. on Inf. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[4] P. Robertson, E. Villebrun, and P. Hoeher, "A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in the Log Domain," International Conference on Communications, pp. 1009-1013, June 1995.
[5] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," TDA Progress Report 42-124, pp. 63-87, February 15, 1996.
[6] J. Hagenauer, "The turbo principle: Tutorial introduction and state of the art," Proc. 1st Internat. Symp. Turbo Codes, pp. 1-12, 1997.
[7] S. ten Brink, "Designing Iterative Decoding Schemes with the Extrinsic Information Transfer Chart," AEU Int. J. Electron. Commun., vol. 54, no. 6, pp. 389-398, 2000.
[8] S. ten Brink, J. Speidel, and R. Yan, "Iterative demapping and decoding for multilevel modulation," in Proc. Globecom '98, vol. 1, pp. 579-584, 1998.
[9] G. D. Forney, Concatenated Codes. Cambridge (Mass., USA): MIT Press, 1966.
[10] T. Mittelholzer, X. Lin, and J. Massey, "Multilevel Turbo Coding for M-ary Quadrature and Amplitude Modulation," Int. Symp. on Turbo Codes, Brest, September 1997.
[11] X. Li and J. A. Ritcey, "Bit-interleaved coded modulation with iterative decoding," Proc. ICC (1999), pp. 858-863.
[12] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[13] W. S. Wong and R. W. Brockett, "Systems with finite communication bandwidth constraints II: stabilization with limited information feedback," IEEE Trans. Automatic Control, vol. 44, no. 5, pp. 1049-1053, May 1999.
[14] H. El Gamal and A. R. Hammons, "Analyzing the turbo decoder using the Gaussian approximation," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 671-686, Feb. 2001.
[15] J. W. Lee and R. E. Blahut, "Generalized EXIT chart and BER analysis of finite-length turbo codes," Proc. GlobeCom 2003, San Francisco, Dec. 2003.
[16] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low-density parity-check codes," Electron. Lett., vol. 32, pp. 1645-1646, Aug. 1996.
[17] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error correcting coding and decoding: Turbo-codes," in Proc. Int. Conf. Commun., Geneva, Switzerland, pp. 1064-1070, May 1993.
[18] T. Richardson, "The geometry of turbo-decoding dynamics," IEEE Trans. Inform. Theory, vol. 46, no. 1, pp. 9-23, Jan. 2000.
[19] W. Ryan, "A Turbo Code Tutorial," available at http://www.ece.arizona.edu.ryan.
[20] T. A. Summers and S. G. Wilson, "SNR Mismatch and Online Estimation in Turbo Decoding," IEEE Trans. on Comm., vol. 46, no. 4, pp. 421-424, April 1998.
[21] M. Fu, "Stochastic analysis of turbo decoding," IEEE Trans. Inform. Theory, vol. 54, no. 1, pp. 81-100, 2005. Also in Int. Conf. Comm., Seoul, May 2005.
[22] R. W. Brockett and D. Liberzon, "Quantized feedback stabilization of linear systems," IEEE Trans. Automatic Control, vol. 45, no. 7, pp. 1279-1289, July 2000.
[23] M. Srinivasa Rao and A. Srinu, "Analysis of Iterative Decoding for Turbo Codes using Maximum A Posteriori Algorithm," International Conference on Industrial Applications of Soft Computing Techniques (IIASCT-2011), Bhubaneswar.
[24] M. Srinivasa Rao and A. Srinu, "Analysis of Iterative Decoding for Turbo Codes using Maximum A Posteriori Algorithm," National Conference on Recent Trends and Advances in Nano Technology, Department of ECE in collaboration with IEEE Hyderabad Section.
[25] Prof. M. Srinivasa Rao, Dr. P. Rajesh Kumar, K. Anitha, and Addanki Srinu, "Modified Maximum A Posteriori Algorithm for Iterative Decoding of Turbo Codes," IJEST Journal, Manuscript Id: IJEST11-03-08-174.
CHAPTER 8
APPENDIX
SOURCE CODE
main.m
% Turbo decoding simulation: iterative MAP decoding over an AWGN channel
clc
clear all
% Block size
block_size = 100;
% RSC generator polynomials: row 1 = feedback, row 2 = feedforward
code_polynomial = [ 1 1 1; 1 0 1 ];
[n,K] = size(code_polynomial);
m = K - 1;               % encoder memory
code_rate = 1/2;
% Number of decoder iterations
no_of_iterations = 5;
block_error_limit = 15;   % stop criterion: blocks in error at the final iteration
% Signal-to-noise ratio in dB
SNRdb = [1 2 3 4];
for snrdb=1:length(SNRdb)
snr = 10^(SNRdb(snrdb)/10);
noise_var = 1/(2*code_rate*snr);   % AWGN noise variance (unit-energy BPSK assumed)
fprintf('Signal-to-noise ratio = %d dB\n',SNRdb(snrdb))
block_number = 0;
total_errors = zeros(1,no_of_iterations);
block_errors = zeros(1,no_of_iterations);
% simulate blocks until enough block errors have been counted
while block_errors(no_of_iterations) < block_error_limit
block_number = block_number+1;
% random data and random interleaver (scrambler)
Data = round(rand(1,block_size-m));       % information bits; m tail bits are added by the encoder
[tmp,Alpha] = sort(rand(1,block_size));   % random permutation used as the interleaver
% turbo-encoder output (BPSK) observed through AWGN
turbo_encoded = turbo_encoder(Data,code_polynomial,Alpha);
received_signal = turbo_encoded+sqrt(noise_var)*randn(1,(block_size)*2);
% iterative decoding: the two MAP decoders exchange extrinsic information
apriori = zeros(1,block_size);     % a-priori LLRs start at zero
for iteration = 1:no_of_iterations
% First decoder (natural order)
apriori(Alpha) = extrinsic;        % interleave its extrinsic output for the second decoder
% Second decoder (interleaved order)
apriori = extrinsic(Alpha);        % de-interleave for the next pass of the first decoder
Datahat(Alpha) = (sign(LLR)+1)/2;  % hard decisions from the a-posteriori LLRs
bit_errors(iteration) = length(find(Datahat(1:block_size-m)~=Data));
if bit_errors(iteration)>0
block_errors(iteration) = block_errors(iteration)+1;
end %if
end %iterations
total_errors = total_errors + bit_errors;
if block_errors(no_of_iterations)==block_error_limit
BER(snrdb,:) = total_errors/(block_number*(block_size-m));   % BER per iteration at this SNR
end %if
end %while
end %snrdb
% plot bit-error rate versus Eb/No
semilogy(SNRdb,BER), grid
xlabel('Eb/No (dB)')
ylabel('BER')
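The interleaving pair in main.m, apriori(Alpha) = extrinsic (scatter) and apriori = extrinsic(Alpha) (gather), is easy to get backwards. A minimal Python sketch of the same scatter/gather semantics (helper names `interleave`/`deinterleave` are mine, not from the listing) shows why applying one after the other restores the original order:

```python
import random

random.seed(0)
n = 8
alpha_perm = list(range(n))
random.shuffle(alpha_perm)        # random interleaver, like Alpha in main.m

def interleave(x, perm):
    """Scatter: y[perm[i]] = x[i], the analogue of apriori(Alpha) = extrinsic."""
    y = [0.0] * len(x)
    for i, p in enumerate(perm):
        y[p] = x[i]
    return y

def deinterleave(x, perm):
    """Gather: y[i] = x[perm[i]], the analogue of apriori = extrinsic(Alpha)."""
    return [x[p] for p in perm]

ext = [float(i) for i in range(n)]
# gather undoes scatter: the round trip restores the natural order
assert deinterleave(interleave(ext, alpha_perm), alpha_perm) == ext
```

This is why decoder 1's extrinsic output is scattered before decoder 2 and decoder 2's output is gathered before the next pass: each decoder always sees a-priori values in its own bit order.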
turbo_encoder.m
% Turbo encoder: two identical RSC encoders, the second fed through
% the interleaver Alpha (function signature inferred from its use in main.m)
function output = turbo_encoder(Data, code_g, Alpha)
[n,K] = size(code_g);
m = K - 1;
block_s = length(Data);
y = zeros(3,block_s+m);    % rows: systematic, parity 1, parity 2
% encoder 1
%-------------------------------------
state = zeros(m,1);
for i = 1: block_s+m
if i <= block_s
d = Data(1,i);
else
% tail bits that return the encoder to the all-zero state
d = rem( code_g(1,2:K)*state, 2 );
end
a = rem( code_g(1,:)*[d;state], 2 );   % input after the feedback polynomial
v = code_g(2,1)*a;
for j = 2:K
v = xor(v, code_g(2,j)*state(j-1));
end;
state = [a;state(1:m-1)];
y(1,i)=d;
y(2,i)=v;
end
% encoder 2
%-------------------------------------
for i = 1: block_s+m
ytilde(1,i) = y(1,Alpha(i));   % interleaved systematic sequence
end
state = zeros(m,1);
for i = 1: block_s+m
d = ytilde(1,i);
a = rem( code_g(1,:)*[d;state], 2 );   % input after the feedback polynomial
v = code_g(2,1)*a;
for j = 2:K
v = xor(v, code_g(2,j)*state(j-1));
end; %j
state = [a;state(1:m-1)];
y(3,i)=v;
end %i
% puncture the parity streams and map the bits to +/-1 (BPSK)
output = zeros(1,n*(block_s+m));
for i = 1: block_s+m
output(1,n*i-1) = 2*y(1,i)-1;    % systematic bit
if rem(i,2)
output(1,n*i) = 2*y(2,i)-1;      % odd positions: parity from encoder 1
else
output(1,n*i) = 2*y(3,i)-1;      % even positions: parity from encoder 2
end %if
end %i
% demultiplexer: split the received stream into the input
% sequences of the two constituent decoders
block_s = fix(length(Data)/2);
output = zeros(2,2*block_s);
for i = 1: block_s
Dataf(i) = Data(2*i-1);          % systematic samples
if rem(i,2)>0
output(1,2*i) = Data(2*i);       % odd parity samples belong to decoder 1
else
output(2,2*i) = Data(2*i);       % even parity samples belong to decoder 2
end
end
for i = 1: block_s
output(1,2*i-1) = Dataf(i);
output(2,2*i-1) = Dataf(Alpha(i));   % decoder 2 sees the interleaved systematic samples
end
% code properties: build the trellis (next states, previous states, outputs)
for s = 1:no_of_states
dec_cnt_s = s-1; i = 1;
% decimal to binary state
while dec_cnt_s >= 0 & i <= m
bin_cnt_s(i) = rem(dec_cnt_s,2);
dec_cnt_s = (dec_cnt_s - bin_cnt_s(i))/2;
i = i+1;
end
bin_cnt_s = bin_cnt_s(m:-1:1);
% find the possible previous states from the trellis
lst_s(nxt_s(s,1), 1) = s;
lst_s(nxt_s(s,2), 2) = s;
lst_o(nxt_s(s,1), 1:4) = nxt_o(s,1:4);
lst_o(nxt_s(s,2), 1:4) = nxt_o(s,1:4);
end
% log-MAP decoder for the first constituent code (terminated trellis)
block_s = fix(length(Datar)/2);
[n,K] = size(code_g);
m = K - 1;
no_of_states = 2^m;
infty = 1e10;
zero = 1e-300;
% forward recursion
alpha(1,1) = 0;                  % trellis starts in the all-zero state
alpha(1,2:no_of_states) = -infty*ones(1,no_of_states-1);
% code-trellis
nxt_o = 2*nxt_o-1;               % map trellis outputs to +/-1
lst_o = 2*lst_o-1;
for i = 1:block_s
for cnt_s = 1:no_of_states
branch = -infty*ones(1,no_of_states);
if (sum(exp(branch+alpha(i,:))) > zero)
alpha(i+1,cnt_s) = log(sum(exp(branch+alpha(i,:))));
else
alpha(i+1,cnt_s) = -infty;
end
end
alpha_max(i+1) = max(alpha(i+1,:));
end
% backward recursion (trellis terminated in the all-zero state)
beta(block_s,1) = 0;
beta(block_s,2:no_of_states) = -infty*ones(1,no_of_states-1);
for i = block_s-1:-1:1
for cnt_s = 1:no_of_states
branch = -infty*ones(1,no_of_states);
if (sum(exp(branch+beta(i+1,:))) > zero)
beta(i,cnt_s) = log(sum(exp(branch+beta(i+1,:))));
else
beta(i,cnt_s) = -infty;
end
end
end
% a-posteriori LLR computation
for k = 1:block_s
for cnt_s = 1:no_of_states
% metric of the branch carrying data bit 0 into state cnt_s
branch0 = -Datar(2*k-1)+Datar(2*k)*lst_o(cnt_s,2)-log(1+exp(apriori(k)));
end
end
% log-MAP decoder for the second constituent code (trellis not terminated)
block_s = fix(length(Datar)/2);
[n,K] = size(code_g);
m = K - 1;
no_of_states = 2^m;
infty = 1e10;
zero = 1e-300;
% forward recursion
alpha(1,1) = 0;
alpha(1,2:no_of_states) = -infty*ones(1,no_of_states-1);
% code-trellis
nxt_o = 2*nxt_o-1;               % map trellis outputs to +/-1
lst_o = 2*lst_o-1;
for i = 1:block_s
for cnt_s = 1:no_of_states
branch = -infty*ones(1,no_of_states);
if (sum(exp(branch+alpha(i,:))) > zero)
alpha(i+1,cnt_s) = log(sum(exp(branch+alpha(i,:))));
else
alpha(i+1,cnt_s) = -infty;
end
end
alpha_max(i+1) = max(alpha(i+1,:));
end
% backward recursion (no termination: all end states equally likely)
beta(block_s,1:no_of_states) = 0;
for i = block_s-1:-1:1
for cnt_s = 1:no_of_states
branch = -infty*ones(1,no_of_states);
if (sum(exp(branch+beta(i+1,:))) > zero)
beta(i,cnt_s) = log(sum(exp(branch+beta(i+1,:))));
else
beta(i,cnt_s) = -infty;
end
end
end
% a-posteriori LLR computation
for k = 1:block_s
for cnt_s = 1:no_of_states
% metric of the branch carrying data bit 0 into state cnt_s
branch0 = -Datar(2*k-1)+Datar(2*k)*lst_o(cnt_s,2)-log(1+exp(apriori(k)));
end
end
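The two decoder listings above implement the log-MAP (BCJR) forward/backward recursions, but the branch-metric details are hard to follow in the fragmented listing. The following self-contained Python sketch of the same technique is illustrative only: the trellis helper `step`, the decoder `log_map`, the Jacobian-logarithm helper `max_star`, and the noiseless sanity check are my own constructions, matched to the generator matrix [1 1 1; 1 0 1] of main.m and to the second decoder's open (unterminated) trellis end.

```python
import math
import random

# RSC trellis for code_polynomial = [1 1 1; 1 0 1]:
# feedback polynomial 1+D+D^2, feedforward 1+D^2, 4 states.
M = 4  # number of states

def step(state, d):
    """Return (next_state, parity) for info bit d; state packs (s1, s2)."""
    s1, s2 = state >> 1, state & 1
    a = d ^ s1 ^ s2          # feedback bit, like a = rem(code_g(1,:)*[d;state],2)
    p = a ^ s2               # parity, like v = xor(code_g(2,1)*a, code_g(2,3)*s2)
    return (a << 1) | s1, p

def encode(bits):
    """Rate-1/2 systematic encoding, BPSK mapped to +/-1 as in the listing."""
    state, out = 0, []
    for d in bits:
        state, p = step(state, d)
        out.extend([2 * d - 1, 2 * p - 1])
    return out

def max_star(x, y):
    """Jacobian logarithm log(e^x + e^y), the core log-MAP operation."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def log_map(r, sigma2, apriori):
    """A-posteriori LLRs for the systematic bits of r = (sys, par) pairs."""
    n = len(r) // 2
    Lc = 2.0 / sigma2                       # channel reliability
    NEG = -1e10                             # stands in for -infty
    alpha = [[NEG] * M for _ in range(n + 1)]
    alpha[0][0] = 0.0                       # start in the all-zero state
    beta = [[0.0] * M for _ in range(n + 1)]  # open trellis termination

    def gamma(k, s, d):
        ns, p = step(s, d)
        g = 0.5 * Lc * (r[2 * k] * (2 * d - 1) + r[2 * k + 1] * (2 * p - 1)) \
            + 0.5 * (2 * d - 1) * apriori[k]
        return ns, g

    for k in range(n):                      # forward recursion
        for s in range(M):
            if alpha[k][s] <= NEG:
                continue                    # unreachable state
            for d in (0, 1):
                ns, g = gamma(k, s, d)
                alpha[k + 1][ns] = max_star(alpha[k + 1][ns], alpha[k][s] + g)
    for k in range(n - 1, -1, -1):          # backward recursion
        for s in range(M):
            acc = NEG
            for d in (0, 1):
                ns, g = gamma(k, s, d)
                acc = max_star(acc, g + beta[k + 1][ns])
            beta[k][s] = acc
    llr = []
    for k in range(n):                      # combine alpha, gamma, beta into LLRs
        num = den = NEG
        for s in range(M):
            for d in (0, 1):
                ns, g = gamma(k, s, d)
                metric = alpha[k][s] + g + beta[k + 1][ns]
                if d:
                    num = max_star(num, metric)
                else:
                    den = max_star(den, metric)
        llr.append(num - den)
    return llr

random.seed(1)
data = [random.randint(0, 1) for _ in range(20)]
rx = encode(data)                           # noiseless channel as a sanity check
llr = log_map(rx, 1.0, [0.0] * len(data))
decoded = [1 if l > 0 else 0 for l in llr]
assert decoded == data
```

On a noiseless channel the sign of every a-posteriori LLR must reproduce the data, which is what the final assertion checks; in the full simulation, the extrinsic part of these LLRs would be exchanged between the two decoders as in main.m.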