
Analysis of Iterative Decoding For Turbo Codes Using Maximum A Posteriori Algorithm

The document discusses channel coding and iterative decoding for turbo codes using maximum a posteriori algorithms. It provides an introduction to digital communication systems and channel coding. Channel coding techniques like automatic repeat request and forward error correction are described. Turbo codes are an error correcting code that can achieve performance close to the theoretical maximum through an iterative decoding process using the maximum a posteriori algorithm.

ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM

CHAPTER 1

INTRODUCTION

Sir CRR COE.ELURU, Dept of ECE

2009-2011

1.1 Introduction:
The main aim of any communication scheme is to provide error-free data transmission. In a communication system, information can be transmitted by analog or digital signals. In the analog case, the amplitude of the signal reflects the information of the source, whereas in the digital case the information is first translated into a stream of 0s and 1s, and two different signals are then used to represent 0 and 1 respectively. The main advantage of using digital signals is that errors introduced by noise during transmission can be detected and possibly corrected. For communication over cables, the random motion of charges in conductors (e.g. resistors), known as thermal noise, is the major source of noise. For wireless communication channels, noise can be introduced in various ways.

Digital Communications: In the last two decades, there has been an explosion of interest in the transmission of digital information, mainly due to its low cost, simplicity, higher reliability and the possibility of transmitting many services in digital form. The term digital communications broadly refers to the transmission of information using digital messages or bit streams. There are notable advantages to transmitting data using discrete messages: it allows for enhanced signal processing and quality control. In particular, errors caused by noise and interference can be detected and corrected systematically. Digital communications also make the networking of heterogeneous systems possible, with the Internet being the most obvious example. These advantages, and many more, explain the widespread adoption and constantly increasing popularity of digital communication systems.


Block Diagram of the Digital Transmission System:

Figure 1.1: Block Diagram of the Digital Transmission System

The above figure shows a simple block diagram of a digital transmission system.

Digital Source: At the source, information suitable for transmission is produced. The input to this block is either analog or discrete. In the case of an analog input, appropriate processes, i.e. sampling, quantization and coding, are performed so as to form a discrete signal.

Source Encoder: Discrete information obtained from the source block at a certain sampling rate is then input to the source encoder block. In this block, symbol sequences are converted to binary sequences by assigning code words to the input symbols according to a specified rule, with the aim of reducing the redundancy of the encoded data. Since redundancy has been removed from the information source, the encoded information is sensitive to noise in the transmission medium. Hence, a channel encoder inserts redundancy into the source-encoded data so as to protect the required signal against channel errors.


Channel Encoder: The purpose of channel coding is to introduce some redundancy in the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the signal through the channel.

Modulator: The modulator converts the input binary stream to a waveform compatible with the channel characteristics and provides suitable conditions for transmission of the signal. The primary purpose of the digital modulator is to map the binary information sequence into signal waveforms.

Channel: The communication channel is the physical medium that is used to send the signal from transmitter to receiver. Here noise is added to the signal. The remaining blocks in the figure perform the inverse operations of their corresponding blocks at the transmitter to extract the required signal at the destination, as described below.

Demodulator: The digital demodulator reduces the received waveforms to a sequence of numbers.

Channel Decoder: This sequence of numbers is passed to the channel decoder, which attempts to reconstruct the original information sequence from knowledge of the code used by the channel encoder and the redundancy contained in the received data.

Source Decoder: The source decoder accepts the output of the channel decoder and, from knowledge of the source encoding method used, attempts to reconstruct the original information from the source.

In this project we concentrate on the channel coding concept.

1.2 Channel Coding Concept:


Channel coding refers to the class of signal transformations designed to improve communication performance by enabling the transmitted signals to better withstand the effects of various channel impairments. Channel coding can be partitioned into two areas: waveform (or signal design) coding and structured sequences (or structured redundancy). Waveform coding deals with transforming waveforms into better waveforms, to make the detection process less subject to errors. Structured sequences deal with transforming data sequences into better sequences having structured redundancy. Coding's role has expanded tremendously, and today coding can do the following:


Reduce the occurrence of undetected errors. This was one of the first uses of error-control coding. Today's error detection codes are so effective that the occurrence of undetected errors is, for all practical purposes, eliminated.

Reduce the cost of communications systems. Transmitter power is expensive, especially on satellite transponders. Coding can reduce a satellite's power needs because messages received at close to the thermal noise level can still be recovered correctly.

Overcome jamming. Error-control coding is one of the most effective techniques for reducing the effects of an enemy's jamming. In the presence of pulse jamming, for example, coding can achieve coding gains of over 35 dB [8].

Eliminate interference. As the electromagnetic spectrum becomes more crowded with man-made signals, error-control coding will mitigate the effects of unintentional interference.

Despite all the new uses of error-control coding, there are limits to what coding can do. On the Gaussian noise channel, for example, Shannon's capacity formula sets a lower limit on the signal-to-noise ratio that we must achieve to maintain reliable communications. Shannon's lower limit depends on whether the channel is power-limited or bandwidth-limited. The deep-space channel is an example of a power-limited channel, because bandwidth is an abundant resource compared to transmitter power. Telephone channels, on the other hand, are considered bandwidth-limited because the telephone company adheres to a strict 3.1 kHz channel bandwidth.

Popular Coding Techniques


In this section we discuss two popular error-control coding techniques: automatic repeat request (ARQ) and forward error correction (FEC).

A. Automatic Repeat Request (ARQ)

An error detection code by itself does not control errors, but it can be used to request repeated transmission of errored code words until they are received error-free. This technique is called automatic repeat request, or ARQ. In terms of error performance, ARQ outperforms forward error correction because code words are always delivered error-free (provided the error detection code doesn't fail). This advantage does not come free; we pay for it with decreased throughput. The chief advantage of ARQ is that error detection requires much simpler encoding equipment than error correction. ARQ is also adaptive, since it only re-transmits information when errors occur. On the other hand, ARQ schemes require a feedback path, which may not be available. They are


also prone to duping by the enemy. A pulse jammer can optimize its duty cycle to increase its chances of causing one or more errors in each codeword. Ideally (from the jammer's point of view), the jammer forces the communicator to retransmit the same codeword over and over, rendering the channel useless. There are two types of ARQ: stop-and-wait ARQ and continuous ARQ.

Stop-and-wait ARQ. With stop-and-wait ARQ, the transmitter sends a single codeword and waits for a positive acknowledgement (ACK) or negative acknowledgement (NAK) before sending any more code words. The advantage of stop-and-wait ARQ is that it only requires a half-duplex channel. The main disadvantage is that it wastes time waiting for ACKs, resulting in low throughput.

Continuous ARQ. Continuous ARQ requires a full-duplex channel because code words are sent continuously until a NAK is received. A NAK is handled in one of two ways. With go-back-N ARQ, the transmitter retransmits the errored codeword plus all code words that followed until the NAK was received. The parameter N is determined from the round-trip channel delay; for geosynchronous satellite channels, N can be very large because of the 540 millisecond round-trip delay. The transmitter must store N code words at a time, and large values of N result in expensive memory requirements. With selective-repeat ARQ, only the errored codeword is retransmitted, thus increasing the throughput over go-back-N ARQ. Both types of continuous ARQ offer greater throughput efficiency than stop-and-wait ARQ at the cost of greater memory requirements.

B. Forward Error Correction (FEC)

Forward error correction is appropriate for applications where the user must get the message right the first time. The one-way or broadcast channel is one example. Today's error correction codes fall into two categories: block codes and convolutional codes.

Block Codes. The operation of binary block codes was described in Section 4.0 of this paper. All we need to add here is that not all block codes are binary. In fact, one of the most popular block codes is the Reed-Solomon code, which operates on m-bit symbols rather than individual bits. The traditional role for error-control coding was to make a troublesome channel acceptable by lowering the frequency of error events. The error events could be bit errors, message errors, or undetected errors. The codes are produced in the channel encoding schemes.
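The stop-and-wait exchange described above can be sketched in a few lines. Everything in this sketch — the single even-parity check bit, the bit-flip channel model, and the chosen error probability — is an illustrative assumption, not part of any particular protocol standard.

```python
import random

def send_stop_and_wait(word, p_err, rng):
    """Transmit one data word protected by a single even-parity bit,
    retransmitting (NAK) until the receiver's parity check passes (ACK).
    Returns the number of transmissions used."""
    codeword = word + [sum(word) % 2]        # append even-parity check bit
    attempts = 0
    while True:
        attempts += 1
        # binary symmetric channel: flip each bit with probability p_err
        received = [b ^ (rng.random() < p_err) for b in codeword]
        if sum(received) % 2 == 0:           # parity holds -> receiver ACKs
            return attempts                  # (an even number of errors slips through undetected)

rng = random.Random(1)
tx_counts = [send_stop_and_wait([1, 0, 1, 1], 0.05, rng) for _ in range(2000)]
print(sum(tx_counts) / len(tx_counts))       # average transmissions per word; the excess over 1 is the NAK overhead
```

The average number of transmissions grows as the channel worsens, which is exactly the throughput penalty attributed to ARQ above.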


Types of Error Correction Codes:


1. Block codes
2. Convolutional codes
3. Turbo codes
4. LDPC codes

In this project we are using Turbo codes.

1.3 Why were Turbo codes introduced?


First we discuss Shannon's limit. Theoretically, Shannon stated that the capacity of a channel with Additive White Gaussian Noise (AWGN), i.e. the maximum rate at which a signal can be transmitted with an arbitrarily low bit error rate, depends on the Signal to Noise Ratio (SNR) and the bandwidth of the system (W), according to [2]:

C = W log2(1 + S/N) ...(1.1)

where C is the capacity of the channel, and S and N are the signal power and the average noise power, respectively. Based on this theory, it is possible to transmit information at any rate R less than or equal to the channel capacity (R ≤ C) when suitable coding is applied. Instead of S/N, the channel capacity can be represented in terms of the signal-to-noise ratio per information bit (Eb/N0). Considering the relationship between SNR and Eb/N0, and taking the rate equal to the channel capacity (R = C), Equation 1.1 can be rewritten as follows:

S/N = (Eb/N0)(R/W) ...(1.2)

C/W = log2(1 + (Eb/N0)(C/W)) ...(1.3)

In the case of an infinite channel bandwidth (W → ∞, so that C/W → 0), the Shannon bound is defined by:

Eb/N0 = 1/log2(e) = ln 2 = 0.693
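The capacity formula and the infinite-bandwidth bound above can be checked numerically. The 30 dB SNR used for the telephone-channel example below is an assumed illustrative value; only the 3.1 kHz bandwidth figure comes from the text.

```python
import math

def capacity_bps(bandwidth_hz, snr_linear):
    """Shannon capacity C = W log2(1 + S/N) of an AWGN channel (Equation 1.1)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Infinite-bandwidth limit: Eb/N0 -> 1/log2(e) = ln 2
eb_n0_limit = math.log(2)
print(round(eb_n0_limit, 3))                    # 0.693
print(round(10 * math.log10(eb_n0_limit), 2))   # -1.59 (dB)

# Example: the 3.1 kHz telephone channel mentioned earlier, at an assumed 30 dB SNR
print(round(capacity_bps(3100, 10 ** (30 / 10))))   # 30898 bit/s
```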


In order to achieve this bound, i.e. the Eb/N0 = -1.59 dB value, it would be necessary to use a code with such a long length that encoding and decoding would be practically impossible. However, the most significant step towards this target was made by Forney, who found that a long code length could be achieved by the concatenation of two simple component codes with short lengths, linked by an interleaver. Figure 1.3 shows the basic structures of serial and parallel concatenated codes. Unlike serial and hybrid concatenated codes, turbo codes, which are basically implemented by the parallel concatenation of two similar Recursive Systematic Convolutional (RSC) component codes, create a perfect balance between the component codes, with performance close to the Shannon bound. On the basis of these properties, the recent decade saw this type of coding become the subject of much research and find use in several applications.

Graphical representation of Shannon's limit:

Figure 1.2: Graphical representation of Shannon's limit

1.4 Code concatenation methods


Concatenation of codes is a very useful technique that leads to the construction of very efficient codes by using two or more constituent codes of relatively small size and complexity.


Thus, a big, powerful code with high BER performance, but of impractical complexity, can be constructed in an equivalent concatenated form by combining two or more constituent codes to provide the same performance at a lower cost in terms of complexity. The reduced complexity is important especially for the decoding of these codes, which can take advantage of the combined structure of a concatenated code. Decoding is done by combining two or more relatively low complexity decoders, thus effectively decomposing the problem of the decoding of a big code. If these partial decoders properly share the decoded information, by using an iterative technique, for example, then there need be no loss in performance. There are essentially two ways of concatenating codes: traditionally, by using the so-called serial concatenation and more recently, by using the parallel concatenated structure of the first turbo coding schemes. Both concatenation techniques allow the use of iterative decoding.

1.4.1. Serial concatenation


Serial concatenation of codes was introduced by David Forney. In a serial concatenated code, a message block of k elements is first encoded by a code C1 (n1, k), normally called the outer code, which generates a code vector of n1 elements that is then encoded by a second code C2 (n2, n1), usually called the inner code, which generates a code vector of n2 elements. A block diagram of a serial concatenated code is seen in Figure 1.3(a). The decoding of the concatenated code operates in two stages: first performing the decoding of the inner code C2 and then the decoding of the outer code C1. The decoding complexity decomposed into these two decoders is much lower than that of directly decoding the whole code equivalent to the concatenated code, and the error-control efficiency can be the same if the two decoders iteratively share their decoded information, as in turbo decoding.


[Figure: (a) serial concatenation: Component Code 1 → Interleaver → Component Code 2; (b) parallel concatenation: the input feeds Component Code 1 (output C1) directly and Component Code 2 (output C2) through the Interleaver]

Figure 1.3: Serial and Parallel concatenated code structures

1.4.2. Parallel concatenation


Parallel concatenation of codes was introduced by Berrou, Glavieux and Thitimajshima as an efficient technique suitable for turbo decoding. Iterative decoding and parallel concatenation of codes are two of the most relevant concepts introduced in the construction of a turbo code, and they have a strong influence on the impressive BER performance of these codes. A simple structure for a parallel concatenated encoder is seen in Figure 1.3(b); it is the encoder of a turbo code of code rate Rc = 1/3.

A block or sequence of message bits m is input to the encoder of one of the constituent codes, C1, generating an output sequence c1. In addition, the same input sequence is first interleaved and then input to the encoder of the second code, C2, generating an output sequence c2. The outputs of both encoders, c1 and c2, are multiplexed with the input m to form the output or code sequence c, so that the concatenated code is of code rate Rc = 1/3 if all the multiplexed sequences are of the same length.
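The multiplexing of m, c1 and c2 into a rate-1/3 code sequence can be sketched as follows. The accumulator "encoders" and the 4-bit permutation here are placeholders for the real RSC constituent codes and interleaver, purely for illustration.

```python
def parallel_concatenate(m, interleave, enc1, enc2):
    """Rate-1/3 parallel concatenation: multiplex the systematic bits m
    with the parity streams produced by the two constituent encoders."""
    c1 = enc1(m)                            # parity from C1, fed directly with m
    c2 = enc2([m[i] for i in interleave])   # parity from C2, fed with interleaved m
    out = []
    for s, p1, p2 in zip(m, c1, c2):        # one systematic + two parity bits per input bit
        out += [s, p1, p2]
    return out

# Toy constituent "encoders": running XOR (accumulator), purely illustrative
acc = lambda bits: [sum(bits[:i + 1]) % 2 for i in range(len(bits))]

code = parallel_concatenate([1, 0, 1, 1], [2, 0, 3, 1], acc, acc)
print(len(code) // 4)      # 3 output bits per input bit -> code rate 1/3
print(code[0::3])          # [1, 0, 1, 1]: the systematic bits survive unchanged
```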


CHAPTER 2

THE STRUCTURE OF TURBO ENCODER


2.1 Introduction
Turbo codes were introduced as one of the most powerful error control codes. They are basically constructed from two parallel Recursive Systematic Convolutional (RSC) codes, which are linked by an interleaver. Generally, turbo encoded data are decoded by iterative decoding techniques. Due to the feedback connection from the output to the input of the RSC encoder, it is possible to find bit streams that automatically return the RSC encoders to the zero state. This generates code words with low weight for the turbo code. As an effective solution to reduce this drawback, the application of good interleavers is suggested: the interleavers are designed in such a way as to prevent the generation of bad bit streams for the second RSC code. This chapter reviews the structure of turbo codes. Among several suggested algorithms, iterative turbo decoding by the Maximum A Posteriori (MAP) algorithm is discussed and some modifications improving its performance are presented.

2.2 Turbo Encoder


Turbo code design is a big research area by itself. For our purposes, it suffices to understand some basic rules and features of turbo code design. A turbo code is a linear code in the sense that the sum (i.e., exclusive-or) of any two codewords is still a codeword. For a linear code, analysis can be done by assuming that the all-zero codeword is transmitted. Then, the probability that this codeword will be mistaken by the receiver for another codeword is related to the weight of the latter. The weight of a codeword is the number of 1s in the codeword. A low-weight codeword is bad because it can easily be mistaken for the all-zero codeword (the transmitted one). The quality of a code is measured by the distribution of the weights of its code words: a good linear code should have very few low-weight code words. Turbo codes achieve this property by using recursive constituent encoders and an interleaver [16].
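These notions of linearity and weight can be checked on a toy (5,2) linear block code; the generator matrix below is an arbitrary illustrative choice, unrelated to any turbo code.

```python
from itertools import product

# Toy (5,2) linear block code: codewords are all XOR combinations of the rows of G
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]

def encode(msg):
    """Encode a 2-bit message as msg[0]*row0 XOR msg[1]*row1."""
    return [msg[0] * G[0][j] ^ msg[1] * G[1][j] for j in range(5)]

codewords = [tuple(encode(list(m))) for m in product([0, 1], repeat=2)]
weights = sorted(sum(c) for c in codewords)
print(weights)   # [0, 3, 3, 4]: few low-weight words, minimum nonzero weight 3

# Linearity: the XOR of any two codewords is again a codeword
a, b = codewords[1], codewords[2]
assert tuple(x ^ y for x, y in zip(a, b)) in codewords
```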

A turbo encoder is constructed by a parallel concatenation of two identical component codes, which are linked by an interleaver. Generally, Recursive Systematic Convolutional (RSC) codes with rate 1/2 are applied as the component codes. However, it is possible to utilize block codes instead of convolutional codes as component codes. Also, the structures of the component codes can differ from each other, resulting in asymmetric turbo codes. For an RSC encoder with rate 1/2, the input bit stream is transferred directly to the encoder output to form the systematic data part of the encoder. Through the feedback connection from the RSC encoder output back to its input, the systematic bits are encoded to provide the parity bits.

[Figure: input x splits into the systematic stream xs, RSC Encoder 1 producing parity xp1, and, via the Interleaver, RSC Encoder 2 producing parity xp2]

Fig 2: Turbo encoder

A typical turbo encoder is shown in Fig. 2. It consists of two parallel recursive systematic convolutional encoders separated by an interleaver, as shown. Without the puncturer, the coding rate is 1/3 because of the two parity sequences. If this rate is too low, the parity sequences can be punctured to give a higher rate. For example, by puncturing out the even symbols of xp1 and the odd symbols of xp2, the coding rate is increased to 1/2. By convention in digital communications, we assume that the binary 0 and 1 are mapped to physical values of +1 and -1 by the modulator. Similarly to the convolutional code, it is possible to define the generator matrix for the RSC code. This matrix can be represented as G(D) = (1, g0/g1), where 1, g0 and g1 introduce the systematic, feedforward and feedback connections of the RSC encoder, respectively. For the proposed RSC code, the polynomials g0 = 1 + D^2 and g1 = 1 + D + D^2 are obtained, represented by the values 5 and 7 in octal. Fig. 2 and Fig. 2.2 show the block diagram of the turbo encoder and a simple structure of the RSC encoder (1, 5/7) with rate 1/2, respectively.
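The even/odd puncturing pattern just described (rate 1/3 down to rate 1/2) can be sketched as follows; the bit values are arbitrary illustrative data.

```python
def puncture_to_half_rate(xs, xp1, xp2):
    """Rate 1/3 -> 1/2: keep every systematic bit, and alternate parity bits
    (even positions taken from xp1, odd positions from xp2)."""
    out = []
    for i, s in enumerate(xs):
        p = xp1[i] if i % 2 == 0 else xp2[i]
        out += [s, p]
    return out

xs  = [1, 0, 1, 1, 0, 0]      # systematic stream (illustrative)
xp1 = [1, 1, 0, 1, 0, 1]      # parity from RSC encoder 1 (illustrative)
xp2 = [0, 0, 1, 1, 1, 0]      # parity from RSC encoder 2 (illustrative)
code = puncture_to_half_rate(xs, xp1, xp2)
print(len(xs) / len(code))    # 0.5 -> code rate 1/2
```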


In most applications, channel codes are designed with rate 1/2. In order to achieve this rate for a turbo code, it is necessary to puncture half of the bits of each RSC encoder's output. Since puncturing the systematic bits dramatically reduces the code performance, instead of sending two half-punctured systematic streams from the two RSC encoders, only the systematic bits of the first RSC encoder are fully transmitted, and puncturing is performed only on the parity bits to form the desired code rate. Therefore, in the non-punctured case, a turbo code with rate 1/3 is constructed. It has been confirmed that puncturing distributed equally between the two parity streams gives the best performance for the turbo code. First, we look at convolutional encoders.

2.2.1 Convolutional Encoder:
A convolutional encoder with rate R = k/n is constructed on the basis of k input bits, n output bits and m memory units. The memory outputs and the input data are joined in the required combination by Exclusive OR (XOR) operators, which generate the output bits. Figure 2.1(a) shows the structure of the convolutional encoder (n = 2, k = 1, m = 2). In a convolutional encoder, one bit entering the encoder affects the output for m+1 time slots, which is the constraint length of the code. Since XOR is a linear operation, the convolutional encoder is a linear feedforward circuit. Based on this property, the encoder outputs can be obtained by the convolution of the input bits with n impulse responses. The impulse responses are obtained by applying the input bit stream (100...0) and observing the output sequences. Generally, these impulse responses are called generator sequences, and they have lengths equal to the constraint length of the code. The generator sequences determine the connections between the encoder memories, its input and its output. For the encoder illustrated in Figure 2.1(a), the generator matrix G(D) is of the form:

G(D) = [g1, g2], with g1 = (111)2 = (7)8 and g2 = (101)2 = (5)8

A convolutional encoder can also be considered as a sequential circuit. Based on this approach it is possible to illustrate its behavior by a state diagram. This diagram has 2^m

Sir CRR COE.ELURU, Dept of ECE

2009-2011

15 ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM

distinct states corresponding to the possible memory states of the encoder. In the state diagram, 2^k branches leave each state and enter new states, representing the state transitions of the memories and the encoder outputs for each combination of input data. Figure 2.1(b) shows the state diagram of the convolutional encoder (2,1,2) in the above example. In this figure, dotted and solid lines represent input bits of 0 and 1, respectively, while the values on top of each line indicate the first and second encoder output bits, respectively. Here u1 and u2 represent the outputs of the encoder.

Figure 2.1: Convolutional encoder of code rate 1/2 and constraint length K = 3 with generators (7,5)
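A minimal sketch of the (7,5) feedforward encoder described above. It also verifies the impulse-response property: encoding the stream (100...0) reproduces the generator sequences.

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Feedforward convolutional encoder (n=2, k=1, m=2) with generator
    sequences g1 = 111 (7 octal) and g2 = 101 (5 octal).
    The state is the last two input bits held in the shift register."""
    s1 = s2 = 0                  # shift-register (memory) contents
    out = []
    for u in bits:
        window = (u, s1, s2)     # current input plus the two stored bits
        v1 = sum(g * w for g, w in zip(g1, window)) % 2   # first output bit
        v2 = sum(g * w for g, w in zip(g2, window)) % 2   # second output bit
        out += [v1, v2]
        s1, s2 = u, s1           # shift the register
    return out

# Impulse response: output pairs (11)(10)(11) read column-wise give g1 = 111, g2 = 101
print(conv_encode([1, 0, 0]))   # [1, 1, 1, 0, 1, 1]
```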

Systematic Convolutional Encoder: An encoder in which the input appears directly as one part of the encoder output is called a systematic convolutional encoder.


2.2.2 Recursive Systematic Convolutional Encoder


A convolutional encoder in which one of the outputs is fed back to the input is called a recursive convolutional encoder. An encoder with both the systematic property and the feedback property is called a recursive systematic convolutional (RSC) encoder.

[Figure: RSC encoder with input u1, an XOR (+) feedback node, and parity output p1]

Fig 2.2: The RSC encoder (1, 5/7) obtained from the previous figure

The RSC encoder is obtained from the non-recursive, non-systematic (conventional) convolutional encoder by feeding back one of its encoded outputs to its input. Suppose the conventional encoder is represented by the generator sequences g1 = [1 1 1] and g2 = [1 0 1], written compactly as g = [g1, g2]. The RSC encoder is then represented as G = [1, g2/g1], where the first output (represented by g1) is fed back to the input. In this representation, 1 denotes the systematic output, g2 denotes the feedforward output, and g1 is the feedback to the input of the RSC encoder. Figure 2.3 shows the schematic diagram of a constituent (RSC) encoder. In this example, G(z) = N(z)/D(z) with D(z) = 1 + z^-1 + z^-3 and N(z) = 1 + z^-2 + z^-3. Since only binary


coefficients are used, it is common to use the octal representation of the coefficients of G(z). For this example, the binary coefficient sequences for D(z) and N(z) are 1101 and 1011, respectively. Their octal representations are 15 and 13. Therefore, it is common to denote G(z) = (15,13).

Fig.2.3.Systematic Recursive Convolutional Encoder with G(z)=(15,13)

We now try to understand the design of the constituent encoders. A codeword consists of three sequences: xs, xp1 and xp2. Recall that a bad codeword has low weight, i.e., xs, xp1 and xp2 must all have low weights. This is a powerful observation, and it also suggests that turbo codes are most effective when the block size is large. The full diagram of the turbo encoder with the RSC encoders (1, 5/7) is shown in the following figure.


Fig 2.4 : Turbo encoder of Full Structure with RSC Encoders
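The RSC (1, 5/7) constituent encoders in the structure above can be sketched as follows, assuming the common wiring with feedback taps 1 + D + D^2 (octal 7) and feedforward taps 1 + D^2 (octal 5); the exact tap arrangement in the figure may differ.

```python
def rsc_encode(bits):
    """Recursive systematic convolutional encoder (1, 5/7):
    feedback polynomial 1 + D + D^2 (7), feedforward 1 + D^2 (5).
    Returns the (systematic, parity) bit streams."""
    s1 = s2 = 0
    sys_out, par_out = [], []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback sum: input XOR the 1+D+D^2 taps
        p = a ^ s2               # feedforward taps 1 + D^2 applied to a
        sys_out.append(u)        # systematic part: the input passes through unchanged
        par_out.append(p)
        s1, s2 = a, s1           # shift the register
    return sys_out, par_out

sys_bits, par_bits = rsc_encode([1, 1, 0, 0])
print(sys_bits)   # [1, 1, 0, 0] -- input appears unchanged (systematic)
print(par_bits)   # [1, 0, 0, 1]
```

Because of the feedback, a single 1 at the input keeps the register active indefinitely, which is the recursive behaviour that makes good interleaver design matter.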

We see from the above that the interleaver plays a crucial role in turbo code design. For a small to medium block size, the interleaver needs to be carefully designed to ensure that the minimum weight of the code is maximized. For a large block size, the choice of interleaver is not crucial, as long as it provides sufficient randomization. Consequently, it usually suffices to use a pseudo-random interleaver.


2.2.3 Turbo code formation:


The formation of the turbo code from the turbo encoder is shown schematically below.

Fig 2.5: Turbo code formation (code rate = 1/3)

2.2.4 Interleaving
Interleaving generally refers to a process which permutes symbols of an input sequence. It is especially utilized in forward error correction coding to reduce the effect of impulse noise and burst errors in fading and multipath channels. For the same reason it is also applied in magnetic recording systems.

2.2.4.1 Interleaver
Interleaving is a widely used technique in digital communication and storage systems. An interleaver takes a given sequence of symbols and permutes their positions, arranging them in a different temporal order. The basic goal of an interleaver is to randomize the data sequence. When used against burst errors, interleavers are designed to convert error patterns that contain long sequences of serial erroneous data into a more random error pattern, thus distributing errors


among many code vectors. Burst errors are characteristic of some channels, like the wireless channel, and they also occur in concatenated codes, where an inner decoder overloaded with errors can pass a burst of errors to the outer decoder. In general, data interleavers can be classified into block, convolutional, random, and linear interleavers. The effect of the error floor can be reduced by applying an interleaver tailored to the RSC code structure so as to prohibit the generation of self-terminating patterns for the second RSC code. Since the free distance value approximately determines the code performance in the error floor region, and it is usually obtained from weight-2 input bit streams, interleavers are particularly designed to improve the weight-2 distribution of the code. The free distance obtained from the weight-2 distribution is called the effective free distance of the code. An interleaver is ideal when it breaks all self-terminating patterns, generating higher weight for the code. In fact, the main function of the interleaver is the generation of a high weight for the second RSC code from a low-weight input bit stream.

2.2.4.2. Block Interleaver


As explained previously, block interleavers consist of a matrix array of size MI × NI where data are usually written in row format and then read in column format. Filling of all the positions in the matrix is required, and this results in a delay of MI × NI intervals. The operation can be performed equivalently by first writing data in column format and then reading data in row format. A block interleaver of size MI × NI separates the symbols of any burst error pattern of length less than MI by at least NI symbols. If, for example, the interleaved sequence (1 5 9 13 2 6 10 14 3 7 11 15 4 8 12 16) suffers a burst of three consecutive errors and is then written by columns into a 4 × 4 de-interleaver, these errors will be separated by at least four intervals.

The de-interleaved sequence in this case is (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16)

Sir CRR COE.ELURU, Dept of ECE

2009-2011


This confirms that the errors are separated by four positions. In a given application of error-control coding, a block interleaver is selected to have a number of rows ideally larger than the longest burst of errors expected, and in practice at least as large as the length of most expected bursts. The other parameter of a block interleaver is the number of columns of the permutation matrix, NI, which is normally selected to be equal to or larger than the block or decoding length of the code that is used. In this way, a burst of NI errors will produce only one error per code vector.
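The row-write/column-read permutation above can be sketched in a few lines of Python. The 4 × 4 size and the 1–16 sequence mirror the example in the text; the function names are our own.

```python
# Block interleaver sketch: write row-by-row into an mi x ni array,
# read column-by-column. The de-interleaver is the inverse permutation.
def block_interleave(data, mi, ni):
    return [data[r * ni + c] for c in range(ni) for r in range(mi)]

def block_deinterleave(data, mi, ni):
    # inverse: write column-by-column, read row-by-row
    return [data[c * mi + r] for r in range(mi) for c in range(ni)]

seq = list(range(1, 17))                       # the sequence 1..16
tx = block_interleave(seq, 4, 4)               # transmitted order
assert tx == [1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15, 4, 8, 12, 16]
assert block_deinterleave(tx, 4, 4) == seq     # round trip recovers the data

# a burst hitting three consecutive transmitted symbols lands on original
# positions at least NI = 4 apart
burst = tx[5:8]
assert burst == [6, 10, 14]
```

The last assertion reproduces the burst-separation property of the text: consecutive transmitted symbols map back to positions four apart.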

2.2.4.3. Convolutional interleaver


A convolutional interleaver is formed with a set of N registers that are multiplexed in such a way that each register stores L symbols more than the previous register. The order-zero register contains no delay and consists of the direct transmission of the corresponding symbol. The multiplexers commute through the different register outputs and take out the oldest symbol stored in each register, while another symbol is input to that register at the same time. The operation of convolutional interleaving is shown in Figure 2.6. Convolutional interleavers are also known as multiplexed interleavers. The interleaver operation can be properly described by a permutation rule defined over a set of N integer numbers.

Figure 2.6: A Convolutional interleaver
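The register structure of Figure 2.6 can be sketched as follows. The parameters N = 3 and L = 1, the class name, and the use of `None` to mark the initial register contents are illustrative assumptions, not part of any standard.

```python
from collections import deque

# Multiplexed (convolutional) interleaver sketch: branch i is a FIFO holding
# i*L symbols; branch 0 transmits directly; a commutator cycles the branches.
class ConvInterleaver:
    def __init__(self, n, l):
        self.fifos = [deque([None] * (i * l)) for i in range(n)]
        self.k = 0  # commutator position

    def step(self, sym):
        fifo = self.fifos[self.k]
        self.k = (self.k + 1) % len(self.fifos)
        if not fifo:                 # order-zero branch: direct transmission
            return sym
        fifo.append(sym)             # newest symbol in...
        return fifo.popleft()        # ...oldest symbol out

il = ConvInterleaver(3, 1)
out = [il.step(s) for s in range(9)]
# symbols reappear spread out in time; the first outputs of the delayed
# branches are the initial fill values
assert out == [0, None, None, 3, 1, None, 6, 4, 2]
```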


2.2.4.4. Random interleaver


Random interleavers are constructed as block interleavers where the data positions are determined randomly. A pseudo-random generator can be utilized for constructing these interleavers. The memory requirement of a random interleaver is MI × NI symbols, and since in practice two interleavers are needed, one being written (filled) while the other is being read (emptied), the actual memory requirement is 2 MI × NI symbols. In a turbo coding scheme, the interleaver plays a very important role. In general, the BER performance improves as the length of the interleaver is increased. Either block or random interleavers can be used in a turbo code. It has been shown that block interleavers perform better than random interleavers when the size MI × NI of the interleaver is small, and random interleavers perform better than block interleavers when the size is medium or large. The BER performance of a turbo code with large random interleavers is significantly better than that of a turbo code with block interleavers of the same size. However, the larger the interleaver, the larger the delay in the system. Depending on the application, the delay introduced by a turbo code, or more precisely by its interleaver, can be unacceptable, and so in spite of their impressive BER performance, turbo codes with large random interleavers cannot always be used. This is the case for instance in audio applications, where the delay of a turbo code sometimes cannot be tolerated. If the delay is acceptable in a particular application, large random interleavers allow the turbo coding BER performance to approach the Shannon limit. It can be concluded that both families of turbo codes, those constructed using small block interleavers and those constructed with considerably larger random interleavers, can be used in practice, depending on the application.
It has also been shown that square block interleavers are better than rectangular block interleavers, and that odd-dimension interleavers are better than even-dimension interleavers. Therefore, the best selection of a block interleaver is obtained by setting MI = NI and choosing MI and NI to be odd numbers.
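A random interleaver of the kind just described can be sketched as follows. The fixed seed is an assumption so that transmitter and receiver derive the same permutation; in a real system the permutation would be agreed in advance.

```python
import random

# Random interleaver sketch: draw one pseudo-random permutation, then use it
# (and its inverse) for interleaving and de-interleaving.
def make_random_interleaver(length, seed=42):
    rng = random.Random(seed)       # seeded so both ends get the same order
    perm = list(range(length))
    rng.shuffle(perm)
    return perm

def interleave(data, perm):
    return [data[p] for p in perm]

def deinterleave(data, perm):
    out = [None] * len(perm)
    for i, p in enumerate(perm):    # undo the permutation
        out[p] = data[i]
    return out

perm = make_random_interleaver(16)
msg = list(range(16))
assert deinterleave(interleave(msg, perm), perm) == msg
```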


2.2.4.5. Linear interleaver


Another kind of interleaver also utilized in turbo coding schemes is the linear interleaver. One interesting characteristic of this interleaver is that it has a mathematical expression for generating the interleaving permutation, which avoids the need to store the whole structure of the interleaver, usually in the form of a large memory allocation, as is the case for random or block interleavers. In general, turbo codes have an impressive BER performance in the so-called waterfall region, which is where the curve of Pb versus Eb/N0 falls steeply. There is also another characteristic region of the turbo code BER performance curve, known as the error floor region. This floor arises because of the degradation in BER performance caused by the relatively small minimum distance of a turbo code. It is also a consequence of the minimum distance of each of the constituent codes: the smaller the minimum distance of the constituent codes, the higher the BER at which the floor effect starts appearing. In addition, the type and size of the interleaver play an important role in determining the minimum distance of the turbo code.
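One common closed-form rule of this kind is pi(i) = (a·i + b) mod L with gcd(a, L) = 1; the sketch below uses this form. The constants a = 7, b = 3 are illustrative choices, not taken from any particular design.

```python
from math import gcd

# Linear interleaver sketch: the permutation is generated on the fly from a
# closed-form rule pi(i) = (a*i + b) mod L, so no permutation table needs to
# be stored. gcd(a, L) == 1 guarantees the mapping is a bijection.
def linear_index(i, length, a=7, b=3):
    assert gcd(a, length) == 1
    return (a * i + b) % length

L = 16
perm = [linear_index(i, L) for i in range(L)]
assert sorted(perm) == list(range(L))   # a valid permutation of 0..L-1
```

Only the constants a and b (and L) need to be known at the receiver, which is exactly the memory advantage described above.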

2.2.4.6 Reverse Row-column Interleaver


The reverse row-column interleaver was proposed to remove the drawback of a bit 1 located near the end part of the interleaver. In this interleaver, data are written to the memories row-by-row and then read column-by-column, starting from the last column. As with the row-column interleaver, it cannot remove the problem of bad weight-4 input bit streams. However, it provides good performance for codes with very short block lengths, and for moderate and long block lengths at very low signal to noise ratios.

2.2.4.7 Rotated and Backward Interleaver


Rotated and backward interleavers have also been proposed to remove the effect of a bit 1 located at the end part of the row-column interleaver. As with the row-column interleavers, data are written row-by-row. For the rotated interleaver, the written data are rotated 90° and reading is then performed row-by-row; consider, for example, the interleaved data of a rotated interleaver of dimension 3 × 3. In this permutation, the last bit of the input pattern will still be close to the end part of the interleaver. In the backward interleaver, row-by-row reading of the data starts from the last column of the interleaver in the backward direction. Figure 2.8(b) shows the procedure of backward interleaving. It can be seen that the backward interleaver has increased the distance of the last bit of the proposed pattern from the end part of the interleaver. Application of these interleavers improves turbo code performance, especially at medium signal to noise ratios for interleavers with short block lengths.

2.2.4.8 Pseudo-random Interleaver


In contrast to deterministic permutation rules, it is possible to implement interleavers with random permutations. A simple random interleaver is constructed by selecting the memories randomly and reading their contents, with each memory selected only once in the permutation. The weight-2 distribution analysis of turbo codes indicates that the pseudo-random interleaver cannot provide suitably permuted data for the second RSC encoder. However, for input bit streams with higher weights, or for multiple turbo codes, the interleaver performs very well, breaking down bad input bit streams to improve the code performance.

2.2.4.9 Uniform Interleaver


Due to the unpredictable behavior of the pseudo-random interleaver, determining the weight distribution of the code, and consequently its analysis and design, is a major obstacle. In order to overcome this problem, Benedetto et al. proposed uniform interleavers, which assign equal probability to each of the L! possible permutations (L being the interleaver length). When this interleaver is utilized for turbo codes, its performance is determined as the average performance over all codes, and a code using a random interleaver is expected to perform similarly to a code utilizing this interleaver.

2.2.4.10 Semi-random Interleaver


Semi-random interleavers are introduced as another type of random interleaver. They remove the drawback of pseudo-random interleavers for the permutation of weight-2 input bit streams. In order to guarantee that the two 1s of such a bit stream have sufficient distance from each other, a threshold value is chosen such that the distance between consecutive selected memories during the reading procedure is equal to or greater than that value. In fact, in this interleaver any two input bit positions with distance S cannot be permuted to two bit positions whose distance is less than S. Figure 2.9 shows the permutation process for the semi-random interleaver with length L = 9 and threshold value S = 3. As has been verified, the best turbo code performance is achieved with a threshold value of approximately √(L/2). Although the obtained distance is shorter in comparison with other interleavers such as the circular-shift and row-column interleavers, it efficiently breaks self-terminating patterns to provide a suitable pattern for the second RSC encoder.
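The usual greedy construction of such an interleaver can be sketched as follows. The rejection-and-restart strategy and the parameters L = 64, S = 5 are illustrative assumptions; construction usually succeeds for S up to roughly √(L/2).

```python
import random

# Semi-random (S-random) interleaver sketch: indices are drawn at random, but
# a candidate is accepted only if it differs by at least S from each of the
# S most recently accepted indices. On a dead end, restart with a new shuffle.
def s_random_interleaver(length, s, seed=1, max_tries=1000):
    rng = random.Random(seed)
    for _ in range(max_tries):
        pool = list(range(length))
        rng.shuffle(pool)
        perm = []
        ok = True
        for _ in range(length):
            for j, cand in enumerate(pool):
                if all(abs(cand - prev) >= s for prev in perm[-s:]):
                    perm.append(pool.pop(j))
                    break
            else:
                ok = False          # no acceptable candidate: restart
                break
        if ok:
            return perm
    raise RuntimeError("no S-random permutation found")

perm = s_random_interleaver(64, 5)
assert sorted(perm) == list(range(64))
# any two outputs within S positions of each other differ by at least S
assert all(abs(a - b) >= 5 for i, a in enumerate(perm) for b in perm[i + 1:i + 6])
```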

2.2.4.11 Modified Pseudo-random Interleaver


A modification has been performed on the pseudo-random interleaver which recognizes self-terminating input bit streams, when they are divisible by the generator matrix of the code G(D), and prevents them from producing self-terminating interleaved data. Conducted simulations show that this interleaver gives better turbo code performance than the semi-random interleaver.


CHAPTER - 3

STRUCTURE OF TURBO DECODER


Structure of Turbo Decoder


3.1 Typical Turbo Decoder:
Turbo codes are so named because of their iterative soft-decision decoding process, which enables the combination of relatively simple RSC codes to achieve near-optimum performance. Turbo decoding involves the iterative exchange between the constituent decoders of progressively better estimates of the message bits, in a decoding procedure that is helped by the statistical independence of the two code sequences generated by each input bit. The turbo decoder is shown in Figure 3.1. In the decoding procedure, each decoder takes into account the information provided by the samples of the channel, which correspond to the systematic (message) and parity bits, together with the a priori information provided by the other decoder, calculated as its extrinsic information in the previous iteration. However, instead of making a hard decision on the estimated message bits, as done for instance in the traditional decoding of convolutional codes using the Viterbi algorithm, the decoder produces a soft-decision estimate of each message bit. This soft-decision information is an estimate of the corresponding bit being a 1 or a 0; that is, it is a measure of the probability that the decoded bit is a 1 or a 0. This information is more conveniently evaluated in logarithmic form, using what is known as a log-likelihood ratio (LLR). This measure is very suitable because it is a signed number, whose sign directly indicates whether the bit being estimated is a 1 (positive sign) or a 0 (negative sign), and whose magnitude gives a quantitative measure of the probability that the decoded bit is a 1 or a 0.
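The sign/magnitude behaviour of the LLR can be illustrated with a small sketch; the probabilities used are arbitrary examples.

```python
import math

# LLR sketch: L(u) = ln(P(u=1)/P(u=0)). The sign gives the hard decision,
# the magnitude gives the reliability of that decision.
def llr(p1):
    return math.log(p1 / (1.0 - p1))

def hard_decision(l):
    return 1 if l >= 0 else 0

assert llr(0.5) == 0.0                   # completely unreliable decision
assert hard_decision(llr(0.9)) == 1      # confident 1
assert hard_decision(llr(0.1)) == 0      # confident 0
# a 0 and a 1 of equal certainty have LLRs of equal magnitude, opposite sign
assert math.isclose(abs(llr(0.9)), abs(llr(0.1)))
```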


Figure 3.1: Turbo decoder (the received systematic samples ys and parity samples yp1, yp2 feed Decoder 1 and Decoder 2, which exchange the extrinsic values Le12 and Le21 through an interleaver and a de-interleaver)

There are many algorithms that operate using LLRs and perform decoding using soft-input soft-output values. One of these algorithms is the BCJR algorithm. Some background on the measures and probabilities involved in this algorithm is presented next, in order to then introduce the BCJR algorithm. In the traditional decoding approach, the demodulator makes a hard decision on the received symbol and passes to the error control decoder a discrete value, either a 0 or a 1. The disadvantage of this approach is that, while the value of some bits is determined with greater certainty than that of others, the decoder cannot make use of this information. A soft-in soft-out (SISO) decoder receives as input a soft (i.e. real) value of the signal. The decoder then outputs for each data bit an estimate expressing the probability that the transmitted data bit was equal to one. In the case of turbo codes, there are two decoders, one for the output of each encoder. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information, after appropriate reordering of values. Information exchange can be repeated over several iterations, with each decoder refining its estimates using information from the other decoder, and only in the final stage will hard decisions be made, i.e. each bit is assigned the value 1 or 0. Such decoders, although more difficult to implement, are essential in the design of turbo codes. Turbo encoded data is conventionally decoded by iterative decoding techniques, as shown in Figure 4.1. In this technique, two component decoders are linked by an interleaver. Each component decoder provides soft output decoded information usable by the other component decoder at the next iteration. This recursive soft output information is called a priori information. In addition to the a priori information, the component decoders accept the systematic and parity information from the channel output. The a priori and systematic information are subtracted from the soft information obtained at the component decoder output to produce the extrinsic information. The extrinsic information from each component decoder is interleaved or de-interleaved to create a priori information for the other decoder at the next iteration step. At each iteration, the interleaver and de-interleaver rearrange the extrinsic information, generating a new combination of soft information as a priori information for the other component decoder, so as to provide decoded information having the maximum likelihood of matching the original bit stream.

3.2 The algorithms used in the Turbo Decoder:


Two types of algorithm are used in the iterative decoding process:
1. The MAP algorithm, together with its logarithmic versions.
2. The SOVA algorithm.
Generally, the component decoders are designed based on the Maximum A Posteriori (MAP) algorithm, its logarithmic versions, i.e. Log-MAP and Max-Log-MAP, or the Soft Output Viterbi Algorithm (SOVA). At the cost of more complexity, MAP and Log-MAP give better performance than SOVA. However, some modifications have been introduced to SOVA to improve its performance, while maintaining its low-complexity design compared to the two other methods. The MAP algorithm is a maximum likelihood (ML) algorithm, and the SOVA is asymptotically an ML algorithm at moderate and high SNR. The MAP algorithm finds the most probable information bit to have been transmitted, while the SOVA finds the most probable information sequence to have been transmitted given the code sequence. That means the MAP algorithm minimizes the bit or symbol error probability, whereas SOVA minimizes the word error probability. Information bits returned by the MAP algorithm need not form a connected path through the trellis, while for SOVA they will. All of the above algorithms provide soft decoded information based on log-likelihood ratios (LLR). The polarity of the LLR determines the sign of the decoded bit, and its amplitude corresponds to the probability of a correct decision. In this project, the concept of the LLR and its application to iterative turbo decoding by MAP is explained. Finally, some methods to improve SOVA performance are presented. The BCJR, or MAP, algorithm suffers from an important disadvantage: it needs to perform many multiplications. In order to reduce this computational complexity, several simplified versions have been proposed, namely the Max-Log-MAP algorithm in 1990-1994 [3] [4] and the Log-MAP algorithm in 1995 [5]. Both substitute additions for multiplications. The Log-MAP algorithm uses exact formulas, so its performance equals that of the BCJR algorithm although it is simpler, which means it is preferred in implementations. In turn, the Max-Log-MAP algorithm uses approximations, so its performance is slightly worse. The MAP algorithm and its exact logarithmic version therefore give the best performance.

3.3 Comparison between SOVA and MAP Decoding Algorithms:


Generally, the component decoders are designed based on the Maximum A Posteriori (MAP) algorithm, its logarithmic versions, i.e. Log-MAP and Max-Log-MAP, or the Soft Output Viterbi Algorithm (SOVA). At the cost of more complexity, MAP and Log-MAP give better performance than SOVA. However, some modifications have been introduced to SOVA to improve its performance, while maintaining its low-complexity design compared to the two other methods. All of the above algorithms provide soft decoded information based on log-likelihood ratios (LLR). The polarity of the LLR determines the sign of the decoded bit, and its amplitude corresponds to the probability of a correct decision. In this section, the concept of the LLR and its application to iterative turbo decoding by MAP is explained.


Figure 3.2: Comparison of SOVA and MAP algorithms

Types of MAP Algorithm:


However, the MAP algorithm is not easily implementable due to its complexity. Several approximations of the MAP algorithm are now available, such as the Max-Log-MAP algorithm, in which computations are performed largely in the logarithmic domain and hence values and operations are easier to implement. The Log-MAP algorithm avoids the approximations of the Max-Log-MAP algorithm through the use of a simple correction function at each maximization operation, and thus its performance is close to that of the MAP algorithm. A complexity comparison of different decoding methods per unit time for an (n,k) convolutional code with memory order v is given in the following table. Assuming that one table look-up operation is equivalent to one addition, one may see that the Log-MAP algorithm is about three times as complex as the SOVA algorithm, and the Max-Log-MAP algorithm is about twice as complex as the SOVA algorithm.
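The "simple correction function" mentioned above is the Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|). A minimal sketch:

```python
import math

# Log-MAP max* operation: exact ln(e^a + e^b) computed as a max plus a small
# correction term. Max-Log-MAP simply drops the correction, which is what
# makes it cheaper but slightly less accurate.
def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

a, b = 2.0, 1.5
exact = math.log(math.exp(a) + math.exp(b))
assert math.isclose(max_star(a, b), exact)   # identical to the exact sum
assert max_star(a, b) > max(a, b)            # the correction is always positive
```

In implementations the correction term ln(1 + e^-|a-b|) is usually read from a small look-up table, which is why table look-ups appear in complexity comparisons of these algorithms.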


Figure 3.3: Complexity comparison of the decoding algorithms (counts of additions, multiplications, max operations, table look-ups and exponentiations per decoded bit for the MAP, Log-MAP, Max-Log-MAP and SOVA algorithms, expressed as functions of the number of trellis states)

The MAP decoding algorithm is therefore used in this project.

3.4 Turbo Codes Analysis


A turbo code can be analyzed as a code with block-wise or continuous performance. For block-wise performance, similarly to a convolutional code, the analysis is performed on the trellis diagram of the code, with the conventional termination methods applied to the RSC codes to provide isolated codewords of the specified length. In this case, a block interleaver whose length equals the length of the input bit stream is usually utilized. With continuous performance, termination methods are not applied to the RSC codes, so the memory state of the encoder at the end of an input bit stream is taken as the initial state for the next bit stream. This leads to the use of non-block interleavers to sufficiently permute the incoming bit streams of the second RSC encoder. Based on this structure, continuous decoding is accomplished. In comparison with the usual iterative turbo decoding method applied to turbo codes with block-wise performance, continuous methods produce better performance at the expense of increased complexity, which is directly related to the interleaver length and its structure. In addition, results show that continuous decoding is more beneficial in turbo codes with a higher number of states, while in codes with a lower number of states it does not improve on the iterative decoding methods utilized for the block-wise performance of the code. Hence, in most applications, a turbo code with block structure is preferred.


The upper bound equation specifies that a code with a higher free distance value has better performance in terms of error reduction. Because of the feedback connection between the RSC encoder output and its input, the effect of a bit 1 from the impulse response of the code will last until certain external bits are inserted into the RSC encoder, returning its memories to the zero state. Therefore, a higher weight, and consequently improved performance, is expected for the code compared to a non-recursive convolutional code. For example, for the input bit stream (10000000...0) the codeword obtained from the convolutional code (2,1,2) illustrated in Figure 2.1(a) is (11101100000000...0), with weight 5, while its equivalent RSC code (1, 5/7) gives the codeword (11101101...), of infinite length, whose weight is periodically increased by 2 units for every 3 inserted zero bits. Finite weights for this code are obtained when input bit streams with weights higher than 1 are applied in such a way that they return the RSC code to the zero state without any external bits being inserted into the memories. These patterns are called self-terminating patterns. Depending on the weight of the self-terminating patterns and their combinations, low weights can be obtained for the code. For example, the weight-2 self-terminating pattern (100100) for the RSC code (1, 5/7) generates a codeword (11010111) with weight 6. The same pattern generates the weight 10, corresponding to the codeword (111011111011), for the trellis-terminated convolutional code (2,1,2). Similar conditions can occur for self-terminating patterns with weights higher than 2. For example, the weight-6 self-terminating pattern (11100111) generates a codeword (1110110000111011) with weight 10 for the RSC code (1, 5/7), while the same pattern produces a codeword (11101001111101100111) with weight 14 for the trellis-terminated convolutional code (2,1,2).
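The weight-2 example above can be checked with a short sketch of the RSC (1, 5/7) encoder; the register update below assumes feedback polynomial 1 + D + D² (octal 7) and feedforward polynomial 1 + D² (octal 5).

```python
# RSC (1, 5/7) sketch: systematic bit plus one parity bit per input bit.
def rsc_encode(bits):
    s1 = s2 = 0
    out = []
    for u in bits:
        a = u ^ s1 ^ s2          # feedback taps (1 + D + D^2)
        p = a ^ s2               # feedforward taps (1 + D^2)
        out += [u, p]            # systematic bit, then parity bit
        s1, s2 = a, s1
    return out, (s1, s2)

# the weight-2 pattern 100100 discussed above
code, state = rsc_encode([1, 0, 0, 1, 0, 0])
assert state == (0, 0)           # self-terminating: encoder returns to zero
assert sum(code) == 6            # codeword weight 6, as stated in the text
```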
In addition, if trellis termination is considered for the RSC code, a weight-1 input bit stream whose 1 is positioned in the end part of the bit stream can generate a codeword with a lower weight than the codeword obtained from the convolutional code. For the RSC code (1, 5/7), the weight-1 bit stream (000..001) generates a codeword (101110) with weight 5, which is lower than the weight 6 obtained from the codeword (111011) for the trellis-terminated convolutional code (2,1,2). The above analysis can be extended to turbo codes, which incorporate similar RSC codes. The results obtained from iterative decoding indicate that the turbo code has very good performance in error reduction at low signal to noise ratios. This region is named the waterfall. At medium to high signal to noise ratios, in the region named the error floor, the BER slope of the iterative decoder is reduced significantly. At very low signal to noise ratios, the BER stays high and at an almost constant value. This region is named non-convergence. Figure 2.6 specifies the mentioned regions of turbo code performance with maximum likelihood iterative decoding methods. Analysis of the turbo code indicates that at low signal to noise ratios a large number of codewords with medium weight determines the code performance. The results confirm that by increasing the length of the interleaver the influence of those weights on the code performance can be reduced. At medium to high signal to noise ratios, only the first terms of the weight distribution contribute to the code performance. This indicates that the interleaver type has an essential influence on the code performance.

3.5 Performance of Turbo Codes


We have seen that conventional codes left a 3 dB gap between theory and practice. After bringing out the arguments for the efficiency of turbo codes, one clearly wants to ask: how efficient are they? Already the first rate-1/3 code proposed in 1993 made a huge improvement: the gap between Shannon's limit and implementation practice was only 0.7 dB, giving a less than 1.2-fold overhead. (In the authors' measurements, the allowed bit error rate BER was 10^-5.) In [2], a thorough comparison between convolutional codes and turbo codes is given. In practice, the code rate usually varies between 1/2 and 1/6. Let the allowed bit error rate be 10^-6. For code rate 1/2, the relative increase in energy consumption is then 4.80 dB for convolutional codes and 0.98 dB for turbo codes. For code rate 1/6, the respective numbers are 4.28 dB and -0.12 dB. It can also be noticed that turbo codes gain significantly more from lowering the code rate than conventional convolutional codes.
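The quoted dB gaps translate into energy factors via 10^(x/10), which is easy to verify; the figures checked here are the ones quoted above.

```python
# A gap of x dB corresponds to a 10**(x/10) factor in required energy.
def db_to_factor(db):
    return 10 ** (db / 10)

assert db_to_factor(0.7) < 1.2                 # the "less than 1.2-fold" overhead
assert round(db_to_factor(4.80), 1) == 3.0     # conventional codes, rate 1/2
assert db_to_factor(0.98) < db_to_factor(4.80) # turbo codes need far less energy
```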


Figure 6: Performance


CHAPTER - 4

ITERATIVE DECODING CONCEPT


Iterative Decoding Concept


Turbo codes differ from other codes in the way they are decoded: they are decoded iteratively. Indeed, it is this iterative decoding which makes them so efficient and impressive. The word turbo is a carryover from the turbo engine, which works iteratively by using feedback. So the main factors that make turbo codes different are concatenation and iterative decoding. Turbo codes are decoded using a method called maximum likelihood detection (MLD). The filtered signal is fed to the decoders, and the decoders work on the signal amplitude to output a soft decision. The a priori probabilities of the input symbols are used, and a soft output indicating the reliability of the decision (amounting to a suggestion by decoder 1 to decoder 2) is calculated, which is then iterated between the two decoders. The form of MLD decoding used by turbo codes is called maximum a posteriori probability (MAP). In communications, this algorithm was first identified in the BCJR paper, and that is how it is known for turbo applications. The MAP algorithm is related to many other algorithms, such as the hidden Markov model (HMM), which is used in voice recognition, genomics and music processing. Other similar algorithms are the Baum-Welch algorithm, expectation maximization, the forward-backward algorithm, and more. MAP is a complex algorithm, hard to understand and hard to explain. In addition to the MAP algorithm, another algorithm called SOVA, based on Viterbi decoding, is also used. SOVA uses the Viterbi decoding method but with soft outputs instead of hard ones. SOVA maximizes the probability of the sequence, whereas MAP maximizes the bit probabilities at each time, even if that makes the sequence invalid. MAP produces near-optimal decoding.

In turbo codes, the MAP algorithm is used iteratively to improve performance. It is like the game of 20 questions, where each previous guess helps to improve your knowledge of the hidden information. The number of iterations is often preset, as in 20 questions. More iterations are done when the SNR is low; when the SNR is high, fewer iterations are required since the results converge quickly. Doing 20 iterations may be a waste if the signal quality is good. Instead of making an ad-hoc decision, the algorithm is often preset with a number of iterations. On average, seven iterations give adequate results, and no more than 20 are ever required. These numbers are related to the central limit theorem.

Figure 4.1: Iterative decoding in MAP algorithm

Although used together, the terms MAP and iterative decoding are separate concepts. The MAP algorithm refers to specific mathematics. The iterative process, on the other hand, can be applied to any type of coding, including block coding, which is not trellis based and may not use the MAP algorithm.

4.1 Turbo decoding process


For each time tick k, decoding is done by calculating the L-value of bit uk. If it is positive, the decision is in favor of a +1. Calculation of the L-values, or L(uk), is quite a complex process. After some analysis it can be shown that the LLR can be expressed as a sum of three values as follows:

L(uk) = L_apriori + L_channel + L_extrinsic

The L-channel value does not change from iteration to iteration, so let us call it K. The only two items that change are the a priori and extrinsic L-values:

L(uk) = L_apriori + K + L_extrinsic

The a priori value goes in, it is used to compute the new a posteriori probability, and this can then be used to compute L(uk) or be passed to the next decoder.


Figure 4.2: Iteration Process

Although in the example we compute L(uk) each time, during actual decoding this is not done. Only the a posteriori metric is computed, and the decoders keep doing this either a fixed number of times or until it converges. The a posteriori metric passed between decoders is also called extrinsic information.

This process is shown above. So the whole objective of the calculations is to compute the extrinsic L-value. Eventually a decision is made about the bit in question by looking at the sign of the L-value:

ûk = sign{L(uk)}

The iteration cycle of the MAP algorithm is shown below.


Figure 4.3: Iteration cycle of MAP turbo decoding

Application of the BCJR Algorithm in Iterative Decoding:

Let us consider a rate-1/n systematic convolutional encoder in which the first coded bit, xk1, is equal to the information bit uk. In that case the a posteriori log-likelihood ratio L(uk/y) can be decomposed into a sum of three elements:

L(uk/y) = L(uk) + Lc yk1 + Le(uk)    (4.1)

The first two terms on the right-hand side are related to the information bit uk. On the contrary, the third term, Le(uk), depends only on the codeword parity bits. That is why Le(uk) is called extrinsic information. This extrinsic information is an estimate of the a priori LLR L(uk). How? It is easy: we provide L(uk) and Lc yk1 as inputs to a MAP (or other) decoder, and in turn we get L(uk/y) at its output. Then, by subtraction, we get the estimate of L(uk):

41 ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM

Le(uk) = L(uk|y) - L(uk) - Lc·yk1 ------------------------ (4.2)

This estimate of L(uk) is presumably a more accurate value of the unknown a priori LLR, so it should replace the former value of L(uk). If we repeat the previous procedure in an iterative way, providing Lc·yk1 (again) and the new L(uk) = Le(uk) as inputs to another decoder, we expect to get a more accurate L(uk|y) at its output. This fact is explored in turbo decoding, our next subject.
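As a quick numeric sketch of Eq. (4.2) (the function name is illustrative; the sample values are taken from the first row of the decoder-1 table in the turbo-decoding example of Section 4.4):

```python
def extrinsic(L_posteriori, L_apriori, Lc, y_systematic):
    """Eq. (4.2): Le(uk) = L(uk|y) - L(uk) - Lc*yk1."""
    return L_posteriori - L_apriori - Lc * y_systematic

# First row of the decoder-1 table in Section 4.4:
# L1(u1|y) = -4.74, L(u1) = 0, Lc = 1, y11 = 0.30  ->  Le1(u1) = -5.04
print(round(extrinsic(-4.74, 0.0, 1.0, 0.30), 2))  # -5.04
```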

The turbo code inventors [1] worked with two parallel concatenated and interleaved rate-1/2 recursive systematic Convolutional codes, decoded iteratively with two MAP decoders. Eq. (4.2) is the basis for iterative decoding. In the first iteration the a priori LLR L(uk) is zero if we consider equally likely input bits. The extrinsic information Le(uk) that each decoder delivers will be used to update L(uk) from iteration to iteration and from one decoder to the other. This way the turbo decoder progressively gains more confidence on the ±1 hard decisions the decision device will have to make at the end of the iterative process. Fig. 4.4 shows a simplified turbo decoder block diagram that helps illuminate the iterative procedure.


Fig 4.4: The Simplified Turbo Decoder.

The iterative decoding proceeds as follows. In the first iteration we arbitrarily assume L(uk) = 0; then decoder 1 outputs the extrinsic information Le1(uk) on the systematic, or message, bit, gathered from the first parity bit (note that decoder 2 does not actually need the LLR L1(uk|y)!). After appropriate interleaving, the extrinsic information Le1(uk) from decoder 1, computed from Eq. (4.2), is delivered to decoder 2 as L1(uk), a new, more educated guess of L(uk). Then decoder 2 outputs Le2(uk), its own extrinsic information on the systematic bit, based on the other parity bit (note again that the LLR itself is not passed on!). After suitable deinterleaving, this information is delivered to decoder 1 as L2(uk), a newer, even more educated guess of L(uk). A new iteration will then begin.

43 ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM

After a prescribed number of iterations, or when a stop criterion is reached, the log-likelihood L2(uk|y) at the output of decoder 2 is deinterleaved and delivered as L(uk|y) to the hard-decision device, which in turn estimates the information bit based only on the sign of the deinterleaved LLR.

Here we use the BCJR algorithm, so let us first discuss the BCJR algorithm in detail.

4.2 Introduction to MAP Algorithm:


Turbo codes are decoded using a method called Maximum Likelihood Detection (MLD). The filtered signal is fed to the decoders, and the decoders work on the signal amplitude to output a soft decision. The a priori probabilities of the input symbols are used, and a soft output indicating the reliability of the decision (amounting to a suggestion by decoder 1 to decoder 2) is calculated, which is then iterated between the two decoders. The form of MLD used by turbo codes is called Maximum a Posteriori Probability, or MAP. In communications, this algorithm was first identified in the BCJR paper, and that is how it is known in turbo applications. The MAP algorithm is related to many other algorithms, such as the Hidden Markov Model (HMM), which is used in voice recognition, genomics and music processing. Other similar algorithms are the Baum-Welch algorithm, expectation maximization, the forward-backward algorithm, and more. MAP is a complex algorithm, hard to understand and hard to explain.

In addition to the MAP algorithm, another algorithm called SOVA, based on Viterbi decoding, is also used. SOVA uses the Viterbi decoding method but with soft outputs instead of hard ones. SOVA maximizes the probability of the sequence, whereas MAP maximizes the bit probabilities at each time, even if that makes the sequence not legal. MAP produces near-optimal decoding. In turbo codes, the MAP algorithm is used iteratively to improve performance. It is like the game of 20 questions, where each previous guess helps to improve your knowledge of the hidden information. The number of iterations is often preset, as in 20 questions. More iterations are done when the SNR is low; when the SNR is high, fewer iterations are required since the

44 ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM

results converge quickly. Doing 20 iterations may be a waste if the signal quality is good. Instead of deciding ad hoc, the algorithm is often preset with a number of iterations. On average, seven iterations give adequate results, and no more than 20 are ever required. These numbers have a relationship to the central limit theorem.

Although used together, the terms MAP and iterative decoding are separate concepts. The MAP algorithm refers to specific math. The iterative process, on the other hand, can be applied to any type of coding, including block coding, which is not trellis based and may not use the MAP algorithm. Here we concentrate only on PCCC decoding using the iterative MAP algorithm: first the theory of the MAP algorithm, and then a step-by-step example. We are going to describe MAP decoding using a turbo code with two RSC encoders. Each RSC has two memory registers, so the trellis has four states, with a constraint length equal to 3.

The info bits are called uk. The coded bits are referred to by the vector c. The coded bits are then transformed into an analog symbol x and transmitted. On the receive side, a noisy version of x is received. By looking at how far the received symbol is from the decision regions, a metric of confidence is attached to each of the three bits in the symbol. Often Gray coding is used, which means that not all bits in the symbol have the same level of confidence for decoding purposes. There are special algorithms for mapping the symbols (one received voltage value to M soft decisions, with M being the M in M-PSK). Let us assume that after the mapping and creation of soft metrics, the vector y is received. One pair of these soft bits is sent to the first decoder, and another set, using a de-interleaved version of the systematic bit and the second parity bit, is sent to the second decoder. Each decoder works only on its own bits of information, and the decoders pass their confidence scores to each other until both agree within a certain threshold. Then the process restarts with the next symbol in a sequence or block consisting of N symbols (or bits).

4.2.1The purpose of the BCJR Algorithm:


Let us consider a block or Convolutional encoder described by a trellis, and a sequence x = x1 x2 … xN of N n-bit codewords or symbols at its output, where xk is the symbol generated by the encoder at time k. The corresponding information or message input bit, uk, can take on the values -1 or +1 with an a priori probability P(uk), from which we can define the so-called log-likelihood ratio (LLR):

L(uk) = ln [ P(uk = +1) / P(uk = -1) ] ------------------------ (1)

This log-likelihood ratio is zero with equally likely input bits. Suppose the coded sequence x is transmitted over a memoryless additive white Gaussian noise (AWGN) channel and received as the sequence of nN real numbers y = y1 y2 … yN, as shown in Fig. 4.5. This sequence is delivered to the decoder and used by the BCJR [3], or any other, algorithm in order to estimate the original bit sequence uk, for which the algorithm computes the a posteriori log-likelihood ratio L(uk|y), a real number defined by the ratio

L(uk|y) = ln [ P(uk = +1 | y) / P(uk = -1 | y) ] ------------------------ (2)
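The definition above can be checked with a couple of lines (a minimal sketch; `llr` is an illustrative name):

```python
import math

def llr(p_plus1):
    """L(uk) = ln(P(uk = +1) / P(uk = -1)), with P(uk = -1) = 1 - P(uk = +1)."""
    return math.log(p_plus1 / (1.0 - p_plus1))

print(llr(0.5))      # 0.0: equally likely bits give a zero a priori LLR
print(llr(0.9) > 0)  # True: a positive LLR favours uk = +1
```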

The numerator and denominator of Eq. (2) contain a posteriori conditional probabilities, that is, probabilities computed after we know y. The positive or negative sign of L(uk|y) is an indication of which bit, +1 or -1, was coded at time k. Its magnitude is a measure of the confidence we have in that indication.

Fig4.5: A simplified block diagram of the system under consideration

Sir CRR COE.ELURU, Dept of ECE

2009-2011

46 ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM

It is convenient to work with trellises. Let us admit we have a rate-1/2 (n = 2) Convolutional code, with M = 4 states S = {0, 1, 2, 3} and a trellis section like the one presented in Fig. 4.6. Here a dashed line means the branch was generated by a +1 message bit and a solid line means the opposite. Each branch is labeled with the associated two-bit codeword xk, where, for simplicity, 0 and 1 correspond to -1 and +1, respectively.

Fig 4.6: The Convolutional Code trellis used in the Example

Let us suppose we are at time k. The corresponding state is Sk = s, the previous state is Sk-1 = s', and the symbol received by the decoder is yk. Before this time k-1 symbols have already been received, and after it N-k symbols will be received. That is, the complete sequence y can be divided into three subsequences, one representing the past, another the present and another one the future:

y = y1, y2, …, yk-1, yk, yk+1, …, yN


The a posteriori LLR L(uk|y) is given by the expression

L(uk|y) = ln [ Σ(s',s)∈R1 P(s', s, y) / Σ(s',s)∈R0 P(s', s, y) ] ------------------------ (4)


P(s', s, y) represents the joint probability of receiving the N-symbol sequence y and being in state s' at time k-1 and in state s at the current time k. In the numerator, R1 means the summation is computed over all the state transitions from s' to s that are due to message bits uk = +1 (i.e., dashed branches). Likewise, in the denominator, R0 is the set of all branches originated by message bits uk = -1. The variables α, β and γ represent probabilities to be defined later.

4.2.2 The Joint Probability P(s', s, y):


This probability can be computed as the product of three other probabilities,

P(s', s, y) = αk-1(s') γk(s', s) βk(s)

They are defined as

αk-1(s') = P(s', y<k)
γk(s', s) = P(yk, s | s')
βk(s) = P(y>k | s)

At time k the probabilities α, γ and β are associated with the past, the present and the future of sequence y, respectively. Let us see how to compute them, starting with γ.

4.2.2.1 Calculation of γ:

The probability γk(s', s) = P(yk, s | s') is the conditional probability that the received symbol is yk at time k and the current state is Sk = s, knowing that the state from which the connecting branch came was Sk-1 = s'. It turns out that γ is given by

γk(s', s) = P(yk | xk) P(uk)

In the special case of an AWGN channel the previous expression becomes

γk(s', s) = Ck exp[ uk L(uk)/2 ] exp[ (Lc/2) Σl xkl ykl ]


where Ck is a quantity that will cancel out when we compute the conditional LLR L(uk|y), as it appears both in the numerator and the denominator of Eq. (4), and Lc, the channel reliability measure, or value, is equal to

Lc = 4a·Ec/N0 = 4a·Rc·Eb/N0

where N0/2 is the bilateral noise power spectral density, a is the fading amplitude (for non-fading channels a = 1), Ec and Eb are the transmitted energies per coded bit and per message bit, respectively, and Rc is the code rate.
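Dropping the constant Ck, the AWGN branch metric can be sketched as follows (a non-authoritative illustration; the two-element codeword and the received pair y1 = (0.3, 0.1) with Lc = 5 are taken from the numerical example of Section 4.3):

```python
import math

def gamma_unnormalized(u, L_apriori, Lc, x, y):
    """AWGN branch metric up to the constant Ck (which cancels in the LLR):
    gamma ~ exp(u*L(u)/2) * exp((Lc/2) * sum_l x_l*y_l)."""
    correlation = sum(xl * yl for xl, yl in zip(x, y))
    return math.exp(0.5 * u * L_apriori) * math.exp(0.5 * Lc * correlation)

# Branch labelled x = (+1, +1) against y1 = (0.3, 0.1), with equally likely bits:
print(gamma_unnormalized(+1, 0.0, 5.0, (1, 1), (0.3, 0.1)))  # exp(1.0), about 2.718
```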

4.2.2.2 Recursive calculation of α and β

The probabilities α and β can (and should) be computed recursively. The respective recursive formulas are

αk(s) = Σs' αk-1(s') γk(s', s)

with initial conditions α0(0) = 1 and α0(s) = 0 for s ≠ 0, and

βk-1(s') = Σs γk(s', s) βk(s)

with initial conditions βN(0) = 1 and βN(s) = 0 for s ≠ 0.

In both cases we need the same quantity, γk(s', s), so it will have to be computed first. In the αk(s) case the summation is over all previous states Sk-1 = s' linked to state s by converging branches, while in the βk-1(s') case the summation is over all next states Sk = s reached by the branches stemming from s'. With binary codes these summations contain only two elements. The probability α is computed as the sequence y is received: when computing α we go forward from the beginning to the end of the trellis. The probability β can only be computed after the whole sequence y has been received: when computing β we come backward from the end to the beginning of the trellis. We will see that α and β are associated with the encoder states and that γ is associated with the branches, or transitions, between states.
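The two recursions can be sketched generically in a few lines of Python (an illustrative, unoptimized sketch: `gamma[k][sp][s]` holds γk+1(s', s), with branches that do not exist in the trellis simply set to zero; the normalization of Section 4.2.4 is applied at each step):

```python
def normalize(v):
    total = sum(v)
    return [x / total for x in v]

def forward_backward(gamma):
    """Normalized alpha (forward) and beta (backward) recursions."""
    N, M = len(gamma), len(gamma[0])      # trellis sections, states
    alpha = [[0.0] * M for _ in range(N + 1)]
    beta = [[0.0] * M for _ in range(N + 1)]
    alpha[0][0] = 1.0   # the trellis starts in the all-zero state ...
    beta[N][0] = 1.0    # ... and the tail bits force it to end there
    for k in range(1, N + 1):             # forward pass
        alpha[k] = normalize([sum(alpha[k - 1][sp] * gamma[k - 1][sp][s]
                                  for sp in range(M)) for s in range(M)])
    for k in range(N - 1, -1, -1):        # backward pass
        beta[k] = normalize([sum(gamma[k][sp][s] * beta[k + 1][s]
                                 for s in range(M)) for sp in range(M)])
    return alpha, beta
```

Each normalized row of alpha and beta sums to one, which is exactly what keeps the recursions from underflowing on long blocks.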


The initial values α0(s) and βN(s) mean the trellis is terminated, that is, it begins and ends in the all-zero state. Therefore it will be necessary to add some tail bits to the message so that the trellis path is forced to return to the initial state. It is self-evident now why the BCJR algorithm is also known as the forward-backward algorithm.

4.2.4 Combatting numerical instability


Numerical problems associated with the BCJR algorithm are well known. In fact, the iterative nature of some computations may lead to undesired overflow or underflow situations. To circumvent them, normalization countermeasures should be taken. Therefore, instead of using α and β directly from the recursive equations, those probabilities should first be normalized by the sum of all α and of all β at each time, respectively. The same applies to the joint probability P(s', s, y), as follows. Define the auxiliary unnormalized variables at each time step k:

α̃k(s) = Σs' αk-1(s') γk(s', s),    β̃k-1(s') = Σs γk(s', s) βk(s)

After all M values of α̃ and β̃ have been computed, sum them all:

Ak = Σs α̃k(s),    Bk-1 = Σs' β̃k-1(s')

Then

αk(s) = α̃k(s)/Ak,    βk-1(s') = β̃k-1(s')/Bk-1

Likewise, after all 2M products αk-1(s') γk(s', s) βk(s) over all the trellis branches have been computed at time k, their sum Pk will normalize P(s', s, y):

Pnorm(s', s, y) = αk-1(s') γk(s', s) βk(s) / Pk

None of these normalization summations affect the final log-likelihood ratio L(uk|y), as all of them appear both in its numerator and denominator.
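The cancellation is easy to convince oneself of numerically (the probability values below are made up purely for illustration):

```python
import math

# Dividing both the numerator and the denominator of the LLR by the same
# normalization sum leaves L(uk|y) unchanged.
num, den, scale = 0.012, 0.003, 0.56
L_raw = math.log(num / den)
L_normalized = math.log((num / scale) / (den / scale))
print(abs(L_raw - L_normalized) < 1e-12)  # True
```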

4.2.5 Trellis-aided calculation of α and β

We recall that in our convention a dashed line results from a +1 input bit while a solid line results from a -1 input bit. Let us do the following as computations are made: label each trellis branch with the value of γk(s', s) computed according to the equation above; in each state node write the value of αk(s) computed recursively from the initial conditions α0(s); and in each state node, below αk(s), write the value of βk(s) computed recursively from the initial conditions βN(s).


Figure 4.7: α, γ and β as trellis labels

4.2.5.1 Calculation of α

Let us suppose, then, that we know αk-1(s'). The probability αk(s) (without normalization) is obtained by summing the products of αk-1(s') and γk(s', s) associated with the branches that converge into s. For example, at time k two branches arrive at state Sk = 2, one coming from state 0 and the other coming from state 1. After all M values of αk(s) have been computed as explained, they should be divided by their sum. The procedure is repeated until we reach the end of the received sequence and have computed αN(0) (remember our trellis terminates at the all-zero state).


Figure 4.8: Trellis-aided recursive calculation of normalized α and β

4.2.5.2 Calculation of β

The quantity β can be calculated recursively only after the complete sequence y has been received. Knowing βk(s), the value of βk-1(s') is computed in a similar fashion to αk(s): we look for the branches that leave state Sk-1 = s', sum the corresponding products γk(s', s) βk(s) and divide by the sum Σs' β̃k-1(s'). For example, in the trellis of Fig. 4.6 we see that two branches leave the state Sk-1 = 1, one directed to state Sk = 0 and the other directed to state Sk = 2. The procedure is repeated until the calculation of β0(0).

4.2.5.3 Calculation of P(s', s, y) and L(uk|y)

With all the values of α, γ and β available we are ready to compute the joint probability Pnorm(s', s, y) = αk-1(s') γk(s', s) βk(s)/Pk. For instance, what is the value of the unnormalized P(1, 2, y)? As Fig. 4.9 shows, it is αk-1(1) γk(1, 2) βk(2). We are left just with the a posteriori LLR L(uk|y). Well, let us observe the trellis of Fig. 4.6 again: we notice that a message bit +1 causes the following state transitions: 0→2, 1→2, 2→3 and 3→3. These are the R1 transitions. The remaining four state transitions, represented by solid lines, are caused, of course, by an input bit -1. These are the R0 transitions. Therefore, the first four transitions are associated with the numerator of Eq. (4) and the remaining ones with the denominator:

Figure 4.9: The unnormalized probability P(s', s, y) as the three-factor product αk-1(s') γk(s', s) βk(s)

L(uk|y) = ln [ ΣR1 αk-1(s') γk(s', s) βk(s) / ΣR0 αk-1(s') γk(s', s) βk(s) ]
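In code, this final ratio amounts to two sums over the branch sets (a sketch; `P` is a dictionary of joint probabilities keyed by the (s', s) transition, `R1` lists the +1-bit transitions 0→2, 1→2, 2→3, 3→3 of the trellis of Fig. 4.6, and the numeric probabilities below are purely illustrative):

```python
import math

R1 = {(0, 2), (1, 2), (2, 3), (3, 3)}   # dashed (+1) branches of Fig. 4.6

def llr_from_joint(P):
    """L(uk|y) = ln( sum over R1 of P(s', s, y) / sum over R0 of P(s', s, y) )."""
    numerator = sum(p for transition, p in P.items() if transition in R1)
    denominator = sum(p for transition, p in P.items() if transition not in R1)
    return math.log(numerator / denominator)

# Illustrative joint probabilities: R1 branches carry 0.8 in total, the
# remaining (R0) branches carry 0.2, so the LLR is ln(4) > 0 and the
# decoder would decide uk = +1.
P = {(0, 2): 0.2, (1, 2): 0.2, (2, 3): 0.2, (3, 3): 0.2,
     (0, 0): 0.05, (1, 0): 0.05, (2, 1): 0.05, (3, 1): 0.05}
print(llr_from_joint(P) > 0)  # True
```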

A summary of the unnormalized expressions needed to compute the conditional LLR is presented in Fig. 4.10


Figure 4.10: Summary of expressions used in the MAP algorithm

4.3 Numerical examples


In the next examples of the BCJR algorithm and its simplifications we are going to use the same Convolutional encoder we have used up until now. Suppose that upon the transmission of a sequence x of six coded symbols over an AWGN channel the following sequence of twelve real numbers is received at the decoder:

y = 0.3 0.1 -0.5 0.2 0.8 0.5 -0.5 0.3 0.1 -0.7 1.5 -0.4


The encoder input bits, uk = ±1, are equally likely, and the trellis path associated with the coded sequence x begins and ends in the all-zero state, for which two tail bits have to be added to the message. The AWGN channel is such that Ec/N0 = 1.25 and a = 1.
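The channel reliability values used in the two worked examples (Lc = 5.0 here and Lc = 1 in the turbo example of Section 4.4) can be reproduced directly from the formula (a trivial sketch; the function name is our own):

```python
def channel_reliability(a, Ec_over_N0):
    """Lc = 4 * a * Ec/N0 (a = 1 on a non-fading channel)."""
    return 4.0 * a * Ec_over_N0

print(channel_reliability(1.0, 1.25))  # 5.0 -> the BCJR example of Section 4.3
print(channel_reliability(1.0, 0.25))  # 1.0 -> the turbo example of Section 4.4
```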


4.3.1 Example with the BCJR algorithm


The channel reliability value is then Lc = 4a·Ec/N0 = 5.0, and, since the input bits are equally likely, L(uk) = 0. At time k = 1 we have the situation depicted in the following figure. Thus, the values of γ are

Figure 5.1: Symbols received and sent at time k = 1, and associated probabilities

And therefore,

Let us see a final example of how to compute α, γ and β at time k = 3:


Figure 5.2: Values of α, γ and β after all six received symbols have been processed.


The values in the denominators came from the summations

Calculation of P(s', s, y):

For instance (see Fig. 5.3), at time k = 3 the normalized probability Pnorm(2, 3, y) = P(2, 3, y)/P3 is equal to (0.01 × 0.47 × 0.001)/0.56, as it turns out that the sum of all eight branch products at that time is P3 = 0.56.

Figure 5.3: Values used in the calculation of the unnormalized P(2, 3, y)

We are on the brink of getting the wanted values of L(uk|y) given by Eq. (4): we collect the values of the last two rows of Table 1.

Sir CRR COE.ELURU, Dept of ECE

2009-2011

59 ANALYSIS OF ITERATIVE DECODING FOR TURBO CODES USING MAXIMUM A POSTERIORI ALGORITHM

And from this we obtain


The two last LLR values result from the enforced trellis termination. In view of all the values obtained, the decoder's hard decision about the message sequence will be û = +1 +1 -1 +1 -1 -1, or 110100.
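The sign-based decision can be sketched as follows (the LLR magnitudes below are illustrative, as the exact values appear only in the figure; only their signs, which match the decisions above, matter):

```python
def hard_decide(llrs):
    """u_hat_k = sign(L(uk|y)); a +1 decision maps to bit '1', a -1 to bit '0'."""
    signs = [+1 if L > 0 else -1 for L in llrs]
    bits = ''.join('1' if s > 0 else '0' for s in signs)
    return signs, bits

signs, bits = hard_decide([2.0, 1.1, -0.6, 0.9, -1.4, -2.2])
print(signs)  # [1, 1, -1, 1, -1, -1]
print(bits)   # 110100
```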


Figure 5.4: Values of Pnorm (s, s, y) and L(uk|y), and estimate of uk.

4.4 Example of turbo decoding:


Now let us consider the following conditions. An all-zero 9-bit message sequence is applied to two equal recursive systematic Convolutional encoders, each with generator matrix G(x) = [1 (1+x²)/(1+x+x²)]. The output sequence is obtained through puncturing, so the turbo encoder's code rate is 1/2. The RSC encoder trellis is shown in the figure below. The interleaver pattern is P = [1 4 7 2 5 9 3 6 8]: the i-th element of the interleaved sequence is element Pi of the original one. Thus, for example, the seventh element of the interleaved version of the arbitrary sequence m = [2 4 3 1 8 5 9 6 0] is m3 = 3, and therefore m(P) = [2 1 9 4 8 0 3 5 6]. The AWGN channel is such that Ec/N0 = 0.25 and a = 1, so Lc = 1.
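The interleaving rule can be sketched and checked against the example sequence (a minimal sketch; `interleave` is an illustrative name):

```python
P = [1, 4, 7, 2, 5, 9, 3, 6, 8]          # interleaver pattern (1-indexed)

def interleave(m, pattern):
    """The i-th output element is element P_i of the input sequence."""
    return [m[p - 1] for p in pattern]

m = [2, 4, 3, 1, 8, 5, 9, 6, 0]
print(interleave(m, P))  # [2, 1, 9, 4, 8, 0, 3, 5, 6], i.e. m(P)
```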


The 18-element received sequence is y = 0.3 -4.0 -1.9 -2.0 -2.4 -1.3 1.2 -1.1 0.7 -2.0 -1.0 -2.1 -0.2 -1.4 -0.3 -0.1 -1.1 0.3. The initial a priori log-likelihood ratio is assumed to be L(uk) = 0.

Figure: The trellis of the RSC encoder characterized by G(x) = [1 (1+x2)/(1+x+x2)]

Therefore, we will have the following input sequences in the turbo decoder. In the first decoding iteration we get the following values of the a posteriori log-likelihood ratio L1(uk|y) and of the extrinsic information Le1(uk), both at the output of decoder 1, when L(uk) and Lc·yk1 are applied at its input (the last column, L1(uk), is the interleaved version of Le1(uk) that is passed to decoder 2):

 k   L1(uk|y)   L(uk)   Lc·yk1   Le1(uk)   L1(uk)
 1    -4.74       0       0.30    -5.04     -5.04
 2    -3.20       0      -1.90    -1.30      0.39
 3    -3.66       0      -2.40    -1.26      0.24
 4     1.59       0       1.20     0.39     -1.30
 5     1.45       0       0.70     0.75      0.75
 6    -0.74       0      -1.00     0.26     -0.53
 7     0.04       0      -0.20     0.24     -1.26
 8     0.04       0      -0.30     0.34      0.26
 9    -1.63       0      -1.10    -0.53      0.34


The values of the extrinsic information (fourth column) were computed by subtracting the second and third columns from the first (Eq. (4.2)). As we see, the soft information L1(uk|y) provided by decoder 1 would result in four errored bits, those occurring when L1(uk|y) is positive (at times k = 4, 5, 7 and 8). This same decoder passes the extrinsic information Le1(uk) to decoder 2 after convenient interleaving, that is, as L1(uk) in the last column. Note that after this half-iteration we have gained a strong confidence on the decision about the first bit, due to the large negative values of L1(u1|y) and Le1(u1): most probably u1 = -1 (well, we already know that). We are not so certain about the other bits, especially those with small absolute values of L1(uk|y) and Le1(uk).

Decoder 2 will now work with the interleaved systematic values Lc·y(P)k1, the corresponding parity values, and the new a priori LLRs L1(uk). The following table presents the decoding results according to Eq. (4.2), where the sequence L2(uk) is the deinterleaved version of Le2(uk) that will act as a more educated guess of the a priori LLR L(uk); L2(uk) will be used as one of decoder 1's inputs in the next iteration.

 k   L2(uk|y)   L1(uk)   Lc·y(P)k1   Le2(uk)   L2(uk)
 1    -3.90     -5.04       0.30       0.85      0.85
 2     0.25      0.39       1.20      -1.34      0.16
 3     0.18      0.24      -0.20       0.14      0.01
 4    -3.04     -1.30      -1.90       0.16     -1.34
 5     1.23      0.75       0.70      -0.22     -0.22
 6    -1.44     -0.53      -1.10       0.19      0.02
 7    -3.65     -1.26      -2.40       0.01      0.14
 8    -0.72      0.26      -1.00       0.02      0.00
 9     0.04      0.34      -0.30       0.00      0.19

How was L2(uk) computed? As said before, the Pi-th element of L2(uk) is the i-th element of the original sequence Le2(uk). For example, the fourth element of L2(uk) is equal to the second element of Le2(uk), -1.34, because P2 = 4.
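Deinterleaving just inverts the interleaving rule: element i of Le2(uk) is sent back to position Pi. This can be checked against the Le2(uk) and L2(uk) columns of the table above (`deinterleave` is an illustrative name):

```python
P = [1, 4, 7, 2, 5, 9, 3, 6, 8]

def deinterleave(v, pattern):
    """Send the i-th element of v back to position P_i."""
    out = [0.0] * len(v)
    for i, p in enumerate(pattern):
        out[p - 1] = v[i]
    return out

Le2 = [0.85, -1.34, 0.14, 0.16, -0.22, 0.19, 0.01, 0.02, 0.00]
print(deinterleave(Le2, P))
# [0.85, 0.16, 0.01, -1.34, -0.22, 0.02, 0.14, 0.0, 0.19] -> the L2(uk) column
```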

We can see again that we would still get four wrong bits if we made a hard decision on the sequence L2(uk|y) after it is reorganized in the correct order through deinterleaving. That sequence, -3.90 -3.04 -3.65 0.25 1.23 -0.72 0.18 0.04 -1.44, shows errors in positions 4, 5, 7 and 8. The previous procedures should be repeated iteration after iteration. For example, we would get Table 4 during five iterations, where positive values would yield wrong hard decisions. All four initially wrong bits have been corrected just after the third iteration. From then on, it is a waste of time to proceed further. In fact, the values of the a posteriori log-likelihood ratios stabilize very quickly and we gain nothing by going on with the decoding.

Table 4: Outputs of the turbo decoders during five iterations

 k | L1(uk|y), iterations 1 to 5    | L(uk|y), iterations 1 to 5
 1 | -4.74   …     …     …     …    | -3.90   …     …     …     …
 2 | -3.20   …     …     …     …    | -3.04   …     …     …     …
 3 | -3.66 -3.28 -3.35 -3.49 -3.64  | -3.65 -3.29 -3.35 -3.50 -3.65
 4 |  1.59  0.11 -0.58 -1.02 -1.35  |  0.25 -0.41 -0.87 -1.22 -1.51
 5 |  1.45  0.27 -0.34 -0.74 -1.05  |  1.23  0.13 -0.45 -0.85 -1.15
 6 | -0.74   …     …     …     …    | -0.72   …     …     …     …
 7 |  0.04   …     …     …     …    |  0.18   …     …     …     …
 8 |  0.04   …     …     …     …    |  0.04   …     …     …     …
 9 | -1.63   …     …     …     …    | -1.44   …     …     …     …


CHAPTER - 5 SIMULATION RESULTS


SIMULATION RESULTS
The figure below shows BER curves for an interleaver of length 100, with 1500 data bits and 5 iterations.

The simulation results for the different cases are as follows.


The figures below (1, 2, 3 and 4) show BER curves for 500 data bits, signal-to-noise ratios up to 4 dB, 5 iterations, and interleaver lengths of 50, 100, 250 and 500, respectively.

Here only the interleaver length is changed.

For interleaver length = 50:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 5; amount of data: 500; S/N ratio: 4 dB]

Fig. (1)


For interleaver length = 100:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 5; amount of data: 500; S/N ratio: 4 dB]

Fig. (2)

For interleaver length = 250:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 5; amount of data: 500; S/N ratio: 4 dB]

Fig. (3)


For interleaver length = 500:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 5; amount of data: 500; S/N ratio: 4 dB]

Fig. (4)

The figures below (5, 6, 7 and 8) show BER curves for 500 data bits, 5 iterations, and an interleaver length of 50, with the signal-to-noise ratio (S/N) set to 2, 3, 4 and 5 dB, respectively. Here only the signal-to-noise ratio is changed.


For signal-to-noise ratio (S/N) = 2 dB:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 4; amount of data: 500 bits; interleaver length: 50]

Fig. (5)

For signal-to-noise ratio (S/N) = 3 dB:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 4; amount of data: 500 bits; interleaver length: 50]

Fig. (6)

For signal-to-noise ratio (S/N) = 4 dB:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 4; amount of data: 500 bits; interleaver length: 50]

Fig. (7)

For signal-to-noise ratio (S/N) = 5 dB:

[BER vs Eb/No curves, "Effect of number of iterations on the performance", iterations 1 to 4; amount of data: 500 bits; interleaver length: 50]

Fig. (8)


The figures below (9, 10, 11 and 12) show BER curves for 500 data bits, an S/N ratio of 5 dB, and an interleaver length of 50, with the number of iterations taken as 2, 3, 4 and 5, respectively.

Here only the number of iterations is changed.

For number of iterations = 2:

[BER vs Eb/No curves, "Effect of number of iterations on the performance"; amount of data: 500 bits; S/N ratio: 5 dB; interleaver length: 50]

Fig. (9)


For number of iterations = 3:

[BER vs Eb/No curves, "Effect of number of iterations on the performance"; amount of data: 500 bits; S/N ratio: 5 dB; interleaver length: 50]

Fig. (10)

For number of iterations = 4:


10
-1

Effect of number of iterations on the performance iteration 1 iteration 2 iteration 3 iteration 4 iteration 5

10

-2

Amount of data: 500bits


BER

S/N Ratio= 5 db Interleaver length=50


10
-3

10

-4

1.5

2.5 Eb/No

3.5

Fig : ( 11 )


For number of iterations = 5:

[BER vs Eb/No curves, "Effect of number of iterations on the performance"; amount of data: 500 bits; S/N ratio: 5 dB; interleaver length: 50]

Fig. (12)


CHAPTER - 6 CONCLUSION & FUTURE WORK


CONCLUSION AND FUTURE WORK

From the simulation results we can notice two things:

1. The bit error probability decreases as the number of iterations goes up. This means that as the iterations increase, the reliability of the decisions taken increases.
2. The interleaver length also affects the performance. As the interleaver length increases, the bit error probability decreases.

As the BCJR algorithm is very complex, we are trying to modify it to save memory and to reduce complexity. The basic idea is as follows.

It is usually assumed that all state metric values are necessary in the maximum a posteriori (MAP) algorithm in order to compute the a posteriori probability (APP) values. This paper extends the mathematical derivation of the original MAP algorithm and shows that the log likelihood values can be computed using only partial state metric values. By processing N stages in a trellis concurrently, the proposed algorithm results in savings in the required memory size and leads to a power efficient implementation of the MAP algorithm in channel decoding. The computational complexity analysis for the proposed algorithm is presented. Especially for the N = 2 case, we show that the proposed algorithm halves the memory requirement without increasing the computational complexity.
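The memory-saving idea can be illustrated with a minimal NumPy sketch (Python rather than the MATLAB used in this appendix, and a hypothetical 2-state trellis whose gamma matrices are random placeholders, not the decoder's actual branch metrics). Storing the forward metrics alpha only at every other stage and recomputing the skipped stage on demand roughly halves alpha storage, which is the N = 2 trade-off described above:

```python
import numpy as np

# Toy 2-state trellis: gamma[k] holds the (normalized) branch metrics
# for moving between states at stage k. Values are random placeholders.
np.random.seed(0)
K, S = 8, 2                      # 8 trellis stages, 2 states
gamma = np.random.rand(K, S, S)

# Standard BCJR: store ALL forward metrics alpha[0..K] -> (K+1)*S values.
alpha = np.zeros((K + 1, S))
alpha[0] = [1.0, 0.0]            # decoder starts in the all-zero state
for k in range(K):
    alpha[k + 1] = alpha[k] @ gamma[k]
    alpha[k + 1] /= alpha[k + 1].sum()   # normalize to avoid underflow

# N = 2 variant: keep alpha only at even stages; an odd-stage value is
# rebuilt on the fly from the previous even stage, halving alpha storage.
alpha_even = alpha[::2]
recomputed = alpha_even[1] @ gamma[2]    # rebuild alpha[3] from alpha[2]
recomputed /= recomputed.sum()
assert np.allclose(recomputed, alpha[3])
print(alpha.shape, alpha_even.shape)     # (9, 2) (5, 2)
```

In a real decoder the recomputation would happen during the backward pass, trading one extra alpha update per skipped stage for half of the state-metric memory.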


CHAPTER - 7 REFERENCES


REFERENCES

[1] C. Berrou and A. Glavieux, "Near Optimum Error Correcting Coding and Decoding: Turbo Codes," IEEE Trans. Comm., pp. 1261-1271, Oct. 1996.
[2] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Serial Concatenation of Interleaved Codes: Performance Analysis, Design and Iterative Decoding," TDA Progress Rep. 42-126, Apr.-June 1996, Jet Propulsion Lab., Pasadena, CA, pp. 1-26, Aug. 15, 1996.
[3] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. on Inf. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[4] P. Robertson, E. Villebrun, and P. Hoeher, "A Comparison of Optimal and Sub-Optimal MAP Decoding Algorithms Operating in the Log-Domain," Proc. Int. Conf. on Communications, pp. 1009-1013, June 1995.
[5] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," TDA Progress Report 42-124, pp. 63-87, Feb. 15, 1996.
[6] J. Hagenauer, "The turbo principle: Tutorial introduction and state of the art," Proc. 1st Int. Symp. Turbo Codes, pp. 1-12, 1997.
[7] S. Brink, "Designing Iterative Decoding Schemes with the Extrinsic Information Transfer Chart," AEU Int. J. Electron. Commun., vol. 54, no. 6, pp. 389-398, 2000.
[8] S. Brink, J. Speidel, and R. Yan, "Iterative demapping and decoding for multilevel modulation," Proc. Globecom '98, vol. 1, pp. 579-584, 1998.
[9] G. D. Forney, Concatenated Codes. Cambridge, MA: MIT Press, 1966.
[10] T. Mittelholzer, X. Lin, and J. Massey, "Multilevel Turbo Coding for M-ary Quadrature Amplitude Modulation," Int. Symp. on Turbo Codes, Brest, Sept. 1997.
[11] X. Li and J. A. Ritcey, "Bit-Interleaved Coded Modulation with Iterative Decoding," Proc. ICC 1999, pp. 858-863.
[12] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York: Wiley, 1991.
[13] W. S. Wong and R. W. Brockett, "Systems with finite communication bandwidth constraints II: stabilization with limited information feedback," IEEE Trans. Automatic Control, vol. 44, no. 5, pp. 1049-1053, May 1999.
[14] H. El Gamal and A. R. Hammons, "Analyzing the turbo decoder using the Gaussian approximation," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 671-686, Feb. 2001.
[15] J. W. Lee and R. E. Blahut, "Generalized EXIT chart and BER analysis of finite-length turbo codes," Proc. GlobeCom 2003, San Francisco, Dec. 2003.
[16] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low-density parity-check codes," Electron. Lett., vol. 32, pp. 1645-1646, Aug. 1996.
[17] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error correcting coding and decoding: Turbo-codes," Proc. Int. Conf. Commun., Geneva, Switzerland, pp. 1064-1070, May 1993.
[18] T. Richardson, "The geometry of turbo-decoding dynamics," IEEE Trans. Inform. Theory, vol. 46, no. 1, pp. 1824-1834, Oct. 2001.
[19] W. Ryan, "A Turbo Code Tutorial," available at http://www.ece.arizona.edu.ryan.
[20] T. A. Summers and S. G. Wilson, "SNR Mismatch and Online Estimation in Turbo Decoding," IEEE Trans. on Comm., vol. 46, no. 4, pp. 421-424, April 1998.
[21] M. Fu, "Stochastic analysis of turbo decoding," IEEE Trans. Info. Theory, vol. 54, no. 1, pp. 81-100, 2005. Also in Int. Conf. Comm., Seoul, May 2005.
[22] R. W. Brockett and D. Liberzon, "Quantized feedback stabilization of linear systems," IEEE Trans. Automatic Control, vol. 45, no. 7, pp. 1279-1289, July 2000.
[23] M. Srinivasa Rao and A. Srinu, "Analysis of Iterative Decoding for Turbo Codes Using Maximum A Posteriori Algorithm," Int. Conf. on Industrial Applications of Soft Computing Techniques (IIASCT-2011), Bhubaneswar.
[24] M. Srinivasa Rao and A. Srinu, "Analysis of Iterative Decoding for Turbo Codes Using Maximum A Posteriori Algorithm," National Conf. on Recent Trends and Advances in Nano Technology, Department of ECE in collaboration with IEEE Hyderabad Section.
[25] M. Srinivasa Rao, P. Rajesh Kumar, K. Anitha, and Addanki Srinu, "Modified Maximum A Posteriori Algorithm for Iterative Decoding of Turbo Codes," IJEST Journal, Manuscript Id: IJEST11-03-08-174.


CHAPTER - 8 APPENDIX


APPENDIX

SOURCE CODE
main.m

% Turbo Decoding

clc

clear all

% Block size

block_size =100;

% Convolutional code polynomial

code_polynomial = [ 1 1 1; 1 0 1 ];

[n,K]=size(code_polynomial);

m=K-1;

% Code rate for punctured code

code_rate = 1/2;

% Number of iterations


no_of_iterations = 5;

% Number of blocks in error for termination

block_error_limit = 15;

% signal-to-noise-ratio in db

SNRdb = [1 2 3 4];

for snrdb=1:length(SNRdb)

snr = 10^(SNRdb(snrdb)/10);

fprintf('Signal-to-Noise-ratio = %d\n',SNRdb(snrdb))

% channel reliability value and variance of AWGN channel

channel_reliability_value = 4*snr*code_rate;
noise_var = 1/(2*code_rate*snr);

%initializing the error counters

block_number = 0;

block_errors(1,1:no_of_iterations) = zeros(1, no_of_iterations);

bit_errors(1,1:no_of_iterations) = zeros(1, no_of_iterations);

total_errors=0;


while block_errors(1, no_of_iterations)< block_error_limit

block_number=block_number+1;

% Transmitter end
%-----------------------------------
% generating random data

Data = round(rand(1, block_size-m));

% random scrambler

[dummy, Alpha] = sort(rand(1,block_size));

% turbo-encoder output

turbo_encoded = turbo_encoder( Data, code_polynomial, Alpha) ;

% Receiver end
%-------------------------------------------------
% AWGN + turbo-encoder output

received_signal = turbo_encoded+sqrt(noise_var)*randn(1,(block_size)*2);

% demultiplexing the signals

demul_output = demultiplexer(received_signal, Alpha );

%scaled received signal


Datar= demul_output *channel_reliability_value/2;

% Turbo decoder
%-----------------------------------------------------

extrinsic = zeros(1, block_size);

apriori = zeros(1, block_size);

for iteration = 1: no_of_iterations

% First decoder

apriori(Alpha) = extrinsic;

LLR = BCJL1(Datar(1,:), code_polynomial, apriori);

extrinsic = LLR - 2*Datar(1,1:2:2*(block_size)) - apriori;

% Second decoder

apriori = extrinsic(Alpha);

LLR = BCJL2(Datar(2,:), code_polynomial, apriori);

extrinsic = LLR - 2*Datar(2,1:2:2*(block_size)) - apriori;

% Hard decision of information bits

Datahat(Alpha) = (sign(LLR)+1)/2;


% Number of bit errors

bit_errors(iteration) = length(find(Datahat(1:block_size-m)~=Data));

% Number of block errors

if bit_errors(iteration )>0

block_errors(iteration) = block_errors(iteration) +1;

end %if

end %iterations

%Total bit errors

total_errors=total_errors+ bit_errors;

% bit error rate

if block_errors(no_of_iterations)==block_error_limit

BER(snrdb,1:no_of_iterations) = total_errors(1:no_of_iterations)/ ...
    block_number/(block_size-m);

end %if

end %while


end %snrdb

% prints

semilogy(BER), grid

title('Effect of number of iterations on the performance')

xlabel('Eb/No')

ylabel('BER')

legend('iteration 1','iteration 2','iteration 3','iteration 4','iteration 5')

turbo_encoder.m

function output = turbo_encoder( Data, code_g, Alpha)

% Turbo encoder

[n,K] = size(code_g);

m = K - 1;

block_s = length(Data);

y=zeros(3,block_s+m);

% encoder 1


%----------------------------------
state = zeros(m,1);

for i = 1: block_s+m

if i <= block_s

d = Data(1,i);

elseif i > block_s

d = rem( code_g(1,2:K)*state, 2 );

end

a = rem( code_g(1,:)*[d ;state], 2 );

v = code_g(2,1)*a;

for j = 2:K

v = xor(v, code_g(2,j)*state(j-1));

end;

state = [a;state(1:m-1)];

y(1,i)=d;

y(2,i)=v;


end

% encoder 2
%-------------------------

% interleaving the data

for i = 1: block_s+m

ytilde(1,i) = y(1,Alpha(i));

end

state = zeros(m,1);

for i = 1: block_s+m

d = ytilde(1,i);

a = rem( code_g(1,:)*[d ;state], 2 );

v = code_g(2,1)*a;

for j = 2:K

v = xor(v, code_g(2,j)*state(j-1));

end; %j

state = [a; state(1:m-1)];


y(3,i)=v;

end %i

% inserting data, odd parity and even parity

for i=1: block_s+m

output(1,n*i-1) = 2*y(1,i)-1;

if rem(i,2)

output(1,n*i) = 2*y(2,i)-1;

else

output(1,n*i) = 2*y(3,i)-1;

end %if

end %i

demultiplexer.m

function output = demultiplexer(Data, Alpha)

% demultiplexing the received signal

block_s = fix(length(Data)/2);

output=zeros(2,block_s);


for i = 1: block_s

Dataf(i) = Data(2*i-1);

if rem(i,2)>0

output(1,2*i) = Data(2*i);

else

output(2,2*i) = Data(2*i);

end

end

for i = 1: block_s

output(1,2*i-1) = Dataf(i);

output(2,2*i-1) = Dataf(Alpha(i));

end

cnc_trellis.m

function [nxt_o, nxt_s, lst_o, lst_s] = cnc_trellis(code_g)


% code trellis for RSC

% code properties

[n,K] = size(code_g);
m = K - 1;
no_of_states = 2^m;

for s = 1:no_of_states

    dec_cnt_s = s-1;
    i = 1;

    % decimal to binary state
    while dec_cnt_s >= 0 & i <= m
        bin_cnt_s(i) = rem(dec_cnt_s, 2);
        dec_cnt_s = (dec_cnt_s - bin_cnt_s(i))/2;
        i = i+1;
    end
    bin_cnt_s = bin_cnt_s(m:-1:1);

    % next state when input is 0
    d = 0;
    a = rem( code_g(1,:)*[0 bin_cnt_s]', 2 );
    v = code_g(2,1)*a;
    for j = 1:K-1
        v = xor(v, code_g(2,j+1)*bin_cnt_s(j));
    end
    nstate0 = [a bin_cnt_s(1:m-1)];
    y_0 = [0 v];

    % next state when input is 1
    d = 1;
    a = rem( code_g(1,:)*[1 bin_cnt_s]', 2 );
    v = code_g(2,1)*a;
    for j = 1:K-1
        v = xor(v, code_g(2,j+1)*bin_cnt_s(j));
    end
    nstate1 = [a bin_cnt_s(1:m-1)];
    y_1 = [1 v];

    % next output when input is 0 or 1
    nxt_o(s,:) = [y_0 y_1];

    % binary to decimal state
    d = 2.^(m-1:-1:0);
    dstate0 = nstate0*d' + 1;
    dstate1 = nstate1*d' + 1;

    % next state when input is 0 or 1
    nxt_s(s,:) = [dstate0 dstate1];

    % finding the possible previous states from the trellis
    lst_s(nxt_s(s,1), 1) = s;
    lst_s(nxt_s(s,2), 2) = s;
    lst_o(nxt_s(s,1), 1:4) = nxt_o(s,1:4);
    lst_o(nxt_s(s,2), 1:4) = nxt_o(s,1:4);

end


BCJL1.m

function L = BCJL1(Datar, code_g, apriori)

% log-BCJL (LOG-MAP algorithm) for decoder 1

% states, memory, constraint length and block size

block_s = fix(length(Datar)/2);

[n,K] = size(code_g);

m = K - 1;

no_of_states = 2^m;

infty = 1e10;

zero=1e-300;

% forward recursion

alpha(1,1) = 0;

alpha(1,2:no_of_states) = -infty*ones(1,no_of_states-1);

% code-trellis

[nxt_o, nxt_s, lst_o, lst_s] = cnc_trellis(code_g);


nxt_o = 2*nxt_o-1;

lst_o = 2*lst_o-1;

for i = 1:block_s

for cnt_s = 1:no_of_states

branch = -infty*ones(1,no_of_states);

branch(lst_s(cnt_s,1)) = -Datar(2*i-1) + Datar(2*i)* ...
    lst_o(cnt_s,2) - log(1+exp(apriori(i)));

branch(lst_s(cnt_s,2)) = Datar(2*i-1) + Datar(2*i)* ...
    lst_o(cnt_s,4) + apriori(i) - log(1+exp(apriori(i)));

if sum(exp(branch+alpha(i,:))) > zero
    alpha(i+1,cnt_s) = log(sum(exp(branch+alpha(i,:))));
else
    alpha(i+1,cnt_s) = -infty;
end

end

alpha_max(i+1) = max(alpha(i+1,:));

alpha(i+1,:) = alpha(i+1,:) - alpha_max(i+1);

end


% backward recursion

beta(block_s,1)=0;

beta(block_s,2:no_of_states) = -infty*ones(1,no_of_states-1);

for i = block_s-1:-1:1

for cnt_s = 1:no_of_states

branch = -infty*ones(1,no_of_states);

branch(nxt_s(cnt_s,1)) = -Datar(2*i+1) + Datar(2*i+2)* ...
    nxt_o(cnt_s,2) - log(1+exp(apriori(i+1)));

branch(nxt_s(cnt_s,2)) = Datar(2*i+1) + Datar(2*i+2)* ...
    nxt_o(cnt_s,4) + apriori(i+1) - log(1+exp(apriori(i+1)));

if(sum(exp(branch+beta(i+1,:)))>zero)

beta(i,cnt_s) = log(sum(exp(branch+beta(i+1,:))));

else

beta(i,cnt_s)=-infty;

end

end


beta(i,:) = beta(i,:) - alpha_max(i+1);

end

for k = 1:block_s

for cnt_s = 1:no_of_states

branch0 = -Datar(2*k-1)+Datar(2*k)*lst_o(cnt_s,2)-log(1+exp(apriori(k)));

branch1 = Datar(2*k-1) + Datar(2*k)*lst_o(cnt_s,4) + apriori(k) - ...
    log(1+exp(apriori(k)));

den(cnt_s) = exp( alpha(k,lst_s(cnt_s,1))+branch0+ beta(k,cnt_s));

num(cnt_s) = exp( alpha(k,lst_s(cnt_s,2))+branch1+ beta(k,cnt_s));

end

L(k) = log(sum(num)) - log(sum(den));

end

BCJL2.m

function L = BCJL2(Datar, code_g, apriori)

% log-BCJL (LOG-MAP algorithm) for decoder 2

% states, memory, constraint length and block size


block_s = fix(length(Datar)/2);

[n,K] = size(code_g);

m = K - 1;

no_of_states = 2^m;

infty = 1e10;

zero=1e-300;

% forward recursion

alpha(1,1) = 0;

alpha(1,2:no_of_states) = -infty*ones(1,no_of_states-1);

% code-trellis

[nxt_o, nxt_s, lst_o, lst_s] = cnc_trellis(code_g);

nxt_o = 2*nxt_o-1;

lst_o = 2*lst_o-1;

for i = 1:block_s

for cnt_s = 1:no_of_states

branch = -infty*ones(1,no_of_states);


branch(lst_s(cnt_s,1)) = -Datar(2*i-1) + Datar(2*i)* ...
    lst_o(cnt_s,2) - log(1+exp(apriori(i)));

branch(lst_s(cnt_s,2)) = Datar(2*i-1) + Datar(2*i)* ...
    lst_o(cnt_s,4) + apriori(i) - log(1+exp(apriori(i)));

if(sum(exp(branch+alpha(i,:)))>zero)

alpha(i+1,cnt_s) = log( sum( exp( branch+alpha(i,:))));

else

alpha(i+1,cnt_s) =-1*infty;

end

end

alpha_max(i+1) = max(alpha(i+1,:));

alpha(i+1,:) = alpha(i+1,:) - alpha_max(i+1);

end

% backward recursion

beta(block_s,1:no_of_states)=0;

for i = block_s-1:-1:1


for cnt_s = 1:no_of_states

branch = -infty*ones(1,no_of_states);

branch(nxt_s(cnt_s,1)) = -Datar(2*i+1) + Datar(2*i+2)* ...
    nxt_o(cnt_s,2) - log(1+exp(apriori(i+1)));

branch(nxt_s(cnt_s,2)) = Datar(2*i+1) + Datar(2*i+2)* ...
    nxt_o(cnt_s,4) + apriori(i+1) - log(1+exp(apriori(i+1)));

if(sum(exp(branch+beta(i+1,:)))>zero)

beta(i,cnt_s) = log(sum(exp(branch+beta(i+1,:))));

else

beta(i,cnt_s)=-infty;

end

end

beta(i,:) = beta(i,:) - alpha_max(i+1);

end

for k = 1:block_s

for cnt_s = 1:no_of_states


branch0 = -Datar(2*k-1)+Datar(2*k)*lst_o(cnt_s,2)-log(1+exp(apriori(k)));

branch1 = Datar(2*k-1) + Datar(2*k)*lst_o(cnt_s,4) + apriori(k) - ...
    log(1+exp(apriori(k)));

den(cnt_s) = exp( alpha(k,lst_s(cnt_s,1))+branch0+ beta(k,cnt_s));

num(cnt_s) = exp( alpha(k,lst_s(cnt_s,2))+branch1+ beta(k,cnt_s));

end

L(k) = log(sum(num)) - log(sum(den));

end
