20 Convolutional Codes 3

1) The document discusses soft-decision decoding, which divides the decision space into more than two regions, improving decoding sensitivity and performance. 2) It derives the transfer function of a convolutional code, from which properties such as the free distance and the error-rate performance can be calculated. 3) Interleaving is described as a way to make bursty error channels appear random to the decoder, and concatenated codes are summarized as pairing an inner code (e.g. a convolutional code) with an outer code (e.g. a Reed-Solomon code).


Lecture 20

Decision Metrics
Transfer Function of CC
Interleaving
Concatenated Codes

Hard Decision Decoding


Viterbi Decoding of (n,1,m) code on BSC
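The slides reference Viterbi decoding of an (n,1,m) code on a BSC without showing details. As a concrete sketch, the following uses an illustrative rate-1/2, m = 2 encoder with generators 111 and 101 (an assumption; the slides do not fix the generators). On a BSC, picking the trellis path at minimum Hamming distance from the received sequence is maximum-likelihood decoding:

```python
# Hard-decision Viterbi decoding of a rate-1/2 (n=2, k=1, m=2)
# convolutional code over a BSC. Generators (111, 101) are an
# illustrative assumption, not taken from the slides.

def conv_encode(bits, g=(0b111, 0b101), m=2):
    """Encode a bit list; m zero tail bits are appended to flush the encoder."""
    state = 0
    out = []
    for b in bits + [0] * m:
        reg = (b << m) | state                 # newest input bit + memory
        for gen in g:
            out.append(bin(reg & gen).count("1") & 1)  # parity of tapped bits
        state = reg >> 1                       # shift the register
    return out

def viterbi_decode(rx, g=(0b111, 0b101), m=2):
    """Minimum-Hamming-distance (ML on a BSC) decoding via the Viterbi algorithm."""
    n_states = 1 << m
    INF = float("inf")
    metric = [0] + [INF] * (n_states - 1)      # encoder starts in the all-zero state
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(rx), len(g)):
        r = rx[t:t + len(g)]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                   # hypothesised input bit
                reg = (b << m) | s
                branch = [bin(reg & gen).count("1") & 1 for gen in g]
                dist = sum(x != y for x, y in zip(branch, r))
                ns = reg >> 1
                if metric[s] + dist < new_metric[ns]:
                    new_metric[ns] = metric[s] + dist
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best][:-m]                    # drop the flushing tail bits
```

Since this code has free distance 5, any single channel error (and in fact any two) is corrected by the minimum-distance search.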

Soft Decision Decoding


When the two signals used to represent a 0 and a 1 bit are corrupted by noise, the decision process becomes difficult: each signal's distribution spreads out, and energy from one signal flows into the other's decision region. Figures (b) and (c) show two cases with different S/N.

a) Two signals representing a 1 and a 0 bit. b) At S/N = 2, noise spreads the signals so that energy spills from one decision region into the other. c) At S/N = 4, the noise variance is smaller and the spread is narrower.

If the added noise is small (i.e. its variance is small), the spread is narrower, as seen in (c) compared with (b). Intuitively, we are less likely to make decoding errors when the S/N is high, i.e. when the noise variance is small.

Making a hard decision means choosing a single decision threshold, usually midway between the two signals: if the received voltage is positive, the bit is decoded as a 1, otherwise as a 0. In very simple terms this is also what maximum-likelihood decoding means for two equally likely signals.

We can quantify the error made with this decision method. The probability that a 0 is decoded, given that a 1 was sent, is the shaded tail area seen in the figure above. For antipodal signals this area gives the familiar bit error rate, P_b = Q(sqrt(2 Eb/N0)). We see that this equation assumes a hard decision is made.
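The tail area can be evaluated numerically. A minimal sketch, assuming antipodal signals at +/-1 V and Gaussian noise of standard deviation sigma, so that the slide's S/N is taken here as the amplitude-to-sigma ratio (an assumption):

```python
# Hard-decision error probability as the Gaussian tail area past the
# 0 V threshold. Signal amplitudes of +/-1 V and the noise level are
# illustrative assumptions.
import math

def Q(x):
    """Gaussian tail probability P(X > x) for X ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def hard_decision_ber(amplitude=1.0, sigma=0.5):
    """P(decide 0 | 1 sent): the noise tail that crosses the 0 V threshold."""
    return Q(amplitude / sigma)

# sigma = 0.5 corresponds to an amplitude S/N of 2
print(hard_decision_ber(1.0, 0.5))   # Q(2), about 0.0228
```

Note how quickly the error rate falls as sigma shrinks: halving sigma to 0.25 gives Q(4), roughly 3e-5.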

But now, what if instead of having just two decision regions, we divided the space into four regions, as shown below?

The probability that the decision is correct can be computed from the area under the Gaussian curve:
Region 1: received voltage greater than 0.8 V
Region 2: received voltage greater than 0 V but less than 0.8 V
Region 3: received voltage greater than -0.8 V but less than 0 V
Region 4: received voltage less than -0.8 V

Now we ask: if the received voltage falls in region 3, what is the probability that a 1 was sent? With hard decision the answer is easy; it can be calculated from the equation above. How do we calculate similar probabilities for a multi-region space? We use the Q function, tabulated in many books, which gives the area under the Gaussian tail beyond a given distance from the mean, measured in standard deviations. So for a unit-variance signal with mean 2, Q(2) gives the probability of observing a value of 4 or greater.

This process of subdividing the decision space into more than two regions, as in this 4-level example, is called soft decision, and the resulting probabilities are called transition probabilities. Each received signal now yields one of four voltage levels with which to make a decision. In signals, as in real life, more information means better decisions: soft decision sharpens the decoding metrics and improves performance by as much as 3 dB in the case of 8-level soft decision.
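The four transition probabilities for this example can be computed from the Q function. A sketch assuming a 1 bit sent as +1 V, the slide's thresholds at -0.8, 0, and +0.8 V, and a noise standard deviation of 0.5 (the signal amplitude and noise level are assumptions):

```python
# Transition probabilities for the 4-level soft-decision example:
# the chance that the received voltage lands in each region, given
# that a 1 (+1 V, an assumed amplitude) was sent.
import math

def Q(x):
    """Standard Gaussian tail probability P(X > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def region_probs(mean=1.0, sigma=0.5, thresholds=(-0.8, 0.0, 0.8)):
    """P(received voltage in each region), region 1 (best for a 1) first."""
    t1, t2, t3 = thresholds
    p1 = Q((t3 - mean) / sigma)               # v > 0.8
    p2 = Q((t2 - mean) / sigma) - p1          # 0 < v <= 0.8
    p3 = Q((t1 - mean) / sigma) - p1 - p2     # -0.8 < v <= 0
    p4 = 1.0 - p1 - p2 - p3                   # v <= -0.8
    return p1, p2, p3, p4

print(region_probs())   # four transition probabilities, summing to 1
```

As expected, most of the probability mass falls in region 1 when a 1 is sent, and only a tiny fraction in region 4; the decoder can weight its metrics accordingly.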

Transfer Function of a Convolutional Code


The performance of convolutional codes can be quantified through analytical means or by computer simulation. The analytical approach is based on the transfer function of the convolutional code which is obtained from the state diagram. With the transfer function, code properties such as distance properties and the error rate performance can be easily calculated. To obtain the transfer function, the following rules are applied:

1. Break the all-zero (initial) state of the state diagram into a start state and an end state; this gives the modified state diagram.
2. For every branch of the modified state diagram, assign the symbol D with an exponent equal to the Hamming weight of the output bits on that branch.
3. For every branch of the modified state diagram, assign the symbol J.
4. Assign the symbol N to a branch if the transition is caused by an input bit 1.

Example
Convolutional encoder with k=1, n=2, r=1/2, m=2

State Diagram of this coder

Modified State Diagram


Sa is the start state and Se is the end state.

Nodal equations are obtained for all the states except the start state. For this encoder (taking the standard generators g1 = 111, g2 = 101, consistent with the free distance of 5 found below) these are:

Xc = J N D^2 Xa + J N Xb
Xb = J D Xc + J D Xd
Xd = J N D Xc + J N D Xd
Xe = J D^2 Xb

Here Xa, ..., Xe are dummy variables associated with the states, with Xa the start state and Xe the end state.

By substituting and rearranging, the transfer function T(D, N, J) = Xe / Xa is obtained.

Closed form:

T(D, N, J) = D^5 N J^3 / (1 - D N J (1 + J))

Expanded polynomial form:

T(D, N, J) = D^5 N J^3 + D^6 N^2 J^4 (1 + J) + D^7 N^3 J^5 (1 + J)^2 + ...
Free distance of Convolutional codes


Since a convolutional encoder generates codewords of various lengths (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords. Since the code is linear, the minimum distance of the code is the minimum distance between each codeword and the all-zero codeword. This is the minimum weight over all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path. It is called the minimum free distance, or simply the free distance of the code, denoted d_free or d_f.

The minimum free distance determines the ability of the convolutional code to estimate the best decoded bit sequence: as d_free increases, so does the performance of the code. From the transfer function, the minimum free distance is identified as the lowest exponent of D; for the transfer function considered above, d_free = 5. If N and J are set to 1, the coefficient of D^i gives the number of paths through the trellis with weight i. More information about a codeword is obtained from the exponents of N and J: the exponent of N indicates the number of 1s in the input sequence (the data weight), and the exponent of J indicates the length of the path before it merges with the all-zero path for the first time.
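These claims can be checked directly on the trellis by enumerating all paths that diverge from the all-zero state and counting them by total Hamming weight when they first remerge. The generators 111 and 101 are an illustrative assumption consistent with the d_free = 5 quoted above:

```python
# Enumerate first-remerging trellis paths of a rate-1/2, m=2 code and
# count them by Hamming weight. The smallest weight with a nonzero
# count is d_free; the counts match the D-coefficients of the transfer
# function with N = J = 1. Generators (111, 101) are an assumption.
from collections import Counter

def path_weight_counts(max_weight=8, g=(0b111, 0b101), m=2):
    """Counter mapping path weight -> number of trellis paths of that weight."""
    def step(state, bit):
        reg = (bit << m) | state
        out_weight = sum(bin(reg & gen).count("1") & 1 for gen in g)
        return reg >> 1, out_weight
    counts = Counter()
    def dfs(state, weight):
        for bit in (0, 1):
            ns, w = step(state, bit)
            if weight + w > max_weight:
                continue                  # prune: every cycle adds weight
            if ns == 0:
                counts[weight + w] += 1   # path remerges with the all-zero path
            else:
                dfs(ns, weight + w)
    s, w = step(0, 1)                     # the diverging branch (input bit 1)
    dfs(s, w)
    return counts

counts = path_weight_counts()
print(min(counts))                        # d_free = 5
print(counts[5], counts[6], counts[7])    # 1, 2, 4 paths of weights 5, 6, 7
```

The counts 1, 2, 4 agree with expanding D^5 / (1 - 2D) = D^5 + 2 D^6 + 4 D^7 + ..., i.e. the transfer function with N = J = 1.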

Free distance

Figure: the trellis path that diverges from and remerges with the all-zero path with minimum weight; summing the Hamming weights of its branches gives d_f = 5.

Interleaving
Convolutional codes are suited to memoryless channels with random error events. Some errors, however, are bursty in nature: there is statistical dependence among successive error events (time correlation) due to channel memory, as with errors in multipath fading channels in wireless communications, or errors due to switching noise.

Interleaving makes the channel look like a memoryless channel at the decoder.

Interleaving
Interleaving is done by spreading the coded symbols in time before transmission; the reverse is done at the receiver by deinterleaving the received sequence. Interleaving makes bursty errors look random, so convolutional codes can be used. Types of interleaving:
Block interleaving
Convolutional (cross) interleaving

Block diagram of system employing interleaving for burst error channel.

A block interleaver for coded data.
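A minimal sketch of such a block interleaver: symbols are written row by row into an array and read out column by column, so a burst of consecutive channel errors is spread across different rows after deinterleaving. The 3 x 4 dimensions are arbitrary, for illustration:

```python
# Block interleaver: write symbols row-wise into an n_rows x n_cols
# array, read column-wise. A burst of up to n_cols consecutive channel
# errors lands in distinct rows after deinterleaving, so each row (and
# hence each codeword span) sees at most one error.

def interleave(symbols, n_rows, n_cols):
    assert len(symbols) == n_rows * n_cols
    return [symbols[r * n_cols + c] for c in range(n_cols) for r in range(n_rows)]

def deinterleave(symbols, n_rows, n_cols):
    assert len(symbols) == n_rows * n_cols
    return [symbols[c * n_rows + r] for r in range(n_rows) for c in range(n_cols)]

data = list(range(12))                 # 3 rows x 4 columns of coded symbols
tx = interleave(data, 3, 4)
assert deinterleave(tx, 3, 4) == data  # round trip recovers the sequence

# A burst of 3 consecutive errors on the channel...
rx = tx[:]
for i in (4, 5, 6):
    rx[i] = "X"
# ...is spread out in time after deinterleaving: no two corrupted
# symbols are adjacent.
print(deinterleave(rx, 3, 4))
```

A larger interleaver spreads longer bursts, at the cost of added latency: the receiver must buffer a full n_rows x n_cols block before deinterleaving.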

Concatenated codes
A concatenated code uses two levels of coding: an inner code and an outer code (of higher rate).
Popular concatenated codes: Convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code

The purpose is to reduce the overall complexity while achieving the required error performance.
Input data -> Outer encoder -> Interleaver -> Inner encoder -> Modulate -> Channel -> Demodulate -> Inner decoder -> Deinterleaver -> Outer decoder -> Output data

Thank you for your patient listening.
