Convolution Coding
SILIGURI INSTITUTE OF TECHNOLOGY
Introduction
In this age of information, there is an increasing need not only for speed but also for accuracy in the storage, retrieval and transmission of data. The channels over which messages are transmitted are often imperfect. Machines do make errors, and their non-man-made mistakes can turn otherwise flawless programming into worthless, even dangerous, trash. Just as architects design buildings that will stand even through an earthquake, their computer counterparts have come up with sophisticated techniques capable of counteracting digital manifestations of Murphy's Law (if anything can go wrong, it will). Error Correcting Codes are a kind of safety net: the mathematical insurance against the vagaries of an imperfect digital world.

Error Correcting Codes, as the name suggests, are used for correcting errors when messages are transmitted over a noisy channel or stored data is retrieved. The physical medium through which the messages are transmitted is called a channel (e.g. a telephone line, a satellite link, a wireless channel used for mobile communications, etc.). Different kinds of channels are prone to different kinds of noise, which corrupt the data being transmitted. The noise could be caused by lightning, human errors, equipment malfunction, voltage surges, etc. Because these error correcting codes try to overcome the detrimental effects of noise in the channel, the encoding procedure is also called Channel Coding.

Error control codes are also used for accurate transfer of information from one place to another, for example when storing data on and reading it from a compact disc (CD). In this case, the error could be due to a scratch on the surface of the CD, and the error correcting coding scheme will try to recover the original data from the corrupted one.

The basic idea behind error correcting codes is to add some redundancy, in the form of extra symbols, to a message prior to its transmission through a noisy channel. This redundancy is added in a controlled manner.
The encoded message when transmitted might be corrupted by noise in the channel.
Fig.1.1: Block Diagram (and the principle) of a Digital Communication System.

At the receiver, the original message can be recovered from the corrupted one if the number of errors is within the limit for which the code has been designed. The block diagram of a digital communication system is illustrated in Fig 1.1. Note that the most important block in the figure is that of noise, without which there would be no need for the channel encoder.

Objectives of error control coding
The objectives of a good error control coding scheme are:
(1) error correcting capability in terms of the number of errors that it can rectify,
(2) fast and efficient encoding of the message,
(3) fast and efficient decoding of the received message,
(4) maximum transfer of information bits per unit time (i.e., fewer overheads in terms of redundancy).
1.1. Applications
Error control coding is used in satellite communication, data transmission, data storage, mobile communication, file transfer, and digital audio/video transmission.
1.2. Benefits of error control coding
Make the error rate acceptable by lowering the frequency of error events.
Reduce the occurrence of undetected errors. This was one of the first uses of error-control coding. Today's error detection codes are so effective that the occurrence of undetected errors is, for all practical purposes, eliminated.
Reduce the cost of communications systems. Transmitter power is expensive, especially on satellite transponders. Coding can reduce the satellite's power needs because messages received at close to the thermal noise level can still be recovered correctly.
Eliminate interference. As the electromagnetic spectrum becomes more crowded with man-made signals, error-control coding will mitigate the effects of unintentional interference.
1.3. Limits of error control coding
For strictly power-limited (unlimited bandwidth) channels, Shannon's lower bound on Eb/N0 is ln 2 = 0.69, or -1.6 dB. In other words, we must maintain an Eb/N0 of at least -1.6 dB to ensure reliable communications, no matter how powerful an error-control code we use. For bandwidth-limited channels with Gaussian noise, Shannon's capacity formula can be written as C = W log2(1 + S/N), where C is the capacity in bits per second, W is the bandwidth in hertz and S/N is the signal-to-noise ratio. If the required data rate exceeds this capacity, we must resort to other measures, like increasing transmitter power. Another limitation to the performance of error-control codes is the modulation technique of the communication system. Coding must go hand-in-hand with the choice of modulation technique for the channel. Even the most powerful codes cannot overcome the consequences of a poor modulation choice.
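As a quick numerical check of these limits, the following sketch (illustrative values only, using Python's standard math library) evaluates the power-limited Shannon bound and the bandwidth-limited capacity formula:

```python
import math

# Power-limited bound: Eb/N0 must exceed ln 2 ~ 0.69, i.e. about -1.6 dB
shannon_limit_db = 10 * math.log10(math.log(2))

def shannon_capacity(bandwidth_hz, snr_linear):
    """Bandwidth-limited capacity C = W * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative example (assumed figures): a 3 kHz channel at 30 dB SNR
capacity = shannon_capacity(3000, 10 ** (30 / 10))
```

No code, however powerful, can push the required Eb/N0 below shannon_limit_db, and no signalling scheme can carry more than capacity bits per second through that channel.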
CHAPTER 2
(Figure: classification of error control techniques: Hybrid ARQ, error concealment, convolutional codes, RS codes, Hamming codes.)
Fig.2.2. A digital communication system using FEC

In the FEC technique, the discrete source generates information in the form of binary symbols. The channel encoder accepts these message bits and adds redundancy bits. Thus the encoded data is produced at a higher bit rate.
(Figure: an ARQ system, with the encoded signal on the forward path and ACK/NAK messages on the feedback path.)
The major advantage of ARQ over FEC is that error detection requires much simpler decoding equipment than does error correction. Also, ARQ is adaptive in the sense that information is retransmitted only when errors occur.
(Figure: block coding. The channel encoder maps k message bits into code words of length n bits, consisting of the k message bits plus (n - k) parity or check bits.)
2.5. Convolution Codes
In convolution codes the code words are generated by discrete-time convolution of the input sequence with the impulse response of the encoder. Convolution codes need memory for their generation. The encoder of a convolutional code operates on the incoming message sequence using a sliding window equal in duration to its own memory. Thus in a convolutional code, unlike a block code, the channel encoder accepts message bits as a continuous sequence and generates a continuous sequence of encoded bits at its output.
Fig.2.5. Convolution code

2.6. Cyclic code
Cyclic codes form an important subclass of linear codes. These codes are attractive for two reasons: first, encoding and syndrome computation can be implemented easily by employing shift registers with feedback connections (or linear sequential circuits); and second, because they have considerable inherent algebraic structure, it is possible to find various practical methods for decoding them.
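To make the shift-register remark concrete, here is a minimal sketch (in Python, using an assumed (7, 4) cyclic code with generator g(x) = x^3 + x + 1) of systematic cyclic encoding by polynomial division over GF(2); the division loop is exactly what a feedback shift register computes:

```python
def gf2_poly_rem(dividend, divisor):
    """Remainder of GF(2) polynomial division; bits listed MSB first."""
    rem = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if rem[i]:  # XOR the divisor in wherever the leading bit is 1
            for j, d in enumerate(divisor):
                rem[i + j] ^= d
    return rem[-(len(divisor) - 1):]

def cyclic_encode(msg, gen, n):
    """Systematic encoding: append the remainder of msg(x) * x^(n-k) / g(x)."""
    shifted = msg + [0] * (n - len(msg))  # multiply message by x^(n-k)
    return msg + gf2_poly_rem(shifted, gen)

g = [1, 0, 1, 1]                              # g(x) = x^3 + x + 1 (assumed)
codeword = cyclic_encode([1, 0, 0, 1], g, 7)  # -> [1, 0, 0, 1, 1, 1, 0]
```

Any cyclic shift of a valid codeword is again a codeword, which can be confirmed by rotating `codeword` and checking that its remainder against g is zero.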
2.7. RS codes
Reed-Solomon (RS) codes are a generalization over the Galois field GF(q) of the binary BCH codes. Here q is a power of a prime number p, q = p^m, where m is a positive integer. These non-binary BCH codes are usually called q-ary codes since they operate over the alphabet of q elements of the Galois field GF(q), with q > 2. In this sense these codes are different from binary codes, which have elements taken from the binary field GF(2). This is why q-ary codes are also called non-binary codes. All the concepts and properties verified for binary BCH codes are also valid for these non-binary codes.
CHAPTER 3
Terminology
In this section we define the terms that we will need to handle the later topics.
Fig.3.1. The Digital Communications System

Encoder and Decoder - The encoder adds redundant bits to the sender's bit stream to create a codeword. The decoder uses the redundant bits to detect and/or correct as many bit errors as the particular error-control code will allow.

Modulator and Demodulator - The modulator transforms the output of the encoder, which is digital, into a format suitable for the channel, which is usually analog (e.g., a telephone channel). The demodulator attempts to recover the correct channel symbol in the presence of noise. When the wrong symbol is selected, the decoder tries to correct any errors that result.

Communications Channel - The part of the communication system that introduces errors. The channel can be radio, twisted wire pair, coaxial cable, fibre optic cable, magnetic tape, optical discs, or any other noisy medium.

Error-Control Code - A set of code words used with an encoder and decoder to detect errors, correct errors, or both detect and correct errors.

Bit-Error-Rate (BER) - The probability of bit error. This is often the figure of merit for an error-control code. We want to keep this number small, typically less than 10^-4. Bit-error rate is a useful indicator of system performance on an independent error channel, but it has little meaning on bursty, or dependent error, channels.
Message-Error-Rate (MER) - The probability of message error. This may be a more appropriate figure of merit because the smart operator wants all of his messages error-free and cares less about the BER.

Undetected Message Error Rate (UMER) - The probability that the error detection decoder fails and an errored message (codeword) slips through undetected. This event happens when the error pattern introduced by the channel is such that the transmitted codeword is converted into another valid codeword. The decoder can't tell the difference and must conclude that the message is error-free. Practical error detection codes ensure that the UMER is very small, often less than 10^-16.

Random Errors - Errors that occur independently. This type of error occurs on channels that are impaired solely by thermal (Gaussian) noise. Independent-error channels are also called memoryless channels because knowledge of previous channel symbols adds nothing to our knowledge of the current channel symbol.

Burst Errors - Errors that are not independent. For example, channels with deep fades experience errors that occur in bursts. Because the fades make consecutive bits more likely to be in error, the errors are usually considered dependent rather than independent. In contrast to independent-error channels, burst-error channels have memory.
Energy per Bit (Eb) - The energy per information bit, i.e. the transmitted power Pt divided by the bit rate R:

Eb = Pt / R ............................................................................................................. (2)
If transmit power is fixed, the energy per bit can be increased by lowering the bit rate. This is why lower bit rates are considered more robust. The energy per bit required to maintain reliable communications can be decreased through error-control coding, as we shall see in the next section.

Coding Gain - The difference (in dB) in the required signal-to-noise ratio to maintain reliable communications after coding is employed.

Code Rate - Consider an encoder that takes k information bits and adds r redundant bits (also called parity bits) for a total of n = k + r bits per codeword. The code rate is the fraction k/n and the code is called an (n, k) error-control code. The added parity bits are a burden (i.e. overhead) to the communications system, so the system designer often chooses a code for its ability to achieve high coding gain with few parity bits.
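A small worked example (illustrative numbers, not taken from the text) ties these definitions together:

```python
# Equation (2): energy per bit for a fixed transmit power
Pt = 2.0          # transmit power in watts (assumed)
R = 1_000_000     # bit rate in bits per second (assumed)
Eb = Pt / R       # halving R would double the energy per bit

# Code rate of an (n, k) code with r parity bits
k, r = 4, 3
n = k + r
code_rate = k / n  # 4/7 for the (7, 4) code
```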
CHAPTER 4
Convolutional Code
Convolutional codes are commonly described using two parameters: the code rate and the constraint length. The code rate, k/n, is expressed as a ratio of the number of bits into the convolutional encoder (k) to the number of channel symbols output by the convolutional encoder (n) in a given encoder cycle. The constraint length parameter, K, denotes the "length" of the convolutional encoder, i.e. how many k-bit stages are available to feed the combinatorial logic that produces the output symbols. Closely related to K is the parameter m, which indicates how many encoder cycles an input bit is retained and used for encoding after it first appears at the input to the convolutional encoder. The m parameter can be thought of as the memory length of the encoder. Convolutional codes are widely used as channel codes in practical communication systems for error correction. The encoded bits depend on the current k input bits and a few past input bits. The main decoding strategy for convolutional codes is based on the widely used Viterbi algorithm. As a result of the wide acceptance of convolutional codes, there have been several approaches to modify and extend this basic coding scheme. Trellis coded modulation (TCM) and turbo codes are two such examples. In TCM, redundancy is added by combining coding and modulation into a single operation. This is achieved without any reduction in data rate or expansion in bandwidth, as would be required if an error correcting coding scheme were used on its own.
Fig.4.1. Block diagram of a general convolution encoder

The message bits enter one by one into the tapped shift register, and are then combined by mod-2 addition to form the encoded bit x. Therefore, we have

x = mL gL + ... + m1 g1 + m0 g0 (mod-2 addition) .......... (3)
The name convolutional encoding comes from the fact that equation (3) has the form of a binary convolution, analogous to the convolution integral. The message bit m0 in Fig.4.1 represents the current input, whereas the bits m1 to mL represent the past inputs, i.e. the state of the shift register. From equation (3) it is clear that an encoded bit x depends on the current message bit m0 and the state of the shift register defined by the previous L message bits.
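The claim that equation (3) is a discrete convolution can be checked directly: encoding a message with a single tapped shift register is the same as convolving the message sequence with the tap-gain sequence and reducing mod 2. A minimal sketch (the tap gains g = (1, 1, 1) are chosen for illustration):

```python
def conv_encode_single(msg, g):
    """x[k] = sum over i of g[i] * m[k-i] (mod 2): binary convolution of the
    message bits with the tap gains, as in equation (3)."""
    out = []
    for k in range(len(msg) + len(g) - 1):
        s = 0
        for i in range(len(g)):
            if 0 <= k - i < len(msg):   # taps outside the message contribute 0
                s ^= g[i] & msg[k - i]
        out.append(s)
    return out

x = conv_encode_single([1, 0, 1, 1], [1, 1, 1])  # -> [1, 1, 0, 0, 0, 1]
```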
(Figure 4.2: a rate-1/2 convolutional encoder. The input bit m0 and the state bits m1, m2 of the shift register feed two mod-2 adders producing x1 and x2; a commutator switch interleaves the encoded bits.)
To provide the extra bits needed for error control, a complete convolutional encoder must generate output bits at a rate greater than the message bit rate rb. This is achieved by using two or more mod-2 adders connected to the register, as shown in figure 4.2, and interleaving the encoded bits via the commutator switch. The convolutional encoder of figure 4.2 is for n = 2, k = 1 and L = 2. It therefore generates n = 2 encoded bits as under:

x1 = m0 + m1 + m2
x2 = m0 + m2 .......... (4)

The commutator switch selects these encoded bits alternately to produce the stream of encoded bits as under:

x = x1 x2 x1 x2 x1 x2 ... .......... (5)
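The rate-1/2 encoder of equations (4) and (5) can be sketched directly in Python; the shift-register state (m1, m2) starts at zero and the two outputs are interleaved exactly as the commutator switch would do:

```python
def encode_rate_half(msg):
    """Rate-1/2 convolutional encoder: x1 = m0 + m1 + m2, x2 = m0 + m2 (mod 2)."""
    m1 = m2 = 0                # shift-register state, initially cleared
    out = []
    for m0 in msg:
        x1 = m0 ^ m1 ^ m2      # first mod-2 adder
        x2 = m0 ^ m2           # second mod-2 adder
        out += [x1, x2]        # commutator interleaves x1, x2
        m2, m1 = m1, m0        # bits move one stage along the register
    return out

bits = encode_rate_half([0, 1, 1, 0])  # -> 00 11 01 01
```

Encoding the message 0110 reproduces the output sequence 00 11 01 01 given with the state-diagram example.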
The output bit rate is twice the input bit rate, i.e. 2rb.
4.3. Graphical Representation for Convolutional Encoding
For convolutional encoding, there are three different graphical representations that are widely used, and they are related to each other. The graphical representations are as under:
(1) The state diagram
(2) The code tree
(3) The code trellis
Let us discuss them one by one.
With the help of this encoder we will describe the graphical representation of convolution code.
Input bits:  0  1  1  0
Output bits: 00 11 01 01
Fig.4.5. Example of state diagram: input message bits 0110
4.5. Tree Diagram Representation
The tree diagram representation shows all possible information and encoded sequences for the convolutional encoder. Fig.4.5 shows the tree diagram for the encoder in Fig.4.3. The encoded bits are labelled on the branches of the tree. Given an input sequence, the encoded sequence can be directly read from the tree. As an example, an input sequence (1011) results in the encoded sequence (11, 10, 00, 01).
4.6. Trellis Diagram Representation
The trellis diagram of a convolutional code is obtained from its state diagram. All state transitions at each time step are explicitly shown in the diagram to retain the time dimension, as is present in the corresponding tree diagram. Usually, supporting descriptions of state transitions, corresponding input and output bits, etc. are labelled in the trellis diagram. It is interesting to note that the trellis diagram, which describes the operation of the encoder, is very convenient for describing the behaviour of the corresponding decoder, especially when the famous Viterbi Algorithm (VA) is followed. Fig.4.6 shows the trellis diagram for the encoder in Fig.4.3.
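To illustrate how the trellis supports decoding, here is a minimal hard-decision Viterbi decoder sketch for the rate-1/2, K = 3 encoder of Fig.4.2 (x1 = m0 + m1 + m2, x2 = m0 + m2); it keeps one path metric and one survivor path per trellis state (m1, m2):

```python
def viterbi_decode(received, nbits):
    """Hard-decision Viterbi decoding for the rate-1/2, K=3 code with
    x1 = m0 + m1 + m2 and x2 = m0 + m2.  `received` is a flat bit list."""
    def step(state, bit):            # one encoder transition from state = (m1, m2)
        m1, m2 = state
        return (bit ^ m1 ^ m2, bit ^ m2), (bit, m1)   # (x1, x2), next state

    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    metric = {s: 0 if s == (0, 0) else float("inf") for s in states}
    path = {s: [] for s in states}   # survivor input sequence per state
    for t in range(nbits):
        r1, r2 = received[2 * t], received[2 * t + 1]
        new_metric, new_path = {}, {}
        for s in states:
            best = (float("inf"), None, None)
            for prev in states:          # examine both branches into state s
                for bit in (0, 1):
                    (x1, x2), nxt = step(prev, bit)
                    if nxt != s:
                        continue
                    d = metric[prev] + (x1 != r1) + (x2 != r2)  # Hamming metric
                    if d < best[0]:
                        best = (d, prev, bit)
            new_metric[s] = best[0]
            new_path[s] = path[best[1]] + [best[2]] if best[1] is not None else []
        metric, path = new_metric, new_path
    return path[min(states, key=lambda s: metric[s])]   # best surviving path
```

Decoding the clean encoded sequence for the message 1011 returns the message, and a single flipped channel bit is still corrected because every competing trellis path accumulates a larger Hamming distance.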