Channel Encoding
SECTION: OE174
I take this opportunity to express my thanks to all those guides who acted as guiding pillars to light our way throughout this project, leading to the successful and satisfactory completion of this study. We are truly grateful to Mr. Mritunjay Kumar for providing us with the opportunity to undertake this project at this university and for providing us with all the facilities. We are highly thankful to Mr. Ambrish Gangal for his active support, valuable time and advice, whole-hearted guidance, sincere cooperation and painstaking involvement during the study, and for helping us complete the project within the stipulated time. Lastly, we are thankful to all those, particularly our various friends, who helped create a proper, healthy and conducive environment and contributed fresh and innovative ideas during the project; without their help, it would have been extremely difficult for us to prepare the project within a time-bound framework.
TABLE OF CONTENTS
1. INTRODUCTION
2. CHANNEL ENCODING
3. HOW IT WORKS
4. TYPES OF CHANNELS
5. MAIN TYPES OF CHANNEL ENCODING
6. TRANSMISSION OVER A CHANNEL
7. ERROR RECOGNITION AND CORRECTION
8. CHANNEL ENCODING FOR DIGITAL COMMUNICATION
9. CHANNEL ENCODING FOR TELECOMMUNICATION
10. CHANNEL ENCODING AS NOISE
11. APPLICATIONS
12. REFERENCES
1. INTRODUCTION
Channel encoding is a system of error control for data transmission, whereby the sender adds redundant data to its messages; this is also known as an error-correcting code or forward error correction (FEC). The main purpose of a channel encoder is to produce a sequence of data that is robust to noise and to provide error-detection and forward-error-correction mechanisms. In simple and cheap transceivers, forward error correction is costly and, therefore, the task of channel encoding is limited to the detection of errors in packet transmission. The physical channel sets limits on the magnitude and the rate of signal transmission. According to the Shannon-Hartley theorem, the capacity of a channel to transmit a message without error is given as:
C = B log2(1 + S/N)   (eq. 1.1)

where C is the channel capacity in bits per second; B is the bandwidth of the channel in hertz; S is the average signal power over the entire bandwidth, measured in watts; and N is the average noise power over the entire bandwidth, measured in watts. The theorem states that for data to be transmitted free of errors, its transmission rate should be below the channel's capacity. It also indicates how improving the signal-to-noise ratio (SNR) can improve the channel's capacity. The equation reveals two independent reasons why errors can be introduced during transmission.
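The capacity formula can be evaluated directly. The sketch below is a minimal illustration; the 3 kHz bandwidth and 30 dB SNR figures are example values assumed here, not taken from the text:

```python
import math

def channel_capacity(bandwidth_hz, signal_w, noise_w):
    """Shannon-Hartley capacity C = B log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + signal_w / noise_w)

# Example: a 3 kHz channel with S/N = 1000 (about 30 dB) can carry
# roughly 30 kbit/s error-free at best.
c = channel_capacity(3000, 1000, 1)
```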
2. CHANNEL ENCODING
The channel encoder produces a sequence of data that is robust to noise and provides error-detection and forward-error-correction mechanisms. In simple and cheap transceivers, forward error correction is costly and, therefore, the task of channel encoding is limited to the detection of errors in packet transmission.

Information will be lost if the message is transmitted at a rate higher than the channel's capacity. This type of error is called equivocation in information theory. Information will also be lost because of noise, which adds irrelevant information into the signal. A stochastic model of the channel helps to quantify the impact of these two sources of error. Suppose an input sequence of data x_l, which can take j distinct values, x_l ∈ X = {x1, x2, ..., xj}, is transmitted through a physical channel. Let P(x_l) denote P(X = x_l). The channel's output can be decoded with a k-valued alphabet to produce y_m ∈ Y = {y1, y2, ..., yk}. Let P(y_m) denote P(Y = y_m). At time t_i, the channel generates an output symbol y_i for an input symbol x_i. Assuming that the channel distorts the transmitted data, it is possible to model distortion (or transmission probability) as a stochastic process:

P(y_m | x_l) = P(Y = y_m | X = x_l)   (eq. 1.2)

where l = 1, 2, ..., j and m = 1, 2, ..., k. In the subsequent analysis of the stochastic characteristics of the channel, the following assumptions hold:
- The channel is discrete: X and Y have finite sets of symbols.
- The channel is stationary: P(y_m | x_l) are independent of the time instance i.
- The channel is memoryless: P(y_m | x_l) are independent of previous inputs and outputs.

One way of describing transmission distortion is by using the channel matrix, Pc:
Pc = [ P(y_1|x_1)  P(y_2|x_1)  ...  P(y_k|x_1)
       P(y_1|x_2)  P(y_2|x_2)  ...  P(y_k|x_2)
       ...
       P(y_1|x_j)  P(y_2|x_j)  ...  P(y_k|x_j) ]   (eq. 1.3)

where each row sums to one:

sum_{m=1}^{k} P(y_m | x_l) = 1,  for l = 1, 2, ..., j   (eq. 1.4)

Moreover, the output probabilities follow from the input probabilities and the channel matrix:

P(y_m) = sum_{l=1}^{j} P(x_l) P(y_m | x_l)   (eq. 1.5)

or, more generally:

[P(Y)] = [P(X)] [Pc]   (eq. 1.6)

where [P(X)] and [P(Y)] are row matrices.
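Equations (1.4)-(1.6) can be checked numerically. The sketch below uses a made-up 2-input, 2-output channel matrix; the probability values are illustrative assumptions, not taken from the text:

```python
# Channel matrix Pc: row l holds P(y_m | x_l); each row sums to 1 (eq. 1.4).
Pc = [[0.9, 0.1],
      [0.2, 0.8]]

Px = [0.5, 0.5]   # input distribution [P(X)], a row matrix

# eq. 1.6, [P(Y)] = [P(X)] [Pc], written out element-wise as in eq. 1.5:
Py = [sum(px * row[m] for px, row in zip(Px, Pc))
      for m in range(len(Pc[0]))]
# Py is approximately [0.55, 0.45]
```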
3. HOW IT WORKS
Channel Encoding is accomplished by adding redundancy to the transmitted information using a predetermined algorithm.
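As a minimal illustration of adding redundancy, a repetition code simply transmits each information bit several times. The document does not prescribe a particular algorithm; this is just the simplest possible example:

```python
def repeat_encode(bits, r=3):
    """Encode each bit r times -- the simplest way to add redundancy."""
    return [b for b in bits for _ in range(r)]

encoded = repeat_encode([1, 0, 1])   # [1, 1, 1, 0, 0, 0, 1, 1, 1]
```

The price of this robustness is efficiency: the r-fold repetition code transmits r bits for every information bit.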
The channel matrix of a binary symmetric channel is, therefore, given as:

Pc = [ 1-p    p
        p   1-p ]   (eq. 1.9)

P(0|0) = P(1|1) = 1 - p,  P(0|1) = P(1|0) = p   (eq. 1.10)

Equation (1.10) states that a bit of information is either transmitted successfully, with probability P(1|1) = P(0|0) = 1 - p, or flipped by the channel, with probability p; that is, the probability that 0 is received when 1 is transmitted, or vice versa, is p.
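The behaviour of a binary symmetric channel is easy to simulate. The sketch below flips each transmitted bit independently with probability p; the seed and bit pattern are arbitrary choices made here for reproducibility:

```python
import random

def bsc(bits, p, seed=0):
    """Binary symmetric channel: flip each bit independently with probability p."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < p) for b in bits]

sent = [0, 1] * 5000
received = bsc(sent, p=0.1)
errors = sum(s != r for s, r in zip(sent, received))
# errors is close to p * len(sent) = 1000
```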
The code of a word W = s_{k1} s_{k2} ... s_{kM}, namely C(W), is the concatenation C(W) = C(s_{k1}) C(s_{k2}) ... C(s_{kM}). The trade-off between efficiency and correction capability can also be seen in the attempt to maximize the total number of codewords, given a fixed codeword length and a fixed correction capability (represented by the Hamming distance d). A[n, d] denotes the maximum number of codewords for a given codeword length n and minimum Hamming distance d.
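For very small n, A[n, d] can be found by exhaustive search. The brute-force sketch below is only feasible for tiny codeword lengths, but it makes the definition concrete:

```python
from itertools import combinations, product

def hamming(a, b):
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

def A(n, d):
    """Exhaustively compute A[n, d]: the maximum number of binary codewords
    of length n whose pairwise Hamming distance is at least d."""
    words = list(product([0, 1], repeat=n))
    for size in range(len(words), 0, -1):
        for code in combinations(words, size):
            if all(hamming(a, b) >= d for a, b in combinations(code, 2)):
                return size
```

For example, A(3, 3) = 2 because only a word and its complement, such as 000 and 111, are at distance 3 in length 3.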
INFORMATION RATE
The information rate measures the amount of transported information per transmitted bit. When C is a binary block code consisting of A codewords of length n bits, the information rate of C is defined as (log2 A)/n. If the first k bits of a codeword are independent information bits, then the information rate is log2(2^k)/n = k/n.
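The two definitions above can be computed directly; the repetition code and the (7, 4) parameters below are assumed examples:

```python
import math

def information_rate(num_codewords, n):
    """Information rate (log2 A) / n of a binary block code with A codewords
    of length n."""
    return math.log2(num_codewords) / n

# The length-3 repetition code has A = 2 codewords (000 and 111):
r_rep = information_rate(2, 3)        # 1/3
# A code whose first k = 4 of n = 7 bits are information bits has rate 4/7:
r_74 = information_rate(2 ** 4, 7)    # 4/7
```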
The encoder shown in the preceding figure is non-recursive. Here is an example of a recursive one:
5.3 COMPARISON BETWEEN BLOCK AND CONVOLUTIONAL CODES

Block codes work on fixed-size blocks (packets) of bits or symbols of predetermined size. Convolutional codes work on bit or symbol streams of arbitrary length. A convolutional code can be turned into a block code, if desired.
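A small non-recursive convolutional encoder can be sketched in a few lines. The rate-1/2, constraint-length-3 generators (111, 101) used here are a common textbook choice assumed for illustration, not taken from the document:

```python
def conv_encode(bits):
    """Rate-1/2 non-recursive convolutional encoder with generator
    polynomials 111 and 101 (octal 7 and 5), constraint length 3."""
    s1 = s2 = 0                      # two-stage shift-register state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)      # output of generator 111
        out.append(b ^ s2)           # output of generator 101
        s1, s2 = b, s1               # shift the register
    return out

stream = conv_encode([1, 0, 1])      # [1, 1, 1, 0, 0, 0]
```

Note that, unlike a block code, the encoder keeps state between input bits, so each output pair depends on the current bit and the two previous ones.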
6.1. IRRELEVANCE
The content of information that can be introduced into the channel due to noise is described by the conditional information content, I(y|x). It is the information content of y that can be observed provided that x is known. The conditional entropy is given as:

H(Y|x_l) = - sum_{m=1}^{k} P(y_m | x_l) log2 P(y_m | x_l)   (eq. 1.11)

P(y_m | x_l) can be read from the channel matrix [Pc]. The average conditional entropy over all input message symbols, x_l ∈ X, is given by:

H(Y|X) = sum_{l=1}^{j} P(x_l) H(Y|x_l)   (eq. 1.12)

H(Y|X) = - sum_{l=1}^{j} sum_{m=1}^{k} P(x_l) P(y_m | x_l) log2 P(y_m | x_l)   (eq. 1.13)
A good channel encoder is one that reduces the irrelevance entropy.
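Equation (1.13) can be computed directly. As an assumed example, the sketch below evaluates the irrelevance entropy of a binary symmetric channel with p = 0.1 and equiprobable inputs:

```python
import math

def irrelevance(Px, Pc):
    """Irrelevance entropy H(Y|X), computed as in eq. 1.13:
    -sum_l sum_m P(x_l) P(y_m|x_l) log2 P(y_m|x_l)."""
    return -sum(px * p * math.log2(p)
                for px, row in zip(Px, Pc)
                for p in row if p > 0)

# Binary symmetric channel with p = 0.1, equiprobable inputs:
h_irr = irrelevance([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])   # about 0.469 bits
```

For a noiseless channel the irrelevance is zero, since each P(y|x) is 0 or 1.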
6.2 EQUIVOCATION
The content of information that can be lost because of the channel's inherent constraints can be quantified by observing the input x given that the output y is known:

I(x|y) = -log2 P(x|y)   (eq. 1.14)

H(X|Y) = - sum_{m=1}^{k} sum_{l=1}^{j} P(y_m) P(x_l | y_m) log2 P(x_l | y_m)   (eq. 1.15)
The conditional probability of Equation (1.15) is also known as the probability of inference, or posterior probability. Therefore, equivocation is sometimes called inference entropy. A good channel encoding scheme is one that has a high inference probability. This can be achieved by introducing redundancy during channel encoding.

6.3 TRANSINFORMATION
The information content I(X; Y) that overcomes the channel's constraints to reach the destination (the receiver) is called transinformation. Given the input entropy, H(X), and the equivocation, H(X|Y), the transinformation is computed as:

I(X; Y) = H(X) - H(X|Y)   (eq. 1.16)

Equivalently, in terms of the output entropy and the irrelevance:

I(X; Y) = H(Y) - H(Y|X)   (eq. 1.17)
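Equation (1.17) can likewise be evaluated numerically. The two channels below, a noiseless binary channel and a binary symmetric channel with p = 0.1, are illustrative assumptions:

```python
import math

def entropy(dist):
    """Shannon entropy of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def transinformation(Px, Pc):
    """Transinformation I(X;Y) = H(Y) - H(Y|X), as in eq. 1.17."""
    Py = [sum(px * row[m] for px, row in zip(Px, Pc))
          for m in range(len(Pc[0]))]
    h_y_given_x = -sum(px * p * math.log2(p)
                       for px, row in zip(Px, Pc) for p in row if p > 0)
    return entropy(Py) - h_y_given_x

# A noiseless binary channel delivers exactly 1 bit per symbol:
i_clean = transinformation([0.5, 0.5], [[1, 0], [0, 1]])          # 1.0
# A binary symmetric channel with p = 0.1 delivers less:
i_noisy = transinformation([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]])  # about 0.531
```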
In this method, information bits are protected against errors by transmitting extra redundant bits, so that if errors occur during transmission, the redundant bits can be used by the decoder to determine where the errors have occurred and how to correct them.
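Continuing the repetition-code illustration, a majority-vote decoder shows how redundant bits let the decoder locate and correct errors. This is an assumed minimal example, not a scheme prescribed by the document:

```python
def repeat_decode(received, r=3):
    """Majority-vote decoder for an r-fold repetition code: each group of r
    received bits is decoded to whichever bit value occurs more often, so
    up to (r - 1) // 2 flipped bits per group are corrected."""
    return [int(sum(received[i:i + r]) > r // 2)
            for i in range(0, len(received), r)]

# One flipped bit per 3-bit group is corrected:
decoded = repeat_decode([1, 1, 0,  0, 1, 0,  1, 1, 1])   # [1, 0, 1]
```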
12.REFERENCES
http://www.books.google.co.in/books?isbn=0470997656
http://www.eecs.umich.edu/eecs/research/group.html?r_id=5&g_id=24
http://www.britannica.com/EBchecked/topic/.../channelencoding