Lecture 10: Error Control Coding I
Diversity makes use of redundancy by using multiple received signals to improve signal quality. Error control coding, discussed in this lecture and the next, uses redundancy by sending extra data bits to detect and correct bit errors. Two types of coding are used in wireless systems (see the beginning of Lecture 7):
Source coding: compressing a data source using an efficient encoding of the information.
Channel coding: encoding the information so that bit errors can be overcome.
We now focus on channel coding, also called error control coding, which is used to overcome the bit errors introduced by the channel.
I. Error Detection
Three approaches can be used to cope with data transmission errors.
1. Using codes to detect errors.
2. Using codes to correct errors, called Forward Error Correction (FEC).
3. Mechanisms to automatically retransmit (ARQ) corrupted packets.
Example:
F = 500 bytes (4000 bits)
Pb = 10^-6 (a very good wireless link)
P1 = (1 - 10^-6)^4000 = 0.9960 (probability that a packet arrives with no bit errors)
P2 = 1 - (1 - 10^-6)^4000 = 0.0040 (probability that a packet has at least one bit error)
So about 4 out of every 1000 packets have errors. At 100 kbps, this means about 6 packets per minute are in error.
Given 100 kilobyte data files (200 packets per file), the probability that a file is corrupted is 1 - P1^200 = 0.551.
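A minimal Python sketch (not from the lecture; variable names are mine) that reproduces these numbers:

Pb = 1e-6                                   # bit error rate
bits_per_packet = 4000                      # F = 500 bytes
P1 = (1 - Pb) ** bits_per_packet            # packet arrives with no bit errors: ~0.9960
P2 = 1 - P1                                 # packet has at least one bit error: ~0.0040
packets_per_minute = 100_000 * 60 / bits_per_packet   # 1500 packets/min at 100 kbps
print(P2 * packets_per_minute)              # ~6 errored packets per minute
packets_per_file = 200                      # 100 kB file split into 500-byte packets
print(1 - P1 ** packets_per_file)           # ~0.551 probability the file is corrupted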
Error Detection
Basic Idea: Add extra bits to a block of data bits.
Data block: k bits
Error detection: n - k more bits, based on some algorithm for creating the extra bits
The result is a frame (data link layer packets are called frames) of n bits
Parity Check
Simplest scheme: add one bit to the frame. The value of the bit is chosen so as to make the number of 1s even (or odd, depending on the type of parity).
Example: 7-bit character: 1110001. Even parity would make an 8-bit character of what? Odd parity would make an 8-bit character of what?
The receiver separates the data bits and the error correction bits.
It then performs the same algorithm again to see if the received bits are what they should have been. Hopefully, if errors have occurred, the packet can be retransmitted or corrected. "Hopefully" because there are always some error patterns that could go undetected.
So, this can be used to detect errors. For example, a received 10100010 (when using even parity) would be invalid.
But what types of errors would NOT be detected?
Noise sometimes comes in impulses or during a deep fade, so one cannot assume that bit errors will occur as isolated single-bit errors.
Parity checks, therefore, have limited usefulness.
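As a concrete illustration of the parity example above, here is a minimal Python sketch (mine, not from the slides) of even-parity generation and checking, including a double-bit error that goes undetected:

def even_parity_bit(bits):
    """Choose the parity bit so the total number of 1s is even."""
    return sum(bits) % 2

def passes_even_parity(frame):
    """A received frame is accepted if its number of 1s is even."""
    return sum(frame) % 2 == 0

data = [1, 1, 1, 0, 0, 0, 1]                 # the 7-bit character 1110001
frame = data + [even_parity_bit(data)]       # append the even-parity bit
print(passes_even_parity(frame))                        # True: valid frame
print(passes_even_parity([1, 0, 1, 0, 0, 0, 1, 0]))     # False: 10100010 is invalid
frame[0] ^= 1                                # flip two bits (a short burst)
frame[1] ^= 1
print(passes_even_parity(frame))             # True: the double error goes undetected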
Cyclic Redundancy Check (CRC)
The CRC takes the source data and creates a sequence of bits that is only valid if it is divisible by a predetermined number.
It uses modulo-2 arithmetic: binary addition with no carries, which is the same as the exclusive-OR (XOR) operation. It can also be implemented using polynomial operations.
In modulo-2 arithmetic, any number added to itself is zero, so R/P + R/P = 0. Dividing the transmitted frame (the data with the remainder R appended) by P therefore leaves no remainder.
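The following Python sketch shows CRC generation by modulo-2 (XOR) long division. The data D and divisor P are assumed values chosen for illustration (a common textbook example), not necessarily those used on the slides:

def mod2_div_remainder(dividend, divisor):
    """Remainder of modulo-2 long division (lists of 0/1 bits)."""
    bits = list(dividend)
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i]:                          # subtract (XOR) the divisor wherever the leading bit is 1
            for j in range(n):
                bits[i + j] ^= divisor[j]
    return bits[-(n - 1):]                   # the last n-1 bits are the remainder

def crc_frame(data, divisor):
    """Append the remainder R so that the whole frame is divisible by P."""
    remainder = mod2_div_remainder(data + [0] * (len(divisor) - 1), divisor)
    return data + remainder

P = [1, 1, 0, 1, 0, 1]                       # divisor P (n - k = 5 check bits)
D = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1]           # k = 10 data bits
T = crc_frame(D, P)                          # transmitted frame = data + FCS (here FCS = 01110)
print(mod2_div_remainder(T, P))              # all zeros: T is divisible by P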
CRC codes fail when a sequence of bit errors creates a different bit sequence that is also divisible by P.
It can be shown that the following errors can be detected with suitably chosen values of P and the related polynomial P(X):
All single-bit errors.
All double-bit errors.
Any odd number of errors.
Any burst error whose length is less than or equal to n - k.
This means bursts no longer than the frame check sequence are detected. A burst error is a contiguous set of bits in which the first and last bits, and possibly bits in between, are in error.
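Continuing the assumed CRC example above (reusing T, P, and mod2_div_remainder), a short burst error inside the frame leaves a nonzero remainder at the receiver, so it is detected:

corrupted = list(T)
for i in (3, 4, 5):                          # flip a 3-bit burst (3 <= n - k = 5)
    corrupted[i] ^= 1
print(mod2_div_remainder(corrupted, P))      # nonzero remainder: burst error detected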
On transmission, the k-bit block of data is mapped into an n-bit block called a codeword, using an FEC (forward error correction) encoder.
A codeword may or may not be similar to those from the CRC approach above. It may come from taking the original data and adding extra bits (as with CRC), or it may be created using a completely new set of bits.
The codewords are longer than the original data. Then the block is transmitted.
At the receiver, comparing the received codeword with the set of valid codewords can result in one of five possible outcomes
1. There are no bit errors
The received codeword is the same as the transmitted codeword. The corresponding source data for that codeword is output from the decoder.
The Hamming distance d(v1, v2) is defined as the number of bits in which two codewords disagree.
Example: v1 = 011011, v2 = 110001, so d(v1, v2) = 3.
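A one-function Python sketch of the Hamming distance (the function name is mine):

def hamming_distance(v1, v2):
    """Number of bit positions in which the two words disagree."""
    assert len(v1) == len(v2)
    return sum(a != b for a, b in zip(v1, v2))

print(hamming_distance("011011", "110001"))  # 3, as in the example above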
Example: Given k = 2, n = 5
Decoding rule: Use the closest codeword (in terms of Hamming distance). Why is it okay to do this? How much less likely are two errors than one error? Assume BER = 10^-3.
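As a rough justification (my calculation, using the 5-bit codewords of this example and p = 10^-3): P(exactly one error) = 5 p (1 - p)^4 ≈ 5 x 10^-3, while P(exactly two errors) = 10 p^2 (1 - p)^3 ≈ 1 x 10^-5. A single error is therefore roughly 500 times more likely than a double error, so choosing the nearest codeword is almost always the right decision.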
In many cases, a possible received codeword is a Hamming distance of 1 from a valid codeword.
But in eight cases, a received codeword would be a distance of 2 away from two valid codewords.
The receiver does not know which to choose; the correction decision is undecided, so an error is detected but not correctable.
So, we can conclude that in this case an error of 1 bit is correctable, but not errors of two bits.
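A minimal Python sketch of nearest-codeword decoding that also flags the undecidable (detected-but-uncorrectable) case. The (5,2) codebook below is a hypothetical example of mine; the slides may use a different one:

CODEBOOK = {"00": "00000", "01": "00111", "10": "11001", "11": "11110"}  # hypothetical (5,2) code

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Return the data block of the unique closest codeword, or None on a tie."""
    distances = {data: hamming_distance(received, cw) for data, cw in CODEBOOK.items()}
    best = min(distances.values())
    closest = [data for data, d in distances.items() if d == best]
    return closest[0] if len(closest) == 1 else None   # None: detected but not correctable

print(decode("00111"))   # '01': received word is a valid codeword, no errors
print(decode("00101"))   # '01': single-bit error corrected to the nearest codeword
print(decode("10101"))   # None: two codewords are equally close, so the error is only detected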
For a code consisting of codewords denoted wi, the minimum Hamming distance is defined as d_min = min over i ≠ j of d(wi, wj), i.e., the smallest distance between any pair of distinct codewords.
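A quick Python check (reusing the hypothetical codebook from the sketch above) that its minimum distance is 3, consistent with the earlier conclusion that single-bit errors are correctable but double-bit errors are not:

from itertools import combinations

codewords = ["00000", "00111", "11001", "11110"]      # hypothetical (5,2) codebook from above
d_min = min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(codewords, 2))
print(d_min)                                          # 3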
Coding Gain
Coding can allow us to use lower power (a smaller Eb/N0) to achieve the same error rate we would have had without the correction bits, since errors can be corrected.
The coding gain of a code is defined as the reduction, in dB, of the Eb/N0 required to obtain the same error rate as without coding.
For example, for a BER of 10^-6, 11 dB is needed with the rate-1/2 code, as compared to 13.77 dB without coding. This is a coding gain of 2.77 dB. What is the coding gain at a BER of 10^-3?