Error Control Coding
Overview of Error Control Coding
Outline
1.1 The History of Error Control Coding
1.2 Error correcting coding: Basic concepts
    a. Block codes and Convolutional codes
    b. Hamming distance, Hamming spheres and error correcting capability
1.1 The History of Error Control Coding
Many communication channels suffer from noise, interference or distortion due to hardware imperfections or physical limitations. The goal of error control coding is to encode information in such a way that even if the channel (or storage medium) introduces errors, the receiver can correct them and recover the original transmitted information.
The History of Error Control Coding
1948: Dawn of the Information Age
It is arguable that the information age had its beginnings in the late 1940s. Although they depended upon much earlier mathematical and scientific work, three discoveries in particular were about to change forever the way the world communicated. Not only were the announcements of these discoveries almost simultaneous, they all stemmed from a single commercial research laboratory.
The History of Error Control Coding
Claude Shannon (1916-2001), a mathematician working at Bell Laboratories, is generally regarded as the father of Information Theory.

In his paper "A Mathematical Theory of Communication" (July and October 1948), Shannon introduced a fundamental measure of information, entropy, and coined the term "bit", referring to the amount of information in a single binary digit.

He went on to show that every noisy channel has associated with it an information capacity C bits/second, below which it is possible to encode messages such that they can be received and decoded with an arbitrarily small probability of error.
The History of Error Control Coding
His proof of the capacity theorem, however, relied on a mathematical "trick": he showed that, for transmission rates below C, the error probability averaged over all randomly selected codes can be made as small as desired.

Claude Shannon is today honored by the "Shannon Lectures", delivered by recipients of the "Shannon Award", which is awarded each year by the IEEE Information Theory Society to outstanding researchers in the field of information theory.
The History of Error Control Coding
Richard Hamming

Shannon did refer to a practical code in his paper. This code was found by his Bell Labs colleague, Richard Hamming (1915-1998).

You may have already heard Hamming's name in connection with the Hamming window used in Fourier analysis. Hamming constructed a code which added three additional "parity" bits to four information bits.
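To make the construction concrete, here is a minimal Python sketch (not from the original slides) of a (7,4) Hamming code: the encoder appends three parity bits to four data bits, and the decoder uses the three-bit syndrome to locate and flip a single corrupted bit. The particular parity equations and the example bits are illustrative choices.

```python
# Columns of the parity-check matrix H, one per codeword position
# (positions 0-3 carry data, positions 4-6 carry parity).
H_COLUMNS = [
    (1, 1, 0),  # d1
    (1, 0, 1),  # d2
    (0, 1, 1),  # d3
    (1, 1, 1),  # d4
    (1, 0, 0),  # p1
    (0, 1, 0),  # p2
    (0, 0, 1),  # p3
]

def encode(d):
    """Encode 4 data bits into a systematic 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

def decode(r):
    """Correct at most one bit error and return the 4 data bits."""
    # Syndrome: one parity check per row of H, computed over the received word.
    s = tuple(sum(r[i] * H_COLUMNS[i][k] for i in range(7)) % 2 for k in range(3))
    r = list(r)
    if s != (0, 0, 0):              # nonzero syndrome matches exactly one column
        r[H_COLUMNS.index(s)] ^= 1  # flip the corresponding position
    return r[:4]

codeword = encode([1, 0, 1, 1])
received = list(codeword)
received[2] ^= 1                    # the channel flips one bit
assert decode(received) == [1, 0, 1, 1]
```

Because every column of H is a distinct nonzero 3-bit pattern, every single-bit error produces a unique syndrome, which is what makes the correction step possible.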
The History of Error Control Coding
The Hamming code was the first result in the field that is now known as Coding Theory, the object of which is to find good codes that are also easy to implement.

Although the Hamming code was referred to by Shannon in 1948, patent considerations prevented its independent publication until 1950, in the paper "Error Detecting and Error Correcting Codes" (April 1950).

In 1988 Hamming received the first IEEE Richard W. Hamming Medal "For exceptional contributions to information sciences and systems".
The History of Error Control Coding
Fifty Years of Progress
Loosely speaking, the history of error control coding can be divided into a "pre-turbo code" era and a "post-turbo code" era.

Turbo codes and their ingenious iterative decoder, invented in 1993 by French researchers Claude Berrou and Alain Glavieux, revolutionized the area. Prior to the invention of these codes, no one really knew how to get close to the theoretical performance limits promised by Shannon.
The History of Error Control Coding
Coding research concentrated on two main areas: algebraic codes and trellis codes.

Algebraic codes, such as the celebrated Reed-Solomon and BCH codes, build algebraic structure into the code so that it can be decoded using computationally efficient algorithms for solving systems of equations.

Trellis codes are easily decoded using trellis-based algorithms such as the Viterbi algorithm.
The History of Error Control Coding
Somewhat ironically, another class of codes that turned out to approach Shannon's bound, namely low-density parity-check (LDPC) codes, had actually been discovered by Robert Gallager in the 1960s.

Like turbo codes, LDPC codes have an iterative decoder. The true power of these codes was, however, overlooked, due in part to the lack of sufficient computing power for their implementation at the time.
1.2 Error correcting coding: Basic concepts
All error correcting codes are based on the same basic principle: redundancy is added to information in order to correct any errors. In a basic (and practical) form, redundant symbols are appended to information symbols to obtain a coded sequence or code word.
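As a toy illustration of this principle (an assumed example, not taken from the slides), a (3,1) repetition code simply sends each information bit three times; a majority vote at the receiver then corrects any single flip within a triplet.

```python
def encode(bits):
    # Each information bit is repeated three times: two redundant symbols per bit.
    return [b for b in bits for _ in range(3)]

def decode(coded):
    # Majority vote over each triplet recovers the original bit despite one flip.
    triplets = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(t) >= 2 else 0 for t in triplets]

coded = encode([1, 0, 1])        # -> [1,1,1, 0,0,0, 1,1,1]
coded[1] ^= 1                    # one error in the first triplet
coded[5] ^= 1                    # one error in the second triplet
assert decode(coded) == [1, 0, 1]
```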
Classification:
- FEC (Forward Error Correction)
- ARQ (Automatic Repeat reQuest)
- HARQ (Hybrid ARQ)
Automatic Repeat Request (ARQ) is a communication protocol in which the receiving device detects errors and requests retransmissions. When the receiver detects an error in a packet, it automatically requests the transmitter to resend the packet. This process is repeated until the packet is error free or the error continues beyond a predetermined number of transmissions. ARQ is sometimes used with Global System for Mobile (GSM) communication to guarantee data integrity.
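The sketch below is a toy stop-and-wait ARQ loop under simplifying assumptions (a bytewise checksum for detection and a randomly corrupting channel); it is not GSM's actual protocol, only the request-and-resend idea.

```python
import random

MAX_ATTEMPTS = 4

def checksum(payload):
    return sum(payload) % 256

def noisy_channel(packet, error_rate=0.3):
    """Occasionally corrupt one payload byte to emulate channel errors."""
    payload, chk = packet
    payload = list(payload)
    if payload and random.random() < error_rate:
        payload[random.randrange(len(payload))] ^= 0xFF
    return payload, chk

def send_with_arq(payload):
    packet = (list(payload), checksum(payload))
    for attempt in range(1, MAX_ATTEMPTS + 1):
        received, chk = noisy_channel(packet)
        if checksum(received) == chk:          # no error detected: accept
            return received, attempt
        # error detected: the receiver requests a retransmission (NAK)
    raise RuntimeError("packet still in error after %d attempts" % MAX_ATTEMPTS)

data, attempts = send_with_arq([10, 20, 30])
print("delivered", data, "after", attempts, "transmission(s)")
```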
Forward Error Correction (FEC) is a method of increasing the reliability of data communication. In one-way communication channels, a receiver does not have the option to request a retransmission if an error was detected. Forward Error Correction is a method of sending redundant information with the data in order to allow the receiver to reconstruct the data if there was an error in transmission.
Hybrid Automatic Repeat reQuest (HARQ, or Hybrid ARQ) is a scheme wherein information blocks are encoded for partial error correction at the receiver, and blocks that still contain uncorrected errors are retransmitted.
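A rough Type-I hybrid ARQ sketch follows, combining the repetition code shown earlier (forward error correction) with a single parity bit (error detection) and a retransmission loop; the specific code and the weak parity check are illustrative assumptions only. Real systems would typically use a CRC for detection.

```python
import random

def fec_encode(bits):
    parity = sum(bits) % 2                    # weak detection bit over the data
    return [b for b in bits + [parity] for _ in range(3)]   # (3,1) repetition FEC

def fec_decode(coded):
    votes = [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]
    data, parity = votes[:-1], votes[-1]
    return data, (sum(data) % 2 == parity)    # data plus "looks consistent" flag

def channel(coded, error_rate=0.2):
    return [b ^ 1 if random.random() < error_rate else b for b in coded]

def harq_send(bits, max_attempts=5):
    coded = fec_encode(bits)
    for attempt in range(1, max_attempts + 1):
        data, ok = fec_decode(channel(coded))
        if ok:                                # corrected block passes the check
            return data, attempt
        # residual errors detected: request a retransmission of the block
    raise RuntimeError("block not recovered")

print(harq_send([1, 0, 1, 1]))
```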
Basic concepts
For the purpose of illustration, a code word obtained by encoding with a block code is shown in Figure 1.1. The information symbols always appear in the first (leftmost) k positions of a code word. The remaining (rightmost) n - k symbols in a code word provide redundancy that can be used for error correction and/or detection purposes. The set of all code sequences is called an error correcting code, and will henceforth be denoted by C.
Figure 1.1 A systematic block encoding for error correction.
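As a small assumed example of the systematic layout in Figure 1.1, the snippet below builds an (n, k) = (8, 7) single-parity-check code word: the seven message bits occupy the leftmost positions and one redundant symbol is appended on the right (this particular code only detects, rather than corrects, errors).

```python
def systematic_encode(message):           # message: list of k bits
    parity = sum(message) % 2             # the single (n - k = 1) redundant symbol
    return message + [parity]             # codeword layout: [message | parity]

def detect_error(codeword):
    return sum(codeword) % 2 != 0         # any odd number of flips is detected

c = systematic_encode([1, 0, 1, 1, 0, 0, 1])
assert not detect_error(c)
c[3] ^= 1                                 # a single bit error
assert detect_error(c)
```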
Error correcting coding: Basic concepts

Figure 1.2 shows the block diagram of a canonical digital communications/storage system. The information source and destination will include any source coding scheme matched to the nature of the information. The ECC encoder takes as input the information symbols from the source and adds redundant symbols to it, so that most of the errors introduced in the process of modulating a signal, transmitting it over a noisy medium and demodulating it can be corrected.
Basic concepts
Figure 1.2 A canonical digital communications system.
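To illustrate this pipeline end to end, here is a toy simulation under strong simplifying assumptions: the modulator, noisy medium and demodulator of Figure 1.2 are collapsed into a binary symmetric channel that flips each coded bit with probability p, and the ECC encoder/decoder is the (3,1) repetition code sketched earlier.

```python
import random

def ecc_encode(bits):
    return [b for b in bits for _ in range(3)]

def binary_symmetric_channel(bits, p=0.05):
    return [b ^ 1 if random.random() < p else b for b in bits]

def ecc_decode(coded):
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

source = [random.randint(0, 1) for _ in range(1000)]            # information source
sink = ecc_decode(binary_symmetric_channel(ecc_encode(source)))  # encode -> channel -> decode
errors = sum(s != d for s, d in zip(source, sink))
print("residual bit errors:", errors, "out of", len(source))
```

With p = 0.05, the raw channel would corrupt about 50 of the 1000 bits, while the repetition code typically leaves only a handful of residual errors, which is the point of placing an ECC encoder and decoder around the channel.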
1.2.1 Block codes and convolutional codes
According to the manner in which redundancy is added to messages, ECC can be divided into two classes: block codes and convolutional codes. Both types of coding schemes have found practical applications.

Block codes process the information on a block-by-block basis, treating each block of information bits independently from the others. In contrast, the output of a convolutional encoder depends not only on the current input information, but also on previous inputs or outputs, either on a block-by-block or a bit-by-bit basis.
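As a minimal sketch of this dependence on past inputs (an assumed example, not from the slides), the rate-1/2 convolutional encoder below uses the common (7, 5) octal generators with memory 2: each pair of output bits depends on the current input bit and the two previous ones.

```python
def conv_encode(bits):
    s1 = s2 = 0                     # the two most recent past input bits
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)     # generator 7 (binary 111)
        out.append(b ^ s2)          # generator 5 (binary 101)
        s1, s2 = b, s1              # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))    # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Note that feeding the same input bit produces different output pairs depending on the register contents, which is exactly the memory that a block encoder does not have.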
Block codes have an obvious theoretical structure