Error detection and correction
Error detection means deciding whether the
received data is correct without having
a copy of the original message.
Error detection uses the concept of
redundancy, which means adding extra
bits for detecting errors at the destination.
Error correction is the detection of errors and
the reconstruction of the original, error-free data.
Hamming Codes
Hamming codes are code words formed by adding
redundant check bits, or parity bits, to a data word.
The Hamming distance between two code words is
the number of bits in which two code words differ.
For example, the bytes 10001001 and 10110001
have a Hamming distance of 3: they differ in
three bit positions.
The minimum Hamming distance for a code is the
smallest Hamming distance between all pairs of
words in the code.
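Both definitions translate directly into a short Python sketch (the function names are illustrative, not from the slides):

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length code words differ."""
    if len(a) != len(b):
        raise ValueError("code words must have the same length")
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code: list[str]) -> int:
    """Smallest Hamming distance over all pairs of code words in the code."""
    return min(hamming_distance(x, y) for x, y in combinations(code, 2))

print(hamming_distance("10001001", "10110001"))   # 3, as in the example above
```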
The minimum Hamming distance for a code,
D(min), determines its error detecting and error
correcting capability.
For any code word, X, to be interpreted as a
different valid code word, Y, at least D(min)
single-bit errors must occur in X.
Thus, to detect k (or fewer) single-bit errors, the
code must have a Hamming distance of
D(min) = k + 1.
Hamming codes can detect D(min) - 1 errors
and correct ⌊(D(min) - 1) / 2⌋ errors.
Thus, a Hamming distance of 2k + 1 is
required to be able to correct k errors in any
data word.
Hamming distance is provided by adding a
suitable number of parity bits to a data word.
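Putting the two bounds together in a small illustrative helper (the function name is mine, not from the text):

```python
def error_capability(d_min: int) -> tuple[int, int]:
    """For minimum distance D(min): detect D(min) - 1 errors,
    correct floor((D(min) - 1) / 2) errors."""
    return d_min - 1, (d_min - 1) // 2

print(error_capability(3))   # (2, 1): detect 2 errors, correct 1
print(error_capability(5))   # (4, 2)
```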
Suppose we have a set of n-bit code words
consisting of m data bits and r (redundant) parity
bits.
An error could occur in any of the n bits, so each
code word can be associated with n erroneous
words at a Hamming distance of 1.
Therefore, we have n + 1 bit patterns for each
code word: one valid code word and n
erroneous words.
With n-bit code words, we have 2^n possible bit
patterns, of which only 2^m are valid code words
(where n = m + r).
This gives us the inequality:
(n + 1) × 2^m ≤ 2^n
Because n = m + r, we can rewrite the inequality
as:
(m + r + 1) × 2^m ≤ 2^(m + r), or (m + r + 1) ≤ 2^r.
Suppose we have data words of length m = 4.
Then:
(4 + r + 1) ≤ 2^r
implies that r must be greater than or equal to 3.
This means to build a code with 4-bit data words
that will correct single-bit errors, we must add 3
check bits.
Finding the number of check bits is the hard part.
The rest is easy.
Suppose we have data words of length m = 8.
Then:
(8 + r + 1) ≤ 2^r
implies that r must be greater than or equal to 4.
This means to build a code with 8-bit data words
that will correct single-bit errors, we must add 4
check bits, creating code words of length 12.
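Both results fall out of a direct search for the smallest r that satisfies the inequality. A minimal sketch (the function name is illustrative):

```python
def check_bits_needed(m: int) -> int:
    """Smallest r with (m + r + 1) <= 2**r, the bound for single-error correction."""
    r = 1
    while (m + r + 1) > 2 ** r:
        r += 1
    return r

print(check_bits_needed(4))  # 3 check bits for 4-bit data words
print(check_bits_needed(8))  # 4 check bits for 8-bit data words
```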
With code words of length 12, each of the digits 1
through 12 can be expressed as a sum of powers of 2. Thus:
1 = 2^0            5 = 2^2 + 2^0          9  = 2^3 + 2^0
2 = 2^1            6 = 2^2 + 2^1          10 = 2^3 + 2^1
3 = 2^1 + 2^0      7 = 2^2 + 2^1 + 2^0    11 = 2^3 + 2^1 + 2^0
4 = 2^2            8 = 2^3                12 = 2^3 + 2^2
1 (= 2^0) contributes to all of the odd-numbered
digits.
2 (= 2^1) contributes to the digits 2, 3, 6, 7, 10,
and 11.
. . . And so forth . . .
Using our code words of length 12, number each
bit position starting with 1 in the low-order bit.
Each bit position that is a power of 2 (positions
1, 2, 4, and 8) will be occupied by a check bit.
Each check bit contains the parity of all the bit
positions in whose sums (from the table above)
its position appears.
Since 2 (= 2^1) contributes to the digits 2, 3, 6, 7, 10,
and 11, position 2 will contain the parity for bits 3, 6,
7, 10, and 11.
When we use even parity, this is the modulo 2 sum
of the participating bit values.
For the bit values shown, we have a parity value of 0
in the second bit position.
What are the values for the other parity bits?
The completed code word is shown above.
Bit 1 checks the digits, 3, 5, 7, 9, and 11, so
its value is 1.
Bit 4 checks the digits, 5, 6, 7, and 12, so its
value is 1.
Bit 8 checks the digits, 9, 10, 11, and 12, so
its value is also 1.
Using the Hamming algorithm, we can not only
detect single bit errors in this code word, but also
correct them!
Suppose an error occurs in bit 5, as shown above. Our
parity bit values are:
Bit 1 checks digits, 3, 5, 7, 9, and 11. Its value is 1, but
should be zero.
Bit 2 checks digits 2, 3, 6, 7, 10, and 11. The zero is
correct.
Bit 4 checks digits, 5, 6, 7, and 12. Its value is 1, but
should be zero.
Bit 8 checks digits, 9, 10, 11, and 12. This bit is correct.
We have erroneous parity bits in positions 1 and 4.
The error is located by adding the positions of the
parity bits that failed.
Simply, 1 + 4 = 5. This tells us that the error is in
bit 5. If we change bit 5 back to a 1, all parity bits
check and our data is restored.
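The whole procedure can be sketched in a few lines of Python. This is an illustrative implementation, not code from the slides: the function names are mine, even parity is assumed, and the mapping of data bits to the non-power-of-2 positions is one possible convention.

```python
def hamming_encode(data_bits: str) -> list[int]:
    """Build a code word with even-parity check bits at positions 1, 2, 4, 8, ...
    Positions are numbered from 1; data bits fill the non-power-of-2 positions."""
    m = len(data_bits)
    r = 1
    while (m + r + 1) > 2 ** r:          # smallest r with (m + r + 1) <= 2^r
        r += 1
    n = m + r
    word = [0] * (n + 1)                  # index 0 unused; positions run 1..n
    data = iter(int(b) for b in data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):               # not a power of 2 -> data position
            word[pos] = next(data)
    for p in (2 ** i for i in range(r)):  # check bit p covers positions containing p
        word[p] = sum(word[q] for q in range(1, n + 1) if q & p and q != p) % 2
    return word[1:]

def hamming_correct(code_word: list[int]) -> list[int]:
    """Recompute every check; the positions of the failed checks sum to the
    position of a single-bit error, which is then flipped back."""
    n = len(code_word)
    word = [0] + list(code_word)
    syndrome, p = 0, 1
    while p <= n:
        if sum(word[q] for q in range(1, n + 1) if q & p) % 2:
            syndrome += p
        p *= 2
    if syndrome:
        word[syndrome] ^= 1
    return word[1:]

code_word = hamming_encode("10011010")         # 8 data bits -> 12-bit code word
damaged = code_word.copy()
damaged[4] ^= 1                                # corrupt position 5 (list index 4)
print(hamming_correct(damaged) == code_word)   # True
```

Flipping any single bit of the returned code word and passing it to hamming_correct recovers the original word, mirroring the bit-5 example above.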
Parity Checking
A simple form of error detection can be
accomplished by appending an extra bit,
called a parity bit, to each byte.
On the sending side, an encoder adds the
parity bit to each byte before transmission.
A receiver uses the parity bit to check whether
the bits in the byte are correct.
Before parity can be used, the sender
and receiver must be configured for
either even parity or odd parity
Even parity is when the parity bit is set so that the
total number of 1s in the word (including the parity bit) is even
1111 → parity bit 0
1010 → parity bit 0
Odd parity is when the parity bit is set so that the total
number of 1s in the word (including the parity bit) is odd
1111 → parity bit 1
1010 → parity bit 1
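A minimal sketch (the function name is mine, not from the slides) of how the sender's encoder could compute the parity bit:

```python
def parity_bit(data: str, even: bool = True) -> int:
    """Return the parity bit that makes the total number of 1s even (or odd)."""
    bit = data.count("1") % 2          # 1 if the data already has an odd number of 1s
    return bit if even else bit ^ 1

print(parity_bit("1111"), parity_bit("1010"))                          # 0 0
print(parity_bit("1111", even=False), parity_bit("1010", even=False))  # 1 1
```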
Line Coding
Figure 4.1 Line coding
Table 2-1 Four combinations of data and signals
Figure 4.2 Signal level versus data level
Figure 4.3
DC component
Figure 4.5
Line coding schemes
Note:
Unipolar encoding uses only
one voltage level.
Figure 4.6
Unipolar encoding
Note:
Polar encoding uses two
voltage levels (positive and
negative).
Figure 4.7
Types of polar encoding
Note:
In NRZ-L the level of the
signal is dependent upon
the state of the bit.
Note:
In NRZ-I the signal is
inverted if a 1 is
encountered.
Figure 4.8 NRZ-L and NRZ-I encoding
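As a rough sketch of the difference between the two schemes (the ±1 levels and the starting level are arbitrary conventions chosen for illustration):

```python
def nrz_l(bits: str, high: int = 1, low: int = -1) -> list[int]:
    """NRZ-L: the level itself encodes the bit (the polarity choice is a convention)."""
    return [high if b == "1" else low for b in bits]

def nrz_i(bits: str, start: int = 1) -> list[int]:
    """NRZ-I: invert the current level when a 1 is encountered, hold it for a 0."""
    level, out = start, []
    for b in bits:
        if b == "1":
            level = -level
        out.append(level)
    return out

print(nrz_l("01001110"))   # [-1, 1, -1, -1, 1, 1, 1, -1]
print(nrz_i("01001110"))   # [1, -1, -1, -1, 1, -1, 1, 1]
```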
Polar - RZ
The Return to Zero (RZ) scheme uses
three voltage levels: positive, zero, and negative.
Each symbol has a transition in the middle,
either from high to zero or from low to zero.
This scheme has more signal transitions
(two per symbol) and therefore requires
a wider bandwidth.
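A sketch of the same idea in code, assuming the common convention that a 1 is sent as +V then 0 and a 0 as -V then 0 (two half-bit samples per symbol):

```python
def rz(bits: str) -> list[int]:
    """RZ: two half-bit levels per bit; a 1 goes +V then 0, a 0 goes -V then 0."""
    out = []
    for b in bits:
        out += [1, 0] if b == "1" else [-1, 0]
    return out

print(rz("0110"))   # [-1, 0, 1, 0, 1, 0, -1, 0]
```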
Figure 4.9
RZ encoding
Note:
A good encoded digital
signal must contain a
provision for
synchronization.
Figure 4.10
Manchester encoding
Note:
In Manchester encoding,
the transition at the middle
of the bit is used for both
synchronization and bit
representation.
Figure 4.11 Differential
Manchester encoding
Note:
In differential Manchester
encoding, the transition at
the middle of the bit is used
only for synchronization.
The bit representation is
defined by the inversion or
noninversion at the
beginning of the bit.
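The two schemes can be sketched as follows. The Manchester polarity used here (1 = low-to-high at mid-bit) follows the IEEE 802.3 convention; some texts draw it the other way, so treat the signs as illustrative.

```python
def manchester(bits: str) -> list[int]:
    """Manchester: always a mid-bit transition; here a 1 is low-to-high
    and a 0 is high-to-low (IEEE 802.3 convention)."""
    out = []
    for b in bits:
        out += [-1, 1] if b == "1" else [1, -1]
    return out

def diff_manchester(bits: str, start: int = 1) -> list[int]:
    """Differential Manchester: the mid-bit transition is only for synchronization;
    a 0 adds an extra transition at the beginning of the bit, a 1 does not."""
    level, out = start, []
    for b in bits:
        if b == "0":
            level = -level        # transition at the beginning of the bit
        out += [level, -level]    # mandatory transition in the middle
        level = -level
    return out

print(manchester("010"))        # [1, -1, -1, 1, 1, -1]
print(diff_manchester("010"))   # [-1, 1, 1, -1, 1, -1]
```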
Note:
In bipolar encoding, we use
three levels: positive, zero,
and negative.
Figure 4.12
Bipolar AMI encoding
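A small sketch of AMI's alternating rule (the levels ±1 and 0 are illustrative):

```python
def ami(bits: str) -> list[int]:
    """Bipolar AMI: a 0 is the zero level; successive 1s alternate between +V and -V."""
    out, polarity = [], 1
    for b in bits:
        if b == "1":
            out.append(polarity)
            polarity = -polarity
        else:
            out.append(0)
    return out

print(ami("0110101"))   # [0, 1, -1, 0, 1, 0, -1]
```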
10-2 BLOCK CODING
In block coding, we divide our message into blocks,
each of k bits, called datawords. We add r redundant
bits to each block to make the length n = k + r. The
resulting n-bit blocks are called codewords.
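As a tiny illustration of the dataword/codeword structure (the specific code below, a single even-parity redundant bit, is my own example, not one from the text):

```python
def make_codeword(dataword: str, r_bits) -> str:
    """Append r redundant bits to a k-bit dataword to form an n = k + r codeword."""
    return dataword + r_bits(dataword)

even_parity = lambda d: str(d.count("1") % 2)   # r = 1 redundant bit
for dataword in ("00", "01", "10", "11"):       # k = 2 datawords
    print(dataword, "->", make_codeword(dataword, even_parity))   # n = 3 codewords
```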
Figure 10.6 Process of error detection in block coding
Figure 10.7 Structure of encoder and decoder in error correction
Note
The Hamming distance between two words is the number of
differences between corresponding bits.
Example 10.4
Let us find the Hamming distance between two pairs of
words.
1. The Hamming distance d(000, 011) is 2 because
000 ⊕ 011 = 011 (two 1s).
2. The Hamming distance d(10101, 11110) is 3 because
10101 ⊕ 11110 = 01011 (three 1s).
Note
The minimum Hamming distance is the smallest Hamming distance
between
all possible pairs in a set of words.
10-3 LINEAR BLOCK CODES
Almost all block codes used today belong to a subset
called linear block codes. A linear block code is a code
in which the XOR (addition modulo-2) of two valid
codewords creates another valid codeword.
Note
In a linear block code, the exclusive OR (XOR) of any two valid
codewords creates another valid codeword.
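A quick way to check this property for a small code. The code used below (2-bit datawords plus an even-parity bit, as in the earlier sketch) is my own illustrative example, not one from the text:

```python
def is_linear(code: set[str]) -> bool:
    """Check that the XOR (modulo-2 addition) of every pair of codewords is a codeword."""
    def xor(a: str, b: str) -> str:
        return "".join("1" if x != y else "0" for x, y in zip(a, b))
    return all(xor(a, b) in code for a in code for b in code)

print(is_linear({"000", "011", "101", "110"}))   # True: the even-parity code is linear
```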
Sampling
The analog signal is sampled every Ts seconds.
Ts is referred to as the sampling interval.
fs = 1/Ts is called the sampling rate or
sampling frequency.
There are 3 sampling methods:
Ideal - an impulse at each sampling instant
Natural - a pulse of short width with varying
amplitude
Flattop - sample and hold, like natural but
with single amplitude value
The process is referred to as pulse
amplitude modulation (PAM), and the
outcome is a signal with analog (nonintegral)
amplitude values.
Figure 4.22 Three different sampling methods for PCM
Note
According to the Nyquist theorem, the
sampling rate must be
at least 2 times the highest frequency
contained in the signal.
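A rough sketch of ideal sampling in Python (the function and variable names are illustrative): the signal is evaluated every Ts = 1/fs seconds, and fs is chosen to satisfy the Nyquist criterion fs ≥ 2 × f_max.

```python
import math

def sample(signal, duration: float, fs: float) -> list[float]:
    """Ideal sampling: evaluate the analog signal every Ts = 1/fs seconds."""
    ts = 1 / fs
    return [signal(k * ts) for k in range(int(duration * fs))]

f_max = 100                                   # highest frequency in the signal (Hz)
sine = lambda t: math.sin(2 * math.pi * f_max * t)
nyquist_rate = 2 * f_max                      # minimum sampling rate: 200 samples/s
print(len(sample(sine, 0.1, nyquist_rate)))   # 20 samples in 0.1 s
print(len(sample(sine, 0.1, 8 * f_max)))      # 80 samples with 8x oversampling
```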
BLOCK DIAGRAM OF TDM (TXR)