
Chapter 10
Error Detection and Correction

Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
10.1

Note
Data can be corrupted during transmission.
Some applications require that errors be detected and corrected.

10.2
10-1 INTRODUCTION

Let us first discuss some issues related, directly or indirectly, to error detection and correction.

Topics discussed in this section:
Types of Errors
Redundancy
Detection Versus Correction
Forward Error Correction Versus Retransmission
Coding
Modular Arithmetic

10.3

Note
In a single-bit error, only 1 bit in the data
unit has changed.

10.4
Figure 10.1 Single-bit error
10.5

Note
A burst error means that 2 or more bits
in the data unit have changed.
10.6
Figure 10.2 Burst error of length 8
10.7

Note
To detect or correct errors, we need to
send extra (redundant) bits with data.
10.8
Figure 10.3 The structure of encoder and decoder
10.9

Note
In this book, we concentrate on block codes; we leave convolutional codes to advanced texts.

10.10

Note
In modulo-N arithmetic, we use only the integers in the range 0 to N − 1, inclusive.
10.11
Figure 10.4 XORing of two single bits or two words
10.12
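
The figure treats XOR as addition in modulo-2 arithmetic. As a quick illustration (not part of the original slides), here is a minimal Python sketch applying the same operation to two single bits and to two arbitrary 5-bit sample words:

# XOR of single bits: 1 + 1 = 0 in modulo-2 arithmetic
a, b = 1, 1
print(a ^ b)                          # 0

# XOR of two words is applied bit by bit (the sample words are arbitrary)
word1, word2 = 0b10110, 0b01101
print(format(word1 ^ word2, "05b"))   # 11011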

10-2 BLOCK CODING

In block coding, we divide our message into blocks, each of k bits, called datawords. We add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called codewords.

Topics discussed in this section:


Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance

10.13
Figure 10.5 Datawords and codewords in block coding

10.14

Example 10.1

The 4B/5B block coding discussed in Chapter 4 is a good example of this type of coding. In this coding scheme, k = 4 and n = 5. As we saw, we have 2^k = 16 datawords and 2^n = 32 codewords. We saw that 16 out of 32 codewords are used for message transfer and the rest are either used for other purposes or unused.

10.15
Figure 10.6 Process of error detection in block coding
10.16
Example 10.2

Let us assume that k = 2 and n = 3. Table 10.1 shows the list of datawords and codewords. Later, we will see how to derive a codeword from a dataword.

Assume the sender encodes the dataword 01 as 011 and sends it to the receiver. Consider the following cases:

1. The receiver receives 011. It is a valid codeword. The receiver extracts the dataword 01 from it.

10.17
Example 10.2 (continued)

2. The codeword is corrupted during transmission, and 111 is received. This is not a valid codeword and is discarded.

3. The codeword is corrupted during transmission, and 000 is received. This is a valid codeword. The receiver incorrectly extracts the dataword 00. Two corrupted bits have made the error undetectable.
10.18
Table 10.1 A code for error detection (Example 10.2)
10.19
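
The body of Table 10.1 is not reproduced in this text, but Examples 10.2, 10.7, and 10.10 imply the code {00: 000, 01: 011, 10: 101, 11: 110}. Assuming those values, the following Python sketch mimics the receiver's detection logic from Example 10.2:

# Codebook assumed from Examples 10.2, 10.7, and 10.10 (table body not shown in this text).
CODE = {"00": "000", "01": "011", "10": "101", "11": "110"}
VALID = {cw: dw for dw, cw in CODE.items()}      # codeword -> dataword

def receive(codeword):
    """Return the extracted dataword, or None if the codeword is discarded."""
    return VALID.get(codeword)

print(receive("011"))   # '01'  -- case 1: valid codeword, dataword extracted
print(receive("111"))   # None  -- case 2: error detected, codeword discarded
print(receive("000"))   # '00'  -- case 3: two errors slip through undetected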

Note
An error-detecting code can detect
only the types of errors for which it is
designed; other types of errors may
remain undetected.

10.20
Figure 10.7 Structure of encoder and decoder in error
correction
10.21
Example 10.3

Let us add more redundant bits to Example 10.2 to see if the receiver can correct an error without knowing what was actually sent. We add 3 redundant bits to the 2-bit dataword to make 5-bit codewords. Table 10.2 shows the datawords and codewords. Assume the dataword is 01. The sender creates the codeword 01011. The codeword is corrupted during transmission, and 01001 is received. First, the receiver finds that the received codeword is not in the table. This means an error has occurred. The receiver, assuming that there is only 1 bit corrupted, uses the following strategy to guess the correct dataword.

10.22
Example 10.3 (continued)
1. Comparing the received codeword with the first codeword in the table (01001 versus 00000), the receiver decides that the first codeword is not the one that was sent because there are two different bits.

2. By the same reasoning, the original codeword cannot be the third or fourth one in the table.

3. The original codeword must be the second one in the table because this is the only one that differs from the received codeword by 1 bit. The receiver replaces 01001 with 01011 and consults the table to find the dataword 01.
10.23
Table 10.2 A code for error correction (Example 10.3)
10.24
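
The body of Table 10.2 is likewise not reproduced here; Examples 10.3, 10.4, and 10.11 imply the code {00: 00000, 01: 01011, 10: 10101, 11: 11110}. Assuming those values, this Python sketch follows the correction strategy of Example 10.3: pick the valid codeword that differs from the received word in the fewest positions.

# Codebook assumed from Examples 10.3, 10.4, and 10.11 (table body not shown in this text).
CODE = {"00": "00000", "01": "01011", "10": "10101", "11": "11110"}

def distance(a, b):
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def correct(received):
    """Assume at most one corrupted bit and return the closest dataword."""
    return min(CODE, key=lambda dw: distance(CODE[dw], received))

print(correct("01001"))   # '01' -- 01001 is distance 1 from 01011, as in Example 10.3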

Note
The Hamming distance between two
words is the number of differences
between corresponding bits.
10.25
Example 10.4

Let us find the Hamming distance between two pairs of words.

1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011, which has two 1s.

2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011, which has three 1s.

10.26
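
The Hamming distance can be computed by XORing the two words and counting the 1s in the result. A minimal Python sketch:

def hamming_distance(a, b):
    """Number of positions in which two equal-length binary words differ."""
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("000", "011"))       # 2, as in part 1 of Example 10.4
print(hamming_distance("10101", "11110"))   # 3, as in part 2 of Example 10.4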

Note
The minimum Hamming distance is the
smallest Hamming distance between
all possible pairs in a set of words.
10.27
Example 10.5

Find the minimum Hamming distance of the coding scheme in Table 10.1.

Solution
We first find all Hamming distances: every pair of codewords in Table 10.1 differs in exactly two bits.
The dmin in this case is 2.

10.28
Example 10.6

Find the minimum Hamming distance of the coding scheme in Table 10.2.

Solution
We first find all the Hamming distances: the pairwise distances between the codewords in Table 10.2 are 3, 3, 4, 4, 3, and 3.
The dmin in this case is 3.

10.29
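
The minimum Hamming distance is found by comparing every pair of codewords and taking the smallest distance. The sketch below reproduces Examples 10.5 and 10.6, using the codeword sets assumed for Tables 10.1 and 10.2 above:

from itertools import combinations

def d_min(codewords):
    """Smallest Hamming distance over all pairs of codewords."""
    return min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(codewords, 2))

print(d_min(["000", "011", "101", "110"]))           # 2  (Example 10.5)
print(d_min(["00000", "01011", "10101", "11110"]))   # 3  (Example 10.6)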

Note
To guarantee the detection of up to s
errors in all cases, the minimum
Hamming distance in a block code
must be dmin = s + 1.
10.30
Example 10.7

The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code guarantees detection of only a single error. For example, if the third codeword (101) is sent and one error occurs, the received codeword does not match any valid codeword. If two errors occur, however, the received codeword may match a valid codeword and the errors are not detected.
10.31
Example 10.8

Our second block code scheme (Table 10.2) has dmin = 3. This code can detect up to two errors. Again, we see that when any of the valid codewords is sent, two errors create a codeword which is not in the table of valid codewords. The receiver cannot be fooled.
However, some combinations of three errors change a valid codeword to another valid codeword. The receiver accepts the received codeword and the errors are undetected.

10.32
Figure 10.8 Geometric concept for finding dmin in error
detection
10.33
Figure 10.9 Geometric concept for finding dmin in error
correction
10.34

Note
To guarantee correction of up to t errors
in all cases, the minimum Hamming
distance in a block code
must be dmin = 2t + 1.
10.35
Example 10.9

A code scheme has a Hamming distance dmin = 4. What is the error detection and correction capability of this scheme?

Solution
This code guarantees the detection of up to three errors (s = 3), but it can correct up to one error. In other words, if this code is used for error correction, part of its capability is wasted. Error correction codes need to have an odd minimum distance (3, 5, 7, . . . ).

10.36
10-3 LINEAR BLOCK CODES

Almost all block codes used today belong to a subset called linear block codes. A linear block code is a code in which the exclusive OR (addition modulo-2) of two valid codewords creates another valid codeword.

Topics discussed in this section:
Minimum Distance for Linear Block Codes
Some Linear Block Codes

10.37

Note
In a linear block code, the exclusive OR
(XOR) of any two valid codewords
creates another valid codeword.
10.38
Example 10.10

Let us see if the two codes we defined in Table 10.1 and Table 10.2 belong to the class of linear block codes.

1. The scheme in Table 10.1 is a linear block code because the result of XORing any codeword with any other codeword is a valid codeword. For example, the XORing of the second and third codewords creates the fourth one.

2. The scheme in Table 10.2 is also a linear block code. We can create all four codewords by XORing two other codewords.
10.39
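
The linearity check in Example 10.10 can be automated: XOR every pair of valid codewords and confirm the result is still a valid codeword. A sketch, again using the codeword sets assumed for Tables 10.1 and 10.2:

from itertools import combinations

def is_linear(codewords):
    """True if the XOR of any two valid codewords is another valid codeword."""
    valid = set(codewords)
    n = len(codewords[0])
    return all(format(int(a, 2) ^ int(b, 2), f"0{n}b") in valid
               for a, b in combinations(codewords, 2))

print(is_linear(["000", "011", "101", "110"]))           # True
print(is_linear(["00000", "01011", "10101", "11110"]))   # True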
Example 10.11

In our first code (Table 10.1), the numbers of 1s in the nonzero codewords are 2, 2, and 2. So the minimum Hamming distance is dmin = 2. In our second code (Table 10.2), the numbers of 1s in the nonzero codewords are 3, 3, and 4. So in this code we have dmin = 3.

10.40

Note
A simple parity-check code is a
single-bit error-detecting
code in which
n = k + 1 with dmin = 2.

10.41
Table 10.3 Simple parity-check code C(5, 4)
10.42
Figure 10.10 Encoder and decoder for simple parity-check
code
10.43
Example 10.12

Let us look at some transmission scenarios. Assume the sender sends the dataword 1011. The codeword created from this dataword is 10111, which is sent to the receiver. We examine five cases:

1. No error occurs; the received codeword is 10111. The syndrome is 0. The dataword 1011 is created.

2. One single-bit error changes a1. The received codeword is 10011. The syndrome is 1. No dataword is created.

3. One single-bit error changes r0. The received codeword is 10110. The syndrome is 1. No dataword is created.
10.44
Example 10.12 (continued)
4. An error changes r0 and a second error changes a3. The received codeword is 00110. The syndrome is 0. The dataword 0011 is created at the receiver. Note that here the dataword is wrongly created due to the syndrome value.

5. Three bits (a3, a2, and a1) are changed by errors. The received codeword is 01011. The syndrome is 1. The dataword is not created. This shows that the simple parity check, guaranteed to detect one single error, can also find any odd number of errors.
10.45
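
The behavior traced in Example 10.12 can be summarized in a short sketch of the C(5, 4) encoder and decoder, assuming even parity and the bit order used in the example (dataword a3 a2 a1 a0 followed by the parity bit r0):

def encode(dataword):
    """Append r0 so that the total number of 1s in the codeword is even."""
    r0 = str(sum(int(b) for b in dataword) % 2)
    return dataword + r0

def decode(codeword):
    """Return the dataword if the syndrome is 0, otherwise None (discard)."""
    syndrome = sum(int(b) for b in codeword) % 2
    return codeword[:-1] if syndrome == 0 else None

print(encode("1011"))    # '10111'
print(decode("10111"))   # '1011' -- case 1: no error
print(decode("10011"))   # None   -- case 2: single-bit error detected
print(decode("00110"))   # '0011' -- case 4: two errors go undetected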

Note
A simple parity-check code can detect
an odd number of errors.

10.46
Note
All Hamming codes discussed in this book have dmin = 3.
The relationship between m and n in these codes is n = 2^m − 1.

10.47
Figure 10.11 Two-dimensional parity-check code

10.48
Figure 10.11 Two-dimensional parity-check code
10.49
Figure 10.11 Two-dimensional parity-check code
10.50
Table 10.4 Hamming code C(7, 4)
10.51
Figure 10.12 The structure of the encoder and decoder for
a Hamming code
10.52
Table 10.5 Logical decision made by the correction logic analyzer
10.53
Example 10.13

Let us trace the path of three datawords from the sender to the destination:

1. The dataword 0100 becomes the codeword 0100011. The codeword 0100011 is received. The syndrome is 000, and the final dataword is 0100.

2. The dataword 0111 becomes the codeword 0111001. The codeword 0011001 is received (one error). The syndrome is 011. After flipping b2 (changing the 0 to 1), the final dataword is 0111.

3. The dataword 1101 becomes the codeword 1101000. The codeword 0001000 is received (two errors). The syndrome is 101. After flipping b0, we get 0000, the wrong dataword. This shows that our code cannot correct two errors.
10.54
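
Figure 10.12 and Table 10.5 are not reproduced in this text, so the Python sketch below reconstructs parity-check equations that match the codewords in Example 10.13 (0100 -> 0100011, 0111 -> 0111001, 1101 -> 1101000). Instead of hard-coding the correction table, it searches for the single bit whose flip drives the syndrome back to 000:

def encode(d):
    """Hamming C(7,4): codeword is a3 a2 a1 a0 r2 r1 r0 (equations assumed from the example)."""
    a3, a2, a1, a0 = (int(b) for b in d)
    r2, r1, r0 = a1 ^ a0 ^ a3, a3 ^ a2 ^ a1, a2 ^ a1 ^ a0
    return f"{a3}{a2}{a1}{a0}{r2}{r1}{r0}"

def syndrome(c):
    """Syndrome s2 s1 s0 of a received 7-bit word b3 b2 b1 b0 q2 q1 q0."""
    b3, b2, b1, b0, q2, q1, q0 = (int(b) for b in c)
    return f"{b1 ^ b0 ^ b3 ^ q2}{b3 ^ b2 ^ b1 ^ q1}{b2 ^ b1 ^ b0 ^ q0}"

def correct(c):
    """Assume at most one error; flip the bit that makes the syndrome 000."""
    if syndrome(c) == "000":
        return c[:4]
    for i in range(7):
        flipped = c[:i] + str(1 - int(c[i])) + c[i + 1:]
        if syndrome(flipped) == "000":
            return flipped[:4]
    return c[:4]

print(encode("0100"), syndrome("0100011"))   # 0100011 000 -- case 1
print(correct("0011001"))                    # '0111' -- case 2: b2 flipped back
print(correct("0001000"))                    # '0000' -- case 3: two errors mis-corrected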
Example 10.14

We need a dataword of at least 7 bits. Calculate values of k and n that satisfy this requirement.

Solution
We need to make k = n − m greater than or equal to 7, or 2^m − 1 − m ≥ 7.

1. If we set m = 3, the result is n = 2^3 − 1 = 7 and k = 7 − 3 = 4, which is not acceptable.

2. If we set m = 4, then n = 2^4 − 1 = 15 and k = 15 − 4 = 11, which satisfies the condition. So the code is C(15, 11).

10.55
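
The small search in Example 10.14 can be written directly (a sketch, not from the slides):

# Find the smallest m with k = n - m = 2^m - 1 - m >= 7.
m = 1
while (2 ** m - 1) - m < 7:
    m += 1
n = 2 ** m - 1
print(m, n, n - m)   # 4 15 11 -> the code is C(15, 11)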
Figure 10.13 Burst error correction using Hamming code
10.56
10-4 CYCLIC CODES
Cyclic codes are special linear block codes with
one extra property. In a cyclic code, if a
codeword is cyclically shifted (rotated), the
result is another codeword.

Topics discussed in this section:


Cyclic Redundancy Check
Hardware Implementation
Polynomials
Cyclic Code Analysis
Advantages of Cyclic Codes
Other Cyclic Codes
10.57
Table 10.6 A CRC code with C(7, 4)
10.58
Figure 10.14 CRC encoder and decoder
10.59
Figure 10.15 Division in CRC encoder
10.60
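
The division shown in the encoder figure is ordinary long division carried out with XOR (modulo-2 subtraction). The sketch below uses a 4-bit dataword and the degree-3 divisor 1011 purely as an illustration, since the figures and Table 10.6 are not reproduced in this text:

def mod2_div(dividend, divisor):
    """Remainder of binary (modulo-2 / XOR) long division."""
    r = len(divisor) - 1
    bits = list(dividend)
    for i in range(len(bits) - r):
        if bits[i] == "1":                       # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-r:])

dataword, divisor = "1001", "1011"
remainder = mod2_div(dataword + "000", divisor)  # encoder: augment with r = 3 zeros
codeword = dataword + remainder
print(remainder, codeword)                       # 110 1001110
print(mod2_div(codeword, divisor))               # 000 -- zero syndrome, accepted
print(mod2_div("1000110", divisor))              # 011 -- nonzero syndrome, corrupted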
Figure 10.16 Division in the CRC decoder for two cases
10.61
Figure 10.17 Hardwired design of the divisor in CRC
10.62
Figure 10.18 Simulation of division in CRC encoder
10.63
Figure 10.19 The CRC encoder design using shift registers

10.64
Figure 10.20 General design of encoder and decoder of a
CRC code

10.65
Figure 10.21 A polynomial to represent a binary word
10.66
Figure 10.22 CRC division using polynomials
10.67

Note
The divisor in a cyclic code is normally
called the generator polynomial or
simply the generator.

10.68

Note
In a cyclic code,
if s(x) ≠ 0, one or more bits is corrupted;
if s(x) = 0, either
a. no bit is corrupted, or
b. some bits are corrupted, but the decoder failed to detect them.

10.69

Note
In a cyclic code, those e(x) errors that
are divisible by g(x) are not caught.
10.70

Note
If the generator has more than one term and the coefficient of x^0 is 1, all single errors can be caught.
10.71
Example 10.15

Which of the following g(x) values guarantees that a single-bit error is caught? For each case, what is the error that cannot be caught?

a. x + 1    b. x^3    c. 1

Solution
a. No x^i can be divisible by x + 1. Any single-bit error can be caught.
b. If i is equal to or greater than 3, x^i is divisible by g(x). All single-bit errors in positions 1 to 3 are caught.
c. All values of i make x^i divisible by g(x). No single-bit error can be caught. This g(x) is useless.

10.72
Figure 10.23 Representation of two isolated single-bit errors using
polynomials

10.73

Note
If a generator cannot divide x^t + 1 (t between 0 and n − 1), then all isolated double errors can be detected.

10.74
Example 10.16

Find the status of the following generators related to two isolated, single-bit errors.

a. x + 1    b. x^4 + 1    c. x^7 + x^6 + 1    d. x^15 + x^14 + 1

Solution
a. This is a very poor choice for a generator. Any two errors next to each other cannot be detected.
b. This generator cannot detect two errors that are four positions apart.
c. This is a good choice for this purpose.
d. This polynomial cannot divide x^t + 1 if t is less than 32,768. A codeword with two isolated errors up to 32,768 bits apart can be detected by this generator.
10.75
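
The check behind this example can be sketched in code: represent a polynomial as a bit string (for instance, x^4 + 1 as 10001) and test whether the generator divides x^t + 1 for any small t. The bit-string representation and helper below are illustrative conventions, not taken from the slides:

def mod2_div(dividend, divisor):
    """Remainder of binary (modulo-2 / XOR) long division."""
    r = len(divisor) - 1
    bits = list(dividend)
    for i in range(len(bits) - r):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-r:])

def smallest_t_divided(generator, limit):
    """Smallest t with generator | x^t + 1, or None if there is none up to limit."""
    for t in range(1, limit + 1):
        poly = "1" + "0" * (t - 1) + "1"          # x^t + 1 as a bit string
        if len(poly) >= len(generator) and set(mod2_div(poly, generator)) == {"0"}:
            return t
    return None

print(smallest_t_divided("11", 100))         # 1    -- x + 1 divides x^1 + 1 (poor choice)
print(smallest_t_divided("10001", 100))      # 4    -- x^4 + 1 divides x^4 + 1
print(smallest_t_divided("11000001", 100))   # None -- x^7 + x^6 + 1 is a good choice here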

Note
A generator that contains a factor of
x + 1 can detect all odd-numbered
errors.

10.76

Note
❏ All burst errors with L ≤ r will be detected.
❏ All burst errors with L = r + 1 will be detected with probability 1 − (1/2)^(r−1).
❏ All burst errors with L > r + 1 will be detected with probability 1 − (1/2)^r.

10.77
Example 10.17

Find the suitability of the following generators in relation to burst errors of different lengths.

a. x^6 + 1    b. x^18 + x^7 + x + 1    c. x^32 + x^23 + x^7 + 1

Solution
a. This generator can detect all burst errors with a length less than or equal to 6 bits; 3 out of 100 burst errors with length 7 will slip by; 16 out of 1000 burst errors of length 8 or more will slip by.

10.78
Example 10.17 (continued)

b. This generator can detect all burst errors with a length less than or equal to 18 bits; 8 out of 1 million burst errors with length 19 will slip by; 4 out of 1 million burst errors of length 20 or more will slip by.

c. This generator can detect all burst errors with a length less than or equal to 32 bits; 5 out of 10 billion burst errors with length 33 will slip by; 3 out of 10 billion burst errors of length 34 or more will slip by.

10.79
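
The figures quoted in this example follow from the probabilities in the preceding note: a burst of length r + 1 escapes detection with probability (1/2)^(r−1), and a longer burst with probability (1/2)^r. A quick check in Python:

for name, r in [("x^6 + 1", 6), ("x^18 + x^7 + x + 1", 18), ("x^32 + x^23 + x^7 + 1", 32)]:
    p_equal = 0.5 ** (r - 1)     # burst of length exactly r + 1 slips by
    p_longer = 0.5 ** r          # burst longer than r + 1 slips by
    print(f"{name}: {p_equal:.2e}  {p_longer:.2e}")

# x^6 + 1              : 3.12e-02 (about 3 in 100)        and 1.56e-02 (about 16 in 1000)
# x^18 + x^7 + x + 1   : 7.63e-06 (about 8 per million)   and 3.81e-06 (about 4 per million)
# x^32 + x^23 + x^7 + 1: 4.66e-10 (about 5 per 10 billion) and 2.33e-10 (2 to 3 per 10 billion)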

Note
A good polynomial generator needs to have the following characteristics:
1. It should have at least two terms.
2. The coefficient of the term x^0 should be 1.
3. It should not divide x^t + 1, for t between 2 and n − 1.
4. It should have the factor x + 1.

10.80
