
Reference Material - Chapter 10

Chapter 10 discusses error detection and correction in data transmission, highlighting types of errors such as single-bit and burst errors. It emphasizes the importance of redundancy for error detection and correction, detailing various coding schemes including block coding and Hamming codes. The chapter also covers cyclic codes, checksums, and forward error correction techniques to ensure data integrity during transmission.


Chapter 10

Error Detection and Correction


Note

Data can be corrupted during transmission (three impairments: attenuation, distortion, noise).

Some applications require that errors be detected and corrected.

10-1 INTRODUCTION

Let us first discuss some issues related, directly or indirectly, to error detection and correction.

Topics discussed in this section:


Types of Errors
Redundancy
Detection Versus Correction
Coding
Forward Error Correction Versus Retransmission

10.1.1 Types of Errors

• The term single-bit error means that only 1 bit of a given data unit (such as a byte, character, or packet) is changed from 1 to 0 or from 0 to 1.

• The term burst error means that 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1.

Figure 10.1: Single-bit and burst error

10.1.2 Redundancy

• The central concept in detecting or correcting errors is redundancy.
• To be able to detect or correct errors, we need to send some extra bits with our data.
• These redundant bits are added by the sender and removed by the receiver.
• Their presence allows the receiver to detect or correct corrupted bits.

10.1.3 Detection Versus Correction

• The correction of errors is more difficult than the detection.
• In error detection, we are only looking to see if any error has occurred. The answer is a simple yes or no.
• A single-bit error is the same for us as a burst error.
• In error correction, we need to know the exact number of bits that are corrupted and, more importantly, their location in the message.

10.1.4 Coding

Redundancy is achieved through various coding schemes.
The sender adds redundant bits through a process that creates a relationship between the redundant bits and the actual data bits.
The receiver checks the relationships between the two sets of bits to detect errors.
The ratio of redundant bits to data bits and the robustness of the process are important factors in any coding scheme.

10-2 BLOCK CODING

• In block coding, we divide our message into blocks, each of k bits, called datawords.
• We add r redundant bits to each block to make the length n = k + r.
• The resulting n-bit blocks are called codewords.
• How the extra r bits are calculated is something we will discuss later.
10.2.1 Error Detection

How can errors be detected by using block coding?


If the following two conditions are met, the receiver can detect a change in the original codeword:

1. The receiver has (or can find) a list of valid codewords.
2. The original codeword has changed to an invalid one.

Figure 10.2: Process of error detection in block coding

Example 10.1

Let us assume that k = 2 and n = 3. Table 10.1 shows the list of datawords and codewords. Later, we will see how to derive a codeword from a dataword.
Table 10.1: A code for error detection in Example 10.1
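Based on the examples that follow (the third codeword is 101, every nonzero codeword has exactly two 1s, and XORing any two codewords yields another codeword), the code appears to be the 2-bit even-parity code: 00 -> 000, 01 -> 011, 10 -> 101, 11 -> 110. A minimal Python sketch, assuming that mapping, shows how the receiver in Figure 10.2 detects an invalid codeword:

# Dataword-to-codeword table assumed for the code of Example 10.1
# (inferred from Examples 10.3, 10.5, and 10.6).
CODEBOOK = {"00": "000", "01": "011", "10": "101", "11": "110"}
VALID = set(CODEBOOK.values())

def detect(received: str) -> bool:
    """Return True if the received 3-bit word is a valid codeword."""
    return received in VALID

# The sender encodes dataword 10 as 101; a single-bit error turns it into 100,
# which is not in the codebook, so the receiver detects (but cannot correct) it.
print(detect("101"))  # True
print(detect("100"))  # False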

Hamming Distance
The Hamming distance between two words (of the same
size) is the number of differences between the
corresponding bits.
Let us find the Hamming distance between two pairs of
words.
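A small Python sketch makes the definition concrete: the Hamming distance is simply the number of positions in which the two words differ, i.e., the number of 1s in their XOR.

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    assert len(a) == len(b), "words must be the same size"
    return sum(bit_a != bit_b for bit_a, bit_b in zip(a, b))

print(hamming_distance("000", "011"))      # 2
print(hamming_distance("10101", "11110"))  # 3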

Figure 10.3: Geometric concept explaining dmin in error
detection

Example 10.3

• The minimum Hamming distance for our first code scheme (Table 10.1) is 2.
• This code guarantees detection of only a single-bit error. For example, if the third codeword (101) is sent and one error occurs, the received codeword does not match any valid codeword.
• If two errors occur, however, the received codeword may match a valid codeword and the errors are not detected.

Example 10.4

A code scheme has a Hamming distance dmin = 4. This code guarantees the detection of up to three errors (dmin = s + 1, so s = 3).

Example 10.5

The code in Table 10.1 is a linear block code because the result of XORing any codeword with any other codeword is a valid codeword. For example, the XORing of the second and third codewords creates the fourth one.

Example 10.6

In our first code (Table 10.1), the numbers of 1s in the nonzero codewords are 2, 2, and 2. So the minimum Hamming distance is dmin = 2.

Table 10.2: Simple parity-check code C(5, 4)

Figure 10.4: Encoder and decoder for simple parity-check code

Example 10.7

Let us look at some transmission scenarios. Assume the sender sends the dataword 1011. The codeword created from this dataword is 10111, which is sent to the receiver. We examine five cases; a small sketch of the encoder and checker follows.
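A minimal sketch of the simple parity-check encoder and checker of Figure 10.4, assuming even parity (the parity bit makes the total number of 1s even) and a syndrome computed as the modulo-2 sum of all five received bits:

def encode(dataword: str) -> str:
    """Append an even-parity bit to a 4-bit dataword (C(5, 4))."""
    parity = sum(int(b) for b in dataword) % 2
    return dataword + str(parity)

def check(received: str) -> bool:
    """Syndrome is the modulo-2 sum of all five bits; 0 means 'accept'."""
    return sum(int(b) for b in received) % 2 == 0

codeword = encode("1011")         # -> "10111", as in Example 10.7
print(codeword, check(codeword))  # 10111 True
print(check("10011"))             # single-bit error: syndrome is 1, detected (False)
print(check("00110"))             # two bits in error: syndrome is 0, so the corruption goes undetected (True)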

Hamming Codes [Error-correcting codes]
1. Were originally designed with dmin = 3, which means that they can detect up to two errors or correct one single error.
2. First let us find the relationship between n and k in a Hamming code.
3. We need to choose an integer m >= 3.
4. The values of n and k are then calculated from m as n = 2^m - 1 and k = n - m.
5. The number of check bits is r = m.
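For example, choosing m = 3 gives n = 2^3 - 1 = 7 and k = 7 - 3 = 4, which is the C(7, 4) Hamming code used in the slides that follow.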

Figure 10.12 The structure of the encoder and decoder for a Hamming code

Hamming Code
Parity checks are created as follows (using modulo-2 arithmetic):
◦ r0 = a2 + a1 + a0
◦ r1 = a3 + a2 + a1
◦ r2 = a1 + a0 + a3

Hamming Code
The checker in the decoder creates a 3-bit syndrome (s2s1s0), in which each bit is the parity check for 4 out of the 7 bits in the received codeword:
s0 = b2 + b1 + b0 + q0
s1 = b3 + b2 + b1 + q1
s2 = b1 + b0 + b3 + q2
The equations used by the checker are the same as those used by the generator, with the corresponding parity-check bit added to the right-hand side of each equation.
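A minimal Python sketch of this encoder and checker, assuming the codeword layout a3 a2 a1 a0 r2 r1 r0 (received as b3 b2 b1 b0 q2 q1 q0); the syndrome-to-position table is derived from the three check equations above and should match the decision logic of Table 10.5:

def encode(data):
    """data = (a3, a2, a1, a0); returns the 7-bit codeword a3 a2 a1 a0 r2 r1 r0."""
    a3, a2, a1, a0 = data
    r0 = (a2 + a1 + a0) % 2
    r1 = (a3 + a2 + a1) % 2
    r2 = (a1 + a0 + a3) % 2
    return [a3, a2, a1, a0, r2, r1, r0]

def correct(word):
    """word = [b3, b2, b1, b0, q2, q1, q0]; corrects at most one bit in place."""
    b3, b2, b1, b0, q2, q1, q0 = word
    s0 = (b2 + b1 + b0 + q0) % 2
    s1 = (b3 + b2 + b1 + q1) % 2
    s2 = (b1 + b0 + b3 + q2) % 2
    # Index of the erroneous bit for each syndrome s2 s1 s0 (None means no error),
    # derived from the check equations above.
    error_bit = {0b000: None, 0b001: 6, 0b010: 5, 0b100: 4,   # q0, q1, q2
                 0b011: 1, 0b101: 3, 0b110: 0, 0b111: 2}      # b2, b0, b3, b1
    pos = error_bit[(s2 << 2) | (s1 << 1) | s0]
    if pos is not None:
        word[pos] ^= 1
    return word[:4]  # the corrected dataword b3 b2 b1 b0

sent = encode((1, 0, 1, 1))
received = sent.copy()
received[2] ^= 1            # introduce a single-bit error in b1
print(correct(received))    # [1, 0, 1, 1]: the original dataword is recovered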

Table 10.5 Logical decision made by the correction logic analyzer

Hamming code C(7, 4) can:

• detect up to a 2-bit error (dmin - 1 = 2)
• correct up to a 1-bit error ((dmin - 1)/2 = 1)

Figure 10.13 Burst error correction using Hamming code

A burst error is split among multiple codewords.


10-3 CYCLIC CODES

Cyclic codes are special linear block codes with one extra property: in a cyclic code, if a codeword is cyclically shifted (rotated), the result is another codeword.
For example, if 1011000 is a codeword and we cyclically left-shift it, then 0110001 is also a codeword.
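A one-line rotation makes this property easy to experiment with; a small sketch treating codewords as bit strings:

def cyclic_left_shift(codeword: str) -> str:
    """Rotate a codeword one position to the left."""
    return codeword[1:] + codeword[0]

print(cyclic_left_shift("1011000"))  # 0110001, which is also a codeword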
10.3.1 Cyclic Redundancy Check

We can create cyclic codes to correct errors. However, the theoretical background required is beyond the scope of this book.
In this section, we simply discuss a subset of cyclic codes called the cyclic redundancy check (CRC), which is used in networks such as LANs and WANs.
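The encoding in Figure 10.6 is binary (modulo-2) long division: the dataword is augmented with as many 0s as the degree of the divisor, divided by the divisor, and the remainder becomes the check bits. A minimal Python sketch, assuming the generator 1011 (x^3 + x + 1) of the C(7, 4) CRC code in Table 10.3:

def mod2_div(dividend: str, divisor: str) -> str:
    """Binary (XOR-based) long division; returns the remainder as a bit string."""
    remainder = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if remainder[0] == '1':  # only subtract (XOR) when the leading bit is 1
            remainder = [str(int(a) ^ int(b)) for a, b in zip(remainder, divisor)]
        remainder.pop(0)
        if i < len(dividend):
            remainder.append(dividend[i])
    return ''.join(remainder)

def crc_encode(dataword: str, generator: str = "1011") -> str:
    """Append len(generator) - 1 check bits to the dataword."""
    padded = dataword + "0" * (len(generator) - 1)
    return dataword + mod2_div(padded, generator)

def crc_check(codeword: str, generator: str = "1011") -> bool:
    """A zero remainder (syndrome) means the codeword is accepted."""
    return set(mod2_div(codeword, generator)) <= {"0"}

codeword = crc_encode("1001")       # 1001110 with this generator
print(codeword, crc_check(codeword))
print(crc_check("1000110"))         # single-bit error: remainder is nonzero, rejected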

Table 10.3: A CRC code with C(7, 4)

Figure 10.5: CRC encoder and decoder

Figure 10.6: Division in CRC encoder

Figure 10.7: Division in the CRC decoder for two cases

10.3.2 Polynomials

A better way to understand cyclic codes and how they can be analyzed is to represent them as polynomials.
A pattern of 0s and 1s can be represented as a polynomial with coefficients of 0 and 1. The power of each term shows the position of the bit; the coefficient shows the value of the bit.
Figure 10.8 shows a binary pattern and its polynomial representation.
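For instance, the 7-bit pattern 1000011 has 1s in positions 6, 1, and 0, so it is represented by the polynomial x^6 + x + 1; the degree of the polynomial (here 6) is one less than the number of bits in the pattern.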

Figure 10.8: A polynomial to represent a binary word

10.3.3 Encoder Using Polynomials

Now that we have discussed operations on polynomials, we show the creation of a codeword from a dataword. Figure 10.9 is the polynomial version of Figure 10.6. We can see that the process is shorter.

Figure 10.9: CRC division using polynomials

10.3.4 Cyclic Code Analysis

We can analyze a cyclic code to find its capabilities by using polynomials. We define the following, where f(x) is a polynomial with binary coefficients.

Example 10.8

Which of the following g(x) values guarantees that a single-bit error is caught? x + 1, x^3, and 1

Solution

Figure 10.10: Representation of isolated single-bit
errors

Example 10.9

Find the suitability of the following generators in relation to burst errors of different lengths: x^6 + 1, x^18 + x^7 + x + 1, and x^32 + x^23 + x^7 + 1.
Solution

Example 10.10

Find the status of the following generators related to two isolated, single-bit errors: x + 1, x^4 + 1, x^7 + x^6 + 1, and x^15 + x^14 + 1.

Solution

Table 10.4: Standard polynomials

10.3.5 Advantages of Cyclic Codes

We have seen that cyclic codes have a very good performance in detecting single-bit errors, double errors, an odd number of errors, and burst errors.
They can easily be implemented in hardware and software. They are especially fast when implemented in hardware. This has made cyclic codes a good candidate for many networks.

10.3.6 Other Cyclic Codes

The cyclic codes we have discussed in this section are very simple. The check bits and syndromes can be calculated by simple algebra. There are, however, more powerful polynomials that are based on abstract algebra involving Galois fields. These are beyond the scope of this book. One of the most interesting of these codes is the Reed-Solomon code, used today for both detection and correction.

10-4 CHECKSUM

• Checksum is an error-detecting technique that can be applied to a message of any length.
• In the Internet, the checksum technique is mostly used at the network and transport layers rather than the data-link layer.
• However, to make our discussion of error-detecting techniques complete, we discuss the checksum briefly in this section.
Figure 10.15: Checksum

Example 10.11

• Suppose the message is a list of five 4-bit numbers that we want to send to a destination. In addition to sending these numbers, we send the sum of the numbers.
• For example, if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers.
• The receiver adds the five numbers and compares the result with the sum.
• If the two are the same, the receiver assumes no error, accepts the five numbers, and discards the sum.
• Otherwise, there is an error somewhere and the message is not accepted.

Example 10.12

In the previous example, the decimal number 36 in binary is (100100)2. To change it to a 4-bit number, we add the extra leftmost bits to the rightmost four bits (wrapping): 10 + 0100 = 0110, which is 6.

Instead of sending 36 as the sum, we can send 6 as the sum (7, 11, 12, 0, 6, 6). The receiver can add the first five numbers in one's complement arithmetic. If the result is 6, the numbers are accepted; otherwise, they are rejected.
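A minimal Python sketch of this wrapped (one's-complement) sum for 4-bit numbers; the word size and example values come from Examples 10.11 and 10.12:

def wrapped_sum(numbers, bits=4):
    """Add the numbers, wrapping any carry beyond `bits` back into the sum."""
    mask = (1 << bits) - 1                        # 0b1111 for 4-bit words
    total = 0
    for n in numbers:
        total += n
        total = (total & mask) + (total >> bits)  # wrap the carry around
    return total

data = [7, 11, 12, 0, 6]
checksum = wrapped_sum(data)                      # 6, as computed in Example 10.12
print(checksum)
# The receiver recomputes the wrapped sum and compares it with the received value.
print(wrapped_sum([7, 11, 13, 0, 6]) == checksum) # False: the corruption is detected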

Table 10.5: Procedure to calculate the traditional
checksum
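The traditional Internet checksum generalizes this idea to 16-bit words. A sketch assuming the usual procedure (wrapped one's-complement sum of the words, complemented by the sender); the sample words below are illustrative only:

def internet_checksum(words, bits=16):
    """One's-complement sum of 16-bit words, complemented before transmission."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)  # wrap carries around
    return (~total) & mask

message = [0x4500, 0x0073, 0x0000, 0x4000]        # illustrative 16-bit words
c = internet_checksum(message)
# The receiver runs the same computation over the words plus the checksum;
# a result of 0 means the message is accepted.
print(hex(c), internet_checksum(message + [c]) == 0)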

10-5 FORWARD ERROR CORRECTION

We discussed error detection and retransmission in the previous sections. However, retransmission of corrupted and lost packets is not useful for real-time multimedia transmission. We need to correct the error or reproduce the packet immediately.
10.5.1 Using Hamming Distance

We earlier discussed the Hamming distance for error detection. For error correction, we definitely need more distance. It can be shown that to correct t errors, we need to have dmin = 2t + 1. In other words, if we want to correct 10 bits in a packet, we need to make the minimum Hamming distance 21 bits, which means a lot of redundant bits need to be sent with the data. Figure 10.20 shows the geometrical representation of this concept.

Figure 10.20: Hamming distance for error correction

10.5.2 Using XOR

Another recommendation is to use the property of the exclusive OR operation: a redundant packet R is created as the exclusive OR of the N data packets,

R = P1 ⊕ P2 ⊕ … ⊕ PN

This means that if any one packet Pi is lost, it can be recreated by XORing the redundant packet with the packets that did arrive:

Pi = P1 ⊕ … ⊕ P(i-1) ⊕ R ⊕ P(i+1) ⊕ … ⊕ PN
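A minimal Python sketch of this recovery, treating packets as equal-length byte strings (the packet values are illustrative):

def xor_packets(packets):
    """Bitwise XOR of equal-length packets, returned as bytes."""
    result = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            result[i] ^= byte
    return bytes(result)

p1, p2, p3 = b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"
r = xor_packets([p1, p2, p3])      # redundant packet sent along with the data

# If p2 is lost, XORing the redundant packet with the surviving packets recreates it.
recovered = xor_packets([p1, p3, r])
print(recovered == p2)             # True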

10.5.3 Chunk Interleaving

Another way to achieve FEC in multimedia is to allow some small chunks to be missing at the receiver. We cannot afford to let all the chunks belonging to the same packet be missing; however, we can afford to let one chunk be missing in each packet.
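A minimal sketch of the interleaving idea: chunks are arranged in rows, packets are built from the columns, so losing one packet costs each original row only one chunk. The layout and names are illustrative, not the exact arrangement of Figure 10.21:

def interleave(rows):
    """Build packets from the columns of a row-major table of chunks."""
    return [list(col) for col in zip(*rows)]

# Each row holds the chunks of one source packet.
rows = [["a1", "a2", "a3"],
        ["b1", "b2", "b3"],
        ["c1", "c2", "c3"]]

packets = interleave(rows)  # packet 0 = [a1, b1, c1], packet 1 = [a2, b2, c2], ...
packets[1] = None           # one packet is lost on the way

# After de-interleaving, each original row is missing exactly one chunk,
# which a real-time multimedia application can usually tolerate.
for i in range(len(rows)):
    print([pkt[i] if pkt is not None else "lost" for pkt in packets])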

Figure 10.21: Interleaving

10.5.4 Combining

Hamming distance and interleaving can be combined. We can first create n-bit packets that can correct t-bit errors. Then we interleave m rows and send the bits column by column. In this way, we can automatically correct burst errors up to m × t bits of errors.
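For instance, if each row is a codeword that can correct t = 2 bit errors and m = 4 rows are interleaved, a burst of up to m × t = 8 consecutive bit errors is spread over the four codewords (at most two errors per row) and can therefore be corrected.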

10.5.5 Compounding

Still another solution is to create a duplicate of each packet with a low-resolution redundancy and combine the redundant version with the next packet. For example, we can create four low-resolution packets out of five high-resolution packets and send them as shown in Figure 10.22.

Figure 10.22: Compounding high- and low-resolution packets

