
ECE 4008 – COMPUTER COMMUNICATION

Dr.D.VYDEKI

Associate Professor Gr-2/ SENSE

VIT Chennai
Agenda

 Introduction

 Block Coding

Introduction
 Networks must be able to transfer data from one device to another with acceptable accuracy.

 For most applications, a system must guarantee that the data received are identical to the data transmitted.

 Any time data are transmitted from one node to the next, they can become corrupted in passage.

 Many factors can alter one or more bits of a message.

 Some applications can tolerate a small level of error. For example, random errors in audio or video transmissions
may be tolerable, but when we transfer text, we expect a very high level of accuracy.

 At the data-link layer, if a frame is corrupted between the two nodes, it needs to be corrected before it continues
its journey to other nodes. However, most link-layer protocols simply discard the frame and let the upper-layer
protocols handle the retransmission of the frame.

 Some multimedia applications try to correct the corrupted frame.

Introduction: Types of Errors
 Whenever bits flow from one point to another, they are subject to unpredictable changes because of interference.

 This interference can change the shape of the signal.

 Types: Single-bit error and burst error

 A burst error is more likely to occur than a single-bit error because the duration of the noise signal is normally
longer than the duration of 1 bit, which means that when noise affects data, it affects a set of bits.

 The number of bits affected depends on the data rate and duration of noise. For example, if data is sent at 1 kbps,
a noise of 1/100 second can affect 10 bits; for data rate of 1 Mbps, the same noise can affect 10,000 bits.

Introduction
 Redundancy
• to detect or correct errors, some extra bits need to be sent with data.

• These redundant bits are added by the sender and removed by the receiver.

• Their presence allows the receiver to detect or correct corrupted bits.

 Detection versus Correction

 Coding
• The sender adds redundant bits through a process that creates a relationship between the redundant bits and
the actual data bits.

• The receiver checks the relationships between the two sets of bits to detect errors.

• The ratio of redundant bits to data bits and the robustness of the process are important factors in any coding
scheme.

• Categories: block coding and convolutional coding

BLOCK CODING
Block Coding
 The message is divided into blocks of k bits, called datawords.

 We add r redundant bits to each block to make the length n = k + r.

 The resulting n-bit blocks are called codewords.

 With k bits, we can create 2^k possible datawords; with n bits, 2^n possible codewords.

 Since n > k, the number of possible codewords > number of possible datawords.

 The block coding process is one-to-one; the same dataword is always encoded as the same codeword.

 This means that 2^n − 2^k codewords are not used.

 These codewords are invalid or illegal.

 The trick in error detection is the existence of these invalid codewords.

 If the receiver receives an invalid codeword, this indicates that the data was corrupted during transmission.

Error Detection using Block Coding
If the following two conditions are met, the receiver can detect a change in the original codeword.

1. The receiver has (or can find) a list of valid codewords.

2. The original codeword has changed to an invalid one.


Error Detection using Block Coding - Example
 Assume that k = 2 and n = 3.
 Table shows the list of datawords and codewords.

 Assume the sender encodes the dataword 01 as 011 and sends it to the receiver.
 Consider the following cases:
1. The receiver receives 011. It is a valid codeword. The receiver extracts the dataword 01 from it.
2. The codeword is corrupted during transmission, and 111 is received (the leftmost bit is corrupted).
This is not a valid codeword and is discarded.
3. The codeword is corrupted during transmission, and 000 is received (the right two bits are
corrupted). This is a valid codeword. The receiver incorrectly extracts the dataword 00.
 Two corrupted bits have made the error undetectable.
 An error-detecting code can detect only the types of errors for which it is designed; other types of errors may
remain undetected
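
The check itself is just a lookup in the list of valid codewords. A minimal Python sketch (not part of the original slides), assuming the usual C(3, 2) table that is consistent with the cases above (00→000, 01→011, 10→101, 11→110):

```python
# Error detection as a lookup in the list of valid codewords.
# The table below is an assumption consistent with the cases above;
# the slide's own table is not reproduced here.
CODEWORDS = {"00": "000", "01": "011", "10": "101", "11": "110"}
VALID = {cw: dw for dw, cw in CODEWORDS.items()}

def receive(codeword):
    """Return the dataword if the codeword is valid, otherwise None (discard)."""
    return VALID.get(codeword)

print(receive("011"))  # '01'  -> case 1: valid codeword, dataword extracted
print(receive("111"))  # None  -> case 2: invalid codeword, frame discarded
print(receive("000"))  # '00'  -> case 3: valid but wrong; the 2-bit error goes undetected
```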
Error Detection using Block Coding – Hamming Distance

 The Hamming distance between two words (of the same size) is the number of differences
between the corresponding bits.
 The Hamming distance between two words x and y is denoted d(x, y).
 Why is the Hamming distance important for error detection? Because the Hamming distance between the
received codeword and the sent codeword is the number of bits that were corrupted during
transmission.
 If the codeword 00000 is sent and 01101 is received, 3 bits are in error and the Hamming
distance between the two is d(00000, 01101) = 3.
 In other words, if the Hamming distance between the sent and the received codeword is not
zero, the codeword has been corrupted during transmission.
 The Hamming distance can easily be found if we apply the XOR operation ( ⊕) on the two
words and count the number of 1s in the result.
 Hamming distance is a value greater than or equal to zero.
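
As a small illustration (an addition, not from the slides), the XOR-and-count rule can be written directly in Python:

```python
def hamming_distance(x, y):
    """Count the positions where the bits differ, i.e. the 1s in x XOR y."""
    assert len(x) == len(y), "Hamming distance is defined for words of the same size"
    return sum(bx != by for bx, by in zip(x, y))

print(hamming_distance("00000", "01101"))  # 3, as in the example above
```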
Minimum Hamming Distance
 In a set of codewords, the minimum Hamming distance is the smallest Hamming distance between all possible pairs of
codewords.

 If ‘s’ errors occur during transmission, the Hamming distance between the sent codeword and received codeword is ‘s’.

 If our system needs to detect up to ‘s’ errors, the minimum distance between valid codewords must be (s + 1), so that the
received codeword cannot match a valid codeword.

 In other words, if the minimum distance between all valid codewords is (s + 1), the received codeword cannot be
erroneously mistaken for another codeword. The error will be detected.

 Although a code with dmin = s + 1 may be able to detect more than ‘s’ errors in some special cases, only ‘s’ or fewer
errors are guaranteed to be detected.

 To guarantee correction of up to t errors in all cases, the minimum Hamming distance in a block code must be dmin = 2t + 1.
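
A short sketch (an addition, not from the slides) that computes dmin for the four-codeword example code used earlier and derives the guaranteed detection and correction capability from it:

```python
from itertools import combinations

def d_min(codewords):
    """Smallest Hamming distance over all distinct pairs of codewords."""
    return min(sum(a != b for a, b in zip(x, y))
               for x, y in combinations(codewords, 2))

code = ["000", "011", "101", "110"]        # the assumed C(3, 2) example code
dmin = d_min(code)                         # 2
print("guaranteed detection of up to", dmin - 1, "errors")          # s = 1
print("guaranteed correction of up to", (dmin - 1) // 2, "errors")  # t = 0
```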
Linear Block Code (LBC)
 A linear block code is a code in which the exclusive OR (addition modulo-2) of any two valid codewords creates another
valid codeword.

 The minimum Hamming distance is the number of 1s in the nonzero valid codeword with the
smallest number of 1s.

 Parity-check code is a linear block code.

 In this code, a k-bit dataword is changed to an n-bit codeword where n = k + 1.

 The extra bit, called the parity bit, is selected to make the total number of 1s in the codeword
even.

 The minimum Hamming distance for this category is dmin = 2, which means that the code is a
single-bit error-detecting code.
Parity Check Code C(5,4)
Parity Check Code: Encoder
 The calculation is done in modular arithmetic.
 The encoder uses a generator that takes a copy of a 4-bit dataword (a0,
a1, a2, and a3) and generates a parity bit r0.
 The dataword bits and the parity bit create the 5-bit codeword.
 The parity bit that is added makes the number of 1s in the codeword
even.
 This is normally done by adding the 4 bits of the dataword (modulo-2);
the result is the parity bit.

Modulo-2 addition:

+ | 0 1
0 | 0 1
1 | 1 0

 If the number of 1s is even, the result is 0; if the number of 1s is odd, the result is 1.
 In both cases, the total number of 1s in the codeword is even.
Parity Check Code: Decoder
 The sender sends the codeword, which may be corrupted
during transmission.
 The receiver receives a 5-bit word.
 The checker at the receiver does the same thing as the
generator in the sender with one exception: The addition is
done over all 5 bits.
 The result, which is called the syndrome, is just 1 bit.
 The syndrome is 0 when the number of 1s in the received
codeword is even; otherwise, it is 1.
Example
Assume the sender sends the dataword 1011. The codeword created from this dataword is 10111,
which is sent to the receiver. We examine five cases:
1. No error occurs; the received codeword is 10111. The syndrome is 0. The dataword 1011 is created.
2. One single-bit error changes a1; the received codeword is 10011. The syndrome is 1. No dataword is created.
3. One single-bit error changes r0; the received codeword is 10110. The syndrome is 1. No dataword is created. Note that although none of the dataword bits are corrupted, no dataword is created because the code is not sophisticated enough to show the position of the corrupted bit.
Example
4. An error changes r0 and a second error changes a3; the received codeword is 00110. The syndrome is 0. The dataword 0011 is created at the receiver. Here the dataword is wrongly created because of the syndrome value. The simple parity-check decoder cannot detect an even number of errors. The errors cancel each other out and give the syndrome a value of 0.
5. Three bits, a3, a2, and a1, are changed by errors. The received codeword is 01011. The syndrome is 1. The dataword is not created. This shows that the simple parity check, guaranteed to detect one single-bit error, can also find any odd number of errors.
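
A minimal Python sketch (not from the slides) of the C(5,4) even-parity encoder and checker, reproducing the five cases above:

```python
def encode(dataword):
    parity = sum(int(b) for b in dataword) % 2   # modulo-2 sum of the 4 data bits
    return dataword + str(parity)                # append r0 -> 5-bit codeword

def syndrome(received):
    return sum(int(b) for b in received) % 2     # addition over all 5 received bits

print(encode("1011"))                            # '10111'
for rx in ["10111", "10011", "10110", "00110", "01011"]:   # cases 1-5
    s = syndrome(rx)
    print(rx, "syndrome =", s, "->", "accept " + rx[:4] if s == 0 else "discard")
```

Note that case 4 (00110) is accepted with the wrong dataword 0011, exactly as described: an even number of errors leaves the syndrome at 0.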
Two Dimensional Parity Code (LRC)
 Dataword is organized in a table (rows and columns).

 For each row and each column, 1 parity-check bit is calculated.

 The whole table is then sent to the receiver, which finds the syndrome for each row and each column.
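
A sketch of the row/column parity calculation (the data values and the corner parity-over-parities bit are illustrative assumptions, not taken from the slide's figure):

```python
def two_d_parity(rows):
    """rows: equal-length bit strings. Returns the table extended with parity bits."""
    row_par = [sum(map(int, r)) % 2 for r in rows]                         # one bit per row
    col_par = [sum(int(r[c]) for r in rows) % 2 for c in range(len(rows[0]))]
    corner = sum(col_par) % 2                       # parity over the parity row itself
    table = [r + str(p) for r, p in zip(rows, row_par)]
    table.append("".join(map(str, col_par)) + str(corner))
    return table

for line in two_d_parity(["1100111", "0110101", "1010000"]):
    print(line)   # three data rows with a parity column, plus a final parity row
```

The receiver recomputes the same parities; any row or column whose parity check fails points to where the corruption occurred.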
Hamming Code
 dmin = 3

 n = 2^m − 1; k = n − m, where m ≥ 3.

 Hamming code (7,4)


Hamming Code Encoder and Decoder

Generator (encoder) equations, modulo-2:
r0 = a2 + a1 + a0
r1 = a3 + a2 + a1
r2 = a1 + a0 + a3

Checker (decoder) syndrome equations, modulo-2:
s0 = b2 + b1 + b0 + q0
s1 = b3 + b2 + b1 + q1
s2 = b1 + b0 + b3 + q2
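
A minimal sketch of this Hamming(7,4) encoder/decoder (an addition, not from the slides), following the equations above; the codeword layout a3 a2 a1 a0 r2 r1 r0 is assumed so that the worked cases on the next slide come out as stated:

```python
def encode(dw):
    a3, a2, a1, a0 = (int(b) for b in dw)
    r0 = (a2 + a1 + a0) % 2
    r1 = (a3 + a2 + a1) % 2
    r2 = (a1 + a0 + a3) % 2
    return dw + f"{r2}{r1}{r0}"

# syndrome s2 s1 s0 -> index of the corrupted bit in b3 b2 b1 b0 q2 q1 q0
FLIP = {"110": 0, "011": 1, "111": 2, "101": 3, "100": 4, "010": 5, "001": 6}

def decode(rx):
    b3, b2, b1, b0, q2, q1, q0 = (int(b) for b in rx)
    s0 = (b2 + b1 + b0 + q0) % 2
    s1 = (b3 + b2 + b1 + q1) % 2
    s2 = (b1 + b0 + b3 + q2) % 2
    syn = f"{s2}{s1}{s0}"
    if syn != "000":                                   # flip the bit the syndrome points to
        i = FLIP[syn]
        rx = rx[:i] + str(1 - int(rx[i])) + rx[i + 1:]
    return rx[:4]                                      # dataword = first 4 bits

print(encode("0100"))     # '0100011'
print(decode("0011001"))  # '0111' -> single error in b2 corrected
print(decode("0001000"))  # '0000' -> two errors are wrongly "corrected"
```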
Hamming Code - Analysis
Consider the path of three datawords from the sender to the destination:

1. The dataword 0100 becomes the codeword 0100011. The codeword 0100011 is received. The syndrome is 000, so the final dataword is 0100.

2. The dataword 0111 becomes the codeword 0111001. The codeword 0011001 is received. The syndrome is 011. After flipping b2 back (from 0 to 1), the final dataword is 0111.

3. The dataword 1101 becomes the codeword 1101000. The codeword 0001000 is received (two errors). The syndrome is 101. After flipping b0, we get 0000, the wrong dataword. This shows that this Hamming code cannot correct two errors.
Hamming Code - Performance
 Hamming code can only correct a single error or detect a double error.

 To make it detect a burst error, split a burst error between several codewords, one error for each codeword.

 In data communications, normally a packet or a frame of data is sent.

 To make the Hamming code respond to a burst error of size N, make N codewords out of the frame.

 Then, instead of sending one codeword at a time, arrange the codewords in a table and send the bits in the table a column at a
time.

 In the figure, the bits are sent column by column (from the left).

 In each column, the bits are sent from the bottom to the top.

 In this way, a frame is made out of the four codewords and sent to the receiver.
Hamming Code - Performance
 When a burst error of size 4 corrupts the frame, only 1 bit from each codeword is corrupted.

 The corrupted bit in each codeword can then easily be corrected at the receiver.
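
A sketch of this interleaving idea (the four codewords are illustrative): transmit the table column by column, bottom to top within each column, so that a burst of size 4 touches at most one bit of each codeword:

```python
def interleave(codewords):
    """Send column by column, bottom-to-top within each column."""
    return "".join("".join(reversed(col)) for col in zip(*codewords))

def deinterleave(stream, k=4, n=7):
    """Rebuild k codewords of n bits each from the column-ordered stream."""
    cols = [stream[j * k:(j + 1) * k][::-1] for j in range(n)]   # undo bottom-to-top order
    return ["".join(cols[j][i] for j in range(n)) for i in range(k)]

frame = interleave(["0100011", "0111001", "1101000", "0011010"])
# a burst of size 4 wipes out one whole column of the table:
burst = "".join("1" if b == "0" else "0" for b in frame[8:12])
corrupted = frame[:8] + burst + frame[12:]
print(deinterleave(corrupted))   # each codeword now differs in exactly one bit
```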
Cyclic Codes
 Cyclic codes are special linear block codes with one extra property. In a cyclic code, if a codeword
is cyclically shifted (rotated), the result is another codeword.

 A subset of cyclic codes, the cyclic redundancy check (CRC) code, is used in networks such as LANs and WANs.

The table shows an example of a CRC code. Both the linear and cyclic properties of this code can be seen.
Cyclic Codes - Encoder
 In the encoder, the dataword has k bits (4 here); the codeword has n bits (7 here).

 The size of the dataword is augmented by adding n − k (3 here) 0s to the right-hand side of the word.

 The n-bit result is fed into the generator. The generator uses a divisor of size n − k + 1 (4 here), predefined and agreed upon.

 The generator divides the augmented dataword by the divisor (modulo-2 division). The quotient of the division is discarded; the remainder (r2r1r0) is appended to the dataword to create the codeword.
Cyclic Codes - Decoder
 The decoder receives the codeword (possibly corrupted during transmission).

 A copy of all n bits is fed to the checker, which is a replica of the generator.

 The remainder produced by the checker is a syndrome of n − k (3 here) bits, which is fed to the decision logic analyzer.

 The analyzer has a simple function. If the syndrome bits are all 0s, the 4 leftmost bits of the codeword are accepted as the dataword (interpreted as no error); otherwise, the 4 bits are discarded (error).
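
A minimal sketch of the encoder and checker described above (an addition, not from the slides), using bit strings and modulo-2 (XOR) long division; the divisor 1011 is the one used in the worked example later in these slides:

```python
def mod2_div(dividend, divisor):
    """Return the (len(divisor) - 1)-bit remainder of modulo-2 division."""
    bits = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if bits[i] == "1":                            # divide only when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])

def crc_encode(dataword, divisor):
    augmented = dataword + "0" * (len(divisor) - 1)   # append n - k zeros
    return dataword + mod2_div(augmented, divisor)    # append the remainder

def crc_check(codeword, divisor):
    return set(mod2_div(codeword, divisor)) == {"0"}  # syndrome all zeros?

cw = crc_encode("1001", "1011")
print(cw)                            # '1001110'
print(crc_check(cw, "1011"))         # True  -> accepted
print(crc_check("1000110", "1011"))  # False -> corrupted codeword rejected
```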
Cyclic Codes – Encoder (Detailed)
 The encoder takes a dataword and augments it with n − k number of 0s.

 It then divides the augmented dataword by the divisor.


Cyclic Codes – Decoder (Detailed)

 The decoder does the same division process as the encoder.

 The remainder of the division is the syndrome.

 If the syndrome is all 0s, there is no error with a high probability; the dataword is separated from the received codeword and accepted.

 Otherwise, everything is discarded.


Polynomials
 A better way to understand cyclic codes and how they can be analyzed is to represent
them as polynomials.

 A pattern of 0s and 1s can be represented as a polynomial with coefficients of 0 and 1.

 The power of each term shows the position of the bit; the coefficient shows the value of
the bit.
Polynomials
 The degree of a polynomial is the highest power in the polynomial.

 For example, the degree of the polynomial x^6 + x + 1 is 6. Note that the degree of a polynomial is 1 less than the number of bits in the pattern.

 The bit pattern in this case has 7 bits.

 Adding and subtracting polynomials in mathematics are done by adding or subtracting the
coefficients of terms with the same power.

 In binary, the coefficients are only 0 and 1, and adding is in modulo-2.

 This has two consequences:


 First, addition and subtraction are the same.

 Second, adding or subtracting is done by combining terms and deleting pairs of identical terms.

 For example, adding x^5 + x^4 + x^2 and x^6 + x^4 + x^2 gives just x^6 + x^5. The terms x^4 and x^2 are deleted.
Polynomials
 Multiplying a term by another term is very simple; just add the powers.

 For example, x^3 × x^4 is x^7.

 For dividing, we just subtract the power of the second term from the power of the first.

 For example, x^5 / x^2 is x^3.

 Multiplying a polynomial by another is done term by term.

 Each term of the first polynomial must be multiplied by all terms of the second.

 The result is then simplified, and pairs of equal terms are deleted.
Polynomials
 Division of polynomials is conceptually the same as the binary division used in a CRC encoder.

 Divide the first term of the dividend by the first term of the divisor to get the first term
of the quotient.

 Multiply the term in the quotient by the divisor and subtract the result from the dividend.

 Repeat the process until the dividend degree is less than the divisor degree.
Polynomials
 A binary pattern is often shifted a number of bits to the right or left.

 Shifting to the left means adding extra 0s as rightmost bits; shifting to the right means
deleting some rightmost bits.

 Shifting to the left is accomplished by multiplying each term of the polynomial by x^m, where m is the number of shifted bits; shifting to the right is accomplished by dividing each term of the polynomial by x^m.

 No negative powers in the polynomial representation.

• When the dataword is augmented in the encoder, actually the bits are shifted to the left.
• Also, when two bit patterns are concatenated, shift the first polynomial to the left and then add the second
polynomial.
CRC Encoder using Polynomials
 The dataword 1001 is represented as x^3 + 1.

 The divisor 1011 is represented as x^3 + x + 1.

 To find the augmented dataword, left-shift the dataword 3 bits (multiplying by x^3).

 The result is x^6 + x^3.

 Division is straightforward.

 Divide the first term of the dividend, x^6, by the first term of the divisor, x^3.

 The first term of the quotient is then x^6 / x^3, or x^3.

 Then multiply x^3 by the divisor and subtract (XOR) the result from the dividend.

 The result is x^4, with a degree greater than the divisor’s degree; continue to divide until the degree of the remainder is less than the degree of the divisor.
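
Writing out the remaining steps of this division (a completion of the example, not shown on the slide):

```latex
\begin{aligned}
x^6 + x^3 - x^3\,(x^3 + x + 1) &= x^4 \\
x^4 - x\,(x^3 + x + 1)         &= x^2 + x \qquad (\text{degree} < 3,\ \text{stop}) \\[4pt]
\text{quotient}  &= x^3 + x \quad (\text{discarded}) \\
\text{remainder} &= x^2 + x \;\equiv\; 110 \\
\text{codeword}  &= x^6 + x^3 + x^2 + x \;\equiv\; 1001110
\end{aligned}
```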
CRC Encoder using Polynomials
 It can be seen that the polynomial representation can easily simplify the operation of division in this case, because the two steps involving all-0s divisors are not needed here.

 In a polynomial representation, the divisor is normally referred to as the generator polynomial g(x).
Cyclic Code Analysis
 Dataword: d(x) Codeword: c(x) Generator: g(x) Syndrome: s(x) Error: e(x)

In a cyclic code,
If s(x) ≠ 0, one or more bits are corrupted.
If s(x) = 0, either

a. no bit is corrupted, or
b. some bits are corrupted, but the decoder failed to detect them.
Cyclic Code Analysis
 To analyze, we find the criteria that must be imposed on the generator, g(x), to detect the types of errors we especially want detected.

 First find the relationship among the sent codeword, error, received codeword, and the
generator.

 Received codeword = c(x) + e(x)

 The receiver divides the received codeword by g(x) to get the syndrome.
Cyclic Code Analysis

Received codeword / g(x) = c(x) / g(x) + e(x) / g(x)

 The first term on the right-hand side of the equality does not have a remainder (according to the definition of a codeword).

 So, the syndrome is actually the remainder of the second term on the right-hand side.

 If this term does not have a remainder (syndrome =0), either e(x) is 0 or e(x) is divisible by g(x).

 No worry about the first case (there is no error); the second case is very important.

 Those errors that are divisible by g(x) are not caught.

In a cyclic code, those e(x) errors that are divisible by g(x) are not caught.
Single-bit Error Detection
 A single-bit error is e(x) = x^i, where i is the position of the bit.

 If a single-bit error is caught, then x^i is not divisible by g(x).

 If g(x) has at least two terms (which is normally the case) and the coefficient of x^0 is not zero (the rightmost bit is 1), then e(x) cannot be divided by g(x).
Example

Which of the following g(x) values guarantees that a single-bit error is caught? For each case, what is
the error that cannot be caught?
a. x + 1
b. x^3
c. 1

Solution
a. No x^i can be divisible by x + 1. Any single-bit error can be caught.
b. If i is equal to or greater than 3, x^i is divisible by g(x). All single-bit errors in positions 1 to 3 are caught.
c. All values of i make x^i divisible by g(x). No single-bit error can be caught. This g(x) is useless.
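
A small Python check (an addition, not from the slides) of the same three generators, listing the exponents i for which a single-bit error e(x) = x^i slips through (x^i divisible by g(x)); the codeword length n = 7 is assumed only for illustration:

```python
def gf2_mod(a, g):
    """Remainder of polynomial division over GF(2); polynomials encoded as integers."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

n = 7
for name, g in [("x + 1", 0b11), ("x^3", 0b1000), ("1", 0b1)]:
    missed = [i for i in range(n) if gf2_mod(1 << i, g) == 0]
    print(f"g(x) = {name}: undetected single-bit errors at exponents {missed}")
# x + 1 -> []            every single-bit error is caught
# x^3   -> [3, 4, 5, 6]  errors at x^3 and above slip through
# 1     -> [0, ..., 6]   useless
```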
Two Isolated Single-Bit Errors
 , values of i and j define the positions of the errors, and the difference j - i defines the
distance between the two errors.

 If g(x) has more than one term and one term is x o, it cannot divide x i. (Eg. (a) in previous Slide)

 g(x) must not divide xj-i + 1 or x t + 1 , where t is between 0 and n - 1.

 However, t=0 is meaningless and t = I is needed as seen later.

 This means t should be between 2 and n-1.


If a generator cannot divide xt + 1
(t between 0 and n – 1),
then all isolated double errors
can be detected.
Two Isolated Single-Bit Errors
Find the status of the following generators related to two isolated, single-bit errors.

a. x + 1

b. x^4 + 1

c. x^7 + x^6 + 1

a. This is a very poor choice for a generator. Any two errors next to each other cannot be detected.

b. This generator cannot detect two errors that are four positions apart. The two errors can be anywhere, but if their distance is 4, they remain undetected.

c. This is a good choice for this purpose.
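
The same kind of check can be coded directly (an illustrative sketch, with the block length n = 15 assumed): list the distances t for which g(x) divides x^t + 1, i.e. the distances at which two isolated single-bit errors go undetected:

```python
def gf2_mod(a, g):
    """Remainder of polynomial division over GF(2); polynomials encoded as integers."""
    while a and a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

n = 15
gens = {"x + 1": 0b11, "x^4 + 1": 0b10001, "x^7 + x^6 + 1": 0b11000001}
for name, g in gens.items():
    bad = [t for t in range(1, n) if gf2_mod((1 << t) | 1, g) == 0]   # does g divide x^t + 1 ?
    print(f"g(x) = {name}: undetected error distances {bad}")
# x + 1         -> every distance (including adjacent errors) is missed
# x^4 + 1       -> distances that are multiples of 4 are missed
# x^7 + x^6 + 1 -> none in this range; a good choice
```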


Error-detection Analysis

A generator that contains a factor of x + 1 can detect all odd-numbered errors.

For example, x^4 + x^2 + x + 1 can catch all odd-numbered errors since it can be written as a product of the two polynomials (x + 1) and x^3 + x^2 + 1.
Error-detection Analysis
❏ All burst errors with length L ≤ r will be detected (r is the degree of the generator polynomial).
❏ All burst errors with L = r + 1 will be detected with probability 1 − (1/2)^(r−1).
❏ All burst errors with L > r + 1 will be detected with probability 1 − (1/2)^r.
Find the suitability of the following generator in relation to burst errors of different lengths: x^6 + 1

This generator can detect all burst errors with a length less than or equal to 6 bits; 3 out of 100 burst errors with length 7 will slip by; 16 out of 1000 burst errors of length 8 or more will slip by.
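
These figures follow from the probabilities above with r = 6 (a short verification, not on the slide):

```latex
\begin{aligned}
L \le 6:&\quad \text{always detected} \\
L = 7:&\quad 1 - (1/2)^{r-1} = 1 - 2^{-5} = 1 - \tfrac{1}{32} \approx 0.97
       \quad (\text{about 3 in 100 slip by}) \\
L \ge 8:&\quad 1 - (1/2)^{r} = 1 - 2^{-6} = 1 - \tfrac{1}{64} \approx 0.984
       \quad (\text{about 16 in 1000 slip by})
\end{aligned}
```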
Criteria for a good Polynomial Generator
A good polynomial generator needs to have the following
characteristics:
1. It should have at least two terms.
2. The coefficient of the term x^0 should be 1.
3. It should not divide x^t + 1, for t between 2 and n − 1.
4. It should have the factor x + 1.
Standard Polynomials
Advantages of Cyclic Codes
 Cyclic codes have very good performance in detecting single-bit errors, double errors, an odd number of errors, and burst errors.

 They can easily be implemented in hardware and software.

 They are especially fast when implemented in hardware.

 This has made cyclic codes a good candidate for many networks.