Error 3
The data link layer uses error control techniques to ensure that frames, i.e. bit streams of data, are transmitted
from the source to the destination with a certain level of accuracy.
Errors
An error is a condition in which the receiver's information does not match the sender's. Digital signals suffer from
noise during transmission that can introduce errors in the binary bits traveling from sender to receiver. That
means a 0 bit may change to 1 or a 1 bit may change to 0.
Data may get scrambled by noise or corrupted whenever a message is transmitted; error control for this is
implemented at either the data link layer or the transport layer of the OSI model. To guard against such errors,
error-detection codes are added as extra data to digital messages. These codes help in detecting any errors
that may have occurred during message transmission.
Types of Errors
1. Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a single binary
digit) of a transmitted data unit is altered during transmission, resulting in an incorrect or corrupted data
unit.
2. Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission is affected.
Although multiple-bit errors are relatively rare when compared to single-bit errors, they can still occur,
particularly in high-noise or high-interference digital environments.
3. Burst Error
When several consecutive bits are flipped erroneously in digital transmission, a burst error occurs. This
error appears as a sequence of consecutive incorrect bit values.
Error Detection Techniques
1. Simple Parity Check
Single-bit parity is a simple error detection method that involves adding one extra bit to a data transmission.
The parity bit is chosen so that the total number of 1's in the codeword is even, which is why this is called
even parity checking.
For example, if the data to be transmitted is 101010, the codeword transmitted to the receiver is 1010101
(we have used even parity, so the appended bit is 1 because the data contains an odd number of 1's).
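The following short sketch (in Python, an illustrative choice) shows how the sender computes the even parity
bit for the example above:

    def add_even_parity(data):
        # Parity bit is 1 when the data has an odd number of 1s, so that
        # the total number of 1s in the codeword becomes even.
        parity = data.count("1") % 2
        return data + str(parity)

    print(add_even_parity("101010"))  # -> '1010101', as in the example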
2. Two-Dimensional Parity Check
In a two-dimensional parity check, parity check bits are calculated for each row, which is equivalent to a
simple parity check bit. Parity check bits are also calculated for all columns, and then both are sent along
with the data. At the receiving end, these are compared with the parity bits calculated on the received data.
Two-dimensional parity check can detect and correct all single-bit errors.
It can detect two- or three-bit errors that occur anywhere in the matrix, but it cannot correct them; such
errors can only be detected.
If an error occurs in the parity bits themselves, this scheme will not work.
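A minimal sketch of the sender side, assuming even parity and a 4x4 block of data bits (the block size is an
illustrative assumption):

    def add_2d_parity(rows):
        # Append a parity bit to each row, then append a row of column parities.
        with_row_parity = [r + [sum(r) % 2] for r in rows]
        col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
        return with_row_parity + [col_parity]

    block = [[1, 0, 1, 1],
             [0, 1, 1, 0],
             [1, 1, 0, 1],
             [0, 0, 1, 1]]
    for row in add_2d_parity(block):
        print(row)

The receiver recomputes both sets of parities; for a single-bit error, the failing row parity and the failing
column parity intersect at the corrupted bit, which is how the scheme corrects it.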
3. Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process involves
dividing the data into equally sized segments and using 1's complement arithmetic to calculate the sum of
these segments. The calculated checksum is then sent along with the data to the receiver. At the receiver's
end, the same process is repeated; if the complemented sum comes out as all zeros, the data is assumed to
be correct.
On the sender's end, the segments are added using 1's complement arithmetic to get the sum, and the
sum is complemented to get the checksum.
At the receiver's end, all received segments (including the checksum) are added using 1's complement
arithmetic to get the sum, and the sum is complemented. If the result is zero, the received data is
accepted; otherwise, it is rejected.
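A minimal sketch of this scheme, assuming 8-bit segments (the segment width is an illustrative assumption):

    def ones_complement_sum(segments, width=8):
        # 1's complement addition: any carry out of the top bit wraps around.
        mask = (1 << width) - 1
        total = 0
        for seg in segments:
            total += seg
            total = (total & mask) + (total >> width)
        return total

    def make_checksum(segments):
        return ~ones_complement_sum(segments) & 0xFF  # complement of the sum

    data = [0b10110011, 0b01101100, 0b11100010]
    received = data + [make_checksum(data)]        # sender appends the checksum
    print(~ones_complement_sum(received) & 0xFF)   # 0 -> data accepted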
4. Cyclic Redundancy Check (CRC)
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the
data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder,
the data unit is assumed to be correct and is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
[Figure: CRC working]
Note: If the divisor is k bits long, the appended CRC is k - 1 bits.
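A minimal sketch of CRC generation and checking by modulo-2 (XOR) division; the data and generator values
below are illustrative choices, not a standard polynomial:

    def mod2div(dividend, divisor):
        # Long division where subtraction is XOR; returns the k-1 bit remainder.
        bits = list(dividend)
        for i in range(len(bits) - len(divisor) + 1):
            if bits[i] == "1":
                for j, g in enumerate(divisor):
                    bits[i + j] = "0" if bits[i + j] == g else "1"
        return "".join(bits[-(len(divisor) - 1):])

    def crc_encode(data, gen):
        rem = mod2div(data + "0" * (len(gen) - 1), gen)  # divide with k-1 zeros
        return data + rem                                # append the CRC bits

    codeword = crc_encode("100100", "1101")   # -> '100100001'
    print(mod2div(codeword, "1101"))          # '000': no remainder, accepted
    print(mod2div("100101001", "1101"))       # non-zero: damaged, rejected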
Advantages of Error Detection
Increased Data Reliability: Error detection ensures that the data transmitted over the network is reliable,
accurate, and free from errors. This ensures that the recipient receives the same data that was transmitted by
the sender.
Improved Network Performance: Error detection mechanisms can help to identify and isolate network issues
that are causing errors. This can help to improve the overall performance of the network and reduce
downtime.
Enhanced Data Security: Error detection can also help to ensure that the data transmitted over the network is
secure and has not been tampered with.
Disadvantages of Error Detection
Overhead: Error detection requires additional resources and processing power, which can lead to
increased overhead on the network. This can result in slower network performance and increased
latency.
False Positives: Error detection mechanisms can sometimes generate false positives, which can
result in unnecessary retransmission of data. This can further increase the overhead on the network.
Limited Error Correction: Error detection can only identify errors but cannot correct them. This
means that the recipient must rely on the sender to retransmit the data, which can lead to further
delays and increased network overhead.
Error Correction
Forward Error Correction: In this error correction scenario, the receiving end is responsible for
correcting the network error. There is no need for retransmission of the data from the sender’s side.
In Backward Error Correction, the sender is responsible for retransmitting the data if errors are
detected by the receiver. The receiver signals the sender to resend the corrupted data or the entire
message to ensure accurate delivery.
Commonly used error-correcting codes include:
1. Hamming codes.
2. Binary convolutional codes.
3. Reed-Solomon codes.
4. Low-Density Parity Check codes.
1. Hamming Codes
Consider a 7-bit message, say 1011001 (the value implied by the worked example below). The number of
redundant bits r must satisfy 2^r >= m + r + 1; for m = 7 data bits this gives r = 4, since 2^4 = 16 >= 7 + 4 + 1 = 12.
Also, by convention, the redundant bits are always placed in the positions which are powers of 2 (1, 2, 4,
and 8), with the data bits occupying the remaining positions of the 11-bit codeword.
Therefore, we have R1, R2, R3, and R4 as redundant bits which will be calculated according to the
following rules:
R1 includes all the positions whose binary representation has 1 in their least significant bit.
Thus, R1 covers positions 1, 3, 5, 7, 9, 11.
R2 includes all the positions whose binary representation has 1 in the second position from
the least significant bit. Thus, R2 covers positions 2, 3, 6, 7, 10, 11.
R3 includes all the positions whose binary representation has 1 in the third position from
the least significant bit. Hence, R3 covers positions 4, 5, 6, 7.
R4 includes all the positions whose binary representation has 1 in the fourth position from
the least significant bit due to which R4 covers positions 8, 9, 10, 11.
Since the total number of 1s in all the bit positions corresponding to R1 is even, R1 = 0.
Since the total number of 1s in all the bit positions corresponding to R2 is an odd number, R2 = 1.
Since the total number of 1s in all the bit positions corresponding to R3 is an odd number, R3 = 1.
Since the total number of 1s in all the bit positions corresponding to R4 is even, R4 = 0.
The transmitted codeword is therefore 10101001110. Now suppose bit 6 gets corrupted during transmission
and changes from 0 to 1, so the received message becomes '10101101110.' At the receiver's end, the number
of 1's in the bit positions covered by R1, R2, R3, and R4 is rechecked to detect and locate the corrupted bit,
as follows:
For R1: bits 1, 3, 5, 7, 9, and 11 are checked. We can see that the number of 1's in these bit positions
is 4 (even), so R1 = 0.
For R2: bits 2, 3, 6, 7, 10, and 11 are checked. The number of 1's in these bit positions is 5 (odd), so
we get R2 = 1.
For R3: bits 4, 5, 6, and 7 are checked. We see that the number of 1's in these bit positions is 3 (odd).
Hence, R3 = 1.
For R4: bits 8, 9, 10, and 11 are observed. Here, the number of 1's in these bit positions is 2, which is
even, so we get R4 = 0.
Reading the check results in the order R4 R3 R2 R1 gives the binary number 0110, whose decimal
representation is 6. Thus, bit 6 contains an error, and the receiver corrects it by flipping the 6th bit from 1
back to 0.
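A compact sketch of the receiver-side check from this example. Positions are numbered 1 to 11, and each
parity bit p covers exactly the positions whose index has bit p set:

    def hamming_syndrome(code):
        # code[i] holds the bit at position i+1; even parity is assumed.
        syndrome = 0
        for p in (1, 2, 4, 8):
            covered = [bit for i, bit in enumerate(code, start=1) if i & p]
            if sum(covered) % 2:        # parity check fails
                syndrome += p
        return syndrome                 # 0 means no error, else error position

    # Received word '10101101110' from the text, stored position 1 first
    # (i.e. the printed string reversed).
    received = [int(b) for b in reversed("10101101110")]
    pos = hamming_syndrome(received)
    print(pos)                          # -> 6
    received[pos - 1] ^= 1              # flip bit 6 to correct the error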
2. Binary Convolutional Codes
[Figure 3-7. The NASA binary convolutional code used in 802.11: each input bit is shifted through six internal
registers (S1-S6) and produces two output bits.]
In Fig. 3-7, each input bit on the left-hand side produces two output bits on the right-hand side that are
XOR sums of the input and internal state. Since it deals with bits and performs linear operations, this is a
binary, linear convolutional code. Since 1 input bit produces 2 output bits, the code rate is 1/2. It is not
systematic since none of the output bits is simply the input bit.
The internal state is kept in six memory registers. Each time another bit is input, the values in the
registers are shifted to the right. For example, if 111 is input and the initial state is all zeros, the internal
state, written left to right, will become 100000, 110000, and 111000 after the first, second, and third
bits have been input. The output bits will be 11, followed by 10, and then 01. It takes seven shifts to
flush an input completely so that it does not affect the output. The constraint length of this code is thus
k = 7.
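The shift-register behaviour just described can be sketched in a few lines of Python. The tap polynomials
0o133 and 0o171 are the standard pair for this rate-1/2, K = 7 code; their assignment to the first and second
output bit below is an assumption, chosen so the trace matches the 11, 10, 01 example:

    def conv_encode(bits, polys=(0o133, 0o171)):
        state = 0                          # six memory registers, all zero
        out = []
        for u in bits:
            reg = (u << 6) | state         # input bit followed by the registers
            for p in polys:                # one output bit per tap polynomial
                out.append(bin(reg & p).count("1") & 1)   # XOR of tapped bits
            state = (reg >> 1) & 0x3F      # shift the registers right by one
        return out

    print(conv_encode([1, 1, 1]))          # -> [1, 1, 1, 0, 0, 1], i.e. 11 10 01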
A convolutional code is decoded by finding the sequence of input bits that is most likely to have
produced the observed sequence of output bits (which includes any errors). For small values of k, this is
done with a widely used algorithm developed by Viterbi (Forney, 1973). The algorithm walks the
observed sequence, keeping for each step and for each possible internal state the input sequence that
would have produced the observed sequence with the fewest errors. The input sequence requiring the
fewest errors at the end is the most likely message.
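A minimal hard-decision Viterbi decoder for the encoder sketched above (real decoders add traceback
windows and soft inputs; this is only an illustration of the "fewest errors" search):

    def viterbi_decode(received, polys=(0o133, 0o171)):
        INF = float("inf")
        dist = [0] + [INF] * 63            # start in the all-zero state
        paths = [[] for _ in range(64)]
        for i in range(0, len(received), 2):
            pair = received[i:i + 2]
            new_dist = [INF] * 64
            new_paths = [None] * 64
            for state in range(64):
                if dist[state] == INF:
                    continue
                for u in (0, 1):           # extend each survivor with both inputs
                    reg = (u << 6) | state
                    out = [bin(reg & p).count("1") & 1 for p in polys]
                    branch = sum(a != b for a, b in zip(out, pair))
                    nxt = (reg >> 1) & 0x3F
                    if dist[state] + branch < new_dist[nxt]:
                        new_dist[nxt] = dist[state] + branch
                        new_paths[nxt] = paths[state] + [u]
            dist, paths = new_dist, new_paths
        return paths[min(range(64), key=dist.__getitem__)]

    noisy = conv_encode([1, 1, 1, 0, 1])
    noisy[3] ^= 1                          # one bit corrupted in transit
    print(viterbi_decode(noisy))           # -> [1, 1, 1, 0, 1], error corrected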
Convolutional codes have been popular in practice because it is easy to factor the uncertainty of a bit
being a 0 or a 1 into the decoding. For example, suppose that -1V is the logical 0 level and +1V is the
logical 1 level; we might receive 0.9V and -0.1V for 2 bits. Instead
of mapping these signals to 1 and 0 right away, we would like to treat 0.9V as ‘‘very likely a 1’’ and -0.1V
as ‘‘maybe a 0’’ and correct the sequence as a whole. Extensions of the Viterbi algorithm can work with
these uncertainties to provide stronger error correction. This approach of working with the uncertainty
of a bit is called soft-decision decoding. Conversely, deciding whether each bit is a 0 or a 1 before
subsequent error correction is called hard-decision decoding.
3. Reed-Solomon Codes
Reed-Solomon error-correcting codes are among the oldest such codes, introduced in the 1960s by Irving
S. Reed and Gustave Solomon. They are a subclass of non-binary BCH codes. BCH codes (Bose-Chaudhuri-
Hocquenghem codes) are cyclic ECCs constructed using polynomials over data blocks.
A Reed-Solomon encoder accepts a block of data and adds redundant bits (parity bits) before transmitting
it over noisy channels. On receiving the data, a decoder corrects the error depending upon the code
characteristics.
In coding systems with block codes, valid codewords consist of polynomials that are divisible by another
fixed polynomial of short length. This fixed polynomial is called the generator polynomial.
In a Reed-Solomon code, the generator polynomial is constructed with factors such that each root is a
consecutive element in the Galois field. The polynomial is of the form:
g(x) = (x - α)(x - α^2) ... (x - α^(2t))
where α is a primitive element of the field and 2t is the number of parity symbols.
Encoding
The method of encoding in Reed Solomon code has the following steps −
The message is represented as a polynomial p(x) and then multiplied by the generator
polynomial g(x).
The message vector [x1, x2, x3, ..., xk] is mapped to a polynomial of degree less than k such
that p(αi) = xi for all i = 1, ..., k.
The polynomial is evaluated using interpolation methods like Lagrange Interpolation.
Using this polynomial, the remaining points α(k+1), ..., αn are evaluated.
The encoded message is calculated as s(x) = p(x) * g(x). The sender sends this encoded message
along with the generator polynomial g(x).
Decoding
The receiver receives the message r(x) and divides it by the generator polynomial g(x).
If r(x) is exactly divisible by g(x), i.e. the remainder is zero, then no error has occurred.
If the remainder is non-zero, the error polynomial is evaluated from the relation r(x) = p(x) * g(x) + e(x).
The error polynomial gives the error positions.
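As a quick illustration, here is a sketch using the third-party reedsolo Python package (an assumed
dependency; any Reed-Solomon library would serve). RSCodec(10) appends 10 parity symbols, enough to
correct up to 5 corrupted bytes:

    from reedsolo import RSCodec

    rsc = RSCodec(10)
    encoded = rsc.encode(b"hello world")   # message plus 10 parity bytes
    corrupted = bytearray(encoded)
    corrupted[0] ^= 0xFF                   # two bytes damaged in transit
    corrupted[5] ^= 0xFF
    # Recent versions of reedsolo return a tuple; the first element is the
    # recovered message.
    decoded = rsc.decode(bytes(corrupted))[0]
    print(decoded)                         # -> bytearray(b'hello world')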
4. Low-Density Parity Check Codes
A low-density parity check (LDPC) code is specified by a parity-check matrix containing mostly 0s and a low
density of 1s. The rows of the matrix represent the parity equations and the columns represent the bits in
the codeword, i.e. the code symbols.
An LDPC code is represented by (n, j, k), where n is the block length, j is the number of 1s in each column,
and k is the number of 1s in each row; both j and k are small compared to the block length, which is what
makes the matrix low-density.
For comparison, consider the parity check matrix of a Hamming code with n = 7: four information bits
followed by three even parity bits, where the check digits form a diagonal of 1s and each row of the matrix
gives one parity equation.
As an example, consider a (12, 3, 4) LDPC matrix, i.e. n = 12, j = 3 and k = 4. This implies that each
equation operates on 4 code symbols and each code symbol appears in 3 equations. Unlike the parity check
matrix of the Hamming code, this code does not have any diagonal 1s in its parity bits.
Decoding of LDPC Codes
There are two possible decoding techniques of LDPC codes −
In the first technique, the decoder performs all the parity checks as per the parity equations. If any
bit is contained in more than a fixed number of unsatisfied parity equations, the value of that bit
is flipped. Once the new values are obtained, the parity equations are recomputed using them,
and the procedure is repeated until all the parity equations are satisfied (a sketch of this
bit-flipping procedure follows the second technique below).
This decoding procedure is simple but is applicable only when the parity-check sets are small.
The second method performs probabilistic decoding on LDPC graphs. The graph is a sparse
bipartite graph that contains two sets of nodes, one set representing the parity equations and
the other set representing the code symbols. An edge connects a node in the first set to a node
in the second if the corresponding code symbol is present in that equation. Decoding is done by
passing messages along the edges of the graph: messages are passed from message nodes to
check nodes and from check nodes back to message nodes, and the parity values are recalculated
at each pass.
The two subclasses of these methods are belief propagation and maximum likelihood decoding.
Though these decoding algorithms are complex, they yield better results than bit-flipping.
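The bit-flipping technique mentioned above can be sketched as follows, using a toy parity-check matrix H
(illustrative only; it is far smaller and denser than a real LDPC matrix):

    import numpy as np

    def bit_flip_decode(H, word, max_iters=20):
        word = word.copy()
        for _ in range(max_iters):
            unsatisfied = (H @ word) % 2      # one entry per parity equation
            if not unsatisfied.any():         # all checks satisfied: done
                return word
            votes = H.T @ unsatisfied         # failing equations per bit
            word[votes == votes.max()] ^= 1   # flip the most-suspect bits
        return word                           # give up after max_iters

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    sent = np.array([1, 1, 0, 0, 1, 1])       # satisfies all three checks
    received = sent.copy()
    received[0] ^= 1                          # one bit flipped by noise
    print(bit_flip_decode(H, received))       # -> [1 1 0 0 1 1]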