L22 ErrorCodes
Error detecting codes enable the detection of errors in data, but do not determine the
precise location of the error.
- store a few extra state bits per data word to indicate a necessary condition for the
data to be correct
- if data state does not conform to the state bits, then something is wrong
- e.g., represent the correct parity (# of 1’s) of the data word
- 1-bit parity codes fail if 2 bits are wrong…
If the parity bit is 1, the data should have an odd number of 1's.
A 1-bit parity code is a distance-2 code, in the sense that at least 2 bits must be changed
(among the data and parity bits) to produce an incorrect but legal pattern. In other words,
any two legal patterns are separated by a distance of at least 2.
The parity bit could be stored at any fixed location with respect to the corresponding data
bits.
Upon receipt of data and parity bit(s), a check is made to see whether or not they
correspond.
A 1-bit parity code cannot detect errors involving two bit-flips (or any even number of bit-flips).
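To make that concrete, here is a minimal C sketch (not from the lecture; the data word and function names are invented for illustration) that computes a 1-bit parity code, catches a single bit-flip, and, as noted above, fails to catch a double flip:

/* A minimal sketch of 1-bit parity over a 32-bit data word.
 * Convention from above: the parity bit is 1 exactly when the
 * data word contains an odd number of 1s.                      */
#include <stdio.h>
#include <stdint.h>

/* Parity bit = XOR of all data bits. */
static int parity_bit(uint32_t data) {
    int p = 0;
    while (data) {
        p ^= (data & 1);
        data >>= 1;
    }
    return p;
}

/* Check: recompute the parity and compare with the stored bit. */
static int parity_ok(uint32_t data, int stored_parity) {
    return parity_bit(data) == stored_parity;
}

int main(void) {
    uint32_t word = 0x5A;            /* hypothetical data word */
    int p = parity_bit(word);

    printf("single flip detected: %s\n",
           parity_ok(word ^ 0x01, p) ? "no" : "yes");   /* yes */
    /* Two flips cancel out in the XOR, so the check still passes. */
    printf("double flip detected: %s\n",
           parity_ok(word ^ 0x03, p) ? "no" : "yes");   /* no  */
    return 0;
}

The double flip goes unnoticed because the two changes cancel in the XOR, leaving the recomputed parity unchanged.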
Suppose the parity bits are correct (011) and the data bits (0100) contain an error.
If we assume that only one bit flipped, we can conclude the correction is that the data
bits should have been 0110. If we assume that two bits flipped, we have two equally
plausible corrections.

data bits   parity bits
0110        011
0111        000
1000        111
1001        100
1010        010
1011        001

Suppose the data bits (0100) are correct and the parity bits (011) contain an error:
Hamming codes use extra parity bits, each reflecting the correct parity for a different
subset of the bits of the code word.
- p1: all higher bit positions k where the 2^0 bit of k is set (1's bit)
- p2: all higher bit positions k where the 2^1 bit of k is set (2's bit)
…
- pn: all higher bit positions k where the 2^(n-1) bit of k is set
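As a small illustration of this rule (a sketch of my own, assuming a 7-bit code word with positions numbered 1 through 7), the loop below prints, for each parity bit, the code-word positions whose index has the corresponding power-of-two bit set:

/* Sketch: which code-word positions does each Hamming parity bit cover?
 * Position k (1-based) is covered by p_i when bit (i-1) of k is set.   */
#include <stdio.h>

int main(void) {
    int nbits = 7;                       /* 7-bit code word, positions 1..7 */
    for (int i = 1; (1 << (i - 1)) <= nbits; i++) {
        printf("p%d covers positions:", i);
        for (int k = 1; k <= nbits; k++)
            if (k & (1 << (i - 1)))
                printf(" %d", k);
        printf("\n");
    }
    return 0;
}
/* Output:
 *   p1 covers positions: 1 3 5 7
 *   p2 covers positions: 2 3 6 7
 *   p3 covers positions: 4 5 6 7
 */

These are exactly the subsets used in the encoding example that follows.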
Hamming encoding:
position:  b111  b110  b101  b100  b011  b010  b001
contents:  d4    d3    d2    p3    d1    p2    p1
value:     1     0     1     ?     1     ?     ?
This means that each data bit is used to define at least two different parity bits; that
redundancy turns out to be valuable.
Hamming encoding:
position:  b111  b110  b101  b100  b011  b010  b001
contents:  d4    d3    d2    p3    d1    p2    p1
value:     1     0     1     p3    1     p2    p1

p1 covers d1, d2, d4: 3 ones, so p1 == 1
p2 covers d1, d3, d4: 2 ones, so p2 == 0
p3 covers d2, d3, d4: 2 ones, so p3 == 0

Resulting code word: 1010101
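The same computation can be written out as code. The sketch below is my own illustration (not the lecture's code); it assumes the parity convention used above, in which each parity bit equals the XOR of the data bits it covers, and it reproduces the code word 1010101 for data bits d4 d3 d2 d1 = 1011:

/* Sketch of the Hamming (7,4) encoding worked out above.
 * Code-word positions are 1..7; parity bits sit at positions 1, 2, 4
 * (p1, p2, p3) and data bits d1..d4 at positions 3, 5, 6, 7.          */
#include <stdio.h>

/* Encode 4 data bits (bit 0 = d1 ... bit 3 = d4) into 7 code bits
 * (bit 0 = position 1 ... bit 6 = position 7).                        */
static unsigned hamming74_encode(unsigned data) {
    unsigned code = 0;

    /* Place the data bits at positions 3, 5, 6, 7. */
    code |= ((data >> 0) & 1) << (3 - 1);   /* d1 -> position 3 */
    code |= ((data >> 1) & 1) << (5 - 1);   /* d2 -> position 5 */
    code |= ((data >> 2) & 1) << (6 - 1);   /* d3 -> position 6 */
    code |= ((data >> 3) & 1) << (7 - 1);   /* d4 -> position 7 */

    /* p_i covers every position whose index has bit (i-1) set. */
    for (int i = 0; i < 3; i++) {
        unsigned p = 0;
        for (int k = 1; k <= 7; k++)
            if (k & (1 << i))
                p ^= (code >> (k - 1)) & 1;
        code |= p << ((1 << i) - 1);        /* parity goes at position 2^i */
    }
    return code;
}

int main(void) {
    unsigned code = hamming74_encode(0xB);  /* d4 d3 d2 d1 = 1 0 1 1 */

    /* Print positions 7 down to 1, matching the table above. */
    for (int k = 7; k >= 1; k--)
        printf("%u", (code >> (k - 1)) & 1);
    printf("\n");                           /* expected: 1010101 */
    return 0;
}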
We can use it to reliably determine that an error has occurred if no more than 2 received bits
have flipped, but we cannot distinguish a 1-bit flip from a 2-bit flip.
We can use it to determine a correction, under the assumption that no more than one
received bit is incorrect.
The Hamming (8,4) code allows us to distinguish 1-bit errors from 2-bit errors. Therefore, it
allows us to reliably correct errors that involve single bit flips.
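Here is a sketch of how the correction step might look for the 7-bit code (again my own illustration, not the lecture's code). The syndrome, computed as the XOR of the positions of all 1 bits, is 0 for a legal code word, names the flipped position after a single flip, and after a double flip is nonzero but points at a third, innocent position; that is exactly why the 7-bit code cannot tell the two cases apart:

/* Sketch: syndrome decoding of the 7-bit code word above.
 * The syndrome is the XOR of the (1-based) positions of all 1 bits;
 * it is 0 for a legal code word and equals the position of the
 * flipped bit when exactly one bit has flipped.                     */
#include <stdio.h>

static unsigned syndrome7(unsigned code) {
    unsigned s = 0;
    for (unsigned k = 1; k <= 7; k++)
        if ((code >> (k - 1)) & 1)
            s ^= k;
    return s;
}

int main(void) {
    unsigned good = 0x55;            /* 1010101 from the example above */

    /* One flip: the syndrome names the flipped position -> correctable. */
    unsigned one = good ^ (1u << 4);              /* flip position 5 */
    printf("syndrome after 1 flip: %u\n", syndrome7(one));   /* 5 */

    /* Two flips: the syndrome is still nonzero, but it points at a
     * third position, so "correcting" it would give the wrong word.  */
    unsigned two = good ^ (1u << 4) ^ (1u << 1);  /* flip positions 5, 2 */
    printf("syndrome after 2 flips: %u\n", syndrome7(two));  /* 5^2 = 7 */
    return 0;
}

The extra, overall parity bit of the (8,4) code resolves the ambiguity: a single flip also breaks the overall parity, while a double flip leaves it intact.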
The Hamming Code pattern defined earlier can be extended to data of any width and, with
some modifications, to support correction of single-bit errors.
Suppose we have a 7-bit data word and use 4 Hamming parity bits:

P1: depends on D1, D2, D4, D5, D7
P2: depends on D1, D3, D4, D6, D7
P3: depends on D2, D3, D4
P4: depends on D5, D6, D7

Suppose that such a data word and its parity bits are received, and that one data bit, say D4,
has flipped while all others are correct. Then D4 being wrong will cause three parity bits to
not match the data: P1, P2, P3. And, assuming only one bit is involved, we know it must be
D4 because that's the only bit involved in all three of the nonmatching parity bits.

Now suppose instead that one parity bit, say P3, has flipped while all other bits are correct.
Then the other parity bits will all still match the data: P1, P2, P4. And, assuming only one
bit is involved, we know it must be P3 because if any single data bit had flipped, at least
two parity bits would have been affected.
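The D4 scenario can be replayed in a short C sketch (my illustration; the data value is arbitrary and the helper names are invented). It Hamming-encodes a 7-bit data word into an 11-bit code word, flips D4 (code-word position 7), recomputes the four checks, and adds up the positions of the checks that fail:

#include <stdio.h>

#define NBITS 11    /* 7 data bits + 4 parity bits, positions 1..11 */

/* Recompute parity check i (i = 0..3, parity bit at position 2^i):
 * returns 1 if the XOR over all covered positions is 0 (check passes). */
static int check_ok(unsigned code, int i) {
    unsigned x = 0;
    for (unsigned k = 1; k <= NBITS; k++)
        if ((k & (1u << i)) && ((code >> (k - 1)) & 1))
            x ^= 1;
    return x == 0;
}

/* Encode 7 data bits (bit 0 = D1 ... bit 6 = D7). */
static unsigned encode11(unsigned data) {
    static const int dpos[7] = { 3, 5, 6, 7, 9, 10, 11 };  /* D1..D7 */
    unsigned code = 0;
    for (int j = 0; j < 7; j++)
        code |= ((data >> j) & 1u) << (dpos[j] - 1);
    for (int i = 0; i < 4; i++)             /* P1..P4 at positions 1, 2, 4, 8 */
        if (!check_ok(code, i))
            code |= 1u << ((1 << i) - 1);
    return code;
}

int main(void) {
    unsigned code = encode11(0x5B);           /* hypothetical data D7..D1 */
    unsigned recv = code ^ (1u << (7 - 1));   /* flip D4 (position 7)     */

    unsigned err_pos = 0;
    for (int i = 0; i < 4; i++) {
        int ok = check_ok(recv, i);
        printf("P%d %s\n", i + 1, ok ? "matches" : "does NOT match");
        if (!ok)
            err_pos += 1u << i;        /* add that check's position */
    }
    printf("error position: %u (= D4)\n", err_pos);   /* 1 + 2 + 4 = 7 */
    return 0;
}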
If we accommodate data chunks that are some number of bytes, and manage the parity
bits so that they fit nicely into byte-sized chunks, we can handle the data + parity more
efficiently.
For example:
(12,8): 8-bit data chunks and 4 bits of parity, so… 1 byte of parity per 2 bytes of data
(72,64): 8-byte (64-bit) data chunks and 1 byte of parity bits, so… 9-byte chunks in all
Suppose we have an 8-bit data word and use 4 Hamming parity bits:
How can we determine whether it's correct? Check the parity bits and see which, if any,
are incorrect. If they are all correct, we must assume the string is correct. Of course, it
might contain so many errors that we can't even detect their occurrence, but in that case
we have a communication channel that's so noisy that we cannot use it reliably.
Received string: 0 1 1 1 0 1 0 0 1 1 1 0
Parity checks: P1: OK, P2: OK, P3: WRONG, P4: WRONG
So, what does that tell us, aside from the fact that something is incorrect? Well, if we assume
there's no more than one incorrect bit, we can say that because the incorrect parity bits are
in positions 4 (0100) and 8 (1000), the incorrect bit must be in position 12 (1100).
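That last step is just arithmetic on the positions of the failing checks, as in this tiny sketch (mine, not the lecture's):

/* Sketch of the reasoning above: with parity checks at positions
 * 1, 2, 4, 8, the position of a single flipped bit is the sum of
 * the positions of the parity checks that fail.                   */
#include <stdio.h>

int main(void) {
    /* Results of the four checks for the received string above:
     * P1 (pos 1): OK, P2 (pos 2): OK, P3 (pos 4): WRONG, P4 (pos 8): WRONG. */
    int failed[4] = { 0, 0, 1, 1 };

    unsigned err_pos = 0;
    for (int i = 0; i < 4; i++)
        if (failed[i])
            err_pos += 1u << i;        /* add that check's position */

    printf("flipped bit is at position %u\n", err_pos);   /* 4 + 8 = 12 */
    return 0;
}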