Error Detection Codes
Amandeep Singh
Dept. Of Electronics and Communication
Lovely Professional University, Phagwara
Punjab, India
[email protected]
Abstract: In information theory and coding theory, with applications in computer science and telecommunication, error detection and correction are techniques that enable reliable digital data delivery over unreliable links, since communication channels are prone to noise and errors may therefore be introduced. Error detection techniques allow such errors to be detected, while error correction enables reconstruction of the original data.
Index Terms: CRC, WSC, checksum, error detection.
I. INTRODUCTION
II. TYPES OF ERROR
Whenever bits flow from one point to another, they are subject
to unpredictable changes because of interference. This
interference can change the shape of the signal.
In a single-bit error, a 0 is changed to a 1 or a 1 to a 0. The
term single-bit error means that only 1 bit of a given data unit
(such as a byte, character, or packet) is changed from 1 to 0 or
from 0 to 1.
The term burst error means that 2 or more bits in the data unit
have changed from 1 to 0 or from 0 to 1.
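The difference between the two error types can be sketched with a toy example (the bit pattern and error positions below are hypothetical, chosen only for illustration):

```python
def flip(bits, positions):
    # return a copy of the bit list with the given positions inverted
    out = list(bits)
    for p in positions:
        out[p] ^= 1
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0]
single = flip(data, [3])        # single-bit error: exactly one bit changes
burst = flip(data, [2, 3, 4])   # burst error: several bits change
```

A single-bit error differs from the original in exactly one position, while a burst error corrupts a contiguous span of two or more bits.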
III.
A codeword
(Q0, Q1, ..., Qn-2, Qn-1)
satisfies the following two conditions:
( Σ_{i=0}^{n-1} Qi·Wi ) mod M = 0
Systematic Encoding
Σ_{i=0}^{n-1} Qi ∈ C1
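The weighted-sum condition can be sketched with carry-less (GF(2)) polynomial arithmetic. The modulus M, the weights Wi, and the data symbols below are hypothetical, and the last weight is assumed to be 1 so the final check symbol can be solved for directly:

```python
def clmul(a, b):
    # carry-less multiplication: product of two GF(2) polynomials
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def polymod(a, m):
    # remainder of polynomial a modulo m over GF(2)
    dm = m.bit_length() - 1
    while a.bit_length() > dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

M = 0b1011                      # hypothetical modulus: x^3 + x + 1
W = [0b001, 0b010, 0b100, 0b001]  # weights of degree < 3; W3 = 1 (assumption)
Q = [0b101, 0b011, 0b110]         # hypothetical data symbols

# choose the last symbol so that (sum Qi*Wi) mod M = 0
S = 0
for q, w in zip(Q, W):
    S ^= clmul(q, w)
Q.append(polymod(S, M))

total = 0
for q, w in zip(Q, W):
    total ^= clmul(q, w)
check = polymod(total, M)       # 0: the codeword satisfies the condition
```

This only illustrates the first condition; enforcing membership of the symbol sum in C1 is handled by the systematic encoding step described below.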
Remark 1.
1. C is nonlinear if C1 is nonlinear.
2. The codewords of C have even weights if the codewords of C1 have even weights.
3. The code C1 in Algorithm 1 can be nonsystematic. However, we focus only on systematic codes, which are more often used in practice. Thus, we assume that C1 is an (s, s-m, d1) systematic code with m check bits, 0 < m < s. Let F be the encoder of C1. Then, each codeword of C1 is U·X^m + F(U) = (U, F(U)), where U is an information (s-m)-tuple and F(U) is the corresponding check m-tuple.
4. In Algorithm 1, the weights W0, W1, ..., Wn-1 can be chosen to be distinct polynomials of degree less than r because 1 < n ≤ 2^r. However, Algorithm 1 can be extended to allow n > 2^r; then W0, W1, ..., Wn-1 will not always be distinct.
5. All the codes considered in this paper are binary codes, i.e., their codewords consist of the digits 0 and 1. In particular, the code C is a binary code whose codewords are ns bits long. Computers can efficiently process groups of bits; thus, each ns-bit codeword is grouped into n tuples of s bits each. Note that this binary code C can also be viewed as a code over GF(2^s), i.e., as a code whose codewords consist of n symbols, each belonging to GF(2^s). More generally, suppose that ns = xy for some positive integers x and y; then this same binary code C can also be viewed as a code whose codewords consist of x symbols, each belonging to GF(2^y). In the extreme case (x = 1, y = ns), the code C is also a code whose codewords consist of only one symbol belonging to GF(2^ns). Note that, when the same code is viewed in different alphabets, their
P1 = ( Σ_{i=0}^{n-3} Qi·Wi + U1·X^r ) mod M
P2 = Y2 + F(Y1)
where the Wi are the weighting polynomials and F is the encoder of C1. The tuples Y1 and Y2 are determined as follows: let
Y = Σ_{i=0}^{n-3} Qi + U1·X^r + P1 + U2·X^m,
which is an s-tuple that can be written as
Y = Y1·X^m + Y2 = (Y1, Y2), where Y1 and Y2 are an (s-m)-tuple and an m-tuple, respectively.
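The split of the s-tuple Y into the pair (Y1, Y2) is just bit slicing, which can be sketched as follows (the values of s, m, and Y are hypothetical):

```python
s, m = 8, 3          # hypothetical tuple size and number of check bits
Y = 0b10110101       # hypothetical s-tuple

Y2 = Y & ((1 << m) - 1)   # low m bits: the check part
Y1 = Y >> m               # high s-m bits: the information part

# Reassembling gives back Y, i.e., Y = Y1*X^m + Y2 = (Y1, Y2)
reassembled = (Y1 << m) | Y2
```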
Remark 2.
After P1 is computed, P2 is easily computed when C1 is one of the following four types of codes. The first two types, given in 1 and 2 below, are very trivial, but they are used later in Section 3 to construct all the codes in Fig. 1. The next two types, given in 3 and 4 below, are commonly used in practice for error control.
1. If m = s, then C1 = {0^s}, which is an (s, 0, d1) code, where the minimum distance d1 is undefined. This very trivial code is called a useless code because it carries no useful information. However, it can detect any number of errors, i.e., we can assign d1 = ∞ for this particular code. Further, it can be shown that Theorem 1.2 remains valid when m = s, i.e., dc = 4 if C1 = {0^s}. Then, from Algorithm 1a, we have U2 = 0, F = 0^s, Y1 = 0, and
P2 = Y2 = Y = Σ_{i=0}^{n-3} Qi + U1·X^r + P1.
2. If m = 0, then C1 = {0, 1}^s, which is an (s, s, 1) code. This very trivial code is called a powerless code because it protects no information. From Algorithm 1a, we have Y2 = 0, F = 0,
V.
a. Parity Check
A parity bit, or check bit, is a bit added to the end of a string of binary code that indicates whether the number of bits in the string with the value one is even or odd. Parity bits are the simplest form of error-detecting code.
There are two variants of parity bits: even parity bit and odd
parity bit.
In the case of even parity, the number of bits whose value is 1 in a given set is counted. If that total is odd, the parity bit value is set to 1, making the total count of 1s in the set an even number. If the count of ones in a given set of bits is already even, the parity bit's value remains 0.
In the case of odd parity, the situation is reversed. Instead, if
the sum of bits with a value of 1 is odd, the parity bit's value is
set to zero. And if the sum of bits with a value of 1 is even, the
parity bit value is set to 1, making the total count of 1's in the
set an odd number.
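The two rules above reduce to one line each; a minimal sketch (the function name is illustrative):

```python
def parity_bit(bits, even=True):
    # Even parity: choose the bit so the total count of 1s becomes even.
    # Odd parity: choose the bit so the total count of 1s becomes odd.
    ones = sum(bits)
    if even:
        return ones % 2
    return 1 - (ones % 2)
```

For example, the bits [1, 0, 1, 1] contain three 1s, so the even parity bit is 1 (bringing the total to four) and the odd parity bit is 0 (leaving the total at three).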
Even parity is a special case of a cyclic redundancy
check (CRC), where the 1-bit CRC is generated by
the polynomial x + 1. If the parity bit is present but not used, it may be referred to as mark parity (when the parity bit is always 1) or space parity (when the bit is always 0).
If an odd number of bits (including the parity bit)
are transmitted incorrectly, the parity bit will be incorrect, thus
indicating that a parity error occurred in the transmission. The
parity bit is only suitable for detecting errors; it
cannot correct any errors, as there is no way to determine
which particular bit is corrupted. The data must be discarded
entirely, and re-transmitted from scratch. On a noisy
transmission medium, successful transmission can therefore
take a long time, or even never occur. However, parity has the
advantage that it uses only a single bit and requires only a
number of XOR gates to generate.
b. Cyclic Redundancy Check (CRC)
A cyclic redundancy check (CRC) is an error-detecting
code commonly used in digital networks and storage devices
to detect accidental changes to raw data. Blocks of data
entering these systems get a short check value attached, based
on the remainder of a polynomial division of their contents; on
retrieval the calculation is repeated, and corrective action can
be taken against presumed data corruption if the check values
do not match.
CRCs are so called because the check (data verification) value is a redundancy (it expands the message without adding information) and the algorithm is based on cyclic codes. CRCs are popular because they are simple to implement in binary hardware, easy to analyze mathematically, and particularly good at detecting common errors caused by noise in transmission channels.
CRCs are specifically designed to protect against common types of errors on communication channels, where they can provide quick and reasonable assurance of the integrity of messages delivered.
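The attach-and-recompute scheme described above can be sketched with a bit-serial CRC-8; the generator polynomial x^8 + x^2 + x + 1 (encoded as 0x07) is one common choice, used here only as an example:

```python
def crc8(data, poly=0x07, init=0x00):
    # bit-serial CRC over GF(2); poly holds the low 8 coefficients
    # of the generator polynomial x^8 + x^2 + x + 1
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# sender attaches the check value; receiver recomputes and compares
msg = b"Hello"
check = crc8(msg)
corrupted = b"Hallo"   # one corrupted byte gives a different CRC
```

Because a single corrupted byte is a burst of at most 8 bits, an 8-bit CRC is guaranteed to detect it.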
d. CXOR Checksum
Suppose now that we allow some of the polynomials W0, W1, ..., Wn-1 to be repeated and we use Algorithm 1 along with variation (9). Let r = s = m, M = X^s + 1, and Wi = X^i mod M. It can be shown that Wi+s = Wi for all i ≥ 1, i.e., some of the weighting polynomials may repeat. Then, C1 = {0^s} (because m = s), U1 = 0 (because r = s), and U2 = Y1 = 0 (because m = s). We have
P1 = ( Σ_{i=0}^{n-3} Qi·X^i ) mod (X^s + 1).
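Since X^s ≡ 1 mod (X^s + 1), multiplying an s-bit tuple by X^i mod M is a circular left rotation by i positions, so the CXOR checksum is an XOR of rotated words. A minimal sketch, assuming s = 8 (the word size and data are illustrative):

```python
def rotl(x, k, s=8):
    # rotate an s-bit word left by k positions
    # (multiplication by X^k modulo X^s + 1 over GF(2))
    k %= s
    return ((x << k) | (x >> (s - k))) & ((1 << s) - 1)

def cxor(words, s=8):
    # XOR each word rotated by its index: weights Wi = X^i mod (X^s + 1)
    chk = 0
    for i, w in enumerate(words):
        chk ^= rotl(w, i, s)
    return chk
```

Unlike a plain XOR of the words, the rotation makes the checksum sensitive to word order: swapping two different words generally changes the result.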
e.
Now, consider the code C that has the same length and the same weighting polynomials as the above CRC. Let r = h and m = 0. Then, P2 = 0 and P1 = P. Thus, this particular code C is identical to the above CRC.
P2 = Σ_{i=0}^{2^m-2} Qi
their binary polynomial counterparts (see Fig. 1). See [4], [5], [9] for definitions and performance comparisons of error-detection codes, including the ones-complement and Fletcher checksums.
Thus, the integer-arithmetic version also produces the ones-complement and Fletcher checksums. We will not discuss these checksums and integer-based codes any further because they are often weaker than their polynomial counterparts and their analyses can be found elsewhere.
VI.
The code with the best error-detection properties has the largest values of d, b, and k for a given h. Assuming the most probable errors are those that change the fewest bits, the most important parameter is d. The second most important parameter is b, because a larger b guarantees detecting a larger single burst of errors. The parameter k is least important because k need only be large enough to cover the largest block size (a larger k gives no additional benefit). Fig. 1 shows the five codes listed above in order of their error-detection capabilities. CRC has the best error-detection properties. Weighted Sum Codes are almost as good, with only a more limited error-detection block size, while the other three codes have inferior error-detection properties. The CXOR and Internet checksums are very weak because they guarantee detection of only single-bit errors. While the Fletcher checksum can detect all two-bit errors, it can guarantee only half the burst-error detection of the other codes.
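The importance of the parameter d can be illustrated with the weakest code above: a single even-parity bit has minimum distance 2, so some two-bit errors pass undetected (the message and error positions below are hypothetical):

```python
def even_parity(bits):
    # 1 if the count of 1s is odd, so appending it makes the total even
    return sum(bits) % 2

msg = [1, 0, 1, 1, 0, 1, 0, 0]
bad = list(msg)
bad[2] ^= 1
bad[5] ^= 1   # a two-bit error leaves the count of 1s even
# even_parity(bad) == even_parity(msg): the error is invisible to parity
```

A code with d ≥ 3, such as a CRC with a well-chosen polynomial, is guaranteed to detect any such two-bit error within its block size.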
VII. REFERENCES