Hamming Code

The document discusses block codes, specifically focusing on Hamming codes, which are used for error detection and correction in data transmission. It explains the construction of Hamming codes, including the (7,4) code, and details how parity-check bits are calculated and used to detect errors. Additionally, it describes the decoding process and provides examples of how to identify and correct errors using a syndrome table.

The probability of obtaining 3 errors in a 6-bit word (giving equal numbers of zeros and ones) is given by eqn 1.6; with n = 6 and i = 3,

P3 = 6C3 p^3 (1 - p)^3

and substituting p = 0.01 gives P3 = 1.94 x 10^-5. The probability of a decoding failure is therefore P = P3 = 1.94 x 10^-5.


The repetition codes provide a very simple way of carrying out error correction and may be suitable in a system where a high level of redundancy is acceptable. But if a channel is to be used efficiently, so that high levels of redundancy are not acceptable, then repetition codes are quite inadequate and error-correcting codes are required that make better use of redundancy.

1.6 Hamming codes

The repetition codes, product codes, and single-parity-check codes considered in previous sections can be thought of as 'first steps' towards achieving error detection and correction, and the codes allow a somewhat limited degree of error control. Moving on towards more interesting codes, the Hamming codes were the first class of linear codes devised for error control and, as we shall see in Chapter 2, linearity is a good property for a code to possess. The Hamming codes occupy an important position in the history of error-control codes and we will refer to them repeatedly throughout the book. Here the Hamming codes are introduced; later they are reconsidered in terms of their linear properties.
The first in the class of Hamming codes is the (7,4) code that takes 4-bit information words and encodes them into 7-bit codewords. Three parity-check bits are required; these are determined from the information bits using eqns 1.17 shown below. Given an information word i = (i1, i2, i3, i4), the parity-check bits are

p1 = i1 + i2 + i3
p2 = i2 + i3 + i4        (1.17)
p3 = i1 + i2 + i4

where the information bits are added together using modulo-2 addition (see Table 1.4). Appending the parity bits to the information word gives the codeword

c = (i1, i2, i3, i4, p1, p2, p3).        (1.18)

Table 1.6 shows the set of codewords for the (7,4) code. There are 16 codewords, one for each information word. For reference purposes, the codewords and information words are labelled c0 to c15 and i0 to i15 respectively. The subscript i of the codeword ci gives the numerical value of the corresponding information word; for example c9 is the codeword corresponding to the information word i9 = (1 0 0 1). Note that here, as with the repetition codes, the word 'parity' does not refer to whether there are an even or odd number of ones in a word, but rather refers to the code's check bits, irrespective of the code's property or structure.
Table 1.6 The (7,4) Hamming code

Information words          Codewords
i = (i1, i2, i3, i4)       c = (i1, i2, i3, i4, p1, p2, p3)
i0  = (0 0 0 0)            c0  = (0 0 0 0 0 0 0)
i1  = (0 0 0 1)            c1  = (0 0 0 1 0 1 1)
i2  = (0 0 1 0)            c2  = (0 0 1 0 1 1 0)
i3  = (0 0 1 1)            c3  = (0 0 1 1 1 0 1)
i4  = (0 1 0 0)            c4  = (0 1 0 0 1 1 1)
i5  = (0 1 0 1)            c5  = (0 1 0 1 1 0 0)
i6  = (0 1 1 0)            c6  = (0 1 1 0 0 0 1)
i7  = (0 1 1 1)            c7  = (0 1 1 1 0 1 0)
i8  = (1 0 0 0)            c8  = (1 0 0 0 1 0 1)
i9  = (1 0 0 1)            c9  = (1 0 0 1 1 1 0)
i10 = (1 0 1 0)            c10 = (1 0 1 0 0 1 1)
i11 = (1 0 1 1)            c11 = (1 0 1 1 0 0 0)
i12 = (1 1 0 0)            c12 = (1 1 0 0 0 1 0)
i13 = (1 1 0 1)            c13 = (1 1 0 1 0 0 1)
i14 = (1 1 1 0)            c14 = (1 1 1 0 1 0 0)
i15 = (1 1 1 1)            c15 = (1 1 1 1 1 1 1)
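The encoding rule of eqns 1.17 and 1.18 can be checked mechanically. The following Python sketch (ours, not the book's; the function name is an assumption) regenerates the 16 codewords of Table 1.6:

```python
# Encoder for the (7,4) Hamming code of eqns 1.17 and 1.18.
# Bit ordering follows the text: c = (i1, i2, i3, i4, p1, p2, p3).

def hamming74_encode(i):
    """Encode a 4-bit information word (sequence of 0/1) into a 7-bit codeword."""
    i1, i2, i3, i4 = i
    p1 = (i1 + i2 + i3) % 2   # eqns 1.17, modulo-2 addition
    p2 = (i2 + i3 + i4) % 2
    p3 = (i1 + i2 + i4) % 2
    return [i1, i2, i3, i4, p1, p2, p3]

# Regenerate Table 1.6: codewords c0 to c15, where the subscript is the
# numerical value of the information word (i1 is the most significant bit).
for n in range(16):
    info = [(n >> (3 - b)) & 1 for b in range(4)]
    print(f"i{n} = {info}  ->  c{n} = {hamming74_encode(info)}")
```

Running the loop reproduces the table, for example c9 = (1 0 0 1 1 1 0) for i9 = (1 0 0 1).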

At the decoding stage the received word is

v = (v1, v2, v3, v4, v5, v6, v7)

where

v1 = i1
v2 = i2
v3 = i3
v4 = i4
v5 = p1
v6 = p2
v7 = p3

if no errors occur. The decoder determines 3 parity-check sums

s1 = (v1 + v2 + v3) + v5
s2 = (v2 + v3 + v4) + v6        (1.19)
s3 = (v1 + v2 + v4) + v7.

The first 3 bits in each parity-check sum correspond to the same combination of information bits as that used in the construction of the parity bits (see eqns 1.17); they are enclosed in parentheses to emphasize this correspondence. From the parity-check sums we can define

s = (s1, s2, s3)        (1.20)


which is known as the error syndrome, or syndrome, of v. The parity-check sums are constructed such that they are zero if no errors occur. For example, if there are no errors then

s1 = p1 + p1 = 0

irrespective of whether p1 is 0 or 1 (because p1 + p1 = 0 in modulo-2 addition). Likewise we can show that s2 = s3 = 0 when there are no errors, and therefore the error syndrome s = (0 0 0) when there are no errors. Consider now the codeword c11 = (1 0 1 1 0 0 0); if it incurs no errors then it will give v = (1 0 1 1 0 0 0) as the decoder input and the resulting parity-check sums will be

s1 = (1 + 0 + 1) + 0 = 0
s2 = (0 + 1 + 1) + 0 = 0
s3 = (1 + 0 + 1) + 0 = 0

which again gives s = (0 0 0). Likewise if we take any codeword from Table 1.6 we get the error syndrome s = (0 0 0), and so the error syndrome of a codeword is always zero. The construction of the parity-check bits and parity-check sums is such that the error syndrome of any codeword is zero.
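The parity-check sums of eqns 1.19 are easily computed directly; a minimal sketch (the function name is ours):

```python
# Parity-check sums of eqns 1.19; s = (s1, s2, s3) is the error syndrome of v.

def hamming74_syndrome(v):
    v1, v2, v3, v4, v5, v6, v7 = v
    s1 = (v1 + v2 + v3 + v5) % 2   # modulo-2 addition
    s2 = (v2 + v3 + v4 + v6) % 2
    s3 = (v1 + v2 + v4 + v7) % 2
    return (s1, s2, s3)

# The codeword c11 = (1011000) gives the zero syndrome...
assert hamming74_syndrome([1, 0, 1, 1, 0, 0, 0]) == (0, 0, 0)
# ...whereas a word that is not a codeword does not.
assert hamming74_syndrome([1, 1, 0, 1, 0, 1, 0]) == (0, 1, 1)
```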

Example 1.9

Let c7 = (0 1 1 1 0 1 0) be a codeword sent over a channel. If no errors occur then the received word is v = c7 = (0 1 1 1 0 1 0) and using eqns 1.19 gives the parity-check sums

s1 = (0 + 1 + 1) + 0 = 0
s2 = (1 + 1 + 1) + 1 = 0
s3 = (0 + 1 + 1) + 0 = 0

giving an error syndrome of s = (s1, s2, s3) = (0 0 0).


A zero error syndrome is always obtained when there are no errors, and a nonzero error syndrome can only arise if errors have occurred. However, the occurrence of errors does not necessarily give a nonzero error syndrome, as a zero error syndrome is also obtained whenever an error pattern changes a codeword into a different codeword. For example, if the codeword c5 = (0 1 0 1 1 0 0) incurs the error e = (0 1 1 0 0 0 1) then, using eqn 1.1, the decoder input is

v = c + e
  = (0 1 0 1 1 0 0) + (0 1 1 0 0 0 1)
  = (0 0 1 1 1 0 1)

which is the codeword c3. The resulting parity-check sums are

s1 = (0 + 0 + 1) + 1 = 0
s2 = (0 + 1 + 1) + 0 = 0
s3 = (0 + 0 + 1) + 1 = 0
giving s = (0 0 0), and therefore the error pattern is undetectable. The occurrence of a single error gives an error syndrome s that depends uniquely on the position of the error within the codeword. Furthermore, s is independent of the codeword incurring the error, because the error syndrome of a codeword is always zero. For example consider the codeword c12 = (1 1 0 0 0 1 0) incurring the error pattern e = (0 0 0 1 0 0 0); this gives

v = c + e
  = (1 1 0 0 0 1 0) + (0 0 0 1 0 0 0)
  = (1 1 0 1 0 1 0)

and the parity-check sums are

s1 = (1 + 1 + 0) + 0 = 0
s2 = (1 + 0 + 1) + 1 = 1
s3 = (1 + 1 + 1) + 0 = 1

giving s = (0 1 1). Consider now some other codeword, say c3 = (0 0 1 1 1 0 1), incurring the same error pattern e = (0 0 0 1 0 0 0); here v = (0 0 1 0 1 0 1) and again we get s = (0 1 1). Any of the 16 codewords belonging to the (7,4) code incurring the error e = (0 0 0 1 0 0 0) will give the error syndrome s = (0 1 1). Table 1.7 shows the 7 different single errors e that can occur in a 7-bit word, along with their corresponding error syndromes s. The error syndrome s = (0 0 0), obtained when no errors occur, is included in the table. The table can be used for single-error correction by 'looking up' the error pattern e corresponding to a given error syndrome s. The position of the nonzero bit in e gives the position of the error in v, and on locating the error the erroneous bit is corrected by inverting it. Table 1.7 is referred to as a syndrome table.

Table 1.7 Syndrome table for the (7,4) Hamming code

Error pattern e                   Error syndrome s
(e1, e2, e3, e4, e5, e6, e7)      (s1, s2, s3)
(0 0 0 0 0 0 0)                   (0 0 0)
(0 0 0 0 0 0 1)                   (0 0 1)
(0 0 0 0 0 1 0)                   (0 1 0)
(0 0 0 0 1 0 0)                   (1 0 0)
(0 0 0 1 0 0 0)                   (0 1 1)
(0 0 1 0 0 0 0)                   (1 1 0)
(0 1 0 0 0 0 0)                   (1 1 1)
(1 0 0 0 0 0 0)                   (1 0 1)
Note that the error pattern obtained from the syndrome table and the resulting codeword are the decoder's guess, or estimate, of the error pattern e and codeword c, and are denoted by ê and ĉ respectively. Decoding can be summarized in the three steps:

1. Calculate s from the decoder input v.
2. From the syndrome table obtain the error pattern ê that corresponds to s.
3. The required codeword is then given by ĉ = v + ê; this has the effect of inverting the bit in v given by the position of the nonzero bit in ê.

In the event of a single error occurring, the syndrome gives the resulting error pattern and the correct codeword is obtained. All single errors can be corrected, and therefore the (7,4) Hamming code is a single-error-correcting code.
Example 1.10

Given that a codeword c of the (7,4) Hamming code incurs a single error, giving v = (1 0 1 1 0 0 1), find c.

Using eqns 1.19 gives the parity-check sums

s1 = (1 + 0 + 1) + 0 = 0
s2 = (0 + 1 + 1) + 0 = 0
s3 = (1 + 0 + 1) + 1 = 1

and so the error syndrome s = (s1, s2, s3) = (0 0 1). From Table 1.7, s = (0 0 1) gives the error pattern (0 0 0 0 0 0 1). Inverting the right-hand bit of v gives (1 0 1 1 0 0 0), which is the codeword c11.
The syndrome table for the (7,4) code contains all possible values of s = (s1, s2, s3), so whenever two or more errors occur the resulting error syndrome will always be one that corresponds to no errors (s = 0) or to a single error (s ≠ 0), and on both occasions a decoding error occurs. There are no decoding failures and therefore decoding is complete. For example consider c9 = (1 0 0 1 1 1 0), along with the double-error pattern e = (0 0 1 0 0 1 0), so giving v = c9 + e = (1 0 1 1 1 0 0). The error syndrome of v is s = (1 0 0), and referring to Table 1.7 we get ê = (0 0 0 0 1 0 0). Hence ĉ = v + ê = (1 0 1 1 0 0 0), which is c11 and not c9. Whilst the two errors have been detected (because s ≠ 0), a decoding error has ultimately occurred. All double errors give a nonzero syndrome and are therefore detectable. We have already seen that the code can detect and correct single errors, and therefore the (7,4) Hamming code can detect single and double errors or can correct single errors. Note that although the code can detect up to 2 errors, this is only interpreted as error detection if error correction is not implemented. If error correction is carried out then decoding is viewed as a correction, and not a detection, process. Hence it is in this sense that we think of the code as being able to detect up to two errors or correct single errors. The code cannot be used for jointly carrying out error detection and correction, for this requires codes with greater error-control capability (see Section 1.7).

We have seen that the (7,4) code is guaranteed to detect all single and double errors; however, other error patterns are also detectable. For example some triple errors and 4-bit errors are detectable, as shown in the example below.
Example 1.11

Given that the (7,4) code is used for error detection only, determine the outcome when the codeword c6 = (0 1 1 0 0 0 1) incurs:

(a) the triple error e1 = (0 1 0 1 1 0 0);
(b) the triple error e2 = (1 0 0 1 0 1 0);
(c) the 4-bit error e3 = (1 1 1 0 1 0 0);
(d) the 4-bit error e4 = (0 1 0 1 1 0 1).

(a) If c6 = (0 1 1 0 0 0 1) incurs the error e1 = (0 1 0 1 1 0 0) then v = c6 + e1 = (0 0 1 1 1 0 1). The parity-check sums are s1 = 0, s2 = 0, and s3 = 0, and so s = (s1, s2, s3) = (0 0 0). The error is therefore not detected and so a decoding error has occurred.
(b) When c6 incurs the error e2 = (1 0 0 1 0 1 0) we get v = (1 1 1 1 0 1 1). The error syndrome is now s = (1 0 0) and so the error has been detected.
(c) For c6 and e3, v = (1 0 0 0 1 0 1) and the error syndrome s = (0 0 0). The four errors are not detected.
(d) Here c6 and e4 give v = (0 0 1 1 1 0 0) and error syndrome s = (0 0 1). The four errors are therefore detected.

Note that the triple-error pattern e1 and the 4-bit error pattern e3 are undetected because they are identical to the codewords c5 and c14 respectively. The other two error patterns do not resemble any of the codewords and are therefore detectable.
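The outcomes above follow from a single observation: in detection-only use an error pattern escapes detection exactly when it is itself a codeword, since only then is the syndrome of v = c + e zero. A sketch (function names are ours):

```python
# Detection-only check for the (7,4) code: an error pattern e is undetectable
# exactly when v = c + e has zero syndrome, i.e. when e is one of the 16 codewords.

def syndrome(w):
    w1, w2, w3, w4, w5, w6, w7 = w
    return ((w1 + w2 + w3 + w5) % 2,
            (w2 + w3 + w4 + w6) % 2,
            (w1 + w2 + w4 + w7) % 2)

def detected(c, e):
    v = [(cb + eb) % 2 for cb, eb in zip(c, e)]   # received word v = c + e
    return syndrome(v) != (0, 0, 0)

c6 = [0, 1, 1, 0, 0, 0, 1]
assert not detected(c6, [0, 1, 0, 1, 1, 0, 0])   # (a) e1 equals c5: undetected
assert detected(c6, [1, 0, 0, 1, 0, 1, 0])       # (b) triple error detected
```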
The (7,4) Hamming code is the first code in the class of single-error-correcting codes whose blocklengths n and information lengths k satisfy

n = 2^r - 1
k = 2^r - 1 - r        (1.21)

for any integer r >= 3, and where r = n - k gives the number of parity-check bits. Taking r = 3 gives the (7,4) code already considered. For r = 4 we get the (15,11) Hamming code, which has 11-bit information words, 15-bit codewords, and 4 parity-check bits. Given the information word i = (i1, i2, ..., i11), the parity-check bits p1 to p4 are again formed as modulo-2 sums of selected information bits (eqns 1.22), so giving the codeword c = (i1, i2, ..., i11, p1, p2, p3, p4). The parity-check sums and syndrome table are constructed in the same way as those for the (7,4) code.
Table 1.8 shows the number of codewords and error syndromes for the (2^r - 1, 2^r - 1 - r) Hamming codes for values of r = 3, 4, 5, and 6. Note that the number of error syndromes rises much less rapidly with r than the number of codewords. Error detection and correction can be achieved through the use of tables of codewords, but this becomes impractical for large values of n and k. Decoding based on a syndrome table is usually a practical alternative, due to the number of error syndromes being significantly less than the number of codewords.

Table 1.8 The Hamming codes for r = 3 to 6

r    (n, k)          Number of codewords    Number of syndromes
3    (7, 4)          16                     8
4    (15, 11)        2,048                  16
5    (31, 26)        67 x 10^6              32
6    (63, 57)        1.4 x 10^17            64
r    (2^r - 1, k)    2^k                    2^r

where k = 2^r - 1 - r.

An additional parity-check bit can be added to the (7,4) code to give codewords with even parity. The resulting code is known as the (8,4) extended Hamming code and is capable of jointly correcting single errors and detecting double errors (see Section 2.7).

The use of a syndrome table for error correction is not restricted to the Hamming codes but can be applied to any block code. The syndrome table consists of all correctable error patterns along with their corresponding error syndromes. An (n, k) single-error-correcting code has nC1 = n single-error patterns, and corresponding error syndromes, in its syndrome table. A double-error-correcting code has nC1 single-error patterns and nC2 double-error patterns, and error syndromes, in its syndrome table. A code capable of correcting t errors requires nC1, nC2, ..., nCt error patterns and error syndromes.
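The code and syndrome-table sizes quoted above follow directly from eqns 1.21 and the binomial coefficients; a minimal sketch:

```python
from math import comb

# Hamming code parameters of eqns 1.21 for r = 3 .. 6 (Table 1.8), together
# with the syndrome-table size nC1 = n of a single-error-correcting code.

for r in range(3, 7):
    n = 2**r - 1                  # blocklength
    k = n - r                     # information length
    print(f"r={r}: ({n},{k}) code, {2**k} codewords, "
          f"{2**r} syndromes, {comb(n, 1)} single-error patterns")
```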

1.7 Minimum distance of block codes


In Section 1.2 we introduced the idea of parity-check bits having the effect of adding redundancy and so producing codewords that are separated from each other. Increasing the parity-check bits increases the redundancy and the separation, or distance, between codewords. Here we extend these ideas further and consider how distance between codewords determines the error-control capability of a code. As we shall see, the concept of distance between codewords, and in particular the minimum distance within a code, is fundamental to error-control codes.

The Hamming weight, or weight, of a word v is defined as the number of nonzero components of v and is denoted by w(v). The Hamming distance, or distance, between two words v1 and v2, having the same number of bits, is defined as the number of places in which they differ and is denoted by d(v1, v2). For example the words v1 = (0 1 1 0 1 0) and v2 = (1 0 1 0 0 0) have weights 3 and 2 respectively and are separated by a distance of 3, therefore

w(0 1 1 0 1 0) = 3
w(1 0 1 0 0 0) = 2
d(0 1 1 0 1 0, 1 0 1 0 0 0) = 3.
The minimum distance dmin of a block code is the smallest distance between codewords. Hence codewords differ by dmin or more bits. The minimum distance is found by taking a pair of codewords, determining the distance between them, and then repeating this for all pairs of different codewords. The smallest value obtained is the minimum distance of the code.

Example 1.12

Determine the minimum distance of the even-parity (3,2) block code.

Here the codewords are (0 0 0), (0 1 1), (1 1 0), and (1 0 1). Taking codewords pairwise gives

d(0 0 0, 0 1 1) = 2
d(0 0 0, 1 1 0) = 2
d(0 0 0, 1 0 1) = 2
d(0 1 1, 1 1 0) = 2
d(0 1 1, 1 0 1) = 2
d(1 1 0, 1 0 1) = 2

and the minimum distance of the code is therefore 2.

Consider the (7,4) Hamming code whose 16 codewords are shown in Table 1.6. This has 120 pairs of different codewords, and it can be shown that any pair of codewords has its 2 codewords separated by a distance of 3, 4, or 7; the minimum distance of the (7,4) Hamming code is therefore 3. The code has 8 pairs of codewords where the 2 codewords in each pair are separated by a distance of 7, 56 pairs have their 2 codewords separated by a distance of 4, and the remaining 56 pairs have codewords separated by a distance of 3 (see Table 1.9).

It is not usually practical to determine the minimum distance of a code by considering the distance between all pairs of different codewords. An (n, k) block code has m = 2^k codewords and therefore mC2 different pairs of codewords, a term that rises very rapidly with increasing k. The (7,4) Hamming code has 120 pairs of different codewords, and the (15,11) Hamming code (see Table 1.8, r = 4) has 2^11 codewords, which gives over 2 x 10^6 pairs of codewords. An arbitrary block code could require a considerable degree of computation to determine its minimum distance. However, the codes that are important are not arbitrary but have a linear property (already referred to at the start of Section 1.6) that allows the minimum distance to be determined easily; this is considered in Section 2.1.
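For a code as small as the (7,4) code the exhaustive pairwise search described above is easily carried out; a sketch (assuming the encoding of eqns 1.17; names are ours):

```python
from itertools import combinations

# Brute-force minimum distance of the (7,4) Hamming code: build the 16
# codewords from eqns 1.17, then take the smallest pairwise Hamming distance.

def encode(i1, i2, i3, i4):
    return (i1, i2, i3, i4,
            (i1 + i2 + i3) % 2,   # p1
            (i2 + i3 + i4) % 2,   # p2
            (i1 + i2 + i4) % 2)   # p3

codewords = [encode((n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1)
             for n in range(16)]

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))

pairs = list(combinations(codewords, 2))
dmin = min(distance(a, b) for a, b in pairs)
print(len(pairs), dmin)
```

The search confirms 120 pairs with dmin = 3, and that only the distances 3, 4, and 7 occur.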
It is interesting to consider a block code from a geometric point of view, as this helps to illustrate the concept of distance between words. Codewords belonging to an (n, k) block code can be thought of as lying within an n-dimensional space.

Table 1.9 Distance between codewords in the (7,4) code

      c0  c1  c2  c3  c4  c5  c6  c7  c8  c9  c10 c11 c12 c13 c14 c15
c0    0   3   3   4   4   3   3   4   3   4   4   3   3   4   4   7
c1    3   0   4   3   3   4   4   3   4   3   3   4   4   3   7   4
c2    3   4   0   3   3   4   4   3   4   3   3   4   4   7   3   4
c3    4   3   3   0   4   3   3   4   3   4   4   3   7   4   4   3
c4    4   3   3   4   0   3   3   4   3   4   4   7   3   4   4   3
c5    3   4   4   3   3   0   4   3   4   3   7   4   4   3   3   4
c6    3   4   4   3   3   4   0   3   4   7   3   4   4   3   3   4
c7    4   3   3   4   4   3   3   0   7   4   4   3   3   4   4   3
c8    3   4   4   3   3   4   4   7   0   3   3   4   4   3   3   4
c9    4   3   3   4   4   3   7   4   3   0   4   3   3   4   4   3
c10   4   3   3   4   4   7   3   4   3   4   0   3   3   4   4   3
c11   3   4   4   3   7   4   4   3   4   3   3   0   4   3   3   4
c12   3   4   4   7   3   4   4   3   4   3   3   4   0   3   3   4
c13   4   3   7   4   4   3   3   4   3   4   4   3   3   0   4   3
c14   4   7   3   4   4   3   3   4   3   4   4   3   3   4   0   3
c15   7   4   4   3   3   4   4   3   4   3   3   4   4   3   3   0
For example Fig. 1.9 shows the (3,2) even-parity code in a 3-dimensional space (1 dimension for each bit). Each codeword is of the form c = (cx, cy, cz), where cx, cy, and cz are 0 or 1 and are the coordinates of c along the X, Y, and Z axes respectively. The shaded circles are codewords and the open circles show words that are redundant to the code. Each codeword has 3 redundant words a distance 1 away from the codeword and is at a distance of 2 away from the other 3 codewords (the dotted lines represent a distance of 1 within the space). A codeword incurring a single error will result in a 'transition', along a dotted line, to a redundant word. A second error will give another transition, this time to a codeword. For example, say (1 0 1) incurs a single error to give (0 0 1); a second error will then give the codeword (0 1 1) or (0 0 0). A minimum of 2 errors/transitions are required to change any one codeword into another codeword, which is consistent with the code's minimum distance of 2.
The distance within a code can be illustrated without reference to a coordinate system, as shown in Fig. 1.10(a). This shows the main features of the (3,2) code seen in Fig. 1.9, namely that each codeword is connected to 3 redundant words and a minimum of 2 transitions are required to change any one codeword into another. Figure 1.10(b) shows the corresponding diagram for the (3,1) repetition code. Here there are only 2 codewords, (0 0 0) and (1 1 1). To get from either of the codewords to the other requires 3 transitions and the minimum distance of the code is therefore 3. This is a rather obvious result because comparing (0 0 0) to (1 1 1) gives dmin = 3 immediately; nevertheless it is interesting to view a code in this manner. If we attempt to produce a similar diagram for the (7,4) code we find that it soon becomes rather cluttered. Whilst there are only 16 codewords, the total number of words is 128, each of which has to be linked to 7 other words (one link for each 1-bit transition). A further simplification to Fig. 1.9 is to show words along one dimension only, as shown in Fig. 1.11. Each site no longer represents a specific word, but
Fig. 1.9 The (3,2) code in a 3-dimensional space (shaded circles: codewords; open circles: redundant words).
only whether or not it is a codeword (shaded and open circles represent codewords and redundant words respectively). Moving from one circle to an adjacent circle represents a change of 1 bit (i.e. a distance of 1). The codewords c1 and c2 are separated by a distance of 3, and c2 and c3 by a distance of 2; five redundant words r1 to r5 are shown.
Consider now the arrangement of codewords shown in Fig. 1.12(a); this typifies the separation of words in a code with minimum distance 3. Here A and B indicate examples of transitions that can occur if c1 incurs 1 or 2 errors respectively. Examples C and D show triple errors occurring at c3 and c4 respectively. To determine the decoding decisions for the errors A, B, C, and D we consider a maximum-likelihood decoder with input v. For a binary-symmetric channel, maximum-likelihood decoding is equivalent to selecting a codeword that is closer to v than any other codeword, and is referred to as minimum-distance decoding or nearest-neighbour decoding. The error patterns in Fig. 1.12(a) will therefore be decoded as follows:

A: c1 incurs a single error

The error is detected because it gives a redundant word r1. Furthermore, minimum-distance decoding is able to correct the error as r1 lies closer to c1 than to any other codeword.

B: c1 incurs a double error

The error is again detected; however this time the redundant word r2 lies closer to c2 than to c1, and minimum-distance decoding estimates c2 to be the required codeword. This will give a decoding error.
Fig. 1.10 Illustrating distance: (a) the (3,2) even-parity code; (b) the (3,1) repetition code (shaded circles: codewords; open circles: redundant words).

C: c3 incurs a triple error

The error changes c3 into another codeword and so cannot be detected or corrected. Again this will give a decoding error.

D: c4 incurs a triple error

Here the number of errors equals the minimum distance of the code, but a redundant word r5 is still obtained as c4 and its neighbouring codeword are separated by a distance of 4. The error is therefore detected, but a decoding error occurs as c4 is not the codeword nearest to r5.

In a code with dmin = 3 the occurrence of single or double errors within a codeword always gives a redundant word (as in A and B above) and the detection of the errors is guaranteed. The occurrence of 3 or more errors may be detected (e.g. D above) but not all such error patterns are detectable (e.g. C above). Hence a block code with minimum distance 3 can detect all single and double errors. A code with minimum distance 5 can detect 4 or fewer errors, as illustrated by A, B, C, and D in Fig. 1.12(b); E shows an undetectable 5-bit error. A code with minimum distance 7 can detect 6 or
Fig. 1.11 A simplified way of illustrating distance (1-bit and 2-bit transitions between sites).

Fig. 1.12 Examples of errors: (a) dmin = 3; (b) dmin = 5.

fewer errors. It follows that a block code with minimum distance dmin can detect all error patterns with

l = dmin - 1        (1.23)

or fewer errors.

An error pattern incurred by a codeword c is correctable only if the resulting redundant word is closer to c than to any other codeword. For a code with dmin = 3 it is only single errors that satisfy this requirement. Hence a block code with minimum distance 3 can correct all single errors. A code with minimum distance 5 can correct all single and double errors; a codeword c incurring a single or double error will give a redundant word that is closer to c than to any other codeword. A code with minimum distance 7 can correct 3 or fewer errors, and it follows that a block code with minimum distance dmin can correct all error patterns with

t = ⌊(dmin - 1)/2⌋        (1.24)

or fewer errors.
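Eqns 1.23 and 1.24 can be expressed as a small helper (a sketch; the function name is ours):

```python
# Error-control limits of eqns 1.23 and 1.24 for a code with minimum
# distance dmin: detect up to l = dmin - 1 errors, and correct up to
# t = floor((dmin - 1) / 2) errors.

def limits(dmin):
    l = dmin - 1           # error-detection limit (eqn 1.23)
    t = (dmin - 1) // 2    # error-correction limit (eqn 1.24)
    return t, l

assert limits(3) == (1, 2)   # the (7,4) Hamming code: correct 1, detect 2
assert limits(5) == (2, 4)
assert limits(7) == (3, 6)
```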
We refer to t and l as the error-correction and error-detection limits respectively, and they give the error-control limits of a code. Codes with error-correction limit t and error-detection limit l are referred to as t-error-correcting codes and l-error-detecting codes respectively. Note that whilst a code with error-detection limit l is guaranteed to detect all error patterns with l errors or less, the code will also be able to detect some error patterns with more than l errors. Likewise a t-error-correcting code is able to correct certain error patterns with more than t errors. We will return to this in Section 2.6.

Let's now return to the notion that the codewords of an (n, k) code can be thought of as lying in an n-dimensional space. Each codeword can be thought of as having a decoding sphere around it. Each decoding sphere contains redundant words lying at a distance of t or less from the codeword at the centre of the sphere. No redundant word will belong to two or more spheres, as the spheres are non-intersecting, but there will usually be redundant words lying outside the decoding spheres. In a minimum-distance decoder the codeword at the centre of the sphere within which the input v lies is taken as the required codeword. If every word within the space belongs to one and only one sphere, so that no word lies outside a decoding sphere, and the spheres are of equal radius, then the code is referred to as a perfect code. The word 'perfect' is used here not in the sense of the best or exceptionally good, but rather to describe the geometrical characteristic of the code. The decoding spheres can be thought of as perfectly fitting the available space with no overlap and no unused space. The Hamming codes and the repetition codes with odd blocklength are perfect codes; however, perfect codes are rare.
Whilst eqns 1.23 and 1.24 represent a code's inherent error-control capability, often the error control realized is a compromise between error correction and error detection. We have already seen that the (7,4) code can correct single errors or detect up to 2 errors. When double errors occur they are detected, because the error syndrome is nonzero, but subsequently 'corrected'. The decoding process is not so much one of double-error detection, which would result in a decoding failure, but rather error correction resulting in a decoding error. When carrying out single-error correction the double-error detection capability of the code is not used. However, if the decoder does not carry out single-error correction then double errors give a decoding failure and are said to be detected. The (7,4) code, or any other code with dmin = 3, cannot carry out double-error detection and single-error correction jointly. This requires a larger minimum distance, and it can be shown that a block code with minimum distance dmin can jointly correct t' or fewer errors and detect l' or fewer errors providing

t' + l' <= dmin - 1        (1.25)

where l' > t'. Table 1.10 shows values of t' and l' that satisfy eqn 1.25 for minimum distances of 2 to 7. Note that for each value of t' the value of l' shown gives the maximum number of errors that can be detected, excluding error patterns with t' or fewer errors. For example for dmin = 5 and t' = 1 we get l' = 3, which means that all double and triple errors can be detected. Likewise dmin = 7 and t' = 2 give l' = 4, and all triple and 4-bit errors can be detected. Note also that for odd values of dmin error detection is not possible when the maximum number of errors are corrected, whereas when dmin is even then dmin/2 errors can be detected when the maximum number of errors, now given by (dmin - 2)/2, are corrected.
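The rows of Table 1.10 can be generated from eqn 1.25; a sketch (ours), recording l' = 0 when no detection beyond t' is possible:

```python
# Joint correction/detection pairs allowed by eqn 1.25: for each t' from the
# maximum downwards, the largest l' with t' + l' <= dmin - 1. Following
# Table 1.10 we record l' = 0 when the largest such l' is not greater than t'
# (i.e. when no detection beyond the corrected patterns is possible).

def joint_pairs(dmin):
    result = []
    for t in range((dmin - 1) // 2, -1, -1):
        l = dmin - 1 - t
        result.append((t, l if l > t else 0))
    return result

assert joint_pairs(7) == [(3, 0), (2, 4), (1, 5), (0, 6)]
assert joint_pairs(6) == [(2, 3), (1, 4), (0, 5)]
```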
The four ways of using the error-control capability of a code with dmin = 7 are illustrated in Fig. 1.13. Two codewords c1 and c2, separated by a distance of 7, are shown along with six redundant words r1 to r6, and we consider c1 incurring 6 or fewer errors. In Fig. 1.13(a) the decoder is correcting the maximum number of errors, and so decoding spheres of radius t' = t = 3 are shown around c1 and c2. Single,
Table 1.10 Joint error correction and detection

dmin    Number of errors corrected t'    Number of errors detected l'
2       0                                1
3       1                                0
        or 0                             2
4       1                                2
        or 0                             3
5       2                                0
        or 1                             3
        or 0                             4
6       2                                3
        or 1                             4
        or 0                             5
7       3                                0
        or 2                             4
        or 1                             5
        or 0                             6
double, and triple errors incurred by c1 result in redundant words lying within c1's decoding sphere and the errors are correctable. There is no error-detection capability because error patterns with 4, 5, or 6 errors give words lying within c2's decoding sphere and will therefore give decoding errors. Figure 1.13(b) illustrates decoding when only 2 errors are corrected (t' = 2). Each decoding sphere now has a radius of 2 and the redundant words r3 and r4 are excluded from both spheres. Single and double errors can still be corrected; however 3- and 4-bit errors lie outside c1's and c2's decoding spheres and cannot be corrected. This is therefore an example of 1- and 2-bit error correction, jointly with 3- and 4-bit error detection. Reducing t' to 1 gives r2, r3, r4, and r5 lying outside the decoding spheres (Fig. 1.13(c)). This allows single-error correction and the detection of 2, 3, 4, and 5 errors to take place jointly. If no error correction is implemented (t' = 0), then there are no decoding spheres and 1 to 6 errors can be detected (Fig. 1.13(d)).

1.8 Soft-decision decoding

In the preceding sections it has been assumed that the output of the demodulator (see Fig. 1.1) is always a 0 or a 1. Such a demodulator is said to make hard decisions. A demodulator that is not constrained to return 0 or 1 but is allowed to return a third symbol, say X, is said to make soft decisions. A soft decision is made whenever the demodulator input is so noisy that a 0 and a 1 are equiprobable. The symbol X is referred to as an erasure, and Fig. 1.14 shows a channel that includes erasures. An erasure is treated as an error whose location is known but whose magnitude is unknown, and is thought of as being at a distance of 1/2 away from 0 and from 1 (i.e. equidistant from 0 and 1). A demodulator that makes hard decisions gives no
Fig. 1.13 Joint error correction and detection for dmin = 7: (a) t' = 3; (b) t' = 2, with 3- and 4-bit errors detected; (c) t' = 1, with 2- to 5-bit errors detected; (d) t' = 0, with all 1- to 6-bit errors detected.

indication of the quality of the 0s and 1s entering the decoder. The decoder makes decisions on the presence or absence of errors according to the error-control code being used. A soft-decision demodulator, however, provides the decoder with additional information that can be used to improve error control. In the event of the decoder detecting errors, any erasures in the word being decoded will be the bits that are most likely to be in error.

Fig. 1.14 An erasure channel.

Table 1.11 Error and erasure correction for dmin = 10

Number of errors corrected t    Number of erasures corrected s
0                               9
1                               7
2                               5
3                               3
4                               1

Consider the (8,7) even-parity code and let's assume that the input to the decoder
is v = (10010100). The parity of v is odd and therefore the decoder knows that at
least 1 error has occurred. Based on the parity of v alone the decoder has no way of
establishing the position of the error, or errors, in v. Consider now a decoder whose
input is taken from a soft-decision demodulator and let v = (01110X11) be the
decoder input. The parity of v is incorrect but here it is reasonable to assume that the
position of the erasure gives the bit that is most likely to be in error. If the erasure is
assumed to have a 0 value then v still has the wrong parity. However, setting X = 1
gives v = (01110111), which has the correct parity and can be taken as the most
likely even-parity codeword that v corresponds to. If v has the correct parity and
contains an erasure, then the erasure is set to 0 or 1 so as to preserve the correct
parity; for example, given v = (1X000100) we would set X = 0. If v contains a single
erasure, and no other errors, then single-error correction is guaranteed. The (8,7)
even-parity code is an error-detecting code; it has no error-correcting capability.
However, here we see that in conjunction with a soft-decision demodulator single-
error correction can be achieved. The combination of error-control coding with soft-
decision demodulation is referred to as soft-decision decoding.
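The decoding rule in this example can be sketched in a few lines (a minimal illustration, assuming the word is given as a list of bits with 'X' marking the erasure):

```python
def correct_erasure(word):
    """Decode a word from the (8,7) even-parity code containing at
    most one erasure 'X': set the erasure to whichever value gives
    the word even overall parity."""
    bits = list(word)
    ones = bits.count(1)
    if 'X' in bits:
        bits[bits.index('X')] = ones % 2   # restore even parity
    return bits

# The text's example: v = (0,1,1,1,0,X,1,1) decodes with X = 1.
print(correct_erasure([0, 1, 1, 1, 0, 'X', 1, 1]))
```

Applying it to the second example, v = (1,X,0,0,0,1,0,0), sets X = 0, again as in the text.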
It can be shown that for a code with minimum distance dmin any pattern of t errors
and s' erasures can be corrected providing

    2t + s' <= dmin - 1.    (1.26)

Table 1.11 shows values of t and s' for dmin = 10. Note that, because erasures and
errors are respectively at a distance of 1/2 and 1 from the correct value, for every
extra bit corrected the number of erasures that can be corrected is reduced by 2. The
(8,7) even-parity code has dmin = 2 and t = 0, therefore only 1 erasure can be cor-
rected. The (7,4) code, with dmin = 3, cannot correct erasures when used for error
correction, but can correct up to 2 erasures if error correction is not carried out.
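The bound (1.26) can be checked mechanically; the following sketch reproduces the s' column of Table 1.11 for dmin = 10:

```python
def max_erasures(t, d_min):
    """Largest number of erasures s' correctable alongside t errors,
    from 2t + s' <= d_min - 1 (eqn 1.26)."""
    s = d_min - 1 - 2 * t
    return s if s >= 0 else None   # None: t alone exceeds the bound

# Reproduce Table 1.11 for d_min = 10.
for t in range(5):
    print(t, max_erasures(t, 10))
```

Each unit increase in t costs two correctable erasures, exactly as noted in the text.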
We have seen that there are benefits to be gained by using soft-decision demo-
dulators that are not constrained to return 0 or 1 but can return erasures. Further
benefits can be gained using demodulators that can assign a quality to each bit.
Here the demodulator decides whether each bit is a 0 or 1 and assigns a bit quality
that indicates how good each bit is. A bit that is a clear-cut 0 or 1 would be given a
high bit quality, whilst a bit that is only just a 0 or 1 is assigned a low bit quality. In
the case where a bit is equally likely to be a 0 or a 1, an erasure is returned. The
decoder then makes its decisions not just on the basis of the bit values 0, 1, or X,
but also on the quality associated with each bit. The use of bit-quality information
can give considerable coding gains, but at the expense of increased complexity for
both the demodulator and the decoder.
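As an illustration of the idea (the 8-level scale and the particular quality measure are assumptions made for the sketch, not from the text), a demodulator might quantize each received sample into a hard bit plus a quality value:

```python
def soft_demodulate(sample, levels=8):
    """Quantize a sample in [0, 1] to a hard bit plus a bit quality.
    Uses an illustrative 8-level (3-bit) soft decision: the quality is
    the distance of the quantized level from the 0/1 decision boundary,
    ranging from 0 (only just a 0 or 1) to levels//2 - 1 (clear-cut)."""
    q = min(int(sample * levels), levels - 1)   # level 0 .. levels-1
    bit = 1 if q >= levels // 2 else 0
    quality = q - levels // 2 if bit else levels // 2 - 1 - q
    return bit, quality

print(soft_demodulate(0.95))   # clear-cut 1
print(soft_demodulate(0.55))   # only just a 1
```

A decoder given these pairs can, on detecting errors, treat the lowest-quality bits as the most likely to be wrong, generalizing the erasure idea.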
1.9 Automatic-repeat-request schemes
The communication channel that we have considered so far is one in which infor-
mation transfer takes place in one direction only, namely from the point at which
the information is generated to the point at which the information is used.
Error-control encoding takes place prior to transmission and, on reception of
the information, decoding takes place with a view to detecting, and if possible
correcting, any errors incurred during the transmission (see Fig. 1.1). The
direction of information transfer from the source to the user is referred to as the
forward path, and the error-correction techniques previously considered are known
as forward-error-correction schemes. A channel within which transmission is
possible from the user to the source is said to have a return path. The existence of a
return path allows requests to be made for retransmission of information in the
event of a decoding failure. Strategies of error control based on requests for
retransmission are referred to as Automatic-Repeat-Request (ARQ) schemes.
Figure 1.15(a) shows one of the simplest ARQ schemes, namely a stop-and-wait
scheme. Here the transmitter sends a word w1 on the forward path, and waits for an
acknowledgement (ACK) on the return path before sending the next word w2. If
the decoder at the receiver detects no errors in w1 then the receiver sends an ACK to
the transmitter. The transmitter, upon receipt of the ACK, transmits the next word
w2. However, if w1 is found to contain errors then the receiver sends a negative-
acknowledgement (NACK) reply, in which case the transmitter will retransmit w1
instead of transmitting w2. Communication continues in this way with the trans-
mitter waiting for a reply to each word sent, sending a new word whenever the
reply is an ACK and retransmitting the previous word if the reply is a NACK. Such a
stop-and-wait scheme is simple to implement but quite inefficient in terms of the
usage of the communication channel, as the forward path lies idle whilst the
transmitter waits for the ACK/NACK replies.
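The stop-and-wait exchange can be sketched as a small simulation (the slot-based timing and the way errors are injected are assumptions made for the illustration; a real decoder would detect errors via the code's parity checks):

```python
def stop_and_wait(words, error_slots):
    """Simulate stop-and-wait ARQ: one word is sent per time slot and
    the transmitter waits for the ACK/NACK reply; a NACK causes the
    same word to be retransmitted in the next slot. `error_slots` holds
    the slot numbers in which the received word is found to be in error."""
    log, slot, i = [], 0, 0
    while i < len(words):
        ok = slot not in error_slots
        log.append((slot, words[i], 'ACK' if ok else 'NACK'))
        if ok:
            i += 1       # ACK: move on to the next word
        slot += 1        # NACK: same word goes out again next slot
    return log

print(stop_and_wait(['w1', 'w2', 'w3'], error_slots={1}))
```

With an error in slot 1, w2 is sent twice before w3 goes out, mirroring the retransmission in Fig. 1.15(a).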
The go-back-N ARQ scheme, shown in Fig. 1.15(b) for N = 4, allows continuous
transmission on the forward path, therefore avoiding idle transmission time. Here
the transmitter does not wait for an ACK to each word sent, but transmits con-
tinuously until it receives a NACK. We assume that, because of delays within the
system, if the ith word sent by the transmitter is erroneous then the NACK is
received before the transmitter sends the (i + N)th word. The receiver does not send
an ACK upon receipt of each error-free frame, but only sends a NACK whenever it
[Figure: (a) stop-and-wait — the transmitter sends each word and waits, the forward
path lying idle until the ACK/NACK reply arrives; a word found in error is
retransmitted. (b) go-back-4 — transmission is continuous; when w5 is found in
error the receiver discards w5 and the words that follow, and the transmitter goes
back and retransmits them. (c) selective repeat — only the erroneous word w5 is
retransmitted, the receiver storing the words that follow it.]
Fig. 1.15 Automatic-repeat-request schemes.

detects an error. Furthermore the receiver discards the erroneous word and the
N - 1 words that follow. The transmitter, upon receipt of a NACK, goes back N
words and retransmits the ith word along with the N - 1 words that followed. By
respectively discarding and retransmitting N words the receiver and the transmitter
ensure that the correct sequence of words is preserved, without the receiver having to
store words. In Fig. 1.15(b) w5 is in error and so the receiver replies with a NACK.
On receipt of the NACK the transmitter interrupts the sequence of words and
retransmits w5, after which it continues with w6, w7, ... instead of w9. If the receiver is
capable of storing words, then the go-back-N scheme can be
improved by retransmitting only words that are erroneous. The receiver, on
detecting an erroneous word, discards the word, sends a NACK to the transmitter,
and stores the N - 1 words that follow. The transmitter, on receipt of the NACK,
retransmits only the erroneous word. On receiving the retransmitted word, the
receiver uses the N - 1 words stored to reconstruct the correct sequence of words.
Such a scheme is known as a selective-repeat ARQ scheme (see Fig. 1.15(c)). Here the
emphasis for maintaining the correct sequence of words, in the event of errors
occurring, is placed at the receiver.
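The retransmission pattern of Fig. 1.15(b) can be reproduced with a short sketch of the go-back-N rule (the slot bookkeeping and the single-error assumption are simplifications made for the illustration):

```python
def go_back_n(num_words, bad, N=4):
    """Sketch of go-back-N ARQ with a single erroneous word: word
    index `bad` is received in error, the receiver discards it and the
    N-1 words that follow, and on the NACK (which arrives before the
    (bad+N)th word is sent) the transmitter goes back N words. Returns
    the sequence of word indices sent on the forward path (1-based)."""
    sent, i = [], 1
    nack_pending = True
    while i <= num_words:
        sent.append(i)
        if i == bad + (N - 1) and nack_pending:
            i = bad              # NACK arrives: go back N words
            nack_pending = False
        else:
            i += 1
    return sent

print(go_back_n(10, bad=5, N=4))
```

With w5 in error and N = 4, words w5 to w8 are sent twice, matching the figure; a selective-repeat scheme would instead resend only w5.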

Problems
1.1 A (4, 3) single-parity-check code is used to generate even-parity codewords.
    Determine expressions for the probability of:
    (i) correct decoding pc
    (ii) a decoding error pe
    (iii) a decoding failure pf
    in terms of the bit-error probability p. Evaluate pc, pe, and pf when
    p = 5 x 10^-2.
1.2 A single-parity-check code has 8-bit codewords. Determine the maximum bit-
    error probability that can be tolerated so that codewords have a success rate
    of 99.9%.
1.3 An (n, n - 1) single-parity-check code is used for error detection in a channel
    with bit-error probability 10. Find the maximum blocklength n such that
    the success rate does not fall below 99%.
1.4 Given an (n, 1) repetition code determine the probability that an information
    bit is correct after decoding when n = 3 and when n = 5. Assume a bit-error
    probability of 0.05.
1.5 A (4, 1) repetition code is used for single-error correction and double-error
    detection. Find the decoding failure rate for a bit-error probability of 0.01.
1.6 In a communication channel with bit-error probability 10 an (n, 1)
    repetition code is used for error correction. Find the minimum blocklength n
    that gives a bit-error probability less than 10- after decoding. Assume odd
    values of n only.
1.7 A product code is constructed from the (4, 3) and (5, 4) single-parity-check
    codes. The information bits in the code arrays are denoted by i_j1, i_j2, i_j3,
    and i_j4, where j = 1, 2, and 3. Show that the overall parity-check bit p is the
    same whether it is constructed from the row parity-checks or from the column
    parity-checks. Assume even parity.
1.8 A (32, 21) product code is constructed from the (8, 7) and (4, 3) single-parity-
    check codes. Even parity is used to construct both the column and row parity-
    check bits. Figure 1.16 shows 4 arrays, each of which represents a code array.
    Determine the decisions that a decoder is likely to make.
