22 | Block codes

The probability of obtaining 3 errors in a 6-bit word (giving an equal number of zeros and ones, and hence a decoding failure) is given by eqn 1.6; with n = 6 and e = 3,

Pf = 6C3 p^3 (1 - p)^3

and substituting p = 0.01 gives Pf = 1.94 x 10^-5, a decoding failure rate that may well be acceptable. But if a channel is to be used where such high levels of redundancy are not acceptable, then repetition codes are quite inadequate and error-correcting codes are required that make better use of redundancy.

Hamming codes
Table 1.6 shows the set of codewords for the (7,4) code. There are 16 codewords, one for each information word. For reference purposes, the codewords and information words are labelled c0 to c15 and i0 to i15 respectively. The subscript i of the codeword ci gives the numerical value of the corresponding information word; for example, c9 is the codeword corresponding to the information word i9 = (1 0 0 1). Note that here, as with the repetition codes, the word 'parity' does not refer to whether there are an even or odd number of ones in a word, but rather refers to the code's check bits irrespective of the code's property or structure.
Table 1.6
The (7,4) Hamming code

Information words        Codewords
i = (i1, i2, i3, i4)     c = (i1, i2, i3, i4, p1, p2, p3)
i0  = (0 0 0 0)          c0  = (0 0 0 0 0 0 0)
i1  = (0 0 0 1)          c1  = (0 0 0 1 0 1 1)
i2  = (0 0 1 0)          c2  = (0 0 1 0 1 1 0)
i3  = (0 0 1 1)          c3  = (0 0 1 1 1 0 1)
i4  = (0 1 0 0)          c4  = (0 1 0 0 1 1 1)
i5  = (0 1 0 1)          c5  = (0 1 0 1 1 0 0)
i6  = (0 1 1 0)          c6  = (0 1 1 0 0 0 1)
i7  = (0 1 1 1)          c7  = (0 1 1 1 0 1 0)
i8  = (1 0 0 0)          c8  = (1 0 0 0 1 0 1)
i9  = (1 0 0 1)          c9  = (1 0 0 1 1 1 0)
i10 = (1 0 1 0)          c10 = (1 0 1 0 0 1 1)
i11 = (1 0 1 1)          c11 = (1 0 1 1 0 0 0)
i12 = (1 1 0 0)          c12 = (1 1 0 0 0 1 0)
i13 = (1 1 0 1)          c13 = (1 1 0 1 0 0 1)
i14 = (1 1 1 0)          c14 = (1 1 1 0 1 0 0)
i15 = (1 1 1 1)          c15 = (1 1 1 1 1 1 1)
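The construction of Table 1.6 is easily mechanized. The following sketch (in Python; the function name encode74 is ours) forms the parity bits p1 = i1 + i2 + i3, p2 = i2 + i3 + i4, p3 = i1 + i2 + i4, as implied by the parity-check sums of eqns 1.19, and generates all 16 codewords:

```python
# Generate the 16 codewords of the (7,4) Hamming code of Table 1.6.
# Parity bits: p1 = i1+i2+i3, p2 = i2+i3+i4, p3 = i1+i2+i4 (modulo-2).

def encode74(i1, i2, i3, i4):
    p1 = (i1 + i2 + i3) % 2
    p2 = (i2 + i3 + i4) % 2
    p3 = (i1 + i2 + i4) % 2
    return (i1, i2, i3, i4, p1, p2, p3)

codewords = []
for m in range(16):
    # information word i_m as 4 bits, most significant bit first
    bits = [(m >> s) & 1 for s in (3, 2, 1, 0)]
    codewords.append(encode74(*bits))

print(codewords[11])   # c11 = (1, 0, 1, 1, 0, 0, 0), as in Table 1.6
```

Running this reproduces Table 1.6 entry by entry; for example, entry 11 is c11 = (1 0 1 1 0 0 0).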
v1 = i1
v2 = i2
v3 = i3
v4 = i4
v5 = p1
v6 = p2
v7 = p3

if no errors occur. The decoder determines 3 parity-check sums

s1 = (v1 + v2 + v3) + v5
s2 = (v2 + v3 + v4) + v6        (1.19)
s3 = (v1 + v2 + v4) + v7.

The first 3 bits in each parity-check sum correspond to the same combination of information bits as that used in the construction of the parity bits (see eqns 1.17); they are enclosed in parentheses to emphasize this correspondence. From the parity-check sums we can define the error syndrome s = (s1, s2, s3). Note that s1 = 0 irrespective of whether p1 is 0 or 1 (because p1 + p1 = 0 under modulo-2 addition). Likewise we can show that s2 = s3 = 0 when there are no errors, and so a codeword gives the error syndrome s = (0 0 0) when there are no errors.
Consider now the codeword c11 = (1 0 1 1 0 0 0); if it incurs no errors then it will give v = (1 0 1 1 0 0 0) as the decoder input, and the resulting parity-check sums will be

s1 = (1 + 0 + 1) + 0 = 0
s2 = (0 + 1 + 1) + 0 = 0
s3 = (1 + 0 + 1) + 0 = 0
which again gives s = (0 0 0). Likewise, if we take any codeword from Table 1.6, say c7 = (0 1 1 1 0 1 0), then

s1 = (0 + 1 + 1) + 0 = 0
s2 = (1 + 1 + 1) + 1 = 0
s3 = (0 + 1 + 1) + 0 = 0

and so the error syndrome of a codeword is always zero. The construction of the parity-check bits and parity-check sums is such that the error syndrome of any codeword is zero.
Table 1.7
Syndrome table for the (7,4) Hamming code

Error pattern e                Error syndrome s
(e1, e2, e3, e4, e5, e6, e7)   (s1, s2, s3)
(0 0 0 0 0 0 0)                (0 0 0)
(0 0 0 0 0 0 1)                (0 0 1)
(0 0 0 0 0 1 0)                (0 1 0)
(0 0 0 0 1 0 0)                (1 0 0)
(0 0 0 1 0 0 0)                (0 1 1)
(0 0 1 0 0 0 0)                (1 1 0)
(0 1 0 0 0 0 0)                (1 1 1)
(1 0 0 0 0 0 0)                (1 0 1)
The decoder's guess, or estimate, of the error pattern e and the resulting codeword c are denoted by ê and ĉ respectively. Decoding can be summarized in three steps:

1. Calculate s from the decoder input v.
2. From the syndrome table obtain the error pattern ê that corresponds to s.
3. The required codeword is then given by ĉ = v + ê; this has the effect of inverting the bit in v given by the position of the nonzero bit in ê.

In the event of a single error occurring, the syndrome gives the resulting error pattern and the correct codeword is obtained. All single errors can be corrected and the Hamming code is a single-error-correcting code.
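The three decoding steps can be sketched directly in code. The following illustrative Python fragment (the function name decode74 is ours) computes the parity-check sums of eqns 1.19, looks up the error position from Table 1.7, and inverts the corresponding bit:

```python
# Single-error correction for the (7,4) Hamming code.
# Parity-check sums (eqns 1.19): s1 = v1+v2+v3+v5, s2 = v2+v3+v4+v6, s3 = v1+v2+v4+v7.

# Syndrome table (Table 1.7): syndrome (s1, s2, s3) -> error position 1..7, 0 = no error.
SYNDROME_TABLE = {
    (0, 0, 0): 0, (0, 0, 1): 7, (0, 1, 0): 6, (1, 0, 0): 5,
    (0, 1, 1): 4, (1, 1, 0): 3, (1, 1, 1): 2, (1, 0, 1): 1,
}

def decode74(v):
    v1, v2, v3, v4, v5, v6, v7 = v
    s = ((v1 + v2 + v3 + v5) % 2,       # step 1: calculate s from v
         (v2 + v3 + v4 + v6) % 2,
         (v1 + v2 + v4 + v7) % 2)
    pos = SYNDROME_TABLE[s]             # step 2: error pattern from the syndrome
    c = list(v)
    if pos:
        c[pos - 1] ^= 1                 # step 3: invert the bit given by e-hat
    return tuple(c)

# c11 = (1,0,1,1,0,0,0) with an error in bit 3 is corrected:
print(decode74((1, 0, 0, 1, 0, 0, 0)))  # -> (1, 0, 1, 1, 0, 0, 0)
```

A codeword that incurs no errors is returned unchanged, since its syndrome is (0 0 0).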
Note that the triple-error pattern and the 4-bit error pattern are undetected because they are identical to the codewords c5 and c14 respectively. The other two error patterns do not resemble any of the codewords and are therefore detectable.
The (7,4) Hamming code is the first code in the class of single-error-correcting
codes whose blocklengths n and information lengths k satisfy

n = 2^r - 1
k = 2^r - 1 - r        (1.21)

for any integer r >= 3, and where r = n - k gives the number of parity-check bits.
Taking r=3 gives the (7,4) code already considered. For r=4 we get the (15, 11)
Hamming code which has 11-bit information words, 15-bit codewords and 4
parity-check bits. Given the information word i = (i1, i2, ..., i11), the parity bits p1, p2, p3, and p4 are again formed as modulo-2 sums of the information bits (eqn 1.22), giving the codeword c = (i1, i2, ..., i11, p1, p2, p3, p4). The parity-check sums and
syndrome table are constructed in the same way as those for the (7,4) code.
Table 1.8 shows the number of codewords and error syndromes for the (2^r - 1, 2^r - 1 - r) Hamming codes for values of r = 3, 4, 5, and 6. Note that the number of error syndromes rises much less rapidly with r than the number of codewords. Error detection and correction can be achieved through the use of tables of codewords, but this becomes impractical for large values of n and k. Decoding based on a syndrome
Table 1.8
Hamming codes for r = 3 to 6

r   (n, k)                   Number of codewords   Number of syndromes
3   (7, 4)                   16                    8
4   (15, 11)                 2,048                 16
5   (31, 26)                 6.7 x 10^7            32
6   (63, 57)                 1.4 x 10^17           64
r   (2^r - 1, 2^r - 1 - r)   2^k                   2^r

where k = 2^r - 1 - r.
w(0 1 1 0 1 0) = 3
w(1 0 1 0 0 0) = 2
d(0 1 1 0 1 0, 1 0 1 0 0 0) = 3.
The minimum distance dmin of a block code is the smallest distance between codewords. Hence codewords differ by dmin or more bits. The minimum distance is found by taking a pair of codewords, determining the distance between them, and then repeating this for all pairs of different codewords. The smallest value obtained is the minimum distance of the code.
Example 1.12
Determine the minimum distance of the even-parity (3,2) block code.
Here the codewords are (0 0 0), (0 1 1), (1 1 0) and (1 0 1). Taking codewords pairwise gives
d(0 0 0, 0 1 1) = 2
d(0 0 0, 1 1 0) = 2
d(0 0 0, 1 0 1) = 2
d(0 1 1, 1 1 0) = 2
d(0 1 1, 1 0 1) = 2
d(1 1 0, 1 0 1) = 2
and the minimum distance of the code is therefore 2.
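The pairwise procedure of Example 1.12 is easy to mechanize. A minimal Python sketch (the function names are ours):

```python
from itertools import combinations

def distance(a, b):
    # Hamming distance: the number of positions in which two words differ
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(code):
    # smallest distance over all pairs of different codewords
    return min(distance(a, b) for a, b in combinations(code, 2))

# the even-parity (3,2) code of Example 1.12
code32 = [(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 0, 1)]
print(minimum_distance(code32))   # -> 2, as found in Example 1.12
```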
Consider the (7,4) Hamming code whose 16 codewords are shown in Table 1.6.
This has 120 pairs of different codewords, and it can be shown that any pair of
codewords has its 2 codewords separated by a distance of 3, 4, or 7, and therefore the minimum distance of the (7,4) Hamming code is 3. The code has 8 pairs of codewords where the 2 codewords in each pair are separated by a distance of 7, 56 pairs have their 2 codewords separated by a distance of 4, and the remaining 56 pairs have codewords separated by a distance of 3 (see Table 1.9).
It is not usually practical to determine the minimum distance of a code by considering the distance between all pairs of different codewords. An (n, k) block code has m = 2^k codewords and therefore mC2 different pairs of codewords, a term that rises very rapidly with increasing k. The (7,4) Hamming code has 120 pairs of different codewords, and the (15, 11) Hamming code (see Table 1.8, r = 4) has 2,048 codewords, which gives over 2 x 10^6 pairs of codewords. An arbitrary block code could require a considerable degree of computation to determine its minimum distance. However, the codes that are important are not arbitrary but have a linear property (already referred to at the start of Section 1.6) that allows the minimum distance to be determined easily; this is considered in Section 2.1.
It is interesting to consider a block code from a geometric point of view, as this helps to illustrate the concept of distance between words. Codewords belonging to an (n, k) block code can be thought of as lying within an n-dimensional space
Table 1.9
Distance between codewords of the (7,4) code
[The 16 x 16 table of pairwise distances is not reproduced here: every pair of different codewords is at distance 3, 4, or 7, with 56 pairs at distance 3, 56 pairs at distance 4, and 8 pairs at distance 7.]
[Figure: the 3-bit words placed at the vertices of a cube, with shaded circles representing codewords and open circles representing redundant words.]

The position of a circle indicates only whether or not it is a codeword (shaded and open circles represent codewords and redundant words respectively). Moving from one circle to an adjacent circle represents a change of 1 bit (i.e. a distance of 1). The codewords c1 and c2 are separated by a distance of 3, and a further pair of codewords by a distance of 2; five redundant words r1 to r5 are shown.
Consider now the arrangement of codewords shown in Fig. 1.12(a); this typifies the separation of words in a code with minimum distance 3. Here A and B indicate examples of transitions that can occur if c1 incurs 1 or 2 errors respectively. Examples C and D show triple errors occurring at c1 and c4 respectively. To determine the decoding decisions for the errors A, B, C, and D we consider a maximum-likelihood decoder with input v. For a binary-symmetric channel, maximum-likelihood decoding is equivalent to selecting a codeword that is closer to v than any other codeword, and is referred to as minimum-distance decoding or nearest-neighbour decoding. The error patterns in Fig. 1.12(a) will therefore be decoded as follows:
[Fig. 1.11 A simplified way of illustrating distance: (a) a code with codewords and redundant words shown as shaded and open circles, (b) the (3,1) repetition code with codewords (0 0 0) and (1 1 1).]
[Fig. 1.12 Examples of error patterns: (a) a code with dmin = 3, showing the errors A, B, C, and D; (b) a code with dmin = 5, showing the errors D and E.]
fewer errors. It follows that a block code with minimum distance dmin can detect all error patterns with

l = dmin - 1        (1.23)

or fewer errors.
An error pattern incurred by a codeword c is correctable only if the resulting redundant word is closer to c than any other codeword. For a code with dmin = 3 it is only single errors that satisfy this requirement, hence a block code with minimum distance 3 can correct all single errors. A code with minimum distance 5 can correct all single and double errors: a codeword c incurring a single or double error will give a redundant word that is closer to c than any other codeword. A code with minimum distance 7 can correct 3 or fewer errors, and it follows that a block code with minimum distance dmin can correct all error patterns with

t = ⌊(dmin - 1)/2⌋        (1.24)

or fewer errors.
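Eqns 1.23 and 1.24 translate directly into code. A small sketch (function names are ours):

```python
def detection_limit(dmin):
    # eqn 1.23: all patterns of l = dmin - 1 or fewer errors are detectable
    return dmin - 1

def correction_limit(dmin):
    # eqn 1.24: all patterns of t = floor((dmin - 1)/2) or fewer errors are correctable
    return (dmin - 1) // 2

for dmin in (3, 5, 7):
    print(dmin, correction_limit(dmin), detection_limit(dmin))
# dmin = 3 gives t = 1, dmin = 5 gives t = 2, dmin = 7 gives t = 3,
# agreeing with the examples in the text
```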
We refer to t and l as the error-correction and error-detection limits respectively, and they give the error-control limits of a code. Codes with error-correction limit t and error-detection limit l are referred to as t-error-correcting codes and l-error-detecting codes respectively. Note that whilst a code with error-detection limit l is guaranteed to detect all error patterns with l errors or less, the code will also be able to detect some error patterns with more than l errors. Likewise a t-error-correcting code is able to correct certain error patterns with more than t errors. We will return to this in Section 2.6.

Let's now return to the notion that the codewords of an (n, k) code can be thought of as lying within an n-dimensional space. Each codeword can now be thought of as having a decoding sphere around it, with the codeword at the centre of the sphere. Each decoding sphere has radius t and contains the redundant words lying a distance of t or less away from the codeword at its centre. There will usually be redundant words lying outside the decoding spheres, but no word will belong to two or more spheres, as the spheres are non-intersecting. In a minimum-distance decoder the codeword at the centre of the sphere within which the input v lies is taken as the required codeword. If every word within the space belongs to one and only one decoding sphere, so that no word lies outside a decoding sphere, and the spheres are of equal radius, then the code is referred to as a perfect code. The word 'perfect' is used here not in the sense of the best or exceptionally good, but rather to describe the geometrical characteristic of the code. The decoding spheres can be thought of as perfectly fitting the available space with no overlap and no unused space. The Hamming codes and the repetition codes with odd blocklengths are perfect codes; perfect codes, however, are rare.
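The sphere-packing picture can be checked numerically: a t-error-correcting (n, k) code is perfect when the 2^k decoding spheres of radius t, each containing 1 + nC1 + ... + nCt words, exactly fill the 2^n words of the space. A sketch (the function name is ours):

```python
from math import comb

def is_perfect(n, k, t):
    # a t-error-correcting (n, k) code is perfect when the 2^k decoding
    # spheres of radius t exactly fill the space of 2^n words
    sphere_volume = sum(comb(n, i) for i in range(t + 1))
    return (2 ** k) * sphere_volume == 2 ** n

print(is_perfect(7, 4, 1))   # (7,4) Hamming code: 16 * 8 = 128 = 2^7 -> True
print(is_perfect(5, 1, 2))   # (5,1) repetition code (odd blocklength) -> True
print(is_perfect(6, 1, 2))   # even-blocklength repetition code -> False
```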
Whilst eqns 1.23 and 1.24 represent a code's inherent error-control capability,
often the error control realized is a compromise between error correction and error detection. We have already seen that the (7,4) code can correct single errors or detect up to 2 errors. When double errors occur they are detected, because the error syndrome is nonzero, but subsequently 'corrected'. The decoding process is not so much one of double-error detection, which would result in a decoding failure, but rather error correction resulting in a decoding error. When carrying out single-error correction the double-error detection capability of the code is not used. However, if the decoder does not carry out single-error correction then double errors give a decoding failure and are said to be detected. The (7,4) code, or any other code with dmin = 3, cannot carry out double-error detection and single-error correction jointly. This requires a larger minimum distance, and it can be shown that a block code with minimum distance dmin can jointly correct t or fewer errors and detect l or fewer errors providing

t + l ≤ dmin - 1        (1.25)

where l > t. Table 1.10 shows values of t and l that satisfy eqn 1.25 for minimum distances of 1 to 7. Note that for each value of t the value of l shown gives the maximum number of errors that can be detected excluding error patterns with t or fewer errors. For example, for dmin = 5 and t = 1 we get l = 3, which means that all double and triple errors can be detected. Likewise for dmin = 7 and t = 2, which give l = 4, all triple and 4-bit errors can be detected. Note also that for odd values of dmin error detection is not possible when the maximum number of errors are corrected, whereas when dmin is even then dmin/2 errors can be detected when the maximum number of errors, now given by (dmin - 2)/2, are corrected.
The four ways of using the error-control capability of a code with dmin = 7 are illustrated in Fig. 1.13. Two codewords c1 and c2, separated by a distance of 7, are shown along with six redundant words r1 to r6, and we consider c1 incurring 6 or fewer errors. In Fig. 1.13(a) the decoder is correcting the maximum number of errors t = 3, and so decoding spheres of radius t = 3 are shown around c1 and c2. Single,
Table 1.10
Joint error correction and detection

dmin   t      l
2      0      1
3      1      -
       or 0   2
4      1      2
       or 0   3
5      2      -
       or 1   3
       or 0   4
6      2      3
       or 1   4
       or 0   5
7      3      -
       or 2   4
       or 1   5
       or 0   6
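Table 1.10 can be reproduced by enumerating the pairs (t, l) that satisfy eqn 1.25 with l > t; the rows with maximum correction and no detection are omitted in this sketch (function name ours):

```python
def joint_limits(dmin):
    # pairs (t, l) with t + l <= dmin - 1 and l > t (eqn 1.25),
    # listed from the largest t downwards as in Table 1.10
    pairs = []
    for t in range((dmin - 1) // 2, -1, -1):
        l = dmin - 1 - t          # largest l for this t
        if l > t:
            pairs.append((t, l))
    return pairs

print(joint_limits(5))   # -> [(1, 3), (0, 4)]
print(joint_limits(7))   # -> [(2, 4), (1, 5), (0, 6)]
```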
double, and triple errors incurred by c1 result in redundant words lying within c1's decoding sphere and the errors are correctable. There is no error-detection capability because error patterns with 4, 5, or 6 errors give words lying within c2's decoding sphere and will therefore give decoding errors. Figure 1.13(b) illustrates decoding when only 2 errors are corrected (t = 2). Each decoding sphere now has a radius of 2 and the redundant words r3 and r4 are excluded from both spheres. Single and double errors can still be corrected; however, 3- and 4-bit errors lie outside c1's and c2's decoding spheres and cannot be corrected. This is therefore an example of 1- and 2-bit error correction, jointly with 3- and 4-bit error detection. Reducing t to 1 gives r2, r3, r4, and r5 lying outside the decoding spheres (Fig. 1.13(c)). This allows single-error correction and the detection of 2, 3, 4, and 5 errors to take place jointly. If no error correction is implemented (t = 0), then there are no decoding spheres and 1 to 6 errors can be detected (Fig. 1.13(d)).
[Fig. 1.13 Joint error correction and detection for dmin = 7: (a) t = 3, (b) t = 2, (c) t = 1, (d) t = 0, with the errors detected in each case.]
indication of the quality of the 0s and 1s entering the decoder. The decoder makes decisions on the presence or absence of errors according to the error-control code being used. A soft-decision demodulator, however, provides the decoder with additional information that can be used to improve error control. In the event of the decoder detecting errors, any erasures in the word being decoded will be the bits that are most likely to be in error.
Soft-decision decoding | 37
Table 1.11
Error and erasure correction for dmin = 10

t   s
0   9
1   7
2   5
3   3
4   1
Consider the (8,7) even-parity code and let's assume that the input to the decoder is v = (1 0 0 1 0 1 0 0). The parity of v is odd and therefore the decoder knows that at least 1 error has occurred. Based on the parity of v alone the decoder has no way of establishing the position of the error, or errors, in v. Consider now a decoder whose input is taken from a soft-decision demodulator and let v = (0 1 1 1 0 X 1 1) be the decoder input. The parity of v is incorrect, but here it is reasonable to assume that the position of the erasure gives the bit that is most likely to be in error. If the erasure is assumed to have a 0 value then v still has the wrong parity. However, setting X = 1 gives v = (0 1 1 1 0 1 1 1), which has the correct parity and can be taken as the most likely even-parity codeword that v corresponds to. If v has the correct parity and contains an erasure, then the erasure is set to 0 or 1 so as to preserve the correct parity; for example, given v = (1 X 0 0 0 1 0 0) we would set X = 0. If v contains a single erasure, and no other errors, then single-error correction is guaranteed. The (8,7) even-parity code is an error-detecting code; it has no error-correcting capability. However, here we see that in conjunction with a soft-decision demodulator single-error correction can be achieved. The combination of error-control coding with soft-decision demodulation is referred to as soft-decision decoding.
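The erasure handling just described for the (8,7) even-parity code amounts to choosing the erased bit so that the overall parity is even. A sketch, with the erasure X represented as None (function name ours):

```python
def fill_erasure_even_parity(v):
    # v is a list of bits with at most one erasure marked as None;
    # set the erasure to whichever value gives the word even parity
    known = sum(b for b in v if b is not None)
    return [known % 2 if b is None else b for b in v]

v = [0, 1, 1, 1, 0, None, 1, 1]       # (0 1 1 1 0 X 1 1): wrong parity, X -> 1
print(fill_erasure_even_parity(v))    # -> [0, 1, 1, 1, 0, 1, 1, 1]

v = [1, None, 0, 0, 0, 1, 0, 0]       # (1 X 0 0 0 1 0 0): correct parity, X -> 0
print(fill_erasure_even_parity(v))    # -> [1, 0, 0, 0, 0, 1, 0, 0]
```

Both cases match the worked examples in the text.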
It can be shown that for a code with minimum distance dmin any pattern of t errors and s erasures can be corrected providing

2t + s ≤ dmin - 1.        (1.26)
Table 1.11 shows values of t and s for dmin = 10. Note that, because erasures and errors are respectively at a distance of 1/2 and 1 from the correct value, for every extra bit corrected the number of erasures that can be corrected is reduced by 2. The (8,7) even-parity code has dmin = 2 and t = 0, therefore only 1 erasure can be corrected. The (7,4) code, with dmin = 3, cannot correct erasures when used for error correction, but can correct up to 2 erasures if error correction is not carried out.
We have seen that there are benefits to be gained by using demodulators that are not constrained to return 0 or 1 but can also return erasures. Further benefits can be gained using demodulators that can assign a bit quality to each bit. Here the demodulator decides whether each bit is a 0 or a 1 and assigns a bit quality that indicates how good each bit is. A bit that is a clear-cut 0 or 1 would be assigned a high bit quality, whilst a bit that is only just a 0 or a 1 is given a low bit quality. In the case where a bit is equally likely to be a 0 or a 1, an erasure is returned. The decoder then makes its decision not just on the basis of the bit values 0, 1, or X, but also on the quality associated with each bit. The use of bit-quality information can give considerable coding gains, but at the expense of increased complexity for both the demodulator and the decoder.
1.9 Automatic-repeat-request schemes
considered So far
The communication channel that we have is one in
mation transfer takes place in one direction only, namely from the which infor
information is generated to the point at which the i information is usedpoint at which
Error-control encoding takes place prior to transmission
view to and on (see Fig. I.1),.
the information, decoding takes place with a
correcting, any errors incurred during the transmission. The
detecting, reception
and if of
user is referred to direction of possible
tion transfer from the source to the asforward path informa-
the
error-correction techniques previously considered are known as and the
correction schemes. A channel within which transmission is possible
from the
the source is said to have a return path. The existence of a return path allows
forward-error-
user to
decodingrequests
to be made for retransmission of information in the event of a
Strategies of error control based on requests for retransmission are failure.
Automatic- Repeat- Request (AR)schemes. referred to as
Figure 1.15(a) shows one of the simplest ARQ schemes, namely a stop-and-wait scheme. Here the transmitter sends a word w1 on the forward path, and waits for an acknowledgement (ACK) on the return path before sending the next word w2. If the decoder at the receiver detects no errors in w1 then the receiver sends an ACK to the transmitter. The transmitter, upon receipt of the ACK, transmits the next word w2. However, if w1 is found to contain errors then the receiver sends a negative-acknowledgement (NACK) reply, in which case the transmitter will retransmit w1 instead of transmitting w2. Communication continues in this way with the transmitter waiting for a reply to each word sent, sending a new word whenever the reply is an ACK and retransmitting the previous word if the reply is a NACK. Such a stop-and-wait scheme is simple to implement but quite inefficient in terms of the usage of the communication channel, as the forward path lies idle whilst the transmitter waits for the ACK/NACK replies.
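The stop-and-wait behaviour can be captured in a toy simulation (an illustrative sketch; an erroneous word is assumed to fail only on its first transmission, and the function name is ours):

```python
def stop_and_wait(words, error_on_first_try):
    # the transmitter sends one word, then waits for an ACK or NACK
    # before proceeding; a NACK causes the same word to be retransmitted
    log = []
    failed = set()
    for w in words:
        while True:
            if w in error_on_first_try and w not in failed:
                failed.add(w)
                log.append((w, "NACK"))   # receiver detected errors in w
            else:
                log.append((w, "ACK"))    # error-free: transmitter moves on
                break
    return log

print(stop_and_wait(["w1", "w2", "w3"], {"w2"}))
# -> [('w1', 'ACK'), ('w2', 'NACK'), ('w2', 'ACK'), ('w3', 'ACK')]
```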
The go-back-N ARQ scheme, shown in Fig. 1.15(b) for N = 4, allows continuous transmission on the forward path, therefore avoiding idle transmission time. Here the transmitter does not wait for an ACK to each word sent, but transmits continuously until it receives a NACK. We assume that, because of delays within the system, if the ith word sent by the transmitter is erroneous then the NACK is received before the transmitter sends the (i + N)th word. The receiver does not send an ACK upon receipt of each error-free frame, but only sends a NACK whenever it
[Fig. 1.15 ARQ schemes: (a) stop-and-wait, with each word acknowledged before the next is sent; (b) go-back-4, with the transmitter retransmitting the erroneous word and the words that followed it, and the receiver discarding those words in the meantime.]
detects an error. Furthermore, the receiver discards the erroneous word and the N - 1 words that follow. The transmitter, upon receipt of a NACK, goes back N words and retransmits the ith word along with the N - 1 words that followed. By respectively discarding and retransmitting N words the receiver and the transmitter ensure that the correct sequence of words is preserved, without the receiver having to store words. In Fig. 1.15(b) w5 is in error and so the receiver replies with a NACK.
On receipt of the NACK the transmitter interrupts the sequence of words and retransmits w5, after which it continues with w6, w7, ... instead of w9. If the receiver is capable of storing words, then the go-back-N scheme can be improved by retransmitting only the words that are erroneous. The receiver, on detecting an erroneous word, discards the word and sends a NACK to the transmitter.
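The go-back-N behaviour can be sketched as a toy simulation (assumptions: an in-order channel, a NACK that takes effect after N words have gone out, errors only on a word's first transmission; function name ours):

```python
def go_back_n(num_words, N, error_words):
    # Toy go-back-N model: words in error_words are received in error on
    # their first transmission only.
    sent = []                    # order in which words leave the transmitter
    accepted = []                # words accepted, in sequence, by the receiver
    errored_once = set()
    i = 1                        # next word to transmit
    go_back_to, countdown = None, 0
    while len(accepted) < num_words:
        sent.append(i)
        if i == len(accepted) + 1:              # the word the receiver expects
            if i in error_words and i not in errored_once:
                errored_once.add(i)             # error detected: NACK, discard
                go_back_to, countdown = i, N
            else:
                accepted.append(i)
        # any other word is out of sequence and is discarded by the receiver
        if countdown:
            countdown -= 1
            if countdown == 0:                  # the NACK reaches the transmitter:
                i = go_back_to                  # go back N words and resend
                continue
        i += 1
    return sent, accepted

print(go_back_n(6, 4, {5})[0])   # -> [1, 2, 3, 4, 5, 6, 7, 8, 5, 6]
```

With w5 in error and N = 4, the transmitter sends w5 to w8, then goes back and resends from w5, matching Fig. 1.15(b).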
Problems
1.1 A (4,3) single-parity-check code is used to generate even-parity codewords. Determine, in terms of the bit-error probability p, the probability of
(i) correct decoding pc
(ii) a decoding error pe
(iii) a decoding failure pf.
Evaluate pc, pe, and pf when p = 5 x 10^-2.

1.2 A single-parity-check code has 8-bit codewords. Determine the maximum bit-error probability that can be tolerated so that codewords have a success rate of 99.9%.

1.3 An (n, n - 1) single-parity-check code is used for error detection in a channel with bit-error probability 10^-3. Find the maximum blocklength n such that the success rate does not fall below 99%.

1.4 Given an (n, 1) repetition code, determine the probability that an information bit is correct after decoding when n = 3 and when n = 5. Assume a bit-error probability of 0.05.

1.5 A (4,1) repetition code is used for single-error correction and double-error detection. Find the decoding failure rate for a bit-error probability of 0.01.

1.6 In a communication channel with bit-error probability 10^-2 an (n, 1) repetition code is used for error correction. Find the minimum blocklength that gives a bit-error probability less than 10^-4 after decoding. Assume odd values of n only.

1.7 A product code is constructed from the (4,3) and (5,4) single-parity-check codes. The information bits in the code arrays are denoted by ij,1, ij,2, ij,3, and ij,4, where j = 1, 2, and 3. Show that the overall parity-check bit p is the same whether it is obtained from the row parity checks or from the column parity checks.