

Pure and Applied Mathematics Journal
2016; 5(6): 220-231
http://www.sciencepublishinggroup.com/j/pamj
doi: 10.11648/j.pamj.20160506.17
ISSN: 2326-9790 (Print); ISSN: 2326-9812 (Online)

Error Detection and Correction Using Hamming and Cyclic Codes in a Communication Channel

Irene Ndanu John1, *, Peter Waweru Kamaku2, Dishon Kahuthu Macharia1, Nicholas Muthama Mutua1

1 Mathematics and Informatics Department, Taita Taveta University, Voi, Kenya
2 Pure and Applied Mathematics Department, Jomo Kenyatta University of Agriculture and Technology (JKUAT), Nairobi, Kenya

Email address:
[email protected] (I. N. John), [email protected] (P. W. Kamaku), [email protected] (D. K. Macharia),
[email protected] (N. M. Mutua)
* Corresponding author

To cite this article:


Irene Ndanu John, Peter Waweru Kamaku, Dishon Kahuthu Macharia, Nicholas Muthama Mutua. Error Detection and Correction Using
Hamming and Cyclic Codes in a Communication Channel. Pure and Applied Mathematics Journal. Vol. 5, No. 6, 2016, pp. 220-231.
doi: 10.11648/j.pamj.20160506.17

Received: December 1, 2016; Accepted: December 28, 2016; Published: January 20, 2017

Abstract: This paper provides an overview of two types of linear block codes: Hamming and cyclic codes. We have generated, encoded and decoded these codes, and presented schemes and/or algorithms for error detection and error correction with them. We have managed to detect and correct errors in a communication channel using the error detection and correction schemes of Hamming and cyclic codes.

Keywords: Linear Blocks, Hamming, Cyclic, Error-Detecting, Error-Correcting

1. Introduction

Coding theory is concerned with the transmission of data across noisy channels and the recovery of corrupted messages, Altaian [1]. It has found widespread applications in electrical engineering, digital communication, mathematics and computer science. While the problems in coding theory often arise from engineering applications, it is fascinating to note the crucial role played by mathematics in the development of the field.

The importance of algebra in coding theory is a commonly acknowledged fact, with many deep mathematical results being used in elegant ways in the advancement of coding theory. Coding theory therefore appeals not just to engineers and computer scientists but also to mathematicians, and hence it is sometimes called algebraic coding theory, Doran [3].

Algebraic techniques involving finite fields, group theory, polynomial algebra as well as linear algebra deal with the design of error-correcting codes for the reliable transmission of information across noisy channels.

Usually, coding is divided into two parts:
a) Source coding: source encoding and source decoding.
b) Channel coding: channel encoding and channel decoding.

Figure 1. Model of a Data Transmission System.

Source encoding involves changing the message source to a suitable code, say u, to be transmitted through the channel. Channel encoding deals with the source-encoded message u by introducing some extra data bits that will be used in detecting and even correcting errors in the transmitted message, Hall [4]. Thus the result of the channel encoding is a code word, say v. Likewise, channel decoding and source decoding are applied on the destination side to decode the received code word r as correctly as possible. Figure 1 represents a model of a data transmission system.

For example, consider a message source of four fruit words to be transmitted: apple, banana, cherry and grape. The source encoder encodes these words into the following binary data (u1, u2, u3, u4):

Apple → u1 = (0, 0), Banana → u2 = (0, 1),
Cherry → u3 = (1, 0), Grape → u4 = (1, 1).

Suppose the message 'apple' is to be transmitted over a noisy channel. The bits u1 = (0, 0) will be transmitted instead. Suppose an error of one bit occurred during the transmission and the code (0, 1) is received instead, as seen in the following figure. The receiver may not realize that the message was corrupted, and the received message will be decoded into 'banana'.

Figure 2. A communication error occurred.

With channel coding, this error may be detected (and even corrected) by introducing a redundancy bit as follows (v1, v2, v3, v4):

(00) → v1 = (000),
(01) → v2 = (011),
(10) → v3 = (101),
(11) → v4 = (110).

The newly encoded message 'apple' is now (000). Suppose this message was transmitted and an error of one bit only occurred. The receiver may get one of the following: (100), (010) or (001). In this way, we can detect the error, as none of (100), (010) or (001) is among our encoded messages.

Note that the above channel encoding scheme does not allow us to correct errors. For instance, if (100) is received, then we do not know whether (100) comes from (000), (101) or (110). However, if three more redundancy bits are introduced instead of one bit, we will be able to correct errors. For instance, we can design the following channel coding scheme:

(00) → (00000),
(01) → (01111),
(10) → (10110),
(11) → (11001).

Again, if the message (00000) was transmitted over a noisy channel and only one error was introduced, then the received word must be one of the following five: (10000), (01000), (00100), (00010) or (00001). Since only one error occurred, and since each of these five words differs from (00000) by only one bit and from the other three codewords (01111), (10110) and (11001) by at least two bits, the receiver will decode the received word into (00000) and, hence, the received message will be correctly decoded into 'apple'.

Algebraic coding theory is basically divided into two major types of codes: linear block codes and convolutional codes, Blahut [2].

In this paper we present some encoding and decoding schemes as well as some commonly used error detection/correction coding techniques using linear block codes only. We discuss only two types of linear block codes: Hamming and cyclic codes.

1.1. Problem Statement

In any environment, noise, electromagnetic radiation and other forms of disturbance affect communication, leading to corrupted messages, errors in the received messages, or even to the message not being received at all.

1.2. Objectives of the Study

1.2.1. General Objective

The main objective of this study was to provide an overview of two types of linear block codes, Hamming and cyclic codes, and to study schemes and/or algorithms for error detection and correction with these codes.

1.2.2. Specific Objectives

To generate, encode and decode Hamming and cyclic codes.
To detect and correct errors using Hamming and cyclic codes.

1.3. Justification of Study

Transmission of data across noisy channels and other forms of interference affect communication, resulting in corrupted or misdirected messages. Hence, we need the recovery of these corrupted messages. Considering the present concern with privacy and secrecy, and the prospect that such problems will increase significantly as communication services and data repositories grow, importance is thus attached to finding means of detecting and correcting any errors that occur. Hence the need for a code that fully guarantees security, in the sense that whenever two or more persons send or receive this code, data integrity and authentication are guaranteed. In this paper we present some encoding and decoding schemes as well as error detection/correction coding techniques using linear block codes only.
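The behaviour of the three-bit redundancy scheme in the introduction (errors are detected but cannot be corrected) can be checked with a short script. This is a minimal sketch; the function and variable names are ours, not the paper's.

```python
# Sketch (names ours) of the single-redundancy-bit scheme from the
# introduction: messages 00, 01, 10, 11 are encoded as 000, 011,
# 101, 110.  A single bit error is always detected but cannot be
# corrected, because several codewords are equally close.

CODEBOOK = {"00": "000", "01": "011", "10": "101", "11": "110"}
CODEWORDS = set(CODEBOOK.values())

def distance(a, b):
    """Hamming distance between two equal-length words."""
    return sum(x != y for x, y in zip(a, b))

def flip(word, i):
    """Flip bit i of word."""
    return word[:i] + ("1" if word[i] == "0" else "0") + word[i + 1:]

# Every single-bit corruption of 000 leaves the set of codewords,
# so the error is detected...
print([flip("000", i) in CODEWORDS for i in range(3)])  # -> [False, False, False]

# ...but (100) is at distance 1 from 000, 101 and 110 alike, so the
# error cannot be located:
print(sorted(c for c in CODEWORDS if distance(c, "100") == 1))  # -> ['000', '101', '110']
```

The ambiguous second result is exactly the situation described in the text: (100) could have come from (000), (101) or (110).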


This study is applicable in:
Deep space communication.
Satellite communication.
Data transmission.
Data storage.
Mobile communication.
File transfer.
Digital audio/video transmission.

1.4. Null Hypothesis

Codes for non-binary input channels, such as the dual code [1] useful on multiple frequency shift channels, and practical implementations of high-rate codes, cannot work on binary channels.

2. Literature Review

The history of data-transmission codes began in 1948 with the publication of a famous paper by Claude Shannon. Shannon showed that associated with any communication channel or storage channel is a number C (measured in bits per second), called the capacity of the channel, which has the following significance: whenever the information transmission rate R (in bits per second) required of a communication or storage system is less than C, then, by using a data-transmission code, it is possible to design a communication system for the channel whose probability of output error is as small as desired. Shannon, however, did not tell us how to find suitable codes; his contribution was to prove that they exist and to define their role.

Throughout the 1950s, much effort was devoted to finding explicit constructions for classes of codes. The first block codes were introduced in 1950, when Hamming described a class of single-error-correcting block codes and published what is now known as the Hamming code, which remains in use in many applications today.

In 1957, among the first codes used practically were the cyclic codes, which were generated using shift registers. It was quickly noticed by Prange that the cyclic codes have a rich algebraic structure, the first indication that algebra would be a valuable tool in code design.

In the 1960s, the major advances came in 1960, when Hocquenghem, and Bose and Ray-Chaudhuri, found a large class of multiple-error-correcting codes (the BCH codes). The discovery of BCH codes led to a search for practical methods of designing the hardware or software to implement the encoder and decoder. In the same year, independently, Reed, Solomon and Arimoto found a related class of codes for non-binary channels. Concatenated codes were introduced by Forney (1966); later Justesen used the idea of a concatenated code to devise a completely constructive class of long block codes with good performance.

During the 1970s, these two avenues of research began to draw together in some ways and to diverge further in others. Meanwhile, Goppa (1970) defined a class of codes that is sure to contain good codes, though without saying how to identify the good ones.

The 1980s saw encoders and decoders appear frequently in newly designed digital communication systems and digital storage systems, Hamming [5].

The 1990s witnessed an evaluation of all groups in informatics at the universities in Norway, performed by a group of internationally recognized experts. The committee observed that the period 1988-92 had the largest number of papers (27) published in internationally refereed journals among all the informatics groups in Norway.

In the period 1995-1997, the goal of finding explicit codes which reach the limits predicted by Shannon's original work was achieved. The constructions require techniques from a surprisingly wide range of pure mathematics: linear algebra, the theory of fields and algebraic geometry all play a vital role, Han [6]. Not only has coding theory helped to solve problems of vital importance in the world outside mathematics, it has also enriched other branches of mathematics, with new problems as well as new solutions, Kolman [7]. In 1998 Alamouti described a space-time code.

In 2000, Aji, McEliece and others synthesized several decoding algorithms using message-passing ideas. In the period 2002-2006 many books and papers were introduced, such as Algebraic Soft-Decision Decoding of Reed-Solomon Codes by Koetter R., Error Control Coding: Fundamentals and Applications by Lin and Costello, and Error Correction Coding by Moon T. in 2005.

During this decade came the development of algorithms for hard-decision decoding of large nonbinary block codes defined on algebraic curves, Kabatiansky [8]. Decoders for the codes known as Hermitian codes are now available, and these codes may soon appear in commercial products. At the same time, the roots of the subject are growing even deeper into the rich soil of mathematics.

Doumen (2003) researched the aims of cryptography in providing secure transmission of messages, in the sense that two or more persons can communicate in a way that guarantees confidentiality, data integrity and authentication.

Sebastia (2003) studied block error-correcting codes. He found that the minimum distance decoder maximizes the likelihood of correcting errors if all the transmission symbols have the same probability of being altered by the channel noise. He also noted that if a code has a minimum distance d(C), then d(C) - 1 is the highest integer with the property that the code detects d(C) - 1 errors.

Todd [11] studied error control coding. He showed that if a communication channel introduces fewer errors than the minimum distance d(C), then these can be detected, and that if d(C) - 1 errors are introduced, then error detection is guaranteed (see also [8]). He also noted that the probability of error detection depends only on the errors introduced by the communication channel, and that the decoder will make an error if more than half of the received bit strings are in error [9].

In 2009, Nyaga [10] studied the cyclic ISBN-10 to improve the conventional ISBN-10. They designed a code that would detect and correct multiple errors without many conditions attached for error correction, and found that the code could correct as many errors as the code could detect.
The method involves trial-and-error calculation, and thus it needs to be improved on and simplified to speed up the process.

Asma & Ramanjaneyulu [12] studied the implementation of a Convolution Encoder and Adaptive Viterbi Decoder for Error Correction. Egwali Annie and Akwukwuma [13] investigated Performance Evaluation of AN-VE: An Error Detection and Correction Code. Vikas Gupta and Chanderkant Verma [14] examined Error Detection and Correction: Viterbi Mechanism. Error-Detecting and Error-Correcting Codes were examined by Chauhan et al [15].

3. Methodology

This section sets out the methodology of the research by discussing the linear block codes.

3.1. Basic Concepts of Block Codes

The data output of the source encoder is represented by a sequence of binary digits, zeros or ones. In block coding, this sequence is segmented into message blocks u = (u0, u1, ..., u_{k-1}) consisting of k digits each. There are a total of 2^k distinct messages. The channel encoder, according to certain rules, transforms each input message into a word v = (v0, v1, ..., v_{n-1}) with n ≥ k.

3.2. Basic Properties of a Linear Block Code

The zero word (00...0) is always a codeword.
If c is a codeword, then (-c) is also a codeword.
A linear code is invariant under translation by a codeword. That is, if c is a codeword in a linear code C, then C + c = C.
The dimension k of the linear code C(n, k) is the dimension of C as a subspace of Vn over GF(2), i.e. dim(C) = k.

3.3. Encoding Scheme

If u = (u0, u1, ..., u_{k-1}) is the message to be encoded, then the corresponding codeword v is given as follows: v = u.G.

3.4. Error Detection, Error Correction & Decoding Schemes

A fundamental concept in secure communication of data is the ability to detect and correct the errors caused by the channel. In this section, we introduce the general schemes/methods of decoding linear codes.

Channel Model / Binary Symmetric Channel

The channel is the medium over which the information is conveyed. Examples of channels are telephone lines, internet cables, phone channels, etc. These are channels in which information is conveyed between two distinct places or between two distinct times, for example by writing information onto a computer disk, then retrieving it at a later time.

Now, for purposes of analysis, channels are frequently characterized by mathematical models, which (it is hoped) are sufficiently accurate to be representative of the attributes of the actual channel.

In this paper we restrict our work to a particularly simple and practically important channel model, called the binary symmetric channel (BSC), defined as follows:

Definition 1: A binary symmetric channel (BSC) is a memoryless channel which has channel alphabet {0, 1} and channel probabilities

p(1 received | 0 sent) = p(0 received | 1 sent) = p < 1/2,
p(0 received | 0 sent) = p(1 received | 1 sent) = 1 - p.

Figure 3 shows a BSC with crossover probability p.

Figure 3. Binary Symmetric Channel.

3.5. General Methods of Decoding Linear Codes Over BSC

In a communication channel we assume a code word v = (v0, ..., v_{n-1}) is transmitted, and suppose r = (r0, ..., r_{n-1}) is received at the output of the channel. If r is a valid codeword, we may conclude that there is no error in v. Otherwise, we know that some errors have occurred, and we need to find the correct codeword that was sent by using any of the following general methods of decoding linear codes:

Maximum likelihood decoding,
Nearest neighbor/minimum distance decoding,
Syndrome decoding,
Standard array,
Syndrome decoding using a truth table.

These methods for finding the most likely codeword sent are known as decoding methods.

3.5.1. Maximum Likelihood Decoding

Suppose the codewords {v0, v1, ..., v_{2^k - 1}} form the linear block code C(n, k), and suppose a BSC with crossover probability p < 1/2 is used. Let a word r = (r0, r1, ..., r_{n-1}) of length n be received when a codeword vr = (vr0, vr1, ..., vr_{n-1}) ∈ C is sent. Then maximum likelihood decoding (MLD) will conclude that vr is the most likely codeword transmitted if vr maximizes the forward channel probabilities.

3.5.2. Nearest Neighbor Decoding / Minimum Distance Decoding

Important parameters of linear block codes, the hamming distance and the hamming weight, are introduced, as well as the minimum distance decoding.
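The BSC model and nearest-neighbor decoding described above can be simulated with a short sketch. The code below uses the five-bit code from the introduction; all names are ours, and the crossover probability is an arbitrary illustrative value.

```python
import random

# Sketch (names ours): transmit codewords of the introductory 5-bit
# code over a binary symmetric channel (BSC) and decode the received
# word by minimum Hamming distance.
CODE = ["00000", "01111", "10110", "11001"]

def bsc(word, p, rng):
    """Flip each bit independently with crossover probability p."""
    return "".join(("1" if b == "0" else "0") if rng.random() < p else b
                   for b in word)

def min_distance_decode(r):
    """Return the codeword closest in Hamming distance to r."""
    return min(CODE, key=lambda c: sum(a != b for a, b in zip(c, r)))

rng = random.Random(2016)
sent = "10110"
received = bsc(sent, p=0.1, rng=rng)   # p = 0.1 chosen only for illustration
decoded = min_distance_decode(received)
print(sent, received, decoded)
```

With p < 1/2 each bit is more likely right than wrong, which is exactly why the closest codeword is the most likely one sent.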
3.5.3. The Minimum Distance Decoding

Suppose the codewords {v0, v1, ..., v_{2^k - 1}} from a code C(n, k) are being sent over a BSC. If a word r is received, nearest neighbor decoding (or minimum distance decoding) will decode r to the codeword vr that is the closest one to the received word r. Such a procedure can be realized by an exhaustive search on the set of codewords, which consists of comparing the received word with all codewords and choosing the closest codeword.

3.5.4. Syndrome & Error Detection

Consider an (n, k) linear code C. Let v = (v0, v1, ..., v_{n-1}) be a codeword that was transmitted over a noisy channel (BSC). Let r = (r0, r1, ..., r_{n-1}) be the received vector at the output of the channel. Because of the channel noise, r may be different from v. Hence, the vector sum e = r + v = (e0, e1, ..., e_{n-1}) is an n-tuple with ei = 1 wherever ri ≠ vi, for i = 0, 1, ..., n - 1. This n-tuple is called an error vector (or error pattern). The 1s in e are the transmission errors that the code is able to correct.

3.5.5. Syndrome & Error Correction

The syndrome s of a received vector r = v + e depends only on the error pattern e, and not on the transmitted codeword v.

3.5.6. Error-Detecting & Error-Correcting Capabilities of Block Codes

Error-Detecting Capabilities of Block Codes

Let u be a positive integer. A code C is u-error-detecting if, whenever a codeword incurs at least one and at most u errors, the resulting word is not a codeword. A code is exactly u-error-detecting if it is u-error-detecting but not (u + 1)-error-detecting.

Error-Correcting Capabilities of Block Codes

If a block code with minimum distance d_min is used for random-error correction, one would like to know how many errors the code is able to correct. A block code with minimum distance d_min guarantees correcting all the error patterns of t = ⌊(d_min - 1)/2⌋ or fewer errors. The parameter t is called the random-error-correcting capability of the code.

3.5.7. Syndrome Decoding

We will discuss a scheme for decoding linear block codes that uses a one-to-one correspondence between a coset leader and a syndrome, so that we can form a decoding table, which is much simpler to use than a standard array. The table consists of the 2^{n-k} coset leaders (the correctable error patterns) and their corresponding syndromes. So the exhaustive search algorithm on the set of 2^{n-k} syndromes of correctable error patterns can be realized if we have a decoding table in which syndromes correspond to coset leaders.

4. Binary Hamming Codes

4.1. Construction of Binary Hamming Codes

Hamming codes are the first important class of linear error-correcting codes, named after their inventor, Hamming [1], who asserted that by proper encoding of information, errors induced by a noisy channel or storage medium can be reduced to any desired level without sacrificing the rate of information transmission or storage. We discuss the binary Hamming codes, with their shortened and extended versions, that are defined over GF(2). These Hamming codes have been widely used for error control in digital communication and data storage. They have interesting properties that make encoding and decoding operations easier.

In this section we introduce Hamming codes as linear block codes that are capable of correcting any single error over the span of the code block length.

Let r ∈ Z. The Hamming code of order r is the code generated when you take as parity check matrix H an r × (2^r - 1) matrix with columns that are all the (2^r - 1) non-zero bit strings of length r, in any order such that the last r columns form an identity matrix.

Remark 1:
Interchanging the order of the columns leads to an equivalent code.

Example 1:
Find the codewords in the Hamming code C of order: a) 1, b) 2, c) 3.

Solution:

r = 1:
H is a 1 × (2^1 - 1) = 1 × 1 matrix, so H = (1). Since G = (I / A) and H = (A^t / I), we get G = (1). Then (1)(1) = 1 and (1)(0) = 0, so
⇒ C = {0, 1}.

r = 2:
H is a 2 × (2^2 - 1) = 2 × 3 matrix:

H = [1 1 0]
    [1 0 1] = (A^t / I2).

A^t = [1]
      [1], so A = (1 1).

G = (I / A) = (1 1 1).

Performing the linear combinations of the rows of G gives C = {000, 111}.
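The small codes constructed above can be enumerated programmatically. The sketch below (function names are ours) generates all codewords of a binary code as linear combinations of the rows of a generator matrix, working mod 2, and reproduces the order-2 result C = {000, 111}.

```python
from itertools import product

# Sketch (names ours): enumerate the codewords generated by a binary
# generator matrix G, i.e. all linear combinations of its rows mod 2.

def codewords(G):
    """Set of all mod-2 linear combinations of the rows of G."""
    k, n = len(G), len(G[0])
    words = set()
    for msg in product([0, 1], repeat=k):
        word = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                word = [(w + r) % 2 for w, r in zip(word, row)]
        words.add("".join(map(str, word)))
    return words

# Order r = 2: G = (1 1 1) gives the two-codeword repetition code.
print(sorted(codewords([[1, 1, 1]])))  # -> ['000', '111']
```

Applied to the 4 × 7 generator matrix of the order-3 code in the next example, the same function yields all 2^4 = 16 codewords.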
Remark 2:
Let r = 1. Then G is a 1 × 1 matrix. Since G = (I / A) and H = (A^t / I), we conclude that G = (1) and hence C = {0, 1}. Since all the codewords are linear combinations of the rows of G, this code has only two codewords.

r = 3:
H is a 3 × (2^3 - 1) = 3 × 7 matrix:

H = [0 1 1 1 1 0 0]
    [1 0 1 1 0 1 0]
    [1 1 0 1 0 0 1] = (A^t / I3).

A^t = [0 1 1 1]
      [1 0 1 1]
      [1 1 0 1],

A = [0 1 1]
    [1 0 1]
    [1 1 0]
    [1 1 1].

G = (I / A) =

[1 0 0 0 0 1 1]
[0 1 0 0 1 0 1]
[0 0 1 0 1 1 0]
[0 0 0 1 1 1 1].

Performing the linear combinations of the rows of G gives

C = {1000011, 0100101, 0010110, 0001111, 1100110, 1010101, 1001100, 0110011, 1011010, 0011001, 1110000, 1101001, 0111100, 1111111, 0000000, 0101010}.

Suppose the linear code C(n, k) has an (n - k) × n matrix H as its parity check matrix, and that the syndrome of the received word r is given by S^T = H.r^T. Then the decoder must attempt to find a minimum-weight e which solves the equation S^T = H.e^T.

Write e = (e0, e1, ..., e_{n-1}) and H = (h0, h1, ..., h_{n-1}), where ei ∈ GF(2) for all i = 0, 1, ..., n - 1 and each hi is an (n - k)-dimensional column vector over GF(2). Then

S^T = [h0 h1 ... h_{n-1}] . (e0, e1, ..., e_{n-1})^T = Σ_{i=0}^{n-1} ei.hi.

In other words, the syndrome may be interpreted as the vector sum of those columns of the matrix corresponding to the positions of the errors.

Now, if all error words of weight one are to have distinct syndromes, then it is evidently necessary and sufficient that all columns of the matrix be distinct: if w(e) = 1 with ei = 1, then S_i^T = hi, and if instead ej = 1, then S_j^T = hj; since hi ≠ hj for i ≠ j, we have S_i^T ≠ S_j^T.

In other words, the parity-check matrix H of this code consists of all the nonzero (n - k)-tuples as its columns. Thus, there are n = 2^{n-k} - 1 possible columns.

The code resulting from the above is called a binary Hamming code of length n = 2^m - 1 and dimension k = 2^m - 1 - m, where m = n - k.

Definition 2:
For any integer m > 1 there is a Hamming code, Ham(m), of length 2^m - 1, with m parity bits and 2^m - 1 - m information bits. Using a binary m × n parity check matrix whose columns are all the m-dimensional binary vectors different from zero, the Hamming code is defined as follows:

Ham(m) = {v = (v0, v1, ..., v_{n-1}) ∈ Vn : H.v^T = 0}.

Table 1. (n, k) Parameters for Some Hamming Codes.

m    Hamming Code
3    (7, 4)
4    (15, 11)
5    (31, 26)
6    (63, 57)
7    (127, 120)

Theorem 1: The minimum distance of a Hamming code is at least 3.

Proof:
If Ham(m) contained a codeword v of weight 1, then v would have 1 in the i-th position and zero in all other positions. Since Hv^T = 0 = hi, the i-th column of H would have to be zero. This is a contradiction of the definition of H. So Ham(m) has a minimum weight of at least 2.

If Ham(m) contained a codeword v of weight 2, then v would have 1 in the i-th and j-th positions and zero in all other positions. Again, since Hv^T = 0 = hi + hj, the columns hi and hj would not be distinct. This is a contradiction. So Ham(m) has a minimum weight of at least 3.

Then W_min ≥ 3. Since d_min = W_min in linear codes, d_min ≥ 3; therefore the minimum distance of a Hamming code is at least 3.

Theorem 2: The minimum distance of a Hamming code is exactly 3.

Proof:
Let C(n, k) be a Hamming code with an m × n parity-check matrix H. Let us express the parity-check matrix H in the following form:
H = [h1, .., hi, .., hj, .., h_{2^m - 1}], where each hi represents the i-th column of H. Since the columns of H are nonzero and distinct, no two columns add to zero. It follows that the minimum distance of a Hamming code is at least 3. Since H consists of all the nonzero m-tuples as its columns, the vector sum of any two columns, say hi and hj, must also be a column in H, say hs, i.e. hi + hj = hs. Thus,

hi + hj + hs = 0 (in modulo-2 addition).

It follows that the minimum distance of a Hamming code is exactly 3.

Corollary 1: The Hamming code is capable of correcting all the error patterns of weight one and is capable of detecting all patterns of 2 or fewer errors.

Proof:
t = ⌊(d_min - 1)/2⌋ = ⌊(3 - 1)/2⌋ = 1. So the Hamming code is capable of correcting all the error patterns of weight one. And d_min - 1 = 3 - 1 = 2; thus it also has the capability of detecting all patterns of 2 or fewer errors.

Result:
For any positive integer m > 1, there exists a Hamming code with the following parameters:
Code length: n = 2^m - 1.
Number of information symbols: k = 2^m - m - 1.
Number of parity-check symbols: n - k = m.
Random-error-correcting capability: t = 1 (d_min = 3).

4.2. The Generator and the Parity Check Matrices of Binary Hamming Codes Ham(m)

GENERATOR MATRICES

When we use a PCB, we encode a message x1 x2 ... xk as x1 x2 ... xk x_{k+1}, where x_{k+1} = Σ_{i=1}^{k} xi (mod 2).

To generalize this notion we add more than one check bit and encode the message x1 x2 ... xk as x1 x2 ... xk x_{k+1} ... xn, where the last n - k bits are PCBs obtained from the k bits in the message.

The PCBs x_{k+1} x_{k+2} ... xn are specified as follows:
i. Consider the k-bit message x1 x2 ... xk as a 1 × k matrix X.
ii. Let G be a k × n matrix that begins with Ik, the k × k identity matrix, i.e. G = (Ik / A), where A is a k × (n - k) matrix. G is called a generator matrix.
iii. We encode the message X as E(X) = XG, doing arithmetic mod 2.

Example 3:
a) Consider encoding by adding the PCB to a 3-bit message, where

G = [1 0 0 1]
    [0 1 0 1]
    [0 0 1 1],

i.e. the column of 1s is added to I3, so G = (I3 / A), where

A = [1]
    [1]
    [1].

b) Consider the encoding using triple repetition for 3-bit messages as follows: G = (I3 | I3 | I3), i.e.

G = [1 0 0 1 0 0 1 0 0]
    [0 1 0 0 1 0 0 1 0]
    [0 0 1 0 0 1 0 0 1].

c) Let

G = [1 0 0 1 1 1]
    [0 1 0 1 1 0]
    [0 0 1 1 0 1].

Then G = (I3 | A), where

A = [1 1 1]
    [1 1 0]
    [1 0 1].

What codewords does G generate?

Solution:
E(X) = XG, f: B^3 → B^6, where B^3 = {000, 001, 010, 100, 011, 101, 110, 111}.

(000)G = 000000
(001)G = 001101
(010)G = 010110
(100)G = 100111
(011)G = 011011
(101)G = 101010
(110)G = 110001
(111)G = 111100

G generates {000000, 001101, 010110, 100111, 011011, 101010, 110001, 111100}.

Remark 3:
a) The codewords in a binary code generated by the generator matrix G can be obtained by performing all possible linear combinations of the rows of G, working mod 2. For example, with

G = [1 0 0 1 1 1]
    [0 1 0 1 1 0]
    [0 0 1 1 0 1],

the rows give 100111, 010110 and 001101, and adding them in all possible ways gives 110001, 011011, 101010, 111100 and 000000.
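The encoding E(X) = XG of Example 3(c) can be checked with a short sketch (function name ours):

```python
# Sketch (names ours): encode every 3-bit message with E(X) = X.G
# (mod 2), using the generator matrix G = (I3 | A) of Example 3(c).
G = [
    [1, 0, 0, 1, 1, 1],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 1, 1, 0, 1],
]

def encode(msg):
    """Multiply the 1 x 3 message by the 3 x 6 matrix G over GF(2)."""
    return "".join(
        str(sum(m * g for m, g in zip(msg, col)) % 2)
        for col in zip(*G)   # iterate over the columns of G
    )

for x in range(8):
    msg = [(x >> 2) & 1, (x >> 1) & 1, x & 1]
    print("".join(map(str, msg)), "->", encode(msg))
```

Running this reproduces the eight codewords listed in the solution, e.g. (101)G = 101010 and (111)G = 111100.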
b) The binary codes formed using the generator matrix have the closure property. They are therefore linear codes: consider codewords y1 and y2 generated by G, i.e. y1 = x1.G = E(x1) and y2 = x2.G = E(x2). Then

E(x1 + x2) = (x1 + x2)G = x1.G + x2.G = y1 + y2.

PARITY CHECK MATRICES

A simple way to detect errors is by use of a parity check bit (PCB). A parity bit, or check bit, is a bit added to the end of a string of binary code that indicates whether the number of bits in the string with the value one is even or odd. Parity bits are used as the simplest form of error-detecting code. There are two types of parity bits: the even parity bit and the odd parity bit.

If an odd number of bits (including the parity bit) is transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission. Because of its simplicity, parity is used in many hardware applications where an operation can be repeated in case of difficulty, or where simply detecting the error is helpful. In serial data transmission, a common format is 7 data bits, an even parity bit, and one or two stop bits. This format neatly accommodates all the 7-bit ASCII characters in a convenient 8-bit byte.

If a bit string contains an even number of 1s we put 0 at the end; if it contains an odd number of 1s we put a 1 at the end. Our aim is to ensure an even number of 1s in any codeword. That is, the message x1 x2 ... xn is encoded as x1 x2 ... xn x_{n+1}, where x_{n+1} = x1 + x2 + ... + xn. A single error in communication will therefore be noticed, since it will change the parity.

Example 4:
Message: 101. Encoded as 1010.
Suppose 1010 is sent and 1110 is received (check: 1110 has an odd number of 1s → error).

Example 5:
Message: 10101. Encoded as 101011.
Suppose 101011 is sent and 111111 is received (check: an even number of 1s → no error detected), but there is an unnoticed error.

Remark 4:
We notice that when an even number of errors occurs, it is not noticed.

Suppose a PCB is added to a bit string during transmission. What would you conclude about the following received messages?
a) 101011101 – It contains an even number of 1s. Hence it is either a valid codeword or contains an even number of errors.
b) 11110010111001 – It contains an odd number of 1s, hence it cannot be a valid codeword and must therefore contain an odd number of errors.

Consider the generator matrix

G = [1 0 0 1 1 1]
    [0 1 0 1 1 0]
    [0 0 1 1 0 1].

The bit string x1 x2 x3 is encoded as x1 x2 x3 x4 x5 x6, where

x4 = x1 + x2 + x3,
x5 = x1 + x2,
x6 = x1 + x3.

That is,

x1 + x2 + x3 + x4 = 0
x1 + x2 + x5 = 0        – parity check equations,
x1 + x3 + x6 = 0

i.e.

[1 1 1 1 0 0]   (x1)   (0)
[1 1 0 0 1 0] . (x2) = (0)
[1 0 1 0 0 1]   (x3)   (0)
                (x4)
                (x5)
                (x6)

So H[E(x)^t] = 0, where E(x)^t is the transpose of E(x) and the parity check matrix is

H = [1 1 1 1 0 0]
    [1 1 0 0 1 0]
    [1 0 1 0 0 1] = (A^t / I3).

In general, H = (A^t / I_{n-k}).

Remark 5:
Relationship between the generator matrix and the parity check matrix: suppose G is a k × n matrix with G = (Ik / A), so that A is a k × (n - k) matrix. We associate to G the parity check matrix H = (A^t / I_{n-k}). x is then a codeword iff H.x^t = 0. From a generator matrix we can find the associated parity check matrix, and vice versa.

4.3. Syndrome & Error Detection/Correction

The parity check matrix is used to detect errors. Any received bit string y that does not satisfy the equation H.y^t = 0 is not a valid codeword; that is, it is in error.

Any received bit string y that does not satisfy the equation Hy^t = 0 is not a valid codeword; that is, it is in error. When the columns of the parity check matrix are distinct and all non-zero, H can be used to correct the errors. Suppose x is sent and y is received in error; then y = x + e, e being the error string. If e = 0 there is no error. In general the error string e has a 1 in each position where y differs from x and 0 in all other positions.

Example 6:
x = 110010
y = 100010
⇒ y = x + e. Hence e = 010000.

Remark:
H[y^t] = H[x + e]^t = Hx^t + He^t = He^t (since Hx^t = 0 for the codeword x).

Hy^t = He^t = cj, where cj is the jth column of H. Assuming no more than one error exists, we can find the codeword x that was sent by simply computing Hy^t. If Hy^t = 0, then no error occurred and y is the sent codeword. Otherwise the jth bit is in error and should be flipped to produce x.

Example 7:
Let

G = ( 1 0 0 1 1 1 )
    ( 0 1 0 1 1 0 )
    ( 0 0 1 1 0 1 )

Obtain H, and determine the codeword sent given the received strings
y = 001111
y = 010001
assuming no more than one error.

Solution
G = (I3 / A), where

A = ( 1 1 1 )
    ( 1 1 0 )
    ( 1 0 1 )

H = (A^t / I3) ⇒ H = ( 1 1 1 1 0 0 )
                     ( 1 1 0 0 1 0 )
                     ( 1 0 1 0 0 1 )

(i) Hy^t for y = 001111:

( 1 1 1 1 0 0 )   ( 0 )   ( 0 )
( 1 1 0 0 1 0 ) · ( 0 ) = ( 1 ) = cj
( 1 0 1 0 0 1 )   ( 1 )   ( 0 )
                  ( 1 )
                  ( 1 )
                  ( 1 )

Checking H, cj is the 5th column; hence the 5th bit of the received string is in error.
y = 001111, hence x = 001101.

(ii) Hy^t for y = 010001:

( 1 1 1 1 0 0 )   ( 0 )   ( 1 )
( 1 1 0 0 1 0 ) · ( 1 ) = ( 1 ) = cj
( 1 0 1 0 0 1 )   ( 0 )   ( 1 )
                  ( 0 )
                  ( 0 )
                  ( 1 )

Checking H, cj is the 1st column; hence the 1st bit of the received string is in error.
y = 010001, hence x = 110001.

4.4. Cyclic Codes

Cyclic codes form an important subclass of linear block codes and were first studied by Prange in 1957. These codes are popular for two main reasons: first, they are very effective for error detection/correction and second, they possess many algebraic properties that simplify the encoding and decoding implementations.

A code C is said to be cyclic if:
i. C is a linear code.
ii. Whenever a right or left shift is performed on any codeword, it yields another codeword; i.e. whenever a0 a1 ... an ∈ C then a1 a2 ... an a0 ∈ C.

Remark 6:
a1 a2 ... an a0 is the first cyclic shift.

Example 8:
C = {000, 110, 101, 011}
000 → 000 → 000
110 → 101 → 011 → 110
Hence C is cyclic.

Cyclic codes are useful in:
i. Shift registers.
ii. On the theoretical side, cyclic codes can be investigated by means of the algebraic theory of rings and polynomials.

Description of Cyclic Codes

If the components of an n-tuple v = (v0 v1 ... v(n−1)) are cyclically shifted one place to the right, we obtain another n-tuple, v(1) = (v(n−1) v0 ... v(n−2)), which is called a cyclic shift of v. Clearly, the cyclic shift of v is obtained by moving the rightmost digit v(n−1) of v to the leftmost position and moving every other digit v0, v1, ... v(n−2) one position to the right.
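Whether a given code is closed under cyclic shifts can be checked mechanically. A minimal sketch (function names are illustrative), applied to the code of Example 8:

```python
# Check that a binary code is closed under cyclic shifts.
def cyclic_shift(word):
    """One cyclic shift to the right: v0...v(n-1) -> v(n-1) v0 ... v(n-2)."""
    return word[-1] + word[:-1]

def shift_closed(code):
    """True if every cyclic shift of a codeword stays in the code.

    Note: linearity (condition i) must still be checked separately;
    this only tests the shift condition (ii)."""
    return all(cyclic_shift(w) in code for w in code)

C = {"000", "110", "101", "011"}
print(shift_closed(C))            # True, as in Example 8
print(shift_closed({"00", "10"})) # False: shifting 10 gives 01, not in the code
```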

Shifting the components of v cyclically i places to the right, the resultant n-tuple would be

v(i) = (v(n−i) v(n−i+1) ... v(n−1) v0 v1 ... v(n−i−1))

Remark 9: Cyclically shifting v i places to the right is equivalent to cyclically shifting v (n − i) places to the left.

Definition 3: An (n, k) linear code C is called cyclic if any cyclic shift of a codeword in C is also a codeword in C, i.e. whenever (v0 v1 ... v(n−1)) ∈ C, then so is (v(n−1) v0 ... v(n−2)).

Example: Consider the following (7, 4) linear code C:

C = {(0000000), (1101000), (0110100), (1011100), (0011010), (1110010), (0101110),
(1000110), (0001101), (1100101), (0111001), (1010001), (0010111), (1111111),
(0100011), (1001011)}

One can easily check that the cyclic shift of a codeword in C is also a codeword in C. For instance, let v = (1101000) ∈ C; then v(1) = (0110100) ∈ C. Hence, the code C is cyclic.

Correspondence between bit strings and polynomials over Z2

The key to the algebraic treatment of cyclic codes is the correspondence between the word a = a0 a1 a2 ... a(n−1) in V^n (or B^n) and the polynomial

a(x) = a0 + a1x + a2x^2 + ... + a(n−1)x^(n−1) in Z2[x].

In this correspondence the first cyclic shift of a codeword â is represented by the polynomial

â(x) = a(n−1) + a0x + a1x^2 + ... + a(n−2)x^(n−1)

i.e.

Table 2. First Cyclic Shift

        x^0      x^1   x^2   x^3   ...   x^(n−1)
a(x)    a0       a1    a2    a3    ...   a(n−1)
â(x)    a(n−1)   a0    a1    a2    ...   a(n−2)

Consider

xa(x) = x(a0 + a1x + a2x^2 + ... + a(n−1)x^(n−1)).

Then

xa(x) − a(n−1)(x^n − 1) = a0x + a1x^2 + ... + a(n−1)x^n − a(n−1)x^n + a(n−1) = â(x),

so that

â(x) = xa(x) − a(n−1)(x^n − 1) = xa(x) mod (x^n − 1).

Working with polynomials helps us to perform operations on cyclic codes for better understanding. We denote by V^n[x] the ring of polynomials modulo x^n − 1 with coefficients in Z2. The addition and multiplication of polynomials modulo x^n − 1 can be regarded as addition and multiplication of equivalence classes of polynomials. The equivalence classes form a ring, and if F(x) is irreducible we get a field.

Example 9:
f(x) = 1 + x^2 in V^3[x].
Solution: The elements P(x) of V^3[x] are

0, 1, x, x^2, 1 + x, 1 + x^2, x + x^2, 1 + x + x^2.

<1 + x^2> = {0, 1 + x^2, 1 + x, x + x^2}

0 → 000
1 + x^2 → 101
1 + x → 110
x + x^2 → 011

C = {000, 101, 110, 011}

4.4.1. Shift-Register Encoders for Cyclic Codes

In this section we present circuits for performing the encoding operation, namely circuits for computing polynomial multiplication and division. We shall show that every cyclic code can be encoded with a simple finite-state machine called a shift-register encoder. To define the shift register we need the following definition.

Definition 4: A D flip-flop is a one-bit memory storage in the field GF(2).

Figure 4. Flip-Flop.

External clock: Not pictured in our simplified circuit diagrams, but an important part of them; it generates a timing signal ("tick") every t0 seconds. When the clock ticks, the content of each flip-flop is shifted out of the flip-flop in the direction of the arrow, through the circuit to the next flip-flop. The signal then stops until the next tick.

Adder: The adder symbol has two inputs and one output, which is computed as the sum of the inputs (modulo-2 addition).
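The identity â(x) = xa(x) mod (x^n − 1) derived above can be verified numerically. The sketch below (our own helper names) represents a polynomial by its coefficient list, lowest degree first:

```python
# Cyclic shift as multiplication by x modulo x^n - 1, over GF(2).
def shift_right(a):
    """One cyclic shift of the word a0...a(n-1): last symbol moves to the front."""
    return a[-1:] + a[:-1]

def mul_x_mod(a):
    """Coefficients of x*a(x) mod (x^n - 1): the x^n term wraps around to x^0."""
    n = len(a)
    b = [0] * n
    for i, coeff in enumerate(a):
        b[(i + 1) % n] = coeff  # x * x^i = x^(i+1), and x^n = 1
    return b

a = [1, 0, 1, 1]                       # a(x) = 1 + x^2 + x^3, i.e. the word 1011
print(mul_x_mod(a))                    # [1, 1, 0, 1], i.e. 1 + x + x^3
print(mul_x_mod(a) == shift_right(a))  # True: the two operations agree
```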
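A software stand-in for the divide-by-g(x) circuitry discussed in this section is ordinary polynomial long division over GF(2). In the sketch below, g(x) = 1 + x + x^3 is the polynomial of the codeword (1101000) from the (7, 4) example above (it generates that code); the function name `poly_mod` is ours:

```python
# Syndrome computation by polynomial division over GF(2):
# s(x) = r(x) mod g(x), a software model of the shift-register circuit.
def poly_mod(r, g):
    """Remainder of r(x) divided by g(x); coefficient lists, lowest degree first."""
    r = r[:]  # work on a copy
    dg = len(g) - 1
    for i in range(len(r) - 1, dg - 1, -1):
        if r[i]:  # cancel the leading term with x^(i - dg) * g(x)
            for j, gj in enumerate(g):
                r[i - dg + j] ^= gj
    return r[:dg]  # the n - k remainder coefficients

g = [1, 1, 0, 1]                    # g(x) = 1 + x + x^3
codeword = [1, 1, 0, 1, 0, 0, 0]    # (1101000) <-> g(x) itself
print(poly_mod(codeword, g))        # [0, 0, 0]: zero syndrome, a valid codeword
corrupted = [1, 1, 1, 1, 0, 0, 0]   # one bit flipped
print(poly_mod(corrupted, g))       # nonzero syndrome flags the error
```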

Figure 5. Adder.

Multiplication: The multiplication symbol has one input and one output, where the output is the product of the input and the number gi stored in the symbol (either 1 or 0); 0 is represented by no connection and 1 by a connection.

Figure 6. Multiplication.

Definition: A shift-register is a chain of (n − k) D flip-flops connected to each other, where the output from one flip-flop becomes the input of the next flip-flop. All the flip-flops are driven by a common clock, and all are set or reset simultaneously.

Figure 7. Shift Register.

4.4.2. Cyclic Codes Decoding

Decoding of cyclic codes consists of the same three steps as decoding linear codes:
a. Syndrome computation.
b. Association of the syndrome to an error pattern.
c. Error correction.

For any linear code we can form a standard array, or we can use the reduced standard array using syndromes. For cyclic codes it is possible to exploit the cyclic structure of the code to decrease the memory requirements.

First we must determine whether the received word r is a codeword in C, using the theorem which states that r(x) ∈ C if and only if

r(x)h(x) ≡ 0 mod (x^n + 1), i.e. (x^n + 1) divides r(x)h(x).

If r(x) ∉ C, we determine the closest codeword in the (n, k) code C using the syndrome of r(x) as follows. Since every valid received code polynomial r(x) must be a multiple of the generator polynomial g(x) of C, when we divide r(x) by g(x) the remainder is zero exactly when r(x) is a codeword, i.e.

r(x) = a(x)g(x) + 0.

Thus we can employ the division algorithm to obtain a syndrome as follows:

r(x) = a(x)g(x) + s(x),

where a(x) is the quotient and s(x) is the remainder polynomial, having degree less than the degree of g(x):

s(x) = s0 + s1x + ... + s(n−k−1)x^(n−k−1).

Thus, to compute the syndrome we can use a circuit.

5. Applications, Conclusion and Recommendation

5.1. Applications

This study is applicable in:
Deep space communication.
Satellite communication.
Data transmission.
Data storage.
Mobile communication.
File transfer.
Digital audio/video transmission.

5.2. Conclusion

Decoding can be accomplished in the following manner:
i. If s(r) = 0, then we assume that no error occurred.
ii. If s(r) ≠ 0 and it contains an odd number of 1's, we assume that a single error occurred. The error pattern of a single error that corresponds to s is added to the received word for error correction.
iii. If s(r) ≠ 0 and it contains an even number of 1's, an uncorrectable error pattern has been detected.

5.3. Recommendation

The Hamming code corrects only single-error patterns and is capable of detecting all patterns of two or fewer errors, hence finding a way to correct more than one error and detect more than two errors would be effective. All the error detection and correction controlling mechanisms have been studied. The Hamming code is the most efficient error correction mechanism in long-distance communication. An interesting area of future research is the study of how the presence of caches would affect the correlation in the data input to the ECC memory, and whether there is any systematic pattern there that can be exploited by the optimization algorithms.

Acknowledgements

The authors wish to thank Taita Taveta University for the support given towards the completion of this research. Special thanks go to the Department of Mathematics and Informatics.
