Email address:
[email protected] (I. N. John), [email protected] (P. W. Kamaku), [email protected] (D. K. Macharia),
[email protected] (N. M. Mutua)
* Corresponding author
Received: December 1, 2016; Accepted: December 28, 2016; Published: January 20, 2017
Abstract: This paper provides an overview of two types of linear block codes: Hamming and cyclic codes. We generate, encode and decode these codes, and present schemes and algorithms for detecting and correcting errors with them. Using the error detection and correction schemes of Hamming and cyclic codes, we detect and correct errors in a communication channel.
Keywords: Linear Blocks, Hamming, Cyclic, Error-Detecting, Error-Correcting
1. Introduction

Coding theory is concerned with the transmission of data across noisy channels and the recovery of corrupted messages, Altaian [1]. It has found widespread applications in electrical engineering, digital communication, mathematics and computer science. While the problems in coding theory often arise from engineering applications, it is fascinating to note the crucial role played by mathematics in the development of the field.

The importance of algebra in coding theory is a commonly acknowledged fact, with many deep mathematical results being used in elegant ways in its advancement; coding theory therefore appeals not just to engineers and computer scientists, but also to mathematicians, and hence it is sometimes called algebraic coding theory, Doran [3].

Algebraic techniques involving finite fields, group theory, polynomial algebra as well as linear algebra deal with the design of error-correcting codes for the reliable transmission of information across noisy channels.

Usually, coding is divided into two parts:
a) Source coding: source encoding and source decoding.
b) Channel coding: channel encoding and channel decoding.

Figure 1. Model of a Data Transmission System.

Source encoding involves changing the message source to a suitable code, say u, to be transmitted through the channel. Channel encoding deals with the source encoded message u by introducing some extra data bits that will be used in detecting and even correcting the transmitted message, Hall [4]. Thus the result of the channel encoding is a code word, say v. Likewise, channel decoding and source decoding are applied on the destination side to decode the received code word r as correctly as possible. Figure 1 represents a model of a data transmission system.

221 Irene Ndanu John et al.: Error Detection and Correction Using Hamming and Cyclic Codes in a Communication Channel

For example: Consider a message source of four fruit words to be transmitted: apple, banana, cherry and grape. The source encoder encodes these words into the following binary data (u1 u2 u3 u4):

Apple → u1 = (0, 0), Banana → u2 = (0, 1),
Cherry → u3 = (1, 0), Grape → u4 = (1, 1).

Suppose the message 'apple' is to be transmitted over a noisy channel. The bits u1 = (0, 0) will be transmitted instead. Suppose an error of one bit occurred during the transmission and the code (0, 1) is received instead, as seen in the following figure. The receiver may not realize that the message was corrupted, and the received message will be decoded into 'banana'.

Suppose instead that the channel encoder adds redundant bits, encoding the four messages into five-bit codewords:

(00) → (00000),
(01) → (01111),
(10) → (10110),
(11) → (11001).

Again, if the message (00000) was transmitted over a noisy channel and only one error was introduced, then the received word must be one of the following five: (10000), (01000), (00100), (00010) or (00001). Since only one error occurred, and since each of these five words differs from (00000) by only one bit and from the other three correct codewords (01111), (10110) and (11001) by at least two bits, the receiver will decode the received message into (00000) and, hence, the received message will be correctly decoded into 'apple'.

Algebraic coding theory is basically divided into two major types of codes: linear block codes and convolutional codes, Blahut [2].

In this paper we present some encoding and decoding schemes, as well as some commonly used error detection/correction coding techniques, using linear block codes only. We discuss only two types of linear block codes: Hamming and cyclic codes.
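The decoding rule used in the example above, namely picking the codeword closest in Hamming distance to the received word, can be sketched in a few lines of Python (an illustrative sketch, not part of the paper's original scheme; the names codebook, hamming_distance and decode are ours):

```python
# Illustrative sketch: nearest-neighbor decoding of the four 5-bit codewords above.
codebook = {
    (0, 0): (0, 0, 0, 0, 0),   # apple
    (0, 1): (0, 1, 1, 1, 1),   # banana
    (1, 0): (1, 0, 1, 1, 0),   # cherry
    (1, 1): (1, 1, 0, 0, 1),   # grape
}

def hamming_distance(a, b):
    """Number of positions in which two words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    """Pick the codeword closest to the received word (minimum distance)."""
    return min(codebook.values(), key=lambda c: hamming_distance(c, received))

# A single error in (00000) is still decoded back to (00000), i.e. 'apple'.
print(decode((1, 0, 0, 0, 0)))   # -> (0, 0, 0, 0, 0)
```

Because every pair of codewords differs in at least three positions, any single-bit error lands strictly closer to the transmitted codeword than to any other.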
correct as many errors as the code could detect. The method involves trial-and-error calculation and thus needs to be improved and simplified to speed up the process. Asma & Ramanjaneyulu [12] studied the implementation of a convolution encoder and adaptive Viterbi decoder for error correction. Egwali Annie and Akwukwuma [13] investigated the performance evaluation of AN-VE, an error detection and correction code. Vikas Gupta and Chanderkant Verma [14] examined error detection and correction with the Viterbi mechanism. Error detecting and error correcting codes were examined by Chauhan et al. [15].

3. Methodology

This section sets out the methodology of the research by discussing linear block codes.

3.1. Basic Concepts of Block Codes

The data output of the source encoder is represented by a sequence of binary digits, zeros or ones. In block coding this sequence is segmented into message blocks u = (u0 u1 ... u_{k-1}) consisting of k digits each. There are a total of 2^k distinct messages. The channel encoder, according to certain rules, transforms each input message into a word v = (v0 v1 ... v_{n-1}) with n ≥ k.

3.2. Basic Properties of a Linear Block Code

The zero word (00…0) is always a codeword.
If c is a codeword, then (-c) is also a codeword.
A linear code is invariant under translation by a codeword. That is, if c is a codeword in a linear code C, then C + c = C.
The dimension k of the linear code C(n, k) is the dimension of C as a subspace of Vn over GF(2), i.e. dim(C) = k.

3.3. Encoding Scheme

If u = (u0 u1 ... u_{k-1}) is the message to be encoded, then the corresponding codeword v can be given as follows: v = u·G, where G is a generator matrix of the code.

3.4. Error Detection, Error Correction & Decoding Schemes

A fundamental concept in reliable communication of data is the ability to detect and correct the errors caused by the channel. In this chapter, we introduce the general schemes/methods of linear codes decoding.

Channel Model / Binary Symmetric Channel

The channel is the medium over which the information is conveyed. Examples of channels are telephone lines, internet cables, phone channels, etc. These are channels in which information is conveyed between two distinct places or between two distinct times, for example, by writing information onto a computer disk, then retrieving it at a later time.

Now, for purposes of analysis, channels are frequently characterized by mathematical models, which (it is hoped) are sufficiently accurate to be representative of the attributes of the actual channel. In this paper we restrict our work to a particularly simple and practically important channel model, called the binary symmetric channel (BSC), defined as follows:

Definition 1: A binary symmetric channel (BSC) is a memoryless channel which has channel alphabet {0, 1} and channel probabilities

p(1 received | 0 sent) = p(0 received | 1 sent) = p < 1/2,
p(0 received | 0 sent) = p(1 received | 1 sent) = 1 - p.

Figure 3 shows a BSC with crossover probability p.

Figure 3. Binary Symmetric Channel.

3.5. General Methods of Decoding Linear Codes Over BSC

In a communication channel we assume a codeword v = (v0 ... v_{n-1}) is transmitted and suppose r = (r0 ... r_{n-1}) is received at the output of the channel. If r is a valid codeword, we may conclude that there is no error in v. Otherwise, we know that some errors have occurred and we need to find the correct codeword that was sent by using any of the following general methods of linear codes decoding:

Maximum likelihood decoding
Nearest neighbor/minimum distance decoding
Syndrome decoding
Standard array
Syndrome decoding using a truth table

These methods for finding the most likely codeword sent are known as decoding methods.

3.5.1. Maximum Likelihood Decoding

Suppose the codewords {v0, v1, ..., v_{2^k-1}} form the linear block code C(n, k) and suppose a BSC with crossover probability p < 1/2 is used. Let a word r = (r0 r1 ... r_{n-1}) of length n be received when a codeword vr = (vr0 vr1 ... vr_{n-1}) ∈ C is sent. Then maximum likelihood decoding (MLD) will conclude that vr is the most likely codeword transmitted if vr maximizes the forward channel probabilities.

Pure and Applied Mathematics Journal 2016; 5(6): 220-231

3.5.2. Nearest Neighbor Decoding/Minimum Distance Decoding

Important parameters of linear block codes called the
Hamming distance and Hamming weight are introduced, as well as the minimum distance decoding.

3.5.3. The Minimum Distance Decoding

Suppose the codewords {v0, v1, ..., v_{2^k-1}} from a code C(n, k) are being sent over a BSC. If a word r is received, the nearest neighbor decoding (or minimum distance decoding) will decode r to the codeword vr that is the closest one to the received word r. Such procedures can be realized by an exhaustive search on the set of codewords, which consists of comparing the received word with all codewords and choosing the closest codeword.

3.5.4. Syndrome & Error Detection

Consider an (n, k) linear code C. Let v = (v0 v1 ... v_{n-1}) be a codeword that was transmitted over a noisy channel (BSC). Let r = (r0 r1 ... r_{n-1}) be the received vector at the output of the channel. Because of the channel noise, r may be different from v. Hence, the vector sum e = r + v = (e0 e1 ... e_{n-1}) is an n-tuple where ei = 1 if ri ≠ vi, and ei = 0 otherwise, for i = 0, 1, ..., n - 1. This n-tuple is called an error vector (or error pattern). The 1s in e are the transmission errors that the code is able to correct.

3.5.5. Syndrome & Error Correction

The syndrome s of a received vector r = v + e depends only on the error pattern e, and not on the transmitted codeword v.

3.5.6. Error-Detecting & Error-Correcting Capabilities of Block Codes

Error-Detecting Capabilities of Block Codes

Let u be a positive integer. A code C is u error detecting if, whenever a codeword incurs at least one and at most u errors, the resulting word is not a codeword. A code is exactly u error detecting if it is u error detecting but not (u + 1) error detecting.

Error-Correcting Capabilities of Block Codes

If a block code with minimum distance d_min is used for random-error correction, one would like to know how many errors the code is able to correct. A block code with minimum distance d_min guarantees correcting all the error patterns of t = ⌊(d_min - 1)/2⌋ or fewer errors. The parameter t is called the random-error-correcting capability of the code.

3.5.7. Syndrome Decoding

We will discuss a scheme for decoding linear block codes that uses a one-to-one correspondence between a coset leader and a syndrome, so that we can form a decoding table, which is much simpler to use than a standard array. The table consists of 2^(n-k) coset leaders (the correctable error patterns) and their corresponding syndromes. The exhaustive search algorithm on the set of 2^(n-k) syndromes of correctable error patterns can be realized if we have a decoding table in which syndromes correspond to coset leaders.

4. Binary Hamming Codes

4.1. Construction of Binary Hamming Codes

Hamming codes are the first important class of linear error-correcting codes, named after their inventor, Hamming [1], who asserted that by proper encoding of information, errors induced by a noisy channel or storage medium can be reduced to any desired level without sacrificing the rate of information transmission or storage. We discuss the binary Hamming codes with their shortened and extended versions that are defined over GF(2). These Hamming codes have been widely used for error control in digital communication and data storage. They have interesting properties that make encoding and decoding operations easier.

In this section we introduce Hamming codes as linear block codes that are capable of correcting any single error over the span of the code block length.

Let r ∈ Z. The Hamming code of order r is the code generated when you take a parity check matrix H, an r × (2^r - 1) matrix with columns that are all the (2^r - 1) non-zero bit strings of length r, in any order such that the last r columns form an identity matrix.

Remark 1:
Interchanging the order of the columns leads to an equivalent code.

Example 1:
Find codewords in the Hamming code C of order a) 1, b) 2, c) 3.

Solution

r = 1:
H is a 1 × (2^1 - 1) matrix ⇒ 1 × 1 ⇒ H = (1).
With G = (I / A) and H = (A^t / I), we get G = (1).
(1)(1) = 1, (1)(0) = 0 ⇒ C = {0, 1}.

r = 2:
H is a 2 × (2^2 - 1) matrix ⇒ 2 × 3.

H = [110; 101] = (A^t / I2)

A^t = [1; 1]

∴ A = (11)

G = (I / A) = (111)
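The construction used in this example (a parity check matrix H whose columns are all the nonzero length-r bit strings, with the last r columns forming an identity, and G = (I | A)) can be sketched programmatically. This is an illustration only: hamming_matrices is our own helper, and the ordering of the non-identity columns is one of many admissible choices (here, ascending binary order, which happens to reproduce the matrices of this example):

```python
import itertools

# Sketch: build H = (A^t | I_r) with all nonzero length-r columns, and G = (I_k | A).
def hamming_matrices(r):
    n = 2 ** r - 1
    k = n - r
    identity_cols = [tuple(1 if i == j else 0 for i in range(r)) for j in range(r)]
    others = [c for c in itertools.product((0, 1), repeat=r)
              if any(c) and c not in identity_cols]
    cols = others + identity_cols                 # last r columns form I_r
    H = [[col[i] for col in cols] for i in range(r)]
    A = [list(col) for col in others]             # rows of A are the columns of A^t
    G = [[1 if i == j else 0 for j in range(k)] + A[i] for i in range(k)]
    return H, G

H, G = hamming_matrices(2)
print(H)   # [[1, 1, 0], [1, 0, 1]]  -> the 2 x 3 matrix of the r = 2 case
print(G)   # [[1, 1, 1]]             -> G = (I | A) = (111)
```

For r = 3 the same helper yields the 3 × 7 matrix H and the 4 × 7 matrix G worked out in the continuation of this example, and one can check that H·G^t = 0 (mod 2) holds by construction.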
Performing linear combinations of the rows of G gives C = {111, 000}.

Remark 2:
Let r = 1. Then G is a 1 × 1 matrix. Since G = (I / A) and H = (A^t / I), we conclude that G = (1) and hence C = {0, 1}. Since all the codewords are linear combinations of the rows of G, this code has only two codewords.

r = 3:
H is a 3 × (2^3 - 1) matrix ⇒ 3 × 7.

H = [0111100; 1011010; 1101001] = (A^t / I3)

A^t = [0111; 1011; 1101]

A = [011; 101; 110; 111]

G = (I / A) = [1000011; 0100101; 0010110; 0001111]

Performing all linear combinations of the rows of G gives

C = {1000011, 0100101, 0010110, 0001111, 1100110, 1010101, 1001100, 0110011, 1011010, 0011001, 1110000, 1101001, 0111100, 1111111, 0000000 and 0101010}.

Suppose the linear code C(n, k) has an (n - k) × n matrix H as the parity check matrix, and that the syndrome of the received word r is given by S^T = H·r^T. Then the decoder must attempt to find a minimum weight e which solves the equation S^T = H·e^T; the syndrome is the vector sum of those columns of the matrix corresponding to the positions of the errors.

Now, if all error words of weight one are to have distinct syndromes, then it is evidently necessary and sufficient that all columns of the matrix be distinct: for if w(e) = 1 with ei = 1, then S_i^T = hi, and if ej = 1 then S_j^T = hj; hence S_i^T ≠ S_j^T whenever hi ≠ hj for i ≠ j.

In other words, the parity-check matrix H of this code consists of all the nonzero (n - k)-tuples as its columns. Thus, there are n = 2^(n-k) - 1 possible columns. The code resulting from the above is called a binary Hamming code of length n = 2^m - 1 and k = 2^m - 1 - m, where m = n - k.

Definition 2:
For any integer m > 1 there is a Hamming code, Ham(m), of length 2^m - 1 with m parity bits and 2^m - 1 - m information bits. Using a binary m × n parity check matrix whose columns are all the m-dimensional binary vectors different from zero, the Hamming code is defined as follows:

Ham(m) = {v = (v0 v1 ... v_{n-1}) ∈ Vn : H·v^T = 0}.

Table 1. (n, k) Parameters for Some Hamming Codes.

m    Hamming Code
3    (7, 4)
4    (15, 11)
5    (31, 26)
6    (63, 57)
7    (127, 120)

Theorem 1: The minimum distance of a Hamming code is at least 3.

Proof:
If Ham(m) contained a codeword v of weight 1, then v would have 1 in the i-th position and zero in all other positions. Since H·v^T = 0 = hi, the i-th column of H must be zero. This contradicts the definition of H. So Ham(m) has a minimum weight of at least 2.

If Ham(m) contained a codeword v of weight 2, then v would have 1 in the i-th and j-th positions and zero in all other positions. Again, since H·v^T = 0 = hi + hj, the columns hi and hj would have to add to zero. Write H = [h1, .., hi, .., hj, .., h_{2^m-1}], where each hi represents the i-th column of H. Since the columns of H are nonzero and distinct, no two columns add to zero. It follows that the minimum distance of a Hamming code is at least 3.

Since H consists of all the nonzero m-tuples as its columns, the vector sum of any two columns, say hi and hj, must also be a column in H, say hs, i.e. hi + hj = hs. Thus hi + hj + hs = 0 (in modulo-2 addition), which exhibits a codeword of weight 3. It follows that the minimum distance of a Hamming code is exactly 3.

Corollary 1: The Hamming code is capable of correcting all the error patterns of weight one and is capable of detecting all combinations of 2 or fewer errors.

Proof:
t = ⌊(d_min - 1)/2⌋ = ⌊(3 - 1)/2⌋ = 1. So the Hamming code is capable of correcting all the error patterns of weight one. And d_min - 1 = 3 - 1 = 2. Thus it also has the capability of detecting all combinations of 2 or fewer errors.

Result
For any positive integer m > 1, there exists a Hamming code with the following parameters:
Code length: n = 2^m - 1
Number of information symbols: k = 2^m - m - 1
Number of parity-check symbols: n - k = m
Random-error-correcting capability: t = 1 (d_min = 3).

4.2. The Generator and the Parity Check Matrices of Binary Hamming Codes Ham(m)

Generator Matrices

When we use a parity check bit (PCB) we encode a message x1 x2 ... xk as x1 x2 ... xk x_{k+1}, where x_{k+1} = Σ_{i=1}^{k} xi (mod 2). To generalize this notion we add more than one check bit and encode the message x1 x2 ... xk as x1 x2 ... xk x_{k+1} ... xn, where the last n - k bits are PCBs obtained from the k bits in the message.

The PCBs x_{k+1} x_{k+2} ... xn are specified as follows:
i. Consider the k-bit message x1 x2 ... xk as a 1 × k matrix X.
ii. Let G be a k × n matrix that begins with Ik, the k × k identity matrix. Hence G = (Ik / A), where A is a k × (n - k) matrix. G is called a generator matrix.
iii. We encode the message X as E(X) = XG, doing arithmetic mod 2.

Example 3:
a) Consider encoding by adding the PCB to a 3-bit message, where

G = [1001; 0101; 0011].

Solution:
G = (I3 / A), where A = [1; 1; 1], i.e. the column of 1's is added to I3.

b) Consider the encoding using triple repetition for 3-bit messages as follows:

G = (I3 | I3 | I3) = [100100100; 010010010; 001001001].

c) Let

G = [100111; 010110; 001101].

Then G = (I3 | A), where A = [111; 110; 101].

What codewords does G generate?

Solution:
E(X) = XG, f : B^3 → B^6, with B^3 = {000, 001, 010, 100, 011, 101, 110, 111}.

(000)G = 000000
(001)G = 001101
(010)G = 010110
(100)G = 100111
(011)G = 011011
(101)G = 101010
(110)G = 110001
(111)G = 111100

So G generates {000000, 001101, 010110, 100111, 011011, 101010, 110001, 111100}.

Remark 3:
a) The codewords in a binary code generated by the generator matrix G can be obtained by performing all possible linear combinations of the rows of G, working mod 2. I.e., for

G = [100111; 010110; 001101]:
Rows = 100111, 010110, 001101. Adding combinations of these rows gives 110001, 011011, 101010, 111100 and 000000.

The binary codes formed using a generator matrix have the closure property; they are therefore linear codes. I.e., consider codewords y1 and y2 generated by G:

y1 = x1·G, i.e. E(x1),
y2 = x2·G, i.e. E(x2).

E(x1 + x2) = (x1 + x2)G = x1·G + x2·G = y1 + y2.

Parity Check Matrices

A simple way to detect errors is by use of a parity check bit (PCB). A parity bit, or check bit, is a bit added to the end of a string of binary code that indicates whether the number of bits in the string with the value one is even or odd. Parity bits are used as the simplest form of error detecting code. There are two types of parity bits: the even parity bit and the odd parity bit. If an odd number of bits (including the parity bit) are transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission.

Because of its simplicity, parity is used in many hardware applications where an operation can be repeated in case of difficulty, or where simply detecting the error is helpful. In serial data transmission, a common format is 7 data bits, an even parity bit, and one or two stop bits. This format neatly accommodates all the 7-bit ASCII characters in a convenient 8-bit byte.

If a bit string contains an even number of 1s we put 0 at the end; if it contains an odd number of 1s we put a 1 at the end. Our aim is to ensure an even number of 1s in any codeword. I.e., a message x1 x2 ... xn is encoded as x1 x2 ... xn x_{n+1}, where x_{n+1} = x1 + x2 + ... + xn.

Suppose a PCB is added to a bit string during transmission. What would you conclude about the following received messages?
a) 101011101 - It contains an even number of 1s. Hence it is either a valid codeword or contains an even number of errors.
b) 11110010111001 - It contains an odd number of 1s, hence it cannot be a valid codeword and must therefore contain an odd number of errors.

Consider the generator matrix

G = [100111; 010110; 001101].

The bit string x1 x2 x3 is encoded as x1 x2 x3 x4 x5 x6, where:

x4 = x1 + x2 + x3
x5 = x1 + x2
x6 = x1 + x3

I.e. the parity check equations are

x1 + x2 + x3 + x4 = 0
x1 + x2 + x5 = 0
x1 + x3 + x6 = 0,

that is, in matrix form,

[111100; 110010; 101001]·(x1, x2, x3, x4, x5, x6)^t = (0, 0, 0)^t.

Hence H[E(x)^t] = 0, where E(x)^t is the transpose of E(x) and the parity check matrix is

H = [111100; 110010; 101001] = (A^t / I3).

If Hy^t ≠ 0, then y is not a valid codeword, that is, it is in error. When the columns of the parity check matrix are distinct and all non-zero, H can be used to correct the errors. Suppose x is sent and y is received in error; then y = x + e, e being the error string. If e = 0 there is no error. In general the error string e has a 1 in the positions where y differs from x and 0 in all other places.

Example 6:
x = 110010
y = 100010
⇒ y = x + e. Hence e = 010000.

Remark:
H[y^t] = H[x + e]^t = Hx^t + He^t = He^t.

Hy^t = He^t = cj, where cj is the j-th column of H. Assuming no more than one error exists, we can find the codeword x that was sent by simply computing Hy^t. If Hy^t = 0, then there is no error and y is the sent codeword. Otherwise the j-th bit is in error and should be changed to produce x.

Example 7:
Let G = [100111; 010110; 001101]. Obtain H, and determine the codeword sent given the received words y = 001111 and y = 010001, assuming no more than one error.

Solution
G = (I3 / A), where A = [111; 110; 101].

H = (A^t / I3) ⇒ H = [111100; 110010; 101001].

(i) Hy^t = [111100; 110010; 101001]·(0, 0, 1, 1, 1, 1)^t = (0, 1, 0)^t = cj.

Checking H, cj is the 5th column. Hence the 5th bit of the received string is in error. y = 001111; hence x = 001101.

(ii) Hy^t = [111100; 110010; 101001]·(0, 1, 0, 0, 0, 1)^t = (1, 1, 1)^t = cj.

Checking H, cj is the 1st column. Hence the 1st bit of the received string is in error. y = 010001; hence x = 110001.

4.4. Cyclic Codes

Cyclic codes form an important subclass of linear block codes and were first studied by Prange in 1957. These codes are popular for two main reasons: first, they are very effective for error detection/correction and, second, they possess many algebraic properties that simplify the encoding and the decoding implementations.

A code C is said to be cyclic if:
i. C is a linear code.
ii. Whenever a right or left shift is performed on any codeword, it yields another codeword; i.e., whenever a0 a1 ... an ∈ C then a1 a2 ... an a0 ∈ C.

Remark 6:
a1 a2 ... an a0 is the first cyclic shift.

Example 8:
C = {000, 110, 101, 011}

000 → 000 → 000
110 → 101 → 011 → 110

Hence C is cyclic.

Cyclic codes are useful in:
i. Shift registers.
ii. On the theoretical side, cyclic codes can be investigated by means of the algebraic theory of rings and polynomials.

Description of Cyclic Codes

If the components of an n-tuple v = (v0 v1 ... v_{n-1}) are cyclically shifted one place to the right, we obtain another n-tuple, v(1) = (v_{n-1} v0 ... v_{n-2}), which is called a cyclic shift of v. Clearly, the cyclic shift of v is obtained by moving the right-most digit v_{n-1} of v to the left-most position and moving every other digit v0, v1, ..., v_{n-2} one position to the right.
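The cyclic-shift test described above can be sketched in Python (an illustration only; cyclic_shift is our own helper name). It checks that the code C of Example 8 is closed under a one-place right shift:

```python
# Illustrative sketch: rotate each codeword one place to the right and check
# that the result stays inside the code (the closure test of Example 8).
def cyclic_shift(word):
    """Move the right-most digit to the front: v -> (v_{n-1} v_0 ... v_{n-2})."""
    return word[-1] + word[:-1]

C = {"000", "110", "101", "011"}
print(all(cyclic_shift(v) in C for v in C))   # -> True
```

Since C is also closed under addition mod 2, both conditions of the cyclic-code definition hold.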
Shifting the components of v cyclically i places to the right, the resultant n-tuple would be

v(i) = (v_{n-i} v_{n-i-1} ... v_{n-1} v0 v1 ... v_{n-i-1}).

Remark 9: Cyclically shifting v i places to the right is equivalent to cyclically shifting v (n - i) places to the left.

Definition 3: An (n, k) linear code C is called cyclic if any cyclic shift of a codeword in C is also a codeword in C, i.e. whenever (v0 v1 ... v_{n-1}) ∈ C, then so is (v_{n-1} v0 ... v_{n-2}).

Example: Consider the following (7, 4) linear code C. One can easily check that the cyclic shift of a codeword in C is also a codeword in C. For instance, let v = (1101000) ∈ C; then v(1) = (0110100) ∈ C. Hence, the code C is cyclic.

Correspondence Between Bit Strings and Polynomials over Z2

The key to the algebraic treatment of cyclic codes is the correspondence between the word a = a0 a1 a2 ... a_{n-1} in V^n or B^n and the polynomial

a(x) = a0 + a1·x + a2·x^2 + ... + a_{n-1}·x^{n-1} in Z2[x].

In this correspondence the first cyclic shift â of a codeword a is represented by the polynomial

â(x) = a_{n-1} + a0·x + a1·x^2 + ... + a_{n-2}·x^{n-1},

i.e.

Table 2. First Cyclic Shift.

        x^0      x^1   x^2   x^3   ...   x^{n-1}
a(x)    a0       a1    a2    a3    ...   a_{n-1}
â(x)    a_{n-1}  a0    a1    a2    ...   a_{n-2}

Consider:

x·a(x) = x(a0 + a1·x + a2·x^2 + ... + a_{n-1}·x^{n-1}),

x·a(x) - a_{n-1}(x^n - 1) = a0·x + a1·x^2 + ... + a_{n-1}·x^n - a_{n-1}·x^n + a_{n-1} = â(x),

so that

â(x) = x·a(x) - a_{n-1}(x^n - 1) = x·a(x) mod (x^n - 1).

Working with polynomials helps us to perform operations on cyclic codes with better understanding. We denote by V_n[x] the ring of polynomials modulo x^n - 1 with coefficients in Z2. The addition and multiplication of polynomials modulo x^n - 1 can be regarded as addition and multiplication of equivalence classes of polynomials. The equivalence classes form a ring, and if f(x) is irreducible we get a field.

Example 9:
f(x) = 1 + x^2 in V3[x].

Solution: The elements P(x) in V3[x] are

0, 1 + x + x^2, 1, x, x^2, 1 + x, 1 + x^2, x + x^2.

<1 + x^2> = {0, 1 + x^2, 1 + x, x + x^2}:

0 → 000
1 + x^2 → 101
1 + x → 110
x + x^2 → 011

C = {000, 101, 110, 011}.

4.4.1. Shift-Register Encoders for Cyclic Codes

In this section we present circuits for performing the encoding operation, by presenting circuits for computing polynomial multiplication and division. Hence, we shall show that every cyclic code can be encoded with a simple finite-state machine called a shift-register encoder. To define the shift register we begin with the following definition.

Definition 4: A D flip-flop is a one-bit memory storage element. Consider the field GF(2).

Figure 4. Flip-Flop.

External clock: Not pictured in our simplified circuit diagrams, but an important part of them; it generates a timing signal ("tick") every t0 seconds. When the clock ticks, the content of each flip-flop is shifted out of the flip-flop in the direction of the arrow, through the circuit to the next flip-flop. The signal then stops until the next tick.

Adder: The adder symbol has two inputs and one output, which is computed as the sum of the inputs (modulo-2 addition).
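The building blocks just described can be illustrated with a toy simulation. This is a sketch only: a real shift-register encoder also feeds bits back through mod-2 adders according to the generator polynomial, which we omit here; ShiftRegister is our own name, and the flip-flops are simply modelled as a list of bits rotated once per clock tick:

```python
# Illustrative sketch: D flip-flops connected in a ring, each passing its bit
# to the next stage on every clock tick (no feedback adders in this toy model).
class ShiftRegister:
    def __init__(self, bits):
        self.bits = list(bits)          # one bit per flip-flop

    def tick(self):
        """On a clock tick, every flip-flop passes its bit to the next stage."""
        self.bits = [self.bits[-1]] + self.bits[:-1]
        return self.bits

sr = ShiftRegister([1, 1, 0, 1, 0, 0, 0])   # the codeword 1101000 from above
print(sr.tick())   # -> [0, 1, 1, 0, 1, 0, 0], i.e. the cyclic shift 0110100
```

Each tick of this ring realizes exactly the map a(x) -> x·a(x) mod (x^n - 1) derived earlier.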
r(x) = a(x)·g(x) + s(x)
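The surrounding discussion of this identity is not preserved here; under the usual reading (dividing r(x) by g(x) to obtain quotient a(x) and remainder s(x) over GF(2)), the division the circuits of this section compute can be sketched as follows (gf2_divmod is our own helper; coefficient lists run from lowest degree upward):

```python
# Hedged sketch of polynomial division over GF(2): r(x) = a(x) g(x) + s(x),
# with deg s < deg g. Polynomials are coefficient lists [c0, c1, ...].
def gf2_divmod(r, g):
    """Return (quotient a, remainder s) of r(x) / g(x) over GF(2)."""
    r = list(r)
    dg = max(i for i, c in enumerate(g) if c)       # degree of g
    a = [0] * max(len(r) - dg, 1)
    for i in range(len(r) - 1, dg - 1, -1):
        if r[i]:                                    # cancel the leading term
            a[i - dg] = 1
            for j, c in enumerate(g[:dg + 1]):
                r[i - dg + j] ^= c                  # subtraction = XOR over GF(2)
    return a, r[:dg]

# (1 + x^3) / (1 + x): quotient 1 + x + x^2, remainder 0, since mod 2
# (1 + x)(1 + x + x^2) = 1 + x^3.
a, s = gf2_divmod([1, 0, 0, 1], [1, 1])
print(a, s)   # -> [1, 1, 1] [0]
```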
[2] Blahut R. Algebraic Codes for Data Transmission. United Kingdom: Cambridge University Press; 2003. 482p.

[3] Doran R. Encyclopedia of Mathematics and its Applications. 2nd ed. Cambridge University Press; 2002. 205p.

[4] Hall J. Notes on Coding Theory. United States of America: Michigan State University; 2003. 10p.

[5] Hamming R. Error Detecting and Error Correcting Codes. Bell Syst. Tech. J., 29. 1950; 147-160.

[6] Han Y. Introduction to Binary Linear Block Codes. National Taipei University, Taiwan. 97p.

[7] Kolman B. Introductory Linear Algebra: with Applications. 3rd ed. United States of America: Prentice Hall; 1997. 608p.

[8] Kabatiansky G. Error Correcting Coding and Security for Data Networks. John Wiley & Sons, Ltd; 2005. 278p.

[11] Todd, K. M. (2005). Error Correction Coding: Mathematical Methods and Algorithms. John Wiley & Sons Inc.

[12] Asma & Ramanjaneyulu (2015): Implementation of Convolution Encoder and Adaptive Viterbi Decoder for Error Correction, International Journal of Emerging Engineering Research and Technology.

[13] Egwali Annie O. and Akwukwuma V. V. N. (2013): Performance Evaluation of AN-VE: An Error Detection and Correction Code, African Journal of Computing & ICT.

[14] Vikas Gupta, Chanderkant Verma (2012): Error Detection and Correction: Viterbi Mechanism, International Journal of Computer Science and Communication Engineering.

[15] Neha Chauhan, Pooja Yadav, Preeti Kumari (2014): Error Detecting and Error Correcting Codes, International Journal of Innovative Research in Technology.