
Chapter-1

INTRODUCTION

In a computer, any kind of data is stored and processed as binary digits. A bit is
either 0 or 1. Every letter has an ASCII code. For example, the ASCII code of the letter 'A'
is 01000001. Typically, data consists of billions of bits. It is therefore possible to model the
transmitted data as a string of 0s and 1s. Digital data is transmitted over a channel
(which could be a wire, network, space, air etc.), and there is often noise in the
channel. The noise may distort the messages being sent, so the received data
may not be the same as the transmitted data.

In binary digital communication systems, when a message is transmitted
through the communication channel or stored for later use, errors can occur that
cause a transmitted zero to be received as a one or a transmitted one to be received as
a zero. These errors are due to noise present in the channel in some form, and can be
corrected by error-control codes.

1.1: CODING THEORY

Coding theory is concerned with the reliability of communication over noisy
channels. The study of error-control codes is called coding theory. The goal of coding
theory is to improve the reliability of digital communication by devising methods that
enable the receiver to decide whether errors have occurred during transmission,
and if they have, to possibly recover the original message. Coding theory is the study
of the properties of codes and their fitness for a specific application. Codes are used
for data compression, cryptography, error correction and, more recently, also
for network coding.

There are essentially two aspects to coding theory:

1. Data compression
2. Error correction

Error-correcting codes are used in a wide range of communication systems.
Error-correcting and error-detecting codes play an important role in applications for
wireless networking, satellite communication, compact disks, and so on.

Error detection and correction are necessary for reliable transmission and
storage of data in communication systems. Information media are not fully reliable in
practice, in the sense that noise (any form of interference) frequently causes data to be
distorted. To deal with this undesirable but inevitable situation, some form of
redundancy is incorporated in the original data. The main method used to recover
messages that might be distorted during transmission over a noisy channel is to
employ redundancy. With this redundancy, even if errors are introduced (up to some
tolerance level) the original information can be recovered, or at least the presence of
errors can be detected.

The fundamental concept in error correction is that of a codeword, which is
just a string of digits. To correct errors we choose a collection of codewords that are
quite different from each other, so that a codeword which has been corrupted slightly
cannot be mistaken for any other codeword when it is received. Such a set of
codewords is called a q-ary code, where q determines how many different values the
digits of each codeword can take. For example, the codewords in a 2-ary (or binary)
code are strings of '0's and '1's, and the codewords in a 10-ary code are strings of the
usual decimal digits '0' to '9'.

Algebraic coding theory is basically divided into two major types of codes:

1. Linear block codes
2. Convolutional codes.

It mainly analyzes the following three properties of a code:

 Code word length
 Total number of valid code words
 The minimum distance between two valid code words, measured mainly
by the Hamming distance.

Linear block codes have the property of linearity, i.e. the sum of any two
codewords is also a codeword, and they are applied to the source bits in blocks, hence
the name linear block codes. Linear block codes are summarized by their symbol
alphabets (e.g., binary) and parameters (n, m, dmin), where

1. n is the length of the codeword, in symbols,
2. m is the number of source symbols that will be encoded at once,
3. dmin is the minimum Hamming distance for the code.

A block code is a code that uses sequences of n symbols, for some positive
integer n. Each sequence of length n is a code word or code block, and contains k
information digits (or bits). The remaining n − k digits in the code word are called
redundant digits or parity-check bits. They do not contain any additional information,
but make it possible to correct errors that occur in the transmission of the
code. The encoder for a block code is memoryless, which means that the n digits in
each code word depend only on the current message block and are independent of any
information contained in previous code words [5].

The convolutional encoder is more complicated, however, because it contains
memory. The idea behind a convolutional code is to make every codeword symbol a
weighted sum of various input message symbols. In a convolutional code, the n
digits of a code word also depend on the code words that were encoded previously
during a fixed length of time. The symbols that make up a code word are taken from
an alphabet of q elements [5]. A very common type of code is a binary code, for which
q is 2 and the alphabet typically consists of 0 and 1. In binary block coding, there are
2^n possible words of length n. However, each code word carries a
message of only length k, so we use only 2^k of the 2^n possible words, with k < n. This
set of 2^k code words of length n is called an (n, k) block code. The code rate is the
ratio R = k/n.
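These counts can be checked directly for the (23, 12) Golay code treated later in this document (a trivial sketch; the variable names are ours):

```python
# Parameters of the (23, 12) binary Golay code used later in this document.
n, k = 23, 12

total_words = 2 ** n   # all 2^n binary words of length n
codewords = 2 ** k     # only 2^k of them are valid codewords
rate = k / n           # code rate R = k/n

print(total_words, codewords, round(rate, 3))  # -> 8388608 4096 0.522
```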

There are many types of linear block codes, such as

1. Cyclic codes (e.g., Hamming codes)
2. Polynomial codes (e.g., BCH codes)
3. Binary Golay code (perfect codes)

In coding theory, a cyclic code [10] is a block code in which every circular
shift of a codeword gives another word that belongs to the code. Cyclic codes are error-
correcting codes with algebraic properties that are convenient for efficient error
detection and correction.

Error-correcting codes are used to correct errors when messages are
transmitted through a noisy communication channel. For example, we may wish to
send binary data (a stream of zeros and ones) through a noisy channel as quickly and
as reliably as possible. The channel may be a telephone line, a high frequency radio
link or a satellite communication link. The noise may be human error, lightning,
thermal noise, imperfections in equipment, etc.

One of the key features of BCH codes is that during code design, there is a
precise control over the number of symbol errors correctable by the code. In
particular, it is possible to design binary BCH codes that can correct multiple bit
errors. Another advantage of BCH codes is the ease with which they can be decoded,
namely, via an algebraic method. This simplifies the design of the decoder for these
codes, using small low-power electronic hardware.
1.2 SCOPE OF THE PROJECT
The main scope of this project is to study coding theory, the Golay coding
algorithm (encoding and decoding), and the syndrome decoding algorithm, a type
of hard-decision decoding algorithm mostly used for syndrome calculation, and
also to learn about error-correction capability.

Chapter-2
SPECIFICATION AND DESIGN APPROACH OF GOLAY CODE
2.1 SPECIFICATIONS:

Golay Encoder:

The perfect binary Golay code can be represented as (23, 12, 7). The input to
the Golay encoder is a 12-bit message, which is encoded into a 23-bit codeword, so
the encoder output data length is 23 bits and the length of the redundancy is 11 bits.
The key word used inside the Golay encoder is 12 bits long. The error vector
introduced in the channel is 23 bits long, with a weight of 1, 2, 3, 4, or 5.

Golay Decoder:

The Golay decoder takes a 23-bit input and produces a 12-bit output. The
received code word is 23 bits long, the syndrome is 11 bits long, and the error pattern
is 23 bits long.

2.2 BLOCK DIAGRAM OF GOLAY CODING

A binary Golay code is a type of error-correcting code used in digital
communications. There are two closely related binary Golay codes. The extended
binary Golay code (sometimes just called the "Golay code" in finite group theory)
encodes 12 bits of data in a 24-bit word in such a way that any 3-bit error can be
corrected or any 7-bit error can be detected. The other, the perfect binary Golay
code, has codewords of length 23 and is obtained from the extended binary Golay
code by deleting one coordinate position (conversely, the extended binary Golay code
is obtained from the perfect binary Golay code by adding a parity bit). In standard
code notation the codes have parameters [24, 12, 8] and [23, 12, 7], corresponding to
the length of the codewords, the dimension of the code, and the minimum Hamming
distance between two codewords, respectively.

Figure 2.1: Block diagram of the Golay coding implementation


In the proposed approach, a message or information word is given to the input
of the Golay encoder. The Golay encoder maps these 12 bits of data into 23 bits of data.
The encoder output is obtained by padding the information bits with the parity
vector bits. The parity vector bits are generated by computing i mod g, where i
is the information word and g is the generator, also called the key word. The encoded
data is transmitted and XORed with an error vector whose weight is up to 5. The
received code word is then given to the input of the decoder block.

In the decoder block the syndrome bits are generated by performing the r mod
g operation, where r is the received code word. Using the syndrome vector bits,
all possible error patterns are obtained. The received code word is then
corrected by selecting the error pattern corresponding to the syndrome
from a lookup table that contains all possible error patterns for each
syndrome vector.

The figure below shows an example of an error-correcting code in the form of a
binary repetition code. Here the encoder repeats the message symbol five times:
the other r = 4 digits are repetitions of the message digit. If two errors
have occurred, the decoder will decode the received vector 01001 as the "nearest"
codeword, 00000, which is still correct.

Figure 2.2: Simple block diagram of an error-correcting code

The decoder uses the following rule to decide whether the received message is zero
or one. The decoder counts the number of zeros and ones. If the number of ones is
greater than the number of zeros, the decoder decides that the received message
is one. If the number of zeros is greater than the number of ones, the received
message is taken to be zero. If the counts of zeros and ones are equal, the decoder
reports a decoding failure.
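The majority rule above can be written out directly (a minimal sketch; the function name is ours):

```python
def decode_repetition(bits):
    """Majority-vote decoder for a binary repetition code.

    Returns 0 or 1, or None on a tie (decoding failure)."""
    ones = sum(bits)
    zeros = len(bits) - ones
    if ones > zeros:
        return 1
    if zeros > ones:
        return 0
    return None  # equal counts of zeros and ones: decoding failure

# The example above: 01001 with two errors still decodes to 0.
print(decode_repetition([0, 1, 0, 0, 1]))  # -> 0
```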

Encoder:
The encoder accepts the information to be transmitted as a sequence of
k binary symbols from the information source and appends a set of r parity-check
digits. The parity-check digits are determined by the encoding rules. The codeword
from the encoder is transmitted through the channel (in some applications the channel
may be a storage device such as a magnetic tape). Let Rx be the received codeword, ex
the channel noise, and Tx the transmitted codeword. If there is an error in the received
word, then to correct it the received word Rx is added to the channel noise
ex using modulo-2 addition, which can be achieved using an XOR gate. In that case
the transmitted codeword Tx can be found from the relationship:
Tx = Rx XOR ex
If the channel is noiseless (ex = 0), then the received word Rx is equal to the
transmitted codeword Tx:
Tx = Rx
As an example, assume that the codeword Tx = (011) and ex = (100), which
means that the error occurred on the first bit; the received vector then becomes:
Rx = Tx XOR ex = (111)
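The modulo-2 relationships above can be checked with XOR on the example values:

```python
Tx = 0b011  # transmitted codeword (011)
ex = 0b100  # error on the first (most significant) bit

Rx = Tx ^ ex         # channel adds the error: Rx = Tx XOR ex
recovered = Rx ^ ex  # decoder removes it:     Tx = Rx XOR ex

print(format(Rx, '03b'), format(recovered, '03b'))  # -> 111 011
```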

Decoder:
The decoder determines whether the information and check digits satisfy the
encoding rules, and uses any observed discrepancy to detect and possibly correct
errors that have occurred in transmission. For detection only, the decoder must
perform the following:

1) The decoder applies the decoding rules to the received word to determine whether
the parity-check bits satisfy the parity-check relationships. If the parity-check
relationships are not satisfied, an error has occurred. If only error
detection is to be performed, the decoding function is completed with an
announcement that either the received word is a codeword or an error has
been detected.

Chapter-3

IMPLEMENTATION OF GOLAY CODE ENCODER

3.1 GOLAY CODE ENCODER IMPLEMENTATION

Linear block codes are a class of parity check codes that can be characterized
by the (n, k) notation. Assume that the output of the information source is a stream of
binary digits. In block coding, the information sequence is segmented into message
blocks of fixed length, each consisting of k information digits.

3.1.1: Encoding of Linear Block Codes

The encoder transforms a block of k message digits (a message vector) into a
longer block of n codeword digits (a code vector). There are a total of 2^k distinct
messages. This set of 2^k code words is called a block code. A binary block code is
said to be linear if and only if the modulo-2 sum of any two codewords is also a
codeword. Linear block codes are the most widely used block codes. Non-linear
block codes are seldom used due to their inherent encoding and decoding complexity.
The important characteristic offered by linear block codes is that they allow more
effective encoding and decoding methods.

3.1.2: Generator Matrix

If k is large, a very large amount of memory is required to store the
2^k code vectors. To overcome this problem the required code
vectors can be generated as needed by using a generator matrix. Using the matrix
representation, a codeword V can be expressed as:

V = mG = m0g0 + m1g1 + … + mk-1gk-1

where gi = (gi,0, gi,1, …, gi,n-1) is a codeword, m = (m0, m1, …, mk-1) is the message
vector, and G is the generator matrix of the code. There exists a subspace, the dual
code C⊥, whose codewords are orthogonal to every codeword in C. This subspace is
defined by the matrix H, known as the parity check matrix of the code C. Since every
codeword in C is orthogonal to every codeword in C⊥, if V is a codeword in C, then
VH^T = 0.

Hence, for a received vector Rq = (Rq0, Rq1, …, Rq(n-1)), where each
Rqi for 0 ≤ i ≤ (n − 1) is a q-ary value, RqH^T ≠ 0 means that an error has been
introduced by the channel.

3.1.3: Weight and Distance of Linear Block Codes

Minimum Weight of Linear Block Codes: The Hamming weight of a codeword X
is denoted by W(X), and is defined to be the number of nonzero elements in the
codeword. For a binary codeword this is equivalent to the number of ones. For
example, if X = 100110100, then W(X) = 4.

The Hamming distance between two codewords X and Y is denoted by d(X, Y),
and is defined to be the number of positions in which they differ. For example, if

X = 110100101 and Y = 101101101, then

d(X, Y) = 3
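Both quantities are easy to compute for codewords given as bit strings (a sketch; the function names are ours):

```python
def hamming_weight(x):
    """Number of nonzero symbols in a codeword given as a bit string."""
    return sum(bit == '1' for bit in x)

def hamming_distance(x, y):
    """Number of positions in which two equal-length codewords differ."""
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

print(hamming_weight("100110100"))                 # -> 4
print(hamming_distance("110100101", "101101101"))  # -> 3
```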

3.1.4: Syndrome Computation and Error Detection For Linear Block Codes

To determine whether or not there is some error in the received message the
decoder begins by computing the syndrome digits. These are defined by the equation:

S^T = HR^T

where H is the parity check matrix. The code block length n is equal to the number
of columns of H. If the syndrome is zero, then the received vector is very likely the
transmitted codeword, or there is an undetectable error pattern existing in the received
word. If the syndrome is not zero, then the received vector is not the transmitted
codeword, and an error has been detected. Since the syndrome digits are defined by
the same equations as the parity-check equations, the syndrome digits reveal the
pattern of parity-check failures in the received word.
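As a small illustration of syndrome computation, consider the well-known (7, 4) Hamming code, whose parity check matrix can be chosen so that its columns are the binary representations of 1 through 7 (this specific H is our example, not taken from this document); a single error in position j then produces a syndrome that reads j in binary:

```python
# Columns of H are the binary representations of 1..7 (MSB in the top row).
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(r):
    """Compute s = H r^T over GF(2) for a received 7-bit word r."""
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

r = [0] * 7  # start from the all-zero codeword...
r[2] = 1     # ...and flip bit 3 (1-indexed)

s = syndrome(r)
position = s[0] * 4 + s[1] * 2 + s[2]  # read the syndrome as a binary number
print(s, position)  # -> [0, 1, 1] 3
```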

3.1.5: Error Correction for Linear Block Codes

If error correction is to be performed, the decoder has to estimate the
transmitted codeword Tx when it receives a word Rx. To do this, the decoder
determines the distance between Rx and each of the possible transmitted codewords
Tj, and selects the most likely Ti, where:

d(Rx, Ti) ≤ d(Rx, Tj) for i, j = 1, …, M and i ≠ j

where M = 2^k is the size of the code vector set. In order to detect up to ex errors and
correct up to t errors (where t ≤ ex) in a codeword, the minimum distance of the code
must be:

dmin = ex + t + 1 -------------- (3.1)

If error detection only is required, then t = 0 and dmin = ex + 1, or ex = dmin − 1. For
maximum correction of errors, t = ex and dmin = 2t + 1, or t = (dmin − 1)/2.

3.2: INTRODUCTION TO GOLAY CODE IMPLEMENTATION

The well-known binary Golay [8] code, also called the binary (23, 12, 7)
quadratic residue (QR) code, was first discovered by Golay in 1949. It is a very useful
perfect linear error-correcting code; in particular, it has been used in the past decades
for a variety of applications in which a parity bit is added to each word to yield the
half-rate binary (24, 12, 8) extended Golay code. One of its most
interesting applications is the provision of error control in the Voyager missions. The
Golay code can correct up to t = ⌊(d − 1)/2⌋ errors, where ⌊x⌋ denotes
the greatest integer less than or equal to x, t is the error-correcting capability, and d is
the minimum Hamming distance of the code. The (23, 12, 7) Golay code is a perfect
linear error-correcting code that can correct all patterns of three or fewer errors in 23
bit positions. There are several efficient decoding algorithms for the (23, 12, 7) Golay
code.

A convenient way of finding a binary (23, 12, 7) Golay code is to construct
first the extended (24, 12, 8) Golay code, which is just the (23, 12, 7) Golay code
augmented with a final parity check in the last position.

The binary form of the Golay code is one of the most important types of
linear binary block codes. It is of particular significance since it is one of only a few
examples of a nontrivial perfect code. A t-error-correcting code can correct a
maximum of t errors. A perfect t-error-correcting code has the property that every
word lies within a distance of t of exactly one code word. Equivalently, the code has
dmin = 2t + 1 and covering radius t, where the covering radius r is the smallest
number such that every word lies within a distance of r of a codeword. If there is an
(n, k) code with an alphabet of q elements, and d = 2t + 1, then the code is perfect
exactly when it meets the Hamming (sphere-packing) bound with equality:

C(n, 0) + C(n, 1)(q − 1) + … + C(n, t)(q − 1)^t = q^(n−k) ---------------------------------- (3.2)
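For the binary (23, 12) Golay code with q = 2 and t = 3, equality in (3.2) can be verified directly:

```python
from math import comb  # binomial coefficient C(n, i)

n, k, t, q = 23, 12, 3, 2

# Left side: number of words within distance t of a codeword.
sphere = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

# Both sides equal 2^11 = 2048, so the code is perfect.
print(sphere, q ** (n - k))  # -> 2048 2048
```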

3.3: ENCODER FOR GOLAY CODE

An (n, k) linear code C is said to be cyclic if for every codeword c = (c0, c1, …, cn-1) ∈ C
there is also a codeword c' = (cn-1, c0, c1, …, cn-2) ∈ C; the codeword c' is a right cyclic
shift of the codeword c, and it follows that all n distinct cyclic shifts of c must also be
codewords in C.

3.3.1: Galois Field

A block code consists of a set of codewords of fixed length n, with each
codeword being an n-tuple over a finite field GF(q) of q symbols. A field is a set of
elements in which we can do addition, subtraction, multiplication, and division.
Addition and multiplication must satisfy the commutative, associative and distributive
laws. The number of elements in the field is called the order of the field. A binary
field is a field of two elements (0, 1) under modulo-2 addition and modulo-2
multiplication, and is denoted by GF(2). Modulo-2 addition can be achieved by using
XOR gates.

3.3.2: Generator Polynomial & Generator Matrix

For the binary (23, 12, 7) Golay code of length 23 (whose roots lie in the
extension field GF(2^11)), the quadratic residue (QR) set is the collection of all
non-zero quadratic residues modulo 23, given by

Q23 = {i | i ≡ j^2 mod 23 for 1 ≤ j ≤ 22}
    = {1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18}

The generator polynomial of the (23, 12, 7) Golay code is given by

g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1

The general notation of the Golay code is (n, k, dmin). The generator
polynomial g(x) determines a k×n generator matrix G whose rows are the shifted
copies of the coefficient vector of g(x). The degree of g(x) is r = 11, and G is a
12×23 matrix.

By applying row operations to the generator matrix G, the systematic generator
matrix Gs for this code is obtained as follows:

Gs = [P | Ik]12×23

where P is the parity-check bits matrix and Ik is the identity matrix with k = 12.
The k×n generator matrix G is a standard generator matrix if its last k columns
form a k×k identity matrix. A choice of one set is essentially a choice of the
corresponding systematic generator for encoding purposes.

3.3.3: Encoder for Code Word Generation

The vector form of a code word C can be expressed in polynomial form as
c(x) = c0 + c1x + … + cn-1x^(n-1). Among all the code polynomials in C, there is a
unique monic generator polynomial g(x) = g0 + g1x + … + gr-1x^(r-1) + x^r of minimal
degree r < n. Every code polynomial c(x) in C is a multiple of g(x), and can be
expressed uniquely as c(x) = m(x)g(x), where m(x) = m0 + m1x + … + mk-1x^(k-1) is a
message polynomial whose coefficients come from the message vector
(m0, m1, …, mk-1). The generator polynomial g(x) of C is a factor of (x^n − 1) over
GF(q). The codeword in systematic form can be obtained in matrix form by:

C = mGs

The flow chart for the encoding operation is shown in the figure below. The
encoder operation basically consists of inputting a 12-bit message vector M which is
to be encoded. Eleven zeros are padded at the MSB of the message vector M, and
each 12-bit window (taken from the resulting 23 bits) is XORed with the 12-bit key
word data, i.e. the generator vector, whenever the rightmost bit of the window is '1'.

Figure 3.1: Flow chart representation of the Golay encoder

The above process is continued until the 12th bit of the message vector M
is reached. After completion of this process, the resulting higher 11 bits (the check
bits) are placed in the higher 11 bit positions of the code word, and the original 12-bit
message vector is placed in the lower 12 bit positions of the code word. In this way the
code word is generated for the given message vector.

As an example, let the message or information bits be the 12 bits
"110010101001"; the corresponding 11 check bits, "10101110010", are
generated by performing the operation explained above. Here the generator
polynomial, called the key, is equal to "101011100011". In this example
the resulting code word is "10101110010110010101001".

Generation of the code word can be represented as shown below:

C1×23 = M1×12 · G12×23

This clearly shows that the message bits always occupy the lowest 12
coordinates of the codeword.
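The windowed XOR procedure above is long division modulo 2 by the key word; a minimal sketch (the function names are ours) that reproduces the worked example:

```python
KEY = 0b101011100011  # coefficients of g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1

def poly_mod(v, g=KEY, deg=11):
    """Remainder of the polynomial v(x) divided by g(x) over GF(2)."""
    while v.bit_length() > deg:               # while deg(v) >= deg(g)
        v ^= g << (v.bit_length() - 1 - deg)  # cancel the leading term
    return v

def golay_encode(m):
    """Systematic (23, 12, 7) encoding: 12 message bits plus 11 check bits."""
    parity = poly_mod(m << 11)  # remainder of m(x) * x^11 modulo g(x)
    return (m << 11) | parity   # 23-bit codeword, divisible by g(x)

m = 0b110010101001                        # message from the example above
print(format(poly_mod(m << 11), '011b'))  # check bits -> 10101110010
print(poly_mod(golay_encode(m)))          # valid codeword -> remainder 0
```

Under this bit ordering the message occupies the high-degree coefficients and the check bits the low-degree ones; the example above prints the same check bits in front of the message string, which is just a different labelling of the coordinates.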

Chapter-4

IMPLEMENTATION OF GOLAY CODE DECODER

In coding theory, decoding is the process of translating received messages
into codewords of a given code. There are many common methods of mapping
messages to codewords. They are often used to recover messages sent over a noisy
channel, such as a binary symmetric channel.

4.1: DECODER FOR GOLAY CODE

After encoding the message into a code word, the code word is
transmitted through the noisy channel. After the code word is received, it must be
decoded back into its original form, i.e. the message or information. For this, the
decoding algorithm is implemented.

To illustrate the decoding algorithm, define

E(x) = e22x^22 + e21x^21 + … + e1x + e0

to be the error polynomial. Written as a vector, the error vector is E = (e0, e1, …, e22).
Then the received codeword has the form

R(x) = C(x) + E(x). ----------------------------- (4.1)

Suppose a code word C of length n is transmitted over a field F, and assume that
the channel adds an error vector. Suppose that e errors occur in the received
codeword R(x), and assume that 2t ≤ d − 1. The decoder begins by dividing the
received codeword R(x) by the generator polynomial g(x), i.e.

R(x) = m(x)g(x) + E(x). ------------------------ (4.2)

The decoding process can be explained briefly using the flow chart given
below. The decoding process contains several steps:

 First, find whether the received word is error free or not. This can be
done before the main decoding computation, by using a two-dimensional
parity check bit.

 Based on the two-dimensional parity check bit, if the received word is
error free there is no need to compute the internal steps of the
decoding process; the process ends directly by decoding the 23-bit
received code word into the original information bits, which are
12 bits long.

 If the code word is affected by errors, the internal operations of the
decoder must be performed to correct and decode the received data.

 First, calculate the syndrome value for the received code word,
which is the error-affected code word.

 Based on the syndrome value, the suitable error pattern is determined to
correct the received word.

 After getting the error pattern, the received word is corrected to
obtain the original code word.

 Finally, the corrected code word is decoded into the original
information or message which was encoded at the source.

Figure 4.1: Flow chart for decoding of the Golay code

The steps involved in the decoding process, stated above, are explained below.

4.1.1: Hard Versus Soft-Decision Decoding

If the demodulator output is quantized to more than two levels (Q > 2), or the
output of the demodulator is left unquantized (analogue), the demodulator feeds the
decoder with more information than is provided in the hard-decision case. The
demodulator is then said to make soft decisions, and the decoding is called soft-decision
decoding. The decoder in this case must accept multilevel (or analogue) inputs.
Although this makes the decoder more difficult to implement, soft-decision decoding
offers significant performance improvement over hard-decision decoding.

When the demodulator sends a hard binary decision to the decoder, it sends a
single bit. When it sends a soft binary decision quantized to eight levels, it sends the
decoder a 3-bit word describing a time interval along the signal. Sending the decoder
a 3-bit word in place of a 1-bit symbol provides the decoder with more information,
which can help to recover errors introduced by the communication channel. This
results in a system with better error performance than a hard-decision technique, by
adding confidence to the generated output. But the implementation of soft decoding is
more complex than the hard-decision decoding technique. In the proposed Golay code
implementation, the hard-decision decoding technique is used to decode the data.

Hard-decision decoding techniques can be classified as follows:

 Syndrome decoding
 Meggitt decoding
 Systematic search decoding
 Error trapping decoding

Syndrome Decoding

Syndrome decoding is a hard-decision minimum distance algorithm for the Golay
code, which belongs to the class of linear block codes. The main idea of the algorithm
is to calculate the syndrome of the received word and determine the error pattern,
either by means of a list of the syndromes and the corresponding error patterns, or by
using the standard array to find the correctable error pattern. The decoder then
corrects the introduced error by adding the error pattern to the received word.
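Such a lookup-table decoder for the (23, 12, 7) Golay code can be sketched as follows (the helper names are ours; the generator polynomial and the example message are the ones used elsewhere in this document). Because the code is perfect, the 1 + 23 + 253 + 1771 = 2048 error patterns of weight at most 3 fill all 2^11 syndromes exactly:

```python
from itertools import combinations

KEY = 0b101011100011  # g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1

def poly_mod(v, g=KEY, deg=11):
    """Remainder of v(x) divided by g(x) over GF(2)."""
    while v.bit_length() > deg:
        v ^= g << (v.bit_length() - 1 - deg)
    return v

def encode(m):
    """Systematic (23, 12, 7) encoding: message in the high 12 bits."""
    return (m << 11) | poly_mod(m << 11)

# Syndrome dictionary: one entry per error pattern of weight <= 3.
table = {}
for w in range(4):
    for positions in combinations(range(23), w):
        e = sum(1 << p for p in positions)
        table[poly_mod(e)] = e

def decode(r):
    """Correct up to 3 bit errors, then strip the 11 check bits."""
    e = table[poly_mod(r)]  # error pattern matching the syndrome
    return (r ^ e) >> 11    # corrected codeword -> message bits

m = 0b110010101001
r = encode(m) ^ (1 << 2) ^ (1 << 9) ^ (1 << 20)  # inject three bit errors
print(len(table), decode(r) == m)  # -> 2048 True
```

That `len(table)` is exactly 2^11 = 2048 confirms both that all weight-three-or-less patterns have distinct syndromes and that every possible syndrome is covered, which is the defining property of a perfect 3-error-correcting code.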

4.2: SYNDROME GENERATION FOR GOLAY CODE

The syndrome decoding method is used to decode the received word.
According to the syndrome decoding technique, the syndrome is first
determined from the received word.

If the syndrome of a received word is zero, the received word is taken to be
equal to the transmitted codeword. Equally, distinct cosets have different syndromes,
since the difference of vectors from distinct cosets is not a codeword and so has a
nonzero syndrome.

In decoding, when r is received, first calculate the syndrome s = rH^T. Next,
look up s in the syndrome dictionary as s = si. Finally, decode r to c = r − ei. The
syndromes can be obtained by evaluating r(x) at the roots of g(x). Before calculating
the syndrome value, the parity check matrix has to be defined.

The parity check polynomial h(x) = h0 + h1x + … + hkx^k, with degree
k = (n − r), is a factor of x^n − 1 such that x^n − 1 = g(x)h(x). Since c(x) is a code
polynomial if and only if it is a multiple of g(x), it follows that c(x) is a code
polynomial if and only if c(x)h(x) ≡ 0 modulo (x^n − 1). This determines the
(n − k)×n parity check matrix H.

The parity check polynomial of the (23, 12, 7) Golay code is

h(x) = (x^23 − 1)/g(x) = x^12 + x^10 + x^7 + x^4 + x^3 + x^2 + x + 1 ---- (4.3)

The degree of h(x) is k = 12, and H is an 11×23 parity check matrix.

By applying row operations to the parity check matrix H, the systematic parity
check matrix Hs for this code is obtained as follows:

Hs = [I | P^T]11×23

where P is the parity-check bits matrix, P^T is the transpose of the parity-check
bits matrix, and I is the 11×11 identity matrix. Any matrix H for which
C = {r | rH^T = 0} is called a control matrix for C.

s = rH^T ------------------------- (4.4)

where H^T is the transpose matrix of H. If no error occurs during the data
transmission, the received code word r is equal to the original transmitted codeword c.

Figure 4.2: Syndrome generator block diagram (23-bit received word and 12-bit key word in, 11-bit syndrome out)

Here the 23 bits of received data are given to the input of the syndrome generator.
A key word, termed the Golay generator polynomial vector, is used inside the
syndrome generator block to calculate the syndrome value: each 12-bit window
(taken from the 23 bits) is XORed with the 12-bit key word data, i.e. the generator
vector, whenever the rightmost bit of the window is '1'.

Figure 4.3: Flow chart representation of the steps involved in syndrome generation

The above process continues until the 12th bit of the received code word is reached. The resulting 11 bits are taken as the syndrome value; based on this syndrome, the appropriate error pattern is determined to correct the received data.
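The window/XOR procedure above is polynomial division of the received word by the generator vector. A minimal Python sketch follows; it is an algorithmic illustration, not the VHDL design, it processes the word MSB-first (the flow chart tests the rightmost bit of each window, i.e. works in the mirrored order), and it assumes the 12-bit generator vector "101011100011".

```python
# Syndrome generation as GF(2) polynomial division by the generator vector.

GEN = "101011100011"  # assumed 12-bit generator vector, MSB first

def syndrome(received: str) -> str:
    """Return the 11-bit syndrome of a 23-bit received word (MSB first)."""
    bits = list(received)
    for i in range(len(bits) - len(GEN) + 1):   # slide the 12-bit window
        if bits[i] == '1':                      # XOR only when the window's leading bit is 1
            for j, gbit in enumerate(GEN):
                bits[i + j] = str(int(bits[i + j]) ^ int(gbit))
    return ''.join(bits[-11:])                  # the remainder is the syndrome

# An error-free codeword (the encoder output from Section 5.1) yields the
# all-zero syndrome:
print(syndrome("10101110010110010101001"))  # -> 00000000000
```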

It may sometimes be more convenient to define syndromes and do syndrome decoding relative to a control matrix H rather than a check matrix. Syndrome decoding does not suffer from many of the failings of standard array decoding: the syndrome dictionary is much smaller than the standard array for storage purposes, and it can be ordered lexicographically so that searches can be done linearly. Still, syndrome decoding in this dictionary form is too general to be of much practical use.

4.3: ERROR PATTERN GENERATION

After obtaining the syndrome value, the error pattern has to be determined. Before obtaining the error pattern, the systematic parity check matrix has to be defined. This 11×23 systematic matrix, shown below, is derived from the parity check polynomial (4.3) of the (23, 12, 7) Golay code.

By comparing the syndrome s with the columns of H, one can find the error positions in the received codeword and correct those bits, up to the error correcting capability of the code. For a single-bit error, the error pattern is found directly by searching for the syndrome value among the columns of the matrix. For more than one bit in error, the error pattern is calculated as follows.

The appropriate error pattern is found by XORing columns of the parity check matrix. For example, to find a two-bit error pattern, search for two columns whose XOR equals the calculated syndrome. Suppose columns i and j are XORed and the result equals the syndrome; then the error pattern is the 23-bit word with 1's in positions i and j and 0's elsewhere. The same procedure is followed for higher-weight errors, with the number of columns XORed increased accordingly. The calculated syndrome is therefore associated with the coset whose leader has 1's in the respective positions and 0's elsewhere, and by assuming errors in these positions the error pattern is estimated.
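The column-matching procedure can be sketched as a search over column combinations. This is an illustrative Python model rather than the implemented lookup table: the columns of H are derived here from the assumed generator polynomial g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1 (each column being the syndrome of a single-bit error), and the search is limited to the guaranteed correction capability t = 3.

```python
# Recover an error pattern by XORing parity-check columns to match the syndrome.
from itertools import combinations

GEN = 0b101011100011  # x^11 + x^9 + x^7 + x^6 + x^5 + x + 1 (assumed g(x))

def poly_mod(value: int) -> int:
    """Remainder of a GF(2) polynomial (bit-encoded int) divided by GEN."""
    for deg in range(value.bit_length() - 1, 10, -1):
        if value >> deg & 1:
            value ^= GEN << (deg - 11)   # cancel the leading term
    return value

# Column i of H = syndrome of a single error in bit position i (MSB first).
COLUMNS = [poly_mod(1 << (22 - i)) for i in range(23)]

def error_pattern(syndrome: int, t: int = 3) -> int:
    """Search all patterns of weight <= t whose columns XOR to the syndrome."""
    for weight in range(t + 1):
        for positions in combinations(range(23), weight):
            acc = 0
            for p in positions:
                acc ^= COLUMNS[p]
            if acc == syndrome:
                e = 0
                for p in positions:
                    e |= 1 << (22 - p)
                return e
    raise ValueError("uncorrectable: more than t errors")
```

Because the (23, 12, 7) code is perfect, every syndrome matches exactly one pattern of weight at most three, so the ascending-weight search always recovers the coset leader.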

Figure 4.4: Flow chart for error pattern generation of the Golay code

An example worked out by hand illustrates this for the syndrome obtained above. The 11 syndrome bits "00011001011" are generated by the procedure explained in the previous section. As explained above, the error pattern is determined by XORing the 2nd, 7th, 9th and 15th columns (counted right to left) of the systematic parity check matrix; the error pattern obtained for the syndrome "00011001011" is "00000000100000101000010".

4.4: ERROR CORRECTION

Here the received word, which is affected by errors, is corrected using the corresponding error pattern. The overall process is shown in the flow chart below. The received word and the corresponding error pattern are given to the error correction block as inputs; an XOR operation is performed between the two, and the result is the corrected code word.

Figure 4.5: Flow chart for correction of the received word of the Golay code


c = r XOR e

where c is the corrected code word, r is the received word, and e is the error pattern corresponding to the received word.
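As an illustration of this step (Python standing in for the VHDL module), correcting the received word from Section 5.1 with its error pattern recovers the transmitted codeword:

```python
# The correction step from the flow chart: XOR the received word with the
# error pattern, bit by bit.

def correct(received: str, error: str) -> str:
    return ''.join(str(int(r) ^ int(e)) for r, e in zip(received, error))

r = "10101110110110111101011"   # received word (Section 5.1)
e = "00000000100000101000010"   # its error pattern
print(correct(r, e))            # -> 10101110010110010101001
```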

Chapter-5

SIMULATION RESULT AND SYNTHESIS REPORT

5.1 SIMULATION RESULT:

Golay Encoder:

The VHDL code for the Golay encoder is written and simulated. The input data given to the encoder is "110010101001", and the encoder generates the 23-bit output "10101110010110010101001", indicated by an arrow in the figure below. After the encoding process is complete, the code word is transmitted, and the transmitted code word is XORed with the error vector in the channel.

Syndrome Generator:

The VHDL code for the syndrome generator is written and simulated. For a four-bit error, the input data given to the syndrome generator is "10101110110110111101011", and the 11-bit syndrome "00011001011" is generated.

Error Pattern Generation:

The VHDL code for the error pattern generator is written and simulated. For a four-bit error, the input given to the error pattern generator is the syndrome "00011001011", and the output "00000000100000101000010" is generated.

Error Correction:

The VHDL code for the error correction module is written and simulated. For a four-bit error, the inputs given to the error correction module are the error pattern "00000000100000101000010" and the received data "10101110110110111101011"; the output "10101110010110010101001" is generated.

Decoded Data:

The VHDL code for the error correction module is written and simulated using the Xilinx ISE simulator. For a four-bit error, the inputs given to the error correction module are the error pattern "00000000000100101000010" and the received data "10101110010010111101011"; the output of the error correction module is "10101110010110010101001", which is 23 bits long. The corresponding decoded data is "110010101001", which is 12 bits long.
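Since the code is systematic with, as the reported codeword suggests, the 11 parity bits leading, the final decoding step reduces to dropping the parity field (a sketch, assuming that bit ordering):

```python
# Decoding a corrected systematic codeword: drop the 11 leading parity bits.
corrected = "10101110010110010101001"   # output of the error correction module
parity, data = corrected[:11], corrected[11:]
print(data)  # -> 110010101001  (the reported 12-bit decoded data)
```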

Figure 5.1: Simulation result of Golay coding (top module)

5.2 SYNTHESIS REPORT


The modules are synthesized by using Xilinx ISE 13.2.

DEVICE UTILIZATION SUMMARY:


Selected Device: 3s500e-4fg320
Number of Slices: 877 out of 4656 18%
Number of Slice Flip Flops: 129 out of 9312 1%
Number of 4 input LUTs: 1556 out of 9312 16%
Number of IOs: 132
Number of bonded IOBs: 132 out of 232 56%
IOB Flip Flops: 46

TIMING SUMMARY:
Speed Grade: -4
Minimum period: 5.339ns (Maximum Frequency: 187.301MHz)
Minimum input arrival time before clock: 21.864ns
Maximum output required time after clock: 25.467ns
Maximum combinational path delay: 25.275ns

Figure 5.2: Top module schematic design

5.3 RESULT ANALYSIS:

Here the input data is given to the Golay encoder; the data is encoded, transmitted, received, corrected (up to three errors), and decoded into its original form. The simulation results show the encoded data, received data, syndrome value, corrected code word and decoded data. The complete top module is executed on the Xilinx platform and the simulation results are obtained.

Chapter-6

CONCLUSION AND FUTURE SCOPE

CONCLUSION:

The encoder module, syndrome generator, error pattern generator and error correction modules for the Golay code have been designed, realized and simulated. Data can be encoded and transmitted; after channel noise introduces errors, the code word is received. A syndrome is generated from the received word, a suitable error pattern is calculated for that syndrome value, and based on the error pattern the received code is corrected and the original data retrieved. When no error is introduced in the channel, a two-dimensional check matrix is additionally used to reduce the steps of the decoding process. The simulation results are verified.

FUTURE SCOPE:

The Golay code algorithm implemented here for triple error correction can be applied to digital image transmission. There is also scope for implementing a soft-decision decoding algorithm in place of the hard-decision decoding algorithm for analog data transmission.


APPENDIX-I
BASIC VLSI DESIGN METHODOLOGY

1.1 INTRODUCTION
Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity into the hundreds of millions of transistors.
Current technology has moved far past this mark, and today's microprocessors have many millions of gates and hundreds of millions of individual transistors. At one time there was an effort to name and calibrate various levels of large-scale integration above VLSI; terms like ultra-large-scale integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot, and terms suggesting greater than VLSI levels of integration are no longer in widespread use. Even VLSI is now somewhat quaint, given the common assumption that all microprocessors are VLSI or better.
1.1.1 Advantages of ICs Over Discrete Components
While we concentrate on integrated circuits, the properties of integrated circuits (what we can and cannot efficiently put in an integrated circuit) largely determine the architecture of the entire system. Integrated circuits improve system characteristics in several critical ways. ICs have three key advantages over digital circuits built from discrete components.
Size: Integrated circuits are much smaller; both transistors and wires are shrunk to micrometer sizes, compared to the millimeter or centimeter scales of discrete components. Small size leads to advantages in speed and power consumption, since smaller components have smaller parasitic resistances.
Speed: Signals can be switched between logic 0 and logic 1 much more quickly within a chip than they can between chips. Communication within a chip can occur hundreds of times faster than communication between chips on a printed circuit board. The high speed of circuits on-chip is due to their small size; smaller components and wires have smaller parasitic capacitances to slow down the signal.
Power consumption: Logic operations within a chip also take much less power. Once again, lower power consumption is largely due to the small size of circuits on the chip; smaller parasitic capacitances and resistances require less power to drive them.

1.2 INTRODUCTION TO FPGA DESIGN TOOL FLOW


1.2.1 FPGA Design Considerations
FPGAs demonstrate good performance and logic capacity by exploiting parallelism. At present a single FPGA platform can serve multiple functions, including control, filtering and complete systems. The FPGA design flow is a three-step process consisting of design entry, implementation, and verification stages, as shown in Figure 1. The full design flow is an iterative process of entering, implementing, and verifying the design until it is correct and complete. The key advantage of VHDL when used for systems design is that it allows the behavior of the required system to be described (modeled) and verified (simulated) before synthesis tools translate the design into real hardware (gates and wires). HDL describes hardware behavior, and there are two main differences between traditional programming languages and HDL.

Figure 1: Xilinx FPGA design flow

 HDL describes a parallel process, whereas traditional languages describe a sequential process.
 HDL runs forever, whereas a traditional program runs only when directed.
1.2.2 VHDL
VHDL (VHSIC hardware description language; VHSIC: very-high-
speed integrated circuit) is a hardware description language used in electronic design
automation to describe digital and mixed-signal systems such as field-programmable
gate arrays and integrated circuits. The structural and dataflow descriptions show a
concurrent behavior. That is, all statements are executed concurrently, and the order
of the statements is not relevant. On the other hand, behavioral descriptions are
executed sequentially in processes, procedures and functions in VHDL. The
behavioral descriptions resemble high-level programming languages.
VHDL allows a mixture of various levels of design entry abstraction. Precision RTL Synthesis accepts all levels of abstraction and minimizes the amount of logic needed, resulting in a final netlist description in the technology of your choice. The top-down design flow is shown in Figure 2.

Figure 2: Top-down design flow with Precision RTL Synthesis

1.2.3 VHDL and Synthesis
VHDL is fully simulatable, but not fully synthesizable. There are several
VHDL constructs that do not have valid representation in a digital circuit. Other
constructs do, in theory, have a representation in a digital circuit, but cannot be
reproduced with guaranteed accuracy. Delay time modeling in VHDL is an example.
State-of-the-art synthesis algorithms can optimize Register Transfer Level (RTL)
circuit descriptions and target a specific technology. Scheduling and allocation
algorithms, which perform circuit optimization at a very high and abstract level, are
not yet robust enough for general circuit applications. Therefore, the result of
synthesizing a VHDL description depends on the style of VHDL that is used.

1.2.4 Evolution of Programmable Logic Devices


The first device developed specifically for implementing logic circuits was the Field-Programmable Logic Array (FPLA), or simply PLA for short. A PLA
consists of two levels of logic gates: a programmable “wired” AND-plane followed
by a programmable “wired” OR-plane. A PLA is structured so that any of its inputs
(or their complements) can be AND’ed together in the AND-plane; each AND-plane
output can thus correspond to any product term of the inputs. Similarly, each OR-
plane output can be configured to produce the logical sum of any of the AND-plane
outputs. With this structure, PLAs are well-suited for implementing logic functions in
sum-of-products form.
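As a toy illustration of the two-plane structure (a hypothetical 3-input PLA; the function f = a·b + a'·c is chosen only as an example), each AND-plane row forms one product term and the OR-plane sums the selected terms:

```python
# Toy model of a PLA computing f(a, b, c) = a*b + a'*c in sum-of-products form.

def pla(a: int, b: int, c: int) -> int:
    and_plane = [a & b, (1 - a) & c]          # programmable product terms
    or_plane = and_plane[0] | and_plane[1]    # OR-plane sums the selected terms
    return or_plane

print(pla(1, 1, 0))  # -> 1  (term a*b fires)
```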

Field-Programmable Gate Arrays (FPGAs), like MPGAs, comprise an array of uncommitted circuit elements, called logic blocks, together with interconnect resources, but FPGA configuration is performed through programming by the end user. An illustration of a typical FPGA architecture appears in Figure 3.

Figure 3: Structure of an FPGA

1.3 FPGA DESIGN AND PROGRAMMING TOOL FLOW
The standard design flow comprises the following steps:

Design Entry and Synthesis: In this step of the design flow, the design is created using a hardware description language (HDL) for text-based entry. The Xilinx Synthesis Technology (XST) GUI can be used to synthesize the HDL file into an NGC file.

Design Implementation: By implementing to a specific Xilinx architecture, the logical design file format (such as EDIF) created in the design entry and synthesis stage is converted into a physical file format. The physical information is contained in the native circuit description (NCD) file for FPGAs. A bitstream file is then created from these files, and optionally a PROM or EPROM is programmed for subsequent programming of the Xilinx device.

Design Verification: Using a gate-level simulator or cable, it is ensured that the design meets timing requirements and functions properly. To define the behavior of the FPGA, the user provides a hardware description language (HDL) or a schematic design. The HDL form is better suited to working with large structures, because it is possible to specify them numerically rather than having to draw every piece by hand; however, schematic entry can allow for easier visualization of a design. Then, using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fitted to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company's proprietary place-and-route software. The user validates the map, place and route results via timing analysis, simulation, and other verification methodologies.

Once the design and validation process is complete, the binary file generated (also using the FPGA company's proprietary software) is used to (re)configure the FPGA. Going from schematic/HDL source files to the actual configuration, the source files are fed to a software suite from the FPGA vendor that, through several steps, produces a configuration file. This file is then transferred to the FPGA via a serial interface (JTAG). Initially, the RTL description in VHDL is simulated by creating test benches to exercise the system and observe the results.

Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description, where simulation is repeated to confirm that synthesis proceeded without errors. Finally, the design is laid out in the FPGA, at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.

1.4 APPLICATIONS OF FPGA’s


FPGAs have gained rapid acceptance and growth over the past decade because
they can be applied to a very wide range of applications. A list of typical applications
includes: random logic, integrating multiple SPLDs, device controllers,
communication encoding and filtering, small to medium sized systems with SRAM
blocks, and many more.
Other interesting applications of FPGAs are prototyping of designs later to be implemented in gate arrays, and emulation of entire large hardware systems. The former might be possible using only a single large FPGA (which corresponds to a small gate array in terms of capacity), while the latter would entail many FPGAs connected by some sort of interconnect for emulation of hardware. Another promising area of FPGA application is the use of FPGAs as custom computing machines. This involves using the programmable parts to "execute" software, rather than compiling the software for execution on a regular CPU.

