Chapter-1: 1.1: Coding Theory
INTRODUCTION
In a computer, any kind of data is stored and processed as binary digits. A bit is
a 0 or a 1. Every letter has an ASCII code; for example, the ASCII code of the letter 'A'
is 01000001. Typically, data consists of billions of bits, so it is natural to model the
transmitted data as a string of 0s and 1s. Digital data is transmitted over a channel
(which could be a wire, a network, space, air, etc.), and there is often noise in the
channel. The noise may distort the messages being sent, so the received data
may not be the same as the transmitted data. Coding theory deals with two main tasks:
1. Data compression
2. Error correction
Error detection and correction are necessary for the reliable transmission and
storage of data in a communication system. Information media are not fully reliable in
practice, in the sense that noise (any form of interference) frequently causes data to be
distorted. To deal with this undesirable but inevitable situation, some form of
redundancy is incorporated into the original data; employing redundancy is the main
method used to recover messages that might be distorted during transmission over a
noisy channel. With this redundancy, even if errors are introduced (up to some
tolerance level), the original information can be recovered, or at least the presence of
errors can be detected.
Algebraic coding theory is basically divided into two major types of codes:
linear block codes and convolutional codes.
Linear block codes have the property of linearity, i.e. the sum of any two
codewords is also a codeword, and they are applied to the source bits in blocks, hence
the name linear block codes. Linear block codes are summarized by their symbol
alphabets (e.g., binary) and parameters (n, m, dmin), where
1. n is the length of a codeword in symbols,
2. m is the number of source symbols that will be used for encoding at once,
3. dmin is the minimum Hamming distance of the code.
A block code is a code that uses sequences of n symbols, for some positive
integer n. Each sequence of length n is a code word or code block, and contains k
information digits (or bits). The remaining n − k digits in the code word are called
redundant digits or parity-check bits. They do not carry any additional information,
but they make it possible to correct errors that occur in the transmission of the
code. The encoder for a block code is memoryless, which means that the n digits in
each code word depend only on the current block of k information digits and are
independent of any information contained in previous code words [5].
In coding theory, a cyclic code [10] is a block code in which every circular
shift of a codeword gives another word that belongs to the code. Cyclic codes are error-
correcting codes with algebraic properties that are convenient for efficient error
detection and correction.
Error-correcting codes are used to correct errors when messages are
transmitted through a noisy communication channel. For example, we may wish to
send binary data (a stream of zeros and ones) through a noisy channel as quickly and
as reliably as possible. The channel may be a telephone line, a high frequency radio
link or a satellite communication link. The noise may be human error, lightning,
thermal noise, imperfections in equipment, etc.
One of the key features of BCH codes is that, during code design, there is
precise control over the number of symbol errors correctable by the code. In
particular, it is possible to design binary BCH codes that can correct multiple bit
errors. Another advantage of BCH codes is the ease with which they can be decoded,
namely via an algebraic method. This simplifies the design of the decoder for these
codes, using small, low-power electronic hardware.
1.2 SCOPE OF THE PROJECT
The main scope of this project is to study coding theory, the Golay
coding algorithm (encoding and decoding), and the syndrome decoding algorithm, a
hard-decision decoding technique built around syndrome calculation, and also to
examine the error-correction capability of the code.
Chapter-2
SPECIFICATION AND DESIGN APPROACH OF GOLAY CODE
2.1 SPECIFICATIONS:
Golay Encoder:
The perfect binary Golay code can be represented as (23, 12, 7). For the Golay
encoder the input data length is 12 bits, i.e. the message, which is encoded into 23
bits, so the Golay encoder output data length is 23 bits and the length of the redundancy
is 11 bits. The key word used inside the Golay encoder (the generator vector) has a
length of 12 bits. The error vector introduced in the channel is 23 bits long, and its
weight is 1, 2, 3, 4 or 5.
Golay Decoder:
The Golay decoder has an input data length of 23 bits, and the length of the
decoder output is 12 bits. The received code word length is 23 bits, the length of the
syndrome is 11 bits, and the length of the error pattern is 23 bits.
Error Correction:

In the decoder block the syndrome bits are generated by performing the r mod
g operation, where r is the received code word and g is the generator polynomial.
Using the syndrome bits, all possible error patterns are obtained: the received code
word is corrected by selecting the error pattern corresponding to the computed
syndrome from a lookup table that contains all possible error patterns for the
syndrome vector bits.
Suppose, for example, that a single message bit is protected by a five-fold
repetition code, so that 0 is sent as 00000 and 1 as 11111. If no more than two errors
have occurred, the decoder will decode the received vector 01001 as the nearest
codeword, 00000, which is still correct.
The decoder uses the following rule to decide whether the transmitted message bit
was a zero or a one. The decoder counts the number of zeros and ones in the received
block. If the number of ones is greater than the number of zeros, the decoder decides
that the message bit is one; if the number of zeros is greater than the number of ones,
it decides that the message bit is zero. If the zeros and ones are equal in number, the
result is a decoding failure.
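As an illustration only, the following Python sketch applies this majority-vote rule to one
five-bit repetition-code block (the repetition_decode name is ours, not part of the design):

def repetition_decode(received):
    """Decode one 5-bit repetition block by counting zeros and ones."""
    ones = received.count("1")
    zeros = received.count("0")
    if ones > zeros:
        return "1"
    if zeros > ones:
        return "0"
    return "decoding failure"   # equal counts (cannot occur for an odd block length)

print(repetition_decode("01001"))   # -> "0": two errors are still corrected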
Encoder:
The encoder accepts the information to be transmitted as a sequence of k
binary symbols from the information source and appends a set of r parity-check
digits. The parity-check digits are determined by the encoding rules. The codeword
from the encoder is transmitted through the channel (in some applications the channel
may be a storage device such as a magnetic tape). Let Rx be the received codeword, ex
the channel noise, and Tx the transmitted codeword. If there is an error in the received
word, then to correct it the received word Rx is added to the channel noise vector ex
using modulo-2 addition, which can be achieved with an XOR gate. In that case
the transmitted codeword Tx can be found from the relationship:

Tx = Rx XOR ex

If the channel is noiseless (ex = 0), then the received word Rx is equal to the
transmitted codeword Tx:

Tx = Rx

As an example, assume that the codeword Tx = (011) and ex = (100), which
means that the error occurred in the first bit; then the received vector becomes:

Rx = Tx XOR ex = (111)
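This channel model can be checked with a couple of lines of Python (illustrative only;
the bit vectors are written as small integers):

Tx, ex = 0b011, 0b100
Rx = Tx ^ ex                   # transmission through the noisy channel
print(format(Rx, "03b"))       # 111: the first bit has been flipped
print(format(Rx ^ ex, "03b"))  # 011: XORing the noise back recovers Tx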
Decoder:
The decoder determines whether the information and check digits satisfy the
encoding rules, and uses any observed discrepancy to detect and possibly correct
errors that have occurred in transmission. For detection only, the decoder must
perform the following:

1) The decoder applies the decoding rules to the received word to determine whether the
   parity-check bits satisfy the parity-check relationships. If the parity-check
   relationships are not satisfied, an error has occurred. If only error
   detection is to be performed, the decoding function is then complete, with an
   announcement that either the received word is a codeword or an error has
   been detected.
Chapter-3
Linear block codes are a class of parity-check codes that can be characterized
by the (n, k) notation. Assume that the output of the information source is a stream of
binary digits. In block coding the information sequence is segmented into message
blocks of fixed length, each consisting of k information digits.

If k is large, a very large amount of memory would be required to store all
2^k code vectors. To overcome this problem, the required code vectors can be
generated as needed by using a generator matrix. Using a matrix representation, the
codeword V can be expressed as:

V = mG = m0 g0 + m1 g1 + … + mk-1 gk-1

where gi = (gi,0, gi,1, …, gi,n-1) is a codeword (the i-th row of G) and G is the generator
matrix of the code. There exists a subspace, the dual code Cd, whose codewords are
orthogonal to every codeword in C. This subspace is defined by the matrix H, known as
the parity check matrix of the code C. Since every codeword in Cd is orthogonal to
every codeword in C, if V is a codeword in C, then V·H^T = 0.
3.1.4: Syndrome Computation and Error Detection For Linear Block Codes
To determine whether or not there is an error in the received message, the
decoder begins by computing the syndrome digits. These are defined by the equation:

S^T = H·R^T

where H is the parity check matrix and R is the received word. The code block length n
is equal to the number of columns of H. If the syndrome is zero, then the received
vector is very likely the transmitted codeword, or else an undetectable error pattern
exists in the received word. If the syndrome is not zero, then the received vector is not
the transmitted codeword, and an error has been detected. Since the syndrome digits are
defined by the same equations as the parity-check equations, the syndrome digits reveal
the pattern of parity-check failures in the received word.
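A minimal Python sketch of the computation S^T = H·R^T follows. For brevity it uses a
small systematic (7,4) Hamming code rather than the Golay code, since that parity check
matrix fits on three short rows; the matrix H below is a standard textbook choice, not the
one used elsewhere in this report:

H = [                     # parity-check matrix of a (7,4) Hamming code, H = [P^T | I3]
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(r):
    """Return S = H r^T over GF(2)."""
    return [sum(h * x for h, x in zip(row, r)) % 2 for row in H]

codeword = [1, 0, 1, 1, 0, 1, 0]   # a valid codeword of this small code
print(syndrome(codeword))          # [0, 0, 0]: no error detected
corrupted = codeword[:]
corrupted[2] ^= 1                  # flip one bit
print(syndrome(corrupted))         # [0, 1, 1]: equals column 3 of H, locating the error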
3.1.5: Error Correction for Linear Block Codes
The number of code vectors in the set is M = 2^k. In order to detect up to e errors and
correct up to t errors (where t ≤ e) in a codeword, the minimum distance of the code must
satisfy:

dmin ≥ e + t + 1

If error detection only is required, then t = 0 and dmin = e + 1, or e = dmin − 1. For
maximum correction of errors, t = e and dmin = 2t + 1, or t = (dmin − 1)/2.
The well-known binary Golay [8] code, also called the binary (23, 12, 7)
quadratic residue (QR) code, was first discovered by Golay in 1949. It is a very useful
perfect linear error-correcting code; in particular, it has been used over the past decades
in a variety of applications, often with a parity bit added to each word to yield the
half-rate binary (24, 12, 8) extended Golay code. One of its most
interesting applications is the provision of error control in the Voyager missions. The
Golay code allows the correction of up to t = [(d − 1)/2] errors, where [x] denotes
the greatest integer less than or equal to x, t is the error-correcting capability, and d is
the minimum Hamming distance of the code. The (23, 12, 7) Golay code is a perfect
linear error-correcting code that can correct all patterns of three or fewer errors in 23
bit positions. There are several efficient decoding algorithms for the (23, 12, 7) Golay
code.
The binary form of the Golay code is one of the most important types of
linear binary block codes. It is of particular significance since it is one of only a few
examples of a nontrivial perfect code. A t-error-correcting code can correct a
maximum of t errors. A perfect t-error correcting code has the property that every
word lies within a distance of t to exactly one code word. Equivalently, the code has
dmin = 2t + 1, and covering radius t, where the covering radius r is the smallest
number such that every word lies within a distance of r to a codeword. If there is an
(n, k) code with an alphabet of q elements and d = 2t + 1, then

C(n,0) + C(n,1)(q − 1) + C(n,2)(q − 1)^2 + … + C(n,t)(q − 1)^t ≤ q^(n−k) ---------- (3.2)

with equality if and only if the code is perfect.
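A quick numerical check of (3.2) for the binary (23, 12, 7) Golay code, with q = 2 and
t = 3, shows that the bound holds with equality, which is what makes the code perfect
(Python, for illustration only):

from math import comb

n, k, t, q = 23, 12, 3, 2
sphere = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
print(sphere, q ** (n - k))   # 2048 2048 -> equality, so the code is perfect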
A code C is cyclic if, whenever c = (c0, c1, …, cn-1) ∈ C, the word
c' = (cn-1, c0, c1, …, cn-2) is also in C. The codeword c' is a right cyclic shift of the
codeword c, and it follows that all n distinct cyclic shifts of c must also be codewords in C.
For the binary (23, 12, 7) Golay code of length 23, whose generator polynomial has its
roots in GF(2^11), the quadratic residue (QR) set is the collection of all non-zero
quadratic residues modulo 23, given by

Q = {1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18}
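The set Q can be reproduced with a one-line computation (Python, for illustration only):

qr = sorted({(i * i) % 23 for i in range(1, 23)})
print(qr)   # [1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18]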
The general notation of the Golay code is (n, k, dmin). The generator
polynomial g(x), of degree r = 11, gives rise to a k×n generator matrix G whose rows are
the coefficient vectors of g(x), x·g(x), …, x^(k-1)·g(x). Reduced to systematic form, the
12×23 generator matrix is

Gs = [P | Ik]12×23

where P is the matrix of parity check bits and Ik is the identity matrix with k = 12.
The k×n generator matrix G is a standard generator matrix if its last k columns
form a k×k identity matrix. Choosing such a form is essentially choosing the
corresponding systematic generator for encoding purposes.
The vector form of a codeword C can be expressed in polynomial form as
c(x) = c0 + c1x + … + cn-1x^(n-1). Among all the code polynomials in C there is a unique
monic generator polynomial g(x) = g0 + g1x + … + gr-1x^(r-1) + x^r of minimal degree r < n.
Every code polynomial c(x) in C is a multiple of g(x) and can be expressed uniquely
as c(x) = m(x)g(x), where m(x) = m0 + m1x + … + mk-1x^(k-1) is the message polynomial
whose coefficients come from the message vector (m0, m1, …, mk-1). The generator
polynomial g(x) of C is a factor of (x^n − 1) over GF(q). The codeword in systematic form
can be obtained in matrix form by:

C = m·Gs
The flow chart for the encoding operation is shown in the figure below. The
encoder operation basically consists of inputting a 12-bit message vector M that is to be
encoded. Eleven zeros are padded at the MSB of the message vector M, and each 12-bit
window (taken from this 23-bit word) is XORed with the key word, i.e. the 12-bit
generator vector, whenever the rightmost bit of the window is '1'.

This process is continued until the 12th bit of the message vector M is reached.
After completion of this process, the remaining higher 11 bits are placed in the higher
11 bit positions of the code word, and the original 12-bit message vector is placed in the
lower 12 bit positions. In this way the code word is generated for the given message
vector.

From the above it is clear that the message bits always occupy the lowest 12
coordinates of the codeword.
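The following Python sketch mirrors the shift-and-XOR procedure just described (the
actual implementation in this project is in VHDL). The 12-bit key word used here,
0b101011100011, corresponds to the generator polynomial g(x) = x^11 + x^9 + x^7 + x^6 +
x^5 + x + 1; the report does not print its key word, so this value is an assumption, but
with it the sketch reproduces the worked encoder example of Chapter 5:

GOLAY_GEN = 0b101011100011  # assumed 12-bit "key word" (generator polynomial coefficients)

def golay_encode(msg12):
    """Encode a 12-bit message into a 23-bit systematic Golay codeword."""
    v = msg12                        # 11 zero bits are implicitly padded above bit 11
    for i in range(12):              # scan the 12 message positions, LSB first
        if (v >> i) & 1:             # "rightmost bit of the window is 1"
            v ^= GOLAY_GEN << i      # XOR the 12-bit window with the key word
    parity11 = v >> 12               # the surviving high 11 bits are the parity
    return (parity11 << 12) | msg12  # parity in the high 11 bits, message in the low 12

m = int("110010101001", 2)           # example message from Chapter 5
print(format(golay_encode(m), "023b"))   # 10101110010110010101001, as quoted in the text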
Chapter-4
After encoding the message into a code word, the code word is
transmitted through the noisy channel. After the code word is received it must be
decoded back into its original form, i.e. the message or information. For this, the
decoding algorithm is implemented.

Define E(x) = e0 + e1x + … + e22x^22 to be the error polynomial; written as a vector,
the error vector is E = (e0, e1, …, e22). Then the received codeword has the form

R(x) = C(x) + E(x)

where C(x) is the transmitted code polynomial. Suppose t errors occur in the received
codeword R(x), and assume that 2t ≤ d − 1. The decoder begins by dividing the received
codeword R(x) by the generator polynomial g(x), i.e.

R(x) = q(x)g(x) + s(x)

where the remainder s(x) is the syndrome polynomial.
The decoding process can be explained briefly using the flow chart given below.
The decoding process consists of several steps:

1. First determine whether the received word is error free or not. This check can be
   made before the main decoding computation by using a two-dimensional parity check bit.
2. If, based on the two-dimensional parity check bit, the received word is error free,
   there is no need to compute the internal steps of the decoding process; the 23-bit
   received code word is decoded directly into the original information bits, which are
   12 bits long.
3. If the code word has been affected by errors, the internal operations of the decoder
   are performed in order to correct and decode the received data.
4. First, the syndrome value is calculated for the error-affected received code word.
5. From the syndrome value, the corresponding error pattern is determined.
6. Using the error pattern, the received word is corrected to recover the original code
   word.
7. Finally, the corrected code word is decoded into the original information or message
   that was encoded at the source.
The steps involved in the decoding process are explained in detail below.
If the demodulator output is quantized to more than two levels (Q > 2), or the
output of the demodulator is left unquantized (analogue), the demodulator feeds the
decoder with more information than is provided in the hard-decision case. The
demodulator is then said to make soft decisions, and the decoding is called soft-decision
decoding. The decoder in this case must accept multilevel (or analogue) inputs.
Although this makes the decoder more difficult to implement, soft-decision decoding
offers significant performance improvement over hard-decision decoding.

When the demodulator sends a hard binary decision to the decoder, it sends a
single bit. When it sends a soft binary decision quantized to eight levels, it sends the
decoder a 3-bit word describing a time interval along the signal. Sending the decoder
a 3-bit word in place of a 1-bit symbol provides the decoder with more information,
which helps it to recover from errors introduced by the communication channel. This
results in a system with better error performance than a hard-decision technique, by
adding confidence to the generated output. However, the implementation of soft-decision
decoding is more complex than the hard-decision decoding technique. In the
proposed Golay code implementation the hard-decision decoding technique is
used to decode the data.
Two hard-decision decoding approaches for such codes are:
Syndrome decoding
Meggitt decoding

Syndrome Decoding
In syndrome decoding, the decoder computes the syndrome of the received word and uses
the standard array to find the correctable error pattern. The decoder then corrects the
introduced error by adding the error pattern to the received word.
Since the syndrome of a codeword is zero, a zero syndrome indicates that the received
word is equal to the transmitted codeword. Equally, distinct cosets have different
syndromes, since the difference of vectors from distinct cosets is not a codeword and so
has a nonzero syndrome.
The parity check polynomial h(x) = h0 + h1x + … + hkx^k, with degree
k = n − r, is a factor of x^n − 1, such that x^n − 1 = g(x)h(x). Since c(x) is a code
polynomial if and only if it is a multiple of g(x), it follows that c(x) is a code
polynomial if and only if c(x)h(x) ≡ 0 modulo (x^n − 1). The (n−k)×n parity check matrix
H is built from the coefficients of h(x). For the (23, 12, 7) Golay code,

h(x) = (x^23 − 1)/g(x) = x^12 + x^10 + x^7 + x^4 + x^3 + x^2 + x + 1 ---- (4.3)

The degree of h(x) is k = 12, and the corresponding parity check matrix H has 11×23
entries.
By applying row operations to the parity check matrix H above, the systematic parity
check matrix Hs for this code is obtained as:

Hs = [In-k | P^T]11×23

where In-k is the 11×11 identity matrix.
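Equation (4.3) can be checked by dividing x^23 − 1 by g(x) over GF(2). The sketch below
(Python, for illustration only) assumes the same generator polynomial as the earlier
encoder sketch and obtains exactly the h(x) of (4.3) with zero remainder. Polynomials are
held as integers with bit i equal to the coefficient of x^i.

def gf2_divmod(num, den):
    """Divide two GF(2) polynomials, returning (quotient, remainder)."""
    q = 0
    deg_den = den.bit_length() - 1
    while num and num.bit_length() - 1 >= deg_den:
        shift = (num.bit_length() - 1) - deg_den
        q |= 1 << shift
        num ^= den << shift
    return q, num

g = 0b101011100011                  # assumed g(x) = x^11 + x^9 + x^7 + x^6 + x^5 + x + 1
x23_plus_1 = (1 << 23) | 1          # x^23 - 1 = x^23 + 1 over GF(2)
h, rem = gf2_divmod(x23_plus_1, g)
print(format(h, "b"), rem)          # 1010010011111 0 -> the h(x) of (4.3), zero remainder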
Here the 23 bits of received data are given as the input to the syndrome generator.
A key word, termed the Golay generator polynomial vector, is used inside the
syndrome generator block for calculating the syndrome value: each 12-bit window
(taken from the 23-bit word) is XORed with the key word, i.e. the 12-bit generator
vector, whenever the rightmost bit of the window is '1'.

This process is continued until the 12th bit of the received code word is reached.
The resulting 11 bits of data are taken as the syndrome value; based on this syndrome
value the appropriate error pattern can be determined in order to correct the received
data.
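A Python sketch of this syndrome step is given below (the project itself uses VHDL). It
reuses the assumed 12-bit key word from the encoder sketch; the syndrome is zero exactly
when the received word is a codeword. Bit-ordering conventions may differ from the
report's simulator output, so the printed pattern is illustrative rather than a
reproduction of the waveform values quoted later.

GOLAY_GEN = 0b101011100011  # assumed 12-bit generator word, as in the encoder sketch

def golay_syndrome(received23):
    """Return an 11-bit syndrome of a 23-bit received word."""
    v = received23
    for i in range(12):            # clear bit positions 0..11 with shifted key words
        if (v >> i) & 1:
            v ^= GOLAY_GEN << i
    return v >> 12                 # the remaining high 11 bits are the syndrome

codeword = int("10101110010110010101001", 2)        # codeword from the encoder example
print(golay_syndrome(codeword))                     # 0: no error detected
print(format(golay_syndrome(codeword ^ 0b101), "011b"))  # nonzero for a corrupted word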
4.3: ERROR PATTERN GENERATION
After obtaining the syndrome value, the error pattern has to be determined.
Before obtaining the error pattern, the systematic parity check matrix has to be defined;
it has 11×23 entries and can be obtained from the parity check polynomial of the
(23, 12, 7) Golay code given in equation (4.3).

By comparing the syndrome s with the columns of H, one can find the error
positions in the received codeword and correct those error bits, within the error-
correcting capability of the code. For a one-bit error, the error pattern is found simply
by searching for the syndrome value among the columns of this matrix. For more than one
bit in error, the error pattern is calculated as follows.
By performing XOR operations between columns of the above parity check matrix, the
appropriate error pattern can be calculated. For example, to calculate a two-bit error
pattern, the computed syndrome must equal the XOR of some two columns of the parity check
matrix: if columns i and j, when XORed, equal the given syndrome, then the error pattern
is the 23-bit vector with 1s in positions i and j and 0s elsewhere. The same procedure is
followed for up to five bit errors, except that the number of columns to be XORed changes
accordingly (e.g. five columns). The calculated syndrome is therefore associated with the
coset whose leader has 1s in the corresponding positions and 0s elsewhere; by assuming
errors in these positions, the error pattern is estimated.
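The search described above can be sketched in Python as a brute-force enumeration of
low-weight error patterns whose syndrome matches the computed one; this is equivalent to
XORing combinations of parity-check columns, but avoids writing out the matrix. The
search is limited here to weight 3, the guaranteed correction radius of the (23, 12, 7)
code, whereas the report's lookup table also lists higher-weight patterns; the key word
and the syndrome helper are the assumptions carried over from the earlier sketches.

from itertools import combinations

GOLAY_GEN = 0b101011100011          # assumed 12-bit generator word

def golay_syndrome(received23):
    """Return the 11-bit syndrome of a 23-bit word (as in the earlier sketch)."""
    v = received23
    for i in range(12):
        if (v >> i) & 1:
            v ^= GOLAY_GEN << i
    return v >> 12

def find_error_pattern(syndrome11, max_weight=3):
    """Return a 23-bit error pattern whose syndrome matches, or None."""
    if syndrome11 == 0:
        return 0
    for weight in range(1, max_weight + 1):
        for positions in combinations(range(23), weight):
            pattern = 0
            for p in positions:
                pattern |= 1 << p
            if golay_syndrome(pattern) == syndrome11:
                return pattern
    return None

codeword = int("10101110010110010101001", 2)           # encoder example from Chapter 5
received = codeword ^ ((1 << 2) | (1 << 9) | (1 << 20))  # introduce three bit errors
e = find_error_pattern(golay_syndrome(received))
print(format(received ^ e, "023b"))                    # matches the original codeword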
Figure 4.4: Flow chart for error pattern generation of the Golay code
An example worked out by hand illustrates this for the syndrome value obtained above.
The 11 syndrome bits "00011001011" are generated by performing the operation explained
in the previous section. As explained above, the error pattern is determined by XORing
the 2nd, 7th, 9th and 15th columns (counted from right to left) of the systematic parity
check matrix; the error pattern obtained for the syndrome value "00011001011" is
"00000000100000101000010".
Here the received word that has been affected by errors is corrected using the
corresponding error pattern. The overall process is carried out as shown in the flow
graph below: the received word and the corresponding error pattern are given as inputs
to the error correction block, an XOR operation is performed between the two, and the
resulting value is the corrected code word.
Figure 4.5: Flow chart for correction of the received word of the Golay code
C = r XOR e

where C is the corrected code word, r is the received word, and e is the error
pattern corresponding to the received word.
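In Python the correction step is a single XOR. Using the received word and error pattern
quoted in this report (illustration only), it recovers the original codeword and, from
the lowest 12 bits, the message:

received = int("10101110110110111101011", 2)
error_pat = int("00000000100000101000010", 2)
corrected = received ^ error_pat
print(format(corrected, "023b"))          # 10101110010110010101001, the original codeword
print(format(corrected & 0xFFF, "012b"))  # 110010101001, the decoded 12-bit message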
Chapter-5
Golay Encoder:
Here the VHDL code for the Golay encoder is written and simulated. The input
data given to the encoder is "110010101001", and the output of the encoder is the 23-bit
word "10101110010110010101001". This is indicated with an arrow in the figure below.
After completion of the encoding process the code word is transmitted, and the
transmitted code word is XORed with the error vector in the channel.
Syndrome Generator:
Here the VHDL code for the syndrome generator is written and simulated. For a
four-bit error, the input data given to the syndrome generator is
"10101110110110111101011"; for this input the 11-bit syndrome "00011001011" is
generated.
Error Pattern Generator:
Here the VHDL code for the error pattern generator is written and simulated.
For a four-bit error, the input given to the error pattern generator is the syndrome
"00011001011"; for this input the output "00000000100000101000010" is generated.
Error Correction:
Here the VHDL code for the error correction module is written and simulated.
For a four-bit error, the inputs given to the error correction module are the error
pattern "00000000100000101000010" and the received data "10101110110110111101011"; for
these inputs the output "10101110010110010101001" is generated.
Decoded Data:
Here the VHDL code for the error correction module is written and simulated
using the Xilinx ISE simulator. For a four-bit error, the inputs given to the error
correction module are the error pattern "00000000000100101000010" and the received data
"10101110010010111101011"; for these inputs the output of the error correction module is
"10101110010110010101001", which is 23 bits long. The corresponding decoded data is
"110010101001", which is 12 bits long.
TIMING SUMMARY:
Speed Grade: -4
Minimum period: 5.339ns (Maximum Frequency: 187.301MHz)
Minimum input arrival time before clock: 21.864ns
Maximum output required time after clock: 25.467ns
Maximum combinational path delay: 25.275ns
Here the input data is given to the Golay encoder; the data is encoded,
transmitted, received, corrected (up to three errors) and decoded into its original form.
The simulation results show the encoded data, received data, syndrome value,
corrected code word and decoded data. The complete top module is executed on the
Xilinx platform and the simulation results are obtained.
Chapter-6
CONCLUSION:
The encoder module, syndrome generator, error pattern generator and error
correction modules for the Golay code have been designed, realized and simulated. Data
or a message can be encoded and transmitted; after the channel introduces noise or
errors, the code word is received. A syndrome is generated from the received word, a
suitable error pattern is calculated for that syndrome value, and based on the error
pattern the received code word is corrected and the original data retrieved.
When no error is introduced in the channel, an additional two-dimensional parity check
is used to reduce the number of decoding steps. The simulation results have been
verified.
FUTURE SCOPE:
The Golay code algorithm implemented here for triple error correction can be
applied to digital image transmission. There is also scope for implementing a soft-
decision decoding algorithm instead of the hard-decision decoding algorithm for analog
data transmission.
REFERENCES:
[2] Wen-Ku Su, Pei-Yu Shih, Tsung-Ching Lin, and Trieu-Kien Truong, "Soft-decoding of
the (23, 12, 7) Binary Golay Code," International MultiConference of Engineers and
Computer Scientists (IMECS) 2008, Hong Kong, 19-21 March 2008.
[3] H.C. Chang, H.P. Lee, T.C. Lin, and T.K. Truong, "A Weight Method of Decoding the
(23, 12, 7) Golay Code Using Reduced Table Lookup," International Conference on
Communications, Circuits and Systems (ICCCAS), 2008.
APPENDIX-I
BASIC VLSI DESIGN METHODOLOGY
1.1 INTRODUCTION
Very-large-scale integration (VLSI) is the process of creating integrated
circuits by combining thousands of transistor-based circuits into a single chip. VLSI
began in the 1970s when complex semiconductor and communication technologies
were being developed. The microprocessor is a VLSI device. The term is no longer as
common as it once was, as chips have increased in complexity into the hundreds of
millions of transistors.
Current technology has moved far past this mark and today's microprocessors
have many millions of gates and hundreds of millions of individual transistors. At one
time, there was an effort to name and calibrate various levels of large-scale integration
above VLSI. Terms like Ultra-large-scale Integration (ULSI) were used. But the huge
number of gates and transistors available on common devices has rendered such fine
distinctions moot. Terms suggesting greater than VLSI levels of integration are no
longer in widespread use. Even VLSI is now somewhat quaint, given the common
assumption that all microprocessors are VLSI or better.

1.1.1 Advantages of ICs Over Discrete Components

While we will concentrate on integrated circuits, the properties of integrated circuits
(what we can and cannot efficiently put in an integrated circuit) largely determine the
architecture of the entire system. Integrated circuits improve system characteristics in
several critical ways. ICs have three key advantages over digital circuits built from
discrete components.
Size: Integrated circuits are much smaller; both transistors and wires are
shrunk to micrometer sizes, compared to the millimeter or centimeter scales of
discrete components. Small size leads to advantages in speed and power consumption,
since smaller components have smaller parasitic resistances and capacitances.

Speed: Signals can be switched between logic 0 and logic 1 much more quickly
within a chip than they can between chips. Communication within a chip can occur
hundreds of times faster than communication between chips on a printed circuit
board. The high speed of circuits on-chip is due to their small size: smaller
components and wires have smaller parasitic capacitances to slow down the signal.

Power consumption: Logic operations within a chip also take much less
power. Once again, lower power consumption is largely due to the small size of
circuits on the chip: smaller parasitic capacitances and resistances require less power
to drive them.
An HDL describes hardware as parallel processes.
An HDL model runs continuously, whereas a traditional programming language runs only when directed.
1.2.2 VHDL
VHDL (VHSIC hardware description language; VHSIC: very-high-
speed integrated circuit) is a hardware description language used in electronic design
automation to describe digital and mixed-signal systems such as field-programmable
gate arrays and integrated circuits. The structural and dataflow descriptions show a
concurrent behavior. That is, all statements are executed concurrently, and the order
of the statements is not relevant. On the other hand, behavioral descriptions are
executed sequentially in processes, procedures and functions in VHDL. The
behavioral descriptions resemble high-level programming languages.
VHDL allows a mixture of various levels of design entry abstraction.
Synthesis tools such as Precision RTL Synthesis accept all of these levels of abstraction
and minimize the amount of logic needed, resulting in a final netlist description in the
technology of your choice. The top-down design flow is shown in Figure 2.2.
1.2.3 VHDL and Synthesis
VHDL is fully simulatable, but not fully synthesizable. There are several
VHDL constructs that do not have valid representation in a digital circuit. Other
constructs do, in theory, have a representation in a digital circuit, but cannot be
reproduced with guaranteed accuracy. Delay time modeling in VHDL is an example.
State-of-the-art synthesis algorithms can optimize Register Transfer Level (RTL)
circuit descriptions and target a specific technology. Scheduling and allocation
algorithms, which perform circuit optimization at a very high and abstract level, are
not yet robust enough for general circuit applications. Therefore, the result of
synthesizing a VHDL description depends on the style of VHDL that is used.
1.3 FPGA DESIGN AND PROGRAMMING TOOL FLOW
The standard design flow comprises the following steps:
Design Entry and Synthesis: In this step of the design flow, the design is created
using a hardware description language (HDL) for text-based entry. The Xilinx Synthesis
Technology (XST) GUI can be used to synthesize the HDL file into an NGC file.

Once the design and validation process is complete, the binary file generated
(also using the FPGA vendor's proprietary software) is used to (re)configure the
FPGA. Going from schematic/HDL source files to the actual configuration, the source
files are fed to a software suite from the FPGA vendor that, through several steps,
produces a configuration file. This file is then transferred to the FPGA via a serial
interface (JTAG). Initially, the RTL description in VHDL is simulated by creating test
benches to exercise the system and observe the results.
Then, after the synthesis engine has mapped the design to a netlist, the netlist is
translated to a gate-level description, where simulation is repeated to confirm that
synthesis proceeded without errors. Finally the design is laid out in the FPGA, at
which point propagation delays can be added and the simulation run again with these
values back-annotated onto the netlist.