SIMULATION OF 32-BIT ARITHMETIC UNIT USING XILINX
Bharathy G.T
Jerusalem College of Engineering
ABSTRACT
This paper proposes a Verilog design and verification of the operations of a pipelined
floating point arithmetic unit (FPAU). The designed AU accepts two 32-bit floating point
numbers and a code specifying the operation to be performed. The novelty of the AU is that it
achieves high performance through pipelining, a technique in which the execution of multiple
instructions is overlapped. Following a top-down design approach, four arithmetic modules,
Addition, Subtraction, Multiplication and Division, are combined to form the floating-point
AU, and each module is in turn divided into smaller modules. A two-bit selection input
determines which operation takes place at a particular time. The pipeline modules are
independent of each other. The design functionality was validated through simulation and
compilation, and synthesis of the code was performed using Xilinx ISE. The successful
implementation of pipelining in the floating point AU using Verilog fulfils the need for
high-performance applications.
INTRODUCTION
A. General
Floating-point numbers are widely adopted in many applications because of their dynamic
representation capabilities. Floating point representation retains resolution and accuracy
better than fixed-point representation. Based on the IEEE standard, floating-point
representation for digital systems is platform-independent, so data can be interchanged
freely among different digital systems. The ALU is the block in a microprocessor that handles
arithmetic operations, including the computation of floating-point operations. Some
CPUs, such as the AMD Athlon, have more than one floating point unit to handle floating point
operations. The use of Verilog for modeling is especially appealing since it provides a formal
description of the system and allows specific description styles to cover the different
abstraction levels (architectural, register transfer and logic level) employed in the design. In
this design method, the problem is first divided into small pieces, each of which can be seen as
a submodule in Verilog.
Digital arithmetic operations are very important in the design of digital processors
and application-specific systems, and arithmetic circuits play an important role in digital
systems. With the vast development in very large scale integration (VLSI) circuit
technology, many complex circuits have become easily implementable today, and algorithms
that once seemed impossible to implement now have attractive implementation possibilities
for the future. This means that not only conventional computer arithmetic methods, but also
unconventional ones, are worth investigating in new designs. The notion of real numbers
in mathematics is convenient for hand computations and formula manipulation. However, real
numbers are not well suited to general purpose computation, because their numeric
representation as a string of digits expressed in, say, base 10 can be very long or even infinitely
long; examples include π, e, and 1/3. In practice, computers store numbers with finite
precision. Numbers and arithmetic used in scientific computation should therefore meet a few
general criteria.
Standardized methods to represent floating point numbers have been instituted by the
IEEE-754 standard, through which floating point operations can be carried out efficiently
with modest storage requirements. An arithmetic unit is the part of a computer processor (CPU)
that carries out arithmetic operations on the operands in computer instruction words. Generally,
an arithmetic unit performs arithmetic operations such as addition, subtraction, multiplication
and division. Some processors contain more than one AU - for example, one for fixed-point
operations and another for floating point operations. Representing very large or very small
values requires a large range, for which integer representations are no longer appropriate. In
most modern general purpose computer architectures, one or more FPUs are integrated with the
CPU; however, many embedded processors, especially older designs, do not have hardware
support for floating point operations.
Almost every language has a floating point data type; computers from PCs to
supercomputers have floating point accelerators; most compilers are called upon to compile
floating point algorithms from time to time; and virtually every operating system must respond
to floating point exceptions such as overflow.
The term floating point is derived from the meaning that there is no fixed number of
digits before and after the decimal point; that is, the decimal point can float. In general, floating
point representations are slower and less accurate than fixed-point representations, but they can
handle a larger range of numbers. A floating point number may include a fractional part; for
example, 35, -112.5, ½ and 4E-5 are all floating point numbers. Almost every language supports
a floating point data type. A number representation (called a numeral system in mathematics)
specifies some way of storing a number that may be encoded as a string of digits. In computing,
floating point describes a system of numerical representation in which a string of digits (or
bits) represents a rational number. The term floating point refers to the fact that the radix point
(the decimal point or, more commonly in computers, the binary point) can "float"; that is, it can
be placed anywhere relative to the significant digits of the number.
Floating point numbers are one possible way of representing real numbers in binary
format; the IEEE 754 [11] standard defines two floating point format families, the binary
interchange format and the decimal interchange format. This paper focuses only on the single
precision normalized binary interchange format. Figure 1 shows the IEEE 754 single precision
binary format representation; it consists of a one-bit sign (S), an eight-bit exponent (E), and a
twenty-three-bit fraction (M), or mantissa.
32-bit single precision floating point numbers in the IEEE standard are stored as:
S EEEEEEEE MMMMMMMMMMMMMMMMMMMMMMM
S: Sign – 1 bit
E: Exponent – 8 bits
M: Mantissa (fraction) – 23 bits
An extra bit is added to the mantissa to form what is called the significand. If the exponent is
greater than 0 and smaller than 255, and there is a 1 in the MSB of the significand, then the
number is said to be normalized; in this case the real number is represented by (1):
V = (-1)^S * 2^(E - Bias) * (1.M) (1)
Where M = m22*2^-1 + m21*2^-2 + m20*2^-3 + ... + m1*2^-22 + m0*2^-23; Bias = 127.
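The field split and formula (1) can be illustrated with a short software model. (This is a Python sketch for illustration only; the paper's design itself is written in Verilog.)

```python
import struct

def decode_single(bits):
    """Split a 32-bit pattern into its sign, exponent and mantissa fields."""
    s = (bits >> 31) & 0x1
    e = (bits >> 23) & 0xFF
    m = bits & 0x7FFFFF
    return s, e, m

def value_of_normalized(bits):
    """Apply V = (-1)^S * 2^(E - 127) * (1.M) to a normalized number."""
    s, e, m = decode_single(bits)
    assert 0 < e < 255, "formula (1) applies only to normalized numbers"
    return (-1) ** s * 2.0 ** (e - 127) * (1 + m / 2 ** 23)

# Cross-check against the platform's native single-precision decoding.
bits = struct.unpack(">I", struct.pack(">f", -6.5))[0]
print(value_of_normalized(bits))   # -6.5
```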
The addition of two floating point numbers involves two cases. Case I: both
numbers are of the same sign, i.e. both are positive or both negative; the MSBs (sign bits) of
the two numbers are then both 1 or both 0. Case II: the numbers are of different signs, i.e. one
is positive and the other negative; the MSB of one number is then 1 and that of the other is 0.
Step 1: Enter two numbers N1 and N2. E1, S1 and E2, S2 represent the exponent and significand
of N1 and N2 respectively.
Step 2: Is E1 or E2 = '0'? If yes, set the hidden bit of N1 or N2 to zero. If not, check whether
E2 > E1; if yes, swap N1 and N2; if E1 > E2, the contents of N1 and N2 need not be swapped.
Step 3: Calculate the difference in exponents d = E1 - E2. If d = '0', there is no need to shift
the significand. If d is more than '0', say 'y', then shift S2 to the right by 'y' positions and fill
the leftmost bits with zeros. The shifting is done including the hidden bit.
Step 4: The amount of shifting, i.e. 'y', is added to the exponent of N2: new E2 =
(previous E2) + 'y'. The result is now in normalized form because E1 = E2.
Step 5: Check whether N1 and N2 have different signs; if 'no', proceed to Step 6.
Step 6: Add the significands of 24 bits each, including the hidden bit: S = S1 + S2.
Step 7: Check whether there is a carry out of the significand addition. If yes, add '1' to the
exponent value (E1 or the new E2). After the addition, shift the overall significand result to
the right by one position, making the MSB of S '1' and dropping the LSB of the significand.
Step 8: If there is no carry out in Step 6, the previous exponent is the real exponent.
Step 9: The sign of the result, i.e. its MSB, is the MSB of either N1 or N2.
Step 10: Assemble the result into the 32-bit format, excluding the 24th bit of the significand,
i.e. the hidden bit.
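The same-sign path of the steps above can be sketched as a software model. (Illustrative Python, not the paper's Verilog; rounding and denormal handling are omitted.)

```python
import struct

def fp_add_same_sign(n1, n2):
    """Software model of Steps 1-10 for two same-sign 32-bit patterns."""
    # Steps 1-2: unpack fields; swap so N1 holds the larger exponent.
    s1, e1, m1 = n1 >> 31, (n1 >> 23) & 0xFF, n1 & 0x7FFFFF
    s2, e2, m2 = n2 >> 31, (n2 >> 23) & 0xFF, n2 & 0x7FFFFF
    if e2 > e1:
        (s1, e1, m1), (s2, e2, m2) = (s2, e2, m2), (s1, e1, m1)
    # Hidden bit: 1 for a normalized operand, 0 when its exponent is zero.
    sig1 = m1 | (0x800000 if e1 else 0)
    sig2 = m2 | (0x800000 if e2 else 0)
    # Steps 3-4: align the smaller significand by the exponent difference.
    sig2 >>= e1 - e2
    # Steps 6-8: add the 24-bit significands; a carry-out bumps the exponent.
    s, e = sig1 + sig2, e1
    if s & 0x1000000:          # carry into bit 24
        s >>= 1
        e += 1
    # Steps 9-10: reassemble, dropping the hidden (24th) bit.
    return (s1 << 31) | (e << 23) | (s & 0x7FFFFF)

res = fp_add_same_sign(0x3FC00000, 0x40100000)            # 1.5 + 2.25
print(struct.unpack(">f", struct.pack(">I", res))[0])     # 3.75
```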
A first 8-bit comparator is used to compare the exponents of the two numbers; if the
exponents are equal, no shifting is needed. A second 8-bit comparator compares each
exponent with zero; if the exponent of a number is zero, the hidden bit of that number is set to
zero. A third comparator checks whether the exponent of number 2 is greater than that of
number 1; if it is, the two numbers are swapped.
One subtractor is required to compute the difference between the 8-bit exponents of the
two numbers. A second subtractor is used when the two numbers are of different signs: if a
carry appears after the addition of the significands, that carry is subtracted from the
exponent using an 8-bit subtractor.
One 24-bit adder is required to add the 24-bit significands of the two numbers. One 8-bit
adder is required when both numbers are of the same sign: if a carry appears after the addition
of the significands, it is added to the exponent using this 8-bit adder. A second 8-bit adder is
used to add the amount of shifting to the exponent of the smaller number.
One swap unit is required to swap the numbers when N2 is greater than N1; swapping is
normally done using a third variable. Two shift units are required: one for shifting left and
one for shifting right.
The algorithm for floating point multiplication is explained through the flow chart in Figure
3. Let N1 and N2 be normalized operands represented by S1, M1, E1 and S2, M2, E2 as their
respective sign bits, mantissas (significands) and exponents.
Sign Bit Calculation: the result of the multiplication is negative if exactly one of the multiplied
numbers is negative; the result sign is obtained by XORing the signs of the two inputs.
Exponent Addition is done through an unsigned adder that adds the exponent of the first input
to the exponent of the second input, after which the bias (127) is subtracted from the sum
(i.e. E1 + E2 - Bias). The result of this stage is called the intermediate exponent.
Significand Multiplication multiplies the unsigned significands and places the binary point in
the multiplication product. The result of the significand multiplication is called the
intermediate product (IP). The unsigned significand multiplication is performed on 24 bits.
Overflow due to exponent addition can be compensated during subtraction of the bias,
resulting in a normal output value (normal operation). An underflow may occur while
subtracting the bias to form the intermediate exponent. If the intermediate exponent < 0, it
is an underflow that can never be compensated; if the intermediate exponent = 0, it is an
underflow that may be compensated during normalization by adding 1 to it. When an overflow
occurs, an overflow flag signal goes high and the result becomes ±Infinity (the sign determined
by the signs of the floating point multiplier inputs). When an underflow occurs, an
underflow flag signal goes high and the result becomes ±Zero (the sign likewise determined
by the signs of the floating point multiplier inputs).
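The sign, exponent and significand stages described above can be modelled as follows. (An illustrative Python sketch, not the paper's Verilog; truncation is used in place of rounding, and denormals are not produced.)

```python
def fp_mul(n1, n2):
    """Model of the multiplier stages: sign XOR, exponent add minus bias,
    24x24-bit significand multiply, then normalization and flag cases."""
    s1, e1, m1 = n1 >> 31, (n1 >> 23) & 0xFF, n1 & 0x7FFFFF
    s2, e2, m2 = n2 >> 31, (n2 >> 23) & 0xFF, n2 & 0x7FFFFF
    sign = s1 ^ s2                            # XOR of the input signs
    exp = e1 + e2 - 127                       # intermediate exponent
    ip = (m1 | 0x800000) * (m2 | 0x800000)    # 48-bit intermediate product
    # The product of two 1.M significands lies in [1, 4): if bit 47 is set
    # the product is >= 2, so shift one extra place and bump the exponent.
    if ip & (1 << 47):
        exp += 1
        mant = (ip >> 24) & 0x7FFFFF          # truncation in place of rounding
    else:
        mant = (ip >> 23) & 0x7FFFFF
    if exp >= 255:
        return (sign << 31) | (0xFF << 23)    # overflow case: +/-Infinity
    if exp <= 0:
        return sign << 31                     # underflow case: +/-Zero
    return (sign << 31) | (exp << 23) | mant

print(hex(fp_mul(0x3FC00000, 0x40200000)))    # 0x40700000, i.e. 1.5 * 2.5 = 3.75
```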
The algorithm for floating point division is explained through the flow chart in Figure
4. Let N1 and N2 be normalized operands represented by S1, M1, E1 and S2, M2, E2 as their
respective sign bits, mantissas (significands) and exponents. Let x = N1 and d = N2, with the
final result taken as q = x/d. The same four steps are again used for floating point division.
The sign bit calculation, mantissa division, exponent subtraction (here the bias is added back
rather than subtracted, since it cancels in E1 - E2), rounding of the result to fit the available
bits, and normalization are done in a similar way to multiplication.
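Under the same assumptions, the division path can be sketched as follows. (Illustrative Python; rounding and the overflow/underflow special cases are omitted for brevity.)

```python
def fp_div(n1, n2):
    """Model of the division path for q = x/d: sign XOR, exponent
    subtraction with the bias re-added, and significand division."""
    s1, e1, m1 = n1 >> 31, (n1 >> 23) & 0xFF, n1 & 0x7FFFFF
    s2, e2, m2 = n2 >> 31, (n2 >> 23) & 0xFF, n2 & 0x7FFFFF
    sign = s1 ^ s2
    exp = e1 - e2 + 127        # the bias cancels in E1 - E2, so add it back
    sig1, sig2 = m1 | 0x800000, m2 | 0x800000
    # Fixed-point quotient with 23 extra fraction bits (truncated, not rounded).
    q = (sig1 << 23) // sig2
    if not (q & 0x800000):     # quotient in [0.5, 1): renormalize
        q <<= 1
        exp -= 1
    return (sign << 31) | (exp << 23) | (q & 0x7FFFFF)

print(hex(fp_div(0x40700000, 0x3FC00000)))   # 0x40200000, i.e. 3.75 / 1.5 = 2.5
```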
SIMULATION RESULTS
The following figures show the inputs and outputs of the various operations of the
floating point arithmetic unit.
CONCLUSION
The arithmetic unit is an important part of any system. Here, a 32-bit floating point
arithmetic unit has been simulated using Verilog HDL. The functions of the arithmetic unit
were designed using the pipelining technique and were found to be efficient, as the pipeline
modules are smaller as well as independent. This design can be further used for higher
performance applications.
REFERENCES