Abstract:
Floating-point numbers are represented using the IEEE 754 standard. IEEE 754 defines formats for representing
single-precision and double-precision floating-point numbers, along with rules for performing basic
arithmetic operations on them. The standard specifies a sign bit, an exponent field, and a mantissa field for both
single-precision (32-bit) and double-precision (64-bit) floating-point numbers.
This project introduces an efficient approach to IEEE 754 floating-point multiplication by implementing it
in the logarithmic domain using the Logarithmic Number System (LNS). It overcomes the limitations of
traditional floating-point multipliers. By utilizing logarithmic and antilogarithmic converters, the
logarithmic multiplier carries out multiplication through addition, enabling support for higher accuracy levels.
Floating-point multipliers play a vital role in high-performance computing applications such as image and signal
processing. The proposed approach offers a solution to the challenges posed by floating-point
multiplication, providing improved performance, reduced delay, and low power consumption for
demanding computational tasks in various applications. The multiplier is implemented in Verilog HDL,
targeted on the Spartan-3E FPGA.
Keywords: IEEE Standard 754, Floating Point Multiplier, Image Processing, Logarithmic Number System
Introduction:
Importance of VLSI:
VLSI (Very Large Scale Integration) holds immense importance across various technological domains for
several reasons:
1. Miniaturization: VLSI enables the integration of millions to billions of transistors onto a single chip.
This miniaturization leads to smaller, more powerful devices like smartphones, IoT sensors, and high-
performance computing systems.
2. Increased Functionality: With more transistors on a chip, VLSI allows for the implementation of
complex functionalities in a single device. This results in more powerful processors, advanced memory
systems, and multifunctional chips.
3. Power Efficiency: VLSI allows for the development of energy-efficient devices by optimizing power
consumption at both chip and system levels. This is crucial for battery-powered devices and reduces
environmental impact.
4. Cost Reduction: By integrating multiple functions onto a single chip, VLSI helps in reducing the overall
cost of manufacturing, assembling, and maintaining electronic devices.
5. Advancements in Technology: VLSI advancements drive innovation in various fields such as artificial
intelligence, machine learning, telecommunications, automotive technology, healthcare, and more. It
enables the development of cutting-edge applications and technologies.
Floating point:
In VLSI (Very Large Scale Integration), floating-point refers to a numerical representation format used to
handle real numbers (numbers with fractional parts) in digital circuits. Floating-point numbers are
typically represented as a sign bit, an exponent, and a fractional part (mantissa).
The term "floating-point" comes from the fact that the decimal point can "float" within the digits of the
number. This allows a wide range of numbers to be represented, both very small and very large, by
adjusting the exponent while maintaining a certain precision.
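For example, the decimal value 6.5 equals 1.625 × 2^2, so in IEEE 754 single precision it is stored as sign bit 0, biased exponent 129 (2 + 127), and fraction bits 1010000...0 (the binary fraction 0.101 = 0.625), giving the 32-bit word 0x40D00000.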
In VLSI design, floating-point units are often used to perform arithmetic operations involving real
numbers, providing a more versatile and accurate way to handle computations that involve a wide range
of values and precision requirements compared to fixed-point arithmetic. Floating-point units are crucial
in applications requiring high precision, such as scientific computing, graphics processing, and complex
mathematical calculations.
Multiplier:
Importance:
Multipliers are fundamental components in digital signal processing (DSP) and many other
computational tasks. Their importance lies in their ability to efficiently perform complex arithmetic
operations, particularly multiplication. Here are some key reasons why multipliers are important:
1. Arithmetic Operations:
- Multipliers are essential for performing multiplication operations, which are fundamental in various
mathematical and computational algorithms. They can significantly speed up the execution of tasks that
involve repetitive multiplication, such as matrix operations and polynomial evaluations.
2. Signal Processing:
- DSP applications, including audio processing, image processing, and communication systems, often
require intensive multiplication operations. Multipliers play a critical role in implementing filters,
convolution operations, and other mathematical transformations essential for processing digital signals.
3. Efficiency in Algorithms:
- Many algorithms, such as Fast Fourier Transform (FFT) and finite impulse response (FIR) filters, heavily
rely on the efficient implementation of multiplication operations. Multipliers contribute to the overall
efficiency and speed of these algorithms.
4. Power Efficiency:
- Dedicated hardware multipliers are often more power-efficient than software-based multiplication
algorithms running on general-purpose processors. This makes them suitable for applications with strict
power constraints, such as mobile devices and battery-powered systems.
5. Complex Arithmetic:
- In applications dealing with complex numbers, such as in electrical engineering and physics,
multiplication is a fundamental operation for manipulating real and imaginary components. Multipliers
are crucial for efficiently handling complex arithmetic.
6. Matrix Operations:
- In linear algebra, matrix multiplication is a core operation for solving systems of linear equations and
performing transformations. Multipliers are essential for efficiently carrying out matrix multiplication in
various applications, including graphics rendering and scientific simulations.
7. Parallel Processing:
- FPGAs (Field-Programmable Gate Arrays) and other hardware accelerators often leverage parallelism
for improved performance. Multipliers can be implemented in parallel, allowing for faster execution of
tasks that involve multiple multiplications simultaneously.
8. Computational Efficiency:
- Multipliers contribute to the overall computational efficiency of a system by reducing the number of
clock cycles required to perform multiplication operations. This is particularly important in real-time
applications where quick responses are critical.
9. Resource Optimization:
- In hardware design, optimizing the use of resources is essential. Multipliers, when efficiently utilized,
can lead to more compact and resource-efficient designs, which is crucial in applications where space on
a chip is limited.
In summary, multipliers are fundamental components in digital systems, playing a crucial role in a wide
range of applications where efficient and high-speed multiplication operations are essential for
computational tasks. They contribute to the overall performance, power efficiency, and resource
optimization of digital systems.
Figure: Conventional floating-point multiplier datapath – the sign bits (A_Sign, B_Sign) are XORed, the exponents are added and the bias is subtracted, the mantissas are multiplied, and the normalizer produces the final result.
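A minimal Verilog sketch of this datapath, assuming IEEE 754 single precision, normalized inputs, and no rounding or special-case (zero, infinity, NaN) handling, is shown below; it illustrates the figure above, not the multiplier implemented in this project.

    // Sign XOR, exponent addition with bias subtraction, mantissa
    // multiplication, and a one-step normalizer (rounding omitted).
    module fp_mult_sketch (
        input  wire [31:0] a,
        input  wire [31:0] b,
        output wire [31:0] result
    );
        wire        sign    = a[31] ^ b[31];            // XOR of the sign bits
        wire [23:0] mant_a  = {1'b1, a[22:0]};          // restore the hidden leading 1
        wire [23:0] mant_b  = {1'b1, b[22:0]};
        wire [47:0] product = mant_a * mant_b;          // 24 x 24 mantissa multiply

        // If the mantissa product is >= 2.0 its top bit is set, so take the
        // upper fraction bits and increment the exponent (the normalizer step).
        wire        norm     = product[47];
        wire [22:0] mant_out = norm ? product[46:24] : product[45:23];
        wire [8:0]  exp_out  = a[30:23] + b[30:23] - 8'd127 + norm;  // add exponents, subtract bias

        assign result = {sign, exp_out[7:0], mant_out};
    endmodule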
Advantages of Multiplier:
In the context of VLSI (Very Large Scale Integration), a multiplier is a fundamental building block that
performs the multiplication operation on two binary numbers. Multipliers are essential components in
digital signal processing, arithmetic operations, and various other applications. Here are some
advantages of multipliers in VLSI:
1. High-Speed Processing:
- VLSI multipliers are designed to perform multiplication operations quickly and efficiently. This high-
speed processing is crucial in applications where real-time or high-throughput performance is required,
such as in digital signal processing (DSP) applications.
2. Area Efficiency:
- VLSI multipliers are designed to be compact and occupy minimal chip area while still delivering high
performance. This is important in VLSI design where the goal is to integrate a large number of
components onto a single chip.
3. Power Efficiency:
- Multipliers are optimized for power efficiency, which is crucial in battery-operated devices and other
power-sensitive applications. Power-efficient multipliers contribute to overall system energy savings.
4. Versatility:
- VLSI multipliers can be designed to support various multiplication formats, such as fixed-point and
floating-point arithmetic. This versatility makes them suitable for a wide range of applications, including
general-purpose computing and specialized signal processing tasks.
5. Scalability:
- Multipliers can be designed with scalability in mind, allowing them to be easily adapted to different
technology nodes and process technologies. This scalability is important for keeping pace with
advancements in semiconductor manufacturing.
Disadvantages of Multiplier:
In VLSI (Very Large Scale Integration) design, multipliers are essential components for performing
arithmetic operations in digital circuits. However, like any other component, multipliers have their
disadvantages. Here are some of the potential drawbacks associated with multipliers in VLSI:
1. Area Overhead:
- Multipliers typically occupy a significant amount of silicon area in an integrated circuit. This can be a
critical concern, especially in applications where minimizing the chip's footprint is crucial, such as in
mobile devices or IoT devices.
2. Power Consumption:
- Multipliers are among the more power-hungry arithmetic blocks in a datapath; large bit-width multipliers can contribute significantly to a chip's dynamic power consumption.
3. Latency:
- Multipliers may introduce latency in the processing of digital signals. The time required to complete a
multiplication operation can impact the overall performance of the system, particularly in applications
that demand high-speed processing.
4. Complexity of Design:
- Designing efficient and high-performance multipliers can be a complex task. As the bit-width and
precision requirements increase, the complexity of the multiplier design also grows. This complexity can
lead to longer design cycles and potentially higher development costs.
5. Routing Challenges:
- Multiplier modules can impose challenges in routing signals due to the interconnectivity involved. The
large number of inputs and outputs in a multiplier can lead to increased routing congestion, making it
more difficult to achieve optimal placement and routing in the overall chip design.
IEEE 754 Floating-Point Format:
Sign Bit: The sign bit is a single bit that indicates the sign of the floating-point number, determining
whether the number is positive or negative. In most floating-point representations, including IEEE
754, the sign bit is the leftmost bit: 0 denotes a positive number and 1 a negative number. It is
independent of the mantissa and exponent but crucial in representing the overall value and ensuring
correct arithmetic operations.
Mantissa: The mantissa represents the significant digits of the floating-point number, including the
fractional part. It determines the precision or accuracy of the number. The sign bit is not part of the
mantissa.
Exponent: The exponent represents the scale or magnitude of the floating-point number. It indicates
how many positions the binary point should be shifted to obtain the actual value.
Single precision (32-bit): sign 1 bit, exponent 8 bits, mantissa 23 bits.
Double precision (64-bit): sign 1 bit, exponent 11 bits, mantissa 52 bits.
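As an illustration, a minimal Verilog fragment (assuming the single-precision layout above) slices these three fields out of a 32-bit word:

    // Field extraction for an IEEE 754 single-precision word.
    module fp_fields (
        input  wire [31:0] fp_in,     // 32-bit floating-point word
        output wire        sign,      // bit 31: 0 = positive, 1 = negative
        output wire [7:0]  exponent,  // bits 30..23: biased exponent (bias = 127)
        output wire [22:0] mantissa   // bits 22..0: fraction (hidden leading 1 not stored)
    );
        assign sign     = fp_in[31];
        assign exponent = fp_in[30:23];
        assign mantissa = fp_in[22:0];
    endmodule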
The concept of a logarithmic floating-point multiplier aims to address limitations related to the exponent
and mantissa field sizes in traditional floating-point multipliers. By leveraging logarithmic representation,
it seeks to optimize resource utilization by efficiently encoding and processing floating-point numbers
with larger exponents and mantissas within constrained hardware resources such as LUT6s in UltraScale+
devices.
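The underlying identity is the logarithm product rule: for positive A and B, log2(A · B) = log2(A) + log2(B). Once both operands are converted to the logarithmic domain, the costly mantissa multiplication is therefore replaced by a simple addition, and an antilogarithm (2^x) conversion recovers the product.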
Figure: Logarithmic floating-point multiplier datapath – each input floating-point number passes through a log converter, the converted mantissas are added, and an anti-log converter produces the multiplication output.
A logarithmic floating-point multiplier utilizing a 4-bit mantissa is constructed with a specific architecture
to optimize hardware resources and minimize delay. For instance, a 4-bit mantissa multiplier with a
5*LUT6 implementation and 1*LUT delay is suggested for efficient operation. The delay of the mantissa
addition is crucial in maintaining the clock frequency, affecting both latency and throughput. Fast carry-chain
logic (such as CARRY8) is employed to minimize delay and reduce logic resource consumption
during mantissa addition.
The efficient implementation of the 4-bit mantissa multiplier, leveraging LUT6s and CARRY8 logic, aims to
reduce delay and enhance performance. The specific symbols (KW, FL, KL, K, F) used in the architecture
correspond to the logarithmic operations and bit manipulations within the multiplier.
The critical components, including the log converter, mantissa addition, exponent addition, and antilog
converter, are designed using LUT6s and optimized logic to facilitate accurate and fast computation.
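As a rough sketch of how the log converter, mantissa addition, and antilog converter cooperate, the fragment below uses Mitchell's approximation (log2(1.m) ≈ 0.m) for both converters; this is an assumption made for illustration and not the exact LUT6-based converter logic of the design.

    // Illustrative 4-bit mantissa path using Mitchell's approximation
    // (log2(1.m) ~= 0.m); the actual design uses dedicated LUT6 converters.
    module lfp_mantissa_path (
        input  wire [3:0] mant_a,     // fraction bits of operand A (value 1.mant_a)
        input  wire [3:0] mant_b,     // fraction bits of operand B (value 1.mant_b)
        output wire [3:0] mant_prod,  // approximate fraction bits of the product
        output wire       exp_carry   // carry into the exponent adder when the product >= 2
    );
        // Log converter: under Mitchell's approximation the fraction bits are
        // used directly as the logarithm, so no extra logic appears here.
        wire [4:0] log_sum = mant_a + mant_b;   // mantissa addition in the log domain

        // Antilog converter: the fractional part of the sum approximates the
        // product mantissa; the carry-out is absorbed by the exponent adder.
        assign mant_prod = log_sum[3:0];
        assign exp_carry = log_sum[4];
    endmodule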
The LFP multiplier designed with a 4-bit mantissa demonstrates a specific operating
frequency (e.g., 650 MHz for VU13P devices) with defined delays for critical paths. The delay elements,
such as a 1*LUT6 delay for the log converter and multiple LUT6 delays for the other functional blocks, are
identified.
Comparison between Floating point multiplier and logarithmic floating point multiplier:
Mantissa Limitation:
For SFP – These multipliers have a limitation where the size of the mantissa field should be less than or
equal to 3 bits.
For LFP – It addresses the SFP limitation by allowing larger mantissa fields.
Resource Optimization:
For SFP – For larger mantissa fields, these multipliers demand a significant number of LUT6s.
For LFP – These multipliers exhibit reduced resource usage compared to SFP when dealing with larger
mantissa fields. For instance, the number of LUT6s decreases significantly.
Operating Frequency:
For SFP – They operate at a specific frequency, and implementations with larger mantissa fields
reduce the clock frequency due to increased resource demands.
For LFP – These multipliers achieve operating frequencies comparable to SFP multipliers despite
accommodating larger mantissa fields, ensuring similar latency and throughput.
Accuracy analysis of Floating point multiplier and logarithmic floating point multiplier:
For SFP:
Error Categories: Errors in SFP multiplication primarily arise from input inaccuracies and from rounding
to the limited mantissa width.
Error Reduction with Bit Increase: Increasing the mantissa bits by one in SFP approximately halves the
error, enhancing precision.
Precision Levels: SFP tends to exhibit higher precision compared to LFP with the same number of mantissa
bits due to the non-uniform distribution of LFP numbers.
For LFP:
Error Sources: Errors in LFP multiplication primarily arise from input inaccuracies and conversion errors
due to log and antilog converters.
Error Reduction with Increased Bits: Similar to SFP, LFP demonstrates reduced errors when the mantissa
bit count increases. However, due to non-uniform distribution, LFP may have lower precision than SFP
with the same bit count.
Performance and Accuracy Trade-off: LFP-3, despite having similar hardware performance to SFP-1,
exhibits a reduced error ratio compared to LFP-1. This implies that LFP achieves reasonable accuracy levels
while utilizing the same logic resources.
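As a back-of-the-envelope justification of the error-halving behaviour noted above (standard rounding analysis, not a figure taken from the design): with m mantissa bits the worst-case relative representation error is roughly 2^-(m+1), so moving from m to m+1 bits changes the bound from 2^-(m+1) to 2^-(m+2), i.e., each additional mantissa bit approximately halves the error.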
FPGA:
A Field-Programmable Gate Array (FPGA) is a type of integrated circuit that can be configured by a user
or designer after manufacturing. It consists of an array of programmable logic blocks and configurable
interconnects that can be customized to implement various digital circuits. Here are some key aspects
and features of FPGA boards:
1. Programmability:
- FPGAs are known for their reconfigurability. Users can define the functionality of the device by
loading a configuration file onto the FPGA, which effectively programs the internal logic elements and
interconnects.
2. Logic Blocks:
- FPGAs contain numerous logic blocks, which are the fundamental building blocks for implementing
digital circuits. These logic blocks typically include look-up tables (LUTs), flip-flops, multiplexers, and
other elements.
3. Interconnects:
- The interconnects on an FPGA enable the routing of signals between different logic blocks. These
interconnects are programmable, allowing designers to create custom connections based on their
specific requirements.
4. Clock Management:
- FPGAs often come with dedicated resources for clock management. These include phase-locked loops
(PLLs) and delay-locked loops (DLLs) that help generate and manage clock signals, ensuring synchronous
operation of different parts of the design.
5. I/O Blocks:
- FPGA boards include Input/Output (I/O) blocks that interface with external devices. These blocks can
be configured to support various communication standards such as LVDS, DDR, and more.
Existing Model:
The project focuses on enhancing the efficiency and precision of convolutional neural networks (CNNs)
on Field Programmable Gate Arrays (FPGAs). Initially, it addresses limitations of the Small Floating-Point
(SFP) multiplier proposed by Xilinx, which restricts the mantissa field to 3 bits, by introducing the Small
Logarithmic Floating-Point (SLFP) multiplier. This SLFP design leverages logarithmic number systems to
enable higher mantissa sizes (up to 5 bits) with minimal overheads, ensuring multiple accuracy levels
without compromising on hardware efficiency. The implementation includes logarithmic and
antilogarithmic converters, optimizing the mantissa addition with a carry chain to maintain high
throughput (650 MOPS) and low latency (1.5 ns). Comparative analyses demonstrate the superiority of
SLFP, showcasing its potential in applications like MobileNet where precision demands exceed those of
conventional SFP multipliers, providing more flexibility for quantization without extensive retraining
processes.
Proposed Model:
Our project describes a floating-point multiplier implemented in Verilog. It consists of modules for
logarithm calculation, exponent addition, and antilogarithm conversion to handle floating-point
operations. The main module, `floating_point_multiplier`, takes two input numbers, converts them
into logarithmic form, adds their exponents, multiplies their mantissas, and combines the results to
generate the product. Additionally, there is a normal multiplier module that performs mantissa
multiplication and exponent addition separately. Test benches (`floating_point_multiplier_tb` and
`normal_multiplier_tb`) are set up to simulate the functionality, where inputs `a` and `b` are initialized with
specific values representing fixed-point numbers. Overall, the design handles the different components
of floating-point arithmetic to perform multiplication in a hardware description language, Verilog.
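As an illustration of how such a design might be exercised, a minimal testbench sketch is shown below; the module and testbench names follow the text, while the output port name (`product`) and the stimulus values are assumptions made for illustration.

    // Hypothetical testbench skeleton for the top-level multiplier.
    `timescale 1ns / 1ps
    module floating_point_multiplier_tb;
        reg  [31:0] a, b;
        wire [31:0] product;

        // Device under test: the top module described above (port names assumed).
        floating_point_multiplier dut (.a(a), .b(b), .product(product));

        initial begin
            // Drive two example IEEE 754 single-precision operands.
            a = 32'h40200000;   // 2.5
            b = 32'h40800000;   // 4.0
            #10;
            $display("a * b = %h", product);   // expected 41200000 (10.0)
            $finish;
        end
    endmodule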