
Department of Computer Science

Institute of Business Administration, Karachi

Lab #7: Floating Point Instructions in RISC-V Assembly

Computer Architecture & Assembly Language

March 18, 2024

Course Instructor: Salman Zaffar
Lab Instructor: Mehwish Zafar
Week Performed: Week 10
Room: MTL4

1 Introduction
This lab introduces the floating-point instructions available in the RISC-V instruction set architecture and shows how floating-point numbers are represented and manipulated.

2 Introduction to Floating-Point Numbers


Floating-point numbers are a way to represent real numbers in computers. Unlike integers, which can represent only whole numbers, floating-point numbers allow representation of fractional numbers and very large or very small numbers with a high degree of precision. Floating-point numbers are widely used in scientific and engineering applications where precision and a wide range of values are required.

2.1 How floating-point numbers work:


The idea is to compose a number of two main parts:

• A significand that contains the number's digits. Negative significands represent negative numbers.

• An exponent that says where the decimal (or binary) point is placed relative to the beginning of the significand. Negative exponents represent numbers that are very small (i.e. close to zero).
Such a format satisfies all the requirements:

• It can represent numbers at wildly different magnitudes (limited by the length of the
exponent).

• It provides the same relative accuracy at all magnitudes (limited by the length of the
significand).

• It allows calculations across magnitudes: multiplying a very large and a very small
number preserves the accuracy of both in the result.

2.2 Representation of Floating-Point Numbers:


Floating-point numbers are typically represented using three components: sign, exponent, and mantissa (also known as significand or fraction). The general form of a floating-point number is:

value = (−1)^sign × 1.mantissa × 2^(exponent − bias)

1. Sign: Represents the sign of the number (positive or negative). It is usually represented using one bit, where 0 represents positive and 1 represents negative.

2. Mantissa: Represents the significant digits of the number, including the fractional part. It is stored as a binary fraction following the sign and exponent bits.

3. Exponent: Represents the scale or magnitude of the number. It determines the position of the decimal (or binary) point relative to the beginning of the mantissa. The exponent is usually biased to allow for both positive and negative exponents.

2.3 IEEE 754 Standard:


Nearly all hardware and programming languages use floating-point numbers in the same binary formats, which are defined in the IEEE 754 standard. The usual formats are 32 bits (single precision) or 64 bits (double precision) in total length.
Note that there are some peculiarities:

• The actual bit sequence is the sign bit first, followed by the exponent and finally the significand bits.

• The exponent does not have a sign; instead an exponent bias is subtracted from it (127 for single and 1023 for double precision). This, and the bit sequence, allows floating-point numbers to be compared and sorted correctly even when interpreting them as integers.

• The significand's most significant digit is omitted and assumed to be 1, except for subnormal numbers, which are marked by an all-0 exponent and allow a number range below the smallest normal numbers, at the cost of precision.

• There are separate positive and negative zero values, differing in the sign bit, where all other bits are 0. These must be considered equal even though their bit patterns are different.

• There are special positive and negative infinity values, where the exponent is all 1-bits and the significand is all 0-bits. These are the results of calculations where the positive range of the exponent is exceeded, or division of a regular number by zero.

• There are special not a number (or NaN) values, where the exponent is all 1-bits and the significand is not all 0-bits. These represent the result of various undefined calculations (like multiplying 0 and infinity, any calculation involving a NaN value, or application-specific cases). Even bit-identical NaN values must not be considered equal.

If this seems too abstract and you want to see how some specific values look in IEEE 754, try the Float Toy, or the IEEE 754 Visualization.

2.4 Example:
Let’s consider the single-precision floating-point number format. Suppose we have the
following binary representation:

Sign bit: 0, Exponent: 10000010, Mantissa: 11010011000000000000000

Given Binary Representation:

• Sign bit: 0 (positive)

• Exponent: 10000010₂ = 130₁₀ (biased by 127)

• Mantissa: 1.11010011000000000000000₂ (leading bit implied)

Interpretation:

• Sign bit: Since the sign bit is 0, the number is positive.

• Exponent: The exponent value is 130 − 127 = 3.

• Mantissa: The value of the mantissa is 1.11010011000000000000000₂, including the implied leading bit.

• Final Value: 1.11010011000000000000000₂ × 2^(130−127) = 1.11010011₂ × 2^3 = 14.59375₁₀

This example demonstrates how a floating-point number is represented and interpreted according to the IEEE 754 standard.

3 RISC-V Floating Point Extension


The RISC-V architecture defines optional floating-point extensions called RVF, RVD, and RVQ for operating on single-, double-, and quad-precision floating-point numbers, respectively. RVF/D/Q define 32 floating-point registers, f0 to f31, with a width of 32, 64, or 128 bits, respectively. When a processor implements multiple floating-point extensions, it uses the lower part of the floating-point register for lower-precision instructions. f0 to f31 are separate from the program (also called integer) registers, x0 to x31. As with program registers, floating-point registers are reserved for certain purposes by convention, as given in Figure 1.

Figure 3 lists all of the floating-point instructions. Computation and comparison instructions use the same mnemonics for all precisions, with .s, .d, or .q appended at the end to indicate precision. For example, fadd.s, fadd.d, and fadd.q perform single-, double-, and quad-precision addition, respectively. Other floating-point instructions include fsub, fmul, fdiv, fsqrt, fmadd (multiply-add), and fmin. Memory accesses use separate instructions for each precision. Loads are flw, fld, and flq, and stores are fsw, fsd, and fsq.

Figure 1: Floating-Point Register Set

Figure 2: Coding Example

Figure 3: RISC-V Floating Point Extension

4 Laboratory Tasks
Write RISC-V code to calculate the value of Pi using Nilkantha's series with floating-point instructions.

4.1 Task Explanation


Pi is an irrational number having non-recurring, non-terminating decimal digits. We commonly use Pi = 3.14 or Pi = 22/7, but these are just approximations for our ease. One way to calculate it is Nilkantha's series:

π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − ...
Below is example C code for the calculation of PI using Nilkantha’s series.

// C code to implement the above approach
#include <stdio.h>

// Function to calculate PI
double calculatePI(double PI, double n, double sign)
{
    // Add 1000000 terms of the series
    for (int i = 0; i <= 1000000; i++) {
        PI = PI + (sign * (4 / (n * (n + 1) * (n + 2))));

        // Alternate between addition and subtraction
        sign = sign * (-1);

        // Increment by 2 according to the formula
        n += 2;
    }

    // Return the value of Pi
    return PI;
}

// Driver code
int main(void)
{
    // Initialise PI=3, n=2, and sign=1
    double PI = 3, n = 2, sign = 1;

    // Function call
    printf("The approximation of Pi is %0.8lf\n", calculatePI(PI, n, sign));
    return 0;
}

// OUTPUT
// The approximation of Pi is 3.14159265

4.2 RISC-V Equivalent Code


4.2.1 Write an equivalent RISC-V code in Venus and try to get as many correct decimal digits as possible through efficient coding. Refer to the example given in lab along with the single- and double-precision floating point instruction tables.

4.2.2 Paste the screenshots of both the Integer and Floating Registers sections, highlighting the values of the used registers.
