Floating Point Arithmetic
Outline
Floating-Point Numbers
IEEE 754 Floating-Point Standard
Floating-Point Addition and Subtraction
Floating-Point Multiplication
The World is Not Just Integers
Programming languages support numbers with fraction
Called floating-point numbers
Examples:
3.14159265… (π)
2.71828… (e)
0.000000001 or 1.0 × 10^-9 (seconds in a nanosecond)
86,400,000,000,000 or 8.64 × 10^13 (nanoseconds in a day)
The last number is a large integer that cannot fit in a 32-bit integer
We use scientific notation to represent
Very small numbers (e.g. 1.0 × 10^-9)
Very large numbers (e.g. 8.64 × 10^13)
Scientific notation: ±d.f1f2f3f4 … × 10^±e1e2e3
Floating-Point Numbers
Examples of floating-point numbers in base 10 …
5.341×10^3 , 0.05341×10^5 , –2.013×10^-1 , –201.3×10^-3 (the point is the decimal point)
Examples of floating-point numbers in base 2 …
1.00101×2^23 , 0.0100101×2^25 , –1.101101×2^-3 , –1101.101×2^-6 (the point is the binary point)
Exponents are kept in decimal for clarity
The binary number (1101.101)2 = 2^3 + 2^2 + 2^0 + 2^-1 + 2^-3 = 13.625
Floating-point numbers should be normalized
Exactly one non-zero digit should appear before the point
In a decimal number, this digit can be from 1 to 9
In a binary number, this digit should be 1
Normalized FP Numbers: 5.341×10^3 and –1.101101×2^-3
NOT Normalized: 0.05341×10^5 and –1101.101×2^-6
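As a quick sanity check of the binary-point arithmetic above, a small helper can evaluate a binary string with a point (binary_to_decimal is a hypothetical name, not part of any standard library):

```python
def binary_to_decimal(s):
    # Evaluate a binary string with a binary point, e.g. "1101.101"
    int_part, _, frac_part = s.partition('.')
    value = int(int_part, 2) if int_part else 0
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2**-i   # bit i after the point weighs 2^-i
    return value

print(binary_to_decimal("1101.101"))  # 13.625, matching the example above
```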
Floating-Point Representation
A floating-point number is represented by the triple (S, E, F)
S is the Sign bit (0 is positive and 1 is negative)
Representation is called sign and magnitude
E is the Exponent field (signed)
Very large numbers have large positive exponents
Very small close-to-zero numbers have negative exponents
More bits in exponent field increases range of values
F is the Fraction field (fraction after binary point)
More bits in fraction field improves the precision of FP numbers
S Exponent Fraction
Value of a floating-point number = (–1)^S × val(F) × 2^val(E)
Next . . .
Floating-Point Numbers
IEEE 754 Floating-Point Standard
Floating-Point Addition and Subtraction
Floating-Point Multiplication
IEEE 754 Floating-Point Standard
Found in virtually every computer invented since 1980
Simplified porting of floating-point numbers
Unified the development of floating-point algorithms
Increased the accuracy of floating-point numbers
Single Precision Floating Point Numbers (32 bits)
1-bit sign + 8-bit exponent + 23-bit fraction
S Exponent8 Fraction23
Double Precision Floating Point Numbers (64 bits)
1-bit sign + 11-bit exponent + 52-bit fraction
S Exponent11 Fraction52
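The two storage sizes can be observed directly in Python, whose struct module packs values in IEEE 754 format ('f' for single precision, 'd' for double); a minimal sketch:

```python
import struct

# IEEE 754 storage sizes as used by Python's struct module
print(struct.calcsize('f') * 8)  # 32 bits = 1 sign + 8 exponent + 23 fraction
print(struct.calcsize('d') * 8)  # 64 bits = 1 sign + 11 exponent + 52 fraction
```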
Normalized Floating Point Numbers
For a normalized floating point number (S, E, F)
S E F = f1f2f3f4 …
Significand is equal to (1.F)2 = (1.f1f2f3f4…)2
IEEE 754 assumes hidden 1. (not stored) for normalized numbers
Significand is 1 bit longer than fraction
Value of a Normalized Floating Point Number is
(–1)^S × (1.F)2 × 2^val(E)
= (–1)^S × (1.f1f2f3f4 …)2 × 2^val(E)
= (–1)^S × (1 + f1×2^-1 + f2×2^-2 + f3×2^-3 + f4×2^-4 …) × 2^val(E)
(–1)^S is 1 when S is 0 (positive), and –1 when S is 1 (negative)
Biased Exponent Representation
How to represent a signed exponent? Choices are …
Sign + magnitude representation for the exponent
Two’s complement representation
Biased representation
IEEE 754 uses biased representation for the exponent
Value of exponent = val(E) = E – Bias (Bias is a constant)
Recall that exponent field is 8 bits for single precision
E can be in the range 0 to 255
E = 0 and E = 255 are reserved for special use (discussed later)
E = 1 to 254 are used for normalized floating point numbers
Bias = 127 (half of 254), val(E) = E – 127
val(E=1) = –126, val(E=127) = 0, val(E=254) = 127
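The biased-exponent mapping above is a one-line function; a minimal sketch for single precision (BIAS and val are my names):

```python
BIAS = 127  # single-precision bias = 2**(8-1) - 1

def val(E):
    # Value of the stored exponent field under biased representation
    return E - BIAS

print(val(1), val(127), val(254))  # -126 0 127
```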
Converting FP Decimal to Binary
Convert –0.8125 to binary in single and double precision
Solution:
Fraction bits can be obtained using multiplication by 2
0.8125 × 2 = 1.625 (bit 1)
0.625 × 2 = 1.25 (bit 1)
0.25 × 2 = 0.5 (bit 0)
0.5 × 2 = 1.0 (bit 1)
Stop when fractional part is 0
0.8125 = (0.1101)2 = ½ + ¼ + 1/16 = 13/16
Fraction = (0.1101)2 = (1.101)2 × 2^-1 (Normalized)
Exponent = –1 + Bias = 126 (single precision) and 1022 (double)
Single precision: 1 01111110 10100000000000000000000
Double precision: 1 01111111110 1010000000000000000000000000000000000000000000000000
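The bit patterns above can be verified by reinterpreting the packed IEEE 754 bytes as unsigned integers; a minimal sketch ('>f'/'>I' are big-endian single, '>d'/'>Q' big-endian double):

```python
import struct

# Reinterpret the IEEE 754 encodings of -0.8125 as raw bit patterns
bits32 = struct.unpack('>I', struct.pack('>f', -0.8125))[0]
bits64 = struct.unpack('>Q', struct.pack('>d', -0.8125))[0]
print(f'{bits32:032b}')  # 10111111010100000000000000000000
print(f'{bits64:064b}')  # sign 1, exponent 1022, fraction 1010...0
```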
Largest Normalized Float
What is the Largest normalized float?
Solution for Single Precision:
0 11111110 11111111111111111111111
Exponent – bias = 254 – 127 = 127 (largest exponent for SP)
Significand = (1.111 … 1)2 = almost 2
Value in decimal ≈ 2 × 2^127 = 2^128 ≈ 3.4028 … × 10^38
Solution for Double Precision:
0 11111111110 1111111111111111111111111111111111111111111111111111
Value in decimal ≈ 2 × 2^1023 = 2^1024 ≈ 1.79769 … × 10^308
Overflow: exponent is too large to fit in the exponent field
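Both largest values can be checked in Python, which computes in double precision; a minimal sketch (0x7F7FFFFF is the single-precision pattern shown above, and sys.float_info.max exposes the double-precision maximum):

```python
import struct
import sys

# Largest single-precision value: bit pattern 0x7F7FFFFF
max_single = struct.unpack('>f', bytes.fromhex('7f7fffff'))[0]
print(max_single == (2 - 2**-23) * 2.0**127)  # True, value ≈ 3.4028e38

# Largest double-precision value
print(sys.float_info.max == (2 - 2**-52) * 2.0**1023)  # True, ≈ 1.7977e308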
Smallest Normalized Float
What is the smallest (in absolute value) normalized float?
Solution for Single Precision:
0 00000001 00000000000000000000000
Exponent – bias = 1 – 127 = –126 (smallest exponent for SP)
Significand = (1.000 … 0)2 = 1
Value in decimal = 1 × 2^-126 ≈ 1.17549 … × 10^-38
Solution for Double Precision:
0 00000000001 0000000000000000000000000000000000000000000000000000
Value in decimal = 1 × 2^-1022 ≈ 2.22507 … × 10^-308
Underflow: exponent is too small to fit in exponent field
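Like the largest values, both smallest normalized values can be checked directly; a minimal sketch (0x00800000 is the single-precision pattern shown above, and sys.float_info.min exposes the smallest normalized double):

```python
import struct
import sys

# Smallest normalized single: bit pattern 0x00800000 (E = 1, F = 0)
min_single = struct.unpack('>f', bytes.fromhex('00800000'))[0]
print(min_single == 2.0**-126)  # True, value ≈ 1.1755e-38

# Smallest normalized double
print(sys.float_info.min == 2.0**-1022)  # True, ≈ 2.2251e-308
```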
Zero, Infinity, and NaN
Zero
Exponent field E = 0 and fraction F = 0
+0 and –0 are possible according to sign bit S
Infinity
Infinity is a special value represented with maximum E and F = 0
For single precision with 8-bit exponent: maximum E = 255
For double precision with 11-bit exponent: maximum E = 2047
Infinity can result from overflow or division by zero
+∞ and –∞ are possible according to sign bit S
NaN (Not a Number)
NaN is a special value represented with maximum E and F ≠ 0
Results from exceptional situations, such as 0/0 or sqrt(negative)
An operation on a NaN results in NaN: Op(X, NaN) = NaN
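These special values behave the same way in any IEEE 754 implementation; a minimal sketch in Python:

```python
import math

print(1e308 * 10)                                # inf: overflow produces infinity
print(math.isnan(float('inf') - float('inf')))   # True: inf - inf is NaN
print(math.isnan(float('nan') + 1.0))            # True: Op(X, NaN) = NaN
print(float('nan') == float('nan'))              # False: NaN is unequal even to itself
```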
Simple 6-bit Floating Point Example
6-bit floating point representation
S Exponent3 Fraction2
Sign bit is the most significant bit
Next 3 bits are the exponent with a bias of 3
Last 2 bits are the fraction
Same general form as IEEE
Normalized, denormalized
Representation of 0, infinity and NaN
Value of normalized numbers: (–1)^S × (1.F)2 × 2^(E – 3)
Value of denormalized numbers: (–1)^S × (0.F)2 × 2^-2
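The 6-bit format above is small enough to decode by hand or with a few lines of code; a sketch with a hypothetical decoder name (decode6), following the field layout and bias given above:

```python
# bit 5 = sign, bits 4..2 = exponent (bias 3), bits 1..0 = fraction
def decode6(bits):
    s = (bits >> 5) & 1
    e = (bits >> 2) & 0b111
    f = bits & 0b11
    sign = -1.0 if s else 1.0
    if e == 0b111:                      # maximum exponent: infinity or NaN
        return sign * float('inf') if f == 0 else float('nan')
    if e == 0:                          # zero / denormalized: (0.F)2 × 2^-2
        return sign * (f / 4) * 2.0**-2
    return sign * (1 + f / 4) * 2.0**(e - 3)  # normalized: (1.F)2 × 2^(E-3)

print(decode6(0b001100))  # 1.0    (S=0, E=3, F=0)
print(decode6(0b000001))  # 0.0625 (denormalized: (0.01)2 × 2^-2)
print(decode6(0b111100))  # -inf
```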
Next . . .
Floating-Point Numbers
IEEE 754 Floating-Point Standard
Floating-Point Addition and Subtraction
Floating-Point Multiplication
Floating Point Addition Example
Consider adding: (1.111)2 × 2^-1 + (1.011)2 × 2^-3
For simplicity, we assume 4 bits of precision (or 3 bits of fraction)
Cannot add significands … Why?
Because exponents are not equal
How to make exponents equal?
Shift the significand of the lesser exponent right
until its exponent matches the larger number
(1.011)2 × 2^-3 = (0.1011)2 × 2^-2 = (0.01011)2 × 2^-1
Difference between the two exponents = –1 – (–3) = 2
So, shift right by 2 bits
Now, add the significands:
    1.111
  + 0.01011
  ---------
   10.00111 (with carry)
Addition Example – cont’d
So, (1.111)2 × 2^-1 + (1.011)2 × 2^-3 = (10.00111)2 × 2^-1
However, result (10.00111)2 × 2^-1 is NOT normalized
Normalize result: (10.00111)2 × 2^-1 = (1.000111)2 × 2^0
In this example, we have a carry
So, shift right by 1 bit and increment the exponent
Round the significand to fit in appropriate number of bits
We assumed 4 bits of precision or 3 bits of fraction
Round to nearest: (1.000111)2 ≈ (1.001)2
  1.000 | 111  →  round up: 1.000 + 0.001 = 1.001
Renormalize if rounding generates a carry
Detect overflow / underflow
If exponent becomes too large (overflow) or too small (underflow)
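The alignment, addition, and rounding steps above can be verified with exact rational arithmetic; a minimal sketch using Python's fractions (variable names are mine, and round-to-nearest is applied as in the example):

```python
from fractions import Fraction as F

x = F(0b1111, 8) / 2   # (1.111)2 × 2^-1 = 15/16
y = F(0b1011, 8) / 8   # (1.011)2 × 2^-3 = 11/64
s = x + y
print(s == F(0b1000111, 64))    # True: sum is (1.000111)2 × 2^0

# Round the significand to 3 fraction bits (round to nearest):
rounded = F(round(s * 8), 8)
print(rounded == F(0b1001, 8))  # True: (1.001)2 × 2^0
```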
Floating Point Subtraction Example
Consider: (1.000)2 × 2^-3 – (1.000)2 × 2^2
We assume again: 4 bits of precision (or 3 bits of fraction)
Shift significand of the lesser exponent right
Difference between the two exponents = 2 – (–3) = 5
Shift right by 5 bits: (1.000)2 × 2^-3 = (0.00001000)2 × 2^2
Convert subtraction into addition using 2's complement
Sign-magnitude operands:
  0 0.00001 × 2^2
  1 1.00000 × 2^2
Take the 2's complement of the subtrahend and add:
    0 0.00001 × 2^2
  + 1 1.00000 × 2^2   (2's complement)
  -----------------
    1 1.00001 × 2^2
Since the result is negative, convert it from 2's complement back to sign-magnitude:
  – 0.11111 × 2^2
Subtraction Example – cont’d
So, (1.000)2 × 2^-3 – (1.000)2 × 2^2 = –(0.11111)2 × 2^2
Normalize result: –(0.11111)2 × 2^2 = –(1.1111)2 × 2^1
For subtraction, we can have leading zeros
Count number z of leading zeros (in this case z = 1)
Shift left and decrement exponent by z
Round the significand to fit in appropriate number of bits
We assumed 4 bits of precision or 3 bits of fraction
Round to nearest: (1.1111)2 ≈ (10.000)2
  1.111 | 1  →  round up: 1.111 + 0.001 = 10.000
Renormalize: rounding generated a carry
–(1.1111)2 × 2^1 ≈ –(10.000)2 × 2^1 = –(1.000)2 × 2^2
The result would have been exact if more fraction bits were used
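The subtraction example can be checked the same way with exact fractions; a minimal sketch (note the rounding here is a halfway case, and round-half-to-even carries up to (10.000)2, matching the slide):

```python
from fractions import Fraction as F

# (1.000)2 × 2^-3 − (1.000)2 × 2^2 = 1/8 − 4
d = F(1, 8) - 4
print(d == -F(31, 8))   # True: −(0.11111)2 × 2^2 = −(1.1111)2 × 2^1

# Round the normalized significand (1.1111)2 to 3 fraction bits:
sig = F(31, 16)                  # (1.1111)2
rounded = F(round(sig * 8), 8)
print(rounded == 2)              # True: carries to (10.000)2, giving −(1.000)2 × 2^2
```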
Floating Point Addition / Subtraction
Start
1. Compare the exponents of the two numbers. Shift the smaller number to the right until its exponent matches the larger exponent. (Shift significand right by d = |EX – EY|)
2. Add / subtract the significands according to the sign bits. (Add significands when the signs of X and Y are identical, subtract when different: X – Y becomes X + (–Y))
3. Normalize the sum, either shifting right and incrementing the exponent, or shifting left and decrementing the exponent. (Normalization shifts right by 1 bit if there is a carry, or shifts left by the number of leading zeros in the case of subtraction)
4. Round the significand to the appropriate number of bits, and renormalize if rounding generates a carry. (Rounding either truncates the fraction, or adds a 1 to the least significant fraction bit)
Overflow or underflow? yes → Exception; no → Done
Next . . .
Floating-Point Numbers
IEEE 754 Floating-Point Standard
Floating-Point Addition and Subtraction
Floating-Point Multiplication
Floating Point Multiplication Example
Consider multiplying: (1.010)2 × 2^-1 by –(1.110)2 × 2^-2
As before, we assume 4 bits of precision (or 3 bits of fraction)
Unlike addition, we add the exponents of the operands
Result exponent value = (–1) + (–2) = –3
Using the biased representation: EZ = EX + EY – Bias
EX = (–1) + 127 = 126 (Bias = 127 for SP)
EY = (–2) + 127 = 125
EZ = 126 + 125 – 127 = 124 (value = –3)
Now, multiply the significands:
      1.010
    × 1.110
    -------
       0000
      1010
     1010
    1010
  ---------
   10001100
(1.010)2 × (1.110)2 = (10.001100)2
3-bit fraction × 3-bit fraction → 6-bit fraction
Multiplication Example – cont’d
Since sign SX ≠ SY, sign of product SZ = 1 (negative)
So, (1.010)2 × 2^-1 × –(1.110)2 × 2^-2 = –(10.001100)2 × 2^-3
However, result –(10.001100)2 × 2^-3 is NOT normalized
Normalize: (10.001100)2 × 2^-3 = (1.0001100)2 × 2^-2
Shift right by 1 bit and increment the exponent
At most 1 bit can be shifted right … Why?
Round the significand to nearest: (1.0001100)2 ≈ (1.001)2 (3-bit fraction)
  1.000 | 1100  →  round up: 1.000 + 0.001 = 1.001
Result ≈ –(1.001)2 × 2^-2 (normalized)
Detect overflow / underflow
No overflow / underflow because exponent is within range
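The significand product, biased-exponent addition, and rounding above can all be checked with exact fractions; a minimal sketch (variable names are mine):

```python
from fractions import Fraction as F

# Significands multiply: (1.010)2 × (1.110)2
sig = F(0b1010, 8) * F(0b1110, 8)     # 1.25 × 1.75
print(sig == F(0b10001100, 64))       # True: (10.001100)2

# Biased exponents add, then subtract the bias once:
EX, EY, BIAS = -1 + 127, -2 + 127, 127
EZ = EX + EY - BIAS
print(EZ - BIAS)                      # -3, the true result exponent

# Normalize (shift right 1, increment exponent), round to 3 fraction bits:
norm = sig / 2                        # (1.0001100)2 at exponent -2
rounded = F(round(norm * 8), 8)
print(rounded == F(0b1001, 8))        # True: (1.001)2
```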
Floating Point Multiplication
Start
1. Add the biased exponents of the two numbers, subtracting the bias from the sum to get the new biased exponent. (Biased exponent addition: EZ = EX + EY – Bias)
2. Multiply the significands. Set the result sign to positive if the operands have the same sign, and negative otherwise. (Result sign SZ = SX xor SY can be computed independently)
3. Normalize the product if necessary, shifting its significand right and incrementing the exponent. (Since the operand significands 1.FX and 1.FY are ≥ 1 and < 2, their product is ≥ 1 and < 4. To normalize the product, we need to shift right by at most 1 bit and increment the exponent)
4. Round the significand to the appropriate number of bits, and renormalize if rounding generates a carry. (Rounding either truncates the fraction, or adds a 1 to the least significant fraction bit)
Overflow or underflow? yes → Exception; no → Done