Computer Architecture: Arithmetic For Computers
CHAPTER 3
CE Arithmetic for Computers
1. Introduction
2. Addition and Subtraction
3. Multiplication
4. Division
5. Floating Point
CE Introduction
CE Addition and Subtraction
Addition:
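The worked figure for this slide is not preserved; a representative example, adding 6ten and 7ten as two's complement numbers (only the low byte is shown):
  0000 0110two (6ten)
+ 0000 0111two (7ten)
= 0000 1101two (13ten)
Bits are added right to left, with carries passed to the next digit to the left, just as in decimal addition.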
CE Addition and Subtraction
Subtraction:
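Subtraction uses addition: the appropriate operand is simply negated (two's complement) before being added. For example, 7ten − 6ten (low byte shown):
  0000 0111two (7ten)
+ 1111 1010two (two's complement of 6ten)
= 0000 0001two (1ten), with the carry out of the most significant bit discarded.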
CE Addition and Subtraction
Overflow
When does overflow occur with signed numbers?
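The answer figure from the original slide is not preserved; the standard conditions are:
■ A + B with A ≥ 0 and B ≥ 0: overflow if the result is < 0
■ A + B with A < 0 and B < 0: overflow if the result is ≥ 0
■ A − B with A ≥ 0 and B < 0: overflow if the result is < 0
■ A − B with A < 0 and B ≥ 0: overflow if the result is ≥ 0
Overflow cannot occur when adding operands with different signs, or when subtracting operands with the same sign.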
CE Addition and Subtraction
Overflow
The computer designer must therefore provide a way to ignore overflow in
some cases and to recognize it in others.
The MIPS solution is to have two kinds of arithmetic instructions to recognize the two
choices:
■ Add (add), add immediate (addi), and subtract (sub) cause exceptions (interrupt)
on overflow.
■ Add unsigned (addu), add immediate unsigned (addiu), and subtract unsigned
(subu) do not cause exceptions on overflow.
Note: MIPS detects overflow with an exception, also called an interrupt on many
computers. An exception or interrupt is essentially an unscheduled procedure call.
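For instance (the registers here are arbitrary):
add  $t0, $t1, $t2   # raises an overflow exception if the signed sum does not fit in 32 bits
addu $t0, $t1, $t2   # produces the same 32-bit result but never raises an overflow exception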
CE Multiplication
Example: 1000ten × 1001ten = 1001000ten.
Although the decimal example above happens to use only 0 and 1, multiplication of binary numbers must always use 0 and 1, and thus always offers only these two choices:
1. Just place a copy of the multiplicand (1 × multiplicand) in the proper place if the multiplier digit is a 1.
2. Place 0 (0 × multiplicand) in the proper place if the multiplier digit is a 0.
A small worked binary example follows.
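A minimal worked sketch of those choices, using the same digits in binary (1000two = 8ten, 1001two = 9ten):
    1000two
  × 1001two
  ---------
    1000      multiplier bit 0 is 1: copy the multiplicand
   0000       multiplier bit 1 is 0: place 0
  0000        multiplier bit 2 is 0: place 0
 1000         multiplier bit 3 is 1: copy the multiplicand, shifted left three places
 ---------
 1001000two = 72ten = 8ten × 9ten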
CE Multiplication
Sequential Version of the Multiplication Algorithm and Hardware
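The figure itself is not preserved in this text; in outline, this first version keeps the multiplicand in a 64-bit register (initially in the right half), the multiplier in a 32-bit register, and the product in a 64-bit register initialized to 0, and repeats three steps 32 times:
1. If the least significant bit of the Multiplier register is 1, add the Multiplicand register to the Product register and write the result back into Product.
2. Shift the Multiplicand register left 1 bit.
3. Shift the Multiplier register right 1 bit.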
CE Multiplication
Compared with the first version on the previous slide, the Multiplicand register, ALU, and Multiplier register are all 32 bits wide, with only the Product register left at 64 bits.
Signed Multiplication
The easiest way to understand how to deal with signed numbers is to first convert the multiplier and multiplicand to positive numbers and then remember the original signs.
The algorithm should then be run for 31 iterations, leaving the signs out of the calculation, and the product negated at the end if the original signs disagree.
It turns out that the last algorithm will also work for signed numbers, provided the shifting steps extend the sign of the product.
CE Multiplication
Faster Multiplication
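The figure for this slide is not preserved; the idea, roughly, is to use one adder per multiplier bit (each adder's inputs are the multiplicand ANDed with one multiplier bit plus the output of an earlier adder) and to organize the adders as a parallel tree, so the 32 partial-product additions complete in about log2(32) = 5 adder delays instead of 32.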
CE Multiplication
Multiply in MIPS
MIPS keeps the 64-bit product in a separate pair of 32-bit registers named Hi and Lo, and provides two instructions: multiply (mult) and multiply unsigned (multu).
To fetch the integer 32-bit product, the programmer uses move from lo (mflo).
The MIPS assembler generates a pseudoinstruction for multiply that specifies three general-purpose registers, generating mflo and mfhi instructions to place the product into registers.
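A short illustration with real MIPS instructions (the register choices are arbitrary):
mult $s2, $s3   # 64-bit signed product of $s2 and $s3 placed in Hi/Lo
mflo $s0        # low-order 32 bits of the product into $s0
mfhi $s1        # high-order 32 bits, needed only if the product does not fit in 32 bits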
CE Division
Signed division
The simplest solution is to remember the signs of the divisor and dividend and then negate the quotient if the signs disagree.
The one complication of signed division is that we must also set the sign of the remainder. Remember that the following equation must always hold:
Dividend = Quotient × Divisor + Remainder
Example:
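The worked example from the original slide is not preserved; a representative case is dividing −7ten by +2ten. Ignoring the signs, 7 ÷ 2 gives quotient 3 and remainder 1. Because the signs of the dividend and divisor disagree, the quotient is negated: Quotient = −3. The remainder must then satisfy −7 = (−3 × 2) + Remainder, so Remainder = −1; that is, the remainder takes the sign of the dividend.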
CE Division
Divide in MIPS
You may have already observed that the same sequential hardware can be used for both multiply and divide in Fig.3 and Fig.7.
The only requirement is a 64-bit register that can shift left or right and a 32-bit ALU
that adds or subtracts. Hence, MIPS uses the 32-bit Hi and 32-bit Lo registers for both
multiply and divide.
As we might expect from the algorithm above, Hi contains the remainder, and Lo
contains the quotient after the divide instruction completes.
To handle both signed integers and unsigned integers, MIPS has two instructions:
divide (div) and divide unsigned (divu).
The MIPS assembler allows divide instructions to specify three registers, generating
the mflo or mfhi instructions to place the desired result into a general-purpose register.
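For example (the register choices are arbitrary):
div  $s2, $s3   # signed divide: Lo = quotient of $s2 / $s3, Hi = remainder
mflo $s0        # quotient into $s0
mfhi $s1        # remainder into $s1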
CE Floating Point
Definitions
Some representations of real numbers: 3.14159265…ten (π), 2.71828…ten (e), 0.000000001ten or 1.0ten × 10^-9 (seconds in a nanosecond), and 3,155,760,000ten or 3.15576ten × 10^9 (seconds in a typical century).
The alternative notation for the last two numbers above is called scientific notation, which has a single digit to the left of the decimal point.
A number in scientific notation that has no leading 0s is called a normalized number.
Example: 1.0ten × 10^-9 is in normalized scientific notation
0.1ten × 10^-8 is not normalized
10.0ten × 10^-10 is not normalized
Binary numbers can also be written in scientific notation; computer arithmetic that supports such numbers is called floating point.
Floating point: Computer arithmetic that represents numbers in which the binary point is not fixed.
CE Floating Point
Floating-Point representation
A designer of a floating-point representation must find a compromise between the size
of the fraction and the size of the exponent.
This tradeoff is between precision and range:
- Increasing the size of the fraction enhances the precision of the fraction.
- Increasing the size of the exponent increases the range of numbers that can be
represented.
Floating-point numbers are usually a multiple of the size of a word.
The representation of a MIPS floating-point number (a 32-bit word divided into a 1-bit sign, an 8-bit exponent, and a 23-bit fraction):
Where
s is the sign of the floating-point number (1 meaning negative)
exponent is the value of the 8-bit exponent field (including the sign of the exponent)
fraction is the 23-bit number
This representation is called sign and magnitude, since the sign is a separate bit from the
rest of the number.
In general, floating-point numbers are of the form (-1)^S × F × 2^E, where S is the sign, F is the value in the fraction field, and E is the value in the exponent field.
CE Floating Point
Floating-Point representation
Overflow (floating-point): A situation in which a positive exponent becomes too large to
fit in the exponent field.
Underflow (floating-point): A situation in which a negative exponent becomes too large
to fit in the exponent field.
One way to reduce the chance of underflow or overflow is to offer another format that has a larger exponent. In C this number is called double, and operations on doubles are called double precision floating-point arithmetic; single precision floating point is the name of the format in the previous slide.
Double precision: A floating-point value represented in two 32-bit words.
Single precision: A floating-point value represented in a single 32-bit word.
The representation of a double precision floating-point number (two 32-bit words holding a 1-bit sign, an 11-bit exponent, and a 52-bit fraction):
To pack even more bits into the significand (the significand, also called the coefficient or mantissa, is the digit part of a number in scientific notation), IEEE 754 makes the leading 1-bit of normalized binary numbers implicit. Hence, the number is actually 24 bits long in single precision (implied 1 and a 23-bit fraction), and 53 bits long in double precision (1 + 52).
Note: To be precise, we use the term significand to mean the 24- or 53-bit number that is 1 plus the fraction, and fraction when we mean the 23- or 52-bit number.
The representation of the rest of the numbers uses the form from before with the hidden 1 added, where the bits of the fraction represent a number between 0 and 1 and E specifies the value in the exponent field. If we number the bits of the fraction from left to right s1, s2, s3, . . . , then the value is:
(-1)^S × (1 + (s1 × 2^-1) + (s2 × 2^-2) + (s3 × 2^-3) + ⋯) × 2^E
CE Floating Point
Floating-Point representation
Negative exponents pose a challenge to simplified sorting. If we used two's complement or any other notation in which negative exponents have a 1 in the most significant bit of the exponent field, a negative exponent would make the word look like a large number when compared as an unsigned integer.
Example:
1.0two × 2^-1 would have an exponent field of all 1s, so it would look like a big number,
while 1.0two × 2^+1 would have an exponent field starting with 0s, so it would look like the smaller number, even though it is the larger value.
CE Floating Point
Floating-Point representation
The desirable notation must therefore represent the most negative exponent as 00 . . . 00two
and the most positive as 11 . . . 11two. This convention is called biased notation, with the
bias being the number subtracted from the normal, unsigned representation to determine
the real value.
IEEE 754 uses a bias of 127 for single precision, so an exponent of -1 is represented by
the bit pattern of the value (-1 + 127ten ), or 126ten = 0111 1110two , and +1 is represented
by (1+127), or 128ten = 1000 0000two .
The exponent bias for double precision is 1023.
Importance: With a biased exponent, the value represented by a floating-point number is really:
(-1)^S × (1 + Fraction) × 2^(Exponent − Bias)
Single precision numbers then range from as small as ±1.00000000000000000000000two × 2^-126 to as large as ±1.11111111111111111111111two × 2^+127.
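A worked encoding as an illustration (the worked example and answer from the original slides are not preserved here): represent −0.75ten. Since −0.75ten = −0.11two = −1.1two × 2^-1, the single precision fields are sign = 1, exponent = −1 + 127 = 126ten = 0111 1110two, and fraction = 100 0000 0000 0000 0000 0000two. In double precision the only change is the exponent, −1 + 1023 = 1022ten = 011 1111 1110two, with the fraction extended to 52 bits.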
CE Floating Point
Floating-Point representation
Example 2: Converting Binary to Decimal Floating-Point
Answer 2:
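The bit pattern and answer from the original slides are not preserved; as an illustration of the conversion, take the single precision word 1 1000 0001 0100 0000 0000 0000 0000 000two. The sign is 1, the exponent field is 1000 0001two = 129ten, and the fraction is 0.01two = 0.25ten, so the value is (-1)^1 × (1 + 0.25) × 2^(129 − 127) = −1.25 × 4 = −5.0ten.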
CE Floating Point
Floating-Point addition
Let’s add numbers in scientific notation by hand to illustrate the problem in floating-
point addition: 9.999ten x 101 + 1.610ten x 10-1. Assume that we can store only four
decimal digits of the significand and two decimal digits of the exponent.
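The worked answer from the original slides is not reproduced here; the standard four steps give:
Step 1 (align): shift the significand of the number with the smaller exponent right until the exponents match: 1.610ten × 10^-1 = 0.01610ten × 10^1 ≈ 0.016ten × 10^1, since only four digits can be kept.
Step 2 (add the significands): 9.999 + 0.016 = 10.015, so the sum is 10.015ten × 10^1.
Step 3 (normalize, checking for exponent overflow or underflow): 10.015ten × 10^1 = 1.0015ten × 10^2.
Step 4 (round): with four significand digits, 1.0015 rounds to 1.002, giving the final result 1.002ten × 10^2.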
CE Floating Point
Floating-Point addition
Example: Try adding the numbers 0.5ten and -0.4375ten in binary using the algorithm in Fig.9.
Answer:
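The answer slides are not preserved in this text; a worked solution:
0.5ten = 1.000two × 2^-1 and −0.4375ten = −1.110two × 2^-2.
Step 1 (align): shift the significand of the smaller number right one place: −1.110two × 2^-2 = −0.111two × 2^-1.
Step 2 (add the significands): 1.000two + (−0.111two) = 0.001two, so the sum is 0.001two × 2^-1.
Step 3 (normalize): 0.001two × 2^-1 = 1.000two × 2^-4; no exponent overflow or underflow.
Step 4 (round): the sum already fits in four bits, so nothing changes.
Check: 1.000two × 2^-4 = 1/16 = 0.0625ten, which equals 0.5ten − 0.4375ten.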
CE Floating Point
Floating-Point addition
Hardware architecture:
CE Floating Point
Floating-Point multiplication
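The detailed figures for the multiplication slides are not preserved; in outline, floating-point multiplication proceeds in five steps:
1. Add the exponents of the two operands (subtracting the bias once, since it is included in both biased exponents).
2. Multiply the significands.
3. Normalize the product, checking for exponent overflow or underflow.
4. Round the significand to the available number of bits (renormalizing if rounding carries out).
5. Set the sign of the product: positive if the operand signs are the same, negative if they differ.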
CE Floating Point
Floating-Point multiplication
Answer 2:
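The original example and answer are not preserved; as an illustration, multiply 0.5ten by −0.4375ten, i.e. 1.000two × 2^-1 by −1.110two × 2^-2. Adding the exponents gives −1 + (−2) = −3 (with biased exponents: 126 + 125 − 127 = 124, which also represents −3). Multiplying the significands gives 1.000two × 1.110two = 1.110two. The product 1.110two × 2^-3 is already normalized and needs no rounding, and since the operand signs differ the result is −1.110two × 2^-3 = −0.21875ten.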
CE Floating Point
Floating-Point instruction in MIPS
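The instruction table from the original slide is not preserved; in summary, MIPS supports IEEE 754 single and double precision with a separate set of floating-point registers, $f0 through $f31 (double precision values occupy an even-odd pair of these registers), and instructions such as:
■ Arithmetic: add.s, sub.s, mul.s, div.s for single precision and add.d, sub.d, mul.d, div.d for double precision
■ Comparison: c.eq.s, c.lt.s, c.le.s (and the corresponding .d forms), which set a floating-point condition flag
■ Conditional branch: bc1t and bc1f, which branch if that flag is true or false
■ Data transfer: lwc1 and swc1, which move single precision values between memory and the floating-point registers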
CE Floating Point
Floating-Point instruction in MIPS
The base registers for floating-point data transfers remain integer registers. The MIPS
code to load two single precision numbers from memory, add them, and then store the
sum might look like below:
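The code from the original slide is not preserved; a minimal sketch using real MIPS instructions (the register choices and stack offsets here are illustrative):
lwc1  $f4, 0($sp)     # load one single precision value; the base register $sp is an integer register
lwc1  $f6, 4($sp)     # load a second single precision value
add.s $f2, $f4, $f6   # single precision add
swc1  $f2, 8($sp)     # store the single precision sum back to memory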
CE Floating Point
Floating-Point instruction in MIPS
Fig.12 MIPS floating-point architecture
CE Floating Point
Floating-Point instruction in MIPS
Fig.14 MIPS floating-point instruction encoding
CE Floating Point
Accurate Arithmetic
Unlike integers, which can represent exactly every number between the smallest and largest number, floating-point numbers are normally approximations for a number they can't really represent. The reason is that an infinite variety of real numbers exists between, say, 0 and 1, but no more than 2^53 of them can be represented exactly in double precision floating point. The best we can do is to get the floating-point representation close to the actual number.
Thus, IEEE 754 offers several modes of rounding to let the programmer pick the
desired approximation.
Rounding sounds simple enough, but to round accurately requires the hardware to
include extra bits in the calculation. IEEE 754, therefore, always keeps two extra bits on
the right during intermediate additions, called guard and round, respectively.
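A small decimal illustration (not from the original slides), assuming three significant digits: add 2.56ten × 10^0 to 2.34ten × 10^2. With guard and round digits the smaller number is kept as 0.0256ten × 10^2, the sum is 2.3656ten × 10^2, and rounding gives 2.37ten × 10^2. Without the extra digits the smaller operand would be truncated to 0.02ten × 10^2 and the sum would round to 2.36ten × 10^2, off by 1 in the last digit.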