Multiplication: Accomplished via Shifting and Addition

Multiplication is more complex than addition and requires more time and more area in a processor. It can be implemented through shifting and addition, following the grade-school algorithm; three hardware versions are presented, each refining the register organization. Negative numbers are handled by converting to positive before multiplying. Floating-point representation uses a sign, an exponent, and a significand, per the IEEE 754 standard, but introduces complexities in accuracy and in the operations themselves. Computer arithmetic is constrained by limited precision, and the meaning of a bit pattern depends on how instructions interpret it rather than on the bits themselves.


Multiplication

• More complicated than addition


– accomplished via shifting and addition
• More time and more area
• Let’s look at 3 versions based on the grade-school algorithm

       0010   (multiplicand = 2)
     × 1011   (multiplier = 11)
    -------
       0010   (1 × multiplicand)
      0010    (1 × multiplicand, shifted left 1)
     0000     (0 × multiplicand, shifted left 2)
    0010      (1 × multiplicand, shifted left 3)
    -------
    0010110   (product = 22)

• Negative numbers: convert and multiply


– there are better techniques, we won’t look at them
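The shift-and-add idea can be sketched in a few lines of Python (a minimal sketch for unsigned operands; the function name is mine, not from the slides):

```python
def shift_add_multiply(multiplicand: int, multiplier: int) -> int:
    """Grade-school binary multiplication via shifting and addition."""
    product = 0
    while multiplier != 0:
        if multiplier & 1:           # current multiplier bit is 1
            product += multiplicand  # add the (shifted) multiplicand
        multiplicand <<= 1           # shift multiplicand left 1 bit
        multiplier >>= 1             # move on to the next multiplier bit
    return product

# The slides' example: 0010 × 1011 = 2 × 11
print(shift_add_multiply(0b0010, 0b1011))  # 22
```

Each loop iteration mirrors one row of the grade-school layout: a conditional add, then a shift.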

1998 Morgan Kaufmann Publishers 86

Multiplication: Implementation

• Datapath: 64-bit Multiplicand register (shift left), 64-bit ALU, 32-bit Multiplier register (shift right), 64-bit Product register (write), and control logic that tests Multiplier0

Start
1. Test Multiplier0
   1a. If Multiplier0 = 1, add the multiplicand to the product and place the result in the Product register (if Multiplier0 = 0, skip to step 2)
2. Shift the Multiplicand register left 1 bit
3. Shift the Multiplier register right 1 bit
32nd repetition? No (< 32 repetitions): go back to step 1. Yes (32 repetitions): Done
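The first version's register behavior can be simulated in Python (a sketch, not the book's hardware description; `n` parameterizes the 32-bit operand width so the slides' 4-bit example is easy to trace):

```python
def multiply_v1(multiplicand: int, multiplier: int, n: int = 32) -> int:
    """Simulate the first hardware version for unsigned n-bit operands:
    a 2n-bit Multiplicand register that shifts left, an n-bit Multiplier
    register that shifts right, and a 2n-bit Product register."""
    mask = (1 << 2 * n) - 1               # model the 2n-bit register width
    mcand = multiplicand
    product = 0
    for _ in range(n):                    # exactly n repetitions
        if multiplier & 1:                # 1. test Multiplier0
            product = (product + mcand) & mask  # 1a. add to the product
        mcand = (mcand << 1) & mask       # 2. shift Multiplicand left 1 bit
        multiplier >>= 1                  # 3. shift Multiplier right 1 bit
    return product

# The slides' example with 4-bit operands: 0010 × 1011
print(multiply_v1(0b0010, 0b1011, n=4))  # 22
```

Note the cost this version pays: a 64-bit ALU and a 64-bit multiplicand register, which the later versions eliminate.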



Second Version
• Datapath: 32-bit Multiplicand register, 32-bit ALU, 32-bit Multiplier register (shift right), 64-bit Product register (shift right, write), and control logic that tests Multiplier0

Start
1. Test Multiplier0
   1a. If Multiplier0 = 1, add the multiplicand to the left half of the product and place the result in the left half of the Product register (if Multiplier0 = 0, skip to step 2)
2. Shift the Product register right 1 bit
3. Shift the Multiplier register right 1 bit
32nd repetition? No (< 32 repetitions): go back to step 1. Yes (32 repetitions): Done
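The second version can be sketched the same way (again a Python simulation of the register transfers, not the book's hardware; Python's unbounded integers stand in for the adder's carry-out into the shifted product):

```python
def multiply_v2(multiplicand: int, multiplier: int, n: int = 32) -> int:
    """Simulate the second version: the multiplicand stays put, only an
    n-bit ALU is needed, and the Product register shifts right instead."""
    product = 0                        # 2n-bit Product register
    for _ in range(n):
        if multiplier & 1:             # 1. test Multiplier0
            # 1a. add multiplicand to the LEFT half of the product
            # (real hardware keeps the adder's carry-out; Python ints grow)
            product += multiplicand << n
        product >>= 1                  # 2. shift Product right 1 bit
        multiplier >>= 1               # 3. shift Multiplier right 1 bit
    return product

print(multiply_v2(0b0010, 0b1011, n=4))  # 22
```

Shifting the product right instead of the multiplicand left is what lets the ALU and multiplicand register shrink from 64 to 32 bits.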


Final Version

• Datapath: 32-bit Multiplicand register, 32-bit ALU, 64-bit Product register (shift right, write; the multiplier starts in the right half), and control logic that tests Product0

Start
1. Test Product0
   1a. If Product0 = 1, add the multiplicand to the left half of the product and place the result in the left half of the Product register (if Product0 = 0, skip to step 2)
2. Shift the Product register right 1 bit
32nd repetition? No (< 32 repetitions): go back to step 1. Yes (32 repetitions): Done
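The final version's trick, storing the multiplier in the right half of the Product register, can be sketched as follows (a Python simulation under the same assumptions as before):

```python
def multiply_final(multiplicand: int, multiplier: int, n: int = 32) -> int:
    """Simulate the final version: the multiplier is loaded into the
    right half of the Product register and is consumed bit by bit as
    the product shifts right, so no separate Multiplier register exists."""
    product = multiplier                 # right half holds the multiplier
    for _ in range(n):
        if product & 1:                  # 1. test Product0
            product += multiplicand << n # 1a. add to the left half
        product >>= 1                    # 2. shift Product right 1 bit
    return product

print(multiply_final(0b0010, 0b1011, n=4))  # 22
```

Each right shift discards one used multiplier bit while making room for one product bit, so the two values share a single 64-bit register without collision.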



Floating Point (a brief look)

• We need a way to represent


– numbers with fractions, e.g., 3.1416
– very small numbers, e.g., .000000001
– very large numbers, e.g., 3.15576 × 10^9
• Representation:
– sign, exponent, significand: (–1)^sign × significand × 2^exponent
– more bits for significand gives more accuracy
– more bits for exponent increases range
• IEEE 754 floating point standard:
– single precision: 8 bit exponent, 23 bit significand
– double precision: 11 bit exponent, 52 bit significand
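The field widths can be inspected directly, since Python floats are IEEE 754 doubles; this is a quick sketch (the helper name `fields_double` is mine, not from the slides):

```python
import struct

def fields_double(x: float):
    """Split an IEEE 754 double into its sign (1 bit), biased
    exponent (11 bits), and significand/fraction (52 bits) fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

# 1.0 = (-1)^0 × 1.0 × 2^0, so the exponent field holds just the bias
print(fields_double(1.0))   # (0, 1023, 0)
```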


IEEE 754 floating-point standard

• Leading “1” bit of significand is implicit

• Exponent is “biased” to make sorting easier


– all 0s is the smallest exponent; all 1s is the largest
– bias of 127 for single precision and 1023 for double precision
– summary: (–1)^sign × (1 + significand) × 2^(exponent – bias)

• Example:

– decimal: -.75 = -3/4 = -3/2^2


– binary: -.11 = -1.1 × 2^-1
– floating point: exponent = 126 = 01111110

– IEEE single precision: 10111111010000000000000000000000
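The bit pattern above can be checked with Python's `struct` module (a quick sanity check, not part of the original slides):

```python
import struct

# Reinterpret the single-precision encoding of -0.75 as a 32-bit integer
(bits,) = struct.unpack(">I", struct.pack(">f", -0.75))
print(f"{bits:032b}")   # 10111111010000000000000000000000
print(f"{bits:08x}")    # bf400000
```

Reading the fields off: sign = 1, exponent = 01111110 (126), significand = 1000…0, matching the slide's derivation.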



Floating Point Complexities

• Operations are somewhat more complicated (see text)


• In addition to overflow we can have “underflow”
• Accuracy can be a big problem
– IEEE 754 keeps two extra bits, guard and round
– four rounding modes
– positive divided by zero yields “infinity”
– zero divided by zero yields “not a number”
– other complexities
• Implementing the standard can be tricky
• Not using the standard can be even worse
– see text for a description of the 80x86 and the Pentium bug!
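The accuracy and special-value points above can be observed directly in Python, whose floats are IEEE 754 doubles (one caveat: Python raises `ZeroDivisionError` on a literal division by zero rather than returning infinity, so `math.inf` is used here):

```python
import math

# Limited precision: 0.1 and 0.2 have no exact binary representation,
# so the sum picks up rounding error
print(0.1 + 0.2 == 0.3)       # False

# Special values: infinity minus infinity is "not a number"
print(math.isnan(math.inf - math.inf))  # True
```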


Chapter Four Summary

• Computer arithmetic is constrained by limited precision


• Bit patterns have no inherent meaning but standards do exist
– two’s complement
– IEEE 754 floating point
• Computer instructions determine “meaning” of the bit patterns
• Performance and accuracy are important so there are many
complexities in real machines (i.e., algorithms and implementation).

• We are ready to move on (and implement the processor), but you may want to look back (Section 4.12 is great reading!)

