
525.412 Computer Architecture


Assignment 7 Solutions

6.13 Repeat Exercise 6.8 but perform the multiplications

a. using Booth recoding.
b. using bit-pair encoding.

The operand pairs are:

    a. 101 101 × 110 100    b. 011 110 × 101 011    c. 111 100 × 011 101    d. 101 010 × 110 111
    e. 011 100 × 100 010    f. 110 101 × 110 010    g. 110 001 × 011 011    h. 101 101 × 110 111

Solution

a. In the solutions below the multiplier has been encoded using Booth recoding, in which each bit
has been replaced with a signed digit based on the original bit as well as its neighbor to the right:

    Bit and its right-hand neighbor    Booth encoding
    00                                   0
    01                                  +1
    10                                  −1
    11                                   0

Multiplying by −1 and adding is the same as adding in the two's complement of the multiplicand.
a. 101 101 × 110 100. The multiplier 110 100 recodes to the Booth digits 0 −1 +1 −1 0 0. Writing
each partial product as a sign-extended 12-bit value and adding:

      000 000 000 000      digit  0 at weight 2^0
      000 000 000 000      digit  0 at weight 2^1
      000 001 001 100      digit −1 at weight 2^2:  −(−19) × 4  = +76
      111 101 101 000      digit +1 at weight 2^3:   (−19) × 8  = −152
      000 100 110 000      digit −1 at weight 2^4:  −(−19) × 16 = +304
    + 000 000 000 000      digit  0 at weight 2^5
      000 011 100 100   =  228  =  (−19) × (−12)

The remaining multiplications are carried out the same way; the recoded multipliers and products are:

    b. 011 110 × 101 011:  Booth digits −1 +1 −1 +1 0 −1;  product 110 110 001 010 = −630 = 30 × (−21)
    c. 111 100 × 011 101:  Booth digits +1 0 0 −1 +1 −1;   product 111 110 001 100 = −116 = (−4) × 29
    d. 101 010 × 110 111:  Booth digits 0 −1 +1 0 0 −1;    product 000 011 000 110 =  198 = (−22) × (−9)
    e. 011 100 × 100 010:  Booth digits −1 0 0 +1 −1 0;    product 110 010 111 000 = −840 = 28 × (−30)
    f. 110 101 × 110 010:  Booth digits 0 −1 0 +1 −1 0;    product 000 010 011 010 =  154 = (−11) × (−14)
    g. 110 001 × 011 011:  Booth digits +1 0 −1 +1 0 −1;   product 111 001 101 011 = −405 = (−15) × 27
    h. 101 101 × 110 111:  Booth digits 0 −1 +1 0 0 −1;    product 000 010 101 011 =  171 = (−19) × (−9)
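These recodings and products can also be checked mechanically. The short Python sketch below is not
part of the original solution; it simply applies the encoding table above and sums the signed
partial products, and running it on problem (a) reproduces the digits 0 −1 +1 −1 0 0 and the
product 228.

    def booth_recode(bits):
        # Booth-recode a two's-complement multiplier given as a bit string.
        # Each bit is paired with its right-hand neighbor (an implicit 0
        # beyond the LSB): 00 -> 0, 01 -> +1, 10 -> -1, 11 -> 0.
        table = {('0', '0'): 0, ('0', '1'): +1, ('1', '0'): -1, ('1', '1'): 0}
        padded = bits + '0'
        return [table[(padded[i], padded[i + 1])] for i in range(len(bits))]

    def booth_multiply(multiplicand_bits, multiplier_bits):
        # Multiply two n-bit two's-complement numbers using the Booth digits.
        n = len(multiplicand_bits)
        to_int = lambda b: int(b, 2) - (1 << n) * int(b[0])   # two's-complement value
        m = to_int(multiplicand_bits)
        digits = booth_recode(multiplier_bits)
        # A digit d at bit position k contributes d * m * 2**k to the product.
        return sum(d * m * (1 << k) for k, d in enumerate(reversed(digits)))

    print(booth_recode('110100'))              # [0, -1, 1, -1, 0, 0]
    print(booth_multiply('101101', '110100'))  # 228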

b. In the solutions below the multiplier has been encoded using bit-pair encoding (Booth radix-4
encoding), in which each pair of bits has been replaced with a signed radix-4 digit based on the
original pair of bits as well as their neighbor to the right:

    Bit pair and its right-hand neighbor    Bit-pair encoding
    000                                       0
    001                                      +1
    010                                      +1
    011                                      +2
    100                                      −2
    101                                      −1
    110                                      −1
    111                                       0

Multiplying by −1 (or −2) and adding is the same as adding in the two's complement of the
(appropriately shifted) multiplicand.

a. 101 101 × 110 100. The multiplier 110 100 recodes to the radix-4 digits −1 +1 0, so only three
partial products are needed:

      000 000 000 000      digit  0 at weight 4^0
      111 110 110 100      digit +1 at weight 4^1:   (−19) × 4  = −76
    + 000 100 110 000      digit −1 at weight 4^2:  −(−19) × 16 = +304
      000 011 100 100   =  228  =  (−19) × (−12)

The remaining multiplications yield the same products as with Booth recoding; the bit-pair recoded
multipliers are:

    b. 011 110 × 101 011:  radix-4 digits −1 −1 −1;  product 110 110 001 010 = −630 = 30 × (−21)
    c. 111 100 × 011 101:  radix-4 digits +2 −1 +1;  product 111 110 001 100 = −116 = (−4) × 29
    d. 101 010 × 110 111:  radix-4 digits −1 +2 −1;  product 000 011 000 110 =  198 = (−22) × (−9)
    e. 011 100 × 100 010:  radix-4 digits −2 +1 −2;  product 110 010 111 000 = −840 = 28 × (−30)
    f. 110 101 × 110 010:  radix-4 digits −1 +1 −2;  product 000 010 011 010 =  154 = (−11) × (−14)
    g. 110 001 × 011 011:  radix-4 digits +2 −1 −1;  product 111 001 101 011 = −405 = (−15) × 27
    h. 101 101 × 110 111:  radix-4 digits −1 +2 −1;  product 000 010 101 011 =  171 = (−19) × (−9)
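As with Booth recoding, these results can be checked with a short Python sketch (again not part of
the original solution); it applies the radix-4 table above and reproduces the digits −1 +1 0 and
the product 228 for problem (a).

    def bitpair_recode(bits):
        # Bit-pair (Booth radix-4) recode a two's-complement multiplier with an
        # even number of bits.  Each bit pair plus the bit to its right (an
        # implicit 0 beyond the LSB) selects one signed radix-4 digit.
        table = {'000': 0, '001': +1, '010': +1, '011': +2,
                 '100': -2, '101': -1, '110': -1, '111': 0}
        padded = bits + '0'
        return [table[padded[i:i + 3]] for i in range(0, len(bits), 2)]

    def bitpair_multiply(multiplicand_bits, multiplier_bits):
        n = len(multiplicand_bits)
        to_int = lambda b: int(b, 2) - (1 << n) * int(b[0])   # two's-complement value
        m = to_int(multiplicand_bits)
        digits = bitpair_recode(multiplier_bits)
        # Radix-4 digit k (counting from the right) has weight 4**k.
        return sum(d * m * 4 ** k for k, d in enumerate(reversed(digits)))

    print(bitpair_recode('110100'))              # [-1, 1, 0]
    print(bitpair_multiply('101101', '110100'))  # 228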

6.28 Convert the following decimal numbers to IEEE single-precision floating-point numbers. Re-
port the results as hexadecimal values. You need not extend the calculation of the significand
value beyond its most significant 8 bits.

b. 6.5125
e. 56,000,135
f. −23.23
g. 6.02 × 10^23

Solution

6.28 b.

6.5125₁₀ = 1.628125 × 2^2
         ≈ 1.A0CCCCD₁₆ × 2^2
         = 1.1010 0000 1100 1100 1100 1100 1101₂ × 2^2

The number is positive, so the sign bit s = 0.


The exponent, expressed in excess-127 notation, is e = 127₁₀ + 2₁₀ = 128₁₀ + 1 = 1000 0001₂.
The first 23 bits of the fraction, excluding the initial 1-bit, are 1010 0000 1100 1100 1100 110₂.
Piecing this together as <s><e><f> gives

     sign   exponent    significand
      0     1000 0001   1010 0000 1100 1100 1100 110

    = 0100 0000 1101 0000 0110 0110 0110 0110
    = 40D06666₁₆

If only the most significant eight bits of the significand are included (assuming the bit to the
left of the binary point is one of them), then the result can be shortened to 40D00000₁₆.
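As a quick cross-check (not part of the original solution, and assuming a Python environment), the
struct module packs 6.5125 directly into the IEEE single-precision format and yields the same bit
pattern:

    import struct

    # Pack 6.5125 into IEEE single precision and print the 32-bit pattern in hex.
    bits = struct.unpack('>I', struct.pack('>f', 6.5125))[0]
    print(format(bits, '08X'))   # 40D06666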

6.28 e.
56,000,135₁₀ = 1.6689 × 2^25
             = 1.AB3F438₁₆ × 2^25
             = 1.1010 1011 0011 1111 0100 0011 1000₂ × 2^25
             ≈ 1.1010 1011 0011 1111 0100 010₂ × 2^25.

Here we round off to 23 bits after the binary point. If the discarded bits had been exactly a 1
followed by all zeroes (exactly one half of the least significant remaining bit), we would round
in the direction that yields a 0 in the least significant remaining bit.
The sign bit s = 0 since this number is positive.
The exponent e = 127 + 25 = 128 + 24 = 1001 1000₂.
Piecing these together gives

    0 1001 1000 1010 1011 0011 1111 0100 010
  = 0100 1100 0101 0101 1001 1111 1010 0010
  = 4C55 9FA2₁₆
  ≈ 4C56 0000₁₆.

6.28 f.
−23.23 = −1.451875 × 2^4
       ≈ −1.73AE148₁₆ × 2^4
       = −1.0111 0011 1010 1110 0001 0100 1000₂ × 2^4.

The sign bit s = 1 since this number is negative.
The exponent e = 127 + 4 = 128 + 3 = 1000 0011₂.
Piecing these together gives

    1 1000 0011 0111 0011 1010 1110 0001 010
  = 1100 0001 1011 1001 1101 0111 0000 1010
  = C1B9 D70A₁₆
  ≈ C1BA 0000₁₆.

6.28 g. Consider first the exponent part of 6.02 × 10^23.

6.02 × 10^23 = 1.99185 × 2^78
             ≈ 1.FDE9F11₁₆ × 2^78
             = 1.1111 1101 1110 1001 1111 0001 0001₂ × 2^78
             ≈ 1.1111 1101 1110 1001 1111 001₂ × 2^78.

The sign bit s = 0 since this number is positive.
The exponent e = 127 + 78 = 128 + 77 = 1100 1101₂.
Piecing these together gives

    0 1100 1101 1111 1101 1110 1001 1111 001
  = 0110 0110 1111 1110 1111 0100 1111 1001
  = 66FE F4F9₁₆
  ≈ 66FF 0000₁₆.
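The same struct-based cross-check (again a sketch, not part of the original solution) reproduces
the full-precision patterns found in parts e, f, and g before they are truncated to eight
significand bits:

    import struct

    def float_to_hex(x):
        # IEEE single-precision bit pattern of x, as eight hex digits.
        return format(struct.unpack('>I', struct.pack('>f', x))[0], '08X')

    print(float_to_hex(56000135))   # 4C559FA2
    print(float_to_hex(-23.23))     # C1B9D70A
    print(float_to_hex(6.02e23))    # 66FEF4F9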

6.29 Convert the following IEEE single-precision floating-point numbers to their decimal values.
Report the answers to three significant figures. Use scientific notation if needed to maintain
precision:

a. 0x21E0 0000
b. 0x8906 0000
c. 0x4B90 0000
d. 0xf1A6 0000

Solution

6.29 a.

0x21E0 0000 = 0010 0001 1110 0000 0000 0000 0000 0000
= 0 0100 0011 1100 0000 0000 0000 0000 000

The sign bit is 0, meaning the number is positive.


The exponent ê = 0100 0011₂ = e + 0111 1111₂. Solving this yields e = −60.
The significand is

    1.1100 0000 0000 0000 0000 000₂ = 1.C00000₁₆ = 1.75

The floating-point number, therefore, is 1.75 × 2^−60 = 1.518 × 10^−18 ≈ 1.52 × 10^−18.
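The manual decoding can be mirrored in a few lines of Python (a sketch, not part of the original
solution) by extracting the three fields directly from the 32-bit word:

    word = 0x21E00000
    sign     = (word >> 31) & 0x1
    exp_hat  = (word >> 23) & 0xFF        # biased (excess-127) exponent field
    fraction = word & 0x7FFFFF            # 23 fraction bits

    e = exp_hat - 127                     # -60
    significand = 1 + fraction / 2**23    # restore the hidden 1-bit: 1.75
    value = (-1) ** sign * significand * 2.0 ** e

    print(e, significand, value)          # -60 1.75 ~1.518e-18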

6.29 b.

0x8906 0000 = 1000 1001 0000 0110 0000 0000 0000 0000
= 1 0001 0010 0000 1100 0000 0000 0000 000

The sign bit is 1, meaning the number is negative.


The exponent ê = 0001 0010₂ = e + 0111 1111₂. Solving this yields e = −109.
The significand is

    1.0000 1100 0000 0000 0000 000₂ = 1.0C0000₁₆ ≈ 1.047

The floating-point number, therefore, is −1.047 × 2^−109 = −1.613 × 10^−33 ≈ −1.61 × 10^−33.

6.29 c.

0x4B90 0000 = 0100 1011 1001 0000 0000 0000 0000 0000
= 0 1001 0111 0010 0000 0000 0000 0000 000

The sign bit is 0, meaning the number is positive.


The exponent ê = 1001 0111₂ = e + 0111 1111₂. Solving this yields e = 24.
The significand is

    1.0010 0000 0000 0000 0000 000₂ = 1.200000₁₆ = 1.125

The floating-point number, therefore, is 1.125 × 2^24 = 1.887 × 10^7 ≈ 1.89 × 10^7.

6.29 d.

0xF1A6 0000 = 1111 0001 1010 0110 0000 0000 0000 0000
= 1 1110 0011 0100 1100 0000 0000 0000 000

The sign bit is 1, meaning the number is negative.


The exponent ê = 1110 0011₂ = e + 0111 1111₂. Solving this yields e = 100.
The significand is

    1.0100 1100 0000 0000 0000 000₂ = 1.4C0000₁₆ ≈ 1.297

The floating-point number, therefore, is −1.297 × 2^100 = −1.644 × 10^30 ≈ −1.64 × 10^30.
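All four conversions can also be checked at once with struct (a sketch, not part of the original
solution), which interprets each hex pattern as a single-precision value:

    import struct

    def hex_to_float(h):
        # Interpret an 8-digit hex string as an IEEE single-precision value.
        return struct.unpack('>f', bytes.fromhex(h))[0]

    for h in ('21E00000', '89060000', '4B900000', 'F1A60000'):
        print(h, hex_to_float(h))
    # 21E00000  ~  1.52e-18
    # 89060000  ~ -1.61e-33
    # 4B900000  =  18874368.0  ~ 1.89e7
    # F1A60000  ~ -1.64e30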

6.31 In floating-point addition, can you give an example where a + b = a for some nonzero value of b?

Solution

Let a = 1 × 2^24 and let b = 1. In the IEEE single-precision floating-point format there
are only 24 bits of precision available, including the implied 1-bit in a normalized number.
When we add them in floating-point hardware, they must first be aligned, which means the
lesser number needs to be shifted to the right until its exponent matches that of the larger
number. As a result we do not get the same results as when we add them using infinitely
precise real numbers, as shown in the table below.

    Real numbers                                Aligned floating-point numbers
    with unlimited precision                    with 24 bits of precision

      1 × 2^24                                    1.0000 0000 0000 0000 0000 000 × 2^24
    + 1                                         + 0.0000 0000 0000 0000 0000 000 × 2^24
      1.0000 0000 0000 0000 0000 0001 × 2^24      1.0000 0000 0000 0000 0000 000 × 2^24
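A short NumPy sketch (not part of the original solution, and assuming numpy.float32 for a true
single-precision type) reproduces this behavior:

    import numpy as np

    a = np.float32(2.0 ** 24)   # 1 x 2^24
    b = np.float32(1.0)

    # During alignment b is shifted right 24 places and falls off the end of
    # the 24-bit significand, so it contributes nothing to the sum.
    print(a + b == a)           # True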

6.32 Show that for floating-point addition, the associative law, (a + b) + c = a + (b + c), does not
hold.

Solution

It is sufficient to show that the associative law fails in even one instance. The law fails, gener-
ally, because there is a limited number of bits in the representation. When doing floating-point
addition or subtraction, the arguments must be aligned, that is, they must have the same ex-
ponent. One of the numbers—usually the lesser—is scaled so that it has the same exponent
as the other. This means using a denormalized version of the number, that is, it requires that
there be leading zeroes before the addition can be performed.
Consider the addition problem (a + b) + c = (1 × 2^24 + 1) + 1. The 1 inside the parentheses
must be aligned by shifting it to the right 24 places, making its exponent 24 also. However,
with only 24 bits in the number, the shifted operand becomes zero:

    Real numbers                                Aligned floating-point numbers
    with unlimited precision                    with 24 bits of precision

      1 × 2^24                                    1.0000 0000 0000 0000 0000 000 × 2^24
    + 1                                         + 0.0000 0000 0000 0000 0000 000 × 2^24
      1.0000 0000 0000 0000 0000 0001 × 2^24      1.0000 0000 0000 0000 0000 000 × 2^24

      1.0000 0000 0000 0000 0000 0001 × 2^24      1.0000 0000 0000 0000 0000 000 × 2^24
    + 1                                         + 0.0000 0000 0000 0000 0000 000 × 2^24
      1.0000 0000 0000 0000 0000 0010 × 2^24      1.0000 0000 0000 0000 0000 000 × 2^24

When we instead perform the addition a + (b + c) = 1 × 2^24 + (1 + 1), we get quite a
different result:

    Real numbers                                Aligned floating-point numbers
    with unlimited precision                    with 24 bits of precision

      1                                           1.0000 0000 0000 0000 0000 000 × 2^0
    + 1                                         + 1.0000 0000 0000 0000 0000 000 × 2^0
      2                                           1.0000 0000 0000 0000 0000 000 × 2^1

      2                                           0.0000 0000 0000 0000 0000 001 × 2^24
    + 1 × 2^24                                  + 1.0000 0000 0000 0000 0000 000 × 2^24
      1.0000 0000 0000 0000 0000 0010 × 2^24      1.0000 0000 0000 0000 0000 001 × 2^24

We get the correct answer the second way but not the first way. Not only does this show that
addition of floating point numbers does not obey the associative law, it also suggests errors
can be reduced by doing addition and subtraction of the smallest numbers first.
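The same NumPy sketch as before (again assuming numpy.float32, not part of the original solution)
reproduces both groupings:

    import numpy as np

    a = np.float32(2.0 ** 24)
    b = np.float32(1.0)
    c = np.float32(1.0)

    # Grouped as (a + b) + c, each 1 is shifted out separately and lost.
    print((a + b) + c)                 # 16777216.0
    # Grouped as a + (b + c), the sum b + c = 2 survives the alignment.
    print(a + (b + c))                 # 16777218.0
    print((a + b) + c == a + (b + c))  # False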
