Computer Organization and Architecture: UNIT-2
UNIT-2
Arithmetic Unit
Figure 1 gives the truth table for the sum and carry-out functions for adding equally weighted bits xi and yi of two numbers X and Y. The figure also shows logic expressions for these functions, along with an example of addition of the 4-bit unsigned numbers 7 and 6. Note that each stage of the addition process must accommodate a carry-in bit. We use ci to represent the carry-in to stage i, which is the same as the carry-out from stage (i − 1). The logic expression for si in Figure 1, si = xi ⊕ yi ⊕ ci, can be implemented with a 3-input XOR gate, used in Figure 2a as part of the logic required for a single stage of binary addition. The carry-out function, ci+1 = xi yi + xi ci + yi ci, is implemented with an AND-OR circuit, as shown. A convenient symbol for the complete circuit for a single stage of addition, called a full adder (FA), is also shown in the figure.
A cascaded connection of n full-adder blocks can be used to add two n-bit numbers, as shown in
Figure 2b. Since the carries must propagate, or ripple, through this cascade, the configuration is
called a ripple-carry adder.
Figure 2: Logic for addition of binary numbers.
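As an illustration of these expressions, here is a minimal Python sketch of a full adder and a ripple-carry adder built from it; the function names and the LSB-first bit-list representation are chosen here purely for illustration.

def full_adder(x, y, c_in):
    # One stage: s = x XOR y XOR c_in, carry-out = xy + x*c_in + y*c_in
    s = x ^ y ^ c_in
    c_out = (x & y) | (x & c_in) | (y & c_in)
    return s, c_out

def ripple_carry_add(x_bits, y_bits, c0=0):
    # Add two equal-length bit lists (LSB first); the carry ripples from stage to stage.
    result, carry = [], c0
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    return result, carry          # sum bits (LSB first) and the final carry-out

# Example from the text: 7 + 6 = 13 with 4-bit operands (bits listed LSB first)
s, c = ripple_carry_add([1, 1, 1, 0], [0, 1, 1, 0])
print(s, c)   # [1, 0, 1, 1] 0  -> 1101 = 13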
Carry-Lookahead Adder:
A faster adder avoids rippling by defining, for each stage, the generate function Gi = xi yi and the propagate function Pi = xi ⊕ yi, so that ci+1 = Gi + Pi ci can be expanded entirely in terms of the Gi and Pi signals and c0. Thus, all carries can be obtained three gate delays after the input operands X, Y, and c0 are applied, because only one gate delay is needed to develop all Pi and Gi signals, followed by two gate delays in the AND-OR circuit for ci+1. After a further XOR gate delay, all sum bits are available. In total, the n-bit addition process requires only four gate delays, independent of n.
Let us consider the design of a 4-bit adder. The sum and carries can be implemented as
When i=0
S0 = x0 ⊕ y0 ⊕ c0
c1 = G0 + P0c0
When i=1
S1 = x1 ⊕ y1 ⊕ c1
c2 = G1 + P1G0 + P1P0c0
When i=2
S2 = x2 ⊕ y2 ⊕ c2
c3 = G2 + P2G1 + P2P1G0 + P2P1P0c0
When i=3
S3 = x3 ⊕ y3 ⊕ c3
c4 = G3 + P3G2 + P3P2G1 + P3P2P1G0 + P3P2P1P0c0
The complete 4-bit carry-lookahead adder is shown in the accompanying figure. The carries are produced in the block labeled carry-lookahead logic. An adder implemented in this form is called a carry-lookahead adder. The delay through the adder is 3 gate delays for all carry bits and 4 gate delays for all sum bits. In comparison, a 4-bit ripple-carry adder requires 7 gate delays for s3 and 8 gate delays for c4.
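For reference, a minimal Python sketch of the 4-bit carry-lookahead equations above, assuming the definitions Gi = xi yi and Pi = xi ⊕ yi; the function and variable names are illustrative only.

def cla_4bit(x, y, c0=0):
    # 4-bit carry-lookahead adder: x, y are 4-bit integers (0..15).
    xb = [(x >> i) & 1 for i in range(4)]
    yb = [(y >> i) & 1 for i in range(4)]
    G = [xb[i] & yb[i] for i in range(4)]   # generate signals
    P = [xb[i] ^ yb[i] for i in range(4)]   # propagate signals
    # All carries are computed directly from G, P and c0 (no rippling).
    c = [c0, 0, 0, 0, 0]
    c[1] = G[0] | (P[0] & c[0])
    c[2] = G[1] | (P[1] & G[0]) | (P[1] & P[0] & c[0])
    c[3] = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0]) | (P[2] & P[1] & P[0] & c[0])
    c[4] = G[3] | (P[3] & G[2]) | (P[3] & P[2] & G[1]) | (P[3] & P[2] & P[1] & G[0]) \
           | (P[3] & P[2] & P[1] & P[0] & c[0])
    s = [P[i] ^ c[i] for i in range(4)]     # si = xi XOR yi XOR ci
    return sum(s[i] << i for i in range(4)), c[4]

print(cla_4bit(7, 6))   # (13, 0)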
The product is computed one bit at a time by adding the bit columns from right to left and
propagating carry values between columns.
Array Multiplication:
• The combinational array multiplier just described uses a large number of logic gates for
multiplying numbers of practical size, such as 32- or 64-bit numbers.
• Multiplication of two n-bit numbers can also be performed in a sequential circuit that uses
a single n-bit adder.
• Registers A and Q are shift registers. Together, they hold the partial product PPi.
• Multiplier bit qi generates the signal Add/Noadd.
• This signal causes the multiplexer MUX to select 0 when qi = 0, or to select the
multiplicand M when qi = 1, to be added to PPi to generate PP(i + 1).
• The product is computed in n cycles.
• The carry-out from the adder is stored in flip-flop C.
• At the start, the multiplier is loaded into register Q, the multiplicand into register M, and C
and A are cleared to 0.
• At the end of each cycle, C, A, and Q are shifted right one bit position to allow for growth
of the partial product as the multiplier is shifted out of register Q.
• After n cycles, the high-order half of the product is held in register A and the low-order
half is in register Q.
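The behaviour of this sequential circuit can be sketched in a few lines of Python. The register names C, A, Q, and M follow the description above, while the bit manipulation itself is only an illustrative model of the hardware.

def sequential_multiply(m, q, n):
    # Unsigned shift-and-add multiplication of two n-bit numbers.
    # Registers: C (carry flip-flop), A and Q (shift registers), M (multiplicand).
    C, A, Q, M = 0, 0, q, m
    for _ in range(n):
        if Q & 1:                 # multiplier bit q0 selects Add
            A += M
            C = (A >> n) & 1      # carry-out of the n-bit adder
            A &= (1 << n) - 1
        # Shift C, A, Q right one bit position.
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (C << (n - 1))
        C = 0
    return (A << n) | Q           # high half of the product in A, low half in Q

print(sequential_multiply(13, 11, 4))   # 143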
Multiplication of Signed Numbers:
• The general strategy is still to accumulate partial products by adding versions of the
multiplicand as selected by the multiplier bits.
• First, consider the case of a positive multiplier and a negative multiplicand. When we add
a negative multiplicand to a partial product, we must extend the sign-bit value of the
multiplicand to the left as far as the product will extend.
• For a negative multiplier, a straightforward solution is to form the 2’s-complement of
both the multiplier and the multiplicand and proceed as in the case of a positive multiplier.
• A technique that works equally well for both negative and positive multipliers is the Booth
algorithm.
Booth Algorithm:
• The Booth algorithm generates a 2n-bit product and treats both positive and negative
2's-complement n-bit operands uniformly.
• The multiplier is converted into a Booth-recoded multiplier.
• The multiplicand is multiplied by the recoded multiplier.
• The MSB of the result indicates the sign of the result.
• If the sign of the result is 1 (negative), the result is in 2's-complement form.
• The case when the least significant bit of the multiplier is 1 is handled by assuming that an
implied 0 lies to its right.
• The Booth multiplier recoding table is shown below.

Multiplier bit i   Multiplier bit i−1   Version of multiplicand selected by bit i
       0                  0                             0 × M
       0                  1                            +1 × M
       1                  0                            −1 × M
       1                  1                             0 × M
Example 1:
Multiply 45 and 30
• The given multiplier is 0 1 1 1 1 0 (30).
• The Booth-recoded multiplier is formed by appending an implied 0 to the right of the
multiplier and applying the recoding table, giving +1 0 0 0 -1 0.
• The multiplicand is then multiplied by the Booth-recoded multiplier (a sketch of the recoding step follows below).
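A minimal Python sketch of the recoding step, assuming the standard Booth rule (each recoded digit is the bit to its right minus the bit itself, with an implied 0 beyond the LSB); the helper name booth_recode is illustrative only.

def booth_recode(bits_msb_first):
    # Booth digit at position i is bit(i-1) - bit(i), with an implied 0 right of the LSB.
    b = bits_msb_first[::-1]                 # LSB first
    right = [0] + b[:-1]                     # the bit to the right of each position
    digits = [r - x for x, r in zip(b, right)]
    return digits[::-1]                      # digits listed MSB first, weights 2^(n-1) .. 2^0

print(booth_recode([0, 1, 1, 1, 1, 0]))      # [1, 0, 0, 0, -1, 0]  for the multiplier 30

# The recoded digits represent the same value as the original multiplier:
digits = booth_recode([0, 1, 1, 1, 1, 0])
print(sum(d << i for i, d in enumerate(reversed(digits))))   # 30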
• The Booth algorithm has two attractive features. First, it handles both positive and
negative multipliers uniformly.
• Second, it achieves some efficiency in the number of additions required when the
multiplier has a few large blocks of 1s.
Drawbacks:
• The number of add and subtract operations performed by the Booth algorithm is variable.
• The algorithm becomes inefficient when the multiplier has isolated 1s; for example, the
multiplier 01010101 is recoded as +1 -1 +1 -1 +1 -1 +1 -1, which requires eight operations
instead of the four additions needed by ordinary multiplication.
Fast Multipliers:
• Bit-pair recoding of the multiplier results in using at most one summand for each pair of
bits in the multiplier. It is derived directly from the Booth algorithm.
• Group the Booth-recoded multiplier bits in pairs
• If the Booth-recoded multiplier is examined two bits at a time, starting from the right, it
can be rewritten in a form that requires at most one version of the multiplicand to be
added to the partial product for each pair of multiplier bits.
• An example of bit-pair recoding of the multiplier is worked out in the following problem.
Problem 1: Compute the product of -14 and +12 using bit-pair recoding.
Multiplicand M = -14 = 1 1 0 0 1 0; multiplier = +12 = 0 0 1 1 0 0.
The Booth-recoded multiplier is 0 +1 0 -1 0 0, which bit-pair recodes to +1 -1 0 (digits of weight 16, 4 and 1).
1 1 0 0 1 0   X   +1 -1 0
-----------------------------------
0 0 0 0 0 0 0 0 0 0 0 0      ( 0 × M)
0 0 0 0 0 0 1 1 1 0          (-1 × M, shifted two positions)
1 1 1 1 0 0 1 0              (+1 × M, shifted four positions)
-----------------------------------
1 1 1 1 0 1 0 1 1 0 0 0  (-168)
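A minimal Python sketch of the same computation, assuming an even word length n; each pair of Booth digits (d2k+1, d2k) is collapsed into the single digit 2·d2k+1 + d2k of weight 4^k. The function name is illustrative only.

def multiply_bitpair(m, q, n):
    # Multiply n-bit 2's-complement numbers m and q via bit-pair recoding:
    # at most one summand (0, +/-M or +/-2M) per pair of multiplier bits.
    assert n % 2 == 0
    b = [0] + [(q >> i) & 1 for i in range(n)]      # b[0] is the implied 0 right of the LSB
    booth = [b[i] - b[i + 1] for i in range(n)]     # Booth digit at weight 2^i
    pairs = [2 * booth[i + 1] + booth[i] for i in range(0, n, 2)]   # digits at weight 4^k
    return sum(d * m * 4 ** k for k, d in enumerate(pairs))

print(multiply_bitpair(-14, 12, 6))   # -168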
Carry-Save Addition:
• The multiplication example above can also be performed using carry-save addition.
• A carry-save adder adds three summands at a time and produces a sum word and a carry word without propagating carries between bit positions.
• The summands of a multiplication can therefore be reduced in a schematic tree of carry-save addition operations, with only one final carry-propagate addition needed to produce the product.
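A minimal Python sketch of the carry-save idea: three summands are reduced to a sum word and a carry word with no carry propagation, and only the final pair goes through an ordinary carry-propagate addition. The helper names are illustrative only.

def carry_save_add(x, y, z):
    # Reduce three summands to a (sum, carry) pair with no carry propagation:
    # each bit position acts as an independent full adder.
    s = x ^ y ^ z                               # bitwise sum
    c = ((x & y) | (y & z) | (x & z)) << 1      # bitwise carry, shifted to its weight
    return s, c

def add_many(values):
    # Add a list of summands: carry-save reduction first, one carry-propagate add at the end.
    while len(values) > 2:
        x, y, z, *rest = values
        values = list(carry_save_add(x, y, z)) + rest
    return sum(values)                           # final carry-propagate addition

print(add_many([45, 30, 14, 12, 7]))   # 108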
Integer Division:
Manual Division:
In longhand division, the divisor is compared with successive partial remainders formed from the dividend bits; a 1 is entered in the quotient and the divisor is subtracted whenever it is not larger than the current partial remainder, otherwise a 0 is entered and the next dividend bit is brought down.
Longhand division examples:
Restoring Division:
A circuit that implements longhand division uses an (n+1)-bit register A (initially 0), an n-bit register Q holding the dividend, and a register M holding the divisor. Do the following three steps n times:
1. Shift A and Q left one bit position.
2. Subtract M from A, and place the answer back in A.
3. If the sign of A is 1, set q0 to 0 and add M back to A (that is, restore A); otherwise, set q0 to 1.
Non-Restoring Division:
The restoring division algorithm can be improved by avoiding the need for restoring A after an
unsuccessful subtraction. Subtraction is said to be unsuccessful if the result is negative.
Consider the sequence of operations that takes place after the subtraction operation in the
preceding algorithm.
• If A is positive, we shift left and subtract M, that is, we perform 2A− M.
• If A is negative, we restore it by performing A+ M, and then we shift it left and subtract
M. This is equivalent to performing 2A+ M.
• The q0 bit is appropriately set to 0 or 1 after the correct operation has been performed.
• Summarizing this discussion, the following algorithm performs non-restoring division.
Algorithm to perform Non-Restoring division:
Stage 1:
Do the following two steps n times:
1. If the sign of A is 0, shift A and Q left one bit position and subtract M from A; otherwise,
shift A and Q left and add M to A.
2. Now, if the sign of A is 0, set q0 to 1; otherwise, set q0 to 0.
Stage 2:
If the sign of A is 1, add M to A.
Flow chart:
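A minimal Python sketch of the non-restoring algorithm above for n-bit unsigned operands; the register names A, Q, and M follow the text, and Python's unbounded integers stand in for the fixed-width registers.

def nonrestoring_divide(dividend, divisor, n):
    # Non-restoring division of an n-bit unsigned dividend by divisor.
    # Returns (quotient, remainder); A holds the running partial remainder.
    A, Q, M = 0, dividend, divisor
    for _ in range(n):
        msb_q = (Q >> (n - 1)) & 1
        if A >= 0:                    # sign of A is 0: shift left, subtract M
            A = ((A << 1) | msb_q) - M
        else:                         # sign of A is 1: shift left, add M
            A = ((A << 1) | msb_q) + M
        Q = ((Q << 1) & ((1 << n) - 1)) | (1 if A >= 0 else 0)   # set q0
    if A < 0:                         # Stage 2: final correction
        A += M
    return Q, A

print(nonrestoring_divide(8, 3, 4))   # (2, 2)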
Floating Point Numbers:
A floating-point number (or real number) can represent a very large value (1.23×10^88) or a very small value (1.23×10^-88). It can also represent a very large negative number (-1.23×10^88) and a very small negative number (-1.23×10^-88), as well as zero.
A floating-point number is typically expressed in scientific notation, with a fraction (M)
and an exponent (E) of a certain radix (r), in the form M×r^E. Decimal numbers use a
radix of 10 (M×10^E), while binary numbers use a radix of 2 (M×2^E).
The representation of a floating-point number is not unique. For example, the
number 55.66 can be represented as 5.566×10^1, 0.5566×10^2, 0.05566×10^3, and so
on. The fractional part can be normalized: in the normalized form, there is only a single
non-zero digit before the radix point. For example, the decimal number 123.4567 can be
normalized as 1.234567×10^2; the binary number 1010.1011B can be normalized
as 1.0101011B×2^3.
It is important to note that floating-point numbers suffer from loss of precision when
represented with a fixed number of bits (e.g., 32-bit or 64-bit). This is because there are
infinitely many real numbers (even within a small range, say 0.0 to 0.1). On the
other hand, an n-bit binary pattern can represent only a finite number (2^n) of distinct
values. Hence, not all real numbers can be represented; the nearest approximation is used
instead, resulting in a loss of accuracy.
It is also important to note that floating-point arithmetic is much less efficient than
integer arithmetic. It can be sped up with a dedicated floating-point co-processor.
Hence, use integers if your application does not require floating-point numbers.
In computers, floating-point numbers are represented in scientific notation of fraction (M)
and exponent (E) with a radix of 2, in the form of M×2^E. Both E and M can be positive as
well as negative.
Modern computers adopt IEEE 754 standard for representing floating-point
numbers. There are two representation schemes: 32-bit single-precision and 64-bit
double-precision.
32-bit Single Precision IEEE 754 standard format for Floating point representation:
The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for
negative numbers.
The following 8 bits represent the exponent (E).
The remaining 23 bits represent the fraction (M).
The value (N) is calculated as follows:
Normalized form: For 1 ≤ E ≤ 254, N = (-1)^S × 1.M × 2^(E-127).
Un-normalized form: For E = 0, N = (-1)^S × 0.M × 2^(-126).
For E = 255, N represents special values, such as ±INF (infinity) and NaN (not a number).
64-bit Double Precision IEEE 754 standard format for Floating point representation:
The representation scheme for 64-bit double-precision is similar to the 32-bit single-
precision:
The most significant bit is the sign bit (S), with 0 for positive numbers and 1 for
negative numbers.
The following 11 bits represent the exponent (E).
The remaining 52 bits represent the fraction (M).
The value (N) is calculated as follows:
Normalized form: For 1 ≤ E ≤ 2046, N = (-1)^S × 1.M × 2^(E-1023).
Un-normalized form: For E = 0, N = (-1)^S × 0.M × 2^(-1022).
For E = 2047, N represents special values, such as ±INF (infinity) and NaN (not a number).
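A minimal Python sketch that unpacks the three fields of a 64-bit double (using the standard-library struct module) and re-evaluates the formulas above; the function name decode_double is illustrative only.

import struct

def decode_double(x):
    # Split a Python float into its IEEE 754 double-precision fields and
    # re-evaluate N = (-1)^S * 1.M * 2^(E-1023), or the E = 0 / E = 2047 cases.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    S = bits >> 63
    E = (bits >> 52) & 0x7FF
    M = bits & ((1 << 52) - 1)
    if E == 2047:
        return 'NaN' if M else ('-INF' if S else '+INF')
    if E == 0:                                   # un-normalized (denormal) form
        value = (-1) ** S * (M / 2 ** 52) * 2 ** -1022
    else:                                        # normalized form, implied leading 1
        value = (-1) ** S * (1 + M / 2 ** 52) * 2 ** (E - 1023)
    return S, E, M, value

print(decode_double(-6.5))    # (1, 1025, 2814749767106560, -6.5)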
Add/Subtract Rule:
1. Choose the number with the smaller exponent and shift its mantissa right a number of steps equal
to the difference in exponents.
2. Set the exponent of the result equal to the larger exponent.
3. Perform addition/subtraction on the mantissas and determine the sign of the result.
4. Normalize the resulting value, if necessary.
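A minimal Python sketch of these four steps on a toy (sign, exponent, significand) representation, where the significand is kept as an integer with 23 fraction bits; all names and the 24-bit significand width are illustrative assumptions, and rounding is ignored.

def fp_add(a, b):
    # Add two numbers given as (sign, exponent, significand) with value
    # (-1)^sign * significand * 2^exponent; the significand is a non-negative integer.
    (sa, ea, ma), (sb, eb, mb) = a, b
    # 1. Shift the mantissa of the number with the smaller exponent right by the
    #    difference in exponents.
    if ea < eb:
        (sa, ea, ma), (sb, eb, mb) = (sb, eb, mb), (sa, ea, ma)
    mb >>= (ea - eb)
    # 2. The result takes the larger exponent.
    e = ea
    # 3. Add/subtract the mantissas and determine the sign of the result.
    v = (-ma if sa else ma) + (-mb if sb else mb)
    s, m = (1, -v) if v < 0 else (0, v)
    # 4. Normalize: keep the significand within 24 bits.
    while m and m >= (1 << 24):
        m >>= 1; e += 1
    while m and m < (1 << 23):
        m <<= 1; e -= 1
    return s, e, m

# 1.5 + 0.25 = 1.75, with significands scaled by 2^23
x = (0, -23, 3 << 22)          # 1.5  = 12582912 * 2^-23
y = (0, -25, 1 << 23)          # 0.25 = 8388608 * 2^-25
print(fp_add(x, y))            # (0, -23, 14680064)  -> 14680064 * 2^-23 = 1.75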
Multiply Rule:
1. Add the exponents and subtract 127 to maintain the excess-127 representation.
2. Multiply the mantissas and determine the sign of the result.
3. Normalize the resulting value, if necessary.
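A minimal Python sketch of the multiply rule on packed 32-bit single-precision bit patterns, showing the excess-127 exponent adjustment; normalized non-zero operands are assumed, and rounding and special values are ignored.

def fp_multiply(a_bits, b_bits):
    # Multiply two IEEE 754 single-precision bit patterns (normalized operands only).
    def fields(w):
        return w >> 31, (w >> 23) & 0xFF, (w & 0x7FFFFF) | (1 << 23)   # S, E, 1.M as 24 bits
    sa, ea, ma = fields(a_bits)
    sb, eb, mb = fields(b_bits)
    s = sa ^ sb                        # sign of the result
    e = ea + eb - 127                  # add the exponents, subtract 127 (excess-127)
    m = (ma * mb) >> 23                # multiply the mantissas, keep 23 fraction bits
    if m >= (1 << 24):                 # product in [2, 4): normalize, bump the exponent
        m >>= 1
        e += 1
    return (s << 31) | (e << 23) | (m & 0x7FFFFF)

# 2.5 (0x40200000) * -3.0 (0xC0400000) = -7.5 (0xC0F00000)
print(hex(fp_multiply(0x40200000, 0xC0400000)))   # 0xc0f00000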
Divide Rule:
1. Subtract the exponents and add 127 to maintain the excess-127 representation.
2. Divide the mantissas and determine the sign of the result.
3. Normalize the resulting value, if necessary.