COA Unit 5: Arithmetic
Computer Arithmetic
In digital computers, data is manipulated by arithmetic instructions to produce the results needed to solve computational problems. Addition, subtraction, multiplication, and division are the four basic arithmetic operations; other operations can be derived from these four.
The central processing unit contains a separate section, the arithmetic processing unit, that executes arithmetic operations. Arithmetic instructions generally operate on binary or decimal data. Fixed-point numbers are used to represent integers or fractions, and they may be signed or unsigned. Fixed-point addition is the simplest arithmetic operation.
A problem is solved by following a sequence of well-defined steps, collectively called an algorithm. Algorithms are given here for the various arithmetic operations.
Arithmetic instructions in digital computers manipulate data in order to solve computational problems, and they play a central role in processing data. As stated above, from the four basic arithmetic operations of addition, subtraction, multiplication, and division, it is possible to derive other arithmetic operations and to solve scientific problems by means of numerical analysis methods.
A processor has an arithmetic processor (as a sub-part of it) that executes arithmetic operations. The data type is assumed to reside in processor registers during the execution of an arithmetic instruction. Negative numbers may be in a signed-magnitude or signed-complement representation. There are three ways of representing negative fixed-point binary numbers: signed-magnitude, signed 1's complement, or signed 2's complement. Most computers use the signed-magnitude representation for the mantissa.
6.1 Addition and Subtraction with Signed-Magnitude Data
We designate the magnitudes of the two numbers by A and B. When the signed numbers are added or subtracted, we find that there are eight different conditions to consider, depending on the signs of the numbers and the operation performed. These conditions are listed in the first column of the table. The other columns show the actual operation to be performed with the magnitudes of the numbers. The last column is needed to prevent a negative zero. In other words, when two equal numbers are subtracted, the result should be +0, not -0.
The algorithms for addition and subtraction are derived from the table and can be stated as follows (the words in parentheses should be used for the subtraction algorithm).
Table: Addition and Subtraction of Signed-Magnitude Numbers
Operation        Add Magnitudes    Subtract Magnitudes
                                   When A > B    When A < B    When A = B
(+A) + (+B)      +(A + B)
(+A) + (–B)                        +(A – B)      –(B – A)      +(A – B)
(–A) + (+B)                        –(A – B)      +(B – A)      +(A – B)
(–A) + (–B)      –(A + B)
(+A) – (+B)                        +(A – B)      –(B – A)      +(A – B)
(+A) – (–B)      +(A + B)
(–A) – (+B)      –(A + B)
(–A) – (–B)                        –(A – B)      +(B – A)      +(A – B)
When the signs of A and B are the same, add the two magnitudes and attach the sign of A to the result. When the signs of A and B differ, compare the magnitudes and subtract the smaller number from the larger. Choose the sign of the result to be the same as A if A > B, or the complement of the sign of A if A < B. If the two magnitudes are equal, subtract B from A and make the sign of the result positive.
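As a concrete illustration of this rule, the following Python sketch models the decision procedure in software (the function name and the (sign, magnitude) convention, with 0 for plus and 1 for minus, are assumptions for illustration; this is not the hardware algorithm itself):

    def add_signed_magnitude(a_sign, a_mag, b_sign, b_mag, subtract=False):
        """Add or subtract two signed-magnitude numbers.
        Signs: 0 = plus, 1 = minus. Returns (sign, magnitude)."""
        if subtract:                      # subtraction = addition of -B
            b_sign ^= 1                   # complement the sign of B
        if a_sign == b_sign:              # signs alike: add magnitudes, keep sign of A
            return a_sign, a_mag + b_mag
        # signs differ: subtract the smaller magnitude from the larger
        if a_mag > b_mag:
            return a_sign, a_mag - b_mag  # sign of A
        if a_mag < b_mag:
            return b_sign, b_mag - a_mag  # complement of the sign of A
        return 0, 0                       # equal magnitudes: result is +0

    # Example: (+A) - (+B) with A = 5, B = 9 gives -(B - A) = -4
    print(add_signed_magnitude(0, 5, 0, 9, subtract=True))   # (1, 4)

Note that subtraction is handled by complementing the sign of B and reusing the addition rule, which is exactly what the parenthesized entries in the table express.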
Figure: Flowchart for addition and subtraction of signed-magnitude numbers
Example: shift-and-add multiplication of 10111 (multiplicand, 23) by 10011 (multiplier, 19); the partial products are listed with the least significant multiplier bit first.

        10111      multiplicand
      x 10011      multiplier
        10111
       10111
      00000
     00000
    10111
    ---------
    110110101      product (437)
This process looks at successive bits of the multiplier, least significant bit first. If the multiplier bit is 1, the multiplicand is copied down; otherwise, zeros are copied down. Each number copied down is shifted one position to the left relative to the previous number. Finally, the numbers are added and their sum gives the product.
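The same process can be sketched in Python as a loop over the multiplier bits (an illustrative model of the paper-and-pencil method, not of the register-level hardware; the function name is an assumption):

    def shift_add_multiply(multiplicand, multiplier):
        """Unsigned binary multiplication by successive shift and add."""
        product = 0
        position = 0
        while multiplier:
            if multiplier & 1:                    # multiplier bit is 1: copy the multiplicand
                product += multiplicand << position
            # multiplier bit is 0: copy zeros (add nothing)
            multiplier >>= 1                      # examine the next bit
            position += 1                         # next partial product is shifted one place left
        return product

    print(bin(shift_add_multiply(0b10111, 0b10011)))   # 0b110110101  (23 * 19 = 437)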
A string of 1's in the multiplier extending from bit position m up to bit position k can be replaced by 2^(k+1) - 2^m. For example, the binary number 001110 (+14) has a string of 1's from 2^3 to 2^1 (k = 3, m = 1), so the number can be represented as 2^(k+1) - 2^m = 2^4 - 2^1 = 16 - 2 = 14. Therefore, the multiplication M x 14, where M is the multiplicand and 14 the multiplier, may be computed as M x 2^4 - M x 2^1. That is, the product can be obtained by shifting the binary multiplicand M four times to the left and subtracting M shifted left once.
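The identity can be checked directly, for instance with the following small Python snippet (the value of M is arbitrary and chosen only for illustration):

    M = 23                                   # any multiplicand
    assert M * 14 == (M << 4) - (M << 1)     # M x 14 = M x 2^4 - M x 2^1
    print((M << 4) - (M << 1))               # 322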
Booth's algorithm requires examination of the multiplier bits and shifting of the partial product. Prior to the shifting, the multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged, according to the following rules:
1. The multiplicand is subtracted from the partial product when we get the first least significant 1
in a string of 1's in the multiplier.
2. The multiplicand is added to the partial product when we get the first 0 (provided that there
was a previous 1) in a string of 0's in the multiplier.
3. The partial product does not change when the multiplier bit is the same as the previous
multiplier bit.
The algorithm applies to both positive and negative multipliers in 2's complement representation. This is because a negative multiplier ends with a string of 1's and the last operation will be a subtraction of the appropriate weight. For example, a multiplier equal to -14 is represented in 2's complement as 110010 and is treated as -2^4 + 2^2 - 2^1 = -14.
A numerical example of Booth's algorithm is given in the table for n = 5. It shows the multiplication (-9) x (-13) = +117.
Table: Example of Multiplication with Booth Algorithm
BR = 10111 (multiplicand, -9); QR = 10011 (multiplier, -13)
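A minimal Python model of Booth's algorithm reproduces this example. It is an illustration rather than the textbook's register-transfer description; the register names BR, AC, QR and the extra bit Qn1 mirror the usual hardware registers, and the function name and bit-masking details are assumptions:

    def booth_multiply(multiplicand, multiplier, n):
        """Booth's algorithm for n-bit 2's-complement operands."""
        mask = (1 << n) - 1
        BR = multiplicand & mask              # BR holds the multiplicand
        AC, QR, Qn1 = 0, multiplier & mask, 0
        for _ in range(n):                    # SC counts n cycles
            q0 = QR & 1
            if q0 == 1 and Qn1 == 0:          # first 1 in a string of 1's: subtract BR
                AC = (AC - BR) & mask
            elif q0 == 0 and Qn1 == 1:        # first 0 after a string of 1's: add BR
                AC = (AC + BR) & mask
            # arithmetic shift right of AC, QR, Qn1 as one unit
            Qn1 = q0
            QR = ((QR >> 1) | ((AC & 1) << (n - 1))) & mask
            AC = (AC >> 1) | (AC & (1 << (n - 1)))      # replicate the sign bit
        result = (AC << n) | QR               # 2n-bit product in AC and QR
        if result & (1 << (2 * n - 1)):       # interpret the product as signed
            result -= 1 << (2 * n)
        return result

    print(booth_multiply(-9, -13, 5))         # 117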
Array Multiplier
Checking the bits of the multiplier one at a time and forming partial products is a sequential operation that requires a sequence of add and shift micro-operations. The multiplication of two binary numbers can instead be done with one micro-operation by using a combinational circuit that forms the product bits all at once.
This is a fast way of multiplying two numbers, since all it takes is the time for the signals to propagate through the gates that form the multiplication array. However, an array multiplier requires a large number of gates, and for this reason it was not economical until the development of integrated circuits.
Figure: 4-bit by 3-bit array multiplier (product bits c6 c5 c4 c3 c2 c1 c0)
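The structure of the array can be sketched in Python as rows of AND terms summed with the appropriate shifts; this is an illustrative software model of the combinational circuit, and the bit-list representation and function name are assumptions:

    def array_multiply(a_bits, b_bits):
        """Combinational-style multiply of a 4-bit number a by a 3-bit number b.
        Bit lists are least significant bit first; returns [c0, c1, ..., c6]."""
        width = len(a_bits) + len(b_bits)
        total = 0
        for j, bj in enumerate(b_bits):        # one row of AND gates per multiplier bit
            row = sum((ai & bj) << i for i, ai in enumerate(a_bits))   # partial products a_i AND b_j
            total += row << j                  # row j enters the adder array shifted j places
        return [(total >> k) & 1 for k in range(width)]

    # a = 1011 (11), b = 101 (5): product 55 = 0110111, returned least significant bit first
    print(array_multiply([1, 1, 0, 1], [1, 0, 1]))   # [1, 1, 1, 0, 1, 1, 0]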
If E = 1, it signifies that A >= B: a quotient bit 1 is inserted into Qn and the partial remainder is shifted to the left to repeat the process. If E = 0, it signifies that A < B, so the quotient bit in Qn remains a 0 (inserted during the shift). The value of B is then added back to restore the partial remainder in A to its previous value. The partial remainder is shifted to the left and the process is repeated until all five quotient bits are formed. Note that while the partial remainder is shifted left, the quotient bits are shifted as well; after five shifts, the quotient is in Q and the final remainder is in A. Before showing the algorithm in flowchart form, we have to consider the sign of the result and a possible overflow condition. The sign of the quotient is determined from the signs of the dividend and the divisor. If the two signs are the same, the sign of the quotient is plus; if they are not, it is minus. The sign of the remainder is the same as that of the dividend.
form the new partial product. The sequence counter is decremented by 1 and its new value checked. If it is not equal to zero, the process is repeated and a new partial product is formed. When SC = 0, the process stops.
The hardware divide algorithm is given in the figure. A and Q contain the dividend and B holds the divisor. The sign of the result is transferred into Qs to become part of the quotient. A constant is set into the sequence counter SC to specify the number of bits in the quotient. As in multiplication, we assume that the operands are transferred to registers from a memory unit that has words of n bits. Since an operand must be stored with its sign, one bit of the word is occupied by the sign and the magnitude has n - 1 bits.
An overflow may occur in the division operation, which may be easy to handle if we are
using paper and pencil but is not easy when using hardware. This is because the length of registers
is finite and will not hold a number that exceeds the standard length. To see this, let us consider a
system that has 5-bit registers. We use one register to hold the divisor and two registers to hold the
dividend. From the example of Figure, the quotient will consist of six bits if the five most
significant bits of the dividend constitute a number greater than the divisor. The quotient is to be
stored in a standard 5-bit register, so the overflow bit will require one more flip-flop for storing the
sixth bit. This divide-overflow condition must be avoided in normal computer operations because
the entire quotient will be too long for transfer into a memory unit that has words of standard
length, that is, the same as the length of registers. Provisions to ensure that this condition is detected
must be included in either the hardware or the software of the computer, or in a combination of the
two.
When the dividend is twice as long as the divisor, we can understand the condition for
overflow as follows:
A divide-overflow occurs if the high-order half of the dividend constitutes a number greater than or equal to the divisor. Another problem associated with division is that a division by zero must be avoided. The divide-overflow condition takes care of this case as well, because any dividend is greater than or equal to a divisor that is equal to zero. The overflow condition is usually detected when a special flip-flop is set. We will call it a divide-overflow flip-flop and label it DVF.
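Putting the division procedure and the overflow check together, a compact software model might look like the following (illustrative Python; the function name, the test values, and the convention of testing A >= B before subtracting, instead of subtracting and then restoring, are assumptions):

    def restoring_divide(dividend, divisor, n=5):
        """Divide an unsigned 2n-bit dividend by an n-bit divisor.
        Returns (DVF, quotient, remainder); DVF = 1 signals a divide overflow."""
        if divisor == 0 or (dividend >> n) >= divisor:
            return 1, 0, 0                     # DVF set: the quotient would not fit in n bits
        A = dividend >> n                      # high half of the dividend
        Q = dividend & ((1 << n) - 1)          # low half of the dividend
        for _ in range(n):                     # SC = n iterations
            # shift A and Q left as one combined register
            A = (A << 1) | (Q >> (n - 1))
            Q = (Q << 1) & ((1 << n) - 1)
            if A >= divisor:                   # E = 1: subtract the divisor, quotient bit 1
                A -= divisor
                Q |= 1
            # else E = 0: quotient bit stays 0 (hardware would add B back to restore A)
        return 0, Q, A                         # quotient in Q, final remainder in A

    print(restoring_divide(0b0111000000, 0b10001))   # (0, 0b11010, 0b00110): 448 / 17 = 26 r 6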
Assuming fraction representation for the mantissa and taking the two sign bits into consideration, the range of numbers that can be represented is +-(1 - 2^-35) x 2^2047.
This number is derived from a fraction that contains 35 1's and an exponent of 11 bits (excluding its sign), since 2^11 - 1 = 2047. The largest number that can be accommodated is approximately 10^615. The mantissa can accommodate 35 bits (excluding the sign), and if considered as an integer it can store a number as large as (2^35 - 1). This is approximately equal to 10^10, which is equivalent to a decimal number of 10 digits.
Computers with shorter word lengths use two or more words to represent a floating-point
number. An 8-bit microcomputer uses four words to represent one floating-point number. One word of 8 bits is reserved for the exponent, and the 24 bits of the other three words are used for the mantissa.
Arithmetic operations with floating-point numbers are more complicated than with fixed-point numbers. Their execution also takes longer and requires more complex hardware.
Adding or subtracting two numbers requires first an alignment of the radix point since the exponent
parts must be made equal before adding or subtracting the mantissas. We do this alignment by
shifting one mantissa while its exponent is adjusted until it becomes equal to the other exponent.
Consider the sum of the following floating-point numbers:
.5372400 x 10^2 + .1580000 x 10^-1
It is necessary to make the two exponents equal before the mantissas can be added. We can
either shift the first number three positions to the left, or shift the second number three positions to
the right. When we store the mantissas in registers, shifting to the left causes a loss of most
significant digits. Shifting to the right causes a loss of least significant digits. The second method is
preferable because it only reduces the accuracy, while the first method may cause an error. The
usual alignment procedure is to shift the mantissa that has the smaller exponent to the right by a
number of places equal to the difference between the exponents. Now, the mantissas can be added.
.5372400 x 10^2 + .0001580 x 10^2 = .5373980 x 10^2
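A short Python sketch of this alignment step, using decimal (mantissa, exponent) pairs rather than binary registers, is shown below; the tuple representation and the function name are assumptions for illustration:

    def float_add(m1, e1, m2, e2):
        """Add two decimal floating-point numbers given as (mantissa, exponent).
        The mantissa with the smaller exponent is shifted right (divided by 10)
        until the exponents are equal; then the mantissas are added."""
        while e1 < e2:
            m1 /= 10.0
            e1 += 1
        while e2 < e1:
            m2 /= 10.0
            e2 += 1
        return m1 + m2, e1

    print(float_add(0.5372400, 2, 0.1580000, -1))   # approximately (0.537398, 2)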
When two normalized mantissas are added, the sum may contain an overflow digit. An
overflow can be corrected easily by shifting the sum once to the right and incrementing the
exponent. When two numbers are subtracted, the result may contain most significant zeros as
shown in the following example:
.56780 x 10^5 - .56430 x 10^5 = .00350 x 10^5
An underflow occurs when a floating-point number has a 0 in the most significant position of the mantissa. To normalize a number that contains an underflow, we shift the mantissa to the left and decrement the exponent until a nonzero digit appears in the first position. In the example above, it is necessary to shift left twice to obtain .35000 x 10^3. In most computers a normalization procedure is performed after each operation to ensure that all results are in normalized form.
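The normalization step itself can be modeled as repeated left shifts of the mantissa with matching decrements of the exponent (again an illustrative decimal sketch, not the hardware procedure):

    def normalize(mantissa, exponent):
        """Shift the mantissa left (multiply by 10) and decrement the exponent
        until the most significant digit of the fraction is nonzero."""
        if mantissa == 0:
            return 0.0, exponent            # zero cannot be normalized
        while abs(mantissa) < 0.1:          # leading digit of the fraction is 0
            mantissa *= 10.0
            exponent -= 1
        return mantissa, exponent

    print(normalize(0.00350, 5))            # approximately (0.35, 3), i.e. .35000 x 10^3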
Floating-point multiplication and division do not require an alignment of the mantissas. The product is formed by multiplying the two mantissas and adding the exponents; division is performed by dividing the mantissas and subtracting the exponents.
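In the same decimal (mantissa, exponent) model, multiplication and division reduce to one mantissa operation and one exponent operation (illustrative sketch; the results are not normalized here):

    def float_multiply(m1, e1, m2, e2):
        return m1 * m2, e1 + e2             # multiply mantissas, add exponents

    def float_divide(m1, e1, m2, e2):
        return m1 / m2, e1 - e2             # divide mantissas, subtract exponents

    print(float_multiply(0.5, 3, 0.4, 2))   # (0.2, 5)  = .5 x 10^3 times .4 x 10^2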
The operations done with the mantissas are the same as in fixed-point numbers, so the two can share the same registers and circuits. The operations performed with the exponents are compare and increment (for aligning the mantissas), add and subtract (for multiplication and division), and decrement (to normalize the result). The exponent may be represented in any one of the three representations: signed-magnitude, signed 2's complement, or signed 1's complement.
Biased exponents have the advantage that they contain only positive numbers. Now it
becomes simpler to compare their relative magnitude without bothering about their signs. Another
advantage is that the smallest possible biased exponent contains all zeros. The floating-point
representation of zero is then a zero mantissa and the smallest possible exponent.
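For example, with an 8-bit exponent field and an assumed bias of 128 (the field width and bias value here are illustrative, not from the text), exponents from -128 to +127 are stored as 0 to 255, so an ordinary unsigned comparison orders them correctly and the smallest exponent is stored as all zeros:

    BIAS = 128                               # assumed bias for an 8-bit exponent field

    def to_biased(exponent):
        return exponent + BIAS               # stored form is always non-negative

    # -3 < +2, and the biased forms compare the same way as plain unsigned numbers
    print(to_biased(-3) < to_biased(2))      # True (125 < 130)
    print(to_biased(-128))                   # 0: the smallest exponent is all zeros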
Register Configuration
The register configuration for floating-point operations is shown in figure. As a rule, the
same registers and adder used for fixed-point arithmetic are used for processing the mantissas. The
difference lies in the way the exponents are handled.
The register organization for floating-point operations is shown in the figure. There are three registers: BR, AC, and QR. Each register is subdivided into two parts. The mantissa part has the same uppercase-letter symbols as in fixed-point representation; the exponent part uses the corresponding lowercase-letter symbol.