Unit 4
1. Introduction:
• Arithmetic instructions in computers manipulate data to solve computational problems.
• These instructions perform basic arithmetic operations (addition, subtraction, multiplication, division) crucial for data processing.
• From these basic operations, more complex calculations and scientific problem-solving are achieved using numerical analysis
methods.
2. Arithmetic Processor:
• The arithmetic processor is a dedicated unit within the CPU that executes arithmetic instructions.
• The data type (binary/decimal, fixed-point/floating-point) used in calculations is specified in the instruction itself.
3. Data Representation:
• Fixed-point numbers can represent integers or fractions.
• Negative numbers can be represented in signed-magnitude or signed-complement form.
• The complexity of the arithmetic processor depends on the supported operations and data representations.
4. Signed-Magnitude vs. Signed-Complement:
• We learn basic arithmetic operations using signed-magnitude representation (positive/negative sign and magnitude).
• Understanding these operations is crucial for hardware implementation.
• Arithmetic with signed-magnitude data requires more complex algorithms and circuitry than arithmetic with signed-complement data, which is why most machines adopt signed-2's complement internally.
5. Algorithms and Flowcharts:
• An algorithm is a set of well-defined steps to solve a problem.
• Section 3-3 (not shown) presented an algorithm for adding fixed-point binary numbers in signed-2's complement (which requires a
simple parallel binary adder for implementation).
• Flowcharts visually represent algorithms using rectangles for computational steps and diamonds for decision points with branching
paths.
6. Focus of this Chapter:
• This chapter will explore various arithmetic algorithms for different data types:
1. Fixed-point binary (signed-magnitude and signed-2's complement)
2. Floating-point binary
3. Binary-coded decimal (BCD)
• The chapter will explain how to implement these algorithms using digital hardware circuits.
Summary:
Computer arithmetic is essential for data manipulation and problem-solving. This chapter dives deep into algorithms for performing
arithmetic operations on various data representations, along with their hardware implementation using digital circuits.
2. Addition and Subtraction with Signed-Magnitude Data
1. Introduction:
• There are three ways to represent negative fixed-point binary numbers: signed-magnitude, signed-1's complement, and signed-2's
complement (introduced in Section 3-3, not shown).
• Most computers use signed-2's complement for integer arithmetic and signed-magnitude for the mantissa in floating-point
operations.
2. Focus of this Section:
• This section details addition and subtraction algorithms for signed-magnitude data.
• It's important to distinguish between the data representation used before and after the operation (signed-magnitude) and any
intermediate calculations using complements.
3. Signed-Magnitude Representation:
• Signed-magnitude is familiar because it resembles manual arithmetic (positive/negative sign and magnitude).
4. Deriving the Algorithms:
• We consider eight different conditions based on signs and operations (addition/subtraction) as shown in Table 10-1 (not shown).
• The table lists the operation to perform on the magnitudes and how the final sign is determined, ensuring that subtracting two equal numbers yields +0 rather than -0.
5. Addition and Subtraction Algorithms:
• Addition Algorithm:
o If the signs of A and B are the same, add the magnitudes and keep the sign of A.
o If the signs differ, compare the magnitudes:
▪ Subtract the smaller magnitude from the larger; the result takes the sign of A if A > B, or the complement of A's sign if A < B.
▪ If the magnitudes are equal, subtract B from A and make the sign positive (+0).
• Subtraction Algorithm (identical except that the roles of "same" and "different" signs are interchanged):
o If the signs differ, add the magnitudes and keep the sign of A.
o If the signs are the same, compare the magnitudes:
▪ Subtract the smaller magnitude from the larger; the result takes the sign of A if A > B, or the complement of A's sign if A < B.
▪ If the magnitudes are equal, subtract B from A and make the sign positive (+0).
6. Summary:
The algorithms for signed-magnitude addition and subtraction are similar with a key difference in sign handling based on operand
signs and the operation being performed.
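As a quick illustration of these rules, here is a minimal Python sketch (the function and variable names are my own, not from the text; signs are encoded as 0 for + and 1 for -):

    def sm_add_sub(sign_a, mag_a, sign_b, mag_b, subtract=False):
        """Signed-magnitude add/subtract on non-negative magnitudes."""
        if subtract:                   # A - B is A + (-B): complement the sign of B
            sign_b ^= 1
        if sign_a == sign_b:           # like signs: add magnitudes, keep the sign of A
            return sign_a, mag_a + mag_b
        if mag_a > mag_b:              # unlike signs: subtract smaller from larger
            return sign_a, mag_a - mag_b        # result takes the sign of A
        if mag_a < mag_b:
            return sign_a ^ 1, mag_b - mag_a    # result takes the complemented sign of A
        return 0, 0                    # equal magnitudes: force +0, never -0

    # (+5) + (-9) = -4   and   (+7) - (+7) = +0
    print(sm_add_sub(0, 5, 1, 9))                   # (1, 4)
    print(sm_add_sub(0, 7, 0, 7, subtract=True))    # (0, 0)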
3. Hardware Implementation
Hardware Implementation of Signed-Magnitude Addition/Subtraction (for Exams)
1. Introduction:
• This section explains how to implement signed-magnitude addition and subtraction using hardware circuits.
2. Hardware Setup:
• Registers:
o A and B: Store the magnitudes of the operands.
o A_s and B_s: Flip-flops to hold the corresponding signs.
• Accumulator register: A and A_s combined, can hold the result.
3. Traditional Algorithm (More Hardware):
• Requires:
o Parallel adder for A + B.
o Comparator to determine A > B, A = B, or A < B.
o Two parallel subtractors for A - B and B - A.
• Uses an exclusive-OR gate with A_s and B_s for sign determination.
4. Efficient Algorithm (Less Hardware):
• Leverages the fact that subtraction can be done using complement and add.
• Compares magnitudes by checking the carry bit after subtraction.
• Uses 2's complement for subtraction and comparison, requiring only:
o An adder
o A complementer
5. Hardware Block Diagram (Figure 10-1):
• Registers A, B, sign flip-flops A_s, B_s.
• Subtraction done by adding A to the 2's complement of B.
• Carry output (C) goes to flip-flop E to determine relative magnitudes.
• Add-overflow flip-flop (AVF) holds overflow bit during addition.
• The A register also supports the additional micro-operations (for example, complement and increment) needed by the algorithm.
6. Circuit Details:
• Parallel adder adds A and B, output goes to A register (sum).
• Complementer provides B or its complement based on the mode control (M).
• M also controls the adder's input carry:
o M = 0: Normal addition (A + B).
o M = 1: Add A with 1's complement of B and carry-in of 1 (equivalent to A - B).
Summary:
This section explores two hardware implementations for signed-magnitude addition/subtraction. The efficient approach uses an
adder, a complementer, and leverages 2's complement for subtraction and comparison, reducing hardware complexity.
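To make the mode control concrete, here is a hedged Python model of the adder/complementer path (the function name and the 8-bit width are assumptions, not taken from Figure 10-1):

    def add_with_mode(a, b, m, n=8):
        """Parallel adder with a complementer on the B input.
        m = 0: A + B (normal addition); m = 1: A + B' + 1, i.e. A minus B."""
        mask = (1 << n) - 1
        b_in = (b ^ mask) if m else b    # complementer passes B or its 1's complement
        total = a + b_in + m             # M also feeds the adder's input carry
        return (total >> n) & 1, total & mask   # (carry out, n-bit result)

    # With m = 1 the end carry is 1 exactly when A >= B, so it doubles as the comparator.
    print(add_with_mode(13, 9, 1))   # (1, 4)   -> A >= B, magnitude difference 4
    print(add_with_mode(9, 13, 1))   # (0, 252) -> A < B, result is the 2's complement of 4

The end carry (held in E) replaces the separate comparator and subtractors of the traditional design, which is exactly the hardware saving described above.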
4. Hardware Algorithm
Hardware Algorithm for Signed-Magnitude Addition/Subtraction (for Exams)
This section explains the hardware algorithm using a flowchart. It details sign comparison, addition/subtraction based on signs,
overflow handling, magnitude comparison, sign correction, and final result retrieval.
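A hedged Python sketch of this flowchart, assuming the datapath of Figure 10-1 (the helper name and the 8-bit width are my own choices):

    def signed_magnitude_add_sub(a_s, a, b_s, b, subtract, n=8):
        """Returns (A_s, A, AVF) following the flowchart's decision structure."""
        mask = (1 << n) - 1
        if subtract:
            b_s ^= 1                       # subtraction: complement the sign of B
        if a_s == b_s:                     # signs alike: add the magnitudes
            total = a + b
            return a_s, total & mask, (total >> n) & 1   # carry out sets AVF
        total = a + (b ^ mask) + 1         # signs differ: A <- A + B' + 1
        e, a = (total >> n) & 1, total & mask
        if e == 1:                         # A >= B: magnitude already correct
            if a == 0:
                a_s = 0                    # force +0, never -0
        else:                              # A < B: correct magnitude and sign
            a = (-a) & mask                # A <- 2's complement of A
            a_s ^= 1
        return a_s, a, 0

    print(signed_magnitude_add_sub(0, 9, 0, 13, subtract=True))   # (1, 4, 0): (+9) - (+13) = -4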
5. Addition and Subtraction with Signed-2's Complement Data
Signed-2's Complement Addition and Subtraction Explained for Exams
1. Introduction:
• This section reviews signed-2's complement representation (introduced in Sec. 3-3, not shown) for numbers and the corresponding
addition/subtraction algorithms.
2. Signed-2's Complement Representation:
• Leftmost bit: sign bit (0 for positive, 1 for negative).
• Example:
o +33: 00100001
o -33 (2's complement of +33): 11011111
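A quick Python check of this encoding (an 8-bit word is assumed):

    def to_twos_complement(value, bits=8):
        """Return the bits-wide signed-2's complement pattern of value as a string."""
        return format(value & ((1 << bits) - 1), f'0{bits}b')

    print(to_twos_complement(+33))   # 00100001
    print(to_twos_complement(-33))   # 11011111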
3. Addition:
• Add the binary numbers, including sign bits.
• Discard any carry-out from the sign bit position.
4. Subtraction:
• Take the 2's complement of the subtrahend.
• Add the 2's complement to the minuend.
5. Overflow Detection:
• Overflow occurs when the sum of two n-bit numbers requires n+1 bits.
• Detected by checking the last two carries from the addition:
o Overflow if the exclusive-OR of these carries is 1.
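These rules can be checked with a short Python sketch (8-bit patterns assumed; the function names are illustrative):

    def add_2c(x, y, bits=8):
        """Add two signed-2's complement bit patterns; returns (result, V)."""
        mask = (1 << bits) - 1
        x, y = x & mask, y & mask
        total = x + y
        result = total & mask                    # carry out of the sign bit is discarded
        carry_out = (total >> bits) & 1          # carry out of the sign position
        carry_in = ((x ^ y ^ result) >> (bits - 1)) & 1   # carry into the sign position
        return result, carry_in ^ carry_out      # V = XOR of the last two carries

    def sub_2c(x, y, bits=8):
        """Subtract by adding the 2's complement of the subtrahend."""
        return add_2c(x, (-y) & ((1 << bits) - 1), bits)

    print(add_2c(0b00100001, 0b11011111))   # (0, 0):   (+33) + (-33) = +0, no overflow
    print(add_2c(70, 80))                   # (150, 1): overflow, the 8-bit result is erroneous
    print(sub_2c(33, 100))                  # (189, 0): 189 is the 8-bit pattern for -67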
6. Hardware Implementation (Figure 10-3):
• Similar to signed-magnitude (Figure 10-1) but without separate sign bits.
• A register renamed to AC (accumulator), B renamed to BR.
• Sign bits included in addition/subtraction with complementer and adder.
• Overflow flip-flop (V) set for overflow (output carry discarded).
7. Algorithm Flowchart (Figure 10-4):
• Add AC and BR contents (including signs).
• V set to 1 if last two carries result in overflow (exclusive-OR), cleared to 0 otherwise.
• For subtraction, add AC to the 2's complement of BR (taking the 2's complement negates BR).
• Overflow check required (erroneous result in AC if overflow occurs).
8. Conclusion:
• Signed-2's complement addition/subtraction is simpler than signed-magnitude.
• This is why most computers use signed-2's complement for negative numbers.
6. Multiplication Algorithms
Signed-Magnitude Multiplication (for Exams)
Multiplication of signed-magnitude numbers is done by successive shift and add operations, as in the familiar paper-and-pencil method: each multiplier bit is examined in turn; when it is 1, the multiplicand (shifted to line up with that bit) is copied down, and the partial products are added. Example:
• Multiplicand (23): 10111
• Multiplier (19): 10011
• Product (437): 110110101
Summary:
Signed-magnitude multiplication uses hardware registers and a loop to perform successive shifts and additions of partial products to obtain the final result. The hardware implementation is optimized compared to the manual method.
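A small Python check of the shift-and-add idea (pencil-method style, magnitudes only; purely illustrative):

    def multiply_shift_add(multiplicand, multiplier):
        """Multiply unsigned magnitudes by examining multiplier bits from the LSB up."""
        product, shift = 0, 0
        while multiplier:
            if multiplier & 1:                    # copy down the shifted multiplicand
                product += multiplicand << shift
            multiplier >>= 1
            shift += 1
        return product

    print(multiply_shift_add(0b10111, 0b10011))   # 437 = 23 * 19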
7. Hardware Algorithm
Signed-Magnitude Multiplication Hardware Algorithm (for Exams)
1. Introduction:
• This flowchart details the steps involved in the hardware multiplication process.
2. Initial Setup:
• Multiplicand (B) and multiplier (Q) loaded with their respective signs (B_s and Q_s).
• Signs compared to determine the product's sign.
• A is cleared; A and Q together will hold the double-length product.
• E (the carry flip-flop appended to A) is cleared.
• Sequence counter (SC) set to the number of bits in the multiplier.
3. Loop Iteration (Continues until SC = 0):
• Check the least significant bit (LSB) of the multiplier (Q_0).
o If 1: Add the multiplicand (B) to the current partial product in A.
o If 0: Do nothing.
• Shift the combined E, A, and Q registers right by one bit (shr EAQ):
o LSB of A goes to MSB of Q (shifting multiplier bits right).
o E bit goes to MSB of A.
o 0 is shifted into E.
• Decrement the sequence counter (SC) by 1.
• Check the new SC value.
o If not zero, repeat the loop (new partial product formation).
• Loop stops when SC reaches zero.
4. Final Product:
• The final double-length product is held in the A and Q registers together:
o A holds the most significant bits.
o Q holds the least significant bits.
5. Example (Table 10-2, not shown):
This table (not shown) demonstrates how the hardware algorithm works step-by-step using the previous numerical example for
better understanding.
Summary:
The hardware algorithm uses a loop to iteratively check the multiplier's LSB, perform addition or skipping based on the bit value,
and shift the registers to create the final double-length product in A and Q.
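A hedged Python model of this loop, using 5-bit magnitudes as in the example (register names follow the text; the sign handling A_s = B_s XOR Q_s is omitted for brevity, and the function itself is my own sketch):

    def hw_multiply(b, q, n=5):
        """Multiply n-bit magnitudes B and Q; A and E start cleared, SC = n."""
        mask = (1 << n) - 1
        a, e, sc = 0, 0, n
        while sc:
            if q & 1:                      # Q0 = 1: add multiplicand to partial product
                total = a + b
                a, e = total & mask, (total >> n) & 1
            # shr EAQ: E -> MSB of A, LSB of A -> MSB of Q, 0 -> E
            q = (q >> 1) | ((a & 1) << (n - 1))
            a = (a >> 1) | (e << (n - 1))
            e = 0
            sc -= 1
        return (a << n) | q                # double-length product: A (high), Q (low)

    print(hw_multiply(0b10111, 0b10011))   # 437, with A = 01101 and Q = 10101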