
Module-II

Computer Organization And Architecture

Signed Numbers Representation


Integer variables can be represented in a signed or an unsigned manner. In signed numbers, positive and negative values are differentiated by using a sign flag. Unsigned numbers do not use any flag for the sign, i.e., only positive values can be stored as unsigned numbers.

It is very easy to represent positive and negative numbers in our day-to-day life. We represent positive numbers without adding any sign before them and negative numbers with a - (minus) sign before them. But in a digital system it is not possible to store a minus sign directly, because the data in digital computers is in binary form. For representing the sign in binary numbers, we require a special notation.

Binary Numbers Representation

Our computer can understand only the language of 0s and 1s. Binary numbers can be represented in two ways, i.e., signed and unsigned. Positive numbers can be represented in both ways, signed and unsigned, but negative numbers can only be described in a signed way. The difference between unsigned and signed numbers is that unsigned numbers do not use any sign bit to identify positive and negative numbers, whereas signed numbers do.

Unsigned Numbers
As we already know, the unsigned numbers don't have any sign for representing negative
numbers. So the unsigned numbers are always positive. By default, the decimal number
representation is positive. We always assume a positive sign in front of each decimal
digit.

There is no sign bit in unsigned binary numbers, so a number is represented by its magnitude alone. Zero itself is an unsigned binary number; there is only one zero (0) in this representation, and it is always positive. Because every number has exactly one binary equivalent in unsigned representation, it is known as an unambiguous representation technique. The range of n-bit unsigned binary numbers is 0 to (2^n - 1).

Example: Represent the decimal number 102 in unsigned binary numbers.

We will change this decimal number into binary, which holds only the magnitude of the
given number.

Decimal Operation Result Remainder

102 102/2 51 0

51 51/2 25 1

25 25/2 12 1

12 12/2 6 0

6 6/2 3 0

3 3/2 1 1

1 1/2 0 1

So the binary number of (102)10 is (1100110)2, a 7-bit magnitude of the decimal number
102.
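The repeated-division procedure above translates directly into code. Below is a minimal Python sketch (the function name to_unsigned_binary is our own choice, not from the text) that collects the remainders and reverses them to obtain the unsigned binary string.

def to_unsigned_binary(n: int) -> str:
    """Convert a non-negative decimal integer to its unsigned binary string
    by repeated division by 2, collecting the remainders."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))    # the remainder becomes the next bit
        n //= 2                          # integer division by 2
    return "".join(reversed(remainders)) # remainders are produced LSB first

print(to_unsigned_binary(102))  # -> 1100110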

Signed Numbers
The signed numbers have a sign bit so that it can differentiate positive and negative
integer numbers. The signed binary number technique has both the sign bit and the
magnitude of the number. For representing the negative decimal number, the
corresponding symbol in front of the binary number will be added.
The signed numbers are represented in three ways. The signed bit makes two possible
representations of zero (positive (0) and negative (1)), which is an ambiguous
representation. The third representation is 2's complement representation in which no
double representation of zero is possible, which makes it unambiguous representation.
There are the following types of representation of signed binary numbers:

1. Sign-Magnitude form
In this form, a binary number has one bit reserved for the sign symbol. If this bit is set to 1, the number is negative; if it is set to 0, the number is positive. Apart from this sign bit, the remaining n-1 bits represent the magnitude of the number.
2. 1's Complement
By inverting each bit of a number, we can obtain the 1's complement of the number. Negative numbers can be represented in the form of 1's complement. In this form, the binary number also has an extra bit for sign representation, as in sign-magnitude form.
3. 2's Complement
By inverting each bit of a number and adding 1 to its least significant bit, we can obtain the 2's complement of the number. Negative numbers can also be represented in the form of 2's complement. In this form, the binary number also has an extra bit for sign representation, as in sign-magnitude form.

1's complement
In number representation techniques, the binary number system is the most used
representation technique in digital electronics. The complement is used for representing
the negative decimal number in binary form. Different complements of a binary number are possible, but the 1's and 2's complements are the ones most commonly used.

We can find the 1's complement of a binary number by simply inverting each bit of the given number. For example, the 1's complement of the binary number 1011001 is 0100110. We can find the 2's complement of a binary number by changing each bit (0 to 1 and 1 to 0) and adding 1 to the least significant bit. For example, the 2's complement of the binary number 1011001 is (0100110)+1=0100111.
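As a quick illustration, here is a small Python sketch (the helper names ones_complement and twos_complement are our own) that inverts the bits of a binary string and then adds 1 for the 2's complement.

def ones_complement(bits: str) -> str:
    """Invert every bit of a binary string."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits: str) -> str:
    """1's complement plus 1, kept to the same width as the input."""
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (1 << width), f"0{width}b")

print(ones_complement("1011001"))  # -> 0100110
print(twos_complement("1011001"))  # -> 0100111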

For finding 1's complement of the binary number, we can implement the logic circuit also
by using NOT gate. We use NOT gate for each bit of the binary number. So, if we want to
implement the logic circuit for 5-bit 1's complement, five NOT gates will be used.

Example 1: 11010.1101

For finding the 1's complement of the given number, change all 0's to 1 and all 1's to 0. So
the 1's complement of the number 11010.1101 comes out to be 00101.0010.

Example 2: 100110.1001

For finding the 1's complement of the given number, change all 0's to 1 and all 1's to 0. So,
the 1's complement of the number 100110.1001 comes out to be 011001.0110.

1's Complement Table
Binary Number 1's Complement

0000 1111

0001 1110

0010 1101

0011 1100

0100 1011

0101 1010

0110 1001

0111 1000

1000 0111

1001 0110

1010 0101

1011 0100

1100 0011

1101 0010

1110 0001

1111 0000

Use of 1's complement

1's complement plays an important role in representing the signed binary numbers. The
main use of 1's complement is to represent a signed binary number. Apart from this, it is
also used to perform various arithmetic operations such as addition and subtraction.

In signed binary number representation, we can represent both positive and negative
numbers. For representing the positive numbers, there is nothing to do. But for
representing negative numbers, we have to use 1's complement technique. For
representing the negative number, we first have to represent it with a positive sign, and
then we find the 1's complement of it.

Let's take an example of a positive and negative number and see how these numbers are
represented.

Example 1: +6 and -6

The number +6 is represented the same as an ordinary binary number. For representing both
numbers, we will take a 5-bit register.

So the +6 is represented in the 5-bit register as 0 0110.

The -6 is represented in the 5-bit register in the following way:

1. +6=0 0110
2. Find the 1's complement of the number 0 0110, i.e., 1 1001. Here, the MSB being 1
denotes that the number is negative.

Here, MSB refers to Most Significant Bit, and LSB denotes the Least Significant Bit.

Example 2: +120 and -120

The number +120 is represented the same as an ordinary binary number. For representing both
numbers, take an 8-bit register.
So the +120 is represented in the 8-bit register as 0 1111000.

The -120 is represented in the 8-bit register in the following way:

1. +120=0 1111000

2. Now, find the 1's complement of the number 0 1111000, i.e., 1 0000111. Here, the MSB being 1 denotes that the number is negative. A short code sketch of this encoding follows.
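The steps above can be captured in a few lines of Python. This is a minimal sketch (the function name to_ones_complement_signed is our own) that represents a signed integer in an n-bit register using 1's complement.

def to_ones_complement_signed(value: int, bits: int) -> str:
    """Represent a signed integer in `bits`-bit 1's complement form."""
    if value >= 0:
        return format(value, f"0{bits}b")          # positive: plain binary
    magnitude = format(-value, f"0{bits}b")        # first write |value|
    return "".join("1" if b == "0" else "0" for b in magnitude)  # then invert

print(to_ones_complement_signed(+6, 5))    # -> 00110
print(to_ones_complement_signed(-6, 5))    # -> 11001
print(to_ones_complement_signed(-120, 8))  # -> 10000111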

2's complement

Just like 1's complement, 2's complement is also used to represent the signed binary
numbers. For finding 2's complement of the binary number, we will first find the 1's
complement of the binary number and then add 1 to the least significant bit of it.

For example, if we want to calculate the 2's complement of the number 1011001, then
firstly, we find the 1's complement of the number that is 0100110 and add 1 to the LSB.
So, by adding 1 to the LSB, the number will be (0100110)+1=0100111. We can also
create the logic circuit using OR, AND, and NOT gates. The logic circuit for finding 2's
complement of the 5-bit binary number is as follows:

Example 1: 110100

For finding 2's complement of the given number, change all 0's to 1 and all 1's to 0. So
the 1's complement of the number 110100 is 001011. Now add 1 to the LSB of this
number, i.e., (001011)+1=001100.

Example 2: 100110

For finding the 2's complement of the given number, change all 0's to 1 and all 1's to 0. So,
the 1's complement of the number 100110 is 011001. Now add 1 to the LSB of this
number, i.e., (011001)+1=011010.

2's Complement Table


Binary Number 1's Complement 2's complement
0000 1111 0000
0001 1110 1111
0010 1101 1110
0011 1100 1101
0100 1011 1100
0101 1010 1011
0110 1001 1010
0111 1000 1001
1000 0111 1000

1001 0110 0111


1010 0101 0110
1011 0100 0101
1100 0011 0100
1101 0010 0011
1110 0001 0010
1111 0000 0001
Use of 2's complement
2's complement is used for representing signed numbers and performing arithmetic
operations such as subtraction, addition, etc. The positive number is simply represented
as a magnitude form. So there is nothing to do for representing positive numbers. But if
we represent the negative number, then we have to choose either 1's complement or 2's
complement technique. The 1's complement is an ambiguous technique, and 2's
complement is an unambiguous technique. Let's see an example to understand how we
can calculate the 2's complement in signed binary number representation.

Example 1: +6 and -6
The number +6 is represented the same as an ordinary binary number. For representing both
numbers, take a 5-bit register.

So the +6 is represented in the 5-bit register as 0 0110.

The -6 is represented in the 5-bit register in the following way:

1. +6=0 0110
2. Now, find the 1's complement of the number 0 0110, i.e. 1 1001.
3. Now, add 1 to its LSB. When we add 1 to the LSB of 1 1001, the newly generated
number comes out to be 1 1010. Here, the sign bit is 1, which means the number is
negative.

Example 2: +120 and -120

The number +120 is represented the same as an ordinary binary number. For representing both
numbers, take an 8-bit register.

So the +120 is represented in the 8-bit register as 0 1111000.

The -120 is represented in the 8-bit register in the following way:

1. +120=0 1111000
2. Now, find the 1's complement of the number 0 1111000, i.e. 1 0000111.
3. Now, add 1 to its LSB. When we add 1 to the LSB of 1 0000111, the newly
generated number comes out to be 1 0001000. Here, the sign bit is 1, which means
the number is negative. A short Python sketch of this encoding follows.
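The same encoding can be sketched in Python. This minimal example (to_twos_complement_signed is our own helper name) writes the magnitude, inverts the bits, and adds 1, exactly as in the steps above.

def to_twos_complement_signed(value: int, bits: int) -> str:
    """Represent a signed integer in `bits`-bit 2's complement form."""
    if value >= 0:
        return format(value, f"0{bits}b")
    magnitude = format(-value, f"0{bits}b")
    inverted = int("".join("1" if b == "0" else "0" for b in magnitude), 2)
    return format((inverted + 1) % (1 << bits), f"0{bits}b")  # add 1 to the LSB

print(to_twos_complement_signed(-6, 5))    # -> 11010
print(to_twos_complement_signed(-120, 8))  # -> 10001000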

Addition and Subtraction with Signed-Magnitude Data
Addition Algorithm
The addition algorithm specifies that:
 If the signs of A and B are the same, add both magnitudes and attach the sign of A to the result, as shown in the table below.
 When the signs of A and B disagree, compare the two magnitudes and subtract the smaller magnitude from the greater one.
 In cases where A > B, the sign of the result is the sign of A; in cases where A < B, it is the complement of A's sign.
 When the two magnitudes are equal, subtract B from A and make the sign of the result positive.

Subtraction Algorithm
The subtraction algorithm states that:
 When the signs of A and B differ, add both magnitudes and attach the sign of A to the result.
 When the signs of A and B are the same, compare the two magnitudes and subtract the smaller magnitude from the greater one.
 In cases where A > B, the sign of the result is the sign of A; in cases where A < B, it is the complement of A's sign.
 When the two magnitudes are equal, subtract B from A and make the sign of the result positive. A short Python sketch of both algorithms is given below.
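As a rough illustration of the two algorithms, here is a minimal Python sketch. The chosen representation, a (sign, magnitude) tuple with sign 0 for + and 1 for -, and the function names sm_add and sm_subtract are our own, not taken from the flowchart.

def sm_add(a, b):
    """Add two signed-magnitude operands given as (sign, magnitude) tuples.
    Follows the addition algorithm above."""
    a_s, a_m = a
    b_s, b_m = b
    if a_s == b_s:                  # same signs: add magnitudes, keep sign of A
        return (a_s, a_m + b_m)
    if a_m > b_m:                   # different signs: subtract smaller magnitude
        return (a_s, a_m - b_m)     # result takes the sign of A
    if a_m < b_m:
        return (1 - a_s, b_m - a_m) # result takes the complemented sign of A
    return (0, 0)                   # equal magnitudes: result is +0

def sm_subtract(a, b):
    """A - B in signed magnitude: complement the sign of B and add."""
    b_s, b_m = b
    return sm_add(a, (1 - b_s, b_m))

print(sm_add((0, 3), (0, 2)))       # (0, 5)  i.e. +5
print(sm_subtract((0, 3), (0, 2)))  # (0, 1)  i.e. +1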

FlowChart

Hardware Implementation for Signed-Magnitude Addition and Subtraction

Example 1

Let's add two values, +3 and +2, using the signed-magnitude representation.
Solution
We represent the given operands as shown below:
+3 = 0 011
+2 = 0 010
From the flowchart, we follow that As xor Bs = 0. This implies that As = Bs.

So we add the magnitudes of both operands.

Mag(+3) + Mag(+2) = 011 + 010 = 101 = Mag(5)

Now the sign of the result will be that of As.
Therefore, +3 + (+2) = 0 101 = +5

Example 2

Let's subtract two values, +3 and +2, using the signed-magnitude representation.
Solution

We represent the given operands as shown below:

+3 = 0 011
+2 = 0 010
From the flowchart, we follow that As xor Bs = 0. This implies that As = Bs.
Also, according to the table, since the magnitude of A > the magnitude of B,
we get the result as +(A - B).

Mag(Result) = 011 + (010)' + 1 = 011 + 101 + 1 = 001 (the end carry is discarded)
SignBit(Result) = 0
Therefore, +3 - (+2) = +(3 - 2) = +1

What is Booth’s Algorithm?

Andrew Donald Booth’s Algorithm, introduced in 1951, revolutionized


binary multiplication by reducing the number of additions and shifts
required. This algorithm capitalizes on the concept of signed-digit
representation, where digits are encoded as either -1, 0, or 1. Booth’s
Algorithm is particularly advantageous when multiplying numbers with
repeated patterns of 1s or 0s.

How Booth’s Algorithm Works?

The algorithm operates by identifying sequences of consecutive 1s and 0s in the multiplier and efficiently adjusting the partial products based on these patterns. It reduces the number of additions and shifts required compared to traditional multiplication methods, making it a valuable tool in computer organization and hardware implementations. Let's dive into the detailed workings of Booth's Algorithm; it involves three basic steps:

Step 1: Initialization

1. Input: Booth’s Algorithm takes two binary numbers as input: the


multiplicand (M) and the multiplier (Q). The length of the multiplier
determines the number of iterations in the algorithm.

2. Initialization: Create three registers:


A (Accumulator): Initialize to zero, representing the result of the
multiplication.
Q (Multiplier): Load the binary multiplier into this register.
Q-1 (Previous bit): Initialize to 0; it holds the bit immediately to the right of the
multiplier's least significant bit (LSB) and is examined together with it.

Step 2: Iteration

The algorithm iterates through the bits of the multiplier. For each bit of the
multiplier, it performs the following actions:

1. Identify Patterns: Examine the least significant bit of the multiplier (Q0) together with the previous bit (Q-1). These two bits form a two-bit pattern. There are four possible patterns: 00, 01, 10, and 11.

2. Pattern-based Addition or Subtraction:

If the pattern is 00 or 11, no addition or subtraction is performed, and the algorithm proceeds directly to the shift step.
If the pattern is 01, add the multiplicand (M) to the accumulator (A) and store the result in the accumulator.
If the pattern is 10, subtract the multiplicand (M) from the accumulator (A) and store the result in the accumulator.

3. Shift Right:

Perform an arithmetic shift right on the combined accumulator-multiplier pair (A, Q, Q-1). All bits move one position to the right: the least significant bit (LSB) of Q becomes the new Q-1, the LSB of A becomes the new MSB of Q, and the sign bit (MSB) of A is replicated so that the sign is preserved.

Step 3: Result Extraction

After one iteration has been performed for every bit of the multiplier:

1. Combine A and Q: The accumulator A holds the high-order half of the product, and the multiplier register Q holds the low-order half.

2. Result Extraction: The final result of the multiplication is read from the combined register pair A:Q.

Example of Booth's Algorithm

Let's illustrate Booth's Algorithm with an example:

M (Multiplicand) = 1010 (-6 in 4-bit 2's complement)
Q (Multiplier) = 1101 (-3 in 4-bit 2's complement)

1. Initialization:

 A (Accumulator) = 0000
 Q (Multiplier) = 1101
 Q-1 (Previous bit) = 0
 -M (2's complement of M) = 0110

2. Iteration (4 multiplier bits, so 4 iterations):

 Iteration 1: Q0Q-1 = 10, so A = A - M = 0110; after the shift, A = 0011, Q = 0110, Q-1 = 1
 Iteration 2: Q0Q-1 = 01, so A = A + M = 1101; after the shift, A = 1110, Q = 1011, Q-1 = 0
 Iteration 3: Q0Q-1 = 10, so A = A - M = 0100; after the shift, A = 0010, Q = 0101, Q-1 = 1
 Iteration 4: Q0Q-1 = 11, so no operation; after the shift, A = 0001, Q = 0010, Q-1 = 1

3. Final Result:
A:Q = 0001 0010, which is +18 = (-6) × (-3)

A Python sketch of the procedure is shown below.
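This compact Python sketch of the procedure (the function name booth_multiply and the bit-width handling are our own choices) reproduces the trace above for 1010 × 1101.

def booth_multiply(m: int, q: int, bits: int = 4) -> int:
    """Multiply two `bits`-wide 2's complement values with Booth's algorithm;
    returns the 2*bits-wide product as an unsigned bit pattern."""
    mask = (1 << bits) - 1
    a, q_prev = 0, 0
    for _ in range(bits):
        pattern = (q & 1, q_prev)
        if pattern == (0, 1):            # 01: add the multiplicand
            a = (a + m) & mask
        elif pattern == (1, 0):          # 10: subtract the multiplicand
            a = (a - m) & mask
        # arithmetic shift right of the combined A, Q, Q-1
        q_prev = q & 1
        q = ((q >> 1) | ((a & 1) << (bits - 1))) & mask
        a = (a >> 1) | (a & (1 << (bits - 1)))   # replicate the sign bit of A
    return (a << bits) | q

print(format(booth_multiply(0b1010, 0b1101), "08b"))  # -> 00010010 (18 = -6 x -3)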

What is Ripple Carry Adder?


A ripple-carry adder is a structure of multiple full adders cascaded so as to give the result of adding two n-bit binary sequences. Because the adder is built from cascaded full adders, a carry is generated at every full adder stage of a ripple-carry adder circuit. The carry output of each full adder stage is forwarded to the next full adder and applied there as its carry input. This process continues up to the last full adder stage, so each carry output bit "ripples" to the next full adder stage; for this reason, it is named a "RIPPLE CARRY ADDER". Its most important feature is that it can add input bit sequences of any length, whether 4 bits, 5 bits, or more.

One of the most important points to be considered in this carry adder is that the final output is known only after the carry outputs are generated by each full adder stage and forwarded to the next stage. So there will be a delay in getting the result when using this carry adder.

There are various types of ripple-carry adders. They are:

 4-bit ripple-carry adder


 8-bit ripple-carry adder
 16-bit ripple-carry adder

First, we will start with the 4-bit ripple-carry adder and then the 8-bit and 16-bit
ripple-carry adders.

4-bit Ripple Carry Adder


The below diagram represents the 4-bit ripple-carry adder. In this adder, four full adders are connected in cascade. C0 is the carry input bit, and it is always zero. When this input carry C0 is applied along with the two input sequences A1 A2 A3 A4 and B1 B2 B3 B4, the output is represented by S1 S2 S3 S4 and the output carry C4.

4-bit RCA Diagram



Working of 4-bit Ripple Carry Adder


 Let’s take an example of two input sequences 0101 and 1010.
These are representing the A4 A3 A2 A1 and B4 B3 B2 B1.
 As per this adder concept, input carry is 0.
 When Ao & Bo are applied at 1st full adder along with input carry
0.
 Here A1 =1 ; B1=0 ; Cin=0
 Sum (S1) and carry (C1) will be generated as per the Sum and
Carry equations of this adder. As per its theory, the output equation
for the Sum = A1⊕B1⊕Cin and Carry = A1B1⊕B1Cin⊕CinA1
 As per this equation, for 1st full adder S1 =1 and Carry output i.e.,
C1=0.
 Same like for next input bits A2 and B2, output S2 = 1 and C2 = 0.
Here the important point is the second stage full adder gets input
carry i.e., C1 which is the output carry of initial stage full adder.
 Like this will get the final output sequence (S4 S3 S2 S1) = (1 1 1
1) and Output carry C4 = 0
 This is the addition process for 4-bit input sequences when it’s
applied to this carry adder.
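Below is a minimal Python sketch of a ripple-carry adder (the function names full_adder and ripple_carry_add are our own). It chains full adders bit by bit, passing each carry to the next stage, and reproduces the 0101 + 1010 example.

def full_adder(a: int, b: int, cin: int):
    """One full-adder stage: returns (sum, carry-out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (b & cin) | (cin & a)
    return s, cout

def ripple_carry_add(a_bits, b_bits):
    """Add two bit lists given MSB first; the carry ripples from the LSB stage upward."""
    carry = 0                                              # C0 is always 0
    sums = []
    for a, b in zip(reversed(a_bits), reversed(b_bits)):   # start at the LSB
        s, carry = full_adder(a, b, carry)
        sums.append(s)
    return list(reversed(sums)), carry                     # (S4 S3 S2 S1), C4

print(ripple_carry_add([0, 1, 0, 1], [1, 0, 1, 0]))  # -> ([1, 1, 1, 1], 0)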
8-bit Ripple Carry Adder
 It consists of 8 full adders which are connected in cascaded form.
 Each full adder carry output is connected as an input carry to the
next stage full adder.
 The input sequences are denoted by (A1 A2 A3 A4 A5 A6 A7 A8)
and (B1 B2 B3 B4 B5 B6 B7 B8) and its relevant output sequence
is denoted by (S1 S2 S3 S4 S5 S6 S7 S8).
 The addition process in an 8-bit ripple-carry adder follows the same principle as in a 4-bit ripple-carry adder, i.e., each bit from the two input sequences is added along with the input carry.
 It is used for the addition of two 8-bit binary sequences.

8bit-ripple-carry-adder
16-bit Ripple Carry Adder
 It consists of 16 full adders which are connected in cascaded form.
 Each full adder carry output is connected as an input carry to the
next stage full adder.

 The input sequences are denoted by (A1 ….. A16) and (B1 ……
B16) and its relevant output sequence is denoted by (S1 ……..
S16).
 The addition process in a 16-bit ripple-carry adder follows the same principle as in a 4-bit ripple-carry adder, i.e., each bit from the two input sequences is added along with the input carry.
 It is used for the addition of two 16-bit binary sequences.

16-bit-ripple-carry-adder
Ripple Carry Adder Applications
The ripple-carry-adder applications include the following.

 These carry adders are mostly used for the addition of n-bit input sequences.
 These carry adders are applicable in digital signal processing and in microprocessors.
Ripple Carry Adder Advantages
The ripple-carry-adder advantages include the following.

 This carry adder has the advantage that we can perform the addition of n-bit sequences and get accurate results.
 The design of this adder is not a complex process.
The ripple-carry adder is an alternative when a single half adder or full adder cannot perform the addition because the input bit sequences are large. It will produce the output for input bit sequences of any length, but with some delay. In digital circuits, a circuit that produces its output with a long delay is not preferable. This drawback can be overcome by a carry look-ahead adder circuit.

Array Multiplier

An array multiplier is a digital combinational circuit used for multiplying two


binary numbers by employing an array of full adders and half adders. This
array is used for the nearly simultaneous addition of the various product
terms involved. To form the various product terms, an array of AND gates
is used before the Adder array.

Checking the bits of the multiplier one at a time and forming partial
products is a sequential operation that requires a sequence of add and shift
micro-operations. The multiplication of two binary numbers can be done
with one micro-operation by means of a combinational circuit that forms the
product bits all at once. This is a fast way of multiplying two numbers since
all it takes is the time for the signals to propagate through the gates that
form the multiplication array. However, an array multiplier requires a large
number of gates, and for this reason it was not economical until the
development of integrated circuits.

For implementation of an array multiplier with a combinational circuit, consider the multiplication of two 2-bit numbers as shown in the figure. The multiplicand bits are b1 and b0, the multiplier bits are a1 and a0, and the product is P(3) P(2) P(1) P(0). Assuming A = a1a0 and B = b1b0, the various bits of the final product term P can be written as:

1. P(0) = a0b0

2. P(1) = a1b0 + b1a0

3. P(2) = a1b1 + c1, where c1 is the carry generated during the addition for the P(1) term.

4. P(3) = c2, where c2 is the carry generated during the addition for the P(2) term.

For the above multiplication, an array of four AND gates is required to form
the various product terms like a0b0 etc. and then an adder array is required
to calculate the sums involving the various product terms and carry
combinations mentioned in the above equations in order to get the final
Product bits.
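The 2-bit array multiplier described by these equations can be sketched in Python using only AND and XOR operations on single bits. The function name array_multiply_2bit and the explicit half-adder helper below are our own illustrative choices.

def half_adder(x: int, y: int):
    """Half adder: returns (sum, carry)."""
    return x ^ y, x & y

def array_multiply_2bit(a1: int, a0: int, b1: int, b0: int):
    """2-bit x 2-bit array multiplier built from AND gates and two half adders.
    Returns the product bits (P3, P2, P1, P0)."""
    # AND-gate array forms the partial product terms
    p0 = a0 & b0                              # P(0) = a0.b0, needs no adder
    s1, c1 = half_adder(a1 & b0, a0 & b1)     # P(1) = a1.b0 + a0.b1
    s2, c2 = half_adder(a1 & b1, c1)          # P(2) = a1.b1 + c1
    return c2, s2, s1, p0                     # P(3) = c2

print(array_multiply_2bit(1, 1, 1, 1))  # 11 x 11 = 1001 -> (1, 0, 0, 1)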

The first partial product is formed by multiplying a0 by b1, b0. The


multiplication of two bits such as a0 and b0 produces a 1 if both bits are 1;
otherwise, it produces 0. This is identical to an AND operation and can be
implemented with an AND gate.

The first partial product is formed by means of two AND gates.



The second partial product is formed by multiplying a1 by b1b0 and is


shifted one position to the left.

The above two partial products are added with two half-adder(HA) circuits.
Usually there are more bits in the partial products and it will be necessary
to use full-adders to produce the sum.
Note that the least significant bit of the product does not have to go through
an adder since it is formed by the output of the first AND gate.

A combinational circuit binary multiplier with more bits can be constructed in a similar fashion. A bit of the multiplier is ANDed with each bit of the multiplicand in as many levels as there are bits in the multiplier. The binary output of each level of AND gates is added in parallel with the partial product of the previous level to form a new partial product. The last level produces the product. For j multiplier bits and k multiplicand bits we need j*k AND gates and (j-1) k-bit adders to produce a product of j+k bits.

Carry Look-Ahead Adder


The carry propagation delay of an adder also affects other arithmetic operations such as multiplication and division, since they are carried out through several addition or subtraction steps. This is a major problem for the adder, and hence improving the speed of addition will improve the speed of all other arithmetic operations. Reducing the carry propagation delay of adders is therefore of great importance. Different logic design approaches have been employed to overcome the carry propagation problem. One widely used approach is to employ a carry look-ahead, which solves this problem by calculating the carry signals in advance, based on the input signals. This type of adder circuit is called a carry look-ahead adder.

Here a carry signal will be generated in two cases:

When both input bits A and B are 1.

When one of the two bits is 1 and the carry-in is 1.



In ripple carry adders, for each adder block, the two bits that are to be added are available instantly. However, each adder block waits for the carry to arrive from its previous block. So it is not possible to generate the sum and carry of any block until the input carry is known. The i-th block waits for the (i-1)-th block to produce its carry, so there is a considerable time delay, called the carry propagation delay.

Consider the above 4-bit ripple carry adder. The sum S3 is produced by the corresponding full adder as soon as the input signals are applied to it. But the carry input C4 does not reach its final steady-state value until carry C3 is available at its steady-state value. Similarly, C3 depends on C2 and C2 on C1. Therefore, the carry must propagate through all the stages before the output S3 and the carry C4 settle to their final steady-state values.

The propagation time is equal to the propagation delay of each adder block, multiplied by the number of adder blocks in the circuit. For example, if each full adder stage has a propagation delay of 20 nanoseconds, then S3 will reach its final correct value after 60 (20 × 3) nanoseconds. The situation gets worse if we extend the number of stages to add more bits.

Carry Look-ahead Adder :

A carry look-ahead adder reduces the propagation delay by introducing more


complex hardware. In this design, the ripple carry design is suitably transformed
such that the carry logic over fixed groups of bits of the adder is reduced to two-level
logic. Let us discuss the design in detail.

Consider the full adder circuit shown above with its corresponding truth table. We define two variables, 'carry generate' Gi and 'carry propagate' Pi, as

Pi = Ai ⊕ Bi
Gi = Ai·Bi

The sum output and carry output can be expressed in terms of carry generate Gi and carry propagate Pi as

Si = Pi ⊕ Ci
Ci+1 = Gi + Pi·Ci

where Gi produces the carry when both Ai and Bi are 1 regardless of the input carry, and Pi is associated with the propagation of the carry from Ci to Ci+1.

The carry output Boolean function of each stage in a 4-stage carry look-ahead adder can be expressed as

C1 = G0 + P0·Cin
C2 = G1 + P1·C1 = G1 + P1·G0 + P1·P0·Cin
C3 = G2 + P2·C2 = G2 + P2·G1 + P2·P1·G0 + P2·P1·P0·Cin
C4 = G3 + P3·C3 = G3 + P3·G2 + P3·P2·G1 + P3·P2·P1·G0 + P3·P2·P1·P0·Cin

From the above Boolean equations we can observe that C4 does not have to wait for C3 and C2 to propagate; in fact, C4 is produced at the same time as C3 and C2. Since the Boolean expression for each carry output is a sum of products, each one can be implemented with one level of AND gates followed by an OR gate.
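These equations translate directly into code. Below is a minimal Python sketch (the function name carry_lookahead_4bit is our own) that computes all four carries in parallel from the generate and propagate signals and then forms the sums.

def carry_lookahead_4bit(a_bits, b_bits, cin=0):
    """4-bit carry look-ahead addition. a_bits and b_bits are lists of bits,
    LSB first. Returns (sum bits LSB first, carry out)."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # carry generate Gi = Ai.Bi
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # carry propagate Pi = Ai xor Bi

    # Each carry is computed directly from G, P and Cin (two-level logic),
    # so no carry has to ripple through the previous stages.
    c1 = g[0] | (p[0] & cin)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & cin)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & cin)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & cin))

    carries = [cin, c1, c2, c3]
    sums = [p[i] ^ carries[i] for i in range(4)]  # Si = Pi xor Ci
    return sums, c4

print(carry_lookahead_4bit([1, 0, 1, 0], [0, 1, 0, 1]))  # 0101 + 1010 -> ([1, 1, 1, 1], 0)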

The implementation of the three Boolean functions for the carry outputs (C2, C3 and C4) of a carry look-ahead carry generator is shown in the figure below.

Time Complexity Analysis :

We could think of a carry look-ahead adder as made up of two “parts”

The part that computes the carry for each bit.

The part that adds the input bits and the carry for each bit position.

The log(n) complexity arises from the part that generates the carry, not the circuit
that adds the bits.

Now, for the generation of the n-th carry bit, we need to perform an AND of (n+1) inputs. The complexity of the adder comes down to how we perform this AND operation. If we have AND gates, each with a fan-in (number of inputs accepted) of k, then we can find the AND of all the bits in log_k(n+1) time. This is represented in asymptotic notation as Θ(log n).

Advantages and Disadvantages of Carry Look-Ahead Adder :

Advantages –

The propagation delay is reduced.

It provides the fastest addition logic.

Disadvantages –

The carry look-ahead adder circuit gets complicated as the number of variables increases.

The circuit is costlier as it involves more hardware.
