
UNIT-1

Basic Structure of Computers


Computer Architecture

In computer engineering, computer architecture is a set of rules and methods that
describe the functionality, organization, and implementation of computer systems.

Functional unit
 A computer consists of five functionally independent main parts: input, memory,
arithmetic logic unit (ALU), output and control unit.
 The input device accepts coded information as a source program, i.e. a high-level language program.
 This is either stored in the memory or immediately used by the processor to perform the
desired operations.
 The program stored in the memory determines the processing steps.
 Basically, the computer converts the source program into an object program, i.e. into
machine language.
 Finally, the results are sent to the outside world through the output device. All of these
actions are coordinated by the control unit.

Input unit: -
 The source program / high-level language program / coded information / simply data is fed
to the computer through input devices; the keyboard is the most common type.
 Whenever a key is pressed, the corresponding character or number is translated into its
equivalent binary code and fed over a cable either to the memory or to the processor.
 Examples: joysticks, trackballs, mouse and scanners are other input devices besides the keyboard.

Memory unit: -
Its function is to store programs and data.
It is basically of two types: 1. Primary memory 2. Secondary memory
Primary memory: -
 It is the memory exclusively associated with the processor and operates at high speed.
 The memory contains a large number of semiconductor storage cells.
 These are processed in groups of a fixed size called words.
 Programs must reside in the memory during execution. Instructions and data can be
written into the memory or read out under the control of the processor.
Secondary memory: -
 This type of memory is used where large amounts of data and programs have to be stored,
particularly information that is accessed infrequently.
 Examples: magnetic disks and tapes, optical disks (i.e. CD-ROMs), floppies etc.

Arithmetic logic unit (ALU): -


Most of the computer operations are executed in the ALU of the processor, such as addition,
subtraction, division, multiplication, etc. The operands are brought into the ALU from
memory and stored in high-speed storage elements called registers.
Control unit: -
The operations of all the units are coordinated by the control unit, i.e. it acts as a nerve centre
that sends signals to other units and senses their states. The actual timing signals that
govern the transfer of data between the input unit, processor, memory and output unit are
generated by the control unit.
Output unit: -
These are the counterparts of the input unit. Their basic function is to send the
processed results to the outside world. Examples: printer, speakers,
monitor etc.

Historical Perspective

Abacus
Structure: The abacus is basically a wooden rack that has metal rods with beads
mounted on them.
Working of the abacus: In the abacus, the beads are moved by the operator
according to certain rules to perform arithmetic calculations. In some
countries like China, Russia, and Japan, the abacus is still in use.
Napier’s Bones
Napier’s Bones was a manually operated calculating device and as the name
indicates, it was invented by John Napier. In this device, he used 9 different
ivory strips (bones) marked with numbers to multiply and divide for
calculation. It was also the first machine to use the decimal point system for
calculation.
Pascaline
It is also called an Arithmetic Machine or Adding Machine. The French
mathematician-philosopher Blaise Pascal invented it between 1642 and
1644. It was the first mechanical and automatic calculator. Pascal invented it
to help his father, a tax accountant, with his calculations. It could
perform addition and subtraction quickly. It was basically a wooden box
with a series of gears and wheels. It worked by rotating wheels: when a
wheel completed one revolution, it rotated the neighbouring wheel, and a
series of windows on the top of the wheels allowed the totals to be read.
Stepped Reckoner or Leibniz wheel
The German mathematician-philosopher Gottfried Wilhelm Leibniz developed
this device in 1673 by improving on Pascal's invention. It was basically a
digital mechanical calculator, and it was called the stepped reckoner because
it used fluted drums instead of the gears used in the earlier Pascaline.
Difference Engine
Charles Babbage, who is also known as the “Father of the Modern Computer”,
designed the Difference Engine in the early 1820s. The Difference Engine was a
mechanical computer capable of performing simple calculations. It was a
steam-driven calculating machine designed to compute tables of numbers,
such as logarithm tables.
Analytical Engine
In 1830 Charles Babbage developed another calculating machine, the
Analytical Engine. The Analytical Engine was a mechanical computer
that used punch cards as input. It was capable of solving any
mathematical problem and storing information as a permanent memory
(storage).
Tabulating Machine
Herman Hollerith, an American statistician, invented this machine in
1890. The Tabulating Machine was a mechanical tabulator based on
punch cards. It was capable of tabulating statistics and recording or sorting
data or information. The machine was used in the 1890 U.S. Census.
Hollerith started the Tabulating Machine Company, and this
company later became International Business Machines (IBM) in
1924.
Differential Analyzer
The Differential Analyzer, introduced in the United States in 1930, was an
early analog computer invented by Vannevar Bush. The machine used
vacuum tubes to switch electrical signals to perform calculations and was
capable of doing 25 calculations in a few minutes.
Mark I
Major changes in the history of computers began in 1937, when
Howard Aiken planned to develop a machine that could perform
calculations involving large numbers. In 1944 the Mark I
computer was built as a partnership between IBM and Harvard. It was also
the first programmable digital computer, marking a new era in the computer
world.

Generations of Computers

First Generation Computers


The period 1940-1956 is referred to as the first generation of computers.
These machines were slow, huge, and expensive. In this generation of
computers, vacuum tubes were used as the basic components of the CPU and
memory. They mainly depended on batch operating systems and punch
cards. Magnetic tape and paper tape were used as output and input devices.
Examples: ENIAC, UNIVAC-1, EDVAC, etc.
Second Generation Computers
The period 1957-1963 is referred to as the second generation of computers.
It was the time of transistor computers. In the second generation of
computers, transistors (which were cheap) were used. Transistors are
compact and consume less power, and transistor computers are faster than
first-generation computers. Magnetic cores were used for primary memory,
and magnetic discs and tapes were used as secondary memory for storage.
In second-generation computers, assembly language and high-level programming
languages such as COBOL and FORTRAN were used, and batch processing and
multiprogramming operating systems were used in these computers.
Examples: IBM 1620, IBM 7094, CDC 1604, CDC 3600, etc.
Third Generation Computers
In the third generation of computers, integrated circuits (ICs) were used
instead of the transistors of the second generation. A single IC consists of many
transistors, which increased the power of a computer and also reduced the
cost. Third-generation computers were more reliable, efficient, and smaller
in size. They used remote processing, time-sharing, and multiprogramming
operating systems. High-level programming languages such as FORTRAN II to IV,
COBOL, PASCAL and PL/1 were used.
Examples: IBM 360 series, Honeywell 6000 series, IBM 370/168, etc.
Fourth Generation Computers
The period 1971-1980 was mainly the time of fourth-generation
computers, which used VLSI (Very Large Scale Integration) circuits. A VLSI chip
contains millions of transistors and other circuit elements, and because of
these chips the computers of this generation are more compact, powerful,
fast, and affordable (low in cost). Real-time, time-sharing and distributed
operating systems are used by these computers, and C and C++ are used as the
programming languages in this generation of computers.
Examples: STAR 1000, PDP 11, CRAY-1, CRAY X-MP, etc.
Fifth Generation Computers
Computers from 1980 to the present belong to this generation. ULSI (Ultra Large
Scale Integration) technology is used in fifth-generation computers instead of
the VLSI technology of fourth-generation computers. Microprocessor chips
with ten million electronic components are used in these computers. Parallel
processing hardware and AI (Artificial Intelligence) software are also used in
fifth-generation computers. Programming languages like C, C++,
Java, .Net, etc. are used.
Examples: desktops, laptops, notebooks, ultrabooks, etc.
Bus Structure
A bus is a communication system that transfers information (in any form, like data, address or
control information) between components inside a computer, or between computers.
 It is a group of wires carrying a group of bits in parallel.
 There are three kinds of buses, classified according to the type of information they carry:
1. Data Bus
2. Address Bus
3. Control Bus
 A bus which carries a word from or to memory is called the data bus. It carries data
from one system module to another. A data bus may consist of 32, 64, 128 or even more
separate lines; this number of lines decides the width of the data bus. Each line
can carry one bit at a time, so a data bus with 32 lines can carry 32 bits at a time.
 The address bus is used to carry the address of the source or destination of the data on the data
bus.
 The control bus is used to control access, processing and information transfer.

 In this bus architecture the processor completely supervises and participates in
the transfer.
 The information is first taken to a processor register and then to the memory; such a
transfer is known as a program-controlled transfer.
 The interconnection between the I/O unit, processor and memory accomplished by two
independent system buses is known as a two-way bus interconnection structure.
 The system bus between the I/O unit and the processor consists of a DAB (device address bus),
DB (data bus) and CB (control bus). Similarly, the system bus between the memory and the
processor consists of a MAB (memory address bus), DB and CB.
 Communication exists between:
 Memory to processor
 Processor to memory
 I/O to processor
 Processor to I/O
 I/O to memory
Computer Arithmetic:
Introduction:
 Arithmetic instructions in digital computers manipulate data to produce
results necessary for the solution of computational problems.
 These instructions perform arithmetic calculations and are responsible for
the bulk of activity involved in processing data in a computer.

The four basic arithmetic operations are addition, subtraction, multiplication
and division. From these four basic operations, it is possible to formulate other
arithmetic functions and solve scientific problems by means of numerical
analysis methods.
 An arithmetic processor is the part of a processor unit that executes
arithmetic operations. The data type assumed to reside in processor registers
during the execution of an arithmetic instruction is specified in the definition of
the instruction. An arithmetic instruction may specify binary or decimal data,
and in each case the data may be in fixed-point or floating-point form.
 We must be thoroughly familiar with the sequence of steps to be followed
in order to carry out the operation and achieve a correct result. The solution to
any problem that is stated by a finite number of well-defined procedural steps is
called an algorithm.
 Usually, an algorithm will contain a number of procedural steps which are
dependent on results of previous steps. A convenient method for presenting
algorithms is a flowchart.

Addition and Subtraction:


 As we have discussed, there are three ways of representing negative fixed-
point binary numbers: signed-magnitude, signed-1's complement, or signed-
2's complement. Most computers use the signed-2's complement representation
when performing arithmetic operations with integers.

i. Addition and Subtraction with Signed-Magnitude Data:


When the signed numbers are added or subtracted, we find that there are eight
different conditions to consider, depending on the sign of the numbers and the
operation performed. These conditions are listed in the first column of Table
shown below.
Algorithm: (Addition with Signed-Magnitude Data)
i. When the signs of A and B are identical, add the two magnitudes and attach
the sign of A to the result.
ii. When the signs of A and B are different, compare the magnitudes and
subtract the smaller number from the larger. Choose the sign of the result to be
the same as A if A > B or the complement of the sign of A if A < B.
iii. If the two magnitudes are equal, subtract B from A and make the sign of the
result positive.

Algorithm: (Subtraction with Signed-Magnitude Data)


i. When the signs of A and B are different, add the two magnitudes and attach
the sign of A to the result.
ii. When the signs of A and B are identical, compare the magnitudes and
subtract the smaller number from the larger. Choose the sign of the result to be
the same as A if A > B or the complement of the sign of A if A < B.
iii. If the two magnitudes are equal, subtract B from A and make the sign of the
result positive.

The two algorithms are similar except for the sign comparison. The procedure
to be followed for identical signs in the addition algorithm is the same as for
different signs in the subtraction algorithm, and vice versa.
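The two algorithms above can be captured compactly in software. The following Python sketch is illustrative only (the function name and the (sign, magnitude) encoding, with sign 0 for plus and 1 for minus, are assumptions, not part of the hardware description); subtraction is handled by complementing the sign of B and then applying the addition rules:

def signed_magnitude_add_sub(As, A, Bs, B, subtract=False):
    """Return (sign, magnitude) of A +/- B in signed-magnitude form."""
    if subtract:                 # subtraction: complement the sign of B first
        Bs ^= 1
    if As == Bs:                 # identical signs: add magnitudes, attach the sign of A
        return As, A + B
    # different signs: subtract the smaller magnitude from the larger
    if A > B:
        return As, A - B         # result takes the sign of A
    if A < B:
        return As ^ 1, B - A     # result takes the complemented sign of A
    return 0, 0                  # equal magnitudes: result is +0

# (+7) + (-3) = +4, and (+3) - (+9) = -6
print(signed_magnitude_add_sub(0, 7, 1, 3))                 # (0, 4)
print(signed_magnitude_add_sub(0, 3, 0, 9, subtract=True))  # (1, 6)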
Hardware Implementation:
To implement the two arithmetic operations with hardware, it is first necessary
that the two numbers be stored in registers.
i. Let A and B be two registers that hold the magnitudes of the numbers, and AS
and BS be two flip-flops that hold the corresponding signs.
ii. The result of the operation may be transferred to a third register; however, a
saving is achieved if the result is transferred into A and AS. Thus A and AS
together form an accumulator register.

Consider now the hardware implementation of the algorithms above.


o First, a parallel adder is needed to perform the microoperation A + B.
o Second, a comparator circuit is needed to establish whether A > B, A = B, or A < B.
o Third, two parallel-subtractor circuits are needed to perform the
microoperations A - B and B - A. The sign relationship can be determined from
an exclusive-OR gate with AS and BS as inputs.

The below figure shows a block diagram of the hardware for implementing the
addition and subtraction operations. It consists of registers A and B and sign
flip-flops AS and BS.
o Subtraction is done by adding A to the 2's complement of B. The output
carry is transferred to flip-flop E, where it can be checked to determine the
relative magnitudes of the two numbers.
o The add-overflow flip-flop AVF holds the overflow bit when A and B are added.

The complementer provides an output of B or the complement of B depending
on the state of the mode control M.
 When M = 0, the output of B is transferred to the adder, the input carry is 0,
and the output of the adder is equal to the sum A + B.
 When M = 1, the 1's complement of B is applied to the adder, the input carry
is 1, and the output is equal to A plus the 2's complement of B, which is
equivalent to the subtraction A - B.
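The effect of the mode control M on the complementer and adder can be sketched in a few lines of Python. This is a rough model of the behaviour just described, not the actual circuit; the register width n and the function name are assumptions made for illustration:

def adder_with_mode(A, B, M, n=8):
    mask = (1 << n) - 1
    B_in = B ^ (mask * M)            # M = 1 selects the 1's complement of B
    total = A + B_in + M             # the input carry equals M
    E = (total >> n) & 1             # output carry goes to flip-flop E
    return E, total & mask           # n-bit result left in register A

print(adder_with_mode(25, 9, M=0))   # (0, 34)  -> A + B
print(adder_with_mode(25, 9, M=1))   # (1, 16)  -> A - B, and E = 1 indicates A >= B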
Hardware Algorithm

ii. Addition and Subtraction with Signed-2's Complement Data


 The register configuration for the hardware implementation is shown in
Figure (a) below. We name the A register AC (accumulator) and the B register
BR. The leftmost bits in AC and BR represent the sign bits of the numbers. The
two sign bits are added or subtracted together with the other bits in the
complementer and parallel adder. The overflow flip-flop V is set to 1 if there is
an overflow. The output carry in this case is discarded.
 The algorithm for adding and subtracting two binary numbers in signed-2's
complement representation is shown in the flowchart of Figure (b). The sum is
obtained by adding the contents of AC and BR (including their sign bits). The
overflow bit V is set to 1 if the exclusive-OR of the last two carries is 1, and it is
cleared to 0 otherwise. The subtraction operation is accomplished by adding the
content of AC to the 2's complement of BR.
 Comparing this algorithm with its signed-magnitude counterpart, we note
that it is much simpler to add and subtract numbers if negative numbers are
maintained in signed-2's complement representation.
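The overflow rule can be illustrated with a short Python sketch of signed-2's complement addition and subtraction on n-bit registers AC and BR. The word size n and the helper name are assumptions; V is formed, as stated above, from the exclusive-OR of the carry into and the carry out of the sign position, and the carry out of the sign bit is discarded:

def twos_complement_add_sub(AC, BR, n=8, subtract=False):
    mask = (1 << n) - 1
    if subtract:
        BR = (~BR) & mask            # 2's complement of BR ...
        carry_in = 1                 # ... formed as the 1's complement plus 1
    else:
        carry_in = 0
    low = (AC & (mask >> 1)) + (BR & (mask >> 1)) + carry_in
    c_into_sign = (low >> (n - 1)) & 1
    total = AC + BR + carry_in
    c_out_of_sign = (total >> n) & 1          # this carry is discarded
    V = c_into_sign ^ c_out_of_sign           # overflow flip-flop
    return total & mask, V

# 0x50 + 0x50 = 0xA0 overflows 8-bit signed arithmetic; 0x50 + 0x10 does not
print(twos_complement_add_sub(0x50, 0x50))    # (160, 1) -> V = 1
print(twos_complement_add_sub(0x50, 0x10))    # (96, 0)  -> V = 0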

Multiplication Algorithms:
Multiplication of two fixed-point binary numbers in signed-magnitude
representation is done with paper and pencil by a process of successive
shift-and-add operations. This process is best illustrated with a
numerical example.

The process of multiplication:


• It consists of looking at successive bits of the multiplier, least significant
bit first.
• If the multiplier bit is a 1, the multiplicand is copied down; otherwise, zeros
are copied down.
• The numbers copied down in successive lines are shifted one position to the
left from the previous number.
• Finally, the numbers are added and their sum forms the product.

The sign of the product is determined from the signs of the multiplicand and
multiplier. If they are alike, the sign of the product is positive. If they are
unlike, the sign of the product is negative.
Hardware Implementation for Signed-Magnitude Data
The registers A, B and other equipment are shown in Figure (a). The
multiplier is stored in the Q register and its sign in Qs. The sequence
counter SC is initially set to a number equal to the number of bits in the
multiplier. The counter is decremented by 1 after forming each partial
product. When the content of the counter reaches zero, the product is
formed and the process stops.

 Initially, the multiplicand is in register B and the multiplier in Q. Their
corresponding signs are in Bs and Qs, respectively.
 The sum of A and B forms a partial product which is transferred to the
EA register.
 Both partial product and multiplier are shifted to the right. This shift will
be denoted by the statement shr EAQ to designate the right shift.
 The least significant bit of A is shifted into the most significant position of
Q, the bit from E is shifted into the most significant position of A, and 0 is
shifted into E. After the shift, one bit of the partial product is shifted into Q,
pushing the multiplier bits one position to the right.
In this manner, the rightmost flip-flop in register Q, designated by Qn, will hold
the bit of the multiplier, which must be inspected next.
Hardware Algorithm:
Initially, the multiplicand is in B and the multiplier in Q. Their corresponding
signs are in Bs and Qs, respectively. The signs are compared, and both A and Q
are set to correspond to the sign of the product since a double-length product
will be stored in registers A and Q. Registers A and E are cleared and the
sequence counter SC is set to a number equal to the number of bits of the
multiplier.
After the initialization, the low-order bit of the multiplier in Qn is tested.
i. If it is 1, the multiplicand in B is added to the present partial product in A .
ii. If it is 0 , nothing is done. Register EAQ is then shifted once to the right to
form the new partial product.

The sequence counter is decremented by 1 and its new value checked. If it is
not equal to zero, the process is repeated and a new partial product is formed.
The process stops when SC = 0.
The final product is available in both A and Q, with A holding the most
significant bits and Q holding the least significant bits.
A flowchart of the hardware multiply algorithm is shown in the below
figure (l).
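The loop can be simulated in Python as shown below. This is only a sketch of the shift-and-add procedure described above; the word size n, the function name, and returning the double-length product as a single integer are assumptions made for illustration:

def shift_add_multiply(B, Q, Bs, Qs, n=5):
    mask = (1 << n) - 1
    sign = Bs ^ Qs                   # sign of the product
    A, E, SC = 0, 0, n               # A and E cleared, SC = number of multiplier bits
    while SC > 0:
        if Q & 1:                    # Qn = 1: add the multiplicand to the partial product
            total = A + B
            E = (total >> n) & 1
            A = total & mask
        # shr EAQ: shift E, A and Q one position to the right
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (E << (n - 1))
        E = 0
        SC -= 1
    return sign, (A << n) | Q        # double-length product held in A and Q

# 23 x 19 = 437, both operands positive (sign bits 0)
print(shift_add_multiply(23, 19, 0, 0))    # (0, 437)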
Booth Multiplication Algorithm:(multiplication of 2’s complement data):
The Booth algorithm gives a procedure for multiplying binary integers in signed-
2's complement representation.
The Booth algorithm requires examination of the multiplier bits and shifting of the
partial product. Prior to the shifting, the multiplicand may be added to the
partial product, subtracted from the partial product, or left unchanged according
to the following rules:
1. The multiplicand is subtracted from the partial product upon encountering the
first least significant 1 in a string of 1's in the multiplier.
2. The multiplicand is added to the partial product upon encountering the first 0
(provided that there was a previous 1) in a string of 0's in the multiplier.
3. The partial product does not change when the multiplier bit is identical to the
previous multiplier bit.

Hardware implementation of Booth algorithm Multiplication:


The hardware implementation of the Booth algorithm requires the register configuration
shown in figure (n). This is similar to the addition and subtraction hardware except that the sign
bits are not separated from the rest of the registers. To show this difference, we rename
registers A, B, and Q as AC, BR, and QR, respectively. Qn designates the least significant
bit of the multiplier in register QR. An extra flip-flop Qn+1 is appended to QR to facilitate a
double bit inspection of the multiplier. The flowchart for Booth algorithm is shown in Figure
(o).
Hardware Algorithm for Booth Multiplication:
AC and the appended bit Qn+1 are initially cleared to 0 and the sequence counter SC is set
to a number n equal to the number of bits in the multiplier. The two bits of the multiplier in
Qn and Qn+1 are inspected.
i. If the two bits are equal to 10, it means that the first 1 in a string of 1's has been
encountered. This requires a subtraction of the multiplicand from the partial product in AC.
ii. If the two bits are equal to 01, it means that the first 0 in a string of 0's has been
encountered.
This requires the addition of the multiplicand to the partial product in AC.
iii. When the two bits are equal, the partial product does not change.
iv. The next step is to shift right the partial product and the multiplier (including bit Qn+1).
This is an arithmetic shift right (ashr) operation which shifts AC and QR to the right and leaves
the sign bit in AC unchanged. The sequence counter is decremented and the
computational loop is repeated n times.
Example: multiplication of ( - 9) x ( - 13) = + 117 is shown below. Note that the multiplier in QR is
negative and that the multiplicand in BR is also negative. The 10-bit product appears in AC and
QR and is positive.
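A Python sketch of this hardware algorithm is given below. It mirrors AC, BR, QR, Qn+1 and SC with ordinary integers and reproduces the (-9) x (-13) = +117 example; the word size n and the function name are assumptions made for illustration:

def booth_multiply(BR, QR, n=5):
    mask = (1 << n) - 1
    AC, Qn1, SC = 0, 0, n            # AC and Qn+1 cleared, SC = n
    BR &= mask                       # operands kept in n-bit 2's complement form
    QR &= mask
    while SC > 0:
        pair = ((QR & 1) << 1) | Qn1
        if pair == 0b10:             # first 1 in a string of 1's: AC <- AC - BR
            AC = (AC - BR) & mask
        elif pair == 0b01:           # first 0 in a string of 0's: AC <- AC + BR
            AC = (AC + BR) & mask
        # ashr (AC & QR): arithmetic shift right, sign bit of AC unchanged
        Qn1 = QR & 1
        QR = (QR >> 1) | ((AC & 1) << (n - 1))
        AC = (AC >> 1) | (AC & (1 << (n - 1)))
        SC -= 1
    product = (AC << n) | QR         # 2n-bit product held in AC and QR
    return product - (1 << 2 * n) if product >> (2 * n - 1) else product

print(booth_multiply(-9, -13))       # 117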

Division Algorithms:
Division of two fixed-point binary numbers in signed-magnitude representation is done with
paper and pencil by a process of successive compare, shift, and subtract operations.
The division process is illustrated by a numerical example in the below figure (q).
 The divisor B consists of five bits and the dividend A consists of ten bits. The five most
significant bits of the dividend are compared with the divisor. Since the 5-bit number is smaller
than B, we try again by taking the six most significant bits of A and comparing this number with
B. The 6-bit number is greater than B, so we place a 1 for the quotient bit. The divisor is then
shifted once to the right and subtracted from the dividend.

 The difference is called a partial remainder because the division could have stopped here to
obtain a quotient of 1 and a remainder equal to the partial remainder. The process is continued by
comparing a partial remainder with the divisor.

• If the partial remainder is greater than or equal to the divisor, the quotient bit is equal to 1. The
divisor is then shifted right and subtracted from the partial remainder.

• If the partial remainder is smaller than the divisor, the quotient bit is 0 and no subtraction is
needed. The divisor is shifted once to the right in any case. Note that the result gives both a
quotient and a remainder.
Hardware Implementation for Signed-Magnitude Data:
The hardware for implementing the division operation is identical to that required for
multiplication.
 The divisor is stored in the B register and the double-length dividend is stored in registers A
and Q. The dividend is shifted to the left and the divisor is subtracted by adding its 2's
complement value. The information about the relative magnitude is available in E.
 If E = 1, it signifies that A≥B. A quotient bit 1 is inserted into Q, and the partial remainder is
shifted to the left to repeat the process.
 If E = 0, it signifies that A < B so the quotient in Qn remains a 0. The value of B is then
added to restore the partial remainder in A to its previous value. The partial remainder is shifted
to the left and the process is repeated again until all five quotient bits are formed.
 Note that while the partial remainder is shifted left, the quotient bits are shifted also and after
five shifts, the quotient is in Q and the final remainder is in A.

The sign of the quotient is determined from the signs of the dividend and the divisor. If the two
signs are alike, the sign of the quotient is plus. If they are unlike, the sign is minus. The sign of
the remainder is the same as the sign of the dividend.
Divide Overflow
 The division operation may result in a quotient with an overflow. This is not a problem when
working with paper and pencil but is critical when the operation is implemented with hardware.
This is because the length of registers is finite and will not hold a number that exceeds the
standard length.
 To see this, consider a system that has 5-bit registers. We use one register to hold the divisor
and two registers to hold the dividend. From the example shown above, we note that the
quotient will consist of six bits if the five most significant bits of the dividend constitute a number
greater than the divisor. The quotient is to be stored in a standard 5-bit register, so the overflow
bit will require one more flip-flop for storing the sixth bit.
 This divide-overflow condition must be avoided in normal computer operations because the
entire quotient will be too long for transfer into a memory unit that has words of standard length,
that is, the same as the length of registers.
 This condition detection must be included in either the hardware or the software of the
computer, or in a combination of the two.
When the dividend is twice as long as the divisor:
i. A divide-overflow condition occurs if the high-order half of the dividend constitutes a
number greater than or equal to the divisor.
ii. A division by zero must be avoided. This occurs because any dividend will be greater than or
equal to a divisor which is equal to zero. The overflow condition is usually detected when a special
flip-flop is set. We will call it a divide-overflow flip-flop and label it DVF.
Hardware Algorithm:
1. The dividend is in A and Q and the divisor in B. The sign of the result is
transferred into Qs to be part of the quotient. A constant is set into the sequence counter SC to
specify the number of bits in the quotient.
2. A divide-overflow condition is tested by subtracting the divisor in B from the half of
the dividend stored in A. If A ≥ B, the divide-overflow flip-flop DVF is set and the
operation is terminated prematurely. If A < B, no divide overflow occurs, so the value of the
dividend is restored by adding B to A.
3. The division of the magnitudes starts by shifting the dividend in AQ to the left with
the high-order bit shifted into E. If the bit shifted into E is 1, we know that EA > B because EA
consists of a 1 followed by n-1 bits while B consists of only n-1 bits. In this case, B must be
subtracted from EA and 1 inserted into Qn for the quotient bit.
4. If the shift-left operation inserts a 0 into E, the divisor is subtracted by adding its 2's
complement value and the carry is transferred into E. If E = 1, it signifies that A ≥ B; therefore,
Qn is set to 1. If E = 0, it signifies that A < B and the original number is restored by adding B to
A. In the latter case we leave a 0 in Qn.

This process is repeated with registers EAQ. After n times, the quotient is
formed in register Q and the remainder is found in register A.
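The complete restoring procedure, including the divide-overflow test, can be sketched in Python as follows. The register width n, the function name, and the use of an ordinary trial subtraction in place of the add-2's-complement-and-test-E step are simplifications assumed for illustration; A and Q hold the double-length dividend and B the divisor, all as magnitudes:

def restoring_divide(A, Q, B, n=5):
    mask = (1 << n) - 1
    if B == 0 or A >= B:             # divide overflow (or division by zero): set DVF
        return None, None, True      # (quotient, remainder, DVF)
    SC = n
    while SC > 0:
        # shl EAQ: shift the dividend left; the high-order bit of A goes into E
        E = (A >> (n - 1)) & 1
        A = ((A << 1) & mask) | ((Q >> (n - 1)) & 1)
        Q = (Q << 1) & mask
        if E == 1:                   # EA > B: subtract B and set the quotient bit
            A = (A - B) & mask
            Q |= 1
        else:
            diff = A - B             # trial subtraction stands in for add-2's-complement
            if diff >= 0:            # A >= B: keep the difference, Qn = 1
                A = diff
                Q |= 1
            # otherwise A keeps its previous (restored) value and Qn stays 0
        SC -= 1
    return Q, A, False               # quotient in Q, remainder in A, DVF clear

# 448 / 17: double-length dividend A:Q = 01110:00000, divisor B = 10001
print(restoring_divide(0b01110, 0b00000, 0b10001))   # (26, 6, False) -> 26 remainder 6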
