
Computer Organization and Architecture

Computer Architecture:
Computer Architecture is a functional description of the requirements and design implementation for the various parts of a computer. It deals with the functional behaviour of the computer system. It comes before the computer organization when designing a computer.
Difference Between Computer Organization and Computer Architecture

1. Architecture describes what the computer does; organization describes how it does it.
2. Computer architecture deals with the functional behaviour of the computer system; computer organization deals with the structural relationships.
3. Architecture deals with high-level design issues; organization deals with low-level design issues.
4. Architecture indicates the hardware; organization indicates the performance.
5. For designing a computer, the architecture is fixed first; the organization is decided after the architecture.
Introduction to Generations of Computers
The basic function performed by a computer is the execution of a program. A program is a sequence of
instructions, which operates on data to perform certain tasks. In modern digital computers data is
represented in binary form using two symbols, 0 and 1, which are called binary digits or bits. But the data
we deal with consists of numeric data and characters such as decimal digits 0 to 9, alphabets A to Z,
arithmetic operators (e.g. +, -, etc.), relational operators (e.g. =, >, etc.), and many other special characters
(e.g. ;, @, {, ], etc.). To represent these characters, a collection of eight bits, called a byte, is used; one byte
represents one character internally. Most computers use two bytes or four bytes to represent numbers
(positive and negative) internally.
Another term commonly used in computing is the word. A word may be defined as the unit of
information which a computer can process, or transfer, at a time. A word is generally equal to the number
of bits transferred between the central processing unit and the main memory in a single step. It can also be
defined as the basic unit of storage of integer data in a computer. Normally, a word is 8, 16,
32 or 64 bits. Terms like 32-bit computer or 64-bit computer refer to the word size of the
computer.
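As a quick illustration of these ideas, the snippet below (a sketch, using Python's built-in conversions) shows one character occupying a single 8-bit byte, and the largest unsigned value each common word size can hold:

```python
# One byte per character: 'A' has code 65, which fits in 8 bits.
ch = 'A'
code = ord(ch)
print(code, format(code, '08b'))     # 65 01000001

# Largest unsigned integer that fits in each common word size.
for bits in (8, 16, 32, 64):
    print(bits, 2**bits - 1)
```

The `2**bits - 1` pattern is just the all-ones bit pattern for that word size; signed representations split the same range between negative and positive values.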

HISTORY OF COMPUTERS
This section gives basic information about past technological development trends in computers and their
projections into the future. If we want to know about computers completely, we must start from the history of
computers and look into the details of the various technological and intellectual breakthroughs. These are
essential to give us a feel for how much work and effort has been done to get the computer into this shape.

The ancestors of the modern-age computer were mechanical and electro-mechanical devices. This ancestry
can be traced as far back as the 17th century, when the first machine capable of performing the four mathematical
operations, viz. addition, subtraction, division and multiplication, appeared.

1. First Generation Electronic Computers (1937-1953) ~ Vacuum Tubes


Three machines have been promoted at various times as the first electronic computers. These machines
used electronic switches, in the form of vacuum tubes, instead of electromechanical relays. In principle
the electronic switches would be more reliable, since they would have no moving parts that would wear out,
but the technology was still new at that time and the tubes were comparable to relays in reliability. Electronic
components had one major benefit, however: they could ``open'' and ``close'' about 1,000 times faster than
mechanical switches.
The first general purpose programmable electronic computer was the Electronic Numerical Integrator and
Computer (ENIAC), built by J. Presper Eckert and John V. Mauchly at the University of Pennsylvania. Eckert,
Mauchly, and John von Neumann, a consultant to the ENIAC project, began work on a new machine before
ENIAC was finished.

ENIAC was controlled by a set of external switches and dials; to change the program required physically
altering the settings on these controls. These controls also limited the speed of the internal electronic
operations. Through the use of a memory that was large enough to hold both instructions and data, and
using the program stored in memory to control the order of arithmetic operations, EDVAC was able to run
orders of magnitude faster than ENIAC. By storing instructions in the same medium as data, designers could
concentrate on improving the internal structure of the machine without worrying about matching it to the
speed of an external control.
The trends encountered during the era of first-generation computers were:
• Control was centralized in a single CPU, and all operations required
direct intervention of the CPU.
• Use of ferrite-core main memory started during this time.
• Concepts such as virtual memory and index registers were introduced (you will learn more
about these terms in advanced courses).
• Punched cards were used as input devices.
• Magnetic tapes and magnetic drums were used as secondary memory.
• Binary code, or machine language, was used for programming.
• Advent of the von Neumann architecture.

2. Second Generation (1954-1962) ~ Transistors


The second generation saw several important developments at all levels of computer system design, from
the technology used to build the basic circuits to the programming languages used to write scientific
applications.

Electronic switches in this era were based on discrete diode and transistor technology with a switching time
of approximately 0.3 microseconds. The first machines to be built with this technology include TRADIC at
Bell Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Memory technology was based on magnetic
cores which could be accessed in random order, as opposed to mercury delay lines, in which data was stored
as an acoustic wave that passed sequentially through the medium and could be accessed only when the data
moved by the I/O interface.
During this second generation many high level programming languages were introduced, including FORTRAN
(1956), ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704
and its successors, the 709 and 7094. The latter introduced I/O processors for better throughput between
I/O devices and main memory.
The second generation also saw the first two supercomputers designed specifically for numeric
processing in scientific applications. The term ``supercomputer'' is generally reserved for a machine that is
an order of magnitude more powerful than other machines of its era. Two machines of the 1950s deserve
this title. The Livermore Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early
examples of machines that overlapped memory operations with processor operations and had primitive
forms of parallel processing.

3. Third Generation (1963-1972) ~ Integrated Circuits


The third generation brought huge gains in computational power. Innovations in this era include the use of
integrated circuits, or ICs (semiconductor devices with several transistors built into one physical component),
semiconductor memories starting to be used instead of magnetic cores, microprogramming as a technique
for efficiently designing complex processors, the coming of age of pipelining and other forms of parallel
processing, and the introduction of operating systems and time-sharing.

The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit
(or ``chip''), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to 100 devices
per chip. Multi-layered printed circuits were developed and core memory was replaced by faster, solid-state
memories. Computer designers began to take advantage of parallelism by using multiple functional units,
overlapping CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream and
the data stream. The SOLOMON computer, developed by Westinghouse Corporation, and the ILLIAC IV,
jointly developed by Burroughs, the Department of Defense and the University of Illinois, were representative
of the first parallel computers.

4. Fourth Generation (1972-1984) ~ Microprocessors


The next generation of computer systems saw the use of large scale integration (LSI - 1000 devices per chip)
and very large scale integration (VLSI - 100,000 devices per chip) in the construction of computing elements.
At this scale an entire processor could fit onto a single chip, and for simple systems the entire computer
(processor, main memory, and I/O controllers) could fit on one chip. Gate delays dropped to about 1 ns per
gate.

Semiconductor memories replaced core memories as the main memory in most systems; until this time the
use of semiconductor memory in most systems was limited to registers and cache. A variety of parallel
architectures began to appear; however, during this period the parallel computing efforts were of a mostly
experimental nature and most computational science was carried out on vector processors. Microcomputers
and workstations were introduced and saw wide use as alternatives to time-shared mainframe computers.
Developments in software include very high level languages such as FP (functional programming) and Prolog
(programming in logic). These languages tend to use a declarative programming style as opposed to the
imperative style of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a mathematical
specification of what should be computed, leaving many details of how it should be computed to the
compiler and/or runtime system. These languages are not yet in wide use, but are very promising as
notations for programs that will run on massively parallel computers (systems with over 1,000 processors).
Compilers for established languages started to use sophisticated optimization techniques to improve code,
and compilers for vector processors were able to vectorize simple loops (turn loops into single instructions
that would initiate an operation over an entire vector).

Two important events marked the early part of this period: the development of the C programming
language and the UNIX operating system, both at Bell Labs. In 1972, Dennis Ritchie, seeking to meet the
design goals of CPL and to generalize Thompson's B, developed the C language. Thompson and Ritchie then
used C to write a version of UNIX for the DEC PDP-11. This C-based UNIX was soon ported to many different
computers, relieving users from having to learn a new operating system each time they change computer
hardware. UNIX or a derivative of UNIX is now a de facto standard on virtually every computer system.

5. Fifth Generation (1984-1990) ~ Artificial Intelligence


The development of the next generation of computer systems is characterized mainly by the acceptance of
parallel processing. Until this time parallelism was limited to pipelining and vector processing, or at most to
a few processors sharing jobs. The fifth generation saw the introduction of machines with hundreds of
processors that could all be working on different parts of a single program.

Other new developments were the widespread use of computer networks and the increasing use of single-
user workstations. Prior to 1985 large scale parallel processing was viewed as a research goal, but two
systems introduced around this time are typical of the first commercial products to be based on parallel
processing. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module
(but each processor had its own local cache). The machine was designed to compete with the DEC VAX-780
as a general purpose Unix system, with each processor working on a different user's job.

The Intel iPSC-1, nicknamed ``the hypercube'', took a different approach. Instead of using one memory
module, Intel connected each processor to its own memory and used a network interface to connect
processors. This distributed memory architecture meant memory was no longer a bottleneck and large
systems (using more processors) could be built. Toward the end of this period a third type of parallel
processor was introduced to the market. In this style of machine, known as a data-parallel or SIMD, there
are several thousand very simple processors. All processors work under the direction of a single control unit;
i.e. if the control unit says ``add a to b'' then all processors find their local copy of a and add it to their local
copy of b.
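The "add a to b on every processor" behaviour can be mimicked in ordinary code, with each list index standing in for one simple processor's local copies of a and b (a toy sketch, not real SIMD hardware):

```python
# Toy SIMD sketch: each index i plays the role of one simple processor
# holding local copies a[i] and b[i]. The single "instruction" below is
# applied to every element in lockstep, as if broadcast by a control unit.
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
a = [x + y for x, y in zip(a, b)]   # control unit says "add a to b"
print(a)                            # [11, 22, 33, 44]
```

On real data-parallel machines the additions happen simultaneously in hardware; here they are sequential, but the programming model (one instruction, many data elements) is the same.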

Scientific computing in this period was still dominated by vector processing. Most manufacturers of vector
processors introduced parallel models, but there were very few (two to eight) processors in these parallel
machines. In the area of computer networking, both wide area network (WAN) and local area network (LAN)
technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing
environment toward a distributed computing environment in which each user has their own workstation for
relatively simple tasks (editing and compiling programs, reading mail) but sharing large, expensive resources
such as file servers and supercomputers. RISC technology (a style of internal organization of the CPU) and
plummeting costs for RAM brought tremendous gains in computational power of relatively low cost
workstations and servers. This period also saw a marked increase in both the quality and quantity of scientific
visualization.

6. Sixth Generation (1990 - )


This generation began with many gains in parallel computing, both in the hardware area and in
improved understanding of how to develop algorithms to exploit diverse, massively parallel architectures.
Parallel systems now compete with vector processors in terms of total computing power, and most expect
parallel systems to dominate the future.

Combinations of parallel/vector architectures are well established, and one corporation (Fujitsu) has
announced plans to build a system with over 200 of its high end vector processors. Workstation technology
has continued to improve, with processor designs now using a combination of RISC, pipelining, and parallel
processing. As a result it is now possible to purchase a desktop workstation for about $30,000 that has the
same overall computing power (100 megaflops) as fourth generation supercomputers.

One of the most dramatic changes in the sixth generation will be the explosive growth of wide area
networking. Network bandwidth has expanded tremendously in the last few years and will continue to
improve for the next several years. T1 transmission rates are now standard for regional networks, and the
national ``backbone'' that interconnects regional networks uses T3. Networking technology is becoming
more widespread than its original strong base in universities and government laboratories as it is rapidly
finding application in K-12 education, community networks and private industry.

Summary

First Generation (1942-1955) ~ Vacuum Tubes or Valves
Ø used vacuum tubes as electronic circuits
Ø magnetic drum for primary storage
Ø mercury delay lines for memory
Ø punched cards used as secondary storage
Ø machine-level programming used
Ø operating speed measured in milliseconds
Examples: Mark-I, UNIVAC, ENIAC

Second Generation (1955-1964) ~ Transistors
Ø magnetic core memory used as internal storage
Ø magnetic tapes used as secondary storage
Ø somewhat faster I/O devices
Ø high-level languages used for programming
Ø processing speed measured in microseconds
Examples: IBM 1401, ICL 2950/10 etc.

Third Generation (1964-1975) ~ ICs (Integrated Circuits)
Ø semiconductor memory used as primary storage
Ø magnetic discs used as secondary storage
Ø massive use of high-level languages
Ø processing speed increased to nanoseconds and even faster
Examples: IBM 360 series, UNIVAC 9000 etc.

Fourth Generation (1975-1984) ~ VLSI or Microprocessor
Ø massive use of magnetic and optical storage devices with capacity of more than 100 GB
Ø advancement in software and high-level languages
Ø use of fourth-generation languages (4GL)
Ø operating speed increased beyond picoseconds and measured in MIPS (millions of instructions per second)
Examples: IBM PC, Pentium PC, Apple/Macintosh etc.

Fifth Generation (1990+) ~ Bio-Chips
Ø AI will make computers intelligent and knowledge-based
Ø very high speed; PROLOG (programming language)

CLASSIFICATION OF COMPUTERS

1. MICRO COMPUTER
A microcomputer's CPU is a microprocessor. The microcomputer originated in the late 1970s. The first
microcomputers were built around 8-bit microprocessor chips. This means the chip can retrieve
instructions/data from storage and manipulate and process 8 bits of data at a time, or we can say that the
chip has a built-in 8-bit data transfer path.

An improvement on 8-bit chip technology came in the early 1980s, when a series of 16-bit chips, namely the
8086 and 8088, were introduced by Intel Corporation, each one an advancement over the other.
The 8088 is an 8/16-bit chip, i.e. an 8-bit path is used to move data between the chip and primary storage (external
path) at a time, but processing is done within the chip using a 16-bit path (internal path) at a time.

The 8086 is a 16/16-bit chip, i.e. the internal and external paths are both 16 bits wide. Both these chips can
support a primary storage capacity of up to 1 megabyte (MB). These computers are usually divided into
desktop models and laptop models. They are quite limited in what they can do when compared with
the larger models discussed below: they can only be used by one person at a time, they are
much slower than the larger computers, and they cannot store nearly as much information. But they
are excellent when used in small businesses, homes, and school classrooms. These computers are
inexpensive and easy to use. They have become an indispensable part of modern life. Thus they are:
• Used for memory-intense and graphic-intense applications
• Single-user machines

2. MINI COMPUTER

Minicomputers are much smaller than mainframe computers and they are also much less expensive.
The cost of these computers can vary from a few thousand dollars to several hundred thousand dollars.
They possess most of the features found on mainframe computers, but on a more limited scale. They
can still have many terminals, but not as many as the mainframes. They can store a tremendous amount
of information, but again usually not as much as the mainframe. Medium and small businesses typically
use these computers. Thus they:
• Fit somewhere between mainframes and PCs
• Are often used as file servers in networks

3. MAINFRAME COMPUTER

Mainframe computers are very large, often filling an entire room. They can store enormous amounts of information, can perform
many tasks at the same time, can communicate with many users at the same time, and are very
expensive. The price of a mainframe computer frequently runs into millions of dollars.

Mainframe computers usually have many terminals connected to them. These terminals look like small
computers but they are only devices used to send and receive information from the actual computer
using wires. Terminals can be located in the same room with the mainframe computer, but they can
also be in different rooms, buildings, or cities. Large businesses, government agencies, and universities
usually use this type of computer. Thus they are:
• The most common type of large computer
• Used by many people accessing the same databases
• Able to support many terminals
• Used in large organizations like banks and insurance companies

4. SUPER COMPUTER

The upper end of the state-of-the-art mainframe machine is the supercomputer. These are amongst the
fastest machines in terms of processing speed and use multiprocessing techniques, where a number of
processors are used to solve a problem. They are built to minimize the distance between points for very
fast operation, and are used for extremely complicated computations. Thus:
o Largest and most powerful
o Used by scientists and engineers
o Very expensive
o Would be found in places like Los Alamos or NASA
What is a CISC Microprocessor?
Ans.: CISC stands for Complex Instruction Set Computer. It is a type of design for
computers, popularized by Intel. A CISC-based computer will have shorter programs, which are made up of symbolic machine
language. The number of instructions on a CISC processor is larger.
What is a RISC Microprocessor?
Ans.: RISC stands for Reduced Instruction Set Computer architecture. The properties of this design are:
(i) A large number of general-purpose registers, and use of compilers to optimize register usage.
(ii) A limited and simple instruction set.
(iii) An emphasis on optimizing the instruction pipeline.

What are the different types of Memory?


Ans.: The memory in a computer is made up of semiconductors. Semiconductor memories are of two
types :
(1) RAM : Random Access Memory
(2) ROM : Read Only Memory
(1) RAM : The read/write (R/W) memory of a computer is called RAM. The user can write information
to it and read information from it. With random access, any memory location can be accessed in random
order without going through any other memory location. RAM is a volatile memory: information
written to it can be accessed only as long as the power is on. As soon as the power is off, it cannot be
accessed. There are two basic types of RAM :
(i) Static RAM (ii) Dynamic RAM
(i) S-RAM retains stored information as long as the power supply is on. Static RAMs are costlier and
consume more power. They have higher speed than D-RAMs. They store information in flip-flops.
(ii) D-RAM loses its stored information in a very short time (a few milliseconds) even when the power supply is on. In
a DRAM, a binary state is stored on the gate-to-source stray capacitance of a transistor; the presence of charge
on the stray capacitance represents 1, and its absence 0.
D-RAMs are cheaper and slower.
Some other RAMs are :
(a) EDO (Extended Data Output) RAM : In an EDO RAM, any memory location can be accessed.
It stores 256 bytes of data into latches. The latches hold the next 256 bytes of information so
that in most programs, which are executed sequentially, the data are available without wait states.
(b) SDRAM (Synchronous DRAM), SGRAM (Synchronous Graphic RAM) : These RAM chips use
the same clock rate as the CPU. They transfer data when the CPU expects it to be ready.
(c) DDR-SDRAM (Double Data Rate SDRAM) : This RAM transfers data on both edges of the clock.
Therefore the data transfer rate is doubled.
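Back-of-the-envelope arithmetic makes the doubling concrete. The clock rate and bus width below are assumed figures, not taken from any particular part:

```python
# Rough peak-bandwidth arithmetic (illustrative numbers, not a datasheet).
# Transfers per second = clock rate x clock edges used per cycle.
clock_hz  = 100_000_000                 # assumed 100 MHz memory bus clock
bus_bytes = 8                           # assumed 64-bit data bus

sdram_rate = clock_hz * 1 * bus_bytes   # SDRAM: one transfer per cycle
ddr_rate   = clock_hz * 2 * bus_bytes   # DDR: both clock edges
print(sdram_rate, ddr_rate)             # DDR is exactly double
```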

(2) ROM : Read Only Memory : It is a non-volatile memory, i.e. the information stored in it is not lost even if the
power supply goes off. It is used for permanent storage of information. It also possesses the random access
property. Information cannot be written into a ROM by users/programmers. In other words, the contents
of ROMs are decided by the manufacturer. The following types of ROM are listed below :
(i) PROM : A programmable ROM. Its contents are decided by the user. The user can store
permanent programs, data etc. in a PROM. The data is fed into it using a PROM programmer.
(ii) EPROM : An EPROM is an erasable PROM. The stored data in EPROMs can be erased by exposing
them to UV light for about 20 minutes. It is not easy to erase because the EPROM IC has to be removed
from the computer and exposed to UV light. The entire data is erased, not user-selected portions.
EPROMs are cheap and reliable.
(iii) EEPROM (Electrically Erasable PROM) : The chip can be erased and reprogrammed on the board
easily, byte by byte. It can be erased within a few milliseconds. There is a limit on the number of times
EEPROMs can be reprogrammed, usually around 10,000 times.
Flash Memory : An electrically erasable and programmable permanent-type memory. It uses one-transistor
memory cells, resulting in high packing density, low power consumption, lower cost and higher reliability.
It is used in digital cameras, MP3 players etc.

Explain the different types of Memory Modules.


Ans.: There are two types of memory modules :
(i) SIMM : Single Inline Memory Module
(ii) DIMM : Dual Inline Memory Module
These are small printed circuit cards (PCCs) on which several DRAM memory chips are placed. Such cards
are plugged into the system board of the computer. SIMM circuit cards have memory-chip contacts placed
on only one edge of the PCC, whereas a DIMM has contacts on both sides of the PCC.

Q.2. Explain about the System Clock.


Ans.: Every computer has a system clock, located in the microprocessor. The clock is driven by a
piece of quartz crystal. The system clock keeps the computer system coordinated. It is an electronic system
which oscillates at specified time intervals between 0 and 1. The rate at which this oscillation takes
place is the frequency of the clock; the time taken to go from 0 to 1 and back is called a clock cycle.
The speed of the system clock is measured in Hz.
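The clock rate and the clock cycle time are simple reciprocals of each other; the 2 GHz figure below is just an assumed example:

```python
# Clock rate (Hz) and clock cycle time (seconds) are reciprocals.
freq_hz = 2_000_000_000        # an assumed 2 GHz clock
cycle_s = 1 / freq_hz
print(cycle_s)                 # 0.5 nanoseconds per cycle
```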

Q.3. Explain about the System Bus.


Ans.: A bus is the electronic path between the various components; it refers to a particular set of wires.
Each wire of a bus carries one bit of information. Buses are of 3 types :
(1) Address Bus
(2) Data Bus
(3) Control Bus
(1) Address Bus : It carries the address of the memory location of required instructions and data. The address
bus is unidirectional, i.e. addresses flow in one direction, from CPU to memory. The width of the address bus determines
the maximum number of memory addresses. This capacity is measured in binary form. E.g. a 2-bit address
bus provides 2² = 4 addresses.
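The same power-of-two rule can be checked quickly in code; the bus widths below are illustrative, with 20 lines matching the 1 MB limit of the 8086/8088 mentioned earlier:

```python
# An n-line address bus can form 2**n distinct addresses.
for lines in (2, 16, 20, 24, 32):
    print(lines, 2 ** lines)
# e.g. 20 address lines -> 2**20 = 1,048,576 locations = 1 MB,
# the primary-storage limit of the 8086/8088.
```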

(2) Data Bus : The data bus is an electronic path that connects the CPU, memory and other hardware devices. The data bus
carries data from the CPU to memory or I/O devices and vice versa. It is a bidirectional bus because it can
transmit data in either direction. The processing speed of a computer increases if the data bus is wider, as it
carries more data at one time.

(3) Control Bus : Control Bus controls the memory and I/O devices. This bus is bidirectional. The CPU sends
signals on the control bus to enable the O/P of the addressed memory devices.

Data Bus Standard : Bus standard represents the architecture of a bus. Following are important data bus
standards :
(i) Industry Standard Architecture (ISA) : This bus standard was the first standard, released by IBM. It has 24
address lines and 16 data lines. It can be used only in a single-user system. The ISA bus is a low-cost bus with a
low data transfer rate. It could not take full advantage of the 32-bit microprocessor.

(ii) Micro Channel Architecture (MCA) : IBM developed the MCA bus standard. With this, bus speed was raised
from 8.33 MHz to 10 MHz, which was further increased to 20 MHz, and the bandwidth increased from 16 bits to 32
bits.
(iii) Enhanced Industry Standard Architecture (EISA) : These buses are 32-bit and helpful in
multiprogramming. Due to its low data transfer speed, ISA cannot be used for multi-tasking and multi-user
systems; EISA is appropriate for multi-user systems. The data transfer rate of EISA is double that of ISA.
The size of an EISA card is the same as that of ISA, so both EISA and ISA cards can be fixed in an EISA connector slot. EISA
connectors are quite expensive.

(iv) Peripheral Component Interconnect (PCI) : This bus standard was developed by Intel. It is a 64-bit bus and
works at 66 MHz. Earlier, a 32-bit PCI bus was developed with a speed of 33 MHz. The PCI bus has greater
speed and has 4 interrupt channels. It also has a PCI bridge through which the bus can be connected to
various other devices.

Explain the role of Expansion Slots.


Ans.: The main function of the motherboard is to enable connectivity between the various parts of a computer
and the processor and memory. Various hardware cards can be fixed on the motherboard to serve different
purposes. Motherboards have slots to fix various cards, like video cards, modems, sound cards etc.
Expansion slots on the motherboard can be used for the following purposes:
(i) To connect the internal devices of a computer, e.g. hard disk etc., to the computer bus.
(ii) To connect the computer to external devices like a mouse, printer etc.
The above functions are carried out with the help of adapters.

Structure of a Computer System


A computer is an entity that interacts in one way or another with its external environment. All of its
linkages to the external environment are classified as peripheral devices or communication lines. The
basic model of a computer is shown below. It consists of four main components: the Central Processing Unit
(CPU), memory, I/O and a bus. The bus can be a wire or a communication line; in general it can be referred
to as the system interconnection.

Functional units of a computer


[Figure: functional units of a computer. The keyboard (input), CRT (output) and disk connect through I/O interfaces to the bus; the CPU (with its ALU and control unit), the memory and the network interface also attach to the bus.]

CPU: This is the computational unit and the computer's heart. It controls the operations of the
computer and performs its data processing functions. It is usually referred to as the processor. The actions
of all components are controlled by the control unit of the CPU.
Memory: Memory is used to store the instructions, the data and the results as well. The memory unit is an integral
part of a computer system; its main function is to store the information needed by the system.
Input/output interface: It is used to move data between the computer and its external environment. The
external environment may be an input or an output device such as a printer, display, keyboard etc.
System interconnection: This is the mechanism that provides for communication among CPU,
memory and I/O; it is referred to as the system BUS. Traditionally a computer system contains a
single CPU, but multiprocessing machines use multiple CPUs sharing a single memory.
Central Processing Unit
It is the heart, or core component, of the computer. The figure below shows the basic functional components of the central
processing unit.
Its major structural components are Control unit, ALU, Registers and CPU interconnections.
Control Unit: It controls the operations of the CPU and hence the computer system.
Arithmetic logic unit (ALU): It performs the computer's data processing functions.
Registers: They form the internal memory of the CPU, providing storage internal to the CPU.
CPU interconnections: They provide the means for communication among the control unit, ALU and registers of
the CPU.

[Figure: internal structure of the CPU, showing the ALU, control unit and registers linked by the internal CPU interconnection.]

The CPU must carry out the tasks given below:


1. read the given instruction
2. decode it
3. get the operands for execution
4. process the instruction
5. give out / store the result

To carry out these tasks the CPU needs to temporarily store some data. It must remember the location of
the last instruction so that it knows where to get the next instruction. It needs to store instructions and
data temporarily while an instruction is being executed. In general, the CPU needs a small internal memory
to store instructions and data.
The CPU contains a handful of registers which act like local variables. The CPU runs instructions and
performs computations, mostly in the ALU. The registers are the only memory the CPU has. Register memory
is very fast for the CPU to access, since it resides on the CPU itself.
However, the CPU has rather limited memory: all the local memory it uses is in registers. It has very fast
access to registers, which are on board the CPU, and much slower access to RAM.
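The fetch-decode-execute cycle described above can be sketched as a short simulation. The three-instruction machine, its opcode names and the single accumulator register are all hypothetical, chosen only to make the steps concrete:

```python
# Minimal fetch-decode-execute sketch (hypothetical 3-instruction machine).
# "memory" holds (opcode, operand) pairs; "acc" is the only register.
memory = [("LOAD", 7), ("ADD", 5), ("HALT", 0)]
pc, acc = 0, 0                    # program counter and accumulator

while True:
    opcode, operand = memory[pc]  # steps 1-2: read and decode
    pc += 1                       # remember where the next instruction is
    if opcode == "LOAD":          # steps 3-4: get operand, process it
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        break

print(acc)                        # step 5: result left in the register -> 12
```

Note how the program counter (pc) is exactly the "remembered location of the last instruction" the text describes.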

Arithmetic logic unit (ALU)


The ALU is a collection of logic circuits designed to perform arithmetic operations (addition, subtraction,
multiplication, and division) and logical operations (NOT, AND, OR, and exclusive-OR). It is basically the
calculator of the CPU. When an arithmetic or logical operation is required, the operand values and the
command are sent to the ALU for processing.
Control Unit
The purpose of the control unit is to control the system operations by routing the selected data items to the
selected processing hardware at the right time. The control unit acts as the nerve centre for the other units.
Instruction decoder
All instructions are stored as binary values. The instruction decoder receives an instruction from memory,
interprets the value to see which operation is to be performed, and tells the ALU and the registers which
circuits to energize in order to perform that function.
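As an illustration, decoding amounts to splitting the binary value into fields. The 8-bit format sketched below, a 4-bit opcode followed by a 4-bit operand, is purely hypothetical:

```python
def decode(instruction):
    """Split a hypothetical 8-bit instruction into its opcode and operand fields."""
    opcode = (instruction >> 4) & 0xF   # upper 4 bits select which circuits to energize
    operand = instruction & 0xF         # lower 4 bits name a register or small value
    return opcode, operand

# decode(0b0011_0101) yields opcode 3 and operand 5
```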
Registers: The registers are used to store the data, addresses, and flags that are in use by the CPU.

Memory Units
Memory is basically a large array of bytes. The main function of a memory unit is to store the information
needed by the system. The information stored can be data, instructions (that is, programs), or garbage.
Memory locations that do not contain any valid data hold arbitrary values, which are termed garbage data.
The memory unit is an integral part of a computer system.

The system performance depends largely on the organization, storage capacity and speed of operation
of the memory system. The CPU can read from or write to the memory, but doing so is much slower than
accessing registers. Nevertheless, memory is needed because registers simply hold too little information.

Most of the memory is in RAM, which can be thought of as a large array of bytes. In an array we refer to
individual elements using an index; in computer organization, indexes are more commonly referred to as
addresses. Addresses are the numbers used to identify successive locations. A word can be accessed by
specifying its address together with a command that performs the storage or retrieval.

System showing the registers and memory as an array of bytes

The number of bits in each word is called the word length of the computer. Large computers usually have 32
or more bits in a word; the word length of microcomputers ranges from 8 to 32 bits. The capacity of the
memory is one factor that decides the size of the computer. Data are usually manipulated within the machine
in units of words, multiples of words or parts of words. During execution the program must reside in the main
memory. Instructions and data are written into or read out of the memory under the control of the processor.
Most memory is byte-addressable, meaning that each address refers to one byte of memory.
The bulk of the memory is held in a separate device called RAM, usually called physical memory. RAM
stores programs as well as data. The CPU fetches an instruction from RAM into a register referred to as the
instruction register, determines what instruction it holds, and executes it. Executing the instruction may
require loading data from RAM into the CPU or storing data from the CPU to RAM. The time required to
access one word is called the memory access time.
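The view of RAM as a byte array can be sketched directly: an address is just an index, and a multi-byte word is read by combining successive bytes. The 256-byte size and little-endian byte order below are assumptions for illustration:

```python
ram = bytearray(256)            # a tiny 256-byte physical memory

def store_word(addr, value):
    """Write a 32-bit word as four successive bytes (little-endian order assumed)."""
    for i in range(4):
        ram[addr + i] = (value >> (8 * i)) & 0xFF

def load_word(addr):
    """Read four successive bytes back into one 32-bit word."""
    return sum(ram[addr + i] << (8 * i) for i in range(4))

store_word(100, 0xDEADBEEF)
# load_word(100) recovers 0xDEADBEEF, while ram[100] alone holds only one byte
```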
Classification of Memory system of a Computer
Memory system of a computer can be broadly classified into four groups.
 Internal Memory
Internal memory refers to the set of CPU registers. These serve as working memory, storing temporary results
during the computation process, and form a general-purpose register file for holding data as it is
processed. Since these registers are very costly, only a few of them can be provided in the CPU.

 Primary Memory
Primary memory, also called main memory, operates at electronic speeds, and the CPU can directly access
the programs stored in it. Main memory consists of a large number of semiconductor storage cells, each
capable of storing one bit of information. A word is a group of these cells. Main memory is organized so
that the contents of one word, containing n bits, can be stored or retrieved in one basic operation.

 Secondary Memory
This memory type is much larger in capacity and also much slower than the main memory. Secondary
memory stores system programs, large data files and the information which is not regularly used by the CPU.
When the capacity of the main memory is exceeded, the additional information is stored in the secondary
memory. Information from the secondary memory is accessed indirectly through the I/O programs that
transfer the information between the main memory and secondary memory. Examples of secondary
memory devices are magnetic hard disks and CD-ROMs.

 Cache Memory
The performance of a computer system is severely affected if the speed disparity between the processor
and the main memory is significant. System performance can be improved by placing a small, fast-acting
buffer memory between the processor and the main memory. This buffer memory is called cache
memory. The cost of this memory is very high.
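The benefit of such a buffer can be sketched with a toy direct-mapped cache; the four-line size and the address-to-line mapping below are arbitrary choices for illustration:

```python
class Cache:
    """A toy direct-mapped cache with four one-word lines in front of slow memory."""
    def __init__(self, backing):
        self.backing = backing            # the large, slow main memory
        self.lines = [None] * 4           # each line holds a (tag, value) pair
        self.hits = self.misses = 0

    def read(self, addr):
        index, tag = addr % 4, addr // 4  # map the address to a line and a tag
        line = self.lines[index]
        if line is not None and line[0] == tag:
            self.hits += 1                # fast path: the word is already buffered
            return line[1]
        self.misses += 1                  # slow path: fetch from main memory
        value = self.backing[addr]
        self.lines[index] = (tag, value)  # keep a copy for the next access
        return value

main_memory = list(range(100))
cache = Cache(main_memory)
cache.read(10)   # miss: goes to main memory
cache.read(10)   # hit: served from the fast buffer
```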

Input/Output and I/O Interface


Any movement of information into or out of the computer system is considered input/output. The CPU and
its supporting circuitry provide the I/O methods. The input and output units are usually combined under the
term input-output unit (I/O). For example, a video terminal combines a keyboard for input with a cathode
ray tube display for output.

There is a wide variety of peripherals, which deliver different amounts of data, run at different speeds and
present data in different formats. All I/O peripherals are slower than the CPU and RAM; hence proper I/O
interfaces are needed.
 Input Devices:
The computer accepts coded information through the input unit, which is capable of reading the
instructions and data to be processed. The most commonly used input device is the keyboard of a video
terminal, which is electronically connected to the processing part of the computer. The keyboard is wired so
that whenever a key is pressed, the corresponding letter or digit is automatically translated into its
corresponding code and sent directly either to memory or to the processor.
Output Devices
Output unit displays the processed results. Examples are video terminals and graphic displays.
I/O devices do not alter the information content or the meaning of the data. Some devices can be used as
output only e.g. graphic displays.
Following are the Input/Output Techniques
o Programmed
o Interrupt driven
o Direct Memory Access (DMA)

System Interconnection / BUS :


“A bus is a communication pathway connecting two or more devices.”
A key characteristic of a bus is that it is a shared transmission medium. Multiple devices connect to the bus,
and a signal transmitted by any one device is available for reception by all other devices attached to the bus.
If two devices transmit during the same time period, their signals will overlap and become garbled. Thus,
only one device at a time can successfully transmit. The communication between the external environment
and the CPU is established through the system bus. System buses are classified into three different types,
depending on whether they carry data, control or address information, as indicated below.

[Figure: CPU showing the internal components of the ALU (status flags, shifter, complementer, and
arithmetic & Boolean logic) connected over the internal CPU bus to the register/instruction decoder and
the control unit]


The basic components of the ALU are as shown above. The arithmetic and logic unit is the core of any
processor: it performs the calculations on the input data given to it. Data must be transferred between the
various registers and the ALU, and the ALU always operates only on data held in the internal CPU memory.
The ALU is capable of performing arithmetic and Boolean operations.
An arithmetic-logic unit, or ALU, can be considered a combination of various circuits in a single circuit that
is used to execute data-processing instructions. The complexity of an ALU is determined by the way in which
its arithmetic instructions are realized. Simple ALUs that perform fixed-point addition and subtraction, as
well as logical operations, can be realized by combinational circuits.
An ALU realized using combinational logic is basically constructed from AND, OR and NOT gates; it is an
implementation of a Boolean function. A generic diagram for a combinational logic circuit of an ALU is
shown below. In general, a combinational logic circuit has inputs, which are divided into data inputs and
control inputs, and outputs. The control inputs tell the circuit what to do with the data inputs.

A generic ALU that has 2 inputs and 1 output

A typical ALU has two input ports and a result port. It also has a control input telling it which operation to
perform, for example add, subtract, AND, OR, etc. In addition it has output bits for condition codes. These
bits indicate facts about the computation, for example carry, overflow, negative or zero result. Together
these additional output bits are called the status bits, and they are used for branching operations.
ALUs may be simple and perform only a few operations: integer arithmetic like add and subtract, and
Boolean logic like AND, OR, complement, left shift, right shift and rotate. Such simple ALUs may be found
in small 4- and 8-bit processors.
For example, consider a 32-bit ALU as shown in figure 1.8. It has two 32-bit data inputs, source 1 (labeled
SRC1) and source 2 (labeled SRC2). It also has a control input, labeled C, which is the signal for addition;
these control bits tell the ALU to perform the addition operation on the data inputs.

The result of the computation is sent to the output labelled DST, which is also 32 bits wide. There are some
additional output bits labelled ST; in our example they may indicate whether the output is zero or has
overflowed.
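This 32-bit addition example can be sketched as a function returning both the DST result and a few ST status bits; the exact set of status bits varies between processors:

```python
MASK = 0xFFFFFFFF  # keep values to 32 bits

def alu_add(src1, src2):
    """Sketch of a 32-bit ALU addition producing the DST result and ST status bits."""
    full = src1 + src2
    dst = full & MASK                          # DST: the 32-bit result
    carry = full > MASK                        # carry out of bit 31
    # signed overflow: the operands share a sign that the result does not
    sign1, sign2, signd = src1 >> 31, src2 >> 31, dst >> 31
    overflow = sign1 == sign2 and signd != sign1
    return dst, {"zero": dst == 0, "carry": carry, "overflow": overflow}

# alu_add(0xFFFFFFFF, 1) wraps to 0, setting the zero and carry bits
```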
More complex ALUs support a wider range of integer operations, such as multiplication and division, and
floating-point operations such as add, subtract, multiply and divide. Some can even compute mathematical
functions like square root, sine, cosine, log, etc.
To perform arithmetic and logic operations, the necessary operands are transferred from memory to the
ALU, where one of the operands is stored temporarily in a register. This register is called the temporary
register. Each register stores one word of data.
Control Unit
The control unit is the portion of the processor that actually causes things to happen. Its purpose is to
control the system operations by routing the selected data items to the selected processing hardware at
the right time, and it acts as the nerve centre for the other units. This unit decodes and translates each
instruction and generates the necessary enable signals for the ALU and other units. The control unit has
two responsibilities: instruction interpretation and instruction sequencing.
In instruction interpretation, the control unit reads an instruction from memory, recognizes the
instruction type, gets the necessary operands and sends them to the appropriate functional unit. The
signals necessary to perform the desired operation are sent to the processing unit, and the results obtained
are sent to the specified destination.
In instruction sequencing, the control unit determines the address of the next instruction to be executed
and loads it into the program counter.
In general, I/O transfers are controlled by software instructions that identify both the devices involved
and the type of transfer, but the actual timing signals that govern the transfers are generated by the control
circuits. Similarly, the data transfer between the processor and the memory is controlled by the control circuits.

The operation of the computer can be summarized as below:


 The computer accepts information through the input unit and transfers it to the memory.
 Information stored in the memory is fetched into the arithmetic and logic unit to perform the desired
operations.
 Processed information is transferred to the output unit.
 All activities inside the machine are controlled by a control unit.

Bus Structure
A bus consists of 1 or more wires. There's usually a bus that connects the CPU to memory and to disk and
I/O devices. Real computers usually have several busses, even though the simple computer we have
modelled only has one bus where we consider the data bus, the address bus, and the control bus as part one
larger bus.
The size of the bus is the number of wires in the bus. We can refer to individual wires or a group of adjacent
wires with subscripts. A bus can be drawn as a line with a dash across it to indicate there's more than one
wire. The dash in it is then labelled with the number of wires and the designation of those wires.

Representation of a 32-bit bus


For example, consider the bus shown in figure 1.9. The slash across the horizontal line indicates that the
line represents a bus carrying more than one wire. The slash is labeled 32, meaning the bus contains 32
wires, and the bus is labeled A31-0, denoting the 32 individual wires A0 to A31. We can then refer to, say,
A10-0 or A15-9 for some subset of the wires.
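Referring to a subset of wires like A15-9 corresponds, in software terms, to masking and shifting bits of the value carried on the bus. A small sketch:

```python
def wires(value, hi, lo):
    """Extract wires hi..lo (inclusive) of a bus value, e.g. A15-9 of an address."""
    width = hi - lo + 1
    return (value >> lo) & ((1 << width) - 1)  # shift down, then mask off the rest

address = 0xCAFE0000
a15_9 = wires(address, 15, 9)   # the seven wires A15 down to A9
```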
A bus allows any number of devices to hook up to it, but the devices connected to the bus must share it:
only one device can write to it at a time. One alternative to using a bus is to connect each pair of devices
directly. Unfortunately, for N devices this requires about N^2 connections, which may be too many.
Moreover, most devices have a fixed number of connections, which does not permit dedicated connections
to every other device. A bus does not have this problem.

Data, Address, and Control Busses


There are usually three kinds of buses. There is a 32-bit data bus, used to write or read 32 bits of data to
or from memory; a 32-bit address bus, on which the CPU specifies which address in memory to read from
or write to; and a control bus, which may consist of a single wire or multiple wires, allowing the CPU and
memory to communicate.
For example, a control signal is required to indicate when, and whether, a read or a write is to be performed.
To support two 32-bit buses, both the CPU and the memory require 64 pins or connections: 32 for data and
32 for address. Earlier, pins were in short supply, and hence it was necessary to multiplex the address and
data buses. Multiplexing uses the same wires as both address bus and data bus.
There are other kinds of buses that are used primarily for I/O devices, like USB. These are mostly high-speed
buses for external devices.

Von Neumann Architecture

The IAS machine was the first digital computer in which the von Neumann architecture was employed. The
general structure of the IAS computer, as shown above, comprises:
• A main memory, which stores both instructions and data,
• An arithmetic and logic unit (ALU) capable of operating on binary data,
• A control unit, which interprets the instructions in memory and causes them to be executed,
• Input and Output (I/O) equipment operated by the control unit.

The von Neumann Architecture is based on three key concepts:


1. Data and instructions are stored in a single read-write memory.
2. The content of this memory is addressable by location, without regard to the type of data contained
there.
3. Execution occurs in a sequential fashion from one instruction to the next, unless explicitly modified.
The CPU is the brain of the computer. Its main function is to execute programs. It has three main sections:
(i) Arithmetic & Logical Unit (ALU)
(ii) Control Unit
(iii) Accumulator & General- & Special-Purpose Registers

(i) ALU: The function of an ALU is to perform basic arithmetic & logical operations such as
(a) Addition
(b) Subtraction etc.
It cannot perform exponential, logarithmic or trigonometric operations.

(ii) Control Unit: The control unit of a CPU controls the entire operation of the computer. It also controls
all other devices such as memory and input & output devices. It fetches instructions from the memory,
decodes them, interprets them to know what tasks are to be performed & sends suitable control signals to
the other components to perform further operations.
It maintains order & directs the operation of the entire system. It controls the data flow between the CPU &
peripherals. Under the control of the CU the instructions are fetched from the memory one after another
for execution until all the instructions are executed.

Computer components Von Neumann architecture


[Figure: the CPU (containing the PC, IR, MAR, MBR, I/O AR, I/O BR and ALU) connected to a memory
holding instructions and data, and to an I/O module]
PC = Program Counter
IR = Instruction Register
MAR = Memory address register
MBR = Memory buffer register
I/O AR = I/O address register
I/O BR = I/O buffer register

(iii) Registers: A CPU contains a number of registers to store data temporarily during the execution of a
program. The number of registers differs from processor to processor. Registers are classified as follows:

(a) General-Purpose Registers: These registers store data & intermediate results during the execution of a
program. They are accessible to users through instructions if the users are working in assembly language.
(b) Accumulator: It is the most important GPR, having multiple functions. It is most efficient in data
movement and in arithmetic and logical operations, and it has some special features that the other GPRs do
not have. After the execution of an arithmetic or logical instruction the result is placed in the accumulator.

Special-Purpose Registers: A CPU contains a number of special-purpose registers for different purposes.
These are:
(a) Program Counter (PC)
(b) Stack Pointer (SP)
(c) Status Register (Flag Register)
(d) Instruction Register (IR)
(e) Index Register
(f) Memory Address Register (MAR)
(g) Memory Buffer Register (MBR)
(h) Multiplier Quotient (MQ)
(i) Instruction Buffer Register (IBR)
(a) Program Counter (PC): The PC keeps track of the address of the instruction which is to be executed next,
so it holds the address of the memory location which contains the next instruction to be fetched from the
memory.

(b) Stack Pointer (SP): The stack is a sequence of memory locations defined by the user. It is used to save
the contents of a register if they are required later during the execution of a program. The SP holds the
address of the last occupied memory location of the stack.
(c) Status Register (Flag Register): A flag register contains a number of flags, either to indicate certain
conditions arising after an ALU operation or to control certain operations. The flags which indicate a
condition are called condition flags; the flags which are used to control certain operations are called
control flags.

A typical microprocessor contains the following condition flags:


(1) Carry Flag: Indicates whether there is a carry or not.
(2) Zero Flag: Indicates whether the result is zero or non-zero.
(3) Sign Flag: Indicates whether the result is positive or negative.
(4) Parity Flag: Indicates whether the result contains an odd or an even number of 1s.
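These four flags can be sketched for a hypothetical 8-bit addition; the parity convention below (flag set when the number of 1s is even) follows the 8085/8086 but varies between processors:

```python
def flags_after_add(a, b, bits=8):
    """Condition flags after an add on a hypothetical processor `bits` wide."""
    mask = (1 << bits) - 1
    result = (a + b) & mask
    return {
        "carry": a + b > mask,                      # carry out of the top bit
        "zero": result == 0,                        # result is all zeros
        "sign": bool(result >> (bits - 1)),         # top bit set means negative
        "parity": bin(result).count("1") % 2 == 0,  # even count of 1s (8086 style)
    }

# flags_after_add(0x80, 0x80): the result wraps to 0, so carry and zero are set
```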

(d) Instruction Register : It holds the instruction to be decoded.


(e) Index Register: Index registers are used for addressing. One or more registers are designated as index
registers. The address of an operand is the sum of the contents of the index register and a constant:
instructions involving an index register contain a constant, which is added to the contents of the index
register to form the effective address.
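The effective-address calculation described here is a single addition; the register contents and constant in this sketch are arbitrary:

```python
def effective_address(index_register, constant):
    """Indexed addressing: EA = contents of the index register + the constant."""
    return index_register + constant

# with the index register holding 0x1000 and the instruction's constant being 8,
# the operand is fetched from effective address 0x1008
```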
(f) Memory Address Register (MAR): It holds the address of the instruction or data to be fetched from the
memory. The CPU transfers the address of the next instruction from the PC to the MAR; from the MAR it is
sent to the memory through the address bus.
(g) Memory Buffer Register (MBR): It holds the instruction code or data received from or sent to the
memory. It is connected to the data bus. The data which are written into the memory are held in this
register until the write operation is completed.
(h) Multiplier Quotient (MQ): Employed to hold temporarily the operands and results of ALU operations.
(i) Instruction Buffer Register (IBR): Employed to hold temporarily the right-hand instruction from a word
in memory.
Work Flow of Von Neumann Architecture
8086 Processor
Pin Diagram:
For the 8086 processor, prepare notes on your own.
Potential questions:
Number systems (fixed point, floating point)
Von Neumann architecture
8086 architecture (pin diagram, architecture, timing diagram)
