Computer Organization Computer Architecture
Computer Architecture:
Computer Architecture is a functional description of the requirements and the design implementation for the various parts of a computer. It deals with the functional behaviour of the computer system, and it comes before computer organization when designing a computer.
Difference Between Computer Organization and Computer Architecture
3. As the figure makes clear, computer architecture deals with high-level design issues, whereas computer organization deals with low-level design issues.
HISTORY OF COMPUTERS
This section gives basic information about technological development trends in computers in the past and their projections into the future. If we want to know about computers completely, then we must start from the history of computers and look into the details of the various technological and intellectual breakthroughs. These are essential to give us a feel for how much work and effort has been done to get the computer into this shape.
The ancestors of the modern-age computer were mechanical and electro-mechanical devices. This ancestry can be traced as far back as the 17th century, when the first machine capable of performing the four basic mathematical operations, viz. addition, subtraction, division and multiplication, appeared.
ENIAC was controlled by a set of external switches and dials; to change the program required physically
altering the settings on these controls. These controls also limited the speed of the internal electronic
operations. Through the use of a memory that was large enough to hold both instructions and data, and
using the program stored in memory to control the order of arithmetic operations, EDVAC was able to run
orders of magnitude faster than ENIAC. By storing instructions in the same medium as data, designers could
concentrate on improving the internal structure of the machine without worrying about matching it to the
speed of an external control.
The trends, which were encountered during the era of first generation computer, were:
• The first generation computer control was centralized in a single CPU, and all
operations required a direct intervention of the CPU.
• Use of ferrite-core main memory was started during this time.
• Concepts such as virtual memory and index registers were introduced (you will learn
more about these terms in advanced courses).
• Punched cards were used as input device.
• Magnetic tapes and magnetic drums were used as secondary memory.
• Binary code or machine language was used for programming.
• Advent of the von Neumann architecture.
Electronic switches in this era were based on discrete diode and transistor technology with a switching time
of approximately 0.3 microseconds. The first machines to be built with this technology include TRADIC at
Bell Laboratories in 1954 and TX-0 at MIT's Lincoln Laboratory. Memory technology was based on magnetic
cores, which could be accessed in random order, as opposed to mercury delay lines, in which data was stored as an acoustic wave that passed sequentially through the medium and could be accessed only when the data moved past the I/O interface.
During this second generation many high level programming languages were introduced, including FORTRAN
(1956), ALGOL (1958), and COBOL (1959). Important commercial machines of this era include the IBM 704
and its successors, the 709 and 7094. The latter introduced I/O processors for better throughput between
I/O devices and main memory.
The second generation also saw the first two supercomputers designed specifically for numeric
processing in scientific applications. The term ``supercomputer'' is generally reserved for a machine that is
an order of magnitude more powerful than other machines of its era. Two machines of the 1950s deserve
this title. The Livermore Atomic Research Computer (LARC) and the IBM 7030 (aka Stretch) were early
examples of machines that overlapped memory operations with processor operations and had primitive
forms of parallel processing.
The first ICs were based on small-scale integration (SSI) circuits, which had around 10 devices per circuit
(or ``chip''), and evolved to the use of medium-scale integrated (MSI) circuits, which had up to 100 devices
per chip. Multi-layered printed circuits were developed, and core memory was replaced by faster solid-state
memories. Computer designers began to take advantage of parallelism by using multiple functional units,
overlapping CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream and
the data stream. The SOLOMON computer, developed by Westinghouse Corporation, and the ILLIAC IV, jointly developed by Burroughs, the Department of Defense and the University of Illinois, were representative of the first parallel computers.
Semiconductor memories replaced core memories as the main memory in most systems; until this time the
use of semiconductor memory in most systems was limited to registers and cache. A variety of parallel
architectures began to appear; however, during this period the parallel computing efforts were of a mostly
experimental nature and most computational science was carried out on vector processors. Microcomputers
and workstations were introduced and saw wide use as alternatives to time-shared mainframe computers.
Developments in software include very high level languages such as FP (functional programming) and Prolog
(programming in logic). These languages tend to use a declarative programming style as opposed to the
imperative style of Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a mathematical
specification of what should be computed, leaving many details of how it should be computed to the
compiler and/or runtime system. These languages are not yet in wide use, but are very promising as
notations for programs that will run on massively parallel computers (systems with over 1,000 processors).
Compilers for established languages started to use sophisticated optimization techniques to improve code,
and compilers for vector processors were able to vectorize simple loops (turn loops into single instructions
that would initiate an operation over an entire vector).
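The loop transformation described above can be sketched in Python (illustrative only: the scalar loop is what a programmer writes, and the single list comprehension stands in for the one vector instruction a vectorizing compiler would emit):

```python
# A scalar loop over array elements, as a programmer would write it:
def scalar_add(a, b):
    c = [0] * len(a)
    for i in range(len(a)):      # one scalar add per iteration
        c[i] = a[i] + b[i]
    return c

# What a vectorizing compiler conceptually emits: one "vector add"
# that initiates the operation over the entire vector at once.
def vector_add(a, b):
    return [x + y for x, y in zip(a, b)]   # stand-in for a single vector instruction

print(scalar_add([1, 2, 3], [10, 20, 30]))   # [11, 22, 33]
```

Both produce the same result; the difference is that the vector form issues one operation over the whole array rather than one per element.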
Two important events marked the early part of the third generation: the development of the C programming
language and the UNIX operating system, both at Bell Labs. In 1972, Dennis Ritchie, seeking to meet the
design goals of CPL and generalize Thompson's B, developed the C language. Thompson and Ritchie then
used C to write a version of UNIX for the DEC PDP-11. This C-based UNIX was soon ported to many different
computers, relieving users from having to learn a new operating system each time they changed computer
hardware. UNIX or a derivative of UNIX is now a de facto standard on virtually every computer system.
Other new developments were the widespread use of computer networks and the increasing use of single-
user workstations. Prior to 1985 large scale parallel processing was viewed as a research goal, but two
systems introduced around this time are typical of the first commercial products to be based on parallel
processing. The Sequent Balance 8000 connected up to 20 processors to a single shared memory module
(but each processor had its own local cache). The machine was designed to compete with the DEC VAX-780
as a general purpose Unix system, with each processor working on a different user's job.
The Intel iPSC-1, nicknamed ``the hypercube'', took a different approach. Instead of using one memory
module, Intel connected each processor to its own memory and used a network interface to connect
processors. This distributed memory architecture meant memory was no longer a bottleneck and large
systems (using more processors) could be built. Toward the end of this period a third type of parallel
processor was introduced to the market. In this style of machine, known as a data-parallel or SIMD, there
are several thousand very simple processors. All processors work under the direction of a single control unit;
i.e. if the control unit says ``add a to b'' then all processors find their local copy of a and add it to their local
copy of b.
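The lockstep behaviour described above can be sketched in Python (purely illustrative: each "processor" is modelled as a dict holding its local copies of a and b, and the loop plays the role of the broadcast):

```python
# Four "processors", each with its own local memory holding a and b.
processors = [{"a": i, "b": 10 * i} for i in range(4)]

# The single control unit broadcasts one instruction: "add a to b".
# Every processor executes it on its own local copies, in lockstep.
for p in processors:
    p["b"] = p["a"] + p["b"]

print([p["b"] for p in processors])   # [0, 11, 22, 33]
```

The key point is that there is one instruction stream but many data items, which is exactly what "single instruction, multiple data" (SIMD) means.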
Scientific computing in this period was still dominated by vector processing. Most manufacturers of vector
processors introduced parallel models, but there were very few (two to eight) processors in these parallel
machines. In the area of computer networking, both wide area network (WAN) and local area network (LAN)
technology developed at a rapid pace, stimulating a transition from the traditional mainframe computing
environment toward a distributed computing environment in which each user has their own workstation for
relatively simple tasks (editing and compiling programs, reading mail) but shares large, expensive resources
such as file servers and supercomputers. RISC technology (a style of internal organization of the CPU) and
plummeting costs for RAM brought tremendous gains in computational power of relatively low cost
workstations and servers. This period also saw a marked increase in both the quality and quantity of scientific
visualization.
Combinations of parallel/vector architectures are well established, and one corporation (Fujitsu) has
announced plans to build a system with over 200 of its high end vector processors. Workstation technology
has continued to improve, with processor designs now using a combination of RISC, pipelining, and parallel
processing. As a result it is now possible to purchase a desktop workstation for about $30,000 that has the
same overall computing power (100 megaflops) as fourth generation supercomputers.
One of the most dramatic changes in the sixth generation will be the explosive growth of wide area
networking. Network bandwidth has expanded tremendously in the last few years and will continue to
improve for the next several years. T1 transmission rates are now standard for regional networks, and the
national ``backbone'' that interconnects regional networks uses T3. Networking technology is becoming
more widespread than its original strong base in universities and government laboratories as it is rapidly
finding application in K-12 education, community networks and private industry.
Summary

First Generation (1942-1955) — Vacuum Tubes or Valves
• used vacuum tubes as electronic circuits
• magnetic drum for primary storage
• mercury delay lines for memory
• punched cards used as secondary storage
• machine-level programming used
• operating speed measured in milliseconds
Examples: Mark-I, UNIVAC, ENIAC

Second Generation (1955-1964) — Transistors
• magnetic core memory used as internal storage
• magnetic tapes used as secondary storage
• somewhat faster I/O devices
• high-level languages used for programming
• processing speed measured in microseconds
Examples: IBM 1401, ICL 2950/10 etc.

Third Generation (1964-1975) — ICs (Integrated Circuits)
• semiconductor memory used as primary storage
• magnetic discs used as secondary storage
• massive use of high-level languages
• processing speed increased to nanoseconds and even faster
CLASSIFICATION OF COMPUTERS
1. MICRO COMPUTER
A microcomputer's CPU is a microprocessor. The microcomputer originated in the late 1970s. The first microcomputers were built around 8-bit microprocessor chips, meaning that the chip could retrieve instructions/data from storage and manipulate and process 8 bits of data at a time; in other words, the chip had a built-in 8-bit data transfer path.
An improvement on 8-bit chip technology came in the early 1980s, when Intel Corporation introduced a series of 16-bit chips, namely the 8086 and 8088, each an advancement over the last. The 8088 is an 8/16-bit chip, i.e. an 8-bit path is used to move data between the chip and primary storage (the external path), while processing within the chip uses a 16-bit path (the internal path). The 8086 is a 16/16-bit chip, i.e. both the internal and external paths are 16 bits wide. Both chips can support a primary storage capacity of up to 1 megabyte (MB). These computers are usually divided into desktop models and laptop models. They are far more limited than the larger models discussed below, because they can be used by only one person at a time, they are much slower than the larger computers, and they cannot store nearly as much information; but they are excellent when used in small businesses, homes, and school classrooms. These computers are inexpensive and easy to use. They have become an indispensable part of modern life. Thus
• Used for memory intense and graphic intense applications
• Are single-user machines
2. MINI COMPUTER
Minicomputers are much smaller than mainframe computers and they are also much less expensive.
The cost of these computers can vary from a few thousand dollars to several hundred thousand dollars.
They possess most of the features found on mainframe computers, but on a more limited scale. They
can still have many terminals, but not as many as the mainframes. They can store a tremendous amount
of information, but again usually not as much as the mainframe. Medium and small businesses typically
use these computers. Thus
• Fit somewhere between mainframe and PCs
• Would often be used for file servers in networks
3. MAINFRAME COMPUTER
Mainframe computers are very large, often filling an entire room. They can store enormous amounts of information, can perform many tasks at the same time, can communicate with many users at the same time, and are very expensive. The price of a mainframe computer frequently runs into the millions of dollars.
Mainframe computers usually have many terminals connected to them. These terminals look like small
computers but they are only devices used to send and receive information from the actual computer
using wires. Terminals can be located in the same room with the mainframe computer, but they can
also be in different rooms, buildings, or cities. Large businesses, government agencies, and universities
usually use this type of computer. Thus
• Most common type of large computers
• Used by many people using same databases
• Can support many terminals
• Used in large company like banks and insurance companies
4. SUPER COMPUTER
The upper end of the state-of-the-art mainframe machine is the supercomputer. These are amongst the fastest machines in terms of processing speed and use multiprocessing techniques, where a number of processors are used to solve a problem. They are built to minimize the distance between components for very fast operation, and are used for extremely complicated computations. Thus
o Largest and most powerful
o Used by scientists and engineers
o Very expensive
o Would be found in places like Los Alamos or NASA
What is CISC Microprocessor?
Ans.: CISC stands for complex instruction set computer. It is a design style for computers, exemplified by Intel's processors. A CISC-based computer will have shorter programs, which are made up of symbolic machine language. The number of instructions on a CISC processor is large.
What is RISC Microprocessor?
Ans.: RISC stands for reduced instruction set computer architecture. The properties of this design are:
(i) A large number of general-purpose registers, and the use of compilers to optimize register usage.
(ii) A limited and simple instruction set.
(iii) An emphasis on optimizing the instruction pipeline.
(2) ROM (Read Only Memory) : This is non-volatile memory, i.e. the information stored in it is not lost even if the power supply goes off. It is used for the permanent storage of information, and it also possesses the random-access property. Information cannot be written into a ROM by users/programmers; in other words, the contents of ROMs are decided by the manufacturers. The types of ROMs are listed below:
(i) PROM : A programmable ROM. Its contents are decided by the user, who can store permanent programs, data, etc. in a PROM. The data is fed into it using a PROM programmer.
(ii) EPROM : An EPROM is an erasable PROM. The data stored in EPROMs can be erased by exposing the chip to UV light for about 20 minutes. Erasing is not easy, because the EPROM IC has to be removed from the computer and exposed to UV light, and the entire contents are erased, not just portions selected by the user. EPROMs are cheap and reliable.
(iii) EEPROM (Electrically Erasable PROM) : The chip can be erased and reprogrammed on the board easily, byte by byte. It can be erased within a few milliseconds. There is a limit on the number of times EEPROMs can be reprogrammed, usually around 10,000 times.
Flash Memory : An electrically erasable and programmable permanent-type memory. It uses one-transistor memory cells, resulting in high packing density, low power consumption, lower cost and higher reliability. It is used in devices such as digital cameras and MP3 players.
(2) Data Bus : The data bus is an electronic path that connects the CPU, memory and other hardware devices. The data bus carries data from the CPU to memory or I/O devices and vice versa; it is a bidirectional bus, because it can transmit data in either direction. The processing speed of a computer increases if the data bus is wide, as it carries more data at one time.
(3) Control Bus : The control bus controls the memory and I/O devices. This bus is bidirectional. The CPU sends signals on the control bus, for example to enable the output of the addressed memory devices.
Data Bus Standard : Bus standard represents the architecture of a bus. Following are important data bus
standards :
(i) Industry Standard Architecture (ISA) : This was the first bus standard, released by IBM. It has 24 address lines and 16 data lines. It can be used only in a single-user system. The ISA bus is a low-cost bus, but it has a low data transfer rate and could not take full advantage of 32-bit microprocessors.
(ii) Micro Channel Architecture (MCA) : IBM developed the MCA bus standard. With it, bus speed was elevated from 8.33 MHz to 10 MHz, which was further increased to 20 MHz, and bandwidth was increased from 16 bits to 32 bits.
(iii) Enhanced Industry Standard Architecture (EISA) : These buses are 32-bit and helpful in multiprogramming. Due to its low data transfer speed, ISA cannot be used for multitasking and multi-user systems; EISA is appropriate for multi-user systems. The data transfer rate of EISA is double that of ISA. The size of EISA is the same as that of ISA, so both EISA and ISA cards can be fitted in an EISA connector slot. EISA connectors are quite expensive.
(iv) Peripheral Component Interconnect (PCI) : This bus standard was developed by Intel. It is a 64-bit bus and works at 66 MHz. Earlier, a 32-bit PCI bus was developed with a speed of 33 MHz. The PCI bus has greater speed and has 4 interrupt channels. It also has a PCI bridge through which the bus can be connected to various other devices.
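The line counts quoted for these standards translate directly into capacity: with n address lines a bus can select 2**n distinct locations, and the data-line count fixes how many bytes move per transfer. A quick Python check of the arithmetic (using the ISA figures given above):

```python
def addressable_bytes(address_lines):
    # Each distinct bit pattern on the address lines selects one location.
    return 2 ** address_lines

def bytes_per_transfer(data_lines):
    # Each transfer moves one bit per data line; 8 bits make a byte.
    return data_lines // 8

print(addressable_bytes(24))    # 16777216 bytes = 16 MB for ISA's 24 lines
print(bytes_per_transfer(16))   # 2 bytes per ISA bus transfer
```

The same arithmetic with 20 address lines gives the 1 MB limit quoted earlier for the 8086/8088.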
[Figure: block diagram of a computer, showing the CPU connected over a bus to memory, a CRT interface, a disk, output control, and a network.]
CPU: This is the computational unit and is the computer's heart. It controls the operations of the computer and performs its data processing functions, and is usually referred to as a processor. The actions of all components are controlled by the control unit of the CPU.
Memory: Memory is used to store the instructions, data and the result as well. Memory unit is an integral
part of a computer system. Main function of a memory unit is to store the information needed by the system.
Input/output interface: This is used to move data between the computer and its external environment. The external environment may be an input or an output device such as a printer, display, or keyboard.
System interconnection: This constitutes the mechanism that provides for communication among the CPU, memory and I/O; it is referred to as the system bus. Traditionally a computer system has a single CPU, but some machines, known as multiprocessors, use multiple CPUs that share a single memory.
Central Processing Unit
The CPU is the heart or core component of the computer. The figure below shows the basic functional components of the central processing unit. Its major structural components are the control unit, the ALU, registers and the CPU interconnections.
Control Unit: It controls the operations of the CPU and hence the computer system.
Arithmetic logic unit (ALU): It performs the computers data processing functions
Registers: It forms the internal memory for CPU. It provides storage internal to the CPU.
CPU interconnections: It provides means for communication among the control unit, ALU and registers of
the CPU.
[Figure: internal structure of the CPU, showing the control unit, ALU and registers linked by the internal CPU interconnection.]
To carry out these tasks the CPU needs to store some data temporarily. It must remember the location of the last instruction so that it knows where to get the next instruction, and it needs to hold instructions and data temporarily while an instruction is being executed. In general, the CPU needs a small internal memory for storing instructions and data.
The CPU contains a handful of registers which act like local variables. The CPU runs instructions and performs computations, mostly in the ALU. The registers are the only memory the CPU itself has. Register memory is very fast for the CPU to access, since it resides on the CPU itself.
However, the CPU has rather limited memory. All the local memory it uses is in registers. It has very fast
access to registers, which are on-board on the CPU. It has much slower access to RAM.
Memory Units
Memory is basically a large array of bytes. Main function of a memory unit is to store the information needed
by the system. Information stored can be data, an instruction that is nothing but programs and may be some
garbage. Memory locations that do not contain any valid data may store some arbitrary values and hence
they are termed as garbage data. Memory unit is an integral part of a computer system.
The system performance is largely dependent on the organization, storage capacity and speed of operation
of the memory system. The CPU can read or write to the memory, but it's much slower than accessing
registers. Nevertheless, you need memory because registers simply hold too little information.
Most of the memory is in RAM, which can be thought of as a large array of bytes. In an array, we refer to individual elements using an index; in computer organization, indexes are more commonly called addresses. Addresses are the numbers used to identify successive locations. A word can be accessed by specifying its address and issuing a command that performs the storage or retrieval.
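The array analogy can be made concrete with a small Python sketch (the memory size and the addresses used here are arbitrary):

```python
# Memory modelled as an array of bytes; an address is simply an index.
memory = bytearray(16)          # 16 locations, addresses 0 through 15

def store(address, value):
    memory[address] = value     # the "storage" command

def load(address):
    return memory[address]      # the "retrieval" command

store(0x0A, 42)                 # write the value 42 at address 10
print(load(0x0A))               # 42
```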
The number of bits in each word is called the word length of the computer. Large computers usually have 32 or more bits in a word; the word length of microcomputers ranges from 8 to 32 bits. The capacity of the memory is one factor that decides the size of the computer. Data are usually manipulated within the machine in units of words, multiples of words or parts of words. During execution the program must reside in the main memory. Instructions and data are written into the memory or read out from the memory under the control of a processor.
Most memory is byte-addressable, meaning that each address refers to one byte of memory.
The bulk of the memory resides in a separate device called RAM, usually called physical memory. RAM stores programs as well as data. The CPU fetches an instruction from RAM into a register referred to as the instruction register, determines what instruction it is, and executes it. Executing the instruction may require loading data from RAM to the CPU or storing data from the CPU to RAM. The time required to access one word is called the memory access time.
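This fetch-and-execute cycle can be sketched as a toy interpreter in Python. The opcodes (LOAD, ADD, HALT) and the memory layout are invented here for illustration, not taken from any real machine:

```python
# A toy fetch-decode-execute cycle. Instructions and data share one RAM
# (the stored-program concept); the opcodes and layout are invented here.
ram = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
       10: 5, 11: 7}                      # addresses 10-11 hold data

pc, acc = 0, 0                            # program counter, accumulator
while True:
    ir = ram[pc]                          # fetch into the instruction register
    pc += 1                               # advance to the next instruction
    op, operand = ir                      # decode
    if op == "LOAD":
        acc = ram[operand]                # execute: move data RAM -> CPU
    elif op == "ADD":
        acc += ram[operand]
    elif op == "HALT":
        break

print(acc)                                # 5 + 7 = 12
```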
Classification of Memory system of a Computer
Memory system of a computer can be broadly classified into four groups.
Internal Memory
Internal memory refers to the set of CPU registers. These serve as working memory, storing temporary results during the computation process. They form a general-purpose register file for storing data as it is processed. Since the cost of these registers is very high, only a few registers can be used in the CPU.
Primary Memory
Primary memory is also called main memory, and it operates at electronic speeds. The CPU can directly access a program stored in primary memory. Main memory consists of a large number of semiconductor storage cells, each capable of storing one bit of information; a word is a group of these cells. Main memory is organized so that the contents of one word, containing n bits, can be stored or retrieved in one basic operation.
Secondary Memory
This memory type is much larger in capacity and also much slower than the main memory. Secondary
memory stores system programs, large data files and the information which is not regularly used by the CPU.
When the capacity of the main memory is exceeded, the additional information is stored in the secondary
memory. Information from the secondary memory is accessed indirectly through the I/O programs that
transfer the information between the main memory and secondary memory. Examples for secondary
memory devices are magnetic hard disks and CD-ROMs.
Cache Memory
The performance of a computer system will be severely affected if the speed disparity between the processor and the main memory is significant. System performance can be improved by placing a small, fast-acting buffer memory between the processor and the main memory. This buffer memory is called cache memory. The cost of this memory is very high.
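The buffering idea can be sketched in Python. Real caches are fixed-size hardware with replacement policies; this dict-based version only shows why a hit avoids the slow main-memory access:

```python
# A dict acting as a small, fast buffer in front of a slow "main memory".
main_memory = {addr: addr * 2 for addr in range(256)}   # pretend-slow RAM
cache = {}
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:
        hits += 1                         # fast path: found in the buffer
    else:
        misses += 1                       # slow path: go to main memory,
        cache[addr] = main_memory[addr]   # then keep a copy in the buffer
    return cache[addr]

read(7); read(7); read(8)
print(hits, misses)                       # 1 2
```

The second access to address 7 is a hit, which is exactly the effect the cache exists to produce.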
There is a wide variety of peripherals, which deliver different amounts of data, run at different speeds and present data in different formats. All I/O peripherals are slower than the CPU and RAM; hence proper I/O interfaces are needed.
Input Devices:
The computer accepts coded information through the input unit, which has the capability of reading the instructions and data to be processed. The most commonly used input device is the keyboard of a video terminal, which is electronically connected to the processing part of a computer. The keyboard is wired such that whenever a key is pressed the corresponding letter or digit is automatically translated into its corresponding code and sent directly either to memory or to the processor.
Output Devices
Output unit displays the processed results. Examples are video terminals and graphic displays.
I/O devices do not alter the information content or the meaning of the data. Some devices can be used as
output only e.g. graphic displays.
Following are the Input/Output Techniques
o Programmed
o Interrupt driven
o Direct Memory Access (DMA)
[Figure: internal organization of the ALU, showing the status flags, shifter, complementer, arithmetic and Boolean logic, and register/instruction decoder connected to the internal CPU bus under the control unit.]
A typical ALU has two input ports and a result port. It also has a control input telling it which operation to perform, for example add, subtract, AND, OR, etc. It additionally has output bits for condition codes; these bits indicate facts about the computation, for example carry, overflow, negative, or a zero result. Together these additional output bits are called the status bits, and they are used for branching operations.
ALUs may be simple and perform only a few operations: integer arithmetic like add and subtract, and Boolean logic like AND, OR, complement, left shift, right shift, and rotate. Such simple ALUs may be found in small 4- and 8-bit processors.
For example, consider a 32-bit ALU as shown in figure 1.8. It has source 1, labeled SRC1, and source 2, labeled SRC2, as the two 32-bit data inputs. It also has a control input, labeled C, which here signals addition; these control bits tell the ALU to perform the addition operation on the data inputs. The result of the computation is sent to the output labelled DST, which is also 32 bits. There are some additional output bits labelled ST; in our example they may indicate whether the output is zero or has overflowed.
More complex ALUs will support a wider range of integer operations like multiplication and division, floating
point operations like add, subtract, multiply, divide. It can even compute mathematical functions like square
root, sine, cosine, log, etc.
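A behavioural sketch of such an ALU in Python; the names SRC1/SRC2/C/DST/ST follow the 32-bit example above, and the flag logic is deliberately simplified to just zero and carry:

```python
# Behavioural sketch of a 32-bit ALU: SRC1 and SRC2 are the data inputs,
# C is the control input, DST the result, ST the status bits.
MASK = 0xFFFFFFFF                        # keep results to 32 bits

def alu(src1, src2, c):
    if c == "add":
        full = src1 + src2
    elif c == "sub":
        full = src1 - src2
    else:
        raise ValueError("unsupported control signal")
    dst = full & MASK                    # the 32-bit DST output
    st = {"zero": dst == 0,              # status bits, used for branching
          "carry": full != dst}          # result did not fit in 32 bits
    return dst, st

dst, st = alu(0xFFFFFFFF, 1, "add")      # wraps around to 0
print(hex(dst), st)                      # 0x0 {'zero': True, 'carry': True}
```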
To perform arithmetic and logic operations, the necessary operands are transferred from memory to the ALU, where one of the operands is stored temporarily in a register. This register is called a temporary register. Each register stores one word of data.
Control Unit
The control unit is the portion of the processor that actually causes things to happen. The purpose of control
unit is to control the system operations by routing the selected data items to the selected processing
hardware at the right time. Control unit acts as nerve centre for the other units. This unit decodes and
translates each instruction and generates the necessary enable signals for ALU and other units. Control unit
has two responsibilities i.e., instruction interpretation and instruction sequencing.
In instruction interpretation the control unit reads instruction from the memory and recognizes the
instruction type, gets the necessary operand and sends them to the appropriate functional unit. The signals
necessary to perform desired operation are taken to the processing unit and results obtained are sent to the
specified destination.
In instruction sequencing control unit determines the address of the next instruction to be executed and
loads it into program counter.
In general the I/O transfers are controlled by the software instructions that identify both the devices involved
and the type of transfer. But the actual timing signals that govern the transfers are generated by the control
circuits. Similarly the data transfer between a processor and the memory is controlled by the control circuits.
Bus Structure
A bus consists of one or more wires. There is usually a bus that connects the CPU to memory and to disk and I/O devices. Real computers usually have several busses, even though the simple computer we have modelled has only one bus, in which we consider the data bus, the address bus, and the control bus as parts of one larger bus.
The size of the bus is the number of wires in the bus. We can refer to individual wires or a group of adjacent
wires with subscripts. A bus can be drawn as a line with a dash across it to indicate there's more than one
wire. The dash in it is then labelled with the number of wires and the designation of those wires.
The IAS was the first digital computer in which the von Neumann architecture was employed. The general structure of the IAS computer, as shown above, consists of:
• A main memory, which stores both instructions and data,
• An arithmetic and logic unit (ALU) capable of operating on binary data,
• A control unit, which interprets the instructions in memory and causes them to be executed,
• Input and Output (I/O) equipment operated by the control unit.
(i) ALU : The function of an ALU is to perform basic arithmetic and logical operations such as
(a) Addition
(b) Subtraction etc.
It cannot perform exponential, logarithmic, or trigonometric operations.
(ii) Control Unit : The control unit of a CPU controls the entire operation of the computer. It also controls all other devices such as memory and the input and output devices. It fetches an instruction from memory, decodes it, interprets it to know what tasks are to be performed, and sends suitable control signals to the other components to perform the further operations.
It maintains the order & directs the operation of the entire system. It controls the data flow between CPU &
peripherals. Under the control of the CU the instructions are fetched from the memory one after another
for execution until all the instructions are executed.
[Figure: expanded structure of the IAS computer, showing the PC, IR, MAR and MBR in the CPU, the I/O AR and I/O BR connecting to the I/O module, the ALU, and a memory holding both instructions and data.]
PC = Program Counter
IR = Instruction Register
MAR = Memory address register
MBR = Memory buffer register
I/O AR = I/O address register
I/O BR = I/O buffer register
(iii) Registers : A CPU contains a number of registers to store data temporarily during the execution of a program. The number of registers differs from processor to processor. Registers are classified as follows:
(a) General Purpose Registers : These registers store data and intermediate results during the execution of a program. They are accessible to users through instructions if the users are working in assembly language.
(b) Accumulator : The most important GPR, having multiple functions. It is most efficient in data movement and in arithmetic and logical operations, and it has some special features that the other GPRs do not have. After the execution of an arithmetic or logical instruction the result is placed in the accumulator.
Special Purpose Registers : A CPU contains a number of special-purpose registers for different purposes. These are:
(a) Program Counter (PC)
(b) Stack Pointer (SP)
(c) Instruction Register (IR)
(d) Index Register
(e) Memory Address Register (MAR)
(f) Memory Buffer Register (MBR)
(a) PC : The PC keeps track of the address of the instruction which is to be executed next, i.e. it holds the address of the memory location which contains the next instruction to be fetched from memory.
(b) Stack Pointer (SP) : The stack is a sequence of memory locations defined by the user. It is used to save the contents of a register if they are required during the execution of a program. The SP holds the address of the last occupied memory location of the stack.
(c) Status Register (Flag Register) : A flag register contains a number of flags, either to indicate certain conditions arising after an ALU operation or to control certain operations. The flags which indicate a condition are called status flags; the flags which are used to control certain operations are called control flags.
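The stack-pointer behaviour described in (b) can be sketched in Python; the stack size and the downward growth direction here are illustrative, though many real CPUs do grow the stack downward:

```python
# The stack grows downward in memory; SP holds the address of the last
# occupied location. Sizes and addresses are illustrative only.
memory = [0] * 16
sp = 16                      # empty stack: SP sits just past the stack area

def push(value):
    global sp
    sp -= 1                  # move down to the next free location
    memory[sp] = value       # SP now addresses the last occupied slot

def pop():
    global sp
    value = memory[sp]
    sp += 1                  # that slot is free again
    return value

push(3); push(9)
print(pop(), pop())          # 9 3  (last in, first out)
```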