Fundamentals of Computer Architecture
Figure: program translation — an Assembler translates an assembly language program into a machine level language program, and a Compiler translates a high level language program into a machine level language program. (Example assembly instruction: ADD X1, X2)
1. Each processor needs its own compiler or interpreter for each high level language.
2. A compiler reads the entire program first and then generates the object code.
3. An interpreter reads one instruction at a time, produces its object code, and executes the
instruction before reading the next instruction.
4. An OS is a set of programs and utilities that acts as the interface between user
programs and the computer hardware. The purpose of an operating system is to provide an
environment in which a user may execute programs. An OS can be viewed as a
resource manager.
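The interpreter model in point 3 — decode and execute one instruction before reading the next — can be sketched as follows. The two-instruction toy language (SET, ADD) is hypothetical, invented only for illustration.

```python
# Minimal sketch of an interpreter: each instruction is read, decoded,
# and executed before the next one is looked at. The SET/ADD "language"
# here is a hypothetical example, not a real instruction set.

def interpret(program):
    registers = {}
    for line in program:                 # one instruction at a time
        op, dest, value = line.split()
        if op == "SET":
            registers[dest] = int(value)         # load a constant
        elif op == "ADD":
            registers[dest] = registers.get(dest, 0) + int(value)
        else:
            raise ValueError(f"unknown op-code: {op}")
    return registers

regs = interpret(["SET X1 5", "ADD X1 3"])   # regs["X1"] ends up as 8
```

A compiler, by contrast, would translate the whole list into object code first and only then run it.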
Types of Computers:
1. Mainframe computer
2. Minicomputer
3. Microcomputer
4. Supercomputer
History of Computers:
Mechanical Era:
(A). Abacus, (B). Mechanical computer/Calculator, (C). Babbage’s Difference Engine, (D).
Babbage’s Analytical Engine
Electronic Era:
In 1946, Von Neumann and his colleagues began the design of a new stored-program computer,
now referred to as the IAS computer, at the Institute for Advanced Study, Princeton. Nearly all
modern computers still use this stored-program concept. The concept has three main principles:
1. Program and data can be stored in the same memory.
2. The computer executes the program in sequence as directed by the instructions in the
program.
3. A program can modify itself while the computer executes it.
Figure: IAS instruction format — an OPCODE field followed by an ADDRESS field.
The central processing unit (CPU) contained several high-speed (vacuum-tube) registers used as
implicit storage locations for operands and results. Its input-output facilities were limited. It can
be considered the prototype of all subsequent general-purpose computers.
Instruction format: The basic unit of information, i.e. the amount of information that can be
transferred between the main memory and the CPU in one step, is a 40-bit word. The memory has a
capacity of 2¹² = 4096 words. A word stored in the memory can represent either an instruction or
data.
Data: the leftmost bit represents the sign of the number (0 for positive and 1 for negative), while the
remaining 39 bits hold the number's magnitude. The numbers are represented as fixed-point
numbers.
Figure: IAS data word — the sign bit followed by 39 magnitude bits.
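The sign-magnitude data format above can be sketched in code. This is an illustrative model, assuming the 40 bits are numbered 39 down to 0 with bit 39 as the sign bit:

```python
# Sketch of the IAS sign-magnitude data word: one sign bit (0 positive,
# 1 negative) followed by a 39-bit magnitude. Bit numbering (39..0, sign
# in bit 39) is an assumption made for this illustration.

SIGN_BIT = 1 << 39              # leftmost bit of the 40-bit word
MAGNITUDE_MASK = SIGN_BIT - 1   # the low 39 magnitude bits

def encode(n):
    """Pack a Python integer into a 40-bit sign-magnitude word."""
    word = abs(n) & MAGNITUDE_MASK
    if n < 0:
        word |= SIGN_BIT
    return word

def decode(word):
    """Unpack a 40-bit sign-magnitude word back into an integer."""
    magnitude = word & MAGNITUDE_MASK
    return -magnitude if word & SIGN_BIT else magnitude

assert decode(encode(-12345)) == -12345
```

Note that this representation has two encodings of zero (+0 and -0), a well-known property of sign-magnitude schemes.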
Instruction: IAS instructions are 20 bits long, so two instructions can be stored in each 40-
bit memory location. An instruction consists of two parts: an 8-bit op-code (operation code),
which defines the operation to be performed (add, subtract, etc.), and a 12-bit address part, which
can identify any of the 2¹² memory locations that may be used to store an operand of the instruction.
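The field layout just described — two 20-bit instructions per 40-bit word, each an 8-bit op-code plus a 12-bit address — can be sketched with bit masking. The specific op-code and address values used below are arbitrary examples:

```python
# Sketch of decoding the IAS instruction format: a 40-bit word holds two
# 20-bit instructions, each split into an 8-bit op-code (high bits) and a
# 12-bit address (low bits, enough for 2**12 = 4096 locations).

def split_word(word40):
    left = (word40 >> 20) & 0xFFFFF    # left instruction (high 20 bits)
    right = word40 & 0xFFFFF           # right instruction (low 20 bits)
    return left, right

def decode_instruction(instr20):
    opcode = (instr20 >> 12) & 0xFF    # high 8 bits: operation
    address = instr20 & 0xFFF          # low 12 bits: 0..4095
    return opcode, address

# Example: an arbitrary op-code 0x05 with operand address 0x1F3
instr = (0x05 << 12) | 0x1F3
assert decode_instruction(instr) == (0x05, 0x1F3)
```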
Reduced word length: An IAS instruction allows only one memory address. This
results in a substantial reduction in word length. Two aspects of the IAS organization
make this possible.
1. Fixed registers in the CPU are used to store operands and results: IAS
instructions automatically make use of these registers as required. In other words,
CPU register addresses are implicitly specified by the op-code.
2. The instructions of a program are stored in the main memory in approximately the
sequence in which they are to be executed. Hence the address of the next
instruction pair is usually the address of the current instruction pair plus one, which
eliminates the need for a next-instruction address in the instruction format. Special
branch instructions are included to permit the instruction execution sequence to
be varied.
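The sequencing rule in point 2 — next address is current address plus one, unless a branch changes it — can be sketched as a tiny fetch-execute loop. The op-codes below (HALT, JUMP, NOP) are hypothetical placeholders, not real IAS op-codes:

```python
# Sketch of implicit sequential execution: the program counter (pc) is
# simply incremented after each instruction, so no "next address" field
# is needed; only a branch (JUMP) overrides it. Op-codes are made up.

HALT, JUMP, NOP = 0, 1, 2

def run(memory):
    """Execute (opcode, address) pairs; return the trace of addresses visited."""
    pc, trace = 0, []
    while True:
        opcode, address = memory[pc]    # fetch
        trace.append(pc)
        if opcode == HALT:
            return trace
        # branch overrides the default pc + 1 sequencing
        pc = address if opcode == JUMP else pc + 1

# Addresses: 0 NOP, 1 JUMP to 3, 2 skipped, 3 HALT
trace = run([(NOP, 0), (JUMP, 3), (NOP, 0), (HALT, 0)])
assert trace == [0, 1, 3]
```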
Structure of an IAS computer
Bottleneck of Von-Neumann Architecture:
1. One of the major factors contributing for a computer’s performance is the time required
to move instructions and data between the CPU and main memory. The CPU has to wait
longer to obtain a data word from the memory that from its registers, because the
registers are very fast and are logically placed inside the processor (CPU). This CPU-
memory speed disparity is referred to as Von-Neumann Bottleneck
2. This performance problem is reduced by using a special type memory is called cache
memory between the CPU and main memory. Spped o the cache memory is almost
same as the CPU, for which there is almost no waiting time of the CPU for the required
data-word to come.
3. Another way to reduce the problem is by using special type computers known as
Reduced Instruction Set Computer (RISC). This class of computers generally uses a
large number of registers, through which the most of the instruction are executed. This
computer usually limits access to main memory to a few load and store instructions.
This architecture is designed to reduce the impact of the bottleneck by reducing the total
number of the memory accesses made by the CPU and by increasing the number of
register accesses.
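How a cache absorbs memory traffic can be illustrated with a minimal direct-mapped cache model. This is a sketch under simplifying assumptions (16 one-word lines, no write handling), not a model of any particular machine:

```python
# Minimal direct-mapped cache sketch: each address maps to exactly one of
# NUM_LINES cache lines, identified by a tag. Repeated accesses to the
# same words hit in the cache instead of going to main memory.

NUM_LINES = 16   # assumed cache size: 16 one-word lines

def simulate(addresses):
    """Return (hits, misses) for a sequence of word addresses."""
    tags = [None] * NUM_LINES          # tag currently held by each line
    hits = misses = 0
    for addr in addresses:
        line, tag = addr % NUM_LINES, addr // NUM_LINES
        if tags[line] == tag:
            hits += 1                  # served by the fast cache
        else:
            misses += 1                # slow main-memory access; fill line
            tags[line] = tag
    return hits, misses

# A loop touching the same 8 words ten times: 8 cold misses, then all hits
hits, misses = simulate(list(range(8)) * 10)
assert (hits, misses) == (72, 8)
```

Only 8 of the 80 accesses reach main memory, which is exactly the effect the cache is introduced for; RISC designs pursue the same goal by keeping operands in registers instead.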
Various subsystems of a computer
Internal organization of CPU
Memory:
1. Main memory
2. Secondary memory
3. Cache memory
I/O unit:
System bus: