Computer Organisation Chapter1
Computer types: -
A computer can be defined as a fast electronic calculating machine that accepts digitized input information (data), processes it according to a list of internally stored instructions, and produces the resulting output information.
The list of instructions is called a program, and the internal storage is called computer memory.
1. Personal computers: - This is the most common type, found in homes, schools, business offices, etc. It is the most common type of desktop computer, with processing and storage units along with various input and output devices.
4. Enterprise systems: - These are used for business data processing in medium to large corporations that require much more computing power and storage capacity than workstations. The Internet, together with its associated servers, has become a dominant worldwide source of all types of information.
5. Super computers: - These are used for the large-scale numerical calculations required in applications such as weather forecasting.
Functional unit: -
A computer consists of five functionally independent main parts: input, memory, arithmetic logic unit (ALU), output and control unit.
Functional units of computer
The input device accepts coded information, such as a source program written in a high-level language. This is either stored in the memory or used immediately by the processor to perform the desired operations. The program stored in the memory determines the processing steps. Essentially, the computer converts the source program into an object program, i.e., into machine language.
Finally, the results are sent to the outside world through an output device. All of these actions are coordinated by the control unit.
Input unit: -
Memory unit: -
Its function is to store programs and data. It is basically of two types:
1. Primary memory
2. Secondary memory
1. Primary memory: - This is the memory directly associated with the processor and operates at electronic speeds. Programs must be stored in this memory while they are being executed. The memory contains a large number of semiconductor storage cells, each capable of storing one bit of information. These cells are processed in groups of fixed size called words.
To provide easy access to a word in memory, a distinct address is associated with each word location.
Addresses are numbers that identify memory location.
Number of bits in each word is called word length of the computer. Programs must reside in the
memory during execution. Instructions and data can be written into the memory or read out under
the control of processor.
Memory in which any location can be reached in a short and fixed amount of time after specifying its
address is called random-access memory (RAM).
The time required to access one word is called the memory access time. Memory which is only readable by the user, and whose contents cannot be altered, is called read only memory (ROM); it typically contains startup code for the operating system.
Caches are small, fast RAM units coupled with the processor, often contained on the same IC chip, to achieve high performance. Although primary storage is essential, it tends to be expensive.
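As a worked illustration of words and addresses, the following sketch shows how the number of address bits determines how many words can be addressed (the 16-bit address and 32-bit word length are assumed values, not from the text):

```python
# Illustration (assumed parameters): relation between address width,
# number of addressable words, and total memory capacity.
address_bits = 16          # assumed width of a memory address
word_length = 32           # assumed word length of the computer, in bits

num_words = 2 ** address_bits            # each address names one word
capacity_bits = num_words * word_length  # total capacity in bits
capacity_bytes = capacity_bits // 8

print(num_words)        # 65536 addressable words
print(capacity_bytes)   # 262144 bytes (256 KB)
```

With these assumed numbers, a 16-bit address can reach 65,536 distinct word locations, any of which RAM can access in the same fixed time.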
2. Secondary memory: - This is used where large amounts of data and programs have to be stored, particularly information that is accessed infrequently.
Examples: - Magnetic disks and tapes, optical disks (i.e., CD-ROMs), floppies, etc.
Arithmetic logic unit (ALU): -
Most computer operations, such as addition, subtraction, multiplication and division, are executed in the ALU of the processor. The operands are brought into the ALU from memory and stored in high-speed storage elements called registers. Then, according to the instructions, the operations are performed in the required sequence.
The control unit and the ALU are many times faster than other devices connected to a computer system.
This enables a single processor to control a number of external devices such as key boards, displays,
magnetic and optical disks, sensors and other mechanical controllers.
Output unit:-
These are the counterparts of the input unit. Their basic function is to send the processed results to the outside world.
Control unit:-
It is effectively the nerve centre that sends signals to the other units and senses their states. The actual timing signals that govern the transfer of data between the input unit, processor, memory and output unit are generated by the control unit.
CPU organisation
Individual instructions are brought from the memory into the processor, which executes the specified operations. Data to be processed are also stored in the memory.
Consider an instruction such as Add LOCA, R0, which adds the operand at memory location LOCA to the operand in register R0 and places the sum into register R0. This instruction requires the performance of several steps:
1. First the instruction is fetched from the memory into the processor.
2. The operand at LOCA is fetched and added to the contents of R0
3. Finally the resulting sum is stored in the register R0
The preceding Add instruction combines a memory access operation with an ALU operation. In some other types of computers, these two operations are performed by separate instructions for performance reasons.
Transfers between the memory and the processor are started by sending the address of the memory
location to be accessed to the memory unit and issuing the appropriate control signals. The data are
then transferred to or from the memory.
The instruction register (IR):- Holds the instruction that is currently being executed. Its output is available to the control circuits, which generate the timing signals that control the various processing elements involved in executing the instruction.
The program counter (PC):- This is another specialized register that keeps track of the execution of a program. It contains the memory address of the next instruction to be fetched and executed.
Besides the IR and PC, there are n general-purpose registers, R0 through Rn-1.
The other two registers which facilitate communication with memory are: -
1. MAR – (Memory Address Register):- It holds the address of the memory location to be accessed.
2. MDR – (Memory Data Register):- It contains the data to be written into or read out of the addressed location.
1. Programs reside in the memory and usually get there through the input unit.
2. Execution of the program starts when the PC is set to point at the first instruction of the program.
3. The contents of the PC are transferred to the MAR, and a Read control signal is sent to the memory.
4. After the time required to access the memory elapses, the addressed word is read out of the memory and loaded into the MDR.
5. Now the contents of the MDR are transferred to the IR, and the instruction is ready to be decoded and executed.
6. If the instruction involves an operation by the ALU, it is necessary to obtain the required
operands.
7. An operand in the memory is fetched by sending its address to the MAR and initiating a read cycle.
8. When the operand has been read from the memory to the MDR, it is transferred from MDR
to the ALU.
9. After one or two such repeated cycles, the ALU can perform the desired operation.
10. If the result of this operation is to be stored in the memory, the result is sent to MDR.
11. Address of location where the result is stored is sent to MAR & a write cycle is initiated.
12. The contents of PC are incremented so that PC points to the next instruction that is to be
executed.
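The steps above can be sketched as a small Python simulation. The instruction encoding and the tiny ADD/HALT instruction set below are invented for illustration; real machines encode instructions in binary words:

```python
# Minimal sketch (invented encoding, not a real ISA) of the fetch/execute
# steps listed above, using PC, MAR, MDR, IR and a register file.
memory = {
    0: ("ADD", 100, "R0"),   # instruction: R0 <- R0 + M[100]
    1: ("HALT",),
    100: 7,                   # data word at the operand address (LOCA)
}
registers = {"R0": 5}
PC = 0

while True:
    MAR = PC                      # step 3: PC -> MAR, read cycle starts
    MDR = memory[MAR]             # step 4: addressed word -> MDR
    IR = MDR                      # step 5: MDR -> IR, ready to decode
    PC += 1                       # step 12: PC now points past this instruction
    if IR[0] == "HALT":
        break
    if IR[0] == "ADD":            # steps 6-9: fetch operand, ALU adds
        _, addr, reg = IR
        MAR = addr                # operand address -> MAR, read cycle
        MDR = memory[MAR]         # operand word -> MDR
        registers[reg] += MDR     # MDR -> ALU, result into register

print(registers["R0"])  # 12
```

Running it performs one Add LOCA, R0-style instruction (5 + 7) and then halts.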
A memory unit is a collection of storage units or devices. The memory unit stores binary information in the form of bits. Generally, memory/storage is classified into two categories:
Volatile Memory: This loses its data when the power is switched off.
Non-Volatile Memory: This is permanent storage and does not lose any data when the power is switched off.
The total memory capacity of a computer can be visualized as a hierarchy of components. The memory hierarchy system consists of all the storage devices in a computer system, from the slow auxiliary memory, to the faster main memory, to the smaller and still faster cache memory.
Auxiliary memory access time is generally about 1000 times that of main memory, hence it is at the bottom of the hierarchy.
The main memory occupies the central position because it is equipped to communicate directly with
the CPU and with auxiliary memory devices through Input/output processor (I/O).
When a program not residing in main memory is needed by the CPU, it is brought in from auxiliary memory. Programs not currently needed in main memory are transferred back to auxiliary memory to provide space for programs that are currently in use.
Each memory type is a collection of numerous memory locations. To access data in any memory, the location must first be found, and then the data is read from it. The following are the methods of accessing information in memory:
1. Random Access: Main memories are random access memories, in which each memory
location has a unique address. Using this unique address any memory location can be
reached in the same amount of time in any order.
3. Direct Access: In this mode, information is stored in tracks, with each track having a separate
read/write head.
Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory and cache memory is called main memory. It is the central storage unit of the computer system: a large, fast memory used to store data during computer operations. Main memory is made up of RAM and ROM, with RAM integrated circuit chips holding the major share.
o DRAM: Dynamic RAM is made of capacitors and transistors, and must be refreshed every 10~100 ms. It is slower and cheaper than SRAM.
o SRAM: Static RAM has a six-transistor circuit in each cell and retains data as long as power is supplied.
ROM: Read Only Memory is non-volatile and serves as permanent storage for information. It also stores the bootstrap loader program, used to load and start the operating system when the computer is turned on. PROM (Programmable ROM), EPROM (Erasable PROM) and EEPROM (Electrically Erasable PROM) are some commonly used ROMs.
Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in the cache, the CPU moves on to the main memory. It also transfers blocks of recently used data into the cache, deleting old cache entries to accommodate the new ones.
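The lookup order described above can be sketched in Python. The cache size and the replace-the-oldest policy here are assumptions for illustration; real caches use hardware mapping and replacement schemes:

```python
# Sketch (assumed size and policy) of the access pattern described above:
# check the cache first, fall back to main memory on a miss, and evict
# the least recently used entry to make room for the new block.
from collections import OrderedDict

main_memory = {addr: addr * 10 for addr in range(100)}  # dummy contents
cache = OrderedDict()
CACHE_SIZE = 4

def read(addr):
    if addr in cache:                 # cache hit: fast path
        cache.move_to_end(addr)       # mark as most recently used
        return cache[addr]
    value = main_memory[addr]         # cache miss: go to main memory
    if len(cache) >= CACHE_SIZE:      # evict oldest entry to make room
        cache.popitem(last=False)
    cache[addr] = value
    return value

for a in [1, 2, 3, 4, 5, 1]:
    read(a)
print(list(cache))  # [3, 4, 5, 1]
```

After the access sequence, addresses 1 and 2 were evicted to admit 5 and the re-fetched 1, which mirrors how old cache contents make way for recent data.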
    SRAM                                        DRAM
1.  It stores information as long as the        It stores information as long as the power is
    power is supplied.                          supplied, or for a few milliseconds after the
                                                power is switched off.
3.  Capacitors are not used, hence no           To store information for a longer time, the
    refreshing is required.                     contents of the capacitors need to be
                                                refreshed periodically.
Input/Output Subsystem
The I/O subsystem of a computer provides an efficient mode of communication between the central
system and the outside environment.
It handles all the input-output operations of the computer system.
Peripheral Devices
Input or output devices connected to the computer are called peripheral devices. These devices are designed to read information into or out of the memory unit upon command from the CPU, and are considered part of the computer system. For example, keyboards, display units and printers are common peripheral devices.
1. Input peripherals: Allow user input from the outside world to the computer. Example: keyboard, mouse, etc.
2. Output peripherals: Allow information output from the computer to the outside world. Example: printer, monitor, etc.
3. Input-output peripherals: Allow both input (from the outside world to the computer) and output (from the computer to the outside world). Example: touch screen, etc.
Interfaces
An interface is a shared boundary between two separate components of the computer system, used to attach two or more components for communication purposes. There are two types of interface:
1. CPU interface
2. I/O interface
Input-Output Interface
Peripherals connected to a computer need special communication links for interfacing with the CPU.
In a computer system, there are special hardware components between the CPU and peripherals that control or manage input-output transfers. These components are called input-output interface units because they provide communication links between the processor bus and the peripherals, and they provide a method for transferring information between the internal system and input-output devices.
An input-output interface is used as a method of transferring information between internal storage (i.e., memory) and external peripheral devices. A peripheral device is one that provides input to or output from the computer; such devices are also called input-output devices. For example, a keyboard and a mouse, which provide input to the computer, are input devices, while a monitor and a printer, which receive output from the computer, are output devices. Like external hard drives, some peripheral devices are able to provide both input and output.
In a microcomputer-based system, peripheral devices require special communication links for interfacing with the CPU. These links are needed to resolve the differences between the peripheral devices and the CPU:
2. A synchronization mechanism is needed because the data transfer rate of peripheral devices is slower than that of the CPU.
3. Data codes and formats in peripheral devices differ from those in the CPU and memory.
4. The operating modes of peripheral devices differ from one another, and each must be controlled so as not to disturb the operation of the other peripheral devices connected to the CPU.
Additional hardware is therefore needed to resolve the differences between the CPU and the peripheral devices, and to supervise and synchronize all input and output transfers:
1. It is used to synchronize the operating speed of the CPU with that of the input-output devices.
2. It selects the input-output device appropriate to the interpretation of the input-output signal.
The read signal directs data transfer from the interface unit to the CPU, and the write signal directs data transfer from the CPU to the interface unit, through the data bus.
The address bus is used to select the interface unit. The two least significant lines of the address bus (A0, A1) are connected to the select lines S0, S1. These two select input lines are used to select any one of the four registers in the interface unit. The selection is made according to the following criteria:
Read state:
CS  Read  Write  S0  S1   Selection of interface unit
0   0     1      0   0    Port A
0   0     1      0   1    Port B
0   0     1      1   0    Control Register
0   0     1      1   1    Status Register

Write state:
CS  Read  Write  S0  S1   Selection of interface unit
0   1     0      0   0    Port A
0   1     0      0   1    Port B
0   1     0      1   0    Control Register
0   1     0      1   1    Status Register
Example:
If S0, S1 = 0 1, then the Port B data register is selected for data transfer between the CPU and the I/O device.
If S0, S1 = 1 0, then the control register is selected, and it stores the control information sent by the CPU.
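The register-selection logic in the tables above can be sketched as a small Python function. It models only the chip-select and register-select inputs, with CS active low as shown in the tables; the Read/Write lines determine the direction of transfer rather than the register chosen:

```python
# Sketch of the select-line decoding shown in the tables above.
# CS is taken as active low (the unit responds only when CS = 0).
REGISTERS = {
    (0, 0): "Port A",
    (0, 1): "Port B",
    (1, 0): "Control Register",
    (1, 1): "Status Register",
}

def select(cs, s0, s1):
    if cs != 0:                 # chip not selected: no register responds
        return None
    return REGISTERS[(s0, s1)]  # S0, S1 pick one of the four registers

print(select(0, 0, 1))  # Port B
print(select(0, 1, 0))  # Control Register
```

This matches the worked example: S0, S1 = 0 1 selects Port B, and S0, S1 = 1 0 selects the control register.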
A programming language defines a set of instructions that are compiled together to perform a
specific task by the CPU (Central Processing Unit). The programming language mainly refers to high-
level languages such as C, C++, Pascal, Ada, COBOL, etc.
Each programming language contains a unique set of keywords and syntax, which are used to create
a set of instructions. Thousands of programming languages have been developed till now, but each
language has its specific purpose. These languages vary in the level of abstraction they provide from
the hardware. Some programming languages provide less or no abstraction while some provide
higher abstraction. Based on the levels of abstraction, they can be classified into two categories:
o Low-level language
o High-level language
These levels of abstraction can be ordered: machine language provides no abstraction from the hardware, assembly language provides a little abstraction, and high-level languages provide the highest level of abstraction.
Low-level language
A low-level language is a programming language that provides little or no abstraction from the hardware, and it is represented in 0 and 1 form as machine instructions. The languages that come under this category are machine-level language and assembly language.
Machine-level language
The machine-level language consists of instructions in binary form, i.e., 0s and 1s. Since computers can understand only machine instructions, which are binary digits, the instructions given to the computer can only be in binary code. Creating a program in machine language is very difficult, as it is not easy for programmers to write programs directly in machine instructions. It is error-prone, hard to understand, and its maintenance cost is very high. Machine language is also not portable: each computer has its own machine instructions, so a program written for one computer will not be valid on another.
Different processor architectures use different machine codes; for example, a PowerPC processor, which has a RISC architecture, requires different code than an Intel x86 processor, which has a CISC architecture.
Assembly Language
Assembly language contains human-readable commands such as mov, add, sub, etc. The problems faced with machine-level language are reduced to some extent by using this extended form of machine-level language. Since assembly language instructions are written with English-like words such as mov, add and sub, it is easier to write and understand.
As we know that computers can only understand the machine-level instructions, so we require a
translator that converts the assembly code into machine code. The translator used for translating the
code is known as an assembler.
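The translation step an assembler performs can be sketched in Python. The 4-bit opcodes and 2-bit register codes below are invented for illustration; real instruction encodings differ from one architecture to another:

```python
# Toy assembler sketch (invented 8-bit encoding, not a real ISA):
# translate a mnemonic line such as "add R1, R2" into a binary word.
OPCODES = {"mov": 0b0001, "add": 0b0010, "sub": 0b0011}
REGS = {"R0": 0b00, "R1": 0b01, "R2": 0b10, "R3": 0b11}

def assemble(line):
    mnemonic, dst, src = line.replace(",", " ").split()
    # Pack the word: 4-bit opcode, 2-bit destination, 2-bit source.
    word = (OPCODES[mnemonic] << 4) | (REGS[dst] << 2) | REGS[src]
    return format(word, "08b")

print(assemble("add R1, R2"))  # 00100110
```

Each mnemonic maps one-for-one to a machine instruction, which is exactly why assembly code, once assembled, runs at machine-code speed.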
Assembly language code is not portable because it refers to specific computer registers, and different computers have different sets of registers.
Assembly code is not faster than machine code; assembly language sits above machine language in the hierarchy, meaning it has some abstraction from the hardware, while machine language has none.
The following are the differences between machine-level language and assembly language:

Machine-level language                            Assembly language
The machine-level language comes at the lowest    The assembly language comes above the machine
level in the hierarchy, so it has zero            language in the hierarchy, which means that it
abstraction from the hardware.                    has less abstraction from the hardware.
It is written in binary digits, i.e., 0 and 1.    It is written in simple English-like words, so
                                                  it is easily understandable by the users.
It does not require any translator, as the        An assembler is used to translate the assembly
machine code is directly executed by the          code into machine code.
computer.
High-Level Language
A high-level language is a programming language that allows a programmer to write programs independent of any particular type of computer. High-level languages are considered high-level because they are closer to human languages than machine-level languages. When writing a program in a high-level language, the programmer's whole attention can be devoted to the logic of the problem.
o The high-level language is easy to read, write and maintain, as it is written in English-like words.
o High-level languages are designed to overcome the main limitation of low-level languages: portability. High-level languages are portable, i.e., machine-independent.
The following are the differences between low-level language and high-level language:

Low-level language                                High-level language
It requires an assembler to convert the           It requires a compiler to convert the
assembly code into machine code.                  high-level language instructions into machine
                                                  code.
The machine code cannot run on all machines,      The high-level code can run on all platforms,
so it is not a portable language.                 so it is a portable language.
Debugging and maintenance are harder in a         Debugging and maintenance are easier in a
low-level language.                               high-level language.
Memory reference instructions are instructions that refer to a memory operand: each one specifies an effective address that tells the processor where in memory the required data is located, so that the program can access it.
There are seven memory reference instructions, which are as follows:
AND
The AND instruction performs the logical AND operation on the contents of the register and the memory word specified by the effective address. The result is transferred back to the register.
ADD
The ADD instruction adds the content of the memory word specified by the effective address to the content of the register.
LDA
The LDA instruction loads the memory word specified by the effective address into the register.
STA
STA stores the content of the register into the memory word specified by the effective address. The register's output is applied to the common bus, and the memory data input is connected to the bus. It requires only one micro-operation.
BUN
The Branch Unconditionally (BUN) instruction transfers control to the instruction specified by the effective address. Normally the PC holds the address of the next instruction to be performed, and is incremented by one to obtain the address of the next instruction in sequence. If control needs to execute an instruction that is not next in the sequence, the BUN instruction is used.
BSA
BSA stands for Branch and Save return Address. This instruction is used to branch to a part of the program called a subroutine or procedure. When the instruction is executed, BSA stores the address of the next instruction (from the PC) into a memory location specified by the effective address.
ISZ
The Increment and Skip if Zero (ISZ) instruction increments the word specified by the effective address. If the incremented value is zero, the PC is incremented by 1, so the next instruction is skipped. The programmer stores a negative value in the memory word; after being incremented repeatedly it eventually reaches zero, at which point the PC is incremented and the next instruction is skipped.
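The looping idiom described for ISZ can be sketched in Python (the memory word name and loop structure below are illustrative, not a real program):

```python
# Sketch of the ISZ loop idiom: the programmer stores a negative count,
# ISZ increments it each pass, and when the word reaches zero the next
# instruction (the branch back to the loop top) is skipped.
memory = {"CTR": -3}     # loop counter, set to -(number of iterations)
passes = 0

while True:
    passes += 1                  # body of the loop runs once per pass
    memory["CTR"] += 1           # ISZ: increment the memory word
    if memory["CTR"] == 0:       # result is zero: skip the next instruction
        break                    # (i.e., skip the BUN back to the loop top)

print(passes)  # 3
```

Starting the counter at -3 makes the loop body run exactly three times before the skip takes effect.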
Assembly language
A low-level programming language that allows programmers to directly manipulate hardware and
access specialized processor instructions. Assembly language is written in a symbolic notation that's
easier for humans to understand than machine code, which is made up of binary sequences of 0s
and 1s.
Assembly language instructions are the individual lines of code that make up an assembly language program. Each assembly language instruction corresponds to a specific machine instruction that the computer's processor can execute. An assembler, a type of software, translates assembly language instructions into machine language instructions.
How Assembly Languages Work
Fundamentally, the most basic instructions executed by a computer are binary codes, consisting of
ones and zeros. Those codes are directly translated into the “on” and “off” states of the electricity
moving through the computer’s physical circuits. In essence, these simple codes form the basis of
“machine language,” the most fundamental variety of programming language.
Of course, no human can construct modern software programs by explicitly programming ones and zeros. Instead, human programmers rely on various layers of abstraction that allow them to articulate commands in a format more intuitive to humans.
Specifically, modern programmers issue commands in so-called “high-level languages,” which utilize
intuitive syntax such as whole English words and sentences, as well as logical operators such as
“and,” “or,” and “else” that are familiar from everyday usage.
The first assembly languages were developed in the 1940s, and though modern programmers spend very little time dealing with assembly languages, they nevertheless remain essential to the overall functioning of a computer.
Syntax
When writing code in any programming language, there is a specific set of rules that must be followed to allow a compiler to execute the code without error. These rules are defined as the syntax, and they cover criteria such as the maximum number of allowable characters, what characters code lines must start with, and what certain symbols (e.g., a semicolon) mean.
Label
A label is a symbol that represents the address where an instruction or data is stored. Its purpose is to act as the destination when referenced in a statement. Labels can be used anywhere an address can be used in assembly languages. A symbolic label consists of an identifier followed by a colon, while a numeric label consists of a single digit followed by a colon.
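Label resolution can be illustrated with a minimal Python sketch of an assembler's first pass. The mini-syntax and the one-word-per-instruction assumption are invented for illustration:

```python
# Sketch (invented mini-syntax) of how an assembler's first pass maps
# each symbolic label to the address of the instruction that follows it.
source = [
    "start:",           # symbolic label: identifier followed by a colon
    "mov",
    "add",
    "loop:",            # another label, later used as a branch target
    "sub",
    "jmp loop",         # references the label wherever an address fits
]

labels = {}
address = 0
for line in source:
    if line.endswith(":"):          # label definition: record its address
        labels[line[:-1]] = address
    else:
        address += 1                # assume each instruction is one word

print(labels)  # {'start': 0, 'loop': 2}
```

A second pass would then substitute `labels["loop"]` for the name in `jmp loop`, which is how a label can stand in anywhere an address is expected.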
Operators
Also referred to as commands, operators are logical expressions that occur after the label field; they must be preceded by at least one whitespace character. Operators can be either opcodes or directives. Opcodes correspond directly to machine instructions, and the operation code includes any register name associated with the instruction. Directive operation codes, by contrast, are instructions known only to the assembler.
Directive
Directives are instructions to the assembler that specify what actions must take place during the assembly process. Directives are important for declaring or reserving memory for variables; these variables can be recalled later to perform more dynamic functions. Directives are also used to break programs into different sections.
Macro
An assembly language macro is a template whose format presents a series or pattern of statements. This sequence of assembly language statements might be common to multiple different programs. A macro facility is used to interpret macro definitions, and a macro call is inserted into the source code where the "normal" assembly code would have gone, in place of the macro's set of statements.
Mnemonic
A mnemonic is an abbreviation for an operation. A mnemonic is entered in the operation code field of each assembly program instruction to specify a shortened "opcode" that represents a larger, complete set of codes. For example, a mnemonic such as ADD stands in for the full binary operation code that carries out the addition.
Today, assembly languages remain a subject of study for computer science students, helping them understand how modern software relates to its underlying hardware platforms. In some cases, programmers must continue to write in assembly languages, such as when the demands on performance are especially high, or when the hardware in question is incompatible with any current high-level languages.
High-Frequency Trading
One example relevant to finance is the high-frequency trading (HFT) platform used by some financial firms. In this marketplace, the speed and accuracy of transactions are of paramount importance for HFT trading strategies to prove profitable. Therefore, to gain an edge over their competitors, some HFT firms have written their trading software directly in assembly languages, making it unnecessary to wait for commands from a higher-level language to be translated into machine language.
The most commonly used assembly languages include ARM, MIPS, and x86.
C++ is not composed of assembly code; the C++ language consists of C++ code, which a compiler translates into executable machine code.
Python is more abstract than assembly language. Assembly languages are considered low-level languages: they use mnemonic symbols and abbreviations in place of raw 0s and 1s, while high-level languages such as C, Java, or Python move even further from the hardware.
Though considered lower-level than more advanced languages, assembly languages are still used. Assembly language is used to directly manipulate hardware, access specialized processor instructions, or address critical performance issues. These languages are also used to leverage their speed advantage over high-level languages for time-sensitive activities such as high-frequency trading.