Computer Organization

Computer organization refers to the operational units and their interconnections that realize the architectural specifications. In other words, computer organization is the realization of what is specified by the computer architecture: it deals with how operational attributes are linked together to meet the requirements set by the architecture. Some organizational attributes are hardware details, control signals, and peripherals.
Example: Say you work in a company that manufactures cars. The overall design of the car comes under computer architecture (the abstract, programmer's view), while making its parts piece by piece and connecting the different components together, keeping the basic design in mind, comes under computer organization (the physical and visible realization).
Function
Both the structure and functioning of a computer are, in essence, simple. In general terms, a computer performs only four basic functions:
Data processing
Data storage
Data movement
Control
The computer must be able to process data. The data may take a wide variety of forms, and the range of processing
requirements is broad. However, we shall see that there are only a few fundamental methods or types of data processing.
It is also essential that a computer store data. Even if the computer is processing data on the fly (i.e., data come in and get
processed, and the results go out immediately), the computer must temporarily store at least those pieces of data that are being
worked on at any given moment. Thus, there is at least a short-term data storage function. Equally important, the computer
performs a long-term data storage function. Files of data are stored on the computer for subsequent retrieval and update.
The computer must be able to move data between itself and the outside world. The computer’s operating environment consists
of devices that serve as either sources or destinations of data. When data are received from or delivered to a device that is
directly connected to the computer, the process is known as input–output (I/O), and the device is referred to as a peripheral.
When data are moved over longer distances, to or from a remote device, the process is known as data communications.
Finally, there must be control of these three functions. Ultimately, this control is exercised by the individual(s) who provides
the computer with instructions. Within the computer, a control unit manages the computer’s resources and orchestrates the
performance of its functional parts in response to those instructions.
Structure
The figure below is the simplest possible depiction of a computer. The computer interacts in some fashion with its external
environment. In general, all of its linkages to the external environment can be classified as peripheral devices or
communication lines. We will have something to say about both types of linkages.
Figure: The computer
But of greater concern is the internal structure of the computer itself. There are four main structural components:
Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions;
often simply referred to as the processor.
Main memory: Stores data.
I/O: Moves data between the computer and its external environment.
System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O. A
common example of system interconnection is by means of a system bus, consisting of a number of conducting wires
to which all the other components attach.
There may be one or more of each of the aforementioned components. Traditionally, there has been just a single processor. In
recent years, there has been increasing use of multiple processors in a single computer. However, for our purposes, the most
interesting and in some ways the most complex component is the CPU. Its major structural components are as follows:
Control unit: Controls the operation of the CPU and hence the computer
Arithmetic and logic unit (ALU): Performs the computer’s data processing functions
Registers: Provide storage internal to the CPU
CPU interconnection: Some mechanism that provides for communication among the control unit, ALU, and registers
GENERATIONS OF A COMPUTER
In computer terminology, a generation is a change in the technology a computer uses or used. Initially, the term generation was used to distinguish between varying hardware technologies, but nowadays it includes both hardware and software, which together make up an entire computer system. Five computer generations are known to date. Each generation is discussed below along with its time period and characteristics; the approximate dates given against each generation are the ones normally accepted. Following are the main five generations of computers.
1. First generation
The period of the first generation was 1946-1959. First-generation computers used vacuum tubes as the basic components for memory and for the circuitry of the CPU (Central Processing Unit). These tubes, like electric bulbs, produced a lot of heat, and the installations were prone to frequent failures; consequently, these computers were very expensive and could be afforded only by very large organizations. In this generation, mainly batch processing operating systems were used. Punched cards, paper tape, and magnetic tape were used as input and output devices. The computers in this generation used machine code as the programming language. The main features of the first generation are:
Vacuum tube technology
Unreliable
Supported machine language only
Very costly
Generated a lot of heat
Slow input and output devices
Huge size
Needed air conditioning (A.C.)
Non-portable
Consumed a lot of electricity
Some computers of this generation were:
ENIAC
EDVAC
UNIVAC
IBM-701
IBM-650
2. Second generation
The period of the second generation was 1959-1965. In this generation transistors were used; they were cheaper, consumed less power, and were more compact, more reliable and faster than the vacuum tubes of the first-generation machines. In this generation, magnetic cores were used as primary memory, and magnetic tape and magnetic disks as secondary storage devices. Assembly language and high-level programming languages like FORTRAN and COBOL were used. The computers used batch processing and multiprogramming operating systems. The main features of the second generation are:
Use of transistors
Reliable in comparison to first generation computers
Smaller size as compared to first generation computers
Generated less heat as compared to first generation computers
Consumed less electricity as compared to first generation computers
Faster than first generation computers
Still very costly
A.C. needed
Supported machine and assembly languages
Some computers of this generation were:
IBM 1620
IBM 7094
CDC 1604
CDC 3600
UNIVAC 1108
3. Third generation
The period of the third generation was 1965-1971. Third-generation computers used integrated circuits (ICs) in place of transistors. A single IC has many transistors, resistors and capacitors along with the associated circuitry. The IC was invented by Jack Kilby. This development made computers smaller in size, more reliable and more efficient. In this generation, remote processing, time-sharing and multiprogramming operating systems were used. High-level languages (FORTRAN-II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used during this generation. The main features of the third generation are:
IC used
More reliable in comparison to previous two generations
Smaller size
Generated less heat
Faster
Lesser maintenance
Still costly
A.C. needed
Consumed lesser electricity
Supported high-level language
Some computers of this generation were:
IBM-360 series
Honeywell-6000 series
PDP (Programmed Data Processor)
IBM-370/168
TDC-316
4. Fourth generation
The period of the fourth generation was 1971-1980. Fourth-generation computers used Very Large Scale Integration (VLSI) circuits. VLSI circuits, having about 5000 transistors and other circuit elements with their associated circuitry on a single chip, made it possible to build the microcomputers of the fourth generation. Fourth-generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the personal computer (PC) revolution. In this generation, time-sharing, real-time, network and distributed operating systems were used. High-level languages like C, C++, DBASE, etc., were used in this generation. The main features of the fourth generation are:
VLSI technology used
Very cheap
Portable and reliable
Use of PCs
Very small size
Pipeline processing
No A.C. needed
Concept of internet was introduced
Great developments in the fields of networks
Computers became easily available
Some computers of this generation were:
DEC 10
STAR 1000
PDP 11
CRAY-1 (supercomputer)
CRAY X-MP (supercomputer)
5. Fifth generation
The period of the fifth generation is 1980 to date. In the fifth generation, VLSI technology became ULSI (Ultra Large Scale Integration) technology, resulting in the production of microprocessor chips with ten million electronic components. This generation is based on parallel processing hardware and AI (Artificial Intelligence) software. AI is an emerging branch of computer science concerned with the means and methods of making computers think like human beings. High-level languages like C, C++, Java, .Net, etc., are used in this generation. AI includes:
Robotics
Neural Networks
Game Playing
Development of expert systems to make decisions in real life situations.
Natural language understanding and generation.
The main features of fifth generation are:
ULSI technology
Development of true artificial intelligence
Development of Natural language processing
Advancement in Parallel Processing
Advancement in Superconductor technology
More user-friendly interfaces with multimedia features
Availability of very powerful and compact computers at cheaper rates
Some computer types of this generation are:
Desktop
Laptop
NoteBook
UltraBook
ChromeBook
COMPUTER TYPES
A. Classification based on operating principles
Based on the operating principles, computers can be classified into one of the following types:
1. Digital computers: Operate essentially by counting. All quantities are expressed as discrete values or numbers. Digital computers are useful for evaluating arithmetic expressions and manipulating data (such as preparation of bills, ledgers, solution of simultaneous equations, etc.).
2. Analog computers: An analog computer is a form of computer that uses the continuously changeable aspects of physical
phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital
computers represent varying quantities symbolically, as their numerical values change.
3. Hybrid computers: Are computers that exhibit features of analog computers and digital computers. The digital
component normally serves as the controller and provides logical operations, while the analog component normally serves
as a solver of differential equations.
B. Classification based on size and use
A computer can be defined as a fast electronic calculating machine that accepts digitized input information (data), processes it as per a list of internally stored instructions, and produces the resulting information. The list of instructions is called a program, and the internal storage is called computer memory. The different types of computers are:
1. Personal computers: This is the most common type, found in homes, schools, business offices, etc. It is a desktop computer with processing and storage units along with various input and output devices.
2. Notebook computers: These are compact and portable versions of the PC.
3. Workstations: These have high-resolution graphics input/output (I/O) capability, but with roughly the same dimensions as a desktop computer. They are used in engineering applications and interactive design work.
4. Enterprise systems: These are used for business data processing in medium to large corporations that require much more computing power and storage capacity than workstations. The Internet, together with its servers, has become a dominant worldwide source of all types of information.
5. Supercomputers: These are used for the large-scale numerical calculations required in applications such as weather forecasting.
Basic terminology
Input: Whatever is put into a computer system.
Data: Refers to the symbols that represent facts, objects, or ideas.
Information: The results of the computer storing data as bits and bytes; the words, numbers, sounds, and graphics.
Output: Consists of the processing results produced by a computer.
Processing: Manipulation of the data in many ways.
Memory: Area of the computer that temporarily holds data waiting to be processed, stored, or output.
Storage: Area of the computer that holds data on a permanent basis when it is not immediately needed for processing.
Assembly language program (ALP): Programs are written using mnemonics
Mnemonic: An instruction written in an English-like symbolic form
Assembler: Software that converts an ALP into machine-level language (MLL)
High Level Language (HLL): Programs are written using English-like statements
Compiler: Converts HLL to MLL; does this by reading the whole source program at once
Interpreter: Converts HLL to MLL; does this statement by statement
System software: Program routines which aid the user in the execution of programs, e.g. assemblers, compilers
Operating system: Collection of routines responsible for controlling and coordinating all the activities in a computer system
Functional unit
A computer consists of five functionally independent main parts: input, memory, arithmetic and logic unit (ALU), output and control unit. The input device accepts coded information, such as a source program written in a high-level language. This is either stored in the memory or immediately used by the processor to perform the desired operations. The program stored in the memory determines the processing steps; basically, the computer converts a source program into an object program, i.e. into machine language. Finally, the results are sent to the outside world through an output device. All of these actions are coordinated by the control unit.
1. Input unit: The source program, high-level language program, coded information, or simply data, is fed to a computer through input devices; the keyboard is the most common type. Whenever a key is pressed, the corresponding letter or digit is translated into its equivalent binary code and sent over a cable to either the memory or the processor. Joysticks, trackballs, mice, scanners, etc. are other input devices.
2. Memory unit: Its function is to store programs and data. It is basically of two types: primary memory and secondary memory.
Word: In computer architecture, a word is a unit of data of a defined bit length that can be addressed and moved between storage and the computer processor. Usually, the defined bit length of a word is equivalent to the width of the computer's data bus, so that a word can be moved in a single operation from storage to a processor register. For any computer architecture with an eight-bit byte, the word will be some multiple of eight bits. In IBM's System/360 architecture, a word is 32 bits, or four contiguous eight-bit bytes. In Intel's PC processor architecture, a word is 16 bits, or two contiguous eight-bit bytes. A word can contain a computer instruction, a storage address, or application data that is to be manipulated (for example, added to the data in another word). The number of bits in each word is known as the word length; it is the number of bits processed by the CPU in one go. In modern general-purpose computers, word size ranges from 16 bits to 64 bits. The time required to access one word is called the memory access time. The small, fast RAM units are called caches. They are tightly coupled with the processor and are often contained on the same IC chip to achieve high performance.
Primary memory: This is the memory exclusively associated with the processor, and it operates at electronic speeds. Programs must be stored in this memory while they are being executed. The memory contains a large number of semiconductor storage cells, each capable of storing one bit of information. These cells are processed in groups of fixed size called words. To provide easy access to a word in memory, a distinct address is associated with each word location; addresses are numbers that identify memory locations. The number of bits in each word is called the word length of the computer. Instructions and data can be written into the memory or read out under the control of the processor. Memory in which any location can be reached in a short and fixed amount of time after specifying its address is called random access memory (RAM). The time required to access one word is called the memory access time. Memory which is only readable by the user and whose contents cannot be altered is called read only memory (ROM); it typically holds start-up (boot) programs. Caches are the small, fast RAM units which are coupled with the processor and are often contained on the same IC chip to achieve high performance. Although primary storage is essential, it tends to be expensive.
Secondary memory: This is used where large amounts of data and programs have to be stored, particularly information that is accessed infrequently. Examples: magnetic disks and tapes, optical disks (i.e. CD-ROMs), floppies, etc.
Figure: Types of memory
3. Arithmetic and logic unit (ALU): Most computer operations, such as addition, subtraction, division and multiplication, are executed in the ALU of the processor. The operands are brought into the ALU from memory and stored in high-speed storage elements called registers. Then, according to the instructions, the operation is performed in the required sequence. The control unit and the ALU are many times faster than the other devices connected to a computer system. This enables a single processor to control a number of external devices such as keyboards, displays, magnetic and optical disks, sensors and other mechanical controllers.
4. Output unit: These are the counterparts of the input units. Their basic function is to send the processed results to the outside world. Examples: printer, speakers, monitor, etc.
5. Control unit: It effectively is the nerve center that sends signals to other units and senses their states. The actual timing
signals that govern the transfer of data between input unit, processor, memory and output unit are generated by the control
unit.
To perform a given task, an appropriate program consisting of a list of instructions is stored in the memory. Individual instructions are brought from the memory into the processor, which executes the specified operations. Data to be processed are also stored in the memory.
Examples:
ADD LOCA, R0
This instruction adds the operand at memory location LOCA to the operand in register R0 and places the sum into register R0. This instruction requires the performance of several steps:
1. First, the instruction is fetched from the memory into the processor.
2. The operand at LOCA is fetched and added to the contents of R0.
3. Finally, the resulting sum is stored in register R0.
The preceding ADD instruction combines a memory access operation with an ALU operation. In some other types of computers, these two operations are performed by separate instructions, for performance reasons:
LOAD LOCA, R1
ADD R1, R0
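As an illustration only, the effect of these instruction sequences can be sketched in C. The memory array MEM, the registers R0 and R1, and the value chosen for LOCA are made-up placeholders for this sketch, not part of any real instruction set.

#include <stdio.h>
#include <stdint.h>

#define LOCA 100                 /* made-up address of the memory operand       */

uint32_t MEM[1024];              /* simplified main memory                      */
uint32_t R0, R1;                 /* two general-purpose registers               */

int main(void)
{
    MEM[LOCA] = 25;              /* operand held in memory                      */

    /* Single-instruction form: ADD LOCA, R0                                    */
    R0 = 17;
    R0 = R0 + MEM[LOCA];         /* fetch operand at LOCA, add to R0, keep sum in R0 */
    printf("after ADD LOCA, R0 : R0 = %u\n", R0);

    /* Equivalent two-instruction form: LOAD LOCA, R1 followed by ADD R1, R0    */
    R0 = 17;
    R1 = MEM[LOCA];              /* LOAD LOCA, R1 */
    R0 = R0 + R1;                /* ADD  R1,  R0  */
    printf("after LOAD/ADD     : R0 = %u\n", R0);

    return 0;
}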
Transfers between the memory and the processor are started by sending the address of the memory location to be accessed to
the memory unit and issuing the appropriate control signals. The data are then transferred to or from the memory.
The figure above shows how memory and the processor can be connected. In addition to the ALU and the control circuitry,
the processor contains a number of registers used for several different purposes.
Register
It is a special, high-speed storage area within the CPU. All data must be represented in a register before it can be processed.
For example, if two numbers are to be multiplied, both numbers must be in registers, and the result is also placed in a register.
(The register can contain the address of a memory location where data is stored rather than the actual data itself.)
The number of registers that a CPU has and the size of each (number of bits) help determine the power and speed of a CPU.
For example, a 32-bit CPU is one in which each register is 32 bits wide. Therefore, each CPU instruction can manipulate 32
bits of data. In high-level languages, the compiler is responsible for translating high-level operations into low-level operations
that access registers.
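As a hedged sketch of this idea, the C statement below multiplies two numbers, and the comments show one possible register-level translation a compiler might produce. The mnemonics LOAD, MUL and STORE and the register names R1-R3 are purely illustrative and do not belong to any particular instruction set.

#include <stdio.h>

int main(void)
{
    int a = 6, b = 7, c;

    c = a * b;      /* one possible translation by a compiler:
                       LOAD  a, R1        ; bring the first operand into a register
                       LOAD  b, R2        ; bring the second operand into a register
                       MUL   R1, R2, R3   ; multiply; the result is placed in register R3
                       STORE R3, c        ; write the result back to memory            */

    printf("c = %d\n", c);
    return 0;
}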
Instruction format
Computer instructions are the basic components of a machine language program. They are also known as macro-operations,
since each one comprises a sequence of micro-operations. Each instruction initiates a sequence of micro-operations that
fetch operands from registers or memory, possibly perform arithmetic, logic, or shift operations, and store results in registers
or memory.
Instructions are encoded as binary instruction codes. Each instruction code consists of an operation code, or opcode, which designates the overall purpose of the instruction (e.g. ADD, SUBTRACT, MOVE, INPUT, etc.). The number of bits allocated for the opcode determines how many different instructions the architecture supports. In addition to the opcode, many instructions also contain one or more operands, which indicate where in registers or memory the data required for the operation is located. For example, an ADD instruction requires two operands, and a NOT instruction requires one.
 15      12 11       6 5        0
+----------+----------+----------+
|  Opcode  | Operand  | Operand  |
+----------+----------+----------+
The opcode and operands are most often encoded as unsigned binary numbers in order to minimize the number of bits used
to store them. For example, a 4-bit opcode encoded as a binary number could represent up to 16 different operations. The
control unit is responsible for decoding the opcode and operand bits in the instruction register, and then generating the control
signals necessary to drive all other hardware in the CPU to perform the sequence of micro-operations that comprise the
instruction.
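A minimal sketch in C of how the control unit's decoding step might look for the 16-bit format drawn above, with a 4-bit opcode in bits 15-12 and two 6-bit operand fields in bits 11-6 and 5-0. The field widths follow the figure; the sample instruction value is arbitrary.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint16_t instruction = 0x1A2B;                    /* arbitrary 16-bit instruction code */

    uint16_t opcode   = (instruction >> 12) & 0x0F;   /* bits 15-12: 4-bit opcode          */
    uint16_t operand1 = (instruction >> 6)  & 0x3F;   /* bits 11-6 : first 6-bit operand   */
    uint16_t operand2 =  instruction        & 0x3F;   /* bits 5-0  : second 6-bit operand  */

    printf("opcode = %u, operand1 = %u, operand2 = %u\n", opcode, operand1, operand2);
    return 0;
}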
Instruction cycle
Figure: Instruction cycle with interrupt
The instruction register (IR): Holds the instruction that is currently being executed. Its output is available to the control circuits, which generate the timing signals that control the various processing elements involved in executing the instruction.
The program counter (PC): This is another specialized register that keeps track of the execution of a program. It contains the memory address of the next instruction to be fetched and executed. Besides the IR and the PC, there are n general-purpose registers, R0 through Rn-1.
The other two registers which facilitate communication with memory are:
1. MAR – (Memory Address Register): It holds the address of the location to be accessed.
2. MDR – (Memory Data Register): It contains the data to be written into or read out of the addressed location.
Normal execution of a program may be preempted (temporarily interrupted) if some device requires urgent servicing; to do this, the device raises an interrupt signal. An interrupt is a request signal from an I/O device for service by the processor. The processor provides the requested service by executing an appropriate interrupt-service routine. Because this diversion may change the internal state of the processor, that state must be saved in memory locations before the interrupt is serviced. When the interrupt-service routine is completed, the state of the processor is restored so that the interrupted program may continue.
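The cooperation of the PC, IR, MAR and MDR described above can be sketched very loosely as a C simulation loop. The tiny memory, the instruction encoding (reusing the 4/6/6-bit format from the previous section) and the two opcodes HALT and ADD are invented solely for this illustration.

#include <stdio.h>
#include <stdint.h>

#define HALT 0                                      /* made-up opcodes for the sketch      */
#define ADD  1

uint16_t MEM[16] = { (ADD << 12) | (2 << 6) | 3,    /* ADD: R[3] = R[3] + R[2]             */
                     (HALT << 12) };                /* HALT                                */
uint16_t R[4] = { 0, 0, 5, 7 };                     /* general-purpose registers           */

int main(void)
{
    uint16_t PC = 0, IR, MAR, MDR;

    for (;;) {
        MAR = PC;                                   /* address of next instruction -> MAR  */
        MDR = MEM[MAR];                             /* memory places the word in the MDR   */
        IR  = MDR;                                  /* instruction is copied into the IR   */
        PC  = PC + 1;                               /* PC now points to the next word      */

        uint16_t opcode = (IR >> 12) & 0x0F;        /* decode                              */
        if (opcode == HALT)
            break;
        if (opcode == ADD) {                        /* execute                             */
            uint16_t src = (IR >> 6) & 0x3F, dst = IR & 0x3F;
            R[dst] = R[dst] + R[src];
        }
    }

    printf("R3 = %u\n", R[3]);                      /* prints 12 for the program above     */
    return 0;
}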
Bus structures
Single bus and multiple bus structures are the two basic ways of interconnecting the units of a computer. A bus is basically a subsystem that transfers data between the components of a computer, either within one computer or between two computers, and it can connect several peripheral devices at the same time.
In a single bus structure, all units are connected to the same bus, so only one data transfer can take place at a time; it is simple and cheap. In a multiple bus structure, several buses are used and the units are distributed among them, so transfers can proceed in parallel. The performance of a multiple bus structure is therefore better than that of a single bus structure, but its cost is higher.
A group of lines that serves as a connecting path for several devices is called a bus (one bit per line). The individual parts must communicate over a communication line or path to exchange data, address and control information, as shown in the diagram below (for example, from the processor to a printer). A common approach is to use buffer registers to hold the content during the transfer; buffer registers temporarily hold the data during a transfer such as printing.
Types of buses
1. Data bus
The data bus is the most common type of bus. It is used to transfer data between the different components of the computer. The number of lines in the data bus affects the speed of data transfer between components. The data bus typically consists of 8, 16, 32, or 64 lines; a 64-line data bus can transfer 64 bits of data at one time. The data bus lines are bidirectional, which means that the CPU can read data from memory using these lines and can also write data to memory locations using them.
2. Address bus
Many components are connected to one another through buses. Each component is assigned a unique ID, called the address of that component. If a component wants to communicate with another component, it uses the address bus to specify the address of that component. The address bus is a unidirectional bus; it can carry information in only one direction. It carries the address of a memory location from the microprocessor to the main memory.
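As a worked example, a processor with a 16-bit address bus can identify 2^16 = 65,536 distinct memory locations, while a 32-bit address bus can identify 2^32 = 4,294,967,296 locations (4 GB with one byte stored per location).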
3. Control bus
The control bus is used to transmit commands and control signals from one component to another. For example, when the CPU wants to read data from main memory, it sends the read command over the control bus. The control bus is also used to transmit acknowledgement (ACK) signals. A control signal contains the following:
1. Timing information: It specifies the time for which a device can use data and address bus.
2. Command Signal: It specifies the type of operation to be performed.
Suppose that the CPU gives a command to the main memory to write data. The memory sends an acknowledgement signal to the CPU after writing the data successfully. The CPU receives the signal and then moves on to perform some other action.
Software
If a user wants to enter and run an application program, he or she needs system software. System software is a collection of programs that are executed as needed to perform functions such as:
Receiving and interpreting user commands
Entering and editing application programs and storing them as files on secondary storage devices
Running standard application programs such as word processors, spreadsheets, games, etc.
The operating system is the key system software component; it helps the user exploit the underlying hardware through these programs.
Types of software
Figure: A layer structure showing where the operating system is located in generally used software systems on desktops
System software
System software helps run the computer hardware and computer system. It includes a combination of the following:
device drivers
operating systems
servers
utilities
windowing systems
compilers
debuggers
interpreters
linkers
The purpose of systems software is to unburden the applications programmer from the often-complex details of the particular
computer being used, including such accessories as communications devices, printers, device readers, displays and keyboards,
and also to partition the computer's resources, such as memory and processor time, in a safe and stable manner. Examples are Windows XP, Linux and Mac OS.
Application software
Application software allows end users to accomplish one or more specific (not directly computer development related) tasks.
Typical applications include:
Business software
Computer games
Quantum chemistry and solid-state physics software
Telecommunications (i.e., the internet and everything that flows on it)
Databases
Educational software
Medical software
Military software
Molecular modelling software
Image editing
Spreadsheet
Simulation software
Word processing
Decision making software
Application software exists for and has impacted a wide variety of topics.
Performance
The most important measure of the performance of a computer is how quickly it can execute programs. The speed with which a computer executes programs is affected by the design of its hardware. For best performance, it is necessary to design the compilers, the machine instruction set, and the hardware in a coordinated way. The total time required to execute a program, the elapsed time, is a measure of the performance of the entire computer system; it is affected by the speed of the processor, the disk and the printer. The time needed by the processor to execute the instructions of a program is called the processor time. Just as the elapsed time for the execution of a program depends on all units in a computer system, the processor time depends on the hardware involved in the execution of individual machine instructions. This hardware comprises the processor and the memory, which are usually connected by a bus as shown in the figure below.
Let us examine the flow of program instructions and data between the memory and the processor. At the start of execution,
all program instructions and the required data are stored in the main memory. As the execution proceeds, instructions are
fetched one by one over the bus into the processor, and a copy is placed in the cache. Later, if the same instruction or data item is needed a second time, it is read directly from the cache. The processor and a relatively small cache memory can be fabricated
on a single IC chip. The internal speed of performing the basic steps of instruction processing on chip is very high and is
considerably faster than the speed at which the instruction and data can be fetched from the main memory. A program will be
executed faster if the movement of instructions and data between the main memory and the processor is minimized, which is
achieved by using the cache. For example: Suppose a number of instructions are executed repeatedly over a short period of
time as happens in a program loop. If these instructions are available in the cache, they can be fetched quickly during the
period of repeated use. The same applies to the data that are used repeatedly.
Processor clock
Processor circuits are controlled by a timing signal called a clock. The clock defines regular time intervals called clock cycles. To execute a machine instruction, the processor divides the action to be performed into a sequence of basic steps such that each step can be completed in one clock cycle. The length P of one clock cycle is an important parameter that affects processor performance; the clock rate is R = 1/P, measured in cycles per second. Processors used in today's personal computers and workstations have clock rates that range from a few hundred million to over a billion cycles per second.
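Putting these quantities together gives the basic performance equation commonly used in this context (a standard formulation, stated here for reference): if a program contains N machine instructions, each instruction requires on average S basic steps, and the clock rate is R cycles per second, then the processor time is T = (N × S) / R. Reducing N or S, or increasing R, therefore reduces T; the discussion of pipelining below is about reducing the effective value of S.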
Consider the instruction ADD R1, R2, R3
This adds the contents of R1 and R2 and places the sum into R3. The contents of R1 and R2 are first transferred to the inputs of the ALU. After the addition operation is performed, the sum is transferred to R3. The processor can read the next instruction from the memory while the addition operation is being performed. Then, if that instruction also uses the ALU, its operands can be transferred to the ALU inputs at the same time that the result of the ADD instruction is being transferred to R3. In the ideal case, if all instructions are overlapped to the maximum degree possible, execution proceeds at the rate of one instruction completed in each clock cycle.
Individual instructions still require several clock cycles to complete, but for the purpose of computing T, the effective value of S is 1. A higher degree of concurrency can be achieved if multiple instruction pipelines are implemented in the processor. This means that multiple functional units are used, creating parallel paths through which different instructions can be executed in parallel. With such an arrangement, it becomes possible to start the execution of several instructions in every clock cycle; this mode of operation is called superscalar execution. If it can be sustained for a long time during program execution, the effective value of S can be reduced to less than one. However, parallel execution must preserve the logical correctness of programs; that is, the results produced must be the same as those produced by serial execution of the program instructions. Nowadays many processors are designed in this manner.
Clock rate
There are two possibilities for increasing the clock rate R:
1. Improving the IC technology makes logic circuits faster, which reduces the time needed to execute the basic steps. This allows the clock period P to be reduced and the clock rate R to be increased.
2. Reducing the amount of processing done in one basic step also makes it possible to reduce the clock period P. However,
if the actions that have to be performed by instructions remain the same, the number of basic steps needed may increase.
Increases in the value of R that are entirely caused by improvements in IC technology affect all aspects of the processor's operation equally, with the exception of the time it takes to access the main memory. In the presence of a cache, the percentage of accesses to the main memory is small; hence much of the performance gain expected from the use of faster technology can be realized.
Performance measurements
The performance measure is the time taken by the computer to execute a given benchmark. Initially, some attempts were made to create artificial programs that could be used as benchmark programs, but synthetic programs do not properly predict the performance obtained when real application programs are run. A non-profit organization called SPEC (System Performance Evaluation Corporation) selects and publishes benchmarks. The selected programs range from game playing, compilers, and database applications to numerically intensive programs in astrophysics and quantum chemistry. In each case, the program is compiled for the computer under test, and the running time on a real computer is measured. The same program is also compiled and run on a computer selected as a reference.
The ‘SPEC’ rating is computed as follows.
SPEC rating = Running time on the reference computer/ Running time on the computer under test.
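For example, if a benchmark program runs in 500 seconds on the reference computer and in 50 seconds on the computer under test, the SPEC rating for that program is 500 / 50 = 10: the machine under test is ten times faster on that program. When several benchmark programs are used, the overall rating is normally quoted as the geometric mean of the individual ratings. A small C sketch of that calculation (the running times are invented for the illustration):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* running times in seconds for three benchmark programs (invented numbers) */
    double reference[]  = { 500.0, 300.0, 800.0 };   /* on the reference computer     */
    double under_test[] = {  50.0,  60.0, 100.0 };   /* on the computer under test    */
    int n = 3;

    double product = 1.0;
    for (int i = 0; i < n; i++)
        product *= reference[i] / under_test[i];     /* SPEC rating of program i      */

    printf("overall SPEC rating = %.2f\n", pow(product, 1.0 / n));   /* geometric mean */
    return 0;
}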
Multicomputers versus multiprocessors

Multicomputer: A computer made up of several computers. Distributed computing deals with hardware and software systems containing more than one processing element, running multiple programs. A multicomputer is multiple computers, each of which can have multiple processors, and it can run faster. It is used for true parallel processing. The processors cannot share memory, so such systems are called message-passing multicomputers. Cost is higher.

Multiprocessor: A computer that has more than one CPU on its motherboard. Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. It is a single computer with multiple processors, and its speed depends on the speed of all the processors. It is used for true parallel processing. The processors can share memory, so such systems are called shared-memory multiprocessors. Cost is lower.
MEMORY ORGANIZATION
Introduction
The computer's memory stores data, instructions required during the processing of data, and output results. Storage may be required for a limited period of time or for an extended period of time. Different types of memories, each having its own unique features, are available for use in a computer. The cache memory, registers, and RAM are fast memories that store data and instructions temporarily during processing. Secondary memory, such as magnetic disks and optical disks, has large storage capacity and stores data and instructions permanently, but these are slow memory devices. The memories are organized in the computer in a manner that achieves high performance at minimum cost. In this section, we discuss the different types of memories, their characteristics and their use in the computer.
Memory representation
The computer memory stores different kinds of data like input data, output data, intermediate results, etc., and the instructions.
Binary digit or bit is the basic unit of memory. A bit is a single binary digit, i.e., 0 or 1. A bit is the smallest unit of
representation of data in a computer. However, the data is handled by the computer as a combination of bits. A group of 8 bits
forms a byte. One byte is the smallest unit of data that is handled by the computer.
One byte (8 bits) can store 2^8 = 256 different combinations of bits, and thus can be used to represent 256 different symbols. In
a byte, the different combinations of bits fall in the range 00000000 to 11111111. A group of bytes can be further combined
to form a word. A word can be a group of 2, 4 or 8 bytes.
1 bit = 0 or 1
1 Byte (B) = 8 bits
1 Kilobyte (KB) = 2^10 bytes = 1024 bytes
1 Megabyte (MB) = 2^20 bytes = 1024 KB
1 Gigabyte (GB) = 2^30 bytes = 1024 MB = 1024 * 1024 KB
1 Terabyte (TB) = 2^40 bytes = 1024 GB = 1024 * 1024 * 1024 KB
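For example, a 4 MB file occupies 4 × 2^20 = 4,194,304 bytes, and a 2 TB disk holds 2 × 2^40 = 2,199,023,255,552 bytes.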
Characteristics of memories
1. Volatility
Volatile -RAM
Non-volatile - ROM, Flash memory
2. Mutability
Read/Write - RAM, HDD, SSD, Cache, Registers
Read Only - Optical ROM (CD/DVD…), Semiconductor ROM
3. Accessibility
Random Access - RAM, Cache
Direct Access - HDD, Optical Disks
Sequential Access - Magnetic Tapes
Memory hierarchy
The memory is characterized on the basis of the following key factors:
Capacity is the amount of information (in bits) that a memory can store.
Access time is the time interval between the read/write request and the availability of data. The smaller the access time, the faster the memory.
Performance: Earlier, when computer systems were designed without a memory hierarchy, the speed gap between the CPU registers and main memory grew because of the large difference in access time, which resulted in lower system performance. The enhancement came in the form of the memory hierarchy design, which increases the performance of the system. One of the most significant ways to increase system performance is to minimize how far down the memory hierarchy one has to go to manipulate data.
Cost per bit: As we move from bottom to top in the hierarchy, the cost per bit increases, i.e. internal memory is costlier than external memory.
Ideally, we want memory with the fastest speed and the largest capacity. However, the cost of fast memory is very high. The computer therefore uses a hierarchy of memories, organized in a manner that approaches the speed of the fastest memory and the capacity of the largest one at reasonable cost. The hierarchy of the different memory types is shown in the figure below.
The internal memory and external memory are the two broad categories of memory used in the computer.
The internal memory consists of the CPU registers, cache memory and primary memory. The internal memory is used
by the CPU to perform the computing tasks.
The external memory is also called the secondary memory. The secondary memory is used to store the large amount of
data and the software.
In general, referring to the computer memory usually means the internal memory.
Figure: Memory hierarchy
Internal memory
The key features of internal memory are:
1. Limited storage capacity.
2. Temporary storage.
3. Fast access.
4. High cost.
Registers, cache memory, and primary memory constitute the internal memory. The primary memory is further of two kinds:
RAM and ROM. Registers are the fastest and the most expensive among all the memory types. The registers are located
inside the CPU and are directly accessible by the CPU. The access time of registers is about 1-2 ns (nanoseconds), and the combined size of all the registers is of the order of 200 B. Cache memory is next in the hierarchy and is placed between the CPU and the main memory. The access time of cache is about 2-10 ns, and the cache size varies between 32 KB and 4 MB. Any program or data that has to be executed must be brought into RAM from the secondary memory. Primary memory is slower than the cache memory; the access time of RAM is around 60 ns, and the RAM size varies from 512 KB to 64 GB.
Secondary memory
The key features of secondary memory storage devices are:
1. Very high storage capacity
2. Permanent storage (non-volatile), unless erased by user.
3. Relatively slower access.
4. Stores data and instructions that are not currently being used by CPU but may be required later for processing.
5. Cheapest among all memory.
To get the fastest speed of memory with largest capacity and least cost, the fast memory is located close to the processor. The
secondary memory, which is not as fast, is used to store information permanently, and is placed farthest from the processor.
CPU registers
Registers are very high-speed storage areas located inside the CPU. After CPU gets the data and instructions from the cache
or RAM, the data and instructions are moved to the registers for processing. Registers are manipulated directly by the control
unit of CPU during instruction execution. That is why registers are often referred to as the CPU’s working memory. Since
CPU uses registers for the processing of data, the number of registers in a CPU and the size of each register affect the power
and speed of a CPU. The greater the number of registers (tens to hundreds) and the bigger the size of each register (8 bits to 64 bits), the better it is.
Cache memory
Cache memory is placed in between the CPU and the RAM. Cache memory is a fast memory, faster than the RAM. When the
CPU needs an instruction or data during processing, it first looks in the cache. If the information is present in the cache, it is
called a cache hit, and the data or instruction is retrieved from the cache. If the information is not present in cache, then it is
called a cache miss and the information is then retrieved from RAM.
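The hit/miss decision described above can be sketched in C for a hypothetical direct-mapped cache. The cache size, the index/tag arithmetic and the function name read_word are invented for this illustration; real caches work on blocks of words and use more elaborate organizations.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define CACHE_LINES 8                               /* tiny, purely illustrative cache   */

typedef struct { bool valid; uint32_t tag; uint32_t data; } CacheLine;

static CacheLine cache[CACHE_LINES];
static uint32_t  RAM[1024];                         /* simplified main memory            */

/* Return the word at 'address', going to RAM only on a cache miss. */
static uint32_t read_word(uint32_t address)
{
    uint32_t index = address % CACHE_LINES;         /* which cache line to look in       */
    uint32_t tag   = address / CACHE_LINES;         /* identifies which address is held  */

    if (cache[index].valid && cache[index].tag == tag) {
        printf("address %u: cache hit\n", address);
        return cache[index].data;                   /* served directly from the cache    */
    }

    printf("address %u: cache miss\n", address);
    cache[index].valid = true;                      /* fetch from RAM and keep a copy    */
    cache[index].tag   = tag;
    cache[index].data  = RAM[address];
    return cache[index].data;
}

int main(void)
{
    RAM[42] = 99;
    read_word(42);                                  /* first access: miss, fetched from RAM   */
    read_word(42);                                  /* second access: hit, served from cache  */
    return 0;
}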
The advantages and disadvantages of cache memory are as follows
Advantages
Cache memory is faster than main memory.
It consumes less access time as compared to main memory.
It stores the program that can be executed within a short period of time.
It stores data for temporary use.
Disadvantages
Cache memory has limited capacity.
It is very expensive
Secondary memory
In the previous section, we saw that RAM is expensive and has a limited storage capacity. Since it is a volatile memory, it
cannot retain information after the computer is powered off. Thus, in addition to primary memory, an auxiliary or secondary
memory is required by a computer. The secondary memory is also called the storage device of computer. In this section, the
terms secondary memory and storage device are used interchangeably. In comparison to the primary memory, the secondary
memory stores much larger amounts of data and information (for example, an entire software program) for extended periods
of time. The data and instructions stored in secondary memory must be fetched into RAM before processing is done by CPU.
Magnetic tape drives, magnetic disk drives, optical disk drives are the different types of storage devices.
Magnetic tape
Magnetic tape is a plastic tape with magnetic coating (figure below). It is a storage medium on a large open reel or in a smaller
cartridge or cassette (like a music cassette). Magnetic tapes are cheaper storage media. They are durable, can be written,
erased, and re-written. Magnetic tapes are sequential access devices, which means that the tape needs to rewind or move forward to the location where the requested data is positioned. Due to their sequential nature, magnetic tapes are not suitable for data files that need to be revised or updated often. They are generally used to store back-up data that is not frequently used, or to transfer data from one system to another.
Figure: A 10.5-inch reel of 9-track tape
The working of magnetic tape is explained as follows:
Magnetic tape is divided horizontally into tracks (7 or 9) and vertically into frames (figure below). A frame stores one
byte of data, and a track in a frame stores one bit. Data is stored in successive frames as a string, with one byte of data per frame.
Magnetic disk
Magnetic disk is a direct access secondary storage device. It is a thin plastic or metallic circular plate coated with magnetic
oxide and encased in a protective cover. Data is stored on magnetic disks as magnetized spots. The presence of a magnetic
spot represents the bit 1 and its absence represents the bit 0.