
COMPUTER ORGANIZATION AND ARCHITECTURE

UNIT 1: UNDERSTAND PRINCIPLES OF COMPUTER ORGANIZATION AND DESIGN


INTRODUCTION
Computer architecture
It refers to those attributes of a system visible to a programmer or, put another way, those attributes that have a direct impact
on the logical execution of a program. In other words, computer architecture is concerned with the operational attributes of the
computer, or of the processor specifically. It deals with details such as physical memory, the ISA (Instruction Set Architecture)
of the processor, the number of bits used to represent data types, input/output mechanisms, and techniques for addressing memory.

Computer organization
It refers to the operational units and their interconnections that realize the architectural specifications. In other words,
computer organization is the realization of what the computer architecture specifies. It deals with how operational attributes
are linked together to meet the requirements specified by the computer architecture. Some organizational attributes are hardware
details, control signals, and peripherals.

Example: Say you are in a company that manufactures cars. The design and all low-level specifications of the car come under
computer architecture (the abstract, programmer's view), while making its parts piece by piece and connecting the different
components together while keeping the basic design in mind comes under computer organization (the physical, visible realization).

Difference between computer architecture and computer organization


 Organization is often called microarchitecture (low level); architecture is a somewhat higher-level view.
 Organization is transparent to the programmer (e.g. a programmer does not worry much about how addition is implemented in
hardware); architecture is the programmer's view (i.e. the programmer has to be aware of which instruction set is used).
 Organization concerns physical components (circuit design, adders, signals, peripherals); architecture concerns logical
attributes (instruction set, addressing modes, data types, cache optimization).
 Organization answers "how to do it" (the implementation of the architecture); architecture answers "what to do" (the
instruction set).

Structure and function


A computer is a complex system; contemporary computers contain millions of elementary electronic components. The key is
to recognize the hierarchical nature of most complex systems, including the computer. A hierarchical system is a set of
interrelated subsystems, each of the latter, in turn, hierarchical in structure until we reach some lowest level of elementary
subsystem. The hierarchical nature of complex systems is essential to both their design and their description. The designer
need only deal with a particular level of the system at a time. At each level, the system consists of a set of components and
their interrelationships. The behavior at each level depends only on a simplified, abstracted characterization of the system at
the next lower level. At each level, the designer is concerned with structure and function:
 Structure: The way in which the components are interrelated
 Function: The operation of each individual component as part of the structure

Function
Both the structure and functioning of a computer are, in essence, simple. In general terms, there are only four basic functions:
 Data processing
 Data storage
 Data movement
 Control
The computer must be able to process data. The data may take a wide variety of forms, and the range of processing
requirements is broad. However, we shall see that there are only a few fundamental methods or types of data processing.

It is also essential that a computer store data. Even if the computer is processing data on the fly (i.e., data come in and get
processed, and the results go out immediately), the computer must temporarily store at least those pieces of data that are being
worked on at any given moment. Thus, there is at least a short-term data storage function. Equally important, the computer
performs a long-term data storage function. Files of data are stored on the computer for subsequent retrieval and update.

The computer must be able to move data between itself and the outside world. The computer’s operating environment consists
of devices that serve as either sources or destinations of data. When data are received from or delivered to a device that is
directly connected to the computer, the process is known as input–output (I/O), and the device is referred to as a peripheral.
When data are moved over longer distances, to or from a remote device, the process is known as data communications.

Finally, there must be control of these three functions. Ultimately, this control is exercised by the individual(s) who provides
the computer with instructions. Within the computer, a control unit manages the computer’s resources and orchestrates the
performance of its functional parts in response to those instructions.

Structure
The figure below is the simplest possible depiction of a computer. The computer interacts in some fashion with its external
environment. In general, all of its linkages to the external environment can be classified as peripheral devices or
communication lines. We will have something to say about both types of linkages.

The computer

But of greater concern is the internal structure of the computer itself. There are four main structural components:
 Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions;
often simply referred to as processor.
 Main memory: Stores data.
 I/O: Moves data between the computer and its external environment.
 System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O. A
common example of system interconnection is by means of a system bus, consisting of a number of conducting wires
to which all the other components attach.
There may be one or more of each of the aforementioned components. Traditionally, there has been just a single processor. In
recent years, there has been increasing use of multiple processors in a single computer. However, for our purposes, the most
interesting and in some ways the most complex component is the CPU. Its major structural components are as follows:
 Control unit: Controls the operation of the CPU and hence the computer
 Arithmetic and logic unit (ALU): Performs the computer’s data processing functions
 Registers: Provides storage internal to the CPU
 CPU interconnection: Some mechanism that provides for communication among the control unit, ALU, and registers

GENERATIONS OF A COMPUTER
A generation, in computer terminology, refers to a change in the technology with which computers are built. Initially, the term
was used to distinguish between varying hardware technologies, but nowadays a generation includes both hardware and software,
which together make up an entire computer system. Five computer generations are known to date. Each generation is discussed
below along with its time period and characteristics; the dates given for each generation are approximate but generally
accepted. Following are the main five generations of computers.

1. First generation
The period of the first generation was 1946-1959. First-generation computers used vacuum tubes as the basic components for
memory and for the circuitry of the CPU (Central Processing Unit). These tubes, like electric bulbs, produced a lot of heat and
were prone to frequent burn-outs, so the installations were very expensive and could be afforded only by very large
organizations. In this generation, mainly batch processing operating systems were used. Punched cards, paper tape, and
magnetic tape were used as input and output devices. The computers of this generation used machine code as the programming
language. The main features of the first generation are:
 Vacuum tube technology
 Unreliable
 Supported machine language only
 Very costly
 Generated a lot of heat
 Slow input and output devices
 Huge size
 Needed air conditioning (A.C.)
 Non-portable
 Consumed a lot of electricity
Some computers of this generation were:
 ENIAC
 EDVAC
 UNIVAC
 IBM-701
 IBM-650

2. Second generation
The period of the second generation was 1959-1965. In this generation transistors were used; they were cheaper, consumed less
power, and were more compact, more reliable, and faster than the first-generation machines built from vacuum tubes. Magnetic
cores were used as primary memory, with magnetic tape and magnetic disks as secondary storage devices. Assembly language and
high-level programming languages like FORTRAN and COBOL were used. The computers used batch processing and multiprogramming
operating systems. The main features of the second generation are:
 Use of transistors
 Reliable in comparison to first generation computers
 Smaller size as compared to first generation computers
 Generated less heat as compared to first generation computers
 Consumed less electricity as compared to first generation computers
 Faster than first generation computers
 Still very costly
 A.C. needed
 Supported machine and assembly languages
Some computers of this generation were:
 IBM 1620
 IBM 7094
 CDC 1604
 CDC 3600
 UNIVAC 1108

3. Third generation
The period of the third generation was 1965-1971. Third-generation computers used integrated circuits (ICs) in place of
transistors. A single IC contains many transistors, resistors, and capacitors along with the associated circuitry. The IC was
invented by Jack Kilby. This development made computers smaller, more reliable, and more efficient. In this generation remote
processing, time-sharing, and multiprogramming operating systems were used. High-level languages (FORTRAN II to IV, COBOL,
PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used during this generation. The main features of the third generation are:
 IC used
 More reliable in comparison to previous two generations
 Smaller size
 Generated less heat
 Faster
 Lesser maintenance
 Still costly
 A.C. needed
 Consumed lesser electricity
 Supported high-level language
Some computers of this generation were:
 IBM-360 series
 Honeywell-6000 series
 PDP (Programmed Data Processor)
 IBM-370/168
 TDC-316

4. Fourth generation
The period of the fourth generation was 1971-1980. Fourth-generation computers used Very Large Scale Integration (VLSI)
circuits. VLSI circuits, having about 5000 transistors and other circuit elements with their associated circuits on a single
chip, made it possible to build the microcomputers of the fourth generation. Fourth-generation computers became more powerful,
compact, reliable, and affordable. As a result, they gave rise to the personal computer (PC) revolution. In this generation
time-sharing, real-time, network, and distributed operating systems were used. High-level languages like C, C++, DBASE, etc.
were used in this generation. The main features of the fourth generation are:
 VLSI technology used
 Very cheap
 Portable and reliable
 Use of PC's
 Very small size
 Pipeline processing
 No A.C. needed
 Concept of internet was introduced
 Great developments in the fields of networks
 Computers became easily available
Some computers of this generation were:
 DEC 10
 STAR 1000
 PDP 11
 CRAY-1(Super Computer)
 CRAY-X-MP(Super Computer)

5. Fifth generation
The period of the fifth generation is 1980 to date. In the fifth generation, VLSI technology became ULSI (Ultra Large Scale
Integration) technology, resulting in the production of microprocessor chips with ten million electronic components. This
generation is based on parallel processing hardware and AI (Artificial Intelligence) software. AI is a branch of computer
science which explores means and methods of making computers think like human beings. High-level languages like C, C++, Java,
.NET, etc. are used in this generation. AI includes:
 Robotics
 Neural Networks
 Game Playing
 Development of expert systems to make decisions in real life situations.
 Natural language understanding and generation.
The main features of fifth generation are:
 ULSI technology
 Development of true artificial intelligence
 Development of Natural language processing
 Advancement in Parallel Processing
 Advancement in Superconductor technology
 More user-friendly interfaces with multimedia features
 Availability of very powerful and compact computers at cheaper rates
Some computer types of this generation are:
 Desktop
 Laptop
 NoteBook
 UltraBook
 ChromeBook

COMPUTER TYPES
A. Classification based on operating principles
Based on the operating principles, computers can be classified into one of the following types:
1. Digital computers: Operate essentially by counting. All quantities are expressed as discrete digits or numbers. Digital
computers are useful for evaluating arithmetic expressions and manipulating data (such as preparation of bills, ledgers,
solution of simultaneous equations, etc.).
2. Analog computers: An analog computer is a form of computer that uses the continuously changeable aspects of physical
phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved. In contrast, digital
computers represent varying quantities symbolically, as their numerical values change.
3. Hybrid computers: Are computers that exhibit features of analog computers and digital computers. The digital
component normally serves as the controller and provides logical operations, while the analog component normally serves
as a solver of differential equations.

B. Classification of digital computers based on size and capability


Based on size and capability, computers are broadly classified into
a) Micro computers (personal computers): A microcomputer is the smallest general-purpose processing system. Early PCs used
8-bit processors running at a few MHz, while current PCs use 64-bit processors running at several GHz. Examples: IBM PCs,
APPLE computers. Microcomputers can be classified into 2 types:
 Desktops
 Portables
The difference is that portables can be used while travelling, whereas desktop computers cannot be carried around. The
different portable computers are:
1. Laptop: This computer is similar to a desktop computer but smaller in size. Laptops are more expensive than desktops. The
weight of a laptop is around 3 to 5 kg.
2. Notebook: These computers are as powerful as desktops, but are comparatively smaller than laptops and desktops. They
weigh 2 to 3 kg. They are more costly than laptops.
3. Palmtop (hand-held): Also called Personal Digital Assistants (PDAs). These computers are small enough to be held in the
hand. They are capable of word processing, spreadsheets, handwriting recognition, game playing, faxing, and paging, but
are not as powerful as desktop computers. Example: 3Com PalmV.
4. Wearable computer: The size of this computer is so small that it can be worn on the body. It has limited processing
power and is used in the field of medicine; for example, a pacemaker to correct the heartbeat, or an insulin meter to
measure the level of insulin in the blood.
5. Workstation: Has a large, high-resolution graphics screen and built-in network support. Workstations are used for
engineering applications (CAD/CAM), software development, and desktop publishing, typically running e.g. Unix or Windows NT.
b) Minicomputer: A minicomputer is a medium-sized computer that is more powerful than a microcomputer. These computers are
usually designed to serve multiple users simultaneously (parallel processing). They are more expensive than microcomputers.
Examples: Digital Alpha, Sun Ultra.
c) Mainframe (enterprise) computers: Computers with large storage capacities and very high processing speeds (compared to
mini- or microcomputers) are known as mainframe computers. They support a large number of terminals for simultaneous use
by many users, as in ATM transactions. They are also used as central host computers in distributed data processing
systems. Examples: IBM 370, S/390.
d) Supercomputer: Supercomputers have extremely large storage capacity and computing speeds many times faster than those of
other computers. Supercomputer performance is measured in tens of millions of instructions per second (MIPS); an operation
is made up of numerous instructions. Supercomputers are mainly used for large-scale numerical problems in scientific and
engineering disciplines, such as weather analysis. Example: IBM Deep Blue.

C. Classification based on number of microprocessors


Based on the number of microprocessors, computers can be classified into:
a) Sequential computers: Any task is completed with a single processor only. Most of the computers we see today are
sequential computers, in which any task is completed sequentially, instruction after instruction, from beginning to end.
b) Parallel computers: Relatively fast, newer types of computers that use a large number of processors. The processors
perform different tasks independently and simultaneously, dramatically improving the speed of execution of complex
programs. Parallel computers can approach the speed of supercomputers at a fraction of the cost.

D. Classification based on word-length


A binary digit is called a "bit". A word is a group of bits whose size is fixed for a given computer. The number of bits in a
word (the word length) determines how characters and other data are represented. Word length lies in the range from 16 bits
to 64 bits for most computers of today.
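As a quick illustration of the passage above, an n-bit word can hold 2^n distinct bit patterns, so the unsigned range grows rapidly with word length. A minimal sketch, not tied to any particular machine:

```python
# The number of distinct values an n-bit word can hold is 2**n, so the
# largest unsigned value is 2**n - 1. Illustrative only; a signed
# interpretation halves the positive range.
for bits in (16, 32, 64):
    print(f"{bits}-bit word: 0 .. {2**bits - 1}")
```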

E. Classification based on number of users


Based on number of users, computers are classified into:
a) Single user: Only one user can use the resource at any time.
b) Multi user: A single computer shared by a number of users at any time.
c) Network: A number of interconnected autonomous computers shared by a number of users at any time.

COMPUTER TYPES
A computer can be defined as a fast electronic calculating machine that accepts digitized input information (data), processes
it as per a list of internally stored instructions, and produces the resulting information. The list of instructions is called
a program, and the internal storage is called computer memory. The different types of computers are:
1. Personal computers: This is the most common type, found in homes, schools, business offices, etc. It is a desktop computer
with processing and storage units along with various input and output devices.
2. Notebook computers: These are compact and portable versions of the PC.
3. Workstations: These have high-resolution graphics input/output capability, with roughly the same dimensions as a desktop
computer. They are used in engineering applications involving interactive design work.
4. Enterprise systems: These are used for business data processing in medium to large corporations that require much more
computing power and storage capacity than workstations. The Internet, together with its servers, has become a dominant
worldwide source of all types of information.
5. Supercomputers: These are used for the large-scale numerical calculations required in applications like weather
forecasting.

Basic terminology
 Input: Whatever is put into a computer system.
 Data: Refers to the symbols that represent facts, objects, or ideas.
 Information: The results of the computer storing data as bits and bytes; the words, numbers, sounds, and graphics.
 Output: Consists of the processing results produced by a computer.
 Processing: Manipulation of the data in many ways.
 Memory: Area of the computer that temporarily holds data waiting to be processed, stored, or output.
 Storage: Area of the computer that holds data on a permanent basis when it is not immediately needed for processing.
 Assembly language program (ALP): Programs are written using mnemonics
 Mnemonic: An instruction written in an English-like form
 Assembler: Software that converts an ALP to MLL (Machine Level Language)
 High Level Language (HLL): Programs are written using English like statements
 Compiler: Convert HLL to MLL, does this job by reading source program at once
 Interpreter: Converts HLL to MLL, does this job statement by statement
 System software: Program routines which aid the user in the execution of programs eg: Assemblers, Compilers
 Operating system: Collection of routines responsible for controlling and coordinating all the activities in a computer
system

A computer has two kinds of components:


 Hardware, consisting of its physical devices (CPU, memory, bus, storage devices, ...)
 Software, consisting of the programs it has (Operating system, applications, utilities, ...)

Functional unit
A computer consists of five functionally independent main parts: input, memory, arithmetic logic unit (ALU), output, and
control units. The input device accepts coded information, such as a source program in a high-level language. This is either
stored in the memory or immediately used by the processor to perform the desired operations. The program stored in the memory
determines the processing steps. Basically, the computer converts a source program into an object program, i.e. into machine
language. Finally, the results are sent to the outside world through an output device. All of these actions are coordinated
by the control unit.

Basic functional units of a computer

Block diagram of a computer

1. Input unit: The source program / high-level language program / coded information / simply data is fed to a computer through
input devices; the keyboard is the most common type. Whenever a key is pressed, the corresponding letter or number is
translated into its equivalent binary code, sent over a cable, and fed either to memory or to the processor. Joysticks,
trackballs, mice, scanners, etc. are other input devices.
2. Memory unit: Its function is to store programs and data. It is basically of two types: primary memory and secondary
memory.
 Word: In computer architecture, a word is a unit of data of a defined bit length that can be addressed and moved
between storage and the computer processor. Usually, the defined bit length of a word is equivalent to the width of the
computer's data bus so that a word can be moved in a single operation from storage to a processor register. For any
computer architecture with an eight-bit byte, the word will be some multiple of eight bits. In IBM's evolutionary
System/360 architecture, a word is 32 bits, or four contiguous eight-bit bytes. In Intel's PC processor architecture, a
word is 16 bits, or two contiguous eight-bit bytes. A word can contain a computer instruction, a storage address, or
application data that is to be manipulated (for example, added to the data in another word space). The number of bits
in each word is known as word length. Word length refers to the number of bits processed by the CPU in one go. With
modern general-purpose computers, word size can be 16 bits to 64 bits. The time required to access one word is called
the memory access time. The small, fast, RAM units are called caches. They are tightly coupled with the processor
and are often contained on the same IC chip to achieve high performance.
 Primary memory: Is the memory exclusively associated with the processor, operating at electronic speeds. Programs must be
stored in this memory while they are being executed. The memory contains a large number of semiconductor storage cells,
each capable of storing one bit of information. These cells are processed in groups of fixed size called words. To provide
easy access to a word in memory, a distinct address is associated with each word location. Addresses are numbers that
identify memory locations. The number of bits in each word is called the word length of the computer. Programs must reside
in the memory during execution. Instructions and data can be written into the memory or read out under the control of the
processor. Memory in which any location can be reached in a short and fixed amount of time after specifying its address is
called random access memory (RAM). The time required to access one word is called the memory access time. Memory which is
only readable by the user, and whose contents cannot be altered, is called read only memory (ROM); it typically contains
parts of the operating system.
 Caches are small, fast RAM units which are coupled with the processor and are often contained on the same IC chip to
achieve high performance. Although primary storage is essential, it tends to be expensive.
 Secondary memory: Is used where large amounts of data and programs have to be stored, particularly information that is
accessed infrequently. Examples: magnetic disks and tapes, optical disks (i.e. CD-ROMs), floppies, etc.
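The word-addressed organization of primary memory described above can be sketched as follows; the word length and memory size here are arbitrary assumptions, not properties of any real machine:

```python
# Model primary memory as word-addressable storage: each address holds
# one fixed-length word. WORD_LENGTH and the number of locations are
# assumptions for illustration.
WORD_LENGTH = 16
memory = [0] * 8                    # 8 word locations, addresses 0..7

def write(address, word):
    # Values are truncated to the word length, as on real hardware.
    memory[address] = word & ((1 << WORD_LENGTH) - 1)

def read(address):
    # Any location is reachable in the same fixed time (random access).
    return memory[address]

write(3, 70000)                     # 70000 does not fit in 16 bits...
print(read(3))                      # ...so it wraps to 70000 - 65536 = 4464
```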

Types of memory
3. Arithmetic logic unit (ALU): Most computer operations, like addition, subtraction, multiplication, and division, are
executed in the ALU of the processor. The operands are brought into the ALU from memory and stored in high-speed storage
elements called registers. Then, according to the instructions, the operations are performed in the required sequence. The
control unit and the ALU are many times faster than other devices connected to a computer system. This enables a single
processor to control a number of external devices such as keyboards, displays, magnetic and optical disks, sensors, and
other mechanical controllers.
4. Output unit: These are the counterparts of the input units. Their basic function is to send processed results to the
outside world. Examples: printers, speakers, monitors, etc.
5. Control unit: It effectively is the nerve center that sends signals to other units and senses their states. The actual timing
signals that govern the transfer of data between input unit, processor, memory and output unit are generated by the control
unit.

Basic operational concepts

To perform a given task, an appropriate program consisting of a list of instructions is stored in the memory. Individual
instructions are brought from the memory into the processor, which executes the specified operations. Data to be used as
operands are also stored in the memory.

Example:
ADD LOCA, R0
This instruction adds the operand at memory location LOCA to the operand in register R0 and places the sum into register R0.
This instruction requires the performance of several steps:
1. First, the instruction is fetched from the memory into the processor.
2. The operand at LOCA is fetched and added to the contents of R0.
3. Finally, the resulting sum is stored in register R0.
The preceding add instruction combines a memory access operation with an ALU operation. In some other types of computers,
these two operations are performed by separate instructions for performance reasons:
LOAD LOCA, R1
ADD R1, R0
Transfers between the memory and the processor are started by sending the address of the memory location to be accessed to
the memory unit and issuing the appropriate control signals. The data are then transferred to or from the memory.

Connections between processor and memory

The figure above shows how memory and the processor can be connected. In addition to the ALU and the control circuitry,
the processor contains a number of registers used for several different purposes.

Register
It is a special, high-speed storage area within the CPU. All data must be represented in a register before it can be processed.
For example, if two numbers are to be multiplied, both numbers must be in registers, and the result is also placed in a register.
(The register can contain the address of a memory location where data is stored rather than the actual data itself.)
The number of registers that a CPU has and the size of each (number of bits) help determine the power and speed of a CPU.
For example, a 32-bit CPU is one in which each register is 32 bits wide. Therefore, each CPU instruction can manipulate 32
bits of data. In high-level languages, the compiler is responsible for translating high-level operations into low-level operations
that access registers.
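As a rough sketch of the register discipline just described, the fragment below performs a multiply entirely through registers; the register names and values are invented for illustration:

```python
# Toy register file; the names R2/R3/R4 are invented, not a real ISA.
registers = {"R2": 6, "R3": 7, "R4": 0}

# MUL R2, R3, R4 : both source operands must already be in registers,
# and the product is placed back in a register, as described above.
registers["R4"] = registers["R2"] * registers["R3"]

print(registers["R4"])  # 42
```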

Instruction format

Computer instructions are the basic components of a machine language program. They are also known as macro-operations,
since each one is comprised of sequences of micro-operations. Each instruction initiates a sequence of micro-operations that
fetch operands from registers or memory, possibly perform arithmetic, logic, or shift operations, and store results in registers
or memory.
Instructions are encoded as binary instruction codes. Each instruction code consists of an operation code, or opcode, which
designates the overall purpose of the instruction (e.g. ADD, SUBTRACT, MOVE, INPUT, etc.). The number of bits allocated
for the opcode determines how many different instructions the architecture supports. In addition to the opcode, many
instructions also contain one or more operands, which indicate where in registers or memory the data required for the operation
is located. For example, an ADD instruction requires two operands, and a NOT instruction requires one.

 15    12 11      6 5       0
+--------+----------+---------+
| Opcode | Operand  | Operand |
+--------+----------+---------+

The opcode and operands are most often encoded as unsigned binary numbers in order to minimize the number of bits used
to store them. For example, a 4-bit opcode encoded as a binary number could represent up to 16 different operations. The
control unit is responsible for decoding the opcode and operand bits in the instruction register, and then generating the control
signals necessary to drive all other hardware in the CPU to perform the sequence of micro-operations that comprise the
instruction.
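A 16-bit layout like the one sketched above (4-bit opcode in bits 15-12, 6-bit operands in bits 11-6 and 5-0) can be encoded and decoded with simple shifts and masks. The opcode numbering below (ADD = 1) is an arbitrary assumption for illustration:

```python
# Pack/unpack a 16-bit instruction word: 4-bit opcode (bits 15-12)
# and two 6-bit operands (bits 11-6 and 5-0). Field widths follow the
# diagram above; the opcode value ADD = 1 is an invented assumption.
def encode(opcode, op1, op2):
    assert 0 <= opcode < 16 and 0 <= op1 < 64 and 0 <= op2 < 64
    return (opcode << 12) | (op1 << 6) | op2

def decode(word):
    return (word >> 12) & 0xF, (word >> 6) & 0x3F, word & 0x3F

instr = encode(1, 3, 42)       # e.g. ADD with operands 3 and 42
print(decode(instr))           # (1, 3, 42)
```

Note that a 4-bit opcode allows up to 16 distinct operations, matching the statement above.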

Instruction cycle

Instruction cycle without interrupt

Instruction cycle with interrupt

The instruction register (IR): Holds the instruction that is currently being executed. Its output is available to the
control circuits, which generate the timing signals that control the various processing elements involved in executing the
instruction.
The program counter (PC): This is another specialized register that keeps track of the execution of a program. It contains the
memory address of the next instruction to be fetched and executed. Besides the IR and PC, there are n general-purpose registers,
R0 through Rn-1.
The other two registers which facilitate communication with memory are:
1. MAR – (Memory Address Register): It holds the address of the location to be accessed.
2. MDR – (Memory Data Register): It contains the data to be written into or read out of the address location.

Operating steps are


1. Programs reside in memory and usually reach it through the input unit.
2. Execution of the program starts when the PC is set to point at the first instruction of the program.
3. The contents of the PC are transferred to the MAR and a Read control signal is sent to the memory.
4. After the memory access time elapses, the addressed word is read out of the memory and loaded into the MDR.
5. The contents of the MDR are transferred to the IR, and the instruction is ready to be decoded and executed.
6. If the instruction involves an operation by the ALU, the required operands must be obtained.
7. An operand in memory is fetched by sending its address to the MAR and initiating a read cycle.
8. When the operand has been read from memory into the MDR, it is transferred from the MDR to the ALU.
9. After one or more such cycles, the ALU can perform the desired operation.
10. If the result of this operation is to be stored in memory, the result is sent to the MDR.
11. The address of the location where the result is to be stored is sent to the MAR and a write cycle is initiated.
12. The contents of the PC are incremented so that the PC points to the next instruction to be executed.
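The steps above can be sketched as a toy fetch-decode-execute loop. The instruction format, opcodes, and memory contents below are invented for illustration; a real processor works on binary instruction words rather than Python tuples:

```python
# Toy fetch-decode-execute loop mirroring the operating steps above.
# Addresses 0-3 hold instructions, 10-12 hold data (all made up).
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", 0),
          10: 7, 11: 5, 12: 0}
pc, acc = 0, 0                      # program counter, accumulator

while True:
    mar = pc                        # step 3: PC -> MAR, read request
    mdr = memory[mar]               # step 4: word arrives in MDR
    ir = mdr                        # step 5: MDR -> IR
    pc += 1                         # step 12: PC incremented
    op, addr = ir
    if op == "LOAD":
        acc = memory[addr]          # steps 7-8: operand fetched via MAR/MDR
    elif op == "ADD":
        acc += memory[addr]         # step 9: ALU performs the operation
    elif op == "STORE":
        memory[addr] = acc          # steps 10-11: result written back
    elif op == "HALT":
        break

print(memory[12])  # 12
```

Note how the PC is incremented during the fetch, so by the time an instruction executes, the PC already points at its successor.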

Normal execution of a program may be preempted (temporarily interrupted) if some device requires urgent servicing; to request
this, the device raises an interrupt signal. An interrupt is a request signal from an I/O device for service by the processor. The
processor provides the requested service by executing an appropriate interrupt service routine. Because this diversion may
change the internal state of the processor, that state must be saved in memory before the interrupt is serviced. When the
interrupt service routine is completed, the state of the processor is restored so that the interrupted program may continue.

The Von Neumann architecture


The task of entering and altering programs for the ENIAC was extremely tedious. The programming process would be far easier if
the program could be represented in a form suitable for storing in memory alongside the data. Then a computer could get its
instructions by reading them from memory, and a program could be set or altered by setting the values of a portion of memory.
This idea is known as the stored-program concept. The first publication of the idea was in a 1945 proposal by Von Neumann
for a new computer, the EDVAC (Electronic Discrete Variable Computer). In 1946, von Neumann and his colleagues began
the design of a new stored-program computer, referred to as the IAS computer, at the Princeton Institute for Advanced Studies.
The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers.
General structure of a Von Neumann architecture
It consists of:
 A main memory, which stores both data and instruction
 An arithmetic and logic unit (ALU) capable of operating on binary data
 A control unit, which interprets the instructions in memory and causes them to be executed
 Input and output (I/O) equipment operated by the control unit

Bus structures
A bus is basically a subsystem that transfers data between the components of a computer, either within one computer or
between two computers; it can connect several peripheral devices at the same time.
In a single bus structure, all units are connected to one shared bus; it is very simple, but only one data transfer can take
place at a time. A multiple bus structure uses several interconnected buses, providing parallel paths between units.
 In a single bus structure all units are connected to the same bus, whereas a multiple bus structure connects them through several buses.
 A multiple bus structure's performance is better than a single bus structure's.
 A single bus structure is cheaper than a multiple bus structure.
A group of lines that serves as a connecting path for several devices is called a bus (one bit per line). Individual units
must communicate over a communication line or path to exchange data, address and control information, as shown in the diagram
below; for example, a processor sending output to a printer. A common approach is to use buffer registers to hold the contents
during the transfer.

Single bus structure

Buffer registers hold the data during the data transfer temporarily. E.g. printing
Types of buses
1. Data bus
The data bus is the most common type of bus. It is used to transfer data between different components of the computer. The
number of lines in the data bus affects the speed of data transfer between components. The data bus consists of 8, 16, 32, or 64
lines; a 64-line data bus can transfer 64 bits of data at one time. The data bus lines are bi-directional, which means that the
CPU can both read data from memory and write data to memory locations over these lines.

2. Address bus
Many components are connected to one another through buses. Each component is assigned a unique ID, called
the address of that component. If a component wants to communicate with another component, it uses the address bus to specify
the address of that component. The address bus is a unidirectional bus: it can carry information in only one direction. It carries
the address of a memory location from the microprocessor to the main memory.
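A consequence worth making explicit: the width of the address bus determines how many locations can be addressed, since n address lines can encode 2^n distinct addresses. A short sketch:

```python
# An n-line address bus can select 2**n distinct memory locations.
def addressable_locations(address_lines):
    return 2 ** address_lines

print(addressable_locations(16))  # 65536 (64 KB of byte-addressable memory)
print(addressable_locations(32))  # 4294967296 (4 GB)
```

This is why, for example, a processor with a 32-line address bus can address at most 4 GB of byte-addressable memory.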

3. Control bus
The control bus is used to transmit commands and control signals from one component to another. For example, when the
CPU wants to read data from main memory, it issues the read command over the control bus. The control bus is also used to
transmit signals such as ACK (acknowledgement) signals. A control signal carries the following:
1. Timing information: It specifies the time for which a device can use the data and address bus.
2. Command signal: It specifies the type of operation to be performed.
Suppose that the CPU gives a command to the main memory to write data. The memory sends an acknowledgement signal to the CPU
after writing the data successfully; the CPU receives the signal and then moves on to perform some other action.

Software
If a user wants to enter and run an application program, he/she needs system software. System software is a collection of
programs that are executed as needed to perform functions such as:
 Receiving and interpreting user commands
 Entering and editing application programs and storing them as files on secondary storage devices
 Running standard application programs such as word processors, spreadsheets, games, etc.
The operating system is the key system software component; it helps the user exploit the underlying hardware through
programs.

Types of software
A layer structure showing where operating system is located on generally used software systems on desktops

System software
System software helps run the computer hardware and computer system. It includes a combination of the following:
 device drivers
 operating systems
 servers
 utilities
 windowing systems
 compilers
 debuggers
 interpreters
 linkers
The purpose of systems software is to unburden the applications programmer from the often-complex details of the particular
computer being used, including such accessories as communications devices, printers, device readers, displays and keyboards,
and also to partition the computer's resources such as memory and processor time in a safe and stable manner. Examples are-
Windows XP, Linux and Mac.

Application software
Application software allows end users to accomplish one or more specific (not directly computer development related) tasks.
Typical applications include:
 Business software
 Computer games
 Quantum chemistry and solid-state physics software
 Telecommunications (i.e., the internet and everything that flows on it)
 Databases
 Educational software
 Medical software
 Military software
 Molecular modelling software
 Image editing
 Spreadsheet
 Simulation software
 Word processing
 Decision making software
Application software exists for and has impacted a wide variety of topics.

Performance
The most important measure of the performance of a computer is how quickly it can execute programs. The speed with which
a computer executes programs is affected by the design of its hardware. For best performance, it is necessary to design the
compiler, the machine instruction set, and the hardware in a coordinated way. The total time required to execute a program,
called the elapsed time, is a measure of the performance of the entire computer system; it is affected by the speed of the
processor, the disk and the printer. The portion of that time spent in the processor is called the processor time. Just as the
elapsed time for the execution of a program depends on all units in a computer system, the processor time depends on the
hardware involved in the execution of individual machine instructions. This hardware comprises the processor and the memory,
which are usually connected by a bus as shown in the figure below.

The processor caches

Let us examine the flow of program instructions and data between the memory and the processor. At the start of execution,
all program instructions and the required data are stored in the main memory. As execution proceeds, instructions are
fetched one by one over the bus into the processor, and a copy is placed in the cache. Later, if the same instruction or data item
is needed a second time, it is read directly from the cache. The processor and a relatively small cache memory can be fabricated
on a single IC chip. The internal speed of performing the basic steps of instruction processing on such a chip is very high and is
considerably faster than the speed at which instructions and data can be fetched from the main memory. A program will be
executed faster if the movement of instructions and data between the main memory and the processor is minimized, which is
achieved by using the cache. For example, suppose a number of instructions are executed repeatedly over a short period of
time, as happens in a program loop. If these instructions are available in the cache, they can be fetched quickly during the
period of repeated use. The same applies to data that are used repeatedly.

Processor clock
Processor circuits are controlled by a timing signal called the clock. The clock defines regular time intervals called clock
cycles. To execute a machine instruction, the processor divides the action to be performed into a sequence of basic steps such
that each step can be completed in one clock cycle. The length P of one clock cycle is an important parameter that affects
processor performance. Processors used in today's personal computers and workstations have clock rates that range from a few
hundred million to over a billion cycles per second.

Basic performance equation


We now focus our attention on the processor time component of the total elapsed time. Let T be the processor time required
to execute a program that has been prepared in some high-level language. The compiler generates a machine language object
program that corresponds to the source program. Assume that complete execution of the program requires the execution of N
machine language instructions. The number N is the actual number of instruction executions, and is not necessarily equal
to the number of machine instructions in the object program: some instructions may be executed more than once, as is the case
for instructions inside a program loop, while others may not be executed at all, depending on the input data used. Suppose
that the average number of basic steps needed to execute one machine instruction is S, where each basic step is
completed in one clock cycle. If the clock rate is R cycles per second, the program execution time is given by

T = (N × S) / R

This is often referred to as the basic performance equation. We must emphasize that N, S and R are not independent parameters;
changing one may affect another. Introducing a new feature in the design of a processor will lead to improved performance
only if the overall result is to reduce the value of T.
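As a quick sketch, the basic performance equation can be evaluated directly; the instruction count, step count, and clock rate below are made-up example values:

```python
# Basic performance equation: T = (N * S) / R
def execution_time(n_instructions, steps_per_instruction, clock_rate_hz):
    return n_instructions * steps_per_instruction / clock_rate_hz

# e.g. 10 million instructions, 4 cycles each, on a 2 GHz processor
T = execution_time(10_000_000, 4, 2_000_000_000)
print(T)  # 0.02 (seconds)
```

Note the interdependence the text warns about: halving S (say, by pipelining) only helps if the change does not also increase N or force a lower R.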

Pipelining and superscalar operation


We have so far assumed that instructions are executed one after the other; hence the value of S is the total number of basic
steps, or clock cycles, required to execute one instruction. A substantial improvement in performance can be achieved by
overlapping the execution of successive instructions, using a technique called pipelining.

Consider ADD R1, R2, R3
This adds the contents of R1 and R2 and places the sum into R3. The contents of R1 and R2 are first transferred to the inputs of
the ALU. After the addition operation is performed, the sum is transferred to R3. The processor can read the next instruction
from memory while the addition operation is being performed. If that instruction also uses the ALU, its operands can be
transferred to the ALU inputs at the same time that the result of the add instruction is being transferred to R3. In the ideal
case, if all instructions are overlapped to the maximum degree possible, execution proceeds at the rate of one instruction
completed in each clock cycle.
Individual instructions still require several clock cycles to complete, but for the purpose of computing T, the effective value
of S is 1. A higher degree of concurrency can be achieved if multiple instruction pipelines are implemented in the processor.
This means that multiple functional units are used, creating parallel paths through which different instructions can be executed
in parallel. With such an arrangement, it becomes possible to start the execution of several instructions in every clock cycle.
This mode of operation is called superscalar execution. If it can be sustained for a long time during program execution, the
effective value of S can be reduced to less than one. However, parallel execution must preserve the logical correctness of
programs; that is, the results produced must be the same as those produced by serial execution of the program instructions.
Nowadays many processors are designed in this manner.
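The effect of pipelining on the effective value of S can be illustrated with a simple cycle count. Assuming an ideal k-stage pipeline with no stalls (a simplification; real pipelines stall on hazards), n instructions finish in k + (n - 1) cycles instead of n × k:

```python
# Cycles to run n instructions on a k-stage pipeline vs. sequentially.
def sequential_cycles(n, k):
    return n * k                    # every instruction takes all k steps: S = k

def pipelined_cycles(n, k):
    return k + (n - 1)              # after the pipeline fills, one completes per cycle

n, k = 1000, 5
print(sequential_cycles(n, k))      # 5000
print(pipelined_cycles(n, k))       # 1004
print(pipelined_cycles(n, k) / n)   # effective S, approaching 1 for large n
```

As n grows, the k - 1 fill cycles are amortized away, which is why the text can treat the effective S as 1 in the ideal case.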

Clock rate
There are two possibilities for increasing the clock rate R:
1. Improving the IC technology makes logic circuits faster, which reduces the time needed to execute basic steps. This allows
the clock period P to be reduced and the clock rate R to be increased.
2. Reducing the amount of processing done in one basic step also makes it possible to reduce the clock period P. However,
if the actions that have to be performed by instructions remain the same, the number of basic steps needed may increase.

Increases in the value of R that are entirely caused by improvements in IC technology affect all aspects of the processor's
operation equally, with the exception of the time it takes to access the main memory. In the presence of a cache, the percentage
of accesses to the main memory is small; hence much of the performance gain expected from the use of faster technology
can be realized.

Instruction set CISC and RISC


Simple instructions require a small number of basic steps to execute, while complex instructions involve a larger number of
steps. For a processor that has only simple instructions, a large number of instructions may be needed to perform a given
programming task. This could lead to a large value of N and a small value of S. On the other hand, if individual instructions
perform more complex operations, fewer instructions will be needed, leading to a lower value of N and a larger value of S. It
is not obvious which choice is better.
Complex instructions combined with pipelining might seem to achieve the best performance; however, it is much easier to
implement efficient pipelining in processors with simple instruction sets. RISC and CISC are the two instruction set design
philosophies used in computers. An instruction set, or instruction set architecture (ISA), is the part of the computer's
structure that provides the commands used to direct the processing and manipulation of data. An instruction set consists of
instructions, addressing modes, native data types, registers, interrupt and exception handling, and memory architecture. An
instruction set can be emulated in software by using an interpreter or built into the hardware of the processor. The instruction
set architecture can be considered as a boundary between the software and hardware. Microcontrollers and microprocessors can
be classified on the basis of the RISC and CISC instruction set architectures.

Comparison between RISC and CISC


 Acronym: RISC stands for 'Reduced Instruction Set Computer'; CISC stands for 'Complex Instruction Set Computer'.
 Definition: RISC processors have a smaller set of instructions with few addressing modes; CISC processors have a larger set of instructions with many addressing modes.
 Memory unit: RISC has no special memory unit and uses separate hardware to implement instructions; CISC has a memory unit to implement complex instructions.
 Program: RISC has a hard-wired programming unit; CISC has a micro-programming unit.
 Compiler design: RISC requires a complex compiler design; CISC allows an easy compiler design.
 Calculations: RISC calculations are faster and precise; CISC calculations are slower.
 Decoding: Decoding of instructions is simple in RISC; decoding of instructions is complex in CISC.
 Execution time: Execution time is very low in RISC; execution time is very high in CISC.
 External memory: RISC does not require external memory for calculations; CISC requires external memory for calculations.
 Pipelining: Pipelining functions correctly in RISC; pipelining does not function well in CISC.
 Stalling: Stalling is mostly reduced in RISC processors; CISC processors often stall.
 Code expansion: Code expansion can be a problem in RISC; it is not a problem in CISC.
 Disc space: RISC saves disc space; CISC wastes it.
 Applications: RISC is used in high-end applications such as video processing, telecommunications and image processing; CISC is used in low-end applications such as security systems, home automation, etc.

Performance measurements
The performance measure is the time taken by the computer to execute a given benchmark. Initially, some attempts were
made to create artificial programs that could be used as benchmark programs, but synthetic programs do not properly predict
the performance obtained when real application programs are run. A non-profit organization called SPEC (System
Performance Evaluation Corporation) selects and publishes benchmarks. The programs selected range from game playing,
compilers, and database applications to numerically intensive programs in astrophysics and quantum chemistry. In each case,
the program is compiled for the computer under test, and the running time on a real computer is measured. The same program
is also compiled and run on a computer selected as the reference.
The SPEC rating is computed as follows:
SPEC rating = Running time on the reference computer / Running time on the computer under test
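The SPEC rating formula, and the geometric mean that SPEC conventionally uses to combine individual ratings into an overall score, can be sketched as follows (the running times below are invented):

```python
import math

# SPEC rating per the formula above: reference time / test time.
def spec_rating(ref_time, test_time):
    return ref_time / test_time

# Hypothetical (reference, test) running times for three benchmarks.
times = [(500, 250), (800, 200), (300, 300)]
ratings = [spec_rating(ref, test) for ref, test in times]

# Overall rating: geometric mean of the individual ratings.
overall = math.prod(ratings) ** (1 / len(ratings))
print(ratings)            # [2.0, 4.0, 1.0]
print(round(overall, 2))  # 2.0
```

A rating greater than 1 means the machine under test ran the benchmark faster than the reference machine; the geometric mean keeps one outlier benchmark from dominating the overall score.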

Multiprocessors and multicomputer


Multiprocessor computer
 Execute a number of different application tasks in parallel
 Execute subtasks of a single large task in parallel
 All processors have access to all the memory – shared-memory multiprocessor
 Cost – processors, memory units, complex interconnection networks
Multicomputers
 Each computer only has access to its own memory
 Exchange message via a communication network – message-passing multicomputers

Multicomputers vs. multiprocessors:
 A multicomputer is a computer made up of several computers, each of which can have multiple processors; a multiprocessor is a single computer with more than one CPU on its motherboard.
 Distributed computing deals with hardware and software systems containing more than one processing element and multiple programs; multiprocessing is the use of two or more central processing units (CPUs) within a single computer system.
 A multicomputer can run faster, while a multiprocessor's speed depends on the speed of all its processors.
 Both are used for true parallel processing.
 In a multicomputer, processors cannot share memory; in a multiprocessor, processors can share memory.
 Multicomputers are called message-passing multicomputers; multiprocessors are called shared-memory multiprocessors.
 A multicomputer costs more; a multiprocessor costs less.

MEMORY ORGANIZATION
Introduction
The computer’s memory stores data, instructions required during the processing of data, and output results. Storage may be
required momentarily, for a limited period of time, or for an extended period of time.
its own unique features, are available for use in a computer. The cache memory, registers, and RAM are fast memories and
store the data and instructions temporarily during the processing of data and instructions. The secondary memory like
magnetic disks and optical disks has large storage capacities and store the data and instructions permanently, but are slow
memory devices. The memories are organized in the computer in a manner to achieve high levels of performance at the
minimum cost. In this section, we discuss different types of memories, their characteristics and their use in the computer.

Memory representation
The computer memory stores different kinds of data like input data, output data, intermediate results, etc., and the instructions.
Binary digit or bit is the basic unit of memory. A bit is a single binary digit, i.e., 0 or 1, and is the smallest unit of
representation of data in a computer. However, data is handled by the computer as combinations of bits. A group of 8 bits
forms a byte. One byte is the smallest unit of data that is handled by the computer.
One byte (8 bits) can store 2^8 = 256 different combinations of bits, and thus can be used to represent 256 different symbols. In
a byte, the different combinations of bits fall in the range 00000000 to 11111111. A group of bytes can be further combined
to form a word. A word can be a group of 2, 4 or 8 bytes.
1 bit = 0 or 1
1 Byte (B) = 8 bits
1 Kilobyte (KB) = 2^10 bytes = 1024 bytes
1 Megabyte (MB) = 2^20 bytes = 1024 KB
1 Gigabyte (GB) = 2^30 bytes = 1024 MB = 1024 * 1024 KB
1 Terabyte (TB) = 2^40 bytes = 1024 GB = 1024 * 1024 * 1024 KB
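These power-of-two unit sizes can be checked with a few lines of Python:

```python
# Binary storage units as powers of two, matching the table above.
KB = 2 ** 10          # 1024 bytes
MB = 2 ** 20
GB = 2 ** 30
TB = 2 ** 40

print(MB // KB)       # 1024 (1 MB = 1024 KB)
print(TB // KB)       # 1073741824 (= 1024 * 1024 * 1024 KB)
```

Note that each unit is exactly 1024 times the previous one, not 1000; the decimal prefixes used by disk manufacturers differ slightly for this reason.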

Characteristics of memories
1. Volatility
 Volatile -RAM
 Non-volatile - ROM, Flash memory
2. Mutability
 Read/Write- RAM, HDD, SSD, RAM, Cache, Registers
 Read Only - Optical ROM (CD/DVD…), Semiconductor ROM
3. Accessibility
 Random Access - RAM, Cache
 Direct Access - HDD, Optical Disks
 Sequential Access - Magnetic Tapes

Memory hierarchy
The memory is characterized on the basis of the following key factors:
 Capacity is the amount of information (in bits) that a memory can store.

 Access time is the time interval between the read/ write request and the availability of data. The lesser the access time,
the faster is the speed of memory.
 Performance: Earlier, when computer systems were designed without a memory hierarchy, the speed gap between the
CPU registers and main memory kept increasing because of the large difference in access times, resulting in lower
system performance. The memory hierarchy design was introduced as an enhancement, and with it the performance of the
system increases. One of the most significant ways to increase system performance is minimizing how far down the memory
hierarchy one has to go to manipulate data.
 Cost per bit: As we move from bottom to top in the Hierarchy, the cost per bit increases i.e. Internal memory is costlier
than External Memory.
Ideally, we want the memory with fastest speed and largest capacity. However, the cost of fast memory is very high. The
computer uses a hierarchy of memory that is organized in a manner to enable the fastest speed and largest capacity of memory.
The hierarchy of the different memory types is shown in the figure below.

The internal memory and external memory are the two broad categories of memory used in the computer.
 The internal memory consists of the CPU registers, cache memory and primary memory. The internal memory is used
by the CPU to perform the computing tasks.
 The external memory is also called the secondary memory. The secondary memory is used to store the large amount of
data and the software.

In general, referring to the computer memory usually means the internal memory.

Memory hierarchy
Internal memory
The key features of internal memory are:
1. Limited storage capacity.
2. Temporary storage.
3. Fast access.
4. High cost.

Registers, cache memory, and primary memory constitute the internal memory. The primary memory is further of two kinds:
RAM and ROM. Registers are the fastest and the most expensive among all the memory types. The registers are located
inside the CPU and are directly accessible by it. The access time of registers is between 1-2 ns (nanoseconds), and their
combined size is about 200 B. Cache memory is next in the hierarchy and is placed between the CPU and the main
memory. The access time of cache is between 2-10 ns, and cache size varies between 32 KB and 4 MB. Any program or data
that is to be executed must be brought into RAM from secondary memory. Primary memory is slower than the cache
memory; the access time of RAM is around 60 ns, and RAM size varies from 512 KB to 64 GB.

Secondary memory
The key features of secondary memory storage devices are:
1. Very high storage capacity
2. Permanent storage (non-volatile), unless erased by user.
3. Relatively slower access.
4. Stores data and instructions that are not currently being used by CPU but may be required later for processing.
5. Cheapest among all memory.
To get the fastest speed of memory with largest capacity and least cost, the fast memory is located close to the processor. The
secondary memory, which is not as fast, is used to store information permanently, and is placed farthest from the processor.

With respect to CPU, the memory is organized as follows:


 Registers are placed inside the CPU (small capacity, high cost, very high speed)
 Cache memory is placed next in the hierarchy (inside and outside the CPU)
 Primary memory is placed next in the hierarchy
 Secondary memory is the farthest from CPU (large capacity, low cost, low speed)
The speed of a memory depends on the technology used to build it. The registers, cache memory and primary
memory are semiconductor memories; they have no moving parts and are fast. The secondary memory is
magnetic or optical memory, which has moving parts and is slower.

CPU registers
Registers are very high-speed storage areas located inside the CPU. After CPU gets the data and instructions from the cache
or RAM, the data and instructions are moved to the registers for processing. Registers are manipulated directly by the control
unit of CPU during instruction execution. That is why registers are often referred to as the CPU’s working memory. Since
CPU uses registers for the processing of data, the number of registers in a CPU and the size of each register affect the power
and speed of a CPU. The more the number of registers (ten to hundreds) and bigger the size of each register (8 bits to 64 bits),
the better it is.

Cache memory
Cache memory is placed in between the CPU and the RAM. Cache memory is a fast memory, faster than the RAM. When the
CPU needs an instruction or data during processing, it first looks in the cache. If the information is present in the cache, it is
called a cache hit, and the data or instruction is retrieved from the cache. If the information is not present in cache, then it is
called a cache miss and the information is then retrieved from RAM.
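The hit/miss behaviour described above can be sketched with a toy cache model. The addresses, sizes, and "RAM" contents below are invented, and real caches use fixed-size lines and replacement policies not modelled here:

```python
# Toy cache lookup illustrating hits and misses.
cache = {}                            # address -> data copied from "RAM"
ram = {addr: addr * 10 for addr in range(16)}
hits = misses = 0

for addr in [1, 2, 1, 3, 2, 1]:       # repeated addresses hit after first use
    if addr in cache:
        hits += 1                     # cache hit: served from the fast cache
        data = cache[addr]
    else:
        misses += 1                   # cache miss: fetch from RAM, keep a copy
        data = ram[addr]
        cache[addr] = data

print(hits, misses)  # 3 3
```

Even this crude model shows why loops benefit so much from caching: only the first access to each address misses, and every repeat is a hit.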

Type of cache memory


Cache memory improves the speed of the CPU, but it is expensive. Cache memory is divided into different levels: L1, L2
and L3.
 Level 1 (L1) cache or primary cache: L1 is the primary cache memory. The size of the L1 cache is very small
compared to the others, between 2 KB and 64 KB, depending on the processor. It is embedded in the computer
microprocessor (CPU). Instructions required by the CPU are searched for first in the L1 cache. Examples of registers are the
accumulator, address register, program counter, etc.
 Level 2 (L2) cache or secondary cache: L2 is the secondary cache memory. The L2 cache is more capacious
than L1, between 256 KB and 512 KB, and it is located on the computer microprocessor. If an instruction is not found in the
L1 cache, the microprocessor searches for it in the L2 cache. A high-speed system bus interconnects the cache to the
microprocessor.
 Level 3 (L3) cache: The L3 cache is larger in size but slower in speed than L1 and L2; its size is
between 1 MB and 8 MB. In multicore processors, each core may have separate L1 and L2 caches, but all cores share a common
L3 cache. The L3 cache is about double the speed of the RAM.

The advantages and disadvantages of cache memory are as follows
Advantages
 Cache memory is faster than main memory.
 It consumes less access time as compared to main memory.
 It stores the program that can be executed within a short period of time.
 It stores data for temporary use.

Disadvantages
 Cache memory has limited capacity.
 It is very expensive.

Primary memory (main memory)


Primary memory is the main memory of the computer. It is a chip mounted on the motherboard of the computer. Primary memory
is categorized into two main types: random access memory (RAM) and read only memory (ROM). RAM is used for the
temporary storage of input data, output data and intermediate results. The input data entered into the computer using the
input device is stored in RAM for processing. After processing, the output data is stored in RAM before being sent to the
output device. Any intermediate results generated during the processing of a program are also stored in RAM. Unlike RAM,
data once stored in ROM either cannot be changed or can only be changed using some special operations. Therefore,
ROM is used to store data that does not require change.

Types of primary memory


1. RAM (Random Access Memory)
The word “RAM” stands for “random access memory”; it may also be referred to as short-term memory. It is called “random”
because data can be read from or written to any physical location at any time. It is temporary storage: RAM is volatile
and retains data only as long as the computer is powered. It is the fastest type of memory. RAM stores the data currently
being processed by the CPU and sends it on, for example to the graphics unit. There are generally two broad subcategories of RAM:
 Static RAM (SRAM): Static RAM is a form of RAM made with flip-flops; used for primary storage, it is volatile and
retains data in its latches as long as the computer is powered. SRAM is more expensive and consumes more power than DRAM,
and it is used as cache memory in a computer system. Technically, SRAM uses more transistors than DRAM: it uses 6
transistors per data bit, compared to DRAM, which uses one transistor per bit, and it is faster than DRAM due to this
latching arrangement.
 Dynamic Random Access Memory (DRAM): DRAM is the form of RAM used as main memory. It retains information
in capacitors only for a short period (a few milliseconds), even while the computer is powered, so the data must be refreshed
periodically to maintain it. DRAM is cheaper and can store much more information, but it is slower; it also consumes less
power than SRAM.

2. ROM (Read Only Memory)


ROM is the long-term internal memory. ROM is “Non-Volatile Memory” that retains data without the flow of electricity.
ROM is an essential chip with permanently written data or programs. It is similar to the RAM that is accessed by the CPU.
ROM comes with pre-written by the computer manufacturer to hold the instructions for booting-up the computer. There is
generally three broad types of ROM:
 PROM (Programmable Read Only Memory): PROM stands for programmable ROM. It can be programmed only once but
read many times. Like ROM, PROM is non-volatile and retains its contents without the flow of electricity. The significant
difference between a ROM and a PROM is that a ROM comes pre-written by the computer manufacturer, whereas a PROM is
manufactured as blank memory. A PROM is programmed with a device called a PROM burner, which permanently blows
internal fuses.
 EPROM (Erasable Programmable Read Only Memory): EPROM is pronounced “EE-prom”. This memory type retains its
contents until it is exposed to intense ultraviolet light, which clears the contents and makes it possible to reprogram the
memory.
 EEPROM (Electrically Erasable Programmable Read Only Memory): EEPROM can be programmed and erased
electrically, typically in about a millisecond. A single byte of data or the entire contents of the device can be erased. Unlike
EPROM, an EEPROM can be erased and rewritten in place, without removing the chip or exposing it to ultraviolet light.
 EAROM (Electrically Alterable Read Only Memory): EAROM is a type of EEPROM that can be modified one bit at a time.
Writing is a very slow process and requires a higher voltage (usually around 12 V) than is used for read access. EAROMs
are intended for applications that require infrequent and only partial rewriting. EAROM may be used as non-volatile
storage for critical system setup information; in many applications, EAROM has been supplanted by CMOS RAM
supplied by mains power and backed up with a lithium battery.
 Flash memory (or simply flash) is a modern type of EEPROM, invented in 1984. Flash memory can be erased and
rewritten faster than ordinary EEPROM, and newer designs feature very high endurance (exceeding 1,000,000 erase cycles).
Modern NAND flash makes efficient use of silicon chip area, resulting in individual ICs with capacities as high as 32 GB
as of 2007; this density, along with its endurance and physical durability, has allowed NAND flash to replace magnetic storage
in some applications (such as USB flash drives). Flash memory is sometimes called flash ROM or flash EEPROM when
used as a replacement for older ROM types, but not in applications that take advantage of its ability to be modified quickly
and frequently.
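As a toy illustration of the byte-erasable behaviour described above, the sketch below models a device whose erased cells read as 0xFF (a common convention for EEPROM and flash). The class and its method names are made up for illustration, not taken from any real device interface:

```python
class ToyEEPROM:
    """Toy model of an EEPROM: a single byte or the whole device can be erased."""
    ERASED = 0xFF                      # erased cells conventionally read as 0xFF

    def __init__(self, size):
        self.cells = [self.ERASED] * size

    def write(self, addr, value):
        self.cells[addr] = value & 0xFF

    def erase_byte(self, addr):
        # Electrically erase just one byte; the rest of the device is untouched.
        self.cells[addr] = self.ERASED

    def erase_all(self):
        # Or clear the entire contents of the device at once.
        self.cells = [self.ERASED] * len(self.cells)

mem = ToyEEPROM(8)
mem.write(3, 0x42)
mem.erase_byte(3)                      # only byte 3 is affected
print(mem.cells)
```

Contrast this with EPROM, where ultraviolet exposure behaves like `erase_all()` only: there is no way to clear a single byte.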
Secondary memory
In the previous section, we saw that RAM is expensive, has a limited storage capacity, and, being volatile, cannot retain
information after the computer is powered off. Thus, in addition to primary memory, a computer requires an auxiliary or
secondary memory. Secondary memory is also called the storage device of the computer; in this section, the terms secondary
memory and storage device are used interchangeably. In comparison to primary memory, secondary memory stores much
larger amounts of data and information (for example, an entire software program) for extended periods of time. The data and
instructions stored in secondary memory must be fetched into RAM before the CPU can process them.
Magnetic tape drives, magnetic disk drives, and optical disk drives are the main types of storage devices.
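The fetch step described above is visible in everyday code: reading a file copies its bytes from the storage device into RAM before the CPU can work on them. A minimal Python sketch (the file name and contents are arbitrary examples):

```python
import os
import tempfile

# Write some data to secondary storage (a temporary file on disk)...
path = os.path.join(tempfile.mkdtemp(), "example.bin")
with open(path, "wb") as f:
    f.write(b"stored on disk")

# ...then fetch it into RAM (a bytes object) before the CPU processes it.
with open(path, "rb") as f:
    data = f.read()

print(len(data), "bytes copied into RAM")
```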

Magnetic tape
Magnetic tape is a plastic tape with a magnetic coating (figure below). It is a storage medium supplied on a large open reel or
in a smaller cartridge or cassette (like a music cassette). Magnetic tapes are a cheap storage medium. They are durable and can
be written, erased, and re-written. Magnetic tapes are sequential access devices, which means the tape must rewind or move
forward to the location where the requested data is positioned. Due to this sequential nature, magnetic tapes are not suitable
for data files that need to be revised or updated often. They are generally used to store back-up data that is not frequently
used, or to transfer data from one system to another.

A 10.5-inch reel of 9-track tape
The working of magnetic tape is explained as follows:
 Magnetic tape is divided horizontally into tracks (7 or 9) and vertically into frames (figure below). A frame stores one
byte of data, and each track within a frame stores one bit. Data is stored in successive frames as a string, one byte per
frame.

A portion of magnetic tape


 Data is recorded on tape in blocks, where a block consists of a group of records. Each block is read as a continuous unit.
Between two blocks there is an Inter-Record Gap (IRG), which provides time for the tape to stop and start between
records (figure below).

Blocking of data in a magnetic tape


 Magnetic tape is mounted on a magnetic tape drive for access. The basic tape drive mechanism consists of a supply reel,
a take-up reel, and a read/write head assembly. The tape moves on the drive from the supply reel to the take-up reel, with
its magnetically coated side passing over the read/write head.
 Tapes are categorized based on their width - ¼ inch, ½ inch, etc.
 The storage capacity of a tape varies greatly. A 10.5-inch diameter reel of tape that is 2,400 feet long can store up to
180 million characters.
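The 180-million-character figure can be reproduced with simple arithmetic if we assume a recording density of 6,250 bytes per inch (a density used on later 9-track drives; the original does not state one). The sketch also shows how inter-record gaps eat into usable capacity, using an assumed 0.6-inch gap:

```python
def raw_capacity(length_feet, density_bytes_per_inch):
    """Capacity of a tape with no inter-record gaps."""
    return length_feet * 12 * density_bytes_per_inch

def utilization(block_bytes, density_bytes_per_inch, irg_inches):
    """Fraction of tape length that holds data, given the gap between blocks."""
    block_inches = block_bytes / density_bytes_per_inch
    return block_inches / (block_inches + irg_inches)

print(raw_capacity(2400, 6250))   # 2400 ft x 12 in/ft x 6250 B/in = 180,000,000
for block in (80, 800, 8000):
    print(block, f"{utilization(block, 6250, 0.6):.1%}")
```

Larger blocks waste less tape on gaps, which is exactly why records are grouped into blocks in the first place.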

The features of magnetic tape are:


 Inexpensive storage device
 Can store a large amount of data
 Easy to carry or transport
 Not suitable for random access data
 Slow access device
 Needs dust prevention, as dust can harm the tape
 Suitable for back-up storage or archiving

Magnetic disk

Magnetic disk is a direct-access secondary storage device. It is a thin plastic or metallic circular plate coated with magnetic
oxide and encased in a protective cover. Data is stored on magnetic disks as magnetized spots: the presence of a magnetic
spot represents the bit 1 and its absence represents the bit 0.
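This spot encoding can be visualized with a short sketch that renders a byte as a row of spots, using `*` for a magnetized spot (bit 1) and `.` for its absence (bit 0); the helper name is made up for illustration:

```python
def byte_to_spots(value):
    """Render a byte as magnetized spots, most significant bit first."""
    return ''.join('*' if (value >> i) & 1 else '.' for i in range(7, -1, -1))

print(byte_to_spots(ord('A')))   # 'A' = 0b01000001 -> .*.....*
```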
