Coa Module 4

The document discusses input-output organization in computers. It describes peripheral devices, input-output interfaces, and asynchronous data transfer methods. Asynchronous data transfer between independent units such as the CPU and I/O interface uses control signals; common methods are strobe control and handshaking.

MODULE – IV

Input-Output Organization: Input-Output Interface, Asynchronous Data Transfer, Modes of Transfer, Priority Interrupt, Direct Memory Access. Memory Organization: Memory Hierarchy, Main Memory, Auxiliary Memory, Associative Memory, Cache Memory.

Input-Output Organization

Input/output Subsystem

The I/O subsystem of a computer provides an efficient mode of communication between the
central system and the outside environment. It handles all the input-output operations of the
computer system.

Peripheral Devices

Input or output devices that are connected to the computer are called peripheral devices. These devices are designed to read information into or out of the memory unit upon command from the CPU and are considered part of the computer system. These devices are also called peripherals.

For example: Keyboards, display units and printers are common peripheral devices.

There are three types of peripherals:

1. Input peripherals: Allow user input from the outside world to the computer. Examples: keyboard, mouse.

2. Output peripherals: Allow information output from the computer to the outside world. Examples: printer, monitor.

3. Input-output peripherals: Allow both input (from the outside world to the computer) and output (from the computer to the outside world). Example: touch screen.

Interfaces

An interface is a shared boundary between two separate components of the computer system, used to attach two or more components to the system for communication purposes.
There are two types of interface:

1. CPU interface

2. I/O interface

Input Output Interface

When we type something from our keyboard, the input data is transferred to the computer's
CPU, and the screen displays the output data. But how does our computer's CPU or processors
share information with the external Input-Output devices? Well, we can achieve this with the
input-output Interface.

The input-output interface allows information to be transferred between external input/output devices (i.e., peripherals) and the processor. A peripheral device provides input/output for the computer and is also known as an input-output device.

This input-output interface exists as a dedicated hardware component between the system bus and the peripherals. This component is known as the "interface unit".

The below figure shows a typical input-output interface between the processor and different
peripherals:
The I/O buses include control lines, address lines, and data lines. In any general computer, the
printer, keyboard, magnetic disk, and display terminal are commonly connected. Every
peripheral unit has an interface associated with it. Every interface decodes the address and
control received from the I/O bus.

The interface interprets the address and control information received from the processor and generates the signals required by the peripheral controller. It also carries out the transfer of data between the peripheral and the processor and buffers the data flow.

The I/O buses are linked to all the peripheral interfaces from the computer processor. The
processor locates a device address on the address line for interaction with a specific device.
Every interface contains an address decoder that monitors the address lines, attached to the I/O
bus.

When an interface recognizes its address, it activates the path between the bus and the device it controls. Interfaces whose address does not match the address on the bus disable their peripherals.

Functions of Input-Output Interface

The primary functions of the input-output Interface are listed below:

● It synchronizes the operating speed of the CPU with that of the peripherals.

● It decodes the device address and selects the appropriate peripheral.

● It provides timing and control signals.

● It buffers data between the data bus and the device.

● It detects transmission errors.

● It converts serial data to parallel and vice versa.

● It converts digital data to analog signals and vice versa.

Asynchronous data transfer


The internal operations in an individual unit of a digital system are synchronized by a clock pulse: the clock is supplied to all registers within the unit, and all data transfers among internal registers occur simultaneously on the clock pulse. Now suppose two units of a digital system, such as the CPU and an I/O interface, are designed independently.

If the registers in the I/O interface share a common clock with CPU registers, then transfer between
the two units is said to be synchronous. But in most cases, the internal timing in each unit is
independent of each other, so each uses its private clock for its internal registers. In this case, the
two units are said to be asynchronous to each other, and if data transfer occurs between them, this
data transfer is called Asynchronous Data Transfer.

Asynchronous data transfer between two independent units requires that control signals be transmitted between the communicating units to indicate when data is being sent. Two methods can achieve this asynchronous way of data transfer:

o Strobe control: A strobe pulse is supplied by one unit to indicate to the other unit when the transfer has to occur.
o Handshaking: Each data item being transferred is accompanied by a control signal that indicates the presence of valid data on the bus. The unit receiving the data item responds with another signal to acknowledge receipt.

The strobe-pulse and handshaking methods of asynchronous data transfer are not restricted to I/O transfers; they are used extensively wherever data must be transferred between two independent units. Here we consider the transmitting unit as the source and the receiving unit as the destination.

For example, the CPU is the source during output or write transfer and the destination unit during
input or read transfer.

Therefore, the control sequence during an asynchronous transfer depends on whether the transfer
is initiated by the source or by the destination.
So, while discussing each data transfer method asynchronously, you can see the control sequence
in both terms when it is initiated by source or by destination. In this way, each data transfer method
can be further divided into parts, source initiated and destination initiated.

Asynchronous Data Transfer Methods


Asynchronous data transfer between two independent units requires that control signals be transmitted between the communicating units to indicate when data is being sent. Two methods achieve this asynchronous way of data transfer.

1. Strobe Control Method


The strobe control method of asynchronous data transfer employs a single control line to time each transfer. This control line is known as a strobe, and it may be activated by either the source or the destination, depending on which initiates the transfer.

A. Source-initiated strobe: In the block diagram below, the strobe is initiated by the source; as the timing diagram shows, the source unit first places the data on the data bus.

B. Destination-initiated strobe: In the block diagram below, the strobe is initiated by the destination; in the timing diagram, the destination unit first activates the strobe pulse, informing the source to provide the data.

The source unit responds by placing the requested binary information on the data bus. The data must be valid and remain on the bus long enough for the destination unit to accept it. The falling edge of the strobe pulse can again be used to trigger a destination register. The destination unit then disables the strobe, and finally the source removes the data from the bus after a predetermined time interval.

In this case, the strobe may be a memory-read control signal from the CPU to a memory unit. The CPU initiates the read operation to inform the memory, which acts as the source, to place the selected word on the data bus.
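The strobe sequence above can be sketched as a small simulation. This is an illustrative model only (not from the text): `bus` and the event names are invented stand-ins for the hardware signals.

```python
# Minimal sketch of a source-initiated strobe transfer as ordered bus events.

def source_initiated_strobe(data):
    """Simulate the event order of a source-initiated strobe transfer."""
    events = []
    bus = {"data": None, "strobe": 0}

    bus["data"] = data                 # 1. source places data on the bus
    events.append("data_valid")
    bus["strobe"] = 1                  # 2. source activates the strobe
    events.append("strobe_high")
    latched = bus["data"]              # 3. destination latches data on the strobe
    events.append("data_latched")
    bus["strobe"] = 0                  # 4. source removes the strobe
    events.append("strobe_low")
    bus["data"] = None                 # 5. source removes data after a delay
    events.append("data_removed")
    return latched, events

word, trace = source_initiated_strobe(0b10110011)
assert word == 0b10110011
```

Note that the destination never confirms receipt; that missing acknowledgment is exactly the weakness the handshaking method addresses.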

2. Handshaking Method

The strobe method has the disadvantage that the source unit that initiates the transfer has no way
of knowing whether the destination has received the data that was placed in the bus. Similarly, a
destination unit that initiates the transfer has no way of knowing whether the source unit has placed
data on the bus.

This problem is solved by the handshaking method, which introduces a second control signal line that provides a reply to the unit that initiates the transfer.

In this method, one control line is in the same direction as the data flow in the bus from the source
to the destination. The source unit uses it to inform the destination unit whether there are valid data
in the bus.

The other control line runs in the opposite direction, from the destination to the source; the destination unit uses it to inform the source whether it can accept data. Here too, the sequence of control depends on which unit initiates the transfer.

o Source initiated handshaking: In the below block diagram, you can see that two
handshaking lines are "data valid", which is generated by the source unit, and "data
accepted", generated by the destination unit.
The timing diagram shows the timing relationship of the exchange of signals between the
two units. The source initiates a transfer by placing data on the bus and enabling its data
valid signal. The destination unit then activates the data accepted signal after it accepts the
data from the bus.
The source unit then disables its valid data signal, which invalidates the data on the bus.
After this, the destination unit disables its data accepted signal, and the system goes into
its initial state. The source unit does not send the next data item until after the destination
unit shows readiness to accept new data by disabling the data accepted signal.
This sequence of events is described in a sequence diagram, which shows the state of the system at any given time.
o Destination initiated handshaking: In the below block diagram, you see that the two
handshaking lines are "data valid", generated by the source unit, and "ready for data"
generated by the destination unit.
Note that the name of signal data accepted generated by the destination unit has been
changed to ready for data to reflect its new meaning.
Because the transfer is initiated by the destination, the source unit does not place data on the data bus until it receives the ready-for-data signal from the destination unit. After that, the handshaking process is the same as in the source-initiated case.
The sequence of events is shown in its sequence diagram, and the timing relationship
between signals is shown in its timing diagram. Therefore, the sequence of events in both
cases would be identical.
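The four-phase, source-initiated handshake can be sketched as a tiny state machine. This is a hedged illustration: the signal names mirror the text ("data valid", "data accepted"), but the function itself is invented for clarity.

```python
# Sketch of the source-initiated two-wire handshake: four signal transitions
# per data item, ending back in the initial state.

def source_initiated_handshake(item):
    trace = []
    bus = item                           # source places data on the bus
    trace.append(("data_valid", 1))      # source raises data valid
    received = bus                       # destination accepts the data
    trace.append(("data_accepted", 1))   # destination raises data accepted
    bus = None                           # source invalidates the bus data
    trace.append(("data_valid", 0))      # source drops data valid
    trace.append(("data_accepted", 0))   # destination drops ack: ready for next
    return received, trace

value, trace = source_initiated_handshake(42)
assert value == 42
assert len(trace) == 4                   # four phases per item
```

The key property shown is that the source cannot send the next item until the destination has dropped its data-accepted signal, so neither side can outrun the other.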

Advantages of Asynchronous Data Transfer


Asynchronous Data Transfer in computer organization has the following advantages, such as:
o It is more flexible: devices can exchange information at their own pace. In addition, each data character is self-contained, so even if one character is corrupted, its predecessors and successors are unaffected.
o It does not require complex processing by the receiving device, and irregularities in the data stream do not cause serious problems since the receiver can keep up with it. This makes asynchronous transfer suitable for applications where character data is generated irregularly.

Disadvantages of Asynchronous Data Transfer


There are also some disadvantages of using asynchronous data for transfer in computer
organization, such as:
o The success of these transmissions depends on the start bits and their recognition. Unfortunately,
this can be easily susceptible to line interference, causing these bits to be corrupted or distorted.
o A large portion of the transmitted data consists of control and identification bits (such as start and stop bits) and thus carries no useful information. This invariably means that more data packets need to be sent.

Modes of Transfer
The binary information received from an external device is usually stored in the memory unit, and information transferred from the CPU to an external device originates in the memory unit. The CPU merely processes the information; the source and destination are always the memory unit. Data transfer between the CPU and I/O devices may be done in different modes. Data transfer to and from the peripherals may be done in any of three possible ways:

1. Programmed I/O.
2. Interrupt- initiated I/O.
3. Direct memory access( DMA).

1. Programmed I/O
Programmed I/O uses I/O instructions written in the computer program; each data item transfer is initiated by an instruction in the program. Usually, the transfer is between a CPU register and the peripheral. This method requires constant monitoring of the peripheral by the CPU.
I/O Commands
To execute an I/O-related instruction, the processor issues an address, specifying the particular I/O
module and external device, and an I/O command. There are four types of I/O commands that an I/O
module may receive when it is addressed by a processor:

● Control: Used to activate a peripheral and tell it what to do. For example, a magnetic-tape unit
may be instructed to rewind or to move forward one record. These commands are tailored to the
particular type of peripheral device.
● Test: Used to test various status conditions associated with an I/O module and its peripherals. The
processor will want to know that the peripheral of interest is powered on and available for use. It
will also want to know if the most recent I/O operation is completed and if any errors occurred.
● Read: Causes the I/O module to obtain an item of data from the peripheral and place it in an
internal buffer. The processor can then obtain the data item by requesting that the I/O module place
it on the data bus.
● Write: Causes the I/O module to take an item of data (byte or word) from the data bus and
subsequently transmit that data item to the peripheral.
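The test-then-read pattern above is what a programmed-I/O loop looks like in practice. The sketch below is illustrative only: `Device` is a made-up stand-in for an I/O module with a status flag, not a real API.

```python
# Hedged sketch of programmed I/O: the CPU busy-waits on a status bit
# (the "test" command) before every transfer (the "read" command).

class Device:
    def __init__(self, data):
        self._data, self._polls = list(data), 0

    def ready(self):
        self._polls += 1
        return self._polls % 3 == 0   # pretend the device is ready every 3rd poll

    def read(self):
        return self._data.pop(0)

def programmed_read(dev, n):
    out = []
    for _ in range(n):
        while not dev.ready():        # CPU does nothing useful while waiting
            pass
        out.append(dev.read())
    return out

assert programmed_read(Device(b"hi"), 2) == [104, 105]
```

The busy-wait loop is the cost the disadvantages below refer to: every cycle spent polling is a cycle not spent on useful work.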

Advantages:

● Programmed I/O is simple to implement.
● It requires very little hardware support.
● The CPU checks status bits periodically.
Disadvantages:

● The processor has to wait for a long time for the I/O module to be ready for either transmission or
reception of data.
● The performance of the entire system is severely degraded.

2. Interrupt-initiated I/O

In the above section, we saw that the CPU is kept busy unnecessarily. We can avoid this situation by using an interrupt-driven method for data transfer. The interrupt facilities and special commands instruct the interface to issue an interrupt request signal as soon as data is available from any device. In the meantime, the CPU can execute other programs while the interface keeps monitoring the I/O device. Whenever the interface determines that the device is ready to transfer data, it issues an interrupt request signal to the CPU. As soon as the CPU detects the external interrupt signal, it stops the program it was executing, branches to a service program to process the I/O transfer, and then returns to the program it was originally running.

Interrupt-driven I/O is an alternative scheme for dealing with I/O. It is a way of controlling input/output activity whereby a peripheral or terminal that needs to make or receive a data transfer sends a signal that causes a program interrupt to be set. At a time appropriate to the priority level of the I/O interrupt, relative to the total interrupt system, the processor enters an interrupt service routine. The function of the routine depends on the system of interrupt levels and priorities implemented in the processor. The interrupt technique requires more complex hardware and software, but it makes far more efficient use of the computer's time and capacity. Figure 2 shows simple interrupt processing.
For input, the device interrupts the CPU when
new data has arrived and is ready to be retrieved
by the system processor. The actual actions to
perform depend on whether the device uses I/O
ports or memory mapping.

For output, the device delivers an interrupt either when it is ready to accept new data or to acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually generate interrupts to tell the system they are done with the buffer.
Here the CPU works on its given tasks continuously. When an input is available, such as when someone types a key on the keyboard, the CPU is interrupted from its work to take care of the input data. The CPU can thus work continuously on a task without checking the input devices, allowing the devices themselves to interrupt it as necessary.

Figure 2: Simple Interrupt Processing
Basic Operations of Interrupt

1. The CPU issues a read command.
2. The I/O module gets data from the peripheral while the CPU does other work.
3. The I/O module interrupts the CPU.
4. The CPU requests the data.
5. The I/O module transfers the data.

Interrupt Processing
1. A device driver initiates an I/O request on behalf of a process.
2. The device driver signals the I/O controller for the proper device, which initiates the requested I/O.
3. The device signals the I/O controller that it is ready to retrieve input, that the output is complete, or that an error has been generated.
4. The CPU receives the interrupt signal on the interrupt-request line and transfers control to the interrupt handler routine.
5. The interrupt handler determines the cause of the interrupt, performs the necessary processing, and executes a "return from interrupt" instruction.
6. The CPU returns to the execution state prior to the interrupt being signaled.
7. The CPU continues processing until the cycle begins again.
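The cycle above can be modeled in a few lines. This is an illustrative sketch (not from the text): `IOModule`, `run_cpu`, and the instruction list are invented names for the roles described in the steps.

```python
# Model of interrupt-driven I/O: the CPU runs its own work, checks an
# interrupt-request flag at the end of each instruction cycle, and services
# the device only when the flag is set.

class IOModule:
    def __init__(self):
        self.irq, self.buffer = False, None

    def complete(self, data):           # device finishes and raises IRQ
        self.buffer, self.irq = data, True

def run_cpu(io, instructions, arrival):
    done, received = [], []
    for i, instr in enumerate(instructions):
        done.append(instr)              # useful work proceeds uninterrupted
        if i == arrival:
            io.complete("X")            # data becomes ready mid-program
        if io.irq:                      # check at end of instruction cycle
            received.append(io.buffer)  # interrupt handler fetches the data
            io.irq = False              # "return from interrupt"
    return done, received

work, data = run_cpu(IOModule(), ["i0", "i1", "i2", "i3"], arrival=1)
assert work == ["i0", "i1", "i2", "i3"] and data == ["X"]
```

Unlike the programmed-I/O loop, no instruction here is wasted polling: all four instructions complete, and the transfer still happens.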

Advantages & Disadvantages of Interrupt-Driven I/O

Advantages:
- Fast
- Efficient

Disadvantages:
- Can be tricky to write if using a low-level language
- Can be tough to get the various pieces to work well together
- Usually done by the hardware manufacturer or OS maker, e.g. Microsoft
Working of CPU in terms of interrupts:

● The CPU issues a read command.
● It starts executing other programs.
● It checks for interrupts at the end of each instruction cycle.
● On an interrupt:
o It processes the interrupt by fetching and storing the data.
● It then resumes the program it was executing.

Priority Interrupt
A priority interrupt is a system that decides the order in which devices that generate interrupt signals at the same time will be serviced by the CPU. The system also decides which conditions are allowed to interrupt the CPU while another interrupt is being serviced. Generally, devices with high-speed transfer, such as magnetic disks, are given high priority, and slow devices, such as keyboards, are given low priority.

When two or more devices interrupt the computer simultaneously, the computer services the device with
the higher priority first.

Types of Interrupts:

Following are some different types of interrupts:

Hardware Interrupts

When the signal to the processor comes from an external device or hardware, the interrupt is known as a hardware interrupt.

For example, when we press a key on the keyboard, the key press generates an interrupt signal for the processor to perform a certain action. Such an interrupt can be of two types:

● Maskable Interrupt
Hardware interrupts that can be delayed when a higher-priority interrupt occurs at the same time.

● Non Maskable Interrupt

Hardware interrupts that cannot be delayed and must be processed by the processor immediately.

Software Interrupts

An interrupt caused internally by the computer system's software is known as a software interrupt. It can also be of two types:

● Normal Interrupt

The interrupts that are caused by software instructions are called normal software interrupts.

● Exception

Unplanned interrupts which are produced during the execution of some program are
called exceptions, such as division by zero.

Daisy Chaining Priority

This way of deciding interrupt priority connects all interrupt-generating devices in series. The device with the highest priority is placed first, followed by lower-priority devices, and the device with the lowest priority is placed last in the chain.

In a daisy-chaining system all the devices are connected serially, and the interrupt request line is common to all of them. If any device asserts its interrupt signal (low level), the interrupt line goes low and enables the interrupt input of the CPU; when no device is interrupting, the line stays high. The CPU responds to an interrupt by enabling the interrupt acknowledge line. This signal is received by device 1 at its PI (priority in) input, and it passes to the next device through the PO (priority out) output only if device 1 is not requesting an interrupt.

The following figure shows the block diagram for daisy chaining priority system.
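The PI/PO propagation rule can be captured in a few lines. This is a minimal sketch of the logic, assuming devices are listed highest-priority first; the function name is illustrative.

```python
# Daisy-chain priority resolution: the acknowledge enters device 1's PI input
# and propagates through each device's PO output only when that device is
# not requesting service. Position in the chain encodes priority.

def daisy_chain_grant(requests):
    """requests[i] is True if device i (highest priority first) wants service.
    Returns the index of the device that captures the acknowledge, or None."""
    pi = True                     # CPU asserts interrupt acknowledge into device 1
    for i, requesting in enumerate(requests):
        if pi and requesting:
            return i              # this device blocks its PO and takes the grant
        pi = pi and not requesting  # PO = PI only if not requesting
    return None

assert daisy_chain_grant([False, True, True]) == 1   # nearest requester wins
assert daisy_chain_grant([False, False, False]) is None
```

Because the grant is blocked by the first requesting device it reaches, a simultaneously requesting device further down the chain simply never sees the acknowledge.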
3. Direct Memory Access (DMA)
DMA (Direct Memory Access) is a peripheral found in most modern processors and microcontrollers. It allows memory read and write operations to be performed without stealing CPU cycles. Say you want to copy 1000 bytes from one memory location to another. Without DMA, the CPU has to read each byte and then write it to the new location, which takes 2 cycles per byte (assuming an 8-bit processor), or 2000 cycles in total; that is the CPU load for just 1 KB of memory. With DMA, the same task takes roughly 2-10 cycles of CPU time (just configuring the DMA controller and then leaving it alone); once the controller completes the transfer, it fires an interrupt to let the CPU know the transaction is complete.
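The arithmetic in the example above is worth making explicit. A short check, with the 2-cycles-per-byte figure taken from the text (exact DMA setup cost is device-specific):

```python
# Checking the copy-cost arithmetic: a CPU copy costs read + write per byte,
# while DMA costs only a small fixed setup before the hardware takes over.

def cpu_copy_cycles(n_bytes, cycles_per_byte=2):
    return n_bytes * cycles_per_byte

assert cpu_copy_cycles(1000) == 2000        # 1000 bytes -> 2000 CPU cycles
assert cpu_copy_cycles(1000) >= 200 * 10    # at least 200x a ~10-cycle DMA setup
```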
Data transfer between fast storage media, such as a memory unit and a magnetic disk, is limited by the speed of the CPU. It is therefore best to let the peripherals communicate directly with memory over the memory buses, removing the CPU from the path. This technique is known as Direct Memory Access (DMA). During DMA, the CPU is idle and has no control over the memory buses; the DMA controller takes over the buses and directly manages the data transfer between the memory unit and the I/O devices.
CPU Bus Signal for DMA transfer

Bus Request - The DMA controller uses the bus request line to ask the CPU to relinquish control of the buses.

Bus Grant - The CPU activates the bus grant line to inform the DMA controller that it may take control of the buses. Once control is taken, data can be transferred in several ways.

Types of DMA transfer using DMA controller:

● Burst Transfer: The DMA controller returns bus control only after the complete block has been transferred. A register serves as a byte count; it decrements on every byte transferred, and when it reaches zero the DMA controller releases the bus. When the DMA controller operates in burst mode, the CPU is halted for the duration of the transfer.
● Cycle Stealing: An alternative method in which the DMA controller transfers one word at a time and then returns control of the buses to the CPU. The CPU's operation is delayed by only one memory cycle, allowing the transfer to "steal" one memory cycle at a time.
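The difference between the two modes is how often the bus changes hands. The sketch below is illustrative only: the event names are invented, and it models words rather than bytes for brevity.

```python
# Burst mode holds the bus until the count register reaches zero;
# cycle stealing releases the bus to the CPU after every word.

def dma_transfer(n_words, burst):
    events, count = ["bus_granted"], n_words
    while count:
        events.append("word")
        count -= 1                       # count register decrements per transfer
        if not burst and count:          # cycle stealing: give the bus back
            events.extend(["bus_released", "bus_granted"])
    events.append("bus_released")
    return events

assert dma_transfer(3, burst=True).count("bus_granted") == 1   # CPU halted once
assert dma_transfer(3, burst=False).count("bus_granted") == 3  # one steal per word
```

Burst mode minimizes total transfer time; cycle stealing bounds how long the CPU is ever locked out, which is why it suits interactive systems.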

Advantages

● Data transfer is faster because the CPU is not involved.
● It improves overall system performance and reduces CPU workload.
● It handles large data transfers well, such as multimedia and files.

Disadvantages

● It requires costly and complex hardware.
● The CPU has limited control over the data transfer process.
● There is a risk of data conflicts between the CPU and the DMA controller.

Memory Organization:

Memory organization is an important aspect of computer organization and architecture (COA). It refers to the way the computer's memory is arranged and managed.

The memory of a system can be thought of as a large number of addressable storage locations. Each location stores a fixed amount of data, typically measured in bits or bytes. The way these locations are organized and accessed has a notable impact on the overall performance and functionality of the system.

Memory organization involves the use of different types of memory, including RAM, ROM, cache
memory, virtual memory, flash memory, and magnetic disks. Each type of memory is used for a
specific purpose and has its own advantages and disadvantages.

Memory organization is also important for the efficient management of data and instructions
within a computer system. It involves techniques such as memory allocation, virtual memory
management, and cache management. These techniques help to optimize the use of memory and
improve system performance. Overall, memory organization is a critical aspect of computer architecture that plays an important role in determining the performance, functionality, and efficiency of a computer system.

Types of Memory Organization:


There are several types of memory organization used in computer systems, each with its own
advantages and disadvantages. Here are the most common types of memory organization:

I. Von Neumann architecture: This type of memory organization is named after the computer scientist John von Neumann, who first proposed the concept. In this architecture, both instructions and data are stored in the same memory. This organization is simple and easy to implement, but it can lead to bottlenecks when the system tries to access instructions and data at the same time.

II. Harvard architecture: In this type of memory organization, program instructions and data are
stored in separate memory spaces. This allows for parallel access to instructions and data, which
can lead to faster performance. However, the Harvard architecture is more complex to implement
and may require more hardware resources.

III. Cache memory organization: Small, fast memory that stores frequently used data and
instructions. Cache memory is organized into levels, with each level providing increasing storage
capacity and decreasing speed. Cache memory organization is critical for improving system
performance, as it reduces the time it takes to access frequently used data and instructions.
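Why cache levels pay off can be shown with a toy direct-mapped cache. This is a hedged sketch: the 4-line size and memory contents are invented for illustration, and real caches work on multi-byte lines, not single words.

```python
# A tiny direct-mapped cache: hits are served from fast storage; only a miss
# falls through to the slower main memory and fills the cache line.

LINES = 4                                 # number of cache lines

def access(cache, memory, addr):
    index, tag = addr % LINES, addr // LINES
    line = cache.get(index)
    if line and line[0] == tag:
        return line[1], "hit"
    value = memory[addr]                  # miss: fetch from main memory
    cache[index] = (tag, value)           # fill the line for next time
    return value, "miss"

memory = {a: a * 10 for a in range(16)}
cache = {}
assert access(cache, memory, 5) == (50, "miss")
assert access(cache, memory, 5) == (50, "hit")     # temporal locality pays off
assert access(cache, memory, 13) == (130, "miss")  # 13 % 4 == 5 % 4: evicts addr 5
```

The last line also shows the direct-mapped weakness: two addresses that share an index conflict even when the rest of the cache is empty.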
IV. Virtual memory organization: It is a technique that allows a computer to use more memory
than it physically has. Virtual memory creates a virtual address space that is mapped to the physical
memory. This memory organization is critical for running large applications that require more
memory than is available on the system.

V. Flash memory organization: It is a non-volatile memory that is used in portable devices, such
as USB drives and memory cards. Flash memory organization involves dividing the memory into
blocks and pages, with data stored in individual pages. This allows for efficient read and write operations and makes flash memory ideal for storing data in portable devices.

Requirements of Memory Management System:

Memory distribution: The system must be able to distribute memory to processes as needed.
Additionally, the system must make sure that memory is allocated as effectively as possible in
order to reduce fragmentation.

Memory security: The system must prevent other processes from improperly accessing the
memory allotted to each process. Furthermore, it should make sure that processes are unable to
alter memory that does not belong to them.

Deallocating memory: The system must be able to release memory that is no longer required by
a running process. This involves restoring RAM to the system after it has been freed up and is no
longer needed.

Memory sharing: The system need to permit processes to share memory.

Virtual memory: The system must be capable of offering virtual memory, which enables programs to access more memory than is physically available. This is accomplished by swapping data between RAM and the hard drive.
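The mapping that makes this possible can be sketched as a toy page-table lookup. This is a minimal illustration, not an OS implementation: the 256-byte page size and the table contents are arbitrary choices.

```python
# Virtual memory's core mechanism: split a virtual address into a page number
# and an offset, then map the page to a physical frame via a page table.

PAGE_SIZE = 256

def translate(page_table, vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page not in page_table:
        # in a real OS this traps to the kernel, which swaps the page in
        raise LookupError("page fault: page %d not resident" % page)
    return page_table[page] * PAGE_SIZE + offset

table = {0: 5, 1: 2}                      # virtual page -> physical frame
assert translate(table, 0x0010) == 5 * PAGE_SIZE + 0x10
assert translate(table, 0x0105) == 2 * PAGE_SIZE + 0x05
```

A page missing from the table is precisely a page fault: the data lives on disk until the system swaps it back into a frame and updates the table.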

Avoiding fragmentation: Memory fragmentation can happen when memory is frequently allocated and deallocated; the system should prevent this. Fragmentation can result in wasted memory and slow the system down.
Memory mapping: The system ought to support memory mapping, which permits the mapping
of files to memory. As data can be read and written directly from memory, this may speed up file
I/O processes.

Leak prevention: Memory leaks occur when a process fails to deallocate memory that it no longer requires. The system should be able to identify and prevent memory leaks.

Ways to Organize Memory in a computer system:

Main Memory
The main memory is the central storage unit. It is an essential component of a digital computer
since it stores data and programs.
It is of two types:
● RAM (Random Access Memory)
● ROM (Read Only Memory)

RAM: It is a volatile memory, meaning its contents depend on the power supply. If the power supply fails, is interrupted, or is turned off, all data and information in this memory is erased. RAM is used while starting or booting up the computer.
ROM: It is a type of non-volatile memory. Non-volatile memory keeps data even if the power
source is turned off. The data that is used to run the system is stored in the ROM.

RAM (Random Access Memory): RAM is a type of computer memory that allows data to be read or written in any order. It is a volatile memory that can be accessed randomly and is used to store data and programs temporarily while they are being used.

The memory unit that communicates directly with the CPU, auxiliary memory, and cache memory is called main memory. It is the central storage unit of the computer system: a large, fast memory used to store data during computer operations. Main memory is made up of RAM and ROM, with RAM integrated circuit chips holding the major share.
RAM integrated circuit chips
A static RAM's main components are flip-flops, which store binary data. The information stored
in RAM chips is volatile, which means it will remain valid as long as the device is powered on.
Characteristics of a RAM chip:
1. A RAM chip is suited for communication with the CPU if it has one or more control inputs
that choose the chip only when needed.
2. A bidirectional data bus is another typical feature that allows data to be transferred from
memory to CPU during a read operation and from CPU to memory during a write operation.
3. The logic 1 and 0 signals are standard digital signals.
4. RAM chips come in a variety of sizes and are used based on the needs of the system.
5. The RAM chips are further classified into two modes, static and dynamic.

A RAM chip's block diagram is given below.


RAM: Random Access Memory

o DRAM: Dynamic RAM is made of capacitors and transistors, and must be
refreshed every 10~100 ms. It is slower and cheaper than SRAM.

o SRAM: Static RAM has a six-transistor circuit in each cell and retains data until
powered off.

o NVRAM: Non-Volatile RAM retains its data even when turned off. Example:
Flash memory.

ROM: Read Only Memory is non-volatile and is more like a permanent storage for information.
It also stores the bootstrap loader program, used to load and start the operating system when the
computer is turned on. PROM (Programmable ROM), EPROM (Erasable PROM) and EEPROM
(Electrically Erasable PROM) are some commonly used ROMs. As the name suggests, ROM is
read-only: data can be read from it but cannot be written to it, and it retains its contents even
when the power is turned off.

ROM integrated circuit chips


A ROM memory is used to store data and programs permanently in a computer system.
ROM chips are available in a variety of sizes. The following block diagram represents a 512 × 8
ROM chip.
1. A ROM chip has a similar architecture to a RAM chip, with one difference: while RAM can
perform both read and write operations, a ROM can only perform read operations. In other
words, the data bus operates in output mode only.
2. The 9-bit address lines in the ROM chip can indicate any of the 512 bytes stored in it.
3. For the proper functioning of the unit, chip select 1 (CS1) must be 1 and chip select 2 (CS2)
must be 0. If this is not the case, the data bus is in a high-impedance state.
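The chip-select behaviour can be modelled with a short, illustrative Python sketch. The function and signal names here are hypothetical, chosen only to mirror the description above; `None` stands in for the high-impedance bus state.

```python
HIGH_Z = None  # stands in for the high-impedance (disconnected) bus state

def rom_read(rom, address, cs1, cs2):
    """Model a 512 x 8 ROM chip: the chip drives the data bus only
    when CS1 = 1 and CS2 = 0; otherwise the bus stays in high impedance."""
    if cs1 == 1 and cs2 == 0:
        return rom[address]
    return HIGH_Z

# Pretend contents: byte i of the ROM simply holds the value i.
rom = list(range(512))
enabled = rom_read(rom, 37, cs1=1, cs2=0)   # chip selected -> data appears
disabled = rom_read(rom, 37, cs1=0, cs2=0)  # chip not selected -> high-Z
print(enabled, disabled)
```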

Memory Connection to CPU


● The data and address buses are used to connect RAM and ROM chips to a CPU.
● The low-order lines in the address bus choose the byte within the chips, while the other lines
in the address bus select a particular chip through its chip select inputs.
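As a small illustration of this decoding, the following Python sketch assumes four hypothetical 128-byte RAM chips, so the 7 low-order address bits pick the byte within a chip and the high-order bits act as the chip select. The sizes are assumptions made for the example, not taken from the text.

```python
CHIP_SIZE = 128  # bytes per chip -> 7 low-order address bits pick the byte

def decode(address):
    """Split a CPU address into (chip number, byte offset within the chip).
    High-order bits act as the chip select; low-order bits pick the byte."""
    chip = address // CHIP_SIZE      # same as address >> 7
    offset = address % CHIP_SIZE     # same as address & 0x7F
    return chip, offset

# Address 300 = 2 * 128 + 44, so it selects chip 2 at offset 44.
mappings = [decode(a) for a in (0, 300, 511)]
print(mappings)
```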
Secondary Memory
It is also known as auxiliary memory and backup memory. It is a non-volatile memory and is used
to store a large amount of data or information. The data or information stored in secondary memory
is permanent, and it is slower than primary memory. A CPU cannot access secondary memory
directly. The data/information from the auxiliary memory is first transferred to the main memory,
and then the CPU can access it.
Characteristics of Secondary Memory
● It is a slow memory but reusable.
● It is a reliable and non-volatile memory.
● It is cheaper than primary memory.
● The storage capacity of secondary memory is large.
● A computer system can run without secondary memory.
● In secondary memory, data is stored permanently even when the power is off.
We have read so far, that primary memory is volatile and has limited capacity. So, it is important
to have another form of memory that has a larger storage capacity and from which data and
programs are not lost when the computer is turned off. Such a type of memory is called secondary
memory. In secondary memory, programs and data are stored. It is also called auxiliary memory.
It is different from primary memory as it is not directly accessible through the CPU and is non-
volatile. Secondary or external storage devices have a much larger storage capacity and the cost of
secondary memory is less as compared to primary memory.

Use of Secondary memory

Secondary memory is used for different purposes but the main purposes of using secondary
memory are:

● Permanent storage: As we know that primary memory stores data only when the power supply
is on, it loses data when the power is off. So we need secondary memory to store data
permanently even if the power supply is off.
● Large Storage: Secondary memory provides large storage space so that we can store large data
like videos, images, audio, files, etc. permanently.
● Portable: Some secondary devices are removable. So, we can easily store or transfer data from
one computer or device to another.

Types of Secondary memory

1. Fixed storage
In secondary memory, a fixed storage is an internal media device that is used to store data in a
computer system. Fixed storage is generally known as fixed disk drives or hard drives. Generally,
the data of the computer system is stored in a built-in fixed storage device. Fixed storage does not
mean that you cannot remove the device from the computer system; you can remove it for repair,
upgrade, or maintenance with the help of an expert or engineer.
Types of fixed storage:
Following are the types of fixed storage:

● Internal flash memory (rare)


● SSD (solid-state disk)
● Hard disk drives (HDD)

2. Removable storage
In secondary memory, removable storage is an external media device that is used to store data in
a computer system. Removable storage is generally known as disk drives or external drives. It is
a storage device that can be inserted or removed from the computer according to our requirements.
We can easily remove them from the computer system while the computer system is running.
Removable storage devices are portable so we can easily transfer data from one computer to
another. Also, removable storage devices provide the fast data transfer rates associated with
storage area networks (SANs).

Types of Removable Storage:


● Optical discs (like CDs, DVDs, Blu-ray discs, etc.)
● Memory cards
● Floppy disks
● Magnetic tapes
● Disk packs
● Paper storage (like punched tapes, punched cards, etc.)

Secondary memory devices

Following are the commonly used secondary memory devices are:

1. Floppy Disk: A floppy disk consists of a magnetic disc in a square plastic case. It is used to
store data and to transfer data from one device to another device. Floppy disks are available in two
sizes (a) Size: 3.5 inches, the Storage capacity of 1.44 MB (b) Size: 5.25 inches, the Storage
capacity of 1.2 MB. To use a floppy disk, our computer needs to have a floppy disk drive. This
storage device is now obsolete and has been replaced by CDs, DVDs, and flash drives.
2. Compact Disc: A Compact Disc (CD) is a commonly used secondary storage device. It contains
tracks and sectors on its surface. Its shape is circular and is made up of polycarbonate plastic. The
storage capacity of a CD is up to 700 MB of data. A CD may also be called a CD-ROM (Compact
Disc Read-Only Memory); computers can read the data present on a CD-ROM but cannot write
new data onto it. To use a CD-ROM, we require a CD-ROM drive. CD is of two types:
● CD-R (compact disc recordable): Once the data has been written onto it, it cannot be erased;
it can only be read.
● CD-RW (compact disc rewritable): It is a special type of CD in which data can be erased and
rewritten as many times as we want. It is also called an erasable CD.
3. Digital Versatile Disc: A Digital Versatile Disc (DVD) looks just like a CD, but its storage
capacity is greater: it stores up to 4.7 GB of data. A DVD-ROM drive is needed to use a DVD on
a computer. Video files, like movies or video recordings, are generally stored on DVD, and you
can play a DVD using a DVD player. DVD is of three types:
● DVD-ROM (Digital Versatile Disc Read-only): In a DVD-ROM the manufacturer writes the
data and the user can only read that data, not write new data to it. For example, a movie DVD is
already written by the manufacturer; we can only watch the movie but cannot write new data
onto it.
● DVD-R (Digital Versatile Disc Recordable): In a DVD-R you can write data only once. Once
the data has been written onto it, it cannot be erased; it can only be read.
● DVD-RW(Digital Versatile Disc Rewritable and Erasable): It is a special type of DVD in
which data can be erased and rewritten as many times as we want. It is also called an erasable
DVD.
4. Blu-ray Disc: A Blu-ray disc looks just like a CD or a DVD but it can store data or information
up to 25 GB data. If you want to use a Blu-ray disc, you need a Blu-ray reader. The name Blu-ray
is derived from the technology that is used to read the disc ‘Blu’ from the blue-violet laser and
‘ray’ from an optical ray.
6. Hard Disk: A hard disk is part of a unit called a hard disk drive. It is used to store a large
amount of data. Hard disks or hard disk drives come in different storage capacities (like 256 GB,
500 GB, 1 TB, 2 TB, etc.). It is created using a collection of discs known as platters. The
platters are placed one below the other. They are coated with magnetic material. Each platter
consists of a number of invisible concentric circles, each with the same centre, called tracks. Hard
disk is of two types: (i) internal hard disk, (ii) external hard disk.
6. Flash Drive: A flash drive or pen drive comes in various storage capacities, such as 1 GB, 2
GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, up to 1 TB. A flash drive is used to transfer and store
data. To use a flash drive, we need to plug it into a USB port on a computer. As a flash drive is
easy to use and compact in size, it is very popular nowadays.
7. Solid-State Disk: It is also known as an SSD. It is a non-volatile storage device that is used to
store and access data. It is faster, operates noiselessly (because it does not contain any moving
parts like a hard disk), consumes less power, etc. Provided the price is acceptable, it is a great
replacement for standard hard drives in computers and laptops, and it is also suitable for tablets,
notebooks, etc. because they do not require large storage.
8. SD Card: It is known as a Secure Digital Card. It is generally used in portable devices like
mobile phones, cameras, etc., to store data. It is available in different sizes like 1 GB, 2 GB, 4 GB,
8 GB, 16 GB, 32 GB, 64 GB, etc. To view the data stored in an SD card, you can remove it from
the device and insert it into a computer with the help of a card reader. The data in an SD card is
stored in memory chips (present in the SD card), and it does not contain any moving parts like a
hard disk.

Types of Secondary Memory


1. Magnetic Tapes: Magnetic tape is a long, narrow strip of plastic film with a thin, magnetic
coating on it that is used for magnetic recording. Bits are recorded on the tape as magnetic spots
along several parallel tracks; typically, 7 or 9 bits are recorded concurrently, and related groups
of bits form records. Each track has its own read/write head, which allows data to be recorded
and read as a sequence of characters. The tape can be stopped, started moving forward or
backward, or rewound.
2. Magnetic Disks: A magnetic disk is a circular metal or a plastic plate and these plates are coated
with magnetic material. The disc is used on both sides. Bits are stored in magnetized surfaces in
locations called tracks that run in concentric rings. Sectors are typically used to break tracks into
pieces.
Hard discs are discs that are permanently attached and cannot be removed by a single user.
3. Optical Disks: It’s a laser-based storage medium that can be written to and read. It is
reasonably priced and has a long lifespan. The optical disc can be taken out of the computer by
occasional users.
Types of Optical Disks
CD – ROM
● It stands for Compact Disc Read-Only Memory.
● Information is written to the disc by using a controlled laser beam to burn pits on the disc
surface.
● It has a highly reflecting surface, which is usually aluminium.
● The diameter of the disc is 5.25 inches.
● 16000 tracks per inch is the track density.
● The capacity of a CD-ROM is 600 MB, with each sector storing 2048 bytes of data.
● The data transfer rate is about 4800 KB/sec, and the access time is around 80 milliseconds.
WORM-(WRITE ONCE READ MANY)
● A user can only write data once.
● The information is written on the disc using a laser beam.
● It is possible to read the written data as many times as desired.
● They keep lasting records of information, but the access time is high.
● It is possible to rewrite updated or new data to another part of the disc.
● Data that has already been written cannot be changed.
● Usual size – 5.25 inch or 3.5 inch diameter.
● The usual capacity of a 5.25 inch disk is 650 MB, 5.2 GB, etc.
DVDs
● The term “DVD” stands for “Digital Versatile/Video Disc,” and there are two sorts of DVDs:
● DVD-R (writable)
● DVD-RW (re-writable)
● DVD-ROMs (Digital Versatile Disc Read-Only Memory): These are read-only discs that can
be used in a variety of ways. When compared to CD-ROMs, they can store a lot more data. A
DVD has a thick polycarbonate plastic layer that serves as a foundation for the other layers. It
is an optical memory from which data can only be read.
● DVD-R: DVD-R is a writable optical disc that can be used just once. It is a recordable DVD,
much like a WORM disc. DVDs have capacities ranging from 4.7 to 17 GB; the capacity of a
3.5 inch disc is 1.3 GB.

Cache Memory:

To temporarily store frequently accessed data and instructions, systems use cache memory, a
compact, high-speed memory. It is situated between the CPU and the main memory of the
computer and is a part of the hierarchy of memory.

The cache memory is always checked by the CPU before it can access any data or instructions.
The system performs better and saves time if the data or instruction is discovered in the cache.
When information is not located in the cache, it must be accessed from the main memory, which
takes additional time.

The concept of locality of reference governs how cache memory functions: frequently accessed
data and instructions are likely to be used again soon. The ability to quickly access this
frequently used data and instructions considerably boosts system performance. As a result, cache
memory is a crucial part of modern computer organization.

Cache Performance:

• Before reading or writing to a location in main memory, the processor looks for the associated
entry in cache.
• Data is read from cache when a cache hit occurs when the CPU determines that a memory
location is in cache.
• A cache miss is the technical term for when the CPU cannot find a location in cache. If there is
a cache miss, a new entry is allocated, the data is copied from main memory, and the request is
satisfied from the contents of the cache.
• Hit ratio is a commonly used measure to assess the effectiveness of cache memory. It is
calculated as follows.
Hit Rate = Hits / (Hits + Misses) = Hits / Total accesses
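The hit-rate formula can be checked with a tiny Python helper (illustrative only; the function name is made up for this sketch):

```python
def hit_rate(hits, misses):
    """Hit ratio = hits / total accesses, where total = hits + misses."""
    total = hits + misses
    return hits / total if total else 0.0

# e.g. 45 hits and 5 misses -> 45 / 50 = 0.9 hit ratio
rate = hit_rate(45, 5)
print(rate)
```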

Cache Mapping

Cache mapping refers to the technique by which content present in the main memory is
brought into the cache memory. Three distinct types of mapping are used for cache memory
mapping.

What is Cache Mapping?


As we know that the cache memory bridges the mismatch of speed between the main memory
and the processor. Whenever a cache hit occurs,

● The word that is required is present in the memory of the cache.


● Then the required word would be delivered from the cache memory to the CPU.
And, whenever a cache miss occurs,

● The word that is required isn’t present in the cache memory.
● The block containing the required word then has to be mapped from the main memory into
the cache.
● We can perform such mapping using various techniques of cache mapping.
Let us discuss different techniques of cache mapping in this article.

Process of Cache Mapping


The process of cache mapping helps us define how a certain block that is present in the main
memory gets mapped to the memory of a cache in the case of any cache miss.

In simpler words, cache mapping refers to a technique using which we bring blocks of the main
memory into the cache memory. Here is a diagram that illustrates the actual process of mapping:

Techniques of Cache Mapping


One can perform the process of cache mapping using these three techniques given as follows:
1. Direct Mapping
In the case of direct mapping, a certain block of the main memory can map to only one particular
line of the cache. The cache line number to which any distinct block can map is given by the
following:

Cache line number = (Address of the Main Memory Block ) Modulo (Total number of lines in
Cache)
For example,

● Let us consider that particular cache memory is divided into a total of ‘n’ number of
lines.
● Then, the block ‘j’ of the main memory would be able to map only to line number (j mod n)
of the cache.
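The direct-mapping rule above can be illustrated with a small Python sketch. The cache size and block numbers below are assumptions chosen for the example.

```python
def direct_mapped_line(block, num_lines):
    """Direct mapping: main-memory block j maps only to cache line (j mod n)."""
    return block % num_lines

# With a cache of n = 8 lines, blocks 3, 11 and 19 all contend for line 3,
# which is why a new block simply replaces whatever occupies that line.
lines = [direct_mapped_line(j, 8) for j in (3, 11, 19)]
print(lines)
```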

The Need for Replacement Algorithm


In the case of direct mapping,
● There is no requirement for a replacement algorithm.
● It is because the block of the main memory would be able to map to a certain line of the
cache only.
● Thus, the incoming (new) block always replaces the block that already exists, if any, in that
particular line.
Division of Physical Address
In the case of direct mapping, the division of the physical address occurs as follows:

2. Fully Associative Mapping


In the case of fully associative mapping,

● A main memory block is capable of mapping to any line of the cache that is freely available
at that particular moment.
● This makes fully associative mapping comparatively more flexible than direct mapping.
For Example
Let us consider the scenario given as follows:
Here, we can see that,

● Every single line of cache is available freely.


● Thus, any main memory block can map to a line of the cache.
● In case all the cache lines are occupied, one of the blocks that exists already needs to be
replaced.
The Need for Replacement Algorithm
In the case of fully associative mapping,

● The replacement algorithm is always required.


● The replacement algorithm suggests a block that is to be replaced whenever all the cache
lines happen to be occupied.
● So, replacement algorithms such as LRU Algorithm, FCFS Algorithm, etc., are
employed.
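As an illustration of why a replacement algorithm is needed, here is a minimal Python sketch of a fully associative cache with LRU replacement. It is a toy model, not a hardware implementation; the class and method names are made up for this example.

```python
from collections import OrderedDict

class FullyAssociativeCache:
    """Toy fully associative cache with LRU replacement: any block may
    occupy any line, so once all lines are full a victim must be chosen."""

    def __init__(self, num_lines):
        self.num_lines = num_lines
        self.lines = OrderedDict()   # block -> data, least recently used first

    def access(self, block):
        """Return True on a cache hit, False on a miss (with LRU eviction)."""
        if block in self.lines:
            self.lines.move_to_end(block)    # hit: mark as most recently used
            return True
        if len(self.lines) >= self.num_lines:
            self.lines.popitem(last=False)   # miss on a full cache: evict LRU
        self.lines[block] = f"data({block})"
        return False

cache = FullyAssociativeCache(num_lines=2)
# Access pattern: 1 miss, 2 miss, 1 hit, 3 evicts 2, 2 evicts 1.
results = [cache.access(b) for b in (1, 2, 1, 3, 2)]
print(results)
```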
Division of Physical Address
In the case of fully associative mapping, the division of the physical address occurs as follows:
3. K-way Set Associative Mapping
In the case of k-way set associative mapping,

● The cache lines are grouped into various sets, where each set consists of k lines.
● Any given main memory block can map only to a particular cache set.
● However, within that very set, the memory block can map to any cache line that is freely
available.
● The cache set to which a certain main memory block can map is basically given as
follows:
Cache set number = ( Block Address of the Main Memory ) Modulo (Total Number of sets
present in the Cache)

For Example
Let us consider the example given as follows of a two-way set-associative mapping:

In this case,

● k = 2 would suggest that every set consists of two cache lines.


● Since the cache consists of 6 lines, the total number of sets that are present in the cache =
6 / 2 = 3 sets.
● The block ‘j’ of the main memory is capable of mapping only to set number (j mod 3) of
the cache.
● Here, within this very set, the block ‘j’ is capable of mapping to any cache line that is
freely available at that moment.
● In case all the available cache lines happen to be occupied, then one of the blocks that
already exist needs to be replaced.
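The set-numbering rule of this two-way example can be checked with a short Python sketch (the block numbers below are chosen just for illustration):

```python
def set_number(block, num_sets):
    """k-way set associative mapping: block j maps only to set (j mod num_sets);
    within that set it may occupy any of the k lines."""
    return block % num_sets

# 6 cache lines with k = 2 lines per set -> 6 / 2 = 3 sets.
num_sets = 6 // 2
# Blocks 0, 4 and 7 map to sets 0, 1 and 1 respectively;
# blocks 4 and 7 contend for the two lines of set 1.
sets = [set_number(j, num_sets) for j in (0, 4, 7)]
print(sets)
```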
The Need for Replacement Algorithm
In the case of k-way set associative mapping,

● The k-way set associative mapping refers to a combination of the direct mapping as well
as the fully associative mapping.
● It makes use of the fully associative mapping that exists within each set.
● Therefore, the k-way set associative mapping needs a certain type of replacement
algorithm.
Division of Physical Address
In the case of k-way set associative mapping, the division of the physical address occurs as follows:

Special Cases

● In case k = 1, the k-way set associative mapping becomes direct mapping. Thus,
Direct Mapping = one-way set associative mapping

● In case k = the total number of lines present in the cache, the k-way set associative
mapping becomes fully associative mapping.
