
Input/Output Device Management
Introduction
• The two main jobs of a computer are I/O and
processing. In many cases, the main job is I/O and
the processing is merely incidental. For instance,
when we browse a web page or edit a file, our
immediate interest is to read or enter some
information, not to compute an answer.
• The operating system must issue commands to the devices, catch
interrupts, and handle errors. It should also provide
an interface between the devices and the rest of the
system that is simple and easy to use. To the extent
possible, the interface should be the same for all
devices. The I/O code represents a significant
fraction of the total operating system.
Principles of I/O hardware
• Different people look at I/O hardware in different
ways. Electrical engineers look at it in terms of chips,
wires, power supplies, motors, and all the other
physical components that make up the hardware.
• Programmers look at the interface presented to the
software- the commands the hardware accepts, the
functions it carries out, and the errors that can be
reported back.
• Here we are concerned with programming I/O
devices, not designing, building, or maintaining
them.
1. I/O Devices
• I/O devices can be roughly divided into two categories:
block devices and character devices.
• A block device is one that stores information in fixed- size
blocks, each one with its own address. Common block sizes
range from 512 bytes to 32,768 bytes. All transfers are in
units of one or more entire blocks. The essential property
of a block device is that it is possible to read or write each
block independently of all the other ones. Hard disks, CD-
ROMs, and USB sticks are common block devices.
• A character device delivers or accepts a stream of
characters, without regard to any block structure. It is not
addressable and does not have any seek operation.
Printers, network interfaces, mice, and most other devices
that are not disk-like can be seen as character devices.
2. Device Controllers
• I/O units typically consist of a mechanical component and an
electronic component. The electronic component is called the
device controller or adapter. On personal computers, it often
takes the form of a chip on the motherboard or a printed circuit
card that can be inserted into a (PCI) expansion slot.
• The controller card usually has a connector on it, into which a
cable leading to the device itself can be plugged. Many
controllers can handle two, four, or even eight identical devices.
• The interface between the controller and device is often a very
low-level interface. What actually comes off the drive, however,
is a serial bit stream, starting with a preamble, then the bits of
the sector, and finally a checksum. The controller’s job
is to convert the serial bit stream into a block of bytes and
perform any error correction necessary. The block of bytes is
typically first assembled, bit by bit, in a buffer inside the
controller. After its checksum has been verified and the block
has been declared to be error free, it can be copied to main
memory.
3. Memory-Mapped I/O
• Each controller has a few registers that are used for
communicating with the CPU. By writing into these
registers, the operating system can command the device to
deliver data, accept data, switch itself on or off, or
otherwise perform some action. By reading from these
registers, the operating system can learn what the device
state is, whether it is prepared to accept a new command,
and so on.
• The issue thus arises of how the CPU communicates with
the control registers and the device data buffers. For this
we map all the control registers into memory space. Each
control register is assigned a unique memory address to
which no memory is assigned. This system is called
memory-mapped I/O. Usually, the assigned addresses are at
the top of the address space.
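The idea can be sketched with a toy model in which ordinary loads and stores are routed either to RAM or to device registers, depending on the address. All names and addresses below (Bus, STATUS_ADDR, DATA_ADDR) are illustrative, not from any real machine.

```python
RAM_SIZE = 0xF000        # ordinary memory lives below this address
STATUS_ADDR = 0xFFF0     # device status register, mapped at the top
DATA_ADDR = 0xFFF1       # device data register

class Bus:
    """Routes each address either to RAM or to a device register."""
    def __init__(self):
        self.ram = bytearray(RAM_SIZE)
        self.regs = {STATUS_ADDR: 1, DATA_ADDR: 0}   # status 1 = ready

    def read(self, addr):
        return self.ram[addr] if addr < RAM_SIZE else self.regs[addr]

    def write(self, addr, value):
        if addr < RAM_SIZE:
            self.ram[addr] = value
        else:
            self.regs[addr] = value   # an ordinary store commands the device

bus = Bus()
if bus.read(STATUS_ADDR) == 1:        # read the status register like memory
    bus.write(DATA_ADDR, ord("A"))    # hand the device a byte of data
```

The point of the sketch is that the CPU needs no special I/O instructions: the same loads and stores it uses for memory reach the control registers.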
4. DMA (Direct Memory Access)
• No matter whether a CPU does or does not have
memory-mapped I/O, it needs to address the device
controllers to exchange data with them. The CPU can
request data from an I/O controller one byte at a time,
but doing so wastes the CPU’s time, so a different
scheme, called DMA (Direct Memory Access) is often
used.
• Direct Memory Access (DMA) is a feature in computer
systems that allows peripherals to transfer data to and
from the system's memory without involving the central
processing unit (CPU). DMA is used to improve overall
system performance by offloading data transfer tasks from
the CPU, which can then focus on other processing
activities.
• First the CPU programs the DMA controller by setting its
registers so it knows what to transfer where (step 1 in Fig. 5-4).
It also issues a command to the disk controller telling it to read
data from the disk into its internal buffer and verify the
checksum. When valid data are in the disk controller’s buffer,
DMA can begin.
• The DMA controller initiates the transfer by issuing a read
request over the bus to the disk controller (step 2). This read
request looks like any other read request, and the disk controller
does not know (or care) whether it came from the CPU or from a
DMA controller.
• Typically, the memory address to write to is on the bus’ address
lines, so when the disk controller fetches the next word from its
internal buffer, it knows where to write it. The write to memory
is another standard bus cycle (step 3).
• When the write is complete, the disk controller sends
an acknowledgement signal to the DMA controller, also
over the bus (step 4). The DMA controller then
increments the memory address to use and
decrements the byte count.
• If the byte count is still greater than 0, steps 2 through
4 are repeated until the count reaches 0. At that time,
the DMA controller interrupts the CPU to let it know
that the transfer is now complete. When the operating
system starts running again, it does not have to copy the disk
block to memory; it is already there.
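The repeated steps above can be sketched as a small simulation. The function name and layout are illustrative; a real DMA controller does this in hardware, one bus cycle per word, and the CPU is only involved at the start and at the final interrupt.

```python
def dma_transfer(disk_buffer, memory, start_addr):
    """Copy the controller's buffer into memory word by word; interrupt once
    when the count reaches zero. Returns (bus_cycles, interrupt_raised)."""
    addr, count, cycles = start_addr, len(disk_buffer), 0
    for byte in disk_buffer:      # steps 2 and 3, one bus cycle per byte
        memory[addr] = byte
        addr += 1                 # step 4: controller increments the address
        count -= 1                #         and decrements the byte count
        cycles += 1
    return cycles, count == 0     # a single interrupt only at the very end

ram = bytearray(64)
cycles, interrupted = dma_transfer(b"SECTOR01", ram, start_addr=16)
```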
PRINCIPLES OF I/O SOFTWARE
• Let us now turn away from the I/O hardware
and look at the I/O software. First we will look
at its goals and then at the different ways I/O
can be done from the point of view of the
operating system.
Goals of the I/O Software
• A key concept in the design of I/O software is known as
device independence. It means that it should be
possible to write programs that can access any I/O
device without having to specify the device in advance.
For example, a program that reads a file as input should
be able to read a file on hard disk, a CD-ROM, a DVD,
or a USB stick without having to modify the program
for each different device.
• Closely related to device independence is the goal of
uniform naming. The name of a file or a device should
simply be a string or an integer and not depend on
the device in any way.
Goals of the I/O Software(contd…)
• Another important issue for I/O software is error handling. In
general, errors should be handled as close to the hardware as
possible. If the controller discovers a read error, it should try to
correct the error itself if it can. If it cannot, then the device driver
should handle it, perhaps by just trying to read the block again.
• Another key issue is that of synchronous (blocking) versus
asynchronous (interrupt-driven) transfers. Most physical I/O is
asynchronous- the CPU starts the transfer and goes off to do
something else until the interrupt arrives.
• Another issue for the I/O software is buffering. Often data that
come off a device cannot be stored directly in its final
destination. For example, when a packet comes in off the
network, the operating system does not know where to put it
until it has stored the packet somewhere and examined it.
Polled I/O
• The simplest form of I/O is to have the CPU do
all the work. This method is called programmed I/O.
The actions taken by the operating system can be
summarized as follows. First the data are
copied to the kernel. Then the operating system
enters a tight loop outputting the characters one at a
time. The essential aspect of programmed I/O is
that after outputting a character, the CPU
continuously polls the device to see if it is ready to
accept another one. This behavior is often called
polling or busy waiting.
• It is simplest to illustrate how programmed I/O works by means of an
example. Consider a user process that wants to print the eight-character
string ‘‘ABCDEFGH’’ on the printer via a serial interface. Displays on small
embedded systems sometimes work this way. The software first assembles
the string in a buffer in user space, as shown in Fig. 5-7(a).
• The user process then acquires the printer for writing by making a system
call to open it. If the printer is currently in use by another process, this call
will fail and return an error code or will block until the printer is available,
depending on the operating system and the parameters of the call. Once it
has the printer, the user process makes a system call telling the operating
system to print the string on the printer.
• As soon as it has copied the first character to the printer, the operating
system checks to see if the printer is ready to accept another one. Generally,
the printer has a second register, which gives its status.
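The printing example can be sketched as follows. The Printer class is a hypothetical stand-in for the hardware; the essential part is the busy-wait loop that polls the status register after every character.

```python
class Printer:
    """Hypothetical stand-in for a printer with a one-byte data register
    and a status register."""
    def __init__(self):
        self.ready = True
        self.printed = []

    def write(self, ch):
        self.printed.append(ch)
        self.ready = False        # busy while the character is printed

    def tick(self):
        self.ready = True         # the hardware eventually finishes

def programmed_io(printer, data):
    for ch in data:               # data already copied to the kernel
        while not printer.ready:  # busy wait: poll the status register
            printer.tick()        # (in real hardware the CPU just spins)
        printer.write(ch)

prn = Printer()
programmed_io(prn, "ABCDEFGH")
```

During the whole transfer the CPU does nothing but spin in the inner loop, which is exactly why polling wastes CPU time on slow devices.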
Interrupt-Driven I/O
• The basic interrupt mechanism works as follows. The CPU
hardware has a wire called the interrupt-request line that the
CPU senses after executing every instruction. When the CPU
detects that a controller has asserted a signal on the interrupt
request line, the CPU saves a small amount of state, such as
the current value of the instruction pointer, and jumps to the
interrupt-handler routine at a fixed address in memory.
• The interrupt handler determines the cause of the interrupt,
performs the necessary processing, and executes a return
from interrupt instruction to return the CPU to the execution
state prior to the interrupt. We say that the device controller
raises an interrupt by asserting a signal on the interrupt
request line, the CPU catches the interrupt and dispatches to
the interrupt handler, and the handler clears the interrupt by
servicing the device.
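The raise/catch/dispatch/return cycle above can be sketched as a tiny simulation. The vector table, IRQ number, and instruction names are all illustrative; real hardware saves and restores state automatically.

```python
handlers = {}                # interrupt vector: IRQ number -> handler
trace = []

def cpu_run(instructions, irq_at, irq):
    pc = 0
    while pc < len(instructions):
        trace.append(instructions[pc])   # execute one instruction
        pc += 1
        if pc == irq_at:                 # controller asserts the request line
            saved_pc = pc                # CPU saves a small amount of state
            handlers[irq]()              # ...and jumps to the handler
            pc = saved_pc                # return from interrupt restores it

handlers[4] = lambda: trace.append("service printer")
cpu_run(["i0", "i1", "i2", "i3"], irq_at=2, irq=4)
```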
Interrupt-Driven I/O Cycle
I/O Using DMA
• An obvious disadvantage of interrupt-driven I/O is that
an interrupt occurs on every character. Interrupts take
time, so this scheme wastes a certain amount of CPU
time. A solution is to use DMA.
• Here the idea is to let the DMA controller feed the
characters to the printer one at a time, without the CPU
being bothered.
• In essence, DMA is programmed I/O, only with the
DMA controller doing all the work, instead of the main
CPU. This strategy requires special hardware (the DMA
controller) but frees up the CPU during the I/O to do
other work.
I/O SOFTWARE LAYERS
• I/O software is typically organized in four layers, as shown in Fig.
5-11. Each layer has a well-defined function to perform and a
well-defined interface to the adjacent layers. The functionality
and interfaces differ from system to system, so the discussion
that follows, which examines all the layers starting at the
bottom, is not specific to one machine.
Interrupt Handlers
• While programmed I/O is occasionally useful, for most I/O, interrupts are an
unpleasant fact of life and cannot be avoided. They should be hidden away,
deep in the bowels of the operating system, so that as little of the operating
system as possible knows about them. The best way to hide them is to have
the driver starting an I/O operation block until the I/O has completed and
the interrupt occurs. The driver can block itself, for example, by doing a
down on a semaphore, a wait on a condition variable, a receive on a
message, or something similar.
• When the interrupt happens, the interrupt procedure does whatever it has
to in order to handle the interrupt. Then it can unblock the driver that was
waiting for it. In some cases it will just do an up on a semaphore. In
others it will do a signal on a condition variable in a monitor. In still others, it
will send a message to the blocked driver. In all cases the net effect of the
interrupt will be that a driver that was previously blocked will now be able to
run. This model works best if drivers are structured as kernel processes, with
their own states, stacks, and program counters.
• We will now give an outline of this work as a series of steps that must be
performed in software after the hardware interrupt has completed.
Device Drivers
• We know that a device controller has some device registers used to
give it commands, some device registers used to read out its
status, or both. The number of device registers and the nature of
the commands vary radically from device to device. For example, a
mouse driver has to accept information from the mouse telling
how far it has moved and which buttons are currently
depressed. In contrast, a disk driver may have to know all about
sectors, tracks, cylinders, heads, arm motion, motor drives, and all
the other mechanics of making disk work properly. Obviously,
these drivers will be very different.
• As a consequence, each I/O device attached to a computer needs
some device-specific code for controlling it. This code, called the
device driver, is generally written by the device’s manufacturer and
delivered along with the device. Since each operating system
needs its own drivers, device manufacturers commonly supply
drivers for several popular operating systems.
Device Drivers(contd.)
Device-Independent I/O Software
• Although some of the I/O software is device specific, other parts of
it are device independent. The exact boundary between the drivers
and the device- independent software is system (and device)
dependent, because some functions that could be done in a
device-independent way may actually be done in the drivers, for
efficiency or other reasons. The functions shown in Fig. below are
typically done in the device-independent software.
• Uniform interfacing for device drivers
• Buffering
• Error reporting
• Allocating and releasing dedicated devices
• Providing a device-independent block size
• The basic function of the device-independent software is to
perform the I/O functions that are common to all devices and to
provide a uniform interface to the user-level software.
User Space I/O Software
• User space I/O software refers to software components or programs that
operate in the user space of an operating system and are responsible for
managing input and output (I/O) operations. User space is the part of the
address space where user-mode applications and non-privileged code run, as
opposed to the kernel space where the operating system's core functions
reside.
• In the context of I/O, user space I/O software typically includes libraries, APIs
(Application Programming Interfaces), and user-level drivers that facilitate
communication between user applications and hardware devices. Here are a
few key components and concepts related to user space I/O software:
1.User Space Libraries:
I/O Libraries: Libraries such as libc in Unix-like systems provide standard I/O functions like
fopen, fread, fwrite, etc., allowing user applications to perform file I/O operations without
directly interacting with the operating system kernel.
2.System Calls:
Some I/O operations involve system calls, which are requests made by user applications to the
operating system kernel. User space I/O software often interacts with the kernel through system
calls to perform tasks like reading from or writing to files.
3.User-Level Drivers:
In some cases, specific drivers for hardware devices may be implemented in user space. These
drivers communicate with the hardware but operate outside the kernel. User-level drivers can
provide more flexibility and ease of development for certain applications.
4. Asynchronous I/O:
• User space I/O software may leverage asynchronous I/O mechanisms, allowing
applications to initiate I/O operations and continue processing other tasks without
waiting for the completion of the I/O. This is often achieved using APIs like
POSIX AIO (Asynchronous I/O).
5. Memory-Mapped I/O:
• Some user space I/O software may utilize memory-mapped I/O techniques, where
a region of virtual memory is mapped to the memory space of a device. This
enables direct communication between the application and the device without the
need for explicit read and write operations.
6. Frameworks and APIs:
• Various frameworks and APIs exist to simplify user space I/O programming. For
example, in Linux, the libaio library provides an asynchronous I/O interface, and in
Windows, the I/O Completion Ports mechanism offers a way to handle
asynchronous I/O operations efficiently.
• User space I/O software allows developers to create applications
that interact with hardware devices and perform I/O operations
without requiring direct access to kernel-level privileges. It
enhances portability, ease of development, and often provides a
more straightforward interface for user applications.
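Two of the ideas above, library-level buffered I/O and memory-mapped file access, can be shown with Python's standard library standing in for the user-space I/O layer. The file name and contents are made up for the example.

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")

with open(path, "wb") as f:         # library call; write() syscalls underneath
    f.write(b"hello device")

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # map the file into our address space
    first = bytes(mm[:5])           # plain memory access, no read() call
    mm[0:5] = b"HELLO"              # update the file through the mapping
    mm.flush()
    mm.close()

with open(path, "rb") as f:
    contents = f.read()
```

The application never issues an explicit read for the mapped region; the slicing and assignment on `mm` are ordinary memory operations that the kernel backs with the file.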
Disk Hardware
• Disks come in a variety of types. The most
common ones are the magnetic disks (hard disks
and floppy disks). They are characterized by the
fact that reads and writes are equally fast, which
makes them ideal as secondary memory (paging,
file systems, etc.).
• Arrays of these disks are sometimes used to
provide highly reliable storage. For distribution
of programs, data, and movies, various kinds of
optical disks (CD-ROMs, CD-Recordables, and
DVDs) are also important.
Magnetic Disks
• Magnetic disks are organized into cylinders, each one
containing as many tracks as there are heads stacked
vertically. The tracks are divided into sectors, with the
number of sectors around the circumference typically being
8 to 32 on floppy disks, and up to several hundred on hard
disks. The number of heads varies from 1 to about 16.
• Older disks have little electronics and just deliver a simple
serial bit stream. On these disks, the controller does most
of the work. On other disks, in particular IDE (Integrated
Drive Electronics) and SATA (Serial ATA) disks, the disk
drive itself contains a microcontroller that does
considerable work and allows the real controller to issue a
set of higher-level commands. The controller often does
track caching, bad block remapping, and much more.
Disk structure
1.Sector:
The smallest unit of data storage on a disk is a sector. A sector typically holds 512 bytes of data,
although modern disks may use larger sector sizes. The operating system interacts with the disk at
the sector level.
2.Track:
A track is a concentric circle on the disk surface. Tracks divide the disk into circular paths, and
each track consists of multiple sectors. The combination of tracks and sectors forms the basic grid
for organizing data on a disk.
3.Cylinder:
A cylinder is a set of tracks that are vertically aligned across multiple disk surfaces or platters.
The concept of cylinders arises from the physical arrangement of the disk's read/write heads,
which move together to access data on all surfaces at the same radial position.
4.Platter:
Disks consist of one or more platters, which are circular, flat disks coated with a magnetic
material. The platters are stacked on a spindle, and each platter has two surfaces (top and bottom)
where data can be stored.
5.Head:
A read/write head is a magnetic or optical device that reads data from or writes data to the disk
surfaces. There is typically one head per surface, and they move together as a unit to the desired
track.
6. Arm:
The data on a hard drive is read by read-write heads. The standard configuration
uses one head per surface, each on a separate arm, controlled by a common arm
assembly which moves all heads simultaneously from one cylinder to another.
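The sector/head/cylinder terms above combine into a single linear block number via the classic CHS-to-LBA mapping. The 16-head, 63-sector geometry used here is illustrative.

```python
def chs_to_lba(cylinder, head, sector, heads_per_cyl, sectors_per_track):
    """Classic CHS-to-LBA mapping; sectors are numbered from 1 within a track."""
    return (cylinder * heads_per_cyl + head) * sectors_per_track + (sector - 1)

# first sector of cylinder 1 on an illustrative 16-head, 63-sector geometry
lba = chs_to_lba(cylinder=1, head=0, sector=1,
                 heads_per_cyl=16, sectors_per_track=63)
```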
Disk Scheduling
• The operating system is responsible for using hardware
efficiently — for the disk drives, this means having a fast
access time and disk bandwidth.
• Access time has two major components
 " Seek time is the time for the disk arm to move the
heads to the cylinder containing the desired
sector.
 " Rotational latency is the additional time waiting for
the disk to rotate the desired sector to the disk head.
• Minimize seek time
• Seek time ≈ seek distance
• Disk bandwidth is the total number of bytes
transferred, divided by the total time between the first
request for service and the completion of the last transfer.
FCFS
• FCFS is the simplest of all Disk Scheduling Algorithms. In
FCFS, the requests are addressed in the order they arrive in the
disk queue.
Advantages of FCFS
Here are some of the advantages of First Come First Serve.
•Every request gets a fair chance
•No indefinite postponement
Disadvantages of FCFS
Here are some of the disadvantages of First Come First Serve.
•Does not try to optimize seek time
•May not provide the best possible service
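As a small sketch, the function below totals the head movement under FCFS for a made-up request queue (the cylinder numbers and starting head position are illustrative, not from the text).

```python
def fcfs_seek(requests, head):
    """Total head movement when requests are served strictly in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)   # seek distance for this request
        head = r
    return total

# made-up queue of cylinder numbers, head initially at cylinder 53
moves = fcfs_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53)
```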
SSTF
• In SSTF (Shortest Seek Time First), requests having the shortest seek time
are executed first. So, the seek time of every request is calculated in advance
in the queue and then they are scheduled according to their calculated seek
time. As a result, the request near the disk arm will get executed first. SSTF is
certainly an improvement over FCFS as it decreases the average response
time and increases the throughput of the system
Advantages of Shortest Seek Time First
Here are some of the advantages of Shortest Seek Time First.
•The average Response Time decreases
•Throughput increases
Disadvantages of Shortest Seek Time First
Here are some of the disadvantages of Shortest Seek Time First.
•Overhead to calculate seek time in advance
•Can cause Starvation for a request if it has a higher seek time as compared to
incoming requests
•The high variance of response time as SSTF favors only some requests
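A sketch of SSTF on the same made-up queue shows the improvement over FCFS: the nearest pending request is always served next.

```python
def sstf_seek(requests, head):
    """Serve the pending request with the shortest seek distance first."""
    pending = list(requests)
    total = 0
    order = []
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        order.append(nearest)
        pending.remove(nearest)
    return total, order

total_sstf, order_sstf = sstf_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53)
```

Distant requests like cylinder 183 are served last, which is exactly how starvation can arise if nearer requests keep arriving.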
SCAN
• In the SCAN algorithm the disk arm moves in a particular direction and
services the requests coming in its path and after reaching the end of the disk,
it reverses its direction and again services the request arriving in its path. So,
this algorithm works as an elevator and is hence also known as an elevator
algorithm. As a result, the requests at the midrange are serviced more and
those arriving behind the disk arm will have to wait.
Advantages of SCAN Algorithm
Here are some of the advantages of the SCAN Algorithm.
•High throughput
•Low variance of response time
•Average response time
Disadvantages of SCAN Algorithm
Here are some of the disadvantages of the SCAN Algorithm.
•Long waiting time for requests for locations just visited by disk arm
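SCAN can be sketched as follows, assuming the head is moving toward higher cylinders first; the queue and the 200-cylinder disk are made-up example values.

```python
def scan_seek(requests, head, max_cylinder):
    """SCAN (elevator): sweep toward higher cylinders, reverse at the edge."""
    below = sorted(r for r in requests if r < head)
    above = sorted(r for r in requests if r >= head)
    order = above + list(reversed(below))
    # travel to the top edge of the disk, then back down to the lowest request
    total = (max_cylinder - head) + ((max_cylinder - below[0]) if below else 0)
    return total, order

total_scan, order_scan = scan_seek([98, 183, 37, 122, 14, 124, 65, 67],
                                   head=53, max_cylinder=199)
```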
C-SCAN
In the SCAN algorithm, the disk arm again scans the path that has
been scanned, after reversing its direction. So, it may be possible
that too many requests are waiting at the other end or there may be
zero or few requests pending at the scanned area.
These situations are avoided in the CSCAN algorithm in which the
disk arm instead of reversing its direction goes to the other end of
the disk and starts servicing the requests from there. So, the disk
arm moves in a circular fashion and this algorithm is also similar to
the SCAN algorithm hence it is known as C-SCAN (Circular
SCAN).
Advantages of C-SCAN Algorithm
Here are some of the advantages of C-SCAN.
•Provides more uniform wait time compared to SCAN.
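The circular variant can be sketched the same way: after the upward sweep the arm jumps back to cylinder 0 and continues upward, so every request is served on an upward pass.

```python
def cscan_seek(requests, head, max_cylinder):
    """C-SCAN: sweep up, jump back to cylinder 0, continue sweeping up."""
    above = sorted(r for r in requests if r >= head)
    below = sorted(r for r in requests if r < head)
    total = max_cylinder - head          # sweep to the top edge
    if below:
        total += max_cylinder            # return jump to cylinder 0
        total += below[-1]               # sweep up to the last low request
    return total, above + below

total_cscan, order_cscan = cscan_seek([98, 183, 37, 122, 14, 124, 65, 67],
                                      head=53, max_cylinder=199)
```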
LOOK
• LOOK Algorithm is similar to the SCAN disk
scheduling algorithm except for the difference
that the disk arm in spite of going to the end of
the disk goes only to the last request to be
serviced in front of the head and then reverses
its direction from there only. Thus it prevents
the extra delay which occurred due to
unnecessary traversal to the end of the disk.
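A sketch of LOOK makes the saving explicit: the arm reverses at the highest pending request (183 in the made-up queue) rather than at the disk edge.

```python
def look_seek(requests, head):
    """LOOK: like SCAN, but reverse at the last request, not the disk edge."""
    below = sorted(r for r in requests if r < head)
    above = sorted(r for r in requests if r >= head)
    total = 0
    if above:
        total += above[-1] - head        # up only as far as the highest request
    if below:
        turn = above[-1] if above else head
        total += turn - below[0]         # back down to the lowest request
    return total, above + list(reversed(below))

total_look, _ = look_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53)
```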
C-LOOK
• As LOOK is similar to the SCAN algorithm, in
a similar way, C-LOOK is similar to the
C-SCAN disk scheduling algorithm. In C-LOOK,
the disk arm in spite of going to the end goes
only to the last request to be serviced in front of
the head and then from there goes to the other
end’s last request. Thus, it also prevents the
extra delay which occurred due to unnecessary
traversal to the end of the disk.
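C-LOOK can be sketched by combining the two ideas: sweep up to the highest pending request, jump directly to the lowest one, and continue upward from there.

```python
def clook_seek(requests, head):
    """C-LOOK: sweep up to the highest request, jump to the lowest, sweep up."""
    above = sorted(r for r in requests if r >= head)
    below = sorted(r for r in requests if r < head)
    total = 0
    if above:
        total += above[-1] - head
    if below:
        start = above[-1] if above else head
        total += start - below[0]        # jump down to the lowest pending request
        total += below[-1] - below[0]    # then service the low requests upward
    return total, above + below

total_clook, _ = clook_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53)
```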
RAID
• Disk drives have continued to get smaller and cheaper,
so it is now economically feasible to attach a large
number of disks to a computer system. Having a large
number of disks in a system presents opportunities for
improving the rate at which data can be read or written,
if the disks are operated in parallel.
• Furthermore, this setup offers the potential for
improving the reliability of data storage, because
redundant information can be stored on multiple disks.
Thus, failure of one disk does not lead to loss of data. A
variety of disk-organization techniques, collectively
called redundant arrays of inexpensive disks (RAID), are
commonly used to address the performance and
reliability issues.
RAID LEVELS
RAID level 0 – Striping:
• In a RAID 0 system data are split up into blocks that get written
across all the drives in the array. By using multiple disks (at least 2)
at the same time, this offers superior I/O performance. This
performance can be enhanced further by using multiple
controllers, ideally one controller per disk.
Advantages:
• RAID 0 offers great performance, both in read and write
operations. There is no overhead caused by parity controls.
• All storage capacity is used, there is no overhead.
•The technology is easy to implement.
Disadvantages:
• RAID 0 is not fault-tolerant. If one drive fails, all data in the RAID 0
array are lost. It should not be used for mission-critical systems.
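Striping can be sketched in a few lines: consecutive blocks are written round-robin across the drives, so reads and writes of a large file hit all disks in parallel. The data and block size below are illustrative.

```python
def stripe(data, num_disks, block_size):
    """Write consecutive blocks round-robin across the drives of the array."""
    disks = [bytearray() for _ in range(num_disks)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for i, block in enumerate(blocks):
        disks[i % num_disks].extend(block)   # block i lands on disk i mod n
    return disks

striped = stripe(b"AABBCCDD", num_disks=2, block_size=2)
```

Losing either bytearray loses half the blocks of every large file, which is why RAID 0 alone is not fault-tolerant.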
RAID LEVELS
RAID level 1 – Mirroring:
• Data are stored twice by writing them to both the data drive (or
set of data drives) and a mirror drive (or set of drives). If a drive
fails, the controller uses either the data drive or the mirror drive
for data recovery and continues operation. You need at least 2
drives for a RAID 1 array.
Advantages
• RAID 1 offers excellent read speed and a write speed that is comparable
to that of a single drive.
• In case a drive fails, data do not have to be rebuilt, they just have to be
copied to the replacement drive.
• RAID 1 is a very simple technology.
Disadvantages
• The main disadvantage is that the effective storage capacity is only half
of the total drive capacity because all data get written twice.
RAID LEVELS
RAID level 5:
• RAID 5 is the most common secure RAID level. It requires at least 3 drives
but can work with up to 16. Data blocks are striped across the drives and
on one drive a parity checksum of all the block data is written. The parity
data are not written to a fixed drive, they are spread across all drives, as
the drawing below shows. Using the parity data, the computer can
recalculate the data of one of the other data blocks, should those data no
longer be available. That means a RAID 5 array can withstand a single
drive failure without losing data or access to data.
Advantages:
• Read data transactions are very fast while write data transactions are somewhat
slower (due to the parity that has to be calculated).
• If a drive fails, you still have access to all data, even while the failed drive is being
replaced and the storage controller rebuilds the data on the new drive.
Disadvantages:
• Drive failures have an effect on throughput, although this is still acceptable.
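The parity recalculation that RAID 5 relies on is plain XOR, which can be sketched directly. The two-byte blocks are made-up example data.

```python
def parity_block(blocks):
    """XOR parity over equal-sized data blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving, parity):
    """XORing the surviving blocks with the parity yields the missing block."""
    return parity_block(list(surviving) + [parity])

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xaa\xbb"
p = parity_block([d0, d1, d2])
recovered = reconstruct([d0, d2], p)   # pretend d1's drive failed
```

Because XOR is its own inverse, any single missing block (data or parity) can be rebuilt from the remaining ones, which is why one drive failure is survivable.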
RAM Disks
• Refers to RAM that has been configured to simulate a disk drive.
You can access files on a RAM disk as you would access files on a
real disk. RAM disks, however, are approximately a thousand
times faster than hard disk drives. They are particularly useful,
therefore, for applications that require frequent disk accesses.
• Because they are made of normal RAM, RAM disks lose their
contents once the computer is turned off. To use a RAM disk,
therefore, you need to copy files from a real hard disk at the
beginning of the session and then copy the files back to the
hard disk before you turn the computer off. Note that if there is
a power failure, you will lose whatever data is on the RAM
disk. (Some RAM disks come with a battery backup to make
them more stable.)
• A RAM disk is also called a virtual disk or a RAM drive.
Disk formatting
• Before the disk can be used, each platter must receive a low-level format done by
software. The format consists of a series of concentric tracks, each containing some
number of sectors, with short gaps between the sectors.
• The preamble starts with a certain bit pattern that allows the hardware to recognize
the start of the sector. It also contains the cylinder and sector numbers and some
other information. The size of the data portion is determined by the low-level
formatting program. Most disks use 512-byte sectors. The ECC field contains
redundant information that can be used to recover from read errors. The size and
content of this field varies from manufacturer to manufacturer, depending on how
much disk space the designer is willing to give up for higher reliability and how
complex an ECC code the controller can handle. A 16-byte ECC field is not unusual.
Furthermore, all hard disks have some number of spare sectors allocated to be used
to replace sectors with a manufacturing defect.
• The position of sector 0 on each track is offset from the previous
track when the low-level format is laid down. This offset, called
cylinder skew, is done to improve performance. The idea is to
allow the disk to read multiple tracks in one continuous
operation without losing data
• Formatting also affects performance. If a 10,000-RPM disk has 300 sectors
per track of 512 bytes each, it takes 6 msec to read the 153,600 bytes on a
track for a data rate of 25,600,000 bytes/sec or 24.4 MB/sec. It is not
possible to go faster than this, no matter what kind of interface is present,
even if it is a SCSI interface at 80 MB/sec or 160 MB/sec. Actually reading
continuously at this rate requires a large buffer in the controller. Consider, for
example, a controller with a one-sector buffer that has been given a
command to read two consecutive sectors. After reading the first sector
from the disk and doing the ECC calculation, the data must be transferred to
main memory.
• While this transfer is taking place, the next sector will fly by the head. When
the copy to memory is complete, the controller will have to wait almost an
entire rotation time for the second sector to come around again. This
problem can be eliminated by numbering the sectors in an interleaved
fashion when formatting the disk. In Fig. 5-23(a), we see the usual
numbering pattern (ignoring cylinder skew here). In Fig. 5-23(b), we see
single interleaving, which gives the controller some breathing space between
consecutive sectors.
• If the copying process is very slow, the double
interleaving of Fig. 5-23(c) may be needed. If
the controller has a buffer of only one sector, it
does not matter whether the copying from the
buffer to main memory is done by the
controller, the main CPU, or a DMA chip; it still
takes some time. To avoid the need for
interleaving, the controller should be able to
buffer an entire track. Most modern controllers
can buffer many entire tracks
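The data-rate arithmetic and the interleaved numbering above can both be checked with a short sketch. The layout function is an illustrative reconstruction of the numbering scheme, not code from any controller.

```python
# data rate of the 10,000-RPM example, in integer arithmetic
rpm, sectors_per_track, bytes_per_sector = 10_000, 300, 512
rotation_ms = 60_000 // rpm                         # 6 msec per revolution
track_bytes = sectors_per_track * bytes_per_sector  # 153,600 bytes per track
rate = track_bytes * rpm // 60                      # bytes per second

def interleave(n_sectors, skip):
    """Number the logical sectors around a track, leaving `skip` physical
    sectors between consecutive logical ones (skip=0: none, 1: single
    interleaving, 2: double interleaving)."""
    layout = [None] * n_sectors
    pos = 0
    for logical in range(n_sectors):
        while layout[pos] is not None:      # find the next free physical slot
            pos = (pos + 1) % n_sectors
        layout[pos] = logical
        pos = (pos + 1 + skip) % n_sectors
    return layout

single = interleave(8, skip=1)
```

With single interleaving the controller gets one sector time of breathing space after each logical sector to finish copying its buffer to memory.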