
IO System

Overview
The control of devices connected to the computer is a major concern
of operating-system designers. Because I/O devices vary so widely in
their function and speed (consider a mouse, a hard disk, and a
CD-ROM jukebox), varied methods are needed to control them.
To encapsulate the details and oddities of different devices, the
kernel of an operating system is structured to use device-driver
modules. Device drivers present a uniform device-access
interface to the I/O subsystem, much as system calls provide a
standard interface between applications and the operating
system.
IO Hardware
A device communicates with a computer system by sending signals
over a cable or even through the air. The device communicates with
the machine via a connection point, or port; for example, a serial
port. If devices share a common set of wires, the connection is called
a bus. A bus is a set of wires and a rigidly defined protocol that
specifies a set of messages that can be sent on the wires.
Buses are used widely in computer architecture. A typical PC bus
structure appears in the accompanying figure. It shows a PCI bus
(the common PC system bus) that connects the processor-memory
subsystem to the fast devices, and an expansion bus that connects
relatively slow devices such as the keyboard and the serial and
parallel ports. In the upper-right portion of the figure, four disks
are connected together on a SCSI bus plugged into a SCSI
controller.
An I/O port typically consists of four registers, called the (1) data-in,
(2) data-out, (3) status, and (4) control registers.
1) The data-in register is read by the host to get input.
2) The data-out register is written by the host to send output.
3) The status register contains bits that can be read by the host. These bits
indicate states, such as whether the current command has completed,
whether a byte is available to be read from the data-in register, and
whether a device error has occurred.
4) The control register can be written by the host to start a command or to
change the mode of a device.
Polling
• Polling is not a hardware mechanism; it is a protocol in which the CPU
repeatedly checks whether a device needs attention. In polling, the
CPU keeps asking each I/O device whether or not it requires
servicing, checking every device hooked up to the system in turn.
• Two bits are used to coordinate the producer-consumer relationship
between the controller and the host. The controller indicates its state
through the busy bit in the status register. (Recall that to set a bit
means to write a 1 into it, and to clear a bit means to write a 0
into it.) The controller sets the busy bit when it is busy working and
clears the busy bit when it is ready to accept the next command.
Important points of Polling
• The process in which the CPU constantly checks the status of a device
to see if it needs the CPU's attention.
• It is a protocol.
• In this protocol, the CPU services the device.
• The command-ready bit indicates that a command is waiting to be serviced.
• The CPU must wait and check whether a device needs to be serviced.
• This wastes many CPU cycles.
• The CPU polls the devices at regular intervals of time.
• This protocol becomes inefficient when the CPU rarely finds a device that
is ready to be serviced.
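The busy-wait handshake above can be sketched as a toy simulation. The `Controller` class and its `service` method are illustrative stand-ins for a real controller's memory-mapped registers, not an actual API:

```python
class Controller:
    """Toy device controller exposing the status/control/data registers."""
    def __init__(self):
        self.busy = 0            # busy bit in the status register
        self.command_ready = 0   # command-ready bit in the control register
        self.data_out = None     # data-out register
        self.written = []        # bytes the simulated device has consumed

    def service(self):
        # Device side: if a command is pending, set busy, do the work,
        # then clear both bits so the host may issue the next command.
        if self.command_ready:
            self.busy = 1
            self.written.append(self.data_out)
            self.command_ready = 0
            self.busy = 0

def polled_write(ctrl, byte):
    # Host side of the polling protocol:
    while ctrl.busy:             # 1. spin until the busy bit clears
        pass
    ctrl.data_out = byte         # 2. load the data-out register
    ctrl.command_ready = 1       # 3. set command-ready to start the command
    ctrl.service()               # (simulation only: let the device run)

ctrl = Controller()
for b in b"io":
    polled_write(ctrl, b)
print(ctrl.written)  # [105, 111]
```

The busy-waiting in `polled_write` is exactly the wasted-cycles problem the bullet points describe.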
Interrupt
An interrupt is a hardware mechanism in which a device notifies the CPU that it
requires its attention. An interrupt can take place at any time. When the CPU gets
an interrupt signal through the interrupt-request line, it stops the current
process and responds to the interrupt by passing control to an interrupt handler,
which services the device.

• It is the mechanism by which a device notifies the CPU that it requires attention.
• It is a hardware mechanism.
• An interrupt handler services the device.
• The interrupt-request line indicates that a device needs to be serviced.
• The CPU is used only when a device requires servicing.
• This, in turn, saves CPU cycles.
• An interrupt can occur at any point in time.
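By contrast with polling, the interrupt-driven path can be sketched as a dispatch table: the device "raises" the interrupt-request line and the CPU jumps to the registered handler. The class and method names here are illustrative, not a real kernel API:

```python
class InterruptController:
    """Toy interrupt vector: maps IRQ numbers to handler functions."""
    def __init__(self):
        self.handlers = {}

    def register(self, irq, handler):
        self.handlers[irq] = handler

    def raise_irq(self, irq, data):
        # Hardware asserts the interrupt-request line; the CPU saves its
        # state and transfers control to the handler for that vector.
        self.handlers[irq](data)

received = []
ic = InterruptController()
ic.register(4, received.append)   # e.g. a serial-port IRQ handler
ic.raise_irq(4, 0x41)             # device signals: "a byte has arrived"
print(received)  # [65]
```

No CPU cycles are spent checking status bits; the handler runs only when the device actually raises the line.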
Direct Memory Access
For a device that does large transfers, such as a disk drive, it seems
wasteful to use an expensive general-purpose processor to watch
status bits and to feed data into a controller register one byte at a
time, a process termed programmed I/O (PIO).
A DMA controller is a hardware device that allows I/O devices to
access memory directly, with less participation by the processor. The
DMA controller uses the usual interface circuitry to communicate
with the CPU and the I/O devices.
1. Whenever an I/O device wants to transfer data to or from memory,
it sends a DMA request (DRQ) to the DMA controller. The DMA
controller accepts this DRQ and asks the CPU to hold for a few
clock cycles by sending it the Hold request (HLD).
2. The CPU receives the Hold request (HLD) from the DMA controller,
relinquishes the bus, and sends the Hold acknowledgement (HLDA)
to the DMA controller.
3. After receiving the Hold acknowledgement (HLDA), the DMA
controller acknowledges to the I/O device (DACK) that the data
transfer can be performed, takes charge of the system bus, and
transfers the data to or from memory.
4. When the data transfer is accomplished, the DMA controller raises
an interrupt to let the processor know that the task of data transfer
is finished and that the processor can take control of the bus again.
Direct Memory Access Diagram
Whenever a processor is requested to read or write a block of
data, i.e. transfer a block of data, it instructs the DMA
controller by sending the following information.
The first piece of information is whether the data has to be read
from memory or written to memory. The processor passes this via
the read or write control lines between the processor and the DMA
controller's control logic unit.
The processor also provides the starting address of the data block
in memory, from where the data block has to be read or to where
it has to be written. The DMA controller stores this in its address
register, which is also called the starting address register.
The processor also sends the word count, i.e. how many words are
to be read or written. The controller stores this in the data count
or word count register.
Finally, the processor supplies the address of the I/O device that is
to take part in the transfer.
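The pieces of set-up information listed above can be pictured as one record that the processor hands to the controller; the field names here are illustrative, not register names from any particular chip:

```python
from dataclasses import dataclass

@dataclass
class DMASetup:
    write_to_memory: bool  # direction, from the read/write control lines
    start_address: int     # goes into the (starting) address register
    word_count: int        # goes into the word-count register
    device_address: int    # which I/O device takes part in the transfer

setup = DMASetup(write_to_memory=True, start_address=0x1000,
                 word_count=512, device_address=3)
print(setup.word_count)  # 512
```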
Direct Memory Access Advantages and Disadvantages

Advantages:
Transferring data without involving the processor speeds up the
read-write task.
DMA reduces the clock cycles required to read or write a block of data.
Implementing DMA also reduces the overhead on the processor.
Disadvantages:
As it is a hardware unit, implementing a DMA controller in the
system adds cost.
Cache coherence problems can occur while using a DMA controller.
I/O Interface
An interface used to share information between the CPU and I/O
devices is called an I/O interface.
Various applications of an I/O interface:
The interface can open any file without any prior information about
it; even the basic details of the file may be unknown. It also allows
new devices to be added to the computer system without causing
any disruption to the operating system. It can also be used to
abstract away differences among I/O devices by identifying general
kinds of device. Access to each general kind is through a
standardized set of functions, which is called an interface.
Kernel I/O Subsystem
Kernels provide many services related to I/O. Several services-scheduling,
buffering, caching, spooling, device reservation, and error handling-are
provided by the kernel's I/O subsystem and build on the hardware and
device driver infrastructure. The I/O subsystem is also responsible for
protecting itself from errant processes and malicious users.
1) I/O Scheduling –
To schedule a set of I/O requests means to determine a good order in which
to execute them. The order in which applications issue system calls is rarely
the best choice. Scheduling can improve the overall performance of the
system, can share device access fairly among processes, and can reduce the
average waiting time, response time, and turnaround time for I/O to
complete.
2) Buffering –
A buffer is a memory area that stores data being transferred between two
devices or between a device and an application. Buffering is done to cope
with a speed mismatch between producer and consumer of a data stream.
3) Caching –
A cache is a region of fast memory that holds a copy of data. Access to the
cached copy is much faster than access to the original. For instance, the
instructions of the currently running process are stored on disk, cached in
physical memory, and copied again into the CPU's secondary and primary
caches.
4) Spooling and Device Reservation –
A spool is a buffer that holds output for a device, such as a printer, that
cannot accept interleaved data streams. Although a printer can serve only one
job at a time, several applications may wish to print their output concurrently,
without having their output mixed together.
The OS solves this problem by intercepting all output to the printer. Each
application's output is spooled to a separate disk file. When an application
finishes printing, the spooling system queues the corresponding spool file for
output to the printer.

5) Error Handling –
An OS that uses protected memory can guard against many kinds of hardware
and application errors, so that a complete system failure is not the usual result
of each minor mechanical glitch. Devices and I/O transfers can fail in many
ways, either for transient reasons, as when a network becomes overloaded, or
for permanent reasons, as when a disk controller becomes defective.
6) I/O Protection –
Errors and the issue of protection are closely related. A user process
may attempt to issue illegal I/O instructions to disrupt the normal
function of a system. We can use the various mechanisms to ensure
that such disruption cannot take place in the system. To prevent illegal
I/O access, we define all I/O instructions to be privileged instructions.
User processes cannot issue I/O instructions directly.
Disk Scheduling
Disk scheduling is done by operating systems to schedule I/O requests
arriving for the disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
• Multiple I/O requests may arrive from different processes, and only one I/O
request can be served at a time by the disk controller. Thus other I/O
requests need to wait in the waiting queue and need to be scheduled.
• Two or more requests may be far from each other, which can result in
greater disk arm movement.
• Hard drives are one of the slowest parts of the computer system and
thus need to be accessed in an efficient manner.
• There are many Disk Scheduling Algorithms but before discussing them let’s
have a quick look at some of the important terms:
• Seek Time: Seek time is the time taken to move the disk arm to the specified
track where the data is to be read or written. The disk scheduling algorithm
that gives the minimum average seek time is better.
• Rotational Latency: Rotational latency is the time taken by the desired sector
of the disk to rotate into a position where it can be accessed by the read/write
heads. The disk scheduling algorithm that gives the minimum rotational
latency is better.
• Transfer Time: Transfer time is the time to transfer the data. It depends on
the rotating speed of the disk and the number of bytes to be transferred.
• Disk Access Time: Disk Access Time is:
Disk Access Time = Seek Time +
Rotational Latency +
Transfer Time
• Disk Response Time: Response time is the average time a request spends
waiting to perform its I/O operation. Average response time is the mean
response time over all requests. Variance of response time measures how
individual requests are serviced with respect to the average response time.
The disk scheduling algorithm that gives the minimum variance of response
time is better.
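A quick numeric illustration of the access-time formula above, using made-up figures (the millisecond values are assumptions, not measurements):

```python
seek_time = 5.0                        # ms, assumed average seek
rotational_latency = 60000 / 7200 / 2  # ms, half a rotation at 7200 RPM
transfer_time = 0.5                    # ms, assumed for the requested block

# Disk Access Time = Seek Time + Rotational Latency + Transfer Time
disk_access_time = seek_time + rotational_latency + transfer_time
print(round(disk_access_time, 2))  # about 9.67 ms on these assumptions
```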
1) FCFS: FCFS is the simplest of all the Disk Scheduling Algorithms. In
FCFS, the requests are addressed in the order they arrive in the disk
queue. Let us understand this with the help of an example.
Example:
Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is : 50

So, total seek time:
=(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16)
=642
Advantages:
• Every request gets a fair chance
• No indefinite postponement
Disadvantages:
• Does not try to optimize seek time
• May not provide the best possible service
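The FCFS computation in the example above can be sketched in a few lines of Python (the function name is illustrative):

```python
def fcfs_seek_time(requests, head):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)  # move the arm to the next request
        head = r
    return total

print(fcfs_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # 642
```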
• SSTF: In SSTF (Shortest Seek Time First), the request with the shortest
seek time is executed first. The seek time of every request is
calculated in advance, and requests are scheduled according to
their calculated seek times. As a result, the request nearest the
disk arm gets executed first. SSTF is certainly an improvement
over FCFS, as it decreases the average response time and
increases the throughput of the system. Let us understand this
with the help of an example.
Example:
• Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is : 50

So, total seek time: =(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170)=208


Advantages:
•Average Response Time decreases
•Throughput increases
Disadvantages:
•Overhead to calculate seek time in advance
•Can cause Starvation for a request if it has higher seek time as compared to incoming
requests
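The SSTF computation for the same example can be sketched as a greedy loop that always picks the nearest pending request (the function name is illustrative):

```python
def sstf_seek_time(requests, head):
    """Total head movement when the nearest pending request goes first."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)   # move the arm to the nearest request
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # 208
```

The per-step `min` over the pending queue is the "overhead to calculate seek time in advance" listed under disadvantages.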
• SCAN: In the SCAN algorithm the disk arm moves in a particular
direction and services the requests coming in its path; after
reaching the end of the disk, it reverses direction and again
services the requests arriving in its path. Because this algorithm
works like an elevator, it is also known as the elevator algorithm.
As a result, requests at the midrange are serviced more often, and
those arriving just behind the disk arm have to wait.
Example:
• Suppose the requests to be addressed are-82,170,43,140,24,16,190.
And the Read/Write arm is at 50, and it is also given that the disk arm
should move “towards the larger value”.

Therefore, assuming the last cylinder of the disk is 199, the seek time is calculated as:
=(199-50)+(199-16)
=332

Advantages:
•High throughput
•Low variance of response time
•Average response time
Disadvantages:
•Long waiting time for requests for locations just visited by disk arm
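A sketch of SCAN's seek-time computation for this example, assuming (as the arithmetic above does) that the last cylinder is 199:

```python
def scan_seek_time(requests, head, disk_end=199):
    """Total head movement for SCAN, sweeping toward larger values first."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up:                 # service requests on the way up
        total += r - pos
        pos = r
    if down:                     # travel to the end of the disk, reverse
        total += disk_end - pos
        pos = disk_end
        for r in down:           # service the remaining requests going down
            total += pos - r
            pos = r
    return total

print(scan_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # 332
```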
4.CSCAN: In SCAN algorithm, the disk arm again scans the path that
has been scanned, after reversing its direction. So, it may be possible
that too many requests are waiting at the other end or there may be
zero or few requests pending at the scanned area.
These situations are avoided in CSCAN algorithm in which the disk arm
instead of reversing its direction goes to the other end of the disk and
starts servicing the requests from there. So, the disk arm moves in a
circular fashion and this algorithm is also similar to SCAN algorithm
and hence it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190.
And the Read/Write arm is at 50, and it is also given that the disk arm
should move “towards the larger value”.
Seek time is calculated as:
=(199-50)+(199-0)+(43-0)=391
Advantages:
•Provides more uniform wait time compared to SCAN
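C-SCAN's version of the same computation can be sketched as follows; like the arithmetic above, it counts the jump from the last cylinder (assumed to be 199) back to cylinder 0 as a full-stroke movement:

```python
def cscan_seek_time(requests, head, disk_end=199):
    """Total head movement for C-SCAN, sweeping toward larger values."""
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    total, pos = 0, head
    for r in up:                  # service requests on the way up
        total += r - pos
        pos = r
    if low:
        total += disk_end - pos   # finish the sweep to the end...
        total += disk_end         # ...then jump back to cylinder 0
        pos = 0
        for r in low:             # resume servicing from the start
            total += r - pos
            pos = r
    return total

print(cscan_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # 391
```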
• LOOK: It is similar to the SCAN disk scheduling algorithm except that the disk arm,
instead of going to the end of the disk, goes only to the last request to be serviced
in front of the head and then reverses its direction from there. Thus it prevents the
extra delay caused by unnecessary traversal to the end of the disk.
Example:
• Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the
Read/Write arm is at 50, and it is also given that the disk arm should move “towards
the larger value”.

So, the seek time is calculated as: =(190-50)+(190-16) = 314
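LOOK's seek-time computation for this example can be sketched as SCAN without the trip to the physical end of the disk (the function name is illustrative):

```python
def look_seek_time(requests, head):
    """Total head movement for LOOK, sweeping toward larger values first."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    for r in up:                 # only as far as the last pending request
        total += r - pos
        pos = r
    for r in down:               # reverse without touching the disk's end
        total += pos - r
        pos = r
    return total

print(look_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # 314
```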


• CLOOK: As LOOK is similar to the SCAN algorithm, in a similar way CLOOK is similar
to the CSCAN disk scheduling algorithm. In CLOOK, the disk arm, instead of going to
the end of the disk, goes only to the last request to be serviced in front of the head
and then jumps to the other end's last request. Thus, it also prevents the extra
delay caused by unnecessary traversal to the end of the disk.
Example:
• Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the
Read/Write arm is at 50, and it is also given that the disk arm should move
“towards the larger value”

So, the seek time is calculated as:
=(190-50)+(190-16)+(43-16)
=341
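Finally, C-LOOK's computation for this example can be sketched by replacing C-SCAN's full-stroke jump with a jump directly to the lowest pending request (the function name is illustrative):

```python
def clook_seek_time(requests, head):
    """Total head movement for C-LOOK, sweeping toward larger values."""
    up = sorted(r for r in requests if r >= head)
    low = sorted(r for r in requests if r < head)
    total, pos = 0, head
    for r in up:                  # only as far as the last pending request
        total += r - pos
        pos = r
    if low:
        total += pos - low[0]     # jump to the lowest pending request
        pos = low[0]
        for r in low[1:]:         # then sweep upward again
            total += r - pos
            pos = r
    return total

print(clook_seek_time([82, 170, 43, 140, 24, 16, 190], 50))  # 341
```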
