Unit-7 I/O Management and Scheduling
GTU # 3140702
Unit-7
I/O Management &
Disk Scheduling
I/O devices
❖ External devices that engage in I/O with computer systems can be grouped into three categories:
1. Human readable, suitable for communicating with the computer user.
Examples: printers, terminals, video display, keyboard, mouse
2. Machine readable, suitable for communicating with electronic equipment.
Examples: disk drives, USB keys, sensors, controllers
3. Communication, suitable for communicating with remote devices.
Examples: modems, digital line drivers
❖ Devices differ in a number of areas:
1. Data Rate: there may be differences of several orders of magnitude between the data transfer rates
2. Application: the use to which a device is put has an influence on the software
3. Complexity of Control: the effect on the operating system is filtered by the complexity of the I/O module
that controls the device
4. Unit of Transfer: data may be transferred as a stream of bytes or characters or in larger blocks
5. Data Representation: different data encoding schemes are used by different devices
6. Error Conditions: the nature of errors, the way in which they are reported, their consequences, and the
available range of responses differs from one device to another
Organization of I/O functions
❖ Three techniques for performing I/O are:
1. Programmed I/O, the processor issues an I/O command on behalf of a process to an I/O module; that
process then busy waits for the operation to be completed before proceeding.
2. Interrupt-driven I/O, the processor issues an I/O command on behalf of a process
▪ if non-blocking – processor continues to execute instructions from the process that issued the I/O command
▪ if blocking – the next instruction the processor executes is from the OS, which will put the current process in a
blocked state and schedule another process
3. Direct Memory Access (DMA), a DMA module controls the exchange of data between main memory and an
I/O module
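Technique 1 (programmed I/O) can be illustrated with a toy Python model; the device and its status register are simulated here (illustrative names, not real hardware), to show that the CPU does nothing but poll until the device is ready:

```python
class FakeDevice:
    """Toy device model: becomes ready after a few status polls
    (an assumption for illustration, not real hardware)."""
    def __init__(self, polls_until_ready=3):
        self._polls_left = polls_until_ready

    def status_ready(self):
        self._polls_left -= 1
        return self._polls_left <= 0

    def read_word(self):
        return 0x42  # pretend data word

def programmed_io_read(device):
    """Programmed I/O: the CPU busy-waits on the status register,
    doing no other work until the device signals ready."""
    polls = 0
    while not device.status_ready():   # busy-wait loop
        polls += 1
    return device.read_word(), polls

word, polls = programmed_io_read(FakeDevice())
```

Interrupt-driven I/O and DMA exist precisely to reclaim the CPU cycles this loop burns.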
Evolution of I/O functions
• The processor directly controls a peripheral device
• The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O
• The I/O module has a local memory of its own and is, in fact, a computer in its own right
Direct Memory Access
❖ DMA is a feature of computer systems that allows certain hardware subsystems to access main memory (RAM) independently of the central processing unit (CPU).
❖ Without DMA, when the CPU is using programmed input/output, it is typically fully occupied for the entire duration of the read or write operation, and is thus unavailable to perform other work.
❖ With DMA, the CPU first initiates the transfer, then performs other operations while the transfer is in progress, and finally receives an interrupt from the DMA controller when the operation is done.
❖ This feature is useful whenever the CPU needs to perform useful work while waiting for a relatively slow I/O data transfer.
❖ Many hardware systems, such as disk drive controllers, graphics cards, network cards, and sound cards, use DMA.
Direct Memory Access
❖ DMA is particularly useful on devices like disks, where many bytes of information can be transferred in a single I/O operation. The CPU is notified only after the entire block of data has been transferred.
Disk read-write without DMA
[Figure: CPU, disk controller (with drive and buffer), and main memory connected by a bus]
❖ The disk controller reads the block from the drive serially, bit by bit, until the entire block is in the controller's buffer.
❖ Next, it computes the checksum to verify that no read errors have occurred.
❖ Then the controller causes an interrupt, so that the OS can read the block from the controller's buffer (a byte or a word at a time) by executing a loop.
❖ Step 1: First the CPU programs the DMA controller by setting its registers so it knows what to transfer where.
❖ It also issues a command to the disk controller telling it to read data from the disk into its internal buffer and verify the checksum.
❖ When valid data are in the disk controller's buffer, DMA can begin.
Disk read-write using DMA
[Figure: CPU, DMA controller, disk controller (with drive and buffer), and main memory on a shared bus. 1. CPU programs the DMA controller; 2. DMA requests transfer to memory; 3. data transferred; 4. ACK]
❖ Step 2: The DMA controller initiates the transfer by issuing a read request over the bus to the disk controller.
❖ This read request looks like any other read request, and the disk controller does not know (or care) whether it came from the CPU or from a DMA controller.
❖ Typically, the memory address to write to is on the bus' address lines, so when the disk controller fetches the next word from its internal buffer, it knows where to write it.
❖ Step 3: The write to memory is another standard bus cycle.
❖ Step 4: When the write is complete, the disk controller sends an acknowledgement signal to the DMA controller, also over the bus.
❖ The DMA controller then increments the memory address to use and decrements the byte count.
❖ If the byte count is still greater than 0, steps 2 to 4 are repeated until it reaches 0.
❖ At that time, the DMA controller interrupts the CPU to let it know that the transfer is now complete.
❖ When the OS starts up, it does not have to copy the disk block to memory; it is already there.
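The controller's steps 2-4 loop can be sketched as a Python model; the function names and the one-byte-per-cycle granularity are illustrative, not real hardware behavior:

```python
def dma_transfer(disk_buffer, memory, start_addr, byte_count):
    """Model of the DMA controller's steps 2-4: for each byte, request
    data from the disk controller's buffer, write it to memory at the
    current address, then (after the ACK) increment the address and
    decrement the remaining byte count until it reaches 0."""
    addr = start_addr
    remaining = byte_count
    src = 0
    while remaining > 0:
        byte = disk_buffer[src]        # step 2: read from controller's buffer
        memory[addr] = byte            # step 3: standard bus write to memory
        addr += 1                      # step 4: after ACK, bump the address
        remaining -= 1                 #         and decrement the byte count
        src += 1
    return addr  # at this point the controller would interrupt the CPU

memory = [0] * 16
dma_transfer([10, 20, 30, 40], memory, start_addr=4, byte_count=4)
# memory[4:8] now holds the disk block; the CPU never copied it
```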
Operating System Design issues
❖ Design objectives: Two objectives are paramount in designing the I/O facility.
1. Efficiency
▪ Efficiency is important because I/O operations often form a bottleneck in a computing system.
▪ Most I/O devices are extremely slow compared with main memory and the processor.
2. Generality
▪ It is desirable to handle all devices in a uniform manner.
▪ Applies to the way processes view I/O devices and the way the operating system manages I/O devices and
operations.
▪ Because of the diversity of device characteristics, it is difficult in practice to achieve true generality.
▪ What can be done is to use a hierarchical, modular approach to the design of the I/O function.
▪ This approach hides most of the details of device I/O in lower-level routines so that user processes and upper levels
of the OS see devices in terms of general functions, such as read, write, open, close, lock, and unlock.
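As a sketch of that idea, a hypothetical uniform driver interface might expose only the general functions, with each concrete driver hiding its device-specific details behind them (all names here are illustrative, not from any real OS):

```python
from abc import ABC, abstractmethod

class Device(ABC):
    """Hypothetical uniform device interface: upper layers see every
    device through the same general functions (open, read, write,
    close), while lower-level routines hide the device details."""
    @abstractmethod
    def open(self): ...
    @abstractmethod
    def read(self, n): ...
    @abstractmethod
    def write(self, data): ...
    @abstractmethod
    def close(self): ...

class NullDevice(Device):
    """Trivial example driver: discards writes, reads nothing."""
    def open(self):
        self.is_open = True
    def read(self, n):
        return b""
    def write(self, data):
        return len(data)
    def close(self):
        self.is_open = False

d = NullDevice()
d.open()
n = d.write(b"abc")
data = d.read(4)
d.close()
```

A user process written against `Device` works unchanged whether the driver underneath talks to a disk, a terminal, or nothing at all.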
I/O Buffering
❖ Buffering is a technique by which the device manager can keep slower I/O devices busy during times when a process does not require I/O operations.
❖ It improves the throughput of input and output operations.
❖ It is implemented directly in hardware and in the corresponding drivers.
❖ INPUT BUFFERING is the technique of having the input device read information into primary memory before the process requests it.
❖ OUTPUT BUFFERING is the technique of saving information in memory and then writing it to the device while the process continues execution.
I/O Buffering
❖ Performing input transfers in advance of requests being made, and performing output transfers some time after the request is made, is called buffering.
❖ Types of I/O devices:
⮩ Block oriented: A block-oriented device stores information in blocks that are usually of fixed size, and
transfers are made one block at a time.
Generally, it is possible to reference data by its block number.
Hard disks, floppy disks and optical drives such as DVD-ROM and CD-ROM are examples of block oriented
devices.
⮩ Stream oriented: A stream-oriented device transfers data in and out as a stream of bytes, with no block
structure.
Terminals, printers, communications ports, keyboard, mouse and other pointing devices, and most other
devices that are not secondary storage are stream oriented.
I/O buffering
❖ No buffer: the operating system directly accesses the I/O device when it needs data.
[Figure: I/O device transfers data (IN) straight to the user process]
❖ Double buffer: the operating system uses two system buffers instead of one, also known as buffer swapping. A process can transfer data to or from one buffer while the operating system empties or fills the other buffer.
[Figure: device fills one OS buffer (IN) while the other buffer is MOVEd to the user process]
❖ Circular buffer: the operating system uses two or more buffers. Each individual buffer is one unit in a circular buffer. Used when the I/O operation must keep up with the process.
[Figure: device fills successive units (IN) of a ring of buffers while earlier units are MOVEd to the user process]
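The double-buffer scheme above can be sketched in Python. This is a sequential model in which the fill (IN) and the consume (MOVE) alternate; in a real system the two overlap in time, which is the whole point of having two buffers:

```python
def double_buffered_copy(blocks, process):
    """Model of double buffering: the OS fills one buffer from the
    device while the process consumes the other, then the two buffers
    are swapped. `blocks` stands in for the device's data stream."""
    buf_a, buf_b = None, None
    out = []
    for block in blocks:
        buf_a = block                   # IN: device fills buffer A
        if buf_b is not None:
            out.append(process(buf_b))  # MOVE: process consumes buffer B
        buf_a, buf_b = buf_b, buf_a     # swap the two buffers
    if buf_b is not None:
        out.append(process(buf_b))      # drain the final buffer
    return out

result = double_buffered_copy(["b1", "b2", "b3"], process=str.upper)
# → ["B1", "B2", "B3"]
```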
Disk Arm Scheduling Algorithm
Disk scheduling is done by operating systems to schedule I/O requests arriving for the disk. Disk scheduling is also known as I/O scheduling.
Disk scheduling is important because:
● Multiple I/O requests may arrive from different processes, and only one I/O request can be served at a time by the disk controller; other I/O requests therefore need to wait in a queue and be scheduled.
● Two or more requests may be far apart on the disk, resulting in greater disk arm movement.
● Hard drives are among the slowest parts of the computer system and thus need to be accessed in an efficient manner.
● The main goal is to minimize the seek time.
Definitions (Internal structure of HDD)
❖ When the disk drive is operating, the disk is rotating at constant speed.
❖ To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track.
❖ Track selection involves moving the head in a movable-head system, or electronically selecting one head in a fixed-head system.
❖ On a movable-head system, the time required to move the disk arm to the required track is known as the SEEK TIME.
❖ The delay waiting for the rotation of the disk to bring the required disk sector under the read-write head is called the ROTATIONAL DELAY.
❖ The sum of the seek time and the rotational delay equals the ACCESS TIME.
[Figure: internal structure of an HDD showing platters, tracks, sectors, and the movable arm]
DISK PERFORMANCE PARAMETERS
❖ TRANSFER TIME: the time taken to transfer the data from the disk. It depends on the rotational speed of the disk; the faster a disk rotates, the faster data can be read.
Transfer time = no. of bytes transferred / (no. of bytes on track × rotation speed)
❖ DISK CAPACITY: the maximum amount of data a disk or drive is capable of holding. Disk capacity is expressed in MB (megabytes), GB (gigabytes), or TB (terabytes).
Disk capacity = no. of cylinders × no. of heads × no. of sectors per track × no. of bytes per sector
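Both formulas can be written as small helpers. The drive geometry and the 7200 RPM speed below are illustrative numbers chosen for the example, not figures from the slides:

```python
def transfer_time(bytes_transferred, bytes_per_track, rotations_per_sec):
    """Transfer time = bytes transferred / (bytes on track * rotation speed)."""
    return bytes_transferred / (bytes_per_track * rotations_per_sec)

def disk_capacity(cylinders, heads, sectors_per_track, bytes_per_sector):
    """Capacity = cylinders * heads * sectors/track * bytes/sector."""
    return cylinders * heads * sectors_per_track * bytes_per_sector

# Assumed geometry: 1024 cylinders, 8 heads, 64 sectors/track, 512 B sectors
cap = disk_capacity(1024, 8, 64, 512)          # → 268,435,456 B (256 MB)

# Assumed speed: 7200 RPM = 120 rotations/s, one 64-sector track per rotation
t = transfer_time(bytes_transferred=4096,
                  bytes_per_track=64 * 512,
                  rotations_per_sec=120)        # time to move 4 KB, in seconds
```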
Disk Arm Scheduling Algorithm
❖ Following are different types of Disk Arm Scheduling Algorithms:
1. FCFS (First Come First Serve) / FIFO (First In First Out)
2. SSTF (Shortest Seek Time First)
3. SCAN
4. LOOK
5. C-SCAN (Circular SCAN)
6. C-LOOK (Circular LOOK)
FCFS (First Come First Serve) / FIFO (First In First Out)
❖ Here requests are served in the order of their arrival.
[Graph: head movement across cylinders 0-50 for the request queue 1, 36, 16, 34, 9, 12, with the head starting at cylinder 11]
Advantages:
● Every request gets a fair chance
● No indefinite postponement
Disadvantages:
● Does not try to optimize seek time
● May not provide the best possible service
❖ Disk movement will be 11, 12, 16, 34, 36, 50, 9 and 1.
❖ Total cylinder movement: (12-11) + (16-12) + (34-16) + (36-34) + (50-36) + (50-9) + (9-1) = 88
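The slide's total can be checked with a short helper that sums the absolute head movements over a given service order (a generic seek-distance calculator, usable for any of the algorithms in this unit):

```python
def total_seek_distance(head, service_order):
    """Sum of absolute cylinder-to-cylinder movements as the arm
    visits each request in the given service order."""
    total, pos = 0, head
    for cyl in service_order:
        total += abs(cyl - pos)
        pos = cyl
    return total

# Service order traced on the slide, with the head starting at cylinder 11
total = total_seek_distance(11, [12, 16, 34, 36, 50, 9, 1])
# 1 + 4 + 18 + 2 + 14 + 41 + 8 = 88
```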
SCAN
❖ In SCAN (the elevator algorithm), the disk arm moves in one direction, servicing requests along the way until it reaches the end of the disk; it then reverses direction and services the remaining requests on the way back.
Example: Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190), and the current position of the read/write head is 50.
Advantages:
● High throughput
● Low variance of response time
● Average response time
Disadvantages:
● Long waiting time for requests for locations just visited by the disk arm
So, total seek time:
= (199-50) + (199-16)
= 332
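The example's figure of 332 can be reproduced with a small model of SCAN. This sketch assumes, as the slide does, that the arm first moves upward and travels all the way to the last cylinder (199) before reversing:

```python
def scan_seek(head, requests, disk_end=199):
    """SCAN moving upward first: service all requests at or above the
    head in increasing order, travel on to the end of the disk, then
    reverse and service the remaining requests in decreasing order.
    Returns (total seek distance, service order)."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    total, pos = 0, head
    # include the physical end of the disk in the arm's path
    for target in up + [disk_end] + down:
        total += abs(target - pos)
        pos = target
    return total, up + down

total, order = scan_seek(50, [82, 170, 43, 140, 24, 16, 190])
# total → 332, matching (199-50) + (199-16) on the slide
```

LOOK is the variant that reverses at the last pending request instead of at `disk_end`; dropping the `[disk_end]` stop models it.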
C-SCAN
❖ From the current position, the disk arm starts moving in the up direction towards the end of the disk, servicing requests until it reaches the end.
❖ At the end, the arm direction is reversed (down), the arm goes directly to the other end, and it then continues servicing requests while moving in the upward direction again.
C-SCAN
Example: Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190), and the current position of the read/write head is 50.
Advantages:
● Provides more uniform wait time
compared to SCAN