Unit-6 Device Management
Principles of I/O hardware
• Conceptually, a simple personal computer can be abstracted to a model resembling that
of Fig. 6-1.
• The CPU, memory, and I/O devices are all connected by a system bus and communicate
with one another over it.
• Modern personal computers have a more complicated structure, involving multiple
buses, which we will look at later. For the time being, this model will be sufficient.
• In the following sections, we will briefly review these components and examine some of
the hardware issues that are of concern to operating system designers. Needless to say,
this will be a very compact summary. Many books have been written on the subject of
computer hardware and computer organization. Two well-known ones are by Tanenbaum
and Austin (2012) and Patterson and Hennessy (2013).
Figure 6-1. Some of the components (memory, controllers, and I/O devices) of a simple personal computer.
I/O Devices
• I/O devices can be roughly divided into two categories:
• block devices and
• character devices.
• A block device is one that stores information in fixed-size blocks, each one
with its own address. Common block sizes range from 512 to 65,536 bytes.
• Each block can be read or written independently of all the other ones.
• Hard disks, Blu-ray discs, and USB sticks are common block devices.
• A character device delivers or accepts a stream of characters, without regard
to any block structure.
• It is not addressable and does not have any seek operation.
• Printers, network interfaces, mice (for pointing), rats (for psychology lab
experiments), and most other devices that are not disk-like can be seen as
character devices.
Figure 6-2. Some typical device, network, and bus data rates.
Device Controllers
Memory-Mapped I/O
• Memory-mapped I/O uses the same address space to address
both memory and I/O devices.
• The memory and registers of the I/O devices are mapped to (associated with)
address values. So when an address is accessed by the CPU, it may refer to a
portion of physical RAM, or it can instead refer to memory of the I/O device.
• Each controller has a few registers that are used for communicating with the
CPU. By writing into these registers, the operating system can command the
device to deliver data, accept data, switch itself on or off, or otherwise perform
some action.
• By reading from these registers, the operating system can learn what the
device’s state is, whether it is prepared to accept a new command, and so on.
• In addition to the control registers, many devices have a data buffer that
the operating system can read and write. For example, a common way for
computers to display pixels on the screen is to have a video RAM, which
is basically just a data buffer, available for programs or the operating
system to write into.
• The issue thus arises of how the CPU communicates with the control
registers and with the device data buffers. Two alternatives exist. In the first
approach, each control register is assigned an I/O port number and is accessed with
special I/O instructions, giving a separate I/O and memory address space. In the
second approach, the control registers are mapped into the memory space, which is
memory-mapped I/O; a hybrid of the two is also possible. The three options are shown
in Fig. 6-3 (a small sketch of memory-mapped register access follows the figures below).
Figure 6-3. (a) Separate I/O and memory space. (b) Memory-mapped I/O. (c) Hybrid.
Figure 6-4. (a) A single-bus architecture. (b) A dual-bus memory architecture.
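To make this concrete, here is a minimal C sketch of talking to a memory-mapped controller: ordinary loads and stores reach the device registers, so no special I/O instructions are needed. The register addresses, the READY bit, and the function names are invented for illustration, not taken from any real device.

#include <stdint.h>

/* Hypothetical register addresses of a memory-mapped device controller. */
#define DEV_STATUS ((volatile uint32_t *)0x1000A000u)
#define DEV_DATA   ((volatile uint32_t *)0x1000A004u)
#define DEV_READY  0x1u               /* assumed "ready to accept data" bit */

/* Reading a control register is an ordinary load over the bus... */
int device_is_ready(void)
{
    return (*DEV_STATUS & DEV_READY) != 0;
}

/* ...and commanding the device is an ordinary store. */
void device_send(uint32_t value)
{
    while (!device_is_ready())
        ;                             /* wait until the controller can take it */
    *DEV_DATA = value;
}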
Direct Memory Access
• The CPU can request data from an I/O controller one byte at a time, but doing
so wastes the CPU's time, so a different scheme, called DMA (Direct
Memory Access), is often used.
Interrupts Revisited
Figure 6-6. How an interrupt happens. The connections between the devices and the interrupt
controller actually use interrupt lines on the bus rather than dedicated wires.
• An interrupt that leaves the machine in a well-defined state is called a
precise interrupt (Walker and Cragon, 1995).
Figure 6-7. (a) A precise interrupt. (b) An imprecise interrupt.
• Fig. 6-7(a) illustrates a precise interrupt. All instructions up to the program
counter (316) have completed and none of those beyond it have started (or
have been rolled back to undo their effects).
• An interrupt that does not meet these requirements is called an imprecise
interrupt and makes life most unpleasant for the operating system writer,
who now has to figure out what has happened and what still has to happen.
• Fig. 6-7(b) illustrates an imprecise interrupt, where different instructions
near the program counter are in different stages of completion, with older
ones not necessarily more complete than younger ones.
• Machines with imprecise interrupts usually vomit a large amount of
internal state onto the stack to give the operating system the possibility of
figuring out what was going on.
Principles of I/O Software
Goals of the I/O Software
• A key concept in the design of I/O software is known as device
independence.
• Closely related to device independence is the goal of uniform naming: the name of a file or
a device should simply be a string or an integer and not depend on the
device in any way.
• Other issues for I/O software:
• Error handling.
• Synchronous (blocking) vs. asynchronous (interrupt-driven) transfers.
• Buffering.
• Sharable vs. dedicated devices: some I/O devices, such as disks, can be used
by many users at the same time. No problems are caused by multiple users
having open files on the same disk at the same time. Other devices, such as
printers, have to be dedicated to a single user until that user is finished.
Handling I/O: fundamentally, I/O can be performed in one of the following three
ways:
Programmed I/O
• Programmed I/O is a method of data transfer between the CPU and a peripheral
device, such as a network adapter or a Parallel ATA (PATA) storage device, in
which the CPU itself issues the I/O operations and busy-waits (polls) the device
until each transfer completes (see the sketch below).
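A minimal C sketch of programmed I/O along these lines, printing a string by polling; the printer register addresses, the PRINTER_READY bit, and the function name are hypothetical placeholders:

#include <stddef.h>
#include <stdint.h>

#define PRINTER_STATUS ((volatile uint8_t *)0x20000000u)   /* hypothetical status register */
#define PRINTER_DATA   ((volatile uint8_t *)0x20000001u)   /* hypothetical data register */
#define PRINTER_READY  0x80u                               /* assumed "ready" bit */

/* Print a string with programmed I/O: the CPU itself moves every byte. */
void print_string_programmed(const char *p, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        while ((*PRINTER_STATUS & PRINTER_READY) == 0)
            ;                                /* busy-wait (poll) until the printer is ready */
        *PRINTER_DATA = (uint8_t)p[i];       /* hand over one character at a time */
    }
}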
Interrupt-Driven I/O
Figure 6-10. Writing a string to the printer using interrupt-driven I/O. (a) Code executed at
the time the print system call is made. (b) Interrupt service procedure for the printer.
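A rough C sketch of the two halves described in Fig. 6-10; the register address, the kernel helper routines, and the variable names are hypothetical placeholders rather than a real kernel API:

#include <stdint.h>

#define PRINTER_DATA ((volatile uint8_t *)0x20000001u)   /* hypothetical data register */

/* Hypothetical kernel helpers assumed to exist for this sketch. */
void copy_in_string(char *dst, const char *src, int n);
void enable_printer_interrupts(void);
void acknowledge_interrupt(void);
void unblock_user(void);
void scheduler(void);

static char p[1024];     /* kernel copy of the string being printed */
static int count, i;     /* characters remaining / index of the next character */

/* (a) Executed when the print system call is made. */
void print_string_intr(const char *user_buf, int n)
{
    copy_in_string(p, user_buf, n);      /* bring the string into kernel memory */
    count = n;
    i = 1;
    enable_printer_interrupts();
    *PRINTER_DATA = (uint8_t)p[0];       /* start the transfer with the first character */
    scheduler();                         /* block the caller and run another process */
}

/* (b) Interrupt service procedure: one interrupt per character printed. */
void printer_isr(void)
{
    if (--count == 0)
        unblock_user();                  /* whole string printed: wake the caller */
    else
        *PRINTER_DATA = (uint8_t)p[i++]; /* output the next character */
    acknowledge_interrupt();
}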
I/O using DMA
• A disadvantage of interrupt-driven I/O is that an interrupt occurs on every
character. Interrupts take time, so this scheme wastes a certain amount of
CPU time. A solution is to use DMA.
• DMA is essentially programmed I/O, only with the DMA controller doing all the
work instead of the main CPU. This strategy requires special hardware
(the DMA controller) but frees up the CPU during the I/O to do other
work (a sketch follows Fig. 6-11 below).
Figure 6-11. Printing a string using DMA. (a) Code executed when the print system call is made. (b)
Interrupt-service procedure.
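A rough C sketch mirroring the structure of Fig. 6-11; the DMA controller registers and the helper routines are hypothetical placeholders:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical DMA controller registers. */
#define DMA_SRC_ADDR ((volatile uint32_t *)0x30000000u)
#define DMA_COUNT    ((volatile uint32_t *)0x30000004u)
#define DMA_CONTROL  ((volatile uint32_t *)0x30000008u)
#define DMA_START    0x1u

/* Hypothetical kernel helpers assumed to exist for this sketch. */
void acknowledge_interrupt(void);
void unblock_user(void);
void scheduler(void);

static char kbuf[1024];                  /* kernel buffer the DMA controller reads from */

/* (a) Executed when the print system call is made. */
void print_string_dma(const char *s, size_t n)
{
    for (size_t j = 0; j < n && j < sizeof kbuf; j++)
        kbuf[j] = s[j];                            /* copy the string into kernel memory */
    *DMA_SRC_ADDR = (uint32_t)(uintptr_t)kbuf;     /* tell the controller where the data is */
    *DMA_COUNT    = (uint32_t)n;                   /* ...and how many bytes to move */
    *DMA_CONTROL  = DMA_START;                     /* the controller now feeds the printer */
    scheduler();                                   /* the CPU is free for other work meanwhile */
}

/* (b) Interrupt service procedure: a single interrupt for the whole string. */
void dma_done_isr(void)
{
    acknowledge_interrupt();
    unblock_user();
}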
I/O Software layers
• I/O software is typically organized in four layers, as shown in Fig. 6-15.
Each layer has a well-defined function to perform and a well-defined
interface to the adjacent layers.
Figure 6-13. Logical positioning of device drivers. In reality all communication between drivers
and device controllers goes over the bus.
Device-Independent I/O Software
User-Space I/O Software
Figure 6-15. Layers of the I/O system and the main functions of each layer.
Disks
Disk Hardware
• Disks come in a variety of types. The most common ones are the magnetic
hard disks. They are characterized by the fact that reads and writes are
equally fast, which makes them suitable as secondary memory (paging, file
systems, etc.).
• For distribution of programs, data, and movies, optical disks (DVDs and Blu-
ray) are also important.
Disk Formatting
Before a disk can be used, it must be formatted:
• Low-level format: disk sector layout, cylinder skew, and interleaving
• High-level format: boot block, free-block list, root directory, empty file system
A typical sector is 512 bytes and is laid out as follows (sketched in code after this list):
• Preamble, identifying the start code and the sector address
• Data
• Error-correcting code (16 bits); at minimum, errors can be detected with
probability close to 1
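A rough C sketch of the sector layout just listed; the preamble length is illustrative, and real controllers differ in detail:

#include <stdint.h>

/* One low-level formatted sector, as described above. */
struct disk_sector {
    uint8_t  preamble[16];   /* start pattern plus the cylinder/head/sector address */
    uint8_t  data[512];      /* the 512-byte payload seen by the operating system */
    uint16_t ecc;            /* 16-bit error-correcting code, as stated above */
};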
Figure 5-23. (a) No interleaving. (b) Single interleaving. (c) Double interleaving.
Disk Access
• Physical address on a disk
• Sectors can be given logical numbers that are decoded by the disk controller (see the sketch after this list)
• Design considerations
• Seek time (moving the head to the correct cylinder) and rotational latency
(waiting for the correct sector) are greater than the data transfer time (time to
read)
• Caching blocks that happen to pass under the disk head is commonly used
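As an illustration of how logical sector numbers can be decoded, here is a minimal sketch of the conventional LBA-to-CHS conversion; the function and parameter names are invented for the example:

/* Decode a logical block address into cylinder/head/sector coordinates. */
typedef struct { int cylinder, head, sector; } chs_t;

chs_t lba_to_chs(int lba, int heads, int sectors_per_track)
{
    chs_t a;
    a.cylinder = lba / (heads * sectors_per_track);   /* which cylinder */
    a.head     = (lba / sectors_per_track) % heads;   /* which surface within it */
    a.sector   = (lba % sectors_per_track) + 1;       /* sectors are conventionally numbered from 1 */
    return a;
}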
Q. A disk has 8 sectors per track and spins at 600 rpm. The controller needs 10 ms
from the end of one I/O operation before it can issue the next one. How long does it
take to read all 8 sectors using each of the following interleaving schemes?
a) No interleaving
b) Single interleaving
c) Double interleaving
Solution:
At 600 rpm the disk makes 600 revolutions per minute, i.e., 10 revolutions per second,
so one revolution takes 100 ms (and one sector passes under the head in 100/8 = 12.5 ms).
a) No interleaving: after reading a sector (12.5 ms) the controller needs another 10 ms, by
which time the start of the next sector has already passed, so only 1 sector can be read per revolution.
Time to read all sectors = 8 × 100 ms = 800 ms
b) Single interleaving: consecutive logical sectors are two slots apart, so 4 sectors can be read per revolution.
Time to read all sectors = (8/4) × 100 ms = 200 ms
c) Double interleaving: consecutive logical sectors are three slots apart, so 3 sectors can be read per revolution.
Time to read all sectors = (8/3) × 100 ms ≈ 266.67 ms
RAID
RAID level 0 : Striping
• RAID Level 0 is a block-level striped, non-redundant disk array
• Files are striped across the disks, with no redundant
information (block placement is sketched after this list)
• Best I/O performance achieved when data is
striped across multiple controllers with only
one drive per controller.
• High read throughput but no fault-tolerance
• Best write throughput (no redundant info to
write)
• Any disk failure results in data loss;
sometimes a file, sometimes the entire
volume
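A minimal sketch of RAID 0 block placement, assuming simple round-robin striping of fixed-size blocks across N disks; the names are illustrative:

/* Map a logical block number to (disk, stripe index) under RAID 0 striping. */
typedef struct { int disk; int stripe; } raid0_loc;

raid0_loc raid0_map(int logical_block, int ndisks)
{
    raid0_loc loc;
    loc.disk   = logical_block % ndisks;   /* blocks rotate across the disks */
    loc.stripe = logical_block / ndisks;   /* offset within the chosen disk */
    return loc;
}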
RAID level 1 : Mirroring
• RAID Level 1 is mirrored disks
with no parity
• Files are striped across (half) the
disks
• Data is written to multiple (two)
places – data disks and mirror disks
• Best fault-tolerance
• On failure, just use the surviving
disk(s)
• Factor of N (2x) space expansion
RAID 5: striping with parity
• The parity information is distributed over
all the disks instead of being stored on a
dedicated parity disk.
• Parity for the blocks in the same stripe is
generated on writes, recorded in a
distributed location, and checked on reads
(a parity sketch follows this list).
• The parity disk is no longer a bottleneck,
since the parity load evens out across all
the disks.
• Losing one disk cannot destroy all the
redundancy, since no single disk stores all
the parity information.
• Can only handle up to a single disk
failure
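A minimal sketch of RAID 5-style parity, assuming the parity block of a stripe is the bytewise XOR of its data blocks (which is how any one lost block can be rebuilt); block size, disk count, and function names are illustrative:

#include <stdint.h>

#define BLOCK_SIZE 512     /* bytes per block (illustrative) */
#define NDATA      4       /* data blocks per stripe; the parity block is the 5th */

/* Compute the parity block of one stripe: parity = D0 ^ D1 ^ ... ^ D(n-1). */
void raid5_parity(uint8_t data[NDATA][BLOCK_SIZE], uint8_t parity[BLOCK_SIZE])
{
    for (int i = 0; i < BLOCK_SIZE; i++) {
        uint8_t p = 0;
        for (int d = 0; d < NDATA; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}

/* Rebuild one failed data block: lost = parity ^ (XOR of the surviving data blocks). */
void raid5_rebuild(uint8_t data[NDATA][BLOCK_SIZE], int failed,
                   const uint8_t parity[BLOCK_SIZE], uint8_t out[BLOCK_SIZE])
{
    for (int i = 0; i < BLOCK_SIZE; i++) {
        uint8_t v = parity[i];
        for (int d = 0; d < NDATA; d++)
            if (d != failed)
                v ^= data[d][i];
        out[i] = v;
    }
}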
RAID 6: striping with double parity
• Block-level striping with double distributed parity.
• This increases the fault tolerance to up to two drive failures in the array.
• Each stripe has two parity blocks, which are stored on different disks across the
array.
• RAID 6 is a practical choice for maintaining high-availability systems.
• Large parity overhead
RAID 01 (RAID 0+1)
• RAID level using a mirror
of stripes, achieving both
replication and sharing of
data between disks.
• The usable capacity of a
RAID 01 array is the same
as in a RAID 1 array made
of the same drives, in
which one half of the
drives is used to mirror the
other half.
RAID 10: Combining Mirroring And Striping
• Also called RAID 1+0 and
sometimes RAID 1 & 0
• RAID 10 combines both RAID 1 and RAID 0
by layering them in opposite order.
• In this setup, multiple RAID 1 mirrored
sets are striped together, as in
RAID 0.
• This is a nested or hybrid RAID
configuration.
• It is used in cases where high disk
performance (greater than RAID 5 or 6) along
with redundancy is required.
• Cost per unit of storage is high since data is
mirrored
RAID 50:
• RAID 50 also called RAID 5+0
• combines the straight block-level
striping of RAID 0 with the
distributed parity of RAID 5.
• As a RAID 0 array striped across
RAID 5 elements, a minimal
RAID 50 configuration requires
six drives.
• This takes advantage of the
distributed parity of the RAID 5
level with the extra speed gained
by using the data striping of RAID
0.
RAID 60 (RAID 6+0)
• RAID 60, also called RAID 6+0,
• Combines the straight block-level striping of RAID 0 with the distributed double
parity of RAID 6, resulting in a RAID 0 array striped across RAID 6 elements. It
requires at least eight disks
RAID 100 (RAID 10+0)
• RAID 100, sometimes also called RAID 10+0, is a stripe of RAID 10s.
• This is logically equivalent to a wider RAID 10 array, but is generally implemented
using software RAID 0 over hardware RAID 10. Being "striped two ways", RAID 100
is described as a "plaid RAID"
Disk Arm Scheduling Algorithms
• How long it takes to read or write a disk block is determined by three factors:
1. Seek time (the time to move the arm to the proper cylinder).
2. Rotational delay (how long for the proper sector to appear under the
reading head).
3. Actual data transfer time.
• Error checking is done by controllers
Disk Scheduling Algorithms
• FCFS Algorithm
• SSTF Algorithm
• SCAN Algorithm
• C-SCAN Algorithm
• LOOK Algorithm
• C-LOOK Algorithm
FCFS Scheduling
Q. A disk contains 200 cylinders. The request sequence is 98, 183, 41, 122, 14,
124, 65, 67 and the current position of the R/W head is cylinder 53. Calculate the
total head movement (in cylinders).
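Under FCFS the requests are served strictly in the order given, so one worked sketch of the arithmetic is:
Total head movements = (98 – 53) + (183 – 98) + (183 – 41) + (122 – 41) + (122 – 14) + (124 – 14) + (124 – 65) + (67 – 65)
= 45 + 85 + 142 + 81 + 108 + 110 + 59 + 2
= 632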
Q. Consider a disk queue with requests for I/O to blocks on cylinders 98, 183, 41,
122, 14, 124, 65, 67. The head is initially at cylinder number 53 moving towards
larger cylinder numbers on its servicing pass. The cylinders are numbered from 0 to
199.
Solution (SCAN): the head sweeps toward the larger cylinder numbers, services 65, 67, 98, 122,
124, and 183, continues to the last cylinder (199), and then reverses to service 41 and 14.
Total head movements incurred while servicing these requests = (65 – 53) + (67 – 65) +
(98 – 67) + (122 – 98) + (124 – 122) + (183 – 124) + (199 – 183) + (199 – 41) + (41 – 14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 158 + 27
= 331
C-SCAN Scheduling
• Circular-SCAN Algorithm is an improved version of the SCAN Algorithm.
• The head starts from one end of the disk and moves toward the other end, servicing all the
requests in between.
• After reaching the other end, the head reverses its direction and returns to the starting end
without servicing any requests on the way back. The same process then repeats.
Advantages:
• The waiting time for the cylinders just visited by the head is reduced as compared to the
SCAN Algorithm.
• It provides uniform waiting time.
• It provides better response time.
Disadvantages:
• It causes more seek movements as compared to SCAN Algorithm.
• It causes the head to move to the end of the disk even if there are no requests to be
serviced there.
Q. Suppose that a disk drive has cylinders numbered from 0 to 199. The head is
currently at cylinder 53, moving toward larger cylinder numbers on its
servicing pass. The queue of pending requests is 98, 183, 41, 122, 14, 124, 65 and
67. What is the total head movement under each of the following disk scheduling
algorithms to satisfy the requests?
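For example, under C-SCAN (a worked sketch, assuming the head sweeps up to the last cylinder 199, the jump back to cylinder 0 is counted as head movement, and servicing then continues toward larger cylinder numbers):
Total head movements = (199 – 53) + (199 – 0) + (41 – 0)
= 146 + 199 + 41
= 386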