Unit 5
a. Dedicated: Assigned to only one job at a time until that job releases them.
b. Shared: A technique whereby a device is shared among many processes.
c. Virtual: A technique where one physical device is simulated on another
physical device.
These characteristics help the OS manage hardware efficiently, providing a seamless user
experience while maintaining security, compatibility, and performance.
1. Disk Format
A new magnetic disk is a blank platter of a magnetic recording material.
Before a disk can store data, it must be divided into sectors that the disk controller can
read and write. This process is called low-level formatting, or physical formatting.
Low-level formatting fills the disk with a special data structure for each sector.
The data structure for a sector typically consists of a header, a data area, and a trailer.
The header and trailer contain information used by the disk controller, such as a sector
number and an error-correcting code (ECC).
When the controller writes a sector of data during normal I/O, the ECC is updated
with a value calculated from all the bytes in the data area.
When the sector is read, the ECC is recalculated and compared with the stored
value.
If the stored and calculated numbers are different, this mismatch indicates that the
data area of the sector has become corrupted and that the disk sector may be bad.
The ECC is an error-correcting code because it contains enough information that, if
only a few bits of data have been corrupted, the controller can identify which bits
have changed and can calculate what their correct values should be.
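The write/read path can be sketched in a few lines of Python. Note that this is only a rough illustration: a CRC-32 checksum stands in for a real ECC (a true error-correcting code can also repair a few flipped bits, which a plain checksum cannot), and the sector layout shown is an assumption, not an actual controller format.

```python
import zlib

def write_sector(data: bytes, sector_number: int) -> dict:
    """Simulate the low-level sector layout: header + data area + trailer.
    A CRC-32 checksum stands in for the real error-correcting code."""
    return {
        "header": {"sector_number": sector_number},
        "data": data,
        "trailer": {"ecc": zlib.crc32(data)},
    }

def read_sector(sector: dict) -> bytes:
    """Recompute the code over the data area and compare with the stored value."""
    if zlib.crc32(sector["data"]) != sector["trailer"]["ecc"]:
        raise IOError(f"sector {sector['header']['sector_number']} may be bad (ECC mismatch)")
    return sector["data"]

s = write_sector(b"some file contents", sector_number=42)
print(read_sector(s))             # data area intact: read succeeds
s["data"] = b"corrupted bytes!!"  # simulate corruption of the data area
# read_sector(s) would now raise IOError, flagging the sector as possibly bad
```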
To use a disk to hold files, the operating system still needs to record its own data
structures on the disk. It does so in the following two steps:
i. The first step is to partition the disk into one or more groups of cylinders. The
operating system can treat each partition as though it were a separate disk.
ii. After partitioning, the second step is logical formatting (creation of a file system). The
OS stores the initial file-system data structures onto the disk. These data structures
may include maps of free and allocated space and an initial empty directory.
To increase efficiency, most file systems group blocks together into larger chunks,
frequently called clusters. Disk I/O is done via blocks, but file system I/O is done via
clusters.
2. Booting from disk
When a computer starts running or reboots to get an instance, it needs an initial
program to run. This initial program is known as the bootstrap program, and it
must initialize all aspects of the system, such as:
o First, it initializes the CPU registers, device controllers, and main memory.
o Next, it finds the operating-system kernel on disk and loads that kernel into
memory.
o Finally, it jumps to the kernel's initial address to begin operating-system
execution.
The bootstrap is stored in read-only memory (ROM), which is convenient because
ROM needs no initialization and can be executed as soon as the machine is powered
up or reset. Since ROM is read-only, it is also not susceptible to computer viruses.
However, changing the bootstrap code would require changing the ROM hardware
chips, so most systems store only a tiny bootstrap loader in ROM.
The full bootstrap program, which can be modified easily, is stored in the "boot
blocks" at a fixed location on the disk; a disk with a boot partition is called a boot
disk or system disk.
The code in the boot ROM instructs the disk controller to read the boot blocks into
memory and then executes that code. The full bootstrap program is more
sophisticated: it can load the entire operating system from a non-fixed location on
disk and start it running.
Disk Allocation
When a file is created, storage space is allocated to it.
Also, when new data is added to an existing file, the file grows and needs extra
storage space.
Similarly, storage space is released when a file is deleted or data is deleted from a
file.
An important function of the file system is to manage space on secondary storage,
which includes keeping track of both the disk blocks allocated to files and the free
blocks available for allocation.
There are two main goals, which should be fulfilled while allocating space on disk to files.
1. Disk space should be utilized effectively.
2. Files should be accessed quickly.
At the time of space allocation, the system must keep track of which disk blocks go
with which files.
Here the disk is considered a collection of fixed-size blocks, where the block size
varies from system to system.
Most commonly it is 512 bytes or 1 KB.
1. Contiguous Allocation
In contiguous allocation, files are assigned to contiguous areas of secondary storage.
A user specifies in advance the size of the area needed to hold a file to be created.
If the desired amount of contiguous space is not available, the file cannot be created.
Each file occupies a set of contiguous blocks on the disk.
Thus, with a block size of 1 KB, a file of 50 KB would occupy 50 consecutive disk
blocks; with a block size of 2 KB, it would occupy 25 consecutive blocks.
When a file is created, the disk is searched for a chunk of free space large enough to
store the file. If such a chunk is found, the required space is allocated.
The directory entry contains the file name, the starting block number, and the length of the file.
This method is widely used on CD-ROMs, where all file sizes are known in advance and
never change during subsequent use of the CD-ROM. The adjacent figure depicts such
allocation for four different files.
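A minimal Python sketch of what a directory could look like under contiguous allocation; the file names and block numbers below are illustrative assumptions, not taken from the figure. It shows why direct access is cheap: the i-th block of a file is simply start + i.

```python
# Hypothetical directory for contiguous allocation: each entry records only
# the starting block number and the length (in blocks) of the file.
directory = {
    "count": {"start": 0,  "length": 2},
    "mail":  {"start": 19, "length": 6},
    "list":  {"start": 28, "length": 4},
    "f":     {"start": 6,  "length": 2},
}

def physical_block(name: str, logical_block: int) -> int:
    """Direct access: the i-th block of the file is simply start + i."""
    entry = directory[name]
    if logical_block >= entry["length"]:
        raise IndexError("block number beyond end of file")
    return entry["start"] + logical_block

print(physical_block("mail", 3))   # 19 + 3 = 22
```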
Advantages:
1. Simple to implement. Only two pieces of information are required: the starting block
# and the length of the file as a total number of blocks.
2. File access is quick. All the data blocks lie on the same or neighbouring tracks of the
disk, requiring less seek time.
Disadvantages:
1. Finding free space for a new file is time consuming. This requires searching the disk
until a sufficiently large chunk of free space is found.
2. If the size of an existing file increases, it may not be possible to accommodate the
extension.
3. External fragmentation is possible. When a file is deleted, its blocks are freed, leaving
a hole on the disk. Over time, the disk will consist of files and holes. Such holes may
be too small to accommodate new files and so waste space on the disk. This is known
as external fragmentation. The solution to this problem is defragmentation, or
compaction, of the disk.
2. Linked Allocation
Each file is a linked list of disk blocks.
Each block contains a pointer to the next block in the list.
These disk blocks may be scattered anywhere on the disk.
The directory entry contains the first and last block numbers of the linked list.
When a new file is created, a new directory entry is created; initially both block
numbers are 'null'.
A write to the file causes a free data block to be allocated and added to the end of the
linked list.
The directory entry is updated on each such occasion. To read a file, all blocks are read by
following the pointers from block to block.
The following figure depicts linked allocation.
In the figure, to reach block # 10 it is necessary to traverse blocks # 9, 16 and 1. An
important variation on the linked allocation method, called the File Allocation Table
(FAT), is used by MS-DOS and older versions of the Windows operating system.
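The idea can be sketched with a small FAT-like table in Python. The block chain 9 → 16 → 1 → 10 follows the figure described above; the file name "jeep" and everything else in the sketch are illustrative assumptions.

```python
# A tiny FAT-style table: fat[b] gives the block that follows block b in the
# file, or None at the end of the file.
fat = {9: 16, 16: 1, 1: 10, 10: None}
directory = {"jeep": {"start": 9, "end": 10}}   # directory keeps first and last block

def blocks_of(name: str):
    """Read a file by following the pointers from block to block."""
    block = directory[name]["start"]
    while block is not None:
        yield block
        block = fat[block]

print(list(blocks_of("jeep")))   # [9, 16, 1, 10]
```

Note how reaching block 10 forces the sketch to walk through 9, 16 and 1 first, which is exactly why random access is slow under linked allocation.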
Advantages:
1. It does not suffer from external fragmentation.
2. Any free disk block can be allocated to a file. Such a block does not need to be
consecutive with the file's other blocks, as in the previous method. So, disk space
can be utilized effectively.
Disadvantages:
1. File access is time consuming. It is necessary to traverse the linked list block by
block to reach a particular block.
2. Random access is not possible directly.
3. Extra space is required for pointers in each data block.
3. Index Allocation
In linked allocation, the pointers to a file's disk blocks are scattered over the disk,
inside the data blocks themselves.
For this reason, linked allocation cannot support efficient direct access.
Indexed allocation solves this problem.
It brings all the pointers together into one location; the Index Block.
Each file contains its own index block.
An index block is an array of disk block addresses.
The ‘ith’ entry in the index block points to the ‘ith’ block of the file.
The directory entry contains the file name and the index block #.
When a new file is created, a new directory entry is created. Initially all pointers in
the index block are set to null.
When the 'ith' block is first written, a free disk block is allocated and its address is put
into the 'ith' entry of the index block.
The following figure depicts indexed allocation for the file 'Hw1'. The directory
entry for this file records block # 3 as its index block. The index block, in turn,
contains the block #s of all the blocks that hold the data contents of the file Hw1.
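A rough Python sketch of indexed allocation for the file Hw1, using index block # 3 as above; the data-block numbers stored inside the index block are illustrative assumptions.

```python
# Directory maps file name -> index block number; index_blocks maps an index
# block number -> the array of data-block addresses it holds (None = unused).
directory    = {"Hw1": 3}
index_blocks = {3: [9, 16, 1, 10, None, None, None, None]}

def read_block(name: str, i: int) -> int:
    """Direct access: the i-th entry of the index block gives the i-th data block."""
    idx = index_blocks[directory[name]]
    if i >= len(idx) or idx[i] is None:
        raise IndexError("block not allocated")
    return idx[i]

print(read_block("Hw1", 2))   # third data block of Hw1 -> block 1
```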
Advantages:
1. It does not suffer from external fragmentation.
2. Direct access is efficient.
Disadvantages:
1. It suffers from wasted space. The index block may be only partially filled, wasting the
remaining space of the index block. For example, if a file contains only two data
blocks, the index block will have only two entries pointing to those data blocks; the
rest of the index block is wasted.
2. Maximum allowable file size depends on the size of an index block.
Solutions:
In this allocation method the main problem is the size of the index block. If it is too large, it
may waste space; if it is too small, it cannot hold enough pointers for a large file.
1. Linked Scheme
An index block is normally one disk block in size.
For larger files, more than one index block can be used by linking them together into
a linked list.
2. Combined Scheme
In this solution, some entries of the index block point to direct blocks, which contain
file data.
The remaining entries point to indirect blocks, which are index blocks.
Indirect blocks do not contain file data; they contain the addresses of the disk blocks
(pointers) that do contain file data.
The UNIX OS uses such a combined scheme.
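The following is a simplified, UNIX-inspired sketch (not the actual UNIX inode layout) showing how direct and indirect pointers resolve a logical block number; all sizes and block numbers here are assumptions made for illustration.

```python
# A stripped-down, UNIX-inspired inode: a few direct pointers plus one single
# indirect pointer.
DIRECT_POINTERS = 4
indirect_blocks = {50: [21, 22, 23, 24]}   # an index block holding only pointers

inode = {
    "direct": [7, 8, 12, 13],              # point straight at data blocks
    "single_indirect": 50,                 # points at an index block, not data
}

def data_block(inode: dict, logical_block: int) -> int:
    """Resolve a logical block number to a physical data block."""
    if logical_block < DIRECT_POINTERS:
        return inode["direct"][logical_block]
    # beyond the direct range: go through the indirect (index) block
    return indirect_blocks[inode["single_indirect"]][logical_block - DIRECT_POINTERS]

print(data_block(inode, 1))   # via a direct pointer   -> 8
print(data_block(inode, 5))   # via the indirect block -> 22
```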
Seek Time
The time taken by the read/write head to reach the desired track from its current
position is called seek time.
Latency Time
The time taken for the desired sector to rotate under the read/write head is called
rotational latency (latency time).
Transfer Time
It is the time to transfer data. It depends on the rotating speed of the disk and number
of bytes to be transferred.
Access Time
Disk Access Time= Seek Time + Rotational Latency + Transfer Time.
Bandwidth
Total number of bytes transferred divided by the total time between first request for
service and completion of last transfer.
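A quick back-of-the-envelope calculation of the access-time formula above, sketched in Python. The drive parameters (5 ms average seek, 7200 RPM, 100 MB/s transfer rate, 4 KB request) are assumed for illustration only.

```python
# Rough access-time estimate for a single 4 KB request.
seek_ms     = 5.0                            # assumed average seek time
rotation_ms = (60_000 / 7200) / 2            # average rotational latency = half a revolution
transfer_ms = (4 / (100 * 1024)) * 1000      # 4 KB at an assumed 100 MB/s

access_ms = seek_ms + rotation_ms + transfer_ms
print(f"access time ~= {access_ms:.2f} ms")  # roughly 9.2 ms
```

Notice that seek time and rotational latency dominate for small requests, which is exactly why disk scheduling focuses on reducing head movement.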
Disk Scheduling
One of the responsibilities of the operating system is to use the hardware efficiently.
For the disk drives, meeting this responsibility entails having fast access time and
large disk bandwidth.
The access time has two major components. The SEEK TIME is the time for the disk
arm to move the heads to the cylinder containing the desired sector. The LATENCY is
the additional time for the disk to rotate the desired sector to the disk head. The disk
BANDWIDTH is the total number of bytes transferred, divided by the total time
between the first request for service and the completion of the last transfer. We can
improve both the access time and the bandwidth by managing the order in which
disk I/O requests are serviced.
Whenever a process needs I/O to or from the disk, it issues a system call to the
operating system. The request specifies several pieces of information:
o Whether this operation is input or output
o What the disk address for the transfer is
o What the memory address for the transfer is
o What the number of sectors to be transferred is
If the desired disk drive and controller are available, the request can be serviced
immediately.
If the drive or controller is busy, any new requests for service will be placed in the
queue of pending requests for that drive. For a multiprogramming system with many
processes, the disk queue may often have several pending requests.
Thus, when one request is completed, the operating system chooses which pending
request to service next.
How does the operating system make this choice? Any one of several disk scheduling
algorithms can be used, and we discuss them next.
Although there are other algorithms that reduce the seek time of all requests, I will
only concentrate on the following disk scheduling algorithms:
1. FCFS (First Come, First Served)
2. SSTF (Shortest Seek Time First)
3. SCAN (Elevator)
4. C-SCAN
5. LOOK
6. C-LOOK
Types of Disk Scheduling algorithms
What we are striving for with these algorithms is to keep head movement (measured in #
of tracks) as low as possible. The less the head has to move, the faster the seek time will be. I
will show you and explain why C-LOOK is the best algorithm to use for achieving a lower
seek time.
Given the following queue -- 95, 180, 34, 119, 11, 123, 62, 64 -- with the read-write head
initially at track 50 and the tail track being 199, let us now discuss the different
algorithms.
1. FCFS (First Come, First Served)
In FCFS, requests are serviced strictly in the order in which they arrive. In this case the head
went from 50 to 95 to 180 and so on. From 50 to 95 it moved 45 tracks. If you tally up the
total number of tracks, you will find how many tracks the head had to travel before finishing
the entire request queue. In this example, it had a total head movement of 644 tracks. The
disadvantage of this algorithm is the oscillation: from track 50 up to 180, back down to track
11, up to 123 and then to 64. As you will soon see, this is the worst algorithm one can use.
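The FCFS total can be reproduced with a short Python sketch using the queue and head position given above.

```python
# FCFS: service the requests strictly in arrival order and add up the head movement.
queue = [95, 180, 34, 119, 11, 123, 62, 64]
head  = 50

def fcfs_movement(head: int, requests: list[int]) -> int:
    total = 0
    for track in requests:
        total += abs(track - head)   # distance moved for this request
        head = track                 # head is now at the serviced track
    return total

print(fcfs_movement(head, queue))    # 644 tracks for this example
```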
Advantages:
Simple and Fair to all requests
Every request gets a fair chance.
No indefinite postponement.
Disadvantage:
Not efficient, because the average seek time is very high. It suffers from a zigzag effect.
2. SSTF (Shortest Seek Time First)
In this case the request closest to the current head position is serviced next. Starting at
50, the next shortest distance is to 62 rather than 34, since the head is only 12 tracks
away from 62 but 16 tracks away from 34. The process continues until all the requests
are taken care of. For example, the next move is from 62 to 64 instead of 34, since
there are only 2 tracks between them rather than 28 if it were to go the other way.
Although this seems to be a better service, moving a total of only 236 tracks, it is not
an optimal one. There is a great chance that starvation would take place: if there were
a lot of requests close to each other, the more distant requests might never be handled,
since their distance will always be greater.
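The same queue can be run through a small SSTF sketch in Python; it reproduces the 236-track total.

```python
# SSTF: at every step pick the pending request closest to the current head position.
queue = [95, 180, 34, 119, 11, 123, 62, 64]
head  = 50

def sstf_movement(head: int, requests: list[int]) -> int:
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # shortest seek next
        total  += abs(nearest - head)
        head    = nearest
        pending.remove(nearest)
    return total

print(sstf_movement(head, queue))    # 236 tracks for this example
```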
Advantages:
More efficient than FCFS.
Average Response time decreases.
Throughput increases.
Disadvantages:
Starvation is possible for requests involving longer seek time.
Overhead to calculate seek time in advance.
Can cause starvation for a request if it has a higher seek time compared to
incoming requests.
High variance of response time as SSTF favours only some requests.
Elevator Algorithms
These algorithms are based on the common elevator principle. There are four combinations of
elevator algorithms: SCAN, LOOK, C-SCAN and C-LOOK. They service requests in both directions
or in only one direction, and they run either until the last cylinder or until the last pending I/O
request is encountered.
3. Elevator (SCAN)
In SCAN algorithm the disk arm moves into a particular direction and services the
requests coming in its path and after reaching the end of disk, it reverses its direction
and again services the request arriving in its path. So, this algorithm works as an
elevator, and hence is also known as the elevator algorithm. As a result, requests in the
mid-range of the disk are serviced more often, while those arriving just behind the disk arm
have to wait. If a request arrives in the region the arm has just passed, it will not be serviced
until the arm sweeps back again. This process moved a total of 230 tracks. Once again, this is
more optimal than the previous algorithm, but it is not the best.
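A minimal SCAN sketch for the same example, assuming the head first moves toward cylinder 0 (the direction that matches the 230-track figure above) and sweeps all the way to that end before reversing.

```python
# SCAN (elevator), with the head assumed to move toward cylinder 0 first:
# it sweeps down to cylinder 0, reverses, and sweeps up to the highest request.
queue = [95, 180, 34, 119, 11, 123, 62, 64]
head  = 50

def scan_movement(head: int, requests: list[int]) -> int:
    highest_above = max((t for t in requests if t > head), default=head)
    distance_down = head - 0               # sweep down to cylinder 0
    distance_up   = highest_above - 0      # reverse and sweep back up
    return distance_down + distance_up

print(scan_movement(head, queue))          # 50 + 180 = 230 tracks for this example
```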
Advantages:
More efficient than FCFS.
Also, there is no starvation for any requests.
High throughput.
Low variance of response time.
Better average response time than FCFS.
Disadvantages:
Requires extra head movement between the two extreme points. For example, after
servicing the 5th cylinder there is no need to visit the 0th cylinder, yet this algorithm
still visits the end points.
Not so fair: cylinders just behind the head will wait longer.
Long waiting time for requests for locations just visited by the disk arm.
4. C-SCAN
Circular scanning works just like the elevator to some extent. The head moves from
one end of the disk to the other, servicing requests as it goes. When it reaches the
other end, however, it immediately returns to the beginning of the disk, without
servicing any requests on the return trip. It treats the cylinders as a circular list that
wraps around from the last cylinder to the first one. It provides a more uniform wait
time than SCAN; it treats all cylinders in the same manner. It begins its scan toward
the nearest end and works its way all the way to the end of the system. Once it hits the
bottom or top it jumps to the other end and moves in the same direction. Keep in mind
that the huge jump doesn't count as a head movement. The total head movement for
this algorithm is only 187 tracks, but this still isn't the most efficient.
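A small C-SCAN sketch for the same example, assuming (as above) that the head sweeps toward cylinder 0 first, jumps to track 199, and continues in the same direction; the jump itself is not counted as head movement.

```python
# C-SCAN: sweep down to cylinder 0, jump to the far end (track 199, not counted),
# then continue downward to the last pending request on that side.
queue = [95, 180, 34, 119, 11, 123, 62, 64]
head, last_track = 50, 199

def c_scan_movement(head: int, requests: list[int], last_track: int) -> int:
    lowest_above = min(t for t in requests if t > head)   # last request reached after the jump
    sweep_down   = head - 0                               # 50 tracks down to cylinder 0
    after_jump   = last_track - lowest_above              # 199 - 62 = 137 tracks
    return sweep_down + after_jump

print(c_scan_movement(head, queue, last_track))           # 187 tracks for this example
```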
Advantages:
Provides more uniform wait time compared to SCAN.
5. LOOK
The disk arm starts at the first I/O request at one end and moves toward the last I/O
request at the other end, servicing requests as it goes; when it reaches the last pending
request in that direction, the head movement is reversed and servicing continues. It
moves in both directions, but only as far as the last request each way, and so is more
inclined to serve the middle cylinder requests. In this example the head reverses at 11
and moves back up toward 62, rather than continuing from 11 down to 0.
6. C-LOOK
It is the LOOK version of C-SCAN. The arm only goes as far as the last request in
each direction, then jumps to the furthest request at the other end, without first going
all the way to the end of the disk. The SCAN versions have a larger total seek time
than the corresponding LOOK versions.
This is just an enhanced version of C-SCAN. In this algorithm the scan doesn't go past
the last request in the direction it is moving. It too jumps to the other end, but not all
the way to the end of the disk, just to the furthest pending request. C-SCAN had a total
head movement of 187 tracks, but C-LOOK reduces it to 157 tracks. From this you
were able to see the total head movement drop from 644 tracks with FCFS to just 157.
You should now have an understanding of why the operating system's choice of disk
scheduling algorithm matters when it is dealing with multiple processes.
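A small C-LOOK sketch for the same queue, under the same assumption that the head moves toward cylinder 0 first; the wrap-around jump from the lowest request to the highest one is not counted as head movement.

```python
# C-LOOK: sweep down only to the lowest pending request, jump to the highest
# pending request (jump not counted), then continue downward.
queue = [95, 180, 34, 119, 11, 123, 62, 64]
head  = 50

def c_look(head: int, requests: list[int]):
    below = sorted((t for t in requests if t <= head), reverse=True)  # downward sweep
    above = sorted((t for t in requests if t > head), reverse=True)   # after the jump
    order = below + above
    moved = (head - below[-1]) + (above[0] - above[-1])  # 39 + 118 tracks
    return order, moved

order, moved = c_look(head, queue)
print(order)   # [34, 11, 180, 123, 119, 95, 64, 62]
print(moved)   # 157 tracks for this example
```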
If the directory entry is on the first cylinder and the file's data are on the final cylinder,
the disk head has to move the entire width of the disk.
Factors such as these must be considered when selecting a scheduling algorithm. The
average seek time with FCFS is very high, and SCAN requires unnecessary head
movements between the two end points. So, in the default case, either SSTF or LOOK
is a reasonable choice as the disk scheduling algorithm.