
Operating System with Linux (PMCAC12)

MODULE 4

FILE SYSTEMS AND I/O MANAGEMENT

File Systems

File systems are a crucial part of any operating system, providing a structured way to store,
organize, and manage data on storage devices such as hard drives, SSDs, and USB drives.

A file system acts as a bridge between the operating system and the physical storage hardware,
allowing users and applications to create, read, update, and delete files in an organized and
efficient manner.

A file system is a method an operating system uses to store, organize, and manage files and
directories on a storage device. Some common types of file systems include:

• FAT (File Allocation Table): An older file system used by older versions of Windows
and other operating systems.

• NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and encryption.

• EXT (Extended File System): A file system commonly used on Linux and Unix-based
operating systems.

• HFS (Hierarchical File System): A file system used by macOS.

• APFS (Apple File System): A new file system introduced by Apple for their Macs and
iOS devices.

File Concepts:

1. File Attributes:
Each file has characteristics such as its name, type, and the date on which it was created. These characteristics are referred to as 'file attributes'. The operating system associates these attributes with files, and different operating systems may maintain different sets of attributes. Attributes are also commonly called metadata.


Following are some common file attributes:

1. Name: File name is the name given to the file. A name is usually a string of
characters.
2. Identifier: Identifier is a unique number for a file. It identifies files within the
file system. It is not readable to us, unlike file names.
3. Type: Type is another attribute of a file which specifies the type of file such as
archive file (.zip), source code file (.c, .java), .docx file, .txt file, etc.
4. Location: Specifies the location of the file on the device (The directory path).
This attribute is a pointer to a device.
5. Size: Specifies the current size of the file (in KB, MB, GB, etc.) and possibly the
maximum allowed size of the file.
6. Protection: Specifies information about Access control (Permissions about
Who can read, edit, write, and execute the file.) It provides security to sensitive
and private information.
7. Time, date, and user identification: This information tells us the date and time at
which the file was created or last modified, and which user created or modified it.

2. File Operations:
The operating system must provide system calls to perform the basic file operations given below (a minimal sketch of these calls follows the list).
• Creating a file: Two steps are necessary to create a file. First, space in the file
system must be found for the file. Second, an entry for the new file must be
made in the directory.
• Writing a file: To write a file, we make a system call specifying both the name
of the file and the information to be written to the file. Given the name of the
file, the system searches the directory to find the file's location. The system
must keep a write pointer to the location in the file where the next write is to
take place. The write pointer must be updated whenever a write occurs.
• Reading a file: To read from a file, we use a system call that specifies the
name of the file and where (in memory) the next block of the file should be
put. Again, the directory is searched for the associated entry, and the system


needs to keep a read pointer to the location in the file where the next read is to
take place. Once the read has taken place, the read pointer is updated.
• Repositioning within a file: The directory is searched for the appropriate
entry, and the current-file-position pointer is repositioned to a given value.
Repositioning within a file need not involve any actual I/O. This file operation
is also known as a file seek.
• Deleting a file: To delete a file, we search the directory for the named file.
Having found the associated directory entry, we release all file space, so that
it can be reused by other files, and erase the directory entry.
• Protection: Access-control information determines who can do reading,
writing, executing, and so on.
• Truncating a file: The user may want to erase the contents of a file but keep
its attributes. Rather than forcing the user to delete the file and then recreate
it, this function allows all attributes to remain unchanged—except for file
length—but lets the file be reset to length zero and its file space released.
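
The sketch below illustrates these operations using Python's os module, which wraps the underlying Linux system calls; the file name notes.txt and the data written are only illustrative, not part of the original notes.

import os

# Creating a file: the OS finds space for it and adds a directory entry.
fd = os.open("notes.txt", os.O_CREAT | os.O_RDWR, 0o644)

# Writing: the kernel keeps a write pointer that advances after each write.
os.write(fd, b"first line\n")

# Repositioning (file seek): move the current-file-position pointer to byte 0.
os.lseek(fd, 0, os.SEEK_SET)

# Reading: the read pointer advances past the bytes just read.
data = os.read(fd, 11)

# Truncating: keep the attributes but reset the length to zero.
os.ftruncate(fd, 0)
os.close(fd)

# Deleting: erase the directory entry and release the file's space.
os.remove("notes.txt")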
File System Structure
A file must be structured in a format that the operating system can understand.
• A file has a certain defined structure according to its type.
• A text file is a sequence of characters organized into lines.
• A source file is a sequence of procedures and functions.
• An object file is a sequence of bytes organized into blocks that are understandable by
the machine.
• When an operating system defines different file structures, it must also contain the code to
support them. UNIX and MS-DOS support only a minimal number of file structures.
Files can be structured in several ways; the three common structures (text, source, and object files) are described above.


File Access methods


File access mechanism refers to the way the records of a file may be accessed.
There are several ways to access files –
• Sequential access
• Direct/Random access
• Indexed sequential access
1. Sequential Access
It is the simplest access method. Information in the file is processed in order, one record after
the other. This mode of access is by far the most common; for example, editors and
compilers usually access files in this fashion.
Reads and writes make up the bulk of the operations on a file. A read operation (read next) reads
the next portion of the file and automatically advances the file pointer, which keeps track of
the I/O location. Similarly, a write operation (write next) appends to the end of the file and
advances the pointer to the end of the newly written material.
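
As a rough illustration (not part of the original notes), sequential access in Python's os module looks like the following; the file name records.dat and the 4096-byte block size are assumptions.

import os

BLOCK = 4096                              # assumed block size
fd = os.open("records.dat", os.O_RDONLY)  # hypothetical data file
while True:
    block = os.read(fd, BLOCK)            # "read next": the file pointer advances automatically
    if not block:                         # an empty read means end of file
        break
    # ... process the block, strictly in order ...
os.close(fd)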

Advantages of Sequential Access Method


• It is simple to implement this file access mechanism.
• It uses lexicographic order to quickly access the next entry.
• It is suitable for applications that require access to all records in a file, in a specific
order.
• It is less prone to data corruption as the data is written sequentially and not randomly.


• It is a more efficient method for reading large files, as it only reads the required data
and does not waste time reading unnecessary data.
• It is a reliable method for backup and restore operations, as the data is stored
sequentially and can be easily restored if required.
Disadvantages of Sequential Access Method
• If the file record that needs to be accessed next is not present next to the current
record, this type of file access method is slow.
• Moving a sizable chunk of the file may be necessary to insert a new record.
• It does not allow for quick access to specific records in the file. The entire file must
be searched sequentially to find a specific record, which can be time-consuming.
• It is not well-suited for applications that require frequent updates or modifications to
the file. Updating or inserting a record in the middle of a large file can be a slow and
cumbersome process.
• Sequential access can also result in wasted storage space if records are of varying
lengths. The space between records cannot be used by other records, which can result
in inefficient use of storage.

2. Direct/Random Access Method


Another method is the direct access method, also known as the relative access method. A file is
made up of fixed-length logical records that allow programs to read and write records rapidly
in no particular order. Direct access is based on the disk model of a file, since a disk allows
random access to any file block. For direct access, the file is viewed as a numbered sequence
of blocks or records. Thus, we may read block 14, then block 59, and then write block 17;
there is no restriction on the order of reading and writing for a direct-access file.
A block number provided by the user to the operating system is normally a relative block
number: the first relative block of the file is 0, then 1, and so on.
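
A minimal sketch of direct access, assuming fixed-length 4096-byte records in a hypothetical file records.dat: relative block n is reached by seeking to offset n * block size.

import os

BLOCK = 4096                              # assumed fixed record/block size

def read_block(fd, n):
    # Direct access: jump straight to relative block n (block 0 is the first block).
    os.lseek(fd, n * BLOCK, os.SEEK_SET)
    return os.read(fd, BLOCK)

fd = os.open("records.dat", os.O_RDONLY)  # hypothetical file
b14 = read_block(fd, 14)                  # read block 14, then block 59, in any order
b59 = read_block(fd, 59)
os.close(fd)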


Advantages of Direct Access Method


• Files can be accessed immediately, decreasing the average access time.
• In the direct access method, there is no need to traverse all the blocks before the one
being accessed.
Disadvantages of Direct Access Method
• Complex Implementation: Implementing direct access can be complex, requiring
sophisticated algorithms and data structures to manage and locate records efficiently.
• Higher Storage Overhead: Direct access methods often require additional storage
for maintaining data location information (such as pointers or address tables), which
can increase the overall storage requirements.

3. Index Sequential method


It is another method of accessing a file, built on top of the sequential access method.
This method constructs an index for the file. The index, like the index at the
back of a book, contains pointers to the various blocks. To find a record in the file,
we first search the index and then, with the help of the pointer found there, access the file directly.
Key Points Related to Index Sequential Method
• It is built on top of sequential access.
• It controls the pointer by using the index.
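
A toy sketch of the idea, with a made-up in-memory index mapping record keys to block numbers: the small index is searched first, and the pointer found there is used to access the file directly.

import os

BLOCK = 4096                                   # assumed block size
index = {1001: 0, 1050: 3, 1203: 7}            # hypothetical index: record key -> block number

def fetch(fd, key):
    block_no = index[key]                      # step 1: search the index
    os.lseek(fd, block_no * BLOCK, os.SEEK_SET)
    return os.read(fd, BLOCK)                  # step 2: access the file directly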

Advantages of Index Sequential Method


• Efficient Searching: Index sequential method allows for quick searches through the
index.


• Balanced Performance: It combines the simplicity of sequential access with the
speed of direct access, offering a balanced approach that can handle various types of
data access needs efficiently.
• Flexibility: This method allows both sequential and random access to data, making it
versatile for different types of applications, such as batch processing and real-time
querying.
• Improved Data Management: Indexing helps in better organization and
management of data. It makes data retrieval faster and more efficient, especially in
large databases.
• Reduced Access Time: By using an index to directly locate data blocks, the time
spent searching for data within large datasets is significantly reduced.

Disadvantages of Index Sequential Method


• Complex Implementation: The index sequential method is more complex to
implement and maintain compared to simple sequential access methods.
• Additional Storage: Indexes require additional storage space, which can be
significant for large datasets. This extra space can sometimes offset the benefits of
faster access.
• Update Overhead: Updating the data can be more time-consuming because both the
data and the indexes need to be updated. This can lead to increased processing time
for insertions, deletions, and modifications.
• Index Maintenance: Keeping the index up to date requires regular maintenance,
especially in dynamic environments where data changes frequently. This can add to
the system’s overhead.

Directory Structure
A directory is a container that is used to contain folders and files. It organizes files and folders
in a hierarchical manner. In other words, directories are like folders that help organize files
on a computer. Just like you use folders to keep your papers and documents in order, the
operating system uses directories to keep track of files and where they are stored. Different
structures of directories can be used to organize these files, making it easier to find and
manage them.


Understanding these directory structures is important because it helps in efficiently
organizing and accessing files on your computer. Following are the logical structures of a
directory, each providing a solution to the problem faced in the previous type of directory
structure.

Fig: Directory Structure

1. Single Level Directory


The simplest method is to have one big list of all the files on the disk. The entire system
contains only one directory, which lists every file present in the file system; the directory
contains one entry per file. This type of directory can be used for a simple system.

Advantages
1. Implementation is very simple.


2. If the files are small in size, searching becomes faster.
3. File creation, searching, and deletion are very simple since there is only one directory.
Disadvantages
1. We cannot have two files with the same name.
2. The directory may become very large, so searching for a file may take a long time.
3. Protection cannot be implemented for multiple users.
4. There is no way to group files of the same kind.
5. Choosing a unique name for every file is difficult and limits the number of files in
the system, because most operating systems limit the number of characters used to
construct a file name.
2. Two Level Directory
In two-level directory systems, we can create a separate directory for each user. There is
one master directory which contains a separate directory dedicated to each user. For each
user, there is a different directory at the second level, containing that user's files.
The system does not let a user enter another user's directory without permission.

Characteristics of two-level directory system


1. Each file has a path name of the form /username/filename.
2. Different users can have the same file name.
3. Searching becomes more efficient as only one user's list needs to be traversed.
4. The same kind of files cannot be grouped into a single directory for a particular user.
The operating system maintains a variable such as PWD, which contains the present directory
name (the present user name), so that searching can be done appropriately.


Advantages
• The main advantage is that different users can have files with the same name, which is
very helpful when there are multiple users.
• It provides security, since one user cannot access another user’s files.
• Searching for files becomes very easy in this directory structure.
Disadvantages
• Along with the advantage of security comes the disadvantage that a user cannot share
files with other users.
• Although users can create their own files, they do not have the ability to create
subdirectories.
• Scalability is limited because a user cannot group files of the same type together.

3. Tree Structured Directory


• In a tree-structured directory system, any directory entry can be either a file or a
subdirectory. The tree-structured directory system overcomes the drawbacks of the
two-level directory system, since similar kinds of files can now be grouped in one directory.
• Each user has their own directory and cannot enter another user's directory.
However, a user has permission to read the root's data but cannot write to or modify it;
only the system administrator has complete access to the root directory.
• Searching is more efficient in this directory structure. The concept of a current working
directory is used, and a file can be accessed by two types of paths, either relative or
absolute.
• An absolute path is the path of the file with respect to the root directory of the system, while
a relative path is the path with respect to the current working directory. In
tree-structured directory systems, the user is given the privilege to create directories
as well as files.


Advantages
• This directory structure allows subdirectories inside a directory.
• The searching is easier.
• Sorting files into important and unimportant ones becomes easier.
• This directory is more scalable than the other two directory structures explained.

Disadvantages
• Since a user is not allowed to access another user’s directory, file sharing among
users is prevented.
• Because users can create subdirectories, searching may become complicated as the
number of subdirectories increases.
• Users cannot modify the root directory data.
• If files do not fit in one directory, they may have to be placed in other directories.

4. Acyclic Graph Structure


In an acyclic graph directory structure, a file in one directory can be accessed from
multiple directories; in this way, files can be shared between users. It is designed so
that multiple directories point to a particular directory or file with the help of links.
The tree-structured directory system does not allow the same file to exist in multiple
directories, so sharing is a major concern in that structure. We can provide sharing by
making the directory an acyclic graph.

In this system, two or more directory entries can point to the same file or subdirectory;
that file or subdirectory is then shared between the directory entries.
These kinds of directory graphs can be made using links or aliases, so there can be
multiple paths to the same file. Links can be either symbolic (logical) or hard (physical).

Advantages
• Sharing of files and directories is allowed between multiple users.
• Searching becomes very easy.
• Flexibility is increased as file sharing and editing access is there for multiple users.
Disadvantages
• Because of the complex structure it has, it is difficult to implement this directory
structure.
• Users must be very cautious when editing or even deleting a file, since it may be
accessed by multiple users.
• To delete a file permanently, all references to it must be deleted.

File Allocation Methods


The allocation methods define how the files are stored in the disk blocks. There are three
main disk space or file allocation methods.
• Contiguous Allocation
• Linked Allocation
• Indexed Allocation

1. Contiguous Allocation
A single contiguous set of blocks is allocated to a file at the time of file creation. Thus, this is
a pre-allocation strategy, using variable size portions. The file allocation table needs just a
single entry for each file, showing the starting block and the length of the file. This method is
best from the point of view of the individual sequential file. Multiple blocks can be read in at
a time to improve I/O performance for sequential processing. It is also easy to retrieve a single
block.
For example, if a file starts at block b, and the ith block of the file is wanted, its location on
secondary storage is simply b+i-1.
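
A one-line check of this formula with illustrative numbers: a file starting at block 5 has its first block at block 5 and its third block at block 7.

def contiguous_block(start, i):
    # Contiguous allocation: the i-th block (counting from 1) of a file
    # starting at block `start` is at disk block start + i - 1.
    return start + i - 1

print(contiguous_block(5, 1))   # 5
print(contiguous_block(5, 3))   # 7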
Disadvantages of Contiguous Allocation
• External fragmentation will occur, making it difficult to find contiguous blocks of
space of sufficient length. A compaction algorithm will be necessary to free up
additional space on the disk.
• Also, with pre-allocation, it is necessary to declare the size of the file at the time of
creation.


2. Linked Allocation (Non-Contiguous Allocation)


Allocation is on an individual block basis. Each block contains a pointer to the next block
in the chain. Again, the file table needs just a single entry for each file, showing the starting
block and the length of the file. Although pre-allocation is possible, it is more common
simply to allocate blocks as needed.

Any free block can be added to the chain. The blocks need not be contiguous. An increase
in file size is always possible if a free disk block is available.

There is no external fragmentation because only one block is needed at a time. There can
be internal fragmentation, but it exists only in the last disk block of the file.
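
A toy picture of linked allocation (the block numbers and data are made up): each block carries a pointer to the next block in the chain, and -1 marks the end of the file.

# Hypothetical disk: block number -> (data, pointer to next block)
blocks = {9: ("data-A", 16), 16: ("data-B", 1), 1: ("data-C", -1)}

def read_file(start_block):
    n = start_block
    while n != -1:
        data, nxt = blocks[n]
        print(f"block {n}: {data}")
        n = nxt                    # follow the pointer stored in the block

read_file(9)                       # visits blocks 9, 16, 1: not contiguous, sequential access only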


Disadvantages of Linked Allocation (Non-Contiguous Allocation)


• Internal fragmentation exists in the last disk block of the file.
• There is an overhead of maintaining the pointer in every disk block.
• If the pointer of any disk block is lost, the file will be truncated.
• It supports only the sequential access of files.

3. Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case, the file
allocation table contains a separate one-level index for each file: The index has one entry for
each block allocated to the file.
The allocation may be based on fixed-size blocks or variable-size blocks. Allocation by
fixed-size blocks eliminates external fragmentation, whereas allocation by variable-size blocks
improves locality.
This allocation technique supports both sequential and direct access to the file and thus is the
most popular form of file allocation.
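
A toy picture of indexed allocation (block numbers made up): the file's index lists every data block, so block i of the file is found with a single lookup, which supports both sequential and direct access.

index_block = [9, 16, 1, 25]       # hypothetical per-file index of data blocks

def nth_block(i):
    return index_block[i]          # direct access: one lookup, no chain to follow

print(nth_block(2))                # the third block of the file is disk block 1
for b in index_block:              # sequential access: walk the index in order
    print(b)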


Disk Scheduling Algorithms


Disk scheduling is a technique operating systems use to manage the order in which disk I/O
(input/output) requests are processed.
Disk scheduling is also known as I/O Scheduling.
The main goals of disk scheduling are to optimize the performance of disk operations, reduce
the time it takes to access data and improve overall system efficiency.

Disk scheduling algorithms are crucial in managing how data is read from and written to a
computer’s hard disk. These algorithms help determine the order in which disk read and write
requests are processed, significantly impacting the speed and efficiency of data access.
Common disk scheduling methods include First-Come, First-Served (FCFS), Shortest
Seek Time First (SSTF), SCAN, C-SCAN, LOOK, and C-LOOK.

Importance of Disk Scheduling in Operating System


• Multiple I/O requests may arrive from different processes, but only one I/O request can
be served at a time by the disk controller. Thus, the other I/O requests need to wait in a
waiting queue and must be scheduled.


• Two or more requests may be far from each other, which can result in greater disk
arm movement.
• Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.
Key Terms Associated with Disk Scheduling
• Seek Time: Seek time is the time taken to move the disk arm to the specified track
where the data is to be read or written. The disk scheduling algorithm that gives the
minimum average seek time is better.
• Rotational Latency: Rotational latency is the time taken for the desired sector of the
disk to rotate into position under the read/write head. The disk scheduling algorithm
that gives the minimum rotational latency is better.
• Transfer Time: Transfer time is the time to transfer the data. It depends on the
rotating speed of the disk and the number of bytes to be transferred.
• Disk Access Time:
Disk Access Time = Seek Time + Rotational Latency + Transfer Time
Total Seek Time = Total Head Movement * Seek Time
(A small worked example follows this list.)
• Disk Response Time: Response time is the average time a request spends waiting
to perform its I/O operation. The average response time is the average over all
requests, and the variance of response time measures how individual requests are
serviced with respect to that average. The disk scheduling algorithm that gives the
minimum variance of response time is better.
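
A quick worked example of the formulas above, with made-up numbers:

seek, latency, transfer = 5, 4, 1            # illustrative values, in milliseconds
disk_access_time = seek + latency + transfer # Disk Access Time = 10 ms per request
head_movement = 100                          # illustrative total head movement, in tracks
total_seek_time = head_movement * seek       # Total Seek Time = 500 ms
print(disk_access_time, total_seek_time)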

Types of Disk Scheduling Algorithms


There are several disk scheduling algorithms. We will discuss each of them in detail.
• FCFS (First Come First Serve)
• SSTF (Shortest Seek Time First)
• SCAN
• C-SCAN
• LOOK


• C-LOOK

1. FCFS (First Come First Serve)


FCFS is the simplest of all Disk Scheduling Algorithms. In FCFS, the requests are addressed
in the order they arrive in the disk queue. Let us understand this with the help of an example.
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of
the Read/Write head is 50.

So, total overhead movement = (82-50) + (170-82) + (170-43) + (140-43) + (140-24) + (24-16) + (190-16) = 642
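
The total can be checked with a short loop (a sketch, using the request queue and starting head position from the example above):

requests = [82, 170, 43, 140, 24, 16, 190]
head, total = 50, 0
for r in requests:                 # FCFS: serve strictly in arrival order
    total += abs(r - head)
    head = r
print(total)                       # 642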

Advantages of FCFS
Here are some of the advantages of First Come First Serve.
• Every request gets a fair chance
• No indefinite postponement
Disadvantages of FCFS
Here are some of the disadvantages of First Come First Serve.
• Does not try to optimize seek time


• May not provide the best possible service

2. SSTF (Shortest Seek Time First)


In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are
scheduled according to their calculated seek time. As a result, the request near the disk arm
will get executed first. SSTF is certainly an improvement over FCFS as it decreases the
average response time and increases the throughput of the system. Let us understand this with
the help of an example.
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of
the Read/Write head is 50.

Total overhead movement = (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170) = 208
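
A small sketch of the same calculation: at each step SSTF picks the pending request closest to the current head position.

requests = [82, 170, 43, 140, 24, 16, 190]
head, total = 50, 0
pending = list(requests)
while pending:
    nearest = min(pending, key=lambda r: abs(r - head))  # shortest seek first
    total += abs(nearest - head)
    head = nearest
    pending.remove(nearest)
print(total)                       # 208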

Advantages of Shortest Seek Time First


Here are some of the advantages of Shortest Seek Time First.
• The average Response Time decreases
• Throughput increases
Disadvantages of Shortest Seek Time First
Here are some of the disadvantages of Shortest Seek Time First.


• Overhead to calculate seek time in advance


• Can cause Starvation for a request if it has a higher seek time as compared to incoming
requests
• The high variance of response time as SSTF favors only some requests

3. SCAN
In the SCAN algorithm, the disk arm moves in a particular direction and services the requests
coming in its path; after reaching the end of the disk, it reverses its direction and again
services the requests arriving in its path. This algorithm works like an elevator and is hence
also known as the elevator algorithm. As a result, requests in the mid-range are serviced more
often, while those arriving just behind the disk arm have to wait.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is given that the disk arm should move “towards the larger value”.

Therefore, the total overhead movement (total distance covered by the disk arm) is calculated
as
= (199-50) + (199-16) = 332
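
A sketch of the same arithmetic, assuming the disk's last cylinder is 199 as in the example: the head sweeps up to the end of the disk and then back down to the lowest pending request.

requests = [82, 170, 43, 140, 24, 16, 190]
head, disk_end = 50, 199
lowest_below = min(r for r in requests if r < head)      # 16
total = (disk_end - head) + (disk_end - lowest_below)
print(total)                       # (199-50) + (199-16) = 332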
Advantages of SCAN Algorithm
Here are some of the advantages of the SCAN Algorithm.
• High throughput


• Low variance of response time


• Low average response time
Disadvantages of SCAN Algorithm
Here are some of the disadvantages of the SCAN Algorithm.
• Long waiting time for requests for locations just visited by disk arm

4. C-SCAN
In the SCAN algorithm, the disk arm re-scans the path it has just scanned after reversing its
direction. So it may happen that too many requests are waiting at the other end, while there
are zero or only a few requests pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, goes to the other end of the disk and starts servicing requests from
there. The disk arm thus moves in a circular fashion; the algorithm is otherwise similar to
SCAN and is hence known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is given that the disk arm should move “towards the larger value”.

So, the total overhead movement (total distance covered by the disk arm) is calculated as:
= (199-50) + (199-0) + (43-0) = 391
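
A sketch of the C-SCAN arithmetic for the same example: sweep up to cylinder 199, jump to cylinder 0, then continue up to the highest request below the starting position.

requests = [82, 170, 43, 140, 24, 16, 190]
head, disk_end = 50, 199
highest_below = max(r for r in requests if r < head)     # 43
total = (disk_end - head) + (disk_end - 0) + (highest_below - 0)
print(total)                       # (199-50) + 199 + 43 = 391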

Advantages of C-SCAN Algorithm


Here are some of the advantages of C-SCAN.
• Provides more uniform wait time compared to SCAN.


5. LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm except that the disk arm,
instead of going to the end of the disk, goes only to the last request to be serviced in front of
the head and then reverses its direction from there. Thus, it prevents the extra delay caused
by unnecessary traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is given that the disk arm should move “towards the larger value”.

So, the total overhead movement (total distance covered by the disk arm) is calculated as:
= (190-50) + (190-16) = 314
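
A sketch of the LOOK arithmetic for the same example: the head turns around at the last request (190) rather than at the physical end of the disk.

requests = [82, 170, 43, 140, 24, 16, 190]
head = 50
total = (max(requests) - head) + (max(requests) - min(requests))
print(total)                       # (190-50) + (190-16) = 314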

6. C-LOOK
Just as LOOK is similar to SCAN, C-LOOK is similar to the C-SCAN disk scheduling algorithm.
In C-LOOK, the disk arm, instead of going to the end of the disk, goes only to the last request
to be serviced in front of the head and then jumps to the last request at the other end. Thus,
it also prevents the extra delay caused by unnecessary traversal to the end of the disk.
Example:


Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is given that the disk arm should move “towards the larger value”.

So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) + (43-16) = 341
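
A sketch of the C-LOOK arithmetic for the same example: sweep up to the last request (190), jump to the lowest request (16), then continue up to the last request below the starting position (43).

requests = [82, 170, 43, 140, 24, 16, 190]
head = 50
highest_below = max(r for r in requests if r < head)     # 43
total = (max(requests) - head) + (max(requests) - min(requests)) + (highest_below - min(requests))
print(total)                       # (190-50) + (190-16) + (43-16) = 341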
