UNIT 2: Operating System
File Systems: File Concept, User’s and System Programmer’s view of File System, Disk
Organization, Tape Organization, Different Modules of a File System, Disk Space Allocation Methods
– Contiguous, Linked, Indexed. Directory Structures, File Protection, System Calls for File
Management, Disk Scheduling Algorithms.
File Concept
A file system is a method an operating system uses to store, organize, and
manage files and directories on a storage device. Some common types of file
systems include:
1. FAT (File Allocation Table): An older file system used by older versions of
Windows and other operating systems.
2. NTFS (New Technology File System): A modern file system used by
Windows. It supports features such as file and folder permissions,
compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and
Unix-based operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their
Macs and iOS devices.
The advantages of using a file system include:
1. Organization: A file system allows files to be organized into directories
and subdirectories, making it easier to manage and locate files.
2. Data protection: File systems often include features such as file and
folder permissions, backup and restore, and error detection and
correction, to protect data from loss or corruption.
3. Improved performance: A well-designed file system can improve the
performance of reading and writing data by organizing it efficiently on
disk.
Disadvantages of using a file system include:
1. Compatibility issues: Different file systems may not be compatible with
each other, making it difficult to transfer data between different operating
systems.
2. Disk space overhead: File systems may use some disk space to store
metadata and other overhead information, reducing the amount of space
available for user data.
3. Vulnerability: File systems can be vulnerable to data corruption,
malware, and other security threats, which can compromise the stability
and security of the system.
A file is a collection of related information that is recorded on secondary
storage; in other words, a file is a collection of logically related entities. From the user’s
perspective, a file is the smallest allotment of logical secondary storage.
The name of a file is divided into two parts, a name and an extension, separated
by a period.
File attributes and operations:
File attributes include fields such as the file’s name, size, type, and author, and
typical file operations include create, open, read, write, append, and close.
Common file types, their usual extensions, and their functions are listed below:

File type        Usual extension        Function
Source Code      c, java, pas, asm, a   Source code in various languages
Word Processor   wp, tex, rrf, doc      Various word processor formats
Archive          arc, zip, tar          Related files grouped into one compressed file
FILE DIRECTORIES:
A collection of files is a file directory. The directory contains information about
the files, including attributes, location, and ownership. Much of this
information, especially that concerned with storage, is managed by the
operating system. The directory is itself a file, accessible by various file
management routines.
TWO-LEVEL DIRECTORY
In this structure, a separate directory is maintained for each user.
Path name: Because of the two levels, every file has a path name that is used to
locate that file.
Different users can now have files with the same name.
Searching is efficient in this method.
TREE-STRUCTURED DIRECTORY:
The directory is maintained in the form of a tree. Searching is efficient and there
is also a grouping capability. We have an absolute or relative path name for a
file.
Disadvantages of linked allocation:
Internal fragmentation exists in the last disk block of the file.
There is an overhead of maintaining the pointer in every disk block.
If the pointer of any disk block is lost, the file will be truncated.
It supports only the sequential access of files.
3. Indexed Allocation –
It addresses many of the problems of contiguous and chained allocation. In
this case, the file allocation table contains a separate one-level index for
each file: The index has one entry for each block allocated to the file.
Allocation may be on the basis of fixed-size blocks or variable-sized blocks.
Allocation by blocks eliminates external fragmentation, whereas allocation by
variable-size blocks improves locality. This allocation technique supports
both sequential and direct access to the file and thus is the most popular
form of file allocation.
Disk Free Space Management :
Just as the space that is allocated to files must be managed, so the space
that is not currently allocated to any file must be managed. To perform any of
the file allocation techniques, it is necessary to know what blocks on the disk
are available. Thus we need a disk allocation table in addition to a file
allocation table. The following are the approaches used for free space
management.
1. Bit Tables : This method uses a vector containing one bit for each block
on the disk. Each entry for a 0 corresponds to a free block and each 1
corresponds to a block in use.
For example: 00011010111100110001
In this vector every bit corresponds to a particular block; 0 implies that the
block is free and 1 implies that the block is already
occupied. A bit table has the advantage that it is relatively easy to find
one or a contiguous group of free blocks. Thus, a bit table works well with
any of the file allocation methods. Another advantage is that it is as small
as possible.
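As an illustration (not part of the original notes), the following C sketch shows how a free block might be located and marked in such a bit vector. The names bitmap, NBLOCKS, and find_free_block are assumptions made for this example; as in the text, a 0 bit means the block is free and a 1 bit means it is in use.

#include <stdio.h>
#include <stdint.h>
#include <limits.h>

/* Sketch of a bit-table free-space scan: bit i is 0 if block i is free. */
#define NBLOCKS 1024
static uint8_t bitmap[NBLOCKS / CHAR_BIT];  /* hypothetical in-memory copy of the bit table */

/* Return the index of the first free block and mark it used, or -1 if the disk is full. */
int find_free_block(void)
{
    for (int i = 0; i < NBLOCKS; i++) {
        if ((bitmap[i / CHAR_BIT] & (1u << (i % CHAR_BIT))) == 0) {
            bitmap[i / CHAR_BIT] |= (uint8_t)(1u << (i % CHAR_BIT));  /* mark as allocated */
            return i;
        }
    }
    return -1;  /* no free block available */
}

int main(void)
{
    printf("first free block: %d\n", find_free_block());  /* prints 0 for an empty table */
    return 0;
}

A real file system would keep the bit table on disk (often cached in memory) and update it whenever blocks are allocated or freed.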
2. Free Block List : In this method, each block is assigned a number
sequentially and the list of the numbers of all free blocks is maintained in
a reserved block of the disk.
User View
The user view depends on the system interface that is used by the users. The
different types of user view experiences can be explained as follows −
If the user is using a personal computer, the operating system is largely designed to
make the interaction easy. Some attention is also paid to the performance of the
system, but there is no need for the operating system to worry about resource
utilization. This is because the personal computer uses all the resources available
and there is no sharing.
If the user is using a system connected to a mainframe or a minicomputer, the
operating system is largely concerned with resource utilization. This is because there
may be multiple terminals connected to the mainframe and the operating system
makes sure that all the resources such as CPU, memory, I/O devices etc. are divided
uniformly between them.
If the user is sitting on a workstation connected to other workstations through
networks, then the operating system needs to focus on both individual usage of
resources and sharing through the network. This happens because the workstation
exclusively uses its own resources but it also needs to share files etc. with other
workstations across the network.
If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery
level of the device is also taken into account.
There are some devices that involve little or no user view because there is no
interaction with the users. Examples are embedded computers in home devices,
automobiles etc.
System View
According to the computer system, the operating system is the bridge between
applications and hardware. It is most intimate with the hardware and is used to
control it as required.
The different types of system view for operating system can be explained as follows:
The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc. that
are required by processes for execution. It is the duty of the operating system to
allocate these resources judiciously to the processes so that the computer system
can run as smoothly as possible.
The operating system can also work as a control program. It manages all the
processes and I/O devices so that the computer system works smoothly and there
are no errors. It makes sure that the I/O devices work in a proper manner without
creating problems.
Operating systems can also be viewed as a way to make using hardware easier.
Computers exist to solve user problems, but it is not easy to
work directly with the computer hardware. So, operating systems were developed to
communicate with the hardware more easily.
An operating system can also be considered as a program running at all times in the
background of a computer system (known as the kernel) and handling all the
application programs. This is the definition of the operating system that is generally
followed.
Tape Organization
Magnetic tape transport includes the robotic, mechanical, and electronic
components to support the methods and control structure for a magnetic tape unit.
The tape is a strip of plastic coated with a magnetic recording medium.
Magnetic tapes are used in many organizations to save data files. Magnetic tapes
use a read-write mechanism, which writes data on or reads data from the
magnetic tape. Tapes save data in a sequential manner. In
this sequential processing, the device must start searching at the beginning and
check each record until the desired information is available.
Magnetic tape is a low-cost medium for storage because it can save a huge
number of binary digits, bytes, or frames on each inch of the tape. The benefits of
magnetic tape include large storage capacity, low cost, high data density, fast
transfer rate, flexibility, and ease of use.
Magnetic tape units can be stopped, started to move forward or in reverse, or
rewound. However, they cannot be started or stopped fast
enough between individual characters. For this reason, data is recorded in blocks
defined as records. Gaps of unrecorded tape are added between records where the
tape can be stopped.
The tape begins moving while in a gap and reaches its full speed by the
time it arrives at the next record. Each record on the tape has an identification bit
pattern at the beginning and end. By reading the bit pattern at the beginning, the
tape control identifies the record.
FAT: FAT is a type of file system which was developed for hard drives. It
stands for File Allocation Table and was first introduced in 1977; it
uses 12 or 16 bits for each cluster entry in the file
allocation table (FAT). On hard drives and other computer systems, it
helps to manage files on Microsoft operating systems. It is also often found in
devices like digital cameras, flash memory, and other portable devices,
where it is used to store file information. It also helps to extend the life
of a hard drive, as it minimizes wear and tear on the hard disc. Today,
FAT is no longer the default file system of later versions of Microsoft Windows
like Windows XP, Vista, 7, and 10, which use NTFS. FAT8, FAT12, FAT16, and
FAT32 are the different types of FAT.
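To make the file allocation table concrete, here is a small, hedged C sketch of following a FAT chain. The 16-bit entries, the FAT_EOC end-of-chain marker, and the table contents below are illustrative assumptions, not the exact on-disk layout of any particular FAT variant.

#include <stdio.h>
#include <stdint.h>

#define FAT_ENTRIES 16
#define FAT_EOC 0xFFFF  /* assumed end-of-chain marker */

/* Illustrative table: one file occupies clusters 2 -> 5 -> 7. */
static uint16_t fat[FAT_ENTRIES] = { 0, 0, 5, FAT_EOC, 0, 7, 0, FAT_EOC };

/* Print every cluster of the file whose first cluster is 'start'. */
static void print_chain(uint16_t start)
{
    for (uint16_t c = start; c != FAT_EOC; c = fat[c])
        printf("cluster %u\n", (unsigned)c);
}

int main(void)
{
    print_chain(2);  /* prints clusters 2, 5, 7 */
    return 0;
}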
GFS: A GFS is a file system which stands for Global File System. It
enables multiple computers to act as an integrated
machine and was first developed at the University of Minnesota. It is now
maintained by Red Hat. When the physical distance between two or more
computers is large and they are unable to send files directly to each
other, a GFS file system makes them capable of sharing a group of files
directly. A computer can organize its I/O to preserve file systems with the
help of a global file system.
HFS: HFS (Hierarchical File System) is the file system that is used on a
Macintosh computer for creating a directory at the time a hard disk is
formatted. Generally, its basic function is to organize or hold the files on a
Macintosh hard disk. Apple has not supported writing to or
formatting HFS disks since OS X came on the market. Also, HFS-
formatted drives are not recognized by Windows computers, as HFS is a
Macintosh format. Windows hard drives are instead formatted with the FAT32 or
NTFS file systems.
NTFS: NTFS is the file system, which stands for NT file system and stores
and retrieves files on Windows NT operating system and other versions of
Windows like Windows 2000, Windows XP, Windows 7, and Windows 10.
Sometimes, it is known as the New Technology File System. As compared
to the FAT and HPFS file system, it provides better methods of file
recovery and data protection and offers a number of improvements in
terms of extendibility, security, and performance.
UDF: A UDF is a file system which stands for Universal Disk Format; it was
first developed by OSTA (Optical Storage Technology Association) in 1995
to ensure consistency among data written to various optical media. It is
used with CD-ROMs and DVD-ROMs and is supported on all operating
systems. It is now also used in the process of writing to CD-Rs and CD-RWs,
called packet writing.
On a disk storage medium, a disk file system has the ability to randomly
address data within a small amount of time. Without regard to the
sequential location of the data, multiple users can access several pieces of data on
the disk with the help of a disk file system.
A tape file system is a file system and tape format used to hold files on
tape. As compared to disks, magnetic tapes take much longer
to access data, which poses challenges for a general-
purpose file system in terms of creation and efficient management.
Some programs need to make several file system changes, or, if one or
more of the changes fails for any reason, to make none of the changes at all. For
instance, a program may write configuration files, libraries, and
executables while installing or updating software. The
software may be unusable or broken if it is stopped while
updating or installing. Also, the entire system may be left in an unusable
state if the process of installing or updating the software is incomplete.
In the 1970s, disk and digital tape devices were too expensive for some
early microcomputer users, so a few cheaper basic data storage systems
that used common audio cassette tape were designed. The
user was instructed to press "RECORD" on the cassette recorder when the
system needed to write data, and then to press "RETURN" on the keyboard
to notify the system. Similarly, the user needed to press the "PLAY" button
on the cassette recorder when the system needed to read data.
Advantages:
This is very flexible in terms of file size. File size can be increased easily
since the system does not have to look for a contiguous chunk of
memory.
This method does not suffer from external fragmentation. This makes it
relatively better in terms of memory utilization.
Disadvantages:
Because the file blocks are distributed randomly on the disk, a large
number of seeks are needed to access every block individually. This
makes linked allocation slower.
It does not support random or direct access. We cannot directly access
the blocks of a file. Block k of a file can be accessed only by traversing k
blocks sequentially (sequential access) from the starting block of the file
via block pointers (a small sketch of this traversal follows this list).
Pointers required in the linked allocation incur some extra overhead.
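The sequential-access limitation in the list above can be illustrated with a short C sketch; the array next_block and the function kth_block are invented names for this example. Reaching block k of a file means following k pointers from its starting block.

#include <stdio.h>

#define NBLOCKS 32
#define END_OF_FILE -1

static int next_block[NBLOCKS];  /* next_block[b] = block following b, or END_OF_FILE */

/* Follow the chain to reach the k-th block of the file that starts at 'start'. */
static int kth_block(int start, int k)
{
    int b = start;
    while (k-- > 0 && b != END_OF_FILE)
        b = next_block[b];
    return b;  /* END_OF_FILE if the file has fewer than k+1 blocks */
}

int main(void)
{
    for (int i = 0; i < NBLOCKS; i++)
        next_block[i] = END_OF_FILE;
    /* a file occupying disk blocks 9 -> 16 -> 1 -> 10 */
    next_block[9] = 16;
    next_block[16] = 1;
    next_block[1] = 10;
    printf("block #2 of the file is disk block %d\n", kth_block(9, 2));  /* prints 1 */
    return 0;
}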
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the
pointers to all the blocks occupied by a file. Each file has its own index block.
The ith entry in the index block contains the disk address of the ith file block.
The directory entry contains the address of the index block.
Advantages:
This supports direct access to the blocks occupied by the file and
therefore provides fast access to the file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
The pointer overhead for indexed allocation is greater than linked
allocation.
For very small files, say files that span only 2-3 blocks, indexed
allocation would keep one entire block (the index block) for the pointers, which
is inefficient in terms of memory utilization. However, in linked allocation
we lose the space of only 1 pointer per block.
For files that are very large, a single index block may not be able to hold all the
pointers.
Following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for
holding the pointers. Every index block would then contain a pointer or the
address to the next index block.
2. Multilevel index: In this policy, a first-level index block is used to point to
the second-level index blocks, which in turn point to the disk blocks
occupied by the file. This can be extended to 3 or more levels depending
on the maximum file size.
3. Combined Scheme: In this scheme, a special block called the Inode
(information node) contains all the information about the file, such as the
name, size, authority, etc., and the remaining space of the Inode is used to
store the disk block addresses which contain the actual file data. The first few
of these pointers in the Inode point to the direct blocks, i.e. the pointers contain
the addresses of the disk blocks that contain the data of the file. The next few
pointers point to indirect blocks. Indirect blocks may be single indirect, double
indirect, or triple indirect. A single indirect block is a disk block that does not
contain the file data but the disk addresses of the blocks that contain the file
data. Similarly, double indirect blocks do not contain the file data but the disk
addresses of the blocks that contain the addresses of the blocks containing
the file data.
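As a hedged illustration of the combined scheme, the C structure below sketches an inode-like layout with direct, single indirect, double indirect, and triple indirect pointers. The field names and the choice of 12 direct pointers are assumptions for this example (they resemble, but are not taken verbatim from, the classic UNIX inode).

#include <stdio.h>
#include <stdint.h>

#define NDIRECT 12

struct inode_sketch {
    uint32_t size;             /* file size in bytes */
    uint16_t permissions;      /* authority / protection bits */
    uint32_t direct[NDIRECT];  /* addresses of blocks holding file data */
    uint32_t single_indirect;  /* block holding addresses of data blocks */
    uint32_t double_indirect;  /* block holding addresses of single indirect blocks */
    uint32_t triple_indirect;  /* block holding addresses of double indirect blocks */
};

int main(void)
{
    printf("inode sketch size: %zu bytes\n", sizeof(struct inode_sketch));
    return 0;
}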
Directory Structures
A directory is a container that is used to contain folders and files. It
organizes files and folders in a hierarchical manner.
There are several logical structures of a directory, these are given below.
Single-level directory –
The single-level directory is the simplest directory structure. In it, all files
are contained in the same directory which makes it easy to support and
understand.
A single-level directory has a significant limitation, however, when the
number of files increases or when the system has more than one user.
Since all the files are in the same directory, they must have unique
names. If two users call their data file "test", the unique-name rule is
violated.
Advantages:
Since it is a single directory, so its implementation is very easy.
If the files are smaller in size, searching will become faster.
The operations like file creation, searching, deletion, updating are very
easy in such a directory structure.
Logical Organization: Directory structures help to logically organize files
and directories in a hierarchical structure. This provides an easy way to
navigate and manage files, making it easier for users to access the data
they need.
Increased Efficiency: Directory structures can increase the efficiency of
the file system by reducing the time required to search for files. This is
because directory structures are optimized for fast file access, allowing
users to quickly locate the file they need.
Improved Security: Directory structures can provide better security for
files by allowing access to be restricted at the directory level. This helps
to prevent unauthorized access to sensitive data and ensures that
important files are protected.
Facilitates Backup and Recovery: Directory structures make it easier to
backup and recover files in the event of a system failure or data loss. By
storing related files in the same directory, it is easier to locate and backup
all the files that need to be protected.
Scalability: Directory structures are scalable, making it easy to add new
directories and files as needed. This helps to accommodate growth in the
system and makes it easier to manage large amounts of data.
Disadvantages:
There may be a chance of name collision because two files cannot have the
same name.
Searching will become time-consuming if the directory is large.
Files of the same type cannot be grouped together.
Two-level directory –
As we have seen, a single-level directory often leads to confusion of file
names among different users. The solution to this problem is to create a
separate directory for each user.
What is a directory?
Directory can be defined as the listing of the related files on the disk. The
directory may store some or the entire file attributes.
Each partition must have at least one directory in which, all the files of the
partition can be listed. A directory entry is maintained for each file in the
directory which stores all the information related to that file.
A directory can be viewed as a file which contains the metadata of a group of
files. The following operations can be performed on a directory:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
File Protection
In computer systems, a lot of user information is stored, and the objective of the
operating system is to keep the user's data safe from improper
access to the system. Protection can be provided in a number of ways. For a
single-user laptop system, we might provide protection by locking the computer in
a desk drawer or file cabinet. For multi-user systems, different mechanisms
are used for protection.
Types of Access :
Files which are directly accessible to other users need protection, while
files which are not accessible to other users do not require
any kind of protection. Protection mechanisms provide
controlled access by limiting the types of access to a file. Access
can be granted or denied to a user depending on several factors, one of
which is the type of access required. Several different types of operations
can be controlled:
Read – Reading from a file.
Write – Writing or rewriting the file.
Execute – Loading the file into memory and executing it.
Append – Writing new information at the end of an already existing file.
Delete – Deleting a file which is no longer needed and reusing its space for
other data.
List – List the name and attributes of the file.
Operations like renaming, editing, and copying an existing file can also be
controlled. There are many protection mechanisms; each has
different advantages and disadvantages and must be appropriate for
the intended application.
Access Control :
Different users may access a file in different ways. The most
general way of protection is to associate identity-dependent access with all
the files and directories through a list called an access-control list (ACL), which
specifies the names of the users and the types of access associated with each
user. The main problem with access lists is their length. If we want to
allow everyone to read a file, we must list all the users with read access.
This technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially if
we do not know in advance the list of users in the system.
The directory entry, which previously was of fixed size, now needs to be of
variable size, which complicates space
management. These problems can be resolved by using a condensed
version of the access list. To condense the length of the access-control list,
many systems recognize three classifications of users in connection with each
file:
Owner – Owner is the user who has created the file.
Group – A group is a set of members who have similar needs and share
the same file.
Universe – In the system, all other users are under the category called
universe.
The most common recent approach is to combine access-control lists with
the normal general owner, group, and universe access control scheme. For
example: Solaris uses the three categories of access by default but allows
access-control lists to be added to specific files and directories when more
fine-grained access control is desired.
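As a small, hedged POSIX example of the owner/group/universe scheme, the C program below sets a file's permission bits so that the owner may read and write, the group may read, and the universe has no access. The file name notes.txt is an assumption made for the illustration.

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    mode_t mode = S_IRUSR | S_IWUSR | S_IRGRP;  /* rw- r-- --- */

    if (chmod("notes.txt", mode) == -1) {  /* hypothetical file name */
        perror("chmod");
        return 1;
    }
    return 0;
}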
Other Protection Approaches:
Access to any system can also be controlled by a password. If the
password is chosen randomly and changed often, this can effectively limit
access to a file.
The use of passwords has a few disadvantages:
The number of passwords can be very large, making them difficult to
remember.
If one password is used for all the files, then once it is discovered, all files
are accessible; protection is on an all-or-none basis.
Introduction
File protection in an operating system is the process of securing files from
unauthorized access, alteration, or deletion. It is critical for data security and
ensures that sensitive information remains confidential and secure. Operating
systems provide various mechanisms and techniques such as file permissions,
encryption, access control lists, auditing, and physical file security to protect files.
Proper file protection involves user authentication, authorization, access control,
encryption, and auditing. Ongoing updates and patches are also necessary to
prevent security breaches. File protection in an operating system is essential to
maintain data security and minimize the risk of data breaches and other security
incidents.
Conclusion
In conclusion, file protection mechanisms are essential for ensuring data security,
compliance, and business continuity in modern operating systems. These
mechanisms provide several advantages, including data security, compliance with
regulatory requirements, business continuity, increased productivity, enhanced
collaboration, and reputation. However, there are also some disadvantages, such as
increased system overhead and complexity, and potential limitations on user
flexibility. Despite these limitations, the benefits of file protection mechanisms
outweigh the disadvantages, and organizations should implement appropriate file
protection mechanisms to protect their sensitive data and ensure their operations
are not impacted by security incidents.
Below are some examples of how a system call varies from a user
function.
1. A system call function may create and use kernel processes to execute the
asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system
call with kernel-mode privilege executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that
are not present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.
If the request is permitted, the kernel performs the requested action, like
creating or deleting a file. The application receives the kernel's
output as its input and resumes its procedure once the input is received.
When the operation is finished, the kernel returns the results to the
application and moves the data from kernel space to user space in
memory.
A simple system call may take a few nanoseconds to provide the result, like
retrieving the system date and time. A more complicated system call,
such as connecting to a network device, may take a few seconds. Most
operating systems launch a distinct kernel thread for each system call to
avoid bottlenecks. Modern operating systems are multi-threaded, which
means they can handle various system calls at the same time.
Now, you will learn about all the different types of system calls one-by-
one.
Process Control
Process control is the system call that is used to direct the processes.
Some process control examples include create process, load, execute,
abort, end, and terminate process.
File Management
File management is a system call that is used to handle the files. Some
file management examples include creating a file, deleting a file, open, close,
read, write, etc.
Device Management
Device management is a system call that is used to deal with devices.
Some examples of device management include request device, release
device, read, write, and get device attributes.
Information Maintenance
Information maintenance is a system call that is used to maintain
information. Some examples of information maintenance include getting or
setting system data and getting or setting the time or date.
Communication
Communication is a system call that is used for communication. There are
some examples of communication, including creating and deleting
communication connections, and sending and receiving messages.
open()
The open() system call allows you to access a file on a file system. It
allocates resources to the file and provides a handle that the process may
refer to. A file can be opened by many processes at once or by a single
process only, depending on the file system and its structure.
read()
It is used to obtain data from a file on the file system. It accepts three
arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file to be read is identified by its file descriptor; it must have been
opened using open() before it can be read.
wait()
In some systems, a process may have to wait for another process to
complete its execution before proceeding. When a parent process makes
a child process, the parent process execution is suspended until the child
process is finished. The wait() system call is used to suspend the parent
process. Once the child process has completed its execution, control is
returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This
system call is one way for a program to generate data. It takes three
arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes generate clones of themselves using the fork() system call. It is
one of the most common ways to create processes in operating systems.
When a parent process creates a child process with fork(), both continue to
execute; typically the parent then uses the wait() system call, which suspends it
until the child process has completed its execution, after which control is
returned to the parent process.
close()
It is used to end file system access. When this system call is invoked, it
signifies that the program no longer requires the file, and the buffers are
flushed, the file information is altered, and the file resources are de-
allocated as a result.
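The minimal C sketch below ties the open(), read(), write(), and close() calls described above together by copying a file to standard output. The file name input.txt and the 512-byte buffer are assumptions for the example.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[512];
    ssize_t n;

    int fd = open("input.txt", O_RDONLY);  /* obtain a file descriptor */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    /* Copy the file to standard output, one buffer at a time. */
    while ((n = read(fd, buf, sizeof buf)) > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);  /* release the descriptor */
    return 0;
}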
exec()
When an executable file replaces an earlier executable file in an already
executing process, this system call is invoked. As a new process is
not built, the old process identification stays, but the new program
replaces the process's data, stack, heap, and so on.
exit()
The exit() is a system call that is used to end program execution. This call
indicates that the thread execution is complete, which is especially useful
in multi-threaded environments. The operating system reclaims resources
spent by the process following the use of the exit() system function.
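Similarly, the hedged C sketch below shows fork(), exec(), wait(), and exit() working together: the child replaces its image with the ls program while the parent waits for it to finish. The choice of /bin/ls is only an assumption for the illustration.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();  /* clone the current process */

    if (pid == 0) {
        /* child: replace this process image with the "ls" program */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl");  /* reached only if exec fails */
        exit(1);
    } else if (pid > 0) {
        int status;
        wait(&status);  /* parent suspends until the child terminates */
        printf("child finished with status %d\n", status);
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}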
So, for SSTF the total overhead movement (total distance covered by the disk
arm) = (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170) = 208
(a C sketch of this calculation appears after the advantages and disadvantages below).
Advantages:
Average Response Time decreases
Throughput increases
Disadvantages:
Overhead to calculate seek time in advance
Can cause Starvation for a request if it has a higher seek time as
compared to incoming requests
High variance of response time as SSTF favors only some requests
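As a hedged C sketch of the SSTF calculation above, the function below repeatedly picks the pending request closest to the current head position and sums the head movement. The function name and the fixed-size served array are assumptions for this example; for the request list 82, 170, 43, 140, 24, 16, 190 with the head at 50 it prints 208, matching the figure above.

#include <stdio.h>
#include <stdlib.h>

/* Total head movement under SSTF, assuming at most 64 requests for this sketch. */
static int sstf_total_movement(int head, int req[], int n)
{
    int served[64] = {0};
    int total = 0;

    for (int done = 0; done < n; done++) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            int dist = abs(req[i] - head);
            if (!served[i] && (best == -1 || dist < best_dist)) {
                best = i;
                best_dist = dist;
            }
        }
        served[best] = 1;  /* service the closest pending request */
        total += best_dist;
        head = req[best];
    }
    return total;
}

int main(void)
{
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    printf("%d\n", sstf_total_movement(50, req, 7));  /* prints 208 */
    return 0;
}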
SCAN: In SCAN algorithm the disk arm moves in a particular direction and
services the requests coming in its path and after reaching the end of the
disk, it reverses its direction and again services the request arriving in its
path. So, this algorithm works as an elevator and is hence also known as
an elevator algorithm. As a result, the requests at the midrange are
serviced more and those arriving behind the disk arm will have to wait.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the
Read/Write arm is at 50, and it is also given that the disk arm should
move "towards the larger value".
Therefore, the total overhead movement (total distance covered by the disk
arm) is calculated as:
= (199-50) + (199-16) = 332
Advantages:
High throughput
Low variance of response time
Average response time
Disadvantages:
Long waiting time for requests for locations just visited by disk arm
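A similar hedged sketch for SCAN follows. It assumes the arm first sweeps toward the larger cylinder numbers up to the last cylinder (199 here) and then reverses, which reproduces the (199-50)+(199-16) = 332 figure from the worked example above; the function names are assumptions for this illustration.

#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Total head movement under SCAN when the arm moves toward larger values first. */
static int scan_total_movement(int head, int req[], int n, int max_cyl)
{
    qsort(req, (size_t)n, sizeof(int), cmp_int);

    int smallest = req[0];            /* farthest request reached on the return sweep */
    int total = max_cyl - head;       /* sweep up to the end of the disk */
    if (smallest < head)
        total += max_cyl - smallest;  /* reverse and sweep back to the smallest request */
    return total;
}

int main(void)
{
    int req[] = {82, 170, 43, 140, 24, 16, 190};
    printf("%d\n", scan_total_movement(50, req, 7, 199));  /* prints 332 */
    return 0;
}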
CSCAN: In the SCAN algorithm, the disk arm again scans the path that has
been scanned, after reversing its direction. So, it may be possible that too
many requests are waiting at the other end or there may be zero or few
requests pending at the scanned area.
These situations are avoided in CSCAN algorithm in which the disk arm
instead of reversing its direction goes to the other end of the disk and starts
servicing the requests from there. So, the disk arm moves in a circular
fashion and this algorithm is also similar to SCAN algorithm and hence it is
known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the
Read/Write arm is at 50, and it is also given that the disk arm should
move "towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
= (199-50) + (199-0) + (43-0) = 391
Advantages:
Provides more uniform wait time compared to SCAN
LOOK: It is similar to the SCAN disk scheduling algorithm except for the
difference that the disk arm, instead of going to the end of the disk, goes
only to the last request to be serviced in front of the head and then
reverses its direction from there. Thus it prevents the extra delay
which occurred due to unnecessary traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the
Read/Write arm is at 50, and it is also given that the disk arm should
move "towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
= (190-50) + (190-16) = 314
CLOOK: As LOOK is similar to the SCAN algorithm, CLOOK is similar to the
CSCAN disk scheduling algorithm. In CLOOK, the disk arm, instead of going to
the end of the disk, goes only to the last request to be serviced in front of the
head and then, from there, jumps to the last request at the other end and
services the remaining requests from there.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the
Read/Write arm is at 50, and it is also given that the disk arm should
move "towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
= (190-50) + (190-16) + (43-16) = 341
RSS: It stands for random scheduling and, just like its name, it is random in
nature. It is used in situations where scheduling involves random attributes such
as random processing time, random due dates, random weights, and
stochastic machine breakdowns, which is why
it is usually used for analysis and simulation.
LIFO: In the LIFO (Last In, First Out) algorithm, the newest jobs are serviced
before the existing ones, i.e. the request that entered last is serviced
first, and then the rest in the same order.
Advantages
Maximizes locality and resource utilization
Disadvantages
It can seem unfair to other requests, and if new requests keep
coming in, it causes starvation of the old and existing ones.
N-STEP SCAN: It is also known as the N-STEP LOOK algorithm. In this,
a buffer is created for N requests. All requests belonging to a buffer are
serviced in one go. Once the buffer is full, no new requests are
kept in this buffer; they are sent to another one. When these N
requests have been serviced, the next N requests are taken up, and in this
way every request gets guaranteed service.
Advantages
It eliminates the starvation of requests completely
FSCAN: This algorithm uses two sub-queues. During the scan, all
requests in the first queue are serviced and the new incoming requests
are added to the second queue. All new requests are kept on halt until the
existing requests in the first queue are serviced.
Advantages
FSCAN along with N-Step-SCAN prevents “arm stickiness” (phenomena
in I/O scheduling where the scheduling algorithm continues to service
requests at or near the current sector and thus prevents any seeking)
Each algorithm is unique in its own way. Overall Performance depends on
the number and type of requests.
Note: Average Rotational latency is generally taken as 1/2(Rotational
latency).
Exercise
1) Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time
the disk arm is at cylinder 100, and there is a queue of disk access requests
for cylinders 30, 85, 90, 100, 105, 110, 135, and 145. If Shortest-Seek Time
First (SSTF) is being used for scheduling the disk access, the request for
cylinder 90 is serviced after servicing ____________ number of requests.
(GATE CS 2014) (A) 1 (B) 2 (C) 3 (D) 4
2) Consider an operating system capable of loading and executing a single
sequential user process at a time. The disk head scheduling algorithm used
is First Come First Served (FCFS). If FCFS is replaced by Shortest Seek
Time First (SSTF), claimed by the vendor to give 50% better benchmark
results, what is the expected improvement in the I/O performance of user
programs? (GATE CS 2004) (A) 50% (B) 40% (C) 25% (D) 0%
3) Suppose the following disk request sequence (track numbers) for a disk
with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that the
initial position of the R/W head is on track 50. The additional distance that
will be traversed by the R/W head when the Shortest Seek Time First (SSTF)
algorithm is used compared to the SCAN (Elevator) algorithm (assuming that
SCAN algorithm moves towards 100 when it starts execution) is _________
tracks. (A) 8 (B) 9 (C) 10 (D) 11
4) Consider a typical disk that rotates at 15000 rotations per minute (RPM)
and has a transfer rate of 50 × 10^6 bytes/sec. If the average seek time of
the disk is twice the average rotational delay and the controller’s transfer
time is 10 times the disk transfer time, the average time (in milliseconds) to
read or write a 512-byte sector of the disk is _____________ .
Low-level formatting divides the disk into sectors before storing data, so that the
disk controller can read and write them. Each sector typically consists of a
header, a data area (usually 512 bytes of data), and a trailer containing an error
correction code (ECC). Before it can use a disk to preserve files, the operating
system still has to record its own data structures on the disk.
This is conducted in two stages:
1. Divide the disk into multiple groups of cylinders. Each is treated as a logical
disk.
2. Logical formatting, or "creating the file system": the OS stores the initial file
system data structures on the disk, including maps of free and allocated space.
For efficiency, most file systems group blocks into clusters. Disk I/O runs in
blocks; file I/O runs in clusters.
For example, sector sizes can be 256, 512, or 1,024 bytes. If a disk is formatted
with a larger sector size, fewer sectors can fit on each track.
As a result, fewer headers and trailers are written on each track and more
space is available for user data. Some operating systems can handle only a
sector size of 512 bytes.
The operating system keeps its own data structures on the disk before it can use
the disk to store files. It performs this with the following two steps:
1. It partitions the disk into one or more groups of cylinders. Each partition is
treated by the OS as a separate disk.
2. Logical formatting: that is, the creation of the file system.
In order to increase efficiency, the file system groups blocks into chunks called
clusters.
Some operating systems give special programs the ability to use a disk
partition as a large sequential array of logical blocks, without any file-system
data structures. This array is sometimes called the raw disk, and I/O to this
array is called raw I/O.
Boot block:
Bad Blocks:
Tape Organization
Magnetic drums, magnetic tapes, and magnetic disks are types of magnetic
memory. These memories store data using magnetic properties. Here, magnetic
tape is explained in brief.
Magnetic Tape memory :
In magnetic tape, only one side of the ribbon is used for storing data. It is a
sequential memory which contains a thin plastic ribbon coated with magnetic
oxide to store data. Data read/write speed is slower because of
sequential access. It is highly reliable, and a magnetic tape drive is required for
writing and reading data.
The width of the ribbon varies from 4 mm to 1 inch, and it has a storage
capacity of 100 MB to 200 GB.
Let’s see various advantages and disadvantages of Magnetic Tape memory.
Advantages :
1. These are inexpensive, i.e., low cost memories.
2. It provides backup or archival storage.
3. It can be used for large files.
4. It can be used for copying from disk files.
5. It is a reusable memory.
6. It is compact and easy to store on racks.
Disadvantages :
1. Sequential access is a disadvantage; it does not allow random or direct access.
2. It requires care in storage, i.e., it is vulnerable to humidity and dust and needs a
suitable environment.
3. Stored data cannot be easily updated or modified, i.e., it is difficult to make
updates to the data.