
UNIT 2.

File Systems: File Concept, User’s and System Programmer’s view of File System, Disk
Organization, Tape Organization, Different Modules of a File System, Disk Space Allocation Methods
– Contiguous, Linked, Indexed. Directory Structures, File Protection, System Calls for File
Management, Disk Scheduling Algorithms.

File Concept
A file system is a method an operating system uses to store, organize, and
manage files and directories on a storage device. Some common types of file
systems include:
1. FAT (File Allocation Table): An older file system used by older versions of
Windows and other operating systems.
2. NTFS (New Technology File System): A modern file system used by
Windows. It supports features such as file and folder permissions,
compression, and encryption.
3. ext (Extended File System): A file system commonly used on Linux and
Unix-based operating systems.
4. HFS (Hierarchical File System): A file system used by macOS.
5. APFS (Apple File System): A new file system introduced by Apple for their
Macs and iOS devices.
The advantages of using a file system include:
1. Organization: A file system allows files to be organized into directories
and subdirectories, making it easier to manage and locate files.
2. Data protection: File systems often include features such as file and
folder permissions, backup and restore, and error detection and
correction, to protect data from loss or corruption.
3. Improved performance: A well-designed file system can improve the
performance of reading and writing data by organizing it efficiently on
disk.
Disadvantages of using a file system include:
1. Compatibility issues: Different file systems may not be compatible with
each other, making it difficult to transfer data between different operating
systems.
2. Disk space overhead: File systems may use some disk space to store
metadata and other overhead information, reducing the amount of space
available for user data.
3. Vulnerability: File systems can be vulnerable to data corruption,
malware, and other security threats, which can compromise the stability
and security of the system.
A file is a collection of related information that is recorded on secondary
storage; equivalently, a file is a collection of logically related entities.
From the user’s perspective, a file is the smallest allotment of logical
secondary storage.
The name of a file is divided into two parts, separated by a period:
 name
 extension
File attributes and operations:

Attribute        Type (extension)   Operation
Name             doc                Create
Type             exe                Open
Size             jpg                Read
Creation date    xls                Write
Author           c                  Append
Last modified    java               Truncate
Protection       class              Delete
                                    Close

File type        Usual extension       Function
Executable       exe, com, bin         Ready-to-run machine-language program
Object           obj, o                Compiled machine language, not linked
Source code      c, java, pas, asm     Source code in various languages
Batch            bat, sh               Commands to the command interpreter
Text             txt, doc              Textual data, documents
Word processor   wp, tex, rtf, doc     Various word-processor formats
Archive          arc, zip, tar         Related files grouped into one compressed file
Multimedia       mpeg, mov, rm         Containing audio/video information
Markup           xml, html, tex        Textual data and documents
Library          lib, a, so, dll       Libraries of routines for programmers
Print or view    gif, pdf, jpg         Format for printing or viewing an ASCII or binary file
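The attributes and operations listed above map directly onto ordinary file-management system calls. A minimal sketch using Python's standard library (the file name and contents below are arbitrary):

```python
import os
import tempfile
import time

# Scratch directory so the example touches nothing else on disk.
scratch = tempfile.mkdtemp()
path = os.path.join(scratch, "notes.txt")

with open(path, "w") as f:      # create + open for writing
    f.write("hello ")
with open(path, "a") as f:      # append
    f.write("world")
with open(path) as f:           # open for reading
    data = f.read()

st = os.stat(path)              # attribute record kept by the file system
print(data)                     # hello world
print(st.st_size)               # 11 (bytes)
print(time.ctime(st.st_mtime))  # date last modified

os.truncate(path, 5)            # truncate
os.remove(path)                 # delete
```

Create, open, read, write, append, truncate and delete each appear once; the `os.stat` call returns the attributes (size, timestamps, owner, protection bits) that the file system records for every file.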

FILE DIRECTORIES:
A collection of files is a file directory. The directory contains information about
the files, including attributes, location and ownership. Much of this
information, especially that concerned with storage, is managed by the
operating system. The directory is itself a file, accessible by various file
management routines.

Information contained in a device directory includes:


 Name
 Type
 Address
 Current length
 Maximum length
 Date last accessed
 Date last updated
 Owner id
 Protection information
Operations performed on a directory are:
 Search for a file
 Create a file
 Delete a file
 List a directory
 Rename a file
 Traverse the file system
Advantages of maintaining directories are:
 Efficiency: A file can be located more quickly.
 Naming: It becomes convenient for users, as two users can have the same
name for different files or different names for the same file.
 Grouping: Files can be grouped logically by properties, e.g. all
Java programs, all games, etc.
SINGLE-LEVEL DIRECTORY
In this scheme, a single directory is maintained for all users.
 Naming problem: Users cannot have the same name for two files.
 Grouping problem: Users cannot group files according to their needs.

TWO-LEVEL DIRECTORY
In this scheme, a separate directory is maintained for each user.

 Path name: Due to the two levels, there is a path name for every file to
locate that file.
 Now we can have the same file name for different users.
 Searching is efficient in this method.

TREE-STRUCTURED DIRECTORY:
The directory is maintained in the form of a tree. Searching is efficient and
there is also grouping capability. We have an absolute or relative path name
for a file.

FILE ALLOCATION METHODS :


1. Contiguous Allocation –
A single contiguous set of blocks is allocated to a file at the time of file
creation. Thus, this is a pre-allocation strategy, using variable-size portions.
The file allocation table needs just a single entry for each file, showing the
starting block and the length of the file. This method is best from the point of
view of an individual sequential file. Multiple blocks can be read in at a time
to improve I/O performance for sequential processing. It is also easy to
retrieve a single block. For example, if a file starts at block b and the ith
block of the file is wanted, its location on secondary storage is simply b+i-1.
Disadvantage –
 External fragmentation will occur, making it difficult to find contiguous
blocks of space of sufficient length. A compaction algorithm will be
necessary to free up additional space on the disk.
 Also, with pre-allocation, it is necessary to declare the size of the file at
the time of creation.
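The b+i-1 rule above makes direct access under contiguous allocation a single addition. A tiny sketch (the block numbers are illustrative):

```python
def block_address(b, i):
    # Disk address of the i-th block (1-indexed) of a file
    # that starts at block b under contiguous allocation.
    return b + i - 1

# A file starting at block 19 with length 6 occupies blocks 19..24,
# so its 4th block is found with one addition, no table lookup:
print(block_address(19, 4))   # 22
print(block_address(19, 1))   # 19 (the starting block itself)
```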
2. Linked Allocation (Non-contiguous allocation) –
Allocation is on an individual block basis. Each block contains a pointer to
the next block in the chain. Again, the file table needs just a single entry for
each file, showing the starting block and the length of the file. Although pre-
allocation is possible, it is more common simply to allocate blocks as
needed. Any free block can be added to the chain, and the blocks need not
be contiguous. An increase in file size is always possible if a free disk block
is available. There is no external fragmentation, because only one block at a
time is needed; there can be internal fragmentation, but only in the last disk
block of the file.

Disadvantage –
 Internal fragmentation exists in last disk block of file.
 There is an overhead of maintaining the pointer in every disk block.
 If the pointer of any disk block is lost, the file will be truncated.
 It supports only the sequential access of files.
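The chain traversal that limits linked allocation to sequential access can be sketched in a few lines. The block numbers below are illustrative: a chain 9 → 16 → 1 → 10 → 25 terminated by -1.

```python
# Toy disk state: each allocated block records the number of the next
# block in the file's chain; -1 marks the final block.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def nth_block(start, k):
    """Disk block holding block k (0-indexed) of the file.
    Reaching it costs k pointer hops -- sequential access only."""
    block = start
    for _ in range(k):
        block = next_block[block]
    return block

print(nth_block(9, 0))   # 9  (the starting block itself)
print(nth_block(9, 4))   # 25 (four hops down the chain)
```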
3. Indexed Allocation –
It addresses many of the problems of contiguous and chained allocation. In
this case, the file allocation table contains a separate one-level index for
each file: The index has one entry for each block allocated to the file.
Allocation may be on the basis of fixed-size blocks or variable-sized blocks.
Allocation by blocks eliminates external fragmentation, whereas allocation by
variable-size blocks improves locality. This allocation technique supports
both sequential and direct access to the file and thus is the most popular
form of file allocation.
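A one-level index reduces direct access to a single table lookup. A minimal sketch (the index contents are made up):

```python
# Hypothetical index block for one file: entry i holds the disk
# address of file block i.
index_block = [9, 16, 1, 10, 25]

def block_of(i):
    # Direct access: a single lookup, no chain to follow.
    return index_block[i]

print(block_of(0))   # 9
print(block_of(3))   # 10
```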
Disk Free Space Management:
Just as the space that is allocated to files must be managed, so must the
space that is not currently allocated to any file. To perform any of
the file allocation techniques, it is necessary to know what blocks on the disk
are available. Thus we need a disk allocation table in addition to a file
allocation table. The following are the approaches used for free space
management.

1. Bit Tables: This method uses a vector containing one bit for each block
on the disk. Each 0 corresponds to a free block and each 1
corresponds to a block in use.
For example: 00011010111100110001
In this vector every bit corresponds to a particular block: 0 implies that
the block is free and 1 implies that the block is already
occupied. A bit table has the advantage that it is relatively easy to find
one block or a contiguous group of free blocks. Thus, a bit table works well
with any of the file allocation methods. Another advantage is that it is as
small as possible.
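Using the example vector above, finding a contiguous group of free blocks is a simple scan:

```python
# The bit vector from the example: 0 = free block, 1 = in use.
bitmap = [int(c) for c in "00011010111100110001"]

def find_free_run(bitmap, n):
    """Index of the first run of n contiguous free blocks, or -1."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0
        if run == n:
            return i - n + 1
    return -1

print(find_free_run(bitmap, 3))   # 0  (blocks 0-2 are free)
print(find_free_run(bitmap, 4))   # -1 (no run of four free blocks)
```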
2. Free Block List : In this method, each block is assigned a number
sequentially and the list of the numbers of all free blocks is maintained in
a reserved block of the disk.
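A free block list can be modeled as a simple stack of block numbers: allocation pops a number off the list and freeing pushes it back. A sketch with made-up block numbers:

```python
# Free-block list kept in a reserved block: just the numbers of the
# currently free blocks (the values here are illustrative).
free_list = [3, 4, 6, 8, 14, 15, 19]

def allocate():
    return free_list.pop()        # hand out any free block

def release(block):
    free_list.append(block)       # a freed block rejoins the list

b = allocate()
print(b)                  # 19
release(b)
print(len(free_list))     # 7 -- back to the original count
```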

User’s and System Programmer’s view of File System


An operating system is a construct that allows user application programs to
interact with the system hardware. The operating system by itself does not
perform any useful function; rather, it provides an environment in which
different applications and programs can do useful work.
The operating system can be observed from the point of view of the user or the
system. This is known as the user view and the system view respectively. More
details about these are given as follows –

User View
The user view depends on the system interface that is used by the users. The
different types of user view experiences can be explained as follows −

 If the user is using a personal computer, the operating system is largely designed to
make the interaction easy. Some attention is also paid to the performance of the
system, but there is no need for the operating system to worry about resource
utilization. This is because the personal computer uses all the resources available
and there is no sharing.
 If the user is using a system connected to a mainframe or a minicomputer, the
operating system is largely concerned with resource utilization. This is because there
may be multiple terminals connected to the mainframe, and the operating system
makes sure that all the resources such as CPU, memory, I/O devices etc. are divided
uniformly between them.
 If the user is sitting at a workstation connected to other workstations through
networks, then the operating system needs to focus on both individual usage of
resources and sharing through the network. This happens because the workstation
exclusively uses its own resources, but it also needs to share files etc. with other
workstations across the network.
 If the user is using a handheld computer such as a mobile, then the operating system
handles the usability of the device including a few remote operations. The battery
level of the device is also taken into account.
There are some devices that offer little or no user view because there is no
interaction with users. Examples are embedded computers in home devices,
automobiles etc.
System View
According to the computer system, the operating system is the bridge between
applications and hardware. It is most intimate with the hardware and is used to
control it as required.
The different types of system view for operating system can be explained as follows:

 The system views the operating system as a resource allocator. There are many
resources such as CPU time, memory space, file storage space, I/O devices etc. that
are required by processes for execution. It is the duty of the operating system to
allocate these resources judiciously to the processes so that the computer system
can run as smoothly as possible.
 The operating system can also work as a control program. It manages all the
processes and I/O devices so that the computer system works smoothly and there
are no errors. It makes sure that the I/O devices work in a proper manner without
creating problems.
 Operating systems can also be viewed as a way to make using hardware easier.
 Computers were built to solve user problems. However, it is not easy to
work directly with computer hardware, so operating systems were developed to
communicate with the hardware easily.
 An operating system can also be considered as a program running at all times in the
background of a computer system (known as the kernel) and handling all the
application programs. This is the definition of the operating system that is generally
followed.

DISK MANAGEMENT IN THE OPERATING SYSTEM


The range of services and add-ons provided by modern operating systems is
constantly expanding, and four basic operating system management
functions are implemented by all operating systems. These management
functions, each of which is dealt with in more detail elsewhere, are briefly
described below:
 Process Management
 Memory Management
 File and Disk Management
 I/O System Management
Most computer systems employ secondary storage devices (magnetic disks).
These provide low-cost, non-volatile storage for programs and data (tape,
optical media, flash drives, etc.). Programs and the user data they use are
kept in named units of storage called files. The operating system is
responsible for allocating space for files on secondary storage media as
needed.
There is no guarantee that files will be stored in contiguous locations on
physical disk drives, especially large files; it depends greatly on the amount
of space available. When the disk is full, new files are more likely to be
recorded in multiple locations. As far as the user is concerned, however, the
file abstraction provided by the operating system hides the fact that the file
is fragmented into multiple parts.
The operating system needs to track the disk location of every part of
every file on the disk. In some cases, this means tracking hundreds of
thousands of files and file fragments on a single physical disk. Additionally,
the operating system must be able to locate each file and perform read and
write operations on it whenever needed. Therefore, the operating system
is responsible for configuring the file system, ensuring the safety and
reliability of read and write operations to secondary storage, and
maintaining access times (the time required to write data to or read data
from secondary storage).

Tape Organization
A magnetic tape transport includes the robotic, mechanical, and electronic
components that support the methods and control structure for a magnetic tape
unit. The tape is a strip of plastic coated with a magnetic recording medium.
Magnetic tapes are used in many organizations to save data files. Magnetic tapes
use a read-write mechanism, which writes data to or reads data from the magnetic
tape. Tapes save data in a sequential manner: in sequential processing, the device
must start searching at the beginning and check each record until the desired
information is found.
Magnetic tape is a low-cost medium for storage because it can save a huge
number of binary digits, bytes, or frames on each inch of tape. The benefits of
magnetic tape include large storage capacity, low cost, high data density, fast
transfer rate, flexibility, and ease of use.
Magnetic tape units can be stopped, started moving forward or in reverse, or
rewound. However, they cannot be started or stopped fast enough between
individual characters. For this reason, data is recorded in blocks called records.
Gaps of unrecorded tape are inserted between records, where the tape can be
stopped.
The tape starts moving while in a gap and reaches its full speed by the time it
arrives at the next record. Each record on tape has an identification bit pattern at
the beginning and end. By reading the bit pattern at the beginning, the tape control
recognizes the record number.

Application Areas of Magnetic Tapes


Magnetic tapes are very much suitable for the following applications −
 Serial or sequential processing.
 Cheap backups of data.
 Transfer of data between multiple machines.
 Storage of large volumes of data.
Advantages of Magnetic Tapes
 Cost − Magnetic tape is one of the low-cost storage media. Therefore,
backing up data on tape is very cheap.
 Storage capacity − It is very large.
 Portability − It is easily portable.
 Reusable − Specific data can be erased and new data recorded in the same
place; therefore the tape can be reused.
Disadvantages of Magnetic Tapes
 Access Time − Accessing a record requires accessing all the records before
the required record. So access time is very large in magnetic tape.
 Non-flexibility − Magnetic tape is not flexible.
 Transmission Speed − The data transfer speed of magnetic tape is
moderate.
 Vulnerable to damage − Magnetic tapes are highly vulnerable to damage
from dust or careless handling.
 Non-human readable − Data stored on it is not in human-readable form,
therefore manual encoding is not possible at all.
 Magnetic drums, magnetic tape and magnetic disks are types of
magnetic memory. These memories exploit magnetic properties to
store data. Here, magnetic tape is explained in brief.
 Magnetic tape memory:
In magnetic tape, only one side of the ribbon is used for storing data.
It is a sequential memory consisting of a thin plastic ribbon coated
with magnetic oxide. Data read/write speed is slower because of
sequential access. It is highly reliable and requires a magnetic tape
drive for writing and reading data.

Different Modules of a File System

A file system is a method of managing how and where data is stored on a
storage disk; it is also referred to as file management or FS. It is a logical
disk component that comprises files separated into groups, known as
directories. It abstracts these details from the human user and, on the
computer's side, manages the disk's internal operations. Directories can
contain files and further directories. Although Windows supports various file
systems, NTFS is the most common in modern times. Without file
management, two files with the same name could not be told apart, installed
programs could not be removed and specific files could not be recovered,
and files would have no organization without a file structure. The file system
enables you to view a file in the current directory, as files are often managed
in a hierarchy.
A disk (e.g., a hard disk drive) has a file system, regardless of type and
usage. It contains information about file size, file name, file location and
fragment information, records where disk data is stored, and also describes
how a user or application may access the data. Operations such as metadata
handling, file naming, storage management, and directories/folders are all
managed by the file system.

On a storage device, files are stored in sectors, and data is stored in

groups of sectors called blocks. The size and location of the files are
identified by the file system, which also helps to recognize which sectors
are ready to be used. Besides Windows, some other operating systems
also use the FAT and NTFS file systems, while Apple products (like iOS
and macOS) use HFS+, since an operating system can work with many
different kinds of file systems.

Sometimes the term "file system" is used in reference to partitions.

For instance, saying "two file systems are available on the hard drive"
does not necessarily mean the drive is divided between two file systems,
NTFS and FAT; it may mean that two separate partitions exist that use the
same physical disk.
Most of the applications you come into contact with require a file system
in order to work; therefore, each partition should have one.
Furthermore, if a program is built for use on macOS, you will be unable to
use this program on Windows, because programs are file-system-
dependent.

Examples of file systems


The examples of file systems are given below:

FAT: FAT is a type of file system developed for hard drives. It stands
for File Allocation Table and was first introduced in 1977; it uses 12 or 16
bits for each cluster entry in the file allocation table (FAT). On hard drives
and other computer systems, it helps to manage files on Microsoft
operating systems. It is also often found in devices like digital cameras,
flash memory, and other portable devices, where it is used to store file
information. It also helps to extend the life of a hard drive, as it minimizes
wear and tear on the disk. Today, FAT is not the default in later versions of
Microsoft Windows like Windows XP, Vista, 7, and 10, as they use NTFS.
FAT8, FAT12, FAT16, and FAT32 are the different types of FAT (file
allocation table).

GFS: GFS is a file system that stands for Global File System. It enables
multiple computers to act as an integrated machine. It was first developed
at the University of Minnesota but is now maintained by Red Hat. When
the physical distance between two or more computers is large and they
are unable to send files directly to each other, a GFS file system makes
them capable of sharing a group of files directly. A computer can organize
its I/O to preserve file systems with the help of a global file system.

HFS: HFS (Hierarchical File System) is the file system used on Macintosh
computers for creating a directory at the time a hard disk is formatted.
Generally, its basic function is to organize and hold the files on a
Macintosh hard disk. Apple has not supported writing to or formatting HFS
disks since OS X came on the market. Also, HFS-formatted drives are not
recognized by Windows computers, as HFS is a Macintosh format; Windows
hard drives are formatted with the FAT32 or NTFS file systems.

NTFS: NTFS is the file system that stands for NT File System; it stores
and retrieves files on the Windows NT operating system and other versions
of Windows like Windows 2000, Windows XP, Windows 7, and Windows 10.
Sometimes it is known as the New Technology File System. Compared
to the FAT and HPFS file systems, it provides better methods of file
recovery and data protection and offers a number of improvements in
terms of extendibility, security, and performance.
UDF: UDF is a file system that stands for Universal Disk Format. It was
first developed by OSTA (Optical Storage Technology Association) in 1995
to ensure consistency among data written to various optical media. It is
used with CD-ROMs and DVD-ROMs and is supported on all operating
systems. It is now used in the packet-writing process on CD-Rs and
CD-RWs.

Architecture of the File System

A file system contains two or three layers. Sometimes these layers
function combined, and sometimes they are explicitly separated. The
logical file system provides the API (Application Program Interface) for
file operations, such as OPEN, CLOSE, READ, and more, because it is
accountable for interaction with the user application. For processing, it
forwards the requested operation to the layer below it.
The second, optional layer is the virtual file system, which allows support
for multiple concurrent instances of physical file systems; each concurrent
instance is called a file system implementation.
The third layer, called the physical file system, is responsible for handling
buffering and memory management. It is concerned with the physical
operation of the storage device and processes physical blocks being read
or written. To drive the storage device, this layer interacts with the channel
and the device drivers.
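The three layers can be caricatured in a few classes. This is a toy sketch with assumed names, not a real OS interface: the logical layer exposes the call applications see, the virtual layer picks a file-system instance by mount point, and the physical layer returns raw blocks from a pretend device.

```python
class PhysicalFS:                      # bottom layer: talks to the "device"
    def __init__(self, blocks):
        self.blocks = blocks           # pretend disk: block number -> bytes
    def read_block(self, n):
        return self.blocks[n]

class VirtualFS:                       # middle layer: dispatch per mount point
    def __init__(self, mounts):
        self.mounts = mounts           # path prefix -> PhysicalFS instance
    def read_block(self, path, n):
        for prefix, fs in self.mounts.items():
            if path.startswith(prefix):
                return fs.read_block(n)
        raise FileNotFoundError(path)

class LogicalFS:                       # top layer: the API applications see
    def __init__(self, vfs):
        self.vfs = vfs
    def read(self, path, n):
        return self.vfs.read_block(path, n)

disk = PhysicalFS({0: b"boot", 1: b"data"})
api = LogicalFS(VirtualFS({"/": disk}))
print(api.read("/etc/hosts", 1))   # b'data'
```

Each call passes straight down through the layers, which is exactly the forwarding described above.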

Types of file systems


There are various kinds of file systems, which are as follows:

1. Disk file systems

On a disk storage medium, a disk file system has the ability to randomly
address data within a small amount of time, and it takes advantage of the
speed with which such data can be accessed. Without regard to the
sequential location of the data, multiple users can access several items of
data on the disk with the help of a disk file system.

2. Flash file systems

A flash file system is responsible for the restrictions, performance, and

special abilities of flash memory. It is better to use a file system that is
designed for a flash device; however, a disk file system can also use a
flash memory device as its basic storage medium.

3. Tape file systems

A tape file system is used to hold files on tape, as it is both a tape format
and a file system. Compared to disks, magnetic tapes take much longer
to access data, which poses challenges for a general-purpose file system
in terms of creation and efficient management.

4. Database file systems

A database-based file system is another method for file management.

Files are identified by their characteristics (such as type of file, author,
topic, etc.) rather than through hierarchically structured management.

5. Transactional file systems

Some programs need to make several file-system changes atomically:
either all the changes succeed, or none of them are made, even if one
change fails for any reason. For instance, a program may write
configuration files, libraries and executables at the time of installing or
updating software. The software may be unusable or broken if it is
stopped while updating or installing. Also, the entire system may be left in
an unusable state if the process of installing or updating the software is
incomplete.

6. Network file systems

A network file system offers access to files on a server. Programs on

remote network-connected computers are able, with the help of local
interfaces, to transparently create, manage and access hierarchical files
and directories. Clients for the NFS, AFS and SMB protocols, and
file-system-like clients for FTP and WebDAV, are all examples of network
file systems.

7. Shared disk file systems

A shared-disk file system allows the same external disk subsystem to be

accessed by multiple machines. When several machines access the same
external disk subsystem, collisions may occur; to prevent collisions, the
file system coordinates which part of the subsystem is to be accessed.

8. Minimal file system

In the 1970s, disk and digital tape devices were too expensive for some
early microcomputer users, so a few cheaper, basic data storage systems
using common audio cassette tape were designed. When the system
needed to write data, the user was instructed to press "RECORD" on the
cassette recorder and then press "RETURN" on the keyboard to notify the
system. Likewise, the user needed to press the "PLAY" button on the
cassette recorder when the system required reading data.

9. Flat file systems


Subdirectories are not available in a flat file system; it contains only one
directory, and all files are held in that single directory. Due to the relatively
small amount of data space available, this type of file system was
adequate when floppy disk media first became available.

Disk Space Allocation Methods


The allocation methods define how the files are stored in the disk blocks.
There are three main disk space or file allocation methods.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as
discussed below:
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For
example, if a file requires n blocks and is given a block b as the starting
location, then the blocks assigned to the file will be: b, b+1, b+2,……b+n-
1. This means that given the starting block address and the length of the file
(in terms of blocks required), we can determine the blocks occupied by the
file.
The directory entry for a file with contiguous allocation contains
 Address of starting block
 Length of the allocated portion.
The file ‘mail’ in the following figure starts at block 19 with length = 6
blocks. Therefore, it occupies blocks 19, 20, 21, 22, 23 and 24.
Advantages:
 Both the Sequential and Direct Accesses are supported by this. For direct
access, the address of the kth block of the file which starts at block b can
easily be obtained as (b+k).
 This is extremely fast since the number of seeks is minimal because of
the contiguous allocation of file blocks.
Disadvantages:
 This method suffers from both internal and external fragmentation. This
makes it inefficient in terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of
contiguous memory at a particular instance.
2. Linked List Allocation
In this scheme, each file is a linked list of disk blocks which need not
be contiguous. The disk blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file
block. Each block contains a pointer to the next block occupied by the file.
The file ‘jeep’ in the following image shows how the blocks are randomly
distributed. The last block (25) contains -1, indicating a null pointer; it does
not point to any other block.

Advantages:
 This is very flexible in terms of file size. File size can be increased easily
since the system does not have to look for a contiguous chunk of
memory.
 This method does not suffer from external fragmentation. This makes it
relatively better in terms of memory utilization.
Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large
number of seeks are needed to access every block individually. This
makes linked allocation slower.
 It does not support random or direct access. We can not directly access
the blocks of a file. A block k of a file can be accessed by traversing k
blocks sequentially (sequential access ) from the starting block of the file
via block pointers.
 Pointers required in the linked allocation incur some extra overhead.
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the
pointers to all the blocks occupied by a file. Each file has its own index block.
The ith entry in the index block contains the disk address of the ith file block.
The directory entry contains the address of the index block as shown in the
image:
Advantages:
 This supports direct access to the blocks occupied by the file and
therefore provides fast access to the file blocks.
 It overcomes the problem of external fragmentation.
Disadvantages:
 The pointer overhead for indexed allocation is greater than linked
allocation.
 For very small files, say files that span only 2-3 blocks, indexed
allocation would keep one entire block (the index block) for the pointers,
which is inefficient in terms of memory utilization. In linked allocation,
however, we lose the space of only one pointer per block.
For files that are very large, a single index block may not be able to hold all
the pointers.
Following mechanisms can be used to resolve this:
1. Linked scheme: This scheme links two or more index blocks together for
holding the pointers. Every index block would then contain a pointer or the
address to the next index block.
2. Multilevel index: In this policy, a first-level index block points to
second-level index blocks, which in turn point to the disk blocks
occupied by the file. This can be extended to three or more levels
depending on the maximum file size.
3. Combined Scheme: In this scheme, a special block called the Inode
(Index Node) contains all the information about the file, such as its
name, size, and access permissions, and the remaining space of the Inode is
used to store the disk block addresses that contain the actual file, as
shown in the image below. The first few of these pointers in the Inode
point to direct blocks, i.e., the pointers contain the addresses of the
disk blocks that hold the file's data. The next few pointers point
to indirect blocks, which may be single indirect, double indirect, or
triple indirect. A single indirect block does not contain file data but
rather the disk addresses of the blocks that contain the file data.
Similarly, a double indirect block does not contain file data but the disk
addresses of the blocks that contain the addresses of the blocks holding
the file data.
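A short worked calculation shows why the combined scheme scales. Assuming (hypothetically) 4 KB blocks, 4-byte block addresses, and 12 direct pointers in the Inode — numbers typical of classic Unix file systems, not specified in the text above — the reach of each pointer level can be computed directly:

```python
# Hypothetical inode geometry: 12 direct pointers plus one
# single-indirect block, with 4 KB blocks and 4-byte addresses.
BLOCK_SIZE = 4096
PTR_SIZE = 4
N_DIRECT = 12

ptrs_per_block = BLOCK_SIZE // PTR_SIZE               # 1024 pointers fit in one block
direct_reach = N_DIRECT * BLOCK_SIZE                  # bytes addressable via direct pointers
single_indirect_reach = ptrs_per_block * BLOCK_SIZE   # bytes added by one indirect block

print(direct_reach)                          # 49152  (48 KB)
print(direct_reach + single_indirect_reach)  # 4243456 (~4 MB)
```

A double-indirect block would multiply the reach by another factor of 1024 (to roughly 4 GB), which is why two or three indirection levels suffice even for very large files.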


Directory Structures
A directory is a container that is used to contain folders and files. It
organizes files and folders in a hierarchical manner.

There are several logical structures of a directory, these are given below.
 Single-level directory –
The single-level directory is the simplest directory structure. In it, all files
are contained in the same directory which makes it easy to support and
understand.
A single-level directory has a significant limitation, however, when the
number of files increases or when the system has more than one user.
Since all the files are in the same directory, they must have unique
names. If two users name their data file "test", the unique-name rule is
violated.

Advantages:
 Since it is a single directory, so its implementation is very easy.
 If the files are smaller in size, searching will become faster.
 Operations like file creation, searching, deletion, and updating are
very easy in such a directory structure.
 Logical Organization: Directory structures help to logically organize files
and directories in a hierarchical structure. This provides an easy way to
navigate and manage files, making it easier for users to access the data
they need.
 Increased Efficiency: Directory structures can increase the efficiency of
the file system by reducing the time required to search for files. This is
because directory structures are optimized for fast file access, allowing
users to quickly locate the file they need.
 Improved Security: Directory structures can provide better security for
files by allowing access to be restricted at the directory level. This helps
to prevent unauthorized access to sensitive data and ensures that
important files are protected.
 Facilitates Backup and Recovery: Directory structures make it easier to
backup and recover files in the event of a system failure or data loss. By
storing related files in the same directory, it is easier to locate and backup
all the files that need to be protected.
 Scalability: Directory structures are scalable, making it easy to add new
directories and files as needed. This helps to accommodate growth in the
system and makes it easier to manage large amounts of data.
Disadvantages:
 There is a chance of name collision, since no two files may share the
same name.
 Searching becomes time-consuming if the directory is large.
 Files of the same type cannot be grouped together.
 Two-level directory –
As we have seen, a single-level directory often leads to confusion of file
names among different users. The solution to this problem is to create a
separate directory for each user.

What is a directory?
A directory can be defined as a listing of the related files on the disk.
The directory may store some or all of the file attributes.

To get the benefit of different file systems on different operating
systems, a hard disk can be divided into a number of partitions of
different sizes. The partitions are also called volumes or minidisks.

Each partition must have at least one directory in which, all the files of the
partition can be listed. A directory entry is maintained for each file in the
directory which stores all the information related to that file.

A directory can be viewed as a file which contains the metadata of a
group of files.


Every directory supports a number of common operations on files:

1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
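Several of the operations above can be exercised through Python's `os` module, whose functions are thin wrappers over the operating system's directory services (file names here are made up for the demonstration):

```python
import os
import tempfile

d = tempfile.mkdtemp()                          # make a scratch directory
open(os.path.join(d, "a.txt"), "w").close()     # 1. file creation
os.rename(os.path.join(d, "a.txt"),
          os.path.join(d, "b.txt"))             # 4. renaming the file
names = os.listdir(d)                           # 6. listing of files
print(names)                                    # ['b.txt']
os.remove(os.path.join(d, "b.txt"))             # 3. file deletion
```

Searching (operation 2) and traversal (operation 5) are built on the same listing primitive, e.g. via `os.walk`.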

File Protection
In computer systems, a lot of user information is stored, so an objective
of the operating system is to keep the user's data safe from improper
access. Protection can be provided in a number of ways. For a single-user
laptop, we might provide protection by locking the computer in a desk
drawer or file cabinet. For multi-user systems, different mechanisms are
used for protection.
Types of Access:
Files that can be accessed directly by other users need protection,
whereas files that are not accessible to other users do not require any
kind of protection. A protection mechanism provides controlled access by
limiting the types of access that can be made to a file. Whether access is
granted to a user depends on several factors, one of which is the type of
access required. Several different types of operations can be controlled:
 Read – Reading from a file.
 Write – Writing or rewriting the file.
 Execute – Loading the file into memory and executing it.
 Append – Writing new information at the end of an existing file.
 Delete – Deleting a file that is no longer needed and freeing its space
for other data.
 List – List the name and attributes of the file.
Operations like renaming, editing, and copying an existing file can also be
controlled. There are many protection mechanisms; each has its own
advantages and disadvantages and must be appropriate for the intended
application.
Access Control:
There are different methods by which different users may access a file.
The most general way of providing protection is to associate
identity-dependent access with all files and directories via a list called
an access-control list (ACL), which specifies the names of users and the
types of access granted to each user. The main problem with access lists
is their length. If we want to allow everyone to read a file, we must list
all users with read access.
This technique has two undesirable consequences:
Constructing such a list may be a tedious and unrewarding task, especially
if we do not know in advance the list of users in the system.
Previously, a directory entry was of fixed size, but it now becomes
variable-sized, which complicates space management. These problems can be
resolved by using a condensed version of the access list. To condense the
length of the access-control list, many systems recognize three
classifications of users in connection with each file:
 Owner – The user who created the file.
 Group – A set of users who have similar needs and share the same
file.
 Universe – All other users in the system fall under this category.
The most common recent approach is to combine access-control lists with
the normal general owner, group, and universe access control scheme. For
example: Solaris uses the three categories of access by default but allows
access-control lists to be added to specific files and directories when more
fine-grained access control is desired.
Other Protection Approaches:
Access to a file can also be controlled by a password. If the password is
chosen randomly and changed often, it can effectively limit access to a
file.
The use of passwords has a few disadvantages:
 The number of passwords can become very large, making them difficult
to remember.
 If one password is used for all files, then once it is discovered, all
files become accessible; protection is on an all-or-nothing basis.

Introduction
File protection in an operating system is the process of securing files from
unauthorized access, alteration, or deletion. It is critical for data security and
ensures that sensitive information remains confidential and secure. Operating
systems provide various mechanisms and techniques such as file permissions,
encryption, access control lists, auditing, and physical file security to protect files.
Proper file protection involves user authentication, authorization, access control,
encryption, and auditing. Ongoing updates and patches are also necessary to
prevent security breaches. File protection in an operating system is essential to
maintain data security and minimize the risk of data breaches and other security
incidents.

What is File protection?


File protection in an operating system refers to the various mechanisms and
techniques used to secure files from unauthorized access, alteration, or deletion. It
involves controlling access to files, ensuring their security and confidentiality, and
preventing data breaches and other security incidents.
Operating systems provide several file protection features, including file
permissions, encryption, access control lists, auditing, and physical file security.
These measures allow administrators to manage access to files, determine who can
access them, what actions can be performed on them, and how they are stored and
backed up. Proper file protection requires ongoing updates and patches to fix
vulnerabilities and prevent security breaches. It is crucial for data security in the
digital age where cyber threats are prevalent. By implementing file protection
measures, organizations can safeguard their files, maintain data confidentiality, and
minimize the risk of data breaches and other security incidents.

Type of File protection


File protection is an essential component of modern operating systems, ensuring
that files are secured from unauthorized access, alteration, or deletion. In this
context, there are several types of file protection mechanisms used in operating
systems to provide robust data security.
 File Permissions − File permissions are a basic form of file protection that
controls access to files by setting permissions for users and groups. File
permissions allow the system administrator to assign specific access rights to
users and groups, which can include read, write, and execute privileges.
These access rights can be assigned at the file or directory level, allowing
users and groups to access specific files or directories as needed. File
permissions can be modified by the system administrator at any time to adjust
access privileges, which helps to prevent unauthorized access.
 Encryption − Encryption is the process of converting plain text into ciphertext
to protect files from unauthorized access. Encrypted files can only be
accessed by authorized users who have the correct encryption key to decrypt
them. Encryption is widely used to secure sensitive data such as financial
information, personal data, and other confidential information. In an operating
system, encryption can be applied to individual files or entire directories,
providing an extra layer of protection against unauthorized access.
 Access Control Lists (ACLs) − Access control lists (ACLs) are lists of
permissions attached to files and directories that define which users or groups
have access to them and what actions they can perform on them. ACLs can
be more granular than file permissions, allowing the system administrator to
specify exactly which users or groups can access specific files or directories.
ACLs can also be used to grant or deny specific permissions, such as read,
write, or execute privileges, to individual users or groups.
 Auditing and Logging − Auditing and logging are mechanisms used to track
and monitor file access, changes, and deletions. It involves creating a record
of all file access and changes, including who accessed the file, what actions
were performed, and when they were performed. Auditing and logging can
help to detect and prevent unauthorized access and can also provide an audit
trail for compliance purposes.
 Physical File Security − Physical file security involves protecting files from
physical damage or theft. It includes measures such as file storage and
access control, backup and recovery, and physical security best practices.
Physical file security is essential for ensuring the integrity and availability of
critical data, as well as compliance with regulatory requirements.
Overall, these types of file protection mechanisms are essential for ensuring data
security and minimizing the risk of data breaches and other security incidents in an
operating system. The choice of file protection mechanisms will depend on the
specific requirements of the organization, as well as the sensitivity and volume of
the data being protected. However, a combination of these file protection
mechanisms can provide comprehensive protection against various types of threats
and vulnerabilities.
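The ACL mechanism described above can be sketched as a small lookup structure (the file names, principals, and permissions below are hypothetical, chosen only for illustration): each file carries a list of principals and the operations each may perform, and a request is permitted only if it appears on the list.

```python
# Minimal ACL sketch: each file maps principals (users or groups)
# to the set of operations they may perform on it.
acl = {
    "report.txt": {"alice": {"read", "write"}, "staff": {"read"}},
}

def allowed(acl, filename, principal, op):
    """A request is permitted only if the principal's ACL entry lists the operation."""
    return op in acl.get(filename, {}).get(principal, set())

print(allowed(acl, "report.txt", "staff", "read"))   # True
print(allowed(acl, "report.txt", "staff", "write"))  # False
```

Note how the default is denial: an unknown file or unlisted principal yields an empty permission set, mirroring the "deny unless granted" stance of real ACL implementations.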

Advantages of File protection


File protection is an important aspect of modern operating systems that ensures
data security and integrity by preventing unauthorized access, alteration, or deletion
of files. There are several advantages of file protection mechanisms in an operating
system, including −
 Data Security − File protection mechanisms such as encryption, access
control lists, and file permissions provide robust data security by preventing
unauthorized access to files. These mechanisms ensure that only authorized
users can access files, which helps to prevent data breaches and other
security incidents. Data security is critical for organizations that handle
sensitive data such as personal data, financial information, and intellectual
property.
 Compliance − File protection mechanisms are essential for compliance with
regulatory requirements such as GDPR, HIPAA, and PCI-DSS. These
regulations require organizations to implement appropriate security measures
to protect sensitive data from unauthorized access, alteration, or deletion.
Failure to comply with these regulations can result in significant financial
penalties and reputational damage.
 Business Continuity − File protection mechanisms are essential for ensuring
business continuity by preventing data loss due to accidental or malicious
deletion, corruption, or other types of damage. File protection mechanisms
such as backup and recovery, auditing, and logging can help to recover data
quickly in the event of a data loss incident, ensuring that business operations
can resume as quickly as possible.
 Increased Productivity − File protection mechanisms can help to increase
productivity by ensuring that files are available to authorized users when they
need them. By preventing unauthorized access, alteration, or deletion of files,
file protection mechanisms help to minimize the risk of downtime and data loss
incidents that can impact productivity.
 Enhanced Collaboration − File protection mechanisms can help to enhance
collaboration by allowing authorized users to access and share files securely.
Access control lists, file permissions, and encryption can help to ensure that
files are only accessed by authorized users, which helps to prevent conflicts
and misunderstandings that can arise when multiple users access the same
file.
 Reputation − File protection mechanisms can enhance an organization's
reputation by demonstrating a commitment to data security and compliance.
By implementing robust file protection mechanisms, organizations can build
trust with their customers, partners, and stakeholders, which can have a
positive impact on their reputation and bottom line.
Overall, these advantages of file protection mechanisms highlight the importance of
data security and the need for organizations to implement appropriate measures to
protect their sensitive data. File protection mechanisms can help to prevent data
breaches and other security incidents, ensure compliance with regulatory
requirements, and ensure business continuity in the event of a data loss incident. By
implementing a comprehensive file protection strategy, organizations can enhance
productivity, collaboration, and reputation, while minimizing the risk of data loss and
other security incidents.

Disadvantages of File protection


There are also some potential disadvantages of file protection in an operating
system, including −
 Overhead − Some file protection mechanisms such as encryption, access
control lists, and auditing can add overhead to system performance. This can
impact system resources and slow down file access and processing times.
 Complexity − File protection mechanisms can be complex and require
specialized knowledge to implement and manage. This can lead to errors and
misconfigurations that compromise data security.
 Compatibility Issues − Some file protection mechanisms may not be
compatible with all types of files or applications, leading to compatibility issues
and limitations in file usage.
 Cost − Implementing robust file protection mechanisms can be expensive,
especially for small organizations with limited budgets. This can make it
difficult to achieve full data protection.
 User Frustration − Stringent file protection mechanisms such as complex
passwords, frequent authentication requirements, and restricted access can
frustrate users and impact productivity.
Overall, these potential disadvantages of file protection mechanisms need to be
balanced against the advantages they offer in terms of data security, compliance,
and business continuity. Careful planning and implementation are necessary to
minimize the impact of these disadvantages and ensure effective file protection in
an operating system.

Conclusion
In conclusion, file protection mechanisms are essential for ensuring data security,
compliance, and business continuity in modern operating systems. These
mechanisms provide several advantages, including data security, compliance with
regulatory requirements, business continuity, increased productivity, enhanced
collaboration, and reputation. However, there are also some disadvantages, such as
increased system overhead and complexity, and potential limitations on user
flexibility. Despite these limitations, the benefits of file protection mechanisms
outweigh the disadvantages, and organizations should implement appropriate file
protection mechanisms to protect their sensitive data and ensure their operations
are not impacted by security incidents.


System Calls for File Management

System Calls in Operating System (OS)


A system call is a way for a user program to interface with the operating
system. The program requests several services, and the OS responds by
invoking a series of system calls to satisfy the request. A system call can
be written in assembly language or a high-level language like C or Pascal.
System calls are predefined functions that the operating system may
directly invoke if a high-level language is used.
In this article, you will learn about the system calls in the operating
system and discuss their types and many other things.

What is a System Call?


A system call is a method for a computer program to request a service
from the kernel of the operating system on which it is running. A system
call is a method of interacting with the operating system via programs. A
system call is a request from computer software to an operating system's
kernel.

The Application Program Interface (API) connects the operating


system's functions to user programs. It acts as a link between the
operating system and a process, allowing user-level programs to request
operating system services. The kernel system can only be accessed using
system calls. System calls are required for any programs that use
resources.

How are system calls made?


When a computer program needs to access the operating system's kernel,
it makes a system call. The system call uses an API to expose the
operating system's services to user programs. It is the only method of
accessing the kernel. All programs or processes that require resources
for execution must use system calls, as they serve as an interface
between the operating system and user programs.

Below are some examples of how a system call varies from a user
function.

1. A system call function may create and use kernel processes to execute
asynchronous processing.
2. A system call has greater authority than a standard subroutine. A system
call with kernel-mode privilege executes in the kernel protection domain.
3. System calls are not permitted to use shared libraries or any symbols that
are not present in the kernel protection domain.
4. The code and data for system calls are stored in global kernel memory.

Why do you need system calls in an Operating System?

There are various situations in which system calls are required in the
operating system. Some of these situations are as follows:
1. A system call is required when a program wants to create or delete a file.
2. Network connections require system calls for sending and receiving data
packets.
3. Reading from or writing to a file requires a system call.
4. Accessing hardware devices, such as a printer or scanner, requires a
system call.
5. System calls are used to create and manage new processes.

How System Calls Work


Applications run in an area of memory known as user space. A system call
connects to the operating system's kernel, which executes in kernel space.
When an application makes a system call, it must first obtain permission
from the kernel. It achieves this using an interrupt request, which pauses
the current process and transfers control to the kernel.

If the request is permitted, the kernel performs the requested action, such
as creating or deleting a file. When the operation is finished, the kernel
copies any resulting data from kernel space back to user space and returns
control to the application, which then resumes execution.

A simple system call, such as retrieving the system date and time, may
take only a few nanoseconds to provide its result. A more complicated
system call, such as connecting to a network device, may take a few
seconds. Most
operating systems launch a distinct kernel thread for each system call to
avoid bottlenecks. Modern operating systems are multi-threaded, which
means they can handle various system calls at the same time.

Types of System Calls


There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Now, you will learn about all the different types of system calls one-by-
one.

Process Control
Process control is the category of system calls used to direct processes.
Examples include creating and loading a process, executing a process, and
ending, aborting, or terminating a process.

File Management
File management is the category of system calls used to handle files.
Examples include creating and deleting files, and opening, closing,
reading, and writing files.

Device Management
Device management is the category of system calls used to deal with
devices. Examples include requesting and releasing a device, reading from
and writing to a device, and getting and setting device attributes.
Information Maintenance
Information maintenance is the category of system calls used to maintain
information. Examples include getting and setting the time or date, and
getting and setting system data.

Communication
Communication is the category of system calls used for interprocess
communication. Examples include creating and deleting communication
connections, and sending and receiving messages.
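The communication category can be sketched with `pipe()`, which asks the kernel for a one-way channel with a read end (`fds[0]`) and a write end (`fds[1]`). In this minimal sketch a single process writes a message into the pipe and reads it back; the function name `pipe_echo_len` is illustrative, not a standard call.

```c
#include <string.h>
#include <unistd.h>

/* Push a message through a pipe and read it back in the same process.
   Returns the number of bytes received, or -1 if pipe() fails. */
int pipe_echo_len(const char *msg) {
    int fds[2];
    char buf[128];
    if (pipe(fds) < 0) return -1;               /* ask the kernel for a pipe */
    write(fds[1], msg, strlen(msg));            /* send through the write end */
    close(fds[1]);                              /* signal end-of-data */
    int n = (int)read(fds[0], buf, sizeof buf); /* receive on the read end */
    close(fds[0]);
    return n;                                   /* bytes received */
}
```

In a typical use, the pipe descriptors would be inherited across `fork()` so that a parent and child can exchange messages.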

Examples of Windows and Unix system calls


There are various examples of Windows and Unix system calls. These are
as listed below in the table:

Category                  Windows                          Unix

Process Control           CreateProcess()                  fork()
                          ExitProcess()                    exit()
                          WaitForSingleObject()            wait()

File Manipulation         CreateFile()                     open()
                          ReadFile()                       read()
                          WriteFile()                      write()
                          CloseHandle()                    close()

Device Management         SetConsoleMode()                 ioctl()
                          ReadConsole()                    read()
                          WriteConsole()                   write()

Information Maintenance   GetCurrentProcessId()            getpid()
                          SetTimer()                       alarm()
                          Sleep()                          sleep()

Communication             CreatePipe()                     pipe()
                          CreateFileMapping()              shmget()
                          MapViewOfFile()                  mmap()

Protection                SetFileSecurity()                chmod()
                          InitializeSecurityDescriptor()   umask()
                          SetSecurityDescriptorGroup()     chown()

Some of these system calls are described briefly below:

open()
The open() system call allows a program to access a file on a file system.
It allocates resources to the file and returns a handle (file descriptor)
that the process can refer to. Depending on the file system and its
structure, a file may be opened by many processes at once or restricted to
a single process.

read()
It is used to obtain data from a file on the file system. It accepts three
arguments in general:

o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.

The file must first be opened using open(); the file descriptor it returns
identifies the file to be read.

wait()
In some systems, a process may have to wait for another process to
complete its execution before proceeding. When a parent process creates a
child process and calls wait(), the parent's execution is suspended until
the child process finishes. Once the child process has completed its
execution, control is returned to the parent process.

write()
It is used to write data from a user buffer to a device or file. This
system call is one of the ways a program produces output. It takes three
arguments in general:

o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes create copies of themselves using the fork() system call; it is
one of the most common ways to create processes in operating systems.
fork() returns in both the parent and the child, and the two processes
then run concurrently. The parent is suspended only if it explicitly waits
(for example, with wait()) for the child to complete, after which control
returns to the parent.
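The interaction between fork() and wait() can be sketched as follows. The helper name `run_child` is our own illustration (error handling trimmed): the child exits immediately with a given status, while the parent blocks in `waitpid()` and then extracts the child's exit code.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with the given status; the parent waits for it
   and returns the child's exit code, or -1 if it did not exit normally. */
int run_child(int status) {
    pid_t pid = fork();          /* returns 0 in the child, the child's pid in the parent */
    if (pid == 0)
        _exit(status);           /* child: terminate with the requested code */
    int wstatus;
    waitpid(pid, &wstatus, 0);   /* parent: suspend until the child finishes */
    return WIFEXITED(wstatus) ? WEXITSTATUS(wstatus) : -1;
}
```

Note that the parent is suspended here only because it calls `waitpid()`; without that call, parent and child would continue running concurrently.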

close()
It is used to end access to a file. When this system call is invoked, it
signifies that the program no longer requires the file; buffers are
flushed, file metadata is updated, and the file's resources are
deallocated.
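The open(), write(), read(), and close() calls described above can be tied together in one sketch: write a message to a file, close it, reopen it, and verify that the same bytes come back. The function name and file path in the usage below are illustrative, not part of any standard API.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Round-trip a message through a file using raw system calls.
   Returns 1 if the bytes read back match what was written, 0 otherwise. */
int roundtrip_ok(const char *path, const char *msg) {
    char buf[256];
    size_t len = strlen(msg);
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0) return 0;
    if (write(fd, msg, len) != (ssize_t)len) { close(fd); return 0; }
    close(fd);                           /* flush buffers, release the descriptor */

    fd = open(path, O_RDONLY);           /* reopen the same file for reading */
    if (fd < 0) return 0;
    ssize_t n = read(fd, buf, sizeof buf);
    close(fd);
    return n == (ssize_t)len && memcmp(buf, msg, len) == 0;
}
```

For example, `roundtrip_ok("/tmp/syscall_demo.txt", "hello")` should return 1 on a system where `/tmp` is writable.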

exec()
This system call is invoked when an executable file replaces the program
running in an already executing process. No new process is created; the
process keeps its identity, but the new program replaces its code, data,
stack, and heap.

exit()
The exit() system call is used to end program execution. It indicates that
execution is complete, which is especially useful in multi-threaded
environments. After the exit() call, the operating system reclaims the
resources used by the process.

Disk Scheduling Algorithms.

Disk scheduling is done by operating systems to schedule the I/O requests
arriving for the disk; it is also known as I/O scheduling. Disk scheduling
is important because:
 Multiple I/O requests may arrive from different processes, and only one
I/O request can be served at a time by the disk controller. The other I/O
requests must wait in the waiting queue and be scheduled.
 Two or more requests may be far from each other, resulting in greater
disk arm movement.
 Hard drives are among the slowest parts of the computer system and thus
need to be accessed in an efficient manner.
There are many Disk Scheduling Algorithms but before discussing them let’s
have a quick look at some of the important terms:
 Seek Time: Seek time is the time taken to move the disk arm to the track
where the data is to be read or written. A disk scheduling algorithm that
gives a lower average seek time is better.
 Rotational Latency: Rotational latency is the time taken for the desired
sector of the disk to rotate under the read/write head. A disk scheduling
algorithm that gives lower rotational latency is better.
 Transfer Time: Transfer time is the time needed to transfer the data. It
depends on the rotation speed of the disk and the number of bytes to be
transferred.
 Disk Access Time: Disk Access Time is:

Disk Access Time = Seek Time +


Rotational Latency +
Transfer Time
Total Seek Time = Total head Movement * Seek Time

 Disk Response Time: Response time is the time a request spends waiting
to perform its I/O operation. Average response time is the mean response
time of all requests, and variance of response time measures how
individual requests are serviced relative to that average. A disk
scheduling algorithm that gives minimum variance of response time is
better.
Disk Scheduling Algorithms
1. FCFS: FCFS is the simplest of all the disk scheduling algorithms. In
FCFS, the requests are addressed in the order they arrive in the disk
queue. Let us understand this with the help of an example.
Example:
Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is: 50
So, total overhead movement (total distance covered by the disk arm) :
=(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) =642
Advantages:
 Every request gets a fair chance
 No indefinite postponement
Disadvantages:
 Does not try to optimize seek time
 May not provide the best possible service
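The FCFS computation above can be sketched in C: the head simply visits the requests in arrival order, and we sum the absolute movements. The function name is our own.

```c
#include <stdlib.h>

/* Total head movement under FCFS: visit requests in arrival order. */
int fcfs_distance(int head, const int *req, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);  /* move straight to the next request */
        head = req[i];
    }
    return total;
}
```

For the example above, `fcfs_distance(50, (int[]){82,170,43,140,24,16,190}, 7)` yields 642.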
2. SSTF: In SSTF (Shortest Seek Time First), requests having the shortest
seek time are executed first. The seek time of every request is calculated
in advance, and requests are scheduled according to their calculated seek
times. As a result, the request nearest to the disk arm is executed first.
SSTF is certainly an improvement over FCFS, as it decreases the average
response time and increases the throughput of the system. Let us
understand this with the help of an example.
Example:
Suppose the order of request is (82,170,43,140,24,16,190)
And the current position of the Read/Write head is: 50
So, total overhead movement (total distance covered by the disk arm):
=(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) =208
Advantages:
 Average Response Time decreases
 Throughput increases
Disadvantages:
 Overhead to calculate seek time in advance
 Can cause Starvation for a request if it has a higher seek time as
compared to incoming requests
 High variance of response time as SSTF favors only some requests
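The SSTF schedule above can be sketched in C: repeatedly pick the pending request closest to the current head position and sum the movements. The quadratic loop is fine for a demonstration; the function name and the fixed-size `done` array (assumes at most 128 requests) are our own choices.

```c
#include <stdlib.h>

/* Total head movement under SSTF: always service the nearest pending request. */
int sstf_distance(int head, const int *req, int n) {
    int total = 0;
    int done[128] = {0};              /* assumes n <= 128 for this sketch */
    for (int k = 0; k < n; k++) {
        int best = -1, bestd = 0;
        for (int i = 0; i < n; i++) { /* find the nearest unserviced request */
            if (done[i]) continue;
            int d = abs(req[i] - head);
            if (best < 0 || d < bestd) { best = i; bestd = d; }
        }
        done[best] = 1;
        total += bestd;
        head = req[best];
    }
    return total;
}
```

For the example above, `sstf_distance(50, (int[]){82,170,43,140,24,16,190}, 7)` yields 208.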
3. SCAN: In the SCAN algorithm the disk arm moves in a particular direction
and services the requests coming in its path; after reaching the end of the
disk, it reverses direction and services the requests arriving in its path
on the way back. The algorithm works like an elevator and is hence also
known as the elevator algorithm. As a result, requests in the middle range
of the disk are serviced more promptly, while requests arriving just behind
the disk arm have to wait.
Example:
1. Suppose the requests to be addressed are-82,170,43,140,24,16,190. And
the Read/Write arm is at 50, and it is also given that the disk arm should
move “towards the larger
value”.

Therefore, the total overhead movement (total distance covered by the disk
arm) is calculated as:
1. =(199-50)+(199-16) =332

Advantages:
 High throughput
 Low variance of response time
 Average response time
Disadvantages:
 Long waiting time for requests for locations just visited by disk arm
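The SCAN distance (moving toward larger values first) can be sketched in C: sweep up to the last cylinder, then reverse down to the lowest pending request. `maxcyl` is the highest cylinder number (199 in the example); the function name is our own.

```c
/* Total head movement under SCAN, moving toward larger cylinder numbers. */
int scan_distance(int head, int maxcyl, const int *req, int n) {
    int lowest = head, any_below = 0;
    for (int i = 0; i < n; i++)           /* lowest request below the head */
        if (req[i] < head && (!any_below || req[i] < lowest)) {
            lowest = req[i];
            any_below = 1;
        }
    int total = maxcyl - head;            /* sweep to the end of the disk */
    if (any_below)
        total += maxcyl - lowest;         /* reverse down to the lowest request */
    return total;
}
```

For the example above, `scan_distance(50, 199, (int[]){82,170,43,140,24,16,190}, 7)` yields 332.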
4. C-SCAN: In the SCAN algorithm, the disk arm rescans the path it has just
scanned after reversing its direction. It may therefore happen that many
requests are waiting at the other end, while zero or few requests are
pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm,
instead of reversing its direction, goes to the other end of the disk and
starts servicing the requests from there. The disk arm thus moves in a
circular fashion; since the algorithm is otherwise similar to SCAN, it is
known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190. The
Read/Write arm is at 50, and it is given that the disk arm should move
"towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
=(199-50)+(199-0)+(43-0) =391
Advantages:
 Provides more uniform wait time compared to SCAN
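The C-SCAN distance can be sketched in C: sweep to the end of the disk, wrap around to cylinder 0 (the wrap itself is counted as `maxcyl` of movement, matching the worked example), then climb to the largest request below the starting head position. The function name is our own.

```c
/* Total head movement under C-SCAN, moving toward larger cylinder numbers. */
int cscan_distance(int head, int maxcyl, const int *req, int n) {
    int maxbelow = -1;
    for (int i = 0; i < n; i++)          /* largest request below the head */
        if (req[i] < head && req[i] > maxbelow)
            maxbelow = req[i];
    int total = maxcyl - head;           /* up to the end of the disk */
    if (maxbelow >= 0)
        total += maxcyl + maxbelow;      /* wrap to 0, then climb to maxbelow */
    return total;
}
```

For the example above, `cscan_distance(50, 199, (int[]){82,170,43,140,24,16,190}, 7)` yields 391.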
5. LOOK: It is similar to the SCAN disk scheduling algorithm except that
the disk arm, instead of going to the end of the disk, goes only as far as
the last request to be serviced in front of the head and then reverses its
direction from there. This prevents the extra delay caused by unnecessary
traversal to the end of the disk.

Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190. The
Read/Write arm is at 50, and it is given that the disk arm should move
"towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
=(190-50)+(190-16) =314
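The LOOK distance (toward larger values) can be sketched in C: go only as far as the highest pending request, then reverse to the lowest. As in the worked example, the sketch assumes requests exist on both sides of the head; the function name is our own.

```c
/* Total head movement under LOOK, moving toward larger cylinder numbers,
   assuming pending requests exist both above and below the head. */
int look_distance(int head, const int *req, int n) {
    int hi = head, lo = head;
    for (int i = 0; i < n; i++) {
        if (req[i] > hi) hi = req[i];
        if (req[i] < lo) lo = req[i];
    }
    return (hi - head) + (hi - lo);   /* up to hi, then back down to lo */
}
```

For the example above, `look_distance(50, (int[]){82,170,43,140,24,16,190}, 7)` yields 314.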

6. C-LOOK: As LOOK is similar to the SCAN algorithm, C-LOOK is similarly
related to the C-SCAN disk scheduling algorithm. In C-LOOK, the disk arm,
instead of going to the end of the disk, goes only as far as the last
request to be serviced in front of the head and then jumps to the last
request at the other end. Thus, it also prevents the extra delay caused by
unnecessary traversal to the end of the disk.

Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190. The
Read/Write arm is at 50, and it is given that the disk arm should move
"towards the larger value".
So, the total overhead movement (total distance covered by the disk arm) is
calculated as:
=(190-50)+(190-16)+(43-16) =341
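The C-LOOK distance can be sketched in C: climb to the highest request, jump to the lowest request (the jump of `hi - lo` is counted as arm movement, matching the worked example), then climb to the largest request that was below the starting head position. The function name is our own.

```c
/* Total head movement under C-LOOK, moving toward larger cylinder numbers. */
int clook_distance(int head, const int *req, int n) {
    int hi = head, lo = head, maxbelow = -1;
    for (int i = 0; i < n; i++) {
        if (req[i] > hi) hi = req[i];
        if (req[i] < lo) lo = req[i];
        if (req[i] < head && req[i] > maxbelow) maxbelow = req[i];
    }
    int total = hi - head;                    /* up to the highest request */
    if (maxbelow >= 0)
        total += (hi - lo) + (maxbelow - lo); /* jump to lowest, then climb */
    return total;
}
```

For the example above, `clook_distance(50, (int[]){82,170,43,140,24,16,190}, 7)` yields 341.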
7. RSS: It stands for Random Scheduling, and as the name suggests, requests
are picked at random. It suits situations where scheduling involves random
attributes such as random processing times, random due dates, random
weights, and stochastic machine breakdowns, which is why it is usually used
for analysis and simulation.
8. LIFO: In the LIFO (Last In, First Out) algorithm, the newest jobs are
serviced before the existing ones; that is, the request that entered the
queue last is serviced first, and then the rest in the same order.
Advantages
 Maximizes locality and resource utilization
Disadvantages
 Can seem unfair to other requests; if new requests keep arriving, it
causes starvation of the old, existing ones.
9. N-STEP SCAN: It is also known as the N-STEP LOOK algorithm. In this
algorithm, a buffer is created for N requests, and all requests in the
buffer are serviced in one sweep. Once the buffer is full, new requests are
not added to it but are placed in a second buffer. When the first N
requests have been serviced, the next N requests are taken up, so every
request gets guaranteed service.
Advantages
 It eliminates the starvation of requests completely
10. FSCAN: This algorithm uses two sub-queues. During a scan, all requests
in the first queue are serviced, and new incoming requests are added to the
second queue. All new requests are kept on hold until the existing requests
in the first queue have been serviced.
Advantages
 FSCAN along with N-Step-SCAN prevents “arm stickiness” (phenomena
in I/O scheduling where the scheduling algorithm continues to service
requests at or near the current sector and thus prevents any seeking)
Each algorithm is unique in its own way. Overall Performance depends on
the number and type of requests.
Note: Average rotational latency is generally taken as half of the full
rotation time.
Exercise
1) Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time
the disk arm is at cylinder 100, and there is a queue of disk access
requests for cylinders 30, 85, 90, 100, 105, 110, 135, and 145. If Shortest
Seek Time First (SSTF) is being used for scheduling the disk access, the
request for cylinder 90 is serviced after servicing ____________ number of
requests. (GATE CS 2014) (A) 1 (B) 2 (C) 3 (D) 4

2) Consider an operating system capable of loading and executing a single
sequential user process at a time. The disk head scheduling algorithm used
is First Come First Served (FCFS). If FCFS is replaced by Shortest Seek
Time First (SSTF), claimed by the vendor to give 50% better benchmark
results, what is the expected improvement in the I/O performance of user
programs? (GATE CS 2004) (A) 50% (B) 40% (C) 25% (D) 0%

3) Suppose the following disk request sequence (track numbers) for a disk
with 100 tracks is given: 45, 20, 90, 10, 50, 60, 80, 25, 70. Assume that
the initial position of the R/W head is on track 50. The additional
distance that will be traversed by the R/W head when the Shortest Seek Time
First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm
(assuming that the SCAN algorithm moves towards 100 when it starts
execution) is _________ tracks. (A) 8 (B) 9 (C) 10 (D) 11

4) Consider a typical disk that rotates at 15000 rotations per minute (RPM)
and has a transfer rate of 50 × 10^6 bytes/sec. If the average seek time of
the disk is twice the average rotational delay and the controller's
transfer time is 10 times the disk transfer time, the average time (in
milliseconds) to read or write a 512-byte sector of the disk is
_____________.
This article is contributed by Ankit Mittal.

DISK MANAGEMENT OPERATING SYSTEM


The range of services and add-ons provided by modern operating systems is
constantly expanding, but four basic management functions are implemented
by all operating systems. These functions, each of which is dealt with in
more detail elsewhere, are:
 Process Management
 Memory Management
 File and Disk Management
 I/O System Management
Most computer systems employ secondary storage devices (magnetic disks,
tape, optical media, flash drives, etc.), which provide low-cost,
non-volatile storage for programs and data. Programs and the user data they
use are kept in named collections called files. The operating system is
responsible for allocating space for files on secondary storage media as
needed.
There is no guarantee that files, especially large ones, will be stored in
contiguous locations on a physical disk; it depends greatly on the amount
of space available. When the disk is nearly full, new files are more likely
to be recorded in multiple scattered locations. As far as the user is
concerned, however, the file abstraction provided by the operating system
hides the fact that the file is fragmented into multiple parts.
The operating system needs to track the disk location of every part of
every file. In some cases, this means tracking hundreds of thousands of
files and file fragments on a single physical disk. The operating system
must also be able to locate each file and perform read and write operations
on it whenever required. It is therefore responsible for configuring the
file system, ensuring the safety and reliability of read and write
operations to secondary storage, and keeping access times (the time
required to write data to or read data from secondary storage) within
reasonable bounds.
Disk management of the operating system includes:
 Disk Format
 Booting from disk
 Bad block recovery

The low-level format or physical format:

Low-level (physical) formatting divides the disk into sectors so that the
disk controller can read and write them. Each sector consists of a header,
a data area (typically 512 bytes), and a trailer containing an
error-correcting code (ECC). Some disks allow the sector size to be chosen,
for example 256, 512, or 1,024 bytes. If a disk is formatted with a larger
sector size, fewer sectors fit on each track, but fewer headers and
trailers are written, so more space is available for user data.

Before the operating system can use a disk to store files, it records its
own data structures on the disk. It does this in two steps:
1. It partitions the disk into one or more groups of cylinders. Each
partition is treated by the OS as a separate logical disk.
2. Logical formatting, i.e., creating the file system: the OS stores the
initial file-system data structures on the disk, including maps of free and
allocated space.
For efficiency, most file systems group blocks together into larger chunks
called clusters; disk I/O is done in blocks, while file I/O is done in
clusters.
Some operating systems give special programs the ability to use a disk
partition as a large sequential array of logical blocks, without any
file-system data structures. This array is sometimes called the raw disk,
and I/O to it is called raw I/O.

Boot block:

 The bootstrap program is required for a computer to start running when it
is powered up or rebooted. It initializes all components of the system,
from CPU registers to device controllers and the contents of main memory,
and then starts the operating system.
 To do this, the bootstrap program locates the OS kernel on disk, loads
the kernel into memory, and jumps to an initial address to begin
operating-system execution.
 The bootstrap is stored in Read-Only Memory (ROM), because ROM needs no
initialization and sits at a fixed location from which the processor can
begin executing at power-up or reset. Because ROM is read-only, it also
cannot be infected by a computer virus.
 The difficulty is that changing the bootstrap code would require changing
the ROM hardware chips. Therefore, most systems store only a small
bootstrap loader program in the boot ROM, whose job is to bring the full
bootstrap program in from disk.
 The full bootstrap program can be modified simply by writing a new
version to disk. It is stored at a fixed location on the disk known as the
"boot blocks".
 A disk that has a boot partition is called a boot disk or system disk.

Bad Blocks:

 Disks are error-prone because their moving parts operate with very small
tolerances.
 Most disks even come from the factory with bad blocks, which are handled
in a variety of ways.
 The controller maintains a list of bad blocks.
 The controller can be instructed to replace each bad sector logically
with one of the spare sectors. This scheme is known as sector sparing or
forwarding.
 A soft error triggers the data-recovery process.
 An unrecoverable hard error, however, may result in data loss and require
manual intervention.
 Failure of a disk can be:
1. Complete, meaning there is no option other than replacing the disk; a
backup of its contents must be restored onto a new disk.
2. One or more sectors becoming faulty.
3. Bad blocks present after manufacturing. Depending on the disk and
controller in use, these blocks are handled in different ways.
Disk management in operating systems involves organizing and maintaining
the data on a storage device, such as a hard disk drive or solid-state drive.
The main goal of disk management is to efficiently utilize the available
storage space and ensure data integrity and security.

Some common disk management techniques used in operating systems include:

1. Partitioning: This involves dividing a single physical disk into
multiple logical partitions. Each partition can be treated as a separate
storage device, allowing for better organization and management of data.
2. Formatting: This involves preparing a disk for use by creating a file
system on it. This process typically erases all existing data on the disk.
3. File system management: This involves managing the file systems used
by the operating system to store and access data on the disk. Different
file systems have different features and performance characteristics.
4. Disk space allocation: This involves allocating space on the disk for
storing files and directories. Some common methods of allocation include
contiguous allocation, linked allocation, and indexed allocation.
5. Disk defragmentation: Over time, as files are created and deleted, the
data on a disk can become fragmented, meaning that it is scattered
across the disk. Disk defragmentation involves rearranging the data on
the disk to improve performance.

Advantages of disk management include:

1. Improved organization and management of data.
2. Efficient use of available storage space.
3. Improved data integrity and security.
4. Improved performance through techniques such as defragmentation.

Disadvantages of disk management include:

1. Increased system overhead due to disk management tasks.
2. Increased complexity in managing multiple partitions and file systems.
3. Increased risk of data loss due to errors during disk management tasks.
Overall, disk management is an essential aspect of operating system
management and can greatly improve system performance and data integrity
when implemented properly.

Tape Organization
Magnetic drums, magnetic tape, and magnetic disks are types of magnetic
memory; they all store data using magnetic properties. Magnetic tape is
described briefly here.
Magnetic Tape memory :
Magnetic tape is a sequential-access memory consisting of a thin plastic
ribbon coated with magnetic oxide; only one side of the ribbon is used for
storing data. Because access is sequential, data read/write speed is slower
than for disks, but tape is highly reliable. A magnetic tape drive is
required for writing and reading data.
The width of the ribbon varies from 4 mm to 1 inch, and storage capacity
ranges from 100 MB to 200 GB.
Let’s see various advantages and disadvantages of Magnetic Tape memory.
Advantages :
1. These are inexpensive, i.e., low cost memories.
2. It provides backup or archival storage.
3. It can be used for large files.
4. It can be used for copying from disk files.
5. It is a reusable memory.
6. It is compact and easy to store on racks.
Disadvantages :
1. Access is sequential only; it does not allow random or direct access.
2. It requires care in storage, i.e., a dust-free environment with suitable
humidity.
3. Stored data cannot be easily updated or modified; it is difficult to
make updates in place.
