
UNIT-4

Demand paging:

Demand paging is similar to a paging system with swapping. With demand paging, a page is brought into main memory only when a reference is made to a location on that page.
Demand paging combines the features of simple paging and overlaying to implement virtual memory. Each page of a program is stored contiguously in the paging (swap) space on secondary storage. Once the page is in memory, it is accessed as in simple paging.

(Figure) Virtual memory organization: process, cache, main memory and disk storage.


Some form of hardware support is required to distinguish between those pages that are in
memory and those pages that are on the disk. The valid invalid bit scheme can be used for
this purpose. Each entry in the page table has at a minimum two fields. Page frame and
valid-invalid bit.

(Figure) Page table with valid/invalid bit: the logical memory pages (A-H) are mapped through a page table whose entries hold a frame number and a valid (V) / invalid (I) bit; only the valid pages are resident in physical memory frames.

When the valid bit is set, the associated page is legal and present in memory. The valid-invalid bit is checked on every reference; if the page is not in memory, a page fault occurs, transferring control to the page-fault routine in the operating system.
The disk address of the faulted page is usually provided in the file map table (FMT); this table is parallel to the page map table (PMT). Thus, when processing a page fault, the operating system uses the virtual page number provided by the mapping hardware to index the FMT and obtain the related disk address. For convenience and processing speed, some systems place the disk address of each out-of-memory page directly in the corresponding PMT entry.

Steps in handling a page fault

1. Check an internal table for this process to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, terminate the process. If it was valid but the page has not yet been brought in, page it in now.
3. Find a free frame.
4. Schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the illegal-address trap. The process can now access the page as though it had always been in memory.
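The six steps can be illustrated with a small, self-contained Python simulation. This is only a sketch under simplifying assumptions: the "disk" is a dictionary, and names such as PageTableEntry, free_frames and handle_page_fault are illustrative placeholders, not a real operating-system interface.

class PageTableEntry:
    def __init__(self):
        self.valid = False   # valid/invalid bit
        self.frame = None    # frame number once the page is resident

def handle_page_fault(page_table, disk, memory, free_frames, page_no):
    # Steps 1-2: check whether the reference is legal (here: the page exists on disk).
    if page_no not in disk:
        raise MemoryError(f"illegal reference to page {page_no}: terminate the process")
    # Step 3: find a free frame (a replacement algorithm would run here if none were free).
    frame = free_frames.pop()
    # Step 4: "schedule" the disk read into the newly allocated frame.
    memory[frame] = disk[page_no]
    # Step 5: update the tables so the page is now marked as resident.
    entry = page_table.setdefault(page_no, PageTableEntry())
    entry.valid, entry.frame = True, frame
    # Step 6: the faulting instruction would now be restarted.
    return frame

disk = {0: "page 0 data", 1: "page 1 data"}
page_table, memory, free_frames = {}, {}, [3, 2, 1, 0]
print(handle_page_fault(page_table, disk, memory, free_frames, 1))   # loads page 1 into frame 0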

Hardware support:
The hardware needed to support demand paging is the same as the hardware for paging and swapping.
1. Page table: the page table has the ability to mark an entry invalid through a valid-invalid bit.
2. Secondary memory: this memory holds those pages that are not present in main memory. It is usually a high-speed disk.

Software algorithm:
Demand-paged memory management provides tremendous flexibility for the operating system. It must interact with information management to access and store copies of the job's address space on secondary storage. The file map table (FMT) is used to store the information regarding a file. The FMT is not used by the hardware; it is usually accessed by software.

Performance of demand paging:

Demand paging can have a significant effect on the performance of a computer system. Let us calculate the effective access time for a demand-paged memory. Let p be the probability of a page fault (0 <= p <= 1).
Effective access time = (1-p)*ma + p*page fault time
where
p = page fault rate
ma = memory access time
The effective access time is directly proportional to the page-fault rate, so it is important to keep the page-fault rate low in a demand-paging system. Otherwise the effective access time increases, slowing process execution dramatically.
Example:
If the average page-fault service time is 25 milliseconds and the memory access time is 100 nanoseconds, calculate the effective access time.
Ans: effective access time = (1-p)*ma + p*page fault time
= (1-p)*100 + p*25,000,000
= 100 - 100p + 25,000,000p
= 100 + 24,999,900*p nanoseconds
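The same calculation can be written as a short Python helper; the constants below are just the numbers from the example (ma = 100 ns, page-fault service time = 25 ms = 25,000,000 ns).

def effective_access_time(p, ma_ns=100, fault_ns=25_000_000):
    # EAT = (1 - p) * memory access time + p * page-fault service time
    return (1 - p) * ma_ns + p * fault_ns      # = 100 + 24,999,900 * p for these constants

for p in (0.0, 0.001, 0.01):
    print(f"p = {p}: effective access time = {effective_access_time(p):,.1f} ns")

Even a fault rate of one fault per thousand accesses (p = 0.001) raises the effective access time from 100 ns to about 25,100 ns, which is why the page-fault rate must be kept very low.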

Advantages of demand paging

1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming: there is no limit on the degree of multiprogramming.

Disadvantages of demand paging

1. The number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques.
2. There is no explicit constraint on the size of a job's address space.

Comparison of demand paging with segmentation

Segmentation:
1. Segments may be of different sizes.
2. Segments can be shared.
3. It allows dynamic growth of segments.
4. The segment map table indicates the address of each segment in memory.
5. Segments are allocated to the program at compilation.
6. It provides virtual memory.

Demand paging:
1. Pages are of the same size.
2. Pages cannot be shared.
3. Page size is fixed.
4. The page map table keeps track of the pages in memory.
5. Pages are loaded into memory on demand.
6. It also provides virtual memory.

Page replacement:
 The page replacement policy deals with the selection of a page in memory to be replaced when a new page must be brought in.
 While a user process is executing, a page fault occurs. The hardware traps to the operating system, which examines its internal tables to see that this page fault is a genuine one rather than an illegal memory access.
 When all memory is in use, i.e. all the frames in main memory are occupied and it is necessary to bring in a new page to satisfy a page fault, the replacement policy is concerned with selecting the page currently in memory to be replaced.
 All of these policies have as their objective that the page removed should be the page least likely to be referenced in the near future.
(Figure) Need for page replacement: the logical memories and page tables of two users are mapped onto a physical memory in which every frame is already in use.
Working of the page replacement algorithm:
 Find the location of the desired page on the disk.
 Find a free frame:
1. If there is a free frame, use it.
2. If there is no free frame, use a page replacement algorithm to select a victim frame.
3. Write the victim page to the disk; change the page and frame tables accordingly.
* Read the desired page into the newly freed frame; change the page and frame tables.
* Restart the user process.

Memory reference string:

* The string of memory references made by a program is called a reference string. A succession of memory references made by a program executing on a computer with 1 MB of memory is given below in hex notation:
14489, 1448B, 14494, 14496, ...
* When analyzing page replacement algorithms, we are interested only in the page being referenced. Assuming a 256-byte page size, the referenced pages are obtained simply by omitting the two least significant hex digits; a longer run of references might then give, for example:
144, 144, A1, 144, 263, 144, ...
* The pattern of page references above can be compressed into a reference string for page replacement analysis by dropping consecutive repeats of the same page:
144, A1, 144, 263, 144, ...
* A reference string obtained in this way is used to illustrate most of the following replacement algorithms.
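As a sketch of this reduction, the Python snippet below divides each address by the 256-byte page size (equivalent to dropping the two least significant hex digits) and then collapses consecutive repeats. The last two addresses are made-up extras, added only so that the A1 page mentioned above also appears.

addresses = [0x14489, 0x1448B, 0x14494, 0x14496, 0x0A123, 0x14401]   # last two are hypothetical

pages = [addr // 256 for addr in addresses]                          # drop two hex digits
reference_string = [p for i, p in enumerate(pages) if i == 0 or p != pages[i - 1]]

print([format(p, "X") for p in pages])             # ['144', '144', '144', '144', 'A1', '144']
print([format(p, "X") for p in reference_string])  # ['144', 'A1', '144']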
Replacement algorithms are of the following types:
* First in, first out (FIFO)
* Least recently used (LRU)
* Optimal

FIFO page replacement:

* FIFO is one of the simplest methods. The FIFO page replacement algorithm selects the page that has been in memory the longest.
* When a page must be replaced, the oldest page is chosen. When a page is brought into memory, it is inserted at the tail of the queue.
* The FIFO page replacement algorithm is easy to understand and program, but its performance is not always good. FIFO is not the first choice of operating system designers for a page replacement algorithm.
Let us consider the reference string 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7.
The number of page frames is 3, and initially all three frames are empty. The first three references (0, 1, 2) cause page faults and the pages are brought into these empty frames. The next reference (3) replaces page 0, because page 0 was brought in first.

Reference string:
Frame 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
0 0 0 0 3 3 3 2 2 2 1 1 1 4 4 4 7
1 1 1 1 0 0 0 3 3 3 2 2 2 5 5 5
2 2 2 2 1 1 1 0 0 0 3 3 3 6 6
Page fault * * * * * * * * * * * * * * * *

Page faults:
This example incurs 16 page faults with the FIFO algorithm.

FIFO algorithm with four page frames


Frame 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 4 4
1 1 1 1 1 1 1 1 1 1 1 1 1 5 5 5
2 2 2 2 2 2 2 2 2 2 2 2 2 6 6
3 3 3 3 3 3 3 3 3 3 3 3 3 7
Page fault * * * * * * * *
Number of page faults is 8.
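Both FIFO runs above can be reproduced with a few lines of Python; the function below is a straightforward simulation of the algorithm described in this section.

from collections import deque

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                              # page already resident: no fault
        faults += 1
        if len(frames) == n_frames:               # no free frame: evict the oldest page
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)                        # newly loaded page goes to the tail
    return faults

refs = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
print(fifo_faults(refs, 3))   # 16 page faults
print(fifo_faults(refs, 4))   # 8 page faults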

Belady's anomaly:
For some page replacement algorithms, the page fault rate may increase as the number of allocated frames increases. This unexpected behaviour is known as Belady's anomaly.

LRU page replacement:

The least recently used (LRU) policy replaces the page in memory that has not been referenced for the longest time.
The LRU algorithm generally performs better than FIFO. Let us apply the LRU algorithm to the same reference string with 3 frames:
0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7

Reference string:

Frame 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
0 0 0 0 3 3 3 2 2 2 1 1 1 4 4 4 7
1 1 1 1 0 0 0 3 3 3 2 2 2 5 5 5
2 2 2 2 1 1 1 0 0 0 3 3 3 6 6
Page fault * * * * * * * * * * * * * * * *

Number of page faults=16

Consider same reference string with 4 page frames.

frame 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 4 4
1 1 1 1 1 1 1 1 1 1 1 1 1 5 5 5
2 2 2 2 2 2 2 2 2 2 2 2 2 6 6
3 3 3 3 3 3 3 3 3 3 3 3 3 7
Page fault * * * * * * * *

Number of page faults=8
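The LRU counts can be checked with a similar Python simulation; here the resident pages are kept in a list ordered by recency of use (most recently used last).

def lru_faults(refs, n_frames):
    frames, faults = [], 0                    # least recently used page sits at index 0
    for page in refs:
        if page in frames:
            frames.remove(page)               # refresh the page's recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)                 # evict the least recently used page
        frames.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
print(lru_faults(refs, 3))   # 16 page faults
print(lru_faults(refs, 4))   # 8 page faults

For this particular reference string LRU gives the same fault counts as FIFO, because with three frames every reference misses and with four frames the first twelve references fit entirely in memory.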


Implementation of the LRU algorithm imposes too much overhead to be handled by software alone. A stack is one solution for implementing the LRU algorithm; it is best implemented with a doubly linked list having head and tail pointers.
A second method is to use a counter: the counter is incremented on every memory reference, and the entry for the referenced page records the current counter value.

LRU approximation:
A true LRU page replacement algorithm would have to update the page-removal status information after every page reference; if this updating is done by software, the cost increases greatly.
Instead, a reference bit is associated with each memory block, and this bit is automatically set to 1 by the hardware whenever that page is referenced. The single reference bit per page, sampled at each clock interval, can be used to approximate LRU removal.
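The notes name only the reference bit itself; one common way to use it is the clock (second-chance) scheme, sketched below as a hypothetical Python simulation rather than the exact LRU algorithm.

def clock_faults(refs, n_frames):
    frames = [None] * n_frames                # frame contents
    ref_bit = [0] * n_frames                  # one reference bit per frame
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # the hardware would set this bit on a reference
            continue
        faults += 1
        while ref_bit[hand] == 1:             # give recently referenced frames a second chance
            ref_bit[hand] = 0
            hand = (hand + 1) % n_frames
        frames[hand], ref_bit[hand] = page, 1
        hand = (hand + 1) % n_frames
    return faults

refs = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
print(clock_faults(refs, 3))   # 16 page faults for this reference string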

Optimal page replacement:

The optimal policy selects for replacement the page for which the time to the next reference is the longest.
This algorithm is impossible to implement in practice, because it would require the operating system to have perfect knowledge of future events.
Let us consider the same reference string with the number of frames equal to 3:
0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7
Reference string
Frame 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
0 0 0 0 0 0 0 0 0 0 1 1 1 4 4 4 7
1 1 1 1 1 1 2 2 2 2 2 2 5 5 5
2 2 3 3 3 3 3 3 3 3 3 3 3 6 6
Page fault * * * * * * * * * *

Number of page fault=10

With page frame =4 for same reference string

Frame 0 1 2 3 0 1 2 3 0 1 2 3 4 5 6 7
0 0 0 0 0 0 0 0 0 0 0 0 0 4 4 4 4
1 1 1 1 1 1 1 1 1 1 1 1 1 5 5 5
2 2 2 2 2 2 2 2 2 2 2 2 2 6 6
3 3 3 3 3 3 3 3 3 3 3 3 3 7
Page fault * * * * * * * *
Number of page fault = 8
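The optimal policy can also be simulated, since the whole reference string is known in advance; the victim chosen below is the resident page whose next use lies farthest in the future (or that is never used again).

def opt_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            future = refs[i + 1:]
            # Evict the page used farthest in the future; pages never used again rank last.
            victim = max(frames,
                         key=lambda p: future.index(p) if p in future else len(future) + 1)
            frames.remove(victim)
        frames.append(page)
    return faults

refs = [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 4, 5, 6, 7]
print(opt_faults(refs, 3))   # 10 page faults
print(opt_faults(refs, 4))   # 8 page faults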
Thrashing:
The phenomenon of excessively moving pages back and forth between memory and secondary storage is called thrashing.
It consumes a great deal of computing effort but accomplishes very little useful work. A process is thrashing if it is spending more time paging than executing. Thrashing results in severe performance problems.
The OS monitors CPU utilization. If CPU utilization is too low, the degree of multiprogramming is increased by introducing a new process to the system; the CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming as a result.
As the degree of multiprogramming increases, CPU utilization also increases, although more slowly, until a maximum is reached. If the degree of multiprogramming is increased even further, thrashing sets in and CPU utilization drops sharply.

We can limit the effects of thrashing by using a local replacement algorithm. To prevent thrashing, we must provide a process with as many frames as it needs.

Locality of reference:
The locality model states that as a process executes, it moves from locality to locality. A locality is a set of pages that are actively used together. A program is generally composed of several different localities, which may overlap. The ordered list of page numbers accessed by a program is called its reference string.
Localities are of two types:
* Spatial locality
* Temporal locality

Spatial locality:
Spatial locality refers to the fact that if a memory location is accessed, it is likely that a location near it will be accessed in the next instruction. Local variables are typically allocated in adjacent memory locations, so accessing two local variables typically results in references to memory locations in close proximity to each other.
Temporal locality:
If a memory location has been referenced, there is a good chance it will be referenced again within a short period of time. This is called temporal locality.

Working set model:

The working set model uses the current memory requirement to determine the number of page frames to allocate to a process. An informal definition of the working set is the collection of pages that a process is currently working with, and which must be resident if the process is to avoid thrashing.
The idea is to use the recent needs of a process to predict its future needs; the working set is an approximation of the program's locality.
For example, if the working set window is 10 memory references, then for the reference string below the working set at time t1 is {1,2,5,6,7}, and by time t2 the working set has changed to {3,4}:
... 2 6 1 5 7 7 7 7 5 1 6 2 3 4 1 2 3 4 4 4 3 4 4 4 4 3 ...

WS(t1) = {1,2,5,6,7}        WS(t2) = {3,4}
As the working set changes, corresponding changes have to be made in the balance set. The most important property of the working set is its size. The working set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible. The difficulty with the working set model is keeping track of the working set, because it is a moving window.
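The working set for a given time and window can be computed directly from the reference string. The sketch below uses the reference string from the example; the window positions t1 = 9 and t2 = 25 are assumptions chosen so that the windows line up with the two sets quoted above.

def working_set(refs, t, delta):
    # WS(t, delta): the set of distinct pages referenced in the last delta references up to time t.
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [2, 6, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4, 4, 4, 3, 4, 4, 4, 4, 3]
print(working_set(refs, t=9, delta=10))    # {1, 2, 5, 6, 7}  -> WS(t1)
print(working_set(refs, t=25, delta=10))   # {3, 4}           -> WS(t2)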
File concept:
 A file is a collection of similar records. The file is treated as a single entity by users and applications and may be referred to by name.
 A file is a container for information.
 A file may be free form, such as a text file. In general, a file is a sequence of bits, bytes, lines or records.
 A file has a certain defined structure according to its type:
* text file
* source file
* executable file
* object file
 A text file is a sequence of characters organized into lines. A source file is a sequence of subroutines and functions. An object file is a sequence of bytes organized into blocks understandable by the system's linker. An executable file is a series of code sections that the loader can bring into memory and execute.

File attributes:
 File attributes vary from one OS to another. The common file attributes are:
1. name
2. identifier
3. type
4. location
5. size
6. protection
7. time, date and user identification
 The symbolic file name is the only information kept in human-readable form. The identifier is a unique tag which identifies the file within the file system; it is usually a number. The location information is a pointer to a device and to the location of the file on that device. The size attribute is one of the important factors. The protection attribute is a fundamental property of the file.
 Access-control information determines who can do reading, writing, executing and so on. Time, date and user identification may be kept for creation, last modification and last use; these data can be useful for protection, security and usage monitoring.
 All information about files is kept in the directory structure, which is stored on the secondary storage device.

File operations:
 A file is an abstract data type. For operations on files, the operating system provides system calls for creating, deleting, reading, etc. The basic operations on files are:
1. creating a file
2. writing a file
3. reading a file
4. deleting a file
5. truncating a file
6. repositioning within a file

Creating a file:
To create a file, space in the file system is required. After creating the file, an entry for it is made in the directory. The directory entry records the name of the file and its location in the file system.

Writing a file:
A system call is used for writing into a file. It is required to specify the name of the file and the information to be written to the file. The system searches the directory to find the location of the file.

Reading a file:
To read a file, a system call is used. It requires the name of the file and a memory address. Again the directory is searched for the associated directory entry, and the system needs to keep a read pointer to the location in the file where the next read is to take place.

Deleting a file:
The system searches the directory for the file to be deleted. If the directory entry is found, it releases all the file's space, so that the free space can be reused by another file.

Truncating a file:
The user may want to erase the contents of a file but keep its attributes. Rather than forcing the user to delete the file and then recreate it, the truncate operation allows all attributes to remain unchanged except for the file length.

Repositioning within a file:
The directory is searched for the appropriate entry, and the current file position is set to a given value. Repositioning within a file does not need to involve any actual I/O. This file operation is also known as a file seek.
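In Python, these basic operations map onto ordinary library calls; the short example below exercises them on a throw-away file named demo.txt (the file name is arbitrary).

import os

# Create and write: the OS allocates space and adds a directory entry.
with open("demo.txt", "w") as f:
    f.write("hello, file system\n")

# Read, then reposition within the file (a "file seek") and read again.
with open("demo.txt", "r") as f:
    print(f.read())
    f.seek(0)                 # set the current-file-position pointer back to the start
    print(f.read(5))          # prints "hello"

# Truncate: erase the contents but keep the file and its other attributes.
with open("demo.txt", "r+") as f:
    f.truncate(0)

# Delete: remove the directory entry and free the file's space.
os.remove("demo.txt")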

File type:
A common technique for implementing file types is to include the type as part of the file name. The name is split into two parts: a name and an extension. The following table gives file types with their usual extensions and functions.

FILE TYPE        USUAL EXTENSION             FUNCTION
Executable       exe, com, bin or none       Ready-to-run machine-language program
Object           obj, o                      Compiled, machine language, not linked
Source code      c, cc, java, pas, asm, a    Source code in various languages
Batch            bat, sh                     Commands to the command interpreter
Text             txt, doc                    Textual data, documents
Word processor   wp, tex, rtf, doc           Various word-processor formats
Library          lib, a, so, dll             Libraries of routines for programmers
Print or view    ps, pdf, jpg                ASCII or binary file in a format for printing or viewing
Archive          arc, zip, tar               Related files grouped into one file, sometimes compressed, for archiving or storage
Multimedia       mpeg, mov, rm               Binary file containing audio or A/V information

DIRECT ACCESS:-
 Direct access allows random access to any file block.
 This method is based on a disk model of a file.
 A file is made up of fixed-length logical records.
 It allows programs to read and write records rapidly in no particular order.
 In a direct-access file there are no restrictions on reading or writing the file in any sequence.
 Not all operating systems support both sequential and direct access for files.
 Some operating systems use sequential access and some operating systems use direct access.
 It is easy to simulate sequential access on a direct-access file.

INDEXED FILE:-
In the general indexed file, the concepts of sequentiality and a single key are abandoned.
ACCESS METHODS
The information in the file can be accessed in several ways.
Different types of file access methods are
1. Sequential access
2. Direct access

SEQUENTIAL ACCESS:-
 Sequential access is the simplest method. Information in the file is accessed sequentially, i.e. one record after the other.
 Normally, read and write operations are done on the file.
 A read operation reads the next portion of the file and automatically advances a file pointer, which tracks the I/O location.
 A write operation appends to the end of the file, and such a file can be reset to the beginning.

There are two types of indexes:

1. An exhaustive index
2. A partial index
Exhaustive index:-
An exhaustive index contains one entry for every record in the main file. The index is itself organized as a sequential file for ease of searching.

Partial index:-
A partial index contains entries for only some of the records; with records of variable length, some records will not contain all fields.

DIRECTORY STRUCTURE
A directory contains information about files, including attributes, location and ownership. The operating system manages this information. The directory is itself a file, owned by the operating system and accessible by various file-management routines.

The simplest form of structure for a directory is a list of entries, one for each file. Directory entries are added by the creation of files and of aliases for existing files.
To understand the requirements for a directory structure, it is useful to consider the types of operations that may be performed on the directory:
 Search
 Create a file
 Delete a file
 Rename a file
 List directory

Search:-
The directory structure is searched to find a particular file in the directory. Files have symbolic names, and similar names may indicate a relationship between files.
Create a file:-
When a new file is created, an entry must be added to the directory.
Delete a file:-
When a file is deleted, an entry must be removed from the directory.
Rename a file:-
The name of a file must be changeable when the content or use of the file changes. Renaming may also allow a file's position within the directory structure to be changed.
List directory:-
All or a portion of the directory may be requested. The request is made by a user and results in a listing of all files owned by that user, plus some of the attributes of each file.
Different types of directory structure are
 Single level directory
 Two level directory
 Tree structured directory
 Acyclic graph directories
 General graph directory

Tree Structured Directories:

The MS-DOS system uses a tree-structured directory. It allows users to create their own subdirectories and to organize their files accordingly. A subdirectory contains a set of files or subdirectories.
All the directories have the same internal format. One bit in each directory entry defines the entry as a file (0) or as a subdirectory (1). Special system calls are used to create and delete directories.
Single level directory:-
A single-level directory structure has only one directory; all files are contained in the same directory. A single-level directory structure is easy to implement and maintain.

Two Level Directory:-

In a two-level directory, each user has his own directory, called a user file directory (UFD). Each user file directory has a similar structure. When a user refers to a particular file, only his own UFD is searched. Different users may have files with the same name, as long as all the file names within each UFD are unique.
Thus one user cannot accidentally delete another user's file that has the same name.
Acyclic Graph Directories:-

 It allows directories to have shared subdirectories and files.
 The same file or directory may be in two different directories.
 A graph with no cycles is a generalization of the tree-structured directory scheme.
 Shared files and subdirectories can be implemented by using links.
 A link is effectively a pointer to another file or subdirectory.
 A link can be implemented as an absolute or a relative path name.

Protection
When information is kept in a computer system, we want to keep it safe from physical damage (reliability) and improper access (protection).
File systems can be damaged by hardware problems (such as errors in reading or writing), power surges or failures, head crashes, dirt, temperature extremes and vandalism. Files may be deleted accidentally. Bugs in the file-system software can also cause file contents to be lost.
Types of Access
Protection mechanisms provide controlled access by limiting the types of file access that can be made. Access is permitted or denied depending on several factors, one of which is the type of access requested. Several different types of operations may be controlled:
 Read: read from the file.
 Write: write or rewrite the file.
 Execute: load the file into memory and execute it.
 Append: write new information at the end of the file.
 Delete: delete the file and free its space for possible reuse.
 List: list the name and attributes of the file.

Protection is provided at only the lower level; for instance, copying a file may be implemented simply by a sequence of read requests. In this case a user with read access can also cause the file to be copied, printed and so on.

Access Control
The most common approach to the protection problem is to make access dependent on the identity of the user. Various users may need different types of access to a file or directory. The most general scheme to implement identity-dependent access is to associate with each file and directory an access control list (ACL).
The advantage of this approach is that it enables complex access methodologies. The main problem with access lists is their length: if we want to allow everyone to read a file, we must list all users with read access.
This technique has two undesirable consequences:
 Constructing such a list may be a tedious and unrewarding task, especially if we do not know in advance the list of users in the system.
 The directory entry, previously of fixed size, now needs to be of variable size, resulting in more complicated space management.

These problems can be resolved by use of a condensed version of the access list. To condense the length of the access control list, many systems recognize three classifications of users in connection with each file:
 Owner – the user who created the file is the owner.
 Group – a set of users who are sharing the file and need similar access is a group, or work group.
 Universe – all other users in the system constitute the universe.

Other Protection Approaches

Another approach to the protection problem is to associate a password with each file. Just as access to the computer system is often controlled by a password, access to each file can be controlled by a password.
Limited file protection is also currently available on single-user systems such as MS-DOS and the Macintosh operating system. These operating systems, when originally designed, essentially ignored the protection problem.
Designing a feature into a new operating system is almost always easier than adding a feature to an existing one; such updates are usually less effective and are not seamless.
In a multilevel directory structure, we need to protect not only individual files but also collections of files in a subdirectory; that is, we need to provide a mechanism for directory protection.
Listing the contents of a directory must be a protected operation. Therefore, if a path name refers to a file in a directory, the user must be allowed access to both the directory and the file.
An Example: UNIX
In the UNIX system, directory protection is handled similarly to file protection. That is, associated with each file and subdirectory are three fields (owner, group and universe), each consisting of the 3 bits rwx.
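These three rwx fields can be inspected from Python's standard library; the path used below is just an example of a file that normally exists on a UNIX system.

import os
import stat

st = os.stat("/etc/passwd")                 # any existing file or directory
print(stat.filemode(st.st_mode))            # e.g. "-rw-r--r--": owner rw-, group r--, universe r--

# The same three 3-bit fields can be read out of the mode word directly.
owner = (st.st_mode >> 6) & 0o7
group = (st.st_mode >> 3) & 0o7
other = st.st_mode & 0o7
print(f"owner={owner:03b} group={group:03b} universe={other:03b}")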

File System Structure

Objectives of the file management system:

1. To meet the data management needs and requirements of the user
2. To provide I/O support for a variety of types of storage devices
3. To provide a standardized set of I/O interface routines
4. To minimize or eliminate the potential for lost or destroyed data
5. To guarantee that the data in the files are valid

 Users and application programs interact with the file system


 The file system must identify and locate the selected file before any operation is performed on it; only authorized users can access the file.
 Users may create or delete a file and perform operations on files by means of commands.

File System Structure:

 The disk provides the bulk of secondary storage on which a file system is maintained. It has two characteristics:
 It can be rewritten in place, i.e. read, modify and write operations are possible on the same block.
 It can directly access any given block of information on the disk.
 The concept of a file system allows data to be stored, located and retrieved efficiently and conveniently. A file system is organized into different levels.
 In this layered design, I/O control is the lowest level; it consists of device drivers and interrupt handlers that transfer information between main memory and the disk system. A device driver's output consists of low-level, hardware-specific instructions that are used by the hardware controller.
 The basic file system issues generic commands to the device driver; the device driver then reads or writes physical blocks on the disk. The file-organization module knows about files and their logical and physical blocks, and translates logical block addresses into physical block addresses for the basic file system to transfer.
 The logical file system manages metadata information. This includes all of the file-system structure excluding the actual data.
 It also manages the directory structure to provide the file-organization module with the information it needs, given a symbolic file name. It maintains the file control block (FCB), which contains information about the file.

Allocation methods: Secondary storage management

 The space-allocation method is often closely related to the efficiency of file accessing and of the logical-to-physical mapping of disk addresses.
 A good space-allocation strategy must take into consideration several related and interacting factors, such as:
1. processing speed of sequential access to files, random access to files, and allocation and deallocation of blocks;
2. disk space utilization;
3. ability to make use of multi-sector and multi-track transfers;
4. main memory requirements of a given algorithm.
 Three major methods of allocating disk space are in wide use:
1. Contiguous
2. Linked
3. Indexed

Contiguous Allocation
 A single contiguous set of blocks is allocated to a file at the time of file creation.
 The file allocation table needs just a single entry for each file, showing the starting block and the length of the file.
 Disk addresses define a linear ordering on the disk.
 If the file is n blocks long and starts at location b, then it occupies blocks b, b+1, b+2, ..., b+n-1. The file-allocation-table entry for each file indicates the address of the starting block and the length of the area allocated for this file (see the sketch after this list).
 It is easy to retrieve a single block, and both sequential and direct access can be supported by contiguous allocation.
 Contiguous allocation suffers from external fragmentation. Compaction is used to solve the problem of external fragmentation.
 A second problem with contiguous allocation is pre-allocation: it is necessary to declare the size of the file at the time of creation.
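A tiny Python sketch of the logical-to-physical mapping under contiguous allocation is shown below; the table entry for "report.txt" (start block 14, length 3 blocks) is a made-up example.

# Contiguous allocation: the file allocation table stores only (start, length) per file,
# so logical block i of a file maps directly to physical block start + i.
allocation_table = {"report.txt": (14, 3)}        # hypothetical entry

def contiguous_block(name, logical_block):
    start, length = allocation_table[name]
    if not 0 <= logical_block < length:
        raise IndexError("logical block lies outside the file")
    return start + logical_block

print(contiguous_block("report.txt", 0))   # 14
print(contiguous_block("report.txt", 2))   # 16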

Characteristics of contiguous file allocation:-

1. It supports variable-size portions.
2. Pre-allocation is required.
3. It requires only a single entry for each file.
4. Allocation frequency is only once.

Advantages
1. It supports variable-size portions.
2. It is easy to retrieve a single block.
3. Accessing a file is easy.
4. It provides good performance.

Disadvantages
1. It suffers from external fragmentation.
2. Pre-allocation is required.

Linked Allocation:-
 Linked allocation solves the problems of contiguous allocation. Allocation is on the basis of individual blocks: each block contains a pointer to the next block in the chain.
 The directory (file allocation table) contains a pointer to the first and last blocks of the file (see the sketch after this list).

 To create a new file, simply create a new entry in the directory; with linked allocation, each directory entry has a pointer to the first disk block of the file.
 The size of a file does not need to be declared when the file is created. A file can continue to grow as long as free blocks are available, and it is never necessary to compact disk space.
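The pointer chasing that linked allocation requires can be sketched as follows; the chain 9 -> 16 -> 1 -> 10 and the file name are invented for illustration.

# Linked allocation: each block stores a pointer to the next block of the file,
# so reaching logical block i means following i pointers from the first block.
next_block = {9: 16, 16: 1, 1: 10, 10: None}          # hypothetical on-disk pointers
directory = {"notes.txt": {"first": 9, "last": 10}}   # directory keeps first and last blocks

def linked_block(name, logical_block):
    block = directory[name]["first"]
    for _ in range(logical_block):                    # sequential chase: no direct access
        block = next_block[block]
        if block is None:
            raise IndexError("logical block lies outside the file")
    return block

print(linked_block("notes.txt", 0))   # 9
print(linked_block("notes.txt", 3))   # 10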

Characteristics:-
1. It supports fixed-size portions.
2. Pre-allocation is possible.
3. The file allocation table holds one entry per file.
4. Allocation frequency is low to high.

Advantages:-
1. There is no external fragmentation.
2. It is never necessary to compact disk space.
3. Pre-allocation is not required.

Disadvantages:-
1. Files can be accessed only sequentially.
2. Space is required for the pointers.
3. Reliability is not good.
4. It cannot support direct access.

Indexed Allocation:-
 The file allocation table contains a separate one-level index for each file; the index has one entry for each portion allocated to the file.
 The i-th entry in the index block points to the i-th block of the file. The directory contains the address of the index block (see the sketch after this section).
 Allocation may be on the basis of either fixed-size blocks or variable-size portions. When the file is created, all pointers in the index block are set to nil.
 Allocation by blocks eliminates external fragmentation, whereas allocation by variable-size portions improves locality. The pointer overhead of the index block is generally greater than the pointer overhead of linked allocation.
Advantages:-
1. It supports sequential and direct access.
2. No external fragmentation.
3. Faster than the other two methods.
4. It supports fixed- and variable-size blocks.

Disadvantages:-
1. Indexed allocation does suffer from wasted space.
2. Pointer overhead is generally greater.
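A minimal Python sketch of an indexed lookup is given below; the index block number, its contents and the file name are invented for illustration.

# Indexed allocation: the directory stores only the address of the index block;
# the i-th entry of the index block points to the i-th data block (nil if unallocated).
index_blocks = {7: [25, 3, 19, None, None]}   # hypothetical index block stored in disk block 7
directory = {"data.bin": 7}

def indexed_block(name, logical_block):
    index = index_blocks[directory[name]]
    block = index[logical_block]
    if block is None:
        raise IndexError("logical block not allocated")
    return block

print(indexed_block("data.bin", 0))   # 25
print(indexed_block("data.bin", 2))   # 19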

Free Space Management

 To keep track of free disk space, the system maintains a free-space list. It records all free disk blocks, i.e. those not allocated. Four techniques are in common use:
 Bit vector
 Linked list
 Grouping
 Counting

Bit Vector:-
This method uses a vector containing one bit for each block on the disk. Each entry of 0 corresponds to a free block and each 1 corresponds to a block in use.
The main advantage of this method is that it is relatively easy to find one free block or a contiguous group of free blocks. A second advantage is that the bit vector is small and can be kept in main memory.
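A bit-vector search can be sketched in a few lines of Python; the 10-block map below is invented, and it follows the convention used in these notes (0 = free, 1 = in use), which some texts invert.

bitmap = [1, 1, 0, 1, 0, 0, 0, 1, 1, 0]        # hypothetical map for a 10-block disk

def first_free_block(bitmap):
    for i, bit in enumerate(bitmap):
        if bit == 0:                           # 0 marks a free block in this convention
            return i
    return None                                # no free block available

def allocate_block(bitmap):
    i = first_free_block(bitmap)
    if i is not None:
        bitmap[i] = 1                          # mark the block as in use
    return i

print(allocate_block(bitmap))   # 2
print(allocate_block(bitmap))   # 4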
Linked List:-
In the linked-list approach, all free disk blocks are linked together, keeping a pointer to the first free block in a special location on the disk and caching it in memory.
This method has negligible space overhead because there is no need for a disk allocation table, merely a pointer to the beginning of the chain and the length of the first portion. This method is suited to all allocation methods.
Grouping:-
Grouping stores the addresses of n free blocks in the first free block. The first n-1 of these blocks are actually free.
The last block contains the addresses of another n free blocks. In this way the addresses of a large number of free blocks can be found quickly.

Counting:-
Counting keeps the address of the first free block and the number n of contiguous free blocks that follow the first block. Each entry in the free-space list then consists of a disk address and a count.
