OS Exam
The operating system is mainly used to control the hardware and coordinate its use among the various
application programs for different users.
An OS is mainly designed to serve two basic purposes:
1. The operating system controls the allocation and use of the computing system's resources
among the various users and tasks.
2. It provides an interface between the computer hardware and the programmer that simplifies
and makes feasible the coding, creation, and debugging of application programs.
Two Views of Operating System
1. User's View
2. System View
Operating System: User View
The user view of the computer refers to the interface being used. Such systems are designed for one
user to monopolize its resources, to maximize the work that the user is performing. In these cases, the
operating system is designed mostly for ease of use, with some attention paid to performance, and none
paid to resource utilization.
Operating System: System View
The operating system can be viewed as a resource allocator also. A computer system consists of many
resources like - hardware and software - that must be managed efficiently. The operating system acts as
the manager of the resources, decides between conflicting requests, controls the execution of
programs, etc.
Operating System Management Tasks
1. Process management, which involves ordering tasks and breaking them into manageable
pieces before they go to the CPU.
2. Memory management which coordinates data to and from RAM (random-access memory) and
determines the necessity for virtual memory.
3. Device management provides an interface between connected devices.
4. Storage management which directs permanent data storage.
5. An application interface that allows standard communication between software and your computer.
6. The user interface allows you to communicate with your computer.
Types of Operating System
1. Simple Batch System
2. Multiprogramming Batch System
3. Multiprocessor System
4. Desktop System
5. Distributed Operating System
6. Clustered System
7. Realtime Operating System
8. Handheld System
Functions of Operating System
1. It boots the computer
2. It performs basic computer tasks e.g. managing the various peripheral devices e.g. mouse, keyboard
3. It provides a user interface, e.g. command line, graphical user interface (GUI)
Advantages of Operating System
• The operating system helps to improve the efficiency of the work and helps to save a lot of time by
reducing the complexity.
• The different components of a system are independent of each other, thus failure of one component
does not affect the functioning of another.
• The operating system mainly acts as an interface between the hardware and the software.
• Users can easily access the hardware without writing large programs.
• With the help of an Operating system, sharing data becomes easier with a large number of users.
• We can easily install games and applications on the operating system and run them.
• An operating system can be updated easily from time to time without any problems.
Disadvantages of an Operating system
• Expensive: There are open-source platforms like Linux, and users can use free operating systems, but
these are generally somewhat harder to run than others. Operating systems like Microsoft Windows,
with GUI functionality and other built-in features, are expensive.
• Virus threat: Operating systems are open to virus attacks, and users sometimes download malicious
software packages that halt the functioning of the operating system and slow it down.
• Complexity: Some operating systems are complex in nature because the language used to build
them is not clear and well defined. If an issue occurs in the operating system, the user may be
unable to resolve it.
• System failure: The operating system is the heart of the computer system; if it stops functioning
for any reason, the whole system crashes.
Examples of Operating System
• Windows
• Android
• iOS
• Mac OS
• Linux
• Windows Phone OS
• Chrome OS
Operating System as Extended Machine
• At the machine level, the structure of a computer system is complicated to program, especially for
input and output. Programmers do not want to deal with hardware directly; they mainly focus on
implementing software. Therefore, a level of abstraction is maintained.
• Operating systems provide a layer of abstraction for using the disk, such as files.
• This level of abstraction allows a program to create, write, and read files, without dealing with the
details of how the hardware actually works.
• The level of abstraction is the key to managing the complexity.
• Good abstractions turn an impossible task into two manageable tasks.
• The first is to define and implement the abstractions.
• The second is to solve the problem at hand.
• Operating system provides abstractions to application programs in a top down view.
For example − It is easier to deal with photos, emails, songs, and Web pages than with the details of
these files on disks.
What is swapping:
Swapping is a simple memory-management technique used by the operating system to increase the
utilization of the processor: some blocked processes are moved from main memory to secondary
memory (hard disk), forming a queue of temporarily suspended processes, and execution
continues with the newly arrived process.
Necessary Conditions targeted by Deadlock Prevention:
1. Mutual Exclusion Condition
2. Hold and Wait Condition
3. No Pre-emption condition
4. Circular Wait Condition.
Options for breaking a Deadlock:
Simply abort one or more process to break the circular wait.
Preempt some resources from one or more of the deadlocked processes.
Algorithms for Deadlock avoidance:
1) Resource-Allocation Graph Algorithm
2) Banker's Algorithm
3) Safety Algorithm
4) Resource-Request Algorithm
Solution for Critical-Section Problem must satisfy:
1. Mutual Exclusion.
2. Progress
3. Bounded Waiting
Characteristics of Deadlock:
a) A resource may be acquired exclusively by only one process at a time (Mutual Exclusion Condition).
b) A process that has acquired an exclusive resource may hold it while waiting to obtain other resources
(Hold and Wait Condition).
c) Once a process has obtained a resource, the system cannot remove the resource from the process's
control until the process has finished using the resource (No pre-emption condition).
d) And two or more processes are locked in a "circular chain" in which each process in the chain is
waiting for one or more resources that the next process in the chain is holding (circular-wait condition).
Preventing a deadlock:
In deadlock prevention our concern is to condition a system to remove any possibility of deadlocks
occurring. Havender observed that a deadlock cannot occur if a system denies any of the four necessary
conditions. The first necessary condition, namely that processes claim exclusive use of the resources
they require, is not one that we want to break, because we specifically want to allow dedicated (i.e.,
serially reusable) resources. Denying the "wait-for" condition requires that all of the resources a process
needs to complete its task be requested at once, which can result in substantial resource
underutilization and raises concerns over how to charge for resources. Denying the "no-preemption"
condition can be costly, because processes lose work when their resources are preempted. Denying the
"circular-wait" condition uses a linear ordering of resources to prevent deadlock.
This strategy can increase efficiency over the other strategies, but not without difficulties.
a) Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available. For
i = 1, …, n: if Allocation_i ≠ 0 then Finish[i] = false, else Finish[i] = true.
b) Find an i such that both a. Finish[i] = false and b. Request_i <= Work.
If no such i exists, go to step d.
c) Work = Work + Allocation_i; Finish[i] = true.
Go to step b.
d) If Finish[i] = false for some i, then the system is in a deadlocked state.
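The steps above can be sketched directly in code. This is a minimal simulation of the deadlock detection algorithm; the example matrices at the bottom are illustrative assumptions, not taken from the text.

```python
# Deadlock detection following steps (a)-(d) above.

def detect_deadlock(available, allocation, request):
    n = len(allocation)          # number of processes
    work = list(available)       # step (a): Work = Available
    # Finish[i] = False only for processes that hold some resource
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:              # steps (b) and (c)
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j]
                                     for j in range(len(work))):
                for j in range(len(work)):
                    work[j] += allocation[i][j]   # reclaim P_i's resources
                finish[i] = True
                progress = True
    # step (d): any process still unfinished is deadlocked
    return [i for i in range(n) if not finish[i]]

# Example: P0 and P1 each hold one instance and request the other's
print(detect_deadlock([0], [[1], [1]], [[1], [1]]))   # [0, 1]
```

With one free instance instead (`available = [1]`), the loop can satisfy one process, reclaim its allocation, then satisfy the other, so no deadlock is reported.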
Deadlock recovery methods are used to clear deadlocks from a system so that it may operate free of
deadlocks, and so that the deadlocked processes may complete their execution and free their resources.
Recovery typically requires that one or more of the deadlocked processes be flushed from the system.
The suspend mechanism allows the system to put a temporary hold on a process and, when it is safe to
do so, resume the held process without loss of work. Checkpointing supports this by
limiting the loss of work to the time at which the last checkpoint was taken.
When a process in a system terminates, the system performs a rollback by undoing every operation
related to the terminated process that occurred since the last checkpoint.
To ensure that data in the database remains in a consistent state when deadlocked processes are
terminated, database systems typically perform resource allocations using transactions.
In personal computer systems and workstations, deadlock has generally been viewed as a limited
annoyance.
Some systems implement the basic deadlock prevention methods suggested by Havender, while others
ignore the problem; these approaches seem to be satisfactory.
While ignoring deadlocks may seem dangerous, this approach can actually be rather efficient.
If deadlock is rare, the processor time devoted to checking for deadlocks can significantly reduce
system performance for little benefit.
However, given current trends, deadlock will continue to be an important area of research as the
number of concurrent operations and number of resources becomes large, increasing the likelihood of
deadlock in multiprocessor and distributed systems.
Also, many real-time systems, which are becoming increasingly prevalent, require deadlock-free
resource allocation.
Segmentation: is a technique to break memory into logical pieces where each piece represents a
group of related information.
For example, data segments or code Segment for each process, data segment for operating system and
so on. Segmentation can be implemented with or without paging. Unlike paging, segments have
varying sizes, which eliminates internal fragmentation. External fragmentation still exists, but
to a lesser extent. The address generated by the CPU is divided into: Segment number (s) -- used
as an index into a segment table, which contains the base address of each segment in physical memory
and the limit of the segment. Segment offset (o) -- first checked against the limit and then
combined with the base address to compute the physical memory address.
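The (s, o) translation just described can be sketched in a few lines. The segment-table values below are made-up examples.

```python
# Segment-table lookup: check the offset against the limit, then add the base.

def translate(segment_table, s, offset):
    base, limit = segment_table[s]
    if offset >= limit:              # offset checked against the limit first
        raise MemoryError("segmentation fault: offset out of range")
    return base + offset             # then combined with the base address

table = {0: (1400, 1000), 1: (6300, 400)}   # segment -> (base, limit)
print(translate(table, 1, 53))   # 6353
```

An offset at or beyond the limit (e.g. `translate(table, 1, 400)`) raises an error, which is the hardware trap that protects one segment from another.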
About segmentation:
There is another way in which addressable memory can be subdivided, known as segmentation.
User view of logical memory:
◦ A linear array of bytes: this view is reflected by the paging memory scheme.
◦ A collection of variable-sized entities: the user thinks in terms of "subroutines", "stack",
"symbol table", "main program", which are located somewhere in memory.
Segmentation supports this user view. The logical address space is a collection of segments.
Although the user can refer to objects in the program by a two-dimensional address, the actual physical
address is still a one-dimensional sequence. Thus, we need to map the segment number to a location.
This mapping is effected by a segment table. To protect the memory space, each entry in the
segment table has a segment base and a segment limit.
Segments are variable-sized - Dynamic memory allocation required (first fit, best fit, worst fit).
External fragmentation - In the worst case the largest hole may not be large enough to fit in a new
segment. Note that paging has no external fragmentation problem.
Each process has its own segment table - like with paging where each process has its own page table.
The size of the segment table is determined by the number of segments, whereas the size of the page
table depends on the total amount of memory occupied.
Segment table located in main memory - as is the page table with paging.
Segment table base register (STBR) - points to the current segment table in memory.
Segment table length register (STLR) - indicates the number of segments.
Protection and Sharing in Segmentation:
Segmentation lends itself to the implementation of protection and sharing policies Each entry has a base
address and length so inadvertent memory access can be controlled Sharing can be achieved by
segments referencing multiple processes Two processes that need to share access to a single segment
would have the same segment name and address in their segment tables.
Advantages of Segmentation:
No internal fragmentation.
Segment tables consume less memory than page tables.
Because of the small segment table, memory reference is easy.
Lends itself to sharing data among processes.
Lends itself to protection.
Disadvantages of Segmentation:
External fragmentation.
Costly memory management algorithms.
Unequal segment sizes are not good in the case of swapping.
Memory-Management Unit: The run-time mapping from virtual to physical addresses is done by a
hardware device called the memory-management unit (MMU).
Memory Compaction: When swapping creates multiple holes in memory, it is possible to combine
them all into one big one by moving all the processes downward as far as possible.
Overlay: The idea of overlays is to keep in memory only those instructions and data that are needed at
any given time, enabling a process to be larger than the amount of memory allocated to it.
Thrashing: A process that is causing page faults every few instructions, spending more time paging than executing, is said to be thrashing.
Variable partition: A hole is a block of available memory; holes of various sizes are scattered throughout
memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it.
The operating system maintains information about: a) allocated partitions b) free partitions.
Paging is a technique in which memory is broken into fixed-size blocks: physical memory into frames
and logical memory into pages.
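Choosing which hole to allocate from uses one of the classic strategies named earlier (first fit, best fit, worst fit). A minimal sketch, with an illustrative hole list:

```python
# Each hole is (start_address, size). All three strategies scan the list.

def first_fit(holes, size):
    # first hole large enough
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    # smallest hole that is still large enough
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1]) if fits else None

def worst_fit(holes, size):
    # largest hole available
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1]) if fits else None

holes = [(0, 100), (300, 500), (900, 200)]
print(first_fit(holes, 150))   # (300, 500)
print(best_fit(holes, 150))    # (900, 200)
print(worst_fit(holes, 150))   # (300, 500)
```

Best fit minimizes the leftover fragment per allocation, while worst fit leaves the largest usable remainder; first fit is usually fastest because it stops at the first match.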
Virtual memory: Virtual memory is a technique that allows the execution of processes that may not be
completely in the memory. One major advantage of this scheme is that programs can be larger than
physical memory. Virtual memory also allows processes to easily share files and address spaces and it
provides an efficient mechanism for process creation.
Page fault: An interrupt that occurs when a program requests data that is not currently in real
memory. The interrupt triggers the operating system to fetch the data from backing store and load it
into RAM. An invalid page fault or page fault error occurs when the operating system cannot find the
data in virtual memory.
Dynamic loading: In dynamic loading, a routine of a program is not loaded until it is called by the
program. All routines are kept on disk in a relocatable load format. The main program is loaded into
memory and executed; other routines, methods, or modules are loaded on request. Dynamic loading
gives better memory-space utilization, and unused routines are never loaded.
Concept of demand paging? A demand paging system is quite similar to a paging system with swapping.
When we want to execute a process, we swap it into memory, but rather than swapping the entire
process in, only the needed pages are brought into memory.
Define swap space? The secondary memory holds the pages that are not present in main memory. The
secondary memory is usually a high-speed disk. It is known as the swap device, and the section of the
disk used for this purpose is known as the swap space.
What is a reference string? We evaluate an algorithm by running it on a particular string of memory
references and computing the number of page faults. The string of memory references is called the
reference string.
Define inverted page table? It has one entry for each real page (frame) of memory. Each entry consists
of the virtual address of the page stored in that real memory location, with information about the
process that owns the page. It decreases the memory needed to store the page tables, but increases
the time needed to search the table when a page reference occurs.
Define Hashed page table? A common approach for handling address spaces larger than 32 bits is to use
the hashed page table, with the hash value being the virtual page number. Each entry in the hash table
contains a linked list of elements that hash to the same location. Each element consists of three fields:
the virtual page number, the value of the mapped page frame, and a pointer to the next element in the
linked list.
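The chained hash table just described can be sketched briefly; the page numbers below are invented for illustration.

```python
# Hashed page table: each bucket is a chain of (virtual_page, frame) pairs.

class HashedPageTable:
    def __init__(self, buckets=8):
        self.table = [[] for _ in range(buckets)]

    def insert(self, vpn, frame):
        self.table[hash(vpn) % len(self.table)].append((vpn, frame))

    def lookup(self, vpn):
        # walk the chain, comparing virtual page numbers
        for v, frame in self.table[hash(vpn) % len(self.table)]:
            if v == vpn:
                return frame
        return None          # not mapped: this reference would page-fault

pt = HashedPageTable()
pt.insert(0x1234, 7)
print(pt.lookup(0x1234))   # 7
```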
Define frames and pages? Physical memory is broken into fixed sized blocks are called as frames. Logical
memory which is broken into blocks of the same size is called as pages.
What is a translation look-aside buffer? The TLB is associative, high-speed memory. Each entry in the
TLB consists of two parts: a key and a value. When the associative memory is presented with an item,
it is compared with all keys simultaneously. If the item is found, the corresponding value field is
returned.
Define hit ratio? The percentage of times that a particular page number is found in the TLB is called the
hit ratio. An 80 percent hit ratio means that we find the desired page number in the TLB 80 percent of
the time. If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory, then a
mapped memory access takes 120 nanoseconds when the page number is in the TLB.
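The arithmetic behind these figures, including the effective access time they imply (a TLB miss needs an extra memory access for the page table):

```python
# 20 ns TLB search, 100 ns memory access, 80% hit ratio.
tlb, mem, hit = 20, 100, 0.80
hit_time = tlb + mem            # 120 ns: TLB hit, then one memory access
miss_time = tlb + mem + mem     # 220 ns: a miss adds a page-table access
eat = hit * hit_time + (1 - hit) * miss_time
print(eat)   # 140.0 ns effective access time
```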
Explain optimal page replacement algorithm with an example? An optimal page-replacement
algorithm has the lowest page-fault rate of all algorithms.
An optimal page-replacement algorithm exists, and has been called OPT or MIN.
Replace the page that will not be used for the longest period of time. This requires knowing the time
at which each page will next be used.
Example: the optimal replacement algorithm processes a reference string with the fewest possible
page faults; for the classic three-frame textbook example it incurs only nine faults.
Unfortunately, the optimal page replacement algorithm is difficult to implement, because it requires
future knowledge of the reference string.
As a result, the optimal algorithm is used mainly for comparison studies.
For instance, it may be useful to know that, although a new algorithm is not optimal, it is within 12.3
percent of optimal at worst, and within 4.7 percent on average.
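Although OPT cannot be implemented online, it is easy to simulate offline for comparison studies. A minimal sketch; the reference string is the classic textbook example, an assumption not stated in this text:

```python
# OPT: on a fault with full frames, evict the page whose next use is
# farthest in the future (or that is never used again).

def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.append(p)              # free frame available
        else:
            def next_use(q):
                # index of q's next use, or infinity if never used again
                return refs.index(q, i + 1) if q in refs[i + 1:] else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(p)
    return faults

string = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(string, 3))   # 9
```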
How does the FIFO replacement algorithm work? The simplest page replacement algorithm is the FIFO
replacement algorithm.
A FIFO replacement algorithm associates with each page the time when that page was brought into
memory.
Oldest page in main memory is the one which will be selected for replacement.
Easy to implement, keep a list, replace pages from the tail and add new pages at the head.
The page replaced may be an initialization module that was used long ago and is no longer needed.
On the other hand, it could contain a heavily used variable that was initialized early and is in constant use.
After we page out an active page to bring in a new one, a fault occurs almost immediately to retrieve the
active page.
Some other page will need to be replaced to bring the active page back into memory.
Thus, a bad replacement choice increases the page-fault rate and slows process execution, but does not
cause incorrect execution.
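The queue-based description above can be sketched directly; the reference string is the one used in the FIFO example later in these notes.

```python
from collections import deque

# FIFO: the oldest resident page (front of the queue) is evicted;
# newly loaded pages join at the tail.

def fifo_faults(refs, nframes):
    queue, faults = deque(), 0
    for p in refs:
        if p not in queue:
            faults += 1
            if len(queue) == nframes:
                queue.popleft()        # evict the oldest page
            queue.append(p)            # new page joins at the tail
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))   # 6
</imports>```

Note that FIFO never asks how often a page is used, which is exactly why it can evict the heavily used page described above.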
Summarize the LRU approximation page replacement? Initially, all bits are cleared to 0 by the operating
systems. As a user process executes, the bit associated with each page referenced is set to 1 by the
hardware. After sometime, we can determine which pages have been used and which have not been
used by examining the reference bits. The reference bit for a page is set, by the hardware, whenever
that page is referenced.
By using this reference bit, we can know which pages have been used and which have not.
This partial ordering information leads to many page replacement algorithms that approximate LRU
replacement.
Exact LRU needs special hardware and is still slow.
Additional-reference-bits algorithm:
Keep a record of the reference bits at regular time intervals.
Associate an 8-bit byte with each page-table entry.
At regular intervals (e.g., every 100 milliseconds), the OS shifts the reference bit for each page into the
high-order bit of its 8-bit byte, shifting the other bits right by one.
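One interval of this shifting scheme can be sketched as follows; the page names and reference pattern are illustrative.

```python
# Additional-reference-bits (aging): each page keeps an 8-bit history byte.
# Every interval: shift right, then put this interval's reference bit on top.

def age(history, ref_bits):
    # history: page -> 8-bit shift register; ref_bits: page -> 0/1
    for page in history:
        history[page] = ((history[page] >> 1) |
                         (ref_bits.get(page, 0) << 7)) & 0xFF
    return history

h = {"A": 0, "B": 0}
h = age(h, {"A": 1})            # interval 1: only A referenced
h = age(h, {"A": 1, "B": 1})    # interval 2: both referenced
print(format(h["A"], "08b"))    # 11000000
print(format(h["B"], "08b"))    # 10000000: lower value, so B is the
                                # better LRU-approximation victim
```

Interpreting the bytes as unsigned numbers, the page with the smallest value is the one least recently used, which is the replacement candidate.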
Explain about the demand paging? A demand paging system is quite similar to a paging system with
swapping.
When we want to execute a process, we swap it into memory. Rather than swapping the entire process
into memory, however, we use a lazy swapper called pager.
When a process is to be swapped in, the pager guesses which pages will be used before the process is
swapped out again.
Instead of swapping in a whole process, the pager brings only the necessary pages into memory.
Thus, it avoids reading in pages that will not be used anyway, decreasing the swap time
and the amount of physical memory needed.
Hardware support is required to distinguish between those pages that are in memory and those pages
that are on the disk using the valid-invalid bit scheme.
Valid and invalid pages are distinguished by checking this bit.
Marking a page invalid will have no effect if the process never attempts to access the page.
While the process executes and accesses pages that are memory resident, execution proceeds normally.
Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's
failure to bring the desired page into memory.
Advantages - Large virtual memory; more efficient use of memory;
unconstrained multiprogramming (there is no limit on the degree of multiprogramming).
Disadvantages - The number of tables and the amount of processor overhead for handling page
interrupts are greater than in simple paged management techniques, due to the lack of explicit
constraints on a job's address-space size.
Explain page fault in detail?
Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's
failure to bring the desired page into memory.
But page fault can be handled as following:
Step 1 Check an internal table for this process to determine whether the reference was a valid or
an invalid memory access.
Step 2 If the reference was invalid, terminate the process. If it was valid but the page has not yet
been brought in, page it in.
Step 3 Find a free frame.
Step 4 Schedule a disk operation to read the desired page into the newly allocated frame.
Step 5 When the disk read is complete, modify the internal table kept with the process and the page
table to indicate that the page is now in memory.
Step 6 Restart the instruction that was interrupted by the illegal address trap. The process can now
access the page as though it had always been in memory.
Therefore, the operating system reads the desired page into memory and restarts the process as though
the page had always been in memory.
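The six steps can be sketched as a handler function. The helper names here (valid_pages, free_frames, disk) are hypothetical stand-ins for real kernel structures, not part of any actual OS API.

```python
# Page-fault handling following steps 1-6 above (simplified: a free
# frame is assumed to exist, so no replacement is needed).

class Proc:
    def __init__(self, valid_pages):
        self.valid_pages = valid_pages   # pages the process may legally touch

def handle_page_fault(proc, page, page_tbl, free_frames, disk):
    if page not in proc.valid_pages:     # steps 1-2: invalid reference
        raise RuntimeError("invalid reference: terminate the process")
    frame = free_frames.pop()            # step 3: find a free frame
    _data = disk[page]                   # step 4: read the page from disk
    page_tbl[page] = frame               # step 5: mark the page as in memory
    return frame                         # step 6: restart the instruction

page_tbl = {}
frame = handle_page_fault(Proc({0, 1}), 1, page_tbl, [5, 6], {1: b"page data"})
print(frame, page_tbl)   # 6 {1: 6}
```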
Explain about virtual memory management?
Virtual memory is a technique that allows the execution of processes which are not completely available
in memory. The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory is the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for programmers when only a
smaller physical memory is available. The following are situations in which the entire program is not
required to be fully loaded in main memory: user-written error-handling routines are used only when an
error occurs in the data or computation; certain options and features of a program may be used rarely;
many tables are assigned a fixed amount of address space even though only a small amount of the table
is actually used. The ability to execute a program that is only partially in memory confers many
benefits. Demand segmentation can also be used to provide virtual memory.
Explain in detail the directory structure. The file system of a computer can be extensive; some systems
store millions of files on terabytes of disk. Directory structures help organize files. Free space is
managed within a partition, and each file-system partition has a directory of its files.
A directory is a collection of nodes containing information about all files. Both the directory structure
and the files reside on disk; backups of these two structures are kept on tape.
A Typical File-system Organization
Operations Performed on a Directory:
a) Search for a file
b) Create a file
c) Delete a file
d) List a directory
e) Rename a file
f) Traverse the file system
Organize the directory (logically) to obtain:
Efficiency – locating a file quickly.
Naming – convenience to users: two users can have the same name for different files, and the same
file can have several different names.
Grouping – logical grouping of files by properties (e.g., all Java programs, all games, …)
Single-Level Directory:
A single directory for all users. All files referenced from one directory.
only a single directory for the disk partition.
Advantage - Simplest directory structure so it is easy to understand and use.
Disadvantages - A single directory for all users. Naming problem. Grouping problem.
Two-Level Directory: A two-level directory has two separate directory types: the user file directory
(UFD) and the master file directory (MFD).
User file directory (UFD) - a separate directory created for each user, holding that user's files.
Master file directory (MFD) - indexes the UFDs. Every file in the system has its own path name; to
locate a file uniquely, the user must know the path name of the file from the MFD through the UFD.
There is a separate directory for each user.
Advantages: Different users can have the same file name; searching for a file can be done efficiently.
Disadvantage: Users have to remember the path name when searching for a file.
Tree-Structured Directories: In tree structured directory, user can create subdirectory of their own.
UFD contains set of files or subdirectories.
Special system calls are used to create and delete a directory.
To locate a file, you need its path name.
There are two types of path name:
1. Absolute path - starts from the root and gives the exact location of a file.
2. Relative path - starts from the current working directory and leads to the location of a file.
Example: if the current directory is /mail, "mkdir count" creates a subdirectory there.
Deleting "mail" deletes the entire subtree rooted at "mail".
Advantages:
Efficient for file searching.
Capability of grouping files.
Acyclic-graph directories allow shared files; deleting a shared file raises the dangling-pointer
problem. Solutions:
Back pointers, so we can delete all pointers (variable-size records are a problem).
Back pointers using a daisy-chain organization.
Entry-hold-count solution.
A new directory entry type:
Link – another name (pointer) to an existing file.
Resolve the link – follow the pointer to locate the file.
Memory Management:
Memory Management is the process of controlling and coordinating computer memory, assigning
portions known as blocks to various running programs to optimize the overall performance of the
system.
It is an important function of an operating system: it manages primary memory, helps processes move
back and forth between main memory and disk during execution, and keeps track of every memory
location, whether it is allocated to some process or free.
Contiguous Memory Allocation in Operating System:
A contiguous memory allocation is a memory management technique where whenever there is a
request by the user process for the memory, a single section of the contiguous memory block is
given to that process according to its requirement.
Multitasking:
Multitasking is when multiple jobs are executed by the CPU simultaneously by switching between them.
Switches occur so frequently that the users may interact with each program while it is running. An OS
does the following activities related to multitasking −
▪ The user gives instructions to the operating system or to a program directly, and receives an
immediate response.
▪ The OS handles multitasking in the way that it can handle multiple operations/executes multiple
programs at a time.
▪ Multitasking Operating Systems are also known as Time-sharing systems.
▪ These Operating Systems were developed to provide interactive use of a computer system at a
reasonable cost.
▪ A time-shared operating system uses the concept of CPU scheduling and multiprogramming to provide
each user with a small portion of a time-shared CPU.
▪ Each user has at least one separate program in memory.
▪ A program that is loaded into memory and is executing is commonly referred to as a process.
▪ When a process executes, it typically executes for only a very short time before it either finishes or
needs to perform I/O.
▪ Since interactive I/O typically runs at slower speeds, it may take a long time to complete. During this
time, a CPU can be utilized by another process.
▪ The operating system allows the users to share the computer simultaneously. Since each action or
command in a time-shared system tends to be short, only a little CPU time is needed for each user.
▪ As the system switches CPU rapidly from one user/program to the next, each user is given the
impression that he/she has his/her own CPU, whereas actually one CPU is being shared among many
users.
Multiprogramming :
Sharing the processor when two or more programs reside in memory at the same time is referred to as
multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases
CPU utilization by organizing jobs so that the CPU always has one to execute.
The following figure shows the memory layout for a multiprogramming system.
Advantages:
▪ High and efficient CPU utilization.
▪ User feels that many programs are allotted CPU almost simultaneously.
Disadvantages:
▪ CPU scheduling is required.
▪ To accommodate many jobs in memory, memory management is required.
File Access Methods in OS : A file is a collection of bits/bytes or lines which is stored on secondary
storage devices like a hard drive. File access methods in OS are nothing but techniques to read data from
the system's memory. There are various ways in which we can access the files from the memory like:
1. Sequential Access
2. Direct/Relative Access, and
3. Indexed Sequential Access.
These methods by which the records in a file can be accessed are referred to as the file access
mechanism. Each file access mechanism has its own set of benefits and drawbacks, which are discussed
further in this article.
1. Sequential Access
The operating system reads the file word by word in the sequential access method. A
pointer is created that first points to the file's base address. If the user wishes to read the first word of
the file, the pointer provides it and then advances to the next word. This procedure continues until
the end of the file. It is the most basic way of accessing a file. The data in the file is processed in the
order in which it appears in the file, which is why it is easy and simple to access a file's data with the
sequential access mechanism. For example, editors and compilers frequently use this method to check
the validity of the code.
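The advancing-pointer behavior described above can be sketched with an in-memory file; io.BytesIO is a stand-in for a real file on disk.

```python
import io

# Sequential access: the read pointer starts at the file's base address
# and advances automatically with every read, in file order.

f = io.BytesIO(b"alpha beta gamma")
words, word = [], b""
while (ch := f.read(1)):        # read one byte; the pointer moves forward
    if ch == b" ":
        words.append(word)
        word = b""
    else:
        word += ch
words.append(word)              # the last word has no trailing space
print(words)   # [b'alpha', b'beta', b'gamma']
```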
Advantages of Sequential Access:
• The sequential access mechanism is very easy to implement.
• It uses lexicographic order to enable quick access to the next entry.
Disadvantages of Sequential Access:
• Sequential access will become slow if the next file record to be retrieved is not present next to the
currently pointed record.
• Adding a new record may require relocating a significant number of the file's records.
3. Indexed Sequential Access: An index is built for the file; to locate a record in the file, we first search
the indexes and then use the pointers there to navigate to the required record.
Primary index blocks contain links to the secondary inner blocks, which contain links to the data in
memory.
Advantages of Indexed Sequential Access:
• If the index table is appropriately arranged, it accesses the records very quickly.
• Records can be added at any position in the file quickly.
Disadvantages of Indexed Sequential Access:
• When compared to other file access methods, it is costly and less efficient.
• It needs additional storage space.
2. Linked Allocation (Non-contiguous allocation): Allocation is done on an individual block basis. Each
block contains a pointer to the next block in the chain. Again, the file table needs just a single entry for
each file, showing the starting block and the length of the file. Although pre-allocation is possible, it is
more common simply to allocate blocks as needed. Any free block can be added to the chain, and the
blocks need not be contiguous. Increasing the file size is always possible as long as a free disk block is
available. There is no external fragmentation because only one block is needed at a time; internal
fragmentation can occur, but only in the last disk block of the file.
Disadvantages:
▪ Internal fragmentation exists in the last disk block of the file.
▪ There is an overhead of maintaining the pointer in every disk block.
▪ If the pointer of any disk block is lost, the file is truncated.
▪ It supports only sequential access to files.
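A minimal sketch of chained allocation (the block numbers and file name are made up for illustration): each block stores its data plus the number of the next block, and the file table records only the starting block.

```python
# Sketch of linked (chained) allocation. Each disk block holds
# (data, next_block); -1 marks the end of the chain.
disk = {7: ("part1", 2), 2: ("part2", 9), 9: ("part3", -1)}
file_table = {"notes.txt": 7}   # one entry per file: its starting block

def read_file(name):
    """Follow the chain of pointers block by block."""
    data, block = [], file_table[name]
    while block != -1:
        chunk, block = disk[block]
        data.append(chunk)
    return data
```

Notice that reaching the third block requires visiting the first two, which is why this scheme supports only sequential access, and that losing any one pointer cuts off everything after it.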
3. Indexed Allocation: This addresses many of the problems of contiguous and chained allocation. In this
case, the file allocation table contains a separate one-level index for each file; the index has one entry
for each block allocated to the file. Allocation may be on the basis of fixed-size blocks or variable-size
blocks. Allocation by fixed-size blocks eliminates external fragmentation, whereas allocation by
variable-size blocks improves locality. This technique supports both sequential and direct access to the
file and is therefore the most popular form of file allocation.
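The per-file index can be sketched as a simple table of block numbers (the values below are invented for illustration); because the i-th entry gives the i-th logical block, both direct and sequential access fall out naturally:

```python
# Sketch of indexed allocation: the file's index block lists every data
# block, so logical block i is found in one table lookup.
disk = {4: "blockA", 11: "blockB", 6: "blockC"}
index_block = [4, 11, 6]   # i-th entry = disk location of logical block i

def read_block(n):
    """Direct access: jump straight to the n-th logical block."""
    return disk[index_block[n]]

def read_all():
    """Sequential access: walk the index in order."""
    return [read_block(i) for i in range(len(index_block))]
```

Unlike linked allocation, reading block 2 does not require touching blocks 0 and 1 first; the cost is one extra index block per file.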
1. First In First Out (FIFO): This is the simplest page replacement algorithm. The operating system keeps
track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be
replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of
page faults. Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults. When 3 comes, it is already in memory, so —> 0 Page Faults. Then 5 comes; it is not in
memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it is also not in memory, so it
replaces the oldest page, i.e. 3 —> 1 Page Fault. Finally, when 3 comes it is not in memory, so it replaces
0 —> 1 Page Fault, giving 6 page faults in total. Belady's anomaly shows that it is possible to have more
page faults when increasing the number of page frames while using the First In First Out (FIFO) page
replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 slots
we get 9 total page faults, but if we increase the number of slots to 4, we get 10 page faults.
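The FIFO policy above can be sketched with a queue (an illustrative sketch, not any real kernel's code); running it on the two reference strings reproduces the fault counts, including Belady's anomaly:

```python
from collections import deque

def fifo_page_faults(references, frame_count):
    """Count page faults under FIFO replacement."""
    frames = deque()          # oldest page sits at the left end
    faults = 0
    for page in references:
        if page in frames:
            continue          # hit: FIFO order is NOT updated on a hit
        faults += 1
        if len(frames) == frame_count:
            frames.popleft()  # evict the oldest page
        frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))                    # 6, as in Example 1
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_page_faults(belady, 3), fifo_page_faults(belady, 4))      # 9 10
```

The key design point is that a hit does not touch the queue: age is measured from when a page was loaded, not when it was last used, which is what makes the anomaly possible.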
2. Optimal Page Replacement: In this algorithm, the page that will not be used for the longest duration
of time in the future is replaced.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.
• Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
• 0 is already there, so —> 0 Page Faults.
• When 3 comes, it takes the place of 7 because 7 will not be used for the longest duration of time in
the future —> 1 Page Fault.
• 0 is already there, so —> 0 Page Faults.
• 4 takes the place of 1 —> 1 Page Fault.
• For the rest of the reference string —> 0 Page Faults, because those pages are already in memory,
giving 6 page faults in total. Optimal page replacement is perfect, but not possible in practice, as the
operating system cannot know future requests. It is instead used as a benchmark against which other
replacement algorithms can be analysed.
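Since the reference string is fully known in an offline benchmark, the Optimal policy can be sketched by looking ahead for each resident page's next use (a sketch for analysis only; no real OS can run this online):

```python
def optimal_page_faults(references, frame_count):
    """Count page faults under Optimal (farthest-future-use) replacement."""
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue                      # hit
        faults += 1
        if len(frames) < frame_count:
            frames.append(page)           # free slot available
            continue
        future = references[i + 1:]
        # Evict the resident page whose next use is farthest away
        # (a page never used again counts as infinitely far).
        def next_use(p):
            return future.index(p) if p in future else float("inf")
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

Running it on Example 2's string with 4 frames reproduces the 6 page faults derived above.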
3. Least Recently Used (LRU): In this algorithm, the page that has been used least recently is replaced.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.
• Initially, all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
• 0 is already there, so —> 0 Page Faults.
• When 3 comes, it takes the place of 7 because 7 is the least recently used page —> 1 Page Fault.
• 0 is already in memory, so —> 0 Page Faults.
• 4 takes the place of 1 —> 1 Page Fault.
• For the rest of the reference string —> 0 Page Faults, because those pages are already in memory,
giving 6 page faults in total.
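LRU can be sketched by recording each page's last-use time and evicting the resident page with the oldest timestamp (an illustrative sketch; real implementations approximate this with hardware reference bits or linked lists):

```python
def lru_page_faults(references, frame_count):
    """Count page faults under Least Recently Used replacement."""
    frames = []
    last_used = {}        # page -> index of its most recent reference
    faults = 0
    for i, page in enumerate(references):
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:
                # Evict the page with the oldest last-use time
                victim = min(frames, key=lambda p: last_used[p])
                frames.remove(victim)
            frames.append(page)
        last_used[page] = i   # a hit also refreshes recency
    return faults

print(lru_page_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3], 4))  # 6
```

Note the contrast with FIFO: here a hit updates the page's recency, so pages in active use are protected from eviction. On Example 3's string this gives the same 6 faults as Optimal, though the two policies do not coincide in general.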