
OPERATING SYSTEMS UNIT III II BCA

The operating system is mainly used to control the hardware and coordinate its use among the various application programs for the different users.
An OS is designed to serve two basic purposes:
1. It controls the allocation and use of the computing system's resources among the various users and tasks.
2. It provides an interface between the computer hardware and the programmer that simplifies the coding, creation, and debugging of application programs.
Two Views of Operating System
1. User's View
2. System View
Operating System: User View
The user view of the computer refers to the interface being used. Such systems are designed for one
user to monopolize its resources, to maximize the work that the user is performing. In these cases, the
operating system is designed mostly for ease of use, with some attention paid to performance, and none
paid to resource utilization.
Operating System: System View
The operating system can also be viewed as a resource allocator. A computer system consists of many resources - hardware and software - that must be managed efficiently. The operating system acts as the manager of these resources, decides between conflicting requests, controls the execution of programs, etc.
Operating System Management Tasks
1. Process management, which involves putting tasks into order and paring them into a manageable size before they go to the CPU.
2. Memory management, which coordinates data to and from RAM (random-access memory) and determines the necessity for virtual memory.
3. Device management, which provides an interface between connected devices.
4. Storage management, which directs permanent data storage.
5. An application programming interface (API) that allows standard communication between software and the computer.
6. The user interface, which allows you to communicate with the computer.
Types of Operating System
1. Simple Batch System
2. Multiprogramming Batch System
3. Multiprocessor System
4. Desktop System
5. Distributed Operating System
6. Clustered System
7. Real-time Operating System
8. Handheld System
Functions of Operating System
1. It boots the computer.
2. It performs basic computer tasks, e.g. managing peripheral devices such as the mouse and keyboard.
3. It provides a user interface, e.g. a command line or a graphical user interface (GUI).
Advantages of Operating System
• The operating system helps to improve the efficiency of work and saves a lot of time by reducing complexity.
• The different components of a system are independent of each other; thus, failure of one component does not affect the functioning of another.
• The operating system mainly acts as an interface between the hardware and the software.

• Users can easily access the hardware without writing large programs.
• With the help of an operating system, sharing data becomes easier with a large number of users.
• We can easily install any game or application on the operating system and run it.
• An operating system can be updated easily from time to time without any problems.
Disadvantages of an Operating system
• Expensive: There are open-source platforms like Linux, but many operating systems are expensive. Users can run free operating systems, but these are generally a bit more difficult to use than others. Operating systems like Microsoft Windows, with GUI functionality and other built-in features, are very expensive.
• Virus threat: Operating systems are open to virus attacks, and users sometimes download malicious software packages that halt the functioning of the operating system or slow it down.
• Complexity: Some operating systems are complex because the language used to build them is not clear and well defined. If an issue occurs in the operating system, the user may be unable to resolve it.
• System failure: The operating system is the heart of the computer system; if it stops functioning for any reason, the whole system crashes.
Examples of Operating System
• Windows
• Android
• iOS
• Mac OS
• Linux
• Windows Phone OS
• Chrome OS
Operating System as Extended Machine
• At the machine level, the structure of a computer system is complicated to program, mainly for input or output. Programmers do not want to deal with the hardware; they mainly focus on implementing software. Therefore, a level of abstraction has to be maintained.
• Operating systems provide a layer of abstraction for using the disk, such as files.
• This level of abstraction allows a program to create, write, and read files without dealing with the details of how the hardware actually works.
• The level of abstraction is the key to managing complexity.
• Good abstractions turn an impossible task into two manageable tasks.
• The first is to define and implement the abstractions.
• The second is to solve the problem at hand.
• The operating system provides abstractions to application programs in a top-down view.
For example − It is easier to deal with photos, emails, songs, and Web pages than with the details of
these files on disks.

What is swapping?
Swapping is a simple memory management technique used by the operating system to increase the utilization of the processor by moving some blocked processes from main memory to secondary memory (hard disk), thus forming a queue of temporarily suspended processes; execution continues with the newly arrived process.
Necessary conditions for deadlock (targeted by prevention):
1. Mutual Exclusion Condition
2. Hold and Wait Condition
3. No Pre-emption condition
4. Circular Wait Condition.
Options for breaking a Deadlock:
Simply abort one or more processes to break the circular wait.
Preempt some resources from one or more of the deadlocked processes.
Algorithms for Deadlock avoidance:
1) Resource-Allocation Graph Algorithm
2) Banker's Algorithm
3) Safety Algorithm
4) Resource-Request Algorithm
A solution to the Critical-Section Problem must satisfy:
1. Mutual Exclusion.
2. Progress
3. Bounded Waiting
Characteristics of Deadlock:
a) A resource may be acquired exclusively by only one process at a time (Mutual Exclusion Condition).
b) A process that has acquired an exclusive resource may hold it while waiting to obtain other resources
(Hold and Wait Condition).
c) Once a process has obtained a resource, the system cannot remove the resource from the process's
control until the process has finished using the resource (No pre-emption condition).
d) And two or more processes are locked in a "circular chain" in which each process in the chain is
waiting for one or more resources that the next process in the chain is holding (circular-wait condition).
Preventing a deadlock:
In deadlock prevention our concern is to condition a system to remove any possibility of deadlocks
occurring. Havender observed that a deadlock cannot occur if a system denies any of the four necessary
conditions. The first necessary condition, namely that processes claim exclusive use of the resources
they require, is not one that we want to break, because we specifically want to allow dedicated (i.e.,
serially reusable) resources. Denying the "wait-for" condition requires that all of the resources a process
needs to complete its task be requested at once, which can result in substantial resource
underutilization and raises concerns over how to charge for resources. Denying the "no-preemption"
condition can be costly, because processes lose work when their resources are preempted. Denying the
"circular-wait" condition uses a linear ordering of resources to prevent deadlock.
This strategy can increase efficiency over the other strategies, but not without difficulties.

Note on deadlock avoidance:


In deadlock avoidance the goal is to impose less stringent conditions than in deadlock prevention in an
attempt to get better resource utilization.
Avoidance methods allow the possibility of deadlock to loom, but whenever a deadlock is approached, it
is carefully sidestepped.
Dijkstra's Banker's Algorithm is an example of a deadlock avoidance algorithm.
In the Banker's Algorithm, the system ensures that a process's maximum resource need does not exceed
the number of available resources.
The system is said to be in a safe state if the operating system can guarantee that all current processes
can complete their work within a finite time.
If not, then the system is said to be in an unsafe state.
Dijkstra's Banker's Algorithm requires that resources be allocated to processes only when the allocations
result in safe states.
It has a number of weaknesses (such as requiring a fixed number of processes and resources) that
prevent it from being implemented in real systems.
A deadlock avoidance algorithm dynamically examines the resource allocation state to ensure that there
can never be a circular wait condition.
The resource allocation state is defined by the number of available and allocated resources, and the
maximum demands of the processes.
A state is safe if the system can allocate resources to each process (up to its maximum) in some order
and still avoid a deadlock.

Banker’s algorithm to avoid the deadlock condition:


Banker's algorithm
Let 'Available' be a vector of length m indicating the number of available resources of each type, 'Max' be an 'n x m' matrix defining the maximum demand of each process, 'Allocation' be an 'n x m' matrix defining the number of resources of each type currently allocated to each process, and let 'Need' be an 'n x m' matrix indicating the remaining resource need of each process.
Let Request_i be the request vector for process p_i.
If Request_i[j] = k, then process p_i wants k instances of resource type r_j. When a request for resources is made by process p_i, the following actions are taken:
a) If Request_i <= Need_i, then proceed to step b. Else the process has exceeded its maximum claim.
b) If Request_i <= Available, then proceed to step c. Else the resources are not available and p_i must wait.
c) The system pretends to have allocated the requested resources to process p_i by modifying the state as follows:
Available = Available - Request_i
Allocation_i = Allocation_i + Request_i
Need_i = Need_i - Request_i
If the resulting resource allocation state is safe, the transaction is completed and process p_i is allocated its resources.
If the new state is unsafe, then p_i must wait for Request_i and the old resource allocation state is restored.
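The request and safety checks above can be sketched in Python. This is a minimal illustration rather than a full implementation: the process count, resource count, and the sample state at the end are assumed values for demonstration only.

def is_safe(available, allocation, need):
    # Safety algorithm: look for an order in which every process can finish.
    m, n = len(available), len(allocation)
    work, finish = available[:], [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Process i can run to completion and release its resources.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    m = len(available)
    # Step a: the request must not exceed the process's remaining need.
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # Step b: the resources must currently be available.
    if any(request[j] > available[j] for j in range(m)):
        return False                      # p_i must wait
    # Step c: pretend to allocate, then test the resulting state.
    for j in range(m):
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, allocation, need):
        return True                       # safe: the request is granted
    for j in range(m):                    # unsafe: restore the old state
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False                          # p_i must wait

# Illustrative state: 3 processes, 2 resource types (assumed values).
available = [3, 2]
allocation = [[1, 0], [2, 1], [0, 1]]
need = [[2, 2], [1, 1], [3, 1]]
print(request_resources(0, [1, 0], available, allocation, need))  # True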

Detail about the Deadlock recovery:


Deadlock detection and recovery: deadlock detection methods are used in systems in which deadlocks can occur.
The goal is to determine if a deadlock has occurred, and to identify those processes and resources
involved in the deadlock.
Deadlock detection algorithms can incur significant runtime overhead.
To facilitate the detection of deadlocks, a directed graph indicates resource allocations and requests.
Deadlock can be detected using graph reductions.
If a process's resource requests may be granted, then we say that a graph may be reduced by that
process.
If a graph can be reduced by all its processes, then there is no deadlock. If a graph cannot be reduced by
all its processes, then the irreducible processes constitute the set of deadlocked processes in the graph.

a) Let Work and Finish be vectors of length m and n, respectively. Initialize Work = Available. For i = 1, ..., n, if Allocation_i ≠ 0 then Finish[i] = false, else Finish[i] = true.
b) Find an i such that: a. Finish[i] = false, and b. Request_i <= Work.
If no such i exists, go to step d.
c) Work = Work + Allocation_i; Finish[i] = true.
Go to step b.
d) If Finish[i] = false for some i, then the system is in a deadlock state.
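A minimal Python sketch of the detection steps above; the Allocation and Request matrices are assumed inputs, and the function returns the list of deadlocked processes (empty if the graph is fully reducible).

def detect_deadlock(available, allocation, request):
    m, n = len(available), len(allocation)
    work = available[:]
    # Step a: a process holding nothing cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                     # steps b and c: reduce the graph
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                progressed = True
    # Step d: any process still unfinished is deadlocked.
    return [i for i in range(n) if not finish[i]]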

Deadlock recovery methods are used to clear deadlocks from a system so that it may operate free of the deadlocks, and so that the deadlocked processes may complete their execution and free their resources.
Recovery typically requires that one or more of the deadlocked processes be flushed from the system.
The suspend mechanism allows the system to put a temporary hold on a process, and, when it is safe to do so, resume the held process without loss of work. Checkpointing facilitates suspend capabilities by limiting the loss of work to the time at which the last checkpoint was taken.
When a process in a system terminates, the system performs a rollback by undoing every operation related to the terminated process that occurred since the last checkpoint.
To ensure that data in the database remains in a consistent state when deadlocked processes are terminated, database systems typically perform resource allocations using transactions.
In personal computer systems and workstations, deadlock has generally been viewed as a limited annoyance.
Some systems implement the basic deadlock prevention methods suggested by Havender, while others ignore the problem - these methods seem to be satisfactory.
While ignoring deadlocks may seem dangerous, this approach can actually be rather efficient.
If deadlock is rare, then the processor time devoted to checking for deadlocks significantly reduces
system performance.
However, given current trends, deadlock will continue to be an important area of research as the
number of concurrent operations and number of resources becomes large, increasing the likelihood of
deadlock in multiprocessor and distributed systems.
Also, many real-time systems, which are becoming increasingly prevalent, require deadlock-free
resource allocation.

Segmentation: a technique to break memory into logical pieces where each piece represents a group of related information.
For example, there may be a code segment and data segment for each process, a data segment for the operating system, and so on. Segmentation can be implemented with or without paging. Unlike paging, segments have varying sizes, which eliminates internal fragmentation. External fragmentation still exists, but to a lesser extent. The address generated by the CPU is divided into:
Segment number (s) - the segment number is used as an index into a segment table, which contains the base address of each segment in physical memory and the limit of the segment.
Segment offset (o) - the segment offset is first checked against the limit and then combined with the base address to define the physical memory address.
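As a sketch of this translation, the segment table can be modeled in Python as a list of (base, limit) pairs; the table contents and the translated address below are illustrative values, not from the text.

def translate(segment_table, s, offset):
    base, limit = segment_table[s]
    # The offset is first checked against the limit...
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond segment limit")
    # ...and then combined with the base to form the physical address.
    return base + offset

table = [(1400, 1000), (4300, 400)]   # assumed segment table
print(translate(table, 1, 53))        # logical (1, 53) -> physical 4353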
More about segmentation:
There is another way in which addressable memory can be subdivided, known as segmentation.
User view of logical memory:
◦ A linear array of bytes, reflected by the paging memory scheme.
◦ A collection of variable-sized entities: the user thinks in terms of "subroutines", "stack", "symbol table", "main program", which are somehow located somewhere in memory.
Segmentation supports this user view. The logical address space is a collection of segments.
Although the user can refer to objects in the program by a two-dimensional address, the actual physical address is still a one-dimensional sequence. Thus, we need to map the segment number.
This mapping is effected by a segment table. In order to protect the memory space, each entry in the segment table has a segment base and a segment limit.
Segments are variable-sized, so dynamic memory allocation is required (first fit, best fit, worst fit).
External fragmentation: in the worst case, the largest hole may not be large enough to fit a new segment. Note that paging has no external fragmentation problem.
Each process has its own segment table, just as each process has its own page table with paging.
The size of the segment table is determined by the number of segments, whereas the size of the page table depends on the total amount of memory occupied.
The segment table is located in main memory, as is the page table with paging. The segment table base register (STBR) points to the current segment table in memory, and the segment table length register (STLR) indicates the number of segments.
Protection and Sharing in Segmentation:
Segmentation lends itself to the implementation of protection and sharing policies. Each entry has a base address and length, so inadvertent memory access can be controlled. Sharing can be achieved when segments are referenced by multiple processes: two processes that need to share access to a single segment would have the same segment name and address in their segment tables.
Advantages of Segmentation:
No internal fragmentation.
Segment tables consume less memory than page tables.
Because the segment table is small, memory reference is easy.
Lends itself to sharing data among processes.
Lends itself to protection.
Disadvantages of Segmentation:
External fragmentation.
Costly memory management algorithms.
Unequal sizes of segments are not good in the case of swapping.

Memory-Management Unit: The run-time mapping from virtual to physical addresses is done by a hardware device called the Memory-Management Unit (MMU).

Memory Compaction: When swapping creates multiple holes in memory, it is possible to combine them all into one big hole by moving all the processes downward as far as possible.
Overlay: The idea of overlays is to keep in memory only those instructions and data that are needed at any given time, thus enabling a process to be larger than the amount of memory allocated to it.
Thrashing: A program that causes page faults every few instructions is said to be thrashing.
Variable partition: A hole is a block of available memory; holes of various sizes are scattered throughout memory. When a process arrives, it is allocated memory from a hole large enough to accommodate it. The operating system maintains information about: a) allocated partitions, and b) free partitions.
Paging is a technique in which physical memory is broken into blocks of the same size called pages.
Virtual memory: Virtual memory is a technique that allows the execution of processes that may not be
completely in the memory. One major advantage of this scheme is that programs can be larger than
physical memory. Virtual memory also allows processes to easily share files and address spaces and it
provides an efficient mechanism for process creation.
Page fault: An interrupt that occurs when a program requests data that is not currently in real memory. The interrupt triggers the operating system to fetch the data from secondary storage and load it into RAM. An invalid page fault, or page fault error, occurs when the operating system cannot find the data in virtual memory.
Dynamic loading: In dynamic loading, a routine of a program is not loaded until it is called by the program. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and executed; other routines, methods, or modules are loaded on request. Dynamic loading makes better use of memory space, and unused routines are never loaded.
Concept of Demand Paging? A demand paging system is quite similar to a paging system with swapping. When we want to execute a process, we swap it into memory, but rather than swapping the entire process into memory, only the needed pages are brought in.
Define swap space? The secondary memory holds the pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of the disk used for this purpose is known as the swap space.
What is a reference string? We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults. The string of memory references is called the reference string.
Define inverted page table? It has one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, with information about the process that owns that page. It decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs.
Define hashed page table? A common approach for handling address spaces larger than 32 bits is to use a hashed page table, with the hash value being the virtual page number. Each entry in the hash table contains a linked list of elements that hash to the same location. Each element consists of three fields: the virtual page number, the value of the mapped page frame, and a pointer to the next element in the linked list.
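A minimal sketch of a hashed page-table lookup in Python, with each bucket's linked list modeled as an ordinary list of (virtual page number, frame) pairs; the table size and the mapping below are assumed values for illustration.

TABLE_SIZE = 8
hash_table = [[] for _ in range(TABLE_SIZE)]
hash_table[42 % TABLE_SIZE].append((42, 7))   # map virtual page 42 to frame 7

def lookup(vpn):
    # Walk the chain of entries that hash to the same bucket.
    for page, frame in hash_table[vpn % TABLE_SIZE]:
        if page == vpn:
            return frame
    return None    # not mapped: this would raise a page fault

print(lookup(42))   # 7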
Define frames and pages? Physical memory is broken into fixed-sized blocks called frames. Logical memory is broken into blocks of the same size, called pages.
What is a translation look-aside buffer? The TLB is associative, high-speed memory. Each entry in the TLB consists of two parts: a key and a value. When the associative memory is presented with an item, it is compared with all keys simultaneously. If the item is found, the corresponding value field is returned.
Define hit rate? The percentage of times that a particular page number is found in the TLB is called the hit ratio. An 80 percent hit ratio means that we find the desired page number in the TLB 80 percent of the time. If it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory, then a mapped memory access takes 120 nanoseconds when the page number is in the TLB.
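The effective access time implied by these figures can be computed directly. The 20 ns TLB search and 100 ns memory access come from the text above; a TLB miss is assumed here to cost one extra memory access for the page-table lookup.

tlb_ns, mem_ns, hit = 20, 100, 0.80
# Hit: TLB search + memory access. Miss: TLB search + page-table access
# in memory + the memory access itself.
eat = hit * (tlb_ns + mem_ns) + (1 - hit) * (tlb_ns + 2 * mem_ns)
print(eat)   # 0.8 * 120 + 0.2 * 220 = 140.0 nanoseconds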
Explain the optimal page replacement algorithm with an example? An optimal page-replacement algorithm has the lowest page-fault rate of all algorithms.
An optimal page-replacement algorithm exists and has been called OPT or MIN.
Replace the page that will not be used for the longest period of time; that is, use the time when a page is next to be used.
Example: on the standard reference string, the optimal replacement algorithm can process the references in three frames with only nine faults.
Unfortunately, the optimal page replacement algorithm is difficult to implement, because it requires future knowledge of the reference string.
As a result, the optimal algorithm is used mainly for comparison studies.
For instance, it may be useful to know that, although a new algorithm is not optimal, it is within 12.3 percent of optimal at worst, and within 4.7 percent on average.
How does the FIFO replacement algorithm work? The simplest page replacement algorithm is the FIFO replacement algorithm.
A FIFO replacement algorithm associates with each page the time when that page was brought into memory.
The oldest page in main memory is the one selected for replacement.
It is easy to implement: keep a list, replace pages from the tail, and add new pages at the head.
The page replaced may be an initialization module that was used a long time ago and is no longer needed. On the other hand, it could contain a heavily used variable that was initialized early and is in constant use.
After we page out an active page to bring in a new one, a fault occurs almost immediately to retrieve the active page, and some other page must then be replaced to bring the active page back into memory.
Thus, a bad replacement choice increases the page-fault rate and slows process execution, but does not cause incorrect execution.
Summarize LRU approximation page replacement? Initially, all reference bits are cleared to 0 by the operating system. As a user process executes, the bit associated with each page referenced is set to 1 by the hardware. After some time, we can determine which pages have been used and which have not by examining the reference bits. The reference bit for a page is set, by the hardware, whenever that page is referenced.
By using this reference bit we can know which pages were used and which were not.
This partial ordering information leads to many page replacement algorithms that approximate LRU replacement.
True LRU needs special hardware and is still slow.
Additional-reference-bits algorithm:
Keep a record of reference bits at regular time intervals.
Associate an 8-bit byte with each page table entry.
At regular intervals (e.g., every 100 milliseconds) the OS shifts the reference bit for each page into the high-order bit of the 8-bit byte, shifting the other bits right by one.
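A sketch of one aging step of this algorithm: the hardware reference bit of each page is shifted into the high-order bit of its 8-bit history byte, and the page with the smallest history value becomes the best LRU-approximation victim. The data layout (two parallel lists) is an assumption for illustration.

def age_pages(history, ref_bits):
    # history[p]: 8-bit byte for page p; ref_bits[p]: hardware bit (0 or 1).
    for p in range(len(history)):
        history[p] = (ref_bits[p] << 7) | (history[p] >> 1)
        ref_bits[p] = 0          # the hardware bit is cleared each interval
    # The page with the smallest history value was used least recently.
    return min(range(len(history)), key=lambda p: history[p])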

Explain demand paging? A demand paging system is quite similar to a paging system with swapping.
When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, we use a lazy swapper called a pager.
When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again.
Instead of swapping in a whole process, the pager brings only the necessary pages into memory.
Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
Hardware support is required to distinguish between pages that are in memory and pages that are on disk, using the valid-invalid bit scheme: valid and invalid pages can be identified by checking the bit.
Marking a page invalid will have no effect if the process never attempts to access that page.
While the process executes and accesses pages that are memory resident, execution proceeds normally.
Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory.
Advantages: large virtual memory; more efficient use of memory; unconstrained multiprogramming - there is no limit on the degree of multiprogramming.
Disadvantages: the number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of simple paged management techniques, due to the lack of explicit constraints on a job's address space size.
Explain page fault in detail?
Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's
failure to bring the desired page into memory.
But a page fault can be handled as follows:
Step 1: Check an internal table for this process to determine whether the reference was a valid or an invalid memory access.
Step 2: If the reference was invalid, terminate the process. If it was valid, but the page has not yet been brought in, page it in.
Step 3: Find a free frame.
Step 4: Schedule a disk operation to read the desired page into the newly allocated frame.
Step 5: When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
Step 6: Restart the instruction that was interrupted by the illegal address trap. The process can now access the page as though it had always been in memory.
Therefore, the operating system reads the desired page into memory and restarts the process as though
the page had always been in memory.
Explain virtual memory management?
Virtual memory is a technique that allows the execution of processes which are not completely available in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory.
This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. The following are situations when the entire program is not required to be loaded fully in main memory: user-written error handling routines are used only when an error occurs in the data or computation; certain options and features of a program may be used rarely; many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used. The ability to execute a program that is only partially in memory confers many benefits. Demand segmentation can also be used to provide virtual memory.

Write the functions of memory management and explain?


Memory management is the functionality of an operating system which handles or manages primary memory.
Memory management keeps track of each and every memory location, whether it is allocated to some process or free.
It checks how much memory is to be allocated to processes and decides which process will get memory at what time.
It tracks whenever some memory gets freed or unallocated and updates the status correspondingly.
Memory management provides protection by using two registers, a base register and a limit register.
The base register holds the smallest legal physical memory address and the limit register specifies the size of the range.
For example, if the base register holds 300000 and the limit register holds 120000, then the program can legally access all addresses from 300000 through 419999.
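A one-line sketch of the base/limit check, using the figures from the example above:

def is_legal(address, base=300000, limit=120000):
    # A generated address is legal only if base <= address < base + limit.
    return base <= address < base + limit

print(is_legal(419999))   # True  (the last legal address)
print(is_legal(420000))   # False (outside the range -> trap to the OS)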
Binding of instructions and data to memory addresses can be done in the following ways:
Compile time - when it is known at compile time where the process will reside, compile-time binding is used to generate absolute code.
Load time - when it is not known at compile time where the process will reside in memory, the compiler generates relocatable code.
Execution time - if the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
What is virtual memory? Explain.
Virtual memory is an abstraction layer.
It allows processes to allocate more memory than there is RAM.
When a process references memory not already in RAM, it is "paged in" on demand.
This is called a hard page fault.
The virtual memory manager of the operating system is responsible for making optimal use of your RAM.
Virtual memory is the conceptual separation of user logical memory from physical memory; thus we can have a large virtual memory on a small physical memory.
When a page is referenced, either as code execution or data access, and that page isn't in memory, the page is fetched from disk and the statement is re-executed.
There is migration between memory and disk.
One instruction may require several pages - for example, a block move of data.
The time to service page faults demands that they happen only infrequently.
It makes more sense to have two bits: one indicating that the page is legal (valid), and a second to show that the page is in memory.
STEPS FOR HANDLING A PAGE FAULT:
1. The process has touched a page not currently in memory.
2. Check an internal table for the target process to determine if the reference was valid.
3. If the page is valid but not resident, try to get it from secondary storage.
4. Find a free frame: a page of physical memory not currently in use. (May need to free up a page.)
5. Schedule a disk operation to read the desired page into the newly allocated frame.
6. When the read is complete, modify the page table to show the page is now resident.
7. Restart the instruction that failed.
REQUIREMENTS FOR DEMAND PAGING:
Page table mechanism. Secondary storage. Software support for fault handlers and page tables.
Architectural rules concerning restarting of instructions.

Explain the page replacement algorithms in detail?


Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.
Use a modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk.
Page replacement completes the separation between logical memory and physical memory - a large virtual memory can be provided on a smaller physical memory.
Basic page replacement
1. Find the location of the desired page on disk
2. Find a free frame: - If there is a free frame, use it - If there is no free frame, use a page replacement
algorithm to select a victim frame - Write victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Continue the process by restarting the instruction that caused the trap
Replacement Algorithms:
1. FIFO page replacement
2. Optimal page replacement
3. LRU page replacement
4. LRU-Approximation page replacement
5. Counting-Based page replacement
1. FIFO page replacement algorithm:
Easy to understand and program.
Performance is not always good.
Drawbacks of FIFO:
A page which is accessed quite often may get replaced simply because it arrived earlier than those present; FIFO ignores locality of reference. A page which was referenced last may also get replaced, although there is a high probability that the same page will be needed again.
FIFO example:
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
With 3 frames, FIFO replacement exhibits Belady's anomaly: more frames can produce more page faults.
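A small FIFO simulation over this reference string makes the anomaly concrete: the fault count rises from 9 with 3 frames to 10 with 4 frames.

from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- Belady's anomaly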
2. Optimal page replacement algorithm:
Lowest page-fault rate of all algorithms.
Free from Belady's anomaly.
Requires future knowledge of the reference string.
Used for comparison studies.
3. LRU page replacement algorithm:
Free from Belady's anomaly.
Problem: how to order the frames by time of last use.
Solutions: counters, or a stack (queue).
Counter implementation: every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
When a page needs to be replaced, look at the counters to determine which page to replace.
Stack implementation: keep a stack of page numbers in a doubly linked form. When a page is referenced, move it to the top; this requires up to 6 pointers to be changed, but no search is needed for replacement.

4. LRU approximation page replacement:


Additional-Reference-Bits Algorithm
Second-Chance Algorithm
Enhanced Second-Chance Algorithm
Additional-reference-bits algorithm:
– Uses a reference bit, initially 0; after a reference, the hardware sets it to 1.
– We can use an 8-bit byte per page as a history of reference bits.
– At regular intervals, shift the reference bit into the high-order bit of the 8-bit byte, shifting the other bits right by 1 bit and discarding the low-order bit.
Second-chance algorithm:
Uses FIFO plus a reference bit.
Implemented using a circular queue.
Also referred to as the CLOCK algorithm.
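A sketch of the victim-selection step of the CLOCK algorithm over a circular list of frames; the reference bits are assumed to have been set by hardware on each access.

def clock_select(ref_bits, hand):
    # Sweep the circular list: a page with reference bit 1 gets a second
    # chance (its bit is cleared); the first page with bit 0 is the victim.
    while ref_bits[hand] == 1:
        ref_bits[hand] = 0
        hand = (hand + 1) % len(ref_bits)
    return hand, (hand + 1) % len(ref_bits)   # victim index, new hand

victim, hand = clock_select([1, 0, 1, 1], 0)
print(victim)   # 1: the second frame is replaced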
Enhanced second-chance algorithm:
Considers the reference bit and the modify bit as an ordered pair (reference bit, modify bit).
Four possible cases:
(0,0) - neither recently used nor modified: the best page to replace.
(0,1) - not recently used but modified: must be written out before replacement.
(1,0) - recently used but clean: probably will be used again soon.
(1,1) - recently used and modified: the worst candidate for replacement.
Explain the file-sharing system in detail?
Sharing of files on multi-user systems is desirable.
Sharing may be done through a protection scheme.
On distributed systems, files may be shared across a network.
Network File System (NFS) is a common distributed file-sharing method.
File Sharing – Multiple Users:
User IDs identify users, allowing permissions and protections to be per-user.
Group IDs allow users to be in groups, permitting group access rights.
File Sharing – Remote File Systems:
Uses networking to allow file-system access between systems, e.g., manually via programs like FTP.
What are the main components of a Windows NT executive?
1. Process and thread manager. 2. Virtual memory manager. 3. Security reference monitor. 4. Protection/auditing I/O system. 5. Device-independent I/O. 6. Configuration manager. 7. Plug and Play manager.
How do we guarantee no cycles in the directory graph?
Allow links only to files, not subdirectories.
Use garbage collection.
Every time a new link is added, use a cycle detection algorithm to determine whether it is OK.

What is file protection? Explain in detail.

Protection:
Keeping data safe from physical damage (reliability), caused by bugs in the OS, disk crashes, etc., and from unauthorized access (protection). Reliability is generally provided by keeping duplicate copies of files.
Access Control
Types of access:
Read
Write
Execute
Append
Delete
List
Access Lists and Groups
Modes of access: read (R), write (W), execute (X).
Three classes of users:
a) Owner access: 7 = RWX = 111
b) Group access: 6 = RW- = 110
c) Public access: 1 = --X = 001
To set up group access: ask the manager to create a group (with a unique name), say G, and add some users to it. For a particular file (say "game") or subdirectory, define an appropriate access mode, then attach the group to the file: chgrp G game.
Remote files can also be shared automatically and seamlessly using distributed file systems, or semi-automatically via the World Wide Web.
The client-server model allows clients to mount remote file systems from servers; a server can serve multiple clients.
Client and user-on-client identification is insecure or complicated.
NFS is the standard UNIX client-server file-sharing protocol.
CIFS is the standard Windows protocol.
Standard operating-system file calls are translated into remote calls.
Distributed information systems such as LDAP, DNS, NIS, and Active Directory implement unified access to the information needed for remote computing.
File Sharing – Failure Modes:
Remote file systems add new failure modes, due to network failure and server failure.
Recovery from failure can involve state information about the status of each remote request.
Stateless protocols such as NFS include all information in each request, allowing easy recovery but less security.
File Sharing – Consistency Semantics:
Consistency semantics specify how multiple users are to access a shared file simultaneously.
They are similar to process synchronization algorithms, but tend to be less complex due to disk I/O and network latency (for remote file systems).
The Andrew File System (AFS) implements complex remote file-sharing semantics.
The Unix file system (UFS) implements:
Writes to an open file are visible immediately to other users of the same open file.
A shared file pointer allows multiple users to read and write concurrently.
AFS has session semantics: writes are visible only to sessions starting after the file is closed.

Explain the directory structure in detail? The file system of a computer can be extensive; some systems store millions of files on terabytes of disk. Directory structures help to organize files. Free space is managed within each partition, and each file-system partition has a directory of its files.
A directory is a collection of nodes containing information about all files. Both the directory structure and the files reside on disk; backups of these two structures are kept on tape.
Operations performed on a directory:
a) Search for a file
b) Create a file
c) Delete a file
d) List a directory
e) Rename a file
f) Traverse the file system
Organize the directory (logically) to obtain:
Efficiency – locating a file quickly.
Naming – convenience to users: two users can have the same name for different files, and the same file can have several different names.
Grouping – logical grouping of files by properties (e.g., all Java programs, all games, ...).
Single-Level Directory:
A single directory for all users; all files are referenced from one directory, and there is only a single directory for the disk partition.
Advantage: the simplest directory structure, so it is easy to understand and use.
Disadvantages: a single directory for all users leads to naming problems and grouping problems.
Two-Level Directory: A two-level directory has two kinds of directories: the user file directory (UFD) and the master file directory (MFD).
User file directory (UFD): a separate directory created for each user, holding that user's files.
Master file directory (MFD): indexes the user directories. Every file in the system has its own path name; to locate a file uniquely, the user must know the path name of the file from the MFD through the UFD. There is a separate directory for each user.
Advantages: different users can have the same file name, and searching for a file can be done efficiently.
Disadvantage: the user has to remember the path name while searching for a file.
Tree-Structured Directories: In a tree-structured directory, users can create subdirectories of their own.
A UFD contains a set of files or subdirectories.
Special system calls are used to create and delete a directory.
To locate a file, a path name is needed. Path names are of two types:
1. Absolute path – starts from the root node and goes to the exact location of a file.
2. Relative path – starts from the current working directory and goes to the exact location of a file.
Example: if the current directory is /mail, then "mkdir count" creates a subdirectory there, and deleting "mail" deletes the entire subtree rooted at "mail".
Advantages:
Efficient file searching.
Capability of grouping files.
Acyclic-graph directories allow directories to share files and subdirectories, but deleting a shared entry can leave dangling pointers. Solutions:
Back pointers, so we can delete all pointers (variable-size records are a problem).
Back pointers using a daisy-chain organization.
Entry-hold-count solution.
A new directory entry type:
Link – another name (pointer) to an existing file.
Resolve the link – follow the pointer to locate the file.

Memory Management:
Memory management is the process of controlling and coordinating computer memory, assigning portions known as blocks to various running programs to optimize the overall performance of the system.
It is the most important function of an operating system that manages primary memory. It helps processes move back and forth between main memory and the execution disk, and it helps the OS keep track of every memory location, irrespective of whether it is allocated to some process or remains free.
Contiguous Memory Allocation in Operating System:
A contiguous memory allocation is a memory management technique where whenever there is a
request by the user process for the memory, a single section of the contiguous memory block is
given to that process according to its requirement.

Fixed-sized partition scheme:


• It is also known as static partitioning.
• The system's memory is divided into fixed-sized partitions.
In this scheme, each partition may contain exactly one process. This limits the extent of multiprogramming, as the number of partitions decides the number of processes. Contiguous allocation is achieved by dividing the memory into fixed-sized partitions or variable-sized partitions.
Example: in a fixed-size partitioning scheme, a memory of 15 KB might be divided into fixed-size partitions. These partitions are allocated to processes as they arrive; which partition is allocated to an arriving process depends on the algorithm followed. If there is some wastage inside a partition, it is termed internal fragmentation.
Advantages of Fixed-sized partition scheme:
1. This scheme is easy to implement.
2. It makes management easier.
3. It supports multiprogramming.
Disadvantages of Fixed-sized partition scheme:
1. It limits the extent of multiprogramming.
2. The unused portion of each partition cannot be used to load other programs.
3. The size of the process cannot be larger than the size of the partition, hence limiting the size of
the processes.

Variable-sized partition scheme:


It is also known as dynamic partitioning. In this scheme, allocation is done dynamically.
The size of each partition is not declared initially; it is decided only once the size of the process is known.
The size of the partition equals the size of the process, hence preventing internal fragmentation.
(When a process is smaller than its partition, the wasted space inside the partition is known as internal fragmentation. This is a concern in static partitioning; dynamic partitioning aims to solve this issue.)

Advantages of Variable-sized partition scheme:


The advantages of a variable-sized partition scheme are as follows:
1. There is no internal fragmentation.
2. The degree of multiprogramming is dynamic.
3. There is no limitation on the size of the processes.

Disadvantages of Variable-sized partition scheme:


The disadvantages of a variable-sized partition scheme are as follows:
1. It is challenging to implement.
2. It is prone to external fragmentation.

Multitasking:
Multitasking is when multiple jobs are executed by the CPU simultaneously by switching between them.
Switches occur so frequently that the users may interact with each program while it is running. An OS
does the following activities related to multitasking −
▪ The user gives instructions to the operating system or to a program directly, and receives an
immediate response.
▪ The OS handles multitasking in the way that it can handle multiple operations/executes multiple
programs at a time.
▪ Multitasking Operating Systems are also known as Time-sharing systems.
▪ These Operating Systems were developed to provide interactive use of a computer system at a
reasonable cost.
▪ A time-shared operating system uses the concept of CPU scheduling and multiprogramming to provide
each user with a small portion of a time-shared CPU.
▪ Each user has at least one separate program in memory.
▪ A program that is loaded into memory and is executing is commonly referred to as a process.
▪ When a process executes, it typically executes for only a very short time before it either finishes or
needs to perform I/O.
▪ Since interactive I/O typically runs at slower speeds, it may take a long time to complete. During this
time, a CPU can be utilized by another process.
▪ The operating system allows the users to share the computer simultaneously. Since each action or
command in a time-shared system tends to be short, only a little CPU time is needed for each user.
▪ As the system switches CPU rapidly from one user/program to the next, each user is given the
impression that he/she has his/her own CPU, whereas actually one CPU is being shared among many
users.

Multiprogramming:
Sharing the processor when two or more programs reside in memory at the same time is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute.
The following figure shows the memory layout for a multiprogramming system.

An OS does the following activities related to multiprogramming.


▪ The operating system keeps several jobs in memory at a time.
▪ This set of jobs is a subset of the jobs kept in the job pool.
▪ The operating system picks and begins to execute one of the jobs in the memory.
▪ Multiprogramming operating systems monitor the state of all active programs and system resources, using memory management programs to ensure that the CPU is never idle unless there are no jobs to process.

Advantages:
▪ High and efficient CPU utilization.
▪ User feels that many programs are allotted CPU almost simultaneously.

Disadvantages:
▪ CPU scheduling is required.
▪ To accommodate many jobs in memory, memory management is required.

Swapping and Overlays
Overlays

The main problem with fixed partitioning is that the size of a process is limited by the maximum size of the partition, which means a process can never span more than one partition. An early solution to this problem was called overlays.
The concept of overlays is that a running process does not use its complete program at the same time; it uses only some part of it. The overlay approach loads whatever part is required and, once that part is done, unloads it and brings in the next part needed. Formally:
"The process of transferring a block of program code or other data into internal memory, replacing what is already stored."
Sometimes the program is even larger than the biggest partition; in that case, overlays should be used.
So overlay is a technique to run a program that is bigger than the size of physical memory by keeping in memory only those instructions and data that are needed at any given time. The program is divided into modules in such a way that not all modules need to be in memory at the same time.
Advantages:
• Reduced memory requirement.
• Reduced time requirement.
Disadvantages:
• The overlay map must be specified by the programmer.
• The programmer must know the memory requirements.
• Overlaid modules must be completely disjoint.
• The design of an overlay structure is complex and not possible in all cases.
Example: The classic example of overlays is a two-pass assembler. Two passes means that at any time the assembler is doing only one thing: either the 1st pass or the 2nd pass; it finishes the 1st pass first and then runs the 2nd pass. Assume that the available main memory size is 150 KB and the total code size is 200 KB:
Pass 1.......................70KB
Pass 2.......................80KB
Symbol table.................30KB
Common routine...............20KB
As the total code size is 200 KB and the main memory size is 150 KB, it is not possible to keep both passes in memory together, so we use the overlays technique. According to the overlays concept, only one pass is in memory at any time, and both passes always need the symbol table and the common routine. If the overlay driver is 10 KB, what is the minimum partition size required? For pass 1 the total memory needed is (70 KB + 30 KB + 20 KB + 10 KB) = 130 KB, and for pass 2 the total memory needed is (80 KB + 30 KB + 20 KB + 10 KB) = 140 KB. So with a partition of at least 140 KB we can run this code easily.
Swapping
• Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
• The backing store is usually a hard disk drive or other secondary storage that is fast in access and large enough to accommodate copies of all memory images for all users. It must be capable of providing direct access to these memory images.
• A major time-consuming part of swapping is transfer time. Total transfer time is directly proportional to the amount of memory swapped. Assume the user process is 100 KB and the backing store is a standard hard disk with a transfer rate of 1 MB per second. The actual transfer of the 100 KB process to or from memory will take:
100 KB / 1000 KB per second = 1/10 second = 100 milliseconds

Physical File vs. Logical File

A physical file occupies a portion of memory and contains the original data; a logical file does not occupy memory space and does not contain data.

A physical file contains one record format; a logical file can contain up to 32 record formats.

A physical file can exist without a logical file; a logical file cannot exist without a physical file.

If there is a logical file for a physical file, the physical file cannot be deleted until the logical file is deleted; the logical file, however, can be deleted without deleting the physical file.

The CRTPF command is used to create a physical file; the CRTLF command is used to create a logical file.

Physical files represent the real data saved on an iSeries system and describe how the data is to be displayed to or retrieved from a program; a logical file represents one or more physical files and holds a description of the records found in them.

File Access Methods in OS: A file is a collection of bits/bytes or lines stored on secondary storage devices such as a hard drive. File access methods in an OS are the techniques used to read data from the system's memory. There are various ways in which we can access files:
1. Sequential Access
2. Direct/Relative Access
3. Indexed Sequential Access
These methods by which the records in a file can be accessed are referred to as the file access mechanism. Each file access mechanism has its own set of benefits and drawbacks, which are discussed below.
1. Sequential Access
In the sequential access method, the operating system reads the file word by word. A pointer is maintained, initially pointing to the file's base address. When the user wishes to read the first word of the file, the pointer provides it and then advances to the next word. This procedure continues till the end of the file. It is the most basic way of accessing a file. The data in the file is processed in the order in which it appears in the file, which is why it is easy and simple to access a file's data using the sequential access mechanism. For example, editors and compilers frequently use this method to check the validity of code.
Advantages of Sequential Access:
• The sequential access mechanism is very easy to implement.
• It uses lexicographic order to enable quick access to the next entry.
Disadvantages of Sequential Access:
• Sequential access becomes slow if the next record to be retrieved is not adjacent to the currently pointed record.
• Adding a new record may require relocating a significant number of the file's records.

2. Direct (or Relative) Access


A direct/relative file access mechanism is mostly required by database systems. In the majority of circumstances, we require filtered or specific data from the database, and in such circumstances sequential access can be highly inefficient. Assume that each block of storage holds four records and that the record we want to access is stored in the tenth block. Sequential access would have to traverse all of the blocks to get to the required record, while direct access allows us to access the required record instantly.
The direct access mechanism requires the OS to perform some additional tasks, but eventually leads to much faster retrieval of records compared to sequential access.
Advantages of Direct/Relative Access:
• Files can be retrieved right away with the direct access mechanism, reducing the average access time of a file.
• There is no need to traverse all of the blocks that come before the required block to access the record.
Disadvantages of Direct/Relative Access:
• The direct access mechanism is typically difficult to implement due to its complexity.
• Organizations can face security issues as a result of direct access as the users may access/modify the
sensitive information. As a result, additional security processes must be put in place.
3. Indexed Sequential Access
This is another approach to file access, built on top of the sequential access mechanism. It works much
like the pointer-to-pointer concept, in which one pointer variable stores the address of another pointer
variable that in turn holds the address of the actual variable/record. The indexes, similar to a book's
index, contain links to the various blocks present in memory.


To locate a record in the file, we first search the index and then follow the stored pointer to navigate to
the required record.
Primary index blocks contain links to the secondary (inner) index blocks, which in turn contain links to
the actual data in memory.
Advantages of Indexed Sequential Access:
• If the index table is appropriately arranged, it accesses the records very quickly.
• Records can be added at any position in the file quickly.
Disadvantages of Indexed Sequential Access:
• When compared to other file access methods, it is costly and less efficient.
• It needs additional storage space.
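
A minimal sketch of indexed sequential access in Python; the index contents, key names, and file name
are all hypothetical. The index is searched first, and the offset it yields is then followed, much like the
pointer-to-pointer idea described above:

# Assumed index: record key -> byte offset of that record in the data file.
index = {"K001": 0, "K002": 64, "K003": 128}

def fetch(path, key, record_size=64):
    offset = index[key]               # step 1: search the index for the key
    with open(path, "rb") as f:
        f.seek(offset)                # step 2: follow the pointer into the file
        return f.read(record_size)

data = fetch("records.dat", "K002")   # lands directly on the second record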

Allocation strategy module:


1. Contiguous Allocation: A single contiguous set of blocks is allocated to a file at the time of file
creation. Thus, this is a pre-allocation strategy that uses variable-size portions. The file allocation table
needs just a single entry for each file, showing the starting block and the length of the file. This method
is best from the point of view of an individual sequential file: multiple blocks can be read in at a time to
improve I/O performance for sequential processing, and it is also easy to retrieve a single block. For
example, if a file starts at block b and the ith block of the file is wanted, its location on secondary
storage is simply b + i - 1.
Disadvantage
▪ External fragmentation will occur, making it difficult to find contiguous blocks of space of sufficient
length. A compaction algorithm will be necessary to free up additional space on the disk.
▪ Also, with pre-allocation, it is necessary to declare the size of the file at the time of creation.
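
The block-address arithmetic above is easy to check with a short Python sketch (the starting block and
block index are made up for illustration):

def block_address(b, i):
    # Contiguous allocation: the ith block of a file that starts at
    # block b sits at block b + i - 1 on secondary storage.
    return b + i - 1

print(block_address(14, 1))   # 14 -- the first block is the starting block itself
print(block_address(14, 3))   # 16 -- two blocks past the start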

2. Linked Allocation (Non-contiguous allocation): Allocation is on an individual block basis. Each block
contains a pointer to the next block in the chain. Again, the file allocation table needs just a single entry
for each file, showing the starting block and the length of the file. Although pre-allocation is possible, it
is more common simply to allocate blocks as needed. Any free block can be added to the chain, and the
blocks need not be contiguous. An increase in file size is always possible if a free disk block is available.
There is no external fragmentation because only one block at a time is needed, but there can be internal
fragmentation, which exists only in the last disk block of the file.
Disadvantage:
▪ Internal fragmentation exists in last disk block of file.
▪ There is an overhead of maintaining the pointer in every disk block.
▪ If the pointer of any disk block is lost, the file will be truncated.
▪ It supports only sequential access of files.
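
A minimal sketch of a linked-allocation chain in Python, using a made-up disk table mapping a block
number to (data, pointer to next block); the block numbers and contents are hypothetical:

# Toy disk: each block stores its data and a pointer to the next block.
disk = {4: ("A", 9), 9: ("B", 2), 2: ("C", None)}

def read_chain(start):
    # Follow the per-block pointers from the starting block; if any one
    # pointer were lost, everything after it would be unreachable.
    block = start
    while block is not None:
        data, nxt = disk[block]
        yield data
        block = nxt

print(list(read_chain(4)))   # ['A', 'B', 'C'] -- blocks 4, 9, 2 need not be adjacent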

3. Indexed Allocation: It addresses many of the problems of contiguous and chained allocation. In this
case, the file allocation table contains a separate one-level index for each file: the index has one entry
for each block allocated to the file. Allocation may be on the basis of fixed-size blocks or variable-size
blocks. Allocation by fixed-size blocks eliminates external fragmentation, whereas allocation by
variable-size blocks improves locality. This allocation technique supports both sequential and direct
access to the file and thus is the most popular form of file allocation.
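
A minimal sketch of indexed allocation in Python; the index-block entries and disk contents are made
up. The per-file index supports both kinds of access: walking it in order gives sequential access, and
subscripting it gives direct access:

# Assumed per-file index block: the ith entry names the disk block holding part i.
index_block = [7, 3, 11, 5]
disk = {7: "part0", 3: "part1", 11: "part2", 5: "part3"}

for blk in index_block:            # sequential access: follow the index in order
    print(disk[blk])

print(disk[index_block[2]])        # direct access: jump straight to the 3rd block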


Page Replacement Algorithms:


Reference String
The string of memory references made by a process is called a reference string. Reference strings are
generated artificially or by tracing a given system and recording the address of each memory reference.
The latter choice produces a large amount of data, about which we note two things.
• For a given page size, we need to consider only the page number, not the entire address.
• If we have a reference to a page p, then any immediately following references to page p will never
cause a page fault: page p will be in memory after the first reference, so the immediately following
references will not fault.
• For example, given the addresses 123, 215, 600, 1234, 76, 96
• and a page size of 100, the reference string is 1, 2, 6, 12, 0, 0.
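
The conversion from addresses to a reference string is just integer division by the page size, as this
short Python check of the example above shows:

addresses = [123, 215, 600, 1234, 76, 96]
page_size = 100
reference_string = [addr // page_size for addr in addresses]
print(reference_string)   # [1, 2, 6, 12, 0, 0]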

1. First In First Out (FIFO): This is the simplest page replacement algorithm. In this algorithm, the
operating system keeps track of all pages in memory in a queue, with the oldest page at the front of the
queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.
Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of
page faults. Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3
Page Faults. When 3 comes, it is already in memory so —> 0 Page Faults. Then 5 comes; it is not in
memory, so it replaces the oldest page, i.e. 1 —> 1 Page Fault. 6 comes; it is also not in memory, so it
replaces the oldest page, i.e. 3 —> 1 Page Fault. Finally, when 3 comes it is no longer in memory, so it
replaces 0 —> 1 Page Fault, for a total of 6 page faults. Belady's anomaly shows that it is possible to
have more page faults when increasing the number of page frames while using the First In First Out
(FIFO) page replacement algorithm. For example, for the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4
with 3 frames we get 9 total page faults, but if we increase the number of frames to 4, we get 10 page
faults.
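
A short FIFO simulation in Python that reproduces both counts worked out above (6 faults for the first
example; 9 versus 10 faults for the Belady's anomaly string):

from collections import deque

def fifo_faults(reference_string, frames):
    # The front of the queue is always the oldest resident page,
    # and it is the one evicted on a fault when memory is full.
    queue, faults = deque(), 0
    for page in reference_string:
        if page not in queue:          # page fault
            faults += 1
            if len(queue) == frames:
                queue.popleft()        # evict the oldest page
            queue.append(page)
    return faults

print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))           # 6
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 10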
2. Optimal Page Replacement: In this algorithm, the page replaced is the one that will not be used for
the longest duration of time in the future.
Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.
• Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
• 0 is already there so —> 0 Page Fault.
• When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the
future —> 1 Page Fault.
• 0 is already there so —> 0 Page Fault.
• 4 takes the place of 1 —> 1 Page Fault.
• For the rest of the reference string —> 0 Page Faults, because those pages are already in memory.
This gives 6 page faults in total. Optimal page replacement is perfect, but not possible in practice, as the
operating system cannot know future requests. The use of optimal page replacement is to set up a
benchmark so that other replacement algorithms can be analysed against it.
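
A compact simulation of optimal replacement in Python, reproducing the 6 faults counted above. On a
fault with full memory, it evicts the resident page whose next use lies farthest in the future (or that is
never used again):

def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            continue                   # hit: nothing to do
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue
        def next_use(p):
            # Distance to p's next reference; infinity if never used again.
            future = refs[i + 1:]
            return future.index(p) if p in future else float("inf")
        memory.remove(max(memory, key=next_use))
        memory.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_faults(refs, 4))   # 6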
3. Least Recently Used (LRU): In this algorithm, the page replaced is the one that has been least
recently used.
Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames.
Find the number of page faults.
• Initially, all slots are empty, so when 7, 0, 1, 2 are allocated to the empty slots —> 4 Page Faults.
• 0 is already there so —> 0 Page Fault.
• When 3 comes, it takes the place of 7 because 7 is the least recently used page —> 1 Page Fault.
• 0 is already in memory so —> 0 Page Fault.
• 4 takes the place of 1 —> 1 Page Fault.
• For the rest of the reference string —> 0 Page Faults, because those pages are already in memory.
This again gives 6 page faults in total.
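
A short LRU simulation in Python, again reproducing the 6 faults above. Recency is tracked with an
ordered list: every reference moves the page to the back, so the front is always the least recently used
page:

def lru_faults(refs, frames):
    recency, faults = [], 0
    for page in refs:
        if page in recency:
            recency.remove(page)       # hit: refresh this page's recency
        else:
            faults += 1
            if len(recency) == frames:
                recency.pop(0)         # evict the least recently used page
        recency.append(page)           # most recently used goes to the back
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_faults(refs, 4))   # 6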


Page Buffering algorithm:


To let a process restart quickly after a page fault, the system keeps a pool of free frames.
• On a page fault, a victim page is selected for replacement as usual.
• The new page is read into a frame taken from the free pool, the page table is updated, and the process
is restarted without waiting for the victim page to be written out.
• Later, the dirty victim page is written out to disk and the frame holding it is added to the free pool.
Least Frequently Used (LFU) algorithm
The page with the smallest reference count is the one selected for replacement.
This algorithm suffers in the situation where a page is used heavily during the initial phase of a process
but is never used again: its high count keeps it in memory long after it has stopped being useful.
Most Frequently Used (MFU) algorithm
This algorithm replaces the page with the largest reference count, based on the argument that the page
with the smallest count was probably just brought in and has yet to be used.
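
A tiny Python sketch of how the two policies pick their victims; the reference counts and resident pages
are hypothetical, and only the victim-selection step is shown:

from collections import Counter

# Assumed reference counts for the resident pages.
counts = Counter({7: 5, 0: 2, 1: 1, 2: 3})
resident = [7, 0, 1, 2]

lfu_victim = min(resident, key=lambda p: counts[p])   # page 1: smallest count
mfu_victim = max(resident, key=lambda p: counts[p])   # page 7: largest count
print(lfu_victim, mfu_victim)                         # 1 7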

Static vs Dynamic Loading:


The choice between static and dynamic loading is made at the time the computer program is developed.
If you load your program statically, then at the time of compilation the complete program is compiled
and linked, leaving no external program or module dependency unresolved. The linker combines the
object program with the other necessary object modules into an absolute program, which also includes
logical addresses.
If you are writing a dynamically loaded program, then your compiler compiles the program and, for all
the modules which you want to include dynamically, only references are provided; the rest of the work
is done at the time of execution.
At the time of loading, with static loading, the absolute program (and data) is loaded into memory in
order for execution to start.
If you are using dynamic loading, the dynamic routines of the library are stored on a disk in relocatable
form and are loaded into memory only when they are needed by the program.
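
As a rough analogy (not the OS-level mechanism itself), Python's importlib resolves a dependency at
run time rather than at compile/link time; the module name "report_module" is hypothetical:

import importlib

def generate_report():
    # The dependency is fetched from disk only when this function runs,
    # keeping program start-up light -- the essence of dynamic loading.
    report = importlib.import_module("report_module")
    return report.generate()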
File:
A file is a named collection of related information that is recorded on secondary storage such as
magnetic disks, magnetic tapes and optical disks. In general, a file is a sequence of bits, bytes, lines or
records whose meaning is defined by the file's creator and user.
Commonly used terms in File systems:
FIELD: This element stores a single value, which can be of fixed or variable length.
DATABASE: Collection of related data is called a database. Relationships among elements of data are
explicit.
RECORD: A record type is a complex data type that allows the programmer to create a new data type
with the desired column structure. It groups one or more columns to form a new data type; these
columns have their own names and data types.
