
Mangalmay Institute of Engineering and Technology, Greater Noida

Operating System (BCS-401)


Unit – 4

Ayushi Gupta
(Assistant Professor)
Department of Computer Science and Engineering
Mangalmay Institute of Engineering and Technology, Greater Noida

Content :
Memory Management: Basic bare machine, Resident monitor,
Multiprogramming with fixed partitions, Multiprogramming with variable
partitions, Protection schemes, Paging, Segmentation, Paged segmentation,
Virtual memory concepts, Demand paging, Performance of demand paging,
Page replacement algorithms, Thrashing, Cache memory organization,
Locality of reference.
 Basic Bare Machine:
It is the most basic environment, in which there is no operating system and user
programs execute directly on top of the hardware. The program has full control
over the computer. Programs were fed into the computer directly in machine
language by the user, without any operating system services or system software
support. Using a bare machine was cumbersome and inefficient because, in the
absence of an operating system, every task had to be performed manually. It is
the operating system that makes a bare machine work like a computer system: it
acts as an interface and provides a friendly environment and ease of work to
system users.
 Resident Monitor :
The Resident Monitor was the first elementary OS, in which initial control lies
with the monitor. A small system program performs the job-sequencing task, which
is why it is also known as a job sequencer. When the computer is switched on, the
resident monitor is invoked; it loads the next program and transfers control to
it. When the program finishes, control is transferred back to the monitor. The
resident monitor keeps transferring control from one program to another
automatically, with no time gap between programs. The resident monitor consists
of three major parts:
• Control Language Interpreter: responsible for reading and carrying out the
instructions punched on the cards at the time of execution.
• Loader: the Control Language Interpreter calls upon the loader to load system
programs and application programs into memory.
• Device Drivers: both the Control Language Interpreter and the loader use
device drivers to perform I/O operations on the system's I/O devices. Each I/O
device has an associated device driver that handles that device.
 Memory Management and its basic concepts :
• Memory Management is the process of controlling and coordinating computer
memory, assigning portions known as blocks to the various running programs so
as to optimize overall system performance.

• It is one of the most important functions of an operating system: it manages
primary memory, moves processes back and forth between main memory and disk
during execution, and keeps track of every memory location, whether it is
allocated to some process or free.

Some basic concepts of Memory Management :
Address binding :
In a program the user refers to addresses by symbolic names such as ptr, addr,
etc. When the program is compiled, the compiler binds these symbolic addresses
to relocatable addresses, and the linkage editor or loader then binds the
relocatable addresses to absolute memory addresses. Each step of binding is a
mapping from one address space to another. The binding of a user program
(instructions and data) to memory addresses can be done at compile time, load
time or execution time. Let us see each of them one by one.
a.) Compile-time binding: If the starting location of the user program in memory
is known at compile time, the compiler generates absolute code. Absolute code is
executable binary code that must always be loaded at a specific location in
memory. If the location of the user program changes, the program must be
recompiled. For example, the .COM programs in the MS-DOS operating system are
absolute code generated by compile-time binding.
b.) Load-time binding: If the location of the user program in memory is not
known at compile time, the compiler generates relocatable code. If the location
of the user program changes, the program need not be recompiled; the code only
needs to be reloaded to reflect the change.

c.) Run-time binding: Programs (or processes) may need to be relocated during
run time from one memory segment to another. Run-time (or execution-time)
binding is the most popular and flexible scheme, provided the required hardware
support is available in the system.
Logical and Physical Address :
The address defined and referenced by the user (programmer) in a program is
called a logical address; it is generated by the CPU. A physical address is the
actual storage location in system memory; the OS, based on the available space
in memory and other criteria, generates the physical address corresponding to
each address used in the user program. Logical addresses are also known as
virtual addresses, and physical addresses are known as real addresses. With
compile-time and load-time address binding, the logical and physical addresses
are the same, but with execution-time binding they differ. The terms logical
address and virtual address are one and the same, as both refer to the same
address.
The set of all logical addresses defined and referenced by the user in a program
is called the logical address space, and the set of all physical addresses
corresponding to these logical addresses is called the physical address space.
Mapping from Virtual address to Physical Addresses:
The mapping from virtual to physical addresses is handled by a hardware device,
the Memory Management Unit (MMU), which generates the physical address
corresponding to each logical address. There are various methods of performing
this mapping, such as paging and segmentation.
Simple MMU scheme for mapping virtual addresses to physical addresses:
The figure below shows a simple MMU scheme that uses two registers: a base
register and a limit register. The base register (also called the relocation
register) contains the starting (base) address at which the user program is
loaded in memory; for example, if the program starts at address 1000, the base
register contains 1000. The limit register specifies the range (or boundary) of
logical addresses the user program may use, so the program can access addresses
within that range only. For example, if the limit register contains 1500 and the
base register contains 1000, the user program can access addresses in the range
1000 to 2500 (1000 + 1500 = 2500).
The limit register solves the protection problem by bounding each user program
to its defined range, so that it cannot illegally or accidentally access another
user's program.
The figure of the MMU scheme for mapping virtual addresses to physical addresses
is given below.
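To make the base/limit mechanism concrete, below is a minimal sketch (in Python, purely for illustration) of the check the MMU performs, using the values from the example above (base = 1000, limit = 1500):

def translate(logical_address, base, limit):
    """Relocate a logical address using base/limit registers (sketch)."""
    if logical_address >= limit:       # outside the program's allowed range
        raise MemoryError("trap: addressing error (protection violation)")
    return base + logical_address      # relocated physical address

# Values from the example above: base = 1000, limit = 1500
print(translate(0, 1000, 1500))        # -> 1000 (first address of the program)
print(translate(1499, 1000, 1500))     # -> 2499 (last legal address)
# translate(1500, 1000, 1500) would raise a protection trap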
 Multiprogramming with fixed partitions :
In early multiprogramming systems the main memory was divided into fixed
partitions, each capable of holding one process. Fixed partitions can be of
equal or unequal size, but once their sizes are fixed they cannot be changed:
the start and end address of each partition is fixed in advance.

In the case of equal-size fixed partitions there is a single process queue, in
which all processes waiting to be brought into main memory for execution are
maintained. If a partition is available, the process is loaded into it; since
all partitions are the same size, it does not matter which partition is used.
In the case of unequal-size fixed partitions, the size of each partition is
fixed at the start and cannot be changed. For example, suppose the system has
64 KB of memory, of which 16 KB is occupied by the OS and the remaining 48 KB is
available for user processes. This 48 KB is divided into five partitions:
• Two partitions of 4 KB
• One partition of 8 KB
• Two partitions of 16 KB
Unequal-size partitioning is shown in the figure below.
Unequal Size Partition :
A separate process queue is maintained for each partition size. The strategy is
that when a new process enters the system, it is placed in the queue of the
partition whose size best fits the process. The problem is that some queues may
be empty while others are heavily loaded. In either case, when a program
completes its execution and terminates, its allocated partition becomes free for
another program waiting in a queue. Multiprogramming with the restriction of
fixed-size partitions wastes memory and leads to internal fragmentation.
Internal fragmentation occurs when the size of a program is smaller than the
size of the partition allocated to it: the extra allocated memory is not used by
the program and is wasted.
Multiprogramming with variable partitions:
In a multiprogramming system with variable partitions, main memory is divided
into partitions whose sizes are not fixed. In this system:
• Each partition is capable of holding one process.
• Partitions can change size as the need arises; adjacent free partitions can be
combined to form a bigger space to accommodate a large process.
• The number of partitions can vary depending on the size of memory.

There is a process queue in which all processes waiting to be brought into main
memory for execution are maintained. The operating system uses a scheduling
algorithm to dispatch processes one by one from this queue. If a partition large
enough to accommodate the process is available, the OS loads the process into
it. When a program completes its execution and terminates, its allocated memory
area becomes free. The various free memory areas are called holes. As memory is
allocated and deallocated, holes of various sizes appear spread throughout
memory. The figure below shows 'holes' in a variable-partition multiprogramming
system.
When a process is dispatched from the process queue to be stored in memory, the
OS first searches for a hole large enough to accommodate it. If none of the
available holes is large enough, adjacent free partitions can be merged to form
a bigger hole that can accommodate the process. This process of merging adjacent
free partitions into a bigger hole is called coalescing and is shown in the
figure.
 Paging
• Basic Paging Method
Paging is a memory-management scheme used in most operating systems and is a
good solution to the problem of fragmentation. In paging, both physical and
logical memory are divided into fixed-size blocks: these blocks are called
frames in physical memory and pages in logical memory. A frame and a page are of
the same size.
The figure below shows the logical memory, the page table and the physical
memory in a paging implementation.
Page size is decided by the paging hardware and is always a power of 2, for
example 16 (2^4), 32 (2^5), and so on. If the logical address is of size 2^m
units (where a unit can be a bit, byte or word) and the page size is 2^n units,
then the higher-order m-n bits of the logical address denote the page number and
the lower n bits denote the page offset. Thus the logical address is divided as:

| Page number | Page offset |
   m-n bits       n bits

For example, suppose the logical address space is of size 2^16 and the page size
is 2^6, so m = 16 and n = 6. The logical address (bits 15..0) is then divided as:

| Page number (bits 15-6) | Page offset (bits 5-0) |
       m-n = 10 bits             n = 6 bits
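As a quick illustration of this division, the following sketch (Python, illustrative only) extracts the page number and the offset from a logical address, assuming the page size is a power of two and using m = 16, n = 6 from the example above:

def split_logical_address(addr, n):
    """Split a logical address into (page number, offset) for page size 2**n."""
    page_size = 1 << n
    page_number = addr >> n            # high-order m-n bits
    offset = addr & (page_size - 1)    # low-order n bits
    return page_number, offset

# m = 16, n = 6 as above: 16-bit logical addresses, 64-byte pages
print(split_logical_address(0b0000000001_000011, 6))   # -> (1, 3)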
Let us consider an example in which:
• Physical memory is of size 16 bytes.
• Logical memory is divided into pages of 2 bytes each.
• So the total number of pages that can be stored in physical memory
  = Physical memory size / Page size
  = 16 / 2 = 8 pages

The formula to find the physical address corresponding to a logical address is:

Physical address = (frame number × page size) + page offset
Refer to the figure below for the worked examples:
Referring to that figure, the logical and physical address spaces map as
follows.
Logical address 0 is page number 0, page offset 0.
According to the page table, page number 0 is stored at frame number 4.
So logical address 0 maps to physical address
(frame number × page size) + page offset = (4 × 2) + 0 = 8. The data here is 'ab'.

Similarly, logical address 1 is page number 0, page offset 1.
According to the page table, page number 0 is stored at frame number 4.
So logical address 1 maps to physical address
(frame number × page size) + page offset = (4 × 2) + 1 = 9. The data here is 'cd'.

Likewise, logical address 5 is page number 2, page offset 1.
According to the page table, page number 2 is stored at frame number 3.
So logical address 5 maps to physical address
(frame number × page size) + page offset = (3 × 2) + 1 = 7. The data here is 'kl'.
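The same mapping can be written as a short sketch (illustrative Python). The page-table entries for pages 0 and 2 are taken from the worked example above; the entries shown for pages 1 and 3 are placeholders, not values given in the text:

PAGE_SIZE = 2
# Page table from the worked example: page 0 -> frame 4, page 2 -> frame 3.
# The entries for pages 1 and 3 are placeholders.
page_table = {0: 4, 1: 6, 2: 3, 3: 7}

def logical_to_physical(addr):
    page_number, offset = divmod(addr, PAGE_SIZE)
    frame = page_table[page_number]
    return frame * PAGE_SIZE + offset

print(logical_to_physical(0))   # -> 8  (page 0, offset 0)
print(logical_to_physical(1))   # -> 9  (page 0, offset 1)
print(logical_to_physical(5))   # -> 7  (page 2, offset 1)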
• Paging Hardware - TLB (Translation Look-aside Buffer) :
Paging requires a large amount of mapping information, and this mapping
information, in the form of the page table, is generally stored in physical
memory, so paging requires frequent memory accesses. Each time the program
generates a virtual address, a memory access is needed to read the page table
and map the virtual address to a physical address. Accessing the page table for
every virtual address generated by the CPU (at least once per instruction) makes
the system considerably slower and hurts performance. Often several memory
accesses are required per instruction, which makes the situation even worse.
In-memory page tables allow fast context switching but are very time-consuming
to access.

To overcome this problem, the solution is to use a small, fast hardware cache
known as the Translation Look-aside Buffer (TLB). The TLB is part of the
memory-management unit (MMU) and is used for virtual-to-physical address
translation.
The TLB is a table in which each row holds a page number and its associated
frame number. The logical address generated by the CPU has two parts: page
number and page offset. Whenever the CPU generates a logical address, the
page-number part is searched in the TLB. If the page number is found in the TLB,
this is known as a TLB hit, and the associated frame number is used to map the
logical address to the physical address. If the page number is not found, this
is known as a TLB miss, and the page table is consulted instead. The frame
number obtained from the page table is used for the mapping, and the
page-number/frame-number pair is also entered into the TLB so that it can be
found the next time the TLB is referenced. If the TLB is already full, the
operating system uses a replacement policy such as LRU or FIFO to evict an old
entry and make room for the new one. The figure below shows paging using a TLB.
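A minimal sketch of this hit/miss flow is given below (illustrative Python). The TLB is modeled as a small ordered dictionary, the page-table contents are placeholders, and a simple FIFO eviction policy is assumed:

from collections import OrderedDict

TLB_SIZE = 4
tlb = OrderedDict()                      # page number -> frame number (tiny cache)
page_table = {0: 4, 1: 6, 2: 3, 3: 7}    # backing page table (placeholder values)

def lookup(page_number):
    if page_number in tlb:               # TLB hit: frame number comes from the TLB
        return tlb[page_number]
    frame = page_table[page_number]      # TLB miss: walk the page table instead
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)          # evict the oldest entry (FIFO policy)
    tlb[page_number] = frame             # cache the translation for next time
    return frame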

Paging using TLB


One important thing needs to be taken care of with TLBs. The TLB contains
virtual-to-physical translation entries that are valid only for the currently
executing process; these entries are of no use to other processes. Therefore,
when a context switch takes place from one process to another, the paging
hardware and the OS must make sure that the process about to execute does not
accidentally use TLB entries left over from a previously executed process.
One approach to this problem is simply to flush (erase) the TLB on each context
switch, i.e. to clear the TLB entries before the next process starts executing.
The other approach is to add one more field to the TLB, called an address-space
identifier (ASID), alongside each page-number and frame-number entry. An ASID
uniquely identifies a process and is used to differentiate one process from
another. An ASID is somewhat like a process identifier (PID), but usually has
fewer bits than a PID. With address-space identifiers, the TLB can hold
translation entries of different processes at the same time without any problem.
When a page-number entry is found in the TLB, the ASID associated with it is
also matched against the ASID of the currently executing process. If they match,
it is a TLB hit; otherwise it is a TLB miss.
• Different types of page table
Hierarchical Paging:
One big problem with single-level page tables is that they must contain one
entry for each page of the virtual address space in order to map virtual
addresses to physical addresses. This leads to a very large number of entries,
and therefore the page table itself becomes very large.
One solution to this problem is to use multilevel (hierarchical) page tables.
The most commonly used multilevel page table is the two-level page table, in
which the first-level (primary) page table points to second-level (secondary)
page tables, which in turn contain the frame numbers. Here the virtual address
is divided into three fields: page number 1 (the primary page number), page
number 2 (the secondary page number) and the offset, as shown below for a 32-bit
address:

| Page number 1 (bits 31-22) | Page number 2 (bits 21-12) | Page offset (bits 11-0) |
          10 bits                       10 bits                     12 bits
The primary page table maps to a secondary page table, which in turn maps to the
page frame number.
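A sketch of the two-level lookup, assuming the 10/10/12 bit split shown above (the table contents themselves would be hypothetical), is:

def translate_two_level(vaddr, top_table):
    """Two-level lookup for a 32-bit address split 10/10/12 as in the diagram."""
    p1 = (vaddr >> 22) & 0x3FF       # top 10 bits: index into the first-level table
    p2 = (vaddr >> 12) & 0x3FF       # next 10 bits: index into the second-level table
    offset = vaddr & 0xFFF           # low 12 bits: offset within the 4 KB page
    second_level = top_table[p1]     # first-level entry points to a second-level table
    frame = second_level[p2]         # second-level entry holds the frame number
    return (frame << 12) | offset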
Hashed Page Table :
A hashed page table is mostly used in systems where the logical address space is
large (for example, larger than 32 bits). In this scheme, chains of linked lists
are maintained. When the CPU generates a logical address, the page-number field
of this logical address is passed through a hash function to produce a hash
value; in other words, the page number is hashed. Each hash value selects a
specific linked list, i.e. each hashed page-table entry points to a chain. Each
element of the chain consists of three fields:
• the page number
• the frame number
• a pointer to the next element in the linked list
The page number from the logical address is compared with the page number in the
first element of the selected chain. If they match, the corresponding frame
number is used to calculate the physical address; if not, the rest of the chain
is searched until a match is found.
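The chained lookup described above can be sketched as follows (illustrative Python; the hash function and the table size are arbitrary choices, not mandated by the scheme):

TABLE_SIZE = 16

class Entry:
    def __init__(self, page, frame, nxt=None):
        self.page, self.frame, self.next = page, frame, nxt

hashed_table = [None] * TABLE_SIZE        # each slot heads a linked list (a chain)

def insert(page, frame):
    slot = hash(page) % TABLE_SIZE
    hashed_table[slot] = Entry(page, frame, hashed_table[slot])

def lookup(page):
    node = hashed_table[hash(page) % TABLE_SIZE]
    while node:                           # walk the chain until the page matches
        if node.page == page:
            return node.frame
        node = node.next
    raise KeyError("page fault")          # page not mapped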
Inverted Page Table :
Each process in a system has a page table associated with it, containing one
entry for each page of the logical address space of that process. A page table
may therefore contain a large number of entries, resulting in (relatively) large
page tables. The solution to this problem is to use a different approach: an
inverted page table.

The logical address generated by the CPU is divided into three fields: process
ID, page number and page offset. The process ID is the unique identification
number of the process associated with that logical address. Each entry in the
inverted page table consists of a process ID and a page number, and the index of
the inverted page table is the frame number. Whenever the CPU generates a
logical address, the (process ID, page number) pair is extracted from it and
searched for in the inverted page table. When a match is found, the index at
which it is found (the frame number) is combined with the offset field of the
logical address to obtain the physical address. If no match is found, a page
fault occurs. This translation procedure is shown in the figure.
The inverted page table figure is given below:
An inverted page table has one major advantage: it can never be larger than the
size of physical memory, whereas the size of an ordinary page table is
determined by the amount of virtual memory used. Thus, using an inverted page
table, the page-table size can be reduced. But it has one major disadvantage: it
is more complex and more expensive (in terms of time) to use for mapping a
virtual address to a physical address. The search of the inverted page table can
be improved by using a hash table.
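A minimal sketch of the inverted-page-table search (illustrative Python; the table contents and the page size are placeholders) is:

PAGE_SIZE = 4096
# One entry per physical frame: (process id, page number), as described above.
inverted_table = [(1, 0), (2, 5), (1, 3), (2, 1)]    # placeholder contents

def translate(pid, page_number, offset):
    for frame, entry in enumerate(inverted_table):   # linear search of the table
        if entry == (pid, page_number):
            return frame * PAGE_SIZE + offset        # index found = frame number
    raise KeyError("page fault")                     # no match found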
 Segmentation :
Segmentation is a scheme used by the memory-management unit for
virtual-to-physical address translation. Segmentation provides a view that a
user can more easily relate to and understand, because a user program consists
of different segments such as the main program, procedures, functions, local
variables, global variables, common blocks, the stack, the symbol table, arrays
and other data structures. Each segment defined in a user program has a specific
purpose, which is why segments are of variable length; the user is not concerned
with where or how these elements are stored in memory. Using segmentation, the
logical address space (where the user program resides) is divided into
variable-length segments, and different segments can be stored anywhere in
memory. Thus users find segmentation easier to relate to their programs without
going into the details of how the segments are managed. The user's view of the
logical address space where the program resides is shown in the figure below.
User's view of the logical address space where the user program resides.
Segmentation Implementation:
In segmentation, each segment in the logical address space has a specific number
and a length associated with it. Each segment has a starting address (base) and
a limit that defines its range. In paging, the logical address specified by the
user is divided into page number and page offset by the paging hardware, but in
segmentation the user specifies the logical address in two dimensions:
<segment number, segment offset>
Thus in segmentation a two-dimensional, user-defined logical address has to be
mapped onto a one-dimensional physical address. For this a segment table is
used. Each entry in the segment table has:
• a base, which gives the starting address of the segment in physical memory
• a limit, which specifies the length of the segment

The implementation of segmentation using a segment table is shown in the figure.
Segmentation implementation using a segment table

The logical address generated by the CPU consists of two parts: the segment
number and the segment offset.
Let's take an example to understand segmentation:
Suppose we have six segments in the logical address space, as shown in the
figure above.
The segment table has a separate entry for each segment, containing the base
address and the limit of that segment.
As can be seen in the figure, segment 4 has base address 4200 and limit 200. So
if the user wants to access offset 47 within segment 4: since offset (47) <
limit (200), base + segment offset forms the physical address = 4200 + 47 =
4247, which is the physical address referenced by the user.
Similarly, if the user wants to access offset 105 within segment 5: since offset
(105) < limit (300), base + segment offset forms the physical address = 2500 +
105 = 2605.
Now suppose the user wants to access offset 350 within segment 1. Since offset
(350) > limit (200), this is an invalid address reference: the limit of segment
1 is only 200, but the reference is made beyond that, at 350.
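These checks can also be written as a short sketch (illustrative Python). Segments 4 and 5 use the base/limit values from the example above; the base shown for segment 1 is a placeholder, since only its limit (200) is given:

# (base, limit) per segment; segment 1's base is a placeholder value.
segment_table = {4: (4200, 200), 5: (2500, 300), 1: (6300, 200)}

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # outside the segment: addressing error
        raise MemoryError("trap: segment offset out of range")
    return base + offset

print(translate(4, 47))     # -> 4247
print(translate(5, 105))    # -> 2605
# translate(1, 350) raises a trap, since 350 exceeds the limit of 200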

Segmentation Disadvantage :
• Segmentation requires more complicated hardware for address
translation than paging.
• Segmentation suffers from external fragmentation, whereas paging yields only a
small amount of internal fragmentation.

 Paged segmentation (or Segmentation with Paging)


To take advantage of both paging and segmentation, some systems combine the two
approaches. This approach is called paged segmentation, or segmentation with
paging. In this technique each segment is viewed as a collection of pages. The
logical address generated by the CPU is divided into three parts: the segment
number, the page number and the offset, as shown below.

| segment number | page number | offset |

Figure: Logical address division in paged segmentation

• The segment number is used as an index into the segment table. The
segment-table entry contains the base address of the page table that holds the
pages belonging to that segment.
• The page number is used as an index into that page table and selects an entry
within it. The page table stores the frame number of each page in physical
memory; this frame number is effectively the base address of the page. The frame
number combined with the offset part of the logical address forms the physical
address, i.e. the actual address in physical memory corresponding to the logical
address generated by the CPU.
Segmentation with paging is shown in the figure below.
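A compact sketch of this two-step translation (illustrative Python; the page size and the table contents are placeholders) is:

PAGE_SIZE = 1024    # placeholder page size

# Each segment-table entry points to that segment's own page table.
segment_table = {
    0: {0: 8, 1: 9},    # segment 0: page 0 -> frame 8, page 1 -> frame 9
    1: {0: 2},          # segment 1: page 0 -> frame 2
}

def translate(segment, page, offset):
    page_table = segment_table[segment]    # segment number selects a page table
    frame = page_table[page]               # page number selects an entry in it
    return frame * PAGE_SIZE + offset      # frame base + offset = physical address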
 Virtual Memory Concepts :
Virtual memory is an operating-system concept that virtually increases the
apparent size of main memory and gives the user (programmer) the liberty to
write programs without worrying about the size of physical memory. The user has
the illusion of an extremely large main memory, while in reality only a limited
amount of memory is available. The user actually uses the addresses and space of
virtual memory, which are then translated (mapped) to the corresponding
main-memory locations. The addresses generated and referenced by the user
program are called virtual addresses, and the collection of virtual addresses
forms the virtual address space. Similarly, the addresses of main memory are
called physical addresses, and the collection of physical addresses forms the
physical address space.
The OS implements virtual memory by loading into main memory, from disk storage
(secondary memory), only the portion of a user program that currently needs to
execute, instead of loading the full program. The portion of the program that is
not currently required is temporarily transferred back from main memory to disk
storage to make space for other programs.
For example, the following are situations in which the entire program is not
required to be loaded fully in main memory:
• Parts of the program such as error-handling routines are used only when an
error occurs.
• Some functions and procedures of a program may be used only seldom.
• Many data structures (arrays, structures, tables, etc.) are assigned a fixed
amount of memory in user programs, but in practice only a small part of that
memory is actually used.

Thus virtual memory provides the facility to execute a program by loading only
portions of it into main memory instead of the entire program. This concept
provides many benefits:

• The user (programmer) is no longer bound by the amount of main memory
available and can write programs without worrying about the size of memory.
• Since each user program is loaded into main memory only in portions, it takes
less memory space; more programs can therefore reside in main memory and execute
simultaneously. This leads to better CPU utilization and throughput and good
overall system performance.

Virtual memory is generally implemented by demand paging. It can also be
implemented in a segmentation system; demand segmentation can likewise be used
to provide virtual memory.
 Demand Paging :
The concept of virtual memory is generally implemented by demand paging. A
demand-paging system is much like a paging system, but with the additional
feature of swapping. The user process resides in secondary memory and is
considered as a set of pages. The basic idea of demand paging is that when we
want to execute a process, instead of bringing (swapping in) the entire process
from secondary memory into main memory, we use a lazy swapper called the pager,
which swaps in only those pages that are currently needed. Pages that are not
needed for the current execution are not brought into main memory. This
considerably reduces the swapping time and the amount of physical memory needed
by a process.
This swapping in and out is shown in the figure.
To implement demand paging, hardware support is required to identify which pages
are in memory and which pages are on disk. For this page identification, a
valid-invalid bit is used: each page has a bit associated with it. Pages that
are currently in main memory and are legally valid have their bit set to "valid"
(V). Pages that are currently not in main memory (they are in secondary memory),
or that are not legally valid, have their bit set to "invalid" (I). A page is
also invalid if it is not part of the logical address space. Secondary memory
holds the pages that are not in main memory, and swapping of pages in and out is
done between main and secondary memory.

This is shown in the figure below.
Each entry in the page table consists of a frame number and the valid/invalid
bit; the index of the page table is the page number. When a process executes and
accesses pages that are in main memory, execution proceeds normally. But when
the process tries to access a page that is marked invalid in the page table, the
condition is called a page fault. When a page fault happens, the OS takes the
following steps to recover:
Step 1: Determine whether the page is invalid because it is not part of the
logical address space. This can be determined from the internal table maintained
in the PCB of that process.
Step 2: If the page is indeed not part of the logical address space, the process
is terminated. Otherwise, the page is valid but is still in secondary memory and
has not yet been brought into main memory.
Step 3: Find a free frame from the list of available free frames.
Step 4: Bring the desired page from secondary memory into the free frame.
Step 5: Update the page table to show that the page is now in main memory: the
corresponding frame number is entered and the valid bit is set.
Step 6: Restart the instruction that was interrupted by the page fault, now that
the required page is available in main memory (the six steps are condensed in
the sketch below).
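The following sketch (illustrative Python) condenses the steps above; the free-frame list, the disk dictionary and the set of valid pages are invented here for demonstration and are not part of the original description:

FREE_FRAMES = [3, 7]    # placeholder pool of free frames

def handle_page_fault(page_table, valid_pages, page_number, disk):
    # Steps 1-2: a reference outside the logical address space terminates the process
    if page_number not in valid_pages:
        raise RuntimeError("invalid reference: terminate process")
    # Step 3: take a free frame (a replacement algorithm would run if none is free)
    frame = FREE_FRAMES.pop()
    # Step 4: read the desired page from secondary memory
    contents = disk[page_number]
    # Step 5: update the page table and set the valid bit
    page_table[page_number] = (frame, "valid")
    # Step 6: the interrupted instruction can now be restarted
    return frame, contents

pt = {}
print(handle_page_fault(pt, {0, 1, 2}, 2, {2: "page-2 data"}))   # -> (7, 'page-2 data')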
 Performance of Demand Paging :
The performance of demand paging can be measured in terms of the Effective
Access Time (EAT). In most computers the memory access time, denoted ma, is
between 10 and 200 nanoseconds. If there are no page faults in the system, then
EAT = ma. But if a page fault occurs, the required page first has to be swapped
in from secondary memory to main memory and only then accessed. If p is the
probability of a page fault, then

EAT = (1 - p) × ma + p × (page-fault handling time)

Page-fault handling includes:
i. If no free frame is available, swapping a page out from main memory to
secondary memory so as to free space in main memory.
ii. Swapping the required page in from secondary memory to main memory.
iii. Updating the page table.
iv. Restarting the instruction that was interrupted by the page fault.
Let us take an example to understand EAT. Suppose that in a system
ma = 200 nanoseconds
page-fault handling time = 22,000,000 nanoseconds

EAT = (1 - p) × ma + p × (page-fault handling time)
    = (1 - p) × 200 + p × 22,000,000
    = 200 - 200p + 22,000,000p
    = 200 + p × (22,000,000 - 200)
    = 200 + 21,999,800 × p

Thus EAT grows in direct proportion to the page-fault rate. It is important to
keep the page-fault rate low in a demand-paging system, otherwise the EAT
increases and overall system performance suffers (the sketch below evaluates the
formula for a few values of p).
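The sketch below (illustrative Python) uses the values from the example and shows how quickly EAT grows with the page-fault rate:

ma = 200                  # memory access time in nanoseconds
fault_time = 22_000_000   # page-fault handling time in nanoseconds (from the example)

def eat(p):
    """Effective access time for page-fault probability p."""
    return (1 - p) * ma + p * fault_time

for p in (0.0, 0.001, 0.01):
    print(f"p = {p}: EAT = {eat(p):,.0f} ns")
# Even p = 0.001 gives roughly 22,200 ns, i.e. memory appears about 111 times slower.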
 Page Replacement Algorithm :
A page replacement algorithm decides which memory page is to be replaced; the
process of replacement is sometimes called swapping out. Page replacement is
done when the requested page is not found in main memory (a page fault).
• A page replacement algorithm tries to choose which pages to replace so as to
minimize the total number of page misses. There are many different page
replacement algorithms; they are evaluated by running them on a particular
string of memory references and computing the number of page faults.
• If a process requests a page and that page is found in main memory, it is
called a page hit; otherwise it is a page miss, or page fault.
• Page replacement algorithms are mainly of three types:
1. First In First Out (FIFO)
2. Least Recently Used (LRU)
3. Optimal page replacement
 Page Replacement Algorithm :
1. First In First Out (FIFO) Algorithm
The First-In First-Out (FIFO) page replacement algorithm is one of the easiest
algorithms to implement. As the name suggests, the operating system maintains a
FIFO queue of the pages in memory, arranged in order of arrival. As a new page
is brought in from secondary memory, it is placed at the tail (end) of the
queue. When a page needs to be swapped out, the page at the head (front) of the
queue, i.e. the oldest page, is selected.
While FIFO is cheap and intuitive, it performs poorly in practice. FIFO also
suffers from a strange phenomenon known as Belady's anomaly: increasing the
number of available page frames can increase the page-fault rate. This is
illustrated in the following example using the FIFO page replacement algorithm
(see the figure, and the simulation sketch below).
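A small simulation of FIFO replacement is sketched below (illustrative Python). It is run on the classic reference string commonly used to illustrate Belady's anomaly, where adding a fourth frame increases the number of faults:

from collections import deque

def fifo_faults(reference_string, frames):
    memory, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in memory:                 # page fault
            faults += 1
            if len(memory) == frames:          # evict the oldest resident page
                memory.discard(queue.popleft())
            memory.add(page)
            queue.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]     # classic Belady's-anomaly string
print(fifo_faults(ref, 3))    # -> 9 faults
print(fifo_faults(ref, 4))    # -> 10 faults: more frames, yet more faults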
2. Least Recently Used (LRU) Algorithm :
• The page that has not been used for the longest time in main memory is the one
selected for replacement.
• In other words, the page that was least recently used is replaced.
• Like the optimal page replacement algorithm, LRU does not suffer from Belady's
anomaly.

Example 2: Consider the page reference string of length 12: 1, 2, 3, 4, 5, 1, 3,
1, 6, 3, 2, 3 with 4 frames (i.e. at most 4 pages in memory at a time); this
string is simulated in the sketch below.
Example 3: Consider the page reference string of length 15: 7, 0, 1, 2, 0, 3, 0,
4, 2, 3, 0, 3, 1, 2, 0 with 3 frames (i.e. at most 3 pages in memory at a time).
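A small LRU simulation (illustrative Python), run on the reference string of Example 2 with 4 frames, is sketched below:

def lru_faults(reference_string, frames):
    memory, faults = [], 0             # list kept in least-recently-used order
    for page in reference_string:
        if page in memory:
            memory.remove(page)        # hit: refresh the page's recency
        else:
            faults += 1                # page fault
            if len(memory) == frames:
                memory.pop(0)          # evict the least recently used page
        memory.append(page)            # most recently used page goes to the end
    return faults

ref = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]    # reference string from Example 2
print(lru_faults(ref, 4))    # -> 8 page faults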
3. Optimal Page Algorithm :
• The optimal page-replacement algorithm has the lowest page-fault rate of all
algorithms. Such an algorithm exists and is called OPT or MIN.
• It replaces the page that will not be used for the longest period of time.
Example 1: Consider the page reference string of length 12: 1, 2, 3, 4, 5, 1, 3,
1, 6, 3, 2, 3 with 4 frames (i.e. at most 4 pages in memory at a time); this
string is simulated in the sketch below.
Example 2: Consider the page reference string of length 18: 7, 0, 1, 2, 0, 3, 0,
4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7 with 3 frames (i.e. at most 3 pages in memory at
a time).
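A sketch of OPT (illustrative Python), run on the reference string of Example 1 with 4 frames, is given below. It evicts the resident page whose next use lies farthest in the future (or that is never used again):

def opt_faults(reference_string, frames):
    memory, faults = [], 0
    for i, page in enumerate(reference_string):
        if page in memory:
            continue
        faults += 1                                    # page fault
        if len(memory) < frames:
            memory.append(page)
            continue
        future = reference_string[i + 1:]
        # Evict the resident page whose next use is farthest away (or never comes)
        victim = max(memory,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        memory[memory.index(victim)] = page
    return faults

ref = [1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3]    # reference string from Example 1
print(opt_faults(ref, 4))    # -> 6 page faults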
 Thrashing :
