
OS Unit4

Unit IV of the Operating Systems course covers Memory Management and File System Interface, detailing concepts such as swapping, contiguous and non-contiguous memory allocation, paging, and virtual memory. It also discusses file concepts, access methods, directory structures, and file sharing and protection. The document emphasizes the importance of efficient memory utilization and the mechanisms involved in managing memory and files within an operating system.


OPERATING SYSTEMS

UNIT-IV:
• Memory Management: Memory
Management Swapping, Contiguous Memory
Allocation, Paging, Page-Table Structure,
Segmentation, Virtual Memory, Demand
Paging, Page-Replacement Algorithms, Frames
Allocation, Thrashing.
• File System Interface: File Concepts, Access
Methods and Directory Structure, File System
Mounting, File Sharing and Protection,
Allocation Methods, Free-Space Management,
Efficiency and Performance

OS by Dr. V. Srilakshmi, Associate Professor, CSE
Memory Management
• Background
• Swapping
• Contiguous Memory Allocation
– Fixed Partitioning
– Variable Partitioning
• Non Contiguous Memory Allocation
– Paging
– Segmentation
Background
• Program must be brought (from disk) into memory
and placed within a process for it to be run.
• Main memory and registers are only storage CPU can
access directly
• Memory unit only sees a stream of addresses + read
requests, or address + data and write requests
• Cache sits between main memory and CPU registers
• Protection of memory required to ensure correct
operation
Base and Limit Registers
● A pair of base and limit registers define the logical address space
● CPU must check every memory access generated in user mode to be sure
it is between base and limit for that user
Hardware Address Protection
Logical vs. Physical Address Space
• The concept of a logical address space that is bound to
a separate physical address space is central to proper
memory management
– Logical address – generated by the CPU; also referred to as
virtual address
– Physical address – address seen by the memory unit
• Logical and physical addresses are the same in
compile-time and load-time address-binding schemes;
logical (virtual) and physical addresses differ in
execution-time address-binding scheme
• Logical address space is the set of all logical addresses
generated by a program
• Physical address space is the set of all physical
addresses corresponding to these logical addresses
Logical vs. Physical Address Space
• The run-time mapping from virtual to physical addresses is
done by a hardware device called the memory-management
unit (MMU).
• The base register is now called a relocation register. The value
in the relocation register is added to every address generated
by a user process at the time the address is sent to memory
(see Figure 8.4). For example, if the base is at 14000, then an
attempt by the user to address location 0 is dynamically
relocated to location 14000; an access to location 346 is
mapped to location 14346.
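The relocation-register mapping described above can be sketched as a small simulation. This is an illustrative model, not an actual MMU: the base value 14000 and the sample addresses come from the text, while the limit value is an assumption added to show the hardware protection check.

```python
# Sketch of MMU dynamic relocation: relocation (base) register plus a
# limit register for protection. The limit value 30000 is an assumption.
def translate(logical_addr, relocation=14000, limit=30000):
    """Map a CPU-generated logical address to a physical address."""
    if not (0 <= logical_addr < limit):   # hardware address protection
        raise MemoryError("trap: addressing error")
    return relocation + logical_addr      # added on every memory access

print(translate(0))    # -> 14000
print(translate(346))  # -> 14346
```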
Static and Dynamic Loading
Loading a process into the main memory is done by a
loader. There are two different types of loading :
•Static Loading: Static loading loads the
entire program into memory at a fixed address before
execution begins. It requires more memory space.
•Dynamic Loading: To gain proper memory utilization,
dynamic loading is used. In dynamic loading, a routine
is not loaded until it is called. All routines reside
on disk in a relocatable load format. One of the
advantages of dynamic loading is that the unused
routine is never loaded.
Static and Dynamic Linking
A linker is a program that takes one or more object files
generated by a compiler and combines them into a single
executable file.
Static Linking: In static linking, the linker combines all necessary
program modules into a single executable program. So there is
no runtime dependency. Some operating systems support only
static linking, in which system language libraries are treated like
any other object module.
Dynamic Linking: The basic concept of dynamic linking is similar
to dynamic loading. In dynamic linking, “Stub” is included for
each appropriate library routine reference. A stub is a small
piece of code. When the stub is executed, it checks whether the
needed routine is already in memory or not. If not available
then the program loads the routine into memory. Here linking is
postponed until execution time
Swapping
• A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution
• Total physical memory space of processes can exceed physical
memory, increasing the degree of multiprogramming in a
system.
• Backing store – fast disk large enough to accommodate
copies of all memory images for all users; must provide direct
access to these memory images
• Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped out
so higher-priority process can be loaded and executed
• Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped
• System maintains a ready queue of ready-to-run processes
which have memory images on disk
Schematic view of Swapping
Context Switch Time including Swapping

● If the next process to be put on the CPU is not in memory, need to
swap out a process and swap in the target process
● Context switch time can then be very high
● Can reduce if reduce size of memory swapped – by knowing
how much memory really being used
● System calls to inform OS of memory use via
request_memory() and release_memory()
Contiguous Allocation
• In the Contiguous Memory Allocation, each process
is contained in a single contiguous section of
memory. In this memory allocation, all the available
memory space remains together in one place.
Fixed Partition Scheme:-
• In this type of contiguous memory allocation
technique, the number of partitions is fixed.
• The size of each partition may or may not be the same.
• Each time a process comes in, it is allotted one
of the free blocks.
• This technique is also called static partitioning.
Fixed Size Partitioning (MFT - Multiprogramming with a Fixed number of Tasks)
Fixed Size Partitioning
The first process, which is of size 3MB, is allotted a
5MB block; the second process, which is of
size 1MB, is also allotted a 5MB block; and
the 4MB process is likewise allotted a 5MB block.
Advantages:-
1) Because all of the blocks are the same size, this
scheme is simple to implement.
2) It is easy to keep track of how many blocks of
memory are left.
3) As at a time multiple processes can be kept in the
memory, this scheme can be implemented in a
system that needs multiprogramming.
Fixed Size Partitioning
Disadvantages:-
1. As the size of the blocks is fixed, we will not be able to
allot space to a process that has a greater size than the
block.
2. Limitation on the degree of multiprogramming - cannot
accommodate more processes because the
number of partitions is fixed in advance.
3. If the size of the block is greater than the size of the
process, we have no other choice but to assign the
process to this block, but this will lead to much empty
space left behind in the block. This empty space
could've been used to accommodate a different
process. This is called internal fragmentation.
Internal fragmentation
For Example: Let's consider a fixed partitioning scheme where the
memory blocks are of fixed size (say 4MB each). Now, suppose a
process of size 3 MB comes and occupies a block of memory. So, the
1MB space in this block is free and can’t be used to allocate it to
other processes. This is called internal fragmentation. The diagram
below shows memory division, where each memory block is size
4MB.
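The 1MB of internal fragmentation in this example can be computed directly. A minimal sketch (the helper name is ours, not from the slides):

```python
def internal_fragmentation(block_size_mb, process_size_mb):
    """Space wasted inside one fixed partition: block size minus process size."""
    if process_size_mb > block_size_mb:
        raise ValueError("process does not fit in a fixed-size block")
    return block_size_mb - process_size_mb

# The example above: a 3MB process in a 4MB block wastes 1MB.
print(internal_fragmentation(4, 3))  # -> 1
```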
Variable Size Partitioning(MVT-Multiprogramming
with Variable number of Tasks)
• This scheme is also known as Dynamic Partitioning and
came into existence to overcome the drawback, i.e.,
internal fragmentation, caused by static
partitioning.
• In this partitioning scheme, allocation is done
dynamically.
• The size of the partition is not declared initially. Whenever
any process arrives, a partition of size equal to the size of
the process is created and then allocated to the process.
Thus the size of each partition is equal to the size of the
process.
• As partition size varies according to the need of the
process so in this partition scheme there is no internal
fragmentation.
Variable Size Partitioning
Variable Size Partitioning
Advantages:-
1. No internal fragmentation:
2. Degree of multiprogramming is dynamic: More
processes can be loaded into the memory at
the same time.
3. No limitation on the size of the process: The
size of the process cannot be restricted because
the partition size is decided according to the
process size.
Disadvantages:-
• External fragmentation
Variable Size Partitioning
External fragmentation: It arises when we are unable to allocate space to a
process even though the total free space in memory is sufficient, because that
space is not contiguous. The holes left behind by departing processes are
scattered through memory, and these non-contiguous spaces are not a
good fit for a new incoming process.
Variable Size Partitioning
The first-fit, best-fit, and worst-fit strategies are the
ones most commonly used to select a free hole from
the set of available holes.
• First fit. Allocate the first hole that is big enough.
• Best fit. Allocate the smallest hole that is big enough.
• Worst fit. Allocate the largest hole.
One solution to external fragmentation is
compaction.
For example, a process P1 of size 100 KB makes a
request to a memory whose size is 1000 KB, but the
free holes in memory are 20 KB, 50 KB, and 80 KB.
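The three hole-selection strategies, and the failure case from the example (a 100 KB request against 20, 50, and 80 KB holes), can be sketched as follows. Returning None signals that no single hole fits, even though 150 KB is free in total — exactly the external-fragmentation situation:

```python
def first_fit(holes, request):
    """Allocate the first hole that is big enough."""
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return None  # no single hole fits, even if total free space suffices

def best_fit(holes, request):
    """Allocate the smallest hole that is big enough."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Allocate the largest hole."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(fits)[1] if fits else None

holes = [20, 50, 80]          # free holes in KB, from the example
print(first_fit(holes, 100))  # -> None: external fragmentation
print(best_fit(holes, 40))    # -> 1 (the 50 KB hole)
print(worst_fit(holes, 40))   # -> 2 (the 80 KB hole)
```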
Variable Size Partitioning

In our above example, this will bring 150 Kb on one end


which will satisfy the memory size of the process P1 which
is of 100 Kb.
Variable Size Partitioning
Although compaction solves the problem
of external fragmentation, it has some
disadvantages.
• Compaction is only possible when the
program supports dynamic relocation.
• The compaction process is time-consuming
and very expensive.
So the other way to solve external
fragmentation is non-contiguous memory
allocation.
Non contiguous Allocation
Paging
• The basic method for implementing paging
involves:
– breaking physical memory into fixed-sized
blocks called frames
– breaking logical memory into blocks of the
same size called pages
• The backing store is divided into fixed-sized
blocks that are the same size as the memory
frames
• When a process is to be executed, its pages are
loaded into any available memory frames.
Hardware support for Paging
Address Translation Scheme for Paging
• Address generated by CPU is divided into:
– Page number (p) – used as an index into a page
table which contains base address of each page in
physical memory
– Page offset (d) – combined with base address to
define the physical memory address that is sent to
the memory unit

– For a given logical address space of size 2^m and a page size of 2^n bytes, the page number occupies the high-order m - n bits and the page offset the low-order n bits


Paging model of Logical and Physical
Memory
Paging Example
Paging Example
Here, in the logical address, n = 2 and m = 4. Using a
page size of 4 bytes and a physical memory of 32
bytes (8 frames):
Logical address 0 maps to physical address 20 [= (5 ×
4) + 0]. Logical address 3 (page 0, offset 3) maps to
physical address 23 [= (5 × 4) + 3]. Logical address 4 is
page 1, offset 0; according to the page table, page 1
is mapped to frame 6. Thus, logical address 4 maps to
physical address 24 [= (6 × 4) + 0]. Logical address 13
maps to physical address 9 [= (2 × 4) + 1].
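The example's arithmetic can be checked with a short sketch. The page table below (pages 0-3 in frames 5, 6, 1, 2) is inferred from the addresses given in the text:

```python
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]   # page number -> frame number, as in the example

def logical_to_physical(addr):
    """Split the address into page number and offset, then index the table."""
    p, d = divmod(addr, PAGE_SIZE)          # page number, page offset
    return page_table[p] * PAGE_SIZE + d    # frame base + offset

print(logical_to_physical(0))   # -> 20
print(logical_to_physical(3))   # -> 23
print(logical_to_physical(4))   # -> 24
print(logical_to_physical(13))  # -> 9
```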
Free Frames

Before allocation After allocation


Implementation of Page Table
Hardware implementation of Page Table can be done in several
ways:-
1)Page table is implemented using dedicated Registers:-
These registers should be built with very high-speed logic to make
the paging-address translation efficient.
The use of registers for the page table is satisfactory if the page table
is reasonably small.
Most contemporary computers, however, allow the page table to be
very large. For these machines, the use of fast registers to
implement the page table is not feasible.
2) The Page table is kept in main memory:-Page-table base register
(PTBR) points to the page table. Changing page tables requires
changing only this one register, substantially reducing context-switch
time. With this scheme, two memory accesses are needed to access
a byte (one for the page-table entry, one for the byte).
Implementation of Page Table
3) Use Translation look-aside buffer:-
• The standard solution to this problem is to use a special, small,
fast look up hardware cache called a translation look-aside buffer
(TLB). The TLB is associative, high-speed memory. Each entry in
the TLB consists of two parts: a key (or tag) and a value.
• When the associative memory is presented with an item, the item
is compared with all keys simultaneously. If the item is found, the
corresponding value field is returned.
• The TLB is like a cache memory in that it can hold only a few page-table
entries. To reduce the memory access time, this hardware cache,
known as the TLB, is used.
• Two terms are associated with the TLB:
TLB Hit: the entry is found in the TLB.
TLB Miss: the entry is not found in the TLB.
Associative Memory
• Associative memory – parallel search; each entry holds a page # and frame # pair
• Address translation for (p, d):
– If p is in an associative register, get the frame # out
– Otherwise get the frame # from the page table in memory
Paging Hardware With TLB
Paging Hardware with TLB
The TLB is used with page tables in the following way:-
The TLB contains only a few of the page-table entries. When a logical
address is generated by the CPU, its page number is presented to
the TLB. If the page number is found, its frame number is
immediately available and is used to access memory.
If the page number is not in the TLB (known as a TLB miss), a
memory reference to the page table must be made. Depending on
the CPU, this may be done automatically in hardware or via an
interrupt to the operating system. When the frame number is
obtained, we can use it to access memory.
If the TLB is already full of entries, an existing entry must be selected
for replacement.
The percentage of times that the page number of interest is found in
the TLB is called the hit ratio.
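The hit ratio lets us estimate an effective memory-access time. The formula is the standard weighted average (one memory access on a hit, two on a miss); the 20 ns TLB-search and 100 ns memory-access times are illustrative assumptions, not values from the slides:

```python
def effective_access_time(hit_ratio, mem_ns=100, tlb_ns=20):
    """Weighted average: a TLB hit costs one memory access, a miss costs two."""
    hit_cost = tlb_ns + mem_ns        # frame number found in the TLB
    miss_cost = tlb_ns + 2 * mem_ns   # extra access to the in-memory page table
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(effective_access_time(0.80))  # about 140 ns
print(effective_access_time(0.98))  # about 122 ns
```

A higher hit ratio brings the effective time closer to the cost of a single memory access.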
Memory Protection
• To achieve memory protection in paging, there are certain protection bits
which are associated with each frame.

• These bits are maintained in the page table, so while finding the frame
number in the page table, the protection bits are also checked.

• One such protection bit, associated with each page, is
called the valid-invalid bit.
The valid-invalid bit is attached to each entry in the page table:
– “valid” indicates that the associated page is in the process's logical
address space.
– “invalid” indicates that the page is not in the process's logical address
space.
Valid (v) or Invalid (i) Bit In A Page Table
Shared Pages
• Shared code
– One copy of read-only (reentrant) code shared among processes
(i.e., text editors, compilers, window systems)
– Similar to multiple threads sharing the same process space
– Also useful for inter process communication if sharing of
read-write pages is allowed

• Private code and data


– Each process keeps a separate copy of the code and data
– The pages for the private code and data can appear anywhere in
the logical address space
Shared Pages Example
Structure of the Page Table
The Structure of the page table is of different types:
1. Hierarchical Page table
2. Hashed Page Table
3. Inverted Page Tables
Hierarchical Page Tables
• One solution to the large single-level page table is the multilevel
page table, or hierarchical page table.

• The most commonly used multilevel page table type is the
two-level page table, in which the first-level page table (primary page
table) points to the second-level page table (also known as the
secondary page table), which contains the frame number.

• Break up the logical address space into multiple page tables.

Two-Level Page-Table Scheme
Two-Level Paging Example
• A logical address generated by CPU is divided into:
– a page number
– a page offset

• Since the page table is paged, the page number is further divided into:
– a page number (p1)
– a page number (p2)
– a 12-bit page offset (d)

• Thus, a logical address is as follows:

page number page offset


p1 p2 d
• where p1 is an index into the outer page table, and p2 is the displacement
within the page of the inner page table. This scheme is also known as a
forward-mapped page table.
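The two-level split can be sketched with bit operations. The 12-bit offset is from the slide; splitting the remaining 20 bits of an assumed 32-bit address into two 10-bit fields is an illustrative choice:

```python
# Assumed split of a 32-bit logical address: 10-bit p1, 10-bit p2, 12-bit d.
P1_BITS, P2_BITS, D_BITS = 10, 10, 12

def split_two_level(addr):
    """Extract (p1, p2, d): outer-table index, inner-table index, page offset."""
    d = addr & ((1 << D_BITS) - 1)                  # low 12 bits
    p2 = (addr >> D_BITS) & ((1 << P2_BITS) - 1)    # next 10 bits
    p1 = addr >> (D_BITS + P2_BITS)                 # high 10 bits
    return p1, p2, d

print(split_two_level(0x00403007))  # -> (1, 3, 7)
```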
Address-Translation Scheme
Three-level Paging Scheme



Hashed Page Tables
• A hashed page table is used in systems whose logical address space is
larger than 32 bits.
• In this scheme, chains of linked lists are maintained.

• The logical address generated by the CPU is divided into two parts:
1. Page number
2. Offset/displacement
The page number is hashed into the hash table, and
each entry in the hash table maintains a linked list of
the elements that hash to the same location.
Hashed Page Tables(Cont.)
• In this linked list, each element contains (1) a page number, (2) the
frame number corresponding to that page number, and (3) a pointer to the
next node in the linked list.

• Whenever an address is generated by the CPU, its page number is
hashed into the hash table, and the page number is compared with the
elements of the linked list at that entry.
– If a match is found, the corresponding frame number is
used to form the exact physical address.
– If no match is found in that node, the pointer is followed to
check the next node in the list, continuing
until the required page number is found.
Hashed Page Table
Inverted Page Table
• It is used when multiple programs are working at the same time in the
computer's memory. In a multiprogramming environment, the page
number and the offset are not enough; the process
identification number (PID) is also required.

• In this scheme, the logical address generated by the CPU is divided into three parts:
1. Process Identification Number (PID)
2. Page Number
3. Offset
• Using the PID and the page number, the table is searched; if a matching
entry is found at the ith location, then ‘i’ is the frame number. The frame
number ‘i’ and the offset are combined to get the exact physical address.
• In this way the logical address is mapped onto the physical memory.
Inverted Page Table Architecture
Segmentation
• Memory-management scheme that supports user view of memory.
• The program is divided not into equal-sized fragments but into pieces
whose sizes depend on the module sizes.
• To overcome this drawback of paging, we go for segmentation.
• A program is a collection of segments
– A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays
User’s View of a Program
Logical View of Segmentation
(figure: segments 1-4 in the user space mapped to non-contiguous regions of the physical memory space)


Segmentation Architecture
• Logical address consists of a two-tuple:
<segment-number (s), offset (d)>,

• Segment table – maps two-dimensional user-defined addresses into
one-dimensional physical addresses; each
table entry has:
– base – contains the starting physical address where the segment
resides in memory
– limit – specifies the length/size of the segment

• Segment-table base register (STBR) points to the segment table’s
location in memory

• Segment-table length register (STLR) indicates the number of segments
used by a program
Example of Segmentation
Segmentation Architecture (Cont.)

• The segmentation hardware is shown in the following diagram:


• The CPU generates a logical address, which is divided
into two parts: the segment number and the offset.
• Using the segment number, the hardware indexes into the segment
table and reads that entry's limit and base values.
• The displacement value ‘d’ must be less than the limit. If it is, the
base address is added to the offset to give the
respective physical address, which is mapped onto the
physical memory; otherwise, a trap (addressing error) occurs.
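The limit check and base addition can be sketched directly. The segment-table values below echo the usual textbook example (byte 53 of segment 2 maps to 4300 + 53 = 4353) and are illustrative:

```python
# Each segment-table entry is (base, limit); values are illustrative.
segment_table = [(1400, 1000), (6300, 400), (4300, 400)]

def seg_translate(segment, offset):
    """Trap if the offset is not below the limit; else physical = base + offset."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: addressing error (offset beyond segment limit)")
    return base + offset

print(seg_translate(2, 53))   # -> 4353
print(seg_translate(1, 399))  # -> 6699
```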
Segmentation Hardware
Virtual Memory
• Demand Paging
• Page Replacement Algorithms
1. FIFO – First In First Out
2. Optimal Page Replacement
3. LRU – Least Recently Used
4. LRU Approximation
5. Counting Based Page Replacement
1. LFU - Least Frequently Used
2. MFU - Most Frequently Used

• Allocation of Frames
• Thrashing
1. Causes of Thrashing
2. Working Set Model
3. Page Fault Frequency
Virtual Memory
• Virtual memory is a technique that allows the
execution of processes that are not completely in
memory.
• Advantages are :
• Programs can be larger than physical memory.
• It abstracts main memory into an extremely large,
uniform array of storage, separating logical memory as
viewed by the user from physical memory.
• This technique frees programmers from the concerns
of memory-storage limitations.
• Virtual memory also allows processes to share files
easily and to implement shared memory.
Virtual Memory
• In fact, examination of real programs shows us that, in many
cases, the entire program does not need to be in main
memory. For instance, consider the following:
• Programs often have code to handle unusual error conditions.
Since these errors seldom occur in practice, this code is almost
never executed.
• Arrays, lists, and tables are often allocated more memory than
they actually need. An array may be declared 100 by 100
elements, even though it is seldom larger than 10 by 10 elements.
An assembler symbol table may have room for 3,000 symbols,
although the average program has fewer than 200 symbols.
• Certain options and features of a program may be used rarely.
Virtual Memory -Introduction
• Virtual memory involves the separation of logical
memory as perceived by users from physical memory.
This separation allows an extremely large virtual
memory to be provided for programmers when only a
smaller physical memory is available .
• Virtual memory makes the task of programming much
easier, because the programmer no longer needs to
worry about the amount of physical memory available;
he/she can concentrate instead on the problem to be
programmed.
Diagram showing Virtual Memory larger than Physical
Memory
Demand Paging
• Demand paging : Pages are loaded when they are demanded
during program execution, pages never accessed are never
loaded.
• A demand-paging system is similar to a paging system with
swapping where processes reside in secondary memory (usually a
disk).
• When we want to execute a process, we swap it into memory.
Rather than swapping the entire process into memory, we use a
lazy swapper.
• A lazy swapper never swaps a page into memory unless that page
will be needed. A swapper manipulates entire processes, whereas
a pager is concerned with the individual pages of a process. We
thus use “pager,” rather than “swapper,” in connection with
demand paging.
Demand Paging
Page table when some pages are not in main memory
Page Fault
• What happens if the process tries to access a page that was not
brought into memory?
• Access to a page marked invalid causes a page fault.
• The paging hardware, in translating the address through the page
table, will notice that the invalid bit is set, causing a trap to the
operating system.
• This trap is the result of the operating system’s failure to bring the
desired page into memory.
Procedure for handling Page Fault
The procedure for handling this page fault is:
1. Check the page table to determine whether the reference was
a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was
valid but not in main memory, we now load it in.
3. We find a free frame by taking one from the free-frame list.
4. We schedule a disk operation to read the desired page into the
frame.
5. When the disk read is complete, we modify the internal table kept
with the process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap.
Steps in Handling a Page Fault
Page Replacement
What happens if the main memory is full and there are
no free frames? The solution is page replacement.
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to
select a victim frame.
c. Write the victim frame to the disk; change the page and frame
tables accordingly.
3. Read the desired page into the newly freed frame; change the
page and frame tables.
4. Continue the user process from where the page fault occurred.
Page Replacement



Page Replacement Algorithms
1. FIFO – First In First Out
2. Optimal Page Replacement
3. LRU – Least Recently Used
4. LRU Approximation
5. Counting Based Page Replacement
1. LFU - Least Frequently Used
2. MFU - Most Frequently Used
FIFO Page Replacement algorithm
• The simplest page-replacement algorithm is a first-in,
first-out (FIFO) algorithm.
• A FIFO replacement algorithm associates with each
page the time when that page was brought into
memory.
• When a page must be replaced, the oldest page is
chosen.
• We can even create a FIFO queue to hold all pages in
memory. We replace the page at the head of the
queue.
First-In-First-Out (FIFO) Algorithm
• Reference string:
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
• 3 frames (3 pages in memory at a time per process)

15 page faults
• Belady found that, in the FIFO page-replacement
algorithm, the number of page faults can
increase as the number of frames increases.
• This anomaly is called Belady's Anomaly.
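The 15 faults for this reference string can be reproduced with a short simulation — a sketch of FIFO replacement using a queue of resident pages:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # memory full: evict oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # -> 15
```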
Belady’s Anomaly
Optimal Page Replacement Algorithm
• This algorithm has the lowest page-fault rate
of all algorithms and will never suffer from
Belady’s anomaly. Such an algorithm is called
OPT or MIN.
• Principle is : Replace the page that will not
be used for the longest period of time.
• The optimal page-replacement algorithm is
difficult to implement, because it requires
future knowledge of the reference string.
• As a result, the optimal algorithm is used
mainly for comparison studies.
Optimal Page Replacement Algorithm
• Reference string:
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
• 3 frames (3 pages can be in memory at a time
per process)

9 page faults
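The 9 faults can be checked by looking ahead in the reference string. This is only a sketch: a real system cannot see future references, which is why OPT serves only as a yardstick:

```python
def opt_faults(refs, nframes):
    """Count faults, replacing the page whose next use is farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            # Next use of each resident page; never-used-again counts as infinity.
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))  # -> 9
```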
LRU(Least Recently Used) Page Replacement
algorithm
• LRU replacement associates with each page
the time of that page’s last use.
• When a page must be replaced, LRU chooses
the page that has not been used for the
longest period of time.
• We can think of this strategy as the optimal
page-replacement algorithm looking backward
in time, rather than forward.
LRU Page Replacement Algorithm
• Reference string:
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
• 3 frames (3 pages can be in memory at a time
per process)

12 page faults
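The 12 faults follow from recording each page's time of last use — effectively the counter implementation described on the next slide:

```python
def lru_faults(refs, nframes):
    """Count faults, replacing the page whose last use is longest ago."""
    frames, last_used, faults = set(), {}, 0
    for t, page in enumerate(refs):
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                victim = min(frames, key=lambda p: last_used[p])
                frames.discard(victim)
            frames.add(page)
        last_used[page] = t   # the "counter" copied on every reference
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # -> 12
```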
LRU Page Replacement algorithm
• LRU page-replacement algorithm may require
substantial hardware assistance.
• Two implementations are possible:
a)Counters
b)Stack
• Like optimal replacement, LRU replacement
does not suffer from Belady’s anomaly.
• Both belong to a class of page-replacement
algorithms, called stack algorithms, that can
never exhibit Belady’s anomaly.
LRU Algorithm (Cont.)
• Counter implementation
– Every page entry has a counter; every time page is referenced
through this entry, copy the clock into the counter
– When a page needs to be changed, look at the counters to find
smallest value
• Search through table needed
• Stack implementation
– Keep a stack of page numbers in a double link form:
– Page referenced:
• move it to the top
– But each update more expensive
• LRU and OPT are cases of stack algorithms that don’t have
Belady’s Anomaly
LRU Approximation Algorithms
• LRU needs special hardware and is still slow
• Reference bit
– With each page associate a bit, initially = 0
– When page is referenced bit set to 1
– Replace any with reference bit = 0 (if one exists)
• Second-chance algorithm
– Generally FIFO, plus hardware-provided reference bit
– Clock replacement
– If page to be replaced has
• Reference bit = 0 -> replace it
• reference bit = 1 then:
– set reference bit 0, leave page in memory
– replace next page
Second-Chance (clock) Page-Replacement Algorithm
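The second-chance policy can be sketched as a circular scan over the frames (a simplified model in which the reference bit is set on every access; the sample reference string is illustrative):

```python
def clock_faults(refs, nframes):
    """FIFO with a second chance: skip (and clear) pages whose ref bit is 1."""
    frames = [None] * nframes
    ref_bit = [0] * nframes
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            ref_bit[frames.index(page)] = 1   # hardware sets the bit on use
            continue
        faults += 1
        while ref_bit[hand] == 1:             # give this page a second chance
            ref_bit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand] = page                   # victim found: its ref bit was 0
        ref_bit[hand] = 1
        hand = (hand + 1) % nframes
    return faults

print(clock_faults([1, 2, 3, 1, 4, 5], 3))  # -> 5
```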
Counting Algorithms
• Keep a counter of the number of references that
have been made to each page
– Not common

• LFU Algorithm: replaces page with smallest count

• MFU Algorithm: based on the argument that the


page with the smallest count was probably just
brought in and has yet to be used
Allocation of Frames

• Each process needs a minimum number of
frames
• Two major allocation schemes
– fixed allocation
– priority allocation
Fixed Allocation
• Equal allocation – For example, if there are 100 frames
(after allocating frames for the OS) and 5 processes, give
each process 20 frames
– Keep some as free frame buffer pool

• Proportional allocation – Allocate according to the size
of the process
– Dynamic, as the degree of multiprogramming and process
sizes change
Priority Allocation
• Use a proportional allocation scheme using
priorities rather than size

• If process Pi generates a page fault,
– select for replacement one of its frames
– select for replacement a frame from a process
with lower priority number
Global vs. Local Allocation
• Global replacement – process selects a
replacement frame from the set of all frames; one
process can take a frame from another
– But then process execution time can vary greatly
– But greater throughput so more common

• Local replacement – each process selects from
only its own set of allocated frames
– More consistent per-process performance
– But possibly underutilized memory
Thrashing
• If a process does not have “enough” pages, the page-fault rate
is very high
– Page fault to get page
– Replace existing frame
– But quickly need replaced frame back
– This leads to:
• Low CPU utilization
• Operating system thinking that it needs to increase the degree of
multiprogramming
• Another process added to the system

• Thrashing ≡ a process is busy swapping pages in and out


Thrashing (Cont.)
• As the degree of multiprogramming increases, CPU utilization
also increases, although more slowly, until a maximum is
reached.
• If the degree of multiprogramming is increased even further,
thrashing sets in, and CPU utilization drops sharply.
• At this point, to increase CPU utilization and stop thrashing,
we must decrease the degree of multiprogramming.
• To prevent thrashing, we must provide a process with as many
frames as it needs. But how do we know how many frames it
“needs”?.
• There are several techniques. The Working-Set Strategy starts
by looking at how many frames a process is actually using.
Working-Set Model
• Δ ≡ working-set window ≡ a fixed number of page
references
Example: 10,000 instructions
• WSSi (working Set Size of Process Pi) =
total number of pages referenced in the most recent Δ
(varies in time)
– if Δ too small will not encompass entire locality
– if Δ too large will encompass several localities
– if Δ = ∞ ⇒ will encompass entire program
• D = Σ WSSi ≡ total demand frames
– Approximation of locality
• if D > m ⇒ Thrashing
• Policy if D > m, then suspend or swap out one of the
processes
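The D = Σ WSSi test above can be sketched as follows. The per-process reference strings, the window Δ = 5, and m = 6 frames are illustrative assumptions:

```python
def working_set_size(refs, t, delta):
    """Number of distinct pages in the most recent delta references ending at t."""
    window = refs[max(0, t - delta + 1): t + 1]
    return len(set(window))

# Illustrative reference strings for two processes.
processes = {
    "P1": [1, 2, 1, 3, 2, 1, 4, 1],
    "P2": [6, 7, 7, 6, 8, 9, 6, 7],
}
delta, m = 5, 6   # working-set window and total available frames (assumed)
D = sum(working_set_size(r, len(r) - 1, delta) for r in processes.values())
print(D, "thrashing" if D > m else "ok")  # -> 8 thrashing
```

Since D = 8 exceeds m = 6 here, the policy would suspend or swap out one of the processes.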
Working-set model
Working-Set Model
• This working-set strategy prevents thrashing
while keeping the degree of
multiprogramming as high as possible. Thus, it
optimizes CPU utilization.
• The difficulty with the working-set model is
keeping track of the working set.
• The working-set window is a moving window.
At each memory reference, a new reference
appears at one end, and the oldest reference
drops off the other end.
Page-Fault Frequency
• More direct approach than WSS
• The specific problem is how to prevent thrashing.
Thrashing has a high page-fault rate. Thus, we want to
control the page-fault rate.
• When it is too high, we know that the process needs more
frames. Conversely, if the page-fault rate is too low, then
the process may have too many frames. We can establish
upper and lower bounds on the desired page-fault rate.
• If the actual page-fault rate exceeds the upper limit, we
allocate the process another frame. If the page-fault rate
falls below the lower limit, we remove a frame from the
process. Thus, we can directly measure and control the
page-fault rate to prevent thrashing.
Page-Fault Frequency
