
Cs3451-Unit 3 Os Notes

The document provides an in-depth overview of memory management techniques, including swapping, contiguous memory allocation, paging, and segmentation. It discusses concepts such as address binding, dynamic loading, and memory protection, along with various memory allocation methods and their advantages and disadvantages. Additionally, it covers structures of page tables, including hierarchical paging, hashed page tables, and inverted page tables, emphasizing their role in efficient memory management.


SURYA GROUP OF INSTITUTIONS

SCHOOL OF ENGINEERING & TECHNOLOGY


Vikravandi – Villupuram

UNIT III MEMORY MANAGEMENT

Main Memory - Swapping - Contiguous Memory Allocation – Paging - Structure of the Page
Table – Segmentation, Segmentation with Paging; Virtual Memory - Demand Paging – Copy
on Write - Page Replacement - Allocation of Frames – Thrashing.

Memory Management: Background

Address Binding

• Address binding: the mapping of instructions and data from one address to another address in memory. Binding can be done at three stages:

1. Compile time: absolute code must be generated if the memory location is known in advance.

2. Load time: relocatable code must be generated if the memory location is not known at compile time.

3. Execution time: binding is delayed until run time; this needs hardware support for address maps (e.g., base and limit registers).

Multistep Processing of a User Program

Fig.3.1 Processing of a User Program

Logical vs. Physical Address Space


 Logical address – generated by the CPU; also referred to as "virtual address".
 Physical address – the address seen by the memory unit.
 Logical and physical addresses are the same at compile time and load time.

 Logical (virtual) and physical addresses differ at execution time.

Memory-Management Unit (MMU)

 It is a hardware device that maps logical addresses to physical addresses.

 In this scheme, the relocation register's value is added to every logical address generated by a user process.

 The user program deals with logical addresses; it never sees the real physical addresses.

Dynamic relocation using relocation register

Fig.3.2 Dynamic relocation using relocation register

Dynamic Loading

 The routine is not loaded until it is called.

 Better memory-space utilization; unused routine is never loaded

 No special support from the operating system

Dynamic Linking

 Linking postponed until execution time & is particularly useful for libraries

 Small piece of code called stub, used to locate the appropriate memory-resident
library routine or function.
Overlays:

 Enable a process to be larger than the amount of memory allocated to it.

 At a given time, the needed instructions & data are to be kept within a memory.

Swapping

 Swapping is a memory management scheme in which any process can be


temporarily swapped from main memory to secondary memory so that the main
memory can be made available for other processes. It is used to improve main
memory utilization. In secondary memory, the place where the swapped-out
process is stored is called swap space.

 The purpose of the swapping in operating system is to access the data present
in the hard disk and bring it to RAM so that the application programs can use
it. The thing to remember is that swapping is used only when data is not
present in RAM.

 Although swapping affects the performance of the system, it helps to run larger processes, and more processes, than would otherwise fit in main memory. This is why swapping is also (loosely) referred to as a technique for memory compaction.

 The concept of swapping has divided into two more concepts: Swap-in and
Swap-out.

 Swap-out is the method of removing a process from RAM and writing it to the hard disk.
 Swap-in is the method of bringing a swapped-out process back from the hard disk into the
main memory (RAM).
Fig.3.3. Swapping

Example: Suppose the user process's size is 2048 KB and swapping uses a standard hard disk with a data transfer rate of 1 MB/s. Now we will calculate how long it takes to transfer the process from main memory to secondary memory.

User process size is 2048 KB

Data transfer rate is 1 MB/s = 1024 KB/s
Time = process size / transfer rate
= 2048 / 1024
= 2 seconds
= 2000 milliseconds
Taking both swap-out and swap-in into account, the process takes 4000 milliseconds.
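The calculation above can be sketched in a few lines (a minimal sketch; the 2048 KB process size and 1024 KB/s transfer rate are the example's assumed figures):

```python
def swap_time_ms(process_kb: float, rate_kb_per_s: float) -> float:
    """Time to move a process one way between RAM and swap space, in ms."""
    return process_kb / rate_kb_per_s * 1000

one_way = swap_time_ms(2048, 1024)   # 2000.0 ms per direction
round_trip = 2 * one_way             # swap-out + swap-in = 4000.0 ms
print(one_way, round_trip)
```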
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a single main memory.

2. It helps to create and use virtual memory.


3. Swapping allows the CPU to perform multiple tasks simultaneously. Therefore, processes
do not have to wait very long before they are executed.
4. It improves the main memory utilization.

Disadvantages of Swapping
1. If the computer system loses power during heavy swapping activity, the user may lose all information related to the program.
2. If the swapping algorithm is not good, the number of page faults can increase and overall processing performance can decrease.
Contiguous Memory Allocation

Most systems allow a program to allocate more memory to its address space during execution; data allocated in the heap segment of a program is an example of such dynamically allocated memory. In contiguous memory allocation, each process must still be held in a single contiguous section of memory.

Memory Protection:

1. Protecting the OS from user process.

2. Protecting user processes from one another.

o Protection is done by the relocation-register & limit-register scheme.

o The relocation register contains the value of the smallest physical address, i.e., the base value.

o The limit register contains the range of logical addresses – each logical address must be less than the limit register.

H/W address protection with base and limit registers

Fig.3.4 H/W address protection


Memory Allocation

Each process is contained in a single contiguous section of memory. There are two
methods namely:

 Fixed – Partition Method

 Variable – Partition Method

Fixed–Partition Method:

 Divide memory into fixed size partitions, where each partition has exactly one
process.

 The drawback is that memory space unused within a partition is wasted (e.g., when the process size is smaller than the partition size).

Variable-partition Method:

 Divide memory into variable size partitions, depending upon the size of the
incoming process.

 When a process terminates, the partition becomes available for another


process.

Dynamic Storage-Allocation Problem:

How to satisfy a request of size n from a list of free holes?

Solution:

 First-fit: Allocate the first hole that is big enough.

 Best-fit: Allocate the smallest hole that is big enough; must search the entire
list, unless it is ordered by size. Produces the smallest leftover hole.

 Worst-fit: Allocate the largest hole; must also search entire list.

Problems with the above solutions:

 Internal Fragmentation – allocated memory may be slightly larger than the requested memory.

 External Fragmentation – This takes place when enough total memory


space exists to satisfy a request, but it is not contiguous i.e, storage is
fragmented into a large number of small holes scattered throughout the
main memory.

Solutions for external fragmentation:

1. Coalescing: Merge the adjacent holes together.

2. Compaction: Move all processes towards one end of memory, hole towards
other end of memory, producing one large hole of available memory.
Paging

 It is a memory-management scheme that permits the physical address space of a process to be non-contiguous.

 It avoids the considerable problem of fitting varying-sized memory chunks onto the backing store.

(i) Basic Method:

 Divide logical memory into blocks of same size called “pages”.

 Divide physical memory into fixed-sized blocks called “frames”

Address Translation Scheme – the logical address is divided into:

Page number (p) – used as an index into a page table, which contains the base address of each page in physical memory.

Page offset (d) – combined with the base address to define the physical address, i.e., Physical address = (frame base address) + offset.

Paging Hardware

Fig.3.5 Paging model of logical and physical memory


Fig.3.6 Paging example for a 32-byte memory with 4-byte pages

Page size = 4 bytes

Physical memory size = 32 bytes (4 × 8 = 32, so 8 frames)

Logical address 0 maps to physical address 20, i.e., (5 × 4) + 0

where frame number = 5, page size = 4, offset = 0

Fig.3.7 Paging Example
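The translation above can be sketched as follows. Only the page 0 → frame 5 mapping is stated in the text; the remaining page-table entries are assumed here for illustration:

```python
PAGE_SIZE = 4  # bytes, as in the 32-byte-memory example above

# Hypothetical page table; only 0 -> 5 is given in the text.
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical: int) -> int:
    page, offset = divmod(logical, PAGE_SIZE)  # split into p and d
    frame = page_table[page]                   # page-table lookup
    return frame * PAGE_SIZE + offset          # physical = frame base + offset

print(translate(0))  # logical 0 -> frame 5 -> physical 20
```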


Translation look-aside buffers

TLB (Translation Look-aside Buffer)

 It is a fast lookup hardware cache.

 It contains recently or frequently used page-table entries.

 Each entry has two parts: a key (tag) and a value.

Paging Hardware with TLB

Fig.3.8 Paging H/W withTLB

When a logical address is generated by the CPU, its page number is presented to the TLB.

 TLB hit: If the page number is found, its frame number is immediately available & is used to access memory.

 TLB miss: If the page number is not in the TLB, a memory reference to the page table must be made.

 Hit ratio: The percentage of times that the page number is found in the TLB.
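The benefit of the TLB is usually quantified with an effective-access-time calculation. The figures below (20 ns TLB lookup, 100 ns memory access, 80% hit ratio) are assumed illustration values, not from the text:

```python
def eat_ns(hit_ratio: float, tlb_ns: float, mem_ns: float) -> float:
    hit = tlb_ns + mem_ns       # TLB hit: one memory access for the data
    miss = tlb_ns + 2 * mem_ns  # TLB miss: page-table access, then data access
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat_ns(0.80, 20, 100))  # 0.8*120 + 0.2*220 = 140.0 ns
```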

(ii) Memory Protection

 Memory protection implemented by associating protection bit with each frame

 Valid-invalid bit attached to each entry in the page table:

 valid (v) – indicates that the associated page is in the process's logical address space, and is thus a legal page
 invalid (i) – indicates that the page is not in the process's logical address space

Fig.3.9 Paging valid - Invalid Bit

Segmentation

 Memory-management scheme that supports the user view of memory.
 A program is a collection of segments.
 A segment is a logical unit such as: main program, procedure, function, method, object, local variables, global variables, common block, stack, symbol table, arrays.
User's View of a Program
Fig.3.10 Logical Address Space for segmentation
Segmentation Hardware
 Logical address consists of a two-tuple: <segment-number, offset>
 Segment table – maps the two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
Base – contains the starting physical address where the segment resides in memory
Limit – specifies the length of the segment
 Segment-table base register (STBR) points to the segment table's location in memory
 Segment-table length register (STLR) indicates the number of segments used by a program
Sharing
 shared segments
 same segment number
Allocation
 first fit / best fit
 external fragmentation
Protection: with each entry in the segment table associate:
 a validation bit (validation bit = 0 means an illegal segment)
 protection bits; code sharing occurs at the segment level
 Since segments vary in length, memory allocation is a dynamic storage-allocation problem
Address Translation scheme
Fig.3.11 Segmentation Hardware

EXAMPLE:

Fig.3.12 Segmentation Example


Sharing of Segments

Fig.3.13 Sharing of Segments


Advantage of segmentation involves the sharing of code or data.
 Each process has a segment table associated with it, which the dispatcher uses to define
the hardware segment table when this process is given the CPU.
 Segments are shared when entries in the segment tables of two different processes point
to the same physical location.

Segmentation and Paging

In terms of the memory required by the address-translation structures, paging needs a page table with one entry per page of the virtual address space, while segmentation needs only one segment-table entry per segment, so its tables are typically much smaller (at the cost of variable-sized allocation).
 The Intel 80386 uses segmentation with paging for memory management.
 The maximum number of segments per process is 16 K, and each segment can be as large as 4 gigabytes.
 The local address space of a process is divided into two partitions.
 The first partition consists of up to 8 K segments that are private to that process.
 The second partition consists of up to 8 K segments that are shared among all the processes.
 Information about the first partition is kept in the local descriptor table (LDT); information about the second partition is kept in the global descriptor table (GDT).
 Each entry in the LDT and GDT consists of 8 bytes, with detailed information about a particular segment, including the base location and length of the segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit number:

s (13 bits) | g (1 bit) | p (2 bits)

 where s designates the segment number, g indicates whether the segment is in the GDT or LDT, and p deals with protection.
 The offset is a 32-bit number specifying the location of the byte.
 The base and limit information about the segment are used to generate a linear-address.
 First, the limit is used to check for address validity.
 If the address is not valid, a memory fault is generated, resulting in a trap to the operating
system.
 If it is valid, then the value of the offset is added to the value of the base, resulting in a 32-bit
linear address. This address is then translated into a physical address.
 The linear address is divided into a page number consisting of 20 bits, and a page offset
consisting of 12 bits.
 Since we page the page table, the page number is further divided into a 10-bit page directory
pointer and a 10-bit page table pointer.
 The linear address thus has the following form:

p1 (10 bits) | p2 (10 bits) | d (12 bits)
Fig.3.14 Segmentation with Paging
 To improve the efficiency of physical memory use.
 Intel 386 page tables can be swapped to disk.
 In this case, an invalid bit is used in the page directory entry to indicate whether the table
to which the entry is pointing is in memory or on disk.
 If the table is on disk, the operating system can use the other 31 bits to specify the disk
location of the table; the table then can be brought into memory on demand.
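The 10/10/12-bit split of the linear address can be sketched with a few shifts and masks (the sample address is an arbitrary assumed value):

```python
def split_linear(addr: int):
    """Split a 32-bit linear address into 80386 paging components."""
    p1 = addr >> 22             # top 10 bits: page-directory index
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: page-table index
    d = addr & 0xFFF            # low 12 bits: page offset
    return p1, p2, d

print(split_linear(0x00403004))  # (1, 3, 4)
```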


Structures of the Page Table

1. Hierarchical Paging

 Break up the Page table into smaller pieces. Because if the page table is too
large then it is quite difficult to search the page number.

Example: Two-Level Paging

p1 (10 bits) | p2 (10 bits) | d (12 bits)

Fig. 3.15 Hierarchical Paging

Address-Translation Scheme for hierarchical paging

Address-translation scheme for a two-level 32-bit paging architecture

Fig.3.16 Address Translation Scheme

 It requires more memory accesses when the number of levels is increased.

2. Hashed Page Tables

 Each entry in hash table contains a linked list of elements that hash to the
same location.
 Each entry consists of;

(a) Virtual page numbers

(b) Value of mapped page frame.

(c) Pointer to the next element in the linked list.

Working Procedure:

 The virtual page number in the virtual address is hashed into the hash
table.

 Virtual page number is compared to field(a) in the 1st element in the linked list.

 If there is a match, the corresponding page frame (field (b)) is used to form the
desired physical address.

 If there is no match, subsequent entries in the linked list are searched for
a matching virtual page number.

Fig.3.17 Hashed Paging
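The working procedure above can be sketched as a minimal hashed page table with chained buckets (the bucket count and the mappings are assumed illustration values):

```python
NUM_BUCKETS = 16  # assumed hash-table size

# Each bucket holds a chain of (virtual page number, frame number) pairs,
# i.e. fields (a) and (b) above; the chain itself plays the role of (c).
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn: int, frame: int) -> None:
    buckets[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn: int):
    for entry_vpn, frame in buckets[hash(vpn) % NUM_BUCKETS]:
        if entry_vpn == vpn:   # match on field (a)
            return frame       # use field (b) to form the physical address
    return None                # no match anywhere in the chain

insert(0x12345, 7)
print(lookup(0x12345), lookup(0x99999))  # 7 None
```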

3. Inverted Page Table

It has one entry for each real page (frame) of memory & each entry consists of the
virtual address of the page stored in that real memory location, with information about
the process that owns that page. So, only one page table is in the system.
Fig.3.18 Inverted Paging

When a memory reference occurs, part of the virtual address, consisting of

<Process-id, Page-no> is presented to the memory sub-system.

 Then the inverted page table is searched for match:

o If a match is found, then the physical address is generated.

o If no match is found, then an illegal address access has been


attempted.

 Merit: Reduces the amount of memory needed to store page tables.

 Demerit: Increases the amount of time needed to search the table when a page reference occurs.
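A minimal sketch of the inverted-table search described above (the process IDs and page numbers are assumed values); note the linear search over all frames, which is exactly the demerit mentioned:

```python
# One entry per physical frame: (process id, virtual page number).
# Frame numbers are the list indices; the contents here are assumed.
inverted = [("P1", 0), ("P2", 3), ("P1", 2), ("P2", 1)]

def find_frame(pid: str, page: int) -> int:
    for frame, entry in enumerate(inverted):   # search every frame's entry
        if entry == (pid, page):
            return frame                       # match: physical frame number
    raise ValueError("illegal address access") # no match: addressing error

print(find_frame("P1", 2))  # frame 2
```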

Shared Pages

 One advantage of paging is the possibility of sharing common code.

Shared code

 One copy of read-only (reentrant) code is shared among processes (e.g., text
editors, compilers, window systems).

 Shared code must appear in same location in the logical address space of
all processes

EXAMPLE:
Fig.3.19 Shared Paging

32- and 64-bit Architecture Examples

 The IA-32 architecture supports both paging and segmentation.

IA-32 Architecture

 Memory management in IA-32 systems is divided into two components


segmentation and paging and works as follows:

 The CPU generates logical addresses, which are given to the segmentation
unit.

 The segmentation unit produces a linear address for each logical address.

 The linear address is then given to the paging unit, which in turn generates
the physical address in main memory.

 Thus, the segmentation and paging units form the equivalent of the memory-
management unit(MMU).
Fig.3.20 Logical to physical address Translation

IA-32 Segmentation

 The IA-32 architecture allows a segment to be as large as 4 GB, and the maximum number of segments per process is 16 K.

 The logical address space of a process is divided into two partitions.

 The first partition consists of up to 8K segments that are private to that


process.

 The second partition consists of up to 8K segments that are shared among


all the processes.

 Information about the first partition is kept in the local descriptor table (LDT);

 Information about the second partition is kept in the global descriptor table
(GDT).

 Each entry in the LDT and GDT consists of an 8-byte segment descriptor
with detailed information about a particular segment, including the base
location and limit of that segment.

 The logical address is a pair (selector, offset), where the selector is a 16-bit
number:

 In which s designates the segment number, g indicates whether the


segment is in the GDT or LDT, and p deals with protection. The offset is a
32-bit number specifying the location of the byte within the segment.

 The machine has six segment registers, allowing six segments to be addressed at any one time by a process. It also has six 8-byte microprogram registers to hold the corresponding descriptors from either the LDT or GDT.
 The base and limit information about the segment in question is used to
generate a linear address.

 First, the limit is used to check for address validity.

 If the address is not valid, a memory fault is generated, resulting in a trap


to the operating system.

Fig.3.21 IA32 Segmentation

 If it is valid, then the value of the offset is added to the value of the base,
resulting in a 32-bit linear address.

IA-32 Paging

 The IA-32 architecture allows a page size of either 4 KB or 4 MB.

 For 4-KB pages, IA-32 uses a two-level paging scheme in which the 32-bit linear address is divided as follows: the outermost 10 bits index the page directory, the next 10 bits index a page table, and the low 12 bits give the page offset.

 The page-directory entry points to an inner page table that is indexed by the contents of the innermost 10 bits in the linear address.

 To improve the efficiency of physical memory use, IA-32 page tables can be swapped to
disk.
Fig.3.22 Page Address Extension

 In this case, an invalid bit is used in the page directory entry to indicate
whether the table to which the entry is pointing is in memory or on disk.

 Page Address Extension (PAE) also increased the page-directory and page-table entries from 32 to 64 bits in size, which allowed the base address of page tables and page frames to extend from 20 to 24 bits.

Virtual Memory

Virtual memory is a technique that allows the execution of processes that may not be completely in main memory.

Advantages:

 Allows a program to be larger than physical memory.

 Separates user logical memory from physical memory.

 Allows processes to easily share files & address spaces.

 Allows for more efficient process creation.

Virtual memory can be implemented using,


 Demand paging
 Demand segmentation

Virtual Memory that is Larger than Physical Memory

Fig.3.23 Virtual memory and physical memory

Demand Paging

 It is similar to a paging system with swapping.

 Demand Paging - Bring a page into memory only when it is needed

 To execute a process, swap that entire process into memory.

 Lazy Swapper - Never swaps a page into memory unless that page will be
needed.

Advantages

 Less I/O needed

 Less memory needed

 Faster response

 More users
Transfer of a paged memory to contiguous disk space

Fig.3.24 Transfer of a paged memory to contiguous disk space

Valid-Invalid bit

 Valid – the associated page is legal and in memory.

 Invalid – the page is either not valid, or is valid but currently on the disk.

Page table when some pages are not in main memory

Fig.3.25 Page table when some pages are not in main memory
Copy on Write

 fork() normally creates a copy of the parent's address space for the child, duplicating the pages belonging to the parent.

 Since many child processes invoke the exec() system call immediately after creation, copying the parent's address space may be unnecessary.

 Instead, we can use a technique known as copy-on-write, which works by allowing the parent and child processes initially to share the same pages. These shared pages are marked as copy-on-write pages, meaning that if either process writes to a shared page, a copy of the shared page is created.

 Fig. show the contents of the physical memory before and after process 1 modifies page
C.

Fig.3.26 Before Process 1 modifies page C

Fig.3.27 After Process 1 modifies Page C


 When the copy-on-write technique is used, only the pages that are modified by
either process are copied;

 All unmodified pages can be shared by the parent and child processes.

 Pages that cannot be modified (pages containing executable code) can be shared
by the parent and child. Copy-on-write is a common technique used by several
operating systems.

 When it is determined that a page is going to be duplicated using copy-on-write, it is important to note the location from which the free page will be allocated. Many operating systems provide a pool of free pages for such requests.

 These free pages are typically allocated when the stack or heap for a process must
expand or when there are copy-on-write pages to be managed.

 Operating systems typically allocate these pages using a technique known as


zero-fill-on-demand. Zero-fill-on-demand pages have been zeroed-out before being
allocated, thus erasing the previous contents.

 vfork() (for virtual memory fork)—that operates differently from fork() with copy-
on-write. With vfork(), the parent process is suspended, and the child process
uses the address space of the parent.

Page Fault

 Access to a page marked invalid causes a page fault trap.

Steps in Handling a Page Fault


Fig.3.28 Steps in Handling a Page Fault

1. Determine whether the reference is a valid or an invalid memory access.

a. If the reference is invalid, terminate the process.

b. If the reference is valid but the page has not yet been brought into main
memory, continue with the following steps.

2. Find a free frame.

3. Read the desired page into the newly allocated frame.

4. Reset the page table to indicate that the page is now in memory.

5. Restart the instruction that was interrupted.
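For counting faults over a reference trace, the steps above reduce to marking a page valid the first time it is touched. A minimal sketch of pure demand paging, assuming enough free frames that no replacement is needed:

```python
def demand_paging_faults(references) -> int:
    """Count page faults when a page faults only on first use, then stays
    resident (its valid bit is set after being read in)."""
    resident = set()
    faults = 0
    for page in references:
        if page not in resident:   # invalid bit -> page-fault trap
            faults += 1            # find a free frame, read the page in
            resident.add(page)     # reset page table: page now valid
    return faults

print(demand_paging_faults([0, 1, 0, 2, 1, 0]))  # 3: first touch of each page
```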

Pure demand paging

 Never bring a page into memory until it is required.

 We could start a process with no pages in memory.

 When the OS sets the instruction pointer to the first instruction of the process, which is on a non-memory-resident page, the process immediately faults for the page.

 After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory.

Performance of demand paging

 Let p be the probability of a page fault (0 ≤ p ≤ 1).

 Effective Access Time (EAT):

EAT = (1 – p) × ma + p × page-fault time

where ma is the memory-access time.
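A worked example of the EAT formula, using assumed figures of 200 ns for a memory access and 8 ms for the page-fault service time:

```python
def eat_ns(p: float, mem_ns: float, fault_ns: float) -> float:
    """Effective access time: EAT = (1 - p) * ma + p * page-fault time."""
    return (1 - p) * mem_ns + p * fault_ns

# With p = 1/1000: 0.999 * 200 + 0.001 * 8_000_000 = 199.8 + 8000 = 8199.8 ns,
# i.e. one fault per thousand accesses already slows memory down ~40x.
print(eat_ns(0.001, 200, 8_000_000))
```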

A page fault causes the following sequence to occur:

1. Trap to the OS

2. Save the user registers and process state.

3. Determine that the interrupt was a page fault.

4. Check whether the reference was legal and find the location of page on disk.

5. Read the page from disk to free frame.

i. Wait in a queue until read request is serviced.

ii. Wait for seek time and latency time.

iii. Transfer the page from disk to free frame.

6. While waiting, allocate CPU to some other user.

7. Interrupt from disk.

8. Save registers and process state for other users.

9. Determine that the interrupt was from disk.

10. Reset the page table to indicate that the page is now in memory.

11. Wait for CPU to be allocated to this process again.

12. Restart the instruction that was interrupted.

Page Replacement

 If no frames are free, we could find one that is not currently being used
& free it.

 We can free a frame by writing its contents to swap space & changing
the page table to indicate that the page is no longer in memory.

 Then we can use that freed frame to hold the page for which the process
faulted.

Basic Page Replacement

1. Find the location of the desired page on disk

2. Find a free frame

 If there is a free frame, then use it.

 If there is no free frame, use a page replacement algorithm to select a victim


frame

 Write the victim page to the disk, change the page & frame tables
accordingly.

3. Read the desired page into the (new) free frame. Update the page and frame tables.

4. Restart the Process.

Fig.3.29 Page Replacement

Note:

If no frames are free, two page transfers are required, and this situation effectively doubles the page-fault service time.

Modify bit:
 It indicates whether any word or byte in the page has been modified since it was loaded.

 When we select a page for replacement, we examine its modify bit; if it is not set, the page need not be written back to disk.

Page Replacement Algorithms


a. FIFO Page Replacement
b. Optimal Page Replacement
c. LRU Page Replacement
(a) FIFO page replacement algorithm
 Replace the oldest page.
 This algorithm associates with each page, the time when that page was brought in.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
No.of available frames = 3 (3 pages can be in memory at a time per process)

No. of page faults = 15


Drawback:
 The FIFO page replacement algorithm's performance is not always good.
 To illustrate this, consider the following example:
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
 If the number of available frames = 3, then the number of page faults = 9.
 If the number of available frames = 4, then the number of page faults = 10.
 Here the number of page faults increases when the number of frames increases. This is called Belady's Anomaly.
(b) Optimal page replacement algorithm
 Replace the page that will not be used for the longest period of time.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
No.of available frames = 3

No. of page faults = 9


Drawback:
It is difficult to implement as it requires future knowledge of the reference string.

(c) LRU (Least Recently Used) page replacement algorithm


 Replace the page that has not been used for the longest period of time.
Example:
Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1

No. of page faults = 12
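The example runs above can be checked with a small simulator for FIFO and LRU (Optimal is omitted, since it requires future knowledge); the printed counts match the fault totals quoted in the text, including Belady's Anomaly for FIFO:

```python
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:       # no free frame: evict oldest page
                frames.discard(queue.popleft())
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, nframes):
    frames, faults = [], 0                   # ordered by recency, LRU first
    for p in refs:
        if p in frames:
            frames.remove(p)                 # hit: just refresh recency
        else:
            faults += 1
            if len(frames) == nframes:       # evict least recently used page
                frames.pop(0)
        frames.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3), lru_faults(refs, 3))       # 15 12

belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))  # 9 10 (Belady's Anomaly)
```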

Thrashing

 If page faults and swapping happen very frequently at a high rate, the operating system has to spend more time swapping these pages. This state of the operating system is termed thrashing. Because of thrashing, CPU utilization is greatly reduced.

Example
 If any process does not have the number of frames that it needs to support pages in
active use then it will quickly page fault.

 And at this point, the process must replace some pages. As all the pages of the process
are actively in use, it must replace a page that will be needed again right away.

 Consequently, the process will quickly fault again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity by a process is
called thrashing.

 During thrashing, the CPU spends less time on actual productive work and more time swapping.

Fig.3.30 Thrashing

Causes of Thrashing

 If a process does not have "enough" pages, the page-fault rate is very high.
This leads to:

o Low CPU utilization

o Operating system thinks that it needs to increase the degree of


multiprogramming

o another process is added to the system

 When the CPU utilization is low, the OS increases the degree of


multiprogramming.

 If global replacement is used then as processes enter the main memory they
tend to steal frames belonging to other processes.
 Eventually all processes will not have enough frames and hence the page
fault rate becomes very high.

 Thus swapping in and swapping out of pages only takes place.

 This is the cause of thrashing.

Effect of Thrashing

 At the time, when thrashing starts then the operating system tries to apply either
the Global page replacement Algorithm or the Local page replacement algorithm.

Global Page Replacement

 The Global Page replacement has access to bring any page, whenever thrashing found it
tries to bring more pages. Actually, due to this, no process can get enough frames and
as a result, the thrashing will increase more and more. Thus the global page
replacement algorithm is not suitable whenever thrashing happens.

Local Page Replacement

 Unlike the Global Page replacement, the local page replacement will select pages which
only belongs to that process. Due to this, there is a chance of a reduction in the
thrashing. As it is also proved that there are many disadvantages of Local Page
replacement. Thus local page replacement is simply an alternative to Global Page
replacement.

Techniques used to handle the thrashing

 Local Page replacement is better than the Global Page replacement but local page
replacement has many disadvantages too, so it is not suggestible.

 To limit thrashing, we can use a local replacement algorithm.

 To prevent thrashing, there are two methods namely ,

[1] Working Set Strategy

[2] Page Fault Frequency

1. Working Set Strategy

 It is based on the assumption of the model of locality.

 Locality is defined as the set of pages actively used together.

 The working set is the set of pages in the most recent ∆ page references.

 ∆ is the working-set window.

 If ∆ is too small, it will not encompass the entire locality.

 If ∆ is too large, it will encompass several localities.

 If ∆ = ∞, it will encompass the entire program.

D = Σ WSSi

 where WSSi is the working-set size for process i,

 D is the total demand for frames, and m is the number of available frames.

 If D > m, then thrashing will occur.
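The working-set computation can be sketched as follows; the reference string, window size ∆, and frame count m are assumed illustration values:

```python
def wss(refs, t, delta):
    """Working-set size at time t: number of distinct pages among the
    most recent delta references (the working-set window)."""
    return len(set(refs[max(0, t - delta + 1): t + 1]))

refs = [1, 2, 1, 3, 4, 2, 1, 5]
print(wss(refs, 4, 3))   # window [1, 3, 4] -> WSS = 3

# Thrashing test with assumed numbers, pretending this is the only process:
D = wss(refs, 7, 4)      # window [4, 2, 1, 5] -> D = 4
m = 2                    # assumed number of available frames
print(D > m)             # True -> thrashing predicted
```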

2. Page Fault Frequency

Fig.3.31 Page fault frequency

o If the actual page-fault rate is too low, the process loses a frame.

o If the actual page-fault rate is too high, the process gains a frame.

 The working-set model is successful, and knowledge of the working set can be useful for prepaging, but it is a clumsy way to control thrashing. Page Fault Frequency (PFF) is a more direct approach.

 The main problem is how to prevent thrashing. As thrashing has a high page fault rate
and also we want to control the page fault rate.

 When the Page fault is too high, then we know that the process needs more frames.
Conversely, if the page fault-rate is too low then the process may have too many frames.

 We can establish upper and lower bounds on the desired page-fault rate. If the actual page-fault rate exceeds the upper limit, we allocate the process another frame. If the page-fault rate falls below the lower limit, we remove a frame from the process.

 Thus with this, we can directly measure and control the page fault rate in order to
prevent thrashing.

Advantages and Disadvantages of Contiguous and Non-contiguous Memory Allocation

The advantages of contiguous memory allocation are:

1. It supports fast sequential and direct access

2. It provides a good performance

3. The number of disk seeks required is minimal

The disadvantage of contiguous memory allocation is

 fragmentation

Non-contiguous memory allocation offers the following advantages over contiguous memory allocation:

 Allows the sharing of code and data among processes.

 External fragmentation is nonexistent with non-contiguous memory allocation.

 Virtual memory allocation is strongly supported in non-contiguous memory allocation.

Non-contiguous memory allocation methods include paging and segmentation.

Advantages of paging

 Paging eliminates external fragmentation

 Multiprogramming is supported

 Overheads that come with compaction during relocation are eliminated

Disadvantages of paging:

 Paging increases the price of computer hardware, as page addresses must be mapped in hardware

 Memory is forced to store extra structures such as page tables

 Some memory space stays unused when the available blocks are not sufficient to hold the address space of the jobs to be run

Advantages of segmentation:

 Internal fragmentation is eliminated in segmentation memory allocation

 Segmentation fully supports virtual memory

 Dynamic segment growth is fully supported

 Segmentation supports dynamic linking

 Segmentation allows the user to view memory in a logical sense.

Disadvantages of segmentation:

 Main memory will always limit the size of segmentation, that is,
segmentation is bound by the size limit of memory

 It is difficult to manage segments on secondary storage

 Segmentation is slower than paging.

 Segmentation falls victim to external fragmentation even though it eliminates internal fragmentation.
Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in
order), how would the first-fit, best-fit, and worst-fit algorithms place
processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which
algorithm makes the most efficient use of memory?

First-fit:

212K is put in 500K partition

417K is put in 600K partition

112K is put in 288K partition (new partition 288K = 500K - 212K)

426K must wait

Best-fit:

212K is put in 300K partition

417K is put in 500K partition

112K is put in 200K partition

426K is put in 600K partition

Worst-fit:

212K is put in 600K partition

417K is put in 500K partition

112K is put in 388K partition

426K must wait

In this example, best-fit turns out to be the best.
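The three placements above can be reproduced with a short simulation (a sketch: a hole is split in place after each allocation, and a process that fits nowhere waits, shown as None):

```python
def allocate(partitions, processes, strategy):
    """Place each process into a free hole; the leftover space becomes a
    smaller hole, as in the worked example above."""
    holes = list(partitions)
    placement = []
    for size in processes:
        fits = [h for h in holes if h >= size]
        if not fits:
            placement.append(None)              # process must wait
            continue
        if strategy == "first":
            hole = next(h for h in holes if h >= size)
        elif strategy == "best":
            hole = min(fits)                    # smallest hole that fits
        else:                                   # "worst"
            hole = max(fits)                    # largest hole
        placement.append(hole)
        holes[holes.index(hole)] = hole - size  # split off the leftover
    return placement

parts = [100, 500, 200, 300, 600]
procs = [212, 417, 112, 426]
print(allocate(parts, procs, "first"))  # [500, 600, 288, None]
print(allocate(parts, procs, "best"))   # [300, 500, 200, 600]
print(allocate(parts, procs, "worst"))  # [600, 500, 388, None]
```

Only best-fit places all four processes here, which is why it is judged the most efficient for this input.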
