
Memory Management

Outline
• Background
• Dynamic Loading
• Dynamic Linking
• Overlays
• Logical versus Physical Address Space
• Memory Management Unit
• Swapping
• Memory Protection
• Memory Allocation
• Memory Fragmentation
• Paging
• Virtual Memory
• Demand Paging
  - Performance of demand paging
• Page Replacement
  - Page Replacement Algorithms
• Thrashing
• Working-set model
Background
• Program must be brought into memory and
placed within a process for it to be executed.
• Input Queue - collection of processes on the
disk that are waiting to be brought into
memory for execution.
• User programs go through several steps
before being executed.
Functions of Memory Management
• Keep track of every memory location.
• Track whether each memory location is allocated or free.
• Track how much memory is allocated.
• Decide which process will get memory, and when.
• Update the status of a memory location when it is allocated or freed.
Names and Binding
• Symbolic names → Logical names → Physical names
• Symbolic Names: known in a context or path
  - file names, program names, printer/device names, user names
• Logical Names: used to label a specific entity
  - inodes, job numbers, major/minor device numbers, process id (pid), uid, gid, ...
• Physical Names: address of entity
  - inode address on disk or memory
  - entry point or variable address
  - PCB address
Binding of instructions and data to
memory
– Address binding of instructions and data to memory addresses
can happen at three different stages.
• Compile time:
– If the memory location is known a priori, absolute code can be generated; must recompile the code if the starting location changes.
• Load time:
– Must generate relocatable code if memory location is not known at
compile time.
• Execution time:
– Binding delayed until runtime if the process can be moved during its
execution from one memory segment to another. Need hardware
support for address maps (e.g. base and limit registers).
Binding time tradeoffs
- Early binding
  - compiler: produces efficient code
  - allows checking to be done early
  - allows estimates of running time and space
- Delayed binding
  - linker, loader
  - produces efficient code, allows separate compilation
  - portability and sharing of object code
Contd..
- Late binding
  - VM, dynamic linking/loading, overlaying, interpreting
  - Code is less efficient and checks are done at runtime, but late binding is flexible and allows dynamic reconfiguration.
Dynamic Loading
• Routine is not loaded until it is called.
• Better memory-space utilization; unused
routine is never loaded.
• Useful when large amounts of code are
needed to handle infrequently occurring
cases.
• No special support from the operating system
is required; implemented through program
design.
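As a concrete illustration, here is a minimal sketch of dynamic loading on a POSIX system using dlopen/dlsym; the library name libm.so.6 is Linux-specific and the error handling is kept deliberately minimal (link with -ldl on older glibc systems):

    /* Minimal sketch: the cos routine is not loaded until dlopen/dlsym run. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *lib = dlopen("libm.so.6", RTLD_LAZY);   /* load on demand */
        if (!lib) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* locate the routine in the freshly loaded library */
        double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
        if (cosine) printf("cos(0) = %f\n", cosine(0.0));

        dlclose(lib);
        return 0;
    }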
Dynamic Linking
• Linking postponed until execution time.
• Small piece of code, stub, used to locate the
appropriate memory-resident library routine.
• Stub replaces itself with the address of the routine,
and executes the routine.
• Operating-system support is needed to check whether the routine is within the process's memory address space.
• Dynamic linking is particularly useful for libraries.
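The stub idea can be sketched in plain C; this is a toy model in which the "library routine" is hard-wired rather than located by a real dynamic linker:

    #include <stdio.h>

    static void library_routine(void) { puts("library routine runs"); }

    static void stub(void);
    static void (*routine)(void) = stub;   /* first call goes to the stub */

    static void stub(void) {
        routine = library_routine;  /* stub replaces itself with the routine's address */
        routine();                  /* ...and executes the routine */
    }

    int main(void) {
        routine();   /* first call is resolved through the stub */
        routine();   /* subsequent calls go directly to the routine */
        return 0;
    }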
Overlays
• To enable a process to be larger than the
amount of memory allocated to it.
• Keep in memory only those instructions and
data that are needed at any given time.
• Implemented by user, no special support from
operating system; programming design of
overlay structure is complex.
Contd...
For eg. consider a two-pass assembler:
- During pass 1: construct a symbol table.
- During pass 2: generate machine-language code.
Contd..
- Partition the assembler into pass 1 code, pass 2 code, the symbol table, and common support routines used by both pass 1 and pass 2.
- Assume the sizes of these components are as follows:

    pass 1           70 KB
    pass 2           80 KB
    symbol table     20 KB
    common routines  30 KB
Contd..
- To load everything at once, how much memory would be required? (70 + 80 + 20 + 30 = 200 KB)
- If only 150 KB is available, how can the process be run?
Solution for Overlays
- Define 2 overlays:
  - Overlay 1: symbol table, common routines, pass 1
  - Overlay 2: symbol table, common routines, pass 2
- Add an overlay driver and start with overlay 1 in memory.
Overlays for a Two-Pass Assembler

Figure: the symbol table (20 KB), common routines (30 KB), and overlay driver (10 KB) stay resident; a single overlay area holds either pass 1 (70 KB, overlay 1) or pass 2 (80 KB, overlay 2).
Contd...
- When pass 1 finishes, the overlay driver reads overlay 2 into memory, overwriting overlay 1, and transfers control to it.
- Overlay 1 needs only 120 KB; overlay 2 needs only 130 KB (plus the 10 KB driver).
- As with dynamic loading, overlays do not require any special support from the OS.
- They are implemented by the user with simple file structures.
Logical vs. Physical Address Space
- The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
- Logical address (or virtual address): generated by the CPU.
- Physical address: the address seen by the memory unit.
Logical vs. Physical Address Space
- Logical and physical addresses are the same in compile-time and load-time binding schemes.
- Logical and physical addresses differ in the execution-time address-binding scheme.
Memory Management Unit (MMU)
• Hardware device that maps virtual to physical
address.
• In MMU scheme, the value in the relocation
register is added to every address generated
by a user process at the time it is sent to
memory.
• The user program deals with logical addresses;
it never sees the real physical address.
Dynamic relocation using a relocation register

Figure: the relocation register holds 14000; the CPU generates logical address 346, the MMU adds the relocation value, and physical address 14346 is sent to memory.
Swapping
- A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution.
- Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
Contd..
- Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
- The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
- Modified versions of swapping are found on many systems, e.g. UNIX and Microsoft Windows.
Schematic view of swapping
Memory Protection
- Necessary to protect the memory from user programs, and to protect user programs from each other.
- This is possible using limit registers and relocation registers.
- The relocation register holds the smallest physical address available to the process.
Cont..
- The relocation register contains the value of the smallest physical address, while the limit register contains the range of logical addresses.
- Using the values of these two registers, memory can be protected.
- Whenever a new process is loaded, its addresses are loaded into the limit and relocation registers, and every address the CPU generates is checked: each logical address must be less than the limit register.
Contd...
- If the logical address generated by the CPU is less than the value in the limit register, it is added to the relocation register's value.
- The resulting physical address is then sent to memory; otherwise the CPU traps with an addressing error.
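A minimal sketch of this relocation-plus-limit check in C, using the register values from the relocation figure above (the struct and function names are illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        uint32_t relocation; /* smallest physical address of the process */
        uint32_t limit;      /* size of the logical address space        */
    } mmu_regs;

    uint32_t translate(const mmu_regs *r, uint32_t logical) {
        if (logical >= r->limit) {          /* protection check */
            fprintf(stderr, "trap: addressing error (%u >= %u)\n",
                    logical, r->limit);
            exit(EXIT_FAILURE);
        }
        return logical + r->relocation;     /* relocation */
    }

    int main(void) {
        mmu_regs r = { .relocation = 14000, .limit = 3000 };
        printf("%u\n", translate(&r, 346)); /* prints 14346 */
        return 0;
    }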
Hardware Support for Relocation and Limit Registers
Memory Allocation

Contiguous Allocation:
• Centralized
• The program/process is stored completely as one unit
• Partitioning: fixed or variable

Noncontiguous Allocation:
• Decentralized
• Program parts are distributed in memory
• Partitioning: paging, segmentation, virtual memory
Partitioning
• Main memory is usually divided into two partitions:
  - the resident operating system, usually held in low memory with the interrupt vector;
  - user processes, held in high memory.
• Single-partition allocation
  - A relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data.
Cont…
• Multiple partition Allocation
• Hole - block of available memory; holes of
various sizes are scattered throughout memory.
• When a process arrives, it is allocated memory
from a hole large enough to accommodate it.
• Operating system maintains information about
– allocated partitions
– free partitions (hole)
Dynamic Storage Allocation Problem
- How to satisfy a request of size n from a list of free holes?
- First-fit
  - Allocate the first hole that is big enough.
- Best-fit
  - Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
Contd..
- Worst-fit
  - Allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
- First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
• For eg. given memory partitions of 100 KB, 500 KB, 200 KB, 300 KB and 600 KB (in that order), how would each of the first-fit, best-fit and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB and 426 KB (in that order)? Which algorithm makes the most efficient use of memory?
Size of Processes

    Process:  P1      P2      P3      P4
    Size:     212 KB  417 KB  112 KB  426 KB

Memory partitions (beyond the OS region): 100 KB, 500 KB, 200 KB, 300 KB, 600 KB.
Solution
1. First Fit: each process is placed in the first memory partition in which it fits.
Thus P1 gets the 500 KB partition, P2 gets the 600 KB partition, P3 gets the 200 KB partition, and P4 is not allocated any memory space.

    OS | 100 KB: free | 500 KB: P1 (212) | 200 KB: P3 (112) | 300 KB: free | 600 KB: P2 (417)
Cont…
Thus the space utilized is 212 + 112 + 417 = 741 KB.
Total space available = (100 + 500 + 200 + 300 + 600) = 1700 KB.
Unused memory space = (1700 - 741) = 959 KB.

2. Best Fit: each process is placed in the smallest partition that is big enough, so very little memory is left unused.
Thus P1 is allocated 300 KB, P2 is allocated 500 KB, P3 is allocated 200 KB, and P4 is allocated 600 KB.
Cont…

    OS | 100 KB: free | 500 KB: P2 (417) | 200 KB: P3 (112) | 300 KB: P1 (212) | 600 KB: P4 (426)

Thus the memory space utilized = (417 + 112 + 212 + 426) = 1167 KB.
Total available memory space = 1700 KB.
Unused memory space = (1700 - 1167) = 533 KB.
Cont…
3. Worst fit: each arriving process is allocated the largest available partition, so the maximum space is left unutilized in the chosen hole.
Thus P1 (212 KB) is allocated the 600 KB partition, P2 (417 KB) the 500 KB partition, and P3 (112 KB) the 300 KB partition; P4 (426 KB) is not allocated any memory space.

    OS | 100 KB: free | 500 KB: P2 (417) | 200 KB: free | 300 KB: P3 (112) | 600 KB: P1 (212)

Thus the memory space utilized = (212 + 417 + 112) = 741 KB.
Total available memory space = 1700 KB.
Unused memory space = (1700 - 741) = 959 KB.
From the above algorithms, the amount of unused space left by each is:
First fit = 959 KB
Best fit = 533 KB
Worst fit = 959 KB
Cont..
Thus, the best-fit algorithm makes the most efficient use of memory, as it leaves the least unused memory space of the three algorithms.
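The placements above can be checked with a short simulation. This sketch assumes, as the worked example does, that each fixed partition is consumed whole by a single process:

    #include <stdio.h>

    #define NPART 5
    #define NPROC 4

    /* mode 0: first fit, 1: best fit, 2: worst fit; returns index or -1 */
    int place(int size, const int part[], int mode) {
        int pick = -1;
        for (int i = 0; i < NPART; i++) {
            if (part[i] < size) continue;            /* too small or used   */
            if (mode == 0) return i;                 /* first fit           */
            if (pick < 0 ||
                (mode == 1 && part[i] < part[pick]) ||  /* smallest hole    */
                (mode == 2 && part[i] > part[pick]))    /* largest hole     */
                pick = i;
        }
        return pick;
    }

    int main(void) {
        const char *name[] = {"first", "best", "worst"};
        int proc[NPROC] = {212, 417, 112, 426};
        for (int m = 0; m < 3; m++) {
            int part[NPART] = {100, 500, 200, 300, 600}, used = 0;
            for (int i = 0; i < NPROC; i++) {
                int p = place(proc[i], part, m);
                if (p >= 0) { used += proc[i]; part[p] = 0; } /* consume */
            }
            printf("%s fit: %d KB used, %d KB unused\n",
                   name[m], used, 1700 - used);
        }
        return 0;
    }

Running it reproduces the totals above: 741/959 KB for first fit, 1167/533 KB for best fit, and 741/959 KB for worst fit.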
Memory Fragmentation
- Memory fragmentation implies the existence of unusable memory areas in a computer system.
- There are 2 types of fragmentation:
  - Internal fragmentation
  - External fragmentation
Cont…
Internal fragmentation
- Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
- For eg. consider a multiple-partition scheme with a hole of 18,464 bytes. Suppose the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes.
Contd...
External fragmentation:
- When free partitions are too small to accommodate any program, the wasted space is called external fragmentation.
- Total memory space exists to satisfy a request, but it is not contiguous.
- One solution to the problem of external fragmentation is compaction.
- Compaction: shuffle the memory contents to place all free memory together in one large block.
Cont…
- Compaction is not always possible: if relocation is static and is done at load time, then compaction cannot be done.
- Compaction is possible only if relocation is dynamic and is done at execution time.
- Another possible solution to the external-fragmentation problem is to permit the logical address space of a process to be noncontiguous, as in paging.
Contd..
- For eg. consider five processes allocated in a memory of size 256 K (the monitor occupies the first 40 K), with the following job schedule:

    Job/Process  Memory  Time
    1            60 K    10
    2            100 K   5
    3            30 K    20
    4            70 K    8
    5            50 K    15
Compaction

Figure: memory before and after compaction. Before: monitor (0-40 K), Job 5 (40-90 K), a 10 K hole, Job 4 (100-170 K), a 30 K hole, Job 3 (200-230 K), and a 26 K hole (230-256 K). After compaction: monitor (0-40 K), Job 5 (40-90 K), Job 4 (90-160 K), Job 3 (160-190 K), and a single 66 K free block (190-256 K).
Paging
- A memory-management scheme that permits the physical address space of a process to be non-contiguous;
- the process is allocated physical memory wherever it is available.
- Divide physical memory into fixed-size blocks called frames
  - the frame size is a power of 2, typically between 512 bytes and 8192 bytes.
Contd....
- Divide logical memory into same size blocks
called pages.
- When a process is to be executed, its pages are
loaded into any available memory frames from
the backing store.
- The backing store is divided into fixed sized
blocks that are of the same size as the memory
frames.
Contd...
- Keep track of all free frames.
- To run a program of size n pages, find n free frames and load the program.
- Set up a page table to translate logical to physical addresses.
Note: internal fragmentation is possible!
Address Translation Scheme
The address generated by the CPU is divided into two parts:
- Page number (p):
  - used as an index into the page table, which contains the base address of each page in physical memory.
- Page offset (d):
  - combined with the base address to define the physical memory address that is sent to the memory unit.
Contd..
- When a process is to be loaded, its pages are moved into free frames in physical memory.
- The frame number where each page is stored is entered in the page table.
- During process execution, the CPU generates a logical address that comprises a page number (p) and an offset (d) within the page.
Contd..
- The page number p is used to index into the page table and fetch the corresponding frame number (f).
- The physical address is obtained by combining the frame number (f) with the offset (d).
Address Translation Architecture
Paging Example
- Consider memory with a page size of 4 bytes and a physical memory of 32 bytes (8 frames).
- How can the user's view of memory be mapped into physical memory?
Contd..
- Logical address 0 is page 0, offset 0.
- Indexing into the page table, page 0 is in frame 5.
- physical address = (frame no × page size) + offset
- For eg. logical address 0 maps to physical address (5 × 4) + 0 = 20.
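A minimal sketch of this translation in C, assuming a textbook-style page table in which page 0 maps to frame 5 (the other mappings are illustrative):

    #include <stdio.h>

    #define PAGE_SIZE 4                 /* bytes per page, as in the example */

    int page_table[] = {5, 6, 1, 2};    /* page -> frame (illustrative)      */

    unsigned translate(unsigned logical) {
        unsigned p = logical / PAGE_SIZE;   /* page number */
        unsigned d = logical % PAGE_SIZE;   /* page offset */
        return page_table[p] * PAGE_SIZE + d;
    }

    int main(void) {
        printf("%u\n", translate(0));   /* page 0, frame 5 -> 20 */
        printf("%u\n", translate(13));  /* page 3, frame 2 -> 9  */
        return 0;
    }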
Paging Example
Contd..
- Paging itself is a form of dynamic relocation.
- Every logical address is bound by the paging hardware to some physical address.
- Any free frame can be allocated to a process that needs it.
Paging Example
Free Frames

Figure: the free-frame list before and after allocation.

Translation look – aside
buffer(TLB)
• TLB is associative high speed memory.
• Each entry in the TLB consists of two parts: a
key and a value.
• When the associative memory is presented
with an item it is compared with all keys
simultaneously.
• If the item is found, the corresponding value
field is returned.
Cont..
• TLB is used with page table in the following
way:
• The TLB contains only few of the page table
entries.
• When a logical address is generated by the
CPU, its page number is presented to the TLB.
• If the page no. found , its frame no. is
immediately available and is used to access
memory.
Cont…
• If the page number is not in the TLB (known as a TLB miss), a memory reference to the page table must be made. We then add the page number and frame number to the TLB, so that they will be found quickly on the next reference.
• If the TLB is already full of entries, the operating system must select one for replacement.
Paging Hardware With TLB
Contd..
- Hit ratio (α): the percentage of times that a particular page number is found in the TLB.
- For eg. an 80% hit ratio means that we find the desired page number in the TLB 80% of the time.
- Associative lookup = ε time units; one memory access = 1 time unit.
- Effective Access Time (EAT):
    EAT = (1 + ε)α + (2 + ε)(1 − α)
Contd..
- Assume the TLB hit rate is 98%, an associative lookup takes 20 ns, and memory access takes 100 ns on a hit but 500 ns in total on a miss (the page-table walk quintuples it).
- Effective access time = 0.98 × 120 + 0.02 × 520 = 128 ns.
This is only a 28% slowdown in memory access time.
Memory Protection In Paging
• Memory protection implemented by
associating protection bits with each frame.
• These bits are kept in the page table.
• Valid/invalid bit attached to each entry in page
table.
– Valid: indicates that the associated page is in the
process’ logical address space,a legal page.
Contd...
- Invalid: indicates that the page is not in the process's logical address space.
- Illegal addresses are trapped by using the valid-invalid bit.
- The operating system sets this bit for each page to allow or disallow accesses to that page.
Valid (v) or Invalid (i) Bit In A Page
Table
Shared pages
• Shared code:
  - Code and data can be shared among processes.
  - Reentrant (non-self-modifying) code can be shared.
  - Map it into pages with common page-frame mappings.
  - A single copy of read-only code suffices: compilers, editors, etc.
  - Shared code must appear in the same location in the logical address space of all processes.
Contd...
- Private code and data
  - Each process keeps a separate copy of its code and data.
  - The pages for private code and data can appear anywhere in the logical address space.
Shared Pages Example
Segmentation
• Memory Management Scheme that
supports user view of memory.
• A program is a collection of segments.
• A segment is a logical unit such as
–main program, procedure, function
–local variables, global variables,common
block
–stack, symbol table, arrays
Contd..
- Protect each entity independently.
- Allow each segment to grow independently.
- Share each segment independently.
User’s View of a Program
Segmentation Architecture
– Logical address consists of a two tuple
<segment-number, offset>
– Segment Table
• Maps two-dimensional user-defined addresses
into one-dimensional physical addresses. Each
table entry has
– Base - contains the starting physical
address where the segments reside in
memory.
– Limit - specifies the length of the segment.
Contd..
- Segment-table base register (STBR): points to the segment table's location in memory.
- Segment-table length register (STLR): indicates the number of segments used by a program; a segment number s is legal if s < STLR.
Segmentation Architecture (Cont.)
- Protection. With each entry in the segment table associate:
  - a validation bit (= 0 → illegal segment)
  - read/write/execute privileges
- Protection bits are associated with segments; code sharing occurs at segment level.
Contd..
- Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
- A segmentation example is shown in the following diagram.
Segmentation Hardware
Example of Segmentation
Sharing of Segments
Cont..
Consider the following segment table:

    Segment  Base  Length
    0        219   600
    1        2300  14
    2        90    100
    3        1327  580
    4        1952  96
Cont..
What are the physical addresses for the following logical addresses?
1) (0, 430)  2) (1, 10)  3) (2, 550)  4) (3, 400)  5) (4, 112)
Solution:
1) (0, 430): s = 0, d = 430; check d < length: 430 < 600, so physical address = 219 + 430 = 649.
2) (1, 10): 10 < 14, so physical address = 2300 + 10 = 2310.
3) (2, 550): 550 ≥ 100, illegal reference; trap to the operating system.
4) (3, 400): 400 < 580, so physical address = 1327 + 400 = 1727.
5) (4, 112): 112 ≥ 96, illegal reference; trap to the operating system.
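These lookups can be verified with a small sketch (the struct and function names are illustrative):

    #include <stdio.h>

    struct seg { int base, length; };

    struct seg table[] = {                 /* the segment table above */
        {219, 600}, {2300, 14}, {90, 100}, {1327, 580}, {1952, 96}
    };

    void translate(int s, int d) {
        if (d < table[s].length)           /* limit check */
            printf("(%d,%d) -> %d\n", s, d, table[s].base + d);
        else
            printf("(%d,%d) -> trap: offset beyond segment limit\n", s, d);
    }

    int main(void) {
        translate(0, 430);  /* 219 + 430 = 649   */
        translate(1, 10);   /* 2300 + 10 = 2310  */
        translate(2, 550);  /* trap: 550 >= 100  */
        translate(3, 400);  /* 1327 + 400 = 1727 */
        translate(4, 112);  /* trap: 112 >= 96   */
        return 0;
    }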
Virtual Memory
• Virtual memory is a technique which allows
the execution of processes that may not be
completely in memory.
• Separation of user logical memory from
physical memory.
• This separation allows an extremely large
virtual memory to be provided for
programmers when only a smaller physical
memory is available.
Cont..
• It makes the task of programming much
easier because the programmer no longer
needs to worry about the amount of physical
memory available.
• Logical address space can therefore be much
larger than physical address space.
• Need to allow pages to be swapped in and
out.
• Allows files and memory to be shared by
several different processes through page
sharing.
Contd..
• The sharing of pages further allows performance improvements during process creation.
• Virtual memory can be implemented via:
  - Demand paging
  - Segmentation system
Virtual Memory That is Larger Than Physical Memory
Demand Paging
• A demand paging system is similar to a
paging system with swapping.
• When we want to execute a process, we
swap it into memory.
• Rather than swapping the entire process into
memory ,we use lazy swapper.
• A lazy swapper never swaps a page into
memory unless that page will be needed.
Cont..
• A swapper manipulates an entire process, whereas a pager is concerned with the individual pages of a process.
• When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again.
• Instead of swapping in a whole process, the pager brings only those necessary pages into memory.
Cont..
• Bring a page into memory only when it is needed.
– Less I/O needed
– Less Memory needed
– Faster response
– More users
• The first reference to a page will trap to OS with a
page fault.
• OS looks at another table to decide
–Invalid reference - abort
–Just not in memory.
Transfer of a Paged Memory to Contiguous Disk Space
Valid-Invalid Bit
- With each page-table entry a valid-invalid bit is associated (1 → in memory, 0 → not in memory).
- Initially, the valid-invalid bit is set to 0 on all entries.
- During address translation, if the valid-invalid bit in a page-table entry is 0, a page fault occurs.
- Example of a page-table snapshot:
Contd..

Figure: a page table listing frame numbers and valid-invalid bits; the first four entries are valid (bit = 1) and the remaining entries are invalid (bit = 0).
Page Table When Some Pages Are Not in Main Memory
Page Fault
• A page fault is a type of interrupt raised when a running process accesses a memory page that is mapped into virtual memory but not loaded in main memory.
Procedure for Handling a Page Fault
- A page is needed (a reference to the page occurs).
- Step 1: Check an internal table (usually kept with the PCB) for this process to determine whether the reference is a valid or an invalid memory access.
- Step 2: If the reference is invalid, terminate the process. If it is valid but we have not yet brought in that page, page it in now.
- Step 3: Find a free frame (by taking one from the free-frame list).
Contd..
- Step 4: Schedule a disk operation to read the desired page into the newly allocated frame.
- Step 5: When the disk read is complete, modify the internal table kept with the process and the page table to indicate that the page is now in memory.
- Step 6: Restart the instruction interrupted by the illegal-address trap. The process continues as if the page had always been in memory.
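A hedged, user-space simulation of these steps: the "disk", the frames, and the page table are plain arrays, and the validity check and free-frame handling are simplified (all names are illustrative):

    #include <stdio.h>
    #include <string.h>

    #define NPAGES  8
    #define NFRAMES 4
    #define BLOCK   16

    char disk[NPAGES][BLOCK];   /* backing store: one block per page   */
    char mem[NFRAMES][BLOCK];   /* physical memory frames              */
    int  page_table[NPAGES];    /* page -> frame; -1 means not present */
    int  next_free = 0;         /* head of the free-frame list         */

    const char *access_page(int page) {
        if (page_table[page] == -1) {              /* page-fault trap     */
            int frame = next_free++;               /* step 3: free frame  */
            memcpy(mem[frame], disk[page], BLOCK); /* step 4: disk read   */
            page_table[page] = frame;              /* step 5: update table*/
            printf("page fault: page %d -> frame %d\n", page, frame);
        }
        return mem[page_table[page]];              /* step 6: restart     */
    }

    int main(void) {
        for (int i = 0; i < NPAGES; i++) {
            sprintf(disk[i], "page %d data", i);
            page_table[i] = -1;                    /* valid bit = 0       */
        }
        printf("%s\n", access_page(2));            /* faults, then reads  */
        printf("%s\n", access_page(2));            /* no fault            */
        return 0;
    }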
Steps in Handling a Page Fault
What happens if there is no free frame?
• Page replacement: find some page in memory that is not really in use and swap it out.
• We need a page-replacement algorithm.
• Performance issue: we need an algorithm that results in the minimum number of page faults.
• The same page may be brought into memory many times.
Performance of Demand Paging
• Demand paging can have a significant effect on the performance of a computer system.
• Let us compute the effective access time for a demand-paged memory.
• The memory access time, denoted ma, ranges from 10 to 200 ns.
• If we have no page fault, the effective access time is equal to the memory access time.
Performance of Demand Paging
• If a page fault occurs, first read the relevant
page from disk and then access the desired
page.
• Let p be the probability of a page fault (0 ≤ p ≤ 1); p should be close to zero, i.e. only a few page faults.
  - If p = 0, there are no page faults.
  - If p = 1, every reference is a page fault.
Cont..
• Effective Access Time (EAT)= (1-p) * memory-
access + p * (page fault time)
• Memory Access time = 1 microsecond
• 50% of the time the page that is being
replaced has been modified and therefore
needs to be swapped out.
Demand Paging Example
• An average page-fault service time is 25 ms and the memory access time is 100 ns; calculate the effective access time.
    EAT = (1 − p) × memory access + p × page-fault time
        = (1 − p) × 100 + p × 25,000,000
        = 100 + 24,999,900 × p (in ns)
Cont..
• EAT is directly proportional to the page-fault rate. For example, with p = 0.001 (one fault per 1000 accesses), EAT ≈ 25,100 ns, about 250 times slower than a fault-free access. It is important to keep the page-fault rate low in a demand-paging system; otherwise the EAT increases and process execution slows dramatically.
Process Creation
- Virtual memory allows other benefits during process creation:
  - Copy-on-Write
  - Memory-Mapped Files
Copy-on-Write
• Copy-on-Write (COW) allows both parent
and child processes to initially share the
same pages in memory.
• If process modifies a shared page, only
then the page is copied.
• COW allows more efficient process creation
as only modified pages are copied.
• Free pages are allocated from a pool
of zeroed-out pages.
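Copy-on-write can be observed from user space with fork() on POSIX systems; in this sketch the child's write triggers a private copy, so the parent's view is unchanged:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int value = 42;   /* lives in a data page shared after fork() */

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {            /* child */
            value = 99;            /* write -> kernel copies the page */
            printf("child sees %d\n", value);
            return 0;
        }
        wait(NULL);
        printf("parent still sees %d\n", value);  /* prints 42 */
        return 0;
    }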
Memory-Mapped Files
• Memory-mapped file I/O allows file I/O to
be treated as routine memory access by
mapping a disk block to a page in memory.
• A file is initially read using demand paging.
A page-sized portion of the file is read from
the file system into a physical page.
Subsequent reads/writes to/from the file
are treated as ordinary memory accesses.
Cont..
• Simplifies file access by treating file I/O through memory rather than read()/write() system calls.
• Also allows several processes to map the same file, allowing the pages in memory to be shared.
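A minimal POSIX sketch using mmap(); the file name data.txt is illustrative, and the file is assumed to exist and be non-empty:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.txt", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);

        char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);   /* map disk blocks to pages */
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        p[0] = 'X';              /* an ordinary store updates the file   */
        printf("%.16s\n", p);    /* an ordinary load reads file data     */

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }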
Memory Mapped Files
Page Replacement
• Prevent over-allocation of memory by modifying the page-fault service routine to include page replacement.
• Use the modify (dirty) bit to reduce the overhead of page transfers: only modified pages are written back to disk.
• With page replacement, a large virtual memory can be provided on a smaller physical memory.
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on the disk.
2. Find a free frame.
a) If there is a free frame, use it.
b) If there is no free frame, use a page
replacement algorithm to select a victim
frame.
c) Write the victim frame to the disk ; change the
page and frame tables accordingly.
Cont..
3.Read the desired page into the newly freed frame;
change the page and frame tables.
4. Continue the user process from where the page fault
occurred.
• If no frames are free, two page transfers(one out and
one in) are required.
• This situation effectively doubles the page fault
service time and increases the effective access time
accordingly.
Page Replacement
Page Replacement Algorithms

• There are many different page replacement


algorithms.
• Every operating system has its own replacement
scheme.
• Evaluate an algorithm by running it on a particular
string of memory references and computing the no.
of page faults.
• The string of memory references is called a
reference string.
Page Replacement Algorithms
• As the number of available frames increases, the number of page faults generally decreases.
• We want the lowest page-fault rate.
• Assume the reference string in the examples to follow is
  1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Page Replacement Strategies
• The Principle of Optimality
  - Replace the page that will not be used again for the farthest time into the future.
• Random Page Replacement
  - Choose a page randomly.
• FIFO - First In First Out
  - Replace the page that has been in memory the longest.
Contd..
- LRU - Least Recently Used
  - Replace the page that has not been used for the longest time.
- LFU - Least Frequently Used
  - Replace the page that is used least often.
- NUR - Not Used Recently
  - An approximation to LRU.
- Working Set
  - Keep in memory those pages that the process is actively using.
First-In-First-Out (FIFO) Algorithm
• The simplest page-replacement algorithm.
• A FIFO replacement algorithm associates with each page the time when that page was brought into memory.
• When a page must be replaced, the oldest page is chosen.
• BELADY'S ANOMALY: the page-fault rate may increase as the number of allocated frames increases.
Cont..
• For eg. consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.
Cont..
• Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
• When 3 comes, it is already in memory —> 0 page faults.
• Then 5 comes; it is not in memory, so it replaces the oldest page, 1 —> 1 page fault.
• 6 comes; it is also not in memory, so it replaces the oldest page, 3 —> 1 page fault.
• Finally, when 3 comes it is not available, so it replaces 0 —> 1 page fault.
• (6 page faults in total.)
Cont..
Belady’s anomaly – Belady’s anomaly proves that it
is possible to have more page faults when increasing
the number of page frames while using the First in
First Out (FIFO) page replacement algorithm. For
example, if we consider reference string 3, 2, 1, 0, 3,
2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page
faults, but if we increase slots to 4, we get 10 page
faults.
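A short C sketch (illustrative, not an OS component) that reproduces these counts for the reference string above:

    #include <stdio.h>

    /* Count page faults for FIFO replacement on a reference string. */
    int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16];
        int oldest = 0, used = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (hit) continue;
            faults++;
            if (used < nframes) {
                frames[used++] = refs[i];        /* fill a free frame */
            } else {
                frames[oldest] = refs[i];        /* evict oldest page */
                oldest = (oldest + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {3,2,1,0,3,2,4,3,2,1,0,4};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3)); /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4)); /* 10 */
        return 0;
    }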
FIFO Illustrating Belady's Anomaly
First-In-First-Out (FIFO) Algorithm
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• Assume x frames (x pages can be in memory at a time per process).
• With 3 frames: faults occur on 1, 2, 3, 4, 1, 2, 5, 3, 4 —> 9 page faults.
• With 4 frames: faults occur on 1, 2, 3, 4, 5, 1, 2, 3, 4, 5 —> 10 page faults.
• FIFO replacement exhibits Belady's anomaly: more frames do not necessarily mean fewer page faults.
Optimal Page Replacement
• In this algorithm, pages are replaced which
would not be used for the longest duration of
time in the future.
• Example: Consider the page references
7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2,
with 4 page frame. Find number of page fault.
Cont..
• Initially all slots are empty, so 7, 0, 1, 2 are allocated to the empty slots —> 4 page faults.
• 0 is already there —> 0 page faults.
• When 3 comes, it takes the place of 7, because 7 is not used for the longest duration of time in the future —> 1 page fault.
• 0 is already there —> 0 page faults.
• 4 takes the place of 1 —> 1 page fault.
Cont..
• For the remaining page references —> 0 page faults, because those pages are already available in memory. (6 page faults in total.)
• Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests.
• The use of optimal page replacement is to set a benchmark against which other replacement algorithms can be analyzed.
Least Recently Used (LRU)
Algorithm
– Use recent past as an approximation of near
future.
– Choose the page that has not been used for the
longest period of time.
– May require hardware assistance to implement.
- Considered good, but difficult to implement
- Like all stack algorithms, LRU does not suffer from Belady's anomaly.

Cont..
• For eg. consider the page reference string
  7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 1, 2, 0
  with 3 page frames. Find the number of page faults.
Cont…
(Answer: 12 page faults with 3 frames.)
Implementation of LRU algorithm
• Counter Implementation
  - Every page-table entry has a counter; every time the page is referenced through this entry, copy the clock into the counter.
  - When a page needs to be replaced, look at the counters to determine which page to replace (the page with the smallest time value).
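A minimal sketch of the counter scheme, simulated in user space with a logical clock; run on the reference string above, it reports 12 faults with 3 frames:

    #include <stdio.h>

    #define NFRAMES 3

    int  page[NFRAMES];          /* page number held by each frame    */
    long stamp[NFRAMES];         /* clock value of the last reference */
    long clk = 0;
    int  used = 0;

    int reference(int p) {       /* returns 1 on a page fault         */
        for (int i = 0; i < used; i++)
            if (page[i] == p) { stamp[i] = ++clk; return 0; }
        int victim = 0;
        if (used < NFRAMES) victim = used++;
        else for (int i = 1; i < NFRAMES; i++)
            if (stamp[i] < stamp[victim]) victim = i;  /* oldest stamp */
        page[victim]  = p;
        stamp[victim] = ++clk;
        return 1;
    }

    int main(void) {
        int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,1,2,0}, faults = 0;
        for (int i = 0; i < 15; i++) faults += reference(refs[i]);
        printf("%d page faults\n", faults);   /* prints 12 */
        return 0;
    }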
Cont..
• Stack Implementation
  - Keep a stack of page numbers in doubly linked form.
  - When a page is referenced, move it to the top; this requires 6 pointers to be changed.
  - No search is required for replacement.
Stack Records Most Recent Page References
LRU Approximation Algorithms
- Reference Bit
  - With each page, associate a bit, initially = 0.
  - When the page is referenced, the bit is set to 1.
  - Replace a page whose bit is 0 (if one exists); we do not know the order, however.
- Additional Reference Bits Algorithm
  - Record the reference bits at regular intervals.
  - Keep 8 bits (say) for each page in a table in memory.
Contd..
- Periodically, shift the reference bit into the high-order bit, i.e. shift the other bits to the right, dropping the lowest bit.
- During page replacement, interpret the 8 bits as an unsigned integer.
- The page with the lowest number is the LRU page.
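A sketch of this aging mechanism, assuming 4 pages and a simulated timer (all names are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define NPAGES 4

    uint8_t history[NPAGES];   /* bit 7 = referenced in most recent tick */
    int referenced[NPAGES];    /* hardware reference bits                */

    void timer_tick(void) {
        for (int i = 0; i < NPAGES; i++) {
            history[i] = (history[i] >> 1) | (referenced[i] ? 0x80 : 0);
            referenced[i] = 0;              /* clear for the next period */
        }
    }

    int lru_victim(void) {                  /* lowest value = LRU page   */
        int victim = 0;
        for (int i = 1; i < NPAGES; i++)
            if (history[i] < history[victim]) victim = i;
        return victim;
    }

    int main(void) {
        referenced[0] = referenced[2] = 1;  /* pages 0 and 2 touched     */
        timer_tick();
        referenced[2] = 1;                  /* only page 2 touched       */
        timer_tick();
        printf("victim: page %d\n", lru_victim());  /* prints: page 1    */
        return 0;
    }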
LRU Approximation Algorithms
– Second Chance
• FIFO (clock) replacement algorithm
• Need a reference bit.
• When a page is selected, inspect the reference bit.
• If the reference bit = 0, replace the page.
• If page to be replaced (in clock order) has reference bit
= 1, then
– set reference bit to 0
– leave page in memory
– replace next page (in clock order) subject to same rules.
Contd..
- Degrades to FIFO if every reference bit is already set.
- Can be enhanced by including a modify (dirty) bit as well:
  - (0,0): neither recently used nor modified (best choice)
  - (0,1): not recently used, but modified (needs write-back)
  - (1,0): recently used, but not modified (might be needed soon)
  - (1,1): recently used and modified (worst choice)
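A sketch of the second-chance victim-selection loop, with the frame contents and reference bits preloaded for illustration:

    #include <stdio.h>

    #define NFRAMES 4

    int page[NFRAMES] = {10, 11, 12, 13};   /* resident pages           */
    int ref [NFRAMES] = { 1,  0,  1,  1};   /* reference bits           */
    int hand = 0;                           /* clock-hand position      */

    int pick_victim(void) {
        for (;;) {
            if (ref[hand] == 0) {           /* second chance used up    */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            ref[hand] = 0;                  /* give a second chance     */
            hand = (hand + 1) % NFRAMES;
        }
    }

    int main(void) {
        int v = pick_victim();
        printf("replace page %d in frame %d\n", page[v], v); /* frame 1 */
        return 0;
    }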
Second-Chance (clock) Page-Replacement Algorithm
Cont…
Advantages:
1) Neither optimal nor LRU replacement suffers from Belady's anomaly.
2) For stack algorithms, the set of pages in memory with n frames is always a subset of the set of pages that would be in memory with n + 1 frames.
Cont..
Disadvantages:
1)Neither implementation of LRU would be
conceivable without hardware assistance
beyond the standard TLB registers.
2)The updating of the clock fields or stack must
be done for every memory reference.
3)Few systems could tolerate that level of
overhead for memory management.
Thrashing
• If the number of frames allocated to a low-priority process falls below the minimum number required, its execution must be suspended.
• We should then page out its remaining pages, freeing all its allocated frames.
• A process that does not have enough frames will quickly page-fault, and at that point it must replace some page.
Cont..
• However, all its pages are in active use, so it must replace a page that will be needed again.
• Consequently, it quickly faults again and again. The process continues to fault, replacing pages that it then faults on and brings back in right away.
• This high paging activity is called thrashing.
• A process is thrashing if it is spending more time paging than executing.
Cont..
• If a process does not have enough pages for its execution, the page-fault rate is very high.
• This leads to:
  - low CPU utilization;
  - the OS thinking it needs to increase the degree of multiprogramming;
  - another process being added to the system;
  - system throughput plunging.
Thrashing (cont.)
- Why does paging work?
- Locality model: computations have locality!
- Locality: a set of pages that are actively used together.
- A process migrates from one locality to another.
- Localities may overlap.
Thrashing
- Why does thrashing occur?
  Σ (size of locality) > total memory size
Working Set Model
• The working-set model is based on the assumption of locality.
• The model uses a parameter Δ to define the working-set window:
  - a fixed number of page references, e.g. 10,000 instructions.
• If a page is in active use, it will be in the working set.
Cont..
• If a page is no longer being used, it will drop from the working set Δ time units after its last reference.
- WSSj (working-set size of process Pj): the total number of pages referenced in the most recent Δ references (varies in time).
- For eg. given a sequence of memory references with Δ = 10, the working set at time t1 might be {1,2,5,6,7}, while by time t2 it has changed to {3,4}.
Cont…
• The accuracy of the working set depends on the selection of Δ:
  - if Δ is too small, it will not encompass the entire locality;
  - if Δ is too large, it will encompass several localities;
  - if Δ = ∞, it will encompass the entire program.
- D = Σ WSSi = the total demand for frames.
• If D > m (the number of available frames), thrashing occurs.
- Policy: if D > m, then suspend one of the processes.
Working-set model
Cont..
• The OS monitors the working set of each process and allocates to it enough frames to provide its working-set size.
• The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible; thus it optimizes CPU utilization.
Keeping Track of the Working Set
• Approximate with an interval timer plus a reference bit.
- Example: Δ = 10,000
  - The timer interrupts after every 5000 time units.
  - Whenever the timer interrupts, copy and then clear the reference-bit values of all pages.
  - Keep 2 bits in memory for each page (they indicate whether the page was used within the last 10,000 to 15,000 references).
Cont…
- If one of the bits in memory = 1 → the page is in the working set.
• Not completely accurate: we cannot tell where within the interval a reference occurred.
• Improvement: 10 bits and an interrupt every 1000 time units.
Page Fault Frequency Scheme
• Control thrashing by establishing an acceptable page-fault rate.
  - If the page-fault rate is too low, the process loses a frame.
  - If the page-fault rate is too high, the process needs, and gains, a frame.
Demand Paging Issues
– Prepaging
• Tries to prevent high level of initial paging.
– E.g. If a process is suspended, keep list of pages in working set
and bring entire working set back before restarting process.
– Tradeoff - page fault vs. prepaging - depends on how many
pages brought back are reused.
– Page Size Selection
• fragmentation
• table size
• I/O overhead
• locality
Demand Paging Issues
- Program Structure
  • Array A[1024, 1024] of integer
  • Assume each row is stored on one page.
  • Assume only one frame in memory.
  • Program 1:
        for j := 1 to 1024 do
            for i := 1 to 1024 do
                A[i,j] := 0;
    1024 × 1024 page faults
  • Program 2:
        for i := 1 to 1024 do
            for j := 1 to 1024 do
                A[i,j] := 0;
    1024 page faults
Demand Paging Issues
• I/O Interlock and addressing
• Say I/O is done to/from virtual memory. I/O is
implemented by I/O controller.
– Process A issues I/O request
– CPU is given to other processes
– Page faults occur - process A’s pages are paged out.
– I/O now tries to occur - but frame is being used for another
process.
• Solution 1: never execute I/O to memory - I/O takes
place into system memory. Copying Overhead!!
• Solution 2: Lock pages in memory - cannot be selected
for replacement.
Demand Segmentation
• Used when there is insufficient hardware to
implement demand paging.
• OS/2 allocates memory in segments, which it
keeps track of through segment descriptors.
• Segment descriptor contains valid bit to indicate
whether the segment is currently in memory.
– If segment is in main memory, access continues.
– If not in memory, segment fault.
