Operating System Notes

VIRTUAL MEMORY

• Virtual memory – separation of user logical memory from physical memory
– Only part of the program needs to be in memory for
execution
– Logical address space can therefore be much larger than
physical address space
– Allows address spaces to be shared by several processes
– Allows for more efficient process creation
– More programs running concurrently
– Less I/O needed to load or swap processes
Virtual Memory (Cont.)
• Virtual address space – logical view of how process is stored
in memory
– Usually start at address 0, contiguous addresses until end
of space
– Meanwhile, physical memory organized in page frames
– MMU must map logical to physical
• Virtual memory can be implemented via:
– Demand paging
– Demand segmentation
Virtual Memory That is Larger Than Physical Memory
Virtual-address Space
• Usually design logical address space for stack to start at Max logical address and grow “down” while heap grows “up”
– Maximizes address space use
– Unused address space between the two is hole
– No physical memory needed until heap or stack grows to a given new page
• Enables sparse address spaces with holes left for growth, dynamically linked libraries, etc.
• System libraries shared via mapping into virtual address space
• Shared memory by mapping pages read-write into virtual address space
• Pages can be shared during fork(), speeding process creation
Shared Library Using Virtual Memory
Demand Paging
• Could bring entire process into memory at load
time
• Or bring a page into memory only when it is
needed
– Less I/O needed, no unnecessary I/O
– Less memory needed
– Faster response
– More users
• Similar to paging system with swapping
• Page is needed ⇒ reference to it
– invalid reference ⇒ abort
– not-in-memory ⇒ bring to memory
• Lazy swapper – never swaps a page into memory
unless page will be needed
– Swapper that deals with pages is a pager
Basic Concepts
• With swapping, pager guesses which pages will be
used before swapping out again
• Instead, pager brings in only those pages into
memory
• How to determine that set of pages?
– Need new MMU functionality to implement demand
paging
• If pages needed are already memory resident
– No difference from non demand-paging
• If page needed and not memory resident
– Need to detect and load the page into memory from
storage
• Without changing program behavior
• Without programmer needing to change code
Valid-Invalid Bit
• With each page table entry a valid–invalid bit is associated
(v ⇒ in-memory – memory resident, i ⇒ not-in-memory)
• Initially valid–invalid bit is set to i on all entries
• Example of a page table snapshot:

• During MMU address translation, if the valid–invalid bit in the page table entry is i ⇒ page fault (a minimal sketch follows below)
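A minimal sketch (in C, not from the original notes) of how a page-table lookup might consult the valid–invalid bit; the names pte_t, lookup, and NPAGES are illustrative only, and a return value of -1 stands in for the trap that signals a page fault:

/* Hypothetical page-table entry with a valid-invalid bit, plus a lookup
 * that reports a "page fault" when the bit is i (false). */
#include <stdio.h>
#include <stdbool.h>

#define NPAGES 8

typedef struct {
    int  frame;   /* physical frame number, meaningful only if valid */
    bool valid;   /* v (true) = memory resident, i (false) = not in memory */
} pte_t;

static pte_t page_table[NPAGES];    /* zero-initialized: every entry starts as i */

/* Translate a page number; return the frame, or -1 to signal a page fault. */
int lookup(int page)
{
    if (page < 0 || page >= NPAGES || !page_table[page].valid)
        return -1;                   /* i bit -> trap to the operating system */
    return page_table[page].frame;   /* v bit -> normal translation */
}

int main(void)
{
    page_table[3] = (pte_t){ .frame = 7, .valid = true };  /* page 3 is resident */
    printf("page 3 -> %d\n", lookup(3));   /* hit: frame 7 */
    printf("page 5 -> %d\n", lookup(5));   /* fault: -1    */
    return 0;
}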
Page Table When Some Pages Are Not in Main Memory
Page Fault
• If there is a reference to a page, first reference to
that page will trap to operating system:
page fault
1. Operating system looks at another table to decide:
– Invalid reference ⇒ abort
– Just not in memory
2. Find free frame
3. Swap page into frame via scheduled disk operation
4. Reset tables to indicate page now in memory
Set validation bit = v
5. Restart the instruction that caused the page fault
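The five steps above can be sketched as a page-fault handler. The helpers below (check_reference, find_free_frame, and so on) are hypothetical stubs used only to make the control flow concrete; a real handler lives in the kernel, and this sketch assumes a free frame is available (replacement is covered later):

#include <stdio.h>

enum { LEGAL_NOT_RESIDENT, ILLEGAL };

static int  check_reference(int page)         { return LEGAL_NOT_RESIDENT; }  /* step 1: consult the OS table (stub) */
static int  find_free_frame(void)             { return 42; }                  /* step 2: free-frame list (stub)      */
static void read_page_from_disk(int p, int f) { printf("disk read: page %d -> frame %d\n", p, f); }          /* step 3 */
static void set_valid(int p, int f)           { printf("page table: page %d -> frame %d, bit = v\n", p, f); } /* step 4 */
static void restart_instruction(void)         { printf("restart the faulting instruction\n"); }               /* step 5 */

void handle_page_fault(int page)
{
    if (check_reference(page) == ILLEGAL) {   /* 1. invalid reference -> abort */
        printf("abort process\n");
        return;
    }
    int frame = find_free_frame();            /* 2. find a free frame */
    read_page_from_disk(page, frame);         /* 3. scheduled disk operation */
    set_valid(page, frame);                   /* 4. reset tables, set valid bit = v */
    restart_instruction();                    /* 5. re-execute the instruction that faulted */
}

int main(void) { handle_page_fault(5); return 0; }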
Steps in Handling a Page Fault
Aspects of Demand Paging
• Extreme case – start process with no pages in memory
– OS sets instruction pointer to first instruction of process,
non-memory-resident -> page fault
– And the same for every other page of the process on its first access
– Pure demand paging
• Actually, a given instruction could access multiple pages
-> multiple page faults
– Consider fetch and decode of instruction which adds 2
numbers from memory and stores result back to memory
– Pain decreased because of locality of reference
• Hardware support needed for demand paging
– Page table with valid / invalid bit
– Secondary memory (swap device with swap space)
– Instruction restart
Instruction Restart
• Consider an instruction that could access several different locations
– block move
– auto increment/decrement location
– Restart the whole operation?
• What if source and destination overlap?
Performance of Demand Paging
• Stages in Demand Paging (worst case)
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page
on the disk
5. Issue a read from the disk to a free frame:
1. Wait in a queue for this device until the read request is serviced
2. Wait for the device seek and/or latency time
3. Begin the transfer of the page to a free frame
6. While waiting, allocate the CPU to some other user
7. Receive an interrupt from the disk I/O subsystem (I/O completed)
8. Save the registers and process state for the other user
9. Determine that the interrupt was from the disk
10. Correct the page table and other tables to show page is now in memory
11. Wait for the CPU to be allocated to this process again
12. Restore the user registers, process state, and new page table, and then resume
the interrupted instruction
Performance of Demand Paging (Cont.)
• Three major activities
– Service the interrupt – careful coding means just several hundred
instructions needed
– Read the page – lots of time
– Restart the process – again just a small amount of time
• Page Fault Rate 0 ≤ p ≤ 1
– if p = 0 no page faults
– if p = 1, every reference is a fault
• Effective Access Time (EAT)
EAT = (1 – p) x memory access
+ p (page fault overhead
+ swap page out
+ swap page in )
Demand Paging Example
• Memory access time = 200 nanoseconds
• Average page-fault service time = 8
milliseconds
• EAT = (1 – p) x 200 + p (8 milliseconds)
= (1 – p) x 200 + p x 8,000,000
= 200 + p x 7,999,800
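A small sketch (assumed values only) that evaluates the EAT formula above with the 200 ns memory access time and 8 ms fault service time from the example; the sample fault rates are chosen just to show how quickly EAT grows with p:

#include <stdio.h>

int main(void)
{
    const double mem_ns   = 200.0;         /* memory access time (ns)      */
    const double fault_ns = 8000000.0;     /* 8 ms page-fault service time */
    const double rates[]  = { 0.0, 0.001, 0.0000025 };   /* sample values of p */
    int n = sizeof rates / sizeof rates[0];

    for (int i = 0; i < n; i++) {
        double p   = rates[i];
        double eat = (1.0 - p) * mem_ns + p * fault_ns;   /* EAT = (1-p)*200 + p*8,000,000 */
        printf("p = %.7f  ->  EAT = %.1f ns\n", p, eat);
    }
    return 0;
}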
What Happens if There is no Free Frame?
• Used up by process pages
• Also in demand from the kernel, I/O buffers, etc
• How much to allocate to each?
• Page replacement – find some page in memory,
but not really in use, page it out
– Algorithm – terminate? swap out? replace the page?
– Performance – want an algorithm which will result
in minimum number of page faults
• Same page may be brought into memory
several times
Page Replacement
• Prevent over-allocation of memory by modifying page-fault service routine to include page replacement
• Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk
• Page replacement completes separation between logical memory and physical memory – large virtual memory can be provided on a smaller physical memory
Need For Page Replacement
Basic Page Replacement
1. Find the location of the desired page on disk
2. Find a free frame:
- If there is a free frame, use it
- If there is no free frame, use a page replacement algorithm to select a victim frame
- Write victim frame to disk if dirty
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Continue the process by restarting the instruction that caused the trap

Note now potentially 2 page transfers per page fault – increasing EAT (a sketch of step 2 follows below)
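A sketch of step 2 above: obtain a frame, selecting a victim and writing it back only when its modify (dirty) bit is set. The frame table, the fixed victim choice, and write_to_disk are hypothetical stubs standing in for a real replacement algorithm and real I/O:

#include <stdio.h>
#include <stdbool.h>

#define NFRAMES 4

typedef struct {
    int  page;     /* page currently held, or -1 if the frame is free */
    bool dirty;    /* modify bit: set if the page was written since it was loaded */
} frame_t;

static frame_t frames[NFRAMES] = { {2, false}, {9, true}, {4, false}, {7, true} };

static int  select_victim(void)     { return 1; }   /* stand-in for FIFO / LRU / clock */
static void write_to_disk(int page) { printf("write back dirty page %d\n", page); }

int get_frame(void)
{
    for (int f = 0; f < NFRAMES; f++)        /* use a free frame if one exists */
        if (frames[f].page == -1)
            return f;

    int f = select_victim();                 /* otherwise run the replacement algorithm */
    if (frames[f].dirty)                     /* only modified pages cost a second transfer */
        write_to_disk(frames[f].page);
    frames[f].page  = -1;
    frames[f].dirty = false;
    return f;
}

int main(void)
{
    printf("got frame %d\n", get_frame());
    return 0;
}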
Page Replacement
Page and Frame Replacement Algorithms

• Frame-allocation algorithm determines
– How many frames to give each process
– Which frames to replace
• Page-replacement algorithm
– Want lowest page-fault rate on both first access and re-access
• Evaluate algorithm by running it on a particular string of memory
references (reference string) and computing the number of page faults
on that string
– String is just page numbers, not full addresses
– Repeated access to the same page does not cause a page fault
– Results depend on number of frames available
• In all our examples, the reference string of referenced page numbers is
7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
Graph of Page Faults Versus The Number of Frames
First-In-First-Out (FIFO) Algorithm
• Reference string: 7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1
• 3 frames (3 pages can be in memory at a time per process)

15 page faults (see the simulation sketch below)
• Can vary by reference string: consider 1,2,3,4,1,2,5,1,2,3,4,5
– Adding more frames can cause more page faults!
• Belady’s Anomaly
• How to track ages of pages?
– Just use a FIFO queue
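A small FIFO simulation (a sketch, not part of the original notes): it replays the reference string above with 3 frames and counts faults; with these inputs it reports 15, matching the slide:

#include <stdio.h>

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int nrefs  = sizeof refs / sizeof refs[0];
    int frames[3] = {-1, -1, -1};
    int next   = 0;       /* index of the oldest frame (head of the FIFO queue) */
    int faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) hit = 1;
        if (!hit) {                       /* page fault: load into the oldest slot */
            frames[next] = refs[i];
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO, 3 frames: %d page faults\n", faults);   /* expected: 15 */
    return 0;
}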
FIFO Illustrating Belady’s Anomaly
Optimal Algorithm
• Replace page that will not be used for longest period of time
• The first three references cause faults that fill the three empty frames
• The reference to page 2 replaces page 7, because page 7 will not be used until reference 18, whereas page 0 will be used at 5, and page 1 at 14
9 page faults (see the simulation sketch below)
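A sketch of the optimal (OPT) policy on the same string with 3 frames: on a fault, evict the resident page whose next use lies farthest in the future (or that is never used again). With these inputs it reports 9 faults, matching the slide:

#include <stdio.h>

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof refs / sizeof refs[0];
    int frames[3] = {-1, -1, -1};
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = -1;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[i]) hit = f;
        if (hit >= 0) continue;

        faults++;
        int victim = -1, farthest = -1;
        for (int f = 0; f < 3; f++) {
            if (frames[f] == -1) { victim = f; break; }    /* still a free frame */
            int next = n;                                  /* index of next use (n = never again) */
            for (int j = i + 1; j < n; j++)
                if (refs[j] == frames[f]) { next = j; break; }
            if (next > farthest) { farthest = next; victim = f; }
        }
        frames[victim] = refs[i];
    }
    printf("OPT, 3 frames: %d page faults\n", faults);     /* expected: 9 */
    return 0;
}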
Least Recently Used (LRU) Algorithm

• Use past knowledge rather than future
• Replace page that has not been used for the longest period of time
• Associate time of last use with each page
12 page faults (see the simulation sketch below)
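A sketch of LRU on the same string with 3 frames, using the counter idea from the next slide: each resident page is stamped with the time of its last use, and the smallest stamp is evicted. With these inputs it reports 12 faults, matching the slide:

#include <stdio.h>

int main(void)
{
    int refs[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof refs / sizeof refs[0];
    int frames[3]   = {-1, -1, -1};
    int last_use[3] = { 0,  0,  0};
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int f = 0; f < 3; f++)
            if (frames[f] == refs[t]) hit = f;
        if (hit >= 0) {
            last_use[hit] = t;                    /* refresh the "clock" on a hit */
            continue;
        }
        faults++;
        int victim = 0;
        for (int f = 0; f < 3; f++) {
            if (frames[f] == -1) { victim = f; break; }       /* free frame first    */
            if (last_use[f] < last_use[victim]) victim = f;   /* least recently used */
        }
        frames[victim]   = refs[t];
        last_use[victim] = t;
    }
    printf("LRU, 3 frames: %d page faults\n", faults);        /* expected: 12 */
    return 0;
}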
LRU Algorithm (Cont.)
• Counter implementation
– Every page entry has a counter; every time page is referenced
through this entry, copy the clock into the counter
– When a page needs to be changed, look at the counters to find
smallest value
• Search through table needed
• Stack implementation
– Keep a stack of page numbers in a doubly linked form:
– Page referenced:
• move it to the top
• requires 6 pointers to be changed
– But each update more expensive
– No search for replacement
• LRU and OPT are cases of stack algorithms that don’t have
Belady’s Anomaly
Use Of A Stack to Record Most Recent Page References
LRU Approximation Algorithms
• LRU needs special hardware and still slow
• Reference bit
– With each page associate a bit, initially = 0
– When page is referenced bit set to 1
– Replace any with reference bit = 0 (if one exists)
• We do not know the order, however
• Second-chance algorithm
– Generally FIFO, plus hardware-provided reference bit
– Clock replacement
– If page to be replaced has
• Reference bit = 0 -> replace it
• reference bit = 1 then:
– set reference bit 0, leave page in memory
– replace next page, subject to same rules
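A sketch of the second-chance (clock) scheme described above: sweep a circular list of frames; a reference bit of 1 buys the page another pass (the bit is cleared), and the first page found with the bit at 0 becomes the victim. The frame contents and reference bits below are illustrative values only:

#include <stdio.h>

#define NFRAMES 4

static int page[NFRAMES]    = { 3, 8, 5, 1 };   /* resident pages             */
static int ref_bit[NFRAMES] = { 1, 1, 0, 1 };   /* hardware reference bits    */
static int hand = 0;                            /* position of the clock hand */

int clock_victim(void)
{
    for (;;) {
        if (ref_bit[hand] == 0) {               /* second chance already used up */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;                      /* clear the bit: give a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void)
{
    int v = clock_victim();
    printf("replace frame %d (page %d)\n", v, page[v]);   /* frame 2, page 5 */
    return 0;
}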
Second-Chance (clock) Page-Replacement Algorithm
Enhanced Second-Chance Algorithm

• Improve algorithm by using reference bit and modify bit (if available) in concert
• Take ordered pair (reference, modify)
1. (0, 0) neither recently used nor modified – best page to replace
2. (0, 1) not recently used but modified – not quite as good, must write out before replacement
3. (1, 0) recently used but clean – probably will be used again soon
4. (1, 1) recently used and modified – probably will be used again soon and need to write out before replacement
• When page replacement called for, use the clock scheme but use the four classes: replace the page in the lowest non-empty class
– Might need to search circular queue several times
Counting Algorithms
• Keep a counter of the number of references that have been made to each page
– Not common
• Least Frequently Used (LFU) Algorithm: replaces page with smallest count
• Most Frequently Used (MFU) Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used
Page-Buffering Algorithms
• Keep a pool of free frames, always
– Then frame available when needed, not found at fault
time
– Read page into free frame and select victim to evict
and add to free pool
– When convenient, evict victim
• Possibly, keep list of modified pages
– When backing store otherwise idle, write pages there
and set to non-dirty
• Possibly, keep free frame contents intact and note
what is in them
– If referenced again before reused, no need to load
contents again from disk
– Generally useful to reduce penalty if wrong victim
frame selected
Applications and Page Replacement

• All of these algorithms have OS guessing about future page access
• Some applications have better knowledge – e.g., databases
• Memory intensive applications can cause double buffering
– OS keeps copy of page in memory as I/O buffer
– Application keeps page in memory for its own work
• Operating system can give direct access to the disk, getting out of the way of the applications
– Raw disk mode
• Bypasses buffering, locking, etc.
Allocation of Frames
• Each process needs minimum number of frames
• There are various constraints to the strategies for the
allocation of frames:
• You cannot allocate more than the total number of
available frames.
• At least a minimum number of frames should be
allocated to each process.
This constraint is supported by two reasons.
• The first reason is that, as fewer frames are allocated, the page-fault
rate increases, degrading the performance of the process.
• Secondly, there should be enough frames to hold all the
different pages that any single instruction can reference.
Allocation Algorithms
The two algorithms commonly used to allocate frames to a process are:
Equal allocation – For example, if there are 100 frames (after allocating frames for
the OS) and 5 processes, give each process 20 frames
– Keep some as free frame buffer pool
Proportional allocation – Allocate according to the size of process
– Dynamic as degree of multiprogramming and process sizes change
si = size of process pi
S = Σ si
m = total number of frames
ai = allocation for pi = (si / S) x m
With proportional allocation, we would split 62 frames between two processes,
one of 10 pages and one of 127 pages, by allocating 4 frames and 57 frames,
respectively (see the sketch below)
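A sketch of the proportional split in the example above: 62 frames divided between a 10-page and a 127-page process using ai = (si / S) x m, truncated to whole frames:

#include <stdio.h>

int main(void)
{
    int s[] = { 10, 127 };        /* process sizes in pages */
    int m   = 62;                 /* total frames available */
    int S   = 0;

    for (int i = 0; i < 2; i++)
        S += s[i];                /* S = sum of the sizes = 137 */

    for (int i = 0; i < 2; i++) {
        int a = s[i] * m / S;     /* integer truncation gives the 4 and 57 in the slide */
        printf("process of %3d pages -> %2d frames\n", s[i], a);
    }
    return 0;
}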
Priority Allocation
• Use a proportional allocation scheme
using priorities rather than size

• If process Pi generates a page fault,
– select for replacement one of its frames
– select for replacement a frame from a process with lower priority number
Global vs. Local Allocation
• Global replacement – process selects a
replacement frame from the set of all frames;
one process can take a frame from another
– But then process execution time can vary greatly
– But greater throughput so more common
• Local replacement – each process selects from
only its own set of allocated frames
– More consistent per-process performance
– But possibly underutilized memory
Non-Uniform Memory Access
• So far all memory accessed equally
• Many systems are NUMA – speed of access to
memory varies
– Consider system boards containing CPUs and memory,
interconnected over a system bus
• Optimal performance comes from allocating memory
“close to” the CPU on which the thread is scheduled
– And modifying the scheduler to schedule the thread on the
same system board when possible
– Solved by Solaris by creating lgroups
• Structure to track CPU / Memory low latency groups
• Used by scheduler and pager
• When possible schedule all threads of a process and allocate all
memory for that process within the lgroup
Thrashing
• If a process does not have “enough” pages, the page-fault
rate is very high
– Page fault to get page
– Replace existing frame
– But quickly need replaced frame back
– This leads to:
• Low CPU utilization
• Operating system thinking that it needs to increase the degree of
multiprogramming
• Another process added to the system

• Thrashing ≡ a process is busy swapping pages in and out
• A process is thrashing if it is spending more time paging than executing.
Thrashing (Cont.)
Demand Paging and Thrashing
• Why does demand paging work?
Locality model
– Process migrates from one locality to
another
– Localities may overlap

• Why does thrashing occur?
Σ size of locality > total memory size
– Limit effects by using local or priority page replacement
Locality In A Memory-Reference Pattern
Working-Set Model
• Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
• WSSi (working set of Process Pi) = total number of pages referenced in the
most recent Δ (varies in time; a small computation is sketched after this list)
– if Δ too small will not encompass entire locality
– if Δ too large will encompass several localities
– if Δ = ∞ will encompass entire program
• D = Σ WSSi ≡ total demand frames
– Approximation of locality
• if D > m ⇒ Thrashing
– Policy: if D > m, then suspend or swap out one of the processes
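A small sketch of the working-set idea defined above: for a reference trace and a window Δ, the working-set size at time t is the number of distinct pages among the most recent Δ references. The trace and the value of Δ below are illustrative only:

#include <stdio.h>
#include <stdbool.h>

#define MAXPAGE 16

int wss(const int *refs, int t, int delta)
{
    bool seen[MAXPAGE] = { false };
    int  count = 0;
    int  start = (t - delta + 1 > 0) ? t - delta + 1 : 0;   /* start of the window */

    for (int i = start; i <= t; i++)          /* count distinct pages in the window */
        if (!seen[refs[i]]) { seen[refs[i]] = true; count++; }
    return count;
}

int main(void)
{
    int trace[] = { 1, 2, 1, 5, 7, 7, 7, 5, 1, 2 };
    int n = sizeof trace / sizeof trace[0];
    int delta = 5;

    for (int t = delta - 1; t < n; t++)
        printf("t = %d  WSS = %d\n", t, wss(trace, t, delta));
    return 0;
}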
Keeping Track of the Working Set
• Approximate with interval timer + a reference bit
• Example: Δ = 10,000
– Timer interrupts after every 5,000 time units
– Keep in memory 2 bits for each page
– Whenever a timer interrupts, copy and set the values of all reference bits to 0
– If one of the bits in memory = 1 ⇒ page in working set
• Why is this not completely accurate?
• Improvement = 10 bits and interrupt every 1000
time units
Page-Fault Frequency
• More direct approach than WSS
• Establish “acceptable” page-fault frequency (PFF) rate and use local
replacement policy
– If actual rate too low, process loses frame
– If actual rate too high, process gains frame
• If the actual page-fault rate exceeds the upper limit, allocate the process another
frame
• If the page-fault rate falls below the lower limit, remove a frame from the process.
• Directly measure and control the page-fault rate to prevent thrashing.
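A sketch of the page-fault-frequency control loop described above: compare a process's measured fault rate against an upper and a lower bound and adjust its allocation. The bounds, the sample rates, and adjust_frames are all hypothetical stand-ins:

#include <stdio.h>

static void adjust_frames(int delta) { printf("change allocation by %+d frame(s)\n", delta); }

void pff_check(double fault_rate)
{
    const double upper = 0.05;    /* "too many faults" threshold (illustrative) */
    const double lower = 0.01;    /* "too few faults" threshold (illustrative)  */

    if (fault_rate > upper)
        adjust_frames(+1);        /* thrashing risk: give the process another frame */
    else if (fault_rate < lower)
        adjust_frames(-1);        /* more frames than needed: take one away */
}

int main(void)
{
    pff_check(0.08);    /* above the upper limit */
    pff_check(0.002);   /* below the lower limit */
    return 0;
}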
