OS Presentation Chapter 9
Uploaded by Ayesha Masood

Virtual Memory

Submitted To: Dr. Shoaib
Submitted By: 2021-MSCS-03, 2021-MSCS-07, 2021-MSCS-12, 2021-MSCS-16, 2021-MSCS-33

Outline
● Background
● Demand Paging
● Copy-on-Write
● Page Replacement
● Allocation of Frames
● Thrashing
● Allocating Kernel Memory
● Other Considerations
● Operating-System Examples

What is virtual memory?
A computer can address more memory than is physically installed on the system; this extra addressable memory is called virtual memory.
Purposes:
❑ Extend the effective size of physical memory by using disk as backing store.
❑ Provide memory protection, because each virtual address is translated to a physical address.

Demand Paging
• Demand paging: bring a page into memory only when it is needed.
– Less memory needed
– Faster response
– More users
– No fixed limit on the degree of multiprogramming.
• Page is needed ⇒ reference to it
– invalid reference ⇒ abort
– not in memory ⇒ bring into memory

Copy on Write
• Copy-on-Write (COW) allows parent and child processes to initially share the same pages in memory. Only when either process modifies a shared page is that page copied.
• COW allows more efficient process creation, since only modified pages are copied.
• Free pages are allocated from a pool of zeroed-out pages.
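The share-then-copy behaviour can be sketched with a toy model. This is illustrative only: real COW happens at the page-table/MMU level, and the class names (CowPage, Process) are invented for the sketch.

```python
# Toy copy-on-write model: fork shares page objects; a write copies the page.
# CowPage/Process are made-up names, not real kernel structures.

class CowPage:
    def __init__(self, data):
        self.data = data

class Process:
    def __init__(self, pages):
        self.pages = pages                  # page table: index -> CowPage

    def fork(self):
        # Child gets its own page table, but entries point at the SAME pages.
        return Process(list(self.pages))

    def write(self, i, data):
        # Copy-on-write: replace the shared page with a private copy.
        self.pages[i] = CowPage(data)

parent = Process([CowPage(b"a"), CowPage(b"b")])
child = parent.fork()
print(parent.pages[0] is child.pages[0])    # -> True  (shared after fork)
child.write(0, b"c")
print(parent.pages[0] is child.pages[0])    # -> False (copied on write)
```

Pages the child never writes (index 1 here) stay shared for the lifetime of both processes, which is where the efficiency comes from.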

Page Replacement
• Find the location of the desired page on disk.
• If there is a free frame, use it; otherwise, use a page-replacement algorithm to select a victim frame, write the victim to disk if needed, and update the page and frame tables.
• Bring the desired page into the (newly) free frame; update the page and frame tables.
• Restart the process.
• If no frames are free, two page transfers (one out and one in) are required.
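The steps above can be sketched as a single fault-handling routine. This is a minimal model: the victim-selection policy is passed in as a function, and disk I/O is only hinted at in comments.

```python
# Sketch of page-fault handling: use a free frame if one exists,
# otherwise evict a victim chosen by the given replacement policy.

def handle_fault(page, frames, capacity, choose_victim):
    """Return True if a page fault occurred while accessing `page`."""
    if page in frames:
        return False                        # page already resident: no fault
    if len(frames) >= capacity:             # no free frame
        victim = choose_victim(frames)      # page-replacement algorithm
        frames.remove(victim)               # (a dirty victim is written to disk)
    frames.append(page)                     # bring page in; update tables
    return True                             # then restart the faulting access

frames = []
evict_oldest = lambda f: f[0]               # simple FIFO-style policy for the demo
faults = sum(handle_fault(p, frames, 3, evict_oldest)
             for p in [1, 2, 3, 4, 1, 2])
print(faults)                               # -> 6 (every access faults here)
```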

Page Replacement Algorithms
1. FIFO
• Pages in main memory are kept in a queue.
• The newest page is at the tail and the oldest at the head; the oldest page is evicted first.
• It does not take advantage of page-access patterns or frequency.
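A short simulation (assuming the usual queue convention of evicting the page that entered first) shows FIFO's fault count on Belady's classic reference string, including the anomaly where adding frames increases faults:

```python
from collections import deque

def fifo_faults(refs, capacity):
    """Count page faults under FIFO replacement."""
    queue = deque()                            # left end = oldest resident page
    resident = set()
    faults = 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == capacity:
            resident.remove(queue.popleft())   # evict the oldest page
        queue.append(page)
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # -> 9
print(fifo_faults(refs, 4))   # -> 10 (Belady's anomaly: more frames, more faults)
```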

Replacement Algorithms Cont..
2. Least Recently Used (LRU)
• It evicts the page that has not been used for the longest period of time.
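LRU can be simulated by keeping resident pages in recency order; a sketch using an ordered map (one of several ways to track recency):

```python
from collections import OrderedDict

def lru_faults(refs, capacity):
    """Count page faults under LRU replacement."""
    resident = OrderedDict()              # first item = least recently used
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)    # touched: now most recently used
            continue
        faults += 1
        if len(resident) == capacity:
            resident.popitem(last=False)  # evict the least recently used page
        resident[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))   # -> 10
```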

Replacement Algorithms Cont..
3. Optimal Replacement (OPT)
• When memory is full, evict the page that will not be referenced for the longest time.
• This requires knowing future references, so OPT cannot be implemented in practice; it is used as a benchmark for other algorithms.

Allocating Kernel Memory
● Treated differently from user memory
● Often allocated from a free-memory pool
● Kernel requests memory for structures of varying sizes
● Some kernel memory needs to be contiguous
▪ e.g. for device I/O

Buddy System
• Allocates memory from a fixed-size segment consisting of physically contiguous pages
• Memory allocated using power-of-2 allocator
• Satisfies requests in units sized as power of 2
• Request rounded up to next highest power of 2
Buddy System Cont..
• When smaller allocation needed than is available, current chunk split
into two buddies of next-lower power of 2
• Continue until appropriate sized chunk available
• For example, assume 256KB chunk available, kernel requests 21KB
• Split into AL and AR of 128KB each
• One further divided into BL and BR of 64KB
– One further into CL and CR of 32KB each – one used to satisfy request
• Advantage – quickly coalesce unused chunks into larger chunk
• Disadvantage - fragmentation
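The 256KB/21KB walk above can be checked with a small sketch (the helper names are made up for illustration):

```python
# Buddy-system sketch: round the request up to a power of 2, then record
# the chunk sizes produced while halving 256 KB down to that fit.

def round_up_pow2(n_kb):
    """Next power of 2 >= n_kb (the allocation unit actually used)."""
    return 1 << (n_kb - 1).bit_length()

def split_chain(chunk_kb, request_kb):
    """Chunk sizes seen while splitting chunk_kb down to the fit."""
    fit = round_up_pow2(request_kb)
    sizes = [chunk_kb]
    while chunk_kb > fit:
        chunk_kb //= 2                  # split into two equal buddies
        sizes.append(chunk_kb)
    return sizes

print(round_up_pow2(21))      # -> 32
print(split_chain(256, 21))   # -> [256, 128, 64, 32]
```

The gap between the 21 KB request and the 32 KB unit that serves it is exactly the internal fragmentation the slide warns about.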
Buddy System Allocator (figure omitted)
Slab Allocator
• Alternate strategy
• Slab is one or more physically contiguous pages
• Cache consists of one or more slabs
• Single cache for each unique kernel data structure
⮚Each cache filled with objects – instantiations of the data structure
• When cache created, filled with objects marked as free
• When structures stored, objects marked as used
⮚If slab is full of used objects, next object allocated from empty slab
• If no empty slabs, new slab allocated
• Benefits include no fragmentation, fast memory request satisfaction
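A toy cache for one object type illustrates the free/used bookkeeping described above (class and method names are invented for the sketch):

```python
# Toy slab cache for a single kernel object type (illustrative only).

class Slab:
    def __init__(self, size):
        self.free = list(range(size))   # indices of free object slots
        self.used = set()

class SlabCache:
    def __init__(self, objects_per_slab=4):
        self.objects_per_slab = objects_per_slab
        self.slabs = [Slab(objects_per_slab)]   # start with one empty slab

    def alloc(self):
        for slab in self.slabs:                 # prefer a slab with a free object
            if slab.free:
                obj = slab.free.pop()
                slab.used.add(obj)
                return (slab, obj)
        slab = Slab(self.objects_per_slab)      # all slabs full: allocate a new one
        self.slabs.append(slab)
        obj = slab.free.pop()
        slab.used.add(obj)
        return (slab, obj)

    def release(self, handle):
        slab, obj = handle                      # freed object returns to its slab
        slab.used.remove(obj)
        slab.free.append(obj)

cache = SlabCache(objects_per_slab=2)
h1, h2 = cache.alloc(), cache.alloc()   # fills the first slab
h3 = cache.alloc()                      # no free object left: new slab allocated
print(len(cache.slabs))                 # -> 2
```

Because objects are pre-sized for one structure, allocation is a pop from a free list, which is where the "no fragmentation, fast satisfaction" benefits come from.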
Slab Allocation (figure omitted)
Slab Allocator in Linux
● For example process descriptor is of type struct task_struct
● Approx 1.7KB of memory
● New task -> allocate new struct from cache
● Will use existing free struct task_struct
● Slab can be in three possible states
❖ Full – all used
❖ Empty – all free
❖ Partial – mix of free and used
● Upon request, slab allocator
❖ Uses free struct in partial slab
❖ If none, takes one from empty slab
❖ If there is no empty slab, allocates a new empty one
Slab Allocator in Linux (Cont.)
● The slab allocator started in Solaris and is now widespread for both kernel-mode and user memory in various OSes
● Linux 2.2 had SLAB; Linux now has both SLOB and SLUB allocators
● SLOB is for systems with limited memory
● Simple List Of Blocks – maintains three lists, for small, medium, and large objects
● SLUB is a performance-optimized SLAB: it removes the per-CPU queues and stores metadata in the page structure
Allocation of frames
● What are frames?
The main memory of the system is divided into frames. The OS has to
allocate a sufficient number of frames for each process.
● What is frame allocation?
The policy that decides how many frames to allocate to each process.
● Frame allocation algorithms
Frame allocation algorithms are used when there are multiple processes; they decide how many frames each process receives.
● The two algorithms commonly used to allocate frames to a process are equal allocation and proportional allocation.

Allocation of frames
• Equal allocation
– In a system with x frames and y processes, each process gets equal
number of frames, i.e. x/y. For instance, if the system has 48 frames
and 9 processes, each process will get 5 frames. The three frames
which are not allocated to any process can be used as a free-frame
buffer pool.
• Disadvantage
– In systems with processes of varying sizes, it makes little sense to give each process the same number of frames: a small process allocated many frames simply wastes the unused ones.

Allocation of frames
• Proportional allocation:
– Frames are allocated to each process according to the process size.
– For a process pi of size si, the number of allocated frames is ai = ⌊(si/S)·m⌋, where S is the sum of the sizes of all processes and m is the number of frames in the system. For instance, in a system with 62 frames, if there is a process of 10 KB and another of 127 KB, the first process is allocated ⌊(10/137)·62⌋ = 4 frames and the other gets ⌊(127/137)·62⌋ = 57 frames.
• Advantage
– All the processes share the available frames according to their needs,
rather than equally.
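The worked numbers above follow directly from the formula (sizes in KB, integer floor division):

```python
# Proportional frame allocation: a_i = floor((s_i / S) * m),
# where S = sum of process sizes and m = total frames.

def proportional(sizes, m):
    S = sum(sizes)
    return [s * m // S for s in sizes]

print(proportional([10, 127], 62))   # -> [4, 57]
```

Note the floor leaves one frame unassigned here (4 + 57 = 61 of 62); leftover frames can go to a free-frame pool, as with equal allocation.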

• Local replacement
– When a process needs a page that is not in memory, it brings the new page in and allocates it a frame from its own set of allocated frames only.
Allocation of frames
• Global replacement
– A faulting process may take a replacement frame from the set of all frames, even one currently allocated to another process, so a process under memory pressure can grow at another's expense.

Thrashing
• If page faults and swapping happen very frequently, the operating system spends more time swapping pages than doing useful work. This state is termed thrashing.

• Page fault
– If a process does not have the number of frames it needs to support the pages in active use, it will page-fault quickly and repeatedly.

Thrashing
• Paging
– Divide a process into pages and bring them into memory on demand.
• Effect of thrashing
– When thrashing starts, the operating system tries to apply either the global or the local page-replacement algorithm.
• Solutions to thrashing
– Increase main memory size: add physical memory as requirements grow.
– Long-term scheduler: reduce the degree of multiprogramming.

Other Considerations
• Prepaging
– Reduces the large number of page faults that occur at process startup
– Prepage all or some of the pages a process will need before they are referenced; but if the prepaged pages go unused, the I/O and memory were wasted
– Assume s pages are prepaged and a fraction α of them is used
– Is the cost of the s·α saved page faults greater or less than the cost of prepaging s·(1−α) unnecessary pages?
– If α is near zero, prepaging loses
• Page size selection involves trade-offs among:
– Fragmentation, page-table size, resolution, I/O overhead, number of page faults, locality, TLB size and effectiveness
– Always a power of 2, usually in the range 2^12 (4,096 bytes) to 2^22 (4,194,304 bytes)
– On average, growing over time
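The prepaging trade-off above can be checked numerically. For simplicity this sketch assumes servicing one fault and prepaging one page cost the same; the cost units are arbitrary.

```python
# Prepaging trade-off: prepaging s pages of which a fraction alpha is used
# saves s*alpha fault costs but wastes I/O on s*(1-alpha) unused pages.

def prepaging_wins(s, alpha, fault_cost, prepage_cost):
    saved = s * alpha * fault_cost            # page faults avoided
    wasted = s * (1 - alpha) * prepage_cost   # useless prepaging I/O
    return saved > wasted

print(prepaging_wins(100, 0.9, 1.0, 1.0))   # -> True  (most prepaged pages used)
print(prepaging_wins(100, 0.1, 1.0, 1.0))   # -> False (alpha near zero: loses)
```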
Other Considerations
• TLB reach – the amount of memory accessible from the TLB
– TLB reach = (TLB size) × (page size)
– Ideally, the working set of each process is stored in the TLB
- Otherwise there is a high rate of page faults
– Increase the Page Size
- This may lead to an increase in fragmentation as not all
applications require a large page size
– Provide Multiple Page Sizes
- This allows applications that require larger page sizes the
opportunity to use them without an increase in fragmentation
– I/O Interlock
– Pinning
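The TLB-reach formula is a one-liner; for example, a hypothetical 64-entry TLB with 4 KB pages reaches 256 KB:

```python
# TLB reach = number of TLB entries x page size (entry count is illustrative).

def tlb_reach(entries, page_size_bytes):
    return entries * page_size_bytes

print(tlb_reach(64, 4096) // 1024)   # -> 256 (KB covered without a TLB miss)
```

This also shows why the slide suggests larger or multiple page sizes: doubling the page size doubles the reach without adding TLB entries.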