Cs3451-Unit 3 Os Notes
Main Memory - Swapping - Contiguous Memory Allocation - Paging - Structure of the Page
Table - Segmentation, Segmentation with Paging; Virtual Memory - Demand Paging - Copy
on Write - Page Replacement - Allocation of Frames - Thrashing.
Address Binding
Address binding of instructions and data to memory addresses can happen at three stages:
1. Compile time: If the memory location is known a priori, absolute code can be generated.
2. Load time: Must generate relocatable code if the memory location is not known at
compile time.
3. Execution time: Binding is delayed until run time; needs hardware support for address
maps (e.g., base and limit registers).
The user program deals with logical addresses; it never sees the real
physical addresses.
Dynamic Loading
Dynamic Linking
Linking is postponed until execution time and is particularly useful for libraries.
A small piece of code called a stub is used to locate the appropriate memory-resident
library routine or function.
Overlays:
At a given time, only the instructions and data that are needed are kept in memory.
Swapping
The purpose of swapping in an operating system is to access data present on the
hard disk and bring it into RAM so that application programs can use it. The thing
to remember is that swapping is used only when the data is not present in RAM.
Swapping consists of two operations: swap-out and swap-in.
Swap-out is the method of removing a process from RAM and adding it to the hard disk.
Swap-in is the method of bringing a process from the hard disk back into main memory
(RAM).
Fig.3.3. Swapping
Example: Suppose the user process's size is 2048 KB and the standard hard disk used for
swapping has a data transfer rate of 1 MBps (1024 KB per second). The time to transfer
the process from main memory to secondary memory is 2048 KB / 1024 KBps = 2 seconds, so
a complete swap (swap-out plus swap-in) takes about 4 seconds.
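The calculation above can be sketched as follows (a minimal example, assuming 1 MBps means 1024 KB per second):

```python
# Swap-time calculation for the worked example above.
# Assumption: the transfer rate is expressed in KB per second.

def swap_transfer_time(process_kb, rate_kb_per_s):
    """Time in seconds to move a process between RAM and disk one way."""
    return process_kb / rate_kb_per_s

one_way = swap_transfer_time(2048, 1024)   # swap-out only
round_trip = 2 * one_way                   # swap-out plus swap-in
print(one_way, round_trip)                 # 2.0 4.0
```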
Disadvantages of Swapping
1. If the computer system loses power during substantial swapping activity, the user
may lose all information related to the program.
2. If the swapping algorithm is not good, it can increase the number of page faults
and decrease overall processing performance.
Contiguous memory allocation
Most systems allow a program to allocate more memory to its address space during
execution. Data allocated in the heap segment of a program is an example of such
allocated memory. What is required to support dynamic memory allocation in the
following schemes?
i) Continuous memory allocation
Memory Protection:
Each process is contained in a single contiguous section of memory. There are two
methods namely:
Fixed–Partition Method:
Divide memory into fixed size partitions, where each partition has exactly one
process.
Variable-partition Method:
Divide memory into variable size partitions, depending upon the size of the
incoming process.
Solution:
First-fit: Allocate the first hole that is big enough.
Best-fit: Allocate the smallest hole that is big enough; must search the entire
list, unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit: Allocate the largest hole; must also search the entire list.
To reduce external fragmentation:
Compaction: Move all processes toward one end of memory and all holes toward the
other end, producing one large hole of available memory.
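The hole-selection strategies can be sketched as follows (a toy model; the hole sizes and request size are made-up illustration values):

```python
def allocate(holes, size, strategy="first"):
    """Return the index of the chosen free hole, or None if nothing fits.

    holes: list of free-hole sizes; size: the request size.
    """
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # earliest hole that fits
    if strategy == "best":
        return min(candidates)[1]   # smallest hole that fits
    return max(candidates)[1]       # "worst": largest hole

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1 (the 500 KB hole)
print(allocate(holes, 212, "best"))   # 3 (the 300 KB hole)
print(allocate(holes, 212, "worst"))  # 4 (the 600 KB hole)
```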
Paging.
It avoids the considerable problem of fitting the varying size memory chunks on
to the backing store.
Page number (p) – used as an index into a page table which contains base
address of each page in physical memory
Page offset (d) – combined with base address to define the physical address
i.e., Physical address = base address +offset
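The page-number/offset translation described above can be sketched as follows (a minimal model assuming 4 KB pages and a simple dictionary page table):

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

def translate(logical_addr, page_table):
    """Translate a logical address using page_table[p] -> frame number."""
    p, d = divmod(logical_addr, PAGE_SIZE)   # page number, page offset
    frame = page_table[p]                    # base frame from the page table
    return frame * PAGE_SIZE + d             # physical = frame base + offset

page_table = {0: 5, 1: 2}
print(translate(4100, page_table))  # page 1, offset 4 -> frame 2 -> 8196
```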
Paging Hardware
When a logical address is generated by the CPU, its page number is presented to the TLB.
TLB hit: If the page number is found, its frame number is immediately
available and is used to access memory.
TLB miss: If the page number is not in the TLB, a memory reference to the
page table must be made.
Hit ratio: Percentage of times that a particular page is found in the TLB.
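The hit ratio feeds into the effective access time. A small sketch, assuming illustrative timings of 20 ns for a TLB lookup and 100 ns per memory access (a TLB miss costs one extra memory access for the page-table lookup):

```python
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    """EAT = hit * (TLB + memory) + miss * (TLB + 2 * memory).

    The timing values are illustrative, not from real hardware.
    """
    hit = hit_ratio * (tlb_ns + mem_ns)            # found in the TLB
    miss = (1 - hit_ratio) * (tlb_ns + 2 * mem_ns) # extra page-table access
    return hit + miss

print(effective_access_time(0.80))  # about 140 ns
```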
valid (v) - indicates that the associated page is in the process's logical address
space, and is thus a legal page
invalid (i) - indicates that the page is not in the process's logical address space
Segmentation.
Most systems allow a program to allocate more memory to its address space during
execution. Data allocated in the heap segment of a program is an example of such
allocated memory. What is required to support dynamic memory allocation in the
following schemes?
Pure segmentation
Segmentation
Memory-management scheme that supports user view of memory
A program is a collection of segments.
A segment is a logical unit such as: Main program, Procedure, Function, Method, Object,
Local variables, global variables, Common block, Stack, Symbol table, arrays
User's View of a Program
Fig.3.10 Logical Address Space for segmentation
Segmentation Hardware
Logical address consists of a two-tuple: <segment-number, offset>
Segment table - maps two-dimensional user-defined addresses into one-dimensional
physical addresses; each table entry has:
Base - contains the starting physical address where the segment resides in memory
Limit - specifies the length of the segment
Segment-table base register (STBR) points to the segment table's location in memory
Segment-table length register (STLR) indicates the number of segments used by a
program; segment number s is legal if s < STLR
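The base/limit translation can be sketched as follows (the segment table values are made-up illustration numbers):

```python
def seg_translate(segment_table, s, offset):
    """Translate <segment-number, offset> via a list of (base, limit) pairs."""
    if s >= len(segment_table):
        raise MemoryError("trap: segment number beyond STLR")
    base, limit = segment_table[s]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

# Illustrative table: segment 0 at base 1400 (limit 1000), segment 1 at 6300 (limit 400)
table = [(1400, 1000), (6300, 400)]
print(seg_translate(table, 1, 53))  # 6353
```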
Sharing:
Segments are shared when entries in the segment tables of two processes point to
the same physical location; shared segments must have the same segment number.
Allocation:
First fit / best fit; segmentation suffers from external fragmentation.
Protection: With each entry in the segment table associate:
validation bit = 0 means an illegal segment
Protection bits are associated with segments; code sharing occurs at the segment level.
Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
Address Translation scheme
Fig.3.11 Segmentation Hardware
EXAMPLE:
The selector is a 16-bit number:
s (13 bits) | g (1 bit) | p (2 bits)
where s designates the segment number, g indicates whether the segment is in the GDT or
LDT, and p deals with protection.
The offset is a 32-bit number specifying the location of the byte.
The base and limit information about the segment is used to generate a linear address.
First, the limit is used to check for address validity.
If the address is not valid, a memory fault is generated, resulting in a trap to the operating
system.
If it is valid, then the value of the offset is added to the value of the base, resulting in a 32-bit
linear address. This address is then translated into a physical address.
The linear address is divided into a page number consisting of 20 bits, and a page offset
consisting of 12 bits.
Since we page the page table, the page number is further divided into a 10-bit page directory
pointer and a 10-bit page table pointer.
The linear address is divided as follows:
p1 (10 bits) | p2 (10 bits) | d (12 bits)
Fig.3.14 Segmentation with Paging
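The 10/10/12 split of the linear address can be sketched with simple bit operations (a minimal illustration, not Intel-specific code):

```python
def split_linear(addr):
    """Split a 32-bit linear address into (p1, p2, d): 10 | 10 | 12 bits."""
    d  = addr & 0xFFF          # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF  # next 10 bits: page-table index
    p1 = (addr >> 22) & 0x3FF  # high 10 bits: page-directory index
    return p1, p2, d

print(split_linear(0x00403004))  # (1, 3, 4)
```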
To improve the efficiency of physical memory use.
Intel 386 page tables can be swapped to disk.
In this case, an invalid bit is used in the page directory entry to indicate whether the table
to which the entry is pointing is in memory or on disk.
If the table is on disk, the operating system can use the other 31 bits to specify the disk
location of the table; the table then can be brought into memory on demand.
4. Explain any two structures of the page table with neat diagrams.
1. Hierarchical Paging
Break up the page table into smaller pieces, because if the page table is too
large it is quite difficult to search the page number.
2. Hashed Page Tables
Each entry in the hash table contains a linked list of elements that hash to the
same location.
Each entry consists of: (a) the virtual page number, (b) the value of the mapped
page frame, and (c) a pointer to the next element in the linked list.
Working Procedure:
The virtual page number in the virtual address is hashed into the hash
table.
Virtual page number is compared to field(a) in the 1st element in the linked list.
If there is a match, the corresponding page frame (field (b)) is used to form the
desired physical address.
If there is no match, subsequent entries in the linked list are searched for
a matching virtual page number.
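The lookup procedure above can be sketched as follows (a toy model with a simple modulo hash; Python lists stand in for the linked lists):

```python
class HashedPageTable:
    """Minimal sketch: each bucket holds (vpn, frame) pairs that hash together."""

    def __init__(self, nbuckets=16):
        self.buckets = [[] for _ in range(nbuckets)]

    def insert(self, vpn, frame):
        self.buckets[vpn % len(self.buckets)].append((vpn, frame))

    def lookup(self, vpn):
        for v, f in self.buckets[vpn % len(self.buckets)]:
            if v == vpn:   # field (a) matches the virtual page number
                return f   # field (b): the mapped page frame
        return None        # no match in the chain: page fault

t = HashedPageTable()
t.insert(18, 7)
t.insert(34, 9)            # 18 and 34 collide (both hash to bucket 2)
print(t.lookup(34))        # 9
```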
3. Inverted Page Table
It has one entry for each real page (frame) of memory, and each entry consists of the
virtual address of the page stored in that real memory location, with information about
the process that owns that page. So, only one page table is in the system.
Fig.3.18 Inverted Paging
Demerit: It increases the amount of time needed to search the table when a page
reference occurs.
Shared code
One copy of read-only (reentrant) code is shared among processes (e.g., text
editors, compilers, window systems).
Shared code must appear in same location in the logical address space of
all processes
EXAMPLE:
Fig.3.19 Shared Paging
IA-32 Architecture
The CPU generates logical addresses, which are given to the segmentation
unit.
The segmentation unit produces a linear address for each logical address.
The linear address is then given to the paging unit, which in turn generates
the physical address in main memory.
Thus, the segmentation and paging units form the equivalent of the
memory-management unit (MMU).
Fig.3.20 Logical to physical address Translation
IA-32 Segmentation
Information about the first partition is kept in the local descriptor table (LDT);
Information about the second partition is kept in the global descriptor table
(GDT).
Each entry in the LDT and GDT consists of an 8-byte segment descriptor
with detailed information about a particular segment, including the base
location and limit of that segment.
The logical address is a pair (selector, offset), where the selector is a 16-bit
number:
The machine has six segment registers, allowing six segments to be addressed
at any one time by a process. It also has six 8-byte microprogram registers to
hold the corresponding descriptors from either the LDT or GDT.
The base and limit information about the segment in question is used to
generate a linear address.
If it is valid, then the value of the offset is added to the value of the base,
resulting in a 32-bit linear address.
IA-32 Paging
For 4-KB pages, IA-32 uses a two-level paging scheme in which the division
of the 32-bit linear address is as follows.
The Page directory entry points to an inner page table that is indexed by the contents
of the innermost 10 bits in the linear address.
To improve the efficiency of physical memory use, IA-32 page tables can be swapped to
disk.
Fig.3.22 Page Address Extension
In this case, an invalid bit is used in the page directory entry to indicate
whether the table to which the entry is pointing is in memory or on disk.
Page Address Extension (PAE) also increased the page-directory and page-table
entries from 32 to 64 bits in size, which allowed the base address of page
tables and page frames to extend from 20 to 24 bits.
Virtual Memory
Advantages:
Allows programs to be larger than physical memory.
Demand Paging
Lazy Swapper - Never swaps a page into memory unless that page will be
needed.
Advantages
Faster response
More users
Transfer of a paged memory to contiguous disk space
Valid-Invalid bit
Fig.3.25 Page table when some pages are not in main memory
Copy on Write.
fork() creates a copy of the parent's address space for the child, duplicating the
pages belonging to the parent.
Since many child processes invoke the exec() system call immediately after creation,
copying the parent's address space may be unnecessary.
Instead, we can use a technique known as copy-on-write, which works by allowing the
parent and child processes initially to share the same pages. These shared pages are
marked as copy-on-write pages, meaning that if either process writes to a shared page, a
copy of the shared page is created.
Fig. show the contents of the physical memory before and after process 1 modifies page
C.
All unmodified pages can be shared by the parent and child processes.
Pages that cannot be modified (pages containing executable code) can be shared
by the parent and child. Copy-on-write is a common technique used by several
operating systems.
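The behaviour can be sketched with a toy model (not real OS code; Python lists stand in for physical pages so that sharing is observable):

```python
class COWSpace:
    """Toy model of copy-on-write: pages are shared until one side writes."""

    def __init__(self, pages):
        self.pages = pages                   # page name -> mutable [contents]

    def fork(self):
        # The child gets its own table, but the entries point at the SAME pages.
        return COWSpace(dict(self.pages))

    def write(self, name, value):
        # Copy-on-write: replace the shared page object with a private copy.
        self.pages[name] = [value]

    def read(self, name):
        return self.pages[name][0]

parent = COWSpace({"A": ["a"], "B": ["b"]})
child = parent.fork()
assert parent.pages["A"] is child.pages["A"]    # shared right after fork
child.write("A", "a2")                          # triggers a private copy
print(parent.read("A"), child.read("A"))        # a a2
assert parent.pages["A"] is not child.pages["A"]
```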
These free pages are typically allocated when the stack or heap for a process must
expand or when there are copy-on-write pages to be managed.
vfork() (virtual memory fork) operates differently from fork() with
copy-on-write. With vfork(), the parent process is suspended, and the child
process uses the address space of the parent.
Page Fault
A page fault occurs when a process references a page that has not been brought
into main memory.
Steps in handling a page fault:
1. Check an internal table to determine whether the reference was a valid or an
invalid memory access.
a. If the reference is invalid, terminate the process.
b. If the reference is valid but the page has not yet been brought into main
memory, page it in.
2. Find a free frame.
3. Schedule a disk operation to read the desired page into the newly allocated frame.
4. Reset the page table to indicate that the page is now in memory.
5. Restart the instruction that was interrupted by the trap.
When the OS sets the instruction pointer to the 1st instruction of the
process, which is on a non-memory-resident page, the process
immediately faults for the page.
After this page is brought into memory, the process continues to execute,
faulting as necessary until every page that it needs is in memory.
In full detail, servicing a page fault involves many steps, among them: trap to
the OS; check whether the reference was legal and find the location of the page
on disk; and, after the page is read in, reset the page table to indicate that
the page is now in memory.
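The page-fault handling above can be sketched as a toy simulation (assumes free frames are always available, i.e., pure demand paging without replacement):

```python
def access(page, page_table, memory, disk):
    """Service one memory reference under pure demand paging."""
    if page_table.get(page, "i") == "v":
        return "hit"                          # page already in memory
    if page not in disk:
        raise MemoryError("invalid reference: terminate process")
    memory.append(page)                       # read page into a free frame
    page_table[page] = "v"                    # reset table: page now in memory
    return "page fault"                       # then restart the instruction

pt, mem, disk = {}, [], {0, 1, 2}
print(access(0, pt, mem, disk))  # page fault
print(access(0, pt, mem, disk))  # hit
```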
Page Replacement
If no frames are free, we could find one that is not currently being used
& free it.
We can free a frame by writing its contents to swap space & changing
the page table to indicate that the page is no longer in memory.
Then we can use that freed frame to hold the page for which the process
faulted.
Write the victim page to the disk, change the page & frame tables
accordingly.
3. Read the desired page into the (new) free frame. Update the page and frame tables.
Note:
If no frames are free, two page transfers are required & this situation effectively
doubles the page-fault service time.
Modify bit:
It indicates whether any word or byte in the page has been modified.
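Page-replacement policies are compared by counting page faults on a reference string. A small sketch of FIFO and LRU (the reference string and frame count are illustrative):

```python
from collections import OrderedDict

def count_faults(refs, nframes, policy="FIFO"):
    """Count page faults for a reference string under FIFO or LRU."""
    frames = OrderedDict()  # page -> None, ordered by arrival (FIFO) or use (LRU)
    faults = 0
    for p in refs:
        if p in frames:
            if policy == "LRU":
                frames.move_to_end(p)       # mark as most recently used
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)      # evict oldest / least recently used
        frames[p] = None
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3, "FIFO"), count_faults(refs, 3, "LRU"))  # 10 9
```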
Thrashing
If page faults and swapping happen very frequently, the operating system has to
spend more time swapping these pages. This state of the operating system is termed
thrashing. Because of thrashing, CPU utilization is greatly reduced.
Example
If any process does not have the number of frames that it needs to support pages in
active use then it will quickly page fault.
And at this point, the process must replace some pages. As all the pages of the process
are actively in use, it must replace a page that will be needed again right away.
Consequently, the process will quickly fault again, and again, and again, replacing
pages that it must bring back in immediately. This high paging activity by a process is
called thrashing.
During thrashing, the CPU spends less time on actual productive work and more
time swapping.
Fig.3.30 Thrashing
Causes of Thrashing
If a process does not have enough pages, the page-fault rate is very high.
This leads to:
If global replacement is used then as processes enter the main memory they
tend to steal frames belonging to other processes.
Eventually all processes will not have enough frames and hence the page
fault rate becomes very high.
Effect of Thrashing
At the time, when thrashing starts then the operating system tries to apply either
the Global page replacement Algorithm or the Local page replacement algorithm.
The Global Page replacement has access to bring any page, whenever thrashing found it
tries to bring more pages. Actually, due to this, no process can get enough frames and
as a result, the thrashing will increase more and more. Thus the global page
replacement algorithm is not suitable whenever thrashing happens.
Unlike the Global Page replacement, the local page replacement will select pages which
only belongs to that process. Due to this, there is a chance of a reduction in the
thrashing. As it is also proved that there are many disadvantages of Local Page
replacement. Thus local page replacement is simply an alternative to Global Page
replacement.
Local Page replacement is better than the Global Page replacement but local page
replacement has many disadvantages too, so it is not suggestible.
D = Σ WSSi, where WSSi is the working-set size of process i and D is the total
demand for frames. If D exceeds m, the total number of available frames,
thrashing will occur.
The working-set model is successful, and knowledge of the working set can be
useful for prepaging, but it is a clumsy approach to avoiding thrashing.
Page-fault frequency (PFF) is another, more direct technique used to avoid
thrashing.
The main problem is how to prevent thrashing. As thrashing has a high page fault rate
and also we want to control the page fault rate.
When the Page fault is too high, then we know that the process needs more frames.
Conversely, if the page fault-rate is too low then the process may have too many frames.
We can establish upper and lower bounds on the desired page-fault rate. If the
actual page-fault rate exceeds the upper limit, we allocate the process another
frame; if the page-fault rate falls below the lower limit, we remove a frame
from the process.
Thus with this, we can directly measure and control the page fault rate in order to
prevent thrashing.
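The upper/lower-bound control described above can be sketched as follows (the bounds 0.10 and 0.02 are made-up illustration values):

```python
def adjust_frames(frames, fault_rate, upper=0.10, lower=0.02):
    """PFF control: keep the page-fault rate between two bounds."""
    if fault_rate > upper:
        return frames + 1              # process needs more frames
    if fault_rate < lower:
        return max(1, frames - 1)      # process may have too many frames
    return frames                      # rate acceptable: leave allocation alone

print(adjust_frames(4, 0.15))  # 5
print(adjust_frames(4, 0.01))  # 3
print(adjust_frames(4, 0.05))  # 4
```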
Fragmentation: Paging suffers from internal fragmentation, whereas segmentation
suffers from external fragmentation.
Advantages of paging
Multiprogramming is supported
Disadvantages of paging:
Some memory space stays unused when the available blocks are not
sufficient for the address space of jobs to run.
Advantages of segmentation:
No internal fragmentation; segments match the programmer's view of memory and can
be shared and protected individually.
Disadvantages of segmentation:
Main memory will always limit the size of segmentation; that is,
segmentation is bound by the size limit of memory.
First-fit: Allocate the first hole that is big enough.
Best-fit: Allocate the smallest hole that is big enough.
Worst-fit: Allocate the largest hole.