Unit4 OS
Memory Management and Virtual Memory - Logical versus Physical Address Space,
Swapping, Contiguous Allocation, Paging, Segmentation, Segmentation with Paging, Demand
Paging, Page Replacement, Page Replacement Algorithms.
Also, at times one program depends on some other program. In such a case, rather than
loading all the dependent programs up front, the system links a dependent program to the main
executing program only when it is required. This mechanism is known as Dynamic Linking.
A physical address can be defined as the exact location in main memory. The physical
address identifies the physical location of the required instruction/data in memory. The user
program generates logical addresses, and in order to execute, each logical address must be
converted into a physical address by the Memory Management Unit (MMU). The physical
address may not be the same each time the program is executed. The relocation register holds
the base address at which the process is loaded; the MMU adds its value to every logical
address to form the physical address.
The basic difference between a logical and a physical address is that the logical address is
generated by the CPU with reference to the starting point of a program, whereas the physical
address is a location that exists in the memory unit.
The logical address does not exist physically in the memory whereas physical address is a
location in the memory that can be accessed physically.
The logical address is generated by the CPU while the program is running and the physical
address is computed by the Memory Management Unit (MMU).
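A minimal sketch of this MMU translation, using a relocation (base) register and a limit register; the register values below are illustrative assumptions, not from the notes:

```python
# MMU translation sketch: the relocation register holds the base address,
# the limit register holds the size of the process's address space.
def translate(logical_address, base, limit):
    """Map a CPU-generated logical address to a physical address."""
    if logical_address >= limit:          # protection check against the limit
        raise MemoryError("addressing error: beyond process limit")
    return base + logical_address         # relocation register adds the base

# A program loaded at (assumed) physical address 14000 with a 3000-byte limit:
print(translate(346, base=14000, limit=3000))   # -> 14346
```

The same logical address 346 would map to a different physical address if the process were relocated, which is exactly why the physical address need not be the same on every execution.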
3. SWAPPING
Swapping is a mechanism in which a process can be swapped (or moved) temporarily out of
main memory to secondary storage (disk) and make that memory available to other processes. At
some time later, the system swaps back the process from the secondary storage to main memory.
Whenever main memory runs out of space, one of the inactive processes is swapped out
to create free space for executing other new processes. This improves system throughput and
utilizes the processor effectively.
i. Static Partitioning
Static partitioning is a fixed size partitioning scheme.
In this technique, main memory is pre-divided into fixed size partitions.
The size of each partition is fixed and cannot be changed later.
Each partition is allowed to store only one process. The fixed size partitions can be of
two types: Equal size or Unequal size partitioning.
In equal size partitioning, all the partitions have the same size. In unequal size
partitioning, the partitions may have different sizes.
Example:
These partitions are allocated to processes as they arrive. In the equal size partitioning
scheme, all partitions are the same size, so it makes no difference which partition is allocated to
which process. In the unequal size partitioning scheme, however, the choice of partition for a
process is very important: when a process is allocated one of the partitions, some memory is
wasted if the process size is less than the partition size. This wastage of memory is called
fragmentation. The following algorithms are used in the unequal size partitioning scheme.
Example: Consider that initially there are three free partitions /holes available with sizes 700
KB, 950 KB and 500 KB as shown in the figure (a). If we want to insert P3 process with size 450
KB, then the implementation of different algorithms are shown below.
(In the figure, Pi denotes a partition allocated to the i-th process; an unlabeled box denotes a free partition/hole.)
In First-fit algorithm, 700 KB hole is allocated to P3 process and the remaining memory 250
KB is wasted. In Best-fit algorithm, 500 KB hole is allocated to P3 process and the remaining
memory 50 KB is wasted. In Worst-fit algorithm, 950 KB hole is allocated to P3 process and the
remaining memory 500 KB is wasted. As the memory wasted is inside the partition, so it is
called internal fragmentation.
The Best-fit algorithm works best because the space left inside the partition after allocation is
very small; thus internal fragmentation is least. The Worst-fit algorithm works worst because
the space left inside the partition after allocation is very large; thus internal fragmentation is
maximum.
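The three hole-selection strategies can be sketched as follows; the hole sizes reuse the 700/950/500 KB example above:

```python
# Hole-selection sketches for unequal fixed partitions. Each function
# returns the index of the chosen hole, or None if no hole is big enough.
def first_fit(holes, size):
    # take the first hole that is large enough
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # take the smallest hole that is large enough (least waste)
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # take the largest hole (most waste left inside the partition)
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [700, 950, 500]            # free partitions in KB, as in figure (a)
print(first_fit(holes, 450))       # -> 0 (700 KB hole, 250 KB wasted)
print(best_fit(holes, 450))        # -> 2 (500 KB hole,  50 KB wasted)
print(worst_fit(holes, 450))       # -> 1 (950 KB hole, 500 KB wasted)
```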
For example, consider processes = {P1, P2, P3, P4} with memory sizes = {2, 7, 1, 5} MB
respectively. They are loaded into main memory with each partition size equal to the
corresponding process size, as shown in figure (a) below.
Suppose processes P1 (2 MB) and P3 (1 MB) complete their execution; they leave the
memory and two holes (free spaces) are created. The total free space is 3 MB.
Internal Fragmentation
Internal Fragmentation occurs only in static partitioning.
It occurs when the space is left inside the fixed partition after allocating the partition
to a process. This space is called internal fragmentation.
This space cannot be allocated to any other process.
This is because only one process is allowed to store in each partition.
External Fragmentation
External Fragmentation occurs only in dynamic partitioning.
It occurs when the total amount of empty space required to store the process is
available in the main memory but the free memory is not contiguous.
As the space is not contiguous, the process cannot be stored.
The external fragmentation problem can be solved using compaction technique. The
compaction is a technique used to move all processes towards one end of memory and all the
free blocks of memory are moved towards the other end. This is shown in the below diagram.
A huge amount of time is spent moving all the free spaces to one side, and the CPU remains
idle for all this time. Although compaction avoids external fragmentation, it makes the system
inefficient.
When a process needs to be executed, its pages are loaded into any available memory
frames. To start execution, not all of its pages need to be loaded into main memory; a few
pages are enough. After loading the pages, the record of which page is loaded into which frame
is saved in a table called the page table. Each process maintains a separate page table. The
starting address of a page table is saved in the Page Table Base Register (PTBR).
When the processor wants to execute an instruction, it generates a logical address. The
logical address contains two parts: a page number (p) and a page offset (d). The page number is
used as an index into the page table to fetch the corresponding frame number (f). The physical
address is obtained by combining the frame number (f) with the offset (d), as shown in the
diagram below.
[Diagram: logical address (p, d) → page table → physical address (f, d)]
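The lookup can be sketched in Python; the page-table contents and the 1 KB page size are assumed illustrative values:

```python
# Paging translation sketch: split the logical address into a page
# number and an offset, look up the frame, and recombine.
PAGE_SIZE = 1024                          # assumed page size (bytes)
page_table = {0: 5, 1: 2, 2: 7, 3: 0}    # page number -> frame number (assumed)

def paging_translate(logical_address):
    p, d = divmod(logical_address, PAGE_SIZE)   # page number, page offset
    f = page_table[p]                           # fetch the frame number
    return f * PAGE_SIZE + d                    # physical address = f || d

print(paging_translate(1034))   # page 1, offset 10 -> frame 2 -> 2058
```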
Segment Table:
Segment table is a table that stores the information about each segment of the process.
It has two columns. First column stores the size or length of the segment. Second column
stores the base address or starting address of the segment in the main memory.
Segment table is stored as a separate segment in the main memory.
The base address of the segment table is stored in Segment Table Base Register (STBR).
When a process needs to be executed, its segments are loaded into main memory. After
loading the segments, the details of each segment’s base address and the limit value are saved in
a table called segmentation table. Base Address contains the starting physical address where the
segment is stored in main memory. Limit specifies the length of the segment. When the
processor wants to execute an instruction, it generates a logical address. This address contains
two parts: a segment number (s) and a segment offset (d). The segment number is used as an
index into the segment table to fetch the corresponding base address. If the segment offset (d)
is greater than the limit, an addressing error is raised; otherwise, the physical address is
generated by adding the offset to the base address.
[Diagram: logical address (s, d) → segment table (limit, base) → physical address]
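A sketch of this limit check and base addition; the segment-table contents here are assumed values for illustration:

```python
# Segmentation translation sketch. Each entry: segment -> (base, limit).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}  # assumed

def seg_translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                     # offset beyond the segment's length
        raise MemoryError("addressing error")
    return base + d                    # physical address = base + offset

print(seg_translate(2, 53))    # segment 2 starts at 4300 -> 4353
```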
It allows dividing the program into modules, which provides better visualization.
The segment table consumes less space than the page table in paging.
It solves the problem of internal fragmentation.
Process is first divided into segments and then each segment is divided into pages.
These pages are then stored in the frames of main memory.
A separate page table exists for each segment to keep track of the frames storing the
pages of that segment.
Number of entries in the page table of a particular segment = number of pages into which
that segment is divided.
A segment table exists that keeps track of the frames storing the page tables of segments.
Number of entries in the segment table of a process = number of segments into which the
process is divided.
The base address of the segment table is stored in the segment table base register (STBR).
When a process needs to be executed, the CPU generates a logical address for each instruction.
The logical address consists of three parts: a segment number (s), a page number (p), and a
page offset (d).
The physical address is computed from the given logical address as follows:
The segment table base register (STBR) contains the starting address of the segment table.
For the given segment number, corresponding entry is found in that segment table.
The segment table provides the address of the page table belonging to the referred segment.
For the given page number, corresponding entry is found in that page table.
This page table provides the frame number of the required page of the referred segment.
The frame number is combined with the page offset to get the required physical address.
The below diagram illustrates the above steps of translating logical address into physical address:
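The steps above can be sketched as a two-level lookup; all table contents and the 1 KB page size are assumptions for illustration:

```python
# Segmentation-with-paging sketch: the segment table points to a
# per-segment page table; that page table gives the frame number.
PAGE_SIZE = 1024                                # assumed page size
segment_table = {0: {0: 3, 1: 8},               # segment 0's page table
                 1: {0: 1}}                     # segment 1's page table

def segpage_translate(s, p, d):
    page_table = segment_table[s]   # steps 1-3: segment table -> page table
    f = page_table[p]               # steps 4-5: page table -> frame number
    return f * PAGE_SIZE + d        # step 6: combine frame with page offset

print(segpage_translate(0, 1, 20))   # frame 8 -> 8*1024 + 20 = 8212
```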
Large programs can be written, as the virtual address space available is huge compared to
physical memory.
More physical memory is available, as programs are stored in virtual memory and occupy
very little space in actual physical memory.
Demand paging suggests keeping all the pages of a process in the virtual memory until
they are required. In other words, it says that do not load any page into the main memory until it
is required. Whenever a page is referenced, it may or may not be in main memory. If the
referenced page is not present in main memory, there is a miss; this event is called a page
fault. The CPU has to transfer the missing page from virtual memory into one of the free
frames of main memory. If main memory does not have any free frame, then a page
replacement algorithm is used to swap one of the pages in main memory with the required
page. When the OS uses demand paging, the page table contains one extra column, the valid
bit. If the page is in main memory, its valid bit is set to true; otherwise it is set to false. This
valid bit is used to know whether the required page is in main memory or not.
Ravindar.M, Asso.Prof, CSE Dept, JITS-KNR
There are cases when no pages are loaded into memory initially; pages are loaded only
when demanded by the process, by generating page faults. This is called “Pure Demand
Paging”. In ordinary demand paging, some of the pages are loaded into main memory before
execution starts.
“Pre-paging” is a technique used to load pages into main memory before they are needed
to avoid page faults. But identifying the needed pages in advance is very difficult.
When a page fault occurs and no free frame is available, the OS has the following options:
1. Put the process that needs the missing page in the wait queue until some other process
finishes its execution, thereby freeing some of the frames.
2. Or, remove some other process completely from memory to free its frames.
3. Or, find some pages that are not being used right now, and move them to the disk (virtual
memory) to get free frames. This technique is called page replacement and is the most
commonly used. There are good algorithms to carry out page replacement efficiently.
The following steps are followed by page replacement algorithms to replace the referenced page.
Find the location of the page referenced by ongoing process on the disk.
Find a free frame. If there is a free frame, use it. If there is no free frame, use a page-
replacement algorithm to select any existing frame to be replaced, such frame is known
as victim frame.
Write the page from the victim frame to disk. Change all related page tables to indicate
that this page is no longer in memory.
Note: A good page replacement algorithm is one that minimizes the number of page faults.
Belady’s anomaly: Generally, when the number of frames increases, the number of page faults
should decrease. But when the FIFO page replacement algorithm is used on certain reference
strings, it is found that increasing the number of frames also increases the number of page
faults. This phenomenon is called Belady’s anomaly. For example, for the reference string 3, 2,
1, 0, 3, 2, 4, 3, 2, 1, 0, 4 with 3 frames we get a total of 9 page faults, but if we increase the
number of frames to 4, we get 10 page faults.
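A short FIFO simulation reproduces the anomaly on exactly this reference string:

```python
# FIFO page replacement: on a fault with no free frame, evict the page
# that entered memory earliest. Used here to show Belady's anomaly.
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:       # no free frame: evict oldest
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(refs, 3))   # -> 9
print(fifo_faults(refs, 4))   # -> 10  (more frames, yet more faults)
```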
This algorithm replaces the page that will not be referred by the CPU in future for the
longest time.
It is practically impossible to implement this algorithm.
This is because the pages that will not be used in future for the longest time can not be
predicted.
However, it is the best known algorithm and gives the least number of page faults.
Hence, it is used as a performance measure criterion for other algorithms.
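Although unimplementable in a real OS, the optimal policy is easy to simulate when the whole reference string is known in advance, which is how it serves as a benchmark. A sketch:

```python
# Optimal (Belady's) replacement sketch: on a fault, evict the page
# whose next reference lies farthest in the future (or never occurs).
def optimal_faults(refs, num_frames):
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                    # hit: nothing to do
        faults += 1
        if len(frames) == num_frames:
            future = refs[i + 1:]
            # victim = the resident page used farthest ahead, or never again
            victim = max(frames, key=lambda q: future.index(q)
                         if q in future else len(future) + 1)
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(optimal_faults(refs, 3))   # -> 7, fewer than FIFO's 9 on this string
```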
As the name suggests, this algorithm works on the principle of “Least Recently Used”.
It replaces the page that has not been referred by the CPU for the longest time.
This algorithm is just the opposite of the optimal page replacement algorithm: it looks at
the past instead of the future.
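LRU can be sketched with an ordered dictionary acting as the recency list; the reference string reuses the one from the Belady example:

```python
# LRU sketch: the OrderedDict keeps pages in recency order, with the
# least recently used page at the front and the most recent at the end.
from collections import OrderedDict

def lru_faults(refs, num_frames):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)            # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)      # evict least recently used
            frames[page] = True
    return faults

refs = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(lru_faults(refs, 3))   # -> 10
```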
13. THRASHING
Thrashing is a condition or a situation when the system is spending a major portion of its
time in servicing the page faults, but the actual execution or processing done is very negligible.
The basic concept involved is that if a process is allocated too few frames, then there will
be too many and too frequent page faults. As a result, no useful work would be done by the CPU
and the CPU utilization would fall drastically.
Locality Model:
A locality is a set of pages that are actively used together. The locality model states that
as a process executes, it moves from one locality to another. A program is generally composed of
several different localities which may overlap.
1. Working Set Model: This model is based on the above-stated concept of the Locality
Model. The basic principle states that if we allocate enough frames to a process to
accommodate its current locality, it will only fault whenever it moves to some new locality.
But if the allocated frames are lesser than the size of the current locality, the process is bound
to thrash.
According to this model, based on a parameter A (no of recent referenced pages), the
working set is defined as the set of pages in the most recent ‘A’ page references. Hence, all
the actively used pages would always end up being a part of the working set. The accuracy of
the working set is dependent on the value of parameter A. If A is too large, then working sets
may overlap. On the other hand, for smaller values of A, the locality might not be covered
entirely.
If the summation of working set sizes of all the processes present in the main memory
exceeds the availability of frames, then some of the processes have to be suspended
(swapped out of memory). Otherwise, i.e., if there are enough extra frames, then some more
processes can be loaded in the memory. This technique prevents thrashing along with
ensuring the highest degree of multiprogramming possible. Thus, it optimizes CPU
utilization.
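The definition of the working set can be sketched directly: the set of distinct pages in the most recent A references. The window size A = 4 and the reference string below are assumed illustrative values:

```python
# Working-set sketch: the working set at time t is the set of distinct
# pages referenced in the window of the last A references ending at t.
def working_set(refs, t, A):
    window = refs[max(0, t - A + 1): t + 1]   # most recent A references
    return set(window)

refs = [1, 2, 1, 3, 4, 4, 3, 4, 4, 4]
print(sorted(working_set(refs, 4, A=4)))   # window [2,1,3,4] -> [1, 2, 3, 4]
print(sorted(working_set(refs, 9, A=4)))   # window [3,4,4,4] -> [3, 4]
```

The two windows illustrate the trade-off in the text: the same A can span a wide locality at one point in time and a narrow one at another.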
2. Page Fault Frequency: A more direct approach to handle thrashing is the one that uses
Page-Fault Frequency concept. The problem associated with Thrashing is the high page fault
rate and thus, the concept here is to control the page fault rate.
If the page fault rate is too high, it indicates that the process has too few frames allocated
to it. On the contrary, a low page fault rate indicates that the process has too many frames.
Upper and lower limits can be established on the desired page fault rate as shown in the
below diagram.
If the page fault rate is high with no free frames, then some of the processes can be
suspended and frames allocated to them can be reallocated to other processes. The suspended
processes can then be restarted later.