Memory Management Strategy
A memory management strategy aims to obtain the best possible use of the main memory resource. Memory
management strategies are divided into the following categories:
1) Fetch strategies: a) Demand fetch strategy, b) Anticipatory fetch strategy
2) Placement strategies
3) Replacement strategies
1. Fetch strategies: Fetch strategies are concerned with when to obtain the next piece of program or data for
transfer from secondary memory to main memory.
a) Demand fetch strategies: In demand fetching, a piece of program or data is brought into main memory
only when a running process actually references it.
b) Anticipatory fetch strategies: In this strategy the OS predicts a program's needs and loads the program
or data pieces into main memory before they are actually needed, so that when they are requested they
are available without delay.
2. Placement strategies: These strategies are concerned with determining where in main memory to store the
incoming data.
3. Replacement strategies: These strategies are concerned with determining which piece of program or data to
displace to make room for an incoming program.
Non-contiguous allocation systems allow programs and data to be separated into several smaller blocks that
may be placed in any available free storage holes, even if these holes are not adjacent to each other. Examples:
paging and segmentation.
All addresses developed by the user program are checked to be sure that they are not less than a, the lower-bound (base) register value.
The Launcher Academy
Office No. 17/18, 2nd Floor, City Center, Opposite Gossner College, Club Road, Ranchi.
Contact No. 8877155769, 7903154392.
Multiprogramming system:
1) Fixed partition multiprogramming: In fixed partition multiprogramming, main storage is divided into a
number of fixed-size partitions. Each partition can hold a single job. IBM used this scheme in the System/360
OS/MFT (Multiprogramming with a Fixed number of Tasks).
Absolute translation and loading: In this scheme, if a job is ready to execute but its designated partition is
occupied by another job, the job must wait even if other partitions are available. This wastes the storage resource.
Relocatable translation and loading: In this case a program can run in any available partition that is large
enough to hold it, so the storage resource is not wasted in this way.
Advantages
Simple & easy: The OS needs no complex algorithm, so the scheme is simple and easy to implement.
Predictable: The OS can guarantee a minimum amount of memory for each process.
Secure: Since each process runs in its own partition, there is no chance of one process interfering with
another process's memory.
Disadvantages
Internal fragmentation: A process allocated to a partition may leave some unused space internal to the
partition that cannot be used by any other process.
Degree of multiprogramming: Since the total number of partitions is fixed, there is a limit on the degree of
multiprogramming.
2) Variable partition multiprogramming: In variable partition multiprogramming, partitions are created
dynamically to match the size of each incoming job. Embedded systems, real-time systems, and systems with
limited memory resources may use this technique.
Advantages:
No limitation on the size of the process: Since there are no fixed partitions, a process of any size can be loaded
into memory.
No internal fragmentation: In variable partitioning, space in main memory is allocated according to the
need of the process.
No restriction on the degree of multiprogramming: Any number of processes can be loaded until memory
is full.
Disadvantage
External fragmentation: When a process holding memory finishes, it leaves holes in main storage. A new
process may fit into such a hole; otherwise the storage is wasted.
Fragmentation
In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing
storage capacity and, in most cases, performance. The term is also used to denote the wasted space itself.
External fragmentation: External fragmentation happens when a dynamic memory allocation algorithm
allocates some memory for a process and a small piece is left over that cannot be effectively used, or when a
process terminates and leaves a hole. If too much external fragmentation occurs, the amount of usable
memory is drastically reduced: total memory space exists to satisfy a request, but it is not contiguous. It
frequently occurs with variable partitioning.
Internal fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks
because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than
the requested memory; this size difference is memory internal to a partition that is not being used.
This space is unavailable to the system until the job finishes and the region is released. It frequently occurs
with fixed partitioning.
Virtual memory
The virtual memory technique allows a program to use more memory than the real memory of the
computer. The program is actually stored in secondary memory; the memory management unit (MMU)
transfers the currently needed part of the program from secondary memory to main memory for execution.
Suppose the OS needs 80 MB of memory to hold all the programs that are running, but only 32 MB of
RAM is installed in the computer. The OS sets up 80 MB of virtual memory and uses a virtual memory
manager to manage it. The OS keeps 32 MB worth of program in RAM and the remainder on disk. When
the OS needs a part of memory that is not currently in physical memory, it picks a part of physical
memory that has not been used recently, writes it to disk, and brings the needed part of the program into
memory. This is called swapping.
There are two techniques to load from virtual memory to physical memory.
1. Paging
2. Segmentation
Paging: In this technique, physical memory is broken into a number of fixed size blocks called frames or
page frames and the logical memory is broken into a number of same size blocks called pages. When a
process is to be executed, its pages are loaded into any available memory frames from the backing store.
The page sizes (and hence the frame sizes) are always powers of 2, and typically vary between 512 and
8192 bytes per page.
In this scheme every address generated by the CPU is divided into two parts: a page number (p) and an
offset (displacement) (d).
Here, since the page number takes 5 bits, its range of values is 0 to 31 (i.e., 2^5 - 1). Similarly, the offset
uses 11 bits, so its range is 0 to 2047 (i.e., 2^11 - 1). Summarizing, this paging scheme uses 32 pages, each
with 2048 locations.
For associating a virtual (logical) address with the real or physical address, there is a mechanism
called DAT (Dynamic Address Translation), which converts a logical address into the corresponding
physical address.
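As a minimal sketch of this translation, assuming the 5-bit page number / 11-bit offset split described above and a hypothetical page table whose frame numbers are made-up example values:

```python
# Minimal sketch of dynamic address translation (DAT) for the
# 5-bit page number / 11-bit offset scheme described above.
# The page table contents below are illustrative, not real values.

OFFSET_BITS = 11                 # 2**11 = 2048 locations per page
PAGE_SIZE = 1 << OFFSET_BITS

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_address):
    page = logical_address >> OFFSET_BITS       # high 5 bits
    offset = logical_address & (PAGE_SIZE - 1)  # low 11 bits
    frame = page_table[page]                    # DAT lookup
    return frame * PAGE_SIZE + offset           # physical address

# Logical address 2100 lies in page 1 (2100 // 2048) at offset 52.
print(translate(2100))  # frame 2 -> 2*2048 + 52 = 4148
```

The offset is simply carried over unchanged; only the page number is replaced by the frame number.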
A TLB lookup is similar to the direct mapping scheme, but because TLBs contain only a few page table
entries, the search is fast. It is, however, quite expensive because of the register support required, so the
direct and associative mapping schemes can be combined to get the benefits of both. Here, the page number
is matched against all associative registers simultaneously. The percentage of times a page is found in
the TLB is called the hit ratio. If a page is not found, it is searched for in the page table and added to the
TLB; if the TLB is already full, a replacement policy is used, since the number of TLB entries is limited.
Demand paging:
In this technique, all the pages of a process initially reside in secondary storage. Once the process is to
execute, only those pages that are required immediately are loaded into memory, rather than the entire
process; i.e., pages are loaded on demand by the CPU. A demand paging system is
similar to a paging system with swapping. To implement demand paging, the operating
system must keep track of which pages are currently in memory. The page map table contains an entry bit
for each virtual page of the related process. For each page actually swapped into memory, the page map
table points to the actual location of the corresponding page frame, and the entry bit is set and marked
YES. Otherwise, the entry bit is reset and marked NO if the page is not in
memory. If a program during execution never accesses pages marked NO, there is
no problem and execution proceeds normally.
Suppose, however, that the program tries to access a page that was not swapped into memory. In this case a
page fault trap occurs: the result of the operating system's failure to bring a valid part of the program
into memory.
When the running program experiences a page fault, it must be suspended until the missing page is
swapped into main memory. Since disk access time is usually several orders of magnitude longer than the
main memory cycle time, the operating system usually schedules another process during this period. Here is
the list of steps the operating system follows in handling a page fault:
1. If a process refers to a page which is not in the physical memory, then an internal table kept with a
process control block is checked to verify whether a memory reference to a page was valid or invalid.
2. If the memory reference to a page was valid, but the page is missing, the process of bringing a
page into the physical memory starts.
3. Free memory location is identified to bring a missing page.
4. By reading a disk, the desired page is brought back into the free memory location.
5. Once the page is in the physical memory, the internal table kept with the process and page map
table is updated to indicate that the page is now in memory.
6. Restart the instruction that was interrupted due to the missing page.
Swapping:
Swapping is a technique in which a process can be swapped temporarily out of memory to a backing store and
then brought back into memory for continued execution.
Example: In a multiprogramming environment with round-robin scheduling, when a quantum expires, the
memory manager swaps out the process that has just finished its quantum and swaps in another process that
is ready to execute. Likewise, with priority scheduling, if a higher-priority process arrives and wants service,
the memory manager can swap out a lower-priority process and load and execute the higher-priority process.
When that process finishes, the lower-priority process can be swapped back in and continued.
[Figure: swapping — process P1 is swapped out of the user space of main memory (which also holds the OS) to the backing store, and process P2 is swapped in.]
Segmentation
In this scheme, logical memory is broken into a number of variable length blocks called segments.
Every address generated by the CPU is divided into two parts: a segment number (s) and a segment offset (d).
Advantages
It offers protection within segments.
Sharing can be achieved by having segments referenced by multiple processes.
No internal fragmentation.
Segment tables use less memory than page tables.
Disadvantages
In the segmentation method, processes are loaded into and removed from main memory, so the free
memory space becomes separated into small pieces, which may create the problem of external fragmentation.
Costly memory management algorithm.
Paging vs. Segmentation:
1. In a paging system, the logical address is divided into a page number and a page offset (displacement);
in a segmentation system, the logical address is divided into a segment number and a segment offset.
2. To maintain the page data, a page map table is created in paging; to maintain the segment data, a
segment map table is created in segmentation.
3. The page table mainly contains the base address of each page; the segment table mainly contains the
segment number, base address, and limit.
Segmented Paging
Both paging and segmentation have advantages and disadvantages. In fact, some architectures provide both:
the Intel Pentium architecture supports both pure segmentation and segmentation with paging.
In Segmented Paging, the main memory is divided into variable size segments which are further divided into
fixed size pages.
Each page table contains information about every page of its segment, and the segment table contains
information about every segment. Each segment table entry points to a page table, and every page table
entry maps to one of the pages within the segment.
The actual frame number, combined with the page offset, is mapped to main memory to obtain the desired
word in the page of the given segment of the process.
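This two-level lookup can be sketched as follows; the segment table, the page tables, and the 1024-byte page size here are illustrative assumptions, not values from the text:

```python
# Sketch of address translation in segmented paging: the segment
# table maps each segment to its own page table, which in turn maps
# pages to frames. All table contents are hypothetical examples.

PAGE_SIZE = 1024  # assumed page size for this example

# segment number -> page table (page number -> frame number)
segment_table = {
    0: {0: 3, 1: 8},
    1: {0: 1},
}

def translate(segment, page, offset):
    page_table = segment_table[segment]   # segment table entry -> page table
    frame = page_table[page]              # page table entry -> frame
    return frame * PAGE_SIZE + offset     # frame number + offset -> physical address

print(translate(0, 1, 40))  # frame 8 -> 8*1024 + 40 = 8232
```

The segment lookup happens first, then the page lookup within that segment, mirroring the description above.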
Placement strategies:
1. Best Fit: The incoming job is placed in the hole in main storage in which it fits most tightly, leaving
the smallest amount of unused space. Memory is utilized efficiently, but the search is slower because it
traverses all the memory partitions looking for the best fit.
2. First Fit: The incoming job is placed in the first available hole in main memory large enough to hold
it. It is simple to implement and faster, but it may claim more memory than is actually needed.
3. Worst Fit: An incoming job is placed in the hole in main storage in which it fits worst, that is, the
largest possible hole. It is slower because it traverses all the memory partitions in search of the worst fit,
but the large leftover hole may still be usable to load another process.
Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would the first-fit,
best-fit, and worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)?
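One way to work through this exercise is a small simulation; the `place` helper and its partition-index bookkeeping are an illustrative sketch, not a standard API:

```python
# Simulation of first-fit, best-fit, and worst-fit placement for the
# exercise above. Each process records the index of the partition it
# was placed in (0-based), or "wait" if no hole is large enough.
# Holes shrink in place; coalescing of freed holes is ignored.

def place(holes, processes, choose):
    holes = list(holes)                  # remaining hole sizes
    result = {}
    for p in processes:
        fits = [i for i, h in enumerate(holes) if h >= p]
        if not fits:
            result[p] = "wait"           # no hole can hold the process
            continue
        i = choose(fits, holes)
        result[p] = i                    # index of the chosen partition
        holes[i] -= p                    # shrink the chosen hole
    return result

first_fit = lambda fits, holes: fits[0]                            # first adequate hole
best_fit  = lambda fits, holes: min(fits, key=lambda i: holes[i])  # tightest fit
worst_fit = lambda fits, holes: max(fits, key=lambda i: holes[i])  # largest hole

partitions = [100, 500, 200, 300, 600]   # 100K, 500K, 200K, 300K, 600K
processes  = [212, 417, 112, 426]

print(place(partitions, processes, first_fit))  # 426K must wait
print(place(partitions, processes, best_fit))   # all four processes fit
print(place(partitions, processes, worst_fit))  # 426K must wait
```

Running this shows best fit placing 212K in the 300K partition, 417K in 500K, 112K in 200K, and 426K in 600K, while under first fit and worst fit the 426K process cannot be placed.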
3. FIFO: In this strategy each page is time-stamped as it enters primary storage. When a page needs to be
replaced, the OS chooses the one that has been in primary storage for the longest time.
FIFO page replacement can also be implemented with a FIFO queue: as each page arrives, it is
placed at the tail of the queue, and pages are replaced from the head of the queue.
The main drawback is that a page may be replaced even though it is constantly used by the process.
Belady's Anomaly: Belady's anomaly states that, for some page-replacement algorithms, the page-fault
rate may increase as the number of allocated frames increases. We would expect that giving more
memory to a process would improve its performance, but this assumption is not always
true; Belady's anomaly was discovered as a result.
Consider the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
Using the FIFO page replacement algorithm with three memory frames, the total number of page faults is 9,
whereas with four memory frames it is 10, which is unexpected.
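The FIFO fault counts above can be checked with a short simulation:

```python
# FIFO page replacement on the reference string above, showing
# Belady's anomaly: more frames, yet more page faults.
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames = deque()          # head = oldest page (next to be replaced)
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))  # 9 faults
print(fifo_faults(ref, 4))  # 10 faults -- Belady's anomaly
```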
4. Second chance page replacement: The basic algorithm of second-chance replacement is a FIFO
replacement algorithm. When a page has been selected, however, we inspect its reference bit. If
the value is 0, we proceed to replace this page; but if the reference bit is set to 1, we give the page a
second chance and move on to select the next FIFO page. When a page gets a second chance, its
reference bit is cleared and its arrival time is reset to the current time. Thus, a page that is given a second
chance will not be replaced until all other pages have been replaced (or given second chances). In
addition, if a page is used often enough to keep its reference bit set, it will never be replaced.
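A sketch of this algorithm follows, assuming (as one simplification) that the reference bit is set only when a resident page is re-referenced after being loaded:

```python
# Illustrative sketch of second-chance replacement as described above:
# the FIFO head candidate is skipped (bit cleared, moved to the tail,
# i.e. its arrival time reset) whenever its reference bit is set.
from collections import deque

def second_chance_faults(reference_string, num_frames):
    queue = deque()     # pages in arrival order; head = replacement candidate
    ref_bit = {}        # page -> reference bit
    faults = 0
    for page in reference_string:
        if page in ref_bit:
            ref_bit[page] = 1            # hit: set the reference bit
            continue
        faults += 1
        if len(queue) == num_frames:
            while True:
                victim = queue.popleft()
                if ref_bit[victim]:
                    ref_bit[victim] = 0  # second chance: clear the bit
                    queue.append(victim) # and reset its arrival time
                else:
                    del ref_bit[victim]  # no second chance: evict
                    break
        queue.append(page)
        ref_bit[page] = 0                # newly loaded page
    return faults

print(second_chance_faults([1, 2, 3, 1, 4], 3))  # 4 faults: page 1 survives, 2 is evicted
```

When no reference bits are set, the algorithm degenerates to plain FIFO, as the text notes.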
5. LRU: This strategy selects for replacement the page that has not been used for the longest time.
6. LFU: In this strategy, the page that is least frequently (least intensively) referenced is replaced. The
main drawback is that a page that has just been brought in may be replaced because it has not yet been
used much.
7. MFU: This strategy replaces the page with the highest reference count, i.e., the most frequently used
page. It is based on the argument that the page with the smallest count was probably just brought in and
has yet to be used.
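For comparison, LRU can be sketched with an ordered dictionary (an implementation choice assumed here, not from the text); running it on the reference string from Belady's example shows no anomaly:

```python
# Sketch of LRU replacement using an OrderedDict that keeps pages in
# recency order: the least recently used page sits at the front.
from collections import OrderedDict

def lru_faults(reference_string, num_frames):
    frames = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)      # page becomes most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)    # evict the least recently used page
        frames[page] = True
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(ref, 3))  # 10 faults
print(lru_faults(ref, 4))  # 8 faults -- more frames, fewer faults
```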
Thrashing
The OS monitors CPU utilization. If CPU utilization is too low, the OS increases the degree of
multiprogramming by introducing a new process to the system. The page replacement algorithm replaces
pages without regard to the process to which they belong.
Now suppose that a process enters a new phase in its execution and needs more frames. It starts to page-fault,
taking frames away from other processes. Those processes need the pages taken from them, however, so they
also fault, taking frames from yet other processes. These faulting processes must use the paging device to swap
pages in and out; as they queue up and wait for the paging device, CPU utilization
decreases.
The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming,
which results in still more page faults; this high paging activity keeps the CPU idle. This
unexpected situation is called thrashing.