
The Launcher Academy
Office No. 17/18, 2nd Floor, City Center, Opposite Gossner College, Club Road, Ranchi. Contact No. 8877155769, 7903154392.
Memory management strategy

Memory management strategies aim to obtain the best possible use of the main memory resource. They are divided into the following categories:
1) Fetch strategies: a) Demand fetch b) Anticipatory fetch
2) Placement strategies
3) Replacement strategies

1. Fetch strategies: Fetch strategies are concerned with when to obtain the next piece of program or data for transfer from secondary memory to main memory.

a) Demand fetch: a piece of program or data is brought into main memory only when a running process actually references it.
b) Anticipatory fetch: the O.S predicts a program's needs and loads program or data pieces into main memory before they are actually needed, so that when they are requested they are available without delay.
2. Placement strategies: these are concerned with determining where in main memory to store incoming data.
3. Replacement strategies: these are concerned with determining which piece of program or data to displace in order to make room for an incoming program.

Contiguous and non-contiguous allocation

Contiguous allocation systems require that an entire program occupy one block of adjacent storage locations in order to execute.

Non-contiguous allocation systems allow programs and data to be separated into several smaller blocks that may be placed in any available free storage holes, even if these holes are not adjacent to each other. Examples: paging and segmentation.

Single user system:

A single user uses the entire storage area except the operating system area, and occupies contiguous storage in main memory.
In a single user system, the O.S must be protected from the user. This protection is implemented with a single boundary register built into the CPU, which contains the highest address of the O.S (call it a). Every address the user references is checked; if the user tries to enter the O.S area, the program is terminated with an error message.
Figure:

All addresses developed by the user program are checked to be sure that they are not less than a.

Multiprogramming systems:
1) Fixed partition multiprogramming: In fixed partition multiprogramming, main storage is divided into a number of fixed-size partitions. Each partition can hold a single job. IBM used this scheme for System/360 OS/MFT (Multiprogramming with a Fixed number of Tasks).
Absolute translation and loading: In this scheme, if a job was ready to execute but its partition was occupied by another job, it had to wait, even if other partitions were free. Storage was therefore wasted.
Figure:

Relocatable translation and loading: In this case a program can run in any available partition that is large enough to hold it, so the wastage of storage above does not occur.

Advantages
Simple & easy: The OS needs no complex algorithm, so the scheme is simple and easy to implement.
Predictable: The OS can ensure a minimum amount of memory for each process.

Secure: Since each process runs in its own partition, there is no chance of one process interfering with another process's memory.

No external fragmentation: Fixed partitioning eliminates the problem of external fragmentation.

Disadvantages
Internal fragmentation: A process allocated to a partition may leave some unused space internal to the partition that cannot be used by any other process.

Limited degree of multiprogramming: Since the total number of partitions is fixed, there is a limit on the degree of multiprogramming.

Embedded systems, real-time systems, and systems with limited memory resources may use this technique.

2) Variable partition multiprogramming:

In this scheme the O.S allows jobs to occupy exactly as much space as they need; there are no fixed boundaries.
When a job holding a storage area finishes, it leaves a hole in main storage. If a new job fits in this hole it can be loaded there; otherwise the storage is wasted. IBM used this technique for OS/MVT (Multiprogramming with a Variable number of Tasks), as the partitions are variable in both length and number.
Figure:

Advantages:
No limitation on process size: Since there are no fixed partitions, a process of any size can be loaded into memory.

No internal fragmentation: In variable partitioning, space in main memory is allocated according to the need of the process.

No restriction on the degree of multiprogramming: Any number of processes can be loaded until memory is full.

Disadvantages
External fragmentation: When a process holding memory finishes, it leaves holes in main storage. If no new process fits in a hole, that storage is wasted.

Difficult to implement: More complex algorithms are needed.

The O.S combines these holes by:

1) Coalescing: When a job finishes and frees its storage, the O.S checks whether any hole is adjacent to the freed storage. If so, the holes are combined into one larger hole; this is called coalescing.
2) Storage compaction: Sometimes many non-adjacent holes are distributed throughout main memory. A job may need an amount of storage that no single hole can supply, even though the holes combined would be sufficient. Under compaction, all occupied memory is moved to one end and all free holes collect at the other end, so the requesting job gets its memory. This process is also called burping or garbage collection. The main drawback is that the system must stop while it performs the compaction, because it involves relocating the jobs that are in storage.
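The coalescing step can be sketched in a few lines of Python. This is a minimal illustration, assuming free holes are tracked as (start, size) pairs; the function and data layout are hypothetical, not from the text:

```python
# A minimal sketch of hole coalescing: holes are (start, size) pairs.
def coalesce(holes):
    """Merge adjacent free holes into larger ones (input must be non-empty)."""
    holes = sorted(holes)
    merged = [holes[0]]
    for start, size in holes[1:]:
        last_start, last_size = merged[-1]
        if last_start + last_size == start:   # adjacent: combine into one hole
            merged[-1] = (last_start, last_size + size)
        else:
            merged.append((start, size))
    return merged

# Two adjacent holes at 100..150 and 150..180 become one hole of size 80.
print(coalesce([(100, 50), (150, 30), (300, 40)]))  # [(100, 80), (300, 40)]
```

Compaction would go further: it relocates all allocated regions to one end so the remaining holes merge into a single large hole, which is why the system must pause while it runs.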

Fragmentation
In computer storage, fragmentation is a phenomenon in which storage space is used inefficiently, reducing storage capacity and, in most cases, performance. The term is also used to denote the wasted space itself.

External fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory for a process and a small piece is left over that cannot be effectively used, or a process terminates and leaves a hole. If too much external fragmentation occurs, the amount of usable memory is drastically reduced: total memory space exists to satisfy a request, but it is not contiguous. It frequently occurs with variable partitions.

Internal fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition that is not being used. This space is unavailable to the system until the job finishes and the region is released. It frequently occurs with fixed partitions.

Virtual memory
The virtual memory technique allows a program to use more memory than the real memory of the computer.
The program is actually stored in secondary memory; the memory management unit (MMU) transfers the currently needed part of the program from secondary memory to main memory for execution.
Suppose the O.S needs 80 MB of memory to hold all the programs that are running, but only 32 MB of RAM is installed in the computer. The operating system sets up 80 MB of virtual memory and uses a virtual memory manager to manage it. The O.S keeps 32 MB worth of program in RAM and the remainder on disk. When the O.S needs a part of memory that is not currently in physical memory, it picks a part of physical memory that has not been used recently, writes it to disk, and brings the needed part of the program into memory. This is called swapping.

There are two techniques to load from virtual memory into physical memory:

1. Paging
2. Segmentation

Paging: In this technique, physical memory is broken into a number of fixed-size blocks called frames or page frames, and logical memory is broken into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. Page sizes (and hence frame sizes) are always powers of 2, typically ranging from 512 to 8,192 bytes per page.

In this scheme every address generated by the CPU is divided into two parts: a page number (p) and an offset, or displacement (d).

Virtual address V = (p, d), where p is the page number and d the displacement.

For example, a 16-bit address can be divided as given in the figure below.

Here the page number takes 5 bits, so its range of values is 0 to 31 (i.e. 2^5 - 1). Similarly, the offset uses 11 bits, so its range is 0 to 2047 (i.e. 2^11 - 1). Summarizing, this paging scheme uses 32 pages, each with 2048 locations.
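The 16-bit split above is just a shift and a mask, which a short Python sketch makes concrete (the sample address is made up for illustration):

```python
PAGE_BITS = 11                       # offset field width; page size = 2**11 = 2048

def split(addr):
    """Split a 16-bit virtual address into (page number, offset)."""
    return addr >> PAGE_BITS, addr & ((1 << PAGE_BITS) - 1)

# Address 0x1234 = 4660 lies in page 2 (4096..6143), at offset 564.
print(split(0x1234))  # (2, 564)
```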

For associating a virtual (logical) address with the real or physical address, there is a mechanism called DAT (Dynamic Address Translation), which converts a logical address into a physical address.

Each Page Map Table entry contains three types of information:
r – page resident bit: if r = 0 the page is not in real storage; if r = 1 the page is in real storage.
s – secondary storage address
p' – page frame number

Page address translation by direct mapping

A running process references virtual address v = (p, d). Before a process begins running, the O.S loads the primary storage address of the page map table (PMT) into the page map table origin register. The base address b of the page map table is added to the page number p to form the address in primary storage, b + p, of the entry in the page map table for page p. This entry indicates that page frame p' corresponds to virtual page p. Then p' is concatenated with the displacement d to form the real address r.
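Direct mapping can be sketched as a table lookup followed by the concatenation of frame number and displacement. The PMT contents below are invented for illustration:

```python
PAGE_SIZE = 2048   # assumed page size for illustration

# Hypothetical page map table: index = virtual page p, value = frame p'.
pmt = {0: 5, 1: 3, 2: 7}

def translate(p, d):
    """Direct mapping: look up frame p' for page p, then attach offset d."""
    p_prime = pmt[p]                  # one access to the PMT in primary storage
    return p_prime * PAGE_SIZE + d    # 'concatenation' of p' and d

print(translate(1, 100))  # frame 3 -> 3*2048 + 100 = 6244
```

Note that each translation costs an extra primary-storage access to read the PMT entry, which is exactly the overhead the associative scheme below tries to remove.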

Page address translation by associative mapping:

This scheme is based on dedicated registers with high speed and efficiency. These small, fast-lookup caches place page table entries into content-addressable associative storage, speeding up the lookup.
They are known as associative registers or Translation Look-aside Buffers (TLBs). Each register consists of two entries:
1) Key, which is matched with logical page p.
2) Value, which returns the page frame number corresponding to p.

This is similar to the direct mapping scheme, but because a TLB contains only a few page table entries, the search is fast. It is, however, quite expensive due to the register support, so the direct and associative mapping schemes are usually combined to get the benefits of both. Here the page number is matched against all associative registers simultaneously. The percentage of times a page is found in the TLB is called the hit ratio. If a page is not found there, it is searched for in the page table and added to the TLB; if the TLB is already full, a page replacement policy is used, since TLB entries are limited.
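The combined lookup and the hit ratio can be sketched as follows; both tables here are hypothetical, and the Python dictionary only mimics the simultaneous key match of real associative hardware:

```python
# Sketch of combined associative/direct lookup, with hypothetical tables.
tlb = {4: 9}                      # small, fast: page -> frame
page_table = {4: 9, 5: 2, 6: 11}  # full table in main memory

hits = misses = 0

def lookup(p):
    """Return the frame for page p, trying the TLB before the page table."""
    global hits, misses
    if p in tlb:                  # all TLB keys are matched "simultaneously"
        hits += 1
    else:                         # TLB miss: fall back to the page table
        misses += 1
        tlb[p] = page_table[p]    # add the entry (evict one if the TLB is full)
    return tlb[p]

for p in [4, 5, 4, 5]:
    lookup(p)
print(hits, misses, hits / (hits + misses))  # 3 1 0.75
```

The second reference to page 5 hits because the miss loaded its entry into the TLB, which is how a small TLB achieves a high hit ratio on programs with locality.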

Advantages and disadvantages of paging

 Paging eliminates external fragmentation, but still suffers from internal fragmentation.
For example, if the page size is 2,048 bytes, a process of 72,766 bytes needs 35 pages plus 1,086 bytes. It would be allocated 36 frames, resulting in an internal fragmentation of 2,048 - 1,086 = 962 bytes. In the worst case, a process would need n pages plus 1 byte; it would be allocated n + 1 frames, resulting in internal fragmentation of almost an entire frame.
 Paging is simple to implement and regarded as an efficient memory management technique.
 Due to the equal size of pages and frames, swapping becomes very easy.
 The page table requires extra memory space, so paging may not be good for a system with a small RAM.
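The internal-fragmentation arithmetic in the example above can be checked with a short Python sketch:

```python
import math

PAGE_SIZE = 2048   # page size from the example above

def frames_and_waste(process_bytes):
    """Frames allocated to a process and the internal fragmentation left over."""
    frames = math.ceil(process_bytes / PAGE_SIZE)     # round up to whole frames
    waste = frames * PAGE_SIZE - process_bytes        # unused tail of last frame
    return frames, waste

# The 72,766-byte process from the text: 36 frames, 962 bytes wasted.
print(frames_and_waste(72766))  # (36, 962)
```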

Demand paging:
In this technique, all the pages of a process initially reside in secondary storage. Once the process is to execute, only those pages that are required immediately are loaded into memory, rather than the entire process; i.e. pages are loaded on demand by the CPU. A demand paging system is similar to a paging system with swapping. To implement demand paging, the operating system must keep track of which pages are currently in memory. The page map table contains an entry bit for each virtual page of the process. For each page actually swapped into memory, the page map table points to the location of the corresponding page frame, and the entry bit is set (marked YES). The entry bit is reset (marked NO) if the page is not in memory. If the program never accesses the pages marked NO during execution, there is no problem and execution proceeds normally.

But the program may try to access a page that was not swapped into memory. In this case a page fault trap occurs: the trap is the result of the operating system's failure to bring a valid part of the program into memory.

When the running program experiences a page fault, it must be suspended until the missing page is swapped into main memory. Since disk access time is usually several orders of magnitude longer than main memory cycle time, the operating system usually schedules another process during this period. The operating system follows these steps in handling a page fault:
1. If a process refers to a page which is not in physical memory, an internal table kept with the process control block is checked to verify whether the memory reference was valid or invalid.
2. If the reference was valid but the page is missing, the process of bringing the page into physical memory starts.
3. A free memory location is identified to hold the missing page.
4. The desired page is read from disk into the free memory location.
5. Once the page is in physical memory, the internal table kept with the process and the page map table are updated to indicate that the page is now in memory.
6. The instruction that was interrupted by the missing page is restarted.
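The steps above can be sketched as a toy simulation. Everything here (the backing-store contents, table layout, and a free-frame counter standing in for step 3) is invented for illustration:

```python
# Toy demand-paging sketch: first access to a page faults, later ones hit.
backing_store = {0: "code", 1: "data", 2: "stack"}   # pages held on disk
memory = {}            # frame -> contents actually resident
page_table = {}        # page -> (frame, present bit)
next_frame = 0         # stands in for "find a free frame" (step 3)

def access(page):
    global next_frame
    if page in page_table:                 # entry bit set: page is in memory
        return page_table[page][0]
    # Page fault: bring the page in from the backing store (steps 2-5).
    frame = next_frame
    next_frame += 1                        # step 3: pick a free frame
    memory[frame] = backing_store[page]    # step 4: read the page from disk
    page_table[page] = (frame, True)       # step 5: update the page map table
    return frame                           # step 6: restart the access

print(access(1), access(1))  # 0 0 -- first access faults, second is a hit
```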

Swapping:
Swapping is a technique in which a process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution.
Example: In a multiprogramming environment with round robin scheduling, when the quantum expires, the memory manager starts swapping out the process that just finished and swapping in another process that is ready to execute. Similarly, with priority scheduling, if a higher priority process arrives and wants service, the memory manager can swap out a lower priority process and load and execute the higher priority one. When that process finishes, the lower priority process is swapped back in and continued.
Figure: main memory divided into the O.S area and user space; P1 is swapped out to the backing store while P2 is swapped in.

Segmentation
In this scheme, logical memory is broken into a number of variable-length blocks called segments.
Every address generated by the CPU is divided into two parts: a segment number (s) and a displacement (d).

Virtual address V = (s, d), where s is the segment number and d the displacement.

Each segment table entry has a segment base and a segment limit.
Dynamic address translation under segmentation: a running process refers to a virtual address V = (s, d). The segment number s is added to the base address b in the segment map table origin register to form the real storage address, b + s, of the entry for segment s in the segment map table (SMT).

Each segment map table entry contains the following information:
r – segment resident bit: if r = 0 the segment is not in real storage; if r = 1 the segment is in real storage.
s – secondary storage address
p' – base address of the segment
l – segment length
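The translation, including the protection check against the segment length, can be sketched as follows. The SMT contents are hypothetical (they match segment 0 of the worked example below):

```python
# Sketch of segment translation with a limit check (hypothetical SMT).
smt = {0: (1400, 1000)}   # segment -> (base address, segment length l)

def translate(s, d):
    """Form the real address base + d, trapping offsets past the segment end."""
    base, length = smt[s]
    if d >= length:                      # protection: offset exceeds segment
        raise MemoryError("segmentation violation")
    return base + d

print(translate(0, 500))  # 1400 + 500 = 1900
```

Unlike paging, the displacement here is added to a base rather than concatenated, because segments are variable-length and need not be power-of-2 aligned.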
Example

Consider the following data:

Segment   Limit   Base
0         1000    1400
1          400    6300
2          400    4300
3         1100    3200
4         1000    4700

What are the physical addresses for the following logical addresses?
(1) 2, 53
(2) 3, 852
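A short Python check of this exercise, mirroring the limit test described above (base + offset, valid only while the offset is under the limit):

```python
# Segment table from the exercise: segment -> (limit, base).
segments = {0: (1000, 1400), 1: (400, 6300), 2: (400, 4300),
            3: (1100, 3200), 4: (1000, 4700)}

def physical(s, d):
    """Physical address for logical address (s, d), with a limit check."""
    limit, base = segments[s]
    assert d < limit, "offset exceeds segment limit"
    return base + d

print(physical(2, 53))   # 4300 + 53  = 4353
print(physical(3, 852))  # 3200 + 852 = 4052
```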

Advantages and disadvantages of segmentation

Advantages
 It offers protection within the segments.
 Sharing can be achieved by segments being referenced by multiple processes.
 No internal fragmentation.
 Segment tables use less memory than page tables.

Disadvantages
 In segmentation, processes are loaded into and removed from main memory, so the free memory space is broken into small pieces, which may create a problem of external fragmentation.
 Costly memory management algorithms.

Difference between paging and segmentation

Paging: a memory management technique where memory is partitioned into fixed-size blocks commonly known as pages.
Segmentation: a memory management technique where memory is partitioned into variable-size blocks commonly known as segments.

Paging: the logical address is divided into a page number and a page offset (page displacement).
Segmentation: the logical address is divided into a segment number and a segment offset.

Paging: may lead to internal fragmentation.
Segmentation: may lead to external fragmentation.

Paging: the page size is decided by the hardware and is always a power of 2.
Segmentation: the segment size is decided by the user or the user program.

Paging: to maintain page data, a page map table is created.
Segmentation: to maintain segment data, a segment map table is created.

Paging: the page table mainly contains the base address of each page.
Segmentation: the segment table mainly contains the segment number, base address, and limit.

Paging: faster than segmentation.
Segmentation: slower than paging.

Paging: a list of free frames is maintained by the operating system.
Segmentation: a list of holes is maintained by the operating system.

Paging: to calculate the absolute address, both the page number and the offset are required.
Segmentation: to calculate the absolute address, both the segment number and the offset are required.

Paging: closer to the operating system.
Segmentation: closer to the user.

Segmented paging / segmentation with paging

Pure segmentation is not very popular and is not used in many operating systems. However, segmentation can be combined with paging to get the best features of both techniques.

Both paging and segmentation have advantages and disadvantages, and some architectures provide both: the Intel Pentium architecture supports both pure segmentation and segmentation with paging.

In segmented paging, logical memory is divided into variable-size segments, which are further divided into fixed-size pages.

1. Pages are smaller than segments.
2. Each segment has a page table, which means every program has multiple page tables.
3. The logical address is represented as a segment number, a page number, and a page offset.

Segment number: points to the appropriate segment.
Page number: points to the exact page within the segment.
Page offset: used as an offset within the page frame.

Each page table contains information about every page of its segment, and the segment table contains information about every segment. Each segment table entry points to a page table, and every page table entry maps to one of the pages within the segment.

Translation of logical address to physical address

The CPU generates a logical address, which is divided into two parts: segment number and segment offset. The segment offset must be less than the segment limit. The offset is further divided into a page number and a page offset. To locate the entry for that page, the page number is added to the page table base.

The frame number found there, combined with the page offset, addresses the desired word in the page of that segment of the process.

Storage placement strategies

There are three storage placement strategies:

1. Best fit
2. First fit
3. Worst fit

1. Best fit: The incoming job is placed in the hole in main storage in which it fits most tightly, leaving the smallest amount of unused space. Memory is utilized efficiently, but the search is slower because it traverses all the memory partitions looking for the best hole.

2. First fit: The incoming job is placed in the first available hole in main memory that is large enough to hold it. It is simple to implement and faster, but may claim more memory than is actually needed.

3. Worst fit: The incoming job is placed in the hole in which it fits worst, i.e. the largest available hole. It is slower because it traverses all the memory partitions looking for the largest hole, but the leftover hole may remain big enough to load another process.

Given memory partitions of 100K, 500K, 200K, 300K, and 600K (in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of 212K, 417K, 112K, and 426K (in order)?
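The three strategies, applied to this exercise, can be sketched in Python. The sketch assumes each partition simply shrinks by the amount allocated; each result pairs a process with the size of the hole chosen (or None if it must wait):

```python
def place(processes, partitions, choose):
    """Allocate each process into a hole picked by `choose`; shrink the hole."""
    holes = list(partitions)
    result = []
    for p in processes:
        fits = [i for i, h in enumerate(holes) if h >= p]
        if not fits:
            result.append((p, None))          # no hole big enough: must wait
            continue
        i = choose(fits, holes)
        result.append((p, holes[i]))          # record the hole size chosen
        holes[i] -= p
    return result

procs, parts = [212, 417, 112, 426], [100, 500, 200, 300, 600]
first = place(procs, parts, lambda f, h: f[0])                          # first fit
best  = place(procs, parts, lambda f, h: min(f, key=lambda i: h[i]))    # best fit
worst = place(procs, parts, lambda f, h: max(f, key=lambda i: h[i]))    # worst fit
print(first)  # 426K cannot be placed under first fit
print(best)   # best fit places all four processes
print(worst)  # 426K cannot be placed under worst fit
```

Under these assumptions, best fit is the only strategy that places all four processes (426K ends up in the 600K hole); under first fit and worst fit, 426K must wait.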

Replacement strategy algorithms

1. The principle of optimality (optimal page replacement)
2. Random page replacement
3. First in first out (FIFO)
4. Second chance page replacement
5. Least recently used (LRU)
6. Least frequently used (LFU)
7. Most frequently used (MFU)

1. The principle of optimality: The O.S replaces the page that will not be used for the longest period of time. This is not a practical strategy, because we cannot predict the future.

2. Random page replacement

The O.S randomly chooses the page to be replaced; all pages in main storage have an equal likelihood of being selected. The main drawback is that this strategy can select any page for replacement, including the next page to be referenced.
Classes By: K.K. Singh, The Launcher Academy, City Centre, Opp- Gossner College, Club Road Ranchi.
Contact- 8877155769

3. FIFO: A time stamp is recorded for each page as it enters primary storage. When a page needs to be replaced, the O.S chooses the one that has been in primary storage the longest.

FIFO page replacement can also be implemented with a FIFO queue: as each page arrives, it is placed at the tail of the queue, and pages are replaced from the head of the queue.
The main drawback is that a page that is constantly used by the process can still be replaced.

Belady's anomaly: Belady's anomaly states that, for some page-replacement algorithms, the page-fault rate may increase as the number of allocated frames increases. We would expect that giving more memory to a process would improve its performance, but this assumption is not always true; Belady's anomaly was discovered as a result.

Consider the following reference string:

1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

Using the FIFO page replacement algorithm with three memory frames, the total number of page faults is 9, whereas with four memory frames it is 10, which is unexpected.
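The anomaly can be reproduced with a small FIFO simulator over the reference string above:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for p in refs:
        if p in frames:
            continue                        # hit: no fault, queue unchanged
        faults += 1
        if len(frames) == nframes:          # memory full: evict the oldest page
            frames.discard(queue.popleft())
        frames.add(p)
        queue.append(p)                     # newest page goes to the tail
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 (Belady's anomaly)
```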

4. Second chance page replacement: The basic algorithm of second-chance replacement is FIFO. When a page has been selected, however, we inspect its reference bit. If the value is 0, we proceed to replace this page; if the reference bit is 1, we give the page a second chance and move on to select the next FIFO page. When a page gets a second chance, its reference bit is cleared and its arrival time is reset to the current time. Thus, a page given a second chance will not be replaced until all other pages have been replaced (or given second chances). In addition, if a page is used often enough to keep its reference bit set, it will never be replaced.

5. LRU: This strategy selects for replacement the page that has not been used for the longest time.
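LRU can be sketched by keeping the resident pages ordered by recency of use (a simple list here; real implementations use hardware counters or a stack):

```python
def lru_faults(refs, nframes):
    """Count faults for LRU: evict the page unused for the longest time."""
    frames, faults = [], 0        # list ordered from least to most recently used
    for p in refs:
        if p in frames:
            frames.remove(p)      # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)     # evict the least recently used page
        frames.append(p)          # most recently used goes to the end
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults on the same string as the FIFO example
```

Unlike FIFO, LRU (and other so-called stack algorithms) never exhibits Belady's anomaly: adding frames can never increase its fault count.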

6. LFU: The page that is least frequently (least intensively) referenced is replaced. The main drawback is that a page that has only just been brought in may be replaced, simply because it has not yet accumulated much use.

7. MFU: Replaces the page with the highest reference count, i.e. the page which is most frequently used. It is based on the argument that the page with the smallest count was probably just brought in and has yet to be used.

Thrashing
The O.S monitors CPU utilization. If CPU utilization is too low, the OS increases the degree of multiprogramming by introducing a new process to the system. Suppose the page replacement algorithm replaces pages without regard to the process to which they belong, and a process enters a new phase in its execution and needs more frames. It starts faulting and taking frames away from other processes. Those processes need the pages taken, however, so they also fault, taking frames from yet other processes. All these faulting processes must use the paging device to swap pages in and out; as they queue up and wait for the paging device, CPU utilization decreases.
The CPU scheduler sees the decreasing CPU utilization and increases the degree of multiprogramming, producing even more page faults. This high paging activity leaves the CPU idle; this unexpected situation is called thrashing.
