3.0 Memory Management Notes
THE TECHNIQUES:
(i) (Single-user scheme) Swapping:
Swapping is a memory allocation technique that interchanges the contents of an area of main
memory with the contents of an area of secondary storage (e.g. disk). It involves moving part or
all of a process from main memory to disk. When none of the processes in main memory is in a
ready state, the O/S swaps out one of the blocked processes.
Once the O/S has swapped a process out, it can admit a newly created process, or it can bring a
suspended process back in if it is now ready to run (swapping in). For swapping to be possible,
there must be a large backing store (e.g. disk) that can accommodate copies of the memory
images of all users, and it must provide direct access to those memory images.
Figure – Memory allocation with variable partitions
a) Fixed Partitioning:
The main memory available for use by multiple processes is partitioned into a static number of
regions with fixed boundaries. Here, static means the partitions do not change as the system runs.
A process may be loaded into a partition whose capacity is equal to or greater than the size of
the process.
There are two alternatives for fixed partitioning:
Use of equal size partitions
Use of unequal size partitions
Use of Equal Size Partitions:
Any process whose size is less than or equal to the partition size can be loaded into any available
partition. If all partitions are full and no process in memory is ready or running, the O/S can
swap a process out of any partition and load in another process, so that there is some work for
the processor. This approach has the following difficulties:
A program may be too big to fit into a partition. The programmer must then design the
program with the use of overlays, so that only a portion of the program needs to be in
memory at any one time. When a module is needed that is not present in memory, the
user program must load that module into the program's partition, overlaying whatever
program or data was there.
Main memory utilization is extremely inefficient; any program, no matter how small it
may be, occupies a whole partition. There is wasted space internal to a partition whenever
the program/block of data is smaller than the partition. This is called Internal
Fragmentation.
NB: Both of these problems can be reduced by using unequal size partitions.
Applications:
An example of a successful O/S that used this technique is IBM's mainframe O/S called
OS/MFT (Multiprogramming with a Fixed number of Tasks).
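As a rough illustration, here is a minimal Python sketch of equal size fixed partitioning. The partition size, count, and process names are hypothetical, chosen only for illustration; note how the leftover space inside each occupied partition is exactly the internal fragmentation described above:

    PARTITION_SIZE = 100          # hypothetical equal partition size, in KB
    NUM_PARTITIONS = 4

    # None marks a free partition; otherwise the slot holds (name, size)
    partitions = [None] * NUM_PARTITIONS

    def load(name, size):
        """Load a process into the first free partition, if it fits."""
        if size > PARTITION_SIZE:
            raise ValueError(f"{name} needs overlays: {size} KB > partition size")
        for i, slot in enumerate(partitions):
            if slot is None:
                partitions[i] = (name, size)
                # wasted space inside the partition = internal fragmentation
                print(f"{name} in partition {i}, internal fragmentation "
                      f"{PARTITION_SIZE - size} KB")
                return i
        raise MemoryError("all partitions full; the O/S must swap one out")

    load("A", 90)   # 10 KB wasted
    load("B", 15)   # 85 KB wasted: a whole partition for a tiny program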
b) Dynamic Partitioning:
The partitions are of variable length and number, which keep changing as the system runs
(dynamically), according to the memory requirement of each process. When a process is brought
into main memory, it is allocated exactly as much memory as it requires, and no more.
As time goes on, a situation arises in which there are several small spaces (holes) in
memory into which no process can fit. The memory becomes more and more fragmented and
memory utilization decreases. This is described as External Fragmentation, i.e. the memory
external to the partitions becomes increasingly fragmented.
Disadvantages:
Presence of unoccupied holes in the main memory.
Inefficient use of the processor, as it gets engaged in the compaction process to overcome
external fragmentation, i.e. time wasting.
Disadvantages of performing compaction at intervals:
The O/S keeps checking on the job-waiting queue, and this takes time.
Long intervals lead to long waits for jobs on the queue before compaction creates
space for them.
Short intervals may waste a lot of time in case there is no job waiting, as the CPU gets
engaged in compaction that is not required at that time.
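Compaction itself simply slides the occupied regions together so that the scattered holes coalesce into one. A minimal Python sketch follows; the block layout and process names are hypothetical, for illustration only:

    # memory as a list of (kind, name, size) segments; kind is "P" or "H"
    memory = [("P", "A", 30), ("H", None, 10), ("P", "B", 20),
              ("H", None, 25), ("P", "C", 15)]

    def compact(segments):
        """Slide all processes to low addresses; merge holes into one at the end."""
        processes = [s for s in segments if s[0] == "P"]
        free = sum(size for kind, _, size in segments if kind == "H")
        return processes + [("H", None, free)]

    print(compact(memory))
    # [('P','A',30), ('P','B',20), ('P','C',15), ('H',None,35)]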
NB: Apart from the above techniques, the O/S may consider the following placement
policies when choosing a hole for a process (compared in the sketch below):
Placement Policies
First fit: places the program in the first hole it finds that is big enough. This is fast but
wastes memory.
Best fit: places a program where it will waste the least space. This saves memory but is
time consuming, since the whole list must be searched.
Worst fit: rarely used, since it places a program where it will waste the most space.
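A minimal Python sketch comparing the three policies on a hypothetical list of hole sizes (illustrative only; a real allocator would also track hole addresses):

    holes = [120, 40, 300, 60]   # hypothetical free hole sizes, in KB
    request = 50

    def first_fit(holes, n):
        return next((i for i, h in enumerate(holes) if h >= n), None)

    def best_fit(holes, n):
        fits = [(h, i) for i, h in enumerate(holes) if h >= n]
        return min(fits)[1] if fits else None   # smallest adequate hole

    def worst_fit(holes, n):
        fits = [(h, i) for i, h in enumerate(holes) if h >= n]
        return max(fits)[1] if fits else None   # largest hole

    print(first_fit(holes, request))  # 0: the 120 KB hole, 70 KB left over
    print(best_fit(holes, request))   # 3: the 60 KB hole, only 10 KB left over
    print(worst_fit(holes, request))  # 2: the 300 KB hole, 250 KB left over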
Advantages of segmentation:
With segmentation, a program may occupy more than one partition, and these partitions
need not be contiguous.
Segmentation eliminates internal fragmentation, though it suffers a little external
fragmentation.
Segmentation is usually visible to the programmer and is provided as a convenient tool for
organizing programs and data. The programmer/compiler will assign programs and data to
different segments, but must be aware of the maximum segment length.
Disadvantages of segmentation:
Since segments are of unequal size, there is no simple relationship between
logical and physical addresses.
The memory manager needs to keep track of the segments and how they are allocated. This
is an extra task, done using a segment map table (segment table) per process: for each
segment, it gives the starting address in memory of that segment (see the sketch below).
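A minimal Python sketch of address translation through such a segment table (the base addresses and lengths here are hypothetical): a logical address is a (segment number, offset) pair, and the table supplies the segment's base address and length:

    # hypothetical segment table: segment number -> (base address, length)
    segment_table = {0: (1000, 400), 1: (6300, 200), 2: (90, 60)}

    def translate(segment, offset):
        """Map a logical (segment, offset) address to a physical address."""
        base, length = segment_table[segment]
        if offset >= length:
            raise MemoryError("segment violation: offset beyond segment length")
        return base + offset

    print(translate(1, 53))   # 6353 = base 6300 + offset 53
    # translate(2, 70) would raise: offset 70 exceeds the 60-byte segment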
v) Overlays:
When a program is too big to fit in the available memory, the software splits the program into
pieces called overlays. The overlays are kept on the disk and are swapped in and out of
memory by the O/S dynamically, as needed.
Overlaying is a technique in which the program and its data are organized such that
several modules can be assigned the same region of memory, with the program being
responsible for switching modules in and out as needed.
Disadvantages:
Inefficient since it wastes programmer’s time.
Has a low throughput.
Only one process can be in memory at a time.
Disadvantages of paging:
Increased H/W costs, as it requires the use of cache memory (associative memory).
Increased overhead for handling paging interrupts (page faults).
Increased S/W complexity to prevent thrashing (a situation in virtual memory in which
the processor spends most of its time swapping pieces of programs in and out rather than
executing instructions).
CACHE MEMORY:
This is memory that is smaller and faster than main memory, placed between the main
memory and the processor. It acts as a buffer for recently used memory words and makes the
CPU's accesses to memory faster.
PAGE REPLACEMENT ALGORITHMS:
When a page fault occurs, the operating system has to choose which page to remove from
memory to make room for the page that has to be brought in. If the page to be removed has been
modified while in memory, it must be rewritten to the disk to bring the disk copy up to date. If
the page has not been modified, there is no need to rewrite it; the page to be read in simply
overwrites the page being evicted.
It is possible to choose the page to be removed randomly, though system performance is much
better if a page that is not heavily used is chosen. If a heavily used page is removed it will
probably be brought back in quickly, resulting in extra overhead.
The Optimal Page Replacement Algorithm:
At the moment a page fault occurs, some page in memory will be referenced on the very
next instruction; other pages may not be referenced until much later. Each page can be labeled
with the number of instructions that will be executed before that page is first referenced.
The optimal algorithm says that the page with the highest label should be removed. The problem
with this algorithm is that it is unrealizable: at the point of the page fault, the operating system
has no way of knowing when each of the pages will be referenced next. By running a program on
a simulator and keeping track of all page references, it is possible to implement the optimal page
replacement algorithm on a second run, using the page reference information collected during the
first run. This method is not used in practical systems, but it serves as a benchmark against
which realizable algorithms can be measured.
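A minimal Python sketch of the optimal policy as it would run in such a simulation (the frames and reference string are hypothetical; a real O/S never has this future knowledge):

    def optimal_evict(frames, future_refs):
        """Evict the page whose next use lies farthest in the future."""
        def next_use(page):
            # pages never referenced again get an infinite label
            return future_refs.index(page) if page in future_refs else float("inf")
        return max(frames, key=next_use)

    frames = ["A", "B", "C"]
    future = ["B", "A", "B", "C", "A"]    # upcoming references, known only in simulation
    print(optimal_evict(frames, future))  # "C": its first reference is farthest away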
Most computers with virtual memory have two status bits associated with each page for
collecting useful statistics: R is set whenever the page is referenced (read or written) and M is
set when the page is written to (modified). The bits are contained in each page table entry. It is
essential that once a bit has been set to 1, it stays at 1 until the operating system resets it to 0.
The NRU (Not Recently Used) Algorithm:
When a process is started up, both page bits for all its pages are set to 0 by the operating system.
Periodically (e.g. on each clock interrupt), the R bit is cleared, to distinguish pages that have
been referenced recently from those that have not been. When a page fault occurs, the operating
system inspects all the pages and divides them into four classes based on the current values of
the R and M bits:
Class 0: not referenced, not modified
Class 1: not referenced, modified (arises when a clock interrupt clears the R bit of a
modified page)
Class 2: referenced, not modified
Class 3: referenced, modified
The NRU algorithm removes a page at random from the lowest numbered non-empty class.
NRU is easy to understand, efficient to implement, and gives performance that, while not
optimal, is often adequate.
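A minimal Python sketch of NRU over hypothetical (R, M) bits:

    import random

    # hypothetical pages: name -> (R, M) bits
    pages = {"A": (1, 1), "B": (0, 1), "C": (1, 0), "D": (0, 1)}

    def nru_evict(pages):
        """Pick a random page from the lowest numbered non-empty class."""
        def klass(bits):
            r, m = bits
            return 2 * r + m          # yields classes 0..3 as defined above
        lowest = min(klass(b) for b in pages.values())
        candidates = [p for p, b in pages.items() if klass(b) == lowest]
        return random.choice(candidates)

    print(nru_evict(pages))  # "B" or "D": both are class 1 (not referenced, modified)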
The FIFO (First-In, First-Out) Algorithm:
By way of analogy, consider a supermarket with shelf space for a fixed number of products.
When a new product arrives, one possibility is to find the product that the supermarket has been
stocking the longest and get rid of it, on the grounds that no one is interested in it any more. To
do this, the supermarket maintains a list of all the products it currently sells, in the order they
were introduced.
The same idea is applicable as a page replacement algorithm. The operating system maintains a
list of all pages currently in memory, with the page at the head of the list the oldest one and the
page at the tail the most recent arrival. On a page fault, the page at the head is removed and the
new page is added to the tail of the list. In this pure form, FIFO is not a good algorithm, because
there is a chance of removing a heavily used page.
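A minimal Python sketch of pure FIFO replacement (the frame count and reference string are hypothetical):

    from collections import deque

    def fifo_run(refs, num_frames):
        """Simulate FIFO page replacement; return the number of page faults."""
        frames = deque()              # head = oldest page, tail = newest
        faults = 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == num_frames:
                    frames.popleft() # evict the oldest, even if heavily used
                frames.append(page)
        return faults

    # 5 faults: the heavily used page A is evicted and faults again
    print(fifo_run(["A", "B", "C", "A", "D", "A"], 3))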
The Second Chance Algorithm:
A simple modification to FIFO that avoids the problem of throwing out a heavily used page is to
inspect the R bit of the oldest page. If it is 0, the page is both old and unused, so it is replaced. If
the R bit is 1, the bit is cleared, the page is put at the end of the list of pages, and its load time
is updated as though it had just arrived in memory. The search then continues.
The algorithm can be illustrated with the following example. Pages A through H are kept on a
linked list, sorted by the time they arrived in memory. Suppose that a page fault occurs at time 20.
The oldest page is A, which arrived at time 0, when the process started. If A has its R bit
cleared, it is evicted from memory. On the other hand, if the R bit is set, A is put onto the end of
the list and its load time is reset to the current time (20). The R bit is also cleared, and the search
for a suitable page continues with B.
Figure 3.10 – Second chance: pages A through H on a list sorted by arrival time
Second chance thus looks for an old page that has not been referenced in the previous clock
interval. If all the pages have been referenced, second chance degenerates into pure FIFO. Suppose
that all the pages have their R bits set. One by one the operating system moves the pages to the
end of the list and clears their R bit. Eventually, it comes back to page A, which now has its R bit
cleared. At this point A is evicted. Thus the algorithm always terminates.
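A minimal Python sketch of second chance (the pages and R bits are hypothetical); the queue order stands in for load time, oldest at the left:

    from collections import deque

    def second_chance_evict(pages):
        """pages: deque of [name, r_bit], oldest at the left. Returns the evicted name."""
        while True:
            name, r = pages[0]
            if r == 0:
                pages.popleft()          # old and unused: evict it
                return name
            pages[0][1] = 0              # give it a second chance:
            pages.rotate(-1)             # clear R and move it to the tail

    queue = deque([["A", 1], ["B", 0], ["C", 1]])
    print(second_chance_evict(queue))    # "B": A's R bit was set, so A was spared
    print(list(queue))                   # [['C', 1], ['A', 0]]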
The LRU (Least Recently Used) Algorithm:
A good approximation to the optimal algorithm is based on the observation that pages that have
been heavily used in the last few instructions will probably be heavily used again in the next few
instructions. Conversely, pages that have not been used for ages will probably remain unused for
a long time. This observation suggests a realizable algorithm: when a page fault occurs, throw
out the page that has been unused for the longest time. This strategy is called LRU (Least
Recently Used) paging.
To implement LRU, it is necessary to maintain a linked list of all pages in memory, with the
most recently used page at the front and the least recently used page at the rear.
The problem is that the list must be updated on every memory reference. Finding a page in the
list, deleting it, and then moving it to the front is a very time consuming operation.
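A minimal Python sketch of LRU, using an ordered dictionary in place of the linked list (the frame count and reference string are hypothetical); move_to_end performs the delete-and-reinsert step in constant time:

    from collections import OrderedDict

    def lru_run(refs, num_frames):
        """Simulate LRU; the front of the OrderedDict is the least recently used."""
        frames = OrderedDict()
        faults = 0
        for page in refs:
            if page in frames:
                frames.move_to_end(page)       # mark as most recently used
            else:
                faults += 1
                if len(frames) == num_frames:
                    frames.popitem(last=False) # evict the least recently used
                frames[page] = True
        return faults

    # 4 faults: B is evicted, not the heavily used A (compare FIFO above)
    print(lru_run(["A", "B", "C", "A", "D", "A"], 3))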
b) Segmentation:
A segment in virtual memory is a block that has a virtual address. The blocks of a program may
be of unequal length, and may even have dynamically varying lengths.
Memory Management with Bit Maps and Free Lists:
When memory is allocated dynamically, the operating system must manage it. There are two ways to
keep track of memory usage: bit maps and free lists. With bit maps, memory is divided into allocation
units. Corresponding to each allocation unit is a bit in the bit map, which is 0 if the unit is free and 1 if it
is occupied.
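A minimal Python sketch of bit map allocation (the map contents are hypothetical): to allocate k units, the manager searches the map for a run of k consecutive 0 bits:

    bitmap = [1, 1, 0, 0, 0, 1, 0, 0]   # one bit per allocation unit

    def allocate(bitmap, k):
        """Find a run of k free (0) units, mark them used, return the start index."""
        run = 0
        for i, bit in enumerate(bitmap):
            run = run + 1 if bit == 0 else 0
            if run == k:
                start = i - k + 1
                bitmap[start:i + 1] = [1] * k
                return start
        return None                      # no hole of k consecutive units exists

    print(allocate(bitmap, 3))  # 2: units 2-4 were free
    print(bitmap)               # [1, 1, 1, 1, 1, 1, 0, 0]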
Another way of keeping track of memory is to maintain a linked list of allocated and free memory
segments, where a segment is either a process or a hole between two processes. The memory of fig 22(a)
is represented in fig 22(c) as a linked list of segments. Each entry in the list specifies a hole (H) or a
process (P), the address at which it starts, the length, and a pointer to the next entry. In this example, the
segment list is kept sorted by address.
When the processes and holes are kept on a list sorted by address, several algorithms can be used to
allocate memory for a newly created or swapped in process. We assume that the memory manager knows
how much memory to allocate.
The simplest algorithm is first fit. The memory manager scans along the list of segments until it finds a
hole that is big enough. The hole is then broken up into two pieces, one for the process and one for the
unused memory, except in the unlikely case of a perfect fit.
A variation of first fit is next fit. It works the same way as first fit, except that it keeps track of where it
is whenever it finds a suitable hole. The next time it is called to find a hole, it starts searching the list
from the place where it left off last time, instead of from the beginning, as first fit does.
The best fit algorithm searches the entire list and takes the smallest hole that is adequate. Rather than
breaking up a big hole that may be needed later, best fit tries to find a hole that is close to the actual size
needed. Best fit is slower than first fit, and it also results in more wasted memory, because it tends to fill
memory with tiny, useless holes.
To get around the problem of breaking up nearly exact matches into a process and a tiny hole, one could
think about worst fit, i.e. take the largest available hole, so that the hole broken off will be big enough to
be useful.
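A minimal Python sketch of first fit over such a segment list, including the hole splitting step (the initial layout is hypothetical):

    # segment list sorted by address: each entry is [kind, start, length]
    segments = [["P", 0, 50], ["H", 50, 30], ["P", 80, 40], ["H", 120, 100]]

    def first_fit_alloc(segments, size):
        """Place a process in the first adequate hole, splitting off the remainder."""
        for i, (kind, start, length) in enumerate(segments):
            if kind == "H" and length >= size:
                segments[i] = ["P", start, size]
                if length > size:    # split: the leftover stays a (smaller) hole
                    segments.insert(i + 1, ["H", start + size, length - size])
                return start
        return None                  # no hole is big enough

    print(first_fit_alloc(segments, 20))  # 50: the 30-unit hole at address 50
    print(segments)
    # [['P',0,50], ['P',50,20], ['H',70,10], ['P',80,40], ['H',120,100]]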