3.0 Memory Management Notes

Memory management in computers involves the organization and allocation of main and secondary memory, ensuring efficient utilization of limited and volatile main memory. The Memory Manager oversees tasks such as tracking memory usage, allocating space for programs, and managing multiple processes in multiprogramming systems. Various memory allocation techniques, including swapping, partitioning, paging, segmentation, overlays, and virtual memory, each have their advantages and disadvantages in optimizing memory use and performance.


MEMORY MANAGEMENT:

Memory in computers is divided into main memory and secondary memory. Main memory holds what is currently being processed, while secondary memory serves as a backing store for the rest of the data and programs. Main memory is limited, expensive and volatile, and so must be well utilized.

Main memory stores the kernel part of the O/S, which permanently occupies a certain region; the rest of the memory is used to hold the programs currently being executed, and hence is called the "User Part". In uniprogramming systems, main memory is subdivided into only two parts, i.e. one for the O/S and the other for the user (a single program).

In multiprogramming systems, the user part must be further subdivided to accommodate multiple processes. The part of the O/S that manages main memory, i.e. the Memory Manager, carries out the task of subdividing the user part of main memory dynamically, while preserving the space in main memory occupied by the O/S itself.

Functions of the Memory Manager:


 Keeping track of which parts of memory are in use and which are not.
 Allocating memory to programs and de-allocating it when they are done.
 Keeping the CPU busy most of the time by allowing several processes to run
concurrently.
NB: Memory management is intended to satisfy certain requirements, and these are the objectives of
memory management.

Requirement/Objective/Functions of Memory Management:


(i) Sharing:
The memory management system allows controlled access to shared areas of memory without violating the protection of individual processes. Sharing may require some relocation, and the mechanisms used for relocation and protection should not interfere with it.
(ii) Relocation:
Memory management systems should allow programs to be moved about (relocated) in memory. This is necessary because it is not possible to know in advance the region of memory in which a program will execute.
(iii) Protection:
Each process should be protected against unwanted interference by other processes, since they share the available memory; i.e., programs in other processes should not be able to reference memory locations in a process, for reading or writing, without permission.

(iv) Physical Organization:


It is the responsibility of memory management to ensure there is order in the organization of the flow of information between main memory and secondary storage.

(v) Logical Organization:


Main memory is organized as a linear array of locations, or a one-dimensional address space, consisting of a sequence of bytes or words.

Memory allocation Techniques:


The principal task of memory management is to bring programs into main memory for the processor to execute, allocate them memory space, and de-allocate it when they are done.

THE TECHNIQUES:
(i) (Single-user scheme) Swapping:
Swapping is a memory allocation technique that interchanges the contents of an area of main memory with the contents of an area of secondary storage (e.g. disk). It involves moving part or all of a process from main memory to disk. When none of the processes in main memory is in a ready state, the O/S swaps out one of the blocked processes.
Once a process has been swapped out, the O/S can admit a newly created process, or it can bring in a suspended process if it is now ready to run (swapping in). For swapping to be possible, there must be a large backing store, e.g. a disk, able to accommodate copies of all memory images for all users, and it must provide direct access to those memory images.
Figure – Memory allocation with variable partitions

(ii) Partitioned Allocation Schemes:


Memory is divided into partitions (regions), each region holding one program at a time for execution; thus the degree of multiprogramming is limited to the number of regions.
When a region is free, a program is selected from the job queue and loaded into that free region of main memory. When it terminates, the region becomes available for another program to occupy and execute. In this scheme, two algorithms are used:
 Fixed partitioning
 Dynamic partitioning

a) Fixed Partitioning:
The main memory available for use by multiple processes is partitioned into a static number of regions with fixed boundaries. Here, static means they do not change as the system runs. A process may be loaded into any partition whose capacity is equal to or greater than the size of the process.
There are two alternatives for fixed partitioning:
 Use of equal size partitions
 Use of unequal partitions
Use of Equal Partitions:
Any process whose size is less than or equal to the partition size can be loaded into any available partition. If all partitions are full and no process is ready or running, the O/S can swap a process out of any partition and load in another, so that there is some work for the processor. The approach has the following difficulties:
 A program may be too big to fit into a partition. The programmer must then design the program with the use of overlays, so that only a portion of the program needs to be in memory at any one time. When a module is needed that is not present in memory, the user program must call/load that module into the program's partition, overlaying whatever program or data was there.
 Main memory utilization is extremely inefficient; any program, no matter how small, occupies a whole partition. There is wasted space internal to the partition whenever a program or block of data is smaller than the partition. This is called Internal

Fragmentation.
NB: Both of these problems can be reduced through the use of unequal-size partitions.

Use of Unequal Size Partitions:


The fixed partitions vary in size. There are two ways to assign processes to partitions:
 Each process is assigned to the smallest partition within which it will fit, so processes are always assigned in such a way as to minimize internal fragmentation. However, when all suitable partitions are occupied except one very large one, a small process that wants to come in has to wait, and that large partition remains idle/unutilized.
 Use of a single queue for all processes. When it is time to load a process into memory, the smallest available partition that will hold it is selected. If all partitions are occupied, then swap out a process from the smallest partition that will hold the incoming process, considering factors such as priority, and preferring to swap blocked processes rather than ready ones.
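The first policy above can be sketched in a few lines. The partition and process sizes here are illustrative, not taken from the notes.

```python
def best_partition(partitions, free, size):
    """Return the index of the smallest free partition that can hold a
    process of the given size, or None if no free partition fits."""
    candidates = [i for i, cap in enumerate(partitions) if free[i] and cap >= size]
    if not candidates:
        return None
    return min(candidates, key=lambda i: partitions[i])

partitions = [2, 4, 8, 16]           # fixed partition sizes (e.g. MB)
free = [True, True, True, True]

idx = best_partition(partitions, free, 3)   # a 3 MB process: smallest fit is 4 MB
free[idx] = False                           # mark that partition occupied
```

A process larger than every partition gets `None` back; in the scheme described above such a process would have to be restructured with overlays.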

Advantages (Strengths) of Fixed Partitions:


 It is simple.
 Requires little O/S software and processing overhead.

Disadvantages (Weakness) of Fixed Partitions:


 Only a number of active programs equal to the number of partitions can exist in memory, because the partitions in the system are already specified (fixed).
 Small jobs will not utilize partition space efficiently (the problem of internal fragmentation).
 Inefficient use of the main memory.

Applications:
An example of a successful O/S that used this technique is IBM's mainframe O/S called OS/MFT (Multiprogramming with a Fixed number of Tasks).

b) Dynamic Partitioning:
The partitions are of variable length and number, which keep changing as the system runs (dynamically), according to the memory requirement of each process. When a process is brought into main memory, it is allocated exactly as much memory as it requires, and no more.
As time goes on, a situation arises in which there are several small spaces (holes) in memory into which no process can fit. The memory becomes more fragmented and memory utilization decreases. This is described as External Fragmentation, i.e. the memory external to the partitions becomes increasingly fragmented.

Advantages (Strengths ) of dynamic Partitioning:


 No internal fragmentation.
 More efficient use of the main memory.

Disadvantages:
 Presence of unoccupied holes in the main memory.
 Inefficient use of the processor, as it gets engaged in the compaction process to overcome external fragmentation, i.e. time is wasted.

c) Relocatable Dynamic Partitioning


One technique for overcoming external fragmentation is Compaction, also called garbage collection: the O/S keeps shifting the processes so that they are contiguous (next to each other), leaving the free space together in one block, which may later be occupied by a process. However, memory compaction is time consuming and wastes processor time.
The Process of Compaction:
 Relocate every program in memory so that they are contiguous; this gathers the free space.
 Change every address, and every reference to an address, within each program.
 Leave the data unchanged.
To achieve this, the O/S uses special registers (temporary storage in the CPU). These are:
 Relocation register: stores the value that must be added to each address referred to in the program.
 Bounds register: stores the highest, or in some cases the lowest, location in memory accessible by each program during execution.
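The way these two registers work together can be sketched as follows; the register values are illustrative.

```python
def translate(logical_addr, relocation, bounds):
    """Map a logical address to a physical one by adding the relocation
    register, trapping if the result exceeds the bounds register."""
    physical = logical_addr + relocation
    if physical > bounds:
        raise MemoryError("address out of bounds")  # hardware trap in a real CPU
    return physical

# A program loaded at base 4000 with its highest legal location at 5000:
translate(100, relocation=4000, bounds=5000)   # logical 100 -> physical 4100
```

After compaction moves the program, only the relocation register needs updating; the logical addresses inside the program stay the same.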

When is Compaction Done:


 When there is massive/maximum memory wastage. This is the best time to do compaction.
 When there are jobs waiting and they can fit in the sum total of the available partitions.

Disadvantages:
 The O/S keeps checking the job-waiting queue, and this takes time.
 Long intervals lead to long waits for jobs on the queue before compaction creates space for them.
 Short intervals may waste a lot of time in case there is no job waiting, as the CPU gets engaged in compaction that is not required at the time.

NB: Apart from the above techniques, the O/S may consider the following policies:
Compaction Policies
 First fit: places the program where it first finds space to fit. This is fast but wastes memory.
 Best fit: places a program where it will waste the least space. This saves memory but is time consuming.
 Worst fit: rarely used, since it places a program where it will waste the most space.

(iii) Paging (Simple Paging):


The main memory is subdivided into many small, equal, fixed-size parts known as Frames (page frames). Each process is likewise divided into small fixed-size parts, known as Pages, of the same size as the frames.
A frame can hold one page of data. Smaller processes require fewer pages, while larger processes require more. Paging allows a program to be non-contiguous, so a program can be allocated memory wherever it is available.
Before executing a program, the memory manager prepares it by:
 Determining the number of pages in the program.
 Locating enough empty frames in memory.
 Loading all of the program's pages into those frames.
When the program is prepared for loading, its pages are numbered in logical sequence (the concept of the logical address), i.e. the first pages hold the first lines of the program and the last pages hold the last lines. The loading process, however, differs from the previous schemes, since the pages do not have to be loaded into adjacent memory blocks.
The O/S maintains a page table (page map table) for each process, and a memory map table. Each active job has its own page table, which contains vital information for each page: it shows the frame location for each page number, sequentially. A logical address is the location of a word relative to the beginning of a program, which the processor translates into a physical address.
With paging, logical-to-physical address translation is still done by the processor. Presented with a logical address (page number and offset), the processor uses the page table to produce a physical address (frame number and offset).
A page table contains one entry for each page of the process, so that the page number easily indexes the table. Each page table entry contains the number of the frame in main memory holding that page. The memory map table has one entry for each frame, showing the location of the frame and its status as either free or busy.
Advantages of Paged Memory Allocation:
 Main memory is used more efficiently, as programs can be stored in non-contiguous locations.
 Any page of any job can occupy any empty page frame.
 Compaction is eliminated, since there is no external fragmentation.
Limitations:
 Because job pages can be located anywhere in memory, the manager needs a mechanism to keep track of them. This enlarges the size and complexity of the O/S and increases overhead.
 There is a small amount of internal fragmentation, in the last frame of each job.

(iv) Segmentation (Simple segmentation):


Segmentation is based on the structuring of programs into modules (logical groupings of code). The program and its data are divided into several segments of different sizes, one for each module. There is a maximum segment length.
Memory is not divided into frames, because the size of each segment is different (some are large and others small); hence memory is allocated in a dynamic manner.

Advantages:
 With segmentation, a program may occupy more than one partition, and these need not be contiguous.
 Segmentation eliminates internal fragmentation, but suffers some external fragmentation.
 Segmentation is usually visible to the programmer and is provided as a convenient tool for organizing programs and data. The programmer/compiler will assign programs and data to different segments, but must be aware of the maximum segment length.
Disadvantages:
 Since segments are of unequal size, there is no simple relationship between the logical and physical address.
 The memory manager needs to keep track of the segments and how they are allocated. This is an extra task, done using a segment map table (segment table) whose entry for each segment gives the starting address in memory of the corresponding segment.

v) Overlays:
When a program is too big to fit in the available memory, the software splits the program into pieces called Overlays. The overlays are kept on disk and swapped in and out of memory by the O/S dynamically, as needed.
Overlaying is a technique in which the program and its data are organized such that several modules can be assigned the same region of memory, with the program being responsible for switching modules in and out as needed.

Disadvantages:
 Inefficient since it wastes programmer’s time.
 Has a low throughput.
 Only one process can be in memory at a time.

(vi) Virtual Memory:


Virtual memory gives the user the impression that whole programs are loaded in main memory during their entire processing time, while in reality only a portion of each is stored there. The basic idea is that the combined size of the program and its data may exceed the amount of physical memory available for it.
Virtual memory is storage space that may be regarded as addressable memory by the user, in which virtual addresses are mapped onto real memory. The O/S keeps the parts of the program currently in use in main memory and the rest on disk. For example, a 16 MB program can run in a 4 MB memory by carefully choosing which 4 MB of the program to keep in main memory at each instant, with the other parts of the program being swapped between main memory and disk as needed.
Virtual memory works well in multiprogramming environment, since most programs
spend a lot of time waiting (E.g. for I/O pages to be swapped in or out) and in time-sharing, they
wait for their time slice (Quantum). While waiting the CPU can be given to another process.
Advantages:
 A job's size is no longer restricted to the size of main memory or the free space available in it.
 Memory is used more efficiently, because the only sections of a job stored in memory are those needed immediately, while those not needed remain in secondary storage.
 Allows a high degree of multiprogramming (which can apply to many jobs or users in a time-sharing environment).
 It eliminates external fragmentation and minimizes internal fragmentation by combining segmentation and paging.
 It allows sharing of code and data.
 Facilitates dynamic linking of program segments.

Disadvantages:
 Increased H/W costs: it requires hardware support such as associative memory for fast address mapping.
 Increased overhead for handling paging interrupts.
 Increased S/W complexity to prevent thrashing (a situation in virtual memory in which the processor spends most of its time swapping pieces of programs rather than executing instructions).

CACHE MEMORY;
Cache memory is memory that is smaller and faster than main memory, placed between main memory and the processor. It acts as a buffer for recently used memory contents and speeds up the CPU's memory accesses.

Techniques Used in Virtual Memory:


a) Paging:
A page in virtual storage is a fixed-length block that has a virtual address and that is transferred as a unit between main memory and secondary storage. A page fault occurs when a page containing a referenced block of data (word) is not in main memory.
This causes an interrupt (a paging interrupt) that requires the proper page to be brought into main memory. In virtual memory, virtual addresses do not go directly to the memory; instead they go to the Memory Management Unit (MMU).
The MMU is a collection of chips that maps virtual addresses into physical addresses. The virtual address space is divided into units called pages; the corresponding units in physical memory are called page frames.
The concept of page tables is used, and to avoid having huge tables in memory all the time, many computers use multilevel page tables. The entry located by indexing into the top-level page table yields the address of the page frame. The disk address used to hold the page when it is not in memory is not part of the page table.

Page Replacement Algorithms

When a page fault occurs, the operating system has to choose which page to remove from the
memory to make room for the page that has to be brought in. If the page to be removed has been
modified while in memory, it must be rewritten to the disk to bring the disk copy up to date. If
the page has not been modified there is no need to rewrite it. The page to be read in just
overwrites the page being evicted.

It is possible to choose the page to be removed randomly, though system performance is much
better if a page that is not heavily used is chosen. If a heavily used page is removed it will
probably be brought back in quickly, resulting in extra overhead.

i) Optimal Page Replacement Algorithm

At the moment that a page fault occurs, one of the pages in memory will be referenced on the
next instruction. Other pages may not be referenced until later on. Each page can be labeled with
the number of instructions that will be executed before the page is first referenced.
This algorithm says that the page with the highest label should be removed. The problem with
this algorithm is that it is not realizable. At the point of the page fault the operating system has
no idea of when each of the pages will be referenced next. By running a program on a simulator
and keeping track of all page references, it is possible to implement optimal page replacement
algorithm on the second run by using the page reference information collected during the first
run. This method is not used in practical systems.
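As a sketch only (the algorithm is not realizable in practice, for the reason given above), the victim choice can be simulated when the future reference string is known, e.g. from a first run on a simulator. The page and reference values here are illustrative.

```python
def optimal_victim(frames, future_refs):
    """Pick the resident page whose next reference lies farthest in the
    future; a page never referenced again is the ideal victim."""
    def next_use(page):
        try:
            return future_refs.index(page)
        except ValueError:
            return float('inf')   # never used again
    return max(frames, key=next_use)

# Frames hold pages 1, 2, 3; the upcoming references are 2, 1, 2, 4.
# Page 3 is never referenced again, so it is the victim.
optimal_victim([1, 2, 3], [2, 1, 2, 4])   # -> 3
```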

ii) Not recently Used (NRU) Page Replacement Algorithm

Most computers with virtual memory have two status bits associated with each page for
collecting useful statistics. R is set when the page is referenced (read) and M when the page is
written to (modified). The bits are contained in each page table entry. It is essential that once a
bit has been set to 1, it will stay until the operating system resets it to 0.

When a process is started up, both page bits for all its pages are set to 0 by the operating system.
Periodically, the R bit is cleared, to distinguish pages that have been referenced recently from those that have not been. When a page fault occurs, the operating system inspects all the pages and
divides them into four categories based on the current values of the R and M bits.

Class 0: not referenced, not modified
Class 1: not referenced, modified
Class 2: referenced, not modified
Class 3: referenced, modified

The NRU algorithm removes a page at random from the lowest numbered non-empty class.
NRU is easy to understand, efficient to implement, and gives performance that, while not optimal, is often adequate.
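The classification and victim choice above can be sketched in a few lines; the pages and their R and M bits are illustrative values.

```python
import random

def nru_victim(pages):
    """pages maps each resident page to its (R, M) bits. Compute the
    class number as 2*R + M and evict a random page from the
    lowest-numbered non-empty class."""
    classes = {}
    for page, (r, m) in pages.items():
        classes.setdefault(2 * r + m, []).append(page)
    lowest = min(classes)
    return random.choice(classes[lowest])

# Page 7 is the only class 0 page (not referenced, not modified),
# so it is chosen for eviction.
nru_victim({7: (0, 0), 8: (1, 0), 9: (1, 1)})   # -> 7
```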

iii) The First-In, First-Out (FIFO) Page Replacement Algorithm


Consider a supermarket that has enough shelves to display exactly k different products. One day,
some company introduces a new product that is an instant success. The supermarket has to get
rid of one old product in order to stock it.

One possibility is to find the product that the supermarket has been stocking longest and get rid
of it on the grounds that no one is interested in it any more. This way, the supermarket maintains
a list of all products it is currently selling in the order they were introduced.

The same idea is applicable as a page replacement algorithm. The operating system maintains a
list of all pages currently in memory, with the page at the head of the list the oldest one and the
page at the tail the most recent arrival. On a page fault, the page at the head is removed and the
new page added to the tail of the list. This is not a good algorithm, because there is a chance of removing a heavily used page.

iv) The second Chance Page Replacement Algorithm.

A simple modification to FIFO that avoids the problem of throwing out a heavily used page is to
inspect the R bit of the oldest page. If it is 0, the page is both old and unused, so it is replaced. If
the R bit is 1, the bit is cleared and the page is put at the end of the list of pages, and its load time
is updated as though it has just arrived in memory. The search continues.

The algorithm can be illustrated using the following example. Pages A through H are kept on a
linked list sorted by the time they arrived in memory. Suppose that a page fault occurs at time 20.
The oldest page is A, which arrived at time 0, when the process started. If A has the R bit
cleared, it is evicted from memory. On the other hand if the R bit is set, A is put on to the end of
the list and its load time reset to current time (20). The R bit is also cleared and the search for a
suitable page continues with B.
Figure 3.10

The second chance looks for an old page that has not been referenced in the previous clock
interval. If all have been referenced then second chance degenerates into a pure FIFO. Suppose
that all the pages have their R bits set. One by one the operating system moves the pages to the
end of the list and clears their R bit. Eventually, it comes back to page A, which now has its R bit
cleared. At this point A is evicted. Thus the algorithm always terminates.
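A minimal sketch of one victim selection under second chance, representing the list as (page, R-bit) pairs with the oldest page first; the page names are illustrative.

```python
def second_chance_victim(pages):
    """pages is a list of (page, R) pairs, oldest first. Return the
    evicted page and the remaining list. A page with R=1 gets a second
    chance: its bit is cleared and it moves to the tail as if newly
    loaded. Since bits are only cleared, the loop always terminates."""
    pages = list(pages)               # work on a copy
    while True:
        page, r = pages.pop(0)        # examine the oldest page
        if r == 0:
            return page, pages        # old and unreferenced: evict it
        pages.append((page, 0))       # clear R, move to the tail

# A's R bit is set, so it is moved to the tail and B (R=0) is evicted.
second_chance_victim([('A', 1), ('B', 0)])   # -> ('B', [('A', 0)])
```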

v) The Least recently Used (LRU) Page Replacement Algorithm

A good approximation to the optimal algorithm is based on the observation that pages that have
been heavily used in the last few instructions will probably be heavily used again in the next few
instructions. Conversely, pages that have not been used for ages will probably remain unused for
a longer time. This idea suggests a realizable algorithm: when a page fault occurs, throw out the page that has been unused for the longest time. This strategy is called LRU (Least Recently Used) paging.

To implement LRU, it is necessary to maintain a linked list of all pages in memory, with the
most recently used page at the front and the least recently used page at the rear.

The problem is that the list must be updated on every memory reference. Finding a page in the list, deleting it, and then moving it to the front is a very time consuming operation.
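The list described above can be sketched with an ordered dictionary standing in for the linked list (most recently used at the end, least recently used at the front); this is only practical here because it is a simulation, not a per-reference hardware mechanism. The reference string is illustrative.

```python
from collections import OrderedDict

def lru(reference_string, num_frames):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # re-reference: move to MRU end
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the LRU page at the front
            frames[page] = True
    return faults

# When 4 arrives, page 2 is the least recently used and is evicted.
lru([1, 2, 1, 3, 4], 3)   # -> 4 faults
```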
b) Segmentation:
A segment in virtual memory is a block that has a virtual address. The blocks of a program may be of unequal length and may have dynamically varying lengths.

OTHER TECHNIQUES OF MEMORY MANAGEMENT


1 Memory Management with Bit Maps

When memory is allocated dynamically, the operating system must manage it. There are two ways to
keep track of memory usage: bit maps and free lists. With bit maps, memory is divided into allocation
units. Corresponding to each allocation unit is a bit in the bit map, which is 0 if the unit is free and 1 if it
is occupied.

Figure 3.6 – Memory management with bit maps


The size of the allocation unit is an important design issue. The smaller the allocation unit, the larger the bit map. If the allocation unit is large, the bit map will be smaller, but appreciable memory may be wasted in the last unit if the process size is not an exact multiple of the allocation unit. A bit map provides a simple way to keep track of memory in a fixed amount of storage, because the size of the bit map depends only on the size of memory and the size of the allocation unit. The main problem with it is that when it has been decided to bring a k-unit process into memory, the memory manager must search the bit map to find a run of k consecutive 0 bits in the map. This is a slow operation.
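The slow search described above can be sketched as a linear scan for a run of k zero bits; the bit map contents are illustrative.

```python
def find_run(bitmap, k):
    """Return the start index of the first run of k consecutive free (0)
    allocation units in the bit map, or -1 if no such run exists."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i      # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0            # an occupied unit breaks the run
    return -1

# Units 2, 3 and 4 are the first three consecutive free units.
find_run([1, 1, 0, 0, 0, 1, 0], 3)   # -> 2
```

The scan is linear in the size of the bit map, which is exactly why the text calls it slow.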

3.3.2 Memory Management with Linked Lists

Another way of keeping track of memory is to maintain a linked list of allocated and free memory segments, where a segment is either a process or a hole between two processes. The memory of fig 22(a) is represented in fig 22(c) as a linked list of segments. Each entry in the list specifies a hole (H) or a process (P), the address at which it starts, the length, and a pointer to the next entry. In this example, the segment list is kept sorted by address.

When the processes and holes are kept on a list sorted by address, several algorithms can be used to
allocate memory for a newly created or swapped in process. We assume that the memory manager knows
how much memory to allocate.

The simplest algorithm is first fit. The memory manager scans along the list of segments until it finds a hole that is big enough. The hole is then broken up into two pieces, one for the process and one for the unused memory, except in the unlikely case of a perfect fit.

Another variation of first fit is next fit. It works the same way as first fit, except that it keeps track of where it is whenever it finds a suitable hole. The next time it is called to find a hole, it starts searching the list from the place where it left off last time, instead of from the beginning as with the previous algorithm.

The best-fit algorithm searches the entire list and takes the smallest hole that is adequate. Rather than
breaking up a big hole that may be needed later, best fit, tries to find a hole that is close to the actual size
needed. Best fit is slower than first fit and also results in more wasted memory because it tends to fill
memory with tiny useless holes.

To get around the problem of breaking up nearly exact matches into a process and a tiny hole, one could
think about worst fit, i.e. take the largest available hole, so that the hole broken off will be big enough to
be useful.
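The first-fit allocation and hole splitting described above can be sketched against such a segment list; the list contents are illustrative, and the pointer of the real linked list is replaced by Python list order.

```python
def first_fit(segments, size):
    """segments is a list of [kind, start, length] entries, kind being
    'H' (hole) or 'P' (process), sorted by address. Allocate `size`
    units in the first hole that is big enough, splitting it into a
    process piece and a remaining hole unless it fits exactly."""
    for i, (kind, start, length) in enumerate(segments):
        if kind == 'H' and length >= size:
            segments[i] = ['P', start, size]
            if length > size:      # leave the remainder as a smaller hole
                segments.insert(i + 1, ['H', start + size, length - size])
            return start
    return None                    # no hole large enough

mem = [['P', 0, 5], ['H', 5, 10], ['P', 15, 5]]
first_fit(mem, 4)   # -> 5; the 10-unit hole splits into P(5,4) and H(9,6)
```

Best fit would instead scan the whole list for the smallest adequate hole, and worst fit for the largest, with the trade-offs the text describes.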
