Memory Management
Prepared By:
Dr. Sanjeev Gangwar
Assistant Professor,
Department of Computer Applications,
VBS Purvanchal University, Jaunpur
Memory Management
Memory is a large array of words or bytes, each with its own unique address. The CPU fetches instructions from memory according to the value of the program counter, and these instructions pass through the instruction-execution cycle. To increase both CPU utilization and the speed of the computer's response to its users, several processes must be kept in memory. Specifically, the memory management modules are concerned with the following four functions:
1. Keeping track of whether each location is allocated or unallocated, to which process, and how much.
2. Deciding to whom memory is allocated, how much, when, and where. If memory is to be shared by more than one process concurrently, it must be determined which process's request should be satisfied.
3. Once it is decided to allocate memory, the specific locations must be selected and allocated, and the memory status information updated.
4. Handling the deallocation/reallocation of memory. After the process holding memory has finished, the memory locations it held are declared free by changing the status information.
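As an illustration of the first and fourth functions, the following C sketch (a minimal, hypothetical example, not taken from any real operating system) keeps track of the allocation status of fixed-size memory blocks using a bitmap, marking blocks allocated and declaring them free again:

    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_BLOCKS 1024   /* hypothetical number of allocatable blocks */

    /* One bit per block: 0 = free, 1 = allocated. */
    static uint8_t status[NUM_BLOCKS / 8];

    static bool is_allocated(int block)
    {
        return status[block / 8] & (1 << (block % 8));
    }

    static void set_status(int block, bool allocated)
    {
        if (allocated)
            status[block / 8] |= (uint8_t)(1 << (block % 8));
        else
            status[block / 8] &= (uint8_t)~(1 << (block % 8));
    }

    /* Allocate: find the first free block, mark it, return its index
       (or -1 if memory is full). */
    static int alloc_block(void)
    {
        for (int b = 0; b < NUM_BLOCKS; b++)
            if (!is_allocated(b)) {
                set_status(b, true);
                return b;
            }
        return -1;
    }

    /* Deallocate: declare the block free again by updating the status bit. */
    static void free_block(int block)
    {
        set_status(block, false);
    }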
There are a variety of memory management schemes; before examining them, we must consider how instructions and data are bound to memory.
Binding of Instructions and Data to Memory: the binding of instructions and data to memory addresses can be done at any of the following steps:
Compile Time: Binding at compile time generates absolute addresses, which requires knowing in advance where the process will reside in memory. If the starting location of the process in memory later changes, the entire program must be recompiled to regenerate the absolute addresses.
Load Time: If it is not known at compile time where the process will reside in memory, the compiler must generate relocatable addresses. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate the changed value.
Execution Time: This method permits moving a process from one memory segment to another during run time. In this case, final binding is delayed until run time.
Most systems allow a user process to reside in any part of physical memory. In most cases, a user program will go through several steps, some of which are optional, before being executed, as shown in figure 1. Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic. A compiler will bind these symbolic addresses to relocatable addresses, and the linkage editor or loader will in turn bind the relocatable addresses to absolute addresses. Each binding is a mapping from one address space to another.
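A minimal C sketch of execution-time binding is given below. It assumes a single hypothetical relocation (base) register; the values 14000 and 346 are chosen only for illustration. Changing the register re-binds every relocatable address without recompiling or reloading the program:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical relocation register: the physical address at which the
       process is loaded. Changing it re-binds all relocatable addresses. */
    static uint32_t relocation_register = 14000;

    /* Execution-time binding: every relocatable (logical) address is mapped
       to a physical address at the moment it is actually used. */
    static uint32_t to_physical(uint32_t relocatable_addr)
    {
        return relocation_register + relocatable_addr;
    }

    int main(void)
    {
        printf("logical 346 -> physical %u\n", to_physical(346)); /* 14346 */

        /* Moving the process to another part of memory only changes the
           register; the program itself is untouched. */
        relocation_register = 30000;
        printf("logical 346 -> physical %u\n", to_physical(346)); /* 30346 */
        return 0;
    }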
Dynamic Linking
With dynamic linking, linking is postponed until execution time. A small piece of code, the stub, is used to locate the appropriate memory-resident library routine, or to load the library if the routine is not already present. When the stub is executed, it checks whether the needed routine is already in memory; if not, the program loads the routine into memory. The stub then replaces itself with the address of the routine and executes the routine. Thus, the next time that code segment is reached, the library routine is executed directly, incurring no cost for dynamic linking. Operating-system support is needed to check whether the routine is in another process's memory space. Dynamic linking is particularly useful for libraries.
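The stub mechanism can be approximated in user code with the POSIX dynamic-linking interface. The sketch below assumes a Linux system where the math library's soname is libm.so.6 (link with -ldl if your C library requires it); it loads a library routine only when first needed and then calls it through the resolved address, much as a stub would:

    #include <stdio.h>
    #include <dlfcn.h>   /* POSIX dynamic-linking interface */

    int main(void)
    {
        /* Load the math library only when it is first needed (lazy binding). */
        void *lib = dlopen("libm.so.6", RTLD_LAZY);
        if (!lib) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Resolve the routine's address, as a stub would, then call it
           directly through the resolved address. */
        double (*cosine)(double) = (double (*)(double))dlsym(lib, "cos");
        if (cosine)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(lib);
        return 0;
    }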
Contiguous Allocation: In contiguous memory allocation, each process is contained in a single contiguous section of memory; the freely available memory partitions are not scattered here and there across the whole memory space. The main memory must accommodate both the operating system and the various user processes, so memory is usually divided into two partitions: one for the resident operating system and one for the user processes.
In contiguous memory allocation, when a user process requests memory, a single section of a contiguous memory block is given to that process according to its need. Contiguous memory allocation can be achieved by dividing memory into fixed-sized partitions, with a single process allocated to each partition. Having several partitions allows more than one process to reside in main memory, but the degree of multiprogramming is then bounded by the number of fixed partitions. Internal fragmentation also increases because of contiguous memory allocation.
Fixed-sized partition: In the fixed-sized partition scheme, the system divides memory into partitions of fixed size (which may or may not all be the same size). An entire partition is allocated to a process; if there is some wasted space inside the partition, that waste is called internal fragmentation.
Variable-sized partition: In the variable-sized partition scheme, the memory is treated as one unit, the space allocated to a process is exactly as much as required, and the leftover space can be reused again.
This procedure is a particular instance of the general dynamic storage-allocation problem: how to satisfy a request of size n from a list of free holes. There are many solutions to this problem. The set of holes is searched to determine which hole is best to allocate; first-fit, best-fit, and worst-fit are the most common strategies used to select a free hole from the set of available holes (a sketch of all three follows the list below).
First-fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough.
Best-fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is kept ordered by size. This strategy produces the smallest leftover hole.
Worst-fit: Allocate the largest hole. Again, we must search the entire list unless it is sorted by size. This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.
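The three strategies can be sketched in C as follows. The hole list and the request size are hypothetical; each function returns the index of the chosen hole, or -1 if no hole is large enough:

    #include <stdio.h>

    /* A free hole: starting address and size, in bytes (hypothetical list). */
    struct hole { int start; int size; };

    static struct hole holes[] = {
        { 0, 100 }, { 500, 600 }, { 1200, 200 }, { 2000, 300 }
    };
    #define NHOLES (int)(sizeof holes / sizeof holes[0])

    static int first_fit(int n)
    {
        for (int i = 0; i < NHOLES; i++)    /* scan from the beginning... */
            if (holes[i].size >= n)
                return i;                   /* ...and stop at the first fit */
        return -1;
    }

    static int best_fit(int n)
    {
        int best = -1;
        for (int i = 0; i < NHOLES; i++)    /* must scan the whole list */
            if (holes[i].size >= n &&
                (best < 0 || holes[i].size < holes[best].size))
                best = i;                   /* smallest hole big enough */
        return best;
    }

    static int worst_fit(int n)
    {
        int worst = -1;
        for (int i = 0; i < NHOLES; i++)    /* must scan the whole list */
            if (holes[i].size >= n &&
                (worst < 0 || holes[i].size > holes[worst].size))
                worst = i;                  /* largest hole */
        return worst;
    }

    int main(void)
    {
        int n = 212;
        printf("first-fit: hole %d\n", first_fit(n)); /* hole 1 (size 600) */
        printf("best-fit:  hole %d\n", best_fit(n));  /* hole 3 (size 300) */
        printf("worst-fit: hole %d\n", worst_fit(n)); /* hole 1 (size 600) */
        return 0;
    }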
The disadvantage of contiguous memory allocation is fragmentation. There are two types of fragmentation, namely, internal fragmentation and external fragmentation.
Internal fragmentation: When memory inside a process's allocation is free but cannot be used, we call that fragment an internal fragment. For example, say a hole of size 1258 bytes is available and the size of the process is 1252 bytes. If the hole is allocated to this process, then six bytes are left over and cannot be used. These six unusable bytes form the internal fragmentation.
External fragmentation: All three dynamic storage-allocation methods discussed above suffer from external fragmentation. When the total memory space obtained by adding up the scattered holes is sufficient to satisfy a request but is not available contiguously, this type of fragmentation is called external fragmentation.
One more solution to external fragmentation is to permit the logical address space and physical address space of a process to be noncontiguous. Paging and segmentation are popular noncontiguous allocation methods:
1. Paging
2. Segmentation
Paging: Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory. This scheme permits the physical address space of a process to be noncontiguous.
The physical memory is divided into a number of fixed-size blocks, called frames, and the logical address space is also divided into fixed-size blocks, called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is also divided into fixed-size blocks of the same size as the frames; that is, the size of a frame is the same as the size of a page for a particular hardware.
Every address generated by the CPU is divided into two parts: a page number (p) and a page
offset (d). The page number is used as an index into a page table. The page table contains the
base address of each page in physical memory. This base address is combined with the page
offset to define the physical memory address that is sent to the memory unit. The paging model
of memory is shown in figure 6.
The page size, like the frame size, is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 16 MB per page, depending on the computer architecture. The selection of a power of 2 as the page size makes the translation of a logical address into a page number and page offset particularly easy. If the size of the logical address space is 2^m, and a page size is 2^n addressing units, then the high-order m-n bits of a logical address designate the page number, and the n low-order bits designate the page offset.
Paging Example: consider the memory in figure 7. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the user's view of memory can be mapped into physical memory. Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus logical address 0 maps to physical address 20 (= (5 x 4) + 0). Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3). Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0). Logical address 13 (page 3, offset 1) maps to physical address 9 (= (2 x 4) + 1), since page 3 is in frame 2.
When we use a paging scheme, we have no external fragmentation. Any free frame can be
allocated to a process that needs it. However, we may have some internal fragmentation.
Figure 7: Paging example for a 32-byte physical memory with 4-byte pages.

Logical memory (16 bytes, 4 pages):
  addresses 0-3   (page 0): a b c d
  addresses 4-7   (page 1): e f g h
  addresses 8-11  (page 2): i j k l
  addresses 12-15 (page 3): m n o p

Page table:
  page 0 -> frame 5
  page 1 -> frame 6
  page 2 -> frame 1
  page 3 -> frame 2

Physical memory (32 bytes, 8 frames):
  addresses 0-3   (frame 0): free
  addresses 4-7   (frame 1): i j k l
  addresses 8-11  (frame 2): m n o p
  addresses 12-19 (frames 3-4): free
  addresses 20-23 (frame 5): a b c d
  addresses 24-27 (frame 6): e f g h
  addresses 28-31 (frame 7): free
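The translations worked out above can be reproduced with a short C sketch. Here m = 4 and n = 2, so the page number is the high-order two bits of the logical address and the offset is the low-order two bits; the page table is the one from figure 7:

    #include <stdio.h>

    /* Figure 7 parameters: m = 4 (16-byte logical space), n = 2. */
    #define N 2                           /* page size = 2^n = 4 bytes */
    #define PAGE_SIZE (1u << N)
    #define OFFSET_MASK (PAGE_SIZE - 1)

    /* Page table from figure 7: page 0->frame 5, 1->6, 2->1, 3->2. */
    static unsigned page_table[] = { 5, 6, 1, 2 };

    static unsigned translate(unsigned logical)
    {
        unsigned p = logical >> N;          /* high-order m-n bits: page number */
        unsigned d = logical & OFFSET_MASK; /* low-order n bits: page offset    */
        return page_table[p] * PAGE_SIZE + d;   /* frame base + offset */
    }

    int main(void)
    {
        printf("logical  0 -> physical %u\n", translate(0));   /* 20 */
        printf("logical  3 -> physical %u\n", translate(3));   /* 23 */
        printf("logical  4 -> physical %u\n", translate(4));   /* 24 */
        printf("logical 13 -> physical %u\n", translate(13));  /* 9  */
        return 0;
    }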
Segmentation: In segmentation, the logical address space is a collection of segments, and each logical address consists of a segment number (s) and an offset within that segment (d). The segment number is used as an index into a segment table, each entry of which has a segment base and a segment limit. The segment base contains the starting physical address where the segment resides in memory, whereas the segment limit specifies the size of the segment. The use of the segment table is illustrated in figure 8.
Sol. For the given logical address, the first digit refers to the segment number (s) while the remaining digits refer to the offset value (d).
(a) In the logical address 0430, the first digit 0 refers to segment 0 and 430 is the offset. The size (limit) of segment 0 is 600, as shown in the given table; since the offset 430 is less than the limit 600, the reference is legal, and the physical address is the base of segment 0 plus 430.
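A sketch of segment-table translation in C is given below. Only the limit of segment 0 (600) comes from the example above; the base values and the other entries are hypothetical, chosen purely for illustration. An offset at or beyond the limit traps to the operating system as an addressing error:

    #include <stdio.h>

    /* A segment-table entry: base (starting physical address) and
       limit (segment size). */
    struct segment { unsigned base; unsigned limit; };

    /* Hypothetical segment table; only segment 0's limit (600) is taken
       from the example above. */
    static struct segment seg_table[] = {
        { 219, 600 },   /* segment 0 */
        { 2300, 14 },   /* segment 1 */
        { 90, 100 },    /* segment 2 */
    };

    /* Translate (s, d); an offset beyond the limit is an addressing error. */
    static int translate(unsigned s, unsigned d, unsigned *phys)
    {
        if (d >= seg_table[s].limit)
            return -1;                   /* trap to the operating system */
        *phys = seg_table[s].base + d;
        return 0;
    }

    int main(void)
    {
        unsigned phys;
        /* Logical address 0430: segment 0, offset 430. Since 430 < 600
           the reference is legal: physical = base of segment 0 + 430. */
        if (translate(0, 430, &phys) == 0)
            printf("segment 0, offset 430 -> physical %u\n", phys);
        return 0;
    }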
References:
(1) Abraham Silberschatz, Peter B. Galvin, and Greg Gagne, Operating System Concepts, John Wiley & Sons, Inc.
(2) Harvey M. Deitel, An Introduction to Operating Systems, Addison-Wesley Publishing Company.
(3) Andrew S. Tanenbaum, Operating Systems: Design and Implementation, PHI.
(4) Vijay Shukla, Operating System, S.K. Kataria & Sons.