Module-3 Memory Management
1. Memory management
The part of the OS that manages the memory is called the Memory Manager. It is responsible
for the following:
Keeping track of which parts of memory are in use and which parts are not in use.
Allocating memory to processes when they need it and deallocating it when they are done.
Managing swapping between main memory and disk (auxiliary memory) when main
memory is not big enough to hold all the processes.
Memory management systems can be divided into two classes: those that move processes
back and forth between main memory and disk during execution (swapping and paging), and
those that do not.
The simplest possible memory management scheme is to have just one process in
memory at a time and to allow that process to use all the memory.
Fig. 1: Three simple ways of organising memory for the OS and a single user process: the OS in RAM at the bottom of memory, the OS in ROM at the top of memory, or the device drivers in ROM with the rest of the OS in RAM.
Fig. 2a: Memory divided into an operating system area and four fixed partitions (Partition 1 - Partition 4).
If a job was ready to run and its partition was occupied, that job had to wait, even if other
partitions were available. This results in wastage of the storage resource.
Fig. 2b: Fixed memory partitions with a single input queue.
An alternative organisation is to maintain a single queue. Whenever a partition becomes free, the
job closest to the front of the queue that fits in it can be loaded into the empty partition and run.
It is undesirable to waste a large partition on a small job, so a different strategy may be
followed: whenever a partition becomes free, the whole input queue is searched
and the largest job that fits into the empty partition is picked, as in the sketch below.
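As an illustration, here is a minimal C sketch of this "largest job that fits" search over a single input queue; the job and queue structures are hypothetical stand-ins for whatever descriptors the OS actually keeps.

```c
#include <stddef.h>

/* Hypothetical descriptors for queued jobs. */
struct job   { int id; size_t size; };
struct queue { struct job *jobs; int count; };

/* Return the index of the largest queued job that fits into a free
 * partition of the given size, or -1 if no queued job fits. */
int pick_largest_fitting_job(const struct queue *q, size_t partition_size)
{
    int best = -1;
    for (int i = 0; i < q->count; i++) {
        if (q->jobs[i].size <= partition_size &&
            (best == -1 || q->jobs[i].size > q->jobs[best].size))
            best = i;
    }
    return best;
}
```

Scanning the whole queue costs more than simply taking the job at the front, but it avoids committing a large partition to a small job.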
Fig. 3a: Memory divided into an operating system area and partitions P1 - P4, with partition boundaries at 100K, 200K, 400K and 700K.
Consider a program that contains a call to a procedure at relative address 100. If the program is
loaded into partition 1 (P1), what is needed is a call to 100K+100. If the program is loaded into
partition 2 (P2), it must be carried out as a call to 200K+100, and so on. This problem is known
as the relocation problem.
A solution to both the relocation and protection problems is to equip the machine with two
special hardware registers called the base and limit registers.
When a process is scheduled, the base register is loaded with the address of the start of its
partition, and the limit register is loaded with the length of the partition.
Fig. 3b: Base and limit registers. The base register (here 200) holds the address of the start of the partition, and the limit register (here 400) holds the maximum address of the partition.
Every memory address generated automatically has the base register contents added to it
before being sent to memory. Thus, if the base register holds 200K, a call 100 instruction is turned
into a call (200K+100) instruction without modifying the instruction itself. Addresses are also
checked against the limit register to make sure that no attempt is made to address memory outside
the current partition.
The hardware protects the base and limit registers to prevent user programs from
modifying them. The IBM PC uses a weaker version of this scheme; it has a base register (the
segment register) but no limit register. An additional advantage of using a base register for
relocation is that a program can be moved in memory after it has started execution. After it has
been moved, only the value of the base register needs to be changed to make it ready to run.
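The relocation and protection mechanism described above can be summarised in a short sketch. This is an illustrative model only; the register structure and the fault-handling convention are assumptions, not a description of any particular machine.

```c
#include <stdint.h>
#include <stdbool.h>

/* Relocation/protection registers loaded when a process is scheduled. */
struct mmu_regs {
    uint32_t base;   /* start address of the process's partition */
    uint32_t limit;  /* length of the partition in bytes         */
};

/* Translate a program-relative address.  Returns false (the hardware
 * would trap to the OS) if the address lies outside the partition. */
bool translate_base_limit(const struct mmu_regs *r, uint32_t vaddr,
                          uint32_t *phys)
{
    if (vaddr >= r->limit)        /* protection check against the limit */
        return false;
    *phys = r->base + vaddr;      /* relocation: add the base register  */
    return true;
}
```

With a base of 200K, a program-relative address of 100 is translated to 200K+100, matching the call example above.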
Swapping
With timesharing, there are normally more users than there is memory to hold all their
processes, so it is necessary to keep the excess processes on disk. To run these processes, they
must be brought into main memory. Moving processes from main memory to disk and back is
called swapping.
Figure (a)-(g): Memory allocation changing as processes A, B, C, D and E come and go; the operating system occupies the bottom of memory in each snapshot.
Initially only process A is in memory.
Processes B and C are created or swapped in from disk.
Process A terminates or is swapped out to disk (fig. d).
Process D comes in (fig. e), and B goes out (fig. f).
Finally, E comes in (fig. g).
The main difference between fixed partitions and variable partitions is that the
number, location and size of the partitions vary dynamically in the latter as processes come and
go, whereas they are fixed in the former. Every storage organisation scheme involves some
degree of waste. In variable partition multiprogramming the waste does not become obvious
until jobs start to finish and leave holes in main storage. It is possible to combine all the holes
into one big hole by moving processes downward as far as possible. This technique is known
as memory compaction. It is usually not done, because it requires a lot of CPU time.
Fig. 6a, 6b: A 1000K memory before and after compaction. In fig. 6a the OS occupies 0K - 100K, process A 100K - 250K and process C 450K - 750K, leaving two separate holes; in fig. 6b process C has been moved down to 250K - 550K, so the free space forms a single block at the top of memory.
It is clear that compaction will have the desired effect of making the total free space
more usable by incoming processes, but this is achieved at the expense of large-scale movement
of current processes. All processes would need to be suspended while the re-shuffle takes place,
with attendant updating of process context information such as the load address. Such activity
would not be feasible in a time-critical system and would be a major overhead in any system.
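To make the bookkeeping behind compaction concrete, here is a minimal sketch, assuming memory is modelled as a byte array and each process is described by a hypothetical record holding its current load address and size.

```c
#include <stddef.h>
#include <string.h>

struct proc_region {
    size_t start;   /* current load address in main memory */
    size_t size;    /* size of the process image in bytes  */
};

/* Slide every process down towards free_base (the first address above
 * the OS area), leaving one large hole at the top of memory.  mem
 * models physical memory; p[] is assumed to be sorted by start address. */
void compact(unsigned char *mem, struct proc_region *p, int n,
             size_t free_base)
{
    size_t next = free_base;
    for (int i = 0; i < n; i++) {
        if (p[i].start != next)
            memmove(mem + next, mem + p[i].start, p[i].size);
        p[i].start = next;        /* the process context must be updated */
        next += p[i].size;
    }
    /* everything from next upwards is now a single free block */
}
```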
In practice, the compaction scheme has seldom been used, because its overheads and added
complexity tend to minimise its advantage over the non-compacted scheme. So we are still in
pursuit of a technique that will make better use of the memory and hence enhance the throughput
of the system. Our current problem is that we create holes in available memory which can only be
consolidated at the considerable expense of moving active processes. The residual size of these
free space holes is the essential problem; they are frequently too small to accommodate a full
process.
Three storage placement algorithms are commonly discussed, and are sketched in the example below:
First fit: allocate the first hole that is large enough.
Best fit: allocate the smallest hole that is large enough.
Worst fit: allocate the largest available hole.
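The three policies can be sketched over a simple list of free holes as follows; the hole structure and the integer policy codes are assumptions made purely for illustration.

```c
#include <stddef.h>

struct hole { size_t start; size_t size; };

/* Choose a hole for a request of req bytes; returns its index or -1 if
 * none is large enough.  policy: 0 = first fit, 1 = best fit (smallest
 * adequate hole), 2 = worst fit (largest hole). */
int place(const struct hole *h, int n, size_t req, int policy)
{
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (h[i].size < req)
            continue;                                       /* too small */
        if (policy == 0)
            return i;                                       /* first fit */
        if (chosen == -1 ||
            (policy == 1 && h[i].size < h[chosen].size) ||  /* best fit  */
            (policy == 2 && h[i].size > h[chosen].size))    /* worst fit */
            chosen = i;
    }
    return chosen;
}
```

First fit is the cheapest to run; best fit tends to leave many tiny leftover holes, while worst fit deliberately keeps the remaining hole as large as possible.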
2. Virtual Memory
If the combined size of the program, data, and stack exceeds the amount of physical
memory available for it, then part of the program is kept on the disk. The physical memory is
thus virtually enlarged by using part of the disk space.
2.1 Paging
In a paged system, each process is notionally divided into a number of fixed size 'chunks'
called pages, typically 4KB in length. The memory space is also viewed as a set of page frames
of the same size. The loading process now involves transferring each process page to some
memory page frame.
The figure below shows an example of three processes that have been loaded into
contiguous page frames in memory.
Fig. 7d, 7e, 7f: Main memory holding the pages of processes A (A1 - A5), B (B1 - B3) and C (C1 - C4); in the later snapshots the page frames freed by process B are reused by the pages of processes D (D1, D2) and E (E1 - E3).
Paging alleviates the problem of fragmented free space, since a process can be distributed over a
number of separate holes. After a period of operation, the pages of active processes could become
extensively intermixed.
There is still the residual problem that the free page frames remaining may be insufficient in
total to accommodate a new process; such space would be wasted. However, in general, the
space utilisation and consequently the system throughput are improved.
Paging requires relocation of multiple parts of each process, so its relocation needs are clearly
more elaborate than those of simple partitioning. The key to the solution of this problem lies in
the way a specific memory location is addressed in a paging environment.
A logical address is divided into two fields: a page number and a displacement within the page.
The page number uses the top 5 bits and therefore has a value range of 0 - 31. The displacement
uses the remaining 11 bits and therefore has a range of 0 - 2047. This means that a system based
on this scheme would have 32 pages, each of 2048 locations. For example, the 16-bit logical
address 5000 lies in page 2 (5000 div 2048) at displacement 904 (5000 mod 2048).
To solve the relocation problem, we observe that when a process page is positioned in some
memory page frame, the page number parameter of the paging address changes but the
displacement remains constant. Hence, relocation reduces simply to converting a process page
number to a memory page frame number. This is accomplished using a page table; this has one
entry for each possible page number and contains the corresponding memory page frame number.
The overall conversion process is shown below in figure 7h.
Fig. 7h: The page number p indexes the page table to obtain a memory page frame number, which is combined with the displacement d to form the converted (physical) address.
Note that the process page number is used to index the page table, which in effect is an array of
memory page frame numbers.
Paging is imposed on the physical form of the process and is independent of the program's
structure.
Paging is used to improve memory utilisation by avoiding fragmentation.
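Using the 5-bit page number / 11-bit displacement format of the earlier example, the translation through the page table can be sketched as below; the function name and the table contents are illustrative only.

```c
#include <stdint.h>

#define PAGE_BITS  11                    /* 2048-byte pages          */
#define PAGE_SIZE  (1u << PAGE_BITS)
#define NUM_PAGES  32                    /* 5-bit page number field  */

/* Page table: indexed by process page number, holding the memory
 * page frame number where that page currently resides. */
static uint16_t page_table[NUM_PAGES];

/* Convert a 16-bit logical address into a physical address. */
uint32_t translate_paged(uint16_t logical)
{
    uint16_t page  = logical >> PAGE_BITS;         /* top 5 bits        */
    uint16_t disp  = logical & (PAGE_SIZE - 1);    /* low 11 bits       */
    uint16_t frame = page_table[page];             /* page -> frame     */
    return ((uint32_t)frame << PAGE_BITS) | disp;  /* frame + disp      */
}
```

For instance, if page_table[2] held frame 7, logical address 5000 (page 2, displacement 904) would map to 7 * 2048 + 904 = 15240.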
2.2 Segmentation
In a segmented system, each process is divided into a number of variable-sized segments, which are loaded into main memory as shown below.
Fig. 8a, 8b, 8c: Process A and process B are each divided into segments (figs. 8a, 8b); fig. 8c shows the segments of both processes (A1 - A3 and B1 - B4) placed in main memory.
Segment Addressing
The segment address consists of two parts, namely the segment reference s and the displacement d
within that segment, which are derived from a subdivision of the bits of the logical address. The
segment reference indexes a process segment table whose entries specify the base address and the
size of the segment.
Fig. 8d: The segment reference indexes the segment table to obtain the segment's base address, which is added to the displacement to form the physical address.
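A minimal sketch of this segment-table lookup, assuming each entry holds a base address and a size as described above; the field names and the bounds-fault convention are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

struct seg_entry {
    uint32_t base;   /* physical base address of the segment */
    uint32_t size;   /* length of the segment in bytes       */
};

/* Translate a (segment reference s, displacement d) pair into a physical
 * address.  Returns false if d exceeds the segment size (an addressing
 * fault in hardware). */
bool translate_segmented(const struct seg_entry *seg_table,
                         uint32_t s, uint32_t d, uint32_t *phys)
{
    if (d >= seg_table[s].size)       /* bounds check against segment size */
        return false;
    *phys = seg_table[s].base + d;    /* base address + displacement       */
    return true;
}
```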