Unit 3 2015

Fundamentals of Operating Systems - Memory management (Unit 3)

MAIN MEMORY MANAGEMENT

Main memory is a large array of words or bytes. Each word or byte has its own address. For a program to be
executed, it must be mapped to absolute addresses and loaded into memory. The operating system is
responsible for the following activities in connection with memory management.

 Keep track of which parts of memory are currently being used and by whom.

 Decide which processes are to be loaded into memory when memory space becomes available.

 Allocate and deallocate memory space as needed.

ADDRESS BINDING
Usually a program resides on a disk as a binary executable file. The program must be brought into memory
and placed in a process for it to be executed. Addresses in the source program are generally symbolic (such
as count).

A compiler will typically bind these symbolic addresses to relocatable addresses (such as “14 bytes from the
beginning of this module”). The linkage editor or loader will in turn bind these relocatable addresses to
absolute addresses (such as 74014).

Each binding is a mapping from one address space to another.

Classically, the binding of instructions and data to memory addresses can be done at any step along the way:

Compile time: If it is known at compile time where the process will reside in memory, then absolute code can be
generated.

Load time: If it is not known at compile time where the process will reside in memory, the compiler must
generate relocatable code. In this case, final binding is delayed until load time.


Execution time/Run time: If the process can be moved during execution from one memory segment to
another, then binding must be delayed until runtime. Special hardware must be available for this scheme to
work.

Figure: Multistep processing of a user program.


LOGICAL AND PHYSICAL MEMORY

An address generated by the CPU is commonly referred to as a logical address, whereas an address as seen
by the memory unit (that is, the one loaded into the memory-address register, MAR) is commonly referred
to as a physical address.

The compile-time and load-time address-binding schemes result in an environment where the logical and
physical addresses are the same. However, the execution-time address-binding scheme results in an
environment where the logical and physical addresses differ.

The set of all logical addresses generated by a program is referred to as a logical address space; the set of all
physical addresses corresponding to these logical addresses is referred to as the physical address space.

The run-time mapping from virtual to physical address is done by the memory-management unit (MMU),
which is a hardware device.

Figure: Dynamic relocation using a relocation register. The CPU generates logical address 346; the MMU adds
the relocation-register value 14000 to it, producing physical address 14346, which is sent to memory.
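The translation in the figure is a single addition, which can be sketched in a few lines of Python (a minimal model, using the figure's values):

```python
# Sketch of dynamic relocation with a relocation (base) register.
# Value taken from the figure above: relocation register = 14000.
RELOCATION_REGISTER = 14000

def mmu_translate(logical_address: int) -> int:
    """Map a CPU-generated logical address to a physical address."""
    return logical_address + RELOCATION_REGISTER

print(mmu_translate(346))  # logical 346 -> physical 14346, as in the figure
```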

DYNAMIC LOADING

In our discussion so far, the entire program and all data of a process must be in physical memory for the
process to execute. The size of a process is thus limited to the size of physical memory. To obtain better
memory-space utilization, we use dynamic loading. With dynamic loading a routine is not loaded into
memory until it is called. All routines are kept on disk in relocatable format. The main program is loaded into
memory and is executed. When a routine needs to call another routine the calling routine first checks to see
whether the other routine has been loaded. If not, the relocatable linker is called to load the desired routine


into memory and to update the program's address tables to reflect this change. Then control is passed to the
newly loaded routine.
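The check-then-load pattern described above can be sketched as follows (a minimal model; the routine names and the loader registry are illustrative, not from any real loader API):

```python
# A minimal sketch of dynamic loading: routines stay "on disk" (here, a
# registry of loader callables) and are placed in the program's address
# table only on first call.
on_disk = {  # relocatable routines kept on disk
    "error_handler": lambda: (lambda code: f"handled error {code}"),
}
loaded_routines = {}  # the program's address table

def call_routine(name, *args):
    if name not in loaded_routines:                # routine not loaded yet
        loaded_routines[name] = on_disk[name]()    # load it into "memory"
    return loaded_routines[name](*args)            # pass control to it

print(call_routine("error_handler", 404))  # loads on first call only
```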

Advantages

 The advantage of dynamic loading is that an unused routine is never loaded. This method is
particularly useful when large amounts of code are needed to handle infrequently occurring cases,
such as error routines. In this case, although the total program size may be large the portion that is
used and loaded may be much smaller.
 It does not require special support from the operating system. It is the responsibility of the users to
design their programs to take advantage of it. Operating systems may help the programmer by providing
library routines to implement dynamic loading.

DYNAMIC LINKING AND SHARED LIBRARIES

Some Operating systems support only Static Linking, in which the system language libraries are treated like
any other object module and are combined by the loader into the binary program image.

The concept of dynamic linking is similar to that of dynamic loading. Here linking is postponed until execution
time. This feature is usually used with system libraries such as language subroutine libraries. Without this
facility, each program on a system must include a copy of its language library in the executable image. This
requirement wastes both disk space and main memory.

With dynamic linking, a stub is included in the image for each library-routine reference. The stub is a small
piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load
the library if the routine is not already present. When the stub is executed, it checks to see whether the
needed routine is already in memory. If not, the program loads the routine into memory. Either way, the stub
replaces itself with the address of the routine and executes the routine. The next time that particular code
segment is reached, the library routine is executed directly, incurring no cost for dynamic linking. Under this
scheme, all processes that use a language library execute only one copy of the library code.

This feature can be extended to library updates such as bug fixes. A library may be replaced by a new version
and all programs that reference the library will automatically use the new version. Without dynamic linking
all such programs would need to be relinked to gain access to the new library.
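The self-replacing stub can be sketched as follows (a minimal illustrative model, not a real linker API; the jump table and `make_stub` helper are hypothetical names):

```python
# Sketch of a dynamic-linking stub: the first call goes through the stub,
# which locates the library routine, replaces itself in the jump table,
# and then executes the routine.
import math

jump_table = {}

def make_stub(name, loader):
    def stub(*args):
        routine = loader()          # locate/load the library routine
        jump_table[name] = routine  # stub replaces itself with the routine
        return routine(*args)       # ...and executes it
    return stub

jump_table["sqrt"] = make_stub("sqrt", lambda: math.sqrt)

print(jump_table["sqrt"](16.0))  # first call: through the stub
print(jump_table["sqrt"](25.0))  # later calls: math.sqrt directly
```

After the first call the stub is gone, so subsequent calls pay no lookup cost — mirroring the "no cost for dynamic linking" point above.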

Comparison of Dynamic Linking and Dynamic Loading

Dynamic linking generally requires help from the operating system. Dynamic loading, by contrast, requires no
special support from the operating system.

If the processes in memory are protected from one another, then the operating system is the only entity
that can check to see whether the needed routine is in another process's memory space, or that can allow


multiple processes to access the same memory addresses.

SWAPPING
Any process must be in main memory during its execution. While a process is executing, it may need an
Input/Output device, or a higher-priority process may want to enter main memory; since the size of main
memory is limited, the currently running process may then have to be moved to secondary memory to
accommodate the new process or to use the I/O device. Moving a process temporarily out of memory to a
backing store is called a "swap out"; bringing a process back into main memory for continued execution is
called a "swap in". For example, assume a multiprogramming environment with a round-robin scheduling
algorithm. When a time quantum expires, the memory manager will start to swap out the process that has
just finished and swap another process into the memory space that has been freed. In the meantime, the
CPU scheduler will allocate a time slice to some other process in memory. When each process finishes its
quantum, it will be swapped with another process. Ideally, the memory manager can swap processes fast
enough that some processes will be in memory, ready to execute, when the CPU scheduler wants to
reschedule the CPU. In addition, the quantum must be large enough to allow reasonable amounts of
computing to be done between swaps.

Normally a process that is swapped out will be swapped back into the same memory space it occupied
previously. This restriction is dictated by the method of address binding. If binding is done at load time then
the process cannot be easily moved to a different location. If execution time binding is being used however,
then a process can be swapped into a different memory space because the physical addresses are computed
during execution time.

Swapping requires a backing store which commonly is a fast disk. It must be large enough to accommodate
copies of all memory images for all users and it must provide direct access to these images. The system
maintains a ready queue consisting of all processes whose memory images are on the backing store or in
memory and are ready to run. Whenever the CPU scheduler decides to execute a process it calls the
dispatcher which checks to see whether the next process in the queue is in the memory. If it is not and if
there is no free memory region the dispatcher swaps out a process currently in memory and swaps in the
desired process. It then reloads registers and transfers control to the selected process.


The context-switch time in such a swapping system is fairly high. To get an idea of it, let us assume that the
user process is 10 MB in size and the backing store is a standard hard disk with a transfer rate of 40 MB per
second. The actual transfer of the 10 MB process to or from main memory takes:

10,000 KB / 40,000 KB per second = 1/4 second = 250 milliseconds
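The arithmetic can be checked in a couple of lines (values from the text: a 10 MB process, a 40 MB/s transfer rate):

```python
# Swap transfer time for a 10 MB process on a 40 MB/s disk, in milliseconds.
process_size_kb = 10_000
transfer_rate_kb_per_s = 40_000

transfer_time_ms = process_size_kb / transfer_rate_kb_per_s * 1000
print(transfer_time_ms)  # 250.0 ms for one transfer direction
```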

For efficient CPU utilization we want the execution time for each process to be long relative to the swap time.

Swapping is constrained by other factors as well. If we want to swap a process, we must be sure that it is
completely idle. Of particular concern is any pending I/O. A process may be waiting for an I/O operation when
we want to swap that process to free up memory. However, if the I/O is asynchronously accessing the user
memory for I/O buffers, then the process cannot be swapped. Assume that the I/O operation is queued
because the device is busy. If we want to swap out process P1 and swap in process P2, the I/O might then
attempt to use memory that now belongs to process P2.

There are two main solutions to this problem:

1. Never swap a process with pending I/O.

2. Execute I/O operations only into operating-system buffers. Transfers between operating-system buffers
and process memory then occur only when the process is swapped in.

MEMORY MAPPING AND PROTECTION

When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and
limit registers with the correct values as part of the context switch. Because every address generated by the
CPU is checked against these registers, we can protect both the operating system and the other users'
programs and data from being modified by this running process.

The relocation-register scheme provides an effective way to allow the operating-system size to change
dynamically. This flexibility is desirable in many situations. For example, the operating system contains code
and buffer space for device drivers. If a device driver [or other operating-system service] is not commonly
used, we do not want to keep the code and data in memory, as we might be able to use that space for other
purposes. Such code is sometimes called transient operating-system code; it comes and goes as needed.
Thus, using this code changes the size of the operating system during program execution.

The relocation register and the limit register thus protect the operating-system code and data from the user
processes, and protect user processes from one another.

This is depicted in the figure below:


MEMORY MANAGEMENT SCHEMES

Memory management techniques fall into two broad classes:

1. Contiguous allocation
    - Single-partition allocation
    - Multiple-partition allocation (fixed partitions, variable partitions)

2. Non-contiguous allocation
    - Paging
    - Segmentation

CONTIGUOUS ALLOCATION

Single partition allocation

The memory is divided into two partitions, one for the resident operating system, and one for the user
process.

Figure: Single-partition allocation — the operating system resides in one partition and the user process in
the other.


Advantages

 It is simple.
 It is easy to understand and use.

Disadvantages

 It leads to poor utilization of processor and memory.

 User’s job is limited to the size of available memory.

Multiple partition allocation

Figure: Multiple-partition allocation. Left: fixed-partition allocation, with memory divided into fixed-size
partitions holding P1, P2, and P3. Right: variable-partition allocation, with partitions sized to fit P1, P2,
and P3.

1. Fixed partition

Memory is divided into a number of fixed-size partitions. Each partition may contain exactly one
process. When a partition is free, a process is selected from the input queue and is loaded into the free
partition.


Advantages

 Any process whose size is less than or equal to the partition size can be loaded into any available
partition.

 It supports multiprogramming.

Disadvantages

 If a program is too big to fit into a partition, an overlay technique must be used.

 Memory use is inefficient: a block of data loaded into memory may be smaller than the partition,
wasting the remainder. This is known as internal fragmentation.

2. Variable partition

Initially, all memory is available for user processes and is considered one large block of available
memory. When a process arrives and needs memory, we search for a hole large enough for this
process and allocate only as much memory as it needs.

Advantages

 Minimize wastage of memory.

Disadvantages

 This scheme is not optimal from the system's point of view, because parts of larger partitions may
remain unused.

3. Dynamic Partitioning

Even though variable-sized fixed partitioning overcomes some of these difficulties, dynamic
partitioning requires more sophisticated memory-management techniques. The partitions used are of
variable length: when a process is brought into main memory, it is allocated exactly as much
memory as it requires. Each partition may contain exactly one process, so the degree of
multiprogramming is bounded by the number of partitions. In this method, when a partition is free, a
process is selected from the input queue and is loaded into the free partition. When the process
terminates, the partition becomes available for another process. This method was used by IBM's
mainframe operating system OS/MVT (Multiprogramming with a Variable number of Tasks) and is no
longer in use now.

The figure below shows the allocation of blocks at different stages using the dynamic
partitioning method. The available main memory size is 1 MB. Initially the main memory is
empty except for the operating system, as shown in Figure a. Process 1 is then loaded (Figure b),
and process 2 is loaded (Figure c) without any wasted space; the remaining 146K of main memory
is free. Process 1 is swapped out (Figure d) to make room for a higher-priority process. After
process 3 is allocated, a 50K hole is created (Figure e).
Let us consider the following scenario:

Process   Size (in KB)   Arrival Time   Service Time (in ms)
P1        350            0              40
P2        400            10             45
P3        300            30             35
P4        200            35             25

Now process 2 is swapped out (Figure f) and process 1 is swapped in to this block. But process 1 is only
350K in size, which creates a hole of 50K (Figure g).

In this way, many small holes are created in memory. Ultimately memory becomes more and more
fragmented, and memory utilization declines. This is called "external fragmentation". External fragmentation
can be overcome by using a technique called "compaction". As part of the compaction process, from time to
time the operating system shifts the processes so that they are contiguous, and the free memory is gathered
together into a single block. In Figure h, compaction results in a block of free memory of length 246K.


Advantages

 Partitions are changed dynamically.


 It does not suffer from internal fragmentation.

Disadvantages

 It is a time consuming process (i.e., compaction).

 It wastes processor time, because from time to time programs must be moved from one region of
main memory to another without invalidating their memory references.

MEMORY ALLOCATION AND FRAGMENTATION

If free memory is present within a partition, it is called "internal fragmentation". Similarly, if the free
blocks are present outside the partitions, it is called "external fragmentation". The solution to external
fragmentation is compaction.

Internal fragmentation is addressed only by choosing an appropriate placement algorithm.

Because memory compaction is time-consuming, when it is time to load or swap a process into main memory
and if there is more than one free block of memory of sufficient size, then the operating system must decide
which free block to allocate by using three different placement algorithms.

This is shown in the figure below:


 Best-fit: It chooses the free block that is closest in size to the request. We must search the entire list,
unless it is ordered by size. This strategy produces the smallest leftover hole.
 First-fit: It begins to scan memory from the beginning and chooses the first available block that is
large enough. Searching can start either at the beginning of the set of blocks or where the previous
first-fit search ended. We can stop searching as soon as we find a free block that is large enough.
 Worst-fit: It allocates the largest block. We must search the entire list, unless it is sorted by size. This
strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole
from a best-fit approach.
 Next-fit: It begins to scan memory from the location of the last placement and chooses the next
available block that is large enough. In the figure below, the last allocated block is 18K, so the scan
starts from that position; the very next free block (36K) can accommodate the 20K request, leaving
16K of that block unused.

The first-fit algorithm is the simplest and fastest. Next-fit produces slightly worse results than first-fit, and
compaction may be required more frequently with next-fit. Best-fit is the worst performer: even though it is
intended to minimize wasted space, it consumes a lot of processor time searching for the block closest in
size to the request.
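The three main strategies can be sketched over a simple free-block list (a minimal model; the block sizes below are illustrative, not taken from the figure):

```python
# Placement strategies over a free list of (start, size) blocks.
# Each function returns the index of the chosen block, or None.
def first_fit(blocks, request):
    for i, (start, size) in enumerate(blocks):
        if size >= request:          # first block large enough wins
            return i
    return None

def best_fit(blocks, request):
    fits = [(size, i) for i, (start, size) in enumerate(blocks) if size >= request]
    return min(fits)[1] if fits else None   # smallest block that still fits

def worst_fit(blocks, request):
    fits = [(size, i) for i, (start, size) in enumerate(blocks) if size >= request]
    return max(fits)[1] if fits else None   # largest block overall

free = [(0, 30), (100, 16), (200, 50), (300, 22)]
print(first_fit(free, 20), best_fit(free, 20), worst_fit(free, 20))
```

For a 20K request, first-fit takes the 30K block, best-fit the 22K block (smallest leftover hole), and worst-fit the 50K block (largest leftover hole).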

FRAGMENTATION
Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. As
the processes are loaded and removed from memory, the free memory space is broken into little pieces.
External fragmentation exists when there is enough total memory space to satisfy a request, but the available
spaces are not contiguous. Storage is fragmented into a large number of small holes. This fragmentation
problem can be severe. In the worst case we could have a block of free or wasted memory between every
two processes. If all these small pieces of memory were in one big free block instead, we might be able to run
several more processes.

Whether we are using the first-fit or best-fit strategy can affect the amount of fragmentation. First-
fit is better for some systems and best-fit is better for others. Another factor is which end of a free block is
allocated. No matter which algorithm is used external fragmentation will be a problem.
Depending on the total amount of memory storage and the average process size, external fragmentation may
be a minor or a major problem. Statistical analysis of first fit, for instance, reveals that, even with some
optimization, given N allocated blocks another 0.5N blocks will be lost to fragmentation; that is, one-third of
memory may be unusable. This property is known as the 50-percent rule.

Memory fragmentation can be internal as well as external. Consider a multiple-partition allocation scheme
with a hole of 18,464 bytes, and suppose the next process requests 18,462 bytes. If we allocate exactly the
requested block, we are left with a hole of 2 bytes. The overhead to keep track of this hole will be
substantially larger than the hole itself. The general approach to avoiding this problem is to break physical
memory into fixed-size blocks and allocate memory in units based on block size. With this approach, the
memory allocated to a process may be slightly larger than the


requested memory. The difference between these two numbers is internal fragmentation- memory that is
internal to a partition but is not being used.

One solution to the problem of external fragmentation is compaction. The goal is to shuffle the memory
contents so as to place all free memory together in one large block. Compaction is not always possible,
however: if relocation is static and is done at assembly or load time, compaction cannot be done.
Compaction is possible only if relocation is dynamic and is done at execution time. If addresses are relocated
dynamically, relocation requires only moving the program and data and then changing the base register to
reflect the new base address. When compaction is possible, we must determine its cost. The simplest
compaction algorithm is to move all processes toward one end of memory; all holes move in the other
direction, producing one large block of available memory. This scheme can be expensive.
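The simple "slide everything to one end" algorithm can be sketched as follows (a minimal model assuming dynamic relocation, so moving a process only means updating its base; the process layout is illustrative, chosen to reproduce the 246K free block from the earlier example):

```python
# Compaction sketch: slide every allocated block toward address 0 so the
# free space coalesces into one block at the high end of memory.
def compact(allocations, memory_size):
    """allocations: name -> (base, size). Returns (new allocations, free block)."""
    next_base = 0
    new_allocs = {}
    for name, (base, size) in sorted(allocations.items(), key=lambda kv: kv[1][0]):
        new_allocs[name] = (next_base, size)  # move process, update its base
        next_base += size
    free_block = (next_base, memory_size - next_base)  # one big hole at the top
    return new_allocs, free_block

allocs = {"OS": (0, 128), "P3": (288, 300), "P1": (688, 350)}
moved, free = compact(allocs, 1024)
print(free)  # (778, 246): one free block of length 246K, as in Figure h
```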

NON CONTIGUOUS ALLOCATION


PAGING

Physical memory is broken into fixed sized blocks called frames.

Logical memory is broken into blocks of the same size called pages.

The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.

The user program views the memory as one single contiguous space containing only this one program. In
fact, the user program is scattered throughout the physical memory, which also holds other programs.

When a process arrives in the system to be executed, its size, expressed in pages, is examined. Each page of
the process needs one frame. Thus, if the process requires n pages, there must be at least n frames available
in memory. If there are n frames available, they are allocated to this arriving process. A page table is
maintained, which stores the frame number corresponding to each page.

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The
page number is used as an index into the page table. The page table contains the base address of each page
in physical memory. This base address is combined with the page offset to obtain the physical memory
address that is sent to the memory unit.
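The (p, d) split and lookup can be sketched as follows (a minimal model; only the page-1-to-frame-2 mapping comes from the worked example in this section, the other page-table entries are illustrative):

```python
# Paging address translation: split the logical address into page number
# and offset, look up the frame, and recombine.
PAGE_SIZE = 4  # bytes, as in the paging example in this section

def translate(logical_address, page_table):
    p, d = divmod(logical_address, PAGE_SIZE)  # page number, offset
    frame = page_table[p]
    return frame * PAGE_SIZE + d

page_table = {0: 5, 1: 2, 2: 1, 3: 0}  # page -> frame (mostly illustrative)
print(translate(6, page_table))  # page 1, offset 2 -> frame 2 -> 2*4 + 2 = 10
```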


PAGING HARDWARE

Figure: Paging hardware. The CPU generates a logical address (p, d); p indexes the page table to obtain
frame number f, and the physical address (f, d) is sent to physical memory.

PAGING EXAMPLE

Figure: Paging example with a page size of 4 bytes. Logical memory holds bytes A–P in pages 0–3, and the
page table maps each page to a frame of physical memory (page 1 maps to frame 2).

E.g., logical address 6 is page 1, offset 2. Indexing into the page table, we find that page 1 is in frame 2.
Thus logical address 6 (content G) maps to physical address 10 = 2*4 + 2 (content G).

PAGING with MMU and TLB

The page tables may be very large making it infeasible to store the entire page table in registers. So the page
table is stored in main memory. A Page Table Base Register(PTBR) points to the page table. This would
require two memory accesses for every memory access a process wishes to make!! A fast lookup cache may
be used to overcome this problem. These specially built caches are called translation look-aside buffers(TLBs).
The TLB is a set of associative registers that contains (page number, frame number) pairs, with the page
number as the key. All the page-number registers may be checked in parallel, which makes a TLB lookup
very fast. When a context switch occurs, the TLB must be flushed.

The hit ratio is the percentage of time the page number is found in the associative registers. Using the hit
ratio and information about the memory access time it is possible to calculate the effective memory access
time. Suppose a memory access takes 100 ns and accessing the TLB adds 10 ns. What is the effective access
time if the hit ratio is 95%?

Effective time = 0.95 X 110 + 0.05 X 210

= 115ns.
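The same effective-access-time calculation, as a short check in code (a TLB hit costs one memory access plus the TLB lookup; a miss costs a page-table access plus the data access):

```python
# Effective access time with a TLB (values from the text above).
memory_ns, tlb_ns, hit_ratio = 100, 10, 0.95

hit_time = tlb_ns + memory_ns        # TLB hit: 110 ns
miss_time = tlb_ns + 2 * memory_ns   # TLB miss: 210 ns
eat = hit_ratio * hit_time + (1 - hit_ratio) * miss_time
print(eat)  # 115 ns, matching the text
```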

Memory protection in a paged environment may be accomplished by associating protection bits with
each page. This enables the operating system to permit the process either to just read from a page or to both

read from and write to a page. This information may be kept in the page table. A trap will occur if the process
attempts to write to read-only pages.

This approach may also be used to make some pages execute only.

Another bit that may be attached to each entry in the page table is a valid-invalid bit, which indicates
whether the page is part of the process's logical address space. Accessing an invalid page causes a
memory-violation trap.

Advantages: No external fragmentation.

Disadvantage: A little internal fragmentation remains (in the last page of each process).


SEGMENTATION
In paging, the user’s view is not the same as the actual physical memory. The user’s view is mapped onto
physical memory.

The user prefers to view memory as a collection of variable-sized segments, with no memory ordering among
segments.

Figure: A user's view of a program as a collection of segments in the logical address space: the main
program, the Sqrt function, the symbol table, and the stack.

Each of these segments is of variable length. Elements within the segment are identified by their offset from
the beginning of the segment e.g. the first statement of the program, the 5th instruction of the Sqrt function,
the 17th entry in the symbol table, and so on.

Segmentation is a memory management scheme that supports this user view of memory. A logical address
space is a collection of segments. Each segment has a name and a length. The addresses specify both the
segment name and the offset within the segment.

The user therefore specifies each address by two quantities: a segment name and an offset.

For simplicity of implementation, segments are numbered and referred to by segment number: the loader
takes all the program segments and assigns them numbers.

Therefore, a logical address consists of a pair <segment number, offset>.

Advantages: No internal fragmentation; improved memory utilization and reduced overhead compared to
dynamic partitioning.

Disadvantages: External fragmentation.


SEGMENTATION HARDWARE

Figure: Segmentation hardware. The CPU generates a logical address (s, d); s indexes the segment table,
whose entry holds the segment's limit and base. If d < limit, the physical address is base + d; otherwise a
trap (addressing error) occurs.

SEGMENTATION EXAMPLE

Figure: Segmentation example. Segment 2 has base 4300 and limit 400, so its offsets must lie between 0
and 400 and it occupies physical addresses 4300–4700. The logical address (2, 53) maps to physical address
4300 + 53 = 4353.
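The limit check and base addition shown in the hardware and example figures can be sketched as follows (a minimal model using the example's values for segment 2):

```python
# Segment-table translation with a limit check (segment 2: base 4300,
# limit 400, as in the segmentation example above).
segment_table = {2: {"base": 4300, "limit": 400}}

def translate(s, d):
    entry = segment_table[s]
    if d >= entry["limit"]:
        raise MemoryError("trap: addressing error")  # offset out of range
    return entry["base"] + d

print(translate(2, 53))  # 4300 + 53 = 4353
```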


Difference between paging and segmentation

Paging – Computer memory is divided into small partitions that are all the same size, referred to as page
frames. When a process is loaded, it gets divided into pages which are the same size as those frames. The
process pages are then loaded into the frames.

Segmentation – Computer memory is allocated in various sizes (segments) depending on the need for
address space by the process. These segments may be individually protected or shared between processes.
Commonly you will see what are called "segmentation faults" in programs; this happens because the data
that is about to be read or written is outside the permitted address space of that process.

So now we can distinguish the differences and look at a comparison between the two:

Paging:
Transparent to programmer (system allocates memory)
No separate protection
No separate compiling
No shared code

Segmentation:
Involves programmer (allocates memory to specific function inside code)
Separate compiling
Separate protection
Shared code


Virtual Memory
Virtual memory is a technique that allows the execution of a program that may not be completely in memory.
The main visible advantage of the scheme is that programs can be larger than physical memory.

It works as with simple paging, except that it is not necessary to load all of the pages of a process:
nonresident pages that are needed are brought in later automatically.

Virtual memory is the separation of user logical memory from physical memory. This separation allows
extremely large virtual memory to be provided for programmers when only a small physical memory is
available.

Virtual memory is commonly implemented by demand paging. (Demand segmentation can also be used.)

A demand paging system is similar to a paging system with swapping. Processes reside on the secondary
memory. When we want to execute a process, the pager guesses which pages will be used and brings in
those pages into the memory.

If a process tries to use a page not in memory (if the pager does not guess correctly), the hardware traps to
the Operating System (page fault). The OS reads the desired pages into the memory and restarts the process
as though the page had always been in memory.

Advantages:

No external fragmentation.
Higher degree of multiprogramming.
Large virtual address space.

Disadvantages:


Overhead of complex memory management.

What is page fault? Write steps to handle page fault.

If a process tries to use a page which is not in memory (i.e., the pager did not guess correctly), the hardware
traps to the operating system; this is known as a page fault. The operating system can handle the page fault
using the following steps:

1. The process has touched a page not currently in memory.

2. Check an internal table for the target process to determine if the reference was valid (do this in hardware.)

3. If page valid, but page not resident, try to get it from secondary storage.

4. Find a free frame; a page of physical memory not currently in use. (May need to free up a page.)

5. Schedule a disk operation to read the desired page into the newly allocated frame.

6. When the disk read is complete, modify the internal table and the page table to show that the page is now resident.

7. Restart the instruction that failed.

These steps are illustrated in the page-fault handling figure.
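The steps above can be sketched over toy data structures (a minimal model; the tables and frame pool are illustrative, not a real OS interface):

```python
# Sketch of page-fault handling: valid pages fault in on first touch.
page_table = {0: 7}      # page -> frame, for pages already resident
valid_pages = {0, 1, 2}  # pages in the process's logical address space
free_frames = [3]        # pool of free physical frames

def access(page):
    if page not in valid_pages:
        raise MemoryError("invalid reference")  # step 2: invalid -> abort
    if page not in page_table:                  # steps 1/3: page fault
        frame = free_frames.pop()               # step 4: find a free frame
        # step 5: (schedule a disk read of the page into `frame`)
        page_table[page] = frame                # step 6: mark page resident
    return page_table[page]                     # step 7: restart/continue

print(access(1))  # faults in page 1 to frame 3
print(access(1))  # now resident: no fault
```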


When we over-allocate memory, we need to push out something already in memory. Over-allocation may
occur when programs need to fault in more pages than there are physical frames to handle. Approach: If no
physical frame is free, find one not currently being touched and free it. Steps to follow are:

1. Find requested page on disk.

2. Find a free frame.

a. If there's a free frame, use it

b. Otherwise, select a victim page.

3. Write the victim page to disk (needed only if the page is dirty/modified).

4. Read the new page into freed frame. Change page and frame tables.

5. Restart user process.

Hardware requirements include a "dirty" (modified) bit.


Page replacement algorithms:

1. FIFO (First-In, First-Out)

2. LRU (Least Recently Used)

3. OPT (Optimal Page Replacement)
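Since the section names the algorithms without a worked example, here is a small fault-counting simulation of FIFO and LRU (the reference string and frame count are illustrative, not from the text):

```python
# Count page faults for FIFO and LRU on a reference string with n frames.
from collections import OrderedDict

def count_faults(refs, nframes, policy):
    frames = OrderedDict()  # keys ordered by insertion (FIFO) or recency (LRU)
    faults = 0
    for p in refs:
        if p in frames:
            if policy == "LRU":
                frames.move_to_end(p)      # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == nframes:
            frames.popitem(last=False)     # evict oldest / least recently used
        frames[p] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 3, "FIFO"), count_faults(refs, 3, "LRU"))
```

With 3 frames on this string, FIFO incurs 10 faults and LRU 9, illustrating how keeping recently used pages can reduce faults.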


Summary of memory management techniques

Fixed partitioning
  Description: Main memory is divided into static partitions at system generation time. A process may be
  loaded into a partition of equal or greater size.
  Strengths: Easily implemented; little operating-system overhead.
  Weaknesses: Inefficient use of memory due to internal fragmentation; maximum number of active
  processes is fixed.

Dynamic partitioning
  Description: Partitions are created dynamically, so that each process uses only the space it needs.
  Strengths: No internal fragmentation; more efficient use of main memory.
  Weaknesses: External fragmentation; inefficient use of the processor due to the need for compaction.

Simple paging
  Description: Main memory is divided into a number of equal-size frames. Each process is divided into a
  number of equal-size pages of the same length as the frames. A process is loaded by loading all of its
  pages into available, not necessarily contiguous, frames.
  Strengths: No external fragmentation.
  Weaknesses: A small amount of internal fragmentation.

Simple segmentation
  Description: Each process is divided into a number of segments. A process is loaded by loading all of its
  segments into dynamic partitions that need not be contiguous.
  Strengths: No internal fragmentation; improved memory utilization and reduced overhead compared to
  dynamic partitioning.
  Weaknesses: External fragmentation.

Virtual memory paging
  Description: As with simple paging, except that it is not necessary to load all of the pages of a process.
  Nonresident pages that are needed are brought in later automatically.
  Strengths: No external fragmentation; higher degree of multiprogramming; large virtual address space.
  Weaknesses: Overhead of complex memory management.

Virtual memory segmentation
  Description: As with simple segmentation, except that it is not necessary to load all of the segments of a
  process. Nonresident segments that are needed are brought in later automatically.
  Strengths: No internal fragmentation; higher degree of multiprogramming; large virtual address space;
  protection and sharing support.
  Weaknesses: Overhead of complex memory management.
