
MEMORY MANAGEMENT

MCS-041
Block-2
Unit-1
Topics to be covered

• Overlays and Swapping
• Logical and Physical Address Space
• Single Process Monitor
• Contiguous Allocation Methods
  – Single Partition System
  – Multiple Partition System
• Paging
  – Principles of Operation
  – Page Allocation
  – Hardware Support for Paging
  – Protection and Sharing
• Segmentation
  – Principles of Operation
  – Address Translation
  – Protection and Sharing

MEMORY MANAGEMENT
Memory Management
Memory management is the functionality of an operating system that handles or manages primary memory. It keeps track of each memory location, whether it is allocated to some process or free. It decides how much memory is to be allocated to a process, and which process will get memory at what time. Whenever some memory is freed or deallocated, it updates the status accordingly.

Memory management provides protection by using two registers, a base register and a limit register. The base register holds the smallest legal physical memory address and the limit register specifies the size of the range. For example, if the base register holds 300000 and the limit register holds 120900, then the program can legally access all addresses from 300000 through 420899.
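The base/limit protection check just described can be sketched in a few lines (illustrative only, not a hardware description; the base of 300000 and limit of 120900 follow the example above):

```python
# Illustrative sketch of base/limit protection:
# an access is legal only if it falls in [base, base + limit).
def is_legal_access(address, base=300000, limit=120900):
    """Return True if the address lies inside the program's legal range."""
    return base <= address < base + limit

print(is_legal_access(300000))  # True  (first legal address)
print(is_legal_access(420899))  # True  (last legal address)
print(is_legal_access(420900))  # False (outside the range: would trap)
```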
OVERLAYS AND SWAPPING
Overlay
Overlaying means "replacement of a block of stored instructions or data with
another." Overlaying is a programming method that allows programs to be larger than the
computer's main memory. Overlay gives the program a way to extend limited main storage.

A program based on the overlay scheme mainly consists of the following:

• A "root" piece which is always memory resident
• A set of overlays, which are loaded into the overlay region and replaced as needed.

Limitations of Overlays:
• Overlays require careful and time-consuming planning.
• The programmer is responsible for organizing the overlay structure of the program (with the help of file structures etc.) and for ensuring that a piece of code is already loaded when it is called.
• The operating system provides only the facility to load files into the overlay region.
Example of Overlay
Suppose the total available memory is 140K. Consider a program with four subroutines: Read(), Function1(), Function2() and Display(). First, Read is invoked to read a set of data. Based on the data values, either Function1 or Function2 is called, and then Display is called to output the results. Here, Function1 and Function2 are mutually exclusive and are never required in memory simultaneously.

Without overlays the program requires 180K of memory; with overlay support the requirement drops to 130K. (For instance, if the four routines occupy 20K, 50K, 70K and 40K respectively, the whole program needs 180K, while keeping only the larger of Function1/Function2 resident needs at most 20K + 70K + 40K = 130K, which fits in 140K.)

(Figure: memory layout of the program without overlay vs. with overlay.)

Swapping
Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution.
The backing store is usually a hard disk drive or other secondary storage that is fast in access and large enough to accommodate copies of all memory images for all users. It must be capable of providing direct access to these memory images.

The major time-consuming part of swapping is transfer time. Total transfer time is directly proportional to the amount of memory swapped. Assume the user process is of size 100KB and the backing store is a standard hard disk with a transfer rate of 1MB (about 1000KB) per second. The actual transfer of the 100KB process to or from memory will take
100KB / 1000KB per second
= 1/10 second
= 100 milliseconds
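The arithmetic above generalizes directly; a small sketch (the function name is my own):

```python
def swap_transfer_time_ms(process_kb, rate_kb_per_s):
    """Transfer time for one direction of a swap, in milliseconds."""
    return process_kb / rate_kb_per_s * 1000

print(swap_transfer_time_ms(100, 1000))  # 100.0 -> the 100 ms from the example
```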
Benefits of Swapping

• Allows a higher degree of multiprogramming.
• Allows dynamic relocation: if address binding is done at execution time, a process can be swapped in at a different location; with compile-time or load-time binding, processes must be swapped back to the same location.
• Better memory utilisation.
• Less wastage of CPU time on compaction.
• Can easily be applied to priority-based scheduling algorithms to improve performance.
LOGICAL VS. PHYSICAL ADDRESS SPACE
Logical & Physical Address Space

An address generated by the CPU is a logical address, whereas an address actually seen by the memory unit is a physical address. A logical address is also known as a virtual address.

• Virtual and physical addresses are the same in compile-time and load-time address-binding schemes; they differ in the execution-time address-binding scheme.
• The set of all logical addresses generated by a program is referred to as the logical address space. The set of all physical addresses corresponding to these logical addresses is referred to as the physical address space.


Logical & Physical Address Space

• The run-time mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device. The MMU uses the following mechanism to convert a virtual address to a physical address.
• The value in the base register is added to every address generated by a user process, which is treated as an offset at the time it is sent to memory. For example, if the base register value is 10000, then an attempt by the user to access address location 100 will be dynamically relocated to location 10100.
• The user program deals with virtual addresses; it never sees the real physical addresses.
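The MMU mechanism just described, a base value added to every user-generated address, can be sketched as a toy model (not a hardware description):

```python
def mmu_translate(logical_address, relocation_register=10000):
    """Dynamically relocate a logical address by adding the base value."""
    return relocation_register + logical_address

# The example above: logical address 100 with base 10000 maps to 10100.
print(mmu_translate(100))  # 10100
```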
SINGLE PROCESS MONITOR (MONOPROGRAMMING)
In the simplest case of a single-user system everything was easy: at any time there was just one process in memory, and no address translation was done dynamically by the operating system during execution. Protection of the OS (or part of it) can be achieved by keeping it in ROM. We can also have a separate OS address space accessible only in supervisor mode.


CONTIGUOUS ALLOCATION METHODS

Memory Allocation
Partitioned Memory Allocation

The concept of multiprogramming emphasizes maximizing CPU utilization by overlapping CPU and I/O activity. Memory may be allocated as:
• a single large partition for processes to use, or
• multiple partitions, with a single process using a single partition.


Single-Partition System

This approach keeps the operating system in the lower part of memory and user processes in the upper part. With this scheme, the operating system can be protected from being modified by user processes. The relocation-register scheme, known as dynamic relocation, is useful for this purpose: it not only protects user processes from each other but also prevents them from changing OS code and data. Two registers are used: the relocation register, which contains the value of the smallest physical address, and the limit register, which contains the range of logical addresses. Both are set by the operating system when the job starts.


Memory-Management Unit (MMU)
• Hardware device that maps virtual to physical address
• In MMU scheme, the value in the relocation register is added to
every address generated by a user process at the time it is sent to
memory
• The user program deals with logical addresses; it never sees the
real physical addresses

Dynamic relocation using a relocation register


Base and Limit Registers
• A pair of base and limit registers define the logical address space
How To Protect Processes?
• Relocation registers used to protect user processes from each other, and from
changing operating-system code and data
– Base register contains value of smallest physical address
– Limit register contains range of logical addresses
– MMU maps logical address dynamically
Multi Partitions (Fixed Partitions)

(a) Separate input queues for each partition


(b) Single input queue
Multi Partitions (Dynamic Partitions)
• Main memory is usually divided into two partitions:
– Resident operating system, usually held in low memory with the interrupt vector
– User processes, held in high memory
• Multiple-partition allocation
– Hole: a block of available memory; holes of various sizes are scattered throughout memory
– When a process arrives, it is allocated memory from a hole large enough to hold it
– The operating system maintains information about:
a) allocated partitions b) free partitions (holes)
(Figure: four snapshots of memory under dynamic partitioning. The OS, process 5 and process 2 remain resident, while process 8 departs leaving a hole that is later filled by process 9 and then partly by process 10.)

Dynamic Storage-Allocation Problem

How to satisfy a request of size n from a list of free holes


• First-fit: Allocate the first hole that is big enough
• Best-fit: Allocate the smallest hole that is big enough; must search
entire list, unless ordered by size
– Produces the smallest leftover hole
• Worst-fit: Allocate the largest hole; must also search entire list
– Produces the largest leftover hole

Statistics:
• First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
• Fragmentation: for every N allocated blocks, roughly another N/2 blocks are lost to fragmentation (the fifty-percent rule).
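The three placement strategies can be sketched against a hypothetical hole list (the hole sizes in KB are invented for illustration):

```python
def first_fit(holes, n):
    """Index of the first hole big enough for a request of size n."""
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    """Index of the smallest hole big enough (smallest leftover hole)."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(fits)[1] if fits else None

def worst_fit(holes, n):
    """Index of the largest hole (largest leftover hole)."""
    fits = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # free holes, in memory order
print(first_fit(holes, 212))        # 1 -> the 500K hole (first big enough)
print(best_fit(holes, 212))         # 3 -> the 300K hole (tightest fit)
print(worst_fit(holes, 212))        # 4 -> the 600K hole (largest leftover)
```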
Fragmentation
• External Fragmentation – total memory space exists to satisfy a request, but it is not
contiguous

• Internal Fragmentation – allocated memory may be slightly larger than requested memory;
this size difference is memory internal to a partition, but not being used.

• Reduce external fragmentation by compaction


– Shuffle memory contents to place all free memory together in one large block

– Compaction is possible only if relocation is dynamic, and is done at execution time

– I/O problem

• Latch job in memory while it is involved in I/O

• Do I/O only into OS buffers

– Can be very slow: with 256MB of memory and 4 bytes copied every 40ns, compacting memory takes about 2.7 seconds

• Almost never used
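The 2.7-second figure quoted for compaction follows from simple arithmetic, which this sketch reproduces (assuming one 4-byte word copied every 40 ns):

```python
def compaction_time_s(memory_bytes, word_bytes, ns_per_word):
    """Time to copy all of memory word by word, in seconds."""
    return memory_bytes / word_bytes * ns_per_word * 1e-9

t = compaction_time_s(256 * 2**20, 4, 40)
print(round(t, 2))  # 2.68 -> roughly the "2.7 sec" quoted above
```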


Paging
• Partition memory into small equal-size chunks and divide each process into chunks of the same size
• The chunks of a process are called pages and the chunks of memory are called frames
• The operating system maintains a page table for each process
– it contains the frame location for each page in the process
– a memory address consists of a page number and an offset within the page
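The page-table lookup described in the bullets can be sketched as follows (the page size and the page-to-frame mapping are invented for illustration):

```python
PAGE_SIZE = 1024                 # assumed 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}  # hypothetical page -> frame mapping

def translate(logical_address):
    """Split the address into page number and offset, then map via the page table."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame = page_table[page]     # raises KeyError for an unmapped page
    return frame * PAGE_SIZE + offset

# Address 1050 = page 1, offset 26; page 1 lives in frame 2.
print(translate(1050))  # 2 * 1024 + 26 = 2074
```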
Paging
Page Allocation
In variable-sized partitioning of memory, every time a process of size n is to be loaded it is important to choose the best location from the list of available/free holes. This dynamic storage allocation is necessary to increase the efficiency and throughput of the system. The most commonly used selection strategies are:

1) Best-fit policy: allocate the hole in which the process fits most "tightly", i.e., the difference between the hole size and the process size is the minimum one.

2) First-fit policy: allocate the first available hole (in memory order) that is big enough to accommodate the new process.

3) Worst-fit policy: allocate the largest hole, i.e., the one leaving the maximum amount of unused space after allocation.


Hardware Support for Paging
Every logical address in the paging scheme is divided into two parts:

1) A page number (p) in the logical address space

2) The displacement (or offset) within page p at which the item resides (i.e., from the start of the page).

This is known as the address translation scheme.

The table which holds the virtual-address-to-physical-address translations is called the page table. As the displacement is unchanged by translation, only the virtual page number needs to be translated to a physical frame number.
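When the page size is a power of two, extracting p and the displacement reduces to a shift and a mask; a sketch (10 offset bits, i.e., 1 KB pages, chosen for illustration):

```python
OFFSET_BITS = 10  # assumed 1 KB pages: the low 10 bits are the displacement d

def split_address(logical_address):
    """Return (page number p, displacement d) for a logical address."""
    p = logical_address >> OFFSET_BITS
    d = logical_address & ((1 << OFFSET_BITS) - 1)
    return p, d

print(split_address(1050))  # (1, 26): page 1, offset 26
```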
Paging Translation

(Figure: a logical address is split into a virtual page number and an offset; the page table maps the virtual page number to a page frame number (PFN), which is combined with the unchanged offset to form the physical address in one of page frames 0..Y of physical memory.)
Paging Address Translation by Direct Mapping
This is the case of direct mapping, as the page table maps each page directly to a physical memory frame. The disadvantage of this scheme is its translation speed: the page table is kept in primary storage and its size can be considerably large, which increases instruction execution time (and access time) and hence decreases system speed. To overcome this, additional hardware support in the form of registers and buffers can be used.
Paging Address Translation with Associative Mapping
This scheme is based on the use of dedicated registers with high speed and efficiency. These small, fast-lookup caches hold part of the page table in content-addressed associative storage, hence speeding up the lookup. They are known as associative registers or Translation Look-aside Buffers (TLBs). Each register consists of two entries:

1) Key, which is matched against the logical page number p.

2) Value, which returns the page frame number corresponding to p.

It is similar to the direct-mapping scheme, but as TLBs contain only a few page-table entries, the search is fast. It is, however, quite expensive due to the register support.
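The key/value behaviour of a TLB can be modelled as a tiny cache in front of the page table (the capacity, the mapping and the naive replacement rule are all invented for illustration):

```python
page_table = {0: 5, 1: 2, 2: 7, 3: 1}  # hypothetical page -> frame mapping
tlb = {}                                # the small associative cache
TLB_CAPACITY = 2

def lookup(page):
    """Return (frame, hit): try the TLB first, fall back to the page table."""
    if page in tlb:
        return tlb[page], True              # TLB hit: fast path
    frame = page_table[page]                # slow path: full page-table walk
    if len(tlb) >= TLB_CAPACITY:
        tlb.pop(next(iter(tlb)))            # naive replacement: evict oldest entry
    tlb[page] = frame                       # cache the translation
    return frame, False                     # TLB miss

print(lookup(1))  # (2, False) -> miss, entry now cached
print(lookup(1))  # (2, True)  -> hit on the second access
```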


Paging Address Translation with Direct & Associative Mapping
Both the direct and associative mapping schemes can be combined for greater benefit. Here, the page number is matched against all associative registers simultaneously. The percentage of times a page is found in the TLB is called the hit ratio. If a page is not found there, it is searched for in the page table and added to the TLB. If the TLB is already full, a page replacement policy is used, since the number of TLB entries is limited.
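The hit ratio determines the effective memory access time; the standard textbook calculation can be sketched as follows (the 20 ns TLB and 100 ns memory times are assumed, not taken from this unit):

```python
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    """Weighted average: a hit costs one memory access, a miss costs two."""
    hit_cost = tlb_ns + mem_ns        # TLB lookup + the actual access
    miss_cost = tlb_ns + 2 * mem_ns   # TLB lookup + page-table walk + access
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

print(effective_access_time(0.80))  # 140.0 ns at an 80% hit ratio
```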
Protection and Sharing

Paging hardware typically also contains some protection mechanism. In the page table, a protection bit is associated with each frame; this bit can tell whether a page is read-only or read-write.

Sharing of code and data takes place when two page-table entries in different processes point to the same physical page: the processes then share that memory. If one process writes the data, the other process sees the changes. It is a very efficient way to communicate, but sharing must also be controlled to prevent one process from modifying or accessing another process's data. For this, programs keep procedures and data separate, and procedures and data that are non-modifiable (pure/reentrant code) can be shared.


Advantages of the paging scheme

• The virtual address space can be greater than the main memory size, i.e., a program with a large logical address space can execute in a smaller physical address space.
• Avoids external fragmentation and hence storage compaction.
• Full utilization of available main storage.


Disadvantages of the paging scheme

• Internal fragmentation, i.e., wastage within an allocated page when the process is smaller than the page boundary.
• Extra resource consumption and overhead for the paging hardware and for virtual-to-physical address translation.
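Internal fragmentation is easy to quantify: it is the unused tail of the last allocated page. A sketch (1 KB page size assumed for illustration):

```python
def internal_fragmentation(process_bytes, page_size=1024):
    """Bytes wasted in the last page when the process doesn't fill it exactly."""
    remainder = process_bytes % page_size
    return 0 if remainder == 0 else page_size - remainder

# A 2500-byte process needs 3 pages (3072 bytes), wasting 572 bytes.
print(internal_fragmentation(2500))  # 572
```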


Segmentation

• All segments of all programs do not have to be of the same length.
• There is a maximum segment length.
• Addressing consists of two parts: a segment number and an offset.
• Since segments are not of equal size, segmentation is similar to dynamic partitioning.
Address Translation

The mapping between logical and physical addresses is done by the segment table, which contains each segment's base and limit. The segment base holds the starting physical address of the segment, and the segment limit gives the length of the segment.

Principle of Operation

• This scheme is similar to the variable-partition allocation method, with the improvement that the process is divided into parts. For fast retrieval, registers can be used as in the paged scheme; the register holding the length of the segment table is known as the segment-table length register (STLR).
• The segments in a segmentation scheme correspond to logical divisions of the process and are defined by the programmer.
• First extract the segment number and offset from the logical address. Then use the segment number as an index into the segment table to obtain the segment's base address and its limit/length.
• Check that the offset is not greater than the limit given in the segment table.
• The physical address is then obtained by adding the offset to the base address.
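The steps above can be sketched end to end; the segment-table values here are invented for illustration:

```python
# Hypothetical segment table: segment number -> (base, limit)
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}

def translate_segment(segment, offset):
    """Check the offset against the segment limit, then add it to the base."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit: addressing error trap")
    return base + offset

print(translate_segment(2, 53))  # 4300 + 53 = 4353
```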
