
S. R.


Memory Management
The Main Memory
It is a large array of electronic storage units.
Each unit stores a BIT.
Such storage units are organised into distinct groups.
Any such group is called a WORD.
A word is identified by a unique address, which is used either for storing information into it or retrieving information from it.


Memory Management
Programs and data reside in the memory words so that the CPU can execute them.
A program consists of a series of instructions.
An instruction consists of an OPCODE and zero, one or more operands.
The operands can be the actual data, the addresses of the memory locations holding the data, or a specification of how to generate the address of the location that holds the required data item.

Memory Management
These addresses (the ones used by programs) are called logical addresses.
The actual address of a memory location is called its physical address.
For a program to run successfully, instructions and data must be associated with physical addresses.
The OS provides a function that converts a CPU-generated logical address into a physical address.


Memory Management
Binding
Let us consider the following program:
Phys. Addr.   Before execution   After execution
10            Move 3             Move 3
11            Add 4              Add 4
12            Sta 5              Sta 5
13            10                 10
14            15                 15
15            5                  25

When the program is loaded at physical address 10, its logical addresses 0-5 are bound to physical addresses 10-15: Move 3 fetches the contents of location 3 (10), Add 4 adds the contents of location 4 (15), and Sta 5 stores the result, 25, into location 5.

Memory Management
(Roadmap: a single-user system starts with the fence register for protection; a multi-user system needs efficiency and integrity, leading to base/bound registers; segmentation and paging mark the start of virtual-memory management.)

Fence Register
The processor checks every memory reference to see whether the referenced address is beyond the fence address.
Limitation: it cannot protect one user from another.
(Figure: memory addresses run from 0 to high; the operating system occupies the lowest addresses, the fence register holds n + 1, and the user program space lies above the fence.)
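A minimal sketch of the check the hardware makes under a fence-register scheme; the register value and function names are assumptions for illustration, not anything defined on these slides.

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned int fence = 0x4000;   /* contents of the fence register (assumed value) */

    /* Hardware-style check applied to every memory reference:
       anything below the fence belongs to the OS and traps. */
    static unsigned int check_reference(unsigned int addr)
    {
        if (addr < fence) {
            fprintf(stderr, "trap: address 0x%x is below the fence\n", addr);
            exit(EXIT_FAILURE);
        }
        return addr;                       /* reference may proceed */
    }

    int main(void)
    {
        printf("0x%x is a legal user reference\n", check_reference(0x5000));
        check_reference(0x0100);           /* falls in the OS region, so it traps */
        return 0;
    }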

Base and Bounds Register

The processor checks all memory references against the base and bound registers.
Limitation: the text and data segments within the same process are not protected from each other.
Solution: two sets of base/bound registers
One for data
One for text
(Figure: memory holds the OS kernel followed by the program spaces of users A, B and C; the base register marks the start of the running user's space and the bound register marks its limit.)

Address Spaces
Address space: the set of addresses of memory.
Physical address space (PAS): 0 to N-1, where N is the size of memory.
The kernel occupies the lowest addresses.
(Figure: physical memory from address 0 to N-1, with the kernel at the bottom.)

Logical vs. Physical Addressing


The logical address space (LAS) of a process:
starts at 0
is compiler generated
is independent of the locations the process occupies in physical memory
(Figure: the logical address spaces of processes P1, P2 and P3, each running from 0 to Ni - 1, are mapped into different regions of the physical address space, which runs from 0 to N-1.)

Base and Limit Registers


A pair of base and limit registers defines the logical address space.

Multistep Processing of a User Program


Memory-Management Unit (MMU)


A hardware device that maps logical addresses to physical addresses.
In the MMU scheme, the value in the relocation/base register is added to every address generated by a user process.
The user program deals with logical addresses; it never sees the real physical addresses.
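A minimal sketch of that relocation step; the relocation-register value and the sample logical address are assumed for illustration only.

    #include <stdio.h>

    static unsigned int relocation = 14000;    /* relocation/base register (assumed value) */

    /* The MMU adds the relocation register to every CPU-generated address. */
    static unsigned int mmu_translate(unsigned int logical)
    {
        return logical + relocation;
    }

    int main(void)
    {
        unsigned int logical = 346;            /* address as seen by the user program */
        printf("logical %u -> physical %u\n", logical, mmu_translate(logical));
        return 0;
    }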


Dynamic relocation using a relocation register


Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution.
Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images.
Roll out, roll in: a swapping variant used with priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed.
The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped.
Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows).
The system maintains a ready queue of ready-to-run processes which have memory images on disk.


Schematic View of Swapping


Swapping
Swap-time
Assume that
the user program is 20 K words
the average latency of the backing store is 8 ms
the transfer rate is 250,000 words per sec
Swap-in time = 8 ms + (20,000 words / 250,000 words per sec)
             = 8 ms + 2/25 sec
             = 8 ms + 80 ms
             = 88 ms
Total swap time (swap out + swap in) = 2 x 88 ms = 176 ms
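The same arithmetic as a small C program, using the latency and transfer rate assumed on this slide:

    #include <stdio.h>

    /* Swap time per direction = latency + (words transferred / transfer rate). */
    static double swap_time_ms(double words, double latency_ms, double words_per_sec)
    {
        return latency_ms + (words / words_per_sec) * 1000.0;
    }

    int main(void)
    {
        double one_way = swap_time_ms(20000.0, 8.0, 250000.0);
        printf("swap-in time : %.0f ms\n", one_way);        /* 88 ms  */
        printf("total swap   : %.0f ms\n", 2.0 * one_way);  /* 176 ms */
        return 0;
    }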


Swapping
For efficient CPU utilization, the execution time of each process must be long relative to the swap time.
Thus, with RR CPU scheduling, the time quantum must be longer than 176 ms (0.176 s).
We see that the major part of the swap time is the transfer time.
The transfer time is directly proportional to the amount of memory swapped.


Swapping
Say we have a 32 K memory system with a resident monitor of 12 K.
Therefore, the maximum user program size is 20 K.
A user program may use less than 20 K, in which case there is no need to swap the full 20 K.
It is thus useful to know how much memory a user program is actually using, not how much it might use.
This makes it possible to swap only the amount of memory that is actually needed.


Swapping
The swap time for a user program of 4 K would be
8 ms + (4,000 words / 250 words per ms) = 8 ms + 16 ms = 24 ms
Thus, in such a system with dynamic memory requirements, a process needs to issue system calls (request/release memory) to inform the resident monitor of its changing memory needs.


Overlapped Swapping
The primary objective is to overlap the swapping of one process with the execution of some other process.
While the current process is executing, the previous process is swapped out from buffer-1 and the next process is swapped into buffer-2.
When the current process releases the CPU, the next process is copied from buffer-2 to the user area, and the current process is copied to buffer-1 for swapping out.
(Figure: memory holds the resident monitor, buffer-1 with the previous process being swapped out, buffer-2 with the next process being swapped in, a fence, and the running user's area.)

Overlapped Swapping
Problem:
Assume that I/O operations are queued and a process is waiting for an I/O device. If, in the meantime, the process is swapped out and another is swapped in, the pending I/O operation would attempt to use memory that is now held by some other process.
Solution:
Never swap a process with pending I/O.


Memory Allocation
Memory allocation to a process involves assigning memory addresses to the various instructions and data of the process.
Memory allocation is a part of binding.
A variable in a program is called an entity; an entity can have many attributes, viz. type, scope, memory address, etc.
Binding is the process of giving a value to some attribute.
Memory allocation is the process of specifying the memory-address attribute of an entity.

Static & Dynamic Memory Allocation


Static memory allocation is done by a compiler, linker or loader while readying a process for execution.
Dynamic memory allocation, on the other hand, is done just before the first use of an entity (lazy allocation) during the execution of the program.
Static allocation is possible only when data sizes are known before program execution starts. If sizes are not known, they have to be guessed, and that may lead to problems.
Dynamic allocation does not face this issue, as the actual size of the entity is known during execution.
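A small C illustration of the contrast (the sizes and names are hypothetical): the static array must be sized before execution, while the malloc'd buffer is sized only when the real requirement is known.

    #include <stdio.h>
    #include <stdlib.h>

    #define GUESSED_MAX 100                  /* static allocation: size guessed in advance */

    static int static_table[GUESSED_MAX];    /* allocated before execution starts */

    int main(void)
    {
        int n;
        printf("how many entries? ");
        if (scanf("%d", &n) != 1 || n <= 0)
            return 1;

        /* dynamic allocation: sized lazily, once the real requirement is known */
        int *dynamic_table = malloc(n * sizeof *dynamic_table);
        if (dynamic_table == NULL)
            return 1;

        static_table[0] = 42;                /* safe only while indices stay < GUESSED_MAX */
        dynamic_table[0] = 42;

        free(dynamic_table);
        return 0;
    }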


Static & Dynamic Memory Allocation


Static memory allocation does not require any allocation operation during the execution of a program, whereas with dynamic allocation the process of allocating memory is performed several times.
Operating systems implement both types of memory allocation in order to be efficient.


Static & Dynamic linking/loading


Linking refers to the process of connecting various modules together to form an executable program.
Loading, on the other hand, is the process of allocating memory to a program or a part of a program.
A static linker links all modules prior to the execution of a program. If several programs use the same library routines, each program gets a private copy of the routine, so several copies of some routines may exist in memory at the same time.
Dynamic linking is done as and when a module is referenced during the execution of the program.
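For illustration, POSIX systems expose run-time linking/loading of a module through dlopen/dlsym; a minimal sketch, with the library and symbol chosen only as an example:

    #include <dlfcn.h>   /* dlopen, dlsym, dlclose */
    #include <stdio.h>

    int main(void)
    {
        /* The math library is loaded only now, when referenced, not at link time. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (handle == NULL) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (cosine != NULL)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(handle);
        return 0;
    }
    /* Build on Linux with: cc demo.c -ldl */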


Memory Partitioning
Memory is divided into a number of partitions, each of which holds a program or a process.
The degree of multiprogramming depends on the number of partitions.
When a partition is free, a process is picked from the job queue and loaded into that free partition.
Memory partitions are allocated using two schemes:
Multiple Contiguous Fixed Partition Allocation, and
Multiple Contiguous Variable Partition Allocation.

Memory Partitioning
IBM's OS/360 uses the following algorithms to implement the aforesaid schemes:
MFT (Multiprogramming with a Fixed number of Tasks)
MVT (Multiprogramming with a Variable number of Tasks)
Since many programs reside in memory, the code and data of one program must be protected from being used by some other program.
This is achieved by using two kinds of registers, viz.
Bound registers, and
Base and limit registers.


Memory Partitioning
Two bound registers are used per program:
The lower bound register holds the lowest physical address, and
the upper bound register holds the highest address.
Legal user addresses therefore range from the lower bound to the upper bound.
The bound register scheme requires static memory allocation at load or compile time.
(Figure: memory from address 0 to 102400 holding the monitor and users 1-3; the lower and upper bound registers bracket the running user's partition.)

Memory Partitioning
Base & Limit registers:
The base register holds the smallest physical address.
The limit register holds the range of logical addresses.
The legal physical addresses are therefore base to (base + limit).
Legal logical addresses range from 0 to limit; logical address 0 is relocated to base.


HW address protection with base and limit registers
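The figure this slide refers to shows the check described on the previous slide: compare the CPU-generated logical address with the limit register, trap if it is out of range, otherwise add the base/relocation register. A minimal sketch of that logic, with assumed register values:

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned int base  = 300040;   /* base/relocation register (assumed value) */
    static unsigned int limit = 120900;   /* limit register (assumed value)           */

    /* Every CPU-generated logical address passes through this hardware check. */
    static unsigned int translate(unsigned int logical)
    {
        if (logical >= limit) {
            fprintf(stderr, "trap: addressing error (logical %u >= limit %u)\n",
                    logical, limit);
            exit(EXIT_FAILURE);
        }
        return base + logical;            /* relocated physical address */
    }

    int main(void)
    {
        printf("logical 0     -> physical %u\n", translate(0));
        printf("logical 42500 -> physical %u\n", translate(42500));
        return 0;
    }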


MFT
Memory is partitioned into regions of fixed sizes that do not change while the system runs.
When a job enters the system, it is put in a job queue.
The job scheduler takes into account the memory requirements of each job in the job queue and the available free regions.
When a job has been allocated memory space, it can compete for the CPU.
When a job terminates, it releases its memory space, which is then allocated to some other job waiting for memory.
Various strategies are used for allocating memory to jobs.


MFT
One strategy is to classify jobs into different categories depending upon their memory requirements.
o Each memory region has its own job queue.
o Job classification selects the appropriate job queue for each job.
The other strategy is to put all jobs in one queue. In this scheme, the job scheduler selects the next job to be run and waits until a memory region of the appropriate size is available.


MFT


MFT

(Figure: memory holds the resident monitor plus fixed regions of 2K, 6K and 12K, each with its own job queue.)
Q2 will hold any job with a memory requirement of 2K or less.
Q6 will hold any job of 6K or less.
Q12 will hold any job of 12K or less.
In this strategy, since each queue has its own memory region, there is no competition between queues for memory.

MFT

(Figure: memory holds the resident monitor plus fixed regions of 2K, 6K and 12K, served by a single job queue.)
In this scheme, the job scheduler selects the next job to be run and waits until a memory partition of the appropriate size is available.
Let us consider a single queue of jobs requiring 5K, 2K, 3K, 7K, 7K, 1K and 4K.
The job scheduler is FCFS.

MFT
Job 1 (5K) is assigned the 6K region.
Job 2 (2K) is assigned the 2K region.
Job 3 (3K) requires the 6K region, which has been assigned to job 1, hence it waits.
Though the region for job 4 is free, it cannot be allocated because the scheduler is FCFS.


MVT


Contiguous Allocation (Cont.)


Multiple-partition allocation
Hole: a block of available memory; holes of various sizes are scattered throughout memory.
When a process arrives, it is allocated memory from a hole large enough to accommodate it.
The operating system maintains information about: a) allocated partitions, b) free partitions (holes).
(Figure: four memory snapshots; as process 8 leaves and processes 9 and 10 arrive, the holes between the OS, process 5 and process 2 change in number and size.)

Dynamic Storage-Allocation Problem


How to satisfy a request of size n from a list of free holes?
First-fit: allocate the first hole that is big enough.
Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size. Produces the smallest leftover hole.
Worst-fit: allocate the largest hole; must also search the entire list. Produces the largest leftover hole.
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization.
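A compact sketch of the three placement policies over an assumed list of hole sizes (the sizes and the request are illustrative only); each function returns the index of the chosen hole, or -1 if none fits.

    #include <stdio.h>

    #define NHOLES 5

    /* Assumed free-hole sizes, in KB. */
    static int hole[NHOLES] = {100, 500, 200, 300, 600};

    static int first_fit(int n)
    {
        for (int i = 0; i < NHOLES; i++)
            if (hole[i] >= n)
                return i;                  /* first hole big enough */
        return -1;
    }

    static int best_fit(int n)
    {
        int best = -1;
        for (int i = 0; i < NHOLES; i++)   /* must scan the whole list */
            if (hole[i] >= n && (best < 0 || hole[i] < hole[best]))
                best = i;                  /* smallest hole that still fits */
        return best;
    }

    static int worst_fit(int n)
    {
        int worst = -1;
        for (int i = 0; i < NHOLES; i++)   /* must also scan the whole list */
            if (hole[i] >= n && (worst < 0 || hole[i] > hole[worst]))
                worst = i;                 /* largest hole */
        return worst;
    }

    int main(void)
    {
        int n = 212;   /* request size in KB (assumed) */
        printf("first-fit -> hole %d, best-fit -> hole %d, worst-fit -> hole %d\n",
               first_fit(n), best_fit(n), worst_fit(n));
        return 0;
    }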


Fragmentation
External fragmentation: enough total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation: allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
Reduce external fragmentation by compaction:
Shuffle memory contents to place all free memory together in one large block.
Compaction is possible only if relocation is dynamic and is done at execution time.
I/O problem:
Latch the job in memory while it is involved in I/O, or
do I/O only into OS buffers.
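A toy sketch of the compaction step described above, over an assumed array-of-blocks memory map: allocated blocks are slid toward low memory and the scattered holes merge into one large free block.

    #include <stdio.h>

    #define NBLOCKS 6

    /* A toy memory map (assumed layout): block sizes in KB; pid 0 marks a free hole. */
    static int size[NBLOCKS] = {400, 100, 300, 200, 500, 100};
    static int pid[NBLOCKS]  = {  1,   0,   2,   0,   3,   0};

    /* Compaction: pack allocated blocks toward low memory so the free space
       coalesces into one large hole at the end. This is only legal when
       relocation is dynamic, i.e. processes can be moved at execution time. */
    static int compact(void)
    {
        int w = 0, free_kb = 0;
        for (int r = 0; r < NBLOCKS; r++) {
            if (pid[r] != 0) {             /* keep allocated blocks, packed together */
                size[w] = size[r];
                pid[w]  = pid[r];
                w++;
            } else {
                free_kb += size[r];        /* gather the scattered holes */
            }
        }
        size[w] = free_kb;                 /* one large free block */
        pid[w]  = 0;
        return w + 1;                      /* number of blocks after compaction */
    }

    int main(void)
    {
        int n = compact();
        for (int i = 0; i < n; i++)
            printf("block %d: %4d KB, pid %d\n", i, size[i], pid[i]);
        return 0;
    }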
