
Operating System
Lecture Nine, Part 1
Dr. Jamal Altuwaijari
9. Memory Management
9.1 Introduction
The organization and management of the main (primary, or real) memory of a computer system has been one of the most important factors influencing O/S design. Programs and data need to be in main storage in order to be executed or referenced; if they are not needed immediately, they may be kept on secondary storage media such as tapes or disks until needed, and then brought into main storage for execution or reference.
Figure 9.1 shows the typical storage hierarchy.

Figure 9.1: Hierarchical storage organization


9.2 Storage management strategies
Many strategies are used to obtain the best possible use of the main storage resource. Some of these strategies are:
A. Fetch strategies are concerned with when to obtain the next piece of program or data for transfer to main storage from secondary storage. Demand fetch strategies bring the next piece of program or data into main storage when it is referenced by a running program. Anticipatory fetch strategies make predictions, anticipating where program control will go next.
B. Placement strategies are concerned with determining where in main storage to place a new program. We shall consider the first-fit, best-fit, and worst-fit storage placement strategies.
C. Replacement strategies are concerned with determining which piece of program or data to displace to make room for a new program.
9.3 Contiguous vs. non-contiguous storage allocation
Contiguous allocation means each program occupies a single contiguous block of storage locations.
In non-contiguous storage allocation, a program is divided into several blocks or segments that may be placed throughout main storage in pieces not necessarily adjacent to one another.
It is more difficult for an O/S to control non-contiguous storage allocation.
9.4 Allocation methods
There are many methods used to organize and allocate storage to user programs and data; some of these are:
9.4.1 Bare machine
The user has complete control over the entire memory space and can use the memory in a flexible manner. This method is simple, requires no cost, and needs no special H/W or S/W.
Figure 9.2: Bare machine (the user occupies the entire memory, 0 to 32K)
9.4.2 Single user contiguous storage allocation
The earliest computer systems allowed only a single user at a time to use the machine. The memory is divided into two sections, one for the user and one for the O/S, as in figure 9.3 below.
Figure 9.3: Single user contiguous storage allocation (user region and unused region)


In this scheme, programs are limited in size to the amount of main storage. The protection H/W uses the fence address to protect the O/S from the user program: every user address is compared with the fence address; if it is >= the fence, the address is correct, otherwise an error is indicated. See figure 9.4 below.
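As a rough illustration of this check, the C sketch below compares user-generated addresses against a fence register; the fence value and the function names are made up for the example, not taken from the lecture.

#include <stdio.h>

/* Rough sketch of the fence check: every user-generated address is compared
 * with the fence register; addresses below the fence belong to the O/S area. */
typedef unsigned int addr_t;

static const addr_t FENCE = 0x2000;           /* illustrative fence value */

int check_user_address(addr_t a)
{
    if (a >= FENCE)
        return 1;                             /* address is correct */
    printf("addressing error: %#x is below the fence\n", a);
    return 0;                                 /* real H/W would trap to the O/S */
}

int main(void)
{
    check_user_address(0x2400);               /* accepted: user area */
    check_user_address(0x1000);               /* rejected: O/S area  */
    return 0;
}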
When the O/S size changes during program execution, there are two ways to modify the basic scheme:
a. The user program is loaded into high memory, down toward the fence, rather than from the fence toward high memory. The advantage here is that all unused space is in the middle, and either the user or the O/S can expand into this unused memory as necessary; see figure 9.5.
b. A more general approach is to delay address binding until execution time. This dynamic relocation scheme requires different H/W support, as in figure 9.6.

Figure 9.5: Loading the user program into high memory


The fence register is now called the relocation or base register. The value of this register is added to every address generated by a user process at the time it is sent to memory; see figure 9.6 below.

Figure 9.6: Dynamic relocation using a base register (relocation register)

Now we have two different types of addresses: logical addresses, ranging from 0 to max, and physical addresses, ranging from R+0 to R+max, where R is the value in the relocation register. The user only generates logical addresses.
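The following minimal C sketch mimics this translation; the values chosen for R and for the largest logical address are illustrative only, and the function simply does what the hardware does when an address is sent to memory.

#include <stdio.h>

/* Sketch of dynamic relocation: the user sees logical addresses 0..MAX and the
 * hardware adds the relocation register R on the way to memory, producing
 * physical addresses R..R+MAX. */
typedef unsigned int addr_t;

static const addr_t R   = 0x4000;   /* relocation (base) register, illustrative */
static const addr_t MAX = 0x0FFF;   /* largest logical address of the process   */

addr_t relocate(addr_t logical)
{
    return R + logical;             /* performed by hardware, invisible to the user */
}

int main(void)
{
    printf("logical 0x0 -> physical %#x\n", relocate(0x0));
    printf("logical %#x -> physical %#x\n", MAX, relocate(MAX));
    return 0;
}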
9.4.3 Overlays allocation methods
The size of a program (process) can be larger than the amount of memory allocated to it; in that case a technique called overlays is sometimes used. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space that was previously occupied by instructions that are no longer needed.
Example
Consider a two-pass assembler: during pass 1 it constructs a symbol table, then during pass 2 it generates machine-language code. We may be able to partition such an assembler into pass 1 code, pass 2 code, the symbol table, and common support routines used by both pass 1 and pass 2. Assume the sizes of these components are as follows:
Pass 1 = 70 KB
Pass 2 = 80 KB
Symbol table = 20 KB
Common routines = 30 KB
To load everything at once we require 200 KB of memory. If only 150 KB is available we cannot run our process. But pass 1 and pass 2 do not need to be in memory at the same time, so we can define two overlays:
A = Pass 1 + S.T. + C.R. = 70 + 20 + 30 = 120 KB
B = Pass 2 + S.T. + C.R. = 80 + 20 + 30 = 130 KB
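A tiny C sketch of the same arithmetic, just to check that each overlay fits into the available 150 KB (the sizes are taken from the example above):

#include <stdio.h>

/* Overlay arithmetic from the two-pass assembler example: each overlay must
 * fit in the 150 KB that is available, even though all parts need 200 KB. */
int main(void)
{
    const int pass1 = 70, pass2 = 80, symtab = 20, common = 30;   /* sizes in KB */
    const int available = 150;

    int overlay_a = pass1 + symtab + common;    /* 70 + 20 + 30 = 120 KB */
    int overlay_b = pass2 + symtab + common;    /* 80 + 20 + 30 = 130 KB */

    printf("overlay A = %d KB (fits: %s)\n", overlay_a, overlay_a <= available ? "yes" : "no");
    printf("overlay B = %d KB (fits: %s)\n", overlay_b, overlay_b <= available ? "yes" : "no");
    return 0;
}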

Figure 9.7: Overlays for a two-pass assembler


Logical versus physical address space
An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit is commonly referred to as a physical address.
The compile-time and load-time address-binding schemes result in an environment where the logical and physical addresses are the same, while with execution-time binding these two addresses differ.
The set of all logical addresses generated by a program is referred to as a logical address space; the set of all physical addresses corresponding to these logical addresses is referred to as a physical address space.
The run-time mapping from virtual to physical addresses is done by the memory-management unit (MMU).
9.4.4 Swapping allocation method
A process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Swapping requires a backing store, which is commonly a fast disk and must be large enough to accommodate copies of all memory images.
9.5 Multiple-Partition Allocation
Because it is generally desirable that several user processes reside in memory at the same time, we need to consider how to allocate available memory to the various processes that are in the input queue waiting to be brought into memory.
The memory is divided into a number of regions or partitions, and each region holds one process.
Two major memory management schemes are possible.
9.5.1 Multiple contiguous fixed partition (MFT)
The memory is divided into a number of fixed-sized partitions, and each one contains exactly one process. Thus the degree of multiprogramming is bound by the number of partitions. When a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.
Protection H/W
The protection can be provided by using two registers, as in figure 9.9:
i. Bounds registers: the values of the smallest and largest physical addresses.
ii. Base and limit registers: the value of the smallest physical address and the range of logical addresses.
Figure 9.9: Multiple Partitions
User addresses range from 0 to the limit and are dynamically relocated to physical addresses ranging from base to base + limit; see figure 9.10 below. A small sketch of both checks is given after figure 9.10.

Figure 9.10: (a) Bounds registers; (b) base/limit registers (invalid addresses trap to the O/S as addressing errors)
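The following C sketch contrasts the two protection schemes of figure 9.10; the register values used in main() are illustrative only, not taken from the lecture.

#include <stdio.h>

typedef unsigned int addr_t;

/* (a) Bounds registers: the partition's smallest and largest physical addresses. */
int bounds_check(addr_t physical, addr_t lower, addr_t upper)
{
    return physical >= lower && physical <= upper;    /* outside -> trap to O/S */
}

/* (b) Base and limit registers: the logical address must be below the limit,
 *     then the base is added to it to form the physical address. */
int base_limit_translate(addr_t logical, addr_t base, addr_t limit, addr_t *physical)
{
    if (logical >= limit)
        return 0;                                     /* addressing error -> trap to O/S */
    *physical = base + logical;
    return 1;
}

int main(void)
{
    addr_t phys;
    printf("bounds check 0x5200 in [0x5000, 0x6FFF]: %d\n",
           bounds_check(0x5200, 0x5000, 0x6FFF));
    if (base_limit_translate(0x0200, 0x5000, 0x2000, &phys))
        printf("logical 0x0200 -> physical %#x\n", phys);
    return 0;
}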
With MFT, for example, if we have three partitions of size 2K, 6K, and 12K, we need three queues: Q2, Q6, and Q12. An incoming job requiring 5K of memory would be appended to Q6, a new job needing 10K is put in Q12, and a job of 2K goes in Q2. See the figure below and the sketch after it.
Figure: MFT memory layout (O/S region and partitions of 2K, 6K, and 12K), shown with one input queue
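A small C sketch of the queue selection described above, assuming one queue per partition and using the 2K/6K/12K sizes from the example; the actual queue handling is omitted.

#include <stdio.h>

/* Route each incoming job to the queue of the smallest fixed partition that
 * can hold it (Q2, Q6 or Q12 in the example above). */
static const int partition_kb[] = { 2, 6, 12 };
static const int NPART = 3;

int choose_queue(int job_kb)
{
    for (int i = 0; i < NPART; i++)
        if (job_kb <= partition_kb[i])
            return i;                      /* index of Q2, Q6 or Q12 */
    return -1;                             /* job too large for any partition */
}

int main(void)
{
    int jobs[] = { 5, 10, 2 };             /* the jobs from the example */
    for (int i = 0; i < 3; i++) {
        int q = choose_queue(jobs[i]);
        if (q >= 0)
            printf("job of %dK -> Q%d\n", jobs[i], partition_kb[q]);
        else
            printf("job of %dK cannot be loaded\n", jobs[i]);
    }
    return 0;
}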


9.5.2 Multiple contiguous variable partition (MVT)
The main problem with MFT is that the fixed partition sizes do not match the sizes of the incoming jobs, so the best available memory blocks are not allocated and space is wasted both inside and outside the partitions. The solution to this problem is to allow the size of the partitions to be dynamically variable. This method is called Multiple Contiguous Variable Partition (MVT). In this method, the operating system keeps a table indicating which parts of memory are available and which are occupied. Initially, all memory is available for user processes and is considered one large block of available memory, a hole. As processes enter the system, they are put into an input queue. The operating system takes into account the memory requirements of each process and the amount of available memory space in determining which processes are allocated memory. The disadvantage of this method is internal and external fragmentation, which leads to bad utilization of memory.
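One possible shape of such a table is sketched below in C; the structure and field names are illustrative and not part of the lecture. Initially the table contains a single hole covering all user memory.

#include <stdio.h>

/* Sketch of the table the O/S keeps under MVT: a list of holes (free blocks). */
typedef struct {
    unsigned int start;        /* first address of the free block (KB) */
    unsigned int size;         /* size of the free block (KB)          */
} hole_t;

#define MAX_HOLES 64

typedef struct {
    hole_t holes[MAX_HOLES];
    int    count;
} hole_table_t;

/* Initially all user memory is one large hole. */
void init_hole_table(hole_table_t *t, unsigned int user_start, unsigned int user_size)
{
    t->holes[0].start = user_start;
    t->holes[0].size  = user_size;
    t->count = 1;
}

int main(void)
{
    hole_table_t table;
    init_hole_table(&table, 40, 216);      /* e.g. O/S in the first 40 KB of a 256 KB memory */
    printf("%d hole(s), first at %u KB, size %u KB\n",
           table.count, table.holes[0].start, table.holes[0].size);
    return 0;
}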
Storage placement strategies
These strategies are used to determine where in main storage to place incoming programs and data. Three strategies are used:
i. Best-fit: an incoming job is placed in the smallest hole in main memory that is big enough. We must search the entire list unless the list is kept ordered by size.
ii. First-fit: allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or where the previous first-fit search ended, and stops as soon as we find a free hole that is large enough.
iii. Worst-fit: allocate the largest hole. Again, we must search the entire list unless it is sorted by size.
Simulations indicate that both first-fit and best-fit are better than worst-fit in terms of decreasing time and storage utilization.
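The three strategies can be sketched in C as follows, operating on an illustrative array of hole sizes (in an MVT system the holes would come from the O/S table described earlier); each function returns the index of the chosen hole, or -1 if no hole fits.

#include <stdio.h>

/* First-fit: take the first hole that is big enough. */
int first_fit(const int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;
    return -1;
}

/* Best-fit: take the smallest hole that is big enough (entire list searched). */
int best_fit(const int holes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;
    return best;
}

/* Worst-fit: take the largest hole that is big enough (entire list searched). */
int worst_fit(const int holes[], int n, int request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;
    return worst;
}

int main(void)
{
    int holes[] = { 10, 4, 20, 18, 7 };    /* free block sizes in KB (illustrative) */
    int n = 5, request = 6;
    printf("first-fit: hole %d\n", first_fit(holes, n, request));   /* index 0 (10K) */
    printf("best-fit:  hole %d\n", best_fit(holes, n, request));    /* index 4 (7K)  */
    printf("worst-fit: hole %d\n", worst_fit(holes, n, request));   /* index 2 (20K) */
    return 0;
}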

Figure 9.12: The use of placement strategies


Fragmentation
A job which needs m words of memory may be run in a region of n words, where n > m. The difference between these two numbers (n - m) is internal fragmentation: memory which is internal to a region but is not being used. External fragmentation occurs when a region is unused and available but too small for any waiting job. Both types of fragmentation are sources of memory waste in MFT.
F = IF + EF
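As an illustration using the 2K/6K/12K partitions from the MFT example above: a 5K job loaded into the 6K partition leaves IF = 6K - 5K = 1K unused inside the partition, and if the empty 2K partition is too small for every waiting job it counts as EF = 2K, so the total waste is F = 1K + 2K = 3K.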
Compaction
One solution to fragmentation is compaction. The goal is to shuffle the memory contents to place all free memory together in one large block. Compaction is not always possible, and when it is possible we must determine its cost. The simplest compaction algorithm is to move all jobs toward one end of memory; all holes move in the other direction, producing one large hole of available memory. See figure 9.13 below.
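A minimal C sketch of this idea, modelling memory as an array of KB-sized cells where 0 marks a free cell and any other value is a job id (the model and the job layout are illustrative):

#include <stdio.h>

#define MEM_KB 16

/* Shift every allocated cell toward the low end of memory, so all holes
 * merge into one large hole at the high end. */
void compact(int mem[MEM_KB])
{
    int dst = 0;
    for (int src = 0; src < MEM_KB; src++)
        if (mem[src] != 0)
            mem[dst++] = mem[src];      /* move each allocated cell down */
    while (dst < MEM_KB)
        mem[dst++] = 0;                 /* one large hole remains at the top */
}

int main(void)
{
    /* jobs 1..3 separated by holes */
    int mem[MEM_KB] = { 1, 1, 0, 0, 2, 2, 2, 0, 0, 0, 3, 0, 0, 0, 0, 0 };
    compact(mem);
    for (int i = 0; i < MEM_KB; i++)
        printf("%d ", mem[i]);          /* prints: 1 1 2 2 2 3 followed by zeros */
    printf("\n");
    return 0;
}

Note that moving a job also means updating its relocation (base) register so it can keep running, which is part of the cost of compaction mentioned above.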

Figure 9.13: Compaction process
