
Unit 7

Memory Management
Why does memory need to be managed?
 Before a program can be executed, it must be
loaded into memory.
 In a multi-programming environment, programs are
loaded into different portions of the memory.
 Discrepancies arise because the addresses a
program's code refers to are logical addresses,
while their actual physical addresses in the physical
memory are different.
Physical address vs. Logical address

 Logical address :
 The address a program perceives after it is
compiled.
 Usually starts from 0 and increases contiguously
onward till the end of the program.
Physical address vs. Logical address

 Physical address :
 The address that the CPU actually refers to.
 Usually starts from 0 and increases contiguously
onward till the end of the available memory
modules.
 Different programs can reside within the
available memory at the same time. Hence, their
starting (physical) addresses are usually not 0.
Discrepancy example

 Suppose program X contains the instruction
"Jump to 4000", where 4000 is a logical address
(program X's logical addresses start at 0).
 In the physical memory, the lower region is
already used, so the loader places program X at,
say, starting address 2712.
 After loading, the target of this jump should be
2712 + 4000, which is 6712, so the instruction
must effectively become "Jump to 6712".
 Obviously, this requires some kind of management.
Where in memory should a program be loaded?
 How do we determine where a program should
be loaded, within the memory?
 Some suggestions:
 First fit
 Best fit
 Worst fit
 There exist other schemes as well, of course.
First fit

 The program is loaded into the FIRST hole that
can accommodate it.
 Example: the physical memory contains free holes
of 80K, 100K, 40K and 60K, separated by used
regions. Program I (35K) is loaded into the 80K
hole, the first one big enough, leaving a 45K hole.
Worst fit

 The program is loaded into the BIGGEST hole
that can hold the entire piece of code.
 With the same holes (80K, 100K, 40K, 60K),
Program I (35K) is loaded into the 100K hole,
leaving a 65K hole.
 This scheme leaves the biggest hole in memory.
It has a better chance of being utilized later.
Best fit

 The program is loaded into the BEST hole, that
is, the SMALLEST hole that is still large enough
to accommodate the program.
 With the same holes, Program I (35K) is loaded
into the 40K hole, leaving a 5K hole.
 There is a good chance that such a small hole
can never be used again.
Analysis of the 3 schemes

 First fit – fast; it doesn't need to scan the whole
memory space. But it doesn't utilize the memory
well.
 Best fit – it needs to scan the whole memory
space in order to find the smallest hole capable
of holding the program. It creates a lot of tiny
unusable holes.
 Worst fit – same scanning cost as best fit, but big
holes are left behind.
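The three placement policies can be sketched as one hole-selection function. This is an illustration only: the function name `choose_hole` is invented here, and the hole sizes are the ones from the diagrams above.

```python
def choose_hole(holes, size, policy):
    """Pick the index of the free hole to allocate from, or None if no fit.

    holes: list of free-hole sizes in address order.
    policy: 'first', 'best' or 'worst'.
    """
    if policy == 'first':
        # First fit: the first hole large enough, scanning in address order.
        return next((i for i, h in enumerate(holes) if h >= size), None)
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not fits:
        return None
    # Best fit: the smallest adequate hole; worst fit: the biggest one.
    return min(fits)[1] if policy == 'best' else max(fits)[1]

holes = [80, 100, 40, 60]               # free hole sizes (KB) from the diagrams
print(choose_hole(holes, 35, 'first'))  # 0: the 80K hole, leaving 45K
print(choose_hole(holes, 35, 'best'))   # 2: the 40K hole, leaving 5K
print(choose_hole(holes, 35, 'worst'))  # 1: the 100K hole, leaving 65K
```

Note that first fit can stop at the first adequate hole, while best and worst fit must examine every hole, which is exactly the speed difference described above.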
As time goes by

 No matter which scheme is used, as processes
come and go, memory holes become scattered around.
 This is sometimes called external
fragmentation.
 It may happen that the total available memory is
sufficient for a new process, but since the holes are
not contiguous, this new process cannot be loaded.
Paging

 Another way to overcome the above-mentioned
problem is paging.
 The physical memory is divided into fixed-size
chunks, say 4K bytes (the size is always
2^x bytes), called page frames.
 The logical address space of a program is also
divided into chunks of the same size as frames,
called pages.
Paging

 Now, pages of a program can be loaded into frames. One
thing that is different is that pages need not be loaded
contiguously. This makes finding free memory for programs
easier.
 Of course, there is still some space in the last page of a
program that cannot (and won't) be used by others. This is
known as internal fragmentation.
 Because the code of a program is no longer contiguous,
there must be some way to translate its logical addresses into
the corresponding physical addresses. Such mapping is even more
complicated than the one shown on the previous pages.
Page frames

 Consider a system with 16-bit addresses. The
theoretical maximum memory space that is
addressable is 2^16, which is 65536 bytes. If this
system really has 64K of memory installed, and the
frame size is 2K bytes, the memory contains
32 frames:

Frame    Physical address range
0        0000 0000 0000 0000 – 0000 0111 1111 1111
1        0000 1000 0000 0000 – 0000 1111 1111 1111
2        0001 0000 0000 0000 – 0001 0111 1111 1111
…        ….
31       1111 1000 0000 0000 – 1111 1111 1111 1111
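The frame count on this slide follows directly from the numbers, which a couple of lines can confirm:

```python
ADDRESS_BITS = 16
FRAME_SIZE = 2 * 1024               # 2K-byte frames

addressable = 2 ** ADDRESS_BITS     # 65536 bytes = 64K of addressable memory
print(addressable // FRAME_SIZE)    # 32 frames
```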
Pages

 Suppose a program is 7K in size. Its code is also
divided into pages of the same size (2K):

Page     Logical address range
0        0000 0000 0000 0000 – 0000 0111 1111 1111
1        0000 1000 0000 0000 – 0000 1111 1111 1111
2        0001 0000 0000 0000 – 0001 0111 1111 1111
3        0001 1000 0000 0000 – 0001 1011 1111 1111

 Page 3 contains only 1K of code, so half of
whichever frame holds it is wasted: this is internal
fragmentation.
 The four pages can be loaded into any four free
frames (in the diagram, frames 1, 3, 4, 30 and 31
were free).
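The page count and the wasted space for the 7K program can be worked out directly; a small sketch:

```python
import math

PAGE_SIZE = 2 * 1024       # 2K pages, as in the example
program_size = 7 * 1024    # the 7K program from the slide

pages = math.ceil(program_size / PAGE_SIZE)   # pages needed (round up)
wasted = pages * PAGE_SIZE - program_size     # unused space in the last page
print(pages, wasted)                          # 4 pages, 1024 bytes wasted
```

The 1024 wasted bytes in the last page are the internal fragmentation described above.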
Pages – address mapping

 After loading, a page table records which frame
holds each page:

Page     Frame starting address
0        0010 0000 0000 0000
1        1111 0000 0000 0000
2        0001 1000 0000 0000
3        0000 1000 0000 0000

 To translate a logical address, split it into a page
number (the upper bits) and an offset (the lower
11 bits, for 2K pages). The page number indexes
the page table to obtain the starting address of the
frame; adding the offset to it gives the physical
address.
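The translation described above can be sketched in a few lines. The page-table contents are the ones from the slide; the function `translate` itself is just an illustration of the split-index-add procedure.

```python
PAGE_SIZE = 2048     # 2K pages, so the offset is the low 11 bits
OFFSET_BITS = 11

# Page table from the slide: page number -> starting address of its frame.
page_table = {
    0: 0b0010_0000_0000_0000,
    1: 0b1111_0000_0000_0000,
    2: 0b0001_1000_0000_0000,
    3: 0b0000_1000_0000_0000,
}

def translate(logical):
    page = logical >> OFFSET_BITS          # upper bits: page number
    offset = logical & (PAGE_SIZE - 1)     # lower 11 bits: offset within page
    return page_table[page] + offset       # frame starting address + offset

# Logical address 0001 0000 0000 0101 is page 2, offset 5:
print(bin(translate(0b0001_0000_0000_0101)))
```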
Paging

 The page table itself resides in memory as well,
fitted into frames, of course.
 So, 2 memory references are needed for each memory
access – one for searching the page table and the
other for the data. Hence, performance
deteriorates.
 To improve the performance, a small amount of
fast-response memory (known as associative
memory) is used to keep the portion of page-table
entries that is most likely to be referenced.
Paging
 In some cases, especially
when the program size is
large, the page table itself
can be quite large. It is not
easy to find contiguous
space for keeping such a
big page table.
 Multi-level page tables are
then used. In this way, a
page table can be split into
smaller pieces (but the total
size won't be smaller).
Paging

 The outer table no longer refers to the program
data. It refers to another page table, which in
turn refers to the actual data.
 The mapping between logical address and
physical address is more complex, and the time
required for such a memory access is longer.
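A sketch of the two-level lookup. The 16-bit example above has too few page bits to split, so this assumes a hypothetical 32-bit address divided x86-style into a 10-bit outer index, a 10-bit inner index and a 12-bit offset; the table contents are invented for illustration.

```python
OFFSET_BITS, INDEX_BITS = 12, 10    # 4K pages; 10-bit index at each level

# Hypothetical tables: outer index -> inner page table -> frame base address.
outer_table = {1: {2: 0x0040_0000}}

def translate2(va):
    outer = va >> (OFFSET_BITS + INDEX_BITS)            # top 10 bits
    inner = (va >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    offset = va & ((1 << OFFSET_BITS) - 1)
    # Two table lookups instead of one: the extra level costs extra accesses.
    return outer_table[outer][inner] + offset
```

The extra dictionary lookup here mirrors the extra memory reference the slide warns about.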
Segmentation

 Another way to manage memory allocation is called
segmentation.
 It is more logical to think of a piece of code as a collection
of segments – routines, functions, data, etc. – instead of a
collection of meaningless fixed-size pages.
 Contiguous memory space is allocated to each segment.
 Since each segment can be of a different size, external
fragmentation can occur here.
Segmentation

 Also because of the variable length of segments,
there may be a case where an erroneous (or malicious)
program tries to access a piece of data (or code)
that is out of the range of a particular segment.
 Address validation of each memory access is
therefore needed.
Segmentation

Segment no.   Starting addr.         Length
0             0100 0000 0000 0000    1000
1             0111 0101 1000 0000    750
2             0101 1111 1000 0000    1200
…             ….                     …

 A memory reference is a pair (seg. no, offset),
e.g. (2, 1076).
 Translation: look up the segment number in the
segment table; if it is not found, trap to the OS.
If the offset is not less than the segment length,
trap to the OS. Otherwise,
phy. addr = starting_addr + offset.
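The flowchart's checks can be sketched as a short function. The segment-table values are the ones from the slide; the function name and the use of `MemoryError` to stand in for "trap to OS" are illustrative choices.

```python
# Segment table from the slide: segment number -> (starting address, length).
segment_table = {
    0: (0b0100_0000_0000_0000, 1000),
    1: (0b0111_0101_1000_0000, 750),
    2: (0b0101_1111_1000_0000, 1200),
}

def translate_seg(seg, offset):
    if seg not in segment_table:
        raise MemoryError("invalid segment number: trap to OS")
    base, length = segment_table[seg]
    if not offset < length:                  # the bounds check from the slide
        raise MemoryError("offset out of range: trap to OS")
    return base + offset                     # phy. addr = starting_addr + offset

print(translate_seg(2, 1076))   # the memory reference (2, 1076) from the slide
```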


Combining paging with
segmentation
 Segmentation suffers from external
fragmentation (there are no pages and
frames).
 In order to eliminate external fragmentation,
each segment is further divided into pages,
which are then mapped to different
frames in memory.
 In this way, only internal fragmentation
exists.
Combining paging with
segmentation

 A logical address now has three parts: segment
number (s), page number (p) and displacement (d).
 Starting from the base address (b) of the segment
table, entry s gives the base address (s') of that
segment's own page table. Entry p of that page
table gives the frame (p') in memory, and d is the
displacement within that frame.
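A sketch of the combined scheme: each segment carries its own page table, and the displacement within the segment is split into a page number and an offset. All table contents here are invented for illustration.

```python
PAGE_SIZE = 2048

# Hypothetical per-segment page tables:
# segment number -> (page number -> frame base address).
seg_page_tables = {
    0: {0: 0x2000, 1: 0x8000},
}

def translate_sp(seg, disp):
    # Split the displacement within the segment into page number and offset.
    page, offset = divmod(disp, PAGE_SIZE)
    # Segment-table lookup, then page-table lookup, then add the offset.
    return seg_page_tables[seg][page] + offset
```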
Demand Paging

 Using the idea of paging, there is no need to bring in the
WHOLE program for execution.
 Only the pages that are currently needed have to exist in
main-memory frames.
 In case a page does not reside in memory, a PAGE FAULT
happens and the required page is then brought into
memory.
 The level of multiprogramming is enhanced.
 The available memory size is virtually unlimited by the amount
of physical memory.
 Programs larger than the available memory can now be
executed as well.
Down sides of demand paging

 Careful handling of page faults is a critical factor in
the success of demand paging.
 The time for handling a page fault is large compared with
the time for accessing physical memory.
 An intelligent page replacement algorithm is
required in order to achieve high performance:
FIFO, OPT, LRU, LFU, etc.
 Thrashing may occur if the number of available frames
falls below a certain threshold.
Effective time of memory
accesses
 te = (1 - p) x tm + p x tp
 te : effective time of each memory access
 tm : time for each access to main memory
 tp : time for handling each page fault
 p : probability of a page fault
 tp is much longer than tm (since it requires
bringing in new pages), usually hundreds of
thousands of times.
Effective time for memory
accesses
 Suppose tp is 20ms and tm is 50ns.
 If we want the performance penalty to be 10%, i.e.,
on average, each memory access is to be completed
in 55ns, we have :
55 > (1-p) x 50 + 20,000,000p
   = 50 - 50p + 20,000,000p
   = 50 + 19,999,950p
p < 5/19,999,950
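The bound on p above can be computed directly from the formula; a small sketch with the slide's numbers:

```python
tm = 50             # main-memory access time (ns)
tp = 20_000_000     # page-fault handling time: 20 ms expressed in ns
target = 55         # allowed effective access time for a 10% penalty (ns)

# te = (1 - p) * tm + p * tp <= target; solving for p:
p_max = (target - tm) / (tp - tm)
print(p_max)              # about 2.5e-07
print(round(1 / p_max))   # i.e. at most one fault per roughly 4 million accesses
```

The result shows how demanding demand paging is: even a tiny page-fault probability wrecks the effective access time, because tp dwarfs tm.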
Page replacement algorithm

 FIFO (First In First Out) – replace the page that was brought
in earliest. (Simple to implement.)
 OPT (Optimal) – replace the page that won't be used for the
longest time in the future. (In reality, it is hard to figure out
when a page will be needed.)
 LRU (Least Recently Used) – replace the page that has not
been used for the longest time. (A good algorithm, but it needs
hardware support to keep track of the usage history.)
 LFU (Least Frequently Used) – replace the page that has
been used least frequently.
Page replacement algorithm

 Suppose at a certain instance, the following pages of a process
are to be accessed in turn : 4,3,6,4,1,9,4,3,7,8,7,4,3,2
 Suppose we have 4 available frames for this process.

FIFO – number of page faults: 10

Ref:     4  3  6  4  1  9  4  3  7  8  7  4  3  2
Frame 1  4  4  4  4  4  9  9  9  9  8  8  8  8  8
Frame 2     3  3  3  3  3  4  4  4  4  4  4  4  2
Frame 3        6  6  6  6  6  3  3  3  3  3  3  3
Frame 4              1  1  1  1  7  7  7  7  7  7
Fault    *  *  *     *  *  *  *  *  *           *
Page replacement algorithm

 The same reference string (4,3,6,4,1,9,4,3,7,8,7,4,3,2) with
4 available frames.

OPT – number of page faults: 8

Ref:     4  3  6  4  1  9  4  3  7  8  7  4  3  2
Frame 1  4  4  4  4  4  4  4  4  4  4  4  4  4  4
Frame 2     3  3  3  3  3  3  3  3  3  3  3  3  3
Frame 3        6  6  6  9  9  9  7  7  7  7  7  2
Frame 4              1  1  1  1  1  8  8  8  8  8
Fault    *  *  *     *  *        *  *           *
Page replacement algorithm

 The same reference string (4,3,6,4,1,9,4,3,7,8,7,4,3,2) with
4 available frames.

LRU – number of page faults: 9

Ref:     4  3  6  4  1  9  4  3  7  8  7  4  3  2
Frame 1  4  4  4  4  4  4  4  4  4  4  4  4  4  4
Frame 2     3  3  3  3  9  9  9  9  8  8  8  8  2
Frame 3        6  6  6  6  6  3  3  3  3  3  3  3
Frame 4              1  1  1  1  7  7  7  7  7  7
Fault    *  *  *     *  *     *  *  *           *
Page replacement algorithm

 The same reference string (4,3,6,4,1,9,4,3,7,8,7,4,3,2) with
4 available frames.

LFU – number of page faults: 10

Ref:     4  3  6  4  1  9  4  3  7  8  7  4  3  2
Frame 1  4  4  4  4  4  4  4  4  4  4  4  4  4  4
Frame 2     3  3  3  3  9  9  3  3  3  3  3  3  3
Frame 3        6  6  6  6  6  6  7  8  7  7  7  7
Frame 4              1  1  1  1  1  1  1  1  1  2
Fault    *  *  *     *  *     *  *  *  *        *
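The FIFO, OPT and LRU traces above can be reproduced with a small simulator; this sketch omits LFU, because its fault count depends on how ties between equally frequent pages are broken.

```python
def count_faults(refs, nframes, policy):
    """Count page faults for 'FIFO', 'LRU' or 'OPT' replacement."""
    frames = []    # resident pages, kept in arrival order (FIFO order)
    recency = []   # pages ordered least- to most-recently used (for LRU)
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:                       # hit: only LRU state changes
            if policy == 'LRU':
                recency.remove(page)
                recency.append(page)
            continue
        faults += 1
        if len(frames) >= nframes:               # all frames full: evict one
            if policy == 'FIFO':
                victim = frames[0]               # the earliest arrival
            elif policy == 'LRU':
                victim = recency[0]              # the least recently used
            else:                                # OPT: farthest next use wins
                future = refs[i + 1:]
                victim = max(frames, key=lambda p: future.index(p)
                             if p in future else len(future))
            frames.remove(victim)
            if policy == 'LRU':
                recency.remove(victim)
        frames.append(page)
        if policy == 'LRU':
            recency.append(page)
    return faults

refs = [4, 3, 6, 4, 1, 9, 4, 3, 7, 8, 7, 4, 3, 2]
print(count_faults(refs, 4, 'FIFO'))  # 10, as in the FIFO table
print(count_faults(refs, 4, 'OPT'))   # 8, as in the OPT table
print(count_faults(refs, 4, 'LRU'))   # 9, as in the LRU table
```

Note how OPT needs `refs[i + 1:]`, the future of the reference string, which is exactly why it cannot be implemented in a real system.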
Thrashing
 When the number of page frames allocated to a process falls below a certain
threshold, page faults occur severely.
 When new pages are brought in, some frames must be freed in order to
accommodate them. This results in kicking out some pages that are actively
used.
 Page faults then occur even more.
 To make things worse, in this scenario, the OS discovers that CPU utilization
has dropped and hence chooses some processes in the ready queue to start their
execution, in order to make CPU utilization high.
 This, again, makes the available frames decrease further.
 Finally, pages are brought in and out, but the CPU does nothing
meaningful.
 This situation is called thrashing.
Working set model

 To detect thrashing, the working set model can be used.
 Take the page references from the former pages as an
example : 4,3,6,4,1,9,4,3,7,8,7,4,3,2
 The working set is the set of pages referenced within a
window of a specific size, say 8. When this window is applied
to the above references, the working set is {1, 3, 4, 6, 9} at
the start.
 As the process continues to execute, the working-set
window shifts, and the working set and its size change as well.
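The sliding window can be sketched in a few lines; the function name `working_set` and the time index `t` are illustrative choices, and the reference string and window size are the ones from the example.

```python
refs = [4, 3, 6, 4, 1, 9, 4, 3, 7, 8, 7, 4, 3, 2]
WINDOW = 8    # working-set window size, as in the example

def working_set(refs, t, window=WINDOW):
    """Distinct pages referenced in the last `window` references up to time t."""
    return set(refs[max(0, t - window + 1):t + 1])

print(sorted(working_set(refs, 7)))    # [1, 3, 4, 6, 9] at the start
print(sorted(working_set(refs, 13)))   # the set after the window has shifted
```

Summing the working-set sizes of all active processes at one instant and comparing the total with the number of available frames is the thrashing test described on the next slide.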
Working set model

• The working set changes as the pages of a process are being
referenced, and so does the working set size.
• If, at a particular instance, the total working-set size of all active
processes exceeds the total number of available page frames, thrashing will
happen. Hence, some processes may need to be suspended.
