Operating Systems - Chapter 1 TS2
Memory Management
1- Introduction:
Most computers have a memory hierarchy, with a small amount of very
fast, expensive, volatile cache memory, some number of MBs of medium-
speed, medium-price, volatile main memory [RAM], and so on. The job
of the Operating System [OS] is to coordinate how these memories are
used.
(Image 1.1.1: the memory hierarchy, from fastest to slowest: registers,
cache, main memory, solid-state disk, magnetic disk, optical disk,
magnetic tape.)
The part of the OS that manages the memory hierarchy is called the
Memory Manager (Image 1.1.1). Its job is to keep track of which parts of
memory are in use and which are not, to allocate memory to processes
when they need it and deallocate it when they are done, and to manage
swapping between RAM and HDD when the RAM is too small to hold all
the processes.
2- Basic Memory Management:
Memory Management systems can be divided into 2 classes: those that
move processes back and forth between RAM and HDD during execution
[Swapping and Paging], and those that don't.
3- Mono-Programming Without Swapping Or Paging:
The simplest scheme is to keep only the OS and one user program in
memory. When the system is organized in this way, only one process at a
time can be running. As soon as the user types a command, the OS copies
the requested program from HDD into RAM and executes it. When the
process finishes, the OS displays a prompt character and waits for a new
command. When it receives the command, it loads a new program into
memory, overwriting the first one.
4- Multi-Programming With Fixed Partitions:
On Time-Sharing systems, having multiple processes in memory at once
means that when one process is blocked waiting for I/O to finish, another
one can use the CPU. Thus, Multi-Programming increases the CPU
utilization. However, even on personal computers, it’s often useful to be
able to run 2 or more programs at once.
The easiest way to achieve Multi-Programming is simply to divide the
memory up into "n" partitions [possibly unequal]. This partitioning can
be done manually when the system starts up.
a- Fixed memory partitions with separate Input Queues [InQ] for each
partition (Image 1.4.1).
When a job arrives, it can be put into the InQ for the smallest partition
large enough to hold it (see the sketch after this list). Since the
partitions are fixed in this scheme, any space in a partition not used
by a job is lost.
The disadvantage of sorting the incoming jobs into separate queues
becomes apparent when the queue for a large partition is empty but
the queue for a small partition is full. Example: Partitions 1-3 (Image
1.4.1).
b- Fixed memory partitions with a single InQ for all partitions (Image
1.4.2).
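
To make the queueing rule in (a) concrete, here is a minimal sketch of
routing a job to the InQ of the smallest partition large enough to hold
it; the partition sizes and the helper name pick_partition are assumptions
made for this sketch, not part of the notes.

#include <stdio.h>

/* Illustrative fixed partition sizes in KB (assumed for this sketch). */
static const int partition_kb[] = { 100, 200, 400, 800 };
static const int n_partitions = sizeof(partition_kb) / sizeof(partition_kb[0]);

/* Return the index of the smallest partition that can hold a job of
 * job_kb kilobytes, or -1 if no partition is large enough. */
int pick_partition(int job_kb)
{
    int best = -1;
    for (int i = 0; i < n_partitions; i++) {
        if (partition_kb[i] >= job_kb &&
            (best == -1 || partition_kb[i] < partition_kb[best]))
            best = i;
    }
    return best;
}

int main(void)
{
    int jobs[] = { 120, 50, 700, 900 };
    for (int i = 0; i < 4; i++) {
        int p = pick_partition(jobs[i]);
        if (p >= 0)
            printf("job of %dKB -> queue of partition %d (%dKB)\n",
                   jobs[i], p, partition_kb[p]);
        else
            printf("job of %dKB fits no partition\n", jobs[i]);
    }
    return 0;
}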
5- Swapping:
(Image: memory allocation changing over time as processes A, B, C, D
and E are swapped in and out; the OS occupies the bottom of memory in
every snapshot.)
The main difference between fixed partitions and variable partitions is
that with variable partitions the number, location and size of the
partitions vary dynamically as processes come and go, whereas with fixed
partitions they are set once and do not change.
When swapping creates multiple holes in RAM, it’s possible to combine
them all into one big hole by moving all processes downward as far as
possible. This technique is called “Memory Compaction”.
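
A rough sketch of memory compaction under the assumptions of this
illustration (a small process table kept as base/length arrays, already
sorted by base address, and memmove used to slide the contents down):

#include <string.h>
#include <stdio.h>

#define RAM_SIZE 64
#define NPROC    3

static char ram[RAM_SIZE];                    /* simulated physical memory  */
static int  base[NPROC]   = { 10, 30, 50 };   /* current start of each proc */
static int  length[NPROC] = {  5, 10,  8 };   /* size of each proc          */

/* Slide every process down as far as possible, in order of increasing
 * base address, so all free space ends up in one hole at the top. */
void compact(void)
{
    int next_free = 0;
    for (int i = 0; i < NPROC; i++) {         /* assumes base[] is sorted   */
        if (base[i] != next_free) {
            memmove(&ram[next_free], &ram[base[i]], (size_t)length[i]);
            base[i] = next_free;
        }
        next_free += length[i];
    }
    printf("single hole: %d..%d\n", next_free, RAM_SIZE - 1);
}

int main(void)
{
    compact();
    for (int i = 0; i < NPROC; i++)
        printf("process %d now at %d..%d\n", i, base[i], base[i] + length[i] - 1);
    return 0;
}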
Memory Management with Bit Maps and Free Lists: When memory is assigned
dynamically, the OS must manage it. There are 2 ways to keep track of
memory usage:
a- Bit Maps: With a bit map, memory is divided up into allocation
units, perhaps as small as a few words or as large as several
kilobytes. Corresponding to each allocation unit is a bit in the bit map
[0: free unit / 1: occupied unit] (a minimal sketch appears after this list).
Example: a part of memory holding segments A-E separated by holes, and
the corresponding bit map [the 2nd row]:
A B C D E …
1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 …
b- Free List: The OS maintains a linked list of allocated and free memory
segments. Each entry records whether it describes a process [P] or a
hole [H], the address at which it starts, its length, and a pointer to
the next entry. For the memory above, the list is:
P 0 5 -> H 5 3 -> P 8 6 -> P 14 4 -> H 18 2 -> P 20 6 -> …
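
A minimal sketch of both bookkeeping schemes, assuming one bit per
allocation unit and the invented names find_free_run and seg_entry;
nothing here comes from the original notes beyond the 0/1 and P/H
conventions above.

#include <stdio.h>

#define NUNITS 32                       /* allocation units being tracked   */

/* ---- a) Bit map: one bit per allocation unit, 0 = free, 1 = occupied --- */
static unsigned char bitmap[(NUNITS + 7) / 8];

int unit_used(int u) { return (bitmap[u / 8] >> (u % 8)) & 1; }

void set_unit(int u, int v)
{
    if (v) bitmap[u / 8] |=  (unsigned char)(1u << (u % 8));
    else   bitmap[u / 8] &= (unsigned char)~(1u << (u % 8));
}

/* Find the first run of n consecutive free units; return its start or -1. */
int find_free_run(int n)
{
    int run = 0;
    for (int u = 0; u < NUNITS; u++) {
        run = unit_used(u) ? 0 : run + 1;
        if (run == n)
            return u - n + 1;
    }
    return -1;
}

/* ---- b) Free list: one node per segment (process or hole) -------------- */
struct seg_entry {
    char flag;                          /* 'P' = process, 'H' = hole        */
    int  start, length;
    struct seg_entry *next;
};

int main(void)
{
    for (int u = 0; u < 5; u++) set_unit(u, 1);   /* units 0..4 occupied    */
    printf("first run of 3 free units starts at %d\n", find_free_run(3));
    return 0;
}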
6- Virtual Memory [Paging]:
(Image 1.6.1: the relation between virtual addresses and physical memory
addresses, given by the page table. The virtual address space [VAS,
0K-64K] is divided into sixteen 4K virtual pages, and the physical
address space [0K-32K] into eight 4K page frames. In the figure, virtual
pages 0-5 map to page frames 2, 1, 6, 0, 4 and 3 respectively, page 9
maps to frame 5, page 11 maps to frame 7, and the remaining pages are
marked X [not present in RAM]. The MMU sends physical addresses to the
memory over the bus.)
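
A minimal sketch of the translation that Image 1.6.1 illustrates,
assuming 4K pages, a 16-entry page table, and -1 standing for an
unmapped page [X]; the table contents follow the mapping recovered from
the figure, while the function name translate and the sample address are
inventions of this sketch.

#include <stdio.h>

#define PAGE_SIZE 4096                 /* 4K pages and page frames          */
#define NPAGES    16                   /* 16 x 4K = 64K of virtual space    */

/* page_table[v] = page frame holding virtual page v, or -1 if the page
 * is not present in RAM (marked X in Image 1.6.1). */
static int page_table[NPAGES] = {
    2, 1, 6, 0, 4, 3, -1, -1, -1, 5, -1, 7, -1, -1, -1, -1
};

/* Translate a virtual address to a physical one; -1 signals a page fault. */
long translate(unsigned long vaddr)
{
    unsigned long page   = vaddr / PAGE_SIZE;
    unsigned long offset = vaddr % PAGE_SIZE;
    if (page >= NPAGES || page_table[page] < 0)
        return -1;                     /* page fault: the OS must act       */
    return (long)page_table[page] * PAGE_SIZE + (long)offset;
}

int main(void)
{
    unsigned long v = 20500;           /* virtual page 5, offset 20         */
    long p = translate(v);
    if (p < 0) printf("page fault at virtual address %lu\n", v);
    else       printf("virtual %lu -> physical %ld\n", v, p);
    return 0;
}

With this mapping, virtual address 20500 (page 5, offset 20) falls in
page frame 3, giving physical address 3 x 4096 + 20 = 12308.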
Segmentation: Unlike paged systems, memory is allocated in partitions
of variable size. A segment's size can vary, growing or shrinking,
during execution.
Address Translation: The compiler generates addresses belonging to
segments. Each address can be written in the form [S, D], where "S"
indicates the number of the segment, and "D" indicates the displacement
within the segment.
Example: Let LA = 9035, i.e. [S, D] = [2, 843] (since 9035 = 2 x 4096 + 843
when the maximum segment size is 4096 bytes), and let segment "S" have
size T. If D < T, then PA = beginning address of the segment + D; if
D >= T, there is a displacement error for this segment.
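
A minimal sketch of that check, using a hypothetical segment table whose
base addresses and sizes are invented for illustration:

#include <stdio.h>

struct segment { long base; long size; };       /* beginning address, T     */

/* Illustrative segment table: three segments with made-up bases/sizes. */
static struct segment seg_table[] = {
    { 0,    1000 },   /* segment 0 */
    { 1000, 2000 },   /* segment 1 */
    { 3000, 1500 },   /* segment 2 */
};

/* Translate [S, D] to a physical address; return -1 on a displacement error. */
long translate(int s, long d)
{
    if (s < 0 || s >= 3 || d < 0 || d >= seg_table[s].size)
        return -1;                              /* D >= T: error            */
    return seg_table[s].base + d;               /* PA = beginning + D       */
}

int main(void)
{
    printf("[2, 843]  -> %ld\n", translate(2, 843));   /* 3000 + 843 = 3843 */
    printf("[2, 1600] -> %ld\n", translate(2, 1600));  /* error: -1         */
    return 0;
}

Here translate(2, 843) returns 3000 + 843 = 3843 only because this
made-up table places segment 2 at base address 3000; with D = 1600 >=
T = 1500 the call reports a displacement error.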
A segmented system reduces to a paged system if T is the same for all
segments.
7- Page Replacement Algorithms:
When a page fault occurs, the OS has to choose a page to remove from
the RAM to free up space and replace it with the missing one. If the
page to be replaced has been modified while in RAM, it must be
rewritten to HDD before replacement to update the disk copy. However,
if the page hasn’t been changed, the disk copy is already up to date, so
no rewrite is needed.
a- First In First Out [FIFO]: The OS maintains a list of all pages currently
in RAM, with the page at the head of the list being the oldest one and the
page at the tail the most recently arrived.
On a page fault, the page at the head is removed and the new
page is added to the tail of the list (a simulation sketch covering
FIFO, LRU and the optimal algorithm appears at the end of this section).
Example (4 page frames; each column shows the pages in RAM after one
more reference):
Frame 1: 1 1 1 1 5 5 5 5 5 5 5
Frame 2: - 2 2 2 2 2 2 4 4 4 4
Frame 3: - - 3 3 3 3 3 3 1 1 1
Frame 4: - - - 7 7 7 7 7 7 6 6
- The number of page faults is 8 (at references 1-5 and 8-10).
b- Least Recently Used [LRU]: A good approximation to the optimal
algorithm is based on the observation that pages that have been
heavily used in the last few instructions will probably be heavily used
again in the next few. Conversely, pages that haven't been used for
ages will probably remain unused for a long time.
This idea suggests a realizable algorithm: when a page fault occurs,
throw out the page that hasn't been used for the longest time.
This strategy is called "LRU".
Example: Calculate the number of page faults, using the LRU algorithm,
for the following reference string:
0 1 2 3 1 3 4 0 1 0 4 0 1
- For 3 page frames.
- For 4 page frames.
Solution:
- With 3 page frames, the number of page faults is 7 (at references 1-4
and 7-9; an "n" marks a page referenced without a fault):
Frame 1: 0 0 0 3 3 3n 3 3 1 1 1 1 1n
Frame 2: - 1 1 1 1n 1 1 0 0 0n 0 0n 0
Frame 3: - - 2 2 2 2 4 4 4 4 4n 4 4
- With 4 page frames, the number of page faults is 6 (at references 1-4,
7 and 8):
Frame 1: 0 0 0 0 0 0 4 4 4 4 4n 4 4
Frame 2: - 1 1 1 1n 1 1 1 1n 1 1 1 1n
Frame 3: - - 2 2 2 2 2 0 0 0n 0 0n 0
Frame 4: - - - 3 3 3n 3 3 3 3 3 3 3
c- Optimal: On a page fault, the page to be removed is the one whose
next reference lies farthest in the future.
Example: Calculate the number of page faults, using the optimal
algorithm, for the following reference string:
0 1 2 3 1 3 4 0 1 2 4 1 0 1 3
- For 4 page frames.
Solution:
- The number of page faults is 6 (at times 1-4, 7 and 15):
Reference: 0 1 2 3 1 3 4 0 1 2 4 1 0 1 3
Time:      1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Frame 1:   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Frame 2:   - 1 1 1 1 1 1 1 1 1 1 1 1 1 3
Frame 3:   - - 2 2 2 2 2 2 2 2 2 2 2 2 2
Frame 4:   - - - 3 3 3 4 4 4 4 4 4 4 4 4
At time 7, the next references of the resident pages are T0Future = 8,
T1Future = 9, T2Future = 10 and T3Future = 15 (the maximum), so page 3
is replaced.
At time 15, none of the resident pages is referenced again; their most
recent references are T0 = 13, T1 = 14 (the maximum), T2 = 10 and
T4 = 11, so page 1 is replaced.
Note: when none of the resident pages will ever be referenced again, the
times of their most recent references are used instead, and the page
with the largest such value is the one replaced.
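
To tie the three algorithms together, here is a minimal simulator sketch
that counts page faults for FIFO, LRU and the optimal algorithm over a
reference string; the function names and array layout are inventions of
this sketch. Run on the reference string of the LRU example with 3 page
frames, the LRU count it reports is the 7 faults found above.

#include <stdio.h>

#define MAXF 8                                 /* max page frames           */

/* Count FIFO page faults: evict the page that has been resident longest. */
int fifo_faults(const int *refs, int n, int frames)
{
    int mem[MAXF], loaded[MAXF], used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int i = 0; i < used; i++)
            if (mem[i] == refs[t]) hit = 1;
        if (hit) continue;
        faults++;
        int victim = 0;
        if (used < frames) victim = used++;
        else
            for (int i = 1; i < frames; i++)
                if (loaded[i] < loaded[victim]) victim = i;
        mem[victim] = refs[t];
        loaded[victim] = t;
    }
    return faults;
}

/* Count LRU page faults: evict the page unused for the longest time. */
int lru_faults(const int *refs, int n, int frames)
{
    int mem[MAXF], last[MAXF], used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < used; i++)
            if (mem[i] == refs[t]) hit = i;
        if (hit >= 0) { last[hit] = t; continue; }
        faults++;
        int victim = 0;
        if (used < frames) victim = used++;
        else
            for (int i = 1; i < frames; i++)
                if (last[i] < last[victim]) victim = i;
        mem[victim] = refs[t];
        last[victim] = t;
    }
    return faults;
}

/* Count optimal page faults: evict the page whose next use is farthest
 * away (pages never used again are treated as infinitely far). */
int opt_faults(const int *refs, int n, int frames)
{
    int mem[MAXF], used = 0, faults = 0;
    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int i = 0; i < used; i++)
            if (mem[i] == refs[t]) hit = 1;
        if (hit) continue;
        faults++;
        if (used < frames) { mem[used++] = refs[t]; continue; }
        int victim = 0, victim_next = -1;
        for (int i = 0; i < frames; i++) {
            int next = n + 1;                  /* "never used again"        */
            for (int k = t + 1; k < n; k++)
                if (refs[k] == mem[i]) { next = k; break; }
            if (next > victim_next) { victim = i; victim_next = next; }
        }
        mem[victim] = refs[t];
    }
    return faults;
}

int main(void)
{
    int refs[] = { 0, 1, 2, 3, 1, 3, 4, 0, 1, 0, 4, 0, 1 };
    int n = sizeof(refs) / sizeof(refs[0]);
    printf("FIFO: %d, LRU: %d, OPT: %d faults (3 frames)\n",
           fifo_faults(refs, n, 3), lru_faults(refs, n, 3), opt_faults(refs, n, 3));
    return 0;
}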