Operating Systems - Chapter 1 TS2

The document provides an overview of memory management in operating systems, detailing the memory hierarchy and the role of the Memory Manager. It discusses various memory management techniques, including mono-programming, multi-programming, swapping, virtual memory, and page replacement algorithms. Additionally, it explains concepts such as bit maps, segmentation, and the importance of managing memory efficiently to optimize CPU utilization.

Uploaded by

Ahmad DIB

Chapter 1

Memory Management
1- Introduction:
Most computers have a memory hierarchy: a small amount of very fast, expensive, volatile cache memory; some number of MBs of medium-speed, medium-price, volatile main memory [RAM]; and so on. The job of the Operating System [OS] is to coordinate how these memories are used.
Figure 1.1 Storage-device hierarchy (fastest to slowest): registers, cache, main memory, solid-state disk, magnetic disk, optical disk, magnetic tapes.

The part of the OS that manages the memory hierarchy is called the Memory Manager (Image 1.1.1). Its job is to keep track of which parts of memory are in use and which are not, to allocate memory to processes when they need it and de-allocate it when they are done, and to manage swapping between RAM and HDD when the RAM is too small to hold all the processes.
2- Basic Memory Management:
Memory Management systems can be divided into 2 classes: those that move processes back and forth between RAM and HDD during execution [Swapping and Paging], and those that don't.

3- Mono-Programming Without Swapping Or Paging:


The simplest possible memory management scheme is to run just one program at a time, sharing the memory between the program and the OS. There are 3 variations on this arrangement (Image 1.3.1). The latter model is used by small MS-DOS systems; on IBM PCs, the portion in ROM is called the BIOS.

When the system is organized in this way, only one process at a time can be running. As soon as the user types a command, the OS copies the requested program from disk into memory and executes it. When the process finishes, the OS displays a prompt character and waits for a new command. When it receives the command, it loads a new program into memory, overwriting the first one.
4- Multi-Programming With Fixed Partitions:
On Time-Sharing systems, having multiple processes in memory at once
means that when one process is blocked waiting for I/O to finish, another
one can use the CPU. Thus, Multi-Programming increases the CPU
utilization. However, even on personal computers, it’s often useful to be
able to run 2 or more programs at once.
The easiest way to achieve Multi-Programming is to simply divide the
memory up into “n” partitions [Possibly not equal]. This partitioning can
be done manually when the system is starting up.
a- Fixed memory partitions with separate Input Queues [InQ] for each
partition (Image 1.4.1).

When a job arrives, it can be put into the InQ for the smallest partition
large enough to hold it. Since the partitions are fixed in this scheme,
any space in a partition not used by a job is lost.
The disadvantage of sorting the incoming jobs into separate queues
becomes apparent when the queue for a large partition is empty but
the queue for a small partition is full. Example: Partitions 1-3 (Image
1.4.1).
b- Fixed memory partitions with a single InQ for all partitions (Image
   1.4.2).

An alternative organization is to maintain a single InQ.


Whenever a partition becomes free, the job closest to the front of the
queue that fits in it could be loaded into the empty partition and run.
Since it’s undesirable to waste a large partition on a small job, a
different strategy is to search the whole InQ whenever a partition
becomes free, and pick the largest job that fits.
Multi-programming introduces 2 essential problems that must be resolved: Relocation and Protection. It's clear that different jobs will be running at different addresses (Image 1.4.2). When a program is linked, the linker must know at what address the program will begin in memory. Example (Image 1.4.2): Suppose the first instruction is a call to a procedure at absolute address 100 within the binary file produced by the linker, and the partitions are 100K in size. If this program is loaded into partition 1 (starting at 100K), that instruction will jump to absolute address 100 in main memory, inside the OS. What is needed is a call to 100K + 100; if the program is loaded into partition 2, the call must become 200K + 100, and so on. This problem is called "Relocation".
5- Swapping:
With time-sharing systems or graphically oriented personal computers, the situation is different. Sometimes there isn't enough RAM to hold all the currently active processes, so excess processes must be kept on the HDD and brought in to run dynamically.
1st Strategy: The simplest strategy, called "Swapping", consists of bringing in each process in its entirety, running it for a while, then putting it back on the HDD.
2nd Strategy: The other strategy, called "Virtual Memory" [VM], allows programs to run even when they are only partially in RAM.

[Figure: memory allocation changes as processes A, B, C, D and E come into RAM and leave it, shown in stages (a)–(g), with the OS at the bottom of memory — Variable Size Partitions.]

The main difference between fixed partitions and variable partitions is that the number, location and size of the partitions vary dynamically in the latter as processes come and go, whereas they are fixed in the former.
When swapping creates multiple holes in RAM, it’s possible to combine
them all into one big hole by moving all processes downward as far as
possible. This technique is called “Memory Compaction”.
Memory Management with Bit Maps: When memory is assigned
dynamically the OS must manage it. There are 2 ways to keep track of
memory usage:
a- Bit Maps: With a bit map, memory is divided up into allocation
   units, perhaps as small as a few words or as large as several
   kilobytes. Corresponding to each allocation unit is a bit in the bit
   map [0: Free Unit / 1: Occupied Unit].
Memory:  A B C D E …
Bit map: 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1 1 1 0 0 …
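As a minimal sketch (not part of the chapter), allocation with a bit map amounts to scanning for a run of consecutive 0 bits long enough to hold the request; the function name and first-fit policy here are illustrative assumptions:

```python
def find_free_run(bitmap, k):
    """First-fit search: index of the first run of k free (0) units, or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i      # a new run of free units begins here
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0            # an occupied unit breaks the run
    return -1

# The bit map from the example above: 1 = occupied unit, 0 = free unit.
bitmap = [1,1,1,1,1,0,0,0,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,0,0]
print(find_free_run(bitmap, 3))    # -> 5 (the hole after segment A)
```

Searching for a run of k zeros in this way is the main cost of bit maps: the scan is slow, which is one reason free lists are also used.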

b- Free List: memory is tracked as a linked list of segments, each
   entry holding a type flag [P: Process / H: Hole], the start address,
   the length, and a pointer to the next entry. Example:

   P 0 5 → H 5 3 → P 8 6 → P 14 4 → H 18 2 → P 20 6

6- Virtual Memory [VM]: (Size of VM = size of RAM, or 1.5 × size of RAM, or 2 × size of RAM)
A process is a program in execution.
With programs that are too big to fit in the available memory, the solution usually adopted is to split the program into pieces, called "Overlays". Overlay "0" starts running first; when it is done, it calls another overlay. Some overlay systems were highly complex, allowing multiple overlays in memory at once. The overlays are kept on the disk and swapped in and out of memory by the operating system, dynamically, as needed.
The actual work of swapping overlays in and out is done by the system, but the work of splitting the program into pieces had to be done by the programmer. Splitting up large programs into small modular pieces was time-consuming and boring, so a method was devised to hand the job to the system; it has come to be known as "VM".
The basic idea of VM is that the combined size of the program, data and stack may exceed the amount of physical memory RAM (Main Memory) available for it. For example, with a RAM of 4 GB capacity, the VM can be 1 × 4 GB, 1.5 × 4 GB or 2 × 4 GB.
The operating system keeps those parts of the program currently in use
in main memory and the rest on the disk.
Paging: Most VM systems use a technique called "Paging". When a program executes an instruction like "MOVE R, 1000", it copies the contents of memory address 1000 to "R". Addresses can be generated using indexing, base registers, segment registers and other ways. These program-generated addresses are called "Virtual Addresses" [VAs] and form the "Virtual Address Space" [VAS].
On computers without VM, the VA is put directly onto the main memory [RAM] bus and causes the physical memory word with the same address to be read or written (Image 1.6.1).
When VM is used, the VAs don't go directly to the memory bus. Instead, they go to the MMU [Memory Management Unit], a chip or a collection of chips, which maps the VAs to physical memory addresses.

[Figure: a 64K VAS divided into sixteen 4K virtual pages, mapped onto a 32K physical memory of eight 4K page frames; "X" marks a virtual page with no page frame.]

Virtual page (VAS)   Page frame
60K – 64K            X
56K – 60K            X
52K – 56K            X
48K – 52K            X
44K – 48K            7
40K – 44K            X
36K – 40K            5
32K – 36K            X
28K – 32K            X
24K – 28K            X
20K – 24K            3
16K – 20K            4
12K – 16K            0
08K – 12K            6
04K – 08K            1
00K – 04K            2

Page frames [Physical Address Space]: 0: 00K – 04K, 1: 04K – 08K, 2: 08K – 12K, 3: 12K – 16K, 4: 16K – 20K, 5: 20K – 24K, 6: 24K – 28K, 7: 28K – 32K.
The VAS is divided up into units called "Pages". The corresponding units in physical memory are called "Page Frames". Pages and page frames are always exactly the same size.
When the program tries to access address "0", for example using the instruction "MOVE R, 0", the VA "0" is sent to the MMU. The MMU sees that this VA falls in page "0" [0 – 4095], which according to its mapping corresponds to page frame "2" [8192 – 12287].
The set of logical addresses [VAS] of a program is broken down into pages numbered from "0" upward. Central/physical memory is divided into "page frames" in the same way. The logical address space of a program becomes concrete only once the program is placed in main memory. When the system does not use virtual memory, it has the same number of boxes [Page Frames] as pages. A program is loaded from the hard disk into main memory when the system allocates a box to hold each page.
Address Translation: A logical address [LA] is given in the form (x, y), where "x" indicates the number of the page and "y" the displacement within the page. If the pages have a size of 4 KB, each contains 4096 bytes/addresses, so the logical address "22480" becomes (5, 2000).
The Physical Address [PA] is calculated as follows: the OS begins by finding the location of page "x" in RAM [the box/page-frame], then moves "y" bytes from the first address of this box. In fact, the boxes and pages are of the same size.
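The translation above can be sketched in a few lines, assuming 4 KB pages as in the example; the page-table contents used at the end (page 5 living in box 3) are a made-up illustration, not a value from the chapter:

```python
PAGE_SIZE = 4096  # 4 KB pages, as in the example above

def split_logical(addr):
    """Break a logical address into (page number x, displacement y)."""
    return divmod(addr, PAGE_SIZE)

def physical_address(addr, page_table):
    """Map a logical address to a physical one via a page -> box table."""
    page, offset = split_logical(addr)
    frame = page_table[page]            # the box holding this page
    return frame * PAGE_SIZE + offset   # start of the box + displacement

print(split_logical(22480))                    # -> (5, 2000)
print(physical_address(22480, {5: 3}))         # -> 14288, i.e. 3*4096 + 2000
```

The `divmod` by the page size is exactly the (x, y) decomposition in the text: 22480 = 5 × 4096 + 2000.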
(Image 1.6.1) [Figure: the CPU card holds the CPU and the MMU [Memory Management Unit]; the CPU sends virtual addresses to the MMU, and the MMU sends physical addresses over the bus to the memory (RAM) and the disk controller.]
Segmentation: Unlike paged systems, memory is allocated by partitions
of variable sizes. A segment has a size that can vary, increasing or
decreasing, during execution.
Address Translation: The compiler generates addresses belonging to
segments. Each address can be written in the form [S, D], where: “S”
indicates the number of the segment, and “D” indicates the displacement
within the segment.
Example: Let LA = 9035 = (2, 843), and let the segment "Se" have size T. If D < T → PA = beginning address of "Se" + D; if D ≥ T → displacement error in this segment.
A segmented system can be converted to a paged system if T is the same for all segments.
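A minimal sketch of this check, assuming a hypothetical segment table that maps a segment number to its (base address, size T); the function name and the table values below are illustrative, not from the chapter:

```python
def translate(seg, disp, segment_table):
    """Translate a segmented address (S, D) into a physical address."""
    base, size = segment_table[seg]
    if disp >= size:
        # D >= T: the displacement falls outside this segment
        raise ValueError("displacement error in this segment")
    return base + disp          # D < T: PA = beginning address + D

# Hypothetical table: segment 2 starts at 4000 and has size T = 1000.
table = {2: (4000, 1000)}
print(translate(2, 843, table))   # -> 4843
```

The size check is what distinguishes segmentation from paging: since T varies per segment, every access must be compared against that segment's own limit.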
7- Page Replacement Algorithms:
When a page fault occurs, the OS has to choose a page to remove from
the RAM to free up space and replace it with the missing one. If the
page to be replaced has been modified while in RAM, it must be
rewritten to HDD before replacement to update the disk copy. However,
if the page hasn’t been changed, the disk copy is already up to date, so
no rewrite is needed.
a- First In First Out [FIFO]: The OS maintains a list of all pages currently
   in RAM, with the page at the head of the list being the oldest one and the
   page at the tail the most recently arrived.
   On a page fault, the page at the head is removed and the new
   page is added to the tail of the list.

Example: Suppose the following reference string [list of requested pages]:

1 2 3 7 5 2 3 4 1 6 5

Calculate the number of page faults using FIFO if:
- The number of memory cases [Boxes] is 3.
- The number of memory cases [Boxes] is 4.
Solution:
- With 3 boxes, the number of page faults is 11 ("-" marks a fault):

Refs:  1  2  3  7  5  2  3  4  1  6  5
Box1:  1  1  1  7  7  7  3  3  3  6  6
Box2:     2  2  2  5  5  5  4  4  4  5
Box3:        3  3  3  2  2  2  1  1  1
Fault: -  -  -  -  -  -  -  -  -  -  -

- With 4 boxes, the number of page faults is 8 (the second references to 2 and 3 and the final 5 are hits):

Refs:  1  2  3  7  5  2  3  4  1  6  5
Box1:  1  1  1  1  5  5  5  5  5  5  5
Box2:     2  2  2  2  2  2  4  4  4  4
Box3:        3  3  3  3  3  3  1  1  1
Box4:           7  7  7  7  7  7  6  6
Fault: -  -  -  -  -        -  -  -
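The FIFO traces above can be checked with a short simulation; this is an illustrative sketch, with a queue whose head is the oldest page, exactly as described for the algorithm:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO with `frames` memory boxes."""
    queue = deque()          # head = oldest page currently in RAM
    resident = set()         # pages currently in RAM, for O(1) hit tests
    faults = 0
    for page in refs:
        if page in resident:
            continue                         # hit: FIFO order unchanged
        faults += 1
        if len(queue) == frames:
            resident.discard(queue.popleft())  # evict the oldest page
        queue.append(page)                     # newcomer goes to the tail
        resident.add(page)
    return faults

refs = [1, 2, 3, 7, 5, 2, 3, 4, 1, 6, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # -> 11 8
```

Note that this reference string gives more faults with 3 boxes than with 4, matching the worked example; FIFO does not always behave this way (Belady's anomaly).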
b- Least Recently Used [LRU]: A good approximation to the optimal
   algorithm is based on the observation that pages that have been
   heavily used in the last few instructions will probably be heavily used
   again in the next few. Conversely, pages that haven't been used for
   ages will probably remain unused for a long time.
   The idea suggests a realizable algorithm: when a page fault occurs,
   throw out the page that hasn't been used for the longest time.
   This strategy is called "LRU".

Example: Calculate the number of page faults, using the LRU algorithm, for the following reference string:

0 1 2 3 1 3 4 0 1 0 4 0 1

- For 3 cases.
- For 4 cases.
Solution:
- With 3 cases, the number of page faults is 7 ("-" marks a fault):

Refs:  0  1  2  3  1  3  4  0  1  0  4  0  1
Box1:  0  0  0  3  3  3  3  3  1  1  1  1  1
Box2:     1  1  1  1  1  1  0  0  0  0  0  0
Box3:        2  2  2  2  4  4  4  4  4  4  4
Fault: -  -  -  -        -  -  -

- With 4 cases, the number of page faults is 6:

Refs:  0  1  2  3  1  3  4  0  1  0  4  0  1
Box1:  0  0  0  0  0  0  4  4  4  4  4  4  4
Box2:     1  1  1  1  1  1  1  1  1  1  1  1
Box3:        2  2  2  2  2  0  0  0  0  0  0
Box4:           3  3  3  3  3  3  3  3  3  3
Fault: -  -  -  -        -  -
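The LRU traces above can likewise be verified with a small simulation; this sketch keeps a list ordered by recency of use, which is simple rather than efficient:

```python
def lru_faults(refs, frames):
    """Count page faults under LRU with `frames` memory boxes."""
    recency = []             # least recently used page first
    faults = 0
    for page in refs:
        if page in recency:
            recency.remove(page)     # hit: refresh this page's recency
        else:
            faults += 1
            if len(recency) == frames:
                recency.pop(0)       # evict the least recently used page
        recency.append(page)         # most recently used goes last
    return faults

refs = [0, 1, 2, 3, 1, 3, 4, 0, 1, 0, 4, 0, 1]
print(lru_faults(refs, 3), lru_faults(refs, 4))  # -> 7 6
```

Unlike FIFO, LRU updates its ordering on every hit as well as on every fault, which is why real hardware needs special support (counters or reference bits) to approximate it cheaply.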
c- Optimal: On a page fault, evict the page whose next use lies farthest
   in the future.
Example: Calculate the number of page faults, using the optimal algorithm, for the following reference string:

0 1 2 3 1 3 4 0 1 2 4 1 0 1 3

- For 4 cases.
Solution:
- The number of page faults is 6 ("-" marks a fault):

List:  0  1  2  3  1  3  4  0  1  2  4  1  0  1  3
Time:  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
Box1:  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
Box2:     1  1  1  1  1  1  1  1  1  1  1  1  1  3
Box3:        2  2  2  2  2  2  2  2  2  2  2  2  2
Box4:           3  3  3  4  4  4  4  4  4  4  4  4
Fault: -  -  -  -        -                       -

At time 7 (fault on 4), the next use of each resident page is: T0Future = 8, T1Future = 9, T2Future = 10, T3Future = 15 [Maxi], so page 3 is evicted. At time 15 (fault on 3), no resident page is referenced again; using the recorded times T0 = 13, T1 = 14 [Maxi], T2 = 10, T4 = 11, page 1 is replaced.
Note: When a resident page has no future reference at all, the choice of victim does not affect the fault count; this example replaces the page with the highest recorded time value.
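The optimal trace above can be checked with a direct (and deliberately naive) simulation that looks ahead in the reference string; scanning the remainder on every fault is O(n²), acceptable for an example of this size:

```python
def optimal_faults(refs, frames):
    """Count page faults under the optimal (farthest-next-use) algorithm."""
    resident = []
    faults = 0
    for i, page in enumerate(refs):
        if page in resident:
            continue                     # hit: nothing to do
        faults += 1
        if len(resident) == frames:
            def next_use(p):
                # position of p's next reference; infinity if never used again
                rest = refs[i + 1:]
                return rest.index(p) if p in rest else float("inf")
            # evict the page whose next use lies farthest in the future
            victim = max(resident, key=next_use)
            resident.remove(victim)
        resident.append(page)
    return faults

refs = [0, 1, 2, 3, 1, 3, 4, 0, 1, 2, 4, 1, 0, 1, 3]
print(optimal_faults(refs, 4))  # -> 6
```

The optimal algorithm is unrealizable in practice because the OS cannot know the future reference string; it serves as a lower bound against which FIFO and LRU are measured.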
