CST2555 2022/23
Operating Systems and
Computer Networks
Operating System Concepts – 10th Edition Silberschatz, Galvin and Gagne ©2018
What we will learn today
The instructions that load the base and limit registers are privileged
Address Binding
Programs on disk, ready to be brought into memory to execute, form an
input queue
• Without support, must be loaded into address 0000
Inconvenient to have the first user process's physical address always at
0000
• How can it not be?
Addresses represented in different ways at different stages of a
program’s life
• Source code addresses usually symbolic
• Compiled code addresses bind to relocatable addresses
i.e., “14 bytes from beginning of this module”
• Linker or loader will bind relocatable addresses to absolute
addresses
i.e., 74014
• Each binding maps one address space to another
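To make the binding stages concrete, here is a minimal C sketch (not from the slides): it prints the run-time address of a global variable, and the comments note where each binding happens.

/* Illustrative only: a tiny C program whose global variable passes
 * through the binding stages described above. */
#include <stdio.h>

int counter = 0;   /* "counter" is a symbolic address in the source code */

int main(void)
{
    /* The compiler binds "counter" to a relocatable address such as
     * "offset 14 within the data section of this object module".
     * The linker/loader then binds that to an absolute (virtual)
     * address, which is what we print here at run time. */
    printf("counter lives at %p\n", (void *)&counter);
    return 0;
}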
Binding of Instructions and Data to Memory
Contiguous Allocation
What do we mean by contiguous allocation?
Consecutive blocks of memory allocated to a user process are called contiguous
memory. For example, if a user process needs x bytes of contiguous memory, then all
x bytes will reside in a single region of memory defined by one range of memory
addresses, e.g., 0x0000 to 0x00FF for a 256-byte process.
With contiguous allocation, a base and a limit register are sufficient to describe the address
space of a process
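As a rough illustration of the base/limit idea, here is a minimal C sketch; the struct name, register values and the translate() helper are hypothetical, but the check-then-add logic mirrors what relocation hardware does.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-process relocation state: with contiguous
 * allocation, these two registers fully describe the address space. */
struct mmu_regs {
    uint32_t base;   /* smallest physical address of the partition */
    uint32_t limit;  /* size of the partition in bytes             */
};

/* Translate a logical address the way relocation hardware would:
 * reject anything >= limit, otherwise add the base register. */
static bool translate(const struct mmu_regs *r, uint32_t logical,
                      uint32_t *physical)
{
    if (logical >= r->limit)
        return false;            /* trap: addressing error */
    *physical = r->base + logical;
    return true;
}

int main(void)
{
    struct mmu_regs r = { .base = 0x40000, .limit = 0x10000 };
    uint32_t phys;

    if (translate(&r, 0x0FFF, &phys))
        printf("logical 0x0FFF -> physical 0x%X\n", (unsigned)phys);
    if (!translate(&r, 0x20000, &phys))
        printf("logical 0x20000 -> trap (outside limit)\n");
    return 0;
}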
In most schemes for memory management, we can assume that the operating
system occupies some fixed portion of main memory and that the rest of main
memory is available for use by multiple user processes.
The simplest scheme for managing this available memory is to partition it into
regions with fixed boundaries.
Fixed Partitioning or Dynamic Partitioning can be used
Fixed Partitioning
Equal-size partitions
• any process whose size is less than or equal to the partition size can be loaded into an
available partition
• if all partitions are full, the operating system can swap a process out of a partition
• Difficulty is that a program may not fit in a partition. The programmer must design
the program with overlays
Fixed Partitioning
Main memory use is inefficient: any program, no matter how small, occupies an entire partition.
This phenomenon, in which there is wasted space internal to a partition because the
block of data loaded is smaller than the partition, is referred to as internal fragmentation.
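A small, hypothetical worked example of internal fragmentation; the 8 MB partition size and the process sizes are assumed purely for illustration.

#include <stdio.h>

int main(void)
{
    /* Assumed equal-size partitions of 8 MB each (hypothetical numbers). */
    const unsigned partition_mb = 8;
    const unsigned proc_mb[] = { 2, 7, 5 };   /* process sizes in MB */

    for (int i = 0; i < 3; i++) {
        unsigned wasted = partition_mb - proc_mb[i];  /* internal fragmentation */
        printf("a %u MB process wastes %u MB inside its %u MB partition\n",
               proc_mb[i], wasted, partition_mb);
    }
    return 0;
}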
Placement Algorithm with Partitions
Equal-size partitions
• As long as there is any available partition, a process can be loaded into that partition.
Because all partitions are of equal size, it does not matter which partition is used. If all
partitions are occupied with processes that are not ready to run, then one of these
processes must be swapped out to make room for a new process. Which one to swap out is
a scheduling decision.
Unequal-size partitions
• can assign each process to the smallest partition within which it will fit
• a scheduling queue is needed for each partition, to hold swapped-out processes destined
for that partition
• processes are assigned in such a way as to minimize wasted memory within a partition
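A minimal sketch of the smallest-partition-that-fits rule for unequal partitions; the partition sizes and the smallest_fit() helper are hypothetical (in a real system the process would then join that partition's queue if the partition is occupied).

#include <stdio.h>

/* Pick the smallest partition that can hold the process (returns an
 * index, or -1 if none is large enough).  Sizes are hypothetical. */
static int smallest_fit(const unsigned *parts, int n, unsigned need)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (parts[i] >= need && (best < 0 || parts[i] < parts[best]))
            best = i;
    return best;
}

int main(void)
{
    unsigned parts[] = { 2, 4, 6, 8, 12, 16 };   /* partition sizes in MB */
    int idx = smallest_fit(parts, 6, 5);         /* a 5 MB process        */
    if (idx >= 0)
        printf("5 MB process -> partition %d (%u MB)\n", idx, parts[idx]);
    return 0;
}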
Fixed Partitioning
The advantage of this approach is that processes are always assigned in such a way as to
minimize wasted memory within a partition (internal fragmentation).
The approach is not optimal from the point of view of the whole system.
A preferable approach would be to employ a single queue for all processes: when it is time
to load a process into main memory, the smallest available partition that will hold the
process is selected. If all partitions are occupied, then a swapping decision must be made.
Fixed Partitioning is not used today. One example of a successful
operating system that did use this technique was an early IBM
mainframe operating system, OS/MFT (Multiprogramming with a Fixed
Number of Tasks).
Dynamic Partitioning
An important operating system that used this technique was IBM’s mainframe
operating system, OS/MVT (Multiprogramming with a Variable Number of Tasks).
Partitions are of variable length and number
Process is allocated exactly as much memory as required and no more.
Eventually get holes in the memory. This is called external fragmentation
indicating that the memory that is external to all partitions becomes
increasingly fragmented.
Placement algorithms are used to decide where to load each process (a sketch of first-fit and next-fit follows the list below):
First-fit algorithm
• Fastest
• First-fit begins to scan memory from the beginning and chooses the first available
block that is large enough.
• May leave many processes loaded in the front end of memory that must be searched over
when trying to find a free block
Dynamic Partitioning Placement Algorithm
Next-fit
• Next-fit begins to scan memory from the location of the last
placement and chooses the next available block that is large enough.
• More often allocates a block of memory at the end of memory, where
the largest block is found
• The largest block of memory is broken up into smaller blocks
• Compaction is required to obtain a large block at the end of memory
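The scanning policies of first-fit and next-fit can be sketched as follows; the free-block sizes and helper names are hypothetical, and the sketch only shows block selection, not the splitting and bookkeeping a real allocator would do.

#include <stdio.h>

#define NBLOCKS 5

/* Hypothetical free list: sizes (in KB) of the free blocks left after
 * some allocations and releases under dynamic partitioning. */
static unsigned free_block[NBLOCKS] = { 100, 500, 200, 300, 600 };

/* First-fit: scan from the start, take the first block large enough. */
static int first_fit(unsigned need)
{
    for (int i = 0; i < NBLOCKS; i++)
        if (free_block[i] >= need)
            return i;
    return -1;
}

/* Next-fit: like first-fit, but resume the scan where the previous
 * placement stopped, wrapping around once. */
static int next_fit(unsigned need)
{
    static int last = 0;               /* position of the last placement */
    for (int k = 0; k < NBLOCKS; k++) {
        int i = (last + k) % NBLOCKS;
        if (free_block[i] >= need) {
            last = i;
            return i;
        }
    }
    return -1;
}

int main(void)
{
    printf("first-fit(212 KB) -> block %d\n", first_fit(212));
    printf("next-fit(212 KB)  -> block %d\n", next_fit(212));
    printf("next-fit(417 KB)  -> block %d\n", next_fit(417));
    return 0;
}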
Non-contiguous Allocation (Paging)
Contiguous allocation seems natural, but we have the fragmentation problem.
If we can split the process into several non-contiguous chunks, we would not have these problems:
(Figure: page table mapping each Page # to a Frame #)
On a 32-bit machine with 4 KB pages, the logical address is divided into a 20-bit
page number and a 12-bit page offset. Since the page table is paged, the page
number is further divided into:
• a 10-bit page number (p1)
• a 10-bit page offset (p2)
where p1 is an index into the outer page table, and p2 is the displacement within
the page of the inner page table
Known as forward-mapped page table
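A minimal C sketch of how a 32-bit logical address might be split into p1, p2 and d for the two-level scheme above; the example address is arbitrary.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t logical = 0x12345678;

    uint32_t d  =  logical        & 0xFFF;   /* low 12 bits: offset   */
    uint32_t p2 = (logical >> 12) & 0x3FF;   /* next 10 bits          */
    uint32_t p1 = (logical >> 22) & 0x3FF;   /* top 10 bits           */

    /* p1 indexes the outer page table, p2 indexes the inner page
     * table that the outer entry points to, and d is the offset
     * within the final frame. */
    printf("p1=0x%X p2=0x%X d=0x%X\n",
           (unsigned)p1, (unsigned)p2, (unsigned)d);
    return 0;
}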
Address-Translation Scheme
64-bit Logical Address Space
Even a two-level paging scheme is not sufficient
• If page size is 4 KB (2^12)
• Then the page table has 2^52 entries (2^64 / 2^12)
• With a two-level scheme, the inner page tables could be 2^10 4-byte entries
• The address would then look like: outer page p1 (42 bits) | inner page p2 (10 bits) | offset d (12 bits)
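A small arithmetic sketch of why two levels are not enough for a 64-bit address space with 4 KB pages, assuming the 2^10-entry inner tables from the slide above.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* 64-bit address space, 4 KB pages (2^12). */
    uint64_t pages       = 1ULL << (64 - 12);   /* 2^52 logical pages      */
    uint64_t inner_span  = 1ULL << 10;          /* 2^10 entries per inner table */
    uint64_t outer_count = pages / inner_span;  /* 2^42 outer-table entries */

    printf("logical pages            = 2^52 = %llu\n",
           (unsigned long long)pages);
    printf("outer page table entries = 2^42 = %llu\n",
           (unsigned long long)outer_count);
    /* Even the outer table alone needs 2^42 * 4 bytes = 16 TB, which is
     * why two levels are not sufficient for a 64-bit address space. */
    return 0;
}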
Hashed Page Tables
The CPU generates a logical address for the page it needs. This logical address needs to
be mapped to the physical address. The logical address has two parts: a page number
(here P3) and an offset.
• The page number from the logical address is passed to the hash function.
• The hash function produces a hash value corresponding to the page number.
• This hash value selects an entry in the hash table.
• Each entry in the hash table holds a linked list. The page number is compared with the first field of the first
element; if it matches, the second field (the frame number) is used, otherwise the next element is examined.
• In this example, the logical address includes page number P3, which does not match the first element of the linked
list, as it holds page number P1. So we move ahead and check the next element; this element has the page
number entry P3, so we take the frame entry of that element, which is fr5. We append the
offset provided in the logical address to this frame number to reach the page's physical address. So, this is how the
hashed page table works to map the logical address to the physical address.
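A minimal sketch of a hashed page table with chaining, assuming 4 KB pages and a toy modulo hash; the entry layout and helper names are hypothetical, but the walk-the-chain lookup follows the steps above (the colliding page 19 is inserted only to force a chain traversal).

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define TABLE_SIZE 16          /* hypothetical number of hash buckets */
#define PAGE_SHIFT 12          /* assume 4 KB pages                   */

/* One element of a bucket's chain: (virtual page number, frame, next). */
struct hpt_entry {
    uint32_t page;
    uint32_t frame;
    struct hpt_entry *next;
};

static struct hpt_entry *bucket[TABLE_SIZE];

static uint32_t hash_page(uint32_t page) { return page % TABLE_SIZE; }

/* Insert a page -> frame mapping at the head of its bucket's chain. */
static void hpt_insert(uint32_t page, uint32_t frame)
{
    struct hpt_entry *e = malloc(sizeof *e);
    e->page = page;
    e->frame = frame;
    e->next = bucket[hash_page(page)];
    bucket[hash_page(page)] = e;
}

/* Walk the chain comparing page numbers, as in the steps above. */
static int hpt_lookup(uint32_t page, uint32_t *frame)
{
    for (struct hpt_entry *e = bucket[hash_page(page)]; e; e = e->next)
        if (e->page == page) { *frame = e->frame; return 1; }
    return 0;                  /* not resident: page fault in a real system */
}

int main(void)
{
    hpt_insert(3, 5);          /* P3 -> fr5, as in the example above         */
    hpt_insert(19, 7);         /* collides with P3, so the chain has 2 nodes */

    uint32_t logical = (3u << PAGE_SHIFT) | 0x0AB;   /* page P3, offset 0xAB */
    uint32_t frame;
    if (hpt_lookup(logical >> PAGE_SHIFT, &frame)) {
        uint32_t physical = (frame << PAGE_SHIFT) | (logical & 0xFFF);
        printf("physical address = 0x%X\n", (unsigned)physical);
    }
    return 0;
}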
Inverted Page Table
The concept of normal paging says that every process maintains its own page table, which includes entries
for all the pages belonging to the process. A large process may have a page table with millions of entries.
Such a page table consumes a large amount of memory. Consider six processes in execution: each of
them will have at least some of its pages in main memory, which forces their page tables
also to be in main memory, consuming a lot of space. This is the drawback of the paging concept.
Rather than each process having a page table and keeping track of all possible logical pages, track all physical
frames
The concept of an inverted page table is to keep one page-table entry for every frame of main memory.
So, the number of page table entries in the inverted page table reduces to the number of frames in physical
memory. A single page table represents the paging information of all the processes.
Decreases memory needed to store each page table, but increases time needed to search the table when a
page reference occurs
The overhead of storing an individual page table for every process gets eliminated through the inverted page
table. Only a fixed portion of memory is required to store the paging information of all the processes together.
This technique is called inverted paging, as the indexing is done with respect to the frame number instead of
the logical page number.
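A minimal sketch of an inverted page table, assuming 4 KB pages and a small hypothetical physical memory; the linear search shows the extra lookup cost mentioned above.

#include <stdio.h>
#include <stdint.h>

#define NFRAMES    8           /* hypothetical physical memory: 8 frames */
#define PAGE_SHIFT 12          /* 4 KB pages assumed                     */

/* One entry per physical frame; the entry's index IS the frame number. */
struct ipt_entry {
    int      pid;              /* -1 means the frame is free             */
    uint32_t page;             /* logical page stored in this frame      */
};

static struct ipt_entry ipt[NFRAMES];

/* Search the whole table for (pid, page); the matching index is the
 * frame number.  This linear search is the extra lookup cost noted
 * above; real systems pair the table with a hash table to speed it up. */
static int ipt_lookup(int pid, uint32_t page)
{
    for (int f = 0; f < NFRAMES; f++)
        if (ipt[f].pid == pid && ipt[f].page == page)
            return f;
    return -1;                 /* not resident: page fault */
}

int main(void)
{
    for (int f = 0; f < NFRAMES; f++)
        ipt[f].pid = -1;

    ipt[5].pid = 42;           /* process 42, logical page 3 -> frame 5 */
    ipt[5].page = 3;

    uint32_t logical = (3u << PAGE_SHIFT) | 0x1F4;
    int frame = ipt_lookup(42, logical >> PAGE_SHIFT);
    if (frame >= 0)
        printf("physical address = 0x%X\n",
               (unsigned)(((uint32_t)frame << PAGE_SHIFT) | (logical & 0xFFF)));
    return 0;
}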
Inverted Page Table Architecture
The CPU generates the logical
address for the page it needs to
access.
INDEPENDENT STUDY