OS - 4th Unit (1) Operating System for Masters in Computer Applications
Memory Management:
In a uni-programming system, main memory is divided into two parts: one part for the operating
system (resident monitor, kernel) and one part for the user program currently being executed.
In a multiprogramming system, the “user” part of memory must be further subdivided to
accommodate multiple processes. The task of subdivision is carried out dynamically by the
operating system and is known as memory management.
Binding of Instructions and Data to Memory:
Address binding of instructions and data to memory addresses can happen at three different stages.
1. Compile time: This is the time at which the program or source code is compiled. If the memory location is known a priori at compile time, absolute code is generated.
2. Load time: This is the time at which all related program files are linked and loaded into main memory. Relocatable code must be generated if the memory location is not known at compile time.
3. Execution time: This is the time at which the program is executed in main memory by the processor. Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This requires hardware support for address maps (e.g., base and limit registers).
Physical and Virtual Address Space:
⇒ An address generated by the CPU is commonly referred to as a logical address or a virtual
address whereas an address seen by the main memory unit is commonly referred to as a physical
address.
⇒ The set of all logical addresses generated by a program is a logical-address space whereas the
set of all physical addresses corresponding to these logical addresses is a physical address space.
⇒ Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
⇒ The Memory Management Unit (MMU) is a hardware device that maps virtual addresses to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time the address is sent to memory, as sketched below.
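A minimal C sketch of this dynamic relocation; the relocation-register value and the logical address used here are purely illustrative:

    #include <stdio.h>

    /* Illustrative relocation register value (assumption for this sketch). */
    static const unsigned int relocation_register = 14000;

    /* The MMU adds the relocation register to every logical address. */
    unsigned int mmu_translate(unsigned int logical_address) {
        return relocation_register + logical_address;
    }

    int main(void) {
        unsigned int logical = 346;
        printf("logical %u -> physical %u\n", logical, mmu_translate(logical));
        /* prints: logical 346 -> physical 14346 */
        return 0;
    }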
Memory Allocation Strategies – Fixed and Variable Partitions:
MEMORY ALLOCATION:
The main memory must accommodate both the operating system and the various user processes.
We need to allocate different parts of the main memory in the most efficient way possible.
The main memory is usually divided into two partitions: one for the resident operating system,
and one for the user processes. We may place the operating system in either low memory or high
memory. The major factor affecting this decision is the location of the interrupt vector. Since the
interrupt vector is often in low memory, programmers usually place the operating system in low
memory as well.
There are two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Non-contiguous memory allocation
1. Contiguous Memory Allocation: Here, all the processes are stored in contiguous memory locations. To load multiple processes into memory, the operating system must divide memory into multiple partitions for those processes.
Hardware Support: The relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address of a partition, and the limit register contains the range of that partition. Each logical address must be less than the value in the limit register.
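A minimal C sketch of this check, assuming illustrative register values; trap() merely stands in for the hardware trap that would report an addressing error to the operating system:

    #include <stdio.h>
    #include <stdlib.h>

    static const unsigned int limit_register = 4096;       /* size of the partition (illustrative) */
    static const unsigned int relocation_register = 14000; /* start of the partition (illustrative) */

    /* Stand-in for the hardware trap: addressing error, abort the process. */
    static void trap(const char *msg) {
        fprintf(stderr, "trap: %s\n", msg);
        exit(EXIT_FAILURE);
    }

    /* Every logical address is checked against the limit register,
       then relocated by the relocation (base) register. */
    unsigned int translate(unsigned int logical_address) {
        if (logical_address >= limit_register)
            trap("logical address out of range");
        return relocation_register + logical_address;
    }

    int main(void) {
        printf("physical = %u\n", translate(100));   /* ok: 14100 */
        translate(5000);                             /* traps: beyond the limit */
        return 0;
    }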
Virtual Memory:
Virtual memory is a technique that allows the execution of processes that may not be completely
in memory. Only part of the program needs to be in memory for execution. This means that the logical address space can be much larger than the physical address space. Virtual memory allows processes
to easily share files and address spaces, and it provides an efficient mechanism for process
creation.
Virtual memory is the separation of user logical memory from physical memory. This separation
allows an extremely large virtual memory to be provided for programmers when only a smaller
physical memory is available. Virtual memory makes the task of programming much easier,
because the programmer no longer needs to worry about the amount of physical memory
available.
Demand Paging:
A demand-paging system is similar to a paging system with swapping. Generally, processes reside on secondary memory (which is usually a disk). When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, the pager swaps in only the required pages. This can be done by a lazy swapper. A lazy swapper never swaps a page into memory unless that page will be needed. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
Page Transfer Method: When a process is to be
swapped in, the pager guesses which pages will be used before the process is swapped out again.
Instead of swapping in a whole process, the pager brings only those necessary pages into
memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the
swap time and the amount of physical memory needed.
Page Table: The valid-invalid bit scheme of Page table can be used for indicating which pages
are currently in memory. When this bit is set to "valid", this value indicates that the associated
page is both legal and in memory. If the bit is set to "invalid", this value indicates that the page
either is not valid or is valid but is currently on the disk. The page-table entry for a page that is
brought into memory is set as usual, but the page-table entry for a page that is not currently in
memory is simply marked invalid, or contains the address of the page on disk.
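A small C sketch of how the valid-invalid bit can be consulted on each reference; the page-table layout and the page_fault() handler are simplified assumptions for illustration:

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PAGES 8

    /* Simplified page-table entry: frame number plus valid-invalid bit. */
    struct pte {
        int  frame;   /* frame number if the page is in memory */
        bool valid;   /* true = legal and in memory, false = not in memory (or illegal) */
    };

    static struct pte page_table[NUM_PAGES];

    /* Hypothetical page-fault handler: would bring the page in from disk. */
    static void page_fault(int page) {
        printf("page fault on page %d: bring it in from disk\n", page);
        page_table[page].frame = page;   /* pretend a free frame was found */
        page_table[page].valid = true;
    }

    static int reference(int page) {
        if (!page_table[page].valid)     /* invalid bit => page fault */
            page_fault(page);
        return page_table[page].frame;   /* page is now resident */
    }

    int main(void) {
        reference(3);   /* first touch faults */
        reference(3);   /* second touch is a hit */
        return 0;
    }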
Page Replacement:
Page replacement is the mechanism that selects a victim page in memory and replaces it with the needed page from the disk when no free frame is available. Page replacement can be described as follows (a C sketch of these steps appears after the list):
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.
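A self-contained C sketch of these steps; the data structures, the stub disk routines, and the simple FIFO-style victim choice are illustrative assumptions, not a prescribed implementation:

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_PAGES  8
    #define NUM_FRAMES 3

    struct pte { int frame; bool valid; };
    static struct pte page_table[NUM_PAGES];
    static int frame_to_page[NUM_FRAMES];    /* reverse map ("frame table") */
    static int frames_used = 0;
    static int next_victim = 0;              /* trivial FIFO-style victim choice for the sketch */

    /* Stubs standing in for real disk I/O (illustrative only). */
    static void write_page_to_disk(int page)  { printf("  write victim page %d to disk\n", page); }
    static void read_page_from_disk(int page) { printf("  read page %d from disk\n", page); }

    static void service_page_fault(int page) {
        int frame;
        /* 1. The desired page is located on the disk (implicit here).      */
        /* 2. Find a free frame, or select a victim frame if none is free.  */
        if (frames_used < NUM_FRAMES) {
            frame = frames_used++;
        } else {
            frame = next_victim;
            next_victim = (next_victim + 1) % NUM_FRAMES;
            int victim = frame_to_page[frame];
            write_page_to_disk(victim);           /* write the victim out            */
            page_table[victim].valid = false;     /* update its page-table entry     */
        }
        /* 3. Read the desired page into the freed frame and update tables. */
        read_page_from_disk(page);
        frame_to_page[frame] = page;
        page_table[page].frame = frame;
        page_table[page].valid = true;
        /* 4. The faulting instruction is then restarted (not modelled).    */
    }

    int main(void) {
        int refs[] = {0, 1, 2, 3, 0};
        int n = (int)(sizeof(refs) / sizeof(refs[0]));
        for (int i = 0; i < n; i++) {
            printf("reference page %d\n", refs[i]);
            if (!page_table[refs[i]].valid)
                service_page_fault(refs[i]);
        }
        return 0;
    }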
Page Replacement algorithms:
The page replacement algorithms decide which memory pages to page out (swap out, write to
disk) when a page of memory needs to be allocated. We evaluate an algorithm by running it on a
particular string of memory references and computing the number of page faults. The string of
memory references is called a reference string.
The different page replacement algorithms are described as follows:
1. First-In-First-Out (FIFO) Algorithm:
A FIFO replacement algorithm associates with each page the time when that page was brought
into memory. When a page must be replaced, the oldest page is chosen to swap out. We can
create a FIFO queue to hold all pages in memory. We replace the page at the head of the queue.
When a page is brought into memory, we insert it at the tail of the queue.
Example (FIFO page-replacement algorithm):
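A minimal C sketch that simulates FIFO replacement on a sample reference string and counts the page faults; the reference string (a commonly used textbook one) and the three-frame memory are illustrative choices:

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
        int n = (int)(sizeof(ref) / sizeof(ref[0]));
        int frame[FRAMES];
        int loaded = 0, head = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < loaded; j++)
                if (frame[j] == ref[i]) { hit = 1; break; }
            if (hit) continue;                      /* page already in memory */

            faults++;
            if (loaded < FRAMES) {
                frame[loaded++] = ref[i];           /* free frame available */
            } else {
                frame[head] = ref[i];               /* replace oldest page (head of FIFO queue) */
                head = (head + 1) % FRAMES;
            }
        }
        printf("page faults: %d\n", faults);
        return 0;
    }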
Note: For some page-replacement algorithms, the page fault rate may increase as the number of
allocated frames increases. This most unexpected result is known as Belady's anomaly.
2. Optimal Page Replacement algorithm: One result of the discovery of Belady's anomaly was
the search for an optimal page replacement algorithm. An optimal page-replacement algorithm
has the lowest page-fault rate of all algorithms, and will never suffer from Belady's anomaly.
Such an algorithm does exist, and has been called OPT or MIN.
It is simply "Replace the page that will not be used for the longest period of time". Use of this page-replacement algorithm guarantees the lowest possible page-fault rate for a fixed number of frames. Unfortunately, it requires future knowledge of the reference string, so it is mainly used for comparison studies.
Example (optimal page-replacement algorithm):
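A minimal C sketch of the OPT/MIN policy on the same style of reference string; it scans ahead to find the resident page whose next use is farthest in the future (the frame count and reference string are illustrative):

    #include <stdio.h>

    #define FRAMES 3

    /* Position of the next use of 'page' after position 'pos' (n if never used again). */
    static int next_use(int ref[], int n, int pos, int page) {
        for (int k = pos + 1; k < n; k++)
            if (ref[k] == page) return k;
        return n;
    }

    int main(void) {
        int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
        int n = (int)(sizeof(ref) / sizeof(ref[0]));
        int frame[FRAMES];
        int loaded = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < loaded; j++)
                if (frame[j] == ref[i]) { hit = 1; break; }
            if (hit) continue;

            faults++;
            if (loaded < FRAMES) {
                frame[loaded++] = ref[i];
            } else {
                /* Victim = resident page that will not be used for the longest time. */
                int victim = 0, farthest = -1;
                for (int j = 0; j < FRAMES; j++) {
                    int d = next_use(ref, n, i, frame[j]);
                    if (d > farthest) { farthest = d; victim = j; }
                }
                frame[victim] = ref[i];
            }
        }
        printf("page faults: %d\n", faults);
        return 0;
    }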
4. LRU Approximation Page Replacement algorithm: In this algorithm, a reference bit is associated with each entry in the page table. Initially, all bits are cleared (to 0) by the operating
system. As a user process executes, the bit associated with each page referenced is set (to 1) by
the hardware. After some time, we can determine which pages have been used and which have
not been used by examining the reference bits.
This algorithm can be classified into different categories as follows:
i. Additional-Reference-Bits Algorithm: We can keep an 8-bit byte for each page in a table in memory. At regular intervals, a timer interrupt transfers control to the operating system. The operating system shifts the reference bit for each page into the high-order bit of its 8-bit byte, shifting the other bits right by 1 bit position and discarding the low-order bit. These 8-bit shift registers contain the history of page use for the last eight time periods. If we interpret these 8-bit bytes as unsigned integers, the page with the lowest number is the LRU page, and it can be replaced.
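A small C sketch of the additional-reference-bits bookkeeping; the timer_tick() routine stands in for the periodic timer interrupt, and the reference pattern is purely illustrative:

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_PAGES 4

    static uint8_t history[NUM_PAGES];   /* one 8-bit shift register per page            */
    static int reference_bit[NUM_PAGES]; /* set by "hardware" when the page is referenced */

    /* Called from the periodic timer interrupt: shift each reference bit
       into the high-order position and clear the bit for the next period. */
    static void timer_tick(void) {
        for (int p = 0; p < NUM_PAGES; p++) {
            history[p] = (uint8_t)((history[p] >> 1) | (reference_bit[p] << 7));
            reference_bit[p] = 0;
        }
    }

    /* LRU approximation: the page whose history byte is the smallest unsigned value. */
    static int pick_victim(void) {
        int victim = 0;
        for (int p = 1; p < NUM_PAGES; p++)
            if (history[p] < history[victim]) victim = p;
        return victim;
    }

    int main(void) {
        /* Period 1: pages 0 and 2 referenced; period 2: only page 0 referenced. */
        reference_bit[0] = 1; reference_bit[2] = 1; timer_tick();
        reference_bit[0] = 1;                       timer_tick();
        for (int p = 0; p < NUM_PAGES; p++)
            printf("page %d history = 0x%02x\n", p, history[p]);
        printf("victim (approx. LRU) = page %d\n", pick_victim());
        return 0;
    }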
ii. Second-Chance Algorithm: The basic algorithm of second-chance replacement is a FIFO
replacement algorithm. When a page has been selected, we inspect its reference bit. If the value
is 0, we proceed to replace this page. If the reference bit is set to 1, we give that page a second
chance and move on to select the next FIFO page. When a page gets a second chance, its
reference bit is cleared and its arrival time is reset to the current time. Thus, a page that is given
a second chance will not be replaced until all other pages are replaced.
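One common way to implement second chance is as a circular queue (the "clock" formulation); the C sketch below uses that formulation, with an illustrative reference string and frame count:

    #include <stdio.h>

    #define FRAMES 3

    static int frame_page[FRAMES];   /* which page each frame holds            */
    static int ref_bit[FRAMES];      /* reference bit of that frame             */
    static int hand = 0;             /* next position to consider (FIFO order)  */

    /* Pages with reference bit 1 are skipped once (bit cleared);
       the first page found with bit 0 is chosen for replacement. */
    static int choose_victim(void) {
        for (;;) {
            if (ref_bit[hand] == 0) {
                int victim = hand;
                hand = (hand + 1) % FRAMES;
                return victim;
            }
            ref_bit[hand] = 0;                /* give the page a second chance */
            hand = (hand + 1) % FRAMES;
        }
    }

    int main(void) {
        int ref[] = {1, 2, 3, 2, 4, 2, 5};
        int n = (int)(sizeof(ref) / sizeof(ref[0]));
        int loaded = 0, faults = 0;

        for (int i = 0; i < n; i++) {
            int hit = -1;
            for (int j = 0; j < loaded; j++)
                if (frame_page[j] == ref[i]) { hit = j; break; }
            if (hit >= 0) { ref_bit[hit] = 1; continue; }   /* hardware sets the bit on use */

            faults++;
            int f = (loaded < FRAMES) ? loaded++ : choose_victim();
            frame_page[f] = ref[i];
            ref_bit[f] = 1;
        }
        printf("page faults: %d\n", faults);
        return 0;
    }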
5. Counting-Based Page Replacement: We could keep a counter of the number of references
that have been made to each page, and develop the following two schemes.
i. LFU page-replacement algorithm: The least frequently used (LFU) page-replacement algorithm requires that the page with the smallest count be replaced. The reason for this selection is that an actively used page should have a large reference count.
ii. MFU page-replacement algorithm: The most frequently used (MFU) page-replacement algorithm replaces the page with the largest count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
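A small C sketch of both counting-based choices, using illustrative resident pages and reference counts:

    #include <stdio.h>

    #define FRAMES 4

    static int frame_page[FRAMES] = {3, 7, 1, 5};   /* resident pages (illustrative)      */
    static int use_count[FRAMES]  = {6, 1, 9, 4};   /* reference counters (illustrative)  */

    /* LFU: replace the resident page with the smallest reference count. */
    static int lfu_victim(void) {
        int v = 0;
        for (int j = 1; j < FRAMES; j++)
            if (use_count[j] < use_count[v]) v = j;
        return v;
    }

    /* MFU: replace the resident page with the largest reference count. */
    static int mfu_victim(void) {
        int v = 0;
        for (int j = 1; j < FRAMES; j++)
            if (use_count[j] > use_count[v]) v = j;
        return v;
    }

    int main(void) {
        printf("LFU victim: page %d\n", frame_page[lfu_victim()]);  /* page 7 (count 1) */
        printf("MFU victim: page %d\n", frame_page[mfu_victim()]);  /* page 1 (count 9) */
        return 0;
    }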
Allocation of frames:
The operating system can keep a pool of free frames so that, when a page fault occurs, a free frame is available to store the new page. While the page swap is taking place, a replacement can be selected, which is written to the disk as the user process continues to execute. The operating system allocates all its buffer and table space from the free-frame list.
There are two major allocation algorithms/schemes:
1. Equal allocation
2. Proportional allocation
1. Equal allocation: The easiest way to split m frames among n processes is to give everyone an
equal share, m/n frames. This scheme is called equal allocation.
2. Proportional allocation: Here, available memory is allocated to each process according to its size. Let the size of the virtual memory for process pi be si, and define S = ∑ si. Then, if the total number of available frames is m, we allocate ai frames to process pi, where ai is approximately
ai = (si / S) × m
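A minimal C sketch of proportional allocation, using the illustrative values m = 62 frames and two processes of sizes 10 and 127 pages:

    #include <stdio.h>

    int main(void) {
        int m = 62;                 /* total available frames (illustrative)            */
        int s[] = {10, 127};        /* virtual-memory sizes of the processes (illustrative) */
        int n = (int)(sizeof(s) / sizeof(s[0]));

        int S = 0;
        for (int i = 0; i < n; i++) S += s[i];      /* S = sum of all si */

        for (int i = 0; i < n; i++) {
            int a = s[i] * m / S;                   /* ai ~ (si / S) * m, truncated */
            printf("process %d: size %d -> about %d frames\n", i, s[i], a);
        }
        return 0;
    }

With these numbers the small process receives about 4 frames and the large one about 57 of the 62 available frames.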
Thrashing:
The system spends most of its time shuttling pages between main memory and secondary
memory due to frequent page faults. This behavior is known as thrashing.
A process is thrashing if it is spending more time paging than executing.
This leads to:
1. Low CPU utilization, which makes the operating system think that it needs to increase the degree of multiprogramming.