OS - 4th Unit: Operating System (for Masters in Computer Applications)

Memory management involves dividing main memory for the operating system and user processes, with techniques for binding instructions and data to memory addresses at compile, load, and execution times. It includes strategies for memory allocation such as contiguous and non-contiguous methods, along with fragmentation issues and solutions like paging and segmentation. Virtual memory allows processes to execute with only part of their data in memory, enhancing efficiency and simplifying programming, while demand paging optimizes memory usage by loading only necessary pages.

Uploaded by Charan Adabala

UNIT IV-MEMORY MANAGEMENT

Memory Management:
In a uni-programming system, main memory is divided into two parts: one part for the operating
system (resident monitor, kernel) and one part for the user program currently being executed.
In a multiprogramming system, the “user” part of memory must be further subdivided to
accommodate multiple processes. The task of subdivision is carried out dynamically by the
operating system and is known as memory management.
Binding of Instructions and Data to Memory:
Address binding of instructions and data to memory addresses can happen at three different stages:
1. Compile time: Compile time is the time taken to compile the program or source code. If the memory location is known a priori at compilation, the compiler can generate absolute code.
2. Load time: Load time is the time taken to link the related program files and load them into main memory. If the memory location is not known at compile time, the compiler must generate relocatable code, and final binding is delayed until load time.
3. Execution time: Execution time is the time taken by the processor to execute the program in main memory. If the process can be moved during its execution from one memory segment to another, binding must be delayed until run time. This requires hardware support for address maps (e.g., base and limit registers).
Physical and Virtual Address Space:
⇒ An address generated by the CPU is commonly referred to as a logical address or a virtual
address whereas an address seen by the main memory unit is commonly referred to as a physical
address.
⇒ The set of all logical addresses generated by a program is a logical-address space whereas the
set of all physical addresses corresponding to these logical addresses is a physical address space.
⇒ Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
⇒ The Memory Management Unit (MMU) is a hardware device that maps virtual addresses to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
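As a minimal sketch of this mapping (the register values below are illustrative, not from the text), the relocation register supplies the base and the limit register guards the partition:

```python
def translate(logical_addr, relocation, limit):
    """Map a CPU-generated logical address to a physical address
    using relocation (base) and limit registers."""
    if logical_addr >= limit:
        # Hardware would raise a trap to the operating system here.
        raise MemoryError("trap: address beyond partition limit")
    return relocation + logical_addr

# A process loaded at physical address 14000 in a 3000-byte partition:
print(translate(346, 14000, 3000))   # -> 14346
```

Any logical address at or beyond the limit (here, 3000) traps instead of reaching memory, which is how user processes are kept inside their own partitions.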
Memory Allocation Strategies– Fixed and Variable Partitions:
MEMORY ALLOCATION:
The main memory must accommodate both the operating system and the various user processes.
We need to allocate different parts of the main memory in the most efficient way possible.
The main memory is usually divided into two partitions: one for the resident operating system,
and one for the user processes. We may place the operating system in either low memory or high
memory. The major factor affecting this decision is the location of the interrupt vector. Since the
interrupt vector is often in low memory, programmers usually place the operating system in low
memory as well.
There are following two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Non contiguous memory allocation
1. Contiguous Memory Allocation:
Here, all the processes are stored in contiguous memory locations. To load multiple processes into memory, the operating system must divide memory into multiple partitions for those processes.
Hardware Support: The relocation-register scheme is used to protect user processes from each other, and from changing operating-system code and data. The relocation register contains the value of the smallest physical address of a partition, and the limit register contains the range of that partition. Each logical address must be less than the value in the limit register.

(Hardware support for relocation and limit registers)


According to size of partitions, the multiple partition schemes are divided into two types:
i. Multiple fixed partition/ multiprogramming with fixed task(MFT)
ii. Multiple variable partition/ multiprogramming with variable task(MVT)
i. Multiple fixed partitions: Main memory is divided into a number of static partitions at
system generation time. In this case, any process whose size is less than or equal to the partition
size can be loaded into any available partition. If all partitions are full and no process is in the
Ready or Running state, the operating system can swap a process out of any of the partitions and
load in another process, so that there is some work for the processor.
Advantages: Simple to implement, with little operating-system overhead.
Disadvantages: Inefficient use of memory due to internal fragmentation, and a fixed maximum number of active processes.
ii. Multiple variable partitions: With this partitioning, the partitions are of variable length and number. When a process is brought into main memory, it is allocated exactly as much memory as it requires and no more.
Advantages: No internal fragmentation and more efficient use of main memory.
Disadvantages: Inefficient use of the processor due to the need for compaction to counter external fragmentation.
Partition Selection Policy: When multiple memory holes (partitions) are large enough to contain a process, the operating system must use an algorithm to select the hole into which the process will be loaded.
The partition selection algorithms are as follows:
⇒ First-fit: The OS scans the list of free memory holes from the beginning and allocates the first hole that is large enough for the process.
⇒ Next-fit: The search starts from where the previous allocation ended, and the process is allocated the next hole found that is large enough.
⇒ Best-fit: The entire list of holes is searched to find the smallest hole that is large enough for the process.
⇒ Worst-fit: The entire list of holes is searched to find the largest hole, which is used if it is large enough for the process.
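The selection policies can be sketched as below; the hole sizes and request size are made-up illustrative values. Each function returns the index of the chosen hole, or None when no hole fits:

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole large enough, or None."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Return the index of the largest hole, if it fits, or None."""
    h, i = max((h, i) for i, h in enumerate(holes))
    return i if h >= size else None

holes = [100, 500, 200, 300, 600]   # free partition sizes
print(first_fit(holes, 212))        # -> 1 (the 500-byte hole)
print(best_fit(holes, 212))         # -> 3 (the 300-byte hole)
print(worst_fit(holes, 212))        # -> 4 (the 600-byte hole)
```

Best-fit scans the whole list for the tightest fit, worst-fit for the loosest; first-fit stops at the first success, which is why it is generally the fastest of the three.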
Fragmentation: The wasting of memory space is called fragmentation. There are two types of
fragmentation as follows:
1. External Fragmentation: Enough total memory space exists to satisfy a request, but it is not contiguous. This wasted space, not allocated to any partition, is called external fragmentation. External fragmentation can be reduced by compaction: the goal is to shuffle the memory contents so as to place all free memory together in one large block. Compaction is possible only if relocation is dynamic and is done at execution time.
2. Internal Fragmentation: The allocated memory may be slightly larger than requested
memory. The wasted space within a partition is called internal fragmentation. One method to
reduce internal fragmentation is to use partitions of different size.
2. Noncontiguous memory allocation: In noncontiguous memory allocation, the pieces of a process may be stored in noncontiguous memory locations.
There are different techniques used to load processes into memory, as follows:
1. Paging
2. Segmentation
3. Virtual memory paging(Demand paging) etc.
Paging:
Main memory is divided into a number of equal-size blocks called frames. Each process is divided into a number of blocks of the same length as the frames, called pages. A process is loaded by loading all of its pages into available frames (which need not be contiguous).

Process of Translation from logical to physical addresses


⇒ Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table.
⇒ The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
⇒ If the size of the logical-address space is 2^m and the page size is 2^n addressing units (bytes or words), then the high-order (m - n) bits of a logical address designate the page number and the n low-order bits designate the page offset. Thus, a logical address has the form | p | d |, where p is an index into the page table and d is the displacement within the page.
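The bit-level split of a logical address can be sketched as follows; the page size (2^10 = 1024 bytes) and the page-table contents are illustrative assumptions, not values from the text:

```python
PAGE_BITS = 10                       # page size 2^10 = 1024 bytes (this is n)
page_table = {0: 5, 1: 9, 2: 2}      # page number -> frame number (made up)

def paged_translate(logical_addr):
    """Split a logical address into (p, d) and rebuild the physical address."""
    p = logical_addr >> PAGE_BITS                 # high-order (m - n) bits
    d = logical_addr & ((1 << PAGE_BITS) - 1)     # low-order n bits
    frame = page_table[p]                         # page-table lookup
    return (frame << PAGE_BITS) | d               # frame base + offset

# Logical address 1036 = page 1, offset 12; page 1 lives in frame 9:
print(paged_translate(1036))   # -> 9228  (9 * 1024 + 12)
```

The shift-and-mask form works because the page size is a power of two, which is exactly why real hardware keeps page sizes that way.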
Segmentation:
Segmentation is a memory-management scheme that supports user view of memory. A program
is a collection of segments. A segment is a logical unit such as: main program, procedure,
function, method, object, local variables, global variables, common block, stack, symbol table,
arrays etc. A logical-address space is a collection of segments. Each segment has a name and a
length. The user specifies each address by two quantities: a segment name/number and an offset.
Hence, a logical address consists of a two-tuple: <segment-number, offset>. The segment table maps these two-dimensional logical addresses to one-dimensional physical addresses; each entry in the table has: base – contains the starting physical address where the segment resides in memory; limit – specifies the length of the segment. The segment-table base register (STBR) points to the segment table's location in memory. The segment-table length register (STLR) indicates the number of segments used by a program.
The segment number is used as an index into the segment table. The offset d of the logical address must be between 0 and the segment limit. If it is not, we trap to the operating system (logical addressing attempt beyond the end of the segment). If the offset is legal, it is added to the segment base to produce the address in physical memory of the desired byte. Consider five segments numbered from 0 through 4, stored in physical memory as shown in the figure. The segment table has a separate entry for each segment, giving the starting address of the segment in physical memory (the base) and the length of that segment (the limit). For example, segment 2 is 400 bytes long and begins at location 4300. Thus, a reference to byte 53 of segment 2 is mapped onto location 4300 + 53 = 4353.
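The worked example (segment 2: base 4300, limit 400) can be checked with a small sketch; the single-entry segment table below mirrors just that example:

```python
# Segment table from the example: segment number -> (base, limit)
seg_table = {2: (4300, 400)}

def seg_translate(s, d):
    """Translate <segment-number, offset> to a physical address."""
    base, limit = seg_table[s]
    if d < 0 or d >= limit:
        # Offset outside [0, limit) traps to the operating system.
        raise MemoryError("trap: offset beyond end of segment")
    return base + d

print(seg_translate(2, 53))   # -> 4353, matching 4300 + 53
```

An offset of 400 or more for segment 2 would trap rather than translate, since the segment is only 400 bytes long.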

Virtual Memory:
Virtual memory is a technique that allows the execution of processes that may not be completely
in memory. Only part of the program needs to be in memory for execution. It means that Logical
address space can be much larger than physical address space. Virtual memory allows processes
to easily share files and address spaces, and it provides an efficient mechanism for process
creation.
Virtual memory is the separation of user logical memory from physical memory. This separation
allows an extremely large virtual memory to be provided for programmers when only a smaller
physical memory is available. Virtual memory makes the task of programming much easier,
because the programmer no longer needs to worry about the amount of physical memory
available.

Demand Paging:
A demand-paging system is similar to a paging system with swapping. Generally, processes reside on secondary memory (usually a disk). When we want to execute a process, we swap it into memory; rather than swapping the entire process in, we swap in only the required pages. This is done by a lazy swapper, which never swaps a page into memory unless that page will be needed. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process.
Page Transfer Method: When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.
Page Table: The valid–invalid bit scheme of the page table can be used to indicate which pages are currently in memory. When the bit is set to "valid", the associated page is both legal and in memory. If the bit is set to "invalid", the page either is not valid or is valid but currently on the disk. The page-table entry for a page that is brought into memory is set as usual, but the page-table entry for a page that is not currently in memory is simply marked invalid, or contains the address of the page on disk.

(Page table when some pages are not in main memory)


When a process references a page marked invalid, a page fault occurs: the page is not in main memory. The procedure for handling a page fault is as follows:
1. We check an internal table for this process to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid but the page has not yet been brought into memory, we page it in.
3. We find a free frame (by taking one from the free-frame list).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the illegal address trap. The process can now
access the page as though it had always been in memory.

(Diagram of Steps in handling a page fault)


Note: Pages are copied into memory only when they are required. This mechanism is called pure demand paging.
Performance of Demand Paging: Let p be the probability of a page fault (0 ≤ p ≤ 1). Then the effective access time is:
Effective access time = (1 - p) x memory access time + p x page-fault time
In any case, we are faced with three major components of the page-fault service time: 1. Service the page-fault interrupt. 2. Read in the page. 3. Restart the process.
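A small sketch of the effective-access-time formula; the 200 ns memory-access time and 8 ms fault-service time are assumed figures chosen for illustration, not from the text:

```python
def effective_access_time(p, mem_ns, fault_ns):
    """EAT = (1 - p) * memory access time + p * page-fault service time.
    All times in nanoseconds; p is the page-fault probability."""
    return (1 - p) * mem_ns + p * fault_ns

# Assumed figures: 200 ns per memory access, 8 ms (8,000,000 ns) per fault.
# Even a fault rate of 1 in 1000 dominates the access time:
print(effective_access_time(0.001, 200, 8_000_000))
```

With these assumed numbers the result is roughly 8200 ns, about 40 times the raw memory-access time, which is why keeping the page-fault rate tiny matters so much.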
Copy-on-write:
Copy on Write, or simply COW, is a resource-management technique. One of its main uses is in the implementation of the fork system call, where the parent and child initially share virtual memory pages.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both processes initially share the same pages in memory, and these shared pages are marked as copy-on-write. If either process tries to modify a shared page, only then is a copy of that page created, and the modification is made on the copy by that process, leaving the other process unaffected.
Suppose a process P creates a new process Q, and then process P modifies page 3: only page 3 is copied for P, while the remaining pages stay shared.

Page Replacement:
Page replacement is the mechanism that selects a page in memory to be swapped out when a new page must be brought in and no free frame is available. Page replacement can be described as follows:
1. Find the location of the desired page on the disk.
2. Find a free frame:
a. If there is a free frame, use it.
b. If there is no free frame, use a page-replacement algorithm to select a victim frame.
c. Write the victim page to the disk; change the page and frame tables accordingly.
3. Read the desired page into the (newly) free frame; change the page and frame tables.
4. Restart the user process.
Page Replacement algorithms:
The page replacement algorithms decide which memory pages to page out (swap out, write to
disk) when a page of memory needs to be allocated. We evaluate an algorithm by running it on a
particular string of memory references and computing the number of page faults. The string of
memory references is called a reference string.
The different page replacement algorithms are described as follows:
1. First-In-First-Out (FIFO) Algorithm:
A FIFO replacement algorithm associates with each page the time when that page was brought
into memory. When a page must be replaced, the oldest page is chosen to swap out. We can
create a FIFO queue to hold all pages in memory. We replace the page at the head of the queue.
When a page is brought into memory, we insert it at the tail of the queue.
Example:
(FIFO page-replacement algorithm)
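A minimal simulation of FIFO replacement; the reference string below is a commonly used textbook sample, assumed here rather than taken from this document's figure:

```python
from collections import deque

def fifo_faults(ref_string, n_frames):
    """Count page faults under FIFO replacement: evict the oldest page."""
    frames, queue, faults = set(), deque(), 0
    for page in ref_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.remove(queue.popleft())   # evict head of the queue
            frames.add(page)
            queue.append(page)                   # newcomer joins the tail
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # -> 15 faults with 3 frames
```

The queue records arrival order only; a page that is heavily used but arrived early is still evicted first, which is FIFO's main weakness.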
Note: For some page-replacement algorithms, the page fault rate may increase as the number of
allocated frames increases. This most unexpected result is known as Belady's anomaly.
2. Optimal Page Replacement algorithm: One result of the discovery of Belady's anomaly was
the search for an optimal page replacement algorithm. An optimal page-replacement algorithm
has the lowest page-fault rate of all algorithms, and will never suffer from Belady's anomaly.
Such an algorithm does exist, and has been called OPT or MIN.
It is simply: "Replace the page that will not be used for the longest period of time." Use of this page-replacement algorithm guarantees the lowest possible page-fault rate for a fixed number of frames.
Example:

(Optimal page-replacement algorithm)
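A sketch of OPT using the same commonly used sample reference string as above (an assumption, not this document's figure); it looks ahead in the reference string to evict the page used farthest in the future:

```python
def opt_faults(ref_string, n_frames):
    """Count page faults under OPT/MIN: evict the page whose next
    use lies farthest in the future (or is never used again)."""
    frames, faults = set(), 0
    for i, page in enumerate(ref_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) == n_frames:
            future = ref_string[i + 1:]
            victim = max(
                frames,
                key=lambda q: future.index(q) if q in future else len(future) + 1,
            )
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(refs, 3))   # -> 9 faults with 3 frames
```

Because it needs the future of the reference string, OPT cannot be implemented in a real kernel; it serves as the yardstick against which FIFO and LRU are measured.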


3. LRU Page Replacement algorithm: If we use the recent past as an approximation of the near future, then we will replace the page that has not been used for the longest period of time. This approach is the least-recently-used (LRU) algorithm.
LRU replacement associates with each page the time of that page's last use. When a page must
be replaced, LRU chooses that page that has not been used for the longest period of time.
Example:
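A sketch of LRU on the same assumed sample reference string; a list ordered from least to most recently used stands in for the per-page timestamps described above:

```python
def lru_faults(ref_string, n_frames):
    """Count page faults under LRU: evict the least recently used page."""
    frames, faults = [], 0        # list ordered from LRU (front) to MRU (back)
    for page in ref_string:
        if page in frames:
            frames.remove(page)   # hit: refresh this page's recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)     # evict the least recently used page
        frames.append(page)       # page is now the most recently used
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # -> 12 faults with 3 frames
```

On this string LRU (12 faults) lands between FIFO (15) and OPT (9), which is the usual ordering: LRU approximates OPT by assuming the recent past predicts the near future.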

4. LRU Approximation Page Replacement algorithm: In this algorithm, Reference bits are
associated with each entry in the page table. Initially, all bits are cleared (to 0) by the operating
system. As a user process executes, the bit associated with each page referenced is set (to 1) by
the hardware. After some time, we can determine which pages have been used and which have
not been used by examining the reference bits.
This algorithm can be classified into different categories as follows:
i. Additional-Reference-Bits Algorithm: We can keep an 8-bit byte for each page in a table in memory. At regular intervals, a timer interrupt transfers control to the operating system. The operating system shifts the reference bit for each page into the high-order bit of its 8-bit byte, shifting the other bits right by 1 bit position and discarding the low-order bit. These 8-bit shift registers contain the history of page use for the last eight time periods. If we interpret these 8-bit values as unsigned integers, the page with the lowest number is the LRU page, and it can be replaced.
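One aging step of this scheme can be sketched as follows; the page names and history values are made up for illustration:

```python
def age_pages(history, referenced):
    """One timer tick: shift each page's 8-bit history right by one,
    inserting the current reference bit as the new high-order bit."""
    for page in history:
        history[page] = (history[page] >> 1) | (0x80 if page in referenced else 0)
    return history

h = {"A": 0b00000000, "B": 0b01110111}
age_pages(h, referenced={"A"})
# A -> 0b10000000 (just used), B -> 0b00111011 (history decays)
victim = min(h, key=h.get)   # lowest unsigned value approximates LRU
# victim is "B": its most recent use is older than A's
```

Reading the bytes as unsigned integers makes a recent reference outweigh any combination of older ones, since it lands in the high-order bit.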
ii. Second-Chance Algorithm: The basic algorithm of second-chance replacement is a FIFO
replacement algorithm. When a page has been selected, we inspect its reference bit. If the value
is 0, we proceed to replace this page. If the reference bit is set to 1, we give that page a second
chance and move on to select the next FIFO page. When a page gets a second chance, its
reference bit is cleared and its arrival time is reset to the current time. Thus, a page that is given
a second chance will not be replaced until all other pages are replaced.
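The second-chance scan can be sketched as below; the deque of (page, reference-bit) pairs and the sample contents are illustrative assumptions:

```python
from collections import deque

def second_chance_victim(pages):
    """pages: deque of (page, ref_bit) pairs in FIFO arrival order.
    Returns the victim page; pages whose reference bit is set are
    moved to the tail with the bit cleared (their second chance)."""
    while True:
        page, ref = pages.popleft()
        if ref == 0:
            return page              # bit clear: replace this page
        pages.append((page, 0))      # bit set: clear it, re-queue at tail

q = deque([("A", 1), ("B", 0), ("C", 1)])
print(second_chance_victim(q))   # -> B (A's set bit earns it a second chance)
```

If every page has its bit set, the scan clears them all and comes back around to the original head, so the algorithm degenerates to plain FIFO in the worst case.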
5. Counting-Based Page Replacement: We could keep a counter of the number of references
that have been made to each page, and develop the following two schemes.
i. LFU page-replacement algorithm: The least frequently used (LFU) page-replacement algorithm requires that the page with the smallest count be replaced. The reasoning is that an actively used page should have a large reference count.
ii. MFU page-replacement algorithm: The most frequently used (MFU) page-replacement algorithm requires that the page with the largest count be replaced, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
Allocation of frames:
When a page fault occurs, a free frame should be available to hold the new page. While a page swap is taking place, a replacement can be selected in advance and written to the disk as the user process continues to execute. The operating system allocates its buffer and table space for the new page from the free-frame list.
There are two major allocation schemes:
1. Equal allocation
2. Proportional allocation
1. Equal allocation: The easiest way to split m frames among n processes is to give everyone an equal share, m/n frames. This scheme is called equal allocation.
2. Proportional allocation: Here, available memory is allocated to each process according to its size. Let the size of the virtual memory for process p_i be s_i, and define S = Σ s_i. Then, if the total number of available frames is m, we allocate a_i frames to process p_i, where a_i is approximately:
a_i = (s_i / S) x m
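The proportional formula can be sketched as below; the process sizes (10 and 127 pages) and frame count (62) are illustrative, and the handling of leftover frames (giving them to the largest process) is an assumed tie-breaking choice, since the formula itself only gives approximate values:

```python
def proportional_allocation(sizes, m):
    """Allocate m frames in proportion to process sizes:
    a_i = (s_i / S) * m, rounded down, with any leftover frames
    handed to the largest allocation (an assumed tie-break)."""
    S = sum(sizes)
    alloc = [s * m // S for s in sizes]
    alloc[alloc.index(max(alloc))] += m - sum(alloc)   # distribute remainder
    return alloc

# Splitting 62 frames between a 10-page and a 127-page process:
print(proportional_allocation([10, 127], 62))   # -> [4, 58]
```

The small process gets about 10/137 of the frames and the large one about 127/137, instead of the 31 each that equal allocation would give.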
Thrashing:
The system spends most of its time shuttling pages between main memory and secondary
memory due to frequent page faults. This behavior is known as thrashing.
A process is thrashing if it is spending more time paging than executing.
This leads to low CPU utilization; the operating system, observing the low utilization, increases the degree of multiprogramming, which adds more processes and makes the thrashing worse.
