
UNIT III

Memory Management

To provide a detailed description of various ways of organizing memory hardware


To discuss various memory-management techniques, including paging and segmentation
To provide a detailed description of the Intel Pentium, which supports both pure segmentation and
segmentation with paging
Program must be brought (from disk) into memory and placed within a process for it to be run
Main memory and registers are the only storage the CPU can access directly
Register access takes one CPU clock cycle (or less)
Main memory access can take many cycles
Cache sits between main memory and CPU registers
Protection of memory required to ensure correct operation

Base and Limit Registers

A pair of base and limit registers define the logical address space

Binding of Instructions and Data to Memory


Address binding of instructions and data to memory addresses can happen at three different stages
Compile time: If memory location known a priori, absolute code can be generated; must recompile
code if starting location changes
Load time: Must generate relocatable code if memory location is not known at compile time
Execution time: Binding delayed until run time if the process can be moved during its execution from
one memory segment to another. Need hardware support for address maps (e.g., base and limit
registers)
Multistep Processing of a User Program

Logical vs. Physical Address Space

The concept of a logical address space that is bound to a separate physical address space is central to
proper memory management
Logical address – generated by the CPU; also referred to as virtual address
Physical address – address seen by the memory unit
Logical and physical addresses are the same in compile-time and load-time address-binding schemes;
logical (virtual) and physical addresses differ in execution-time address-binding scheme
Memory-Management Unit (MMU)
Hardware device that maps virtual to physical address
In MMU scheme, the value in the relocation register is added to every address generated by a user
process at the time it is sent to memory
The user program deals with logical addresses; it never sees the real physical addresses

Dynamic relocation using a relocation register

Dynamic Loading
Routine is not loaded until it is called
Better memory-space utilization; unused routine is never loaded
Useful when large amounts of code are needed to handle infrequently occurring cases
No special support from the operating system is required; dynamic loading is implemented through program design

Dynamic Linking
Linking postponed until execution time
Small piece of code, stub, used to locate the appropriate memory-resident library routine
Stub replaces itself with the address of the routine, and executes the routine
Operating system support is needed to check whether the routine is in the process’s memory address space
Dynamic linking is particularly useful for libraries
System also known as shared libraries
Swapping
A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
Backing store – fast disk large enough to accommodate copies of all memory images for all users; must provide direct access to these memory images
Roll out, roll in – swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed
Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped
Modified versions of swapping are found on many systems (i.e., UNIX, Linux, and Windows)
System maintains a ready queue of ready-to-run processes which have memory images on disk
Multiple-partition allocation
Hole – block of available memory; holes of various size are scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to accommodate it

Schematic View of Swapping

Contiguous Allocation

Main memory is usually divided into two partitions:


Resident operating system, usually held in low memory with interrupt vector
User processes then held in high memory
Relocation registers used to protect user processes from each other, and from changing operating-system code and data
Base register contains value of smallest physical address
Limit register contains range of logical addresses – each logical address must be less than the limit register
MMU maps logical address dynamically

Hardware Support for Relocation and Limit Registers


Operating system maintains information about:

a) allocated partitions b) free partitions (hole)

OS         | OS         | OS         | OS
process 5  | process 5  | process 5  | process 5
process 8  |            | process 9  | process 9
           |            |            | process 10
process 2  | process 2  | process 2  | process 2

(Four successive memory states: process 8 terminates leaving a hole, then process 9 and later process 10 are allocated from it.)

Dynamic Storage-Allocation Problem


First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size
Produces the smallest leftover hole
Worst-fit: Allocate the largest hole; must also search entire list
Produces the largest leftover hole
First-fit and best-fit better than worst-fit in terms of speed and storage utilization
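As a quick illustration of the three placement strategies, here is a minimal sketch (not from the original notes); the hole sizes are made-up values in KB:

    def first_fit(holes, request):
        # Return the index of the first hole big enough, else None.
        for i, size in enumerate(holes):
            if size >= request:
                return i
        return None

    def best_fit(holes, request):
        # Smallest hole that still fits: minimizes the leftover fragment.
        fits = [(size, i) for i, size in enumerate(holes) if size >= request]
        return min(fits)[1] if fits else None

    def worst_fit(holes, request):
        # Largest hole: leaves the biggest leftover fragment.
        fits = [(size, i) for i, size in enumerate(holes) if size >= request]
        return max(fits)[1] if fits else None

    holes = [100, 500, 200, 300, 600]        # free hole sizes in KB
    print(first_fit(holes, 212))             # 1 (the 500 KB hole)
    print(best_fit(holes, 212))              # 3 (the 300 KB hole)
    print(worst_fit(holes, 212))             # 4 (the 600 KB hole)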
Fragmentation
External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous
Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size
difference is memory internal to a partition, but not being used
Reduce external fragmentation by compaction

Shuffle memory contents to place all free memory together in one large block
Compaction is possible only if relocation is dynamic, and is done at execution time.
I/O problem
Latch job in memory while it is involved in I/O
Do I/O only into OS buffers

Paging

Logical address space of a process can be noncontiguous; process is allocated physical memory
whenever the latter is available
Divide physical memory into fixed-sized blocks called frames (size is power of 2, between 512 bytes
and 8,192 bytes)
Divide logical memory into blocks of same size called pages
Keep track of all free frames


To run a program of size n pages, need to find n free frames and load program
Set up a page table to translate logical to physical addresses
Internal fragmentation

Address Translation Scheme

Address generated by CPU is divided into


Page number (p) – used as an index into a page table which contains base address of each page in
physical memory

Page offset (d) – combined with base address to define the physical memory address that is sent to the
memory unit
For a given logical address space of size 2^m and page size 2^n, the page number occupies the high-order m − n bits and the page offset the low-order n bits

Paging Hardware

page number | page offset
     p      |      d
   m − n    |      n
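The split can be computed with simple bit operations. The sketch below assumes m = 16 and n = 10 (i.e., 1 KB pages); these sizes are illustrative, not taken from the notes:

    m, n = 16, 10
    logical_address = 0x2ABC                 # 10940 decimal

    p = logical_address >> n                 # top m - n bits: page number
    d = logical_address & ((1 << n) - 1)     # low n bits: page offset
    print(p, d)                              # 10 700, since 10940 = 10*1024 + 700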

Paging Model of Logical and Physical Memory


Paging Example

32-byte memory and 4-byte pages

Free Frames


Implementation of Page Table

Page table is kept in main memory


Page-table base register (PTBR) points to the page table
Page-table length register (PTLR) indicates size of the page table
In this scheme every data/instruction access requires two memory accesses. One for the page table and
one for the data/instruction.

The two memory access problem can be solved by the use of a special fast-lookup hardware cache
called associative memory or translation look-aside buffers (TLBs)

Some TLBs store address-space identifiers (ASIDs) in each TLB entry – uniquely identifies each
process to provide address-space protection for that process

Associative Memory
Associative memory – parallel search
Address translation (p, d)
If p is in associative register, get frame # out
Otherwise get frame # from page table in memory

Page # | Frame #
(Each TLB entry pairs a page number with its frame number.)

Paging Hardware With TLB


Effective Access Time

Associative lookup = ε time units
Assume memory cycle time is 1 microsecond
Hit ratio – percentage of times that a page number is found in the associative registers; ratio related to number of associative registers
Hit ratio = α
Effective Access Time (EAT):
EAT = (1 + ε)α + (2 + ε)(1 − α)
    = 2 + ε − α
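A quick numeric check of the formula, using assumed values ε = 0.2 microseconds and α = 0.8 (illustrative only):

    e, a = 0.2, 0.8
    eat = (1 + e) * a + (2 + e) * (1 - a)
    print(eat)        # 1.4 microseconds
    print(2 + e - a)  # 1.4, matching the simplified form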
Memory Protection
Memory protection implemented by associating protection bit with each frame
Valid-invalid bit attached to each entry in the page table:
“valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page
“invalid” indicates that the page is not in the process’ logical address space
Valid (v) or Invalid (i) Bit In A Page Table

Shared Pages
Shared code
One copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems).

Shared code must appear in same location in the logical address space of all processes

Private code and data


Each process keeps a separate copy of the code and data
The pages for the private code and data can appear anywhere in the logical address space


Shared Pages Example

Structure of the Page Table

Hierarchical Paging
Hashed Page Tables
Inverted Page Tables

Hierarchical Page Tables

Break up the logical address space into multiple page tables


A simple technique is a two-level page table

Two-Level Page-Table Scheme


Two-Level Paging Example


A logical address (on 32-bit machine with 1K page size) is divided into:
a page number consisting of 22 bits
a page offset consisting of 10 bits
Since the page table is paged, the page number is further divided into:
a 12-bit page number
a 10-bit page offset
Thus, a logical address is as follows:

page number       page offset
  p1      p2          d
  12      10         10

where p1 is an index into the outer page table, and p2 is the displacement within the page of the outer page table
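The 12/10/10 decomposition can be expressed with shifts and masks; a minimal sketch with an arbitrary example address:

    addr = 0x12345678
    d  = addr & 0x3FF            # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF    # next 10 bits: index into the inner page table
    p1 = addr >> 20              # top 12 bits: index into the outer page table
    print(p1, p2, d)             # 291 277 632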

Address-Translation Scheme

Three-level Paging Scheme



Hashed Page Tables

Common in address spaces > 32 bits


The virtual page number is hashed into a page table
This page table contains a chain of elements hashing to the same location
Virtual page numbers are compared in this chain searching for a match
If a match is found, the corresponding physical frame is extracted
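A sketch of this structure, using a list of buckets with chained (virtual page number, frame) pairs; the bucket count and entries are made up:

    NUM_BUCKETS = 8
    table = [[] for _ in range(NUM_BUCKETS)]

    def insert(vpn, frame):
        table[hash(vpn) % NUM_BUCKETS].append((vpn, frame))

    def lookup(vpn):
        # Walk the chain in the hashed bucket, comparing virtual page numbers.
        for v, frame in table[hash(vpn) % NUM_BUCKETS]:
            if v == vpn:
                return frame
        return None  # not mapped: page fault

    insert(0x1A2, 7)
    print(lookup(0x1A2))  # 7
    print(lookup(0x999))  # None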

Hashed Page Table

Inverted Page Table

One entry for each real page of memory


Entry consists of the virtual address of the page stored in that real memory location, with information
about the process that owns that page

Decreases memory needed to store each page table, but increases time needed to search the table when a
page reference occurs
Use hash table to limit the search to one — or at most a few — page-table entries


Inverted Page Table Architecture

Segmentation
Memory-management scheme that supports user view of memory
A program is a collection of segments
A segment is a logical unit such as:
main program
procedure
function
method
object
local variables, global variables
common block
stack
symbol table
arrays

User’s View of a Program


Logical View of Segmentation

(Figure: segments 1, 2, 3, 4 in user space are mapped to noncontiguous regions of physical memory.)

Segmentation Architecture
Logical address consists of a two-tuple:
o <segment-number, offset>
Segment table – maps two-dimensional user-defined addresses into one-dimensional physical addresses; each table entry has:
base – contains the starting physical address where the segment resides in memory
limit – specifies the length of the segment
Segment-table base register (STBR) points to the segment table’s location in memory
Segment-table length register (STLR) indicates number of segments used by a program;
segment number s is legal if s < STLR
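Putting the STBR/STLR checks together, a minimal sketch of segment translation with illustrative (base, limit) values:

    segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]  # (base, limit)
    STLR = len(segment_table)

    def translate(s, offset):
        if s >= STLR:
            raise Exception("trap: invalid segment number")
        base, limit = segment_table[s]
        if offset >= limit:
            raise Exception("trap: offset beyond segment limit")
        return base + offset

    print(translate(2, 53))  # 4353 = base 4300 + offset 53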
Protection
With each entry in segment table associate:

validation bit = 0 ⇒ illegal segment
read/write/execute privileges
Protection bits associated with segments; code sharing occurs at segment level
Since segments vary in length, memory allocation is a dynamic storage-allocation problem
A segmentation example is shown in the following diagram


Segmentation Hardware

Example of Segmentation

Example: The Intel Pentium


Supports both segmentation and segmentation with paging
CPU generates logical address
Given to segmentation unit

Which produces linear addresses
Linear address given to paging unit

Which generates physical address in main memory


Paging units form equivalent of MMU
Logical to Physical Address Translation in Pentium

Intel Pentium Segmentation


Pentium Paging Architecture

Linear Address in Linux

Three-level Paging in Linux


Virtual Memory
Virtual memory is a technique that allows the execution of processes that may not be completely in memory. The main visible advantage of this scheme is that programs can be larger than physical memory.

Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available (Fig.).
Following are the situations when the entire program is not required to be loaded fully:

1. User written error handling routines are used only when an error occurs in the data or computation.
2. Certain options and features of a program may be used rarely.
3. Many tables are assigned a fixed amount of address space even though only a small amount of the table is
actually used.
The ability to execute a program that is only partially in memory would confer many benefits:

1. Fewer I/O operations would be needed to load or swap each user program into memory.
2. A program would no longer be constrained by the amount of physical memory that is available.
3. Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.


Fig. Diagram showing virtual memory that is larger than physical memory.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a


segmentation system. Demand segmentation can also be used to provide virtual memory.

Demand Paging

Demand paging is similar to a paging system with swapping (Fig 5.2). When we want to execute a process, we swap it into memory; rather than swapping in the entire process, however, only the needed pages are brought in.

When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

Hardware support is required to distinguish between those pages that are in memory and those pages that are on the disk, using the valid-invalid bit scheme. Valid and invalid pages can be identified by checking the bit, and marking a page invalid will have no effect if the process never attempts to access that page. While the process executes and accesses pages that are memory resident, execution proceeds normally.


Fig. Transfer of a paged memory to contiguous disk space

Access to a page marked invalid causes a page-fault trap. This trap is the result of the operating system's failure to bring the desired page into memory. A page fault can be handled as follows (Fig 5.3):

Fig. Steps in handling a page fault


1. We check an internal table for this process to determine whether the reference was a
valid or invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have
not yet brought in that page, we now page it in.
3. We find a free frame.

4. We schedule a disk operation to read the desired page into the newly allocated
frame.

5. When the disk read is complete, we modify the internal table kept with the
process and the page table to indicate that the page is now in memory.

6. We restart the instruction that was interrupted by the illegal address trap. The
process can now access the page as though it had always been memory.

Therefore, the operating system reads the desired page into memory and restarts
the process as though the page had always been in memory.

Page replacement is used to free frames that are no longer in use.
If no frame is free, a frame from another process may be reclaimed.

Advantages of Demand Paging:


1. Large virtual memory.
2. More efficient use of memory.
3. Unconstrained multiprogramming. There is no limit on degree of
multiprogramming.

Disadvantages of Demand Paging:

1. The number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of simple paged management techniques.
2. There is no explicit constraint on a job's address-space size.

Copy on Write
Copy on Write, or simply COW, is a resource-management technique. One of its main uses is in the implementation of the fork system call, in which it shares the virtual memory (pages) of the OS.

In UNIX-like operating systems, the fork() system call creates a duplicate of the parent process, which is called the child process.
The idea behind copy-on-write is that when a parent process creates a child process, both of these processes initially share the same pages in memory, and these shared pages are marked as copy-on-write. If either process tries to modify a shared page, only then is a copy of that page created, and the modification is done on the copy by that process, thus not affecting the other process.
Suppose there is a process P that creates a new process Q, and then process P modifies page 3.
The figures below show what happens before and after process P modifies page 3.
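A conceptual sketch of that scenario (illustrative data structures, not a real fork()): parent and child map the same frames after the fork, and the first write to a shared page copies only that page:

    frames = ["page0", "page1", "page2", "page3"]   # simulated physical memory
    parent = [0, 1, 2, 3]        # parent page table: page -> frame
    child  = list(parent)        # after "fork", child shares every frame

    def cow_write(page_table, page, new_data):
        shared = parent[page] == child[page]         # both still map this frame
        if shared:
            frames.append(frames[page_table[page]])  # copy just this one page
            page_table[page] = len(frames) - 1       # remap it privately
        frames[page_table[page]] = new_data

    cow_write(parent, 3, "page3-modified")  # process P modifies page 3
    print(parent, child)  # [0, 1, 2, 4] [0, 1, 2, 3] -- pages 0-2 still shared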


Page Replacement Algorithms in Operating Systems



In an operating system that uses paging for memory management, a page replacement
algorithm is needed to decide which page to replace when a new page comes in.
Page Fault – A page fault happens when a running program accesses a memory page that is mapped into the virtual address space but not loaded in physical memory.
Since actual physical memory is much smaller than virtual memory, page faults happen. In
case of page fault, Operating System might have to replace one of the existing pages with the
newly needed page. Different page replacement algorithms suggest different ways to decide
which page to replace. The target for all algorithms is to reduce the number of page faults.
Page Replacement Algorithms :
 First In First Out (FIFO) –
This is the simplest page replacement algorithm. In this algorithm, the operating
system keeps track of all pages in memory in a queue, with the oldest page at the
front. When a page needs to be replaced, the page at the front of the queue is
selected for removal.

Example 1 – Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty
slots —> 3 page faults.
When 3 comes, it is already in memory, so —> 0 page faults.
Then 5 comes; it is not available in memory, so it replaces the oldest page slot,
i.e., 1 —> 1 page fault.
6 comes; it is also not available in memory, so it replaces the oldest page slot,
i.e., 3 —> 1 page fault.
Finally, when 3 comes, it is not available, so it replaces 0 —> 1 page fault.
Total: 6 page faults.
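The count can be verified with a short simulation; a sketch, not part of the original notes:

    from collections import deque

    def fifo_faults(refs, nframes):
        frames, queue, faults = set(), deque(), 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) == nframes:
                    frames.discard(queue.popleft())  # evict the oldest page
                frames.add(page)
                queue.append(page)
        return faults

    print(fifo_faults([1, 3, 0, 3, 5, 6, 3], 3))  # 6 page faults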

Belady’s anomaly – Belady’s anomaly proves that it is possible to have more page
faults when increasing the number of page frames while using the First in First
Out (FIFO) page replacement algorithm. For example, if we consider reference
string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if
we increase slots to 4, we get 10 page faults.
 Optimal Page replacement –
In this algorithm, pages are replaced which would not be used for the longest
duration of time in the future.
Example 2 – Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the
empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is not going to be used for the
longest duration of time in the future —> 1 page fault.
0 is already there —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are
already available in memory.
Optimal page replacement is perfect, but not possible in practice as the operating
system cannot know future requests. The use of Optimal Page replacement is to
set up a benchmark so that other replacement algorithms can be analyzed against
it.

 Least Recently Used –


In this algorithm, the page that has been least recently used is replaced.
Example 3 – Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the
empty slots —> 4 page faults.
0 is already there —> 0 page faults.
When 3 comes, it takes the place of 7 because 7 is least recently used —> 1 page
fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the reference string —> 0 page faults, because the pages are
already available in memory.
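The same walk-through can be reproduced with an ordered dictionary that keeps the most recently used page at the end; a sketch:

    from collections import OrderedDict

    def lru_faults(refs, nframes):
        frames, faults = OrderedDict(), 0
        for page in refs:
            if page in frames:
                frames.move_to_end(page)          # refresh recency on a hit
            else:
                faults += 1
                if len(frames) == nframes:
                    frames.popitem(last=False)    # evict least recently used
                frames[page] = True
        return faults

    print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 4))  # 6 page faults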

Allocation of frames in Operating System


Virtual memory, an important aspect of operating systems, is implemented using demand
paging. Demand paging necessitates the development of a page-replacement algorithm and
a frame allocation algorithm. Frame allocation algorithms are used if you have multiple
processes; they help decide how many frames to allocate to each process.
There are various constraints to the strategies for the allocation of frames:

 You cannot allocate more than the total number of available frames.
 At least a minimum number of frames should be allocated to each process.
This constraint is supported by two reasons. The first reason is that, as fewer
frames are allocated, the page-fault ratio increases, decreasing the
performance of the process's execution. Secondly, there should be enough
frames to hold all the different pages that any single instruction can reference.
Frame allocation algorithms –
The two algorithms commonly used to allocate frames to a process are:
1. Equal allocation: In a system with x frames and y processes, each process
gets an equal number of frames, i.e., x/y (rounded down). For instance, if the system has 48 frames
and 9 processes, each process will get 5 frames. The three frames which are not
allocated to any process can be used as a free-frame buffer pool.
 Disadvantage: In systems with processes of varying sizes, it
does not make much sense to give each process equal frames.
Allocation of a large number of frames to a small process will
eventually lead to the wastage of a large number of allocated unused
frames.
2. Proportional allocation: Frames are allocated to each process according
to the process size.
For a process pi of size si, the number of allocated frames is ai = (si/S)*m, where S
is the sum of the sizes of all the processes and m is the number of frames in the
system. For instance, in a system with 62 frames, if there is a process of 10 KB and
another process of 127 KB, then the first process will be allocated (10/137)*62 ≈ 4
frames and the other process will get (127/137)*62 ≈ 57 frames.
 Advantage: All the processes share the available frames
according to their needs, rather than equally.
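The proportional formula from the example above, checked in a few lines (with integer truncation, as in the example):

    sizes = [10, 127]               # process sizes in KB
    m = 62                          # frames in the system
    S = sum(sizes)                  # 137
    alloc = [int(s / S * m) for s in sizes]
    print(alloc)                    # [4, 57]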
Global vs Local Allocation –
The number of frames allocated to a process can also dynamically change depending on
whether you have used global replacement or local replacement for replacing pages in case
of a page fault.
1. Local replacement: When a process needs a page which is not in the
memory, it can bring in the new page and allocate it a frame from its own set of
allocated frames only.
 Advantage: The pages in memory for a particular process and
the page fault ratio is affected by the paging behavior of only that
process.
 Disadvantage: A low priority process may hinder a high
priority process by not making its frames available to the high priority
process.
2. Global replacement: When a process needs a page which is not in the
memory, it can bring in the new page and allocate it a frame from the set of all
frames, even if that frame is currently allocated to some other process; that is, one
process can take a frame from another.
 Advantage: Does not hinder the performance of processes and
hence results in greater system throughput.
 Disadvantage: The page fault ratio of a process can not be
solely controlled by the process itself. The pages in memory for a
process depends on the paging behavior of other processes as well.

Techniques to handle Thrashing


Thrashing is a condition or a situation when the system is spending a major portion of its
time in servicing the page faults, but the actual processing done is very negligible.

The basic concept involved is that if a process is allocated too few frames, then there will be
too many and too frequent page faults. As a result, no useful work would be done by the CPU
and the CPU utilisation would fall drastically. The long-term scheduler would then try to
improve the CPU utilisation by loading some more processes into the memory thereby
increasing the degree of multiprogramming. This would result in a further decrease in
CPU utilization, triggering a chain reaction of higher page faults followed by an
increase in the degree of multiprogramming, called thrashing.
Locality Model –
A locality is a set of pages that are actively used together. The locality model states that as a
process executes, it moves from one locality to another. A program is generally composed of
several different localities which may overlap.

For example, when a function is called, it defines a new locality, where memory references are
made to the instructions of the function call, its local and global variables, etc. Similarly,
when the function is exited, the process leaves this locality.
Techniques to handle:
1. Working Set Model –
This model is based on the above-stated concept of the Locality Model.
The basic principle states that if we allocate enough frames to a process to
accommodate its current locality, it will fault only when it moves to some
new locality. But if the allocated frames are fewer than the size of the current
locality, the process is bound to thrash.

According to this model, based on a parameter A, the working set is defined as the
set of pages in the most recent ‘A’ page references. Hence, all the actively used
pages would always end up being a part of the working set.
The accuracy of the working set is dependent on the value of parameter A. If A is
too large, then working sets may overlap. On the other hand, for smaller values of
A, the locality might not be covered entirely.

If D is the total demand for frames and WSSi is the working-set size for
process i, then D = Σ WSSi.

Now, if ‘m’ is the number of frames available in the memory, there are 2
possibilities:
 (i) D>m i.e. total demand exceeds the number of frames, then
thrashing will occur as some processes would not get enough frames.
 (ii) D<=m, then there would be no thrashing.
If there are enough extra frames, then some more processes can be loaded in the
memory. On the other hand, if the summation of working-set sizes exceeds the
availability of frames, then some of the processes have to be suspended (swapped
out of memory).
This technique prevents thrashing along with ensuring the highest degree of
multiprogramming possible. Thus, it optimizes CPU utilisation.
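A sketch of the working-set test described above; the reference strings, window size A = 4, and frame count are all made-up values:

    def working_set(refs, A):
        # Pages touched in the most recent A references.
        return set(refs[-A:])

    ref_strings = [[1, 2, 1, 3, 2], [5, 6, 5, 6]]
    wss = [len(working_set(r, 4)) for r in ref_strings]
    D, m = sum(wss), 8              # D = 3 + 2 = 5
    print("thrashing likely" if D > m else "no thrashing")  # no thrashing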
2. Page Fault Frequency –
A more direct approach to handling thrashing is one that uses the page-fault
frequency concept.


The problem associated with Thrashing is the high page fault rate and thus, the
concept here is to control the page fault rate.
If the page fault rate is too high, it indicates that the process has too few frames
allocated to it. On the contrary, a low page fault rate indicates that the process has
too many frames.
Upper and lower limits can be established on the desired page fault rate as shown
in the diagram.
If the page fault rate falls below the lower limit, frames can be removed from the
process. Similarly, if the page fault rate exceeds the upper limit, more frames can
be allocated to the process.
In other words, the graphical state of the system should be kept limited to the
rectangular region formed in the given diagram.
Here too, if the page fault rate is high with no free frames, then some of the
processes can be suspended and frames allocated to them can be reallocated to
other processes. The suspended processes can then be restarted later.
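The control policy can be summarized in a few lines; the fault-rate bounds below are assumed for illustration:

    LOWER, UPPER = 0.02, 0.10       # illustrative page-fault-rate bounds

    def adjust_frames(fault_rate, frames, free_frames):
        if fault_rate > UPPER and free_frames > 0:
            return frames + 1       # too many faults: allocate another frame
        if fault_rate < LOWER and frames > 1:
            return frames - 1       # too few faults: reclaim a frame
        return frames

    print(adjust_frames(0.15, 4, 2))  # 5
    print(adjust_frames(0.01, 4, 2))  # 3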

Memory mapped I/O and Isolated I/O


A CPU needs to communicate with various memory and input-output (I/O) devices, and
data between the processor and these devices flows with the help of the system bus.
There are three ways in which the system bus can be allotted to them:
1. Separate set of address, control and data bus to I/O and memory.
2. Have common bus (data and address) for I/O and memory but separate
control lines.
3. Have common bus (data, address, and control) for I/O and memory.
The first case is simple because both have different sets of address space and
instructions, but it requires more buses.
Isolated I/O –
In Isolated I/O we have a common bus (data and address) for I/O and memory, but
separate read and write control lines for I/O. When the CPU decodes an instruction,
if the data is for I/O, it places the address on the address line and sets the I/O
read or write control line, due to which data transfer occurs between the CPU and
the I/O device. As the address spaces of memory and I/O are isolated, the technique
is so named. The addresses for I/O here are called ports. Here we have different
read-write instructions for I/O and memory.


Memory Mapped I/O –


In this case every bus is common, due to which the same set of instructions works for
memory and I/O. Hence we manipulate I/O the same as memory, and both have the same
address space, due to which the addressing capability of memory becomes less, because
some part is occupied by the I/O.


Differences between memory mapped I/O and isolated I/O –

Isolated I/O                                          | Memory Mapped I/O
------------------------------------------------------|------------------------------------------------------
Memory and I/O have separate address spaces           | Both have the same address space
All addresses can be used by the memory               | Due to the addition of I/O addressing, the addresses
                                                      | available to memory become less
Separate instructions control read and write          | The same instructions can control both I/O and memory
operations in I/O and memory                          |
I/O addresses are called ports                        | Normal memory addresses are used for both
More efficient due to separate buses                  | Less efficient
Larger in size due to more buses                      | Smaller in size
Complex, as separate logic is used to control both    | Simpler logic, as I/O is also treated as memory only

Allocating kernel memory (buddy system and slab system)

Two strategies for managing free memory that is assigned to kernel processes:

1. Buddy system –

The buddy allocation system is an algorithm in which a larger memory block is divided into small parts to satisfy the request. This algorithm is used to give a best fit. The two smaller parts of a block are of equal size and are called buddies. In the same manner, one of the two buddies will further divide into smaller parts until the request is fulfilled. The benefit of this technique is that the two buddies can combine to form a block of larger size according to the memory request.
Example – If a request of 25 KB is made, then a block of size 32 KB is allocated.


Types of Buddy System –

1. Binary buddy system
2. Fibonacci buddy system
3. Weighted buddy system
4. Tertiary buddy system

Why buddy system?
If the partition size and process size are different, then a poor match occurs and space may be used inefficiently.
It is easy to implement and more efficient than dynamic allocation.

Binary buddy system –
The binary buddy system maintains a list of the free blocks of each size (called a free list), so that it is easy to find a block of the desired size, if one is available. If no block of the requested size is available, Allocate searches for the first nonempty list of blocks of at least the size requested. In either case, a block is removed from the free list.
Example – Assume the size of a memory segment is initially 256 KB and the kernel requests 25 KB of memory. The segment is initially divided into two buddies, which we will call A1 and A2, each 128 KB in size. One of these buddies is further divided into two 64 KB buddies, say B1 and B2. But the next highest power of two above 25 KB is 32 KB, so either B1 or B2 is further divided into two 32 KB buddies (C1 and C2), and finally one of these buddies is used to satisfy the 25 KB request. A split block can only be merged with its unique buddy block, which then reforms the larger block they were split from.

Fibonacci buddy system –
This is the system in which blocks are divided into sizes which are Fibonacci numbers, satisfying the relation Z(i) = Z(i-1) + Z(i-2): 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610. The address calculation for the binary and weighted buddy systems is straightforward, but the original procedure for the Fibonacci buddy system was either limited to a small, fixed number of block sizes or a time-consuming computation.
Advantages –
 In comparison to other simpler techniques such as dynamic allocation, the buddy memory system has little external fragmentation.
 The buddy memory allocation system is implemented with the use of a binary tree to represent used or unused split memory blocks.
 The buddy system is very fast to allocate or deallocate memory.
 In buddy systems, the cost to allocate and free a block of memory is low compared to that of best-fit or first-fit algorithms.
 Another advantage is coalescing.
 Address calculation is easy.

What is coalescing?
How quickly adjacent buddies can be combined to form larger segments is known as coalescing. For example, when the kernel releases the C1 unit it was allocated, the system can coalesce C1 and C2 into a 64 KB segment. The segment B1 can in turn be coalesced with its buddy B2 to form a 128 KB segment. Ultimately we can end up with the original 256 KB segment.

Drawback –
The main drawback of the buddy system is internal fragmentation, as a larger block of memory is acquired than required. For example, if a 36 KB request is made, it can only be satisfied by a 64 KB segment, and the remaining memory is wasted.
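Two calculations used throughout the buddy examples above, sketched in a few lines: rounding a request up to the next power of two, and finding a block's buddy (whose address differs only in the bit equal to the block size):

    def block_size(request_kb):
        size = 1
        while size < request_kb:
            size *= 2
        return size

    def buddy_of(addr, size):
        return addr ^ size          # flip the size bit to get the buddy

    print(block_size(25))           # 32 KB, as in the 25 KB example
    print(block_size(36))           # 64 KB, showing internal fragmentation
    print(buddy_of(0, 32), buddy_of(32, 32))  # 32 0 -- buddies of each other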

2. Slab Allocation –

The second strategy for allocating kernel memory is known as slab allocation. It eliminates fragmentation caused by allocations and deallocations. This method is used to retain allocated memory that contains a data object of a certain type, for reuse upon subsequent allocations of objects of the same type. In slab allocation, memory chunks suitable to fit data objects of certain type or size are preallocated. The cache does not free the space immediately after use, although it keeps track of data that are required frequently, so that whenever a request is made the data will be served very fast. Two terms are required:

 Slab – A slab is made up of one or more physically contiguous pages. The slab is the actual container of data associated with objects of the specific kind of the containing cache.
 Cache – Cache represents a small amount of very fast memory. A cache consists of one or more slabs.
There is a single cache for each unique kernel data structure.

Example –
 A separate cache for the data structure representing process descriptors
 A separate cache for file objects
 A separate cache for semaphores, etc.
Each cache is populated with objects that are instantiations of the kernel data structure the cache represents. For example, the cache representing semaphores stores instances of semaphore objects, and the cache representing process descriptors stores instances of process descriptor objects.
Implementation –
The slab allocation algorithm uses caches to store kernel objects. When a cache is created, a number of objects, initially marked as free, are allocated to the cache. The number of objects in the cache depends on the size of the associated slab.
Example – A 12 KB slab (made up of three contiguous 4 KB pages) could store six 2 KB objects. Initially all objects in the cache are marked as free. When a new object for a kernel data structure is needed, the allocator can assign any free object from the cache to satisfy the request. The object assigned from the cache is marked as used.
In Linux, a slab may be in one of three possible states:
1. Full – All objects in the slab are marked as used.
2. Empty – All objects in the slab are marked as free.
3. Partial – The slab consists of both used and free objects.
The slab allocator first attempts to satisfy the request with a free object in a partial slab. If none exists, a free object is assigned from an empty slab. If no empty slabs are available, a new slab is allocated from contiguous physical pages and assigned to a cache.
Benefits of slab allocator –
 No memory is wasted due to fragmentation, because each unique kernel data structure has an associated cache.
 Memory requests can be satisfied quickly.
 The slab allocation scheme is particularly effective for managing objects that are frequently allocated or deallocated. The act of allocating and releasing memory can be a time-consuming process; however, objects are created in advance and thus can be quickly allocated from the cache. When the kernel has finished with an object and releases it, it is marked as free and returned to its cache, making it immediately available for subsequent requests from the kernel.
