Unit IV - OS
1.1.1 Introduction:
It has the ability to recover the operating system of a machine to the identical state it was in at a given point in time. It is therefore important to ensure that the operating system is internally consistent at any particular point in time.
Operating System (KCS-401) Notes By Nilotpal Pathak
network and boot details.
A Resident monitor (1950s-1970s) was a piece of software that was an integral part of a
general-use computer using punched card input (Batch systems). The resident monitor
governed the machine before and after each control card was executed, loaded and
interpreted each control card, and acted as a job sequencer for batch operations.
Early computers were physically enormous machines run from a console. First, the program had to be loaded manually from the front-panel switches or from punched cards. Then the appropriate buttons were pushed to load the starting address and begin execution of the program. Resident monitors were replaced by the boot monitor, boot loader, or BIOS, and by the operating-system kernel, when rewritable base instruction sets became necessary.
The primitive operating system in charge of executing the batch jobs was called a resident monitor. It resided permanently in memory and monitored the execution of each job in succession.
The control-card interpreter is responsible for
reading and carrying out the instructions on the cards at
the point of execution.
The loader is invoked by the control-card interpreter to
load system programs and application programs into
memory at intervals.
The device drivers are used by both the control-card
interpreter and the loader for the system's I/O devices
to perform I/O. Often, the system and application programs are linked to these same
device drivers, providing continuity in their operation, as well as saving memory space
and programming time.
c) Execution Time: If the process can be moved during its execution from one memory segment
to another, then binding must be delayed until run time.
Logical and physical addresses are the same in compile-time and load-time address-
binding schemes.
The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).
In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory.
The user program deals with logical addresses; it never sees the real physical addresses.
1.3.1.2 Dynamic Loading: To obtain better memory space utilization, we can use dynamic
loading. With dynamic loading, a routine is not loaded until it is called.
1.3.1.4 Overlays: The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into the space previously occupied by instructions that are no longer needed.
1.3.2 Contiguous Memory Allocation: The memory is usually divided into two partitions: one for the resident operating system (low memory) and one for the user processes (high memory).
1.3.2.1 Memory Protection: Here we protect the O.S. from user processes and protect user processes from one another. We can provide this protection by using two registers:
(i) Relocation Register contains the value of the smallest physical address allocated to the process.
(ii) Limit Register contains the range of logical addresses. Each logical address must be less than the limit register.
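This relocation/limit mechanism can be sketched with a small simulation (the register values below are hypothetical, chosen only for illustration):

```python
def map_address(logical_addr, relocation, limit):
    """Check a logical address against the limit register, then add the
    relocation register to form the physical address (MMU behaviour)."""
    if logical_addr >= limit:           # outside the process's address space
        raise MemoryError("trap: addressing error")
    return relocation + logical_addr

# Hypothetical process: loaded at physical address 14000, 3000 bytes long.
assert map_address(346, relocation=14000, limit=3000) == 14346
```

Any logical address at or beyond the limit traps to the operating system instead of being mapped.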
In this method, memory is divided into partitions whose sizes are fixed. The OS is placed into the lowest bytes of memory. Processes are classified on entry to the system according to their memory requirements. We need one Process Queue (PQ) for each class of process. If a process is selected to be allocated memory, it goes into memory and competes for the processor. The number of fixed partitions gives the degree of multiprogramming. Since each queue has its own memory region, there is no competition between queues for memory.
Normally, a process swapped out will eventually be swapped back into the same
partition. But this restriction can be relaxed with dynamic relocation.
In some cases, an executing process may request more memory than its partition size. Say
we have a 6 KB process running in a 6 KB partition and it now requires 1 KB more memory.
Then, the following policies are possible:
a) Return control to the user program. Let the program decide either quit or
modify its operation so that it can run (possibly slow) in less space.
b) Abort the process. (The user states the maximum amount of memory that the
process will need, so it is the user’s responsibility to stick to that limit).
c) If dynamic relocation is being used, swap the process out to the next largest
PQ and locate into that partition when its turn comes.
The main problem with the fixed partitioning method is how to determine the number of
partitions, and how to determine their sizes.
Fragmentation:
If a whole partition is currently not being used, this is called external fragmentation.
If a partition is being used by a process requiring less memory than the partition size, the unused space inside the partition is called internal fragmentation.
With fixed partitions we have to deal with the problem of determining the number and sizes
of partitions to minimize internal and external fragmentation. If we use variable partitioning
instead, then partition sizes may vary dynamically.
In the variable partitioning method, we keep a table (linked list) indicating used/free areas in
memory. Initially, the whole memory is free and it is considered as one large block. When a new
process arrives, the OS searches for a block of free memory large enough for that process. We
keep the rest available (free) for the future processes. If a block becomes free, then the OS tries
to merge it with its neighbors if they are also free.
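The merge step described above can be sketched as follows (the free list is modeled as sorted (start, size) pairs; the block addresses are made up):

```python
def free_block(free_list, start, size):
    """Add a freed block to the free list and coalesce it with any
    adjacent free neighbours, as the OS does on deallocation."""
    blocks = sorted(free_list + [(start, size)])
    merged = [blocks[0]]
    for s, sz in blocks[1:]:
        ps, psz = merged[-1]
        if ps + psz == s:            # touches the previous free block: merge
            merged[-1] = (ps, psz + sz)
        else:
            merged.append((s, sz))
    return merged

# Freeing (300, 200) joins the free areas on both sides into one block.
assert free_block([(100, 200), (500, 100)], 300, 200) == [(100, 500)]
```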
There are three algorithms for searching the list of free blocks for a specific amount of
memory. (DYNAMIC STORAGE ALLOCATION PROBLEM)
a) First Fit : Allocate the first free block that is large enough for the new process. This is a fast
algorithm.
b) Best Fit : Allocate the smallest block among those that are large enough for the new process.
In this method, the OS has to search the entire list, or it can keep the list sorted and stop at the first entry whose size is at least the size of the new process. This algorithm produces the smallest leftover block. However, it requires more time for searching the whole list or keeping it sorted.
c) Worst Fit: Allocate the largest block among those that are large enough for the new process. Again, a search of the entire list (or keeping it sorted) is needed. This algorithm produces the largest leftover block.
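The three searches can be sketched as follows (the free-block sizes below are hypothetical):

```python
def first_fit(free_blocks, size):
    """Index of the first block large enough, or None."""
    return next((i for i, b in enumerate(free_blocks) if b >= size), None)

def best_fit(free_blocks, size):
    """Index of the smallest block large enough, or None."""
    fits = [(b, i) for i, b in enumerate(free_blocks) if b >= size]
    return min(fits)[1] if fits else None

def worst_fit(free_blocks, size):
    """Index of the largest block large enough, or None."""
    fits = [(b, i) for i, b in enumerate(free_blocks) if b >= size]
    return max(fits)[1] if fits else None

blocks = [100, 500, 200, 300, 600]      # hypothetical free-block sizes
assert first_fit(blocks, 212) == 1      # first block >= 212 is the 500
assert best_fit(blocks, 212) == 3       # smallest fit is the 300
assert worst_fit(blocks, 212) == 4      # largest fit is the 600
```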
Compaction: Compaction is a method to overcome the external fragmentation problem. All free
blocks are brought together as one large block of free space. Compaction requires dynamic
relocation. Certainly, compaction has a cost and selection of an optimal compaction strategy is
difficult. One method for compaction is swapping out those processes that are to be moved
within the memory, and swapping them into different memory locations.
When a process is to be executed, its pages are loaded into any available memory frames
from the backing store. The backing store is divided into fixed-sized blocks that are of the
same size as the memory frames.
Every address generated by the CPU is divided into two parts: a page number (p) and a
page offset (d).
The page number is used as an index into a page table.
The page table contains the base address of each page in physical memory.
This base address is combined with the page offset to define the physical memory address
that is sent to the memory unit.
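The p/d split and table lookup can be sketched as follows (the page size and page table here are hypothetical):

```python
PAGE_SIZE = 1024   # assume 1K pages, so the offset is the low 10 bits

def paged_address(logical_addr, page_table):
    """Split the logical address into page number p and offset d, then
    use the page table (p -> frame) to form the physical address."""
    p, d = divmod(logical_addr, PAGE_SIZE)
    return page_table[p] * PAGE_SIZE + d

# Hypothetical page table: page 0 -> frame 5, page 1 -> frame 2.
assert paged_address(7, {0: 5, 1: 2}) == 5 * 1024 + 7
assert paged_address(1024 + 7, {0: 5, 1: 2}) == 2 * 1024 + 7
```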
(Figure: the page table maps each page number to a frame number; the frame number combined with the offset, 0 … n-1, forms the physical address.)
Example 1:
a. Show the physical memory implementation using the logical memory and page table as in the figure.
Example 2:Consider a logical address space of eight pages of 1,024 words each, mapped onto a
physical memory of 32 frames.
a. How many bits are in the logical address?
b. How many bits are in the physical address?
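For Example 2 the widths follow directly from the sizes (8 = 2^3 pages, 1,024 = 2^10 words, 32 = 2^5 frames); a quick check:

```python
pages, words_per_page, frames = 8, 1024, 32

page_bits   = pages.bit_length() - 1            # 8    = 2**3  -> 3 bits
offset_bits = words_per_page.bit_length() - 1   # 1024 = 2**10 -> 10 bits
frame_bits  = frames.bit_length() - 1           # 32   = 2**5  -> 5 bits

logical_bits  = page_bits + offset_bits    # a. 13-bit logical address
physical_bits = frame_bits + offset_bits   # b. 15-bit physical address

assert (logical_bits, physical_bits) == (13, 15)
```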
■ In this scheme every data/instruction access requires two memory accesses. One for the
page table and one for the data/instruction.
■ The two memory access problem can be solved by the use of a special fast-lookup
hardware cache called associative memory or translation look-aside buffers (TLBs).
■ The percentage of time that a particular page number is found in the TLB is called the hit ratio.
Example1: Consider a paging system with the page table stored in memory.
a. If a memory reference takes 200 nanoseconds, how long does a paged memory reference take?
Ans. 400 nanoseconds; 200 nanoseconds to access the page table and 200 nanoseconds to access
the word in memory.
b. If we add associative registers, and 75 percent of all page-table references are found in the
associative registers, what is the effective memory reference time? (Assume that finding a page-
table entry in the associative registers takes zero time, if the entry is there.)
Ans. Effective access time = 0.75 × (200 nanoseconds) + 0.25 × (400 nanoseconds) = 250
nanoseconds.
Example 2: If the hit ratio is 80%, it takes 20 nanoseconds to search the TLB, and it takes 100 nanoseconds to access memory, what will be the effective memory access time?
Ans. Effective access time = 0.80 × (120 nanoseconds) + 0.20 × (220 nanoseconds) = 140 nanoseconds.
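Both effective-access-time examples follow the same formula, which can be checked in a few lines (a sketch; the TLB-search time is taken as zero in Example 1, as that example states):

```python
def eat(hit_ratio, tlb_ns, mem_ns):
    """Effective access time with a TLB: a hit costs one memory access,
    a miss costs an extra memory access to read the page table."""
    hit_cost = tlb_ns + mem_ns
    miss_cost = tlb_ns + 2 * mem_ns
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

assert round(eat(0.75, 0, 200), 6) == 250    # Example 1: 250 ns
assert round(eat(0.80, 20, 100), 6) == 140   # Example 2: 140 ns
```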
1.3.3.3 Protection:
■ Memory protection implemented by associating protection bit with each frame
■ Valid-invalid bit attached to each entry in the page table:
● “valid” indicates that the associated page is in the process’ logical address space,
and is thus a legal page.
● “invalid” indicates that the page is not in the process’ logical address space.
1.3.4 Segmentation:
a) A program is a collection of segments. A segment is a logical unit such as a procedure, function, method, object, common block, or stack.
b) A logical address consists of a pair: <segment-number, offset>.
c) Segment table – maps the two-dimensional logical addresses to one-dimensional physical addresses; each table entry has:
● base – contains the starting physical address where the segment resides in memory
● limit – specifies the length of the segment
d) Segment-table base register (STBR) points to the segment table’s location in memory
Example:
Problem 2: What will be the physical memory mapping of byte 852 of segment 3?
Ans. 3200 (the base of segment 3) + 852 = 4052.
Problem 3: What will be the physical memory mapping of byte 1222 of segment 0?
Ans. It would result in a trap to the operating system, as this segment is only 1,000 bytes long.
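The lookup in the two problems can be sketched as follows (the table below is hypothetical except for the values the problems give: base 3200 for segment 3, and a 1,000-byte limit for segment 0):

```python
# segment -> (base, limit); only the starred values come from the problems.
SEGMENT_TABLE = {0: (1400, 1000),   # limit 1000 (*), base assumed
                 3: (3200, 900)}    # base 3200 (*), limit assumed

def seg_translate(segment, offset):
    """Map <segment, offset> to a physical address; offsets at or past
    the segment limit trap to the operating system."""
    base, limit = SEGMENT_TABLE[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

assert seg_translate(3, 852) == 4052   # Problem 2
# seg_translate(0, 1222) raises MemoryError, as in Problem 3
```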
1.3.4.3 Protection:
a) With each segment, read/write/execute privileges are associated.
b) Protection bits are associated with segments; code sharing occurs at the segment level.
1.3.4.4 Fragmentation:
a) The long-term scheduler must find and allocate memory for all the segments of a user program. This situation is similar to paging except that the segments are of variable length, whereas pages are all the same size.
b) Thus, as with the variable-sized partition scheme, memory allocation is a dynamic storage allocation problem, usually solved with a best-fit or first-fit algorithm.
c) Segmentation may cause external fragmentation, when all blocks of free memory are too small to accommodate a segment.
a) The logical address space of a process is divided into two partitions. The first partition consists of up to 8 K segments that are private to that process. The second partition consists of up to 8 K segments that are shared among all the processes.
b) Information about the first partition is kept in the Local Descriptor Table (LDT), and information about the second partition is kept in the Global Descriptor Table (GDT).
c) The logical address is a pair (selector, offset), where the selector is a 16-bit number. The offset is a 32-bit number specifying the location of the byte within the segment in question.
The 16-bit selector has the following layout:
s (13 bits): segment number
g (1 bit): whether the segment is in the GDT or LDT
p (2 bits): protection
e) In the figure, first the limit is used to check for address validity. If the address is not
valid, a memory fault is generated, resulting in a trap to the operating system. If it is
valid, the value of offset is added to the value of the base, resulting in a 32-bit linear
address. This address is then translated into a physical address.
1. We check an internal table (usually kept with the process control block) for this process to
determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet
brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the
page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the
page as though it had always been in memory.
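The six steps can be sketched as a toy demand pager (the structures below — free-frame list, backing store, page table — are simplified stand-ins, not a real implementation):

```python
free_frames = [0, 1, 2]                        # free-frame list
backing_store = {"A": "pageA", "B": "pageB"}   # page -> contents on disk
page_table = {}                                # valid entries only: page -> frame
memory = {}                                    # frame -> contents

def access(page):
    """Return the frame holding `page`, handling a page fault if needed."""
    if page in page_table:               # step 1: valid and resident
        return page_table[page]
    if page not in backing_store:        # step 2: invalid -> terminate
        raise MemoryError("terminate process: invalid access")
    frame = free_frames.pop()            # step 3: take a free frame
    memory[frame] = backing_store[page]  # steps 4-5: disk read, update tables
    page_table[page] = frame
    return frame                         # step 6: restart the instruction

f = access("A")            # first reference faults and pages "A" in
assert access("A") == f    # second reference finds it in memory
```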
The main components of the page-fault service time are:
1. Checking the address and finding a free frame or victim page (fast)
2. Swapping the victim page out if it has been modified (slow)
3. Reading the desired page in from the disk (slow)
4. Context switch to the process and resumption of its execution (fast)
Once all the pages it needs are in memory, the process can execute with no more faults. This scheme is pure demand paging: never bring a page into memory until it is required.
Example
Assume there are 3 frames, and consider the reference string 5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0.
Show the content of memory after each memory reference if the FIFO page replacement algorithm
is used. Also find the number of page faults.
Belady’s Anomaly
As an exercise, consider the reference string below. Apply the FIFO method and find the number
of page faults considering different numbers of frames. Then examine whether the replacement
suffers from Belady’s anomaly.
Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
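Both the example above and the Belady exercise can be checked with a short FIFO simulation (a sketch):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # evict the oldest resident page
                frames.discard(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

assert fifo_faults([5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0], 3) == 10

# Belady's anomaly: adding a frame can increase the number of faults.
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_faults(belady, 3) == 9
assert fifo_faults(belady, 4) == 10
```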
1.4.2.3 Least Recently Used (LRU):
In this algorithm, the victim is the page that has not been used for the longest period of time. LRU is a stack algorithm, so it does not suffer from Belady’s anomaly.
The OS using this method has to associate with each page the time it was last used, which means some extra storage. In the simplest way, the OS sets the reference bit of a page to "1" when it is referenced. This bit will not give the order of use, but it will simply tell whether the corresponding frame has been referenced recently or not. The OS resets all reference bits periodically.
Example
Assume there are 3 frames, and consider the reference string 5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0.
Show the content of memory after each memory reference if the LRU page replacement algorithm is
used. Also find the number of page faults.
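A sketch of true LRU (keeping the full recency order rather than the approximate reference bit described above):

```python
def lru_faults(refs, nframes):
    """Count page faults under LRU replacement; `frames` is kept
    ordered from least to most recently used."""
    frames, faults = [], 0
    for page in refs:
        if page in frames:
            frames.remove(page)       # hit: refresh its recency below
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)         # evict the least recently used page
        frames.append(page)           # page becomes most recently used
    return faults

assert lru_faults([5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0], 3) == 9
```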
1.4.3 Thrashing
A process is thrashing if it is spending more time for paging in/out (due to frequent page
faults) than executing.
Thrashing causes considerable degradation in system performance. If a process does not
have enough frames allocated to it, it will issue a page fault. A victim page
must be chosen, but if all pages are in active use, the victim will be needed again almost
at once. So, the victim-page selection and a new page replacement will need to be done in a
very short time. This means another page fault will be issued shortly, and so on and so forth.
Local replacement algorithms can limit the effects of thrashing. If the degree of
multiprogramming is increased over a limit, processor utilization falls down considerably
because of thrashing.
To prevent thrashing, we must provide a process as many frames as it needs. For this, a
model called the working set model is developed which depends on the locality model of
program execution.
Example : Assume ∆ = 10 , and consider the reference string given below, on which the
window is shown at different time instants
Now, compute the WS size (WSS) for each process, and find the total demand, D of the
system at that instance in time, as the summation of all the WS sizes.
If the number of frames is n, then
a. If D > n , the system is thrashing.
b. If D < n, the system is all right, the degree of multiprogramming can possibly be increased.
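The working-set computation can be sketched as follows (the reference string here is hypothetical, since the string from the original figure is not reproduced):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (0-indexed): this is WS(t, delta)."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1, 6, 2, 3, 4, 1, 2, 3, 4]
ws = working_set(refs, t=9, delta=10)
assert ws == {1, 2, 5, 7}     # WSS at t = 9 is 4

# D = sum of the WSS values over all processes; comparing D with the
# number of frames n decides whether the system is thrashing.
```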
A. Hierarchical Paging:
Break up the logical address space into multiple levels of page tables.
A simple technique is a two-level page table. For example, a logical address (on a 32-bit machine with a 1K page size) is divided into a 22-bit page number and a 10-bit page offset; the page number is further divided into a 12-bit index into the outer page table and a 10-bit index into the inner page table.
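The bit-level split of such a two-level address can be sketched as follows (the 12/10/10 split is one possible choice for a 32-bit address with 1K pages):

```python
def split(addr):
    """Split a 32-bit logical address into outer index p1 (12 bits),
    inner index p2 (10 bits), and page offset d (10 bits)."""
    d = addr & 0x3FF            # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: inner page-table index
    p1 = addr >> 20             # top 12 bits: outer page-table index
    return p1, p2, d

assert split((1 << 20) | (3 << 10) | 7) == (1, 3, 7)
assert split(0xFFFFFFFF) == (0xFFF, 0x3FF, 0x3FF)
```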
2. On a system using demand-paged memory, it takes 120 ns to satisfy a memory request if the page is in memory. If the page is not in memory, the request takes on average 5 ms. What page-fault rate is needed to achieve an effective access time of 1 microsecond? Assume the system is running only a single process and the CPU is idle during page swaps.
3. A system using demand-paged memory takes 250 ns to satisfy a memory request if the page
is in memory. If the page is not in memory, the request takes on average 5 ms if a free frame is
available or the page to be swapped out has not been modified, or 12 ms if the page to be swapped
out has been modified. What is the effective access time if the page fault rate is 2%, and 40% of
the time the page to be replaced has been modified? Assume the system is running only a single
process and the CPU is idle during page swaps.
4. On a system using a disk cache, the mean access time is 41.2 ms, the mean cache access time
is 2ms, the mean disk access time is 100 ms and the system has 8MB of cache memory. For each
doubling of the amount of memory, the miss rate is halved. How much memory must be added to
reduce the mean access time to 20 ms? Assume the amount of memory may only increase by
doubling.
5. On a system using paging and segmentation, the virtual address space consists of up to 8
segments where each segment can be up to 2^29 bytes long. The hardware pages each segment into
256-byte pages. Determine the bits needed in the virtual address to specify the
(i) Segment number (ii) Page number (iii) Offset within page (iv) Entire virtual address
6. How many page faults occur for the optimal page replacement algorithm with the following
reference string for four page frames:
1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2
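Problem 6 can be checked with a simulation of the optimal (Belady) algorithm, which evicts the page whose next use lies farthest in the future (a sketch):

```python
def optimal_faults(refs, nframes):
    """Count page faults under optimal replacement."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            def next_use(q):
                # distance to the next reference of q; pages never
                # referenced again are perfect victims
                return future.index(q) if q in future else float("inf")
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 5, 3, 4, 1, 6, 7, 8, 7, 8, 9, 7, 8, 9, 5, 4, 5, 4, 2]
assert optimal_faults(refs, 4) == 11
```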