
Operating Systems (MCAP2104)

Question Bank

CPU Scheduling
1. Explain the difference between preemptive and nonpreemptive scheduling. [3]
A:
Basic: In preemptive scheduling, the resources are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them until it completes its CPU burst or switches to the waiting state.
Interrupt: In preemptive scheduling, a process can be interrupted in between. In non-preemptive scheduling, a process cannot be interrupted until it terminates or switches to the waiting state.
Starvation: In preemptive scheduling, if high-priority processes keep arriving in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, another process with a shorter burst time may starve.
Overhead: Preemptive scheduling has the overhead of scheduling (context-switching) the processes. Non-preemptive scheduling has no such overhead.
Flexibility: Preemptive scheduling is flexible. Non-preemptive scheduling is rigid.
Cost: Preemptive scheduling has costs associated with it. Non-preemptive scheduling does not.

2. What advantage is there in having different time-quantum sizes at different levels of a multilevel queuing system? [3]
A:
Processes that need frequent servicing, such as interactive processes, can be placed in a queue with a small time quantum, giving them quick response. Processes that do not need frequent servicing can be placed in a queue with a larger quantum, requiring fewer context switches to complete their processing and thus making more efficient use of the computer.

3. What (if any) relation holds between the following pairs of algorithm sets? [5]

(i) Priority and SJF


The shortest job has the highest priority.

(ii) Multilevel feedback queues and FCFS


The lowest level of MLFQ is FCFS.

(iii) Priority and FCFS


FCFS gives the highest priority to the job having been in existence the
longest.

(iv) RR and SJF


None.
4. Distinguish between PCS and SCS scheduling [3]
PCS scheduling is done local to the process. It is how the thread library
schedules threads onto available LWPs. SCS scheduling is the situation where
the operating system schedules kernel threads. On systems using either many-to-
one or many-to-many, the two scheduling models are fundamentally different.
On systems using one-to-one, PCS and SCS are the same.
5. Assume that an operating system maps user-level threads to the kernel using the
many-to-many model and that the mapping is done through the use of LWPs. Furthermore,
the system allows program developers to create real-time threads. Is it necessary to bind a
real-time thread to an LWP? [4]

Yes. Timing is crucial to real-time applications. If a thread is marked as real-time but is not bound to an LWP, the thread may have to wait to be attached to an LWP before
running. Consider if a real-time thread is running (is attached to an LWP) and then
proceeds to block (i.e. must perform I/O, has been preempted by a higher-priority real-
time thread, is waiting for a mutual exclusion lock, etc.) While the real-time thread is
blocked, the LWP it was attached to has been assigned to another thread. When the real-
time thread has been scheduled to run again, it must first wait to be attached to an LWP.
By binding an LWP to a real-time thread, you ensure the thread will be able to run with minimal delay once it is scheduled.

Deadlock

1. Define Deadlock? [2]


A deadlock is a situation in which two computer programs sharing the same resource are effectively preventing each other from accessing the resource, resulting in both programs ceasing to function.

2. What are the necessary conditions for deadlock?[4]


Deadlock can arise if four conditions hold simultaneously.
• Mutual exclusion: only one process at a time can use a
resource.
• Hold and wait: a process holding at least one resource is
waiting to acquire additional resources held by other
processes.
• No preemption: a resource can be released only voluntarily
by the process holding it, after that process has completed its
task.
• Circular wait: there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is held by
P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is
waiting for a resource that is held by Pn, and Pn is waiting for
a resource that is held by P0.

3. Define Resource-allocation graph. Explain how a resource-allocation graph is used in deadlock avoidance. [2+3]

Resource-Allocation Graph

A set of vertices V and a set of edges E.

• V is partitioned into two types:


– P = {P1, P2, …, Pn}, the set consisting of all the processes in
the system.

– R = {R1, R2, …, Rm}, the set consisting of all resource types


in the system.
• request edge – directed edge Pi → Rj
• assignment edge – directed edge Rj → Pi

Operating System Concepts Silberschatz and Galvin1999


7.6

A resource allocation graph tracks which resource is held by which


process and which process is waiting for a resource of a particular type. It
is a very powerful and simple tool to illustrate how interacting processes can
deadlock. If a process is using a resource, an arrow is drawn from the
resource node to the process node. If a process is requesting a resource, an
arrow is drawn from the process node to the resource node.

If there is a cycle in the Resource Allocation Graph and each resource in


the cycle provides only one instance, then the processes will deadlock.
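Since a cycle among single-instance resources implies deadlock, checking the graph reduces to cycle detection in a directed graph. Below is a minimal illustrative sketch (not part of the original answer) in Python; the graph contents are hypothetical:

```python
# Detect a cycle in a resource-allocation graph (single-instance resources),
# where a cycle implies deadlock. The graph maps each node (process or
# resource name) to the nodes its outgoing edges point to.

def has_cycle(graph):
    """DFS with three colors: unvisited (WHITE), in-progress (GRAY), done (BLACK)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:      # back edge -> cycle found
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock.
rag = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
print(has_cycle(rag))   # True
```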

4. Explain the Deadlock prevention techniques. [6]

Deadlocks can be prevented by preventing at least one of the four required conditions:

1. Mutual Exclusion

 Shared resources such as read-only files do not lead to deadlocks.


 Unfortunately some resources, such as printers and tape drives, require
exclusive access by a single process.

2. Hold and Wait

 To prevent this condition, processes must be prevented from holding one or more resources while simultaneously waiting for one or more others. There are several possibilities for this:
o Require that all processes request all resources at one time. This
can be wasteful of system resources if a process needs one
resource early in its execution and doesn't need some other
resource until much later.
o Require that processes holding resources must release them before
requesting new resources, and then re-acquire the released
resources along with the new ones in a single new request. This
can be a problem if a process has partially completed an operation
using a resource and then fails to get it re-allocated after releasing
it.
o Either of the methods described above can lead to starvation if a
process requires one or more popular resources.

3. No Preemption
 Preemption of process resource allocations can prevent this condition of
deadlocks, when it is possible.
o One approach is that if a process is forced to wait when requesting
a new resource, then all other resources previously held by this
process are implicitly released (preempted), forcing this process
to re-acquire the old resources along with the new resources in a
single request, similar to the previous discussion.
o Another approach is that when a resource is requested and not
available, then the system looks to see what other processes
currently have those resources and are themselves blocked
waiting for some other resource. If such a process is found, then
some of their resources may get preempted and added to the list of
resources for which the process is waiting.
o Either of these approaches may be applicable for resources whose
states are easily saved and restored, such as registers and memory,
but are generally not applicable to other devices such as printers
and tape drives.

4. Circular Wait

 One way to avoid circular wait is to number all resources, and to require that processes request resources only in strictly increasing (or decreasing) order.
 In other words, in order to request resource Rj, a process must first release all Ri such that i >= j.
 One big challenge in this scheme is determining the relative ordering of the different resources (a sketch of the ordering idea appears below).
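A hedged Python sketch of this resource-ordering idea (the lock names and ranks here are hypothetical, purely for illustration):

```python
import threading

# Circular-wait prevention: every lock gets a fixed global rank, and all
# threads acquire locks in ascending rank order, so a cycle of waits can
# never form among them.

lock_rank = {}                      # lock object -> its global order number

def make_ordered_lock(rank):
    lock = threading.Lock()
    lock_rank[lock] = rank
    return lock

def acquire_in_order(*locks):
    """Acquire the given locks in ascending rank, whatever order was passed."""
    ordered = sorted(locks, key=lambda l: lock_rank[l])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):    # release in reverse acquisition order
        lock.release()

disk = make_ordered_lock(1)
printer = make_ordered_lock(2)

# Every thread ends up locking disk before printer, so no circular wait:
held = acquire_in_order(printer, disk)
release_all(held)
```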

5. When do we say a system is in a safe state? Explain with an example. [4]


Safe State

• When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.
• System is in safe state if there exists a safe sequence of all
processes.
• Sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all the Pj, with j < i.
– If Pi resource needs are not immediately available, then Pi
can wait until all Pj have finished.
– When Pj is finished, Pi can obtain needed resources,
execute, return allocated resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources,
and so on.

Operating System Concepts Silberschatz and Galvin1999


7.16
6. Describe the Banker’s algorithm for deadlock avoidance. [7]

The banker’s algorithm is a resource allocation and deadlock avoidance algorithm that tests for safety by simulating the allocation of the predetermined maximum possible amounts of all resources, then makes a safe-state check to test for possible activities, before deciding whether allocation should be allowed to continue.

Following Data structures are used to implement the Banker’s Algorithm:

Let ‘n’ be the number of processes in the system and ‘m’ be the number of resources
types.

Available :
 It is a 1-d array of size ‘m’ indicating the number of available resources of
each type.
 Available[ j ] = k means there are ‘k’ instances of resource type Rj available.

Max :

 It is a 2-d array of size ‘n*m’ that defines the maximum demand of each
process in a system.
 Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource
type Rj.

Allocation :

 It is a 2-d array of size ‘n*m’ that defines the number of resources of each type
currently allocated to each process.
 Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of
resource type Rj

Need :

 It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of
each process.
 Need [ i, j ] = k means process Pi may need ‘k’ more instances of resource type Rj to complete its task.
 Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
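A minimal Python sketch of the safety check the Banker's algorithm performs on these structures. This is illustrative code, not from the text; the data values are the classic 5-process, 3-resource example:

```python
# Banker's safety check: repeatedly find a process whose remaining Need can
# be met from Work, pretend it runs to completion, and reclaim its resources.

def is_safe(available, max_demand, allocation):
    n, m = len(max_demand), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]                    # Need = Max - Allocation
    work = list(available)
    finish = [False] * n
    safe_sequence = []
    while len(safe_sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                # Pi finishes, returns resources
                    work[j] += allocation[i][j]
                finish[i] = True
                safe_sequence.append(i)
                break
        else:
            return None                           # no candidate found: unsafe
    return safe_sequence

available  = [3, 3, 2]
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
print(is_safe(available, max_demand, allocation))   # [1, 3, 0, 2, 4] -> safe
```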

7. What is wait-for graph? How wait-for graph is used in deadlock detection? [2+3]
A wait-for graph in computer science is a directed graph used for deadlock detection in operating systems and relational database systems. A system that allows concurrent operation of multiple processes and locking of resources, and which does not provide mechanisms to avoid or prevent deadlock, must support a mechanism to detect deadlocks and an algorithm for recovering from them. The wait-for graph is obtained from the resource-allocation graph by removing the resource nodes and collapsing the corresponding edges: an edge from Pi to Pj implies that Pj is holding a resource that Pi needs, and thus Pi is waiting for Pj to release it. A deadlock exists if and only if the wait-for graph contains a cycle. The wait-for-graph scheme is not applicable to a resource-allocation system with multiple instances of each resource type.

8. Explain a Deadlock detection algorithm with multiple instances of each


resource type. [5]
Several Instances of a Resource Type

• Available: A vector of length m indicates the number of available resources of each type.
• Allocation: An n x m matrix defines the number of resources of each type currently allocated to each process.
• Request: An n x m matrix indicates the current request of each process. If Request[i][j] = k, then process Pi is requesting k more instances of resource type Rj.

Operating System Concepts Silberschatz and Galvin1999


7.32

Detection Algorithm

1. Let Work and Finish be vectors of length m and n, respectively.
Initialize:
(a) Work := Available
(b) For i = 1, 2, …, n, if Allocation_i ≠ 0, then Finish[i] := false; otherwise, Finish[i] := true.
2. Find an index i such that both:
(a) Finish[i] = false
(b) Request_i ≤ Work
If no such i exists, go to step 4.

Operating System Concepts Silberschatz and Galvin1999


7.33
Detection Algorithm (Cont.)

3. Work := Work + Allocation_i
Finish[i] := true
Go to step 2.
4. If Finish[i] = false for some i, 1 ≤ i ≤ n, then the system is in a deadlocked state. Moreover, if Finish[i] = false, then Pi is deadlocked.

The algorithm requires on the order of m × n² operations to detect whether the system is in a deadlocked state.

Operating System Concepts Silberschatz and Galvin1999


7.34

Example of Detection Algorithm

• Five processes P0 through P4; three resource types A (7 instances), B (2 instances), and C (6 instances).
• Snapshot at time T0:

          Allocation   Request   Available
          A B C        A B C     A B C
    P0    0 1 0        0 0 0     0 0 0
    P1    2 0 0        2 0 2
    P2    3 0 3        0 0 0
    P3    2 1 1        1 0 0
    P4    0 0 2        0 0 2

• Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i.

Operating System Concepts Silberschatz and Galvin1999


7.35
Example (Cont.)

• P2 requests an additional instance of type C. The Request matrix becomes:

          Request
          A B C
    P0    0 0 0
    P1    2 0 2
    P2    0 0 1
    P3    1 0 0
    P4    0 0 2

• State of system?
– We can reclaim the resources held by process P0, but there are insufficient resources to fulfill the other processes’ requests.
– Deadlock exists, consisting of processes P1, P2, P3, and P4.
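A short Python sketch of the detection algorithm above, run on this second snapshot (illustrative code, not from the text); it reports exactly the deadlocked set {P1, P2, P3, P4}:

```python
# Deadlock detection with multiple instances per resource type, following
# the four steps above.

def detect_deadlock(available, allocation, request):
    n, m = len(allocation), len(available)
    work = list(available)
    # Step 1(b): a process holding no resources can be finished immediately.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:                  # steps 2-3: find Pi with Request_i <= Work
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):   # Pi finishes and returns its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                progress = True
    # Step 4: every process still unfinished is deadlocked.
    return [i for i in range(n) if not finish[i]]

allocation = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
request    = [[0,0,0], [2,0,2], [0,0,1], [1,0,0], [0,0,2]]
print(detect_deadlock([0,0,0], allocation, request))   # [1, 2, 3, 4]
```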

Operating System Concepts Silberschatz and Galvin1999


7.36

9. Define the methods used by the operating system to recover from the deadlock.
[3]

Recovery from Deadlock: Process Termination

• Abort all deadlocked processes.
• Abort one process at a time until the deadlock cycle is eliminated.
• In which order should we choose to abort?
– Priority of the process.
– How long process has computed, and how much longer to
completion.
– Resources the process has used.
– Resources process needs to complete.
– How many processes will need to be terminated.
– Is process interactive or batch?

Operating System Concepts Silberschatz and Galvin1999


7.38
Recovery from Deadlock: Resource Preemption

• Selecting a victim – minimize cost.
• Rollback – return to some safe state, restart the process from that state.
• Starvation – the same process may always be picked as victim; include the number of rollbacks in the cost factor.

Operating System Concepts Silberschatz and Galvin1999


7.39

10. How can deadlocks be eliminated by aborting a process? Also discuss the factors
those may affect in time of choosing a process to terminate.[2+3]
Galvin Page 257-258

To eliminate deadlocks by aborting a process, we use one of the two methods:


 Abort all deadlocked processes: This method clearly will break the deadlock cycle,
but at great expense; the deadlocked processes may have computed for a long time,
and the results of these partial computations must be discarded and probably will
have to be recomputed later.
 Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable overhead, since, after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.

Many factors may affect which process is chosen:


 What the priority of the process is
 How long the process has computed and how much longer the process will
compute before completing its designated task
 How many and what type of resources the process has used
 How many more resources the process needs in order to complete.
 How many processes will need to be terminated
 Whether the process is interactive or batch
11. Explain the technique to eliminate deadlocks using resource preemption. [4]
Galvin page 258-259

 Selecting a victim: Which resources and which processes are to be preempted must be selected. As in process termination, we must determine the order of
preemption to minimize cost. Cost factors may include such parameters as the
number of resources a deadlocked process is holding and the amount of time
the process has thus far consumed during its execution.
 Rollback: If we pre-empt a resource from a process, we must rollback the
process to some safe state and restart it from that state.
Since, in general, it is difficult to determine what a safe state is, the simplest
solution is a total rollback i.e. abort the process and then restart it. Although it
is more effective to rollback the process only as far as necessary to break the
deadlock, this method requires the system to keep more information about the
state of all running processes.
 Starvation: In a system where victim selection is based primarily on cost
factors, it may happen that the same process is always picked as a victim. As a
result, this process never completes its designated task, a starvation situation
that must be dealt with in any practical system. Clearly, it must be ensured that
a process can be picked as a victim only a finite number of times. The most
common solution is to include the number of rollbacks in the cost factor.

12. List three examples of deadlocks that are not related to a computer system environment.
[3]

 Two trains from opposite directions running towards each other on the same track is an example of deadlock.
 At a crossing, a car waiting for a pedestrian to cross and the pedestrian in turn waiting for the car to cross over. As a result, both will wait for each other indefinitely, a deadlock.
 When, on a single-lane bridge, two cars try to cross from opposite directions, a deadlock results.
 A person going down a ladder while another person is climbing up the ladder

13. Suppose that a system is in an unsafe state. Show that it is possible for the processes to
complete their execution without entering a deadlock state. [5]

Deadlock means something specific: there are two (or more) processes that are
currently blocked waiting for each other.
In an unsafe state you can also be in a situation where there might be a deadlock
sometime in the future, but it hasn't happened yet because one or both of the
processes haven't actually started waiting.

Consider the following interleaving of two processes:

Process A: lock X
Process B: lock Y      # state is now "unsafe"
Process B: unlock Y
Process A: lock Y      # state is back to "safe" (no deadlock this time; we got lucky)
Consider a system with 12 tape drives:

Process    Max Need    Currently Allocated
P0         10          5
P2         9           3

This is an unsafe state, but we're not in a deadlock. There are only 4 free drives, so if, for example, P0 requests an additional 5 and P2 requests an additional 1, we will deadlock, but it hasn't happened yet. And P0 might not request any more drives, but might instead free up the drives it already has. The Max need is over all possible executions of the program, and this might not be one of the executions where we need all 10 drives in P0.

14.Is it possible to have a deadlock involving only one single-threaded process? Explain
your answer. [3]
It is not possible to have a deadlock involving only one single process. Deadlock involves a circular "hold-and-wait" condition among two or more processes, so "one" process cannot hold a resource yet be waiting for another resource that it is itself holding. In addition, deadlock is not possible between two threads in a process, because it is the process that holds resources, not the thread; that is, each thread has access to the resources held by the process.

Memory Management Strategies


1. What are the differences between internal and external fragmentation? How can their
occurrence be prevented or mitigated? Explain with example. [2+4=6]

A: Internal Fragmentation
1. When a process is allocated more memory than required, some space is left unused, and this is called INTERNAL FRAGMENTATION.
2. It occurs when memory is divided into fixed-sized partitions.
3. It can be cured by allocating memory dynamically or having partitions of different sizes.
External Fragmentation

1. After execution of processes, when they are swapped out of memory and other smaller processes replace them, many small non-contiguous blocks of unused space are formed. Together they could serve a new request, but because they are not adjacent to each other, the request cannot be served; this is known as EXTERNAL FRAGMENTATION.
2. It occurs when memory is divided into variable-sized partitions based on the size of each process.
3. It can be cured by compaction, paging and segmentation.

Solution to external fragmentation:


1) Compaction: shuffling the fragmented memory into one contiguous location.
2) Another possible solution to the external fragmentation problem is to permit the logical address space of a process to be non-contiguous, thus allowing the process to be allocated physical memory wherever it is available. This is done using paging and segmentation.
Solution to internal fragmentation:
Dynamic Partitioning of Memory is the solution for Internal Fragmentation in OS.

2. Which type of fragmentation occurs in paging systems? Which type occurs in systems that
use pure segmentation? [3+4=7]
In a paging system, the wasted space in the last page is lost to internal fragmentation, because a page has a fixed size but processes may request more or less space. Say a page is 32 units, and a process requests 20 units. When a page is given to the requesting process, the remaining 12 units of free "internal" space in that page are wasted, since no other process can use them.

In a pure segmentation system, some space is invariably lost between the segments.
This is due to external fragmentation. External fragmentation occurs in systems that
use pure segmentation. Because each segment has varied size to fit each program size,
the holes (unused memory) occur external to the allocated memory partition.
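The arithmetic in the paging example above can be checked with a few lines of Python (a toy illustration, not from the text):

```python
import math

# Internal fragmentation: the space wasted in the last (partially used) page.
def internal_fragmentation(request_size, page_size):
    pages = math.ceil(request_size / page_size)   # whole pages allocated
    return pages * page_size - request_size

print(internal_fragmentation(20, 32))   # 12 units wasted in the last page
```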
3. What is compaction? Which type of fragmentation does it solve?[2+2]

Compaction is the shuffling of memory contents to place all free memory together in one large block.
It solves external fragmentation. Compaction takes all of the free memory and puts it in one place so that it is usable by the next process(es) waiting for memory. This is only done with dynamic and relocatable dynamic memory systems.

4. What is a modify bit in page replacement? What are the benefits of using it?[2+2]
A dirty bit or modified bit is a bit that is associated with a block of computer memory and indicates whether or not the corresponding block of memory has been modified. The dirty bit is set when the processor writes to (modifies) this memory.

The bit indicates that its associated block of memory has been modified and has not been
saved to storage yet. When a block of memory is to be replaced, its corresponding dirty bit is
checked to see if the block needs to be written back to secondary memory before being
replaced or if it can simply be removed. Dirty bits are used by the CPU cache and in the page
replacement algorithms of an operating system.

5. What is a TLB? Explain its purpose in paging schemes. [2+3]

A translation lookaside buffer (TLB) is a memory cache that stores recent translations
of virtual memory to physical addresses for faster retrieval.

When a virtual memory address is referenced by a program, the search starts in the
CPU. First, instruction caches are checked. If the required memory is not in these
very fast caches, the system has to look up the memory’s physical address. At this
point, TLB is checked for a quick reference to the location in physical memory.

When an address is searched in the TLB and not found, the page table in memory must be consulted (a page-table walk). As virtual memory addresses are translated, the referenced translations are added to the TLB. When a value can be retrieved from the TLB, speed is enhanced because the translation is stored in the TLB on the processor. Most processors include TLBs to increase the speed of virtual memory operations, through the inherent latency-reducing proximity as well as the high running frequencies of current CPUs.
TLBs also add the support required for multi-user computers to keep memory spaces separate, by having a user and a supervisor mode as well as using permissions on read and write bits to enable sharing.
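A schematic Python sketch of a TLB lookup during address translation. The page size, page-table contents and dictionary interface are hypothetical; a real MMU does all of this in hardware:

```python
PAGE_SIZE = 4096
tlb = {}                          # page number -> frame number (small, fast)
page_table = {0: 7, 1: 3, 2: 9}   # hypothetical in-memory page table

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page in tlb:                         # TLB hit: no page-table access
        frame = tlb[page]
    else:                                   # TLB miss: walk the page table
        frame = page_table[page]
        tlb[page] = frame                   # cache the translation
    return frame * PAGE_SIZE + offset

print(translate(5000))   # page 1, offset 904 -> frame 3 -> 13192
```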

6. Explain the concept of shared pages with an example. What do you mean by re
entrant code? [3+3]

Shared Pages

• Shared code
– One copy of read-only (reentrant) code shared among
processes (i.e., text editors, compilers, window systems).
– Shared code must appear in same location in the logical
address space of all processes.
• Private code and data
– Each process keeps a separate copy of the code and data.
– The pages for the private code and data can appear
anywhere in the logical address space.

Operating System Concepts 8.29 Silberschatz and Galvin1999

Shared Pages Example

Operating System Concepts 8.30 Silberschatz and Galvin1999


Reentrant code is code that can be interrupted in some state and then safely re-entered (multi-threaded applications need their shared code to be reentrant). One of the prerequisites of writing reentrant code is not to use globals or static memory.
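An illustrative Python contrast (not from the text): the first function keeps its running total in a global, so concurrent callers corrupt each other; the second keeps all state in locals, which is what reentrancy requires:

```python
total = 0

def sum_list_nonreentrant(items):
    global total
    total = 0                  # shared global: unsafe if re-entered concurrently
    for x in items:
        total += x
    return total

def sum_list_reentrant(items):
    subtotal = 0               # all state is local to this call
    for x in items:
        subtotal += x
    return subtotal

print(sum_list_reentrant([1, 2, 3]))   # 6; safe to call from many threads
```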

7. Explain the differences between Hierarchical, Hashed and Inverted paging schemes.
Which of the three is not suitable for implementing shared pages and why? [5+5]

8. How is each page indexed in a page table? What is the reason for associating a
valid/invalid bit for each entry in a page table? [3]

9. What are the limitations of paging? How are these solved by segmentation? [3+4]

 Paging reduces external fragmentation, but still suffers from internal fragmentation.
 The page table requires extra memory space, so paging may not be good for a system having a small RAM.

Segmentation memory management works much like paging, but segments are of variable length whereas pages are of fixed size, so there is no internal fragmentation.

A program segment contains the program's main function, utility functions, data
structures, and so on. The operating system maintains a segment map table for every
process and a list of free memory blocks along with segment numbers, their size and
corresponding memory locations in main memory. For each segment, the table stores
the starting address of the segment and the length of the segment. A reference to a
memory location includes a value that identifies a segment and an offset. So less
extra space needed.
Virtual Memory Management

1. Justify the following statements with respect to virtual memory:


(a) The space used to run a program is not constrained by the amount of physical memory
that is available.

A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a
hard disk that's set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than
physical memory. Virtual memory serves two purposes. First, it allows us to extend
the use of physical memory by using disk. Second, it allows us to have memory
protection, because each virtual address is translated to a physical address.
(b) More programs can be run concurrently without any degradation in performance. [4+4]

2. What is demand paging? What are the benefits of using it? What is pure demand
paging? [2+2+1]

Demand paging is a type of swapping done in virtual memory systems. In demand paging,
the data is not copied from the disk to the RAM until they are needed or being demanded
by some program. The data will not be copied when the data is already available on the
memory. This is otherwise called a lazy evaluation because only the demanded pages of
memory are being swapped from the secondary storage (disk space) to the main memory.

Demand paging, as opposed to loading all pages immediately:

 Only loads pages that are demanded by the executing process.


 As there is more space in main memory, more processes can be loaded, reducing the
context switching time, which utilizes large amounts of resources.
 Less loading latency occurs at program start-up, as less information is accessed from
secondary storage and less information is brought into main memory.
 As main memory is expensive compared to secondary memory, this technique helps
significantly reduce the bill of material (BOM) cost in smart phones for example. Symbian
OS had this feature.

When starting execution of a process with no pages in memory, the operating system sets the instruction pointer to the first instruction of the process. Since that instruction is on a non-memory-resident page, the process immediately faults for the page. After this page is brought into memory, the process continues to execute, faulting as necessary until every page that it needs is in memory. At that point, it can execute with no more faults. This scheme is pure demand paging.

3. Describe the steps taken by the operating system when a page fault occurs. [6]

A page fault occurs when a program attempts to access data or code that is in its
address space, but is not currently located in the system RAM. So when page fault
occurs then following sequence of events happens:
 The computer hardware traps to the kernel and program counter (PC) is saved
on the stack. Current instruction state information is saved in CPU registers.
 An assembly program is started to save the general registers and other volatile
information to keep the OS from destroying it.
 The operating system finds that a page fault has occurred and tries to find out which virtual page is needed. Sometimes a hardware register contains this required information. If not, the operating system must retrieve the PC, fetch the instruction and find out what it was doing when the fault occurred.
 Once the virtual address that caused the page fault is known, the system checks whether the address is valid and whether there is any protection access problem.
 If the virtual address is valid, the system checks to see if a page frame is free. If
no frames are free, the page replacement algorithm is run to remove a page.
 If the selected frame is dirty, the page is scheduled for transfer to disk, a context switch takes place, the faulting process is suspended and another process is made to run until the disk transfer is completed.
 As soon as the page frame is clean, the operating system looks up the disk address where the needed page resides and schedules a disk operation to bring it in.
 When a disk interrupt indicates the page has arrived, the page tables are updated to reflect its position, and the frame is marked as being in the normal state.
 The faulting instruction is backed up to the state it had when it began, and the PC is reset. The faulting process is scheduled, and the operating system returns to the routine that called it.
 The assembly routine reloads the registers and other state information, and returns to user space to continue execution.

4. What is Belady’s anomaly? Explain which of the three page replacement algorithms
suffer(s) from this problem? [3+2]
In computer storage, Belady’s anomaly is the phenomenon in which increasing the number of page frames results in an increase in the number of page faults for certain memory access patterns. This phenomenon is commonly experienced when using the first-in first-out (FIFO) page replacement algorithm; of the classic algorithms, FIFO suffers from it, while LRU and optimal replacement (being stack algorithms) do not. A simulation that reproduces the anomaly is sketched below.
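A short FIFO simulation in Python (illustrative, not from the text) on the classic reference string that exhibits the anomaly:

```python
from collections import deque

def fifo_faults(reference_string, frame_count):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == frame_count:      # memory full: evict oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more frames, yet more faults
```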
5. What is a modify bit in page replacement? What are the benefits of using it? [2+2]

A dirty bit or modified bit is a bit that is associated with a block of computer memory and indicates whether or not the corresponding block of memory has been modified. The dirty bit is set when the processor writes to (modifies) this memory.

The bit indicates that its associated block of memory has been modified and has not been
saved to storage yet. When a block of memory is to be replaced, its corresponding dirty bit is
checked to see if the block needs to be written back to secondary memory before being
replaced or if it can simply be removed. Dirty bits are used by the CPU cache and in the page
replacement algorithms of an operating system.

6. How are counters used in LRU page replacement? How are they implemented by the
operating system? [3+4]

The least recently used (LRU) page replacement algorithm keeps track of page usage over a period of time and replaces the page that has been used least recently. One way to implement it is with counters.

The CPU maintains a logical clock (counter) that is incremented on every memory reference, and each page-table entry has a time-of-use field. Whenever a page is referenced, the current counter value is copied into that page's field. When a page must be replaced, the page with the lowest counter value (the oldest reference) is chosen as the victim.
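A minimal Python sketch of this counter-based LRU (illustrative; a real OS keeps the counter in a hardware-assisted page-table field):

```python
def lru_faults(reference_string, frame_count):
    last_used = {}                 # page -> counter value at last reference
    clock = faults = 0
    for page in reference_string:
        clock += 1                 # logical clock ticks on every reference
        if page not in last_used:
            faults += 1
            if len(last_used) == frame_count:
                victim = min(last_used, key=last_used.get)  # oldest counter
                del last_used[victim]
        last_used[page] = clock    # record the time of this reference
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 6 faults
```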

7. What are copy-on-write pages? [4]


Copy-on-write (CoW or COW), sometimes referred to as implicit sharing or shadowing, is a resource-management technique used in computer programming to efficiently implement a "duplicate" or "copy" operation on modifiable resources. If a resource is duplicated but not modified, it is not necessary to create a new resource; the resource can be shared between the copy and the original. Modifications must still create a copy, hence the technique: the copy operation is deferred to the first write. By sharing resources in this way, it is possible to significantly reduce the resource consumption of unmodified copies, while adding a small overhead to resource-modifying operations.

8. What is thrashing? What are its causes? How can its occurrence be reduced? [3+3+3]

Thrashing in computing is an issue caused when virtual memory is in use. It occurs


when the virtual memory of a computer is rapidly exchanging data for data on hard
disk, to the exclusion of most application-level processing. As the main memory gets
filled, additional pages need to be swapped in and out of virtual memory. The
swapping causes a very high rate of hard disk access. Thrashing can continue for a
long duration until the underlying issue is addressed. Thrashing can potentially result
in total collapse of the hard drive of the computer.

Thrashing is also known as disk thrashing.

Thrashing occurs when there are too many pages in memory and real memory is too small in capacity to hold all of them, so 'virtual memory' is used. When an executing process demands a page that is not currently in real memory (RAM), the system places some pages on virtual memory (disk) and brings the required page into RAM. If the CPU is kept so busy with this swapping that little useful work gets done, thrashing occurs.

Ways to prevent thrashing:

1. Instruct the mid-term scheduler to swap out some of the processes to recover from thrashing.
2. Instruct the dispatcher not to load more processes after a threshold.

10. Explain the differences between global and local page-replacement algorithms. [4]

Replacement algorithms can be local or global.

When a process incurs a page fault, a local page replacement algorithm selects for replacement
some page that belongs to that same process (or a group of processes sharing a memory partition).
A global replacement algorithm is free to select any page in memory.

Local page replacement assumes some form of memory partitioning that determines how many
pages are to be assigned to a given process or a group of processes. Most popular forms of
partitioning are fixed partitioning and balanced set algorithms based on the working set model. The
advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However, global page replacement is more efficient on an overall system basis.
11. What is non-uniform memory access (NUMA)? [4]

Non-uniform memory access (NUMA) is a computer memory design used in multiprocessing,


where the memory access time depends on the memory location relative to the processor. Under NUMA, a processor can access its own local memory faster than non-local memory (memory local to another processor or memory shared between processors). The benefits of NUMA are limited to particular workloads, notably on servers where the data is often associated strongly with certain tasks or users.

NUMA architectures logically follow in scaling from symmetric multiprocessing (SMP)


architectures.

Disk scheduling

1. What is disk partitioning? What are the problems in storing data on a raw disk that is
not partitioned?[2]

Disk partitioning or disk slicing is the creation of one or more regions on a hard disk or
other secondary storage, so that an operating system can manage information in each
region separately. These regions are called partitions.

2. What is a bit vector in free space management? Why is it used?[2]

A Bitmap or Bit Vector is series or collection of bits where each bit corresponds to a
disk block. The bit can take two values: 0 and 1: 0 indicates that the block is
allocated and 1 indicates a free block.
For example, a disk with 16 blocks, where only blocks 4–6 and 13–14 (numbering from 0) are free and the rest are allocated, can be represented by a bitmap of 16 bits as: 0000111000000110.
Advantages –

 Simple to understand.
 Finding the first free block is efficient. It requires scanning the words (a group of 8 bits) in a bitmap for a non-zero word (a 0-valued word has all bits 0). The first free block is then found by scanning for the first 1 bit in the non-zero word, as sketched below.
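A small Python sketch of that word-by-word scan, assuming the convention above where a 1 bit marks a free block (illustrative code, not from the text):

```python
def first_free_block(bitmap: bytes):
    for word_index, word in enumerate(bitmap):
        if word != 0:                            # a non-zero word holds a free block
            first_bit = 8 - word.bit_length()    # MSB-first index of the first 1 bit
            return word_index * 8 + first_bit
    return None                                  # no free block on the disk

# Blocks 0-10 allocated, block 11 free (bit pattern 00000000 00010000):
print(first_free_block(bytes([0b00000000, 0b00010000])))   # 11
```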

3. Explain the differences between linked allocation and contiguous allocation for file
systems. Specify the advantages and disadvantages of both. [2+4]

Contiguous allocation

 each file occupies a set of consecutive addresses on disk


 each directory entry contains:
o file name
o starting address of the first block
o block address = sector id (e.g., block = 4K)
o length in blocks
 usual dynamic storage allocation problem
o use first fit, best fit, or worst fit algorithms to manage storage
 if the file can increase in size, either
o leave no extra space, and copy the file elsewhere if it expands
o leave extra space

Linked allocation

 each data block contains the block address of the next block in the file
 each directory entry contains:
o file name
o block address: pointer to the first block
o sometimes, also have a pointer to the last block (adding to the end
of the file is much faster using this pointer)
 a view of the linked list

The advantages of linked allocation are:


1. No external fragmentation.
2. The size of the file does not need to be declared in advance.

The disadvantages of linked allocation are:


1. Used only for sequential access of files.
2. Direct access is not supported.
3. Memory space required for the pointers.
4. Reliability is compromised if the pointers are lost or damaged.

The advantages of contiguous memory allocation are:

1. It supports fast sequential and direct access.
2. It provides good performance.
3. The number of disk seeks required is minimal.


The disadvantages of contiguous memory allocation are:
1. Suffers from external fragmentation
2. Suffers from internal fragmentation
3. Difficulty in finding space for a new file
4. File cannot be extended
5. Size of the file must be declared in advance

File Systems

1. What are the different attributes of a file? What operations can be performed on files?
[3+3]

2. What is the benefit of using a file-open count in managing open-file table entries? [3]

3. Explain the difference between sequential and direct access for files. How can sequential
access be simulated on a direct access file? [3+3]

Sequential access must begin at the beginning and access each element in order, one after the other. Direct access allows any element to be accessed directly by locating it by its index number or address. Arrays allow direct access; magnetic tape has only sequential access, while CDs have direct access. Sequential access can be simulated on a direct-access file by keeping a counter cp of the current position: to simulate "read next", read block cp and then increment cp.
4. Explain the difference between the instructions read n and read next when reading from a
file. [3]

5. Explain the differences between single-level, two-level and tree-structured


directories.[6]

1. Single level directory: In a single-level directory system, all the files are placed in one directory. This is very common on single-user OSs. A single-level directory has
significant limitations, however, when the number of files increases or when there is
more than one user. Since all files are in the same directory, they must have unique
names. If there are two users who call their data file "test", then the unique-name rule
is violated. Although file names are generally selected to reflect the content of the
file, they are often quite limited in length. Even with a single user, as the number of files increases, it becomes difficult to remember the names of all the files in order to create only files with unique names.

2. Two level directory: In the two-level directory system, the system maintains a
master block that has one entry for each user. This master block contains the
addresses of the directory of the users. There are still problems with two level
directory structures. This structure effectively isolates one user from another. This is
an advantage when the users are completely independent, but a disadvantage when
the users want to cooperate on some task and access files of other users. Some systems simply do not allow local files to be accessed by other users.

3. Tree structured directory: In the tree-structured directory, the directories themselves are files. This leads to the possibility of having sub-directories that can contain files and sub-sub-directories. An interesting policy decision in a tree-structured directory is how to handle the deletion of a directory. If a directory is empty, its entry in its containing directory can simply be deleted. However, suppose the directory to be deleted is not empty, but contains several files, or possibly sub-directories. Some systems will not delete a directory unless it is empty. Thus, to delete a directory, someone must first delete all the files in that directory. If there are any sub-directories, this procedure must be applied recursively to them, so that they can be deleted too. This approach may result in a substantial amount of work.
6. What is disk partitioning? What are the problems in storing data on a raw disk that is not
partitioned? [2+3]

Disk partitioning or disk slicing is the creation of one or more regions on a hard disk or
other secondary storage, so that an operating system can manage information in each
region separately. These regions are called partitions.

7. Explain the differences between user file directory (UFD) and master file directory
(MFD). [5]

A crucial set of data structures that must be translated are the User File Directories (UFD's), which
are similar to the directories of today. The UFD's are essential for preserving any file links in a
directory. Furthermore, on the incremental backups, one cannot tell what files were on the original
file system but were not included on that incremental without decoding this information. Therefore,
it is critical to make some sense out of these data structures since we cannot depend on future
archivists to decipher this raw information.

The UFD is a much more difficult structure to interpret than the MFD for two reasons. First, the
UFD keeps critical information in structures that can be decoded only by interpreting PDP-10 byte
pointers. Second, the UFD uses a custom method to track disk block allocation, which must be
interpreted to determine file length.
The Master File Directory (MFD) was an essential data structure on any ITS file system, so any
effort to preserve the rest of the data should include this structure too. The MFD is similar to the modern-day "root directory" in a hierarchical file system, except that ITS had a flat file system with only one level of directories.
user had his/her own directory and each directory had a unique index number associated with it.
The author of a file was recorded as the index number to his directory in the MFD. In essence, the
only information the MFD provides us is the user ID number to user name mapping required to
determine whose files are whose.

Decoding the MFD is simple as long as one understands some of the conventions used in ITS data structures. The hope was that, by translating it in a rational manner, no future archivist would need to understand the raw format of the MFD.

The MFD is basically an array of usernames encoded in "sixbit"; the index number is determined by the position of the name in the array. Sixbit is a method of encoding characters in 36-bit words in
which each character is 6 bits long, for a total of 6 characters per word. Sixbit does not include the
lower-case characters, so all user names were in capital letters. Each user name was limited to 6
characters, so it would fit in one 36-bit word. The translation from the array of sixbit user names to
an array of ASCII user names was straightforward.

If the MFD was included on a backup tape, it was always the first file. Therefore, the archivist
software looks for the presence of the MFD and then decodes it as mentioned above. The raw and
translated forms of the MFD are then written out in TCFS format as with the rest of the files.
However, the archivist also keeps a copy of the MFD in memory so it can determine the user name
associated with any files it finds later on the tape.

8. Describe the purpose of using the classifications (i) owner (ii) group and (iii) universe to
enforce access control on files. [6]

Implementing File Systems


1. What is a file control block (FCB)? What information is stored in a FCB? [3+4]
FCB (File Control Block) is an internal file system structure used in DOS for
accessing files on disk.

The FCB block contains information about the drive name, filename, file type and
other information that is required by the system when a file is accessed or
created.

2. What is the purpose of using a file descriptor (also called a file handle)?[5]
In Unix and related computer operating systems, a file descriptor (FD, less frequently fildes) is an abstract indicator (handle) used to access a file or other input/output resource, such as a pipe or network socket. File descriptors form part of the POSIX application programming interface.

In windows it is called file handle.

3. Explain the differences between (i) contiguous (ii) linked and (iii) indexed allocation for file
systems with examples. Which of these methods suffer(s) from external fragmentation and
why? [6+2]

1. Contiguous Allocation

In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file
requires n blocks and is given a block b as the starting location, then the blocks assigned to the file
will be: b, b+1, b+2,……b+n-1. This means that given the starting block address and the length of
the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains

 Address of starting block


 Length of the allocated portion.

For example, a file ‘mail’ that starts at block 19 with length = 6 blocks occupies blocks 19, 20, 21, 22, 23 and 24.

Advantages:

 Both the Sequential and Direct Accesses are supported by this. For direct access, the address
of the kth block of the file which starts at block b can easily be obtained as (b+k).
 This is extremely fast since the number of seeks are minimal because of contiguous
allocation of file blocks.

Disadvantages:

 This method suffers from both internal and external fragmentation. This makes it inefficient
in terms of memory utilization.
 Increasing file size is difficult because it depends on the availability of contiguous memory
at a particular instance.

2. Linked List Allocation

In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains
a pointer to the next block occupied by the file.

For example, a file ‘jeep’ may have its blocks scattered anywhere on the disk; its last block (say, block 25) contains -1, a null pointer indicating that it does not point to any other block.

Advantages:

 This is very flexible in terms of file size. File size can be increased easily since the system
does not have to look for a contiguous chunk of memory.
 This method does not suffer from external fragmentation. This makes it relatively better in
terms of memory utilization.

Disadvantages:
 Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
 It does not support random or direct access. We cannot directly access the blocks of a file. A block k of a file can be accessed only by traversing k blocks sequentially (sequential access) from the starting block of the file via block pointers.
 Pointers required in the linked allocation incur some extra overhead.

3. Indexed Allocation

In this scheme, a special block known as the index block contains the pointers to all the blocks occupied by a file. Each file has its own index block. The ith entry in the index block contains the disk address of the ith file block, and the directory entry contains the address of the index block.

Advantages:

 This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
 It overcomes the problem of external fragmentation.

Disadvantages:

 The pointer overhead for indexed allocation is greater than linked allocation.
 For very small files, say files that span only 2-3 blocks, indexed allocation would keep one entire block (the index block) for the pointers, which is inefficient in terms of memory utilization. In linked allocation, by contrast, we lose the space of only 1 pointer per block.

Contiguous memory allocation suffers from external fragmentation, because external fragmentation is the breaking up of free memory into small chunks via partitioning, which can mean a request for a larger partition later may fail due to lack of contiguous memory, even though enough total memory is available.

4. “Contiguous allocation supports both sequential and direct access”: Justify this statement
with examples. [4]

Accessing a file that has been allocated contiguously is easy. For sequential access, the file
system remembers the disk address of the last block referenced and, when necessary, reads
the next block. For direct access to block i of a file that starts at block b, we can immediately
access block b+i. Thus contiguous allocation supports both sequential and direct access.
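A toy Python illustration of this statement (reusing the block numbers of the earlier ‘mail’ example; illustrative code, not from the text):

```python
start_block = 19          # the file 'mail' starts here
length = 6                # it occupies blocks 19..24

def block_address(i):
    """Direct access: the disk address of the i-th block (0-based) of a
    contiguous file is one addition away."""
    assert 0 <= i < length
    return start_block + i

print(block_address(0))   # 19 (sequential access starts here)
print(block_address(4))   # 23 (direct access, no traversal needed)
```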

5. What is the benefit of using an extent in contiguous allocation scheme? [3]

A problem with contiguous allocation is determining how much space is needed for a file. If too little space is allocated, the file cannot be extended as it grows in size. On the other hand, if we allocate a large size, we may suffer from internal fragmentation. So the total amount of space needed for a file must be known in advance.
But even then, preallocation may be insufficient. A file that will grow slowly over a long period (months or years) must be allocated enough space for its final size, even though much of that space will be unused for a long time. The file therefore will have a large amount of internal fragmentation.
To minimize these drawbacks, a chunk of contiguous space, called an extent, is allocated initially; then, if that amount proves not to be large enough, another chunk of contiguous space (a further extent) is added.

6. For linked allocation, explain the following schemes: (i) Linked scheme (ii) Multilevel
index scheme and (iii) Combined scheme. [6]
Galvin page 410-411

7. What is a file-allocation table (FAT)? Describe its entries. [2+3]


File allocation table (FAT) is a simple but efficient method of disk-space allocation. A
section of disk at the beginning of each volume is set aside to contain the table. The table
has one entry for each disk block and is indexed by block number. It maintains a map of
the clusters (the basic units of logical storage on a hard disk) that a file has been stored in.

The directory entry contains the block number of the first block of the file. The table entry indexed by that block number contains the block number of the next block in the file. The chain continues until the last block, which has a special end-of-file value as its table entry.
Unused blocks are indicated by a 0 table value. Allocating a new block to a file is a simple
matter of finding the first 0-valued table entry and replacing the previous end-of-file value
with the address of the new block. The 0 is then replaced with the end-of-file value.
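A minimal Python sketch of following a FAT chain (the table contents and EOF marker here are illustrative):

```python
EOF = -1
fat = {217: 618, 618: 339, 339: EOF}   # a 3-block file: 217 -> 618 -> 339

def file_blocks(start_block):
    """Walk the FAT from a file's first block to its end-of-file entry."""
    blocks, block = [], start_block
    while block != EOF:
        blocks.append(block)
        block = fat[block]             # the table entry names the next block
    return blocks

print(file_blocks(217))   # [217, 618, 339]
```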

8. What is a bit vector in free space management? Why is it used? [2+2]

To keep track of free disk space, the operating system maintains a free-space list. Bit
vector is an approach where the free space list is implemented as a bit map vector. It
contains the number of bits where each bit represents each block. If the block is
empty then the bit is 1 otherwise it is 0. Initially all the blocks are empty therefore
each bit in the bit map vector contains 1. As the space allocation proceeds, the file
system starts allocating blocks to the files and setting the respective bit to 0.

The main advantage of this approach is its relative simplicity and its efficiency in finding
the first free block or n consecutive free blocks on the disk.

Secondary-Storage Structure

1. What is the difference between seek time and rotational latency in the context of accessing a particular sector within a cylinder? [5]

Seek time is the time taken for the disk arm to move the heads to the cylinder containing the desired sector. Rotational latency is the additional time the disk takes to rotate the desired sector under the read-write head.
