Operating Systems SBP QBank Solved
Question Bank
CPU Scheduling
1. Explain the difference between preemptive and nonpreemptive scheduling. [3]
A:
| Basis for Comparison | Preemptive Scheduling | Non-Preemptive Scheduling |
|---|---|---|
| Basic | The resources are allocated to a process for a limited time. | Once resources are allocated to a process, the process holds them until it completes its CPU burst or switches to the waiting state. |
| Interrupt | A process can be interrupted in between. | A process cannot be interrupted until it terminates or switches to the waiting state. |
| Starvation | If high-priority processes frequently arrive in the ready queue, a low-priority process may starve. | If a process with a long CPU burst is running, another process with a shorter CPU burst may starve. |
| Overhead | Has the overhead of scheduling the processes. | Has no scheduling overhead. |
| Flexibility | Preemptive scheduling is flexible. | Non-preemptive scheduling is rigid. |
| Cost | Has an associated cost. | Has no associated cost. |
3. What (if any) relation holds between the following pairs of algorithm sets? [5]
Deadlock
Resource-Allocation Graph
1. Mutual Exclusion
3. No Preemption
Preemption of process resource allocations can prevent this condition of
deadlock, when it is possible.
o One approach is that if a process is forced to wait when requesting
a new resource, then all other resources previously held by this
process are implicitly released (preempted), forcing this process
to re-acquire the old resources along with the new resources in a
single request, similar to the previous discussion.
o Another approach is that when a resource is requested and not
available, then the system looks to see what other processes
currently have those resources and are themselves blocked
waiting for some other resource. If such a process is found, then
some of their resources may get preempted and added to the list of
resources for which the process is waiting.
o Either of these approaches may be applicable for resources whose
states are easily saved and restored, such as registers and memory,
but are generally not applicable to other devices such as printers
and tape drives.
4. Circular Wait
One way to avoid circular wait is to number all resources, and to require
that processes request resources only in strictly increasing ( or
decreasing ) order.
In other words, in order to request resource Rj, a process must first
release all Ri such that i >= j.
One big challenge in this scheme is determining the relative ordering of
the different resources.
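As an illustration of resource ordering, the sketch below (Python, with invented resource numbers and a trivial worker body) acquires every lock in ascending resource-number order, so no circular wait can ever form:

```python
import threading

# Hypothetical resources, numbered to impose a global ordering.
# To prevent circular wait, every thread must acquire locks in
# increasing resource-number order, whatever order it uses them in.
locks = {1: threading.Lock(), 2: threading.Lock(), 3: threading.Lock()}

def acquire_in_order(*resource_ids):
    """Acquire the locks for the given resources in ascending order."""
    for rid in sorted(resource_ids):
        locks[rid].acquire()

def release_all(*resource_ids):
    """Release the locks in the reverse (descending) order."""
    for rid in sorted(resource_ids, reverse=True):
        locks[rid].release()

def worker():
    # Both threads need R1 and R3; neither can deadlock with the
    # other because both are forced to request R1 before R3.
    acquire_in_order(1, 3)
    try:
        pass  # ... use the resources ...
    finally:
        release_all(1, 3)

t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start(); t1.join(); t2.join()
```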
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource
types.
Available :
It is a 1-d array of size ‘m’ indicating the number of available resources of
each type.
Available[ j ] = k means there are ‘k’ instances of resource type Rj available.
Max :
It is a 2-d array of size ‘n*m’ that defines the maximum demand of each
process in a system.
Max[ i, j ] = k means process Pi may request at most ‘k’ instances of resource
type Rj.
Allocation :
It is a 2-d array of size ‘n*m’ that defines the number of resources of each type
currently allocated to each process.
Allocation[ i, j ] = k means process Pi is currently allocated ‘k’ instances of
resource type Rj
Need :
It is a 2-d array of size ‘n*m’ that indicates the remaining resource need of
each process.
Need [ i, j ] = k means process Pi may need ‘k’ more instances of resource
type Rj to complete its task.
Need [ i, j ] = Max [ i, j ] – Allocation [ i, j ]
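A minimal sketch of the safety algorithm that runs over these structures is shown below in Python. The Available/Max/Allocation values are the familiar textbook example, used purely for illustration:

```python
# Safety check over the Banker's-algorithm structures defined above,
# for n processes and m resource types.
def is_safe(available, max_demand, allocation):
    n, m = len(max_demand), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = available[:]            # Work = Available
    finish = [False] * n
    order = []                     # a safe sequence, if one exists
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend Pi runs to completion and releases everything.
                for j in range(m):
                    work[j] += allocation[i][j]
                finish[i] = True
                order.append(i)
                progress = True
    return all(finish), order

safe, order = is_safe(
    available=[3, 3, 2],
    max_demand=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
    allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]])
print(safe, order)   # True [1, 3, 4, 0, 2]
```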
7. What is wait-for graph? How wait-for graph is used in deadlock detection? [2+3]
A wait-for graph in computer science is a directed graph used for deadlock detection in
operating systems and relational database systems. A system that allows concurrent
operation of multiple processes and locking of resources, and which does not provide
mechanisms to avoid or prevent deadlock, must support a mechanism to detect deadlocks
and an algorithm for recovering from them. An edge from Pi to Pj implies that Pj is
holding a resource that Pi needs, and thus Pi is waiting for Pj to release it. A deadlock
exists if and only if the wait-for graph contains a cycle, so the system maintains the graph
and periodically invokes an algorithm that searches it for a cycle. The wait-for-graph
scheme is not applicable to a resource-allocation system with multiple instances of each
resource type.
Detection Algorithm
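Detection with a single instance of each resource type reduces to finding a cycle in the wait-for graph. Below is a sketch in Python using depth-first search; the process names and edges are invented for illustration:

```python
# A wait-for graph maps each process to the set of processes it is
# waiting for. A deadlock exists iff the graph contains a cycle.
def has_cycle(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / in progress / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:  # back edge -> cycle -> deadlock
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits for P2, P2 for P3, P3 for P1: a cycle, hence a deadlock.
print(has_cycle({'P1': {'P2'}, 'P2': {'P3'}, 'P3': {'P1'}}))  # True
print(has_cycle({'P1': {'P2'}, 'P2': {'P3'}, 'P3': set()}))   # False
```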
9. Define the methods used by the operating system to recover from the deadlock.
[3]
10. How can deadlocks be eliminated by aborting a process? Also discuss the factors
those may affect in time of choosing a process to terminate.[2+3]
Galvin Page 257-258
12. List three examples of deadlocks that are not related to a computer system environment.
[3]
When two trains from opposite directions run towards each other on the same
track, neither can proceed: this is a deadlock.
At a crossing, a car waits for a pedestrian to cross while the pedestrian in turn
waits for the car to pass. As a result, both wait for each other indefinitely: a
deadlock.
When two cars try to cross a single-lane bridge from opposite directions, this
too is a deadlock.
A person going down a ladder while another person is climbing up the same ladder.
13. Suppose that a system is in an unsafe state. Show that it is possible for the processes to
complete their execution without entering a deadlock state. [5]
Deadlock means something specific: there are two (or more) processes that are
currently blocked waiting for each other.
In an unsafe state you can also be in a situation where there might be a deadlock
sometime in the future, but it hasn't happened yet because one or both of the
processes haven't actually started waiting.
Consider, for example, a system of tape drives in which only 4 drives are currently
free, P0 may request up to 5 more drives, and P2 may request 1 more. This is an
unsafe state, but we are not in a deadlock: if P0 does request an additional 5 and P2
does request an additional 1, we will deadlock, but it has not happened yet. P0 might
not request any more drives, and might instead free up the drives it already has. The
Max need is over all possible executions of the program, and this might not be one of
the executions where P0 needs all 10 of its drives. Hence, the processes in an unsafe
state may still complete their execution without the system ever entering a deadlock
state.
14.Is it possible to have a deadlock involving only one single-threaded process? Explain
your answer. [3]
It is not possible to have a deadlock involving only one single process. The deadlock
involves a circular “hold-and-wait” condition between two or more processes, so “one”
process cannot hold a resource, yet be waiting for another resource that it is holding. In
addition, deadlock is not possible between two threads in a process, because it is the process
that holds resources, not the thread; that is, each thread has access to the resources held by
the process.
Memory Management
1. Differentiate between internal and external fragmentation.
A: Internal Fragmentation
1. When a process is allocated more memory than required, some space is left unused,
and this is called INTERNAL FRAGMENTATION.
2. It occurs when memory is divided into fixed-sized partitions.
3. It can be cured by allocating memory dynamically or having partitions of different sizes.
External Fragmentation
1. After processes are executed and swapped out of memory, other smaller processes
replace them, and many small non-contiguous (non-adjacent) blocks of unused space
are formed. Put together, these blocks could serve a new request, but because they are
not adjacent to each other, a new request cannot be served; this is known
as EXTERNAL FRAGMENTATION.
2. It occurs when memory is divided into variable-sized partitions based on size of process.
3. It can be cured by Compaction, Paging and Segmentation.
2. Which type of fragmentation occurs in paging systems? Which type occurs in systems that
use pure segmentation? [3+4=7]
In a paging system, the wasted space in the last page is lost to internal
fragmentation. A page has a fixed size, but processes may request more
or less space. Say a page is 32 units and a process requests 20 units; when
a page is given to the requesting process, the remaining 12 units of that
page are unusable "internal" space.
In a pure segmentation system, some space is invariably lost between the segments.
This is due to external fragmentation. External fragmentation occurs in systems that
use pure segmentation. Because each segment has varied size to fit each program size,
the holes (unused memory) occur external to the allocated memory partition.
3. What is compaction? Which type of fragmentation does it solve?[2+2]
Compaction shuffles the memory contents so as to place all free memory together in
one large block. It solves external fragmentation, and is possible only if relocation is
dynamic and done at execution time.
4. What is a modify bit in page replacement? What are the benefits of using it?[2+2]
A dirty bit or modified bit is a bit that is associated with a block of computer memory and
indicates whether or not the corresponding block of memory has been modified. The dirty
bit is set when the processor writes to (modifies) this memory.
The bit indicates that its associated block of memory has been modified and has not been
saved to storage yet. When a block of memory is to be replaced, its corresponding dirty bit is
checked to see if the block needs to be written back to secondary memory before being
replaced or if it can simply be removed. Dirty bits are used by the CPU cache and in the page
replacement algorithms of an operating system.
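The benefit can be sketched as follows (Python, with a simplified Frame class and a dictionary standing in for the disk; this is an illustration, not a real memory manager): only a dirty victim costs an extra disk write at replacement time.

```python
class Frame:
    def __init__(self, page):
        self.page = page
        self.dirty = False          # set by the "hardware" on any write

    def write(self, data):
        self.data = data
        self.dirty = True           # the processor modified this memory

def evict(frame, disk):
    if frame.dirty:
        disk[frame.page] = "written back"   # one extra disk write
    # A clean frame needs no write-back: the copy on disk is current.
    frame.page, frame.dirty = None, False

disk = {}
f = Frame(page=7)
evict(f, disk)              # clean: simply discarded, nothing written
g = Frame(page=8)
g.write("x")
evict(g, disk)              # dirty: page 8 must be written back first
print(disk)                 # {8: 'written back'}
```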
A translation lookaside buffer (TLB) is a memory cache that stores recent translations
of virtual memory to physical addresses for faster retrieval.
When a virtual memory address is referenced by a program, the search starts in the
CPU. First, the instruction caches are checked. If the required memory is not in these
very fast caches, the system has to look up the memory’s physical address, and at this
point the TLB is checked for a quick reference to the location in physical memory.
When an address is searched in the TLB and not found (a TLB miss), the physical
address must be obtained by walking the page table. As virtual memory addresses are
translated, the values referenced are added to the TLB. When a value can be retrieved
from the TLB, speed is enhanced because the memory address is stored in the TLB, on
the processor. Most processors include TLBs to increase the speed of virtual memory
operations, through the inherent latency-reducing proximity as well as the high
running frequencies of current CPUs.
TLBs also add the support required for multi-user computers to keep memory
separate, by having a user and a supervisor mode as well as using permissions on
read and write bits to enable sharing.
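A toy model of this lookup order, sketched in Python: the TLB is a small dictionary consulted before the full page table, with an arbitrary eviction policy. Page size, capacity and table contents are invented for illustration.

```python
PAGE_SIZE = 4096
page_table = {0: 10, 1: 42, 2: 7, 3: 99}    # page number -> frame number
tlb = {}                                    # cache of recent translations
TLB_CAPACITY = 2

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    if page in tlb:                     # TLB hit: fast path
        frame = tlb[page]
    else:                               # TLB miss: walk the page table
        frame = page_table[page]
        if len(tlb) >= TLB_CAPACITY:    # make room (arbitrary eviction)
            tlb.pop(next(iter(tlb)))
        tlb[page] = frame               # cache the translation
    return frame * PAGE_SIZE + offset

print(translate(4100))   # page 1: TLB miss, translation cached
print(translate(4200))   # page 1 again: TLB hit
```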
6. Explain the concept of shared pages with an example. What do you mean by re
entrant code? [3+3]
Shared Pages
• Shared code
– One copy of read-only (reentrant) code shared among
processes (i.e., text editors, compilers, window systems).
– Reentrant (pure) code is non-self-modifying code: it never
changes during execution, so several processes can execute
the same copy at the same time.
– Shared code must appear in the same location in the logical
address space of all processes.
• Private code and data
– Each process keeps a separate copy of the code and data.
– The pages for the private code and data can appear
anywhere in the logical address space.
7. Explain the differences between Hierarchical, Hashed and Inverted paging schemes.
Which of the three is not suitable for implementing shared pages and why? [5+5]
8. How is each page indexed in a page table? What is the reason for associating a
valid/invalid bit for each entry in a page table? [3]
9. What are the limitations of paging? How are these solved by segmentation? [3+4]
Segmentation memory management works much like paging, but here segments
are of variable length, whereas in paging the pages are of fixed size; hence there
is no internal fragmentation.
A program segment contains the program's main function, utility functions, data
structures, and so on. The operating system maintains a segment map table for every
process and a list of free memory blocks along with segment numbers, their size and
corresponding memory locations in main memory. For each segment, the table stores
the starting address of the segment and the length of the segment. A reference to a
memory location includes a value that identifies a segment and an offset, so
less extra space is needed.
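The "segment plus offset" reference can be sketched as follows (Python; the segment-table values follow a common textbook example and are purely illustrative): each entry stores a base and a limit, and an offset beyond the limit traps.

```python
# Segment table: entry s holds (base, limit) for segment s.
segment_table = [
    (1400, 1000),   # segment 0
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offset past the end of the segment: addressing error (trap).
        raise MemoryError("segmentation fault: offset beyond limit")
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(0, 999))   # 1400 + 999 = 2399
```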
Virtual Memory Management
A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a
hard disk that's set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than
physical memory. Virtual memory serves two purposes. First, it allows us to extend
the use of physical memory by using disk. Second, it allows us to have memory
protection, because each virtual address is translated to a physical address.
(b) More programs can be run concurrently without any degradation in performance. [4+4]
2. What is demand paging? What are the benefits of using it? What is pure demand
paging? [2+2+1]
Demand paging is a type of swapping done in virtual memory systems. In demand paging,
the data is not copied from the disk to the RAM until they are needed or being demanded
by some program. The data will not be copied when the data is already available on the
memory. This is otherwise called a lazy evaluation because only the demanded pages of
memory are being swapped from the secondary storage (disk space) to the main memory.
When starting execution of a process with no pages in memory, the operating system sets
the instruction pointer to the first instruction of the process, which is on a non-memory-
resident page, and the process immediately faults for the page. After this page is brought
into memory, the process continues to execute, faulting as necessary until every page that it
needs is in memory. At that point, it can execute with no more faults. This scheme, which
never brings a page into memory until it is required, is pure demand paging.
3. Describe the steps taken by the operating system when a page fault occurs. [6]
A page fault occurs when a program attempts to access data or code that is in its
address space, but is not currently located in the system RAM. When a page fault
occurs, the following sequence of events happens:
1. The computer hardware traps to the kernel and the program counter (PC) is saved
on the stack. Current instruction state information is saved in CPU registers.
2. An assembly routine is started to save the general registers and other volatile
information, to keep the OS from destroying it.
3. The operating system finds that a page fault has occurred and tries to find out
which virtual page is needed. Often a hardware register contains this required
information; if not, the operating system must retrieve the PC, fetch the
instruction and work out what it was doing when the fault occurred.
4. Once the virtual address that caused the page fault is known, the system checks
whether the address is valid and whether there is any protection or access problem.
5. If the virtual address is valid, the system checks to see if a page frame is free. If
no frames are free, the page replacement algorithm is run to remove a page.
6. If the frame selected is dirty, the page is scheduled for transfer to disk, a context
switch takes place, the faulting process is suspended and another process is made
to run until the disk transfer is complete.
7. As soon as the page frame is clean, the operating system looks up the disk address
where the needed page resides and schedules a disk operation to bring it in.
8. When the disk interrupt indicates the page has arrived, the page tables are updated
to reflect its position, and the frame is marked as being in the normal state.
9. The faulting instruction is backed up to the state it had when it began and the PC
is reset. The faulting process is scheduled, and the operating system returns to the
routine that called it.
10. The assembly routine reloads the registers and other state information and returns
to user space to continue execution.
4. What is Belady’s anomaly? Explain which of the three page replacement algorithms
suffer(s) from this problem? [3+2]
In computer storage, Belady’s anomaly is the phenomenon in which increasing the
number of page frames results in an increase in the number of page faults for certain
memory access patterns. This phenomenon is commonly experienced when using the first-
in first-out (FIFO) page replacement algorithm. Of the three classic algorithms, only FIFO
suffers from Belady’s anomaly; LRU and the optimal algorithm are stack algorithms and
are therefore free of it.
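A small FIFO simulation, sketched in Python, reproduces the anomaly on the classic reference string: with 3 frames there are 9 faults, but with 4 frames there are 10.

```python
from collections import deque

def fifo_faults(refs, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:          # page fault
            faults += 1
            if len(frames) == num_frames:
                frames.remove(queue.popleft())   # evict the oldest page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9
print(fifo_faults(refs, 4))   # 10 -- more frames, yet more faults
```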
5. What is a modify bit in page replacement? What are the benefits of using it? [2+2]
A dirty bit or modified bit is a bit that is associated with a block of computer memory and
indicates whether or not the corresponding block of memory has been modified. The dirty
bit is set when the processor writes to (modifies) this memory.
The bit indicates that its associated block of memory has been modified and has not been
saved to storage yet. When a block of memory is to be replaced, its corresponding dirty bit is
checked to see if the block needs to be written back to secondary memory before being
replaced or if it can simply be removed. Dirty bits are used by the CPU cache and in the page
replacement algorithms of an operating system.
6. How are counters used in LRU page replacement? How are they implemented by the
operating system? [3+4]
The least recently used (LRU) page-replacement algorithm replaces the page that has
not been used for the longest period of time. In the counter implementation, the CPU
maintains a logical clock (counter) that is incremented on every memory reference, and
each page-table entry has a time-of-use field. Whenever a page is referenced, the current
counter value is copied into that page's time-of-use field. The page to be replaced is the
one with the smallest time-of-use value. This scheme requires a search of the page table
to find the LRU page, a write to memory on every reference, and handling of clock
overflow.
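A sketch of the counter scheme in Python (in reality the clock and the time-of-use fields live in hardware and the page table, not in a dictionary; the reference string is a common textbook one):

```python
import itertools

def lru_faults(refs, num_frames):
    clock = itertools.count()      # logical clock, ticks every reference
    time_of_use = {}               # page -> clock value at last reference
    faults = 0
    for page in refs:
        if page not in time_of_use:
            faults += 1
            if len(time_of_use) == num_frames:
                # Victim = page with the smallest time-of-use stamp.
                victim = min(time_of_use, key=time_of_use.get)
                del time_of_use[victim]
        time_of_use[page] = next(clock)   # stamp on every reference
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 faults with 3 frames
```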
8. What is thrashing? What are its causes? How can its occurrence be reduced? [3+3+3]
Thrashing occurs when a process does not have enough frames for the pages it is
actively using, so it page-faults continually: pages are repeatedly brought into RAM
and pushed back out to the backing store (the "virtual memory" area on disk). If the
CPU spends more time servicing these page faults than doing useful work, the system
is thrashing. Its main cause is a degree of multiprogramming that is too high for the
available memory. Its occurrence can be reduced by:
1. instructing the mid-term scheduler to swap out some of the processes, to recover
from thrashing;
2. instructing the dispatcher not to load more processes after a threshold.
10. Explain the differences between global and local page-replacement algorithms. [4]
When a process incurs a page fault, a local page replacement algorithm selects for replacement
some page that belongs to that same process (or a group of processes sharing a memory partition).
A global replacement algorithm is free to select any page in memory.
Local page replacement assumes some form of memory partitioning that determines how many
pages are to be assigned to a given process or a group of processes. Most popular forms of
partitioning are fixed partitioning and balanced set algorithms based on the working set model. The
advantage of local page replacement is its scalability: each process can handle its page faults
independently, leading to more consistent performance for that process. However, global
page replacement is more efficient on an overall system basis.
11. What is non-uniform memory access (NUMA)? [4]
Disk scheduling
1. What is disk partitioning? What are the problems in storing data on a raw disk that is
not partitioned?[2]
Disk partitioning or disk slicing is the creation of one or more regions on a hard disk or
other secondary storage, so that an operating system can manage information in each
region separately. These regions are called partitions.
A Bitmap or Bit Vector is a series or collection of bits where each bit corresponds to a
disk block. The bit can take two values, 0 and 1: 0 indicates that the block is
allocated and 1 indicates a free block.
For example, on a 16-block disk where only blocks 4, 5, 6, 13 and 14 are free, the
bitmap is: 0000111000000110.
Advantages –
Simple to understand.
Finding the first free block is efficient. It requires scanning the bitmap word
by word for a non-zero word (a 0-valued word corresponds to a run of
allocated blocks). The first free block is then found by scanning for the first
1 bit in the non-zero word.
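That word-by-word scan can be sketched in Python as below, assuming 8-bit words and the 0 = allocated / 1 = free convention used above; the bitmap string is the example from the text.

```python
def first_free_block(bitmap):
    # Scan 8-bit words for one containing a 1 bit (i.e., a free block),
    # then locate the first 1 bit inside that word.
    for word_start in range(0, len(bitmap), 8):
        word = bitmap[word_start:word_start + 8]
        if '1' in word:                      # non-zero word found
            return word_start + word.index('1')
    return -1                                # disk is full

print(first_free_block("0000111000000110"))  # 4: block 4 is the first free
```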
3. Explain the differences between linked allocation and contiguous allocation for file
systems. Specify the advantages and disadvantages of both. [2+4]
Contiguous allocation
each file occupies a contiguous set of blocks on the disk
each directory entry contains the file name, the starting block address and the
length of the file (in blocks)
Linked allocation
each data block contains the block address of the next block in the file
each directory entry contains:
o file name
o block address: pointer to the first block
o sometimes, also have a pointer to the last block (adding to the end
of the file is much faster using this pointer)
File Systems
1. What are the different attributes of a file? What operations can be performed on files?
[3+3]
2. What is the benefit of using a file-open count in managing open-file table entries? [3]
3. Explain the difference between sequential and direct access for files. How can sequential
access be simulated on a direct access file? [3+3]
Sequential access must begin at the beginning and access each element in order, one
after the other. Direct access allows any element to be accessed directly by locating it
via its index number or address. Arrays allow direct access; magnetic tape has only
sequential access, but CDs have direct access. Sequential access can be simulated on a
direct-access file by keeping a current-position pointer cp: read next simply performs
a direct read of block cp and then increments cp, as sketched below.
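A minimal sketch of that simulation in Python; the SequentialView class and its block list are invented stand-ins for a real direct-access file.

```python
class SequentialView:
    """Sequential access layered on top of a direct-access block store."""
    def __init__(self, blocks):
        self.blocks = blocks     # direct access: blocks[i] readable directly
        self.cp = 0              # current-position pointer

    def read_next(self):
        data = self.blocks[self.cp]   # a direct read of block cp ...
        self.cp += 1                  # ... followed by cp = cp + 1
        return data

    def reset(self):
        self.cp = 0                   # rewind to the beginning of the file

f = SequentialView(["blk0", "blk1", "blk2"])
print(f.read_next(), f.read_next())   # blk0 blk1
```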
4. Explain the difference between the instructions read n and read next when reading from a
file. [3]
1. Single level directory: In a single-level directory system, all the files are placed in
one directory. This is very common on single-user OSs. A single-level directory has
significant limitations, however, when the number of files increases or when there is
more than one user. Since all files are in the same directory, they must have unique
names. If two users call their data file "test", the unique-name rule is violated.
Although file names are generally selected to reflect the content of the file, they are
often quite limited in length. Even with a single user, as the number of files
increases, it becomes difficult to remember the names of all the files in order to
create only files with unique names.
2. Two level directory: In the two-level directory system, the system maintains a
master block that has one entry for each user. This master block contains the
addresses of the directory of the users. There are still problems with two level
directory structures. This structure effectively isolates one user from another. This is
an advantage when the users are completely independent, but a disadvantage when
the users want to cooperate on some task and access files of other users. Some
systems simply do not allow local files to be accessed by other users.
7. Explain the differences between user file directory (UFD) and master file directory
(MFD). [5]
In a two-level directory structure, each user has a user file directory (UFD) that lists that
user's files, while the master file directory (MFD) is indexed by user name, with each entry
pointing to the UFD of that user. The account below, describing the archival of the ITS file
system, illustrates both structures in practice.
A crucial set of data structures that must be translated are the User File Directories (UFDs),
which are similar to the directories of today. The UFDs are essential for preserving any file
links in a directory. Furthermore, on the incremental backups, one cannot tell what files were
on the original file system but were not included on that incremental without decoding this
information. Therefore, it is critical to make some sense out of these data structures, since we
cannot depend on future archivists to decipher this raw information.
The UFD is a much more difficult structure to interpret than the MFD for two reasons. First, the
UFD keeps critical information in structures that can be decoded only by interpreting PDP-10 byte
pointers. Second, the UFD uses a custom method to track disk block allocation, which must be
interpreted to determine file length.
The Master File Directory (MFD) was an essential data structure on any ITS file system, so any
effort to preserve the rest of the data should include this structure too. The MFD is similar to the
modern-day "root directory" in a hierarchical file system, except that ITS had a flat file system with
only one level of directories. So, the MFD contained a listing of all of the directories on disk. Each
user had his/her own directory and each directory had a unique index number associated with it.
The author of a file was recorded as the index number to his directory in the MFD. In essence, the
only information the MFD provides us is the user ID number to user name mapping required to
determine whose files are whose.
Decoding the MFD is simple as long as one understands some of the conventions used in ITS data
structures. It was our hope that, by translating this in a rational manner, we would be the last people
required to understand the format of the MFD.
The MFD is basically an array of usernames encoded in "sixbit"; the index number is determined
by the position of the name in the array. Sixbit is a method of encoding characters in 36-bit words in
which each character is 6 bits long, for a total of 6 characters per word. Sixbit does not include the
lower-case characters, so all user names were in capital letters. Each user name was limited to 6
characters, so it would fit in one 36-bit word. The translation from the array of sixbit user names to
an array of ASCII user names was straightforward.
If the MFD was included on a backup tape, it was always the first file. Therefore, the archivist
software looks for the presence of the MFD and then decodes it as mentioned above. The raw and
translated forms of the MFD are then written out in TCFS format as with the rest of the files.
However, the archivist also keeps a copy of the MFD in memory so it can determine the user name
associated with any files it finds later on the tape.
8. Describe the purpose of using the classifications (i) owner (ii) group and (iii) universe to
enforce access control on files. [6]
File-System Implementation
1. What is a file control block (FCB)?
The FCB block contains information about the drive name, filename, file type and
other information that is required by the system when a file is accessed or
created.
2. What is the purpose of using a file descriptor (also called a file handle)?[5]
In Unix and related computer operating systems, a file descriptor (FD, less
frequently fildes) is an abstract indicator (handle) used to access a file or other
input/output resource, such as a pipe or network socket. File descriptors form
part of the POSIX application programming interface.
3. Explain the differences between (i) contiguous (ii) linked and (iii) indexed allocation for file
systems with examples. Which of these methods suffer(s) from external fragmentation and
why? [6+2]
1. Contiguous Allocation
In this scheme, each file occupies a contiguous set of blocks on the disk. For example, if a file
requires n blocks and is given a block b as the starting location, then the blocks assigned to the file
will be: b, b+1, b+2,……b+n-1. This means that given the starting block address and the length of
the file (in terms of blocks required), we can determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation contains the address of the
starting block and the length of the area allocated to the file.
For example, a file ‘mail’ that starts at block 19 with length = 6 blocks occupies blocks
19, 20, 21, 22, 23 and 24.
Advantages:
Both the Sequential and Direct Accesses are supported by this. For direct access, the address
of the kth block of the file which starts at block b can easily be obtained as (b+k).
This is extremely fast since the number of seeks are minimal because of contiguous
allocation of file blocks.
Disadvantages:
This method suffers from both internal and external fragmentation. This makes it inefficient
in terms of memory utilization.
Increasing file size is difficult because it depends on the availability of contiguous memory
at a particular instance.
2. Linked Allocation
In this scheme, each file is a linked list of disk blocks which need not be contiguous. The disk
blocks can be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and the ending file block. Each block contains
a pointer to the next block occupied by the file.
The file ‘jeep’, whose blocks are scattered randomly over the disk, illustrates this: the last
block (25) contains -1, indicating a null pointer that does not point to any other block.
Advantages:
This is very flexible in terms of file size. File size can be increased easily since the system
does not have to look for a contiguous chunk of memory.
This method does not suffer from external fragmentation. This makes it relatively better in
terms of memory utilization.
Disadvantages:
Because the file blocks are distributed randomly on the disk, a large number of seeks are
needed to access every block individually. This makes linked allocation slower.
It does not support random or direct access. We can not directly access the blocks of a file.
A block k of a file can be accessed by traversing k blocks sequentially (sequential access )
from the starting block of the file via block pointers.
Pointers required in the linked allocation incur some extra overhead.
3. Indexed Allocation
In this scheme, a special block known as the Index block contains the pointers to all the blocks
occupied by a file. Each file has its own index block. The ith entry in the index block contains the
disk address of the ith file block. The directory entry contains the address of the index block.
Advantages:
This supports direct access to the blocks occupied by the file and therefore provides fast
access to the file blocks.
It overcomes the problem of external fragmentation.
Disadvantages:
The pointer overhead for indexed allocation is greater than linked allocation.
For very small files, say files that span only 2-3 blocks, the indexed allocation would
keep one entire block (index block) for the pointers which is inefficient in terms of memory
utilization. However, in linked allocation we lose the space of only 1 pointer per block.
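To make the access-cost contrast concrete, here is a sketch of how the kth block of a file is located under each scheme. The disk structures are simple stand-ins, with block numbers loosely echoing the examples above.

```python
def kth_block_contiguous(start, k):
    return start + k                 # one addition: true direct access

def kth_block_linked(next_block, start, k):
    block = start
    for _ in range(k):               # must chase k pointers: sequential only
        block = next_block[block]
    return block

def kth_block_indexed(index_block, k):
    return index_block[k]            # one lookup in the index block

print(kth_block_contiguous(19, 5))                            # 24
print(kth_block_linked({9: 16, 16: 1, 1: 10, 10: 25}, 9, 3))  # 10
print(kth_block_indexed([9, 16, 1, 10, 25], 4))               # 25
```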
4. “Contiguous allocation supports both sequential and direct access”: Justify this statement
with examples. [4]
Accessing a file that has been allocated contiguously is easy. For sequential access, the file
system remembers the disk address of the last block referenced and, when necessary, reads
the next block. For direct access to block i of a file that starts at block b, we can immediately
access block b+i. Thus contiguous allocation supports both sequential and direct access.
A problem with contiguous allocation is determining how much space is needed for a file.
If too little space is allocated, the file may not be extendable as it grows in size. On the
other hand, if we allocate a large size, we may suffer from internal fragmentation. So the
total amount of space needed for a file must be known in advance.
But even after that, preallocation may be insufficient. A file that will grow slowly over a
long period (months or years) must be allocated enough space for its final size, even
though much of that space will be unused for a long time. The file therefore will have a
large amount of internal fragmentation.
To minimize these drawbacks, a contiguous chunk of space, called an extent, is
allocated initially; then, if that amount proves not to be large enough, another chunk is
added.
6. For linked allocation, explain the following schemes: (i) Linked scheme (ii) Multilevel
index scheme and (iii) Combined scheme. [6]
Galvin page 410-411
The directory entry contains the block number of the first block of the file. The table entry
indexed by that block number contains the block number of the next block in the file. The
chain continues until the last block, which has a special end-of-file value as the table entry.
Unused blocks are indicated by a 0 table value. Allocating a new block to a file is a simple
matter of finding the first 0-valued table entry and replacing the previous end-of-file value
with the address of the new block. The 0 is then replaced with the end-of-file value.
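A small sketch of this table-driven scheme in Python; the table contents are invented for illustration, with -1 as the end-of-file value and 0 marking a free entry.

```python
EOF = -1
# fat[b] is the table entry for block b: the next block of the file,
# EOF for the last block of a file, or 0 for an unused (free) block.
fat = {217: 618, 618: 339, 339: EOF, 400: 0, 512: 0}

def file_blocks(first_block):
    """Follow the chain of table entries from the file's first block."""
    blocks, b = [], first_block
    while b != EOF:
        blocks.append(b)
        b = fat[b]
    return blocks

def allocate_block():
    """Find the first 0-valued (free) entry and hand it out."""
    for b, entry in fat.items():
        if entry == 0:
            fat[b] = EOF        # the new block becomes a file's last block
            return b
    return None                 # no free blocks

print(file_blocks(217))   # [217, 618, 339]
print(allocate_block())   # 400: first free entry found
```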
To keep track of free disk space, the operating system maintains a free-space list. A
bit vector is an approach where the free-space list is implemented as a bitmap, with
one bit for each block. If the block is free then the bit is 1, otherwise it is 0. Initially
all the blocks are free, so every bit in the bit vector is 1. As space allocation
proceeds, the file system starts allocating blocks to the files and setting the
respective bits to 0.
The main advantage of this approach is its relative simplicity and its efficiency in finding
the first free block or n consecutive free blocks on the disk.
Secondary-Storage Structure
1. What is the difference between seek time and rotational latency in the context of
accessing a particular sector within a cylinder? [5]