22CS304 - Operating Systems (Lab Integrated) - Answer Key
4. What scheduling policy will you use for each of the following cases?
a. The processes arrive at large time intervals. - FCFS
b. The system's efficiency is measured by the percentage of jobs completed. - SJN (Shortest Job Next), which maximizes throughput
c. All the processes take almost equal amounts of time to complete. - Round Robin (RR)
d. Processes are assigned priorities. - Priority scheduling (a non-preemptive algorithm)
8. Consider the scenario: if it takes 20 ns to search the TLB, 100 ns to access memory, and 100 ns to
search the page table, find the effective access time for a 98% hit ratio.
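The worked answer is missing from this copy. Under the common textbook convention that a TLB hit costs one TLB search plus one memory access, while a miss additionally pays the page-table search, the calculation can be sketched as:

```python
# Effective access time (EAT) with a TLB. Convention assumed here: a miss
# pays the TLB search, then the page-table search, then the memory access.
def effective_access_time(tlb_ns, mem_ns, page_table_ns, hit_ratio):
    hit_cost = tlb_ns + mem_ns                    # hit: search TLB, access memory
    miss_cost = tlb_ns + page_table_ns + mem_ns   # miss: also walk the page table
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

# 0.98 * 120 + 0.02 * 220 = 117.6 + 4.4 = 122 ns
print(effective_access_time(20, 100, 100, 0.98))
```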
OR
11. b) i) Explain the steps involved in process creation and termination. 7
ii) Outline the purpose of system calls and summarize the types of system calls. 6
• System calls are a way for programs to interact with the operating system's kernel.
• They provide a standardized interface for executing system-level functions, such as process
management, file access, network communication, and memory allocation.
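The process-creation and termination steps, together with the process-management class of system calls, can be sketched with the POSIX fork/exec/wait calls (a POSIX-only sketch; the command `true` is assumed to be on the PATH):

```python
import os

def run_child(argv):
    """Create a child (fork), load a new program into it (exec),
    and reap its exit status in the parent (wait)."""
    pid = os.fork()                       # fork(): create the child process
    if pid == 0:
        os.execvp(argv[0], argv)          # exec: replace the child's image
        os._exit(127)                     # only reached if exec fails
    _, status = os.waitpid(pid, 0)        # wait(): block until the child exits
    return os.waitstatus_to_exitcode(status)

print(run_child(["true"]))                # exit status of the child (0 for `true`)
```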
12. a) i) Consider the following set of processes with burst time and priority. 13
The processes are assumed to have arrived in the order P1, P2, P3, P4, P5,
all at time 0.
a. Draw four Gantt charts that illustrate the execution of these processes
using the scheduling algorithms: FCFS, SJF, nonpreemptive priority
(a larger priority number implies a higher priority), and Round Robin
(quantum = 2).
b. Calculate the turnaround time and waiting time of each process for
each of the scheduling algorithms.
c. Which of the algorithms results in the minimum average waiting
time?
a. The four Gantt charts:
b. Turnaround time:
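The burst-time table and the worked Gantt charts did not survive in this copy. As a sketch only (the burst times below are hypothetical placeholders, not the exam's values), FCFS waiting and turnaround times for processes that all arrive at time 0 can be computed like this:

```python
# FCFS waiting/turnaround times for processes arriving at time 0, in order.
# The burst times below are hypothetical placeholders.
def fcfs(bursts):
    waiting, clock = [], 0
    for b in bursts:
        waiting.append(clock)       # each process waits for all earlier ones
        clock += b
    turnaround = [w + b for w, b in zip(waiting, bursts)]
    return waiting, turnaround

w, t = fcfs([10, 1, 2, 1, 5])       # hypothetical bursts for P1..P5
print(w, t)                         # waiting [0, 10, 11, 13, 14]
```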
(OR)
12. b) i) Outline the types of multithreaded models. 7
(OR)
13. b) i) Consider the following current state of a system. 13
The Need matrix is calculated by subtracting the Allocation matrix from the Max matrix (Need = Max - Allocation).
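The matrices themselves were lost in this copy, but the stated rule is an element-wise subtraction; a sketch with illustrative values:

```python
# Need = Max - Allocation, element-wise (values here are illustrative only).
max_matrix = [[7, 5, 3], [3, 2, 2], [9, 0, 2]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2]]

need = [[m - a for m, a in zip(mrow, arow)]
        for mrow, arow in zip(max_matrix, allocation)]
print(need)    # [[7, 4, 3], [1, 2, 2], [6, 0, 0]]
```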
14. a) i) Demonstrate how the paging memory management scheme avoids external 7
fragmentation.
• Paging avoids external fragmentation because physical memory is allocated in fixed-size frames:
any free frame can hold any page, so no free hole is ever too small to be used.
• When segmentation is combined with paging, each segment is divided into pages and a page table
is added for every segment.
• When the size of a page table becomes large, the page table is also divided into pages
and an outer page table is also added.
• This is called paged segmentation and is used in the Intel 80386 architecture.
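Because every frame has the same fixed size, translating a logical address is a pure split into page number and offset. A minimal sketch, assuming 4 KB pages and an illustrative page table:

```python
PAGE_SIZE = 4096                          # 4 KB pages (assumed for illustration)

def translate(logical_addr, page_table):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]              # any free frame can hold any page,
    return frame * PAGE_SIZE + offset     # so no external fragmentation arises

# page 1, offset 904 -> frame 7 -> 7*4096 + 904 = 29576
print(translate(5000, {0: 3, 1: 7}))
```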
ii) Given six memory partitions of 300 KB, 600 KB, 350 KB, 200 KB, 750 KB, and 125 KB 6
(in order), how would the first-fit, best-fit, and worst-fit algorithms place processes of size
115 KB, 500 KB, 358 KB, 200 KB, and 375 KB (in order)? Rank the algorithms in terms of
how efficiently they use memory.
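The worked placement is missing from this copy. A sketch that simulates the three strategies (each placement shrinks the chosen hole) reproduces the usual result for these numbers: first-fit and best-fit place all five processes, while worst-fit cannot place the final 375 KB process.

```python
def allocate(partitions, processes, strategy):
    """Place each process in a hole chosen by the given strategy;
    the hole shrinks by the process size. Returns hole indices (None = waits)."""
    holes = list(partitions)
    placements = []
    for size in processes:
        fits = [i for i, h in enumerate(holes) if h >= size]
        if not fits:
            placements.append(None)                  # no hole big enough
            continue
        if strategy == "first":
            i = fits[0]                               # first hole that fits
        elif strategy == "best":
            i = min(fits, key=lambda j: holes[j])     # smallest hole that fits
        else:                                         # "worst"
            i = max(fits, key=lambda j: holes[j])     # largest hole
        holes[i] -= size
        placements.append(i)
    return placements

parts = [300, 600, 350, 200, 750, 125]
procs = [115, 500, 358, 200, 375]
for s in ("first", "best", "worst"):
    print(s, allocate(parts, procs, s))
# first and best place all five; worst cannot place the 375 KB process
```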
OR
14. b) i) Consider the following page reference string: 13
1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6
How many page faults would occur for the following replacement algorithms, assuming
four frames? Remember that all frames are initially empty, so your first unique pages will
cost one fault each.
a. LRU replacement
b. FIFO replacement
c. Optimal replacement
LRU, 4 frames:
Reference: 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Fault:     F F F F . . F F . . . F F F . . F . . .
10 faults
FIFO, 4 frames:
Reference: 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Fault:     F F F F . . F F F F . F F F . F F . F .
14 faults
Optimal, 4 frames:
Reference: 1 2 3 4 2 1 5 6 2 1 2 3 7 6 3 2 1 2 3 6
Fault:     F F F F . . F F . . . . F . . . F . . .
8 faults
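The three fault counts can be checked with a small simulator (a sketch of one straightforward way to implement each policy), which reproduces 10, 14, and 8 faults:

```python
def simulate(refs, frames, policy):
    """Count page faults for 'fifo', 'lru', or 'opt' with the given frame count."""
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            if policy == "lru":                  # refresh recency on a hit
                mem.remove(p)
                mem.append(p)
            continue
        faults += 1
        if len(mem) == frames:
            if policy == "opt":                  # evict page used farthest ahead
                future = refs[i + 1:]
                victim = max(mem, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                mem.remove(victim)
            else:                                # fifo and lru both evict mem[0]
                mem.pop(0)
        mem.append(p)
    return faults

refs = [1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6]
print([simulate(refs, 4, p) for p in ("lru", "fifo", "opt")])   # [10, 14, 8]
```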
Linked Allocation: In this technique, each file is represented by a linked list of disk blocks.
When a file is created, the operating system finds free blocks anywhere on the disk and links
them to form a chain. This method is simple to implement and avoids external fragmentation,
but only sequential access is efficient, each block spends space on a pointer, and a damaged
pointer can break the chain.
Contiguous Allocation: In this technique, each file is stored as a contiguous block of disk
space. When a file is created, the operating system finds a contiguous block of free space and
assigns it to the file. This method is efficient for both sequential and direct access, but it
suffers from external fragmentation and makes it difficult to grow files.
Indexed Allocation: In this technique, a separate index block is used to store the addresses of all
the disk blocks that make up a file. When a file is created, the operating system creates an index
block and stores the addresses of the file's blocks in it. This method supports direct access and
avoids external fragmentation, at the cost of one extra index block per file.
File Allocation Table (FAT): This technique is a variation of linked allocation in which the
operating system keeps all the links in a single table at the beginning of the volume. When a
file is created, the operating system updates the file allocation table with the chain of disk
blocks that make up the file. This method is widely used in Microsoft Windows operating systems.
Volume Shadow Copy: This is a snapshot technology used in Microsoft Windows operating systems
to create backup copies of files or entire volumes; it is a data-protection feature rather than
a disk allocation method. When a file is modified, the operating system creates a shadow copy of
the file and stores it in a separate location, which is useful for data recovery and protection
against accidental file deletion.
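The FAT scheme is essentially linked allocation with the links gathered into one table: following a file's chain becomes a series of table lookups. A minimal sketch with made-up block numbers:

```python
# A tiny FAT: fat[block] gives the next block of the file; EOF marks the end.
EOF = -1
fat = {217: 618, 618: 339, 339: EOF}      # made-up chain: 217 -> 618 -> 339

def file_blocks(start):
    """Walk the FAT chain from the file's starting block."""
    blocks, b = [], start
    while b != EOF:
        blocks.append(b)
        b = fat[b]                        # one table lookup per block
    return blocks

print(file_blocks(217))                   # [217, 618, 339]
```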
OR
15. b) i) Explain the various disk allocation methods with illustration. 13
Part C (1 x 15 = 15 Marks)
16. a) i) Analyze why interrupt and dispatch latency times must be bounded in a hard 8
real-time system
Answer: Interrupt latency is the period of time required to perform the following tasks: save the
currently executing instruction, determine the type of interrupt, save the current process state,
and then invoke the appropriate interrupt service routine. Dispatch latency is the cost associated
with stopping one process and starting another. Both interrupt and dispatch latency need to be
minimized in order to ensure that real-time tasks receive immediate attention. Furthermore,
interrupts are sometimes disabled while kernel data structures are being modified, so an
interrupt may not be serviced immediately. For hard real-time systems, the time period for
which interrupts are disabled must be bounded in order to guarantee the desired quality of
service.
(ii) How does the signal () operation associated with monitors differ from the 7
corresponding operation defined for semaphores?
The signal() operation associated with monitors is not persistent. If a signal is performed and
there are no waiting threads, the signal is simply ignored and the system does not remember that
it took place; a subsequent wait operation simply blocks. With semaphores, in contrast, every
signal results in a corresponding increment of the semaphore value even if there are no waiting
threads, so a future wait operation would immediately succeed because of the earlier increment.
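The difference can be demonstrated with Python's threading primitives, where Semaphore plays the semaphore and Condition plays the monitor's condition variable:

```python
import threading

# Semaphores remember signals: a release() with no waiter still lets a
# later acquire() succeed immediately.
sem = threading.Semaphore(0)
sem.release()                       # signal with no one waiting
got = sem.acquire(timeout=1)        # succeeds because the signal persisted
print("semaphore remembered the signal:", got)

# A monitor's signal does not persist: notify() with no waiting thread is
# simply discarded, so a later wait() blocks (here, until it times out).
cond = threading.Condition()
with cond:
    cond.notify()                   # no waiter: the signal is lost
with cond:
    woke = cond.wait(timeout=0.2)   # times out; the notify was not remembered
print("condition remembered the signal:", woke)
```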
OR
16. b) i) Suppose that a disk drive has 5,000 cylinders, numbered 0 to 4,999. The 8
drive is currently serving a request at cylinder 2,150, and the previous request was at
cylinder 1,805. The queue of pending requests, in FIFO order, is: 2,069, 1,212, 2,296, 2,800,
544, 1,618, 356, 1,523, 4,965, 3681. Starting from the current head position, what is the
total distance (in cylinders) that the disk arm moves to satisfy all the pending requests for
each of the following disk-scheduling algorithms?
a. FCFS
b. SSTF
c. SCAN
d. LOOK
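The distances are omitted from this copy. A sketch that computes all four follows; it assumes the head is moving toward higher cylinders (since the previous request was at 1,805), that SCAN sweeps to the last cylinder 4,999 before reversing, and that LOOK reverses at the last pending request:

```python
head, last_cyl = 2150, 4999
queue = [2069, 1212, 2296, 2800, 544, 1618, 356, 1523, 4965, 3681]

def total(order, start=head):
    """Total arm movement when servicing cylinders in the given order."""
    return sum(abs(b - a) for a, b in zip([start] + order[:-1], order))

# FCFS: service in arrival order.
fcfs = total(queue)

# SSTF: always service the closest pending request next.
pending, pos, sstf = list(queue), head, 0
while pending:
    nxt = min(pending, key=lambda c: abs(c - pos))
    sstf += abs(nxt - pos)
    pos = nxt
    pending.remove(nxt)

# SCAN: sweep up to the end of the disk (4999), then back down.
up = sorted(c for c in queue if c >= head)
down = sorted((c for c in queue if c < head), reverse=True)
scan = total(up + [last_cyl] + down)

# LOOK: sweep up only as far as the last request, then back down.
look = total(up + down)

print(fcfs, sstf, scan, look)    # 13011 7586 7492 7424
```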
ii) Illustrate how the page faults are handled in demand paging technique. 7
• The basic idea behind paging is that when a process is swapped in, the pager only loads into
memory those pages that it expects the process to need ( right away. )
• Pages that are not loaded into memory are marked as invalid in the page table, using the invalid
bit. ( The rest of the page table entry may either be blank or contain information about where to
find the swapped-out page on the hard drive. )
• If the process only ever accesses pages that are loaded in memory ( memory resident pages ),
then the process runs exactly as if all of its pages were loaded into memory.
• On the other hand, if a page is needed that was not originally loaded up, then a page fault
trap is generated, which must be handled in a series of steps:
1. The memory address requested is first checked, to make sure it was a valid memory
request.
2. If the reference was invalid, the process is terminated. Otherwise, the page must be paged
in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk. ( This will usually
block the process on an I/O wait, allowing some other process to use the CPU in the
meantime. )
5. When the I/O operation is complete, the process’s page table is updated with the new
frame number, and the invalid bit is changed to indicate that this is now a valid page
reference.
6. The instruction that caused the page fault must now be restarted from the beginning, ( as
soon as this process gets another turn on the CPU. )
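The six steps above can be sketched as a tiny simulator (names are illustrative; it assumes enough free frames, so no replacement is modeled, and the disk read is only simulated):

```python
class DemandPager:
    """Illustrative sketch of the page-fault handling steps above."""
    def __init__(self, num_frames, valid_pages):
        self.free_frames = list(range(num_frames))   # step 3: free-frame list
        self.page_table = {}                         # page -> frame (valid only)
        self.valid_pages = valid_pages               # pages the process owns
        self.faults = 0

    def access(self, page):
        if page in self.page_table:                  # memory resident: no fault
            return self.page_table[page]
        # --- page-fault trap ---
        if page not in self.valid_pages:             # steps 1-2: validate
            raise MemoryError(f"invalid reference: page {page}")
        self.faults += 1
        frame = self.free_frames.pop()               # step 3: grab a free frame
        # step 4: a real pager would block here on the disk read
        self.page_table[page] = frame                # step 5: update page table
        return self.access(page)                     # step 6: restart the access

pager = DemandPager(num_frames=4, valid_pages={0, 1, 2, 3})
pager.access(0)                    # first touch: page fault
pager.access(0)                    # now memory resident: no fault
print(pager.faults)                # 1
```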
• In an extreme case, NO pages are swapped in for a process until they are requested by page faults.
This is known as pure demand paging.
• In theory each instruction could generate multiple page faults. In practice this is very rare, due
to locality of reference, covered in section 9.6.1.
• The hardware necessary to support virtual memory is the same as for paging and swapping: A
page table and secondary memory. ( Swap space, whose allocation is discussed in chapter 12. )
• A crucial part of the process is that the instruction must be restarted from scratch once the
desired page has been made available in memory. For most simple instructions this is not a major
difficulty. However there are some architectures that allow a single instruction to modify a fairly
large block of data, ( which may span a page boundary ), and if some of the data gets modified
before the page fault occurs, this could cause problems. One solution is to access both ends of the
block before executing the instruction, guaranteeing that the necessary pages get paged in before
the instruction begins.
*********************************