Gate OS
System calls, Processes, Threads, Inter-process communication, Concurrency and synchronization, Deadlock, CPU scheduling,
Memory management and Virtual memory, File systems, and Disks.
5.1.1 Context Switch: GATE CSE 1999 | Question: 2.12 top☝ ☛ https://gateoverflow.in/1490
Which of the following actions is/are typically not performed by the operating system when switching context from process A
to process B?
A. Saving current register values and restoring saved register values for process B.
B. Changing address translation tables.
C. Swapping out the memory image of process A to the disk.
D. Invalidating the translation look-aside buffer.
Answer ☟
5.1.2 Context Switch: GATE CSE 2000 | Question: 1.20, ISRO2008-47 top☝ ☛ https://gateoverflow.in/644
Which of the following need not necessarily be saved on a context switch between processes?
A. General purpose registers
B. Translation look-aside buffer
C. Program counter
D. All of the above
Answer ☟
5.1.3 Context Switch: GATE CSE 2011 | Question: 6, UGCNET-June2013-III: 62 top☝ ☛ https://gateoverflow.in/2108
Let the time taken to switch from user mode to kernel mode of execution be T1 while the time taken to switch between two user
processes be T2. Which of the following is correct?
A. T1 > T2
B. T1 = T2
C. T1 < T2
D. Nothing can be said about the relation between T1 and T2
Answer ☟
5.1.1 Context Switch: GATE CSE 1999 | Question: 2.12 top☝ ☛ https://gateoverflow.in/1490
Processes are generally swapped out from memory to disk (secondary storage) when they are suspended. So, processes
are not swapped out during a context switch.
TLB: whenever a page table entry is referenced for the first time, it is temporarily cached in the TLB. Every TLB entry has a
tag, and each address translation is first looked up in the TLB, so the translation can be obtained with fewer memory accesses.
Invalidating the TLB means resetting (flushing) it, which is necessary because a TLB entry may belong to the page table of any
process; flushing ensures that every entry corresponds to the currently running process.
5.1.2 Context Switch: GATE CSE 2000 | Question: 1.20, ISRO2008-47 top☝ ☛ https://gateoverflow.in/644
Answer: (B)
We don't need to save the TLB or the cache to ensure correct program resumption; they are just a bonus for better performance.
But the PC, stack and registers must be saved, as otherwise the program cannot resume.
5.1.3 Context Switch: GATE CSE 2011 | Question: 6, UGCNET-June2013-III: 62 top☝ ☛ https://gateoverflow.in/2108
The time taken to switch between two processes is much larger than the time taken to switch between kernel and user mode of
execution, because:
When you switch processes, you must do a full context switch: save the PCB of the previous process (note that the PCB of a process
in Linux has over 95 fields), save its registers, then load the PCB of the new process and restore its registers, etc.
When you switch between kernel and user mode of execution, the OS has to change just a single mode bit at the hardware level,
which is a very fast operation.
So, the answer is: (C).
Note that context switches can occur only in kernel mode, so a process switch first requires a switch from user mode to kernel
mode, followed by the context switch proper (saving the PCB of the previous process and loading the PCB of the new process).
79 votes -- Sachin Mittal (15.8k points)
5.2.1 Deadlock Prevention Avoidance Detection: GATE CSE 2018 | Question: 24 top☝ ☛ https://gateoverflow.in/204098
Consider a system with 3 processes that share 4 instances of the same resource type. Each process can request a maximum of
K instances. Resources can be requested and released only one at a time. The largest value of K that will always avoid deadlock is
___
Answer ☟
5.2.2 Deadlock Prevention Avoidance Detection: GATE CSE 2018 | Question: 39 top☝ ☛ https://gateoverflow.in/204113
In a system, there are three types of resources: E, F and G. Four processes P0 , P1 , P2 and P3 execute concurrently. At the
outset, the processes have declared their maximum resource requirements using a matrix named Max as given below. For example,
Max[P2 , F ] is the maximum number of instances of F that P2 would require. The number of instances of the resources allocated to
the various processes at any given state is given by a matrix named Allocation.
Consider a state of the system with the Allocation matrix as shown below, and in which 3 instances of E and 3 instances of F are
the only resources available.
From the perspective of deadlock avoidance, which one of the following is true?
Answer ☟
5.2.3 Deadlock Prevention Avoidance Detection: GATE CSE 2021 Set 2 | Question: 43 top☝ ☛ https://gateoverflow.in/357497
Consider a computer system with multiple shared resource types, with one instance per resource type. Each instance can be
owned by only one process at a time. Owning and freeing of resources are done by holding a global lock (L) . The following scheme
is used to own a resource instance:
function OWNRESOURCE(Resource R)
    Acquire lock L    // a global lock
    if R is available then
        Acquire R
        Release lock L
    else
        if R is owned by another process P then
            Terminate P, after releasing all resources owned by P
            Acquire R
            Restart P
            Release lock L
        end if
    end if
end function
Which of the following choice(s) about the above scheme is/are correct?
Answer ☟
5.2.4 Deadlock Prevention Avoidance Detection: GATE IT 2004 | Question: 63 top☝ ☛ https://gateoverflow.in/3706
In a certain operating system, deadlock prevention is attempted using the following scheme. Each process is assigned a unique
timestamp, and is restarted with the same timestamp if killed. Let Ph be the process holding a resource R, Pr be a process
requesting for the same resource R, and T (Ph ) and T (Pr ) be their timestamps respectively. The decision to wait or preempt one of
the processes is based on the following algorithm.
if T(Pr) < T(Ph) then
kill Pr
else wait
5.2.1 Deadlock Prevention Avoidance Detection: GATE CSE 2018 | Question: 24 top☝ ☛ https://gateoverflow.in/204098
Number of processes = 3
Number of Resources = 4
Number of processes = 3
Number of resources = 4
Give each process one less than its maximum demand, i.e. (K − 1) resources each, for a total of 3(K − 1).
One additional resource granted to any of the three processes lets it reach its maximum and finish, so deadlock is always avoided
when 3(K − 1) + 1 = 3K − 2 resources suffice.
This 3K − 2 must be at most the number of resources we have:
3K − 2 ≤ 4
3K ≤ 6
K ≤ 2
So, the largest value of K = 2.
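The bound above can be sanity-checked with a short Python sketch (not part of the original solution); `deadlock_free` encodes the worst case in which every process holds one instance less than its maximum:

```python
def deadlock_free(processes: int, resources: int, k: int) -> bool:
    """Sufficient condition for deadlock freedom with single-unit requests:
    even if every process holds K-1 instances, one spare instance must
    remain so that some process can reach its maximum and finish."""
    worst_case_held = processes * (k - 1)
    return worst_case_held + 1 <= resources

# Largest K for 3 processes sharing 4 instances of one resource type:
largest = max(k for k in range(1, 5) if deadlock_free(3, 4, k))
print(largest)  # 2
```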
68 votes -- Digvijay (44.9k points)
5.2.2 Deadlock Prevention Avoidance Detection: GATE CSE 2018 | Question: 39 top☝ ☛ https://gateoverflow.in/204113
Give (0, 3, 0) out of the (4, 3, 1) available resources to P2; P2 can then complete its execution.
After execution, it releases its resources.
Available = (4, 3, 1) + (1, 0, 3) = (5, 3, 4)
Allocate (1, 0, 2) out of (5, 3, 4) to P1; P1 completes its execution.
After execution, it releases its resources.
Available = (5, 3, 4) + (1, 1, 2) = (6, 4, 6)
And finally, allocate resources to P3.
Correct Answer: A
18 votes -- Digvijay (44.9k points)
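The safe-sequence reasoning above is the Banker's algorithm safety check. A generic sketch follows; since the exam's Max/Allocation matrices are given as images, the example vectors below are hypothetical stand-ins, not the question's data:

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: repeatedly find a process whose
    remaining need fits in the available vector, let it finish, and
    reclaim its allocation. Safe iff every process can finish."""
    work = list(available)
    finished = [False] * len(allocation)
    order = []
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finished[i] and all(n <= w for n, w in zip(nd, work)):
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                order.append(i)
                progress = True
    return all(finished), order

# Hypothetical 3-resource example (E, F, G) - not the exam's matrices.
safe, order = is_safe(available=[3, 3, 0],
                      allocation=[[1, 0, 1], [1, 1, 2], [1, 0, 3], [2, 0, 0]],
                      need=[[2, 3, 0], [1, 0, 2], [0, 3, 0], [1, 1, 1]])
print(safe, order)
```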
5.2.3 Deadlock Prevention Avoidance Detection: GATE CSE 2021 Set 2 | Question: 43 top☝ ☛ https://gateoverflow.in/357497
A system is in deadlock when all the processes are in the waiting state. This is similar to a traffic jam where no vehicle
moves.
A system is in livelock when the processes do repeated work without any progress for the system (still no useful work). This is
similar to a traffic jam where some vehicles reverse and move forward, only to hit the same block again.
Deadlock and livelock are mutually exclusive – at any point of time only one of them can happen in a system.
5.2.4 Deadlock Prevention Avoidance Detection: GATE IT 2004 | Question: 63 top☝ ☛ https://gateoverflow.in/3706
Answer is (A).
When a killed process is restarted, it retains the same timestamp it had when it was first killed, and that timestamp can never
exceed the timestamp of a process killed after it or of a newly arrived process.
So every time the killed process restarts and requests R, it may find the resource held by a process with a larger timestamp;
since T(Pr) < T(Ph) causes the requester to be killed, it is killed again. This can repeat indefinitely if new processes keep
acquiring the resource ahead of the unlucky old process every time it tries to access it.
So, starvation is possible. Deadlock is not possible, because a process only ever waits for one with a smaller timestamp, so the
wait-for relation cannot form a cycle.
5.3.1 Disk Scheduling: GATE CSE 1989 | Question: 4-xii top☝ ☛ https://gateoverflow.in/88222
Disk requests arrive at the disk driver for cylinders 10, 22, 20, 2, 40, 6 and 38, in that order, at a time when the disk drive is
reading from cylinder 20. The seek time is 6 msec per cylinder. Compute the total seek time if the disk arm scheduling algorithm is.
Answer ☟
Assuming the current disk cylinder to be 50 and the sequence for the cylinders to be 1, 36, 49, 65, 53, 12, 3, 20, 55, 16, 65
and 78 find the sequence of servicing using
Answer ☟
The head of a moving head disk with 100 tracks numbered 0 to 99 is currently serving a request at track 55. If the queue of
requests kept in FIFO order is
which of the two disk scheduling algorithms FCFS (First Come First Served) and SSTF (Shortest Seek Time First) will require less
head movement? Find the head movement for each of the algorithms.
Answer ☟
5.3.4 Disk Scheduling: GATE CSE 1999 | Question: 1.10 top☝ ☛ https://gateoverflow.in/1463
Which of the following disk scheduling strategies is likely to give the best throughput?
A. Farthest cylinder next
B. Nearest cylinder next
C. First come first served
D. Elevator algorithm
Answer ☟
Consider an operating system capable of loading and executing a single sequential user process at a time. The disk head
scheduling algorithm used is First Come First Served (FCFS). If FCFS is replaced by Shortest Seek Time First (SSTF), claimed by
the vendor to give 50% better benchmark results, what is the expected improvement in the I/O performance of user programs?
A. 50%
B. 40%
C. 25%
D. 0%
Answer ☟
Consider a disk system with 100 cylinders. The requests to access the cylinders occur in following sequence:
4, 34, 10, 7, 19, 73, 2, 15, 6, 20
Assuming that the head is currently at cylinder 50, what is the time taken to satisfy all requests if it takes 1ms to move from one
cylinder to adjacent one and shortest seek time first policy is used?
A. 95 ms
B. 119 ms
C. 233 ms
D. 276 ms
Answer ☟
5.3.7 Disk Scheduling: GATE CSE 2014 Set 1 | Question: 19 top☝ ☛ https://gateoverflow.in/1786
Suppose a disk has 201 cylinders, numbered from 0 to 200. At some time the disk arm is at cylinder 100, and there is a queue
of disk access requests for cylinders 30, 85, 90, 100, 105, 110, 135 and 145. If Shortest-Seek Time First (SSTF) is being used for
scheduling the disk access, the request for cylinder 90 is serviced after servicing ____________ number of requests.
Answer ☟
Suppose the following disk request sequence (track numbers) for a disk with 100 tracks is given:
Assume that the initial position of the R/W head is on track 50. The additional distance that will be traversed by the R/W head when
the Shortest Seek Time First (SSTF) algorithm is used compared to the SCAN (Elevator) algorithm (assuming that SCAN algorithm
moves towards 100 when it starts execution) is________________tracks.
Answer ☟
5.3.9 Disk Scheduling: GATE CSE 2016 Set 1 | Question: 48 top☝ ☛ https://gateoverflow.in/39716
Consider a disk queue with requests for I/O to blocks on cylinders 47, 38, 121, 191, 87, 11, 92, 10. The C-LOOK scheduling
algorithm is used. The head is initially at cylinder number 63, moving towards larger cylinder numbers on its servicing pass. The
cylinders are numbered from 0 to 199. The total head movement (in number of cylinders) incurred while servicing these requests
is__________.
Answer ☟
Consider the following five disk access requests of the form (request id, cylinder number) that are present in the disk
scheduler queue at a given time.
(P, 155), (Q, 85), (R, 110), (S, 30), (T , 115)
Assume the head is positioned at cylinder 100. The scheduler follows Shortest Seek Time First scheduling to service the requests.
Which one of the following statements is FALSE?
A. T is serviced before P .
B. Q is serviced after S ,but before T .
C. The head reverses its direction of movement between servicing of Q and P .
D. R is serviced before P .
Answer ☟
A disk has 200 tracks (numbered 0 through 199). At a given time, it was servicing the request of reading data from track 120,
and at the previous request, service was for track 90. The pending requests (in order of their arrival) are for track numbers.
30 70 115 130 110 80 20 25.
How many times will the head change its direction for the disk scheduling policies SSTF(Shortest Seek Time First) and FCFS (First
Come First Serve)?
A. 2 and 3
B. 3 and 3
C. 3 and 4
D. 4 and 4
Answer ☟
The head of a hard disk serves requests following the shortest seek time first (SSTF) policy. The head is initially positioned at
track number 180.
Answer ☟
The head of a hard disk serves requests following the shortest seek time first (SSTF) policy.
What is the maximum cardinality of the request set, so that the head changes its direction after servicing every request if the total
number of tracks are 2048 and the head can start from any track?
A. 9
B. 10
C. 11
D. 12
Answer ☟
5.3.1 Disk Scheduling: GATE CSE 1989 | Question: 4-xii top☝ ☛ https://gateoverflow.in/88222
A. FCFS: the service sequence is 20 → 10 → 22 → 20 → 2 → 40 → 6 → 38.
Total movement = |20 − 10| + |10 − 22| + |22 − 20| + |20 − 2| + |2 − 40| + |40 − 6| + |6 − 38| = 146 cylinders.
So, total seek time = 146 × 6 = 876 msec.
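The FCFS arithmetic can be verified with a small helper (an illustrative sketch, not part of the original answer):

```python
def total_seek(start, requests):
    """Total head movement (in cylinders) when servicing the requests
    in the given (FCFS) order from the starting cylinder."""
    pos, moved = start, 0
    for r in requests:
        moved += abs(pos - r)
        pos = r
    return moved

moves = total_seek(20, [10, 22, 20, 2, 40, 6, 38])
print(moves, moves * 6)  # 146 cylinders, 876 msec at 6 msec per cylinder
```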
1. SSTF
Sequence will be ⇒ 50, 49, 53, 55, 65, 65, 78, 36, 20, 16, 12, 3, 1
FCFS : 55 → 10 → 70 → 75 → 23 → 65 ⇒ 45 + 60 + 5 + 52 + 42 = 204.
SSTF : 55 → 65 → 70 → 75 → 23 → 10 ⇒ 10 + 5 + 5 + 52 + 13 = 85
Hence, SSTF.
5.3.4 Disk Scheduling: GATE CSE 1999 | Question: 1.10 top☝ ☛ https://gateoverflow.in/1463
A. Farthest cylinder next → a candidate for the worst algorithm, since it maximizes every seek. False.
B. Nearest cylinder next → this is SSTF, which greedily minimizes each seek and is likely to give the best throughput.
C. First come first served → services requests in arrival order, so the seek pattern is effectively random; it will not give the
best throughput.
D. Elevator algorithm → good, but once the direction is fixed the head does not come back until it has gone all the way to the
other end, so it does not give the best throughput.
Correct Answer: B
The question says "single sequential user process". So all requests to the disk scheduler arrive in sequence, each one
blocking execution until it is serviced; the scheduler never holds more than one pending request, so no disk scheduling algorithm
can reorder anything. Every algorithm services the same sequence, and hence the improvement is 0% for SSTF over FCFS.
Correct Answer: D
74 votes -- Arjun Suresh (332k points)
Answer is (B).
= (50 − 34) + (34 − 20) + (20 − 19) + (19 − 15) + (15 − 10) + (10 − 7) + (7 − 6) + (6 − 4) + (4 − 2) + (73 − 2)
= 16 + 14 + 1 + 4 + 5 + 3 + 1 + 2 + 2 + 71
= 119 ms
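The SSTF tally above can be reproduced with a short greedy simulation (an illustrative sketch, not part of the original solution):

```python
def sstf_seek(start, requests):
    """Shortest Seek Time First: repeatedly service the pending request
    nearest to the current head position; returns total head movement."""
    pending, pos, moved = list(requests), start, 0
    while pending:
        nxt = min(pending, key=lambda r: abs(pos - r))
        moved += abs(pos - nxt)
        pos = nxt
        pending.remove(nxt)
    return moved

# At 1 ms per cylinder the movement equals the time in ms.
print(sstf_seek(50, [4, 34, 10, 7, 19, 73, 2, 15, 6, 20]))  # 119
```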
5.3.7 Disk Scheduling: GATE CSE 2014 Set 1 | Question: 19 top☝ ☛ https://gateoverflow.in/1786
Requests are serviced in the following order:
5.3.8 Disk Scheduling: GATE CSE 2015 Set 1 | Question: 30 top☝ ☛ https://gateoverflow.in/8227
Refer : http://www.cs.iit.edu/~cs561/cs450/disksched/disksched.html
5.3.9 Disk Scheduling: GATE CSE 2016 Set 1 | Question: 48 top☝ ☛ https://gateoverflow.in/39716
63 → 191 = 128
191 → 10 = 181
10 → 47 = 37
Total = 346
67 votes -- Abhilash Panicker (7.6k points)
Answer is 346, as already calculated. For those in doubt about the long jump: the question asks for total head movement. When
the head reaches the last request on one side, there is no mechanism for it to jump directly to an arbitrary track; it must move
across the intervening tracks to reach the request on the other side, so that movement must be counted.
The purpose of disk scheduling algorithms is to reduce exactly this head movement; ignoring movement that actually happens in
the disk would defeat the purpose of analyzing the algorithms.
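The C-LOOK movement of 346 cylinders can be confirmed with a small simulation (an illustrative sketch; `clook_movement` is a hypothetical helper name):

```python
def clook_movement(start, requests):
    """C-LOOK: service requests in the upward direction to the largest
    pending cylinder, then traverse back to the smallest pending request
    (counting that traversal as movement) and continue upward again."""
    up = sorted(r for r in requests if r >= start)
    wrap = sorted(r for r in requests if r < start)
    pos, moved = start, 0
    for r in up + wrap:          # upward pass, then wrap to the smallest
        moved += abs(pos - r)
        pos = r
    return moved

print(clook_movement(63, [47, 38, 121, 191, 87, 11, 92, 10]))  # 346
```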
Shortest Seek Time First (SSTF) selects the request with the minimum seek time from the current head position.
In the given question disk requests are given in the form of ⟨request id, cylinder number⟩
Cylinder Queue: (P, 155), (Q, 85), (R, 110), (S, 30), (T , 115)
Head starts at: 100
Answer is (C).
SSTF (previous request 90, now at 120, so initially moving up): 120 → 115 → 110 → 130 → 80 → 70 → 30 → 25 → 20
Direction changes at 120, 110 and 130: 3 changes.
FCFS: 120 → 30 → 70 → 115 → 130 → 110 → 80 → 20 → 25
Direction changes at 120, 30, 130 and 20: 4 changes.
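The direction-change counts can be verified mechanically (an illustrative sketch, not from the original answer):

```python
def direction_changes(prev, start, order):
    """Count head direction reversals given the previous track, the
    current track, and the order in which requests are serviced."""
    changes = 0
    direction = 1 if start > prev else -1
    pos = start
    for r in order:
        new_dir = 1 if r > pos else -1
        if new_dir != direction:
            changes += 1
            direction = new_dir
        pos = r
    return changes

sstf_order = [115, 110, 130, 80, 70, 30, 25, 20]
fcfs_order = [30, 70, 115, 130, 110, 80, 20, 25]
print(direction_changes(90, 120, sstf_order),
      direction_changes(90, 120, fcfs_order))  # 3 4
```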
It should be (C).
Two conditions must be satisfied: the head must change direction after servicing every request, and SSTF must be forced to pick
the request on the opposite side, so no remaining request may be nearer on the current side.
The first condition is satisfied by never having two requests at equal distances from the current location; as shown in the
figure, no request may be located at the red-marked (mirror-image) positions.
Now, to maximize the number of requests, they must be packed as compactly as possible, which is done by placing each request at
the position just past the red-marked position in the direction the head must move next (to satisfy the first criterion).
Seek length sequence for maximum cardinality with alternating head movements:
1, 3, 7, 15, …, i.e., 2^1 − 1, 2^2 − 1, 2^3 − 1, 2^4 − 1, …
Correct Answer: C
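The 2^k − 1 seek-length argument can be simulated directly. The sketch below (hypothetical helper, assuming the minimal alternating construction described above) grows the request set until its span no longer fits on the disk:

```python
def max_alternating_requests(tracks: int) -> int:
    """Grow the minimal SSTF request set in which the head reverses after
    every service: alternating seek lengths 1, 3, 7, ..., 2^k - 1.
    Stop when the span of visited tracks exceeds the disk size."""
    pos, lo, hi = 0, 0, 0            # starting track (arbitrary) and extremes
    count, seek, direction = 0, 1, 1
    while True:
        pos += direction * seek
        if max(hi, pos) - min(lo, pos) > tracks - 1:
            return count             # the next request no longer fits
        lo, hi = min(lo, pos), max(hi, pos)
        count += 1
        seek = 2 * seek + 1          # 1, 3, 7, 15, ... = 2^k - 1
        direction = -direction

print(max_alternating_requests(2048))  # 11 -> option C
```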
Estimate the average latency, the disk storage capacity, and the data transfer rate.
Answer ☟
A certain moving arm disk storage, with one head, has the following specifications:
The average latency of this device is P ms and the data transfer rate is Q bits/sec. Write the values of P and Q.
Answer ☟
Answer ☟
If the overhead for formatting a disk is 96 bytes for a 4000 byte sector,
A. Compute the unformatted capacity of the disk for the following parameters:
Number of surfaces: 8
Outer diameter of the disk: 12 cm
Inner diameter of the disk: 4 cm
Inter-track space: 0.1 mm
Number of sectors per track: 20
B. If the disk in (A) is rotating at 360 rpm, determine the effective data transfer rate which is defined as the number of bytes
transferred per second between disk and memory.
Answer ☟
A file system with a one-level directory structure is implemented on a disk with disk block size of 4K bytes. The disk is used
as follows:
Answer ☟
A program P reads and processes 1000 consecutive records from a sequential file F stored on device D without using any
file system facilities. Given the following
Answer ☟
Answer ☟
Free disk space can be used to keep track of using a free list or a bit map. Disk addresses require d bits. For a disk with B
blocks, F of which are free, state the condition under which the free list uses less space than the bit map.
Answer ☟
Consider a disk with c cylinders, t tracks per cylinder, s sectors per track and a sector length sl. A logical file dl with fixed
record length rl is stored continuously on this disk starting at location (cL, tL, sL), where cL, tL and sL are the cylinder, track and
sector numbers, respectively. Derive the formula to calculate the disk address (i.e. cylinder, track and sector) of a logical record n
assuming that rl = sl.
Answer ☟
5.4.10 Disks: GATE CSE 1999 | Question: 2-18, ISRO2008-46 top☝ ☛ https://gateoverflow.in/1496
Answer ☟
Answer ☟
Consider a disk with the 100 tracks numbered from 0 to 99 rotating at 3000 rpm. The number of sectors per track is 100 and
the time to move the head between two successive tracks is 0.2 millisecond.
A. Consider a set of disk requests to read data from tracks 32, 7, 45, 5 and 10. Assuming that the elevator algorithm is used to
schedule disk requests, and the head is initially at track 25 moving up (towards larger track numbers), what is the total seek
time for servicing the requests?
B. Consider an initial set of 100 arbitrary disk requests and assume that no new disk requests arrive while servicing these
requests. If the head is initially at track 0 and the elevator algorithm is used to schedule disk requests, what is the worst case
time to complete all the requests?
Answer ☟
Consider a disk with the following specifications: 20 surfaces, 1000 tracks/surface, 16 sectors/track, data density 1 KB/sector,
rotation speed 3000 rpm. The operating system initiates the transfer between the disk and the memory sector-wise. Once the head
has been placed on the right track, the disk reads a sector in a single scan. It reads bits from the sector while the head is passing over
the sector. The read bits are formed into bytes in a serial-in-parallel-out buffer and each byte is then transferred to memory. The disk
Answer ☟
5.4.14 Disks: GATE CSE 2003 | Question: 25, ISRO2009-12 top☝ ☛ https://gateoverflow.in/915
Using a larger block size in a fixed block size file system leads to
Answer ☟
A Unix-style i-node has 10 direct pointers and one single, one double and one triple indirect pointer. Disk block size is
1 Kbyte, disk block address is 32 bits, and 48-bit integers are used. What is the maximum possible file size?
A. 2^24 bytes
B. 2^32 bytes
C. 2^34 bytes
D. 2^48 bytes
Answer ☟
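Assuming the usual interpretation (1 Kbyte blocks and 4-byte block addresses, so 256 addresses per indirect block), the reachable size can be tallied as below; the triple-indirect term dominates at 2^24 blocks × 2^10 bytes = 2^34 bytes, matching option C:

```python
# Tally for the i-node question: 10 direct pointers plus one single,
# one double and one triple indirect pointer; 1 Kbyte blocks and 32-bit
# (4-byte) block addresses give 256 addresses per indirect block.
block = 1024
addrs_per_block = block // 4                 # 256
blocks = 10 + addrs_per_block + addrs_per_block**2 + addrs_per_block**3
size = blocks * block                        # bytes addressable via one i-node
print(size)  # 17247250432, roughly 2^34 bytes
```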
Answer ☟
5.4.17 Disks: GATE CSE 2007 | Question: 11, ISRO2009-36, ISRO2016-21 top☝ ☛ https://gateoverflow.in/1209
Consider a disk pack with 16 surfaces, 128 tracks per surface and 256 sectors per track. 512 bytes of data are stored in a bit
serial manner in a sector. The capacity of the disk pack and the number of bits required to specify a particular sector in the disk are
respectively:
A. 256 Mbyte, 19 bits
B. 256 Mbyte, 28 bits
C. 512 Mbyte, 20 bits
D. 64 Gbyte, 28 bits
Answer ☟
For a magnetic disk with concentric circular tracks, the seek latency is not linearly proportional to the seek distance due to
Answer ☟
A hard disk has 63 sectors per track, 10 platters each with 2 recording surfaces and 1000 cylinders. The address of a sector is
given as a triple ⟨c, h, s⟩, where c is the cylinder number, h is the surface number and s is the sector number. Thus, the 0th sector is
addressed as ⟨0, 0, 0⟩ , the 1st sector as ⟨0, 0, 1⟩ , and so on
The address ⟨400, 16, 29⟩ corresponds to sector number:
A. 505035
B. 505036
C. 505037
D. 505038
Answer ☟
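Such CHS addresses convert to linear sector numbers with a one-line formula; the sketch below (hypothetical helper names) also shows the inverse mapping, which the following question about the 1039th sector uses:

```python
def sector_number(c, h, s, heads=20, sectors=63):
    """Linear sector number for CHS address <c, h, s>: `heads` recording
    surfaces per cylinder (10 platters x 2) and `sectors` per track."""
    return (c * heads + h) * sectors + s

def chs(n, heads=20, sectors=63):
    """Inverse mapping: CHS address of the n-th sector."""
    return n // (heads * sectors), (n // sectors) % heads, n % sectors

print(sector_number(400, 16, 29))  # 505037 -> option C
print(chs(1039))                   # (0, 16, 31)
```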
A hard disk has 63 sectors per track, 10 platters each with 2 recording surfaces and 1000 cylinders. The address of a sector is
given as a triple ⟨c, h, s⟩, where c is the cylinder number, h is the surface number and s is the sector number. Thus, the 0th sector is
addressed as ⟨0, 0, 0⟩ , the 1st sector as ⟨0, 0, 1⟩ , and so on
The address of the 1039th sector is
Answer ☟
An application loads 100 libraries at startup. Loading each library requires exactly one disk access. The seek time of the disk
to a random location is given as 10 ms. Rotational speed of disk is 6000 rpm. If all 100 libraries are loaded from random locations
on the disk, how long does it take to load all libraries? (The time to transfer data from the disk block once the head has been
positioned at the start of the block may be neglected.)
A. 0.50 s
B. 1.50 s
C. 1.25 s
D. 1.00 s
Answer ☟
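A quick numeric check (illustrative; the question neglects transfer time, so each access costs one seek plus the average rotational latency of half a rotation):

```python
seek_ms = 10.0
rotation_ms = 60_000 / 6000          # 10 ms per rotation at 6000 rpm
avg_latency_ms = rotation_ms / 2     # 5 ms on average (half a rotation)
total_s = 100 * (seek_ms + avg_latency_ms) / 1000
print(total_s)  # 1.5 -> option B
```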
A file system with 300 GByte disk uses a file descriptor with 8 direct block addresses, 1 indirect block address and 1 doubly
indirect block address. The size of each disk block is 128 Bytes and the size of each disk block address is 8 Bytes. The maximum
possible file size in this file system is
A. 3 KBytes
B. 35 KBytes
C. 280 KBytes
D. dependent on the size of the disk
Answer ☟
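With 128-byte blocks and 8-byte addresses, each indirect block holds 16 addresses; a quick tally (illustrative sketch):

```python
block = 128
addrs_per_block = block // 8                        # 16 addresses per indirect block
blocks = 8 + addrs_per_block + addrs_per_block**2   # direct + single + double indirect
print(blocks * block)  # 35840 bytes = 35 KB -> option B
```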
Consider a hard disk with 16 recording surfaces (0 − 15) having 16384 cylinders (0 − 16383) and each cylinder contains 64
sectors (0 − 63) . Data storage capacity in each sector is 512 bytes. Data are organized cylinder-wise and the addressing format is
⟨cylinder no., surface no., sector no.⟩ . A file of size 42797 KB is stored in the disk and the starting disk location of the file is
⟨1200, 9, 40⟩ . What is the cylinder number of the last sector of the file, if it is stored in a contiguous manner?
A. 1281
B. 1282
C. 1283
D. 1284
Answer ☟
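A quick computation (illustrative sketch, assuming 1 KB = 1024 bytes and cylinder-major contiguous layout as stated):

```python
sectors_per_cylinder = 16 * 64            # surfaces x sectors per track
file_sectors = 42797 * 1024 // 512        # 85594 sectors of 512 bytes
start_offset = 9 * 64 + 40                # sectors of cylinder 1200 used before the file
last_cylinder = 1200 + (start_offset + file_sectors - 1) // sectors_per_cylinder
print(last_cylinder)  # 1284 -> option D
```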
A FAT (file allocation table) based file system is being used and the total overhead of each entry in the FAT is 4 bytes in size.
Given a 100 × 10^6 byte disk on which the file system is stored and a data block size of 10^3 bytes, the maximum size of a file that
can be stored on this disk in units of 10^6 bytes is _________.
Answer ☟
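A quick computation (illustrative; each data block effectively costs its 10^3 data bytes plus a 4-byte FAT entry, since the FAT itself resides on the same disk):

```python
disk_bytes = 100 * 10**6
block_bytes = 10**3
fat_entry = 4
data_blocks = disk_bytes // (block_bytes + fat_entry)   # 99601 usable blocks
print(data_blocks * block_bytes / 10**6)  # ~99.6 (x 10^6 bytes)
```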
Consider a disk pack with a seek time of 4 milliseconds and rotational speed of 10000 rotations per minute (RPM). It has 600
sectors per track and each sector can store 512 bytes of data. Consider a file stored in the disk. The file contains 2000 sectors.
Assume that every sector access necessitates a seek, and the average rotational latency for accessing each sector is half of the time
for one complete rotation. The total time (in milliseconds) needed to read the entire file is__________________
Answer ☟
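A quick numeric check (illustrative sketch following the question's assumptions: one seek per sector and an average rotational latency of half a rotation):

```python
seek_ms = 4.0
rotation_ms = 60_000 / 10_000        # 6 ms per rotation at 10000 rpm
avg_rot_ms = rotation_ms / 2         # 3 ms average rotational latency
transfer_ms = rotation_ms / 600      # 0.01 ms to read one of 600 sectors
total_ms = 2000 * (seek_ms + avg_rot_ms + transfer_ms)
print(total_ms)  # 14020.0
```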
Consider a typical disk that rotates at 15000 rotations per minute (RPM) and has a transfer rate of 50 × 10^6 bytes/sec. If the
average seek time of the disk is twice the average rotational delay and the controller's transfer time is 10 times the disk transfer time,
the average time (in milliseconds) to read or write a 512-byte sector of the disk is _____
Answer ☟
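A quick numeric check of the stated relationships (illustrative sketch):

```python
rotation_ms = 60_000 / 15_000                 # 4 ms per rotation
avg_rot_ms = rotation_ms / 2                  # 2 ms average rotational delay
seek_ms = 2 * avg_rot_ms                      # 4 ms: twice the rotational delay
disk_xfer_ms = 512 / (50 * 10**6) * 1000      # ~0.01024 ms for 512 bytes
controller_ms = 10 * disk_xfer_ms             # ~0.1024 ms
avg_ms = seek_ms + avg_rot_ms + disk_xfer_ms + controller_ms
print(avg_ms)  # ~6.11 ms
```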
Consider a storage disk with 4 platters (numbered as 0, 1, 2 and 3) , 200 cylinders (numbered as 0, 1, … , 199 ), and 256
sectors per track (numbered as 0, 1, … 255 ). The following 6 disk requests of the form [sector number, cylinder number, platter
number] are received by the disk controller at the same time:
[120, 72, 2], [180, 134, 1], [60, 20, 0], [212, 86, 3], [56, 116, 2], [118, 16, 1]
Currently head is positioned at sector number 100 of cylinder 80, and is moving towards higher cylinder numbers. The average
power dissipation in moving the head over 100 cylinders is 20 milliwatts and for reversing the direction of the head movement once
is 15 milliwatts. Power dissipation associated with rotational latency and switching of head between different platters is negligible.
The total power consumption in milliwatts to satisfy all of the above disk requests using the Shortest Seek Time First disk
scheduling algorithm is _____
Answer ☟
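The SSTF service order, total head movement and direction reversals can be simulated directly (illustrative sketch; `sstf_power` is a hypothetical helper):

```python
def sstf_power(start, cylinders, up=True,
               mw_per_100_cyl=20, mw_per_reversal=15):
    """Total power: 20 mW per 100 cylinders of movement plus 15 mW per
    direction reversal, with requests served in SSTF order."""
    pending, pos, moved, reversals = list(cylinders), start, 0, 0
    direction = 1 if up else -1
    while pending:
        nxt = min(pending, key=lambda c: abs(pos - c))
        new_dir = 1 if nxt > pos else -1
        if new_dir != direction:
            reversals += 1
            direction = new_dir
        moved += abs(pos - nxt)
        pos = nxt
        pending.remove(nxt)
    return moved * mw_per_100_cyl / 100 + reversals * mw_per_reversal

# Only the cylinder numbers matter for seek power.
print(sstf_power(80, [72, 134, 20, 86, 116, 16]))  # 85.0 milliwatts
```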
In a computer system, four files of size 11050 bytes, 4990 bytes, 5170 bytes and 12640 bytes need to be stored. For storing
these files on disk, we can use either 100 byte disk blocks or 200 byte disk blocks (but can't mix block sizes). For each block used to
store a file, 4 bytes of bookkeeping information also needs to be stored on the disk. Thus, the total space used to store a file is the
sum of the space taken to store the file and the space taken to store the book keeping information for the blocks allocated for storing
the file. A disk block can store either bookkeeping information for a file or data from a file, but not both.
What is the total space required for storing the files using 100 byte disk blocks and 200 byte disk blocks respectively?
A. 35400 and 35800 bytes
B. 35800 and 35400 bytes
C. 35600 and 35400 bytes
D. 35400 and 35600 bytes
Answer ☟
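The per-block bookkeeping arithmetic can be checked mechanically (illustrative sketch; bookkeeping bytes occupy their own blocks, as the question requires):

```python
from math import ceil

def space(files, block):
    """Total bytes: data blocks plus separate blocks holding the 4 bytes
    of bookkeeping per data block (a block holds data or bookkeeping,
    never both)."""
    total = 0
    for size in files:
        data_blocks = ceil(size / block)
        book_blocks = ceil(4 * data_blocks / block)
        total += (data_blocks + book_blocks) * block
    return total

files = [11050, 4990, 5170, 12640]
print(space(files, 100), space(files, 200))  # 35600 35400 -> option C
```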
A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The
innermost track has a storage capacity of 10 MB.
What is the total amount of data that can be stored on the disk if it is used with a drive that rotates it with
A. I. 80 MB ; II. 2040 MB
B. I. 2040 MB ; II 80 MB
C. I. 80 MB ; II. 360 MB
D. I. 360 MB ; II. 80 MB
Answer ☟
A disk has 8 equidistant tracks. The diameters of the innermost and outermost tracks are 1 cm and 8 cm respectively. The
innermost track has a storage capacity of 10 MB.
If the disk has 20 sectors per track and is currently at the end of the 5th sector of the inner-most track and the head can move at a
speed of 10 meters/sec and it is rotating at constant angular velocity of 6000 RPM, how much time will it take to read 1 MB
contiguous data starting from the sector 4 of the outer-most track?
A. 13.5 ms
B. 10 ms
C. 9.5 ms
D. 20 ms
Answer ☟
What is the average time taken for transferring 250 bytes from the disk ?
A. 300.5 ms
B. 255.5 ms
C. 255 ms
D. 300 ms
Answer ☟
Answers: Disks
1. Average latency = time for half a rotation = (1/2) × (60/R) s = (1/2) × (60/3600) s = 8.33 ms
2. Disk storage capacity = 404 × 130030 bytes ≈ 50 MB per surface (the number of surfaces is needed for the total capacity)
3. Data transfer rate = track capacity × (R/60) = 130030 × (3600/60) bytes/s = 7801.8 kBps
RPM = 2400
Average latency is the time for half a rotation = 0.5 × 60/2400 s = 3/240 s = 12.5 ms.
In one full rotation, entire data in a track can be transferred. Track storage capacity = 62500 bits.
A file system uses directories, which are files containing the names and locations of the other files in the file system. Unlike
other files, a directory does not store user data. Directories are files that can point to other directories; the root directory
points to the various user directories. So they are stored in such a way that users cannot easily modify them, and they should be
placed at a fixed location on the disk.
Correct Answer: B
39 votes -- neha pawar (3.3k points)
For (A):
Number of tracks = recording width / inter-track spacing.
Recording width = (outer diameter − inner diameter)/2 = (12 − 4)/2 = 4 cm.
Therefore, number of tracks = 4 cm / 0.1 mm = 400 tracks.
Since the unformatted capacity of the disk is asked, no 96 bytes of each 4000-byte sector are lost to formatting overhead; the
whole 4000 bytes count.
So, total capacity = 400 × 8 × 20 × 4000 = 256 × 10^6 bytes = 256 MB.
For (B):
The disk makes 360 rotations in 60 seconds, so 1 rotation takes (1/6) sec.
In (1/6) sec one track can be read = 20 × (4000 − 96) B = 20 × 3904 B (formatted, so the 96-byte overhead per sector is excluded).
Then, in 1 sec the transfer is 20 × 3904 × 6 bytes, i.e. a data transfer rate of 468.480 KBps (considering one read/write head
active for all surfaces).
If we consider one read/write head per surface (the default assumption), the number of surfaces is 8 and the data transfer
rate = 468.480 × 8 KBps = 3747.84 KBps.
But since data is read from one surface at a time and the transfer rate is measured with respect to a single surface, the correct
answer for part (B) is 468.480 KBps.
a. Maximum possible number of files:
As per the question, 32 bits (4 bytes) are required per file, and there is only one block to store this information, i.e. disk
block 1, which is of size 4 KB. So the number of files possible = 4 KB / 4 bytes = 1 K files.
b. Maximum file size:
As per the question, the disk block address (a FAT entry gives a DBA) is 8 bits, so ideally the maximum file size would be
2^8 = 256 blocks. But the question makes it clear that two blocks, DB0 and DB1, store control information, so effectively we have
256 − 2 = 254 blocks, and the maximum file size = 254 × (size of one block) = 254 × 4 KB = 1016 KB.
Given:
1000 consecutive records; size of 1 record = 3200 bytes.
Access time of device D = 10 ms.
Data transfer rate of device D = 800 × 10^3 bytes per second.
CPU time to process each record = 3 ms.
Time to transfer 1 record = 3200 / (800 × 10^3) s = 4 ms.
(A) Unblocked records with no buffer, so a record is processed only after it has been fetched in its entirety.
Time to fetch one record = access time of D (the device latency, paid on every access) + data transfer time = 10 ms + 4 ms = 14 ms.
Total time per record = fetch + process = 14 ms + 3 ms = 17 ms.
To get the total time of the program, consider the last record: when it is processed, all others have already been processed.
The last record R1000 is fetched at t = 14 × 1000 = 14000 ms, and the CPU takes 3 ms more to process it.
So, total elapsed time of program P = 14000 + 3 = 14003 ms = 14.003 sec.
(C) Each disk block contains 2 records, and assume the buffer can hold 1 disk block at a time.
So, 1 block size = 2 × 3200 = 6400 bytes
Time to read a block = 6400 / (800 × 10³) s = 8 ms.
For each block read, we incur the device access cost.
So, the total time to fetch one block and bring it into the buffer = 10 + 8 = 18 ms.
We have 1000 records and so we need to read in 500 blocks.
Each block has two records and therefore CPU time per block = 6ms.
Again to count the program time P, we think in terms of the last Block.
Last block would be fetched at t = 0 + (18 ∗ 500) = 9000 ms.
After this 6 ms more to process 2 records present in the 500th block.
So, program time P = 9000 + 6 = 9006ms = 9.006sec .
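The two elapsed-time derivations above can be cross-checked in Python (parameter names are mine; values are from the question):

```python
# Re-deriving the elapsed times for parts (A) and (C) above.
ACCESS_MS = 10                 # device access latency per request
RATE_BPS = 800_000             # data transfer rate, bytes/second
RECORD = 3200                  # bytes per record
CPU_MS_PER_RECORD = 3
N = 1000                       # number of records

# (A) unblocked, no buffer: 1000 fetches of 14 ms each,
# then the last record still needs its 3 ms of CPU time.
fetch_ms = ACCESS_MS + RECORD * 1000 / RATE_BPS        # 10 + 4 = 14 ms
part_a = fetch_ms * N + CPU_MS_PER_RECORD              # 14003 ms

# (C) 2 records/block, single-block buffer: 500 fetches of 18 ms,
# then 6 ms of CPU for the two records of the last block.
block_ms = ACCESS_MS + 2 * RECORD * 1000 / RATE_BPS    # 10 + 8 = 18 ms
part_c = block_ms * (N // 2) + 2 * CPU_MS_PER_RECORD   # 9006 ms

print(part_a, part_c)
```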
Answer is (D).
The formatted disk capacity is always less than the "raw" unformatted capacity specified by the disk's manufacturer,
because some portion of each track is used for sector identification and for gaps (empty spaces) between sectors and at
the end of the track.
Reference : https://en.wikipedia.org/wiki/Floppy_disk_format
References
A bit map maintains one bit for each block: if the block is free, the bit is "0"; if occupied, the bit is "1".
For space purposes, it doesn't matter which bit value we use; only the number of blocks matters.
For B blocks, the bit map takes a space of B bits.
Given that we have F free blocks, there are F addresses in the list, and each address is d bits, so the free list takes a space of
Fd bits.
Condition under which the free list uses less space than the bit map: Fd < B
GIVEN: Consider a disk with c cylinders, t tracks per cylinder, s sectors per track.
From this, we can conclude that 1 cylinder contains t × s sectors,
and one track contains s sectors.
Now we have to derive the formulas for the logical address n:
cylinder number = ⌊n / (t·s)⌋
track number = ⌊(n mod (t·s)) / s⌋
sector number = n mod s
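The three formulas above can be packaged as a small helper (the function name and the sample geometry are mine, for illustration only):

```python
def chs(n, t, s):
    """Map logical sector number n to (cylinder, track, sector)
    for a disk with t tracks/cylinder and s sectors/track."""
    return n // (t * s), (n % (t * s)) // s, n % s

# e.g. with t = 2 tracks/cylinder and s = 4 sectors/track,
# logical sector 11 lies in cylinder 1, track 0, sector 3.
print(chs(11, 2, 4))   # (1, 0, 3)
```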
5.4.10 Disks: GATE CSE 1999 | Question: 2-18, ISRO2008-46 top☝ ☛ https://gateoverflow.in/1496
A. Fault tolerance and
B. High Speed
A disk driver is a device driver that allows a specific disk drive to communicate with the remainder of the computer. A
good example of this driver is a floppy disk driver.
32 votes -- Bhagirathi Nayak (11.7k points)
Answer for (A):
We will need to go from 25 → 99 → 5. (We move up all the way to 99, servicing all requests, then come back to 5.)
3000 rpm, i.e., 50 rps
(a) 20 × 1000 × 16 × 1 KB = 3,20,000 KB
(b)
3000 rotations = 60 seconds
1 rotation = 60/3000 = 1/50 seconds
1 rotation = 1 track = 1/50 seconds
1 track = 16 × 1 KB is read in 1/50 seconds
⇒ 800 KB = 1 second
Answer is (A). A larger block size means fewer blocks to fetch and hence better throughput. But a larger block size
also means space is wasted when only a small size is required.
Size of disk block = 1024 bytes
We have:
10 direct block addresses, plus single, double and triple indirect blocks (256 addresses per block)
So, total size = 10240 bytes + 2^18 + 2^26 + 2^34 bytes, which is nearly 2^34 bytes. (We don't have the exact option available;
choose the approximate one.)
Answer → (C)
Swap space (on the disk) is used by the operating system to store pages that are swapped out of main memory when available
memory runs low. Interestingly, the Android operating system, which is the Linux kernel under the hood, has
swapping disabled and has its own way of handling "low memory" situations.
5.4.17 Disks: GATE CSE 2007 | Question: 11, ISRO2009-36, ISRO2016-21 top☝ ☛ https://gateoverflow.in/1209
Answer is (A).
16 surfaces = 4 bits, 128 tracks = 7 bits, 256 sectors = 8 bits, sector size 512 bytes = 9 bits
The answer is (B), because of inertia.
Whenever the read-write head moves from one track to another track, it has to face resistance due to the change in its state of motion,
including speed and direction, which is nothing but inertia. Hence the answer is (B).
31 votes -- spriti1991 (1.5k points)
The data on a disk is ordered in the following way. It is first stored on the first sector of the first surface of the first
cylinder. Then in the next sector, and next, until all the sectors on the first track are exhausted. Then it moves on to the first
sector of the second surface (remains at the same cylinder), then next sector and so on. It exhausts all available surfaces for the
first cylinder in this way. After that, it moves on to repeat the process for the next cylinder.
So, to reach to the cylinder numbered 400(401th cylinder) we need to skip 400 × (10 × 2) × 63 = 504, 000 sectors.
Then, to skip to the 16th surface of the cylinder numbered 400, we need to skip another 16 × 63 = 1, 008 sectors.
Finally, to find the 29th sector, we need to move another 29 sectors.
In total, we moved 504, 000 + 1, 008 + 29 = 505, 037 sectors.
Hence, the answer to 51 is option (C).
Sector number 1039 is the 1040th sector (as counting starts from 0, as given in the question), and each track has 63 sectors,
so it lies in track number ⌈1040/63⌉ = 17, i.e., the track numbered 16; each cylinder has 20 tracks (10
platters × 2 recording surfaces each). Number of extra sectors needed = 1040 − 16 × 63 = 32, and hence the sector number will
be 31. So, option (C).
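Both sector computations above can be verified with a short script (the geometry — 20 surfaces per cylinder, 63 sectors per track — is from the question; the function name is mine):

```python
SURFACES, SECTORS = 20, 63   # 10 platters x 2 surfaces, 63 sectors/track

def sector_index(cyl, surface, sector):
    """Linear sector number of <cylinder, surface, sector>."""
    return (cyl * SURFACES + surface) * SECTORS + sector

# Sector <400, 16, 29> from the first part:
print(sector_index(400, 16, 29))   # 505037

# Locating sector number 1039 (second part): track and sector within it.
track, sector = divmod(1039, SECTORS)
print(track, sector)               # track 16, sector 31
```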
Disk access time = Seek time + Rotational latency + Transfer time (given that transfer time is neglected)
Seek time = 10 ms
Rotational speed = 6000 rpm
Direct block addressing will point to 8 disk blocks = 8 × 128 B = 1 KB
Singly indirect block addressing will point to 1 disk block which has 128/8 = 16 disk block addresses = 16 × 128 B = 2 KB
Doubly indirect block addressing will point to 1 disk block which has 16 addresses to disk blocks, each of which in turn has 16
addresses to disk blocks = 16 × 16 × 128 B = 32 KB
Total = 35 KB
Answer is (B).
First convert ⟨1200, 9, 40⟩ into sector address.
1315010/(16 × 64 ) = 1284.189453 (1284 will be cylinder number and remaining sectors = 194)
Correct Answer: D
210 votes -- Laxmi (793 points)
42797 KB = 42797 × 1024 bytes require 42797 × 1024/512 sectors = 85594 sectors.
⟨1200, 9, 40⟩ is the starting address. So, we can have 24 more sectors on this recording surface. Remaining: 85570 sectors, of
which 6 × 64 = 384 lie on the remaining surfaces of cylinder 1200, leaving 85186 sectors = 1331 full tracks plus 2 sectors.
In a cylinder, we have 16 recording surfaces. So, 1331 tracks require ⌈1331/16⌉ = 84 different cylinders.
The first cylinder (after the current one) starts at 1201. So, the last one should be 1284.
⟨1284, 3, 1⟩ will be the end address. (1331 − 16 × 83 = 3 surfaces full and 1 partial, and 85186 − 1331 × 64 − 1 = 1, since
addresses start from 0.)
37 votes -- Arjun Suresh (332k points)
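The end address can be cross-checked by converting to and from a linear sector number (geometry from the question: 16 surfaces per cylinder, 64 sectors per track; the helper names are mine):

```python
SURFACES, SECTORS = 16, 64

def to_lba(c, h, s):
    """<cylinder, surface, sector> -> linear sector number."""
    return (c * SURFACES + h) * SECTORS + s

def from_lba(n):
    """Linear sector number -> <cylinder, surface, sector>."""
    c, rest = divmod(n, SURFACES * SECTORS)
    h, s = divmod(rest, SECTORS)
    return c, h, s

start = to_lba(1200, 9, 40)
sectors_needed = 42797 * 1024 // 512      # 85594 sectors of 512 B
end = from_lba(start + sectors_needed - 1)
print(end)    # (1284, 3, 1)
```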
Each data block will have its entry.
So, total number of entries in the FAT = disk capacity / block size = 100 MB / 1 KB = 100 K
We have to give space to these overheads on the same file system, and in the rest of the available space we can store data.
So, assuming that we use all available storage space to store a single file, maximum file size =
total file system size − overhead = 100 MB − 0.4 MB = 99.6 MB
88 votes -- Kalpish Singhal (1.6k points)
Since each sector requires a seek,
Total time = 2000 (seek time + avg. rotational latency + data transfer time)
Since data transfer rate is not given, we can take that in 1 rotation, all data in a track is read. i.e., in 60/10000 = 6 ms,
600 × 512 bytes are read. So, time to read 512 bytes = 6/600 ms = 0.01 ms
= 2000 × (4 ms + (60 × 1000)/(2 × 10000) ms + 0.01 ms)
= 2000 × (7.01 ms)
= 14020 ms.
http://www.csee.umbc.edu/~olano/611s06/storage-io.pdf
References
Average time to read/write = avg. seek time + avg. rotational delay + effective transfer time
Rotational delay = 60/15000 s = 4 ms
Avg. rotational delay = (1/2) × 4 = 2 ms
Avg. seek time = 2 × 2 = 4 ms
Disk transfer time = 512 bytes / (50 × 10⁶ bytes/sec) = 0.0102 ms
Shortest Seek Time First (SSTF) selects the request with the minimum seek time from the current head position.
In the given question disk requests are given in the form of ⟨sectorNo, cylinderNo, platterNo⟩ .
Cylinder Queue : 72, 134, 20, 86, 116, 16
Head starts at : 80
Total head movements in SSTF = (86 − 80) + (86 − 72) + (134 − 72) + (134 − 16) = 200
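The head movement can also be obtained by simulating SSTF directly (a minimal sketch; the function name is mine):

```python
def sstf_total_movement(head, requests):
    """Total cylinder movement when servicing requests in SSTF order."""
    pending, total = list(requests), 0
    while pending:
        nxt = min(pending, key=lambda c: abs(c - head))  # nearest cylinder first
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

# Cylinder queue and initial head position from the question:
print(sstf_total_movement(80, [72, 134, 20, 86, 116, 16]))   # 200
```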
For 100-byte blocks:
11050 bytes ⇒ ⌈11050/100⌉ = 111 data blocks, requiring 111 × 4 = 444 bytes of bookkeeping info, which needs another 5 disk blocks. So, totally
111 + 5 = 116 disk blocks. Similarly,
4990 bytes ⇒ 50 + ⌈(50 × 4)/100⌉ = 52
5170 bytes ⇒ 52 + ⌈(52 × 4)/100⌉ = 55
12640 bytes ⇒ 127 + ⌈(127 × 4)/100⌉ = 133
-----
356 blocks × 100 = 35600 bytes
For 200-byte blocks:
11050 bytes ⇒ 56 + ⌈(56 × 4)/200⌉ = 58
4990 bytes ⇒ 25 + ⌈(25 × 4)/200⌉ = 26
5170 bytes ⇒ 26 + ⌈(26 × 4)/200⌉ = 27
12640 bytes ⇒ 64 + ⌈(64 × 4)/200⌉ = 66
-----
177 blocks × 200 = 35400 bytes
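The per-file tallies above follow one rule (data blocks plus 4 bytes of bookkeeping per data block, each rounded up to whole blocks), which can be written once and reused (function and variable names are mine):

```python
import math

def blocks_needed(file_size, block_size):
    """Data blocks plus blocks for 4-byte-per-block bookkeeping info."""
    data = math.ceil(file_size / block_size)
    meta = math.ceil(data * 4 / block_size)
    return data + meta

files = [11050, 4990, 5170, 12640]   # file sizes from the question, in bytes
for bs in (100, 200):
    total = sum(blocks_needed(f, bs) for f in files)
    print(bs, total, total * bs)
# 100-byte blocks: 356 blocks = 35600 bytes
# 200-byte blocks: 177 blocks = 35400 bytes
```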
Total Time = Seek + Rotation + Transfer.
Seek Time :
Current track: 1
Destination track: 8
Distance required to travel = 4 − 0.5 = 3.5 cm
Speed = 10 m/s = 1 cm/ms, so time required = 3.5 ms [time = distance/speed]
Rotation time:
6000 RPM = 6000 revolutions in 60 sec
= 100 revolutions per second
⇒ 1 revolution in 10 ms
1 revolution = covering the entire track
1 track = 20 sectors
⇒ 1 sector takes 10/20 = 0.5 ms
The disk is constantly rotating, so while the head moved from the innermost track to the outermost track, the disk rotated by
3.5/0.5 = 7 sectors.
This means that when the head reached the outermost track, it was at the end of the 12th sector.
Total rotational delay = time required to go from the end of sector 12 to the end of sector 3 = 11 sectors
1 sector = 0.5 ms, so 11 sectors = 5.5 ms
Transfer Time
Total data on the outermost track = 10 MB
Data in a single sector = 10 MB/20 = 0.5 MB
Data required to read = 1 MB = 2 sectors
Time required to read the data = 2 × 0.5 = 1 ms
Total Time = Seek + Rotation + Transfer = 3.5ms + 5.5ms + 1ms = 10 ms
Correct Answer: B
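The three components above can be totalled in a few lines (values from the question; variable names are mine):

```python
# Seek + rotation + transfer, as derived above.
seek_ms = 3.5 / 1.0                 # 3.5 cm at 10 m/s = 1 cm/ms
ms_per_sector = 10 / 20             # 10 ms per revolution, 20 sectors/track
rotation_ms = 11 * ms_per_sector    # wait from end of sector 12 to end of sector 3
transfer_ms = 2 * ms_per_sector     # 1 MB = 2 sectors of 0.5 MB each
total = seek_ms + rotation_ms + transfer_ms
print(total)   # 10.0 ms
```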
option (D)
Explanation
Avg. time to transfer = Avg. seek time + Avg. rotational delay + Data transfer time
Avg Seek Time
Given: time to move between successive tracks is 1 ms
time to move from track 1 to track 1 : 0 ms
time to move from track 1 to track 2 : 1 ms
time to move from track 1 to track 3 : 2 ms
..
..
time to move from track 1 to track 500 : 499 ms
Avg seek time = (0 + 1 + 2 + 3 + ... + 499)/500
= 249.5 ms
Avg Rotational Delay
RPM: 600 ⇒ 1 rotation takes 60/600 = 0.1 s
Avg. rotational delay = 0.1/2 = 0.05 s
= 50 ms
Data Transfer Time
In one rotation we can read the data of one complete track:
100 × 500 = 50,000 B of data is read in one complete rotation
One complete rotation takes 0.1 s (as seen above)
0.1 s → 50,000 bytes
250 bytes → 0.1 × 250/50,000 s = 0.5 ms
Avg. time to transfer
= Avg. seek time
+ Avg. rotational delay
+ Data transfer time
= 249.5 + 50 + 0.5
= 300 ms
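The 300 ms figure can be reproduced step by step (parameters from the question; variable names are mine):

```python
# Average transfer time = avg seek + avg rotational delay + data transfer time.
avg_seek_ms = sum(range(500)) / 500          # (0 + 1 + ... + 499)/500 = 249.5 ms
avg_rot_ms = (60 / 600) / 2 * 1000           # half a rotation at 600 RPM = 50 ms
track_bytes = 100 * 500                      # 500 sectors of 100 bytes each
transfer_ms = 250 / track_bytes * 100        # 250 bytes of the 50,000 read per 100 ms
total = avg_seek_ms + avg_rot_ms + transfer_ms
print(total)   # 300.0 ms
```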
5.5.1 File System: GATE CSE 2002 | Question: 2.22 top☝ ☛ https://gateoverflow.in/852
In the index allocation scheme of blocks to a file, the maximum possible size of the file depends on
A. the size of the blocks, and the size of the address of the blocks.
B. the number of blocks used for the index, and the size of the blocks.
C. the size of the blocks, the number of blocks used for the index, and the size of the address of the blocks.
D. None of the above
Answer ☟
The data blocks of a very large file in the Unix file system are allocated using
A. continuous allocation
B. linked allocation
C. indexed allocation
D. an extension of indexed allocation
Answer ☟
5.5.3 File System: GATE CSE 2017 Set 2 | Question: 08 top☝ ☛ https://gateoverflow.in/118437
In a file allocation system, which of the following allocation scheme(s) can be used if no external fragmentation is allowed ?
1. Contiguous
2. Linked
3. Indexed
A. 1 and 3 only
B. 2 only
C. 3 only
D. 2 and 3 only
Answer ☟
The index node (inode) of a Unix-like file system has 12 direct, one single-indirect and one double-indirect pointers. The disk
block size is 4 kB, and the disk block address is 32 bits long. The maximum possible file size (rounded off to 1 decimal place) is
____ GB
Answer ☟
5.5.5 File System: GATE CSE 2021 Set 1 | Question: 15 top☝ ☛ https://gateoverflow.in/357437
Consider a linear list based directory implementation in a file system. Each directory is a list of nodes, where each node
contains the file name along with the file metadata, such as the list of pointers to the data blocks. Consider a given directory foo .
Which of the following operations will necessarily require a full scan of foo for successful completion?
A. Creation of a new file in foo
B. Deletion of an existing file from foo
C. Renaming of an existing file in foo
D. Opening of an existing file in foo
Answer ☟
In a particular Unix OS, each data block is of size 1024 bytes, each node has 10 direct data block addresses and three
additional addresses: one for single indirect block, one for double indirect block and one for triple indirect block. Also, each block
can contain addresses for 128 blocks. Which one of the following is approximately the maximum size of a file in the file system?
A. 512 MB
B. 2 GB
C. 8 GB
D. 16 GB
Answer ☟
5.5.1 File System: GATE CSE 2002 | Question: 2.22 top☝ ☛ https://gateoverflow.in/852
In indexed allocation, the maximum file size can be derived as follows:
Number of addressable blocks using one index block (A) = size of block / size of block address
Maximum file size = (number of blocks used for the index) × A × block size, so it depends on all three quantities.
Answer is (C).
The data blocks of a very large file in the Unix file system are allocated using an extension of indexed allocation (as in the
EXT2 file system). Hence, option (D) is the right answer.
5.5.3 File System: GATE CSE 2017 Set 2 | Question: 08 top☝ ☛ https://gateoverflow.in/118437
Both linked and indexed allocation are free from external fragmentation.
Refer:galvin
Reference: https://webservices.ignou.ac.in/virtualcampus/adit/course/cst101/block4/unit4/cst101-bl4-u4-06.htm
References
Given 12 direct, 1 single indirect, 1 double indirect pointers
Number of addresses per block = size of disk block / address size = 4 kB / 4 B = 2^10
Maximum possible file size = 12 × 4 kB + 2^10 × 4 kB + 2^10 × 2^10 × 4 kB
= 4.00395 GB ≃ 4 GB
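The calculation above can be checked directly (block size and address size from the question; variable names are mine):

```python
# Max file size: 12 direct + 1 single-indirect + 1 double-indirect pointers,
# 4 kB blocks, 4-byte (32-bit) block addresses.
BLOCK = 4 * 1024
ADDRS = BLOCK // 4                   # 2**10 addresses per indirect block
size = 12 * BLOCK + ADDRS * BLOCK + ADDRS * ADDRS * BLOCK
print(round(size / 2**30, 1))        # 4.0 GB
```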
5.5.5 File System: GATE CSE 2021 Set 1 | Question: 15 top☝ ☛ https://gateoverflow.in/357437
Correct Options: A, C
(Note: In the question it's given "which of the following options require a full scan of foo for successful
completion", meaning the best algorithm must scan the list entirely for every input of that type to verify correctness, and
can't partially scan and complete for any particular instance.)
Each File in Directory is uniquely referenced by its name. So different files must have different names!
So,
A. Creation of a New File: For creating new file, we’ve to check whether the new name is same as the existing files. Hence,
the linked list must be scanned in its entirety.
B. Deletion of an Existing File: Deletion of a file doesn't give rise to name conflicts, hence if the node representing the file
is found early, it can be deleted without a thorough scan.
C. Renaming a File: Can give rise to name conflicts, same reason can be given as option A.
Answer: (B)
Maximum file size = 10 × 1024 Bytes +1 × 128 × 1024 Bytes +1 × 128 × 128 × 1024 Bytes
+1 × 128 × 128 × 128 × 1024 Bytes = approx 2 GB .
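The same style of calculation confirms the ≈ 2 GB figure (block size and addresses-per-block from the question; variable names are mine):

```python
# 10 direct addresses plus single, double and triple indirect blocks,
# with 1024-byte blocks and 128 addresses per block.
BLOCK, ADDRS = 1024, 128
size = (10 + ADDRS + ADDRS**2 + ADDRS**3) * BLOCK
print(size / 2**30)   # ~2 GB, dominated by the triple-indirect term
```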
Let u, v be the values printed by the parent process and x, y be the values printed by the child process. Which one of the following
is TRUE?
A. u = x + 10 and v = y
B. u = x + 10 and v != y
C. u + 10 = x and v = y
D. u + 10 = x and v != y
Answer ☟
It should be Option C.
#include<stdio.h>
#include<stdlib.h>
#include<unistd.h>   /* for fork() */
int main()
{
    int a = 100;
    if (fork() == 0)
    {
        a = a + 5;
        printf("%d %p\n", a, (void *)&a);   /* child */
    }
    else
    {
        a = a - 5;
        printf("%d %p\n", a, (void *)&a);   /* parent */
    }
    return 0;
}
Output:
95 is printed by the parent : u
105 is printed by the child : x
⇒ u + 10 = x
The logical addresses remain the same between the parent and child processes.
Hence, the answer should be:
u + 10 = x and v = y
(C) is the answer. The child is incrementing a by 5 and the parent is decrementing a by 5. So, x = u + 10.
During fork(), the address space of the parent is copied for the child. So, any modification to the child's variable won't affect the
parent's variable or vice versa. But this copy is of physical pages of memory; the logical addresses remain the same between the
parent and child processes.
27 votes -- gatecse (63.3k points)
Each fork() creates a child which starts executing from that point onward.
At each fork, the number of processes doubles: 1 → 2 → 4 → 8 ... 2^n. Of these, all except 1 are child processes.
Reference: https://gateoverflow.in/3707/gate2004-it_64
References
At each fork() the number of processes doubles. So, after 3 fork calls, the total number of processes will be 2³ = 8. Of these,
1 is the parent process and 7 are child processes. So, the total number of child processes created is 7.
42 votes -- Arjun Suresh (332k points)
Answer is 31.
fork() is called whenever i is even, so we can rewrite the code as
for(i=0; i<10; i=i+2)
fork();
This makes 5 fork() calls, giving 2⁵ = 32 processes, of which 2⁵ − 1 = 31 are child processes.
Option (C).
At each fork, the number of processes doubles: 1 → 2 → 4 → 8 ... 2^n. Of these, all except 1 are child processes.
5.7.1 Inter Process Communication: GATE CSE 1997 | Question: 3.7 top☝ ☛ https://gateoverflow.in/2238
I/O redirection
Answer ☟
Answer: (B)
Typically, the syntax of these characters is as follows, using < to redirect input, and > to redirect output.
command1 > file1
executes command1, placing the output in file1, as opposed to displaying it at the terminal, which is the usual destination for
standard output. This will clobber any existing data in file1.
Using,
command1 < file1
executes command1, with file1 as the source of input, as opposed to the keyboard, which is the usual source for standard input.
command1 < infile > outfile
combines the two capabilities: command1 reads from infile and writes to outfile.
Given that an interrupt input arrives every 1 msec, what is the percentage of the total time that the CPU devotes for the main
program execution.
Answer ☟
Answer ☟
Answer ☟
Which of the following devices should get higher priority in assigning interrupts?
A. Hard disk
B. Printer
C. Keyboard
D. Floppy disk
Answer ☟
Listed below are some operating system abstractions (in the left column) and the hardware components (in the right column)
Answer ☟
Answer ☟
A computer handles several interrupt sources, of which the following are relevant for this question.
Interrupt from CPU temperature sensor (raises interrupt if CPU temperature is too high)
Interrupt from Mouse (raises Interrupt if the mouse is moved or a button is pressed)
Interrupt from Keyboard (raises Interrupt if a key is pressed or released)
Interrupt from Hard Disk (raises Interrupt when a disk read is completed)
Answer ☟
The following are some events that occur after a device controller issues an interrupt while process L is under execution.
P. The processor pushes the process status of L onto the control stack
Q. The processor finishes the execution of the current instruction
R. The processor executes the interrupt service routine
S. The processor pops the process status of L from the control stack
T. The processor loads the new PC value based on the interrupt
Which of the following is the correct order in which the events above occur?
A. QPTRS
B. PTRSQ
C. TRPQS
D. QTPRS
Answer ☟
Answers: Interrupts
Time to service an interrupt = saving of CPU state + ISR execution + restoring of CPU state
= (80 + 10 + 10) × 10⁻⁶ s = 100 microseconds
Thus, for every 1000 microseconds, (1000 − 100) = 900 microseconds of main program execution and 100 microseconds of interrupt
overhead exist. So the CPU devotes 900/1000 = 90% of the total time to main program execution.
(C) is the answer. Interrupt processing is LIFO because while we are processing an interrupt, we disable interrupts
originating from lower-priority devices, so lower-priority interrupts cannot be raised. If an interrupt is detected, it means that
it has a higher priority than the currently executing interrupt, so this new interrupt will preempt the current one; hence, LIFO.
Think about this:
When a process is running and after time slot is over, who schedules new process?
- Scheduler.
Now think: if a user invokes a system call, the system call in effect leads to an interrupt, and after this interrupt the CPU resumes
execution of the currently running process.
It should be a Hard disk. I don't think there is a rule like that. But hard disk makes sense compared to others here.
http://www.ibm1130.net/functional/IOInterrupts.html
References
Answer: (C) A - 3, B - 2, C - 4, D - 1
Why?
Answer is (C).
Answer should be (D). Higher priority interrupt levels are assigned to requests which, if delayed or interrupted, could
have serious consequences. Devices with high-speed transfer such as magnetic disks are given high priority, and slow devices
such as keyboards receive low priority. We know that mouse pointer movements are more frequent than keyboard ticks, so its
data transfer rate is obviously higher than the keyboard's. Delaying a CPU temperature sensor could have serious consequences;
overheating can damage CPU circuitry. From the above information we can conclude that the priorities are:
CPU temperature sensor > Hard Disk > Mouse > Keyboard
Answer should be A.
5.9.1 Io Handling: GATE CSE 1996 | Question: 1.20, ISRO2008-56 top☝ ☛ https://gateoverflow.in/2724
Answer ☟
A. The terminal used to enter the input data for the C program being executed
B. An output device used to print the output of a number of jobs
C. The secondary memory device in a virtual storage system
D. The swapping area on a disk used by the swapper
Answer ☟
Which one of the following is true for a CPU having a single interrupt request line and a single interrupt grant line?
Answer ☟
Normally user programs are prevented from handling I/O directly by I/O instructions in them. For CPUs having explicit I/O
instructions, such I/O protection is ensured by having the I/O instruction privileged. In a CPU with memory mapped I/O, there is no
explicit I/O instruction. Which one of the following is true for a CPU with memory mapped I/O?
Answer ☟
What is the bit rate of a video terminal unit with 80 characters/line, 8 bits/character and horizontal sweep time of 100 µs
(including 20 µs of retrace time)?
A. 8 Mbps
B. 6.4 Mbps
C. 0.8 Mbps
D. 0.64 Mbps
Answer ☟
Which of the following DMA transfer modes and interrupt handling mechanisms will enable the highest I/O bandwidth?
A. Transparent DMA and Polling interrupts
Answer ☟
Answers: Io Handling
5.9.1 Io Handling: GATE CSE 1996 | Question: 1.20, ISRO2008-56 top☝ ☛ https://gateoverflow.in/2724
Answer is (A).
Spooling (simultaneous peripheral operations online) is a technique in which an intermediate device such as a disk is interposed
between a process and a low-speed I/O device. For example, if a process attempts to print a document but the printer is busy printing
another document, the process, instead of waiting for the printer to become available, writes its output to disk. When the printer
becomes available, the data on disk is printed. Spooling allows a process to request an operation from a peripheral device without
requiring that the device be ready to service the request.
Answer : Option (B)
SPOOLing (Simultaneous Peripheral Operations OnLine) is a technique in which an intermediate device such as a disk is
interposed between a process and a low-speed I/O device like a printer. If a process attempts to print a document but the printer is busy
printing another document, the process, instead of waiting for the printer to become available, writes its output to disk. When the
printer becomes available, the data on disk is printed. Spooling allows processes to request operations from peripheral devices
without requiring that the device be ready to service the request.
20 votes -- Tilak D. Nanavati (2.9k points)
(C) is the correct answer. We can use one interrupt line for all the connected devices and pass it through an OR gate. On
receiving the interrupt, the CPU executes the corresponding ISR, and after execution INTA is sent via one line. Vectored interrupts
are always possible if we implement them using a daisy-chain mechanism.
Option (A). User applications are not allowed to perform I/O in user mode - All I/O requests are handled through system
calls that must be performed in kernel mode.
Answer: (B). In each horizontal sweep of 100 µs, 80 × 8 = 640 bits are transmitted, so the bit rate = 640 bits / 100 µs = 6.4 Mbps.
The CPU gets the highest bandwidth with transparent DMA and polling, but the question asks for I/O bandwidth, not CPU
bandwidth, so option (A) is wrong.
In cycle stealing, in each cycle the device sends some data, then waits, and again after a few CPU cycles sends to memory. So option
(B) is wrong.
In polling, the CPU takes the initiative, so the I/O bandwidth cannot be high; option (D) is wrong.
Consider block transfer: the device sends a whole block of data at a time, so the bandwidth (the amount of data transferred) is high. This
makes option (C) correct.
5.10.1 Memory Management: GATE CSE 1992 | Question: 12-b top☝ ☛ https://gateoverflow.in/43582
Let the page reference and the working set window be c c d b c e c e a d and 4, respectively. The initial working set at time
t = 0 contains the pages {a, d, e} , where a was referenced at time t = 0, d was referenced at time t = −1 , and e was referenced at
time t = −2 . Determine the total number of page faults and the average number of page frames used by computing the working set
at each reference.
Answer ☟
A computer installation has 1000k of main memory. The jobs arrive and finish in the following sequences.
Job 1 requiring 200k arrives
Job 2 requiring 350k arrives
Job 3 requiring 300k arrives
Job 1 finishes
Job 4 requiring 120k arrives
Job 5 requiring 150k arrives
Job 6 requiring 80k arrives
A. Draw the memory allocation table using Best Fit and First Fit algorithms.
B. Which algorithm performs better for this sequence?
Answer ☟
5.10.3 Memory Management: GATE CSE 1996 | Question: 2.18 top☝ ☛ https://gateoverflow.in/2747
A 1000 Kbyte memory is managed using variable partitions but no compaction. It currently has two partitions of sizes 200
Kbyte and 260 Kbyte respectively. The smallest allocation request in Kbyte that could be denied is for
A. 151
B. 181
C. 231
D. 541
5.10.4 Memory Management: GATE CSE 1998 | Question: 2.16 top☝ ☛ https://gateoverflow.in/1689
What will be the size of the partition (in physical memory) required to load (and run) this program?
A. 12 KB
B. 14 KB
C. 10 KB
D. 8 KB
Answer ☟
5.10.5 Memory Management: GATE CSE 2014 Set 2 | Question: 55 top☝ ☛ https://gateoverflow.in/2022
Consider the main memory system that consists of 8 memory modules attached to the system bus, which is one word wide.
When a write request is made, the bus is occupied for 100 nanoseconds (ns) by the data, address, and control signals. During the
same 100 ns, and for 500 ns thereafter, the addressed memory module executes one cycle accepting and storing the data. The
(internal) operation of different memory modules may overlap in time, but only one request can be on the bus at any time. The
maximum number of stores (of one word each) that can be initiated in 1 millisecond is ________
Answer ☟
5.10.6 Memory Management: GATE CSE 2015 Set 2 | Question: 30 top☝ ☛ https://gateoverflow.in/8145
Consider 6 memory partitions of sizes 200 KB , 400 KB , 600 KB , 500 KB , 300 KB and 250 KB , where KB refers to
kilobyte . These partitions need to be allotted to four processes of sizes 357 KB , 210 KB , 468 KB , 491 KB in that order. If the
best-fit algorithm is used, which partitions are NOT allotted to any process?
A. 200 KB and 300 KB
B. 200 KB and 250 KB
C. 250 KB and 300 KB
D. 300 KB and 400 KB
Answer ☟
Consider allocation of memory to a new process. Assume that none of the existing holes in the memory will exactly fit the
process’s memory requirement. Hence, a new hole of smaller size will be created if allocation is made in any of the existing holes.
Which one of the following statement is TRUE?
A. The hole created by first fit is always larger than the hole created by next fit.
B. The hole created by worst fit is always larger than the hole created by first fit.
C. The hole created by best fit is never larger than the hole created by first fit.
D. The hole created by next fit is never larger than the hole created by best fit.
Answer ☟
For each of the four processes P1 , P2 , P3 , and P4 . The total size in kilobytes (KB) and the number of segments are given
below.
The page size is 1 KB . The size of an entry in the page table is 4 bytes . The size of an entry in the segment table is 8 bytes . The
maximum size of a segment is 256 KB . The paging method for memory management uses two-level paging, and its storage
overhead is P . The storage overhead for the segmentation method is S . The storage overhead for the segmentation and paging
method is T . What is the relation among the overheads for the different methods of memory management in the concurrent
execution of the above four processes?
A. P<S<T
B. S<P<T
C. S<T<P
D. T<S<P
Answer ☟
Let a memory have four free blocks of sizes 4k, 8k, 20k , 2k. These blocks are allocated following the best-fit strategy. The
allocation requests are stored in a queue as shown below.
Request No J1 J2 J3 J4 J5 J6 J7 J8
Request Sizes 2k 14k 3k 6k 6k 10k 7k 20k
Usage Time 4 10 2 8 4 1 8 6
Answer ☟
5.10.1 Memory Management: GATE CSE 1992 | Question: 12-b top☝ ☛ https://gateoverflow.in/43582
Window size of working set = 4
Initial pages in the working set window = {e, d, a}
Initial there is 1000k main memory available.
Then job 1 arrives and occupies 200k; then job 2 arrives and occupies 350k; after that job 3 arrives and occupies 300k (assume
contiguous allocation). Now free memory is 1000 − 850 (200 + 350 + 300) = 150k (till these jobs, first fit and best fit behave the
same).
Now, job 1 is finished. So, that space is also free. So, here 200k slot and 150k slots are free.
Now, job 4 arrives which is 120k.
Case 1:
First fit: the 120k job goes into the 200k slot (the first free slot), leaving 200 − 120 = 80k free.
Now the 150k job arrives, which goes into the 150k slot.
Then the 80k job arrives, which occupies the 80k slot (200 − 120). So, all jobs are allocated successfully.
Case 2:
Best fit: the 120k job occupies the best-fitting free space, which is 150k, leaving 150 − 120 = 30k.
Then the 150k job arrives; it occupies the 200k slot, which is the best fit for this job, leaving 200 − 150 = 50k.
Now the 80k job arrives, but there is no contiguous 80k of free memory. So, it cannot be allocated successfully. Hence, first fit
performs better for this sequence.
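The two cases can be simulated on the hole list that exists after job 1 finishes (holes of 200k and 150k; the function and variable names are mine):

```python
def allocate(holes, jobs, best_fit):
    """Try to place each job into a hole; return True if all jobs fit."""
    holes = holes[:]
    for job in jobs:
        fits = [h for h in holes if h >= job]
        if not fits:
            return False                                   # job cannot be placed
        hole = min(fits) if best_fit else next(h for h in holes if h >= job)
        holes[holes.index(hole)] = hole - job              # shrink the chosen hole
    return True

jobs = [120, 150, 80]                                      # jobs 4, 5, 6 (in KB)
print(allocate([200, 150], jobs, best_fit=False))   # True  -> first fit succeeds
print(allocate([200, 150], jobs, best_fit=True))    # False -> best fit fails on 80k
```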
5.10.3 Memory Management: GATE CSE 1996 | Question: 2.18 top☝ ☛ https://gateoverflow.in/2747
The answer is (B). Since the total size of the memory is 1000 KB , let's assume that the partitioning for the current
allocation is done in such a way that it will leave minimum free space.
Partitioning the 1000 KB as below will allow gaps of 180 KB each and hence a request of 181 KB will not be met.
[180 KB − 200 KB − 180 KB − 260 KB − 180 KB] . The reasoning is more of an intuition rather than any formula.
5.10.4 Memory Management: GATE CSE 1998 | Question: 2.16 top☝ ☛ https://gateoverflow.in/1689
"To enable a process to be larger than the amount of memory allocated to it, we can use overlays. The idea of overlays is
to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they
are loaded into space occupied previously by instructions that are no longer needed." For the above program, the maximum memory
will be required when running the code portions present at the leaves. Max requirement = max of the requirements of D, E, F, and G
= MAX(12, 14, 10, 14) = 14 KB (Answer)
5.10.5 Memory Management: GATE CSE 2014 Set 2 | Question: 55 top☝ ☛ https://gateoverflow.in/2022
When a write request is made, the bus is occupied for 100 ns. So, between 2 writes at least a 100 ns interval must elapse.
Now, after a write request, for 100 + 500 = 600 ns the corresponding memory module is busy storing the data. But, assuming
the next stores are to different memory modules (we have 8 modules in total in the question), we can have consecutive stores at
intervals of 100 ns. So, the maximum number of stores in 1 ms = 10⁶ ns / 100 ns = 10000.
5.10.6 Memory Management: GATE CSE 2015 Set 2 | Question: 30 top☝ ☛ https://gateoverflow.in/8145
Option (A) is correct. We have 6 memory partitions of sizes 200 KB, 400 KB, 600 KB, 500 KB, 300 KB
and 250 KB, and best fit allots the partitions as follows: 357 KB → 400 KB, 210 KB → 250 KB, 468 KB → 500 KB,
491 KB → 600 KB. Hence the 200 KB and 300 KB partitions are not allotted to any process.
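As a cross-check, a small best-fit simulation with the partition and process sizes from the question (the function name is mine):

```python
def best_fit(partitions, processes):
    """Allot each process the smallest partition that fits it;
    return the partitions left unallotted (assumes every process fits)."""
    free = partitions[:]
    for p in processes:
        fits = [f for f in free if f >= p]
        free.remove(min(fits))
    return free

left = best_fit([200, 400, 600, 500, 300, 250], [357, 210, 468, 491])
print(sorted(left))    # [200, 300] -> option (A)
```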
Best fit will search for the smallest block which is able to accommodate the request. So, the hole created by the Best Fit is
always less than or equal to the hole created using any other method.
Worst fit searches for the biggest block that can accommodate the request. It might happen that the biggest
block is also the first block that fits, in which case worst fit and first fit select the same block.
So, we cannot say that the hole formed by worst fit is always greater than the one formed by first fit; the hole sizes can be equal too. (B) is false
Ans: (C) Hole created by the best fit is never larger than the hole created by first fit,
The hole created by the Best Fit is equal to the hole created by first fit when the first fit happens to select the smallest block
which can accommodate the required size.
Page size is 1KB. So, no. of pages required for P1 = 195 . An entry in page table is of size 4 bytes and assuming an inner level
page table takes the size of a page (this information is not given in question), we can have up to 256 entries in a second level
page table and we require only 195 for P1 . Thus only 1 second level page table is enough. So, memory overhead = 1KB (for
first level) (again assumed as page size as not explicitly told in question) +1KB for second level = 2KB .
For P2 and P3 also, we get 2KB each and for P4 we get 1 + 2 = 3KB as it requires 1 first level page table and 2 second level
page tables (364 > 256) . So, total overhead for their concurrent execution = 2 × 3 + 3 = 9KB .
Thus P = 9KB .
Similarly, for P2 , P3 and P4 we get 5 × 8 , 3 × 8 and 8 × 8 bytes respectively and the total overhead will be
32 + 40 + 24 + 64 = 160 bytes.
So, S = 160B .
Here we segment first and then page. So, we need the page table size. We are given maximum size of a segment is 256 KB and
page size is 1KB and thus we require 256 entries in the page table. So, total size of page table = 256 × 4 = 1024 bytes
(exactly 1 page size).
So, now for P1 we require 1 segment table of size 32 bytes plus 4 page tables of size 1 KB each, for its 4 segments. Similarly,
P2 − 40 bytes and 5 KB
P3 − 24 bytes and 3 KB
P4 − 64 bytes and 8 KB .
So, T = 20640B .
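The three overheads above can be cross-checked with a short script. The page counts for P2 and P3 are not restated in this excerpt, so 254 and 45 pages are assumed here (any value up to 256 pages gives the same 2 KB two-level overhead); the segment counts 4, 5, 3, 8 follow from the 8-byte-per-entry segment table sizes 32, 40, 24 and 64 bytes. A sketch under those assumptions:

```python
import math

# Page counts per process in 1 KB pages. P1 = 195 and P4 = 364 are stated
# above; 254 and 45 for P2 and P3 are ASSUMED (any value up to 256 pages
# gives the same 2 KB two-level overhead). Segment counts follow from the
# segment table sizes 32, 40, 24 and 64 bytes at 8 bytes per entry.
pages = [195, 254, 45, 364]
segments = [4, 5, 3, 8]

ENTRIES = 256  # entries per 1 KB page table (4-byte entries)

# P: two-level paging -- one 1 KB first-level table plus however many
# 1 KB second-level tables each process needs (result in KB)
P = sum(1 + math.ceil(p / ENTRIES) for p in pages)

# S: pure segmentation -- one segment table per process, 8 B per entry (bytes)
S = sum(8 * s for s in segments)

# T: segmentation with paging -- segment tables plus one 1 KB page table
# per segment (bytes)
T = S + 1024 * sum(segments)

print(P, S, T)  # 9 160 20640
```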
PS: Since the block sizes are given, we cannot assume further splitting of them.
Also, the question implies a multiprocessing environment and we can assume the execution of a process is not affecting other
process' runtime.
At t = 0:
Block A (4k): J3 (finishes at t = 2)
Block B (8k): J4 (finishes at t = 8)
Block C (20k): J2 (finishes at t = 10)
Block D (2k): J1 (finishes at t = 4)

At t = 8:
Block A (4k): free
Block B (8k): J5 (finishes at t = 12)
Block C (20k): J2 (finishes at t = 10)
Block D (2k): free

At t = 10:
Block A (4k): free
Block B (8k): J5 (finishes at t = 12)
Block C (20k): J6 (finishes at t = 11)
Block D (2k): free

At t = 11:
Block A (4k): free
Block B (8k): J5 (finishes at t = 12)
Block C (20k): J7 (finishes at t = 19)
Block D (2k): free
So, J7 finishes at t = 19 .
Reference: http://thumbsup2life.blogspot.fr/2011/02/best-fit-first-fit-and-worst-fit-memory.html
Correct Answer: B
References
5.11.1 Os Protection: GATE CSE 1999 | Question: 1.11, UGCNET-Dec2015-II: 44 top☝ ☛ https://gateoverflow.in/1464
Answer ☟
A CPU has two modes -- privileged and non-privileged. In order to change the mode from privileged to non-privileged
Answer ☟
A user level process in Unix traps the signal sent on a Ctrl-C input, and has a signal handling routine that saves appropriate
files before terminating the process. When a Ctrl-C input is given to this process, what is the mode in which the signal handling
routine executes?
A. User mode
B. Kernel mode
C. Superuser mode
D. Privileged mode
Answer ☟
Answers: Os Protection
5.11.1 Os Protection: GATE CSE 1999 | Question: 1.11, UGCNET-Dec2015-II: 44 top☝ ☛ https://gateoverflow.in/1464
Software interrupt is the answer.
Privileged instruction cannot be the answer as system call is done from user mode and privileged instruction cannot be done
from user mode.
44 votes -- Arjun Suresh (332k points)
Answer should be (D). Changing from privileged to non-privileged mode doesn't require an interrupt, unlike going from
non-privileged to privileged. Also, to lose a privilege we don't need a privileged instruction, though executing a privileged instruction does no harm.
http://web.cse.ohio-state.edu/~teodores/download/teaching/cse675.au08/CSE675.02_MIPS-ISA_part3.pdf
References
When a user sends an input to the process, the handling cannot happen in privileged mode, since the signal is trapped by a
user-level process; so option D, privileged mode, is not possible here.
Kernel mode (option B) is the same thing as privileged mode, so option B is false for the same reason.
There is nothing called superuser mode, so option C is clearly wrong.
Only option A is left: when a user input like Ctrl-C comes, the signal handling routine executes in user mode, as
a user-level process in UNIX traps the signal.
The following page addresses, in the given sequence, were generated by a program:
1, 2, 3, 4, 1, 3, 5, 2, 1, 5, 4, 3, 2, 3
This program is run on a demand paged virtual memory system, with main memory size equal to 4 pages. Indicate the page
references for which page faults occur for the following page replacement algorithms.
A. LRU
B. FIFO
Answer ☟
5.12.2 Page Replacement: GATE CSE 1994 | Question: 1.13 top☝ ☛ https://gateoverflow.in/2454
A memory page containing a heavily used variable that was initialized very early and is in constant use is removed, then
A. LRU page replacement algorithm is used
B. FIFO page replacement algorithm is used
C. LFU page replacement algorithm is used
D. None of the above
5.12.3 Page Replacement: GATE CSE 1994 | Question: 1.24 top☝ ☛ https://gateoverflow.in/2467
Consider the following heap (figure) in which blank regions are not in use and hatched regions are in use.
The sequence of requests for blocks of sizes 300, 25, 125, 50 can be satisfied if we use
A. either first fit or best fit policy (any one)
B. first fit but not best fit policy
C. best fit but not first fit policy
D. None of the above
Answer ☟
5.12.4 Page Replacement: GATE CSE 1995 | Question: 1.8 top☝ ☛ https://gateoverflow.in/2595
Which of the following page replacement algorithms suffers from Belady's anomaly?
A. Optimal replacement
B. LRU
C. FIFO
D. Both (A) and (C)
Answer ☟
5.12.5 Page Replacement: GATE CSE 1995 | Question: 2.7 top☝ ☛ https://gateoverflow.in/2619
The address sequence generated by tracing a particular program executing in a pure demand-based paging system with 100
records per page and 1 free main memory frame is recorded as follows. What is the number of page faults?
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0370
A. 13
B. 8
C. 7
D. 10
Answer ☟
5.12.6 Page Replacement: GATE CSE 1997 | Question: 3.10, ISRO2008-57, ISRO2015-64 top☝ ☛ https://gateoverflow.in/2241
Answer ☟
5.12.7 Page Replacement: GATE CSE 1997 | Question: 3.5 top☝ ☛ https://gateoverflow.in/2236
Locality of reference implies that the page reference being made by a process
Answer ☟
5.12.8 Page Replacement: GATE CSE 1997 | Question: 3.9 top☝ ☛ https://gateoverflow.in/2240
Thrashing
A. reduces page I/O
B. decreases the degree of multiprogramming
C. implies excessive page I/O
D. improve the system performance
Answer ☟
5.12.9 Page Replacement: GATE CSE 2001 | Question: 1.21 top☝ ☛ https://gateoverflow.in/714
Consider a virtual memory system with FIFO page replacement policy. For an arbitrary page access pattern, increasing the
number of page frames in main memory will
A. always decrease the number of page faults
B. always increase the number of page faults
C. sometimes increase the number of page faults
D. never affect the number of page faults
Answer ☟
5.12.10 Page Replacement: GATE CSE 2002 | Question: 1.23 top☝ ☛ https://gateoverflow.in/828
The optimal page replacement algorithm will select the page that
A. Has not been used for the longest time in the past
B. Will not be used for the longest time in the future
C. Has been used least number of times
D. Has been used most number of times
Answer ☟
5.12.11 Page Replacement: GATE CSE 2004 | Question: 21, ISRO2007-44 top☝ ☛ https://gateoverflow.in/1018
The minimum number of page frames that must be allocated to a running process in a virtual memory environment is
determined by
Answer ☟
5.12.12 Page Replacement: GATE CSE 2005 | Question: 22, ISRO2015-36 top☝ ☛ https://gateoverflow.in/1358
Answer ☟
A virtual memory system uses First In First Out (FIFO) page replacement policy and allocates a fixed number of frames to a
process. Consider the following statements:
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate.
Q: Some programs do not exhibit locality of reference.
Which one of the following is TRUE?
A. Both P and Q are true, and Q is the reason for P
B. Both P and Q are true, but Q is not the reason for P.
C. P is false but Q is true
D. Both P and Q are false.
Answer ☟
A process has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory
initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
If optimal page replacement policy is used, how many page faults occur for the above reference string?
A. 7
B. 8
C. 9
D. 10
Answer ☟
A process, has been allocated 3 page frames. Assume that none of the pages of the process are available in the memory
initially. The process makes the following sequence of page references (reference string): 1, 2, 1, 3, 7, 4, 5, 6, 3, 1
Least Recently Used (LRU) page replacement policy is a practical approximation to optimal page replacement. For the above
reference string, how many more page faults occur with LRU than with the optimal page replacement policy?
Answer ☟
5.12.16 Page Replacement: GATE CSE 2009 | Question: 9, ISRO2016-52 top☝ ☛ https://gateoverflow.in/1301
In which one of the following page replacement policies, Belady's anomaly may occur?
A. FIFO
B. Optimal
C. LRU
D. MRU
Answer ☟
A system uses FIFO policy for page replacement. It has 4 page frames with no pages loaded to begin with. The system first
accesses 100 distinct pages in some order and then accesses the same 100 pages but now in the reverse order. How many page
faults will occur?
A. 196
B. 192
C. 197
D. 195
Answer ☟
1, 2, 3, 2, 4, 1, 3, 2, 4, 1
on a demand paged virtual memory system running on a computer system that has main memory size of 3 page frames which are
initially empty. Let LRU, FIFO and OPTIMAL denote the number of page faults under the corresponding page replacement
policy. Then
A. OPTIMAL < LRU < FIFO
B. OPTIMAL < FIFO < LRU
C. OPTIMAL = LRU
D. OPTIMAL = FIFO
Answer ☟
5.12.19 Page Replacement: GATE CSE 2014 Set 1 | Question: 33 top☝ ☛ https://gateoverflow.in/1805
Assume that there are 3 page frames which are initially empty. If the page reference string is 1, 2, 3, 4, 2, 1, 5, 3, 2, 4, 6 the
number of page faults using the optimal replacement policy is__________.
Answer ☟
A computer has twenty physical page frames which contain pages numbered 101 through 120. Now a program accesses the
pages numbered 1, 2, ..., 100 in that order, and repeats the access sequence THRICE. Which one of the following page
replacement policies experiences the same number of page faults as the optimal page replacement policy for this program?
A. Least-recently-used
B. First-in-first-out
C. Last-in-first-out
D. Most-recently-used
Answer ☟
5.12.21 Page Replacement: GATE CSE 2014 Set 3 | Question: 20 top☝ ☛ https://gateoverflow.in/2054
A system uses 3 page frames for storing process pages in main memory. It uses the Least Recently Used (LRU) page
replacement policy. Assume that all the page frames are initially empty. What is the total number of page faults that will occur while
processing the page reference string given below?
4, 7, 6, 1, 7, 6, 1, 2, 7, 2
Answer ☟
5.12.22 Page Replacement: GATE CSE 2015 Set 1 | Question: 47 top☝ ☛ https://gateoverflow.in/8353
Consider a main memory with five-page frames and the following sequence of page references:
3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3 . Which one of the following is true with respect to page replacement policies First In First
Out (FIFO) and Least Recently Used (LRU)?
Answer ☟
5.12.23 Page Replacement: GATE CSE 2016 Set 1 | Question: 49 top☝ ☛ https://gateoverflow.in/39711
Consider a computer system with ten physical page frames. The system is provided with an access sequence
(a1, a2, …, a20, a1, a2, …, a20), where each ai is a distinct virtual page number. The difference in the number of page faults
between the last-in-first-out page replacement policy and the optimal page replacement policy is_________.
Answer ☟
5.12.24 Page Replacement: GATE CSE 2016 Set 2 | Question: 20 top☝ ☛ https://gateoverflow.in/39559
In which one of the following page replacement algorithms it is possible for the page fault rate to increase even when the
number of allocated frames increases?
A. LRU (Least Recently Used)
B. OPT (Optimal Page Replacement)
C. MRU (Most Recently Used)
D. FIFO (First In First Out)
Answer ☟
Recall that Belady's anomaly is that the page-fault rate may increase as the number of allocated frames increases. Now,
consider the following statements:
S1 : Random page replacement algorithm (where a page chosen at random is replaced) suffers from Belady's anomaly.
S2 : LRU page replacement algorithm suffers from Belady's anomaly.
Answer ☟
5.12.26 Page Replacement: GATE CSE 2021 Set 1 | Question: 11 top☝ ☛ https://gateoverflow.in/357441
In the context of operating systems, which of the following statements is/are correct with respect to paging?
Answer ☟
5.12.27 Page Replacement: GATE CSE 2021 Set 2 | Question: 48 top☝ ☛ https://gateoverflow.in/357489
Consider a three-level page table to translate a 39−bit virtual address to a physical address as shown below:
The page size is 4 KB (1 KB = 2^10 bytes) and the page table entry size at every level is 8 bytes. A process P is currently using
2 GB (1 GB = 2^30 bytes) of virtual memory which is mapped to 2 GB of physical memory. The minimum amount of memory
required for the page table of P across all levels is _________ KB .
Answer ☟
The address sequence generated by tracing a particular program executing in a pure demand paging system with 100 bytes per
page is
0100, 0200, 0430, 0499, 0510, 0530, 0560, 0120, 0220, 0240, 0260, 0320, 0410.
Suppose that the memory can store only one page and if x is the address which causes a page fault then the bytes from addresses x
to x + 99 are loaded on to the memory.
How many page faults will occur?
A. 0
B. 4
C. 7
D. 8
Answer ☟
A demand paging system takes 100 time units to service a page fault and 300 time units to replace a dirty page. Memory
access time is 1 time unit. The probability of a page fault is p . In case of a page fault, the probability of page being dirty is also p . It
is observed that the average access time is 3 time units. Then the value of p is
A. 0.194
B. 0.233
C. 0.514
D. 0.981
Answer ☟
Assume that a main memory with only 4 pages, each of 16 bytes, is initially empty. The CPU generates the following
sequence of virtual addresses and uses the Least Recently Used (LRU) page replacement policy.
0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92
How many page faults does this sequence cause? What are the page numbers of the pages present in the main memory at the end of
the sequence?
A. 6 and 1, 2, 3, 4
B. 7 and 1, 2, 4, 5
C. 8 and 1, 2, 4, 5
D. 9 and 1, 2, 3, 5
Answer ☟
LRU − faults occur on: 1, 2, 3, 4, 5, 2, 4, 3, 2 (9 page faults)
FIFO − faults occur on: 1, 2, 3, 4, 5, 1, 2, 3 (8 page faults)
17 votes -- Digvijay (44.9k points)
5.12.2 Page Replacement: GATE CSE 1994 | Question: 1.13 top☝ ☛ https://gateoverflow.in/2454
FIFO removes first the page that was brought into memory first. Since the variable was initialized very
early, its page is among the first pages brought in, so it will be removed. Answer: (B). With LRU, since the variable is in constant
use its page is always recently used, so it cannot be removed. With LFU, the frequency count of the page is high since it is in
constant use, so it cannot be replaced either.
34 votes -- Sankaranarayanan P.N (8.5k points)
5.12.3 Page Replacement: GATE CSE 1994 | Question: 1.24 top☝ ☛ https://gateoverflow.in/2467
In first fit, a block request is satisfied from the first free block that fits it.
The request for 300 will be satisfied by the 350 size block, reducing the free size to 50.
The request for 25 is satisfied by the 150 size block, reducing it to 125.
The request for 125 is satisfied by that 125-size remainder, and the final request for 50 is satisfied by the 50 left over from the 350 block; so first fit satisfies all the requests.
In the best fit strategy, a block request is satisfied by the smallest block that can fit it.
The request for 300 will be satisfied by a 350 size block reducing the free size to 50.
Request for 25, satisfied by the 50 size block as it is the smallest that fits 25, reducing it to 25.
Request for 125, satisfied by 150 size block, reducing it to 25.
Now, the request for 50 cannot be satisfied as the two 25 size blocks are not contiguous.
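The two policies can be contrasted with a minimal allocator sketch. Since the heap figure is missing from this excerpt, a free list of one 150 block and one 350 block is assumed, consistent with the walkthrough above:

```python
def allocate(free, size, policy):
    """Try to carve `size` out of one free block; return True on success.
    `policy` is 'first' (first block that fits) or 'best' (smallest fit)."""
    fits = [i for i, b in enumerate(free) if b >= size]
    if not fits:
        return False
    i = fits[0] if policy == 'first' else min(fits, key=lambda j: free[j])
    free[i] -= size  # the remainder stays behind as a smaller free block
    return True

requests = [300, 25, 125, 50]
# Free-block sizes ASSUMED from the walkthrough (the original figure is missing)
first = [150, 350]
best = [150, 350]
print([allocate(first, r, 'first') for r in requests])  # all True
print([allocate(best, r, 'best') for r in requests])    # last request fails
```

Under these assumptions first fit satisfies all four requests while best fit leaves two non-contiguous 25-size holes and fails on the final 50, matching option (B).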
5.12.4 Page Replacement: GATE CSE 1995 | Question: 1.8 top☝ ☛ https://gateoverflow.in/2595
Answer is (C).
FIFO suffers from Belady's anomaly; optimal replacement never does.
5.12.5 Page Replacement: GATE CSE 1995 | Question: 2.7 top☝ ☛ https://gateoverflow.in/2619
0100 − 1 page fault. Records 0100 − 0199 in memory
0200 − 2 page faults. Records 0200 − 0299 in memory
0430 − 3 page faults. Records 0400 − 0499 in memory
0499 − 3 page faults. Records 0400 − 0499 in memory
0510 − 4 page faults. Records 0500 − 0599 in memory
0530 − 4 page faults. Records 0500 − 0599 in memory
0560 − 4 page faults. Records 0500 − 0599 in memory
0120 − 5 page faults. Records 0100 − 0199 in memory
0220 − 6 page faults. Records 0200 − 0299 in memory
0240 − 6 page faults. Records 0200 − 0299 in memory
0260 − 6 page faults. Records 0200 − 0299 in memory
0320 − 7 page faults. Records 0300 − 0399 in memory
0370 − 7 page faults. Records 0300 − 0399 in memory
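The count above can be reproduced with a tiny simulation of a single-frame memory holding one aligned 100-record page at a time (a sketch):

```python
def faults_one_frame(addrs, page_size=100):
    """Count page faults with a single frame; pages are aligned blocks of
    `page_size` record addresses (e.g. records 0400-0499 form one page)."""
    resident, faults = None, 0
    for a in addrs:
        page = a // page_size       # aligned page number of this record
        if page != resident:        # not the resident page: fault and load
            faults += 1
            resident = page
    return faults

trace = [100, 200, 430, 499, 510, 530, 560, 120, 220, 240, 260, 320, 370]
print(faults_one_frame(trace))  # 7
```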
5.12.6 Page Replacement: GATE CSE 1997 | Question: 3.10, ISRO2008-57, ISRO2015-64 top☝ ☛ https://gateoverflow.in/2241
The dirty bit allows for a performance optimization. A page on disk that is paged in to physical memory, then read from,
and subsequently paged out again does not need to be written back to disk, since the page hasn't changed. However, if the page
was written to after it is paged in, its dirty bit will be set, indicating that the page must be written back to the backing store.
answer: (A)
5.12.7 Page Replacement: GATE CSE 1997 | Question: 3.5 top☝ ☛ https://gateoverflow.in/2236
Answer is (B)
Locality of reference is also called the principle of locality. It means that the same data values or related storage locations are
frequently accessed, which in turn saves time. There are three main types of locality:
1. temporal locality
2. spatial locality
3. sequential locality
5.12.8 Page Replacement: GATE CSE 1997 | Question: 3.9 top☝ ☛ https://gateoverflow.in/2240
(C)- implies excessive page I/O
http://en.wikipedia.org/wiki/Thrashing_%28computer_science%29
References
5.12.9 Page Replacement: GATE CSE 2001 | Question: 1.21 top☝ ☛ https://gateoverflow.in/714
Answer is (C).
Belady anomaly is the name given to the phenomenon in which increasing the number of page frames results in an increase in
the number of page faults for certain memory access patterns. This phenomenon is commonly experienced when using the First
in First Out (FIFO) page replacement algorithm
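The anomaly can be demonstrated with a short FIFO simulation on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, where going from 3 to 4 frames increases the fault count from 9 to 10 (a sketch):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count FIFO page faults for reference string `refs`."""
    q, faults = deque(), 0
    for p in refs:
        if p not in q:
            faults += 1
            if len(q) == frames:
                q.popleft()        # evict the page loaded earliest
            q.append(p)
    return faults

# Classic string exhibiting Belady's anomaly under FIFO
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10
```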
5.12.10 Page Replacement: GATE CSE 2002 | Question: 1.23 top☝ ☛ https://gateoverflow.in/828
The optimal page replacement algorithm will always select the page that will not be used for the longest time in the future for
replacement, and that is why it is called the optimal page replacement algorithm. Hence, choice (B).
5.12.11 Page Replacement: GATE CSE 2004 | Question: 21, ISRO2007-44 top☝ ☛ https://gateoverflow.in/1018
It is the instruction set architecture. If there is no indirect addressing, you need at least two pages in physical memory:
one for the instruction (code part) and another in case the instruction references data in memory. If there is one level of
indirection, you need at least three pages: one for the instruction (code) and another two for the indirect data reference.
Each additional level of indirection requires one more frame.
http://stackoverflow.com/questions/11213013/minimum-page-frames
References
So, answer → (C).
1. Virtual memory increases → False. The virtual memory of a computer does not depend on RAM; the virtual memory
concept itself was introduced so that programs larger than RAM can be executed.
2. Larger RAMs are faster → False. The size of RAM does not determine its speed; the type of RAM does (SRAM is
faster, DRAM is slower).
3. Fewer page faults occur → True, since more pages can reside in main memory.
4. Fewer segmentation faults occur → False. A segmentation fault (aka segfault) is a common condition
that causes programs to crash; they are often associated with a file named core. Segfaults are caused by a program trying
to read or write an illegal memory location, which is clearly unrelated to the size of main memory.
P: Increasing the number of page frames allocated to a process sometimes increases the page fault rate.
This is true,
example : FIFO suffers from Bélády's anomaly which means that on Increasing the number of page frames allocated to a process
it may sometimes increase the total number of page faults.
This is true: it is easy to write a program that jumps around a lot and does not exhibit locality of reference.
Example: an array stored in row-major order but accessed in column-major order.
So, the answer is option (B). There is no causal relation between P and Q; as the examples show, they are independent.
Optimal replacement policy means a page which is "farthest" in the future to be accessed will be replaced next.
Frame 0: 1
Frame 1: 2, then 7, 4, 5, 6 (each replacing the previous occupant)
Frame 2: 3
3 initial page faults for pages 1, 2, 3 and then one fault each for pages 7, 4, 5, 6 ⟹ 7 page faults occur.
Answer is (A).
Using LRU = 9 Page Fault
So, LRU-OPTIMAL = 2
Option (C).
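Both counts (7 faults for optimal, 9 for LRU, difference 2) can be verified with a small simulator sketch:

```python
def opt_faults(refs, frames):
    """Optimal replacement: evict the resident page whose next use is
    farthest in the future (or that is never used again)."""
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            def next_use(q):
                rest = refs[i + 1:]
                return rest.index(q) if q in rest else float('inf')
            mem.remove(max(mem, key=next_use))
        mem.append(p)
    return faults

def lru_faults(refs, frames):
    """LRU: `mem` keeps pages ordered from least to most recently used."""
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)          # hit: move to most-recent position
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)         # evict the least recently used page
        mem.append(p)
    return faults

refs = [1, 2, 1, 3, 7, 4, 5, 6, 3, 1]
print(opt_faults(refs, 3), lru_faults(refs, 3))  # 7 9
```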
5.12.16 Page Replacement: GATE CSE 2009 | Question: 9, ISRO2016-52 top☝ ☛ https://gateoverflow.in/1301
It is (A).
http://en.wikipedia.org/wiki/B%C3%A9l%C3%A1dy%27s_anomaly
References
Answer is (A).
When we access 100 distinct page in some order (for example 1, 2, 3 … 100 ) then total number of page faults = 100 . At last,
the 4 page frames will contain the pages 100, 99, 98 and 97. When we reverse the string (100, 99, 98, … , 1) then first four page
accesses will not cause the page fault because they are already present in page frames. But the remaining 96 page accesses will
cause 96 page faults. So, total number of page faults = 100 + 96 = 196 .
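The count of 196 can be reproduced with a small FIFO simulation, using page numbers 1 to 100 to stand for the 100 distinct pages:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    q, faults = deque(), 0
    for p in refs:
        if p not in q:
            faults += 1
            if len(q) == frames:
                q.popleft()        # evict the page loaded earliest
            q.append(p)
    return faults

# 100 distinct pages in some order, then the same pages in reverse order
seq = list(range(1, 101)) + list(range(100, 0, -1))
print(fifo_faults(seq, 4))  # 196
```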
Page fault for LRU = 9, FIFO = 6, OPTIMAL = 5
Answer is (B).
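All three counts can be checked with one generic simulator parameterized by the replacement policy (a sketch):

```python
def simulate(refs, frames, policy):
    """Count faults under 'FIFO', 'LRU' or 'OPT' with `frames` frames.
    `mem` holds insertion order for FIFO/OPT and recency order for LRU."""
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            if policy == 'LRU':
                mem.remove(p)
                mem.append(p)      # hit: move to most-recent position
            continue
        faults += 1
        if len(mem) == frames:
            if policy == 'OPT':
                future = refs[i + 1:]
                key = lambda q: future.index(q) if q in future else len(future)
                mem.remove(max(mem, key=key))  # evict farthest next use
            else:
                mem.pop(0)         # FIFO: oldest loaded; LRU: least recent
        mem.append(p)
    return faults

refs = [1, 2, 3, 2, 4, 1, 3, 2, 4, 1]
print([simulate(refs, 3, p) for p in ('OPT', 'FIFO', 'LRU')])  # [5, 6, 9]
```

With 5, 6 and 9 faults respectively, OPTIMAL < FIFO < LRU holds for this string, i.e. option (B).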
5.12.19 Page Replacement: GATE CSE 2014 Set 1 | Question: 33 top☝ ☛ https://gateoverflow.in/1805
In Optimal page replacement a page which will be farthest accessed in future will be replaced first.
Here, we have 3 page frames. Since, initially they are empty the first 3 distinct page references will cause page faults.
After 3 distinct page accesses the frames hold 1 2 3, with next-use order 2, 1, 3 (page 3 is needed farthest in the future).
Based on the next-use order, the next replacement victim is 3. Proceeding like this we get:

Request 4: frames 1 2 4 − Miss (3 replaced, as it is needed farthest in the future)
Request 2: frames 1 2 4 − Hit
Request 1: frames 1 2 4 − Hit
Request 5: frames 5 2 4 − Miss (1 replaced, as it is never needed again)
Request 3: frames 3 2 4 − Miss (5 replaced, as it is never needed again)
Request 2: frames 3 2 4 − Hit
Request 4: frames 3 2 4 − Hit
Request 6: frames 6 2 4 − Miss

Total = 3 initial faults + 4 more = 7 page faults.
5.12.20 Page Replacement: GATE CSE 2014 Set 2 | Question: 33 top☝ ☛ https://gateoverflow.in/1992
It will be (D), i.e., Most-recently-used.
To be clear, "repeats the access sequence THRICE" means the sequence of page numbers is accessed 4 times in total, though this
is not important for the answer here.
If we go by the optimal page replacement algorithm, it replaces the page that will not be used for the longest time in the future.
Now we have frame size 20 and the reference string 1, 2, …, 100 repeated.
The first 20 accesses cause page faults - the initially resident pages (101 − 120) are never used again and hence optimal
replacement replaces them first. Now, for the 21st reference: page 1 will be used again only after 100 accesses, page 2 just after
that, and so on, so the page used farthest in the future is page 20. So, for the 21st reference page 20 will be replaced, for the 22nd
page reference, page 21 will be replaced, and so on - which is exactly the MOST RECENTLY USED page replacement policy.
PS: Even for Most Recently Used page replacement at first all empty (invalid) pages frames are replaced and then only most
recently used ones are replaced.
5.12.21 Page Replacement: GATE CSE 2014 Set 3 | Question: 20 top☝ ☛ https://gateoverflow.in/2054
Total page faults = 6.
Request: 4 7 6 1 7 6 1 2 7 2
Fault? : F F F F H H H F F H

After the 3 initial faults on 4, 7, 6: page 1 replaces 4 (fault); 7, 6, 1 then hit; page 2 replaces the least recently used
page 7 (fault); page 7 replaces the least recently used page 6 (fault); the final 2 hits.
So, 3 initial access faults + 3 further faults = 6 page faults.
5.12.22 Page Replacement: GATE CSE 2015 Set 1 | Question: 47 top☝ ☛ https://gateoverflow.in/8353
Requested Page references are 3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3 and number of page frames is 5.
With FIFO, replacement takes place in first-in first-out order:

Request : 3 8 2 3 9 1 6 3 8 9 3 6 2 1 3
Miss/Hit: F F F H F F F F F H H H F H H

giving 9 page faults.
With Least Recently Used (LRU), the page replaced is the one visited least recently (not used for the
longest time):

Request : 3 8 2 3 9 1 6 3 8 9 3 6 2 1 3
Miss/Hit: F F F H F F F H F H H H F F H

also giving 9 page faults. So, both policies incur the same number of page faults.
Correct Answer: A
31 votes -- Raghuveer Dhakad (1.6k points)
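The tie (9 faults under each policy) can be confirmed with a short simulation of both:

```python
def fifo_faults(refs, frames):
    """Count faults under FIFO: evict the page loaded earliest."""
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)         # oldest-loaded page goes
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    """Count faults under LRU: `mem` is ordered least- to most-recent."""
    mem, faults = [], 0
    for p in refs:
        if p in mem:
            mem.remove(p)          # hit: refresh recency
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)         # least recently used page goes
        mem.append(p)
    return faults

refs = [3, 8, 2, 3, 9, 1, 6, 3, 8, 9, 3, 6, 2, 1, 3]
print(fifo_faults(refs, 5), lru_faults(refs, 5))  # 9 9
```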
5.12.23 Page Replacement: GATE CSE 2016 Set 1 | Question: 49 top☝ ☛ https://gateoverflow.in/39711
Answer is 1.
In LIFO, the first 20 accesses are page faults, the next 9 are hits, and the next 11 are page faults (after a10, a11 replaces a10,
a12 replaces a11 and so on): 31 faults in total.
In optimal, the first 20 are page faults, the next 9 are hits, then there are 10 page faults, followed by a hit on the last page:
30 faults in total. The difference is 31 − 30 = 1.
70 votes -- Krishna murthy (271 points)
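Both counts (31 for LIFO, 30 for optimal) can be verified with a small simulation, using page numbers 1 to 20 to stand for a1 to a20:

```python
def lifo_faults(refs, frames):
    """LIFO: on a fault with full memory, evict the most recently LOADED page."""
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop()          # last-loaded page goes
            mem.append(p)
    return faults

def opt_faults(refs, frames):
    """Optimal: evict the page used farthest in the future (or never again)."""
    mem, faults = [], 0
    for i, p in enumerate(refs):
        if p in mem:
            continue
        faults += 1
        if len(mem) == frames:
            future = refs[i + 1:]
            key = lambda q: future.index(q) if q in future else len(future) + 1
            mem.remove(max(mem, key=key))
        mem.append(p)
    return faults

seq = list(range(1, 21)) * 2       # a1..a20, a1..a20
print(lifo_faults(seq, 10), opt_faults(seq, 10))  # 31 30
```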
5.12.24 Page Replacement: GATE CSE 2016 Set 2 | Question: 20 top☝ ☛ https://gateoverflow.in/39559
Option D. FIFO suffers from Belady's anomaly.
https://gateoverflow.in/1301/gate2009_9
https://gateoverflow.in/1254/gate2007_56
https://gateoverflow.in/2595/gate1995_1-8
References
5.12.25 Page Replacement: GATE CSE 2017 Set 1 | Question: 40 top☝ ☛ https://gateoverflow.in/118323
A page replacement algorithm suffers from Belady's anomaly when it is not a stack algorithm.
A stack algorithm is one that satisfies the inclusion property. The inclusion property states that, at a given time, the
contents(pages) of a memory of size k page-frames is a subset of the contents of memory of size k + 1 page-frames, for the same
sequence of accesses. The advantage is that running the same algorithm with more pages(i.e. larger memory) will never increase
the number of page faults.
5.12.26 Page Replacement: GATE CSE 2021 Set 1 | Question: 11 top☝ ☛ https://gateoverflow.in/357441
Pages are fixed-size slots, so paging causes no external fragmentation.
But a process whose size is not a multiple of the page size wastes part of its last page, causing internal fragmentation.
Page tables themselves occupy extra pages in memory, and therefore incur extra cost.
Correct answers: A and C.
5.12.27 Page Replacement: GATE CSE 2021 Set 2 | Question: 48 top☝ ☛ https://gateoverflow.in/357489
Given :
Three level pages tables with address division (9, 9, 9, 12) means:
The entries of the level-1 page table are pointers to a level-2 page table, the entries of the level-2 page table are pointers to a
level-3 page table, and the entries of the level-3 page table are PTEs that contain actual frame number where our desired word
resides.
Process P uses 2 GB = 2^31 B of virtual memory, i.e. 2^31 / 2^12 = 2^19 pages. But a level-3 page table has only 2^9 entries,
so one level-3 page table can map only 2^9 pages of VM; hence we need 2^19 / 2^9 = 2^10 level-3 page tables for process P.
So, at level 3 we have 2^10 page tables, and thus we need 2^10 entries at level 2. But a level-2 page table also has only 2^9
entries, so one level-2 page table can point to only 2^9 level-3 page tables; hence we need 2 level-2 page tables.
So, for process P, we need 1 level-1 page table, 2 level-2 page tables, and 2^10 level-3 page tables.
Note that all the page tables, at every level, have the same size: 2^9 × 8 B = 2^12 B = 4 KB
(because every page table at every level has 2^9 entries and the page table entry size at every level is 8 B).
So, in total, we need 1 + 2 + 2^10 = 1027 page tables (1 level-1, 2 level-2, 2^10 level-3), each of size 4 KB, giving 1027 × 4 KB = 4108 KB.
NOTE :
In this question, in place of multilevel paging, if we had used a single-level page table (also known as a flat or
linear page table), then the size of the page table would be 1 GB.
Single Level Page Table :
Single-Level Page Tables are single linear array of page-table entries (PTEs). Each PTE contains information about the page,
such as its physical page number (“frame” number) as well as status bits, such as whether or not the page is valid, and other bits.
the ith entry in the array gives the frame number in which the ith page is stored.
So, number of pages in the virtual address space (VAS) of each process = 2^39 B / 4 KB = 2^27.
So, we need 2^27 entries in the page table, each PTE of size 8 B.
So, size of the page table for the process = 2^27 × 8 B = 2^30 B = 1 GB.
NOTE that Single level paging CANNOT take advantage of the unused space by the process. The single level page table needs
one entry per page. Furthermore, since the process has a very sparse virtual address space, so, the vast majority of these PTEs
would simply be marked invalid. BUT space taken by single level page table will be 1GB only. It only depends on the virtual
address space, NOT depend on the used memory of process.
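Both the multilevel total and the flat-table size can be recomputed in a few lines:

```python
import math

PAGE = 2 ** 12        # 4 KB page size
ENTRIES = 2 ** 9      # 512 entries per 4 KB table (8 B per PTE)
VA_BITS = 39
USED = 2 ** 31        # process maps 2 GB of virtual memory

pages = USED // PAGE                 # 2^19 mapped virtual pages
l3 = math.ceil(pages / ENTRIES)      # level-3 tables needed: 1024
l2 = math.ceil(l3 / ENTRIES)         # level-2 tables needed: 2
l1 = 1                               # a single root table
total_kb = (l1 + l2 + l3) * PAGE // 1024
print(l1 + l2 + l3, total_kb)        # 1027 4108

# A flat (single-level) table needs one 8 B PTE per virtual page,
# whether that page is used or not:
flat = (2 ** VA_BITS // PAGE) * 8
print(flat == 2 ** 30)               # True (exactly 1 GB)
```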
A common mistake that students make:
In this question, if in place of multilevel paging we had used a single-level page table, what would the size of the page table
be?
The mistake is that some students consider only the 2 GB of memory that the process is using, and get the answer
(2 GB / 4 KB) × 8 B = 4 MB, which is wrong.
Remember that the CORE reason why we use multilevel paging in place of single level paging is that we want to reduce size of
page table by taking advantage of unused space of process and making most entries in the outer level page table as invalid
entries.
https://people.cs.umass.edu/~emery/classes/cmpsci377/current/notes/lecture_15_vm.pdf
https://www.youtube.com/watch?v=PKy9Jxc3blw
https://www.youtube.com/watch?v=pcTAoyzW2rY
References
0100 - page fault, addresses till 199 in memory
0200 - page fault, addresses till 299 in memory
0430 - page fault, addresses till 529 in memory
0499 - no page fault
0510 - no page fault
0530 - page fault, addresses till 629 in memory
0560 - no page fault
0120 - page fault, addresses till 219 in memory
0220 - page fault, addresses till 319 in memory
0240 - no page fault
0260 - no page fault
0320 - page fault, addresses till 419 in memory
0410 - no page fault
So, 7 is the answer- (C)
p(p × 300 + (1 − p) × 100) + (1 − p) × 1 = 3
⟹ 200p² + 99p − 2 = 0
Solving this quadratic with the Sridharacharya formula p = (−b + √(b² − 4ac))/2a, we get
p ≈ 0.0194.
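The root can be checked numerically, e.g.:

```python
import math

# Positive root of 200p^2 + 99p - 2 = 0 via the quadratic formula.
a, b, c = 200, 99, -2
p = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(round(p, 4))  # 0.0194
```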
At first we have to translate the given virtual addresses (which addresses a byte) to page addresses (which again is virtual
but addresses a page). This can be done simply by dividing the virtual addresses by page size and taking the floor value
(equivalently by removing the page offset bits). Here, page size is 16 bytes which requires 4 offset bits. So,
0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92 ⟹ 0, 0, 0, 1, 1, 2, 2, 0, 4, 4, 5, 5, 1, 2, 5, 5
We have 4 frames for pages, and a replacement occurs only when a 5th distinct page arrives. Let's see what happens for the
sequence of memory accesses:
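The byte-address-to-page translation above is just an integer division, which can be checked mechanically:

```python
# Page size 16 B means 4 offset bits; dropping them maps a byte address to its page.
addresses = [0, 4, 8, 20, 24, 36, 44, 12, 68, 72, 80, 84, 28, 32, 88, 92]
PAGE_SIZE = 16
page_refs = [a // PAGE_SIZE for a in addresses]   # equivalently a >> 4
print(page_refs)  # [0, 0, 0, 1, 1, 2, 2, 0, 4, 4, 5, 5, 1, 2, 5, 5]
```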
5.13.1 Precedence Graph: GATE CSE 1989 | Question: 11b top☝ ☛ https://gateoverflow.in/91096
Consider the following precedence graph (Fig.6) of processes, where a node denotes a process and a directed edge from node
Pi to node Pj implies that Pi must complete before Pj commences. Implement the graph using FORK and JOIN constructs. The
actual computation done by a process may be indicated by a comment line.
Answer ☟
5.13.2 Precedence Graph: GATE CSE 1991 | Question: 01-xii top☝ ☛ https://gateoverflow.in/508
A given set of processes can be implemented by using only parbegin/parend statement, if the precedence graph of these
processes is ______
Answer ☟
Draw the precedence graph for the concurrent program given below
S1
parbegin
begin
S2;S4
end;
begin
S3;
parbegin
S5;
begin
S6;S8
end
parend
end;
S7
parend;
S9
Answer ☟
5.13.1 Precedence Graph: GATE CSE 1989 | Question: 11b top☝ ☛ https://gateoverflow.in/91096
Step 1 :
P1
fork L1
P2
fork L2
L1 : fork L2
P3
goto L3
Step 2 :
and
L2 : Join C1
P4
goto L4
L3 : Join C2
P5
goto L4
Step 3 :
5.13.2 Precedence Graph: GATE CSE 1991 | Question: 01-xii top☝ ☛ https://gateoverflow.in/508
A given set of processes can be implemented by using only the parbegin/parend statement if the precedence graph of these
processes is properly nested.
Reference : http://nob.cs.ucdavis.edu/classes/ecs150-2008-04/handouts/sync.pdf
https://gateoverflow.in/1739/gate1998_24#viewbutton
1. All the processes enclosed between parbegin and parend execute concurrently.
2. Wherever a serial ordering is required, the preceding process signals (V, up) a semaphore and each dependent process
performs a down (P) on it before starting; once the predecessor releases its semaphore, the dependent process can proceed.
In this way all the processes still execute concurrently while every precedence dependency is respected.
References
5.13.3 Precedence Graph: GATE CSE 1992 | Question: 12-a top☝ ☛ https://gateoverflow.in/591
parbegin-parend denotes parallel execution, while begin-end denotes serial execution.
Answer ☟
Which combination of the following features will suffice to characterize an OS as a multi-programmed OS?
a. More than one program may be loaded into main memory at the same time for execution
b. If a program waits for certain events such as I/O, another program is immediately scheduled for execution
c. If the execution of a program terminates, another program is immediately scheduled for execution.
A. (a)
B. (a) and (b)
C. (a) and (c)
D. (a), (b) and (c)
Answer ☟
Answers: Process
Answer is (B). The transition from running to ready indicates that a process in the running state can be preempted and
brought back to the ready state.
Answer is (C).
The timer and the disk both raise interrupts, and a power failure will also interrupt the system. Only a scheduler process will
not interrupt the running process, as the scheduler process gets called only when no other process is running (any preemption
would have happened before the scheduler starts execution).
https://www.quora.com/How-does-the-timer-interrupt-invoke-the-process-scheduler
References
Features (a) and (b) together suffice for the multiprogramming concept. For multiprogramming, more than one program should
be in memory, and if any program goes for I/O, another can be scheduled to use the CPU, as shown below:
So, the answer is (B).
Answer (B).
Explanation:
5.15.1 Process Scheduling: GATE CSE 1988 | Question: 2xa top☝ ☛ https://gateoverflow.in/93951
State any undesirable characteristic of the following criteria for measuring performance of an operating system:
Turn around time
Answer ☟
5.15.2 Process Scheduling: GATE CSE 1988 | Question: 2xb top☝ ☛ https://gateoverflow.in/93953
State any undesirable characteristic of the following criteria for measuring performance of an operating system:
Waiting time
Answer ☟
5.15.3 Process Scheduling: GATE CSE 1990 | Question: 1-vi top☝ ☛ https://gateoverflow.in/83850
The highest-response ratio next scheduling policy favours ___________ jobs, but it also limits the waiting time of _________
jobs.
Answer ☟
5.15.4 Process Scheduling: GATE CSE 1993 | Question: 7.10 top☝ ☛ https://gateoverflow.in/2298
Assume that the following jobs are to be executed on a single processor system
The jobs are assumed to have arrived at time 0+ and in the order p, q, r, s, t . Calculate the departure time (completion time) for job
p if scheduling is round robin with time slice 1
A. 4
B. 10
C. 11
D. 12
E. None of the above
Answer ☟
5.15.5 Process Scheduling: GATE CSE 1995 | Question: 1.15 top☝ ☛ https://gateoverflow.in/2602
Which scheduling policy is most suitable for a time shared operating system?
A. Shortest Job First
B. Round Robin
C. First Come First Serve
D. Elevator
Answer ☟
5.15.6 Process Scheduling: GATE CSE 1995 | Question: 2.6 top☝ ☛ https://gateoverflow.in/2618
The sequence __________ is an optimal non-preemptive scheduling sequence for the following jobs which leaves the CPU
idle for ________ unit(s) of time.
A. {3, 2, 1}, 1
B. {2, 1, 3}, 0
C. {3, 2, 1}, 0
D. {1, 2, 3}, 5
Answer ☟
5.15.7 Process Scheduling: GATE CSE 1996 | Question: 2.20, ISRO2008-15 top☝ ☛ https://gateoverflow.in/2749
Four jobs to be executed on a single processor system arrive at time 0 in the order A, B, C, D . Their burst CPU time
requirements are 4, 1, 8, 1 time units respectively. The completion time of A under round robin scheduling with time slice of one
time unit is
A. 10
B. 4
C. 8
D. 9
Answer ☟
5.15.8 Process Scheduling: GATE CSE 1998 | Question: 2.17, UGCNET-Dec2012-III: 43 top☝ ☛ https://gateoverflow.in/1690
Consider n processes sharing the CPU in a round-robin fashion. Assuming that each process switch takes s seconds, what
must be the quantum size q such that the overhead resulting from process switching is minimized but at the same time each process
is guaranteed to get its turn at the CPU at least every t seconds?
A. q ≤ (t − ns)/(n − 1)
B. q ≥ (t − ns)/(n − 1)
C. q ≤ (t − ns)/(n + 1)
D. q ≥ (t − ns)/(n + 1)
Answer ☟
a. Four jobs are waiting to be run. Their expected run times are 6, 3, 5 and x. In what order should they be run to minimize the
average response time?
b. Write a concurrent program using par begin-par end to represent the precedence graph shown below.
Answer ☟
5.15.10 Process Scheduling: GATE CSE 1998 | Question: 7-b top☝ ☛ https://gateoverflow.in/12963
In a computer system where the ‘best-fit’ algorithm is used for allocating ‘jobs’ to ‘memory partitions’, the following situation
Answer ☟
5.15.11 Process Scheduling: GATE CSE 2002 | Question: 1.22 top☝ ☛ https://gateoverflow.in/827
Answer ☟
A uni-processor computer system only has two processes, both of which alternate 10 ms CPU bursts with 90 ms I/O bursts.
Both the processes were created at nearly the same time. The I/O of both processes can proceed in parallel. Which of the following
scheduling strategies will result in the least CPU utilization (over a long period of time) for this system?
Answer ☟
Consider the following set of processes, with the arrival times and the CPU-burst times gives in milliseconds.
What is the average turnaround time for these processes with the preemptive shortest remaining processing time first (SRPT)
algorithm?
A. 5.50
B. 5.75
C. 6.00
D. 6.25
Answer ☟
Consider three CPU-intensive processes, which require 10, 20 and 30 time units and arrive at times 0, 2 and 6, respectively.
How many context switches are needed if the operating system implements a shortest remaining time first scheduling algorithm? Do
not count the context switches at time zero and at the end.
A. 1
B. 2
C. 3
D. 4
Answer ☟
Consider three processes (process id 0, 1, 2 respectively) with compute time bursts 2, 4 and 8 time units. All processes arrive
at time zero. Consider the longest remaining time first (LRTF) scheduling algorithm. In LRTF ties are broken by giving priority to
the process with the lowest process id. The average turn around time is:
A. 13 units
B. 14 units
C. 15 units
D. 16 units
Answer ☟
Consider three processes, all arriving at time zero, with total execution time of 10, 20 and 30 units, respectively. Each process
spends the first 20% of execution time doing I/O, the next 70% of time doing computation, and the last 10% of time doing I/O
again. The operating system uses a shortest remaining compute time first scheduling algorithm and schedules a new process either
when the running process gets blocked on I/O or when the running process finishes its compute burst. Assume that all I/O
operations can be overlapped as much as possible. For what percentage of time does the CPU remain idle?
A. 0%
B. 10.6%
C. 30.0%
D. 89.4%
Answer ☟
Group 1 contains some CPU scheduling algorithms and Group 2 contains some applications. Match entries in Group 1 to
entries in Group 2.
Group I Group II
(P) Gang Scheduling (1) Guaranteed Scheduling
(Q) Rate Monotonic Scheduling (2) Real-time Scheduling
(R) Fair Share Scheduling (3) Thread Scheduling
A. P − 3; Q − 2; R − 1
B. P − 1; Q − 2; R − 3
C. P − 2; Q − 3; R − 1
D. P − 1; Q − 3; R − 2
Answer ☟
An operating system used Shortest Remaining System Time first (SRT) process scheduling algorithm. Consider the arrival
times and execution times for the following processes:
Answer ☟
In the following process state transition diagram for a uniprocessor system, assume that there are always some processes in the
ready state:
I. If a process makes a transition D, it would result in another process making transition A immediately.
II. A process P2 in blocked state can make transition E while another process P1 is in running state.
III. The OS uses preemptive scheduling.
IV. The OS uses non-preemptive scheduling.
Answer ☟
Consider the following table of arrival time and burst time for three processes P0, P1 and P2.
The pre-emptive shortest job first scheduling algorithm is used. Scheduling is carried out only at arrival or completion of processes.
What is the average waiting time for the three processes?
A. 5.0 ms
B. 4.33 ms
C. 6.33 ms
D. 7.33 ms
Answer ☟
The completion order of the 3 processes under the policies FCFS and RR2 (round robin scheduling with CPU quantum of 2 time
units) are
Answer ☟
A scheduling algorithm assigns priority proportional to the waiting time of a process. Every process starts with zero (the
lowest priority). The scheduler re-evaluates the process priorities every T time units and decides the next process to schedule.
Which one of the following is TRUE if the processes have no I/O operations and all arrive at time zero?
5.15.24 Process Scheduling: GATE CSE 2014 Set 1 | Question: 32 top☝ ☛ https://gateoverflow.in/1803
Consider the following set of processes that need to be scheduled on a single CPU. All the times are given in milliseconds.
Using the shortest remaining time first scheduling algorithm, the average process turnaround time (in msec) is
____________________.
Answer ☟
5.15.25 Process Scheduling: GATE CSE 2014 Set 2 | Question: 32 top☝ ☛ https://gateoverflow.in/1991
Three processes A, B and C each execute a loop of 100 iterations. In each iteration of the loop, a process performs a single
computation that requires tc CPU milliseconds and then initiates a single I/O operation that lasts for tio milliseconds. It is assumed
that the computer where the processes execute has sufficient number of I/O devices and the OS of the computer assigns different I/O
devices to each process. Also, the scheduling overhead of the OS is negligible. The processes have the following characteristics:
Process id tc tio
A 100 ms 500 ms
B 350 ms 500 ms
C 200 ms 500 ms
The processes A, B, and C are started at times 0, 5 and 10 milliseconds respectively, in a pure time sharing system (round robin
scheduling) that uses a time slice of 50 milliseconds. The time in milliseconds at which process C would complete its first I/O
operation is ___________.
Answer ☟
5.15.26 Process Scheduling: GATE CSE 2014 Set 3 | Question: 32 top☝ ☛ https://gateoverflow.in/2066
An operating system uses shortest remaining time first scheduling algorithm for pre-emptive scheduling of processes.
Consider the following set of processes with their arrival times and CPU burst times (in milliseconds):
Answer ☟
5.15.27 Process Scheduling: GATE CSE 2015 Set 1 | Question: 46 top☝ ☛ https://gateoverflow.in/8330
Consider a uniprocessor system executing three tasks T1 , T2 and T3 each of which is composed of an infinite sequence of jobs
(or instances) which arrive periodically at intervals of 3, 7 and 20 milliseconds, respectively. The priority of each task is the inverse
of its period, and the available tasks are scheduled in order of priority, which is the highest priority task scheduled first. Each
instance of T1 , T2 and T3 requires an execution time of 1, 2 and 4 milliseconds, respectively. Given that all tasks initially arrive at
Answer ☟
5.15.28 Process Scheduling: GATE CSE 2015 Set 3 | Question: 1 top☝ ☛ https://gateoverflow.in/8390
The maximum number of processes that can be in Ready state for a computer system with n CPUs is :
A. n
B. n2
C. 2n
D. Independent of n
Answer ☟
5.15.29 Process Scheduling: GATE CSE 2015 Set 3 | Question: 34 top☝ ☛ https://gateoverflow.in/8492
For the processes listed in the following table, which of the following scheduling schemes will give the lowest average
turnaround time?
Answer ☟
5.15.30 Process Scheduling: GATE CSE 2016 Set 1 | Question: 20 top☝ ☛ https://gateoverflow.in/39655
Consider an arbitrary set of CPU-bound processes with unequal CPU burst lengths submitted at the same time to a computer
system. Which one of the following process scheduling algorithms would minimize the average waiting time in the ready queue?
Answer ☟
5.15.31 Process Scheduling: GATE CSE 2016 Set 2 | Question: 47 top☝ ☛ https://gateoverflow.in/39625
Consider the following processes, with the arrival time and the length of the CPU burst given in milliseconds. The scheduling
algorithm used is preemptive shortest remaining-time first.
Answer ☟
5.15.32 Process Scheduling: GATE CSE 2017 Set 1 | Question: 24 top☝ ☛ https://gateoverflow.in/118304
Consider the following CPU processes with arrival times (in milliseconds) and length of CPU bursts (in milliseconds) as given
below:
If the pre-emptive shortest remaining time first scheduling algorithm is used to schedule the processes, then the average waiting
time across all processes is _____________ milliseconds.
Answer ☟
5.15.33 Process Scheduling: GATE CSE 2017 Set 2 | Question: 51 top☝ ☛ https://gateoverflow.in/118558
Consider the set of processes with arrival time (in milliseconds), CPU burst time (in milliseconds) and priority (0 is the highest
priority) shown below. None of the processes has an I/O burst time.
The average waiting time (in milliseconds) of all the processes using the preemptive priority scheduling algorithm is ______
Answer ☟
Consider the following four processes with arrival times (in milliseconds) and their length of CPU bursts (in milliseconds) as
shown below:
Process P1 P2 P3 P4
Arrival Time 0 1 3 4
CPU burst time 3 1 3 Z
These processes are run on a single processor using preemptive Shortest Remaining Time First scheduling algorithm. If the average
waiting time of the processes is 1 millisecond, then the value of Z is _____
Answer ☟
Consider the following statements about process state transitions for a system using preemptive scheduling.
Answer ☟
Consider the following set of processes, assumed to have arrived at time 0. Consider the CPU scheduling algorithms Shortest
Job First (SJF) and Round Robin (RR). For RR, assume that the processes are scheduled in the orderP1 , P2 , P3 , P4 .
Processes P1 P2 P3 P4
Burst time (in ms) 8 7 2 4
If the time quantum for RR is 4 ms, then the absolute value of the difference between the average turnaround times (in ms) of SJF
and RR (round off to 2 decimal places is_______
Answer ☟
5.15.37 Process Scheduling: GATE CSE 2021 Set 1 | Question: 25 top☝ ☛ https://gateoverflow.in/357426
Three processes arrive at time zero with CPU bursts of 16, 20 and 10 milliseconds. If the scheduler has prior knowledge
about the length of the CPU bursts, the minimum achievable average waiting time for these three processes in a non-preemptive
scheduler (rounded to nearest integer) is _____________ milliseconds.
Answer ☟
5.15.38 Process Scheduling: GATE CSE 2021 Set 2 | Question: 14 top☝ ☛ https://gateoverflow.in/357526
Which of the following statement(s) is/are correct in the context of CPU scheduling?
Answer ☟
We wish to schedule three processes P1 , P2 and P3 on a uniprocessor system. The priorities, CPU time requirements and
arrival times of the processes are as shown below.
We have a choice of preemptive or non-preemptive scheduling. In preemptive scheduling, a late-arriving higher priority process can
Answer ☟
In the working-set strategy, which of the following is done by the operating system to prevent thrashing?
A. I only
B. II only
C. Neither I nor II
D. Both I and II
Answer ☟
The arrival time, priority, and duration of the CPU and I/O bursts for each of three processes P1 , P2 and P3 are given in the
table below. Each process has a CPU burst followed by an I/O burst followed by another CPU burst. Assume that each process has
its own I/O resource.
The multi-programmed operating system uses preemptive priority scheduling. What are the finish times of the processes P1 , P2 and
P3 ?
A. 11, 15, 9
B. 10, 15, 9
C. 11, 16, 10
D. 12, 17, 11
Answer ☟
Consider n jobs J1 , J2 , … , Jn such that job Ji has execution time ti and a non-negative integer weight wi . The weighted mean
completion time of the jobs is defined to be (∑ i=1..n wi Ti) / (∑ i=1..n wi), where Ti is the completion time of job Ji .
Assuming that there is only one processor available, in what order must the jobs be executed in order to minimize the weighted
mean completion time of the jobs?
A. Non-decreasing order of ti
B. Non-increasing order of wi
C. Non-increasing order of wi ti
D. Non-increasing order of wi /ti
Answer ☟
If the time-slice used in the round-robin scheduling policy is more than the maximum time required to execute any process,
then the policy will
A. degenerate to shortest job first
B. degenerate to priority scheduling
C. degenerate to first come first serve
D. none of the above
Answer ☟
5.15.1 Process Scheduling: GATE CSE 1988 | Question: 2xa top☝ ☛ https://gateoverflow.in/93951
By the way, turnaround time should not be the sole metric to evaluate the performance of an OS.
The undesirable characteristic is that long-burst processes may run first while smaller ones run only after the long ones finish.
4 votes -- hem chandra joshi (2.9k points)
5.15.2 Process Scheduling: GATE CSE 1988 | Question: 2xb top☝ ☛ https://gateoverflow.in/93953
“Waiting time” is one of the metrics for deciding the schedule of processes. If the OS tries to minimize the average
waiting time of the processes, it will follow the Shortest Remaining Time First algorithm, which, though it reduces the average
waiting time, can still cause a process with a long burst time to starve.
0 votes -- Arjun Suresh (332k points)
5.15.3 Process Scheduling: GATE CSE 1990 | Question: 1-vi top☝ ☛ https://gateoverflow.in/83850
Highest response ratio next (HRRN) scheduling is a non-preemptive discipline, similar to shortest job next (SJN), in
which the priority of each job is dependent on its estimated run time, and also the amount of time it has spent waiting.
Jobs gain higher priority the longer they wait, which prevents indefinite waiting or in other words what we say starvation. In fact,
the jobs that have spent a long time waiting compete against those estimated to have short run times.
5.15.4 Process Scheduling: GATE CSE 1993 | Question: 7.10 top☝ ☛ https://gateoverflow.in/2298
Answer: (C)
Execution order: p q r s t p r t p r p r r r r r
24 votes -- Rajarshi Sarkar (27.9k points)
Answer is Round Robin (RR), option (B).
Now the question is: why is RR most suitable for a time-shared OS?
First of all, we are discussing a time-shared OS, so obviously we need to consider preemption.
So, FCFS and Elevator are removed first; SJF and RR remain.
Now, preemptive SJF, also known as Shortest Remaining Time First or SRTF (where we can predict the next burst time using
exponential averaging), would NOT be preferable to RR here.
There is no starvation in RR, since every process gets a time slice.
But in SRTF there can be starvation: in the worst case, a process with a huge burst time may have to wait indefinitely.
That is why RR is chosen over SRTF for a time-shared OS.
5.15.6 Process Scheduling: GATE CSE 1995 | Question: 2.6 top☝ ☛ https://gateoverflow.in/2618
Answer is (A).
Here, in option B and C they have given CPU idle time is 0 which is not possible as per schedule (B) and (C).
So, (B) and (C) are eliminated.
For (A),
We can see that there is no idle time at all, but in option given idle time is 5, which is not matching with our chart so option (D)
is eliminated.
5.15.7 Process Scheduling: GATE CSE 1996 | Question: 2.20, ISRO2008-15 top☝ ☛ https://gateoverflow.in/2749
The completion time of A will be 9 units.
Hence, option (D) is correct.
Here is the sequence (consider each block as one time unit):
A B C D A C A C A
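The trace can be reproduced with a minimal round-robin simulator (bursts 4, 1, 8, 1, all arriving at time 0, as in the question):

```python
from collections import deque

burst = {"A": 4, "B": 1, "C": 8, "D": 1}   # CPU time requirements
queue = deque(["A", "B", "C", "D"])        # arrival order at time 0
t = 0
completion = {}
while queue:
    p = queue.popleft()
    t += 1                 # run p for one time slice
    burst[p] -= 1
    if burst[p] == 0:
        completion[p] = t  # p finishes at the end of this slice
    else:
        queue.append(p)    # back to the tail of the ready queue
print(completion["A"])  # 9
```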
Answer: (A)
Each process runs for a quantum q. If there are n processes p1 , p2 , … , pn , then p1 's turn comes again after the remaining
processes p2 to pn have each used their time quantum, i.e., after at most (n − 1)q time.
So, each process in round robin gets its turn after (n − 1)q time if we ignore overheads; including the switching overhead, it
becomes ns + (n − 1)q.
So, we need ns + (n − 1)q ≤ t, i.e., q ≤ (t − ns)/(n − 1), which is option (A).
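The bound can be sanity-checked numerically; the values of n, s and t below are illustrative, not from the question:

```python
# With q at its largest allowed value, the worst-case wait exactly equals t.
n, s, t = 4, 0.001, 1.0
q = (t - n * s) / (n - 1)          # largest quantum meeting the deadline
worst_wait = n * s + (n - 1) * q   # time before a process runs again
print(worst_wait)  # 1.0
```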
a. Here, all we need to do to minimize the average response time is to run the jobs in increasing order of expected run time;
where x fits among 3, 5 and 6 depends on its value.
b. Scheduling shorter jobs first decreases the waiting time of the longer jobs, and consequently the average waiting time and
the average response time decrease.
The idea is that if you have S1 → S2, then you create a new semaphore a (assume that the initial value of every semaphore
is 0). The S2 thread will invoke P(a) and get blocked. When S1 has executed, it will do V(a), which enables S2 to run. Do
this for every edge in the graph.
Let me write program for it.
Begin
Semaphores a, b, c, d, e, f, g
ParBegin S1 V (a)V (b)V (c)V (d) Parend
ParBegin P(a)S2 V (e) Parend
ParBegin P(b)S3 V (f) Parend
ParBegin P(c)P(e)S4 V (g) Parend
ParBegin P(d)P(f)P(g)S5 Parend
End
If you reverse-engineer this program, you can see how the diagram came about.
P − Down (wait), V − Up (signal)
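As an illustration only (not part of the original answer), the same semaphore construction can be sketched with Python threads; the edge set is the one encoded in the program above:

```python
import threading

order = []
lock = threading.Lock()
# One semaphore per precedence edge, each initialised to 0 as in the answer above.
a, b, c, d, e, f, g = (threading.Semaphore(0) for _ in range(7))

def run(name, waits, signals):
    for s in waits:         # P() on every incoming edge
        s.acquire()
    with lock:
        order.append(name)  # stands in for the process's computation
    for s in signals:       # V() on every outgoing edge
        s.release()

threads = [
    threading.Thread(target=run, args=("S1", [], [a, b, c, d])),
    threading.Thread(target=run, args=("S2", [a], [e])),
    threading.Thread(target=run, args=("S3", [b], [f])),
    threading.Thread(target=run, args=("S4", [c, e], [g])),
    threading.Thread(target=run, args=("S5", [d, f, g], [])),
]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(order[0], order[-1])  # S1 runs first, S5 runs last
```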
5.15.10 Process Scheduling: GATE CSE 1998 | Question: 7-b top☝ ☛ https://gateoverflow.in/12963
The partitions are 4k, 8k, 20k, 2k; now, due to the best-fit algorithm,
the next job, of size 10k (job 5), waits for the 20k partition, and after completion of job 2, job 5 executes for 1 unit
(10 to 11). The 20k job is also waiting for the 20k partition because it is the best fit for it. So, after completion of job 5,
it will fit and execute for 8 units, from 11 to 19. So, at time 19 the 20k job will be completed.
5.15.11 Process Scheduling: GATE CSE 2002 | Question: 1.22 top☝ ☛ https://gateoverflow.in/827
A. Here we preempt when the time quantum expires.
D. Here we preempt when a process of higher priority arrives, or when the time slice at a higher level finishes and we need to
move the process to a lower priority level.
CPU utilization = CPU burst time/Total time.
FCFS:
from 0 − 10 : process 1
from 10 − 20 : process 2
from 100 − 110 : process 1
from 110 − 120 : process 2
....
So, in every 100 ms, CPU is utilized for 20 ms, CPU utilization = 20%
SRTF:
Same as FCFS as CPU burst time is same for all the processes
Static priority scheduling:
Suppose process 1 has the higher priority. Then the scheduling will be the same as under FCFS. If process 2 has the higher
priority, the scheduling will be FCFS with process 1 and process 2 interchanged. So, CPU utilization remains at 20%
Round Robin:
Time quantum given as 5 ms.
from 0 − 5 : process 1
from 5 − 10 : process 2
from 10 − 15 : process 1
from 15 − 20 : process 2
from 105 − 110 : process 1
from 110 − 115 : process 2
...
So, in every 105 ms, there is 20 ms of CPU burst. So, utilization = 20/105 = 19.05%
19.05% is less than 20%, so the answer is (D).
(Round robin with a time quantum of 10 ms would have made the CPU utilization the same for all the schedules)
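The utilization arithmetic for the round-robin case is just:

```python
# Each steady-state round-robin cycle of 105 ms contains 20 ms of CPU work.
utilization = 20 / 105 * 100
print(round(utilization, 2))  # 19.05
```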
5.15.14 Process Scheduling: GATE CSE 2006 | Question: 06, ISRO2009-14 top☝ ☛ https://gateoverflow.in/885
Processes execute as per the following Gantt chart
So, here only 2 context switches are possible (when we do not count the switches at the start and at the end).
There might be confusion here: at t = 2, P1 is interrupted and the scheduler checks whether any available process has a shorter
remaining time, but no such process exists, so this should not be counted as a context switch (the same happens at t = 6).
Reference: http://stackoverflow.com/questions/8997616/does-a-context-switch-occur-in-a-system-whose-ready-queue-has-only-one-process-a (thanks to anurag_s)
Answer is (B)
References
A.
Gantt Chart is as follows.
Scheduling Table
P.ID A.T B.T C.T T.A.T. W.T.
P0 0 2 12 12 10
P1 0 4 13 13 9
P2 0 8 14 14 6
TOTAL 39 25
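A minimal unit-time LRTF simulation reproduces the completion times in the table:

```python
# LRTF sketch: bursts 2, 4 and 8, all arriving at time 0. At every time unit
# the process with the longest remaining time runs; ties go to the lowest pid.
rem = [2, 4, 8]
finish = [0, 0, 0]
t = 0
while any(rem):
    p = max(range(3), key=lambda i: (rem[i], -i))  # longest remaining, lowest id on tie
    rem[p] -= 1
    t += 1
    if rem[p] == 0:
        finish[p] = t
print(finish, sum(finish) / 3)  # [12, 13, 14] 13.0
```

Since all three processes arrive at time 0, turnaround time equals completion time, giving the 13-unit average.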
CPU Idle time = (2 + 3)/47 × 100 = 10.6383%
Answer is option (B).
(A) is the answer.
http://en.wikipedia.org/wiki/Rate-monotonic_scheduling
http://en.wikipedia.org/wiki/Gang_scheduling
http://en.wikipedia.org/wiki/Fair-share_scheduling
References
The answer is (B).
Gantt Chart
Waiting time for process P2 = Completion time − Arrival time − Burst time = 55 − 15 − 25 = 15
Answer is (D).
I. In SRTF, the job with the shortest CPU burst is scheduled first; because of this, a process with a large CPU burst may
suffer from starvation.
II. In preemptive priority scheduling, suppose process P1 is executing on the CPU and after some time a process P2 with
higher priority arrives in the ready queue; then P1 is preempted and P2 is brought onto the CPU. If processes arriving in
the ready queue keep having higher priority than P1, then P1 is repeatedly preempted and may suffer from starvation.
III. Round robin gives better response time than FCFS: in FCFS a running process executes up to its complete burst time,
but in round robin it executes only up to the time quantum.
Answer is (A): 5 ms.
Gantt Chart
Average Waiting Time = ((0 + 4) + 0 + 11)/3 = 5 ms.
FCFS: First Come First Serve
RR2
In Round Robin We are using the concept called Ready Queue.
Note
at t = 2 ,
At t = 3
At t = 4
(B) Because here the quantum for round robin is T units; after a process is scheduled, it executes for T time units, its
waiting time becomes the least, and it gets its next chance only after every other process has run for T time units.
5.15.24 Process Scheduling: GATE CSE 2014 Set 1 | Question: 32 top☝ ☛ https://gateoverflow.in/1803
5.15.25 Process Scheduling: GATE CSE 2014 Set 2 | Question: 32 top☝ ☛ https://gateoverflow.in/1991
Gantt chart: A B C A B C B C B C (50 ms time slices)
C completes its first CPU burst at t = 500 ms; its first I/O operation then takes tio = 500 ms, completing at 500 + 500 = 1000 ms.
5.15.26 Process Scheduling: GATE CSE 2014 Set 3 | Question: 32 top☝ ☛ https://gateoverflow.in/2066
Gantt Chart
5.15.27 Process Scheduling: GATE CSE 2015 Set 1 | Question: 46 top☝ ☛ https://gateoverflow.in/8330
Answer is 12
T1 , T2 and T3 consist of infinite sequences of instances, i.e., jobs keep arriving forever. Here the problem says: run each
instance of T1 for 1 ms, of T2 for 2 ms, and of T3 for 4 ms; i.e., every task runs in parts. For timing purposes we take t to
mean the end of time unit t.
Gantt Chart
T1 T2 T2 T1 T3 T3 T1 T2 T2 T1 T3 T3 … ……
0 1 2 3 4 5 6 7 8 9 10 11 12
At t = 0, T1 runs because it has the highest priority
At t = 2, T2 runs because it has higher priority than T3 and no instance of T1 present
At t = 4, We have T1 arrive again and T3 waiting but T1 runs because it has higher priority
At t = 5, T3 runs because no instance of T1 or T2 is present
At t = 11, T3 runs because no instance of T1 or T2 is present
At t = 12, T3 continue run because no instance of T1 or T2 is present and first instance of T3 completes
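The trace above can be reproduced with a unit-by-unit rate-monotonic sketch (periods 3, 7 and 20 ms; execution times 1, 2 and 4 ms; a smaller period means a higher priority):

```python
periods = [3, 7, 20]    # T1, T2, T3 arrival periods in ms
exec_time = [1, 2, 4]   # per-instance execution times in ms
rem = [0, 0, 0]         # remaining work of the pending instance(s)
t3_first_finish = None
for t in range(30):
    for i in range(3):
        if t % periods[i] == 0:
            rem[i] += exec_time[i]   # a new instance arrives
    for i in range(3):               # index order = priority order
        if rem[i] > 0:
            rem[i] -= 1              # run the highest-priority pending task for 1 ms
            if i == 2 and rem[2] == 0 and t3_first_finish is None:
                t3_first_finish = t + 1
            break
print(t3_first_finish)  # 12
```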
5.15.28 Process Scheduling: GATE CSE 2015 Set 3 | Question: 1 top☝ ☛ https://gateoverflow.in/8390
(D) independent of n.
The number of processes that can be in READY state depends on the Ready Queue size and is independent of the number of
CPU's.
Turn Around Time = Completion Time − Arrival Time
FCFS
Average turn around time = [3 for A + (2 + 6) for B + (5 + 4) for C + (7 + 2) for D]/4 = 7.25
Non-preemptive Shortest Job First
Average turn around time = [3 for A + (2 + 6) for B + (3 + 2) for D + (7 + 4) for C]/4 = 6.75
Shortest Remaining Time
Average turn around time
= [3 for A + (2 + 1) for B + (0 + 4) for C + (2 + 2) for D + (6 + 5) for remaining B ]/4 = 6.25
Round Robin
Average turn around time =
[2 for A (B comes after 1)
+(1 + 2) for B {C comes}
+(2 + 1) for A (A finishes after 3 cycles with turnaround time of 2 + 3 = 5)
+(1 + 2) for C {D comes}
+(3 + 2) for B
+(3 + 2) for D (D finishes with turnaround time of 3 + 2 = 5)
+(4 + 2) for C (C finishes with turnaround time of 3 + 6 = 9)
+(4 + 2) for B (B finishes after turnaround time of 3 + 5 + 6 = 14]
/4
= 8.25
Shortest Remaining Time First scheduling which is the preemptive version of the SJF scheduling is provably optimal for the
shortest waiting time and hence always gives the best (minimal) turn around time (waiting time + burst time). So, we can
directly give the answer here.
Correct Answer: C
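The four averages can be re-checked mechanically, with each term copied from the working above:

```python
# Per-policy average turnaround times for the four processes.
fcfs = (3 + (2 + 6) + (5 + 4) + (7 + 2)) / 4
sjf  = (3 + (2 + 6) + (3 + 2) + (7 + 4)) / 4
srt  = (3 + (2 + 1) + (0 + 4) + (2 + 2) + (6 + 5)) / 4
rr   = (2 + (1 + 2) + (2 + 1) + (1 + 2) + (3 + 2) + (3 + 2) + (4 + 2) + (4 + 2)) / 4
print(fcfs, sjf, srt, rr)  # 7.25 6.75 6.25 8.25
```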
5.15.30 Process Scheduling: GATE CSE 2016 Set 1 | Question: 20 top☝ ☛ https://gateoverflow.in/39655
Answer should be (A) SRTF.
SJF minimizes the average waiting time; it is provably optimal.
Now, here, as all processes arrive at the same time, SRTF behaves the same as SJF; hence the answer.
5.15.31 Process Scheduling: GATE CSE 2016 Set 2 | Question: 47 top☝ ☛ https://gateoverflow.in/39625
SRTF Preemptive hence,
5.15.32 Process Scheduling: GATE CSE 2017 Set 1 | Question: 24 top☝ ☛ https://gateoverflow.in/118304
Gantt Chart
Average Waiting Time = (5 + 0 + 7 + 0)/4 = 3 milliseconds
5.15.33 Process Scheduling: GATE CSE 2017 Set 2 | Question: 51 top☝ ☛ https://gateoverflow.in/118558
Gantt Chart for above problem looks like :
Till t = 4, the waiting time of P1 = 1 and P2 = 0 and P3 = 1 but P3 has not started yet.
Case 1:
Note that if P4 burst time is less than P3 then P4 will complete and after that P3 will complete. Therefore Waiting time of P4
should be 0. And total waiting time of P3 = 1+ ( Burst time of P4 ) because until P4 completes P3 does not get a chance.
Then average waiting time = (1 + 0 + (1 + x) + 0)/4 = 1
⇒ (2 + x)/4 = 1 ⇒ x = 2.
Case 2:
Note that if P4 burst time is greater than P3 then P4 will complete after P3 completes. Therefore, waiting time of P3
remains the same. And total waiting time of P4 = (Burst time of P3) because until P3 completes P4 does not get a chance.
Then average waiting time = (1 + 0 + 1 + 3)/4 = 5/4 ≠ 1 ⇒ This case is invalid.
Correct Answer: 2
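Both cases above reduce to a few lines of arithmetic, which can be checked directly:

```python
# Case analysis for the average waiting time, as derived above.
def avg(waits):
    return sum(waits) / len(waits)

# Case 1: P4 shorter than P3 -> waits are 1, 0, 1 + x, 0 and the average must be 1.
x = 2
assert avg([1, 0, 1 + x, 0]) == 1   # (2 + x)/4 = 1 holds for x = 2

# Case 2: P4 longer than P3 -> waits are 1, 0, 1, 3.
print(avg([1, 0, 1, 3]))            # 1.25, not 1, so this case is invalid
```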
A blocked process cannot go to running state directly. Except (III), every option is viable.
Answer-(C)
SJF:
Average Turn-Around Time : (21 + 13 + 2 + 6)/4 = 10.5
RR:
Average Turn-Around Time : (18 + 21 + 10 + 14)/4 = 15.75
Absolute Difference = |10.5 − 15.75| = 5.25.
5.15.37 Process Scheduling: GATE CSE 2021 Set 1 | Question: 25 top☝ ☛ https://gateoverflow.in/357426
We get minimum achievable average waiting time using SJF scheduling.
Let's just name these processes, for explanation purposes only, as A = 16, B = 20 and C = 10.
Order them according to burst time as C < A < B
C will not wait for anyone, schedule first ( wait time = 0)
A will wait for only C (wait time = 10)
B will wait for both C and A (wait time = 10 + 16)
Average wait time = (0 + 10 + (10 + 16))/3 = 36/3 = 12.
No need to make any table or chart.
This is all for explanation purposes; you can actually answer this within 10-15 seconds after reading the complete question.
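The reasoning above is just a prefix-sum over the sorted burst times, which a few lines of code can confirm (burst times 16, 20 and 10 taken from the question):

```python
# SJF average waiting time when all jobs arrive at t = 0: sort by burst and
# accumulate; each job waits for everything scheduled before it.
def sjf_avg_wait(bursts):
    waits, elapsed = [], 0
    for b in sorted(bursts):       # shortest job first
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

print(sjf_avg_wait([16, 20, 10]))  # 12.0
```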
5.15.38 Process Scheduling: GATE CSE 2021 Set 2 | Question: 14 top☝ ☛ https://gateoverflow.in/357526
A. Turnaround time includes waiting time
TRUE. Turnaround Time = Waiting Time + Burst Time
B. The goal is to only maximize CPU utilization and minimize throughput
FALSE. CPU scheduling must aim to maximize CPU utilization as well as throughput. Throughput of CPU
scheduling is defined as the number of processes completed in unit time. SJF scheduling gives the highest
throughput.
C. Round-robin policy can be used even when the CPU time required by each of the processes is not known apriori
TRUE. Round-robin scheduling gives a fixed time quantum to each process and for this there is no requirement to
know the CPU time of the process apriori (which is not the case say for shortest remaining time first).
D. Implementing preemptive scheduling needs hardware support
TRUE. Preemptive scheduling needs hardware support to manage context switch which includes saving the
execution state of the current process and then loading the next process.
Answer will be (D).
The Gantt Chart for Non-Preemptive scheduling will be (0)P3, (15)P1, (35)P2(45).
From the above it can easily be inferred that completion time for P2 is 45, for P1 is 35 and for P3 is 15.
Gantt Chart for Preemptive- (0)P3, (1)P3, (2)P3, (3)P2, (4)P2, (5)P1, (25)P2, (33)P3(45) .
Similarly take completion time from above for individual processes and subtract the Arrival time from it to get TAT.
Extract from Galvin "If there are enough extra frames, another process can be initiated. If the sum of the working-set
sizes increases, exceeding the total number of available frames, the operating system selects a process to suspend. The process’s
pages are written out (swapped), and its frames are reallocated to other processes. The suspended process can be restarted
later."
So Option (D)
Given: assuming that each process has its own I/O resource.
(Gantt chart for I/O of processes P1, P2, P3)
Explanation:
Here, P2 has the least priority and P1 has the highest.
P1 enters the CPU at 0 and utilizes it for 1 time unit. Then it performs I/O for 5 time units.
Then P2 enters at time unit 2 and requires 3 time units of CPU. But P3, whose priority is greater than P2, arrives at time unit 3.
So, P2 is preempted (only 1 unit of P2 is done out of 3 units, so 2 units of P2 are left) and P3 acquires
the CPU. Once P3 finishes, P2 enters the CPU at time unit 5 to complete its pending 2 units of work. Again, by then P1
finishes its I/O and arrives with a higher priority. Therefore, of its 2 units P2 performs only one unit and the CPU is given to P1. Then,
while P1 is executing on the CPU, P3 completes its I/O and arrives with a higher priority. Thus the CPU is given to P3 (1 unit is
used). P3 finishes at time unit 9. Now the priority of P1 is more than P2, so the CPU is used by P1. P1
finishes by time unit 10. Then the CPU is allocated to process P2. P2 performs the rest of its
work and finishes at time unit 15.
Therefore,
the finish times of P1, P2, P3 are 10, 15 and 9 respectively.
Correct Answer: B
Let's take an example:
The solution above is a classical example of greedy algorithm - that is at every point we choose the best available option and this
leads to a global optimal solution. In this problem, we require to minimize the weighted mean completion time and the
denominator in it is independent of the order of execution of the jobs. So, we just need to focus on the numerator and try to
reduce it. Numerator here is a factor of the job weight and its completion time and since both are multiplied, our greedy solution
must be
to execute the shorter jobs first (so that remaining jobs have smaller completion time) and
to execute highest weighted jobs first (so that it is multiplied by smaller completion time)
So, combining both we can use wi/ti to determine the execution order of processes - which must then be executed in non-increasing order of wi/ti.
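A minimal sketch of this greedy rule, using made-up (weight, time) pairs purely for illustration; for an input this small the greedy order can also be checked against brute force over all permutations:

```python
from itertools import permutations

# Weighted sum of completion times for jobs given as (weight, time) pairs.
def weighted_completion_sum(jobs):
    total, t = 0, 0
    for w, b in jobs:
        t += b           # completion time of this job
        total += w * t   # weight times completion time
    return total

# Made-up illustration data; the greedy rule sorts by w/t in non-increasing order.
jobs = [(3, 5), (1, 2), (4, 3), (2, 4)]
greedy = sorted(jobs, key=lambda j: j[0] / j[1], reverse=True)
brute_force = min(weighted_completion_sum(list(p)) for p in permutations(jobs))
assert weighted_completion_sum(greedy) == brute_force
print(weighted_completion_sum(greedy))  # 74
```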
Answer is (C).
RR follows FCFS with a time slice. If the time slice is larger than the maximum time required to execute any process, then RR simply
converges to FCFS, as every process will finish in its first cycle itself.
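A quick way to see this convergence is to simulate RR completion times (illustrative burst times, all arrivals at t = 0 assumed): once the quantum reaches the longest burst, the completion order and times match FCFS:

```python
from collections import deque

# Round-robin completion times for processes that all arrive at t = 0
# (the burst times used below are made up for this sketch).
def rr_completions(bursts, quantum):
    remaining = list(bursts)
    queue = deque(range(len(bursts)))
    t, done = 0, {}
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            done[i] = t              # process i finishes at time t
        else:
            queue.append(i)          # re-queue with work left
    return [done[i] for i in range(len(bursts))]

bursts = [5, 3, 8]
# quantum >= max burst: every process finishes in its first turn, i.e. FCFS
assert rr_completions(bursts, 8) == [5, 8, 16]
print(rr_completions(bursts, 2))  # a smaller quantum interleaves the processes
```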
5.16.1 Process Synchronization: GATE CSE 1987 | Question: 1-xvi top☝ ☛ https://gateoverflow.in/80362
A critical region is
Answer ☟
Procedure grantread;
begin
if aw = 0
then while (rr < ar) do
begin rr := rr + 1;
V (reading)
end
end;
Procedure grantwrite;
begin
if rr = 0
then while (rw < aw) do
begin rw := rw + 1;
V (writing)
end
end;
a. Give the value of the shared variables and the states of semaphores when 12 readers are reading and 31 writers are waiting.
b. Can a group of readers make waiting writers starve? Can writers starve readers?
c. Explain in two sentences why the solution is incorrect.
Answer ☟
5.16.3 Process Synchronization: GATE CSE 1988 | Question: 10iib top☝ ☛ https://gateoverflow.in/94393
Given below is solution for the critical section problem of two processes P0 and P1 sharing the following variables:
var flag :array [0..1] of boolean; (initially false)
turn: 0 .. 1;
The program below is for process Pi (i = 0 or 1) where process Pj (j = 1 or 0) being the other one.
repeat
flag[i]:= true;
while turn != i
do begin
while flag [j] do skip
turn:=i;
end
critical section
flag[i]:=false;
until false
Determine if the above solution is correct. If it is incorrect, demonstrate with an example how it violates the conditions.
Answer ☟
5.16.4 Process Synchronization: GATE CSE 1990 | Question: 2-iii top☝ ☛ https://gateoverflow.in/83859
Answer ☟
5.16.5 Process Synchronization: GATE CSE 1991 | Question: 11,a top☝ ☛ https://gateoverflow.in/538
Consider the following scheme for implementing a critical section in a situation with three processes Pi , Pj and Pk.
Pi;
repeat
flag[i] := true;
while flag [j] or flag[k] do
case turn of
j: if flag [j] then
begin
flag [i] := false;
while turn != i do skip;
flag [i] := true;
end;
k: if flag [k] then
begin
flag [i] := false;
while turn != i do skip;
flag [i] := true
end
end
critical section
if turn = i then turn := j;
flag [i] := false
non-critical section
until false;
a. Does the scheme ensure mutual exclusion in the critical section? Briefly explain.
Answer ☟
5.16.6 Process Synchronization: GATE CSE 1991 | Question: 11,b top☝ ☛ https://gateoverflow.in/43000
Consider the following scheme for implementing a critical section in a situation with three processes Pi , Pj and Pk.
Pi;
repeat
flag[i] := true;
while flag [j] or flag[k] do
case turn of
j: if flag [j] then
begin
flag [i] := false;
while turn != i do skip;
flag [i] := true;
end;
k: if flag [k] then
begin
flag [i] := false;
while turn != i do skip;
flag [i] := true
end
end
critical section
if turn = i then turn := j;
flag [i] := false
non-critical section
until false;
Is there a situation in which a waiting process can never enter the critical section? If so, explain and suggest modifications to the
code to solve this problem
Answer ☟
Write a concurrent program using parbegin-parend and semaphores to represent the precedence constraints of the
statements S1 to S6 , as shown in figure below.
Answer ☟
A. Draw a precedence graph for the following sequential code. The statements are numbered from S1 to S6
S1 read n
S2 i := 1
S3 if i > n next
S4 a(i) := i+1
S5 i := i+1
S6 next : write a(i)
B. Can this graph be converted to a concurrent program using parbegin-parend construct only?
Answer ☟
Consider the following program segment for concurrent processing using semaphore operators P and V for synchronization.
Draw the precedence graph for the statements S1 to S9 .
var
a,b,c,d,e,f,g,h,i,j,k : semaphore;
begin
cobegin
begin S1; V(a); V(b) end;
begin P(a); S2; V(c); V(d) end;
begin P(c); S4; V(e) end;
begin P(d); S5; V(f) end;
begin P(e); P(f); S7; V(k) end
begin P(b); S3; V(g); V(h) end;
begin P(g); S6; V(i) end;
begin P(h); P(i); S8; V(j) end;
begin P(j); P(k); S9 end;
coend
end;
Answer ☟
5.16.10 Process Synchronization: GATE CSE 1996 | Question: 1.19, ISRO2008-61 top☝ ☛ https://gateoverflow.in/2723
Answer ☟
5.16.11 Process Synchronization: GATE CSE 1996 | Question: 2.19 top☝ ☛ https://gateoverflow.in/2748
A. ensure that all philosophers pick up the left fork before the right fork
B. ensure that all philosophers pick up the right fork before the left fork
C. ensure that one particular philosopher picks up the left fork before the right fork, and that all other philosophers pick up the
right fork before the left fork
D. None of the above
Answer ☟
Fork <label> which creates a new process executing from the specified label
Join <variable> which decrements the specified synchronization variable (by 1) and terminates the process if the new value is not 0.
Show the precedence graph for S1, S2, S3, S4, and S5 of the concurrent program below.
N =2
M =2
Fork L3
Fork L4
S1
L1 : join N
S3
L2 : join M
S5
L3 : S2
Goto L1
L4 : S4
Goto L2
Next:
Answer ☟
5.16.13 Process Synchronization: GATE CSE 1997 | Question: 6.8 top☝ ☛ https://gateoverflow.in/2264
The code for P10 is identical except it uses V(mutex) in place of P(mutex). What is the largest number of processes that can be
inside the critical section at any moment?
A. 1
B. 2
C. 3
D. None
Answer ☟
A concurrent system consists of 3 processes using a shared resource R in a non-preemptible and mutually exclusive manner.
The processes have unique priorities in the range 1 … 3 , 3 being the highest priority. It is required to synchronize the processes
such that the resource is always allocated to the highest priority requester. The pseudo code for the system is as follows.
Shared data
mutex: semaphore = 1; /* initialized to 1 */
Procedures
procedure request_R(priority);
P(mutex);
if busy = true then
R_requested [priority]:=true;
else
begin
V(proceed [priority]);
busy:=true;
end
V(mutex)
Answer ☟
5.16.15 Process Synchronization: GATE CSE 1998 | Question: 1.30 top☝ ☛ https://gateoverflow.in/1667
When the result of a computation depends on the speed of the processes involved, there is said to be
A. cycle stealing
B. race condition
C. a time lock
D. a deadlock
Answer ☟
A certain processor provides a 'test and set' instruction that is used as follows:
TSET register, flag
This instruction atomically copies flag to register and sets flag to 1. Give pseudo-code for implementing the entry and exit code to a
critical region using this instruction.
Answer ☟
5.16.17 Process Synchronization: GATE CSE 1999 | Question: 20-b top☝ ☛ https://gateoverflow.in/205817
Consider the following solution to the producer-consumer problem using a buffer of size 1. Assume that the initial value of
count is 0. Also assume that the testing of count and assignment to count are atomic operations.
Producer:
Repeat
Produce an item;
if count = 1 then sleep;
place item in buffer.
count = 1;
Wakeup(Consumer);
Forever
Consumer:
Repeat
if count = 0 then sleep;
Remove item from buffer;
count = 0;
Wakeup(Producer);
Consume item;
Forever;
Show that in this solution it is possible that both the processes are sleeping at the same time.
Answer ☟
5.16.18 Process Synchronization: GATE CSE 2000 | Question: 1.21 top☝ ☛ https://gateoverflow.in/645
Let m[0] … m[4] be mutexes (binary semaphores) and P[0] … P[4] be processes.
Suppose each process P[i] executes the following:
wait (m[i]); wait (m[(i+1) mod 4]);
...........
release (m[i]); release (m[(i+1) mod 4]);
Answer ☟
a. Fill in the boxes below to get a solution for the reader-writer problem, using a single binary semaphore, mutex (initialized to
1) and busy waiting. Write the box numbers (1, 2 and 3), and their contents in your answer book.
Reader () {
wait (mutex);
if (W == 0) {
R = R + 1;
▭ ______________(1)
}
L1: else {
▭ ______________(2)
goto L1;
}
..../* do the read*/
wait (mutex);
R = R - 1;
signal (mutex);
}
Writer () {
wait (mutex);
if (▭) { _________ (3)
signal (mutex);
goto L2;
}
L2: W=1;
signal (mutex);
...../*do the write*/
wait( mutex);
W=0;
signal (mutex);
}
Answer ☟
5.16.20 Process Synchronization: GATE CSE 2001 | Question: 2.22 top☝ ☛ https://gateoverflow.in/740
Consider Peterson's algorithm for mutual exclusion between two concurrent processes i and j. The program executed by
process i is shown below.
repeat
flag[i] = true;
turn = j;
while (P) do no-op;
Enter critical section, perform actions, then
exit critical section
Flag[i] = false;
Perform other non-critical section actions.
Until false;
For the program to guarantee mutual exclusion, the predicate P in the while loop should be
A. flag[j] = true and turn = i
B. flag[j] = true and turn = j
C. flag[i] = true and turn = j
D. flag[i] = true and turn = i
Answer ☟
5.16.21 Process Synchronization: GATE CSE 2002 | Question: 18-a top☝ ☛ https://gateoverflow.in/871
Draw the process state transition diagram of an OS in which (i) each process is in one of the five states: created, ready,
running, blocked (i.e., sleep or wait), or terminated, and (ii) only non-preemptive scheduling is used by the OS. Label the transitions
appropriately.
Answer ☟
5.16.22 Process Synchronization: GATE CSE 2002 | Question: 18-b top☝ ☛ https://gateoverflow.in/205818
The functionality of atomic TEST-AND-SET assembly language instruction is given by the following C function
int TEST-AND-SET (int *x)
{
int y;
i. Complete the following C functions for implementing code for entering and leaving critical sections on the above TEST-
AND-SET instruction.
int mutex=0;
void enter-cs()
{
while(......................);
}
void leave-cs()
{ .........................;
ii. Is the above solution to the critical section problem deadlock free and starvation-free?
iii. For the above solution, show by an example that mutual exclusion is not ensured if TEST-AND-SET instruction is not
atomic?
Answer ☟
The following solution to the single producer single consumer problem uses semaphores for synchronization.
#define BUFFSIZE 100
buffer buf[BUFFSIZE];
int first = last = 0;
semaphore b_full = 0;
semaphore b_empty = BUFFSIZE;
void producer()
{
while(1) {
produce an item;
p1:.................;
put the item into buff (first);
first = (first+1)%BUFFSIZE;
p2: ...............;
}
}
void consumer()
{
while(1) {
c1:............
take the item from buf[last];
last = (last+1)%BUFFSIZE;
c2:............;
consume the item;
}
}
Answer ☟
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T . The code for the
processes P and Q is shown below.
Process P: Process Q:
while(1){ while(1){
W: Y:
print '0'; print '1';
print '0'; print '1';
X: Z:
} }
Answer ☟
Suppose we want to synchronize two concurrent processes P and Q using binary semaphores S and T . The code for the
processes P and Q is shown below.
Process P: Process Q:
while(1) { while(1) {
W: Y:
print ‘0'; print ‘1';
print ‘0'; print ‘1';
X: Z:
} }
Answer ☟
Consider two processes P1 and P2 accessing the shared variables X and Y protected by two binary semaphores SX and SY
respectively, both initialized to 1. P and V denote the usual semaphore operators, where P decrements the semaphore value, and V
increments the semaphore value. The pseudo-code of P1 and P2 is as follows:
P1 : P2 :
While true do { While true do {
L1 : … … L3 : … …
L2 : … … L4 : … …
X = X + 1; Y = Y + 1;
Y = Y − 1; X = Y − 1;
V (SX ); V (SY );
V (SY ); V (SX );
} }
Answer ☟
The atomic fetch-and-set x, y instruction unconditionally sets the memory location x to 1 and fetches the old value of x in y
without allowing any intervening access to the memory location x. Consider the following implementation of P and V functions on
a binary semaphore S .
void P (binary_semaphore *s) {
unsigned y;
unsigned *x = &(s->value);
do {
fetch-and-set x, y;
} while (y);
}
Answer ☟
Barrier is a synchronization construct where a set of processes synchronizes globally i.e., each process in the set arrives at the
barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and
S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers
shown on left.
void barrier (void) {
1: P(S);
2: process_arrived++;
3: V(S);
4: while (process_arrived !=3);
5: P(S);
6: process_left++;
7: if (process_left==3) {
8: process_arrived = 0;
9: process_left = 0;
10: }
11: V(S);
}
The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program
all the three processes call the barrier function when they need to synchronize globally.
The above implementation of barrier is incorrect. Which one of the following is true?
Answer ☟
Barrier is a synchronization construct where a set of processes synchronizes globally i.e., each process in the set arrives at the
barrier and waits for all others to arrive and then all processes leave the barrier. Let the number of processes in the set be three and
S be a binary semaphore with the usual P and V functions. Consider the following C implementation of a barrier with line numbers
shown on left.
}
The variables process_arrived and process_left are shared among all processes and are initialized to zero. In a concurrent program
all the three processes call the barrier function when they need to synchronize globally.
Which one of the following rectifies the problem in the implementation?
Answer ☟
Two processes, P1 and P2 , need to access a critical section of code. Consider the following synchronization construct used
by the processes:
/* P1 */ /* P2 */
while (true) { while (true) {
wants1 = true; wants2 = true;
while (wants2 == true); while (wants1 == true);
/* Critical Section */ /* Critical Section */
wants1 = false; wants2=false;
} }
/* Remainder section */ /* Remainder section */
Here, wants1 and wants2 are shared variables, which are initialized to false.
Which one of the following statements is TRUE about the construct?
Answer ☟
The enter_CS() and leave_CS() functions to implement critical section of a process are realized using test-and-set instruction
as follows:
void enter_CS(X)
{
while(test-and-set(X));
}
void leave_CS(X)
{
X = 0;
}
In the above solution, X is a memory location associated with the CS and is initialized to 0. Now consider the following statements:
Answer ☟
Consider the methods used by processes P1 and P2 for accessing their critical sections whenever needed, as given below.
The initial values of shared boolean variables S1 and S2 are randomly assigned.
Answer ☟
The following program consists of 3 concurrent processes and 3 binary semaphores. The semaphores are initialized as
S0 = 1, S1 = 0 and S2 = 0.
Answer ☟
Fetch_And_Add(X,i) is an atomic Read-Modify-Write instruction that reads the value of memory location X, increments it by
the value i, and returns the old value of X. It is used in the pseudocode shown below to implement a busy-wait lock. L is an
unsigned integer shared variable initialized to 0. The value of 0 corresponds to lock being available, while any non-zero value
corresponds to the lock being not available.
AcquireLock(L){
while (Fetch_And_Add(L,1))
L = 1;
}
ReleaseLock(L){
L = 0;
}
This implementation
Answer ☟
A shared variable x, initialized to zero, is operated on by four concurrent processes W, X, Y , Z as follows. Each of the
processes W and X reads x from memory, increments by one, stores it to memory, and then terminates. Each of the processes Y
and Z reads x from memory, decrements by two, stores it to memory, and then terminates. Each process before reading x invokes
the P operation (i.e., wait) on a counting semaphore S and invokes the V operation (i.e., signal) on the semaphore S after storing x
to memory. Semaphore S is initialized to two. What is the maximum possible value of x after all processes complete execution?
A. –2
B. –1
C. 1
D. 2
Answer ☟
A certain computation generates two arrays a and b such that a[i] = f(i) for 0 ≤ i < n and b[i] = g(a[i]) for 0 ≤ i < n .
Suppose this computation is decomposed into two concurrent processes X and Y such that X computes the array a and Y computes
the array b. The processes employ two binary semaphores R and S , both initialized to zero. The array a is shared by the two
processes. The structures of the processes are shown below.
Process X:
private i;
for (i=0; i< n; i++) {
a[i] = f(i);
ExitX(R, S);
}
Process Y:
private i;
for (i=0; i< n; i++) {
EntryY(R, S);
b[i] = g(a[i]);
}
Which one of the following represents the CORRECT implementations of ExitX and EntryY?
A. ExitX(R, S) {
P(R);
V(S);
}
EntryY(R, S) {
P(S);
V(R);
}
B. ExitX(R, S) {
C. ExitX(R, S) {
P(S);
V(R);
}
EntryY(R, S) {
V(S);
P(R);
}
D. ExitX(R, S) {
V(R);
P(S);
}
EntryY(R, S) {
V(S);
P(R);
}
Answer ☟
5.16.37 Process Synchronization: GATE CSE 2014 Set 2 | Question: 31 top☝ ☛ https://gateoverflow.in/1990
Consider the procedure below for the Producer-Consumer problem which uses semaphores:
semaphore n = 0;
semaphore s = 1;
void producer()
{
while(true)
{
produce();
semWait(s);
addToBuffer();
semSignal(s);
semSignal(n);
}
}
void consumer()
{
while(true)
{
semWait(s);
semWait(n);
removeFromBuffer();
semSignal(s);
consume();
}
}
A. The producer will be able to add an item to the buffer, but the consumer can never consume it.
B. The consumer will remove no more than one item from the buffer.
C. Deadlock occurs if the consumer succeeds in acquiring semaphore s when the buffer is empty.
D. The starting value for the semaphore n must be 1 and not 0 for deadlock-free operation.
Answer ☟
5.16.38 Process Synchronization: GATE CSE 2015 Set 1 | Question: 9 top☝ ☛ https://gateoverflow.in/8121
The following two functions P1 and P2 that share a variable B with an initial value of 2 execute concurrently.
P1() { P2(){
C = B - 1; D = 2 * B;
B = 2 * C; B = D - 1;
} }
Answer ☟
5.16.39 Process Synchronization: GATE CSE 2015 Set 3 | Question: 10 top☝ ☛ https://gateoverflow.in/8405
Two processes X and Y need to access a critical section. Consider the following synchronization construct used by both the
processes
Process X Process Y
/* other code for process X*/ /* other code for process Y */
while (true) while (true)
{ {
varP = true; varQ = true;
while (varQ == true) while (varP == true)
{ {
/* Critical Section */ /* Critical Section */
varP = false; varQ = false;
} }
} }
/* other code for process X */ /* other code for process Y */
Here varP and varQ are shared variables and both are initialized to false. Which one of the following statements is true?
A. The proposed solution prevents deadlock but fails to guarantee mutual exclusion
B. The proposed solution guarantees mutual exclusion but fails to prevent deadlock
C. The proposed solution guarantees mutual exclusion and prevents deadlock
D. The proposed solution fails to prevent deadlock and fails to guarantee mutual exclusion
Answer ☟
5.16.40 Process Synchronization: GATE CSE 2016 Set 2 | Question: 48 top☝ ☛ https://gateoverflow.in/39600
PROCESS 0 Process 1
Entry: loop while (turn == 1); Entry: loop while (turn == 0);
(critical section) (critical section)
Exit: turn = 1; Exit: turn = 0;
The shared variable turn is initialized to zero. Which one of the following is TRUE?
Answer ☟
A multithreaded program P executes with x number of threads and uses y number of locks for ensuring mutual exclusion
while operating on shared memory locations. All locks in the program are non-reentrant, i.e., if a thread holds a lock l, then it
cannot re-acquire lock l without releasing it. If a thread is unable to acquire a lock, it blocks until the lock becomes available.
The minimum value of x and the minimum value of y together for which execution of P can result in a deadlock are:
A. x = 1, y = 2
B. x = 2, y = 1
C. x = 2, y = 2
D. x = 1, y = 1
Answer ☟
Consider the following solution to the producer-consumer synchronization problem. The shared buffer size is N . Three
semaphores empty , full and mutex are defined with respective initial values of 0, N and 1. Semaphore empty denotes the
number of available slots in the buffer, for the consumer to read from. Semaphore full denotes the number of available slots in the
buffer, for the producer to write to. The placeholder variables, denoted by P , Q, R and S , in the code below can be assigned either
empty or full. The valid semaphore operations are: wait() and signal().
Producer: Consumer:
do { do {
wait (P); wait (R);
wait (mutex); wait (mutex);
//Add item to buffer //consume item from buffer
signal (mutex); signal (mutex);
signal (Q); signal (S);
}while (1); }while (1);
Which one of the following assignments to P , Q, R and S will yield the correct solution?
Answer ☟
Consider three concurrent processes P1 , P2 and P3 as shown below, which access a shared variable D that has been
initialized to 100.
P1 P2 P3
: : :
: : :
D = D + 20 D = D − 50 D = D + 10
: : :
: : :
The processes are executed on a uniprocessor system running a time-shared operating system. If the minimum and maximum
possible values of D after the three processes have completed execution are X and Y respectively, then the value of Y − X is
______
Consider the following snapshot of a system running n concurrent processes. Process i is holding Xi instances of a resource
R, 1 ≤ i ≤ n . Assume that all instances of R are currently in use. Further, for all i, process i can place a request for at most Yi
additional instances of R while holding the Xi instances it already has. Of the n processes, there are exactly two processes p and q
such that Yp = Yq = 0 . Which one of the following conditions guarantees that no other process apart from p and q can complete
execution?
A. Xp + Xq < Min{Yk ∣ 1 ≤ k ≤ n, k ≠ p, k ≠ q}
B. Xp + Xq < Max{Yk ∣ 1 ≤ k ≤ n, k ≠ p, k ≠ q}
C. Min(Xp , Xq ) ≥ Min{Yk ∣ 1 ≤ k ≤ n, k ≠ p, k ≠ q}
D. Min(Xp , Xq ) ≤ Max{Yk ∣ 1 ≤ k ≤ n, k ≠ p, k ≠ q}
Answer ☟
The semaphore variables full, empty and mutex are initialized to 0, n and 1, respectively. Process P1 repeatedly adds one item
at a time to a buffer of size n, and process P2 repeatedly removes one item at a time from the same buffer using the programs given
below. In the programs, K , L , M and N are unspecified statements.
while (1) {
K;
P(mutex);
P1 Add an item to the buffer;
V(mutex);
L;
}
while (1) {
M;
P(mutex);
P2 Remove an item from the buffer;
V(mutex);
N;
}
Answer ☟
Given below is a program which when executed spawns two concurrent processes :
semaphore X := 0;
/* Process now forks into concurrent processes P1 & P2 */
P1 P2
repeat forever repeat forever
V (X); P(X);
Compute; Compute;
P(X); V (X);
Consider the following statements about processes P1 and P2 :
Answer ☟
Two concurrent processes P1 and P2 use four shared resources R1, R2, R3 and R4, as shown below.
P1 P2
Compute: Compute;
Use R1; Use R1;
Use R2; Use R2;
Use R3; Use R3;
Use R4; Use R4;
Both processes are started at the same time, and each resource can be accessed by only one process at a time The following
scheduling constraints exist between the access of resources by the processes:
There are no other scheduling constraints between the processes. If only binary semaphores are used to enforce the above scheduling
constraints, what is the minimum number of binary semaphores needed?
A. 1
B. 2
C. 3
D. 4
Answer ☟
Consider the solution to the bounded buffer producer/consumer problem by using general semaphores S, F, and E. The
semaphore S is the mutual exclusion semaphore initialized to 1. The semaphore F corresponds to the number of free slots in the
buffer and is initialized to N . The semaphore E corresponds to the number of elements in the buffer and is initialized to 0.
A. (I) only
B. (II) only
C. Neither (I) nor (II)
D. Both (I) and (II)
Answer ☟
Processes P1 and P2 use critical_flag in the following routine to achieve mutual exclusion. Assume that critical_flag is
initialized to FALSE in the main program.
get_exclusive_access ( )
{
if (critical _flag == FALSE) {
critical_flag = TRUE ;
critical_region () ;
critical_flag = FALSE;
}
}
Answer ☟
Synchronization in the classical readers and writers problem can be achieved through use of semaphores. In the following
incomplete code for readers-writers problem, two binary semaphores mutex and wrt are used to obtain synchronization
wait (wrt)
writing is performed
signal (wrt)
wait (mutex)
readcount = readcount + 1
if readcount = 1 then S1
S2
reading is performed
S3
readcount = readcount - 1
if readcount = 0 then S4
signal (mutex)
The values of S1, S2, S3, S4, (in that order) are
Answer ☟
The following is a code with two threads, producer and consumer, that can run in parallel. Further, S and Q are binary
semaphores equipped with the standard P and V operations.
semaphore S = 1, Q = 0;
integer x;
producer: consumer:
while (true) do while (true) do
P(S); P(Q);
x = produce (); consume (x);
V(Q); V(S);
done done
Answer ☟
5.16.1 Process Synchronization: GATE CSE 1987 | Question: 1-xvi top☝ ☛ https://gateoverflow.in/80362
A critical region is a program segment where shared resources are accessed, that's why we synchronize in the critical
section.
PS: It is not necessary that we must use a semaphore for critical section access (any other mechanism for mutual exclusion can also
be used), and sections enclosed by P and V operations are not necessarily critical sections.
Correct Answer : D.
34 votes -- kirti singh (2.6k points)
12 readers reading means each reader has incremented the value of ar, making the final value of ar 12.
Also, each of the readers has executed grantread, in which rr is incremented to the value of ar, making the final value of rr 12.
31 writers are waiting means each writer on arrival has incremented the value of aw, making final value of aw to be 31.
Value of rw is incremented in grantwrite only when value of rr is 0 but as 12 readers are already reading, this cannot happen,
making value of rw to be 0.
Whenever read is granted in grantread, the reading semaphore is incremented once per reader process using
V(reading). But before entering the read section, each reader decrements the reading semaphore by 1 using P(reading). The fact
that 12 readers are reading means that 12 V(reading) operations were performed, and the 12 reader processes, before entering the
read section, each performed P(reading), decrementing the value of the reading semaphore back to 0.
Since 12 readers are already reading, value of rr is non-zero because of which V(writing) is not executed leaving the value of
writing semaphore to be 0.
------------------------------------------------------------------------------------
NO, a group of readers will not starve writers: readers execute V(reading) in grantread only when aw is 0, i.e., when no writer is waiting, thereby allowing a waiting writer to execute first.
YES, writers can starve readers, as writers execute V(writing) without considering waiting readers (ar).
-------------------------------------------------------------------------------------
The solution is incorrect because:
In the reader-writer problem, only a single process may write at a time.
But in the proposed solution, consider this case: when one process is writing and another writer arrives, the second writer is also granted write access via V(writing), with no regard for the first process, which is still writing.
5.16.3 Process Synchronization: GATE CSE 1988 | Question: 10iib top☝ ☛ https://gateoverflow.in/94393
The above solution for the critical section is not correct: it satisfies mutual exclusion and progress, but it violates bounded waiting.
Here is a sample run: suppose turn = j initially. Pi runs its first statement, then Pj runs its first statement, then Pi runs statements 2, 3, and 4; it will block on statement 4.
The correct implementation (for bounded waiting) updates the turn variable in the exit section:
repeat
    flag[i] := true;
    while turn != i do
    begin
        while flag[j] do skip;
        turn := i;
    end;
    critical section
    flag[i] := false;
    turn := j;
until false
5.16.4 Process Synchronization: GATE CSE 1990 | Question: 2-iii top☝ ☛ https://gateoverflow.in/83859
A. Circular Wait is one of the conditions for deadlock.
B. To avoid race conditions, the execution of critical sections must be mutually exclusive (e.g., at most one process can be in
its critical section at any time).
C. Monitors using blocking condition variables are often called Hoare-style monitors or signal-and-urgent-wait monitors.
D. Locality is commonly used to determine the number of pages assigned to a process. The set of pages that meets the requirement of locality is called a working set.
5.16.5 Process Synchronization: GATE CSE 1991 | Question: 11,a top☝ ☛ https://gateoverflow.in/538
Pre-requisite: assume all 3 processes have the same implementation of the code, with the flag variable indices changed accordingly for Pj and Pk; turn is a variable shared among the 3 processes.
The condition:
while flag[j] or flag[k] do
ensures mutual exclusion, as no process can enter the critical section until the flags of the other processes are false.
-----------------------------------------------------------------------
will be true and it will enter the while loop. Since, turn = k, Pi will execute the loop:
while turn != i do skip;
which is false, and thus turn will remain k, making Pi execute an infinite loop until Pk arrives and updates turn = i.
So if Pk never arrives, Pi will wait indefinitely.
5.16.6 Process Synchronization: GATE CSE 1991 | Question: 11,b top☝ ☛ https://gateoverflow.in/43000
“The process which uses the critical section should update the turn variable on exit; otherwise the other waiting processes will wait indefinitely if some process does not want to enter the CS.”
parbegin
parend
Here, the statement between parbegin and parend can execute in any order. But the precedence graph shows the order in which
the statements should be executed. This strict ordering is achieved using the semaphores.
Initially all the semaphores are 0.
For S1 there is no need of a semaphore, because it is the first statement to execute.
Next, S2 can execute only when S1 finishes. For this we have a semaphore a, which gets value 1 when S1 executes signal(a). Now S2, which is doing a wait on a, can continue execution, making a = 0.
Likewise this is followed for all the other statements.
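This semaphore-per-edge construction can be sketched in Python; the statement names and the semaphore a are illustrative, standing in for the S1 → S2 edge of the precedence graph:

```python
import threading

a = threading.Semaphore(0)   # semaphore 'a', initialised to 0
trace = []                   # records the order in which statements run

def run_S1():
    trace.append("S1")       # statement S1 executes first
    a.release()              # signal(a): allow S2 to proceed

def run_S2():
    a.acquire()              # wait(a): blocks until S1 has signalled
    trace.append("S2")       # statement S2 executes only after S1

t2 = threading.Thread(target=run_S2)
t1 = threading.Thread(target=run_S1)
t2.start()                   # even though S2's thread starts first,
t1.start()                   # it must still wait for S1's signal
t1.join(); t2.join()
```

Even if the S2 thread is scheduled first, the wait on a forces the S1 → S2 ordering; one semaphore per edge of the graph generalizes this to any precedence graph.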
Following must be the correct precedence graph. S1 and S2 are independent, hence they can be executed in parallel.
For all those nodes which are independent we can execute them in parallel by creating a separate process for each node like S1
and S2 . There is an edge from S3 to S6 it means, until the process which executes S3 finishes its work, we can't start the process
which will execute S6 .
Precedence graph will be formed as:
5.16.10 Process Synchronization: GATE CSE 1996 | Question: 1.19, ISRO2008-61 top☝ ☛ https://gateoverflow.in/2723
A. There is no time guarantee for critical section.
B. Critical section by default doesn't avoid deadlock. While using critical section, programmer must ensure deadlock is
avoided.
C. is the answer
http://en.wikipedia.org/wiki/Critical_section
D. This is not a requirement of critical section. Only when semaphore is used for critical section management, this becomes a
necessity. But, semaphore is just ONE of the ways for managing critical section.
References
5.16.11 Process Synchronization: GATE CSE 1996 | Question: 2.19 top☝ ☛ https://gateoverflow.in/2748
It should be (C) because, according to the given condition, one philosopher out of all of them will get both forks. So, there can be no deadlock.
Fork L3:
At L3 there is a statement S2. Fork creates a new process, say P1, which starts its execution from label L3, i.e., it starts executing S2.
P0 then executes fork L4; this creates another new process P2 which starts its execution from label L4, i.e., it starts executing S4.
When P1 finishes executing S2, it executes the next line, which is goto L1.
When P2 finishes executing S4, it executes the next line, which is goto L2.
L1 is executed by both processes P0 ( which has executed S1) and P1 ( which has executed S2)
Hence, S1 and S2 are combined together, as either P0 or P1 will terminate (∵ N = 2) and only one process will continue its
execution.
Similarly L2 is executed by two processes P2 ( which executed S4) and one of P0 or P1 ( which executed S3). So, S4 and S3 are
joined together, as one of them will terminate (∵ M = 2) and then one which survives will execute the final statement S5.
www.csc.lsu.edu/~rkannan/Fork_Cobegin_Creationtime.docx
http://www.cis.temple.edu/~giorgio/old/cis307s96/readings/precedence.html
References
5.16.13 Process Synchronization: GATE CSE 1997 | Question: 6.8 top☝ ☛ https://gateoverflow.in/2264
Answer is (D).
If the initial value is 1, either P1 or P10 can execute first.
If the initial value is 0, P10 can execute and make the value 1.
Both pieces of code (i.e., P1 to P9, and P10) can be executed any number of times, and the code for P10 is
repeat
{
    V(mutex)
    C.S.
    V(mutex)
}
forever
procedure release_R(priority)
begin
    P(mutex);                        // only one process may execute the following part at a time
    R_requested[priority] = false;   // this process had requested and been allocated
                                     // the resource, and has now finished using it
    for (i = 3 downto 1)             // starting from the highest priority process
    begin
        if R_requested[i] then
        begin
            V(proceed[i]);           // process i is now given access to the resource
            break;
        end
    end
    if (!R_requested[1] && !R_requested[2] && !R_requested[3]) then
        busy = false;                // no process is waiting, so the next incoming request
                                     // can be served without any wait
    V(mutex);                        // any other process can now request/release the resource
end
5.16.15 Process Synchronization: GATE CSE 1998 | Question: 1.30 top☝ ☛ https://gateoverflow.in/1667
A race condition occurs when the final result depends on the ordering of the processes; the relative speed of the processes determines this ordering.
5.16.16 Process Synchronization: GATE CSE 1999 | Question: 20-a top☝ ☛ https://gateoverflow.in/1519
5.16.17 Process Synchronization: GATE CSE 1999 | Question: 20-b top☝ ☛ https://gateoverflow.in/205817
1. Run the Consumer Process, Test the condition inside "if" (It is given that the testing of count is atomic operation),
and since the Count value is initially 0, condition becomes True. After Testing (But BEFORE "Sleep" executes in consumer
process), Preempt the Consumer Process.
2. Now Run Producer Process completely (All statements of Producer process). (Note that in Producer Process, 5th line of
code, "Wakeup(Consumer);" will not cause anything because Consumer Process hasn't Slept yet (We had Preempted Consumer
process before It could go to sleep). Now at the end of One pass of Producer process, Count value is now 1. So, Now if we again
run Producer Process, "if" condition becomes true and Producer Process goes to sleep.
3. Now run the Preempted Consumer process, And It also Goes to Sleep. (Because it executes the Sleep code).
So, Now Both Processes are sleeping at the same time.
5.16.18 Process Synchronization: GATE CSE 2000 | Question: 1.21 top☝ ☛ https://gateoverflow.in/645
P0 : m[0]; m[1]
P1 : m[1]; m[2]
P2 : m[2]; m[3]
P3 : m[3]; m[0]
P4 : m[4]; m[1]
P0 holds m[0], waits for m[1]
P1 holds m[1], waits for m[2]
P2 holds m[2], waits for m[3]
P3 holds m[3], waits for m[0]
P4 holds m[4], waits for m[1]
So there is a circular wait (P0 → P1 → P2 → P3 → P0), and no process can enter its critical section even though m[4]'s holder is otherwise free. Hence,
Answer: (B) Deadlock.
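Such a circular wait can be detected mechanically by building a wait-for graph and checking it for a cycle. A small illustrative sketch (the dictionary encoding of "who waits on whom" is an assumption, not part of the question):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: process it waits on}.
    Each process waits on at most one other process (single-resource requests)."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:      # follow the single outgoing wait edge
            if node in seen:
                return True          # revisited a node: circular wait exists
            seen.add(node)
            node = wait_for[node]
    return False

# From the answer above: P0..P3 wait in a cycle; P4 waits on P1, the holder of m[1]
waiting = {"P0": "P1", "P1": "P2", "P2": "P3", "P3": "P0", "P4": "P1"}
deadlocked = has_cycle(waiting)
```

With single-instance resources, a cycle in this graph is both necessary and sufficient for deadlock, which is exactly the situation in this question.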
There are four conditions that must be satisfied by any reader-writer problem solution
Now, here mutex is a semaphore variable that will be used to modify the variables R and W in a mutually exclusive way.
The reader code should be like below
Reader()
L1: wait(mutex);
    if (W != 0) {        // a writer is present, so deny access: release mutex and retry
        signal(mutex);
        goto L1;
    }
    R = R + 1;           // register this reader
    signal(mutex);
    /* reading performed */
    wait(mutex);
    R = R - 1;
    signal(mutex);
Value of variable R indicates the number of readers presently reading and the value of W indicates if 1, that some writer is
present.
Writer code should be like below
Writer()
L2: wait(mutex);
    if (R > 0 || W != 0) {   // if even one reader is present, or a writer is writing,
                             // deny access to this writer: release mutex and loop back to L2
        signal(mutex);
        goto L2;
    }
    W = 1;                   // code reaches here only if no writer and no reader was present
    signal(mutex);           // after updating W safely, release mutex so other writers
                             // and readers can place their requests
    /* write performed */
    // the writer is leaving, so reset W in a mutually exclusive manner
    wait(mutex);
    W = 0;
    signal(mutex);
5.16.20 Process Synchronization: GATE CSE 2001 | Question: 2.22 top☝ ☛ https://gateoverflow.in/740
Answer is Option B as used in Peterson's solution for Two Process Critical Section Problem which guarantees
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
Both i and j are concurrent processes. So, whichever process wants to enter critical section(CS) that will execute the given code.
A process i shows it's interest to enter CS by setting flag[i] = TRUE and only when i leaves CS it sets flag[i] = FALSE.
From this it's clear that when some process wants to enter CS then it must check value of flag[] of the other process.
∴ " flag[j] == TRUE " must be one condition that must be checked by process i.
Here, the turn variable specifies whose turn is next, i.e., which process can enter the CS next time. turn acts like an unbiased scheduler, ensuring a fair chance to both processes. When a process sets its flag[] value, turn is set to the other process so that the same process does not run again (strict alternation when both processes are ready); i.e., the usage of the
turn variable here ensures the "Bounded Waiting" property.
Before entering CS every process needs to check whether other process has shown interest first and which process is scheduled
by the turn variable. If other process is not ready, flag[other] will be false and the current process can enter the CS irrespective
of the value of turn. Thus, the usage of flag variable ensures "Progress" property.
If flag[other] = TRUE and turn = other, then the process has to wait until one of the conditions becomes false. (because it is
the turn of other process to enter CS). This ensures Mutual Exclusion.
One interesting point that can be observed: if both processes want to enter the CS, the process which executes its "turn = j" assignment first is always the first one to enter the CS (once the other process has executed its own turn assignment).
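The full algorithm described above can be sketched in Python. This is only an illustration: the busy-wait relies on CPython's interpreter giving sequentially consistent interleavings, whereas on real hardware memory fences would be required, and the shared counter is just a stand-in critical-section workload:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits resolve quickly

flag = [False, False]   # flag[i]: process i wants to enter the CS
turn = 0                # whose turn it is to yield
counter = 0             # shared variable updated inside the critical section
N = 2000

def process(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                   # show interest
        turn = j                         # give way to the other process
        while flag[j] and turn == j:
            pass                         # busy-wait while the other is interested and has priority
        counter += 1                     # critical section (unprotected increment)
        flag[i] = False                  # leave the CS

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
```

If mutual exclusion held, the final counter is exactly 2 * N; the flag variables give progress and the turn variable gives the bound of 1 on bypasses, just as the explanation above argues.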
5.16.21 Process Synchronization: GATE CSE 2002 | Question: 18-a top☝ ☛ https://gateoverflow.in/871
Process state transition diagram for an OS which satisfy the below two criteria will be as follows:
i. each process is in one of the five states: created, ready, running, blocked (i.e., sleep or wait), or terminated, and
ii. only non-preemptive scheduling is used by the OS.
If the question had asked about preemptive scheduling, then a process could also move directly from the running state to the ready state.
Solution will be
void enter-cs()
{
while(TestAndSet(&mutex));
}
void leave-cs()
{
mutex=0;
}
This means the producer has to wait only if the buffer is full, and it waits for the consumer to remove at least one item (note that Empty is initialized to BUFFSIZE).
p2: V(Full)
buffer is filled, now it gives signal to consumer that it can start consuming
c1: P(Full)
this means the consumer has to wait only if the buffer is empty, and it waits for the producer to fill the buffer
c2: V(Empty)
Now buffer is empty and Empty semaphore gives signal to the producer that it can start filling
It is like giving water to a thirsty man:
you hand him a full glass, so you are the producer here,
and the man drinks and empties the glass, so he is the consumer here.
b) If there are multiple users, we can use a mutex semaphore so that exactly one process is in the critical section at a time, i.e.,
p1:P(Empty)
P(mutex)
p2:V(mutex)
V(Full)
c1:P(Full)
P(mutex)
c2: V(mutex)
V(Empty)
PS: One thing to note is that P(mutex) comes after P(Full) and P(Empty); otherwise deadlock can happen when the buffer is full and a producer gets the mutex, or when the buffer is empty and a consumer gets the mutex.
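The whole scheme (P(Empty)/P(mutex) in the producer, P(Full)/P(mutex) in the consumer) can be sketched with Python's counting semaphores; BUFFSIZE, the deque, and the item values are illustrative choices:

```python
import threading
from collections import deque

BUFFSIZE = 3
buffer = deque()
Empty = threading.Semaphore(BUFFSIZE)  # counts free slots, initialized to BUFFSIZE
Full = threading.Semaphore(0)          # counts filled slots, initialized to 0
mutex = threading.Lock()               # protects the buffer itself

consumed = []

def producer(n):
    for item in range(n):
        Empty.acquire()                # p1: P(Empty) -- wait for a free slot
        with mutex:                    #     P(mutex) ... V(mutex)
            buffer.append(item)
        Full.release()                 # p2: V(Full) -- signal a filled slot

def consumer(n):
    for _ in range(n):
        Full.acquire()                 # c1: P(Full) -- wait for a filled slot
        with mutex:
            consumed.append(buffer.popleft())
        Empty.release()                # c2: V(Empty) -- signal a free slot

N = 100
threads = [threading.Thread(target=producer, args=(N,)),
           threading.Thread(target=consumer, args=(N,))]
for t in threads: t.start()
for t in threads: t.join()
```

Swapping P(Empty) with P(mutex) in the producer reproduces the deadlock described in the PS: a producer holding the mutex while the buffer is full blocks the consumer forever.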
To get pattern 001100110011
Process P should be executed first followed by Process Q.
So, at Process P : W P(S) X V (T )
And at Process Q : Y P(T ) Z V (S)
With S = 1 and T = 0 initially ( only P has to be run first then only Q is run. Both processes run on alternate way starting
with P)
So, answer is (B).
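This alternation can be sketched directly with two Python semaphores standing in for S and T, initialized to 1 and 0 as in the answer (the "00"/"11" strings are illustrative stand-ins for whatever P and Q print):

```python
import threading

S = threading.Semaphore(1)   # S = 1: lets process P run first
T = threading.Semaphore(0)   # T = 0: process Q must wait for P's signal
out = []

def proc_P():
    for _ in range(3):
        S.acquire()          # W: P(S)
        out.append("00")     # P prints its part of the pattern
        T.release()          # X: V(T)

def proc_Q():
    for _ in range(3):
        T.acquire()          # Y: P(T)
        out.append("11")     # Q prints its part of the pattern
        S.release()          # Z: V(S)

tq = threading.Thread(target=proc_Q)
tp = threading.Thread(target=proc_P)
tq.start(); tp.start()
tp.join(); tq.join()
pattern = "".join(out)
```

Even if Q's thread is scheduled first, it blocks on T, so the output is forced into the strict P, Q, P, Q, ... alternation the question requires.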
The output shouldn't contain a substring of the given form, which means processes P and Q must never execute concurrently; one semaphore is enough.
So the answer is (C).
A. Deadlock: P1: line 1 | P2: line 3 | P1: line 2 (blocks) | P2: line 4 (blocks).
C. P1: line 1 | P2: line 3 | P2: line 4 (blocks) | P1: line 2 (blocks). Here P1 wants S(x) and P2 wants S(y), but neither will be released by the other process, since there is no way to release them. So both are stuck in a deadlock.
D. P1: line 1 | P2: line 3 (blocks, needs S(x)) | P1: line 2 | P2: still blocked | P1 executes its CS, then ups S(x) | P2: line 3, then line 4 (blocks, needs S(y)) | P1 ups S(y) | P2: line 4 completes, and P2 easily gets the CS.
We can start from P2 as well; I answered starting from P1, but we get the same result.
So, option (D) is correct.
A. This is correct: the implementation may not work if context switching is disabled in P, because a process that is currently blocked may never yield control to the process that would eventually execute V. So context switching is a must.
B. If we use a normal load and store instead of Fetch & Set, there is a good chance that more than one process sees S.value as 0, and then mutual exclusion won't be satisfied. So this option is wrong.
C. Here we are setting S->value to 0, which is correct (in Fetch & Set we wait while S->value is 1), so the implementation is correct and this option is wrong.
D. I don't see why this code would not implement a binary semaphore: only one process can be in the critical section here at a time. So this is a binary semaphore, and option (D) is wrong.
(B) is the correct answer.
Let 3 processes p1 , p2 , p3 arrive at the barrier and after 4th step process_arrived=3 and the processes enter the barrier. Now
suppose process p1 executes the complete code and makes process_left=1, and tries to re-enter the barrier. Now, when it
executes 4th step, process_arrived=4. p1 is now stuck. At this point all other processes p2 and p3 also execute their section of
code and resets process_arrived=0 and process_left=0. Now, p2 and p3 also try to re-enter the barrier making
process_arrived=2. At this point all processes have arrived, but process_arrived!=3. Hence, no process can re-enter into the
barrier, therefore DEADLOCK!!
The implementation is incorrect because if two barrier invocations are used in immediate succession the system will fall
into a DEADLOCK.
Here's how: let all three processes increment process_arrived to 3. As soon as it becomes 3, the processes previously stuck at the while loop are free to move past it.
Now suppose one process moves out, bypasses the next if statement, leaves the barrier function, and invokes the barrier again (its second invocation) while the other processes are still preempted.
That process, on its second invocation, makes process_arrived equal to 4 and gets stuck forever in the while loop along with the other processes.
At this point they are in deadlock, as only 3 processes were in the system and all are now stuck in the while loop.
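A correct reusable barrier avoids this by using a two-phase design, so that no process can re-enter before everyone has left the current round. One possible sketch (a condition variable with a flipping phase flag stands in for the question's counters; the worker loop and round count are illustrative):

```python
import threading

class ReusableBarrier:
    """Two-phase barrier: the phase flag stops fast processes from re-entering
    the barrier before slower ones have observed that it opened."""
    def __init__(self, n):
        self.n = n
        self.arrived = 0
        self.phase = 0                        # flips each time the barrier opens
        self.cv = threading.Condition()

    def wait(self):
        with self.cv:
            my_phase = self.phase
            self.arrived += 1
            if self.arrived == self.n:        # last process: reset and open the barrier
                self.arrived = 0
                self.phase ^= 1
                self.cv.notify_all()
            else:
                while self.phase == my_phase:  # sleep until the phase flips
                    self.cv.wait()

rounds_done = []
barrier = ReusableBarrier(3)

def worker(i):
    for r in range(5):                         # 5 barrier invocations in immediate succession
        barrier.wait()
        rounds_done.append((r, i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```

Because a fast re-entrant process records the new phase before waiting, it can never corrupt the count of the round it just left, which is exactly the failure mode in the question's code.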
Q.79 answer = option (B)
P1 can do wants1 = true and then P2 can do wants2 = true. Now, both P1 and P2 will be waiting in the while loop
indefinitely without any progress of the system - deadlock.
When P1 is entering critical section it is guaranteed that wants1 = true (wants2 can be either true or false). So, this ensures P2
won't be entering the critical section at the same time. In the same way, when P2 is in critical section, P1 won't be able to enter
critical section. So, mutual exclusion condition satisfied.
Suppose P1 first enters the critical section. Now suppose P2 arrives and waits for the CS by setting wants2 = true. P1 cannot get access to the CS again before P2 gets it; similarly, if P1 is waiting, P2 cannot get access to the CS more than once. Thus, there is a bound (of 1) on the number of times another process enters the CS after a process has requested access, and hence the bounded waiting condition is satisfied.
https://cs.stackexchange.com/questions/63730/how-to-satisfy-bounded-waiting-in-case-of-deadlock
References
The answer is (A) only.
1. Mutual exclusion holds, as test-and-set is an indivisible (atomic) instruction.
2. Progress holds: initially X is 0, so at least one process can enter the critical section at any time.
But there is no guarantee that a given process will eventually enter the CS, hence option (IV) is false. Also, no ordering of processes is maintained, hence (III) is also false.
Answer is (A). Mutual exclusion is satisfied: only one process can access the critical section at a particular time. But progress is not satisfied: suppose s1 = 1 and s2 = 0, and process P1 is not interested in entering the critical section while P2 wants to enter. P2 cannot enter the critical section, as it can do so only after P1 finishes execution.
First P0 will enter the while loop, as S0 is 1. Now, it releases both S1 and S2, and one of them must execute next. Let that be P1. Now, P0 will be waiting for P1 to finish. But in the meantime P2 can also start execution. So, there is a chance that before P0 enters its second iteration, both P1 and P2 have done release(S0), which would still make S0 only 1 (as it is a binary semaphore). So, P0 can do only one more iteration, printing '0' two times.
If P2 does release(S0) only after P0 starts its second iteration, then P0 would do three iterations, printing '0' three times.
If the semaphore could take 3 values (an integer semaphore rather than a binary one), exactly three '0's would have been printed.
A process acquires the lock only when L = 0. When L is 1, the process repeats in the while loop; there is no overflow, because after each increment of L, L is set back to 1. So the only chance of overflow is if an extremely large number of processes (more than an int can count) execute the check condition of the while loop but not the L = 1 statement, which is highly improbable.
Acquire Lock gets success only when Fetch_And_Add gets executed with L = 0. Now suppose P1 acquires lock and make
L = 1 . P2 waits for a lock iterating the value of L between 1 and 2 (assume no other process waiting for lock). Suppose when
P1 releases lock by making L = 0 , the next statement P2 executes is L = 1 . So, value of L becomes 1 and no process is in
critical section ensuring L can never be 0 again. Thus, (B) choice.
To correct the implementation we have to replace Fetch_And_Add with Fetch_And_Make_Equal_1 and remove L = 1 in
AcquireLock(L) .
Since, initial value of semaphore is 2, two processes can enter critical section at a time- this is bad and we can see why.
Say X and Z are the two processes that enter. X increments x by 1 and Z decrements x by 2. Now, Z stores back and after this X stores back.
So, final value of x is 1 and not −1 and two Signal operations make the semaphore value 2 again. So, now W and Z can also
execute like this and the value of x can be 2 which is the maximum possible in any order of execution of the processes.
(If the semaphore is initialized to 1, processed would execute correctly and we get the final value of x as −2 .)
A. X is waiting on R and Y is waiting on X. So, both cannot proceed.
B. Process X is doing Signal operation on R and S without any wait and hence multiple signal operations can happen on the
binary semaphore so Process Y won't be able to get exactly n successful wait operations. i.e., Process Y may not be able to
complete all the iterations.
C. Process X does Wait(S) followed by Signal(R) while Process Y does Signal(S) followed by Wait(R). So, this ensures that
no two iterations of either X or Y can proceed without an iteration of the other being executed in between. i.e., this ensures
that all n iterations of X and Y succeeds and hence the answer.
D. Process X does Signal(R) followed by Wait(S), while Process Y does Signal(S) followed by Wait(R). The problem here is that X can do two Signal(R) operations without a Wait(R) being done in between by Y.
This can result in some Signal operations getting lost (as the semaphore is a binary one), and thus Process Y may not be able to complete all of its iterations. If we change the order of Signal(S) and Wait(R) in EntryY, then option (D) can also work.
5.16.37 Process Synchronization: GATE CSE 2014 Set 2 | Question: 31 top☝ ☛ https://gateoverflow.in/1990
A. False. Let producer = P and consumer = C. Once the producer produces an item and puts it into the buffer, it ups s and n to 1, so the consumer can easily consume the item. So, option (A) is false.
The code can execute in this way: P: 1 2 3 4 5 | C: 1 2 3 4 5. So the consumer can consume an item as soon as it is added to the buffer.
B. Also false. Whenever an item is added to the buffer (i.e., after producing the item), the consumer can consume, or remove, that item. The statement would be true if it said the consumer removes no more than one item just after one has been added (since n = 0 would then block it), but it only says the consumer will remove no more than one item from the buffer, so it is false.
C. True. The statement says the consumer executes first, which means the buffer is empty. Then the execution will be like this:
C: 1 (wait on s, s = 0 now), 2 (BLOCKS, n = -1) | P: 1, 2 (wait on s, which is already 0, so it now blocks). Now C wants n, which can be upped only by the producer, and P wants s, which can be upped only by the consumer (circular wait): there is surely a deadlock.
5.16.38 Process Synchronization: GATE CSE 2015 Set 1 | Question: 9 top☝ ☛ https://gateoverflow.in/8121
3 distinct values {2, 3, 4}
P1 − P2 : B = 3
P2 − P1 : B = 4
P1 − P2 − P1 : B = 2
44 votes -- Anoop Sonkar (4.1k points)
5.16.39 Process Synchronization: GATE CSE 2015 Set 3 | Question: 10 top☝ ☛ https://gateoverflow.in/8405
When both processes try to enter the critical section simultaneously, both are allowed to do so, since both shared variables varP and varQ are true. So, clearly there is NO mutual exclusion. Also, deadlock is prevented, because mutual exclusion is one of the necessary conditions for deadlock to happen. Hence, the answer is (A).
5.16.40 Process Synchronization: GATE CSE 2016 Set 2 | Question: 48 top☝ ☛ https://gateoverflow.in/39600
There is strict alternation: after completing, if process 0 wants to enter again, it will have to wait until process 1 hands over the lock.
This violates the progress requirement, which says that no process outside its critical section may block another interested process from entering the critical section.
Hence the answer is that it violates the progress requirement.
5.16.41 Process Synchronization: GATE CSE 2017 Set 1 | Question: 27 top☝ ☛ https://gateoverflow.in/118307
If we see definition of reentrant Lock :
In computer science, the reentrant mutex (recursive mutex, recursive lock) is particular type of mutual exclusion (mutex)
device that may be locked multiple times by the same process/thread, without causing a deadlock.
https://en.wikipedia.org/wiki/Reentrant_mutex
A Re-entrantLock is owned by the thread last successfully locking, but not yet unlocking it. A thread invoking lock will return,
successfully acquiring the lock, when the lock is not owned by another thread. The method will return immediately if the
current thread already owns the lock https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/ReentrantLock.html
The reentrant property exists so that a process owning a lock can acquire the same lock multiple times. Here the lock is given to be non-reentrant: a process cannot own the same lock multiple times. So if a thread tries to acquire a lock it already owns, it gets blocked, and this is a deadlock.
Here, the answer is (D).
References
Here Empty denotes the number of filled slots and Full denotes the number of empty slots (the names are swapped).
So the producer must check Full (i.e., decrease Full by 1) before entering, and the consumer must check Empty (i.e., decrease Empty by 1) before entering.
D = 100
Arithmetic operations are not atomic; each is a three-step process:
1. Read
2. Calculate
3. Update
Maximum value:
Run P2 for Read and Calculate. D = 100
Run P1 for read and calculate. D = 100
Run P2 update. D = 50
Run P1 update. D = 110
Run P2 read, calculate and update. D = 130
Process P holds Xp resources currently and does not request any new resources; therefore, after some time, it will complete its execution and release the resources it holds.
Process Q holds Xq resources currently and does not request any new resources; therefore, after some time, it will complete its execution and release the resources it holds.
If these released resources cannot satisfy any other process's new request, then no other process will be able to complete its execution.
Xp + Xq < Min{Yk | 1 ≤ k ≤ n, k ≠ p, k ≠ q} ensures that no process completes except P and Q. The answer is (A).
P1 is the producer. So, it must wait for full condition. But semaphore full is initialized to 0 and semaphore empty is
initialized to n, meaning full = 0 implies no item and empty = n implies space for n items is available. So, P1 must wait for
semaphore empty - K − P( empty ) and similarly P2 must wait for semaphore full - M − P( full ) . After accessing the
critical section (producing/consuming item) they do their respective V operation. Thus option D.
49 votes -- Arjun Suresh (332k points)
Check : What is Starvation?
Answer is (B)
It needs two semaphores. X = 0, Y = 0
Suppose the slots are full → F = 0. Now, if Wait(F) and Wait(S) are interchanged and Wait(S) succeeds, the producer will wait on Wait(F), which can never succeed because the consumer is waiting on Wait(S). So deadlock can happen.
If Signal(S) and Signal(F) are interchanged in the consumer, deadlock won't happen; it will just give priority to a producer over the next waiting consumer.
So, answer (A)
(C) Both processes can be in the critical section concurrently. Say P1 starts and enters the if clause, and just after its entrance, before critical_flag = TRUE executes, a context switch happens; P2 also gets in, since the flag is still FALSE. So now both processes are in the critical section! So (i) is true. (ii) is false: the flag cannot remain TRUE with no process inside the if clause, because any process that entered the critical section will definitely set the flag back to FALSE on leaving. So, no deadlock.
Answer is (C)
S1: if readcount is 1, i.e., this is the first reader, DOWN on wrt so that no writer can write.
S2: after readcount has been updated, UP on mutex.
S3: DOWN on mutex to update readcount.
S4: if readcount is zero, i.e., no reader is reading any more, UP on wrt to allow a writer to write.
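With S1 = wait(wrt), S2 = signal(mutex), S3 = wait(mutex), and S4 = signal(wrt) filled in, the completed scheme can be sketched in Python; the shared value, the iteration counts, and the reads_seen log are illustrative additions:

```python
import threading

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # held by a writer, or by the group of readers
readcount = 0
shared_value = 0                 # the data being read/written
reads_seen = []

def writer(n):
    global shared_value
    for _ in range(n):
        wrt.acquire()            # wait(wrt)
        shared_value += 1        # writing is performed
        wrt.release()            # signal(wrt)

def reader(n):
    global readcount
    for _ in range(n):
        mutex.acquire()          # wait(mutex)
        readcount += 1
        if readcount == 1:       # first reader locks writers out
            wrt.acquire()        # S1: wait(wrt)
        mutex.release()          # S2: signal(mutex)
        reads_seen.append(shared_value)   # reading is performed
        mutex.acquire()          # S3: wait(mutex)
        readcount -= 1
        if readcount == 0:       # last reader lets writers back in
            wrt.release()        # S4: signal(wrt)
        mutex.release()          # signal(mutex)

threads = [threading.Thread(target=writer, args=(50,)),
           threading.Thread(target=reader, args=(50,)),
           threading.Thread(target=reader, args=(50,))]
for t in threads: t.start()
for t in threads: t.join()
```

Because every read happens while the reader group holds wrt, no read can observe a half-finished write, and all 50 writer increments survive.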
Producer:                       Consumer:
while (true) do                 while (true) do
    1  P(S);                        1  P(Q);
    2  x = produce();               2  consume(x);
    3  V(Q);                        3  V(S);
done                            done
Lets explain the working of this code.
It is mentioned that P and C execute parallely.
P : 123
A number of processes could be in a deadlock state if none of them can execute due to non-availability of sufficient resources.
Let Pi, 0 ≤ i ≤ 4 represent five processes and let there be four resource types rj, 0 ≤ j ≤ 3. Suppose the following data structures have been used.
Available: A vector of length 4 such that if Available[j] = k, there are k instances of resource type rj available in the system.
Allocation. A 5 × 4 matrix defining the number of each type currently allocated to each process. If Allocation [i, j] = k then
process pi is currently allocated k instances of resource type rj .
Max. A 5 × 4 matrix indicating the maximum resource need of each process. If Max[i, j] = k then process pi , may need a
maximum of k instances of resource type rj in order to complete the task.
Assume that system allocated resources only when it does not lead into an unsafe state such that resource requirements in future
never cause a deadlock state. Now consider the following snapshot of the system.
        Allocation          Max
        r0  r1  r2  r3      r0  r1  r2  r3
p0       0   0   1   2       0   0   1   2
p1       1   0   0   0       1   7   5   0
p2       1   3   5   4       2   3   5   6
p3       0   6   3   2       0   6   5   2
p4       0   0   1   4       0   6   5   6

Available: r0 = 1, r1 = 5, r2 = 2, r3 = 0
Answer ☟
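The safety check that the Banker's Algorithm applies to such a snapshot can be sketched as follows; the matrices are transcribed from the table above, and this only checks safety of the current state, not the grantability of any particular request:

```python
def is_safe(available, allocation, maximum):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    # Need = Max - Allocation
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # pretend process i runs to completion and releases its allocation
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None          # some processes can never finish: unsafe state
    return sequence

allocation = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
maximum    = [[0,0,1,2], [1,7,5,0], [2,3,5,6], [0,6,5,2], [0,6,5,6]]
available  = [1, 5, 2, 0]
safe_seq = is_safe(available, allocation, maximum)
```

On this snapshot the check finds a safe sequence (p0 can finish immediately since its Need row is all zeros, and the resources it releases unblock the rest), so the state shown is safe.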
5.17.2 Resource Allocation: GATE CSE 1989 | Question: 11a top☝ ☛ https://gateoverflow.in/91093
i. A system of four concurrent processes, P, Q, R and S , use shared resources A, B and C . The sequences in which processes,
P, Q, R and S request and release resources are as follows:
If a resource is free, it is granted to a requesting process immediately. There is no preemption of granted resources. A resource is
taken back from a process only when the process explicitly releases it.
Can the system of four processes get into a deadlock? If yes, give a sequence (ordering) of operations (for requesting and releasing
resources) of these processes which leads to a deadlock.
ii. Will the processes always get into a deadlock? If your answer is no, give a sequence of these operations which leads to
completion of all processes.
iii. What strategies can be used to prevent deadlocks in a system of concurrent processes using shared resources if preemption of
granted resources is not allowed?
Answer ☟
5.17.3 Resource Allocation: GATE CSE 1992 | Question: 02-xi top☝ ☛ https://gateoverflow.in/568
A computer system has 6 tape devices, with n processes competing for them. Each process may need 3 tape drives. The
maximum value of n for which the system is guaranteed to be deadlock-free is:
A. 2
B. 3
C. 4
D. 1
Answer ☟
5.17.4 Resource Allocation: GATE CSE 1993 | Question: 7.9, UGCNET-Dec2012-III: 41 top☝ ☛ https://gateoverflow.in/2297
Consider a system having m resources of the same type. These resources are shared by 3 processes A, B, and C, which have peak demands of 3, 4, and 6 respectively. For which values of m will deadlock not occur?
A. 7
B. 9
C. 10
D. 13
E. 15
Answer ☟
A computer system uses the Banker’s Algorithm to deal with deadlocks. Its current state is shown in the table below, where
P0 , P1 , P2 are processes, and R0, R1, R2 are resources types.
Answer ☟
5.17.7 Resource Allocation: GATE CSE 1997 | Question: 6.7 top☝ ☛ https://gateoverflow.in/2263
An operating system contains 3 user processes each requiring 2 units of resource R. The minimum number of units of R such
that no deadlocks will ever arise is
A. 3
B. 5
C. 4
D. 6
Answer ☟
i. If no other process is currently holding the resource, the OS awards the resource to P .
ii. If some process Q with T S(Q) < T S(P) is holding the resource, the OS makes P wait for the resources.
iii. If some process Q with T S(Q) > T S(P) is holding the resource, the OS restarts Q and awards the resources to
P . (Restarting means taking back the resources held by a process, killing it and starting it again with the same timestamp)
When a process releases a resource, the process with the smallest timestamp (if any) amongst those waiting for the resource is
awarded the resource.
A. Can a deadlock ever arise? If yes, show how. If not, prove it.
B. Can a process P ever starve? If yes, show how. If not, prove it.
Answer ☟
5.17.9 Resource Allocation: GATE CSE 1998 | Question: 1.32 top☝ ☛ https://gateoverflow.in/1669
A computer has six tape drives, with n processes competing for them. Each process may need two drives. What is the
maximum value of n for the system to be deadlock free?
A. 6
B. 5
C. 4
D. 3
Answer ☟
5.17.10 Resource Allocation: GATE CSE 2000 | Question: 2.23 top☝ ☛ https://gateoverflow.in/670
Answer ☟
Two concurrent processes P1 and P2 want to use resources R1 and R2 in a mutually exclusive manner. Initially, R1 and R2
are free. The programs executed by the two processes are given below.
A. Is mutual exclusion guaranteed for R1 and R2? If not, show a possible interleaving of the statements of P1 and P2 such that mutual exclusion is violated (i.e., both P1 and P2 use R1 and R2 at the same time).
B. Can deadlock occur in the above program? If yes, show a possible interleaving of the statements of P1 and P2 leading to
deadlock.
C. Exchange the statements Q1 and Q3 and statements Q2 and Q4 . Is mutual exclusion guaranteed now? Can deadlock occur?
Answer ☟
Suppose n processes, P1 , … Pn share m identical resource units, which can be reserved and released one at a time. The
maximum resource requirement of process Pi is si , where si > 0. Which one of the following is a sufficient condition for ensuring
that deadlock does not occur?
A. ∀i, si < m
B. ∀i, si < n
C. ∑_{i=1}^{n} si < (m + n)
D. ∑_{i=1}^{n} si < (m × n)
Answer ☟
Consider the following snapshot of a system running n processes. Process i is holding xi instances of a resource R,
1 ≤ i ≤ n . Currently, all instances of R are occupied. Further, for all i, process i has placed a request for an additional yi instances
while holding the xi instances it already has. There are exactly two processes p and q such that yp = yq = 0. Which one of the
following can serve as a necessary condition to guarantee that the system is not approaching a deadlock?
Answer ☟
A single processor system has three resource types X, Y and Z , which are shared by three processes. There are 5 units of each
resource type. Consider the following scenario, where the column alloc denotes the number of units of each resource type allocated
to each process, and the column request denotes the number of units of each resource type requested by a process in order to
complete execution. Which of these processes will finish LAST?
A. P0
B. P1
C. P2
D. None of the above, since the system is in a deadlock
Answer ☟
Which of the following is NOT true of deadlock prevention and deadlock avoidance schemes?
A. In deadlock prevention, the request for resources is always granted if the resulting state is safe
B. In deadlock avoidance, the request for resources is always granted if the resulting state is safe
C. Deadlock avoidance is less restrictive than deadlock prevention
D. Deadlock avoidance requires knowledge of resource requirements a priori.
Answer ☟
Consider a system with 4 types of resources R1 (3 units), R2 (2 units), R3 (3 units), R4 (2 units). A non-preemptive resource
allocation policy is used. At any given instance, a request is not entertained if it cannot be completely satisfied. Three processes P1 ,
P2 , P3 request the resources as follows if executed independently.
Which one of the following statements is TRUE if all three processes run concurrently starting at time t = 0?
A. All processes will finish without any deadlock
B. Only P1 and P2 will be in deadlock
C. Only P1 and P3 will be in deadlock
D. All three processes will be in deadlock
Answer ☟
A system has n resources R0 , … , Rn−1 , and k processes P0 , … , Pk−1 . The implementation of the resource request logic of
each process Pi is as follows:
if (i % 2 == 0) {
    if (i < n) request Ri;
    if (i + 2 < n) request Ri+2;
} else {
    if (i < n) request Rn−i;
    if (i + 2 < n) request Rn−i−2;
}
In which of the following situations is a deadlock possible?
A. n = 40, k = 26
B. n = 21, k = 12
C. n = 20, k = 10
D. n = 41, k = 19
Answer ☟
Three concurrent processes X, Y , and Z execute three different code segments that access and update certain shared variables.
Process X executes the P operation (i.e., wait) on semaphores a, b and c; process Y executes the P operation on semaphores b, c
and d; process Z executes the P operation on semaphores c, d, and a before entering the respective code segments. After
completing the execution of its code segment, each process invokes the V operation (i.e., signal) on its three semaphores. All
semaphores are binary semaphores initialized to one. Which one of the following represents a deadlock-free order of invoking the P
operations by the processes?
Answer ☟
5.17.19 Resource Allocation: GATE CSE 2014 Set 1 | Question: 31 top☝ ☛ https://gateoverflow.in/1800
An operating system uses the Banker's algorithm for deadlock avoidance when managing the allocation of three resource
types X, Y , and Z to three processes P0, P1, and P2. The table given below presents the current system state. Here,
the Allocation matrix shows the current number of resources of each type allocated to each process and the Max matrix shows the
maximum number of resources of each type required by each process during its execution.
Allocation Max
X Y Z X Y Z
P0 0 0 1 8 4 3
P1 3 2 0 6 2 0
P2 2 1 1 3 3 3
There are 3 units of type X, 2 units of type Y and 2 units of type Z still available. The system is currently in a safe state. Consider
the following independent requests for additional resources in the current state:
REQ1: P0 requests 0 units of X, 0 units of Y and 2 units of Z
REQ2: P1 requests 2 units of X, 0 units of Y and 0 units of Z
Which one of the following is TRUE?
A. Only REQ1 can be permitted.
B. Only REQ2 can be permitted.
C. Both REQ1 and REQ2 can be permitted.
D. Neither REQ1 nor REQ2 can be permitted.
Answer ☟
A system contains three programs and each requires three tape units for its operation. The minimum number of tape units
which the system must have such that deadlocks never arise is _________.
Answer ☟
5.17.21 Resource Allocation: GATE CSE 2015 Set 2 | Question: 23 top☝ ☛ https://gateoverflow.in/8114
A system has 6 identical resources and N processes competing for them. Each process can request at most 2 resources. Which
one of the following values of N could lead to a deadlock?
A. 1
B. 2
C. 3
D. 4
Answer ☟
5.17.22 Resource Allocation: GATE CSE 2015 Set 3 | Question: 52 top☝ ☛ https://gateoverflow.in/8561
Consider the following policies for preventing deadlock in a system with mutually exclusive resources.
I. Processes should acquire all their resources at the beginning of execution. If any resource is not available, all resources acquired so far are released.
II. The resources are numbered uniquely, and processes are allowed to request resources only in increasing order of resource numbers.
III. The resources are numbered uniquely, and processes are allowed to request resources only in decreasing order of resource numbers.
IV. The resources are numbered uniquely. A process is allowed to request only a resource whose number is larger than those of its currently held resources.
Answer ☟
5.17.23 Resource Allocation: GATE CSE 2016 Set 1 | Question: 50 top☝ ☛ https://gateoverflow.in/39719
Consider the following proposed solution for the critical section problem. There are n processes: P0, …, Pn−1. In the code, the
function pmax returns an integer not smaller than any of its arguments. For all i, t[i] is initialized to zero.
Code for Pi:
do {
    c[i] = 1;
    t[i] = pmax(t[0], ..., t[n-1]) + 1;
    c[i] = 0;
    for every j != i in {0, ..., n-1} {
        while (c[j]);
        while (t[j] != 0 && t[j] <= t[i]);
    }
    Critical Section;
    t[i] = 0;
    Remainder Section;
} while (true);
Answer ☟
5.17.24 Resource Allocation: GATE CSE 2017 Set 2 | Question: 33 top☝ ☛ https://gateoverflow.in/118375
A system shares 9 tape drives. The current allocation and maximum requirement of tape drives for three processes are shown
below:
Answer ☟
Two shared resources R1 and R2 are used by processes P1 and P2 . Each process has a certain priority for accessing each
resource. Let Tij denote the priority of Pi for accessing Rj . A process Pi can snatch a resource Rk from process Pj if Tik is greater
than Tjk .
Given the following :
Which of the following conditions ensures that P1 and P2 can never deadlock?
A. (I) and (IV)
B. (II) and (III)
C. (I) and (II)
D. None of the above
Answer ☟
An operating system implements a policy that requires a process to release all resources before making a request for another
resource. Select the TRUE statement from the following:
Answer ☟
Here, we are asked to "avoid deadlock", and the Banker's algorithm is the algorithm for this.
The crux of the algorithm is to allocate resources to a process only if there exists a safe sequence after the allocation, i.e., after
allocating the requested resources there exists an order of execution of the processes in which deadlock cannot happen.
There can be multiple safe sequences, but finding any one of them is enough to declare a state safe.
Now, coming to the given question, let us first construct the NEED matrix, which shows the future need of each process and is
obtained as Max − Allocation:

       r0 r1 r2 r3
p0      0  0  0  0
p1      0  7  5  0
p2      1  0  0  2
p3      0  0  2  0
p4      0  6  4  2
Since p0 does not require any more resources, we can finish it first, releasing its allocation of 1 instance of r2 and 2 instances of r3. Thus our Available vector becomes
[1 5 2 0] + [0 0 1 2] = [1 5 3 2].
Now, either p2 or p3 can finish, as neither's remaining need exceeds the Available vector. Say p2 finishes: it releases its allocation
[1 3 5 4], and our Available becomes
[1 5 3 2] + [1 3 5 4] = [2 8 8 6].
Now, any of p1, p3, p4 can finish, so we do not need to proceed further to determine that the state is safe. One possible safe sequence is
p0 − p2 − p1 − p3 − p4.
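The scan just performed is the Banker's safety algorithm. A minimal Python sketch (function and variable names are mine), run on this question's Allocation, Max and Available values; note that the greedy index-order scan happens to finish p3 and p4 before p1, producing a different but equally valid safe sequence:

```python
def safe_sequence(available, allocation, max_need):
    """Banker's safety algorithm: return a safe order of process
    indices, or None if the state is unsafe."""
    n = len(allocation)
    need = [[m - a for m, a in zip(max_need[i], allocation[i])]
            for i in range(n)]
    work, finished, order = list(available), [False] * n, []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(x <= w for x, w in zip(need[i], work)):
                # process i can finish; it releases its whole allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None  # no process can proceed: unsafe state
    return order

allocation = [[0,0,1,2], [1,0,0,0], [1,3,5,4], [0,6,3,2], [0,0,1,4]]
max_need   = [[0,0,1,2], [1,7,5,0], [2,3,5,6], [0,6,5,2], [0,6,5,6]]
print(safe_sequence([1,5,2,0], allocation, max_need))  # [0, 2, 3, 4, 1]
```

With a zero Available vector the same function returns None: only p0 (whose need is zero) can finish, after which no other process can proceed.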
5.17.2 Resource Allocation: GATE CSE 1989 | Question: 11a top☝ ☛ https://gateoverflow.in/91093
5.17.3 Resource Allocation: GATE CSE 1992 | Question: 02-xi top☝ ☛ https://gateoverflow.in/568
Allocate max-1 resources to all processes and add one more resource to any process (Pigeon hole principle) so that this
particular process can be completed (resources can be freed) and there is no deadlock.
∴ (3 − 1) × n + 1 ≤ 6
n = ⌊5/2⌋ = 2
Correct Answer: A
10 votes -- Manoja Rajalakshmi Aravindakshan (7.7k points)
Answer: (A).
5.17.4 Resource Allocation: GATE CSE 1993 | Question: 7.9, UGCNET-Dec2012-III: 41 top☝ ☛ https://gateoverflow.in/2297
13 and 15.
Consider the worst scenario: every process holds one less than its peak demand and requires one more instance of the resource. So, P1 would have got 2, P2 would have got 3, and P3 would have got 5. Now, if one more resource were available, at least one of the processes could finish, and all resources allotted to it would be freed, which would in turn let the other processes finish. So, 2 + 3 + 5 = 10 is the maximum value of m for which a deadlock can occur.
41 votes -- Arjun Suresh (332k points)
From the RAG we can make the necessary matrices.
Total = (2 3 2)
Allocated = (2 3 1)
Available = Total − Allocated = (0 0 1)
A = (0 0 1) + (0 1 0) = (0 1 1)
and it releases
A = (0 1 1) + (1 0 1) = (1 1 2)
        Allocation       MAX        NEED (= MAX − Allocation)
      R0 R1 R2      R0 R1 R2      R0 R1 R2
P0     1  0  2       4  1  2       3  1  0
P1     0  3  1       1  5  1       1  2  0
P2     1  0  2       1  2  3       0  2  1
Available = (2 2 0)
P1's need (1 2 0) can be met. P1 executes, completes, and releases its allocated resources.
A = (2 2 0) + (0 3 1) = (2 5 1)
Further, P2's need (0 2 1) can now be met.
A = (2 5 1) + (1 0 2) = (3 5 3)
Next, P0's need can be met.
Thus a safe sequence exists: P1, P2, P0.
Available = (2, 2 − 1, 0) = (2 1 0)
Here also, not a single process's pending request can be met.
5.17.7 Resource Allocation: GATE CSE 1997 | Question: 6.7 top☝ ☛ https://gateoverflow.in/2263
If we have X resources, where X is the sum of (ri − 1) over all processes and ri is the resource requirement of process i, we might
have a deadlock. But if we have one more resource, then by the pigeonhole principle one of the processes must be able to complete, and this
eventually leads to all processes completing and thus no deadlock.
Here, n = 3 and ri = 2 for all i. So, in order to avoid deadlock, the minimum number of resources required
= ∑_{i=1}^{3} (2 − 1) + 1 = 3 + 1 = 4.
PS: Note the minimum word, any higher number will also cause no deadlock.
Correct Answer: C
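The pigeonhole argument used in this and the tape-drive questions above reduces to two one-liners. A small Python sketch; the function names are mine:

```python
def min_resources_no_deadlock(demands):
    # Worst case: every process holds one unit less than its peak
    # demand and waits. One extra unit lets some process finish
    # (pigeonhole), which unblocks everyone else.
    return sum(d - 1 for d in demands) + 1

def max_deadlock_free_processes(m, demand):
    # Largest n with n*(demand - 1) + 1 <= m
    return (m - 1) // (demand - 1)

print(min_resources_no_deadlock([2, 2, 2]))  # 4 (this question)
print(max_deadlock_free_processes(6, 3))     # 2 (GATE 1992: 6 drives, 3 each)
print(max_deadlock_free_processes(6, 2))     # 5 (GATE 1998: 6 drives, 2 each)
```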
A. Can deadlock occur? No. Whenever an older process wants a resource that is already held by a younger process, the younger process is killed, releasing its resources, which are then given to the older process. So a process can only ever wait for an older process, and since timestamps are unique, a circular wait can never form.
B. Can a process starve? No. Whenever a younger process is killed, it is restarted with the same timestamp it had at the time of killing. So it acts as an elder relative to all processes that arrive after it, and will eventually become the oldest process and obtain its resources.
5.17.9 Resource Allocation: GATE CSE 1998 | Question: 1.32 top☝ ☛ https://gateoverflow.in/1669
Each process needs 2 drives
Consider this scenario
P1 P2 P3 P4 P5 P6
1 1 1 1 1 1
This is the scenario in which a deadlock happens: each process holds 1 drive and is waiting for 1 more, and no more drives are available since the maximum of 6 has been reached. If we could provide one more drive to any one process, that process could execute to completion and release its drives, which, when assigned to the others in turn, would break the deadlock.
So, if there are fewer than 6 processes, no deadlock occurs.
Consider the maximum case of 5 processes.
P1 P2 P3 P4 P5
1 1 1 1 1
In this case the system has 6 drives in total, so we still have 1 more drive left, which can be given to any of the processes; that process then runs to completion, releases its drives, and in turn the others can run to completion too.
Answer (B).
5.17.10 Resource Allocation: GATE CSE 2000 | Question: 2.23 top☝ ☛ https://gateoverflow.in/670
The answer is (C).
A. Mutual exclusion is not guaranteed:
P1 starts and checks the condition (R1 == busy); it evaluates to false, and P1 is preempted.
Then P2 starts and checks the condition (R1 == busy); it also evaluates to false, and P2 is preempted.
Now P1 resumes execution, sets R1 = busy, and is preempted again.
Then P2 resumes execution and sets R1 = busy (which was already set by P1), and P2 is preempted.
The same scenario then repeats for R2: both processes set R2 = busy and enter the critical section together.
B. Here, deadlock is not possible, because at least one process is able to proceed and enter into critical section.
C. If Q1 and Q3 ; Q2 and Q4 will be interchanged then Mutual exclusion is guaranteed but deadlock is possible.
Here, both process will not be able to enter critical section together.
For deadlock:
If P1 sets R1 = busy and then preempted, and P2 sets R2 = busy then preempted.
In this scenario no process can proceed further, as both holding the resource that is required by other to enter into CS.
To see when a deadlock can never happen, consider the worst-case allocation (maximum resources in use without any process completing): (maximum requirement − 1) allocations for each process, i.e., si − 1 for each i.
Now, if ∑_{i=1}^{n} (si − 1) ≥ m, deadlock can occur: all m resources can be split among the n processes so that each holds at most si − 1 instances while requiring one more resource instance for completion.
Now, if we add just one more resource, one of the process can complete, and that will release the resources and this will
eventually result in the completion of all the processes and deadlock can be avoided. i.e., to avoid deadlock
∑_{i=1}^{n} (si − 1) + 1 ≤ m
⟹ ∑_{i=1}^{n} si − n + 1 ≤ m
⟹ ∑_{i=1}^{n} si < (m + n).
Correct Answer: C
104 votes -- Digvijay (44.9k points)
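The derivation condenses into a single predicate: a deadlock state exists exactly when all m units can be absorbed with every process one unit short of its peak, i.e. when ∑(si − 1) ≥ m. A sketch (function name is mine):

```python
def deadlock_state_exists(m, s):
    # A deadlock needs every process to hold s_i - 1 units (one short
    # of its peak) with no free unit left to hand out.
    return sum(x - 1 for x in s) >= m

# Condition C: sum(s_i) < m + n  <=>  sum(s_i - 1) + 1 <= m  <=>  safe
print(deadlock_state_exists(10, [3, 4, 5]))  # False: 12 < 10 + 3
print(deadlock_state_exists(10, [3, 4, 6]))  # True:  13 is not < 13
```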
B. xp + xq ≥ mink≠p,q yk
The question asks for "necessary" condition to guarantee no deadlock. i.e., without satisfying this condition "deadlock" MUST be
there.
PS: Condition B just ensures that the system can proceed from the current state. It does not guarantee that there won't be a
deadlock before all processes are finished.
65 votes -- Arjun Suresh (332k points)
The answer is (C).
Available Resources
X Y Z
0 1 2
Now, P1 will execute first, as its remaining need can be met. After it completes, the available resources are updated.
Updated Available Resources
X Y Z
2 1 3
(A). In deadlock prevention, we just need to ensure one of the four necessary conditions of deadlock doesn't occur. So, it
may be the case that a resource request might be rejected even if the resulting state is safe. (One example, is when we impose a
strict ordering for the processes to request resources).
Deadlock avoidance is less restrictive than deadlock prevention. Deadlock avoidance is like a police man and deadlock
prevention is like a traffic light. The former is less restrictive and allows more concurrency.
Reference: http://www.cs.jhu.edu/~yairamir/cs418/os4/tsld010.htm
References
At t = 3, the process P1 has to wait because available R1 = 1, but P1 needs 2 R1. so P1 is blocked.
Similarly, at various times what is happening can be analyzed by the table below.
From the resource allocation logic, it's clear that even numbered processes are taking even numbered resources and all
even numbered processes share no more than 1 resource. Now, if we make sure that all odd numbered processes take odd
numbered resources without a cycle, then deadlock cannot occur. The "else" case of the resource allocation logic, is trying to do
that. But, if n is odd, Rn−i and Rn−i−2 will be even, and there is a possibility of deadlock when two processes request the same
Ri and Rj. So, only B and D are the possible answers.
Now, in D, we can see that P0 requests R0 and R2 , P2 requests R2 and R4 , so on until, P18 requests R18 and R20 . At the same
time P1 requests R40 and R38 , P3 requests R38 and R36 , so on until, P17 requests R24 and R22 . i.e.; there are no two processes
requesting the same two resources and hence there can't be a cycle of dependencies which means, no deadlock is possible.
But for B, P8 requests R8 and R10 and P11 also requests R10 and R8 . Hence, a deadlock is possible. (Suppose P8 comes first
and occupies R8 . Then P11 comes and occupies R10 . Now, if P8 requests R10 and P11 requests R8 , there will be deadlock)
Correct Answer: B
279 votes -- Arjun Suresh (332k points)
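The case analysis can also be brute-forced. Even-numbered processes request resource indices in ascending order and odd-numbered ones in descending order, so in this scheme a deadlock can arise exactly when two processes claim the same pair of resources (they then grab the pair in opposite orders). A Python sketch of that check (helper names are mine):

```python
def requests(i, n):
    """Resources requested by process i under the given logic."""
    rs = []
    if i % 2 == 0:
        if i < n:     rs.append(i)
        if i + 2 < n: rs.append(i + 2)
    else:
        if i < n:     rs.append(n - i)
        if i + 2 < n: rs.append(n - i - 2)
    return rs

def deadlock_possible(n, k):
    # Deadlock here <=> two processes claim the same resource pair
    # (even processes acquire ascending, odd ones descending).
    pairs = [frozenset(requests(i, n)) for i in range(k)]
    full = [p for p in pairs if len(p) == 2]
    return len(full) != len(set(full))

for n, k in [(40, 26), (21, 12), (20, 10), (41, 19)]:
    print((n, k), deadlock_possible(n, k))
# only (21, 12) -- option B -- reports True (P8 and P11 both want R8 and R10)
```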
For deadlock-free invocation, X, Y and Z must access the semaphores in the same order so that there won't be a case
where one process is waiting for a semaphore while holding some other semaphore. This is satisfied only by option B.
In option A, X can hold a and wait for c while Z can hold c and wait for a
In option C, X can hold b and wait for c, while Y can hold c and wait for b
In option D, X can hold a and wait for c while Z can hold c and wait for a
So, a deadlock is possible for all choices except B.
http://www.eee.metu.edu.tr/~halici/courses/442/Ch5%20Deadlocks.pdf
References
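The "same order" rule behind option B is the classic resource-ordering cure for circular wait. A Python threading sketch (structure and names are mine, not from the question): X, Y and Z each take their three binary semaphores, but always in one fixed global order, so every thread terminates:

```python
import threading

sems = {name: threading.Semaphore(1) for name in "abcd"}
ORDER = "abcd"  # one fixed global order for every process

def run(needed, done, idx):
    # Acquire in the global order, regardless of how the process
    # itself lists its semaphores -- no circular wait can form.
    held = sorted(needed, key=ORDER.index)
    for name in held:
        sems[name].acquire()   # P operation
    done[idx] = True           # critical section
    for name in reversed(held):
        sems[name].release()   # V operation

needs = ["abc", "bcd", "cda"]  # processes X, Y and Z from the question
done = [False] * 3
threads = [threading.Thread(target=run, args=(n, done, i))
           for i, n in enumerate(needs)]
for t in threads: t.start()
for t in threads: t.join()
print(all(done))  # True: every thread finished, no deadlock
```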
5.17.19 Resource Allocation: GATE CSE 2014 Set 1 | Question: 31 top☝ ☛ https://gateoverflow.in/1800
Option (B)
Available after granting REQ1: X = 3, Y = 2, Z = 0.
Only P1 can finish now; after it releases its resources, Available becomes X = 6, Y = 4, Z = 0. Since no unit of Z remains, neither P0's nor P2's remaining need can ever be satisfied. So, it is an unsafe state and REQ1 cannot be permitted.
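The REQ1/REQ2 reasoning is the Banker's resource-request algorithm: tentatively grant the request, then run the safety check on the resulting state. A Python sketch using this question's numbers (function names are mine):

```python
def is_safe(avail, alloc, max_need):
    """Banker's safety check: can every process finish from this state?"""
    need = [[m - a for m, a in zip(max_need[i], alloc[i])]
            for i in range(len(alloc))]
    work, done = list(avail), [False] * len(alloc)
    changed = True
    while changed:
        changed = False
        for i in range(len(alloc)):
            if not done[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, alloc[i])]
                done[i] = changed = True
    return all(done)

def can_grant(req, pid, avail, alloc, max_need):
    # Pretend to grant the request, then test safety of the new state.
    avail2 = [a - r for a, r in zip(avail, req)]
    if any(a < 0 for a in avail2):
        return False
    alloc2 = [row[:] for row in alloc]
    alloc2[pid] = [a + r for a, r in zip(alloc2[pid], req)]
    return is_safe(avail2, alloc2, max_need)

alloc = [[0,0,1], [3,2,0], [2,1,1]]
maxn  = [[8,4,3], [6,2,0], [3,3,3]]
avail = [3,2,2]
print(can_grant([0, 0, 2], 0, avail, alloc, maxn))  # False: REQ1 leads to an unsafe state
print(can_grant([2, 0, 0], 1, avail, alloc, maxn))  # True: REQ2 keeps the state safe
```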
5.17.20 Resource Allocation: GATE CSE 2014 Set 3 | Question: 31 top☝ ☛ https://gateoverflow.in/2065
With up to 6 resources, there can be a case where all three processes hold 2 units each and deadlock occurs. With 7 resources, at least one process's need is satisfied, so it must go ahead, finish, and release all 3 resources it held. So, no deadlock is possible.
25 votes -- Arjun Suresh (332k points)
For problems of this type, in which every process makes the same number of requests, use the formula
n(m − 1) + 1 ≤ r
where
n = number of processes,
m = resource requests made by each process,
r = number of resources.
5.17.21 Resource Allocation: GATE CSE 2015 Set 2 | Question: 23 top☝ ☛ https://gateoverflow.in/8114
With N = 3, the total peak demand is 3 × 2 = 6, which the 6 resources can always satisfy.
With N = 4, the total peak demand is 4 × 2 = 8 > 6, so a process may have to wait.
I guess a question can't get easier than this: (D) choice. (Also, we can simply take the greatest value among the choices for this
question.)
[There are 6 resources and all of them must be in use for deadlock. If the system has no other resource dependence, N = 4
cannot lead to a deadlock. But if N = 4 , the system can be in deadlock in presence of other dependencies.
Why N = 3 cannot cause deadlock? It can cause deadlock, only if the system is already in deadlock and so the deadlock is
independent of the considered resource. Till N = 3, all requests for considered resource will always be satisfied and hence there
won't be a waiting and hence no deadlock with respect to the considered resource. ]
36 votes -- Arjun Suresh (332k points)
5.17.22 Resource Allocation: GATE CSE 2015 Set 3 | Question: 52 top☝ ☛ https://gateoverflow.in/8561
A deadlock will not occur if any one of the four necessary conditions, mutual exclusion, hold and wait, no preemption, and circular wait, is prevented.
Now,
Option I, if implemented, prevents hold and wait, so deadlock cannot occur.
Option II, if implemented, prevents circular wait (making the dependency graph acyclic).
Option III, if implemented, also prevents circular wait (making the dependency graph acyclic).
Option IV is equivalent in effect to options II and III.
So, the correct option is the one stating that all of them are methods to prevent deadlock.
http://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/7_Deadlocks.html
References
5.17.23 Resource Allocation: GATE CSE 2016 Set 1 | Question: 50 top☝ ☛ https://gateoverflow.in/39719
Answer is (A).
This ensures that when a process i reaches Critical Section, all processes j which started before it must have its t[j] = 0 . This
means no two process can be in critical section at same time as one of them must be started earlier.
The non-atomic computation of t[i] = pmax(t[0], ..., t[n-1]) + 1 is the issue here for deadlock. This means two processes can have the same t value, and hence
while (t[j] != 0 && t[j] <= t[i]);
can go into an infinite wait in both (t[j] == t[i]). Starvation is also possible as there is nothing to ensure that a request is granted in a timed
manner. But bounded waiting (as defined in Galvin) is guaranteed here as when a process i starts and gets t[i] value, no new
process can enter critical section before i (as their t value will be higher) and this ensures that access to critical section is granted
only to a finite number of processes (those which started before) before eventually process i gets access.
But in some places bounded waiting is defined as finite waiting (see one here from CMU) and since deadlock is possible here,
bounded waiting is not guaranteed as per that definition.
References
The given question is a wrongly modified version of the actual bakery algorithm, used for the N-process critical section problem.
The bakery algorithm code goes as follows (as in William Stallings' book, page 209, 7th edition):
Entering[i] = true;
Number[i] = 1 + max(Number[1], ..., Number[NUM_THREADS]);
Entering[i] = false;
for (j = 1; j <= NUM_THREADS; j++) {
    // Wait until thread j finishes choosing its number
    while (Entering[j]) { }
    // Wait until all threads with smaller numbers or with the same
    // number but a smaller id finish their work
    while ((Number[j] != 0) && ((Number[j], j) < (Number[i], i))) { }
}
<Critical Section>
Number[i] = 0;
/* remainder section */
Code explanation:
The important point here is that, due to the lack of atomicity of the max function, multiple processes may calculate the same Number.
In that situation, to choose between two processes, we prioritize the lower process id:
(Number[j], j) < (Number[i], i) is a tuple comparison; it allows us to correctly select only one process out of i
and j, but not both (when Number[i] == Number[j]).
Bounded waiting :
If process i is waiting, looping inside the for loop, why is it waiting there? Two reasons: either some process holds a smaller positive Number, or some process holds an equal Number.
Reason 1 does not violate bounded waiting, because if process i has the Number value 5, then all processes having a smaller positive Number will enter the CS first and exit; then process i will definitely get a chance to enter the CS.
Reason 2 violates bounded waiting: assume processes 3 and 4 are fighting with the equal Number value of 5. Whenever one of them (say 4) is scheduled by the short-term scheduler onto the CPU, it keeps looping on Number[3] <= Number[4]; similarly for process 3. But when they are removed from the running state by the scheduler, other processes may continue normal operation. So for processes 3 and 4, although they requested very early, other processes keep getting a chance to enter the CS because of this tie. B is wrong.
Note: in this situation, all the processes go into deadlock anyway after a while.
At any point in time, the remaining processes fall into three categories:
1. Processes which are now testing the while condition inside the for loop.
2. Processes which are now in the remainder section.
3. Processes which are now about to calculate their Number values.
In Category 1, assume process i wins the testing condition; then no one else can win the test, because i has the lowest positive (Number, id) value among the Category 1 processes.
Category 3 processes will calculate Number values greater than the Number of i, using the max function.
The same goes for Category 2 processes if they ever try to re-enter.
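The tuple-comparison tie-break can be exercised directly. Below is a Python threading sketch of the correct bakery algorithm, following the pseudocode above; the non-atomic max() call is deliberately left unprotected, since the tie-break exists precisely to survive it (CPython's GIL stands in for the sequential consistency the algorithm assumes):

```python
import threading

N, ITERS = 3, 50
entering = [False] * N
number = [0] * N
counter = 0

def lock(i):
    entering[i] = True
    number[i] = 1 + max(number)   # not atomic: ties are possible
    entering[i] = False
    for j in range(N):
        while entering[j]:        # wait while j is choosing a number
            pass
        # tuple comparison: lower number wins, thread id breaks ties,
        # so two threads that drew the same number never both proceed
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0

def worker(i):
    global counter
    for _ in range(ITERS):
        lock(i)
        counter += 1              # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 150 = N * ITERS: mutual exclusion held
```

Each thread enters the critical section ITERS times; the final counter equals N × ITERS only if mutual exclusion held throughout.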
5.17.24 Resource Allocation: GATE CSE 2017 Set 2 | Question: 33 top☝ ☛ https://gateoverflow.in/118375
If any process has the highest priority for all the resources, then it can snatch any resource from the other process, and so no deadlock can occur: this highest-priority process will eventually finish and release all the resources for the other, lower-priority process.
In cases (I) and (II), process 1 has the highest priority for both resources, and hence deadlock cannot occur.
Similarly, in cases (III) and (IV), process 2 has the highest priority for both resources, and hence deadlock cannot occur.
If we consider option (A), (I) and (IV):
T11 > T21 // for resource 1, process 1 has the higher priority
T22 > T12 // for resource 2, process 2 has the higher priority
Answer: (B)
Starvation can occur because each time a process requests a new resource, it has to release all the resources it holds, possibly before it has properly finished using them. This can happen again each time the process requests another resource. So, the process starves for proper utilisation of its resources.
Match the pairs in the following questions by writing the corresponding letters only.
Answer ☟
5.18.2 Runtime Environments: GATE CSE 1996 | Question: 2.17 top☝ ☛ https://gateoverflow.in/2746
Answer ☟
5.18.3 Runtime Environments: GATE CSE 2002 | Question: 2.20 top☝ ☛ https://gateoverflow.in/850
A. Security is dynamic
B. The path for searching dynamic libraries is not known till runtime
C. Linking is insecure
D. Cryptographic procedures are not available for dynamic linking
Answer ☟
5.18.1 Runtime Environments: GATE CSE 1991 | Question: 02-iii top☝ ☛ https://gateoverflow.in/513
(a) − (r), (b) − (p), (c) − (s), (d) − (q)
21 votes -- Gate Keeda (15.9k points)
5.18.2 Runtime Environments: GATE CSE 1996 | Question: 2.17 top☝ ☛ https://gateoverflow.in/2746
(D) Option
An assembler uses location counter value to give address to each instruction which is needed for relative addressing as well as
for jump labels.
Linker Loader is a loader which can load several compiled codes and link them together into a single executable. Thus it needs to
do relocation of the object codes.
5.18.3 Runtime Environments: GATE CSE 2002 | Question: 2.20 top☝ ☛ https://gateoverflow.in/850
A. A nonsense option; no idea why it is here.
B. The path for searching dynamic libraries is not known till runtime -> this seems the most correct answer.
C. This is not true; linking is not in itself insecure.
D. There is no relation between cryptographic procedures and dynamic linking.
Semaphore operations are atomic because they are implemented within the OS _________.
Answer ☟
5.19.2 Semaphores: GATE CSE 1992 | Question: 02,x, ISRO2015-35 top☝ ☛ https://gateoverflow.in/564
At a particular time of computation, the value of a counting semaphore is 7. Then 20 P operations and 15 V operations were
completed on this semaphore. The resulting value of the semaphore is :
A. 42
B. 2
C. 7
D. 12
Answer ☟
A counting semaphore was initialized to 10. Then 6P (wait) operations and 4V (signal) operations were completed on this
semaphore. The resulting value of the semaphore is
A. 0
B. 8
C. 10
D. 12
Answer ☟
The P and V operations on counting semaphores, where s is a counting semaphore, are defined as follows:

P(s): s = s − 1;
      if (s < 0) then wait;

V(s): s = s + 1;
      if (s ≤ 0) then wake up a process waiting on s;

These operations are implemented using the binary semaphore operations Pb and Vb on two binary semaphores xb and yb as follows:
P(s):
    Pb(xb);
    s = s − 1;
    if (s < 0)
    {
        Vb(xb);
        Pb(yb);
    }
    else Vb(xb);

V(s):
    Pb(xb);
    s = s + 1;
    if (s ≤ 0) Vb(yb);
    Vb(xb);
The initial values of xb and yb are respectively
A. 0 and 0
B. 0 and 1
C. 1 and 0
D. 1 and 1
Answer ☟
Consider a non-negative counting semaphore S . The operation P(S) decrements S , and V (S) increments S . During an
execution, 20 P(S) operations and 12 V (S) operations are issued in some order. The largest initial value of S for which at least
one P(S) operation will remain blocked is _______
Answer ☟
Each of a set of n processes executes the following code using two semaphores a and b initialized to 1 and 0, respectively.
Assume that count is a shared variable initialized to 0 and not used in CODE SECTION P.
CODE SECTION P
    wait(a);
    count = count + 1;
    if (count == n) signal(b);
    signal(a);
    wait(b);
    signal(b);
CODE SECTION Q
What does the code achieve?
A. It ensures that no process executes CODE SECTION Q before every process has finished CODE SECTION P.
B. It ensures that two processes are in CODE SECTION Q at any time.
C. It ensures that all processes execute CODE SECTION P mutually exclusively.
D. It ensures that at most n − 1 processes are in CODE SECTION P at any time.
Answer ☟
Consider the following pseudocode, where S is a semaphore initialized to 5 in line #2 and counter is a shared variable
initialized to 0 in line #1. Assume that the increment operation in line #7 is not atomic.
1. int counter = 0;
2. Semaphore S = init(5);
3. void parop(void)
4. {
5.     wait(S);
6.     wait(S);
7.     counter++;
8.     signal(S);
9.     signal(S);
10. }
If five threads execute the function parop concurrently, which of the following program behavior(s) is/are possible?
A. The value of counter is 5 after all the threads successfully complete the execution of parop
B. The value of counter is 1 after all the threads successfully complete the execution of parop
C. The value of counter is 0 after all the threads successfully complete the execution of parop
D. There is a deadlock involving all the threads
Answer ☟
The wait and signal operations of a monitor are implemented using semaphores as follows. In the following,
x is a condition variable,
mutex is a semaphore initialized to 1,
x_sem is a semaphore initialized to 0,
x_count is the number of processes waiting on semaphore x_sem, initially 0,
next is a semaphore initialized to 0,
next_count is the number of processes waiting on semaphore next, initially 0.
The body of each procedure that is visible outside the monitor is replaced with the following:
P(mutex);
...
body of procedure
...
if (next_count > 0)
V(next);
else
V(mutex);
The operation x.wait is implemented as:
    x_count = x_count + 1;
    if (next_count > 0)
        V(next);
    else
        V(mutex);
    E1;
    x_count = x_count - 1;
The operation x.signal is implemented as:
    if (x_count > 0)
    {
        next_count = next_count + 1;
        E2;
        P(next);
        next_count = next_count - 1;
    }
For correct implementation of the monitor, statements E1 and E2 are, respectively,
A. P(x_sem), V (next)
B. V (next), P(x_sem)
C. P(next), V (x_sem)
D. P(x_sem), V (x_sem)
Answer ☟
Answers: Semaphores
The concept of semaphores is used for synchronization.
Semaphore is an integer with a difference. Well, actually a few differences.
You set the value of the integer when you create it, but can never access the value directly after that; you must use one of the
semaphore functions to adjust it, and you cannot ask for the current value.
There are semaphore functions to increment or decrement the value of the integer by one.
Decrementing is a (possibly) blocking function. If the resulting semaphore value is negative, the calling thread or process is
blocked, and cannot continue until some other thread or process increments it.
Incrementing the semaphore when it is negative causes one (and only one) of the threads blocked by this semaphore to become
unblocked and runnable.
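The behavior described above can be sketched as a toy single-threaded model. This is only an illustrative sketch: real semaphores actually suspend the calling thread, while here we just record who would be blocked (the class and method names are invented for this example).

```python
# A toy model of the classic counting semaphore described above.
# A negative value records how many callers are blocked; this sketch
# logs blocked callers instead of actually suspending threads.
class CountingSemaphore:
    def __init__(self, value):
        self.value = value
        self.blocked = []               # callers waiting on this semaphore

    def P(self, caller):
        self.value -= 1
        if self.value < 0:              # caller cannot continue
            self.blocked.append(caller)

    def V(self):
        self.value += 1
        if self.value <= 0 and self.blocked:
            self.blocked.pop(0)         # wake exactly one blocked caller

s = CountingSemaphore(1)
s.P("t1")                               # t1 proceeds, value = 0
s.P("t2")                               # t2 blocks, value = -1
assert s.value == -1 and s.blocked == ["t2"]
s.V()                                   # wakes t2, value = 0
assert s.value == 0 and s.blocked == []
```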
5.19.2 Semaphores: GATE CSE 1992 | Question: 02,x, ISRO2015-35 top☝ ☛ https://gateoverflow.in/564
The answer is option B.
The semaphore value is 7; after 20 P (wait) operations it becomes 7 − 20 = −13, and after 15 V (signal) operations the value
becomes −13 + 15 = 2.
Answer is option (B)
Initially semaphore is 10, then 6 down operations are performed means (10 − 6 = 4) and 4 up operations means (4 + 4 = 8)
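Both answers follow from the fact that the semaphore's final value depends only on the counts of operations, not on their interleaving. A quick check (the helper name is invented for this sketch):

```python
# Final value of a counting semaphore: initial - (#P) + (#V),
# independent of the order in which the operations occur.
def value_after(initial, p_ops, v_ops):
    return initial - p_ops + v_ops

assert value_after(7, 20, 15) == 2      # GATE 1992: option B
assert value_after(10, 6, 4) == 8       # second question: option B
```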
Answer is (C) .
Reasoning:
First, what a counting semaphore is and how it works. A counting semaphore holds a count, i.e., the number of processes that can
be in the critical section at the same time. Here the value of S denotes that count. So if S = 3, at most 3 processes may be in the
critical section at once. Also, when counting semaphore S has a negative value, the absolute value of S is the number of
processes waiting for the critical section.
(A) & (B) are out of option, because Xb must be 1, otherwise our counting semaphore will get blocked without doing anything.
Now consider options (C) & (D).
Option (D) :-
Y b = 1, Xb = 1
Assume that initial value of S = 2 . (At max 2 processes must be in Critical Section.)
We have 4 processes, P1, P2, P3&P4.
P1 enters the critical section: it calls P(s), S = S − 1 = 1. As S ≥ 0, we do not call Pb(yb).
P2 enters the critical section: it calls P(s), S = S − 1 = 0. As S ≥ 0, we do not call Pb(yb).
Now P3 comes. It should be blocked, but when it calls P(s), S = S − 1 = 0 − 1 = −1. As S < 0, we now do call Pb(yb).
Still, P3 enters the critical section and is not blocked, since yb's initial value was 1.
This violates the property of a counting semaphore: S is now −1, yet no process is waiting, and we have allowed 1 more process
than the counting semaphore permits.
If yb had been 0, P3 would have been blocked here at Pb(yb). So the answer is (C).
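The construction can be exercised with Python's threading primitives standing in for the binary semaphores, with xb = 1 and yb = 0 as in option (C). This is a sketch: the timed join is only a test-harness heuristic for "the second process is blocked", not part of the construction.

```python
import threading

xb = threading.Semaphore(1)   # binary semaphore xb, initialized to 1
yb = threading.Semaphore(0)   # binary semaphore yb, initialized to 0
s = 1                         # counting semaphore value: 1 process allowed

def P():
    global s
    xb.acquire()              # Pb(xb)
    s -= 1
    if s < 0:
        xb.release()          # Vb(xb)
        yb.acquire()          # Pb(yb): block until some process calls V
    else:
        xb.release()          # Vb(xb)

def V():
    global s
    xb.acquire()              # Pb(xb)
    s += 1
    if s <= 0:
        yb.release()          # Vb(yb): wake one blocked process
    xb.release()              # Vb(xb)

P()                           # first process enters, s = 0
t = threading.Thread(target=P)
t.start()                     # second process must block on Pb(yb)
t.join(timeout=0.2)
assert t.is_alive()           # still blocked, as a counting semaphore requires
V()                           # first process leaves, waking the second
t.join()
assert s == 0 and not t.is_alive()
```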
Answer: (7). The final value of the semaphore is order-independent: S − 20 + 12 = S − 8. For at least one P(S) operation to remain blocked, the final value must be negative, i.e., S − 8 < 0; the largest such initial value is S = 7.
29 votes -- Ashish Deshmukh (1.3k points)
Answer: A. It ensures that no process executes CODE SECTION Q before every process has finished CODE SECTION
P.
Explanation
In short, semaphore 'a' ensures mutually exclusive execution of the statement count = count + 1, and semaphore 'b' controls
entry to CODE SECTION Q once all processes have executed CODE SECTION P, as checked by the condition if (count == n)
signal(b); semaphore 'b' is initialized to 0 and is incremented only when this condition is TRUE. (Side fact: processes do
not enter CODE SECTION Q in mutual exclusion; the moment all have executed CODE SECTION P, processes enter
CODE SECTION Q in any order.)
Detailed explanation:-
Consider this situation as the processes need to execute three stages- Section P, then the given code and finally Section Q.
It is evident that semaphores do not control Section P hence, There is no restriction in execution of P.
Now, we are given 2 semaphores 'a' and 'b' initialized to '1' and '0' respectively.
Take an example of 3 processes (hence n=3, count=0(initially) ) and lets say first of them has finished executing Section P and
enters the given code. It does following changes:-
1. will execute wait(a) hence making semaphore a=0
2. increment the count from 0 to 1 (first time)
3. If(count==n) evaluates FALSE and hence signal(b) is not executed. So semaphore b remains 0
4. signal(a) hence making semaphore a=1
5. wait(b) But since semaphore b is already 0, The process will be in blocked/waiting state.
First out of the three processes is unable to enter the CODE SECTION Q !
Now say second process completes CODE SECTION P and starts executing the given code. It can be concluded that it will
follow the same sequence (5 steps) as mentioned above and status of variables will be:- count = 2 (still count<n), semaphore a=1,
semaphore b=0 (no change)
Finally the last process finishes execution of CODE SECTION P.
It will follow same steps 1 and 2 making semaphore a=0 and count = 3
3. if(count==n) evaluates TRUE! and hence signal(b) is executed, making semaphore b = 1 FOR THE FIRST TIME.
4 and 5 will be executed the same way.
Now the moment this last process signaled b, the previously blocked process will be able to execute wait(b) and the very next
moment execute signal(b) to allow other blocked/waiting process to proceed.
This way all the processes enter CODE SECTION Q after executing CODE SECTION P.
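The walkthrough above can be replayed with Python threads. This is a sketch: the events list and its lock are test scaffolding (standing in for "finished P" and "entered Q"), not part of the original code.

```python
import threading

N = 4
a = threading.Semaphore(1)      # semaphore a, initialized to 1
b = threading.Semaphore(0)      # semaphore b, initialized to 0
count = 0                       # shared, protected by a
events = []                     # execution log: 'P' or 'Q'
log_lock = threading.Lock()

def worker():
    global count
    with log_lock:
        events.append('P')      # CODE SECTION P finishes here
    a.acquire()                 # wait(a)
    count += 1
    if count == N:
        b.release()             # last arrival: signal(b)
    a.release()                 # signal(a)
    b.acquire()                 # wait(b): blocks until the gate opens
    b.release()                 # signal(b): pass the turnstile on
    with log_lock:
        events.append('Q')      # CODE SECTION Q starts here

threads = [threading.Thread(target=worker) for _ in range(N)]
for t in threads: t.start()
for t in threads: t.join()

# Every process finishes P before any process starts Q (option A).
assert events[:N] == ['P'] * N and events[N:] == ['Q'] * N
```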
Correct Options: A,B,D
The given code allows up to 2 threads to be in the critical section at once, as the initial value of the semaphore is 5 and 2 wait
operations are needed to enter the critical section (⌊5/2⌋ = 2).
In the critical section the increment operation is not atomic. So, multiple threads entering the critical section simultaneously can
cause race condition.
A. Assume that the 5 threads execute sequentially with no interleaving then after each thread ends the counter value
increments by 1. Hence after 5 threads finish, counter value will be incremented 5 times from 0 to 5. Possible.
B. Suppose one thread passes both waits and reads the value of counter (as 0) but has not yet written the incremented value
back. The other four threads then run to completion one after another, raising counter to 4. Finally, the first thread writes
back its stale value plus one, overwriting counter with 1. Possible.
C. Every thread writes back a value at least one greater than the value it read, so the final write leaves counter at 1 or more;
counter can never end at 0. Not possible.
D. Suppose all five threads execute their first wait operation. The semaphore value becomes 0, and every thread then blocks
on its second wait: a deadlock involving all the threads. Possible.
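The lost update behind option B can be replayed deterministically without real threads. This is a sketch of one possible interleaving; t0_read and tmp are invented names for the per-thread temporaries of the non-atomic increment.

```python
# Deterministic replay of option B's interleaving: one thread reads
# counter, four others run to completion, then the first thread writes
# back its stale value + 1.
counter = 0
t0_read = counter           # thread 0 reads counter (0), then is preempted
for _ in range(4):          # threads 1..4 each run read-increment-write
    tmp = counter
    counter = tmp + 1       # counter climbs 1, 2, 3, 4
counter = t0_read + 1       # thread 0's stale write clobbers the updates
assert counter == 1         # final value is 1: option B is possible
```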
x_count is the number of processes waiting on semaphore x_sem, initially 0,
x_count is incremented and decremented in x.wait, which shows that in between them wait(x_sem) must happen which is
P(x_sem). Correspondingly V(x_sem) must happen in x.signal. So, D choice.
What is a monitor?
References
5.20.1 System Call: GATE CSE 2021 Set 1 | Question: 14 top☝ ☛ https://gateoverflow.in/357438
Which of the following standard C library functions will always invoke a system call when executed from a single-threaded
process in a UNIX/Linux operating system?
A. exit
B. malloc
C. sleep
D. strlen
Answer ☟
5.20.1 System Call: GATE CSE 2021 Set 1 | Question: 14 top☝ ☛ https://gateoverflow.in/357438
System calls are used to obtain services from the operating system that generally require a higher privilege level.
This question uses two important words “always” and “standard C library functions”.
Let’s check options
1. exit- This is a function defined in standard C library and it always invokes system call every time, flushes the streams, and
terminates the caller.
2. malloc – This is a function defined in standard C library and it does not always invoke the system call. When a process is
created, certain amount of heap memory is already allocated to it, when required to expand or shrink that memory, it
internally uses sbrk/brk system call on Unix/Linux. i.e., not every malloc call needs a system call but if the current
allocated size is not enough, it’ll do a system call to get more memory.
3. sleep – strictly speaking, this is not an ISO C standard library function but a POSIX one (Unix and Windows use different
header files for it). Taking the question's phrasing at face value and treating it as a standard C library function: yes, it
always invokes a system call.
4. strlen – This is a function defined in standard C library and doesn’t require any system call to perform its function of
calculating the string length.
Answer : A,C
Consider the following statements with respect to user-level threads and kernel-supported threads
Answer ☟
Consider the following statements about user level threads and kernel level threads. Which one of the following statements is
FALSE?
A. Context switch time is longer for kernel level threads than for user level threads.
B. User level threads do not need any hardware support.
C. Related kernel level threads can be scheduled on different processors in a multi-processor system.
D. Blocking one kernel level thread blocks all related threads.
Answer ☟
5.21.3 Threads: GATE CSE 2011 | Question: 16, UGCNET-June2013-III: 65 top☝ ☛ https://gateoverflow.in/2118
A thread is usually defined as a light weight process because an Operating System (OS) maintains smaller data structure for a
thread than for a process. In relation to this, which of the following statement is correct?
Answer ☟
Answer ☟
Answer ☟
I. Program counter
II. Stack
III. Address space
IV. Registers
Answer ☟
Consider the following multi-threaded code segment (in a mix of C and pseudo-code), invoked by two processes P1 and P2 ,
and each of the processes spawns two threads T1 and T2 :
int x = 0; // global
Lock L1; // global
main () {
create a thread to execute foo(); // Thread T1
create a thread to execute foo(); // Thread T2
wait for the two threads to finish execution;
print(x);}
foo() {
int y = 0;
Acquire L1;
x = x + 1;
y = y + 1;
Release L1;
print (y);}
Answer ☟
Which one of the following is NOT shared by the threads of the same process ?
A. Stack
B. Address Space
C. File Descriptor Table
D. Message Queue
Answer ☟
Answers: Threads
Answer: (A)
I. User level thread switching is faster than kernel level switching. So, (I) is false.
II. is true.
III. is true.
IV. User level threads are transparent to the kernel.
In computing, "transparent" means functioning without the other party being aware. In our case, user level threads function
without the kernel being aware of them. So (IV) is actually correct.
User level threads can switch almost as fast as a procedure call. Kernel supported threads switch much slower. So, I is false.
II, III and IV are TRUE. So A.
"The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes"
Ref: http://stackoverflow.com/questions/15983872/difference-between-user-level-and-kernel-supported-threads
References
Answer: (D)
A. Context switch time is longer for kernel level threads than for user level threads. − This is True, as Kernel level threads are
managed by OS and Kernel maintains lot of data structures. There are many overheads involved in Kernel level thread
management, which are not present in User level thread management !
B. User level threads do not need any hardware support. − This is true, as user level threads are implemented programmatically
by libraries; the kernel does not see them.
C. Related kernel level threads can be scheduled on different processors in a multi-processor system. − This is true.
D. Blocking one kernel level thread blocks all related threads. − This is false. It would have been true for user level threads
(in the many-to-one model); kernel level threads are scheduled independently.
5.21.3 Threads: GATE CSE 2011 | Question: 16, UGCNET-June2013-III: 65 top☝ ☛ https://gateoverflow.in/2118
Answer to this question is (C).
On a per-thread basis, the OS maintains ONLY TWO things: CPU register state and stack space. It does not maintain anything
else per thread. The code segment and global variables are shared. Even the TLB and page tables are shared, since the threads
belong to the same process.
A. Option (A) would have been correct if the word 'ONLY' were not there: the OS maintains not only register state but stack
space as well.
(D) is the answer. Threads can share the Code segments. They have only separate Registers and stack.
User level threads are scheduled by the thread library and kernel knows nothing about it. So, A is TRUE.
When a user level thread is blocked, all other threads of its process are blocked. So, B is TRUE. (With a multi-threaded kernel,
user level threads can make non-blocking system calls without getting blocked. But in this option, it is explicitly said 'a thread is
blocked'.)
Context switching between user level threads is faster, almost as fast as a procedure call: only minimal state is switched by the
thread library, with no kernel involvement, while for kernel level threads the registers, PC and SP must be saved and restored
via the kernel. So, C is also TRUE.
Reference: http://www.cs.cornell.edu/courses/cs4410/2008fa/homework/hw1_soln.pdf
References
A thread shares with other threads a process’s (to which it belongs to) :
Code section
Data section (static + heap)
Address Space
Permissions
Other resources (e.g. files)
A thread is a lightweight process, and every thread has its own stack, registers, and PC (the CPU register that holds the
address of the next instruction to be executed). Only the address space is shared by all threads of a single process.
So, option (B) is the correct answer.
Each process has its own address space.
1. P1 :
Two threads T11 , T12 are created in main.
Both execute foo function and threads don’t wait for each other. Due to explicit locking mechanism here mutual exclusion
is there and hence no race condition inside foo().
y being thread local, both the threads will print the value of y as 1.
Due to the wait in main, the print(x) will happen only after both the threads finish. So, x will have become 2.
PS: Even if x was not assigned 0 explicitly in C all global and static variables are initialized to 0 value.
Suppose wait is removed from the main(). Then the possible x values can be 0, 1, 2 as the main thread as well as the two
created threads can execute in any order.
Suppose locking mechanism is removed from foo() and assignments are not atomic. (If increment is atomic here, then
locking is not required). Then race condition can happen and so one of the increments can overwrite the other. So, in main,
x value printed can be either 1 or 2.
Now suppose we had just one process which does a fork() inside main before creating the threads. How the answer should
change?
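The two-thread analysis above can be sketched with Python threads. This is a sketch of one process (P1); the ys list is test scaffolding standing in for the two print(y) calls.

```python
import threading

x = 0                       # global, shared by the two threads
L1 = threading.Lock()       # global lock, stands in for Lock L1
ys = []                     # records each thread's local y (the print(y) values)

def foo():
    global x
    y = 0                   # thread-local
    with L1:                # Acquire L1 ... Release L1
        x = x + 1
        y = y + 1
    ys.append(y)            # stands in for print(y)

t1 = threading.Thread(target=foo)
t2 = threading.Thread(target=foo)
t1.start(); t2.start()
t1.join(); t2.join()        # "wait for the two threads to finish execution"

# With the lock, both increments of x survive; each thread's y is 1.
assert x == 2 and ys == [1, 1]
```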
Answer: (A). The stack is not shared: each thread has its own stack, while the address space, file descriptor table, and message
queues are shared by the threads of a process.
29 votes -- Sankaranarayanan P.N (8.5k points)
5.22.1 Virtual Memory: GATE CSE 1989 | Question: 2-iv top☝ ☛ https://gateoverflow.in/87081
Answer ☟
5.22.2 Virtual Memory: GATE CSE 1990 | Question: 1-v top☝ ☛ https://gateoverflow.in/83833
Under paged memory management scheme, simple lock and key memory protection arrangement may still be required if the
_________ processors do not have address mapping hardware.
Answer ☟
5.22.3 Virtual Memory: GATE CSE 1990 | Question: 7-b top☝ ☛ https://gateoverflow.in/85404
In a two-level virtual memory, the memory access time for main memory, tM = 10^−8 sec, and the memory access time for
the secondary memory, tD = 10^−3 sec. What must be the hit ratio, H, such that the access efficiency is within 80 percent of its
maximum value?
Answer ☟
5.22.4 Virtual Memory: GATE CSE 1991 | Question: 03-xi top☝ ☛ https://gateoverflow.in/525
Indicate all the false statements from the statements given below:
A. The amount of virtual memory available is limited by the availability of the secondary memory
B. Any implementation of a critical section requires the use of an indivisible machine- instruction ,such as test-and-set.
Answer ☟
5.22.5 Virtual Memory: GATE CSE 1994 | Question: 1.21 top☝ ☛ https://gateoverflow.in/2464
A. Macro definitions cannot appear within other macro definitions in assembly language programs
B. Overlaying is used to run a program which is longer than the address space of a computer
C. Virtual memory can be used to accommodate a program which is longer than the address space of a computer
D. It is not possible to write interrupt service routines in a high level language
Answer ☟
5.22.6 Virtual Memory: GATE CSE 1995 | Question: 1.7 top☝ ☛ https://gateoverflow.in/2594
In a paged segmented scheme of memory management, the segment table itself must have a page table because
Answer ☟
5.22.7 Virtual Memory: GATE CSE 1995 | Question: 2.16 top☝ ☛ https://gateoverflow.in/2628
In a virtual memory system the address space specified by the address lines of the CPU must be _____ than the physical
memory size and ____ than the secondary storage size.
A. smaller, smaller
B. smaller, larger
C. larger, smaller
D. larger, larger
Answer ☟
A demand paged virtual memory system uses 16 bit virtual address, page size of 256 bytes, and has 1 Kbyte of main memory.
LRU page replacement is implemented using the list, whose current status (page number is decimal) is
Answer ☟
5.22.9 Virtual Memory: GATE CSE 1998 | Question: 2.18, UGCNET-June2012-III: 48 top☝ ☛ https://gateoverflow.in/1691
If an instruction takes i microseconds and a page fault takes an additional j microseconds, the effective instruction time if on
the average a page fault occurs every k instruction is:
A. i + j/k
B. i + (j × k)
C. (i + j)/k
D. (i + j) × k
Answer ☟
A certain computer system has the segmented paging architecture for virtual memory. The memory is byte addressable. Both
virtual and physical address spaces contain 2^16 bytes each. The virtual address space is divided into 8 non-overlapping equal size
segments. The memory management unit (MMU) has a hardware segment table, each entry of which contains the physical address
of the page table for the segment. Page tables are stored in the main memory and consist of 2 byte page table entries.
a. What is the minimum page size in bytes so that the page table for a segment requires at most one page to store it? Assume that
the page size can only be a power of 2.
b. Now suppose that the page size is 512 bytes. It is proposed to provide a TLB (Translation look-aside buffer) for speeding up
address translation. The proposed TLB will be capable of storing page table entries for 16 recently referenced virtual pages, in
a fast cache that will use the direct mapping scheme. What is the number of tag bits that will need to be associated with each
cache entry?
c. Assume that each page table entry contains (besides other information) 1 valid bit, 3 bits for page protection and 1 dirty bit.
How many bits are available in page table entry for storing the aging information for the page? Assume that the page size is
512 bytes.
Answer ☟
5.22.11 Virtual Memory: GATE CSE 1999 | Question: 2.10 top☝ ☛ https://gateoverflow.in/1488
A multi-user, multi-processing operating system cannot be implemented on hardware that does not support
A. Address translation
B. DMA for disk transfer
C. At least two modes of CPU execution (privileged and non-privileged)
D. Demand paging
Answer ☟
Answer ☟
5.22.13 Virtual Memory: GATE CSE 2000 | Question: 2.22 top☝ ☛ https://gateoverflow.in/669
Suppose the time to service a page fault is on the average 10 milliseconds, while a memory access takes 1 microsecond. Then
a 99.99% hit ratio results in average memory access time of
A. 1.9999 milliseconds
B. 1 millisecond
C. 9.999 microseconds
D. 1.9999 microseconds
Answer ☟
5.22.14 Virtual Memory: GATE CSE 2001 | Question: 1.20 top☝ ☛ https://gateoverflow.in/713
Answer ☟
5.22.15 Virtual Memory: GATE CSE 2001 | Question: 1.8 top☝ ☛ https://gateoverflow.in/701
A. Virtual memory implements the translation of a program's address space into physical memory address space
B. Virtual memory allows each program to exceed the size of the primary memory
C. Virtual memory increases the degree of multiprogramming
D. Virtual memory reduces the context switching overhead
Answer ☟
5.22.16 Virtual Memory: GATE CSE 2001 | Question: 2.21 top☝ ☛ https://gateoverflow.in/739
Consider a machine with 64 MB physical memory and a 32-bit virtual address space. If the page size s 4 KB, what is the
approximate size of the page table?
A. 16 MB
B. 8 MB
C. 2 MB
D. 24 MB
A computer uses 32 − bit virtual address, and 32 − bit physical address. The physical memory is byte addressable, and the
page size is 4 Kbytes. It is decided to use two level page tables to translate from virtual address to physical address. Equal number
of bits should be used for indexing first level and second level page table, and the size of each table entry is 4 bytes.
A. Give a diagram showing how a virtual address would be translated to a physical address.
B. What is the number of page table entries that can be contained in each page?
C. How many bits are available for storing protection and other information in each page table entry?
Answer ☟
In a system with 32 bit virtual addresses and 1 KB page size, use of one-level page tables for virtual to physical address
translation is not practical because of
Answer ☟
A processor uses 2 − level page tables for virtual to physical address translation. Page tables for both levels are stored in the
main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address
translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits
are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the
page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-
aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page
numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache
access time is 1 ns, and TLB access time is also 1 ns.
Assuming that no page faults occur, the average time taken to access a virtual address is approximately (to the nearest 0.5 ns)
A. 1.5 ns
B. 2 ns
C. 3 ns
D. 4 ns
Answer ☟
A processor uses 2-level page tables for virtual to physical address translation. Page tables for both levels are stored in the
main memory. Virtual and physical addresses are both 32 bits wide. The memory is byte addressable. For virtual to physical address
translation, the 10 most significant bits of the virtual address are used as index into the first level page table while the next 10 bits
are used as index into the second level page table. The 12 least significant bits of the virtual address are used as offset within the
page. Assume that the page table entries in both levels of page tables are 4 bytes wide. Further, the processor has a translation look-
aside buffer (TLB), with a hit rate of 96%. The TLB caches recently used virtual page numbers and the corresponding physical page
numbers. The processor also has a physically addressed cache with a hit rate of 90%. Main memory access time is 10 ns, cache
access time is 1 ns, and TLB access time is also 1 ns.
Suppose a process has only the following pages in its virtual address space: two contiguous code pages starting at virtual address
0x00000000 , two contiguous data pages starting at virtual address 0x00400000 , and a stack page starting at virtual address
0xFFFFF000 . The amount of memory required for storing the page tables of this process is
8 KB
Answer ☟
5.22.21 Virtual Memory: GATE CSE 2006 | Question: 62, ISRO2016-50 top☝ ☛ https://gateoverflow.in/1840
A CPU generates 32-bit virtual addresses. The page size is 4 KB. The processor has a translation look-aside buffer (TLB)
which can hold a total of 128 page table entries and is 4-way set associative. The minimum size of the TLB tag is:
A. 11 bits
B. 13 bits
C. 15 bits
D. 20 bits
Answer ☟
5.22.22 Virtual Memory: GATE CSE 2006 | Question: 63, UGCNET-June2012-III: 45 top☝ ☛ https://gateoverflow.in/1841
A computer system supports 32-bit virtual addresses as well as 32-bit physical addresses. Since the virtual address space is of
the same size as the physical address space, the operating system designers decide to get rid of the virtual memory entirely. Which
one of the following is true?
Answer ☟
A processor uses 36 bit physical address and 32 bit virtual addresses, with a page frame size of 4 Kbytes. Each page table
entry is of size 4 bytes. A three level page table is used for virtual to physical address translation, where the virtual address is used
as follows:
Bits 30 − 31 are used to index into the first level page table.
Bits 21 − 29 are used to index into the 2nd level page table.
Bits 12 − 20 are used to index into the 3rd level page table.
Bits 0 − 11 are used as offset within the page.
The number of bits required for addressing the next level page table(or page frame) in the page table entry of the first, second and
third level page tables are respectively
A. 20,20,20
B. 24,24,24
C. 24,24,20
D. 25,25,24
Answer ☟
Answer ☟
A multilevel page table is preferred in comparison to a single level page table for translating virtual address to physical
address because
Answer ☟
5.22.26 Virtual Memory: GATE CSE 2011 | Question: 20, UGCNET-June2013-II: 48 top☝ ☛ https://gateoverflow.in/2122
Let the page fault service time be 10 milliseconds (ms) in a computer with average memory access time being 20 nanoseconds
(ns). If one page fault is generated every 10^6 memory accesses, what is the effective access time for memory?
A. 21 ns
B. 30 ns
C. 23 ns
D. 35 ns
Answer ☟
A computer uses 46 − bit virtual address, 32 − bit physical address, and a three–level paged page table organization. The
page table base register stores the base address of the first-level table (T 1), which occupies exactly one page. Each entry of T 1
stores the base address of a page of the second-level table (T 2). Each entry of T 2 stores the base address of a page of the third-level
table (T 3). Each entry of T 3 stores a page table entry (PT E ). The PT E is 32 bits in size. The processor used in the computer has
a 1 MB 16 way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.
Answer ☟
A computer uses 46 − bit virtual address, 32 − bit physical address, and a three–level paged page table organization. The
page table base register stores the base address of the first-level table (T 1), which occupies exactly one page. Each entry of T 1
stores the base address of a page of the second-level table (T 2). Each entry of T 2 stores the base address of a page of the third-level
table (T 3). Each entry of T 3 stores a page table entry (PT E ). The PT E is 32 bits in size. The processor used in the computer has
a 1 MB 16 way set associative virtually indexed physically tagged cache. The cache block size is 64 bytes.
Answer ☟
5.22.29 Virtual Memory: GATE CSE 2014 Set 3 | Question: 33 top☝ ☛ https://gateoverflow.in/2067
Consider a paging hardware with a T LB. Assume that the entire page table and all the pages are in the physical memory. It
takes 10 milliseconds to search the T LB and 80 milliseconds to access the physical memory. If the T LB hit ratio is 0.6 , the
effective memory access time (in milliseconds) is _________.
Answer ☟
5.22.30 Virtual Memory: GATE CSE 2015 Set 1 | Question: 12 top☝ ☛ https://gateoverflow.in/8186
Consider a system with byte-addressable memory, 32 − bit logical addresses, 4 kilobyte page size and page table entries of 4
bytes each. The size of the page table in the system in megabytes is_________________.
Answer ☟
5.22.31 Virtual Memory: GATE CSE 2015 Set 2 | Question: 25 top☝ ☛ https://gateoverflow.in/8120
A computer system implements a 40 − bit virtual address, page size of 8 kilobytes , and a 128 − entry translation look-
aside buffer (T LB) organized into 32 sets each having 4 ways. Assume that the T LB tag does not store any process id. The
minimum length of the T LB tag in bits is ____.
Answer ☟
5.22.32 Virtual Memory: GATE CSE 2015 Set 2 | Question: 47 top☝ ☛ https://gateoverflow.in/8247
A computer system implements 8 kilobyte pages and a 32 − bit physical address space. Each page table entry contains a
valid bit, a dirty bit, three permission bits, and the translation. If the maximum size of the page table of a process is 24 megabytes ,
the length of the virtual address supported by the system is _______ bits.
Answer ☟
5.22.33 Virtual Memory: GATE CSE 2016 Set 1 | Question: 47 top☝ ☛ https://gateoverflow.in/39690
Consider a computer system with 40-bit virtual addressing and page size of sixteen kilobytes. If the computer system has a
one-level page table per process and each page table entry requires 48 bits, then the size of the per-process page table is __________
megabytes.
Answer ☟
Consider a process executing on an operating system that uses demand paging. The average time for a memory access in the
system is M units if the corresponding memory page is available in memory, and D units if the memory access causes a page fault.
It has been experimentally measured that the average time taken for a memory access in the process is X units. Which one of
the following is the correct expression for the page fault rate experienced by the process?
A. (D − M)/(X − M)
B. (X − M)/(D − M)
C. (D − X)/(D − M)
D. (X − M)/(D − X)
Answer ☟
Assume that in a certain computer, the virtual addresses are 64 bits long and the physical addresses are 48 bits long. The
memory is word addressable. The page size is 8 kB and the word size is 4 bytes. The Translation Look-aside Buffer (TLB) in the
address translation path has 128 valid entries. At most how many distinct virtual addresses can be translated without any TLB miss?
A. 16 × 210
B. 256 × 210
C. 4 × 220
D. 8 × 220
Answer ☟
Consider a paging system that uses 1-level page table residing in main memory and a TLB for address translation. Each main
memory access takes 100 ns and TLB lookup takes 20 ns. Each page transfer to/from the disk takes 5000 ns. Assume that the TLB
hit ratio is 95%, page fault rate is 10%. Assume that for 20% of the total page faults, a dirty page has to be written back to disk
before the required page is read from disk. TLB update time is negligible. The average memory access time in ns (round off to 1
decimal place) is ___________
Answer ☟
In a virtual memory system, size of the virtual address is 32-bit, size of the physical address is 30-bit, page size is 4 Kbyte and
size of each page table entry is 32-bit. The main memory is byte addressable. Which one of the following is the maximum number
of bits that can be used for storing protection and other information in each page table entry?
A. 2
B. 10
C. 12
D. 14
Answer ☟
A paging scheme uses a Translation Look-aside Buffer (TLB). A TLB-access takes 10 ns and the main memory access takes
50 ns. What is the effective access time(in ns) if the TLB hit ratio is 90% and there is no page-fault?
A. 54
B. 60
C. 65
D. 75
Match the following flag bits used in the context of virtual memory management on the left side with the different purposes on
the right side of the table below.
Answer ☟
5.22.1 Virtual Memory: GATE CSE 1989 | Question: 2-iv top☝ ☛ https://gateoverflow.in/87081
https://gateoverflow.in/3304/difference-between-translation-buffer-translation-buffer
References
5.22.2 Virtual Memory: GATE CSE 1990 | Question: 1-v top☝ ☛ https://gateoverflow.in/83833
I/O processors, because the processor will issue addresses for the device controller, and without translation hardware
those addresses cannot be translated for the device.
14 votes -- ashish gusai (523 points)
5.22.3 Virtual Memory: GATE CSE 1990 | Question: 7-b top☝ ☛ https://gateoverflow.in/85404
In 2-level virtual memory, for every memory access, we need 2 page table accesses (TLB is missing in the question) and 1
memory access for data. In the question TLB is not mentioned (old architecture). So, best case memory access time
= 3 × 10^−8 s.
We are given
0.6 × 10^−8 = 0.8 × 10^−3 − 0.8h × 10^−3
⟹ h = (8 × 10^−4 − 6 × 10^−9) / (8 × 10^−4) = 1 − 0.75 × 10^−5 ≈ 99.99%
5.22.4 Virtual Memory: GATE CSE 1991 | Question: 03-xi top☝ ☛ https://gateoverflow.in/525
A. True.
B. This is false. Example: Peterson's solution is a purely software-based solution without the use of
hardware. https://en.wikipedia.org/wiki/Peterson's_algorithm
C. False. Reference: https://en.wikipedia.org/wiki/Monitor_(synchronization)
D. True. This will happen if the page getting replaced is immediately referred to in the next cycle.
E. False. Memory can get fragmented with the best fit.
References
5.22.5 Virtual Memory: GATE CSE 1994 | Question: 1.21 top☝ ☛ https://gateoverflow.in/2464
A. True.
B. False. Overlaying is used to make better use of memory when physical memory is limited on systems where virtual
memory is absent. But it cannot increase the (logical) address space of a computer.
C. False. Like the above, this is true for physical memory, but here "address space" is specified, which should mean logical
address space.
D. False. We can write it in a high-level language; just that the performance will be bad.
References
5.22.6 Virtual Memory: GATE CSE 1995 | Question: 1.7 top☝ ☛ https://gateoverflow.in/2594
Option (B) is true for segmented paging (the segment size becomes large, so paging is done on each segment), which is different
from paged segmentation (the segment table size becomes large, so paging is done on the segment table).
Here option (A) is true, as segment tables are sometimes too large to keep in one page. So, the segment table is divided into
pages, and a page table for the segment table's pages is created.
For reference , read below :
https://stackoverflow.com/questions/16643180/differences-or-similarities-between-segmented-paging-and-paged-segmentation
Differences or similarities between Segmented paging and Paged segmentation scheme.
References
5.22.7 Virtual Memory: GATE CSE 1995 | Question: 2.16 top☝ ☛ https://gateoverflow.in/2628
Answer is (C).
Given that page size is 256 bytes (2^8) and Main memory (MM) is 1 KB (2^10).
So total number of pages that can be accommodated in MM = 2^10 / 2^8 = 4.
So, essentially, there are 4 frames that can be used for paging (or page replacements).
The current sequence of pages in memory shows 3 pages (17, 1, 63). So, there is 1 empty frame left. It also says that the least
recently used page is 17.
Now, since the page size is 256 B (2^8) and the virtual address is 16 bits, we can say that 8 bits are used for the offset.
The given address sequence, in hexadecimal, can be divided accordingly:
We only need the Page numbers, which can be represented in decimal as: 0, 1, 16, 17.
Now, if we apply the LRU algorithm to the existing frames with these incoming pages, we get the following states:

Page | Result | Frames after access
  0  |  Miss  | 17   1  63   0
  1  |  Hit   | 17   1  63   0
 16  |  Miss  | 16   1  63   0
 17  |  Miss  | 16   1  17   0
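The LRU bookkeeping above can be checked with a short simulation. This is a sketch, not part of the original answer; the initial frame contents and recency order (17 least recently used) are taken from the question.

```python
def simulate_lru(frames, trace, capacity=4):
    """frames is ordered least- to most-recently used."""
    hits = 0
    for page in trace:
        if page in frames:
            hits += 1
            frames.remove(page)      # refresh recency
            frames.append(page)
        else:
            if len(frames) >= capacity:
                frames.pop(0)        # evict the LRU page
            frames.append(page)
    return hits, frames

# Initial state: 17 is LRU, then 1, then 63, plus one empty frame.
# Trace pages (high byte of each 16-bit address): 0, 1, 16, 17.
hits, final = simulate_lru([17, 1, 63], [0, 1, 16, 17])
print(hits, sorted(final))   # 1 [0, 1, 16, 17]
```

Only page 1 hits; pages 16 and 17 fault and evict 17 and 63, matching the table above.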
5.22.9 Virtual Memory: GATE CSE 1998 | Question: 2.18, UGCNET-June2012-III: 48 top☝ ☛ https://gateoverflow.in/1691
Page fault rate = 1/k
Page hit rate = 1 − 1/k
Service time = i; a page fault adds j. So, effective instruction time
= (1/k) × (i + j) + (1 − 1/k) × i
= (i + j)/k + i − i/k
= i/k + j/k + i − i/k
= i + j/k
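The closed form i + j/k can be cross-checked against the expectation it was derived from, using exact rational arithmetic (the sample values below are illustrative only):

```python
from fractions import Fraction

def effective_time(i, j, k):
    # One in every k instructions faults: weight the faulting case (i + j)
    # by 1/k and the normal case (i) by 1 - 1/k.
    f = Fraction(1, k)
    return f * (i + j) + (1 - f) * i

# The closed form i + j/k matches exactly:
assert effective_time(2, 100, 10) == 2 + Fraction(100, 10)   # = 12
assert effective_time(3, 7, 7) == 3 + Fraction(7, 7)         # = 4
```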
a. Size of each segment = 2^16 / 8 = 2^13
We need a page table entry for each page. For a segment of size 2^13, the number of pages required will be
2^(13−k) and so we need 2^(13−k) page table entries. Now, the size of these many entries must be less than or equal to the page
size, for the page table of a segment to require at most one page. So,
k = 7 bits
Each segment will have 2^13 / 2^9 = 2^4 page table entries.
So, all page table entries of a segment will reside in the cache, and the segment number will differentiate between the page table
entries of each segment in the TLB cache.
Total segments = 8
c. Number of pages for a segment = 2^16 / 2^9 = 2^7
Bits needed for page frame identification
= 7 bits
+ 1 valid bit
+ 3 page protection bits
+ 1 dirty bit
= 12 bits needed for a page table entry
Answer should be both (A) and (C) (Earlier GATE questions had multiple answers and marks were given only if all
correct answers were selected).
Address translation is needed to provide memory protection so that a given process does not interfere with another. Otherwise we
must fix the number of processors to some limit and divide the memory space among them -- which is not an "efficient"
mechanism.
We also need at least 2 modes of execution to ensure user processes share resources properly and OS maintains control. This is
not required for a single user OS like early version of MS-DOS.
Demand paging and DMA enhances the performances- not a strict necessity.
Ref: Hardware protection section in Galvin
49 votes -- Arjun Suresh (332k points)
5.22.12 Virtual Memory: GATE CSE 1999 | Question: 2.11 top☝ ☛ https://gateoverflow.in/1489
Virtual memory provides an interface through which processes access the physical memory. So,
B. Is true as without virtual memory it is difficult to give protected address space to processes as they will be accessing
physical memory directly. No protection mechanism can be done inside the physical memory as processes are dynamic and
number of processes changes from time to time.
D. This is one primary use of virtual memory. Virtual memory allows a process to run using a virtual address space and as
and when memory space is required, pages are swapped in/out from the disk if physical memory gets full.
5.22.13 Virtual Memory: GATE CSE 2000 | Question: 2.22 top☝ ☛ https://gateoverflow.in/669
Since nothing is told about page tables, we can assume page table access time is included in memory access time.
Correct Answer: D
46 votes -- Arjun Suresh (332k points)
5.22.14 Virtual Memory: GATE CSE 2001 | Question: 1.20 top☝ ☛ https://gateoverflow.in/713
Option (B) is correct.
Swap space is the area on a hard disk which is part of the Virtual Memory of your machine, which is a combination of accessible
physical memory (RAM) and the swap space. Swap space temporarily holds memory pages that are inactive. Swap space is used
when your system decides that it needs physical memory for active processes and there is insufficient unused physical memory
available. If the system happens to need more memory resources or space, inactive pages in physical memory are then moved to
the swap space therefore freeing up that physical memory for other uses. Note that the access time for swap is slower therefore
do not consider it to be a complete replacement for the physical memory. Swap space can be a dedicated swap partition
(recommended), a swap file, or a combination of swap partitions and swap files.
(D) should be the answer.
(B), (C) - The main advantage of VM is the increased address space for programs, and independence of address space, which
allows more degree of multiprogramming as well as option for process security.
(D) - VM requires switching of page tables (this is done very fast via switching of pointers) for the new process and thus it is
theoretically slower than without VM. In anyway VM doesn't directly decrease the context switching overhead.
5.22.16 Virtual Memory: GATE CSE 2001 | Question: 2.21 top☝ ☛ https://gateoverflow.in/739
Number of pages = 2^32 / 4 KB = 2^20, as we need to map every possible virtual address.
So, we need 2^20 entries in the page table. Physical memory being 64 MB, a physical address must be 26 bits, and a page (of size
4 KB) address needs 26 − 12 = 14 address bits. So, each page table entry must be at least 14 bits.
So, total size of page table = 2^20 × 14 bits ≈ 2 MB (assuming PTE is 2 bytes)
Correct Answer: C
54 votes -- Arjun Suresh (332k points)
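The arithmetic above can be sketched as follows (variable names are mine; the rounding of a 14-bit entry up to 2 bytes follows the answer's assumption):

```python
import math

VA_BITS   = 32
PAGE_SIZE = 4 * 1024          # 4 KB
PHYS_MEM  = 64 * 1024 * 1024  # 64 MB

offset_bits = PAGE_SIZE.bit_length() - 1          # 12
entries     = 2 ** (VA_BITS - offset_bits)        # 2^20 pages to map
pa_bits     = PHYS_MEM.bit_length() - 1           # 26
frame_bits  = pa_bits - offset_bits               # 14 bits per entry, minimum
pte_bytes   = math.ceil(frame_bits / 8)           # rounded up to 2 bytes
table_bytes = entries * pte_bytes

print(frame_bits, table_bytes // (1024 * 1024))   # 14 2  -> 2 MB
```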
VA = 32 bits
PA = 32 bits
Page size = 4 KB = 2^12 B
PTE = 4 B
Number of page table entries = 2^32 / 2^12 = 2^20
Bits needed for the frame number = log2(2^32 / (2^10 × 4)) = log2(2^20) = 20 bits
So here also, the no. of bits available for storing protection and other information = 32 − 20 = 12 bits.
A. Internal fragmentation exists only in the last level of paging.
B. There is no External fragmentation in the paging.
C. 2^32 / 2^10 = 2^22 = 4M entries in the page table, which is very large. (Answer)
D. Not much relevant.
It's given cache is physically addressed. So, address translation is needed for all memory accesses. (I assume page
table lookup happens after TLB is missed, and main memory lookup after cache is missed.)
Average access time = Average address translation time + Average memory access time
= 1 ns (TLB is accessed for all accesses)
+ 2 × 10 × 0.04 (2 page tables accessed from main memory in case of TLB miss)
+ Average memory access time
= 1.8 ns + Cache access time + Average main memory access time
= 1.8 ns + 1 × 0.9 (90% cache hit)
+ 0.1 × (10 + 1) (main memory is accessed for cache misses only)
= 1.8 ns + 0.9 + 1.1
= 3.8 ns
We assumed that page table is in main memory and not cached. This is given in question also, though they do not explicitly say
that page tables are not cached. But in practice this is common as given here. So, in such a system,
Average address translation time
= 1 ns (TLB is accessed for all accesses)
+ 2 × 0.04 × [0.9 × 1 + 0.1 × 10] (2 page tables accessed in case of TLB miss, and they go through cache)
= 1 ns + 0.08 × 1.9
= 1.152 ns
and average memory access time = 1.152 ns + 2 ns = 3.152 ns
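Both averages can be reproduced with a short sketch, using the question's numbers (1 ns TLB, 4% TLB miss, two page-table lookups, 1 ns cache with 90% hits, 10 ns memory). The split between `data_avg` and `pt_avg` mirrors the two computations: data misses pay memory plus cache time, while page-table accesses through the cache pay memory time only, as assumed above.

```python
tlb, tlb_miss = 1.0, 0.04
cache_hit, cache_t, mem_t = 0.9, 1, 10

data_avg = cache_hit * cache_t + (1 - cache_hit) * (mem_t + cache_t)  # 2.0 ns
pt_avg   = cache_hit * cache_t + (1 - cache_hit) * mem_t              # 1.9 ns

uncached = tlb + 2 * mem_t * tlb_miss + data_avg    # page tables in memory only
cached   = tlb + 2 * tlb_miss * pt_avg + data_avg   # page tables go through cache

print(round(uncached, 3), round(cached, 3))   # 3.8 3.152
```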
If the same thing is repeated now, probably you would get marks for both. 2003 is a long way back -- then page table
caching never existed as given in the SE answers. Since it exists now, IIT profs will make this clear in the question itself.
References
First level page table is addressed using 10 bits and hence contains 2^10 entries. Each entry is 4 bytes and hence this table
requires 4 KB. Now, the process uses only 3 unique entries from these 1024 possible entries (two code pages starting from
0x00000000 and two data pages starting from 0x00400000 have the same first 10 bits). Hence, there are only 3 second level page
tables. Each of these second level page tables is also addressed using 10 bits and hence of size 4 KB. So,
total page table size of the process
= 4 KB + 3 * 4 KB
= 16 KB
Correct Answer: C
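The count of second-level tables is just the count of distinct top-10-bit indices among the used pages. In the sketch below, the code and data bases come from the answer above; the third region's address is a placeholder assumption (any page whose top 10 bits differ from the other two regions, e.g. a stack page):

```python
PAGE = 4 * 1024
regions = [0x00000000, 0x00000000 + PAGE,   # two code pages
           0x00400000, 0x00400000 + PAGE,   # two data pages
           0xFFFFF000]                      # assumed third region (placeholder)

top10 = {addr >> 22 for addr in regions}    # first-level index = bits 31..22
second_level_tables = len(top10)
total = 4 + second_level_tables * 4         # first-level 4 KB + one 4 KB table each
print(second_level_tables, total)           # 3 16  -> 16 KB in total
```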
5.22.21 Virtual Memory: GATE CSE 2006 | Question: 62, ISRO2016-50 top☝ ☛ https://gateoverflow.in/1840
The page size of 4 KB. So, offset bits are 12 bits.
So, the remaining bits of virtual address, 32 − 12 = 20 bits, will be used for indexing.
Correct option C.
50 votes -- Vicky Bajoria (4.1k points)
5.22.22 Virtual Memory: GATE CSE 2006 | Question: 63, UGCNET-June2012-III: 45 top☝ ☛ https://gateoverflow.in/1841
A is the best answer here.
Virtual memory provides
So, when we don't need more address space, even if we get rid of virtual memory, we need hardware support for the other two.
Without hardware support for memory protection and relocation, we can design a system (by either doing them in software or by
partitioning the memory for different users) but those are highly inefficient mechanisms. i.e., there we have to divide the physical
memory equally among all users and this limits the memory usage per user and also restricts the maximum number of users.
Physical address is 36 bits. So, number of bits to represent a page frame = 36 − 12 = 24 bits (12 offset bits as given in
question to address 4 KB assuming byte addressing). So, each entry in a third level page table must have 24 bits for addressing
the page frames.
A page in logical address space corresponds to a page frame in physical address space. So, in logical address space also we need
12 bits as offset bits. From the logical address which is of 32 bits, we are now left with 32 − 12 = 20 bits ; these 20 bits will be
divided into three partitions (as given in the question) so that each partition represents 'which entry' in the i-th level page table we
are referring to (that entry points to an (i + 1)-th level page table).
Now, there is only 1 first level page table. But there can be many second level and third level page tables and "how many" of
these exist depends on the physical memory capacity. (In actual the no. of such page tables depend on the memory usage of a
given process, but for addressing we need to consider the worst case scenario). The simple formula for getting the number of
page tables possible at a level is to divide the available physical memory size by the size of a given level page table.
Number of third level page tables possible = Physical memory size / Size of a third level page table
= 2^36 / (Number of entries in a single third level page table × Size of an entry)
= 2^36 / (2^9 × 4)  (∵ bits 12-20 give 9 bits)
= 2^36 / 2^11
= 2^25
PS: No. of third level page tables possible means the no. of distinct addresses a page table can have. At any given time, no. of
page tables at level j is equal to the no. of entries in the level j − 1 , but here we are considering the possible page table
addresses.
http://www.cs.utexas.edu/~lorenzo/corsi/cs372/06F/hw/3sol.html See Problem 3, second part solution - It clearly says that we
should not assume that page tables are page aligned (page table size need not be same as page size unless told so in the question
and different level page tables can have different sizes).
So, we need 25 bits in second level page table for addressing the third level page tables.
Similarly we need to find the no. of possible second level page tables and we need to address each of them in first level page
table.
Now,
Number of second level page tables possible = Physical memory size / Size of a second level page table
= 2^36 / (Number of entries in a single second level page table × Size of an entry)
= 2^36 / (2^9 × 4)  (∵ bits 21-29 give 9 bits)
= 2^36 / 2^11
= 2^25
So, we need 25 bits for addressing the second level page tables as well.
It is (B).
Option (B).
A. It reduces the memory access time to read or write a memory location. -> No, this is false. Actually, because of multi-level
paging, the memory access time only increases (extra page table lookups are needed on a TLB miss).
5.22.26 Virtual Memory: GATE CSE 2011 | Question: 20, UGCNET-June2013-II: 48 top☝ ☛ https://gateoverflow.in/2122
Open slides 12-13 to check :
http://web.cs.ucla.edu/~ani/classes/cs111.08w/Notes/Lecture%2016.pdf
EMAT = (1/10^6) × 10 ms + (1 − 1/10^6) × 20 ns
= 29.99998 ns
≈ 30 ns
Answer = option B
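Converting everything to ns, the same computation is (a fault every 10^6 accesses costs 10 ms = 10^7 ns):

```python
fault_rate = 1 / 10**6
service_ns = 10 * 10**6      # 10 ms page-fault service time, in ns
access_ns  = 20              # normal memory access, in ns
emat = fault_rate * service_ns + (1 - fault_rate) * access_ns
print(round(emat, 5))   # 29.99998
```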
References
Let the page size be x.
Since virtual address is 46 bits, we have total number of pages = 2^46 / x
We should have an entry for each page in the last level page table, which here is T3. So,
Number of entries in T3 (sum of entries across all possible T3 tables) = 2^46 / x
Each entry takes 32 bits = 4 bytes. So, total size of T3 tables = (2^46 / x) × 4 = 2^48 / x bytes
Now, no. of T3 tables will be (total size of T3 tables) / (page table size), and for each of these page tables, we must have a T2 entry.
Taking T3 size as page size, no. of entries across all T2 tables = (2^48 / x) / x = 2^48 / x^2
Total size of T2 tables = (2^48 / x^2) × 4 = 2^50 / x^2 bytes
Now, no. of T2 tables (assuming T2 size as page size) = (2^50 / x^2) / x = 2^50 / x^3
Now, for each of these page tables, we must have an entry in T1. So, number of entries in T1 = 2^50 / x^3
And size of T1 = (2^50 / x^3) × 4 = 2^52 / x^3
Since T1 must also fit in one page,
x = 2^52 / x^3
⟹ x^4 = 2^52 ⟹ x = 2^13 = 8 KB
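The final constraint x = 2^52 / x^3 can also be solved mechanically. This sketch assumes, as the derivation does, that the page size is a power of two:

```python
# x^4 = 2^52, so doubling x until the constraint holds finds the page size.
x = 1
while x ** 4 < 2 ** 52:
    x *= 2
assert x ** 4 == 2 ** 52     # the solution is exact, so the loop lands on it
print(x, x // 1024)          # 8192 8  -> 8 KB
```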
Min. no. of page color bits = No. of set index bits + no. of offset bits − no. of page index bits (This ensures no synonym maps
to different sets in the cache)
We have 1 MB cache and 64 B cache block size. So,
number of sets = 1 MB / (64 B × Number of blocks in each set) = 16K / 16 (16-way set associative) = 1K = 2^10.
So, we need 10 index bits. Now, each block being 64 (2^6) bytes means we need 6 offset bits.
And we already found page size = 8 KB = 2^13, so 13 bits to index a page.
Thus, no. of page color bits = 10 + 6 − 13 = 3.
With 3 page color bits we need to have 2^3 = 8 different page colors.
More Explanation:
A synonym is a physical page having multiple virtual addresses referring to it. So, what we want is no two synonym virtual
addresses to map to two different sets, which would mean a physical page could be in two different cache sets. This problem
never occurs in a physically indexed cache as indexing happens via physical address bits and so one physical page can never go
to two different sets in cache. In virtually indexed cache, we can avoid this problem by ensuring that the bits used for locating a
cache block (index+offset) of the virtual and physical addresses are the same.
In our case we have 6 offset bits + 10 bits for indexing. So, we want to make these 16 bits the same for both physical and virtual
addresses. One thing is that the page offset bits (13 bits for an 8 KB page) are always the same for physical and virtual addresses
as they are never translated. So, we don't need to make these 13 bits the same. We have to only make the remaining
10 + 6 − 13 = 3 bits the same. Page coloring is a way to do this. Here, all the physical pages are colored, and a physical page of
one color is mapped to a virtual address by the OS in such a way that a set in cache always gets pages of the same color. So, in
order to make the 3 bits the same, we take all combinations of them (2^3 = 8), color the physical pages with 8 colors, and a
cache set always gets a page of one color only. (In page coloring, it is the job of the OS to ensure that the 3 bits are the same.)
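The color-bit arithmetic above can be sketched directly from the cache geometry (1 MB, 64 B blocks, 16-way) and the 8 KB page size derived earlier:

```python
import math

cache_size = 1 * 1024 * 1024    # 1 MB, virtually indexed
block_size = 64                 # bytes per cache block
ways       = 16                 # 16-way set associative
page_size  = 8 * 1024           # 8 KB, from the earlier derivation

sets        = cache_size // (block_size * ways)     # 1024
index_bits  = int(math.log2(sets))                  # 10
offset_bits = int(math.log2(block_size))            # 6
page_bits   = int(math.log2(page_size))             # 13

color_bits = index_bits + offset_bits - page_bits
print(color_bits, 2 ** color_bits)                  # 3 8  -> 8 page colors
```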
http://ece.umd.edu/courses/enee646.F2007/Cekleov1.pdf
http://cseweb.ucsd.edu/classes/fa14/cse240A-a/pdf/08/CSE240A-MBT-L18-VirtualMemory.ppt.pdf
https://en.wikipedia.org/wiki/CPU_cache#Address_translation
https://en.wikipedia.org/wiki/Cache_coloring
Correct Answer: C
References
5.22.29 Virtual Memory: GATE CSE 2014 Set 3 | Question: 33 top☝ ☛ https://gateoverflow.in/2067
EMAT = TLB hit ratio × (TLB access time + memory access time) + TLB miss ratio × (TLB access time + page table access
time + memory access time)
= 0.6 × (10 + 80) + 0.4 × (10 + 80 + 80)
= 54 + 68
= 122 msec
54 votes -- neha pawar (3.3k points)
5.22.30 Virtual Memory: GATE CSE 2015 Set 1 | Question: 12 top☝ ☛ https://gateoverflow.in/8186
Total no. of pages = 2^32 / 2^12 = 2^20
5.22.31 Virtual Memory: GATE CSE 2015 Set 2 | Question: 25 top☝ ☛ https://gateoverflow.in/8120
Ans 40 − (5 + 13) = 22 bits
TLB maps a virtual address to the physical address of the page. (The lower bits of page address (page offset bits) are not used in
TLB as they are the same for virtual as well as physical addresses). Here, for 8 kB page size we require 13 page offset bits.
In TLB we have 32 sets and so the virtual address space is divided into 32 using 5 set bits. (Associativity doesn't affect the set
bits, as it just adds extra slots in each set.)
8 KB pages means 13 offset bits.
For 32 bit physical address, 32 − 13 = 19 page frame bits must be there in each PTE (Page Table Entry).
We also have 1 valid bit, 1 dirty bit and 3 permission bits.
So, total size of a PTE (Page Table Entry) = 19 + 5 = 24 bits = 3 bytes.
No. of PTEs = Maximum page table size / PTE size = 24 MB / 3 B = 8M
Virtual address space supported = No. of PTEs × Page size (as we need a PTE for each page, assuming single-level paging)
= 8M × 8 KB = 64 GB = 2^36 bytes
So, the virtual address is 36 bits.
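The chain above (24-bit PTE, 8M entries, 36-bit virtual address) can be checked in a few lines:

```python
# PTE = 19 frame bits + 1 valid + 1 dirty + 3 permission = 24 bits = 3 B.
pte_bits  = (32 - 13) + 1 + 1 + 3          # 32-bit PA, 13-bit offset for 8 KB pages
pte_bytes = pte_bits // 8                  # 3
entries   = (24 * 2**20) // pte_bytes      # 24 MB table -> 8M PTEs
va_space  = entries * 8 * 1024             # one 8 KB page per PTE
va_bits   = va_space.bit_length() - 1      # va_space is an exact power of two
print(pte_bits, va_bits)                   # 24 36
```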
5.22.33 Virtual Memory: GATE CSE 2016 Set 1 | Question: 47 top☝ ☛ https://gateoverflow.in/39690
No. of pages (N) = 2^26 = No. of entries in the page table
Page table entry size (E) = 6 bytes
Size of the per-process page table = N × E = 2^26 × 6 B = 384 MB
Let P be the page fault rate.
Average memory access time = (1− page fault rate)× memory access time when no page fault + Page fault rate × Memory
access time when page fault.
X = (1 − P)M + P D
X = M + P(D − M)
P = (X − M)/(D − M)
(B) is the answer.
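The inversion of X = (1 − P)M + PD can be round-tripped with exact arithmetic (the sample values M = 10, D = 1000, P = 1/100 are illustrative only):

```python
from fractions import Fraction

def fault_rate(X, M, D):
    # Invert X = (1 - P)*M + P*D for P, as derived above.
    return (X - M) / Fraction(D - M)

M, D, P = 10, 1000, Fraction(1, 100)
X = (1 - P) * M + P * D          # forward model
assert fault_rate(X, M, D) == P  # recovered exactly
```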
TLB Entry: Page Number Frame Number
Memory is word addressable.
Option B. At most, 256 ∗ 210 distinct virtual addresses can be translated without any TLB miss.
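One way to arrive at option B (my reading of the intended reasoning, which the answer leaves implicit): each of the 128 valid TLB entries maps one page, and with word addressing an 8 KB page of 4-byte words contains 2^11 distinct addresses.

```python
page_size, word_size, tlb_entries = 8 * 1024, 4, 128
words_per_page = page_size // word_size        # 2^11 word addresses per page
covered = tlb_entries * words_per_page         # addresses translatable without a miss
print(covered, covered == 256 * 2 ** 10)       # 262144 True
```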
Given,
1. You need to lookup the page table for the entry and then access the required location, requiring 2 memory accesses -
Assuming No Page fault occurs.
2. If there is a page fault, Then 1 memory access was wasted (you can only know that the page is not present in memory by
checking the corresponding page entry in the page table). 80 % of the time, you'll only be fetching a page from secondary
storage which takes 5000 ns, 20% of the time, you'll need to write a dirty page back to disk and bring the page (which
caused the page fault) back to main memory, requiring 5000 + 5000 ns
For all memory accesses in a system with virtual memory we need Virtual Address to Physical Address translation and
this goes through TLB.
On TLB hit, we get the physical address.
On TLB miss, we have to do page table access which always resides in physical memory (no page fault possible here).
In the question it is given that a 1-level page table is used. So, a TLB miss will need one physical memory access to get the
physical address.
Question mentions page fault rate as 10% and this should default to 10 page faults every 100 memory accesses. (Since
TLB miss rate is 5%, and for a normal program run a TLB hit and a page fault cannot happen for the same memory access (can
happen for invalid memory accesses), it is also possible to consider the page fault rate as 10% of all TLB misses. See the last
part of the answer for this.)
In the question page transfer time is given. This is different from page fault service time which includes the page transfer time +
the memory access time as once the page is filled, a new memory access is initiated.
So, Average Memory Access Time = Address Translation Time + Data Retrieval Time
= TLB access time + TLB miss ratio × Page table access time + Main memory access time + Page fault rate × (Page fill time +
Restarted access time) + Page fault rate × Dirty page ratio × Page write-back time
= 20 + 0.05 × 100 + 100 + 0.1 × (5000 + 20 + 0.05 × 100 + 100) + 0.1 × 0.2 × 5000
= 20 + 5 + 100 + 512.5 + 100
= 737.5 ns
PS: If the question had given page fault service time also as 5000 answer will be
20 + 0.05 × 100 + 0.9 × 100 + 0.1 × 5000 + 0.1 × 0.2 × 5000 = 25 + 90 + 500 + 100 = 715 ns
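The 737.5 ns figure can be reproduced directly; the breakdown below follows the answer's formula (a faulting access is restarted after the page fill, and 20% of faults additionally write back a dirty page):

```python
tlb, mem, disk = 20, 100, 5000        # ns
tlb_miss, fault, dirty = 0.05, 0.10, 0.20

translation = tlb + tlb_miss * mem                 # 25 ns on average
restart     = disk + translation + mem             # full access restarted after fill
amat = translation + mem + fault * restart + fault * dirty * disk
print(round(amat, 1))   # 737.5
```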
"Assume that the TLB hit ratio is 95%, page fault rate is 10%"
If this statement is changed to
"Assume that the TLB hit ratio is 95%, and when TLB miss happens page fault rate is 10%"
Average Memory Access Time = Address Translation Time + Data Retrieval Time
= TLB access time + TLB miss ratio × Page table access time + Main memory access time + TLB miss ratio × Page fault rate ×
(Page fill time + Restarted access time) + TLB miss ratio × Page fault rate × Dirty page ratio × Page write-back time
= 20 + 0.05 × 100 + 100 + 0.05 × 0.1 × (5000 + 20 + 0.05 × 100 + 100) + 0.05 × 0.1 × 0.2 × 5000
= 20 + 5 + 100 + 25.625 + 5
= 155.625 ns
If "memory access being restarted" is ignored for page fault, this will be
= 20 + 0.05 × 100 + 100 + 0.05 × 0.1 × (5000) + 0.05 × 0.1 × 0.2 × 5000
Ideally the answer key should be 715 − 738 due to the confusion in the meaning of page transfer time as most standard
resources use page fault service time instead.
If we assume page fault rate is given "only when TLB miss happens" answer should be 155 − 155.7
A previous year question where page fault rate "per instruction" is clearly mentioned in
question: https://gateoverflow.in/318/gate2004-47. This GATE2020 question is VERY POORLY framed and must be
challenged.
Another similar question where TLB miss is taken as per memory access is given below (See the equation used in 3-e)
https://gateoverflow.in/?qa=blob&qa_blobid=5047954265438465988
References
Answer is (D).
Page table entry must contain bits for representing frames and other bits for storing information like the dirty bit, reference bit, etc.
No. of frames (no. of possible pages) = Physical memory size / Page size = 2^30 / 2^12 = 2^18
So, x = 32 − 18 = 14 bits
Effective access time = hit ratio × time during hit + miss ratio × time during miss
In both cases the TLB is accessed, and the page table is accessed from memory only when the TLB misses.
= 0.9 × (10 + 50) + 0.1 × (10 + 50 + 50)
= 54 + 11 = 65
Correct Answer: C
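The 65 ns result, spelled out with the question's numbers (10 ns TLB, 50 ns memory, 90% TLB hit ratio):

```python
hit = 0.9
# Hit: TLB + memory. Miss: TLB + page table (one memory access) + memory.
eat = hit * (10 + 50) + (1 - hit) * (10 + 50 + 50)
print(round(eat))   # 65
```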
41 votes -- Arjun Suresh (332k points)
Option (D).
Dirty bit : The dirty bit is set when the processor writes to (modifies) this memory. The bit indicates that its associated block of
memory has been modified and has not been saved to storage yet. Dirty bits are used by the CPU cache and in the page
replacement algorithms of an operating system.
R/W bit : If the bit is set, the page is read/write. Otherwise when it is not set, the page is read-only.
Reference bit is used in a version of FIFO called the second chance (SC) policy, in order to avoid replacement of a heavily used
page. It is set to one when a page is used and is periodically reset to 0. Since it is used in a version of FIFO, which is a page
replacement policy, this bit comes under the category of page replacement.
Answer Keys
5.16.2 N/A 5.16.3 N/A 5.16.4 N/A 5.16.5 N/A 5.16.6 N/A