Endsem2018 Soln
Events for state transitions:
Admitted: new → ready
Interrupt: running → ready
Scheduler dispatch: ready → running
I/O or event wait: running → waiting
I/O or event completion: waiting → ready
Exit: running → terminated
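The transitions above can be captured as a small lookup table; a minimal sketch (state and event names are taken directly from the list above):

```python
# The six legal transitions of the five-state process model,
# keyed by (current state, event).
TRANSITIONS = {
    ("new", "admitted"): "ready",
    ("ready", "scheduler dispatch"): "running",
    ("running", "interrupt"): "ready",
    ("running", "I/O or event wait"): "waiting",
    ("waiting", "I/O or event completion"): "ready",
    ("running", "exit"): "terminated",
}

def next_state(state, event):
    """Return the successor state; raises KeyError for an illegal transition."""
    return TRANSITIONS[(state, event)]
```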
(g) Briefly discuss the pros and cons of shared memory and message passing
schemes in the context of inter-process communication.
Shared memory: lower delay (faster access) and less load on the operating system; convenient for processes sharing information within a single system. However, simultaneous modifications of the shared data must be avoided, so synchronization is the programmer's responsibility.
Message passing: efficient for passing messages between processes on different systems; generally used for exchanging small pieces of information. It is slow compared to shared memory because the entire communication takes place through system calls, so the OS is heavily loaded.
(h) Discuss the salient features of ordinary pipes in the context of inter-
process communication.
Unidirectional communication
Parent-child relationship is required for communication
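Both properties can be seen in a minimal sketch using Python's os module (assumes a Unix-like system where fork is available): the pipe carries data one way, and the reader and writer are related as parent and child:

```python
import os

def pipe_demo(msg: bytes) -> bytes:
    """Parent reads what its child writes through an ordinary (anonymous) pipe."""
    r, w = os.pipe()          # one read end, one write end: unidirectional
    pid = os.fork()           # parent-child relationship is required
    if pid == 0:              # child acts as the writer
        os.close(r)
        os.write(w, msg)
        os.close(w)
        os._exit(0)
    os.close(w)               # parent acts as the reader
    data = os.read(r, len(msg))
    os.close(r)
    os.waitpid(pid, 0)
    return data
```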
(1+1+1+2+2+4+2+2 = 15M)
2. (a) How do process and thread creation differ? Why are threads considered
light-weight processes?
A new process is created in a separate address space, whereas threads are created within
the address space of an existing process. Only the CPU registers and the stack are specific
to each thread; the rest is common to all threads of the process.
Because a thread is just an execution entity dealing with limited resources (its stack,
CPU state and program counter), it can be viewed as a light-weight process. A process,
in contrast, has two roles: execution as well as maintaining all of its resources (address
space, global variables, open files, child processes, pending alarms, signals and signal
handlers, accounting information).
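The shared-address-space point can be demonstrated directly. In the sketch below (Python threading, used purely as an illustration), all threads read and write the same global list, while each thread has only its own stack and register state:

```python
import threading

shared = []                    # lives in the process's single address space
lock = threading.Lock()        # simultaneous modification must be synchronized

def worker(n):
    # every thread sees the same `shared` list
    with lock:
        shared.append(n)

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```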
(b) Mention various multi-threading models, and discuss their features.
Many-to-One, One-to-One, Many-to-Many and Two-level
(c) Briefly explain the following in the context of threads: (i) Signal Handling
and (ii) Thread pools.
Signal handling: (i) Deliver the signal to the thread to which the signal applies
(ii)Deliver the signal to every thread in the process (iii) Deliver the signal to certain
threads in the process and (iv) Assign a specific thread to receive all signals for the
process
Thread pools: Create a number of threads in a pool where they await work
Advantages: (i) Usually slightly faster to service a request with an existing thread
than create a new thread (ii) Allows the number of threads in the application(s) to
be bound to the size of the pool
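Both advantages can be sketched with Python's concurrent.futures (one possible realization of a thread pool): the pool creates its worker threads once and reuses them for every request, and the thread count is bounded by max_workers:

```python
from concurrent.futures import ThreadPoolExecutor

def handle(request):
    # stand-in for servicing one request
    return request * 2

# workers are created once and reused; at most 3 threads ever exist in the pool
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(handle, range(5)))
```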
(d) What are preemptive and non-preemptive scheduling mechanisms? Provide
an example of each.
Non-preemptive scheduling: once a process is scheduled on the CPU, it continues to
execute until it terminates or requests I/O (examples: FCFS, SJF).
Preemptive scheduling: the scheduled process can be suspended on an interrupt, signal
or event (examples: round-robin scheduling, priority-based preemptive scheduling).
(e) Mention various parameters (criteria) considered for CPU scheduling.
CPU utilization, throughput, turn-around time, waiting time and response time
(f ) Briefly discuss the following in the context of multi-processor scheduling:
(i) processor affinity, (ii) soft affinity, (iii) hard affinity, (iv) push migra-
tion and (v) pull migration.
Processor affinity: a process has an affinity for the processor on which it is currently
running. If we migrate a process from processor i to processor j, considerable overhead
is involved: the cache built up on processor i must be cleared, and the cache of
processor j has to be re-populated.
Soft affinity: the OS tries to keep a process on the processor for which it has affinity,
but does not guarantee to do so always.
Hard affinity: some systems provide system calls that ensure the process will always be
attached to the desired processor.
Push migration: a task periodically checks the load on each processor and, if it finds an
imbalance, evenly distributes the load by moving (pushing) processes from overloaded
to idle or less-busy processors.
Pull migration: an idle processor pulls a process from the ready queue of a heavily
loaded (busy) processor.
(g) Consider the following set of processes, with the length of the CPU burst
given in milliseconds:
i. Draw four Gantt charts that illustrate the execution of these processes
using the following scheduling algorithms: FCFS, SJF, non-preemptive
priority (a larger priority number implies a higher priority), and RR
(quantum = 2).
ii. What is the turnaround time of each process for each of the scheduling
algorithms in part-i?
FCFS: T1 = 2, T2 = 3, T3 = 11, T4 = 15 and T5 = 20
SJF: T1 = 3, T2 = 1, T3 = 20, T4 = 7 and T5 = 12
NPP: T1 = 15 (19), T2 = 20, T3 = 8, T4 = 19 (17) and T5 = 13
RR: T1 = 2, T2 = 3, T3 = 20, T4 = 13 and T5 = 18
iii. What is the waiting time of each process for each of these scheduling
algorithms?
FCFS: W1 = 0, W2 = 2, W3 = 3, W4 = 11 and W5 = 15; Wav = 6.2
SJF: W1 = 1, W2 = 0, W3 = 12, W4 = 3 and W5 = 7; Wav = 4.6
NPP: W1 = 13 (17), W2 = 19, W3 = 0, W4 = 15 (13) and W5 = 8; Wav =
11 (11.4)
RR: W1 = 0, W2 = 2, W3 = 12, W4 = 9 and W5 = 13; Wav = 7.2
iv. Which of the algorithms results in the minimum average waiting time
(over all processes)?
SJF
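The FCFS, SJF and RR figures above can be checked with a small simulator. The original burst table is not reproduced in this copy, so the burst lengths below (P1=2, P2=1, P3=8, P4=4, P5=5, all arriving at time 0) are an assumption reconstructed from the answer key; the priority schedule is omitted because the priority values are not given here:

```python
from collections import deque

# (name, CPU burst in ms); ASSUMED values inferred from the answers above
procs = [("P1", 2), ("P2", 1), ("P3", 8), ("P4", 4), ("P5", 5)]

def fcfs(procs):
    t, turnaround = 0, {}
    for name, burst in procs:           # run in arrival order, to completion
        t += burst
        turnaround[name] = t            # all processes arrive at time 0
    return turnaround

def sjf(procs):
    # non-preemptive SJF is FCFS over the bursts sorted in ascending order
    return fcfs(sorted(procs, key=lambda p: p[1]))

def rr(procs, quantum=2):
    remaining = dict(procs)
    queue = deque(name for name, _ in procs)
    t, turnaround = 0, {}
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        if remaining[name] == 0:
            turnaround[name] = t
        else:
            queue.append(name)          # preempted: back of the ready queue
    return turnaround

def waiting(turnaround, procs):
    # waiting time = turnaround time - burst time
    return {n: turnaround[n] - b for n, b in procs}
```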
(2+2+2+2+2+5+8 = 23M)
do {
    wait(S);
        // critical section
    signal(S);
        // remainder section
} while (TRUE);
Semaphore operations (with busy-waiting):
wait(S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}
signal(S) {
    S++;
}
Semaphore operations (without busy-waiting):
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}
signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
(e) Illustrate the problem of deadlock using a pair of processes that want to
access a pair of critical sections.
Suppose a pair of processes P1 and P2 want to access a pair of critical sections C1
and C2, protected by semaphores S1 and S2. Consider the following sequence of
operations by the processes:
S1 = S2 = 1;
P1: wait (S1);
P2: wait (S2);
P1: wait (S2);
P2: wait (S1);
.....
.....
P1: signal (S1);
P2: signal (S2);
P1: signal (S2);
P2: signal (S1);
After the first two wait operations, P1 holds S1 and P2 holds S2. P1 then blocks on
wait(S2) and P2 blocks on wait(S1); neither can proceed, so the signal operations are
never reached and the two processes are deadlocked.
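A standard remedy is to impose one global order on semaphore acquisition. In the sketch below (Python threads and threading.Semaphore standing in for the processes and semaphores above), both "processes" request S1 before S2, so no circular wait can form and both finish:

```python
import threading

S1, S2 = threading.Semaphore(1), threading.Semaphore(1)
done = []

def process(name):
    S1.acquire()          # both processes take S1 first...
    S2.acquire()          # ...and only then S2: no circular wait
    done.append(name)     # critical sections C1 and C2
    S2.release()
    S1.release()

workers = [threading.Thread(target=process, args=(n,)) for n in ("P1", "P2")]
for t in workers:
    t.start()
for t in workers:
    t.join()
```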
Mutual exclusion, hold-and-wait, no preemption and circular wait
(g) Consider the following snapshot of a system. P0, P1, P2, P3, P4 are the
processes and A, B, C, D are the resource types. The values in the table
indicate the number of instances of a specific resource type (for example: 3 3
2 1 under the Available column indicates that 3 A-type, 3 B-type, 2
C-type and 1 D-type resources are available after allocating resources
to all five processes). The numbers under the Allocation column indicate
how many resources are allocated to the process named in the first column.
The numbers under the Max column indicate the maximum number of
resources required by each process. For example, in the first row, 2 0 0 1
under the Allocation column indicates that 2 A-type, 0 B-type,
0 C-type and 1 D-type resources are allocated to process P0, whereas 4 2
1 2 under the Max column indicates that process P0's maximum requirement
is 4 A-type, 2 B-type, 1 C-type and 2 D-type resources. Answer the following
questions.
Process    Allocation    Max        Available
           A B C D       A B C D    A B C D
P0         2 0 0 1       4 2 1 2    3 3 2 1
P1         3 1 2 1       5 2 5 2
P2         2 1 0 3       2 3 1 6
P3         1 3 1 2       1 4 2 4
P4         1 4 3 2       3 6 6 5
iii. Verify whether the snapshot of the present system is in a safe state
by demonstrating an order in which the processes may complete.
The given snapshot of the system is in a safe state. With the available resources,
the processes can complete their execution in the sequence P0, P3, followed by
P1, P2 and P4 in any order.
iv. If a request from process P1 arrives for (1,1,0,0), can the request be
granted immediately?
Yes, the request (1 1 0 0) may be granted to P1, and the system will remain
in a safe state. With the available resources, the processes can complete their
execution in the sequence P0, P3, followed by P1, P2 and P4 in any order.
v. If a request from process P4 arrives for (0,0,2,0), can the request be
granted immediately?
No, the request (0 0 2 0) cannot be granted to P4. If the resources were granted,
the system would enter an unsafe state in which no process's maximum demand
can be satisfied from the remaining resources, and deadlock may result.
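Parts iii-v follow from the Banker's algorithm safety check. A compact sketch (function names are illustrative) that reproduces the three answers for the snapshot above:

```python
def safe_sequence(available, allocation, maximum):
    """Return a completion order if the state is safe, else None (Banker's safety check)."""
    need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
    work, finished, order = list(available), [False] * len(allocation), []
    while len(order) < len(allocation):
        for i, row in enumerate(need):
            if not finished[i] and all(n <= w for n, w in zip(row, work)):
                # process i can run to completion and release its allocation
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                order.append(i)
                break
        else:
            return None                 # no process can proceed: unsafe
    return order

def can_grant(request, i, available, allocation, maximum):
    """Pretend to grant `request` to process i, then re-run the safety check."""
    if any(r > a for r, a in zip(request, available)):
        return False
    new_avail = [a - r for a, r in zip(available, request)]
    new_alloc = [row[:] for row in allocation]
    new_alloc[i] = [a + r for a, r in zip(new_alloc[i], request)]
    return safe_sequence(new_avail, new_alloc, maximum) is not None
```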
(h) Briefly discuss the policies to recover from the deadlock.
Process termination and resource preemption
(2+2+4+7+2+2+6+2 = 27M)
4. (a) What are the logical and physical addresses of a process? How is a logical
address converted to a physical address? With an appropriate figure, discuss
how the memory management unit (hardware) protects the memory address
space of other processes and the operating system.
Logical addresses of a process are generated by the CPU; they start from zero for every
process. Physical addresses refer to the address space in physical memory (main
memory) where the process is actually loaded.
For each process there are associated base and limit registers, which contain the
starting physical memory address of the process and the total size of the process.
The physical address is generated from the logical address by adding the contents
of the base register to the logical address.
The generated physical address of a process must be less than the sum of the contents
of the base and limit registers. Otherwise it is treated as an illegal memory access, and
a trap is generated to the OS.
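The base/limit check can be sketched in a few lines (an illustrative model of the MMU's behavior, not real hardware code):

```python
def translate(logical, base, limit):
    """Relocate a logical address by the base register, trapping on illegal access."""
    if not 0 <= logical < limit:
        # physical address would reach or exceed base + limit: trap to the OS
        raise MemoryError("trap to OS: illegal memory access")
    return base + logical
```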
(b) Discuss 3 different stages of address binding.
(i) Compile-time, (ii) Load-time and (iii) Execution-time
(c) Briefly discuss about external and internal fragmentations in the context
of contiguous memory space allocation.
External fragmentation: in contiguous allocation, even though the total available
space may exceed the space required by a process, the process may still be unable to
load into memory because no single contiguous region of the required size is available.
Internal fragmentation: when a block of memory is allocated to a process, the space
left over in the block after the process has been loaded is wasted.
(d) Show the process of paging (conversion of logical address to physical
address) with TLBs using neat diagram.
First the logical address is checked to verify that it lies within the valid logical address
space; if not, the process is terminated. If it is a valid logical address, the page number
is extracted from it and looked up first in the TLB. If the page number is present in
the TLB (a TLB hit), the physical address is computed by adding the offset to the
starting address of the corresponding frame. If the page is not found in the TLB
(a TLB miss), the page table is consulted to identify the frame containing the page,
and the physical address is generated by adding the offset to the frame start address;
the TLB is then updated with this page-frame mapping.
(e) Consider a computer system with a 32-bit logical address and 4-KB page
size. The system supports up to 512-MB of physical memory.
i. How many entries will be there in a conventional single-level page
table and inverted page table?
Entries in a conventional single-level page table = 2^32 / 2^12 = 2^20 ≈ 10^6
Entries in an inverted page table = 2^29 / 2^12 = 2^17 ≈ 128 × 10^3
ii. What will be the memory requirement for storing these tables?
Memory for storing the single-level page table = number of pages × bits for
representing a frame number = 2^20 × 17 bits ≈ 2^20 × 3 bytes = 3 MB
Memory for storing the inverted page table = number of frames × bits for
representing a page number = 2^17 × 20 bits ≈ 2^17 × 3 bytes = 384 KB
iii. If a memory reference takes 50 nanoseconds, how long does a paged
memory reference take in the context of conventional page table?
Access time for a paged memory reference with a conventional page table
= 2 × memory access time = 2 × 50 = 100 ns (one access for the page-table
entry, one for the data)
iv. If we add TLBs, and 75% of all page-table references are found in the
TLBs, what is the effective memory reference time? (Assume that
finding a page-table entry in the TLBs takes 2 nanoseconds, if the
entry is present.)
Access time in the context of TLB + Page Table (TLB hit = 75% and TLB miss
= 25%) = (0.75 × (50+2)) + (0.25×(2+50+50)) = 64.5 ns
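The arithmetic in parts i-iv can be verified directly (variable names are illustrative):

```python
# 32-bit logical addresses, 4 KB pages, 512 MB (2^29 bytes) physical memory,
# 50 ns memory access, 2 ns TLB lookup
PAGE_SIZE = 2 ** 12

entries_single_level = 2 ** 32 // PAGE_SIZE   # one entry per page: 2^20
entries_inverted = 2 ** 29 // PAGE_SIZE       # one entry per frame: 2^17

def effective_access_time(mem_ns, tlb_ns, hit_ratio):
    hit = tlb_ns + mem_ns          # TLB hit: one memory reference for the data
    miss = tlb_ns + 2 * mem_ns     # TLB miss: page-table access + data access
    return hit_ratio * hit + (1 - hit_ratio) * miss
```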
(f ) What is hashed-page table? How address translation (logical to physical)
is carried out using hashed-page table?
If the logical address space is very large (32/64-bit), paging with conventional page
tables becomes impractical because of the huge memory required to store the page
tables in main memory. One solution is to use hashed page tables. A hashed page
table has a fixed size, and all logical pages are mapped onto a fixed number of buckets.
The page number is extracted from the logical address and hashed to one of the
buckets. Since hashing maps a large number of inputs to a small number of outputs,
each bucket may correspond to multiple pages; therefore each bucket holds a list of
the page-frame pairs that hash to it. To translate an address, the logical page number
is hashed to a bucket and the list attached to that bucket is searched for the matching
page-frame mapping. Once the desired frame is found from the hashed table, the
physical address is determined by adding the offset to the frame start address.
(g) What is page fault? With appropriate diagram clearly discuss the steps
involved in handling the page fault by an operating system.
A page fault occurs when a process references a page that belongs to its valid logical
address space but is not currently present in physical memory (it resides on the disk).
Handling the Page Fault:
1. Trap to the operating system
2. Save the user registers and process state
3. Determine the location of the page on the disk
4. Issue a read from the disk to a free frame:
i. Wait in a queue for this device until the read request is serviced
ii. Wait for the device seek and/or latency time
iii. Begin the transfer of the page to a free frame
5. While waiting, allocate the CPU to some other user
6. Receive an interrupt from the disk I/O subsystem (I/O completed)
7. Save the registers and process state for the other user
8. Determine that the interrupt was from the disk
9. Correct the page table and other tables to show page is now in memory
10. Wait for the CPU to be allocated to this process again
11. Restore the user registers, process state, and new page table, and then resume
the interrupted instruction
(h) With appropriate diagram explain the concept of copy-on-write in the
context of process creation.
When a new process is created, the logical address spaces of both parent and child are
mapped to a single physical address space. Thereafter, if either process (parent or
child) wants to modify data, the page(s) affected by the modification are first copied
into free frame(s), the required modifications are performed on the copied page(s),
and the copied page(s) are recorded in the page table of the corresponding process.
(i) Consider the following page reference string: 7,2,3,1,2,5,3,4,6,7,7,1,0,5,
4,6,2,3,0,1. Assuming demand paging with 3 frames, how many page faults
would occur for the following replacement algorithms?
i. LRU replacement
18 page faults
ii. FIFO replacement
17 page faults
iii. Optimal replacement
13 page faults
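The three fault counts can be reproduced with a short simulation of the policies (a sketch; in the optimal policy, ties among pages never referenced again are broken arbitrarily, which does not change the fault count):

```python
def fifo(refs, nframes):
    frames, faults = [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the oldest-loaded page
            frames.append(p)
    return faults

def lru(refs, nframes):
    frames, faults = [], 0             # list ordered least- to most-recent
    for p in refs:
        if p in frames:
            frames.remove(p)           # hit: move to most-recent position
        else:
            faults += 1
            if len(frames) == nframes:
                frames.pop(0)          # evict the least recently used page
        frames.append(p)
    return faults

def opt(refs, nframes):
    frames, faults = [], 0
    for i, p in enumerate(refs):
        if p in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            future = refs[i + 1:]
            # evict the page whose next use is farthest away (or never)
            victim = max(frames, key=lambda f: future.index(f)
                         if f in future else len(future) + 1)
            frames.remove(victim)
        frames.append(p)
    return faults
```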
(j) What is thrashing? What is the working-set model (WSM) of a process?
How do you track the working-set? Comment on the relation between
WSM and the size of physical memory in view of thrashing.
If the processes present in the system (main memory) do not have enough frames, each
process generates page faults. These page faults replace pages with new ones, and soon
afterwards further page faults occur requesting the pages that were just removed. This
continues, the CPU is mostly idle waiting for new pages, and most of the time is spent
on disk I/O. Seeing low CPU utilization, the OS loads new processes from disk to
increase the degree of multiprogramming, which further worsens the overall
performance. This phenomenon is known as thrashing.
Working-set model: the set of pages referenced in the most recent time window (the
last Δ references) is known as the working set of a process at that time. Example: for
the reference history 0,2,0,0,0,4,4,4,2,2, the working set is {0,2,4}.
Tracking the working set: the pages referenced within the current window are
determined (for example, by sampling reference bits at timer interrupts); pages that
fall outside the window leave the working set and become candidates for replacement.
Relation between WSM and the size of physical memory: if the sum of the sizes of the
working sets of all processes present in memory is greater than the size of physical
memory, thrashing will result. To avoid thrashing, we must ensure that the sum of the
working-set sizes of all resident processes remains less than the size of physical
memory. If severe thrashing occurs, some processes must be swapped out to disk to
restore this relation.
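Working-set tracking over a window of Δ references can be sketched in one line (the indexing convention and Δ are illustrative):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references ending at time t."""
    return set(refs[max(0, t - delta + 1): t + 1])
```

Applied to the reference history used in the answer above, a window covering the whole history yields the working set {0, 2, 4}.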
(3+3+2+2+7+2+4+2+6+4 = 35M)