Os Unit 2

The document discusses various concepts related to CPU scheduling and threads in operating systems. It includes definitions of threads, benefits of multithreading, types of threads, differences between user threads and kernel threads, thread cancellation methods, CPU scheduling concepts like criteria, algorithms and context switching. It also defines terms like throughput, turnaround time, race condition, aging, starvation and context switch.


SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE

UNIT II

Overview of threads –Multithreading Models – Threading Issues -- Basic Concepts of process


scheduling – Scheduling Criteria – Scheduling Algorithms – Multiple Processor Scheduling, Dead
Lock: Characterization, Prevention Detection, Avoidance and Recovery. Case Study: Linux
Scheduling.

2 Marks
1. What is a thread? (APR’15, NOV ‘15)
A thread, also called a lightweight process (LWP), is a basic unit of CPU utilization. It
comprises a thread ID, a program counter, a register set and a stack. It shares with other threads
belonging to the same process its code section, data section, and operating system resources such as
open files and signals.

2. What are the benefits of multithreaded programming? (NOV’14)


The benefits of multithreaded programming can be broken down into four major categories:
• Responsiveness
• Resource sharing
• Economy
• Utilization of multiprocessor architectures

3. Write the types of Thread?


 Kernel-supported threads (e.g. Mach and OS/2) - the kernel of the OS sees the threads and manages
switching between them; in terms of the analogy, the boss (OS) tells the person (CPU) which thread in
the process to do next.
 User-level threads - supported above the kernel, via a set of library calls at the user level. The kernel
only sees the process as a whole and is completely unaware of any threads; in terms of the analogy, a
manual of procedures (user code) tells the person (CPU) to stop the current thread and start another
(using a library call to switch threads).

4. Compare user threads and kernel threads? (May’16)


User threads
User threads are supported above the kernel and are implemented by a thread library at the
user level. Thread creation and scheduling are done in user space, without kernel intervention.
Therefore they are fast to create and manage. However, a blocking system call will cause the entire process to block.

Page |1 Operating systems DEPARTMENT OF CSE



Kernel threads
Kernel threads are supported directly by the operating system. Thread creation, scheduling and
management are done by the operating system. Therefore they are slower to create and manage
compared to user threads. If a thread performs a blocking system call, the kernel can schedule
another thread in the application for execution.

5. Define thread cancellation & target thread.


The thread cancellation is the task of terminating a thread before it has completed. A thread that
is to be cancelled is often referred to as the target thread.
For example, if multiple threads are concurrently searching through a database and one thread
returns the result, the remaining threads might be cancelled.

6. What are the different ways in which a thread can be cancelled?


Cancellation of a target thread may occur in two different scenarios:
• Asynchronous cancellation: one thread immediately terminates the target thread.
• Deferred cancellation: The target thread can periodically check if it should terminate, allowing the
target thread an opportunity to terminate itself in an orderly fashion.
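Deferred cancellation can be illustrated with a short Python sketch (illustrative only; the `worker` and `cancel` names are invented for this example). The target thread checks a shared flag at a safe point on each iteration and terminates itself in an orderly fashion:

```python
import threading

def worker(cancel: threading.Event):
    """Target thread body: periodically checks whether it should terminate."""
    results = []
    for item in range(1000):
        if cancel.is_set():        # cancellation point: exit in an orderly fashion
            break
        results.append(item * item)
    return results

cancel = threading.Event()
t = threading.Thread(target=worker, args=(cancel,))
t.start()
cancel.set()                       # request cancellation of the target thread
t.join()                           # the thread terminates itself at its next check
```

Asynchronous cancellation, by contrast, would kill the thread immediately, risking shared data being left in an inconsistent state.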

7. Define CPU scheduling (APR’15)


CPU scheduling is the process of switching the CPU among various processes. CPU scheduling is
the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating
system can make the computer more productive.

8. What is preemptive and non preemptive scheduling?


Under nonpreemptive scheduling once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or switching to the waiting state.
Preemptive scheduling can preempt a process which is utilizing the CPU in between its execution and
give the CPU to another process.

9. What is a Dispatcher? (NOV ‘11)(Apr’17)


The dispatcher is the module that gives control of the CPU to the process selected by the short-
term scheduler. This function involves:
• Switching context


• Switching to user mode


• Jumping to the proper location in the user program to restart that program.

10. What is dispatch latency?


The time taken by the dispatcher to stop one process and start another running is known as
dispatch latency.

11. What are the various scheduling criteria for CPU scheduling?
The various scheduling criteria are
• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time

12. Define throughput? (NOV ‘11)(APRIL ‘14)


Throughput in CPU scheduling is the number of processes that are completed per unit time. For
long processes, this rate may be one process per hour; for short transactions, throughput might be 10
processes per second.

13. What is turnaround time?


Turnaround time is the interval from the time of submission to the time of completion of a
process. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue,
executing on the CPU, and doing I/O.

14. Define race condition? (APRIL ‘14)


When several processes access and manipulate the same data concurrently, and the outcome of the
execution depends on the particular order in which the accesses take place, the situation is called a race
condition. To avoid a race condition, only one process at a time may manipulate the shared variable.
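A minimal Python sketch of the idea (the names are invented for illustration): four threads increment a shared counter, and a lock serializes each read-modify-write so that no updates are lost:

```python
import threading

counter = 0
lock = threading.Lock()

def deposit(times):
    """Increment the shared counter; the lock makes each
    read-modify-write atomic, preventing the race condition."""
    global counter
    for _ in range(times):
        with lock:
            counter += 1

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 400000; without the lock, lost updates could make it smaller
```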


15.What happens if the time allocated in a Round Robin Scheduling is very large? And what
happens if the time allocated is very low?
If the time quantum is very large, it results in FCFS scheduling. If the time quantum is very low,
processor throughput is reduced, because more time is spent on context switching.

16. Write down Scheduling Criteria?


 CPU utilization i.e. CPU usage - to maximize
 Throughput = number of processes that complete their execution per time unit - to
maximize
 Turnaround time = amount of time to execute a particular process - to minimize
 Waiting time = amount of time a process has been waiting in the ready queue - to minimize
 Response time = amount of time it takes from when a job was submitted until it initiates its
first response (output), not the time it takes to complete that output - to minimize

17. What is the difference between process and thread? (May 2017)
1. Threads are easier to create than processes since they don't require a separate address space.
2. Multithreading requires careful programming since threads share data structures that should only be
modified by one thread at a time. Unlike threads, processes don't share the same address space.
3. Threads are considered lightweight because they use far fewer resources than processes.
4. Processes are independent of each other. Threads, since they share the same address space are
interdependent, so caution must be taken so that different threads don't step on each other.
This is really another way of stating #2 above.
5. A process can consist of multiple threads.

18. What is Spooling?


Acronym for simultaneous peripheral operations on-line. Spooling refers to putting jobs in a
buffer, a special area in memory or on a disk, where a device can access them when it is ready. Spooling
is useful because devices access data at different rates: the buffer provides a waiting station where
data can rest while the slower device catches up.

19. List the advantage of Spooling?


1. The spooling operation uses a disk as a very large buffer.


2. Spooling is however capable of overlapping I/O operation for one job with processor operations for
another job.

20. What is meant by CPU–I/O Burst Cycle, CPU burst, I/O burst?

 CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.
 CPU burst is length of time process needs to use CPU before it next makes a system call
(normally request for I/O).
 I/O burst is the length of time process spends waiting for I/O to complete.

21. Define Aging and starvation? (APR’ 14)


Starvation: Starvation is a resource management problem where a process does not get the
resources it needs for a long time because the resources are being allocated to other processes.
Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding an
aging factor to the priority of each request. The aging factor must increase the requests priority as time
passes and must ensure that a request will eventually be the highest priority request (after it has
waited long enough).

22. What is context switch? (APR ‘11)


A context switch occurs when a multitasking operating system stops running one process and starts running another. Many
operating systems implement concurrency by maintaining separate environments or "contexts" for
each process. The amount of separation between processes, and the amount of information in a context,
depends on the operating system, but generally the OS should prevent processes from interfering with each
other, e.g. by modifying each other's memory.
A context switch can be as simple as changing the value of the program counter and stack
pointer or it might involve resetting the MMU to make a different set of memory pages available.

23. What are the uses of job queues, ready queue and device queue? (MAY’17)
The Operating System maintains the following important process scheduling queues −
 Job queue − This queue keeps all the processes in the system.
 Ready queue − This queue keeps a set of all processes residing in main memory, ready and
waiting to execute. A new process is always put in this queue.
 Device queues − The processes which are blocked due to unavailability of an I/O device
constitute this queue.


24. Define Medium Term Scheduler (NOV’16)


Medium-term scheduling is a part of swapping. It removes processes from memory and thereby
reduces the degree of multiprogramming. The medium-term scheduler is in charge of handling the
swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from memory
and make space for other processes, the suspended process is moved to secondary storage. This
is called swapping, and the process is said to be swapped out or rolled out. Swapping may be
necessary to improve the process mix.

25. Write about real time scheduling? (NOV 18)


A real-time system is one that is subject to real-time constraints, i.e., a response should be
guaranteed within a specified timing constraint, or the system should meet a specified deadline. For
example: flight control systems, real-time monitors, etc.

26. Define deadlock. (NOV 13, NOV ‘15)


A process requests resources; if the resources are not available at that time, the process enters a
wait state. Waiting processes may never again change state, because the resources they have requested
are held by other waiting processes. This situation is called a deadlock.

27. What is the sequence in which resources may be utilized?


Under normal mode of operation, a process may utilize a resource in the following sequence:
 Request: If the request cannot be granted immediately, then the requesting process must wait
until it can acquire the resource.
 Use: The process can operate on the resource.
 Release: The process releases the resource.

28. What are conditions under which a deadlock situation may arise? Or Write four general
strategies for dealing with deadlocks? (APR’15)(NOV’14)
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
 Mutual exclusion
 Hold and wait


 No pre-emption
 Circular-wait

29. What is a resource-allocation graph? (APR’12) (APR’15) (APR’17)


Deadlocks can be described more precisely in terms of a directed graph called a system resource
allocation graph. This graph consists of a set of vertices V and a set of edges E. The set of vertices V is
partitioned into two different types of nodes: P, the set consisting of all active processes in the system,
and R the set consisting of all resource types in the system.

30. Define request edge and assignment edge.


A directed edge from process Pi to resource type Rj is denoted Pi → Rj; it signifies that process
Pi has requested an instance of resource type Rj and is currently waiting for that resource. A directed edge
from resource type Rj to process Pi is denoted Rj → Pi; it signifies that an instance of resource type Rj has
been allocated to process Pi. A directed edge Pi → Rj is called a request edge. A directed edge Rj → Pi is
called an assignment edge.
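When every resource type has a single instance, a cycle in the resource-allocation graph implies deadlock, so deadlock detection reduces to cycle detection. A minimal sketch (the dictionary encoding of the graph is an assumption made for this example):

```python
def has_cycle(edges):
    """Detect a cycle in a directed resource-allocation graph.

    edges: dict mapping each node (e.g. 'P1', 'R1') to the list of nodes it
    points to (request edges Pi -> Rj, assignment edges Rj -> Pi).
    """
    WHITE, GREY, BLACK = 0, 1, 2       # unvisited / on current DFS path / done
    color = {u: WHITE for u in edges}

    def dfs(u):
        color[u] = GREY
        for v in edges.get(u, []):
            if color.get(v, WHITE) == GREY:       # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in list(edges))

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock cycle
g = {'P1': ['R2'], 'R2': ['P2'], 'P2': ['R1'], 'R1': ['P1']}
print(has_cycle(g))   # True
```

With multiple instances per resource type, a cycle is necessary but not sufficient for deadlock, and a detection algorithm similar to the banker's safety algorithm is used instead.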

31. What are the methods for handling deadlocks? (or)


Write down three ways to deal with deadlock problem. (Nov’ 2017)
The deadlock problem can be dealt with in one of the three ways:
 Use a protocol to prevent or avoid deadlocks, ensuring that the system will never enter a
deadlock state.
 Allow the system to enter the deadlock state, detect it and then recover.
 Ignore the problem altogether, and pretend that deadlocks never occur in the system.

32. Define deadlock prevention.


Deadlock prevention is a set of methods for ensuring that at least one of the four necessary
conditions (mutual exclusion, hold and wait, no preemption, circular wait) cannot hold. By
ensuring that at least one of these conditions cannot hold, the occurrence of a deadlock can be
prevented.

33. Define deadlock avoidance.


An alternative method for avoiding deadlocks is to require additional information about how
resources are to be requested. Each request requires that the system consider the resources currently


available, the resources currently allocated to each process, and the future requests and releases of
each process, to decide whether the current request can be satisfied or must wait, so as to avoid a possible future deadlock.

34. What are a safe state and an unsafe state?


A state is safe if the system can allocate resources to each process in some order and still avoid a
deadlock. A system is in a safe state only if there exists a safe sequence. A sequence of processes
<P1,P2,....Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can
still request can be satisfied by the currently available resources plus the resources held by all the Pj, with
j<i. If no such sequence exists, then the system state is said to be unsafe.

35. What is banker's algorithm?


Banker's algorithm is a deadlock avoidance algorithm that is applicable to a resource allocation
system with multiple instances of each resource type. The two algorithms used for its implementation
are:
 Safety algorithm: The algorithm for finding out whether or not a system is in a safe state.
 Resource-request algorithm: if the resulting resource-allocation state is safe, the transaction is
completed and process Pi is allocated its resources. If the new state is unsafe, Pi must wait and
the old resource-allocation state is restored.
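The safety algorithm can be sketched in Python as follows; the state used below is a commonly used textbook example, assumed here purely for illustration:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe.

    available: free instances per resource type.
    allocation[i], need[i]: per-process current allocation and remaining need.
    """
    work = list(available)
    finished = [False] * len(allocation)
    sequence = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pi can run to completion and then release its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finished) else None

# Example state (assumed values): 5 processes, 3 resource types
avail = [3, 3, 2]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(avail, alloc, need))   # [1, 3, 4, 0, 2], a safe sequence
```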


11 Marks

1. Write short notes on CPU scheduler (10)


CPU Scheduling:
Basic concepts:
DEFINITION:
 CPU scheduling is the process of switching the CPU among various processes.
 CPU scheduling is the basis of multiprogrammed operating systems.
 By switching the CPU among processes, the operating system can make the computer more
productive.
CPU – I/O BURST CYCLE:
The success of CPU scheduling depends on the property of processes:
 The process execution consists of a cycle of CPU execution and I/O wait.
 Processes alternate between these two states.
 Process execution begins with a CPU burst, followed by an I/O burst, then another CPU
burst, then another I/O burst, and so on.
 Last CPU burst will end with a system request to terminate execution.
 An I/O bound program would have many, very short CPU bursts.
 A CPU bound program might have a few very long CPU bursts.
Fig: Alternating sequence of CPU and I/O bursts. A CPU burst (load, store, add, read from file) is
followed by an I/O burst (wait for I/O), then another CPU burst (store, increment index, write to file),
another I/O burst, and so on.
CPU Scheduler
 CPU scheduler selects one of the processes in the ready queue to be executed.
 There are two types of scheduling. They are:
1. Preemptive scheduling
2. Non preemptive scheduling
 Preemptive scheduling - the CPU can be taken away from a process while it is still executing.
 Non-preemptive scheduling - once the CPU has been allocated to a process, it cannot be taken
away until the process releases it (by terminating or by switching to the waiting state).
 CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
 Scheduling under 1 and 4 is non-preemptive.
 All other scheduling is preemptive.
Dispatcher
 Dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this
involves:
 switching context
 switching to user mode
 jumping to the proper location in the user program to restart that program
 Dispatch latency – time it takes for the dispatcher to stop one process and start another running.
Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – # of processes that complete their execution per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the ready queue


 Response time – amount of time it takes from when a request was submitted until the first response
is produced, not output (for time-sharing environment)
Optimization Criteria
 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
Scheduling algorithms:
 First –come first served scheduling(FCFS)
 Shortest-job-first scheduling(SJF)
 Priority scheduling
 Round-robin scheduling(RR)
 Multilevel queue scheduling
 Multilevel feedback queue scheduling

2. Explain FCFS (5)


It is a non-preemptive algorithm.
 The process which requests the CPU first is allocated the CPU first.
 Demerit - a long CPU-bound job may take the CPU and force short jobs to wait for a long time,
called the convoy effect.
 Gantt chart -It represents the order in which the process is executed
 Example
Process Burst time
P1 3
P2 6
P3 4
P4 2
Gantt chart:
P1 P2 P3 P4
0 3 9 13 15


Waiting time
Process waiting time
P1 0
P2 3
P3 9
P4 13

Average waiting time=(0+3+9+13)/4 = 6.25 ms


Turn around time (TAT):
TAT=waiting time + burst time

Process TAT
P1 3
P2 9
P3 13
P4 15
Average TAT=(3+9+13+15)/4=10 ms
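The FCFS figures above can be reproduced with a short sketch (assuming, as in the example, that all processes arrive at time 0):

```python
def fcfs(bursts):
    """Compute per-process waiting and turnaround times under FCFS.

    bursts: CPU burst times, in arrival order (all arriving at t = 0).
    Returns (waiting_times, turnaround_times).
    """
    waiting, turnaround, clock = [], [], 0
    for b in bursts:
        waiting.append(clock)      # time spent in the ready queue so far
        clock += b                 # the process runs to completion
        turnaround.append(clock)   # completion time = TAT when arrival is 0
    return waiting, turnaround

w, t = fcfs([3, 6, 4, 2])            # P1..P4 from the example
print(sum(w) / len(w))               # average waiting time: 6.25
print(sum(t) / len(t))               # average turnaround time: 10.0
```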

3. Write short notes on shortest job scheduling (5)


 The CPU is allocated to the job with the smallest CPU burst; that is, the shortest-CPU-burst job
has higher priority over the other jobs.
 Two schemes:
 nonpreemptive – once CPU given to the process it cannot be preempted until completes its CPU
burst.
 preemptive – if a new process arrives with CPU burst length less than remaining time of current
executing process, preempt. This scheme is known as the Shortest-Remaining-Time-First (SRTF).
 SJF is optimal – gives minimum average waiting time for a given set of processes.
 Example
Process Burst time
P1 6
P2 8
P3 7


P4 3

Gantt chart:
P4 P1 P3 P2
0 3 9 16 24
Waiting time
Process waiting time
P1 3
P2 16
P3 9
P4 0

Average waiting time=(3+16+9+0)/4 = 7 ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 9
P2 24
P3 16
P4 3
Average TAT=(9+24+16+3)/4=13 ms
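Non-preemptive SJF amounts to sorting the jobs by burst length and then running them FCFS; a sketch reproducing the figures above (all processes assumed to arrive at t = 0):

```python
def sjf(bursts):
    """Non-preemptive SJF for processes that all arrive at t = 0.

    Returns (waiting, turnaround) indexed by the original process order.
    """
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting = [0] * len(bursts)
    turnaround = [0] * len(bursts)
    clock = 0
    for i in order:                 # run jobs shortest-burst first
        waiting[i] = clock
        clock += bursts[i]
        turnaround[i] = clock
    return waiting, turnaround

w, t = sjf([6, 8, 7, 3])            # P1..P4 from the example
print(sum(w) / 4, sum(t) / 4)       # 7.0 13.0
```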

4. Write short notes on priority scheduling (5)


 A priority number (integer) is associated with each process
 The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
 Preemptive
 nonpreemptive
 preemptive-preempt the CPU if the priority of the newly arrived process is higher than the priority
of the currently running process
 non preemptive - allows the currently running process to complete its CPU burst.
 SJF is a priority scheduling where priority is the predicted next CPU burst time.
 A problem in priority scheduling is starvation - i.e., low-priority processes may never execute.
 Solution - aging: as time progresses, increase the priority of the waiting process.


 Example
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19
Waiting time
Process waiting time
P1 6
P2 0
P3 16
P4 18
P5 1

Average waiting time=(6+0+16+18+1)/5 = 8.2ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 16
P2 1
P3 18
P4 19
P5 6

Average TAT=(16+1+18+19+6)/5=12 ms
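Non-preemptive priority scheduling differs from SJF only in the sort key; a sketch reproducing the example above (all arrivals at t = 0, smaller number = higher priority):

```python
def priority_schedule(procs):
    """Non-preemptive priority scheduling, all processes arriving at t = 0.

    procs: list of (burst, priority); smaller priority number runs first.
    Returns (waiting, turnaround) indexed by the original process order.
    """
    order = sorted(range(len(procs)), key=lambda i: procs[i][1])
    waiting, turnaround, clock = [0] * len(procs), [0] * len(procs), 0
    for i in order:                 # run jobs highest-priority first
        waiting[i] = clock
        clock += procs[i][0]
        turnaround[i] = clock
    return waiting, turnaround

# P1..P5 from the example, as (burst, priority) pairs
w, t = priority_schedule([(10, 3), (1, 1), (2, 4), (1, 5), (5, 2)])
print(sum(w) / 5, sum(t) / 5)       # 8.2 12.0
```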

5. Explain Round Robin (RR) (5)


 It is a pre-emptive scheduling


 Each process gets a small unit of CPU time (time quantum), usually 10-100 milliseconds. After this
time has elapsed, the process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of
the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time
units.
 Performance
 q large → RR behaves like FIFO
 q small → q must still be large with respect to the context-switch time, otherwise overhead is too high.
Example given time quantum=4ms

Process Burst time


P1 24
P2 3
P3 3
Gantt chart:
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30
Waiting time
Process waiting time
P1 10-4=6
P2 4
P3 7

Average waiting time=(6+4+7)/3 ≈ 5.67 ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 30
P2 7
P3 10

Average TAT=(30+7+10)/3 ≈ 15.67 ms
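The RR trace above can be reproduced with a small simulation (all arrivals at t = 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Round robin with the given time quantum; all processes arrive at t = 0.

    Returns (waiting, turnaround) indexed by the original process order.
    """
    n = len(bursts)
    remaining = list(bursts)
    turnaround = [0] * n
    ready = deque(range(n))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # preempted: back to the end of the queue
        else:
            turnaround[i] = clock    # finished: completion time = TAT
    waiting = [turnaround[i] - bursts[i] for i in range(n)]
    return waiting, turnaround

w, t = round_robin([24, 3, 3], quantum=4)   # P1..P3 from the example
print(w, t)   # [6, 4, 7] [30, 7, 10]
```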


6. Explain Multi level queue scheduling (5)


 Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
 Each process is permanently assigned to one queue based on some property eg: process type
 Each queue has its own scheduling algorithm,
foreground – RR
background – FCFS
 Scheduling must be done between the queues.
 Fixed priority scheduling; (i.e., serve all from foreground then from background). Possibility of
starvation.
 Time slice - each queue gets a certain amount of CPU time which it can schedule amongst its
processes; e.g., 80% to the foreground queue (RR) and 20% to the background queue (FCFS).

7.Explain Multilevel Feedback Queue (5)


 A process can move between the various queues; aging can be implemented this way.
 Multilevel-feedback-queue scheduler defined by the following parameters:
 number of queues
 scheduling algorithms for each queue
 method used to determine when to upgrade a process
 method used to determine when to demote a process
 method used to determine which queue a process will enter when that process needs service


Example of Multilevel Feedback Queue


 Three queues:
 Q0 – time quantum 8 milliseconds
 Q1 – time quantum 16 milliseconds
 Q2 – FCFS
 Scheduling
 A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8
milliseconds; if it does not finish in 8 milliseconds, the job is moved to queue Q1.
 At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2.
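A simplified sketch of this three-queue scheme (the burst values are invented; all jobs are assumed to arrive at t = 0, and preemption of lower queues by new arrivals is omitted for brevity):

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Two RR queues (quanta 8 and 16 ms) over an FCFS queue, as above.
    A job that exhausts its quantum is demoted; returns completion times.

    Simplifying assumption: all jobs arrive at t = 0.
    """
    n = len(bursts)
    remaining = list(bursts)
    queues = [deque(range(n)), deque(), deque()]
    done = [0] * n
    clock = 0
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)  # highest nonempty queue
        i = queues[level].popleft()
        slice_ = remaining[i] if level == 2 else min(quanta[level], remaining[i])
        clock += slice_
        remaining[i] -= slice_
        if remaining[i] > 0:
            queues[level + 1].append(i)     # quantum exhausted: demote
        else:
            done[i] = clock
    return done

print(mlfq([30, 5, 20]))   # [55, 13, 49]
```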

8. Write short notes on Multiple-Processor Scheduling (5)


 Here each queue can have a separate processor.
 If any queue is empty, the processor connected to it becomes idle. To avoid this
problem, a common ready queue is used.
 All processes are sent to the common ready queue and each processor takes a process from
that queue; here each processor is self-scheduling.
 A problem also arises here if two processors select the same process. To avoid this, one processor
acts as master for the other processors, i.e., a master-slave relationship.
 The master selects a job from the common ready queue and allocates that process to one of the
slave processors.
Real-Time Scheduling
 Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
 Soft real-time computing – requires that critical processes receive priority over less fortunate ones.


9. Explain real time scheduling? (10)


Real-Time Scheduling
Real-time computing is divided into two types:
 HARD REAL-TIME SYSTEMS
 SOFT REAL-TIME COMPUTING
HARD REAL-TIME SYSTEMS:
 Hard real-time systems are required to complete a critical task within a guaranteed
amount of time.
 Generally, a process is submitted along with a statement of the amount of time in which it
needs to complete or perform I/O.
 The scheduler then either admits the process, guaranteeing that the process will
complete on time, or rejects the request as impossible.
 This is known as resource reservation.
 Such a guarantee requires that the scheduler know exactly how long each type of
operating-system function takes to perform, and therefore each operation must be
guaranteed to take a maximum amount of time.
 Such a guarantee is impossible in a system with secondary storage or virtual memory,
because these subsystems cause unavoidable and unforeseeable variation in the amount of
time needed to execute a particular process.
 Therefore, hard real-time systems are composed of special-purpose software running on
hardware dedicated to their critical process, and lack the full functionality of modern
computers and operating systems.
SOFT REAL TIME COMPUTING:
 Soft real-time computing is less restrictive. It requires that critical processes receive priority
over less fortunate ones.
 Although adding soft real-time functionality to a time-sharing system may cause an unfair
allocation of resources and may result in longer delays, or even starvation, for some processes, it
is at least possible to achieve.
 The result is a general-purpose system that can also support multimedia, high-speed interactive
graphics, and a variety of tasks that would not function acceptably in an environment that does
not support soft real-time computing. Implementing soft real-time functionality requires careful
design of the scheduler and related aspects of the operating system.


 First, the system must have priority scheduling, and real-time processes must have the highest
priority.
 The priority of real-time processes must not degrade over time, even though the priority of non-
real-time processes may. Second, the dispatch latency must be small: the smaller the latency, the
faster a real-time process can start executing once it is runnable. A difficulty arises when a
high-priority process needs a resource held by a lower-priority process: the high-priority process
would be waiting for the lower-priority one to finish. This situation is known as priority
inversion.
 In fact, a chain of processes could all be accessing resources that the high-priority process
needs. This problem can be solved via the priority-inheritance protocol, in which all these
processes (the ones accessing resources that the high-priority process needs) inherit the high
priority until they are done with the resource in question. When they are finished, their priority
reverts to its original value.
The conflict phase of dispatch latency has two components:
1. Preemption of any process running in the kernel
2. Release by low-priority processes of resources needed by the high-priority process
As an example, in Solaris 2, the dispatch latency with preemption disabled is over 100 milliseconds.
However, the dispatch latency with preemption enabled is usually reduced to 2 milliseconds.
THREADS: OVERVIEW
DEFINITION
 A thread, also called a lightweight process (LWP), is a basic unit of CPU utilization.
 It comprises a thread ID, a program counter, a register set and a stack.
 It shares with other threads belonging to the same process its code section, data section, and
other operating system resources such as open files and signals.
Benefits of multi threaded programming
1. Responsiveness
2. Resource sharing
3. Economy
4. Utilization of multiprocessor architecture.
User and Kernel Threads
Threads may be provided at either the user level (user threads) or by the kernel (kernel threads).
USER THREADS:
 Supported above the kernel and are implemented by a thread library at the user level.
 Library provides support for thread execution, scheduling and management with no support
from kernel.


 User threads are fast to create and manage.


 A blocking system call will cause the entire process to block.
KERNEL THREADS:
 Supported directly by the operating system.
 Kernel performs thread creation, scheduling and management in kernel space.
 They are slower to create and manage than the user thread.
 If thread performs blocking system call, the kernel can schedule another thread in the
application for execution.

10. Explain in detail about the threading issues (10)


Threading Issues:
FORK AND EXEC SYSTEM CALLS:
Fork system call – creates a separate, duplicate process.
Exec system call – runs an executable file.
In a multithreaded program, the semantics of fork and exec change.
Fork – two versions exist:
- duplicate all threads, or
- duplicate only the thread that invoked fork().
Exec
 The program specified in the parameter to exec will replace the entire process, including all its threads.
Thread cancellation
It is a task of terminating a thread before it has completed.
Example:
When a user presses a button on a web browser that stops a web page from loading any further.
Often a web page is loaded in a separate thread; when the user presses the stop button, the thread
loading the page is cancelled.

Target thread:
A thread that is to be cancelled is often referred to as the target thread.
Cancellation of a target thread may occur in one of two ways:
1. Asynchronous cancellation terminates the target thread immediately
2. Deferred cancellation allows the target thread to periodically check if it should be cancelled
 Allow cancellation at safe points
 Pthreads refers to such safe points as cancellation points

Thread pools
Motivating example:
A web server creates a new thread to service each request.
Two concerns:
1. The amount of time required to create the thread prior to servicing the request, compounded with
the fact that the thread will be discarded once it has completed its work – that is, the overhead of
thread creation.
2. With no limit on the number of threads created, unbounded thread creation may exhaust system
resources, such as CPU time or memory.
To overcome the above said problem we need thread pools.
General idea:
 Create a pool of threads at process startup.
 If a request comes in, wake up a thread from the pool and assign the request to it; if no thread is
available, the server waits until one is free.
 After completing the service, thread returns to pool.
Advantages:
 Usually slightly faster to service a request with an existing thread than create a new thread
 Allows the number of threads in the application(s) to be bound to the size of the pool
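The general idea above can be sketched with Python's standard thread-pool implementation (an illustrative example, not from the text; `handle_request` is a hypothetical stand-in for servicing one client request):

```python
# A minimal thread-pool sketch using Python's concurrent.futures.
# The worker threads are created once, at startup; each incoming
# "request" is handed to an idle worker instead of spawning (and then
# discarding) a brand-new thread per request.
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Stand-in for servicing one client request.
    return f"served request {request_id}"

# max_workers bounds the number of threads, addressing concern 2 above.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(8)))

print(results[0])   # served request 0
```

Note that `pool.map` returns results in submission order even though the eight requests are serviced by only four threads.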

11. Consider the following set of processes with the length of the CPU burst given in
milliseconds (10)
Process Burst time Priority
P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

The processes arrived in the order P1,P2,P3,P4,P5 at time 0.
(a) Draw Gantt charts that illustrate the execution of these processes using the following scheduling
algorithms: FCFS, SJF, non-preemptive priority (a smaller priority number implies a higher
priority).
(b) What is the turnaround time of all processes for each scheduling algorithm?
FCFS:
Gantt chart:
P1 P2 P3 P4 P5
0 10 11 13 14 19
Waiting time
Process waiting time
P1 0
P2 10
P3 11
P4 13
P5 14

Average waiting time=(0+10+11+13+14)/5 = 9.6ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 10
P2 11
P3 13
P4 14
P5 19

Average TAT=(10+11+13+14+19)/5=13.4 ms

SJF:

Gantt chart:
P2 P4 P3 P5 P1
0 1 2 4 9 19


Waiting time
Process waiting time
P1 9
P2 0
P3 2
P4 1
P5 4

Average waiting time=(9+0+2+1+4)/5 = 3.2ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 19
P2 1
P3 4
P4 2
P5 9

Average TAT=(19+1+4+2+9)/5=7 ms
NON-PREEMPTIVE PRIORITY SCHEDULING:

Gantt chart:
P2 P5 P1 P3 P4
0 1 6 16 18 19

Waiting time
Process waiting time
P1 6
P2 0
P3 16
P4 18
P5 1


Average waiting time=(6+0+16+18+1)/5 = 8.2ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 16
P2 1
P3 18
P4 19
P5 6

Average TAT=(16+1+18+19+6)/5=12 ms
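The three sets of averages above can be cross-checked with a short Python sketch (not part of the original answer). Since every process arrives at time 0 and scheduling is non-preemptive, waiting time is simply a process's start time and turnaround is waiting plus burst:

```python
# Quick check of the three Gantt charts above.
bursts   = {"P1": 10, "P2": 1, "P3": 2, "P4": 1, "P5": 5}
priority = {"P1": 3, "P2": 1, "P3": 3, "P4": 4, "P5": 2}

def averages(order):
    """Return (average waiting time, average turnaround time) in ms
    for a non-preemptive run order with all arrivals at time 0."""
    t, wait_sum, tat_sum = 0, 0, 0
    for p in order:
        wait_sum += t          # waiting = start time (arrival is 0)
        t += bursts[p]
        tat_sum += t           # turnaround = completion time here
    return wait_sum / len(order), tat_sum / len(order)

fcfs = averages(["P1", "P2", "P3", "P4", "P5"])
sjf  = averages(sorted(bursts, key=bursts.get))     # ties keep FCFS order
prio = averages(sorted(bursts, key=priority.get))   # smaller number = higher

print(fcfs, sjf, prio)   # (9.6, 13.4) (3.2, 7.0) (8.2, 12.0)
```

The output matches the hand-computed averages: 9.6/13.4 ms for FCFS, 3.2/7 ms for SJF, and 8.2/12 ms for non-preemptive priority.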

12. Calculate the average waiting time and average turnaround time for the following algorithms:

(a)FCFS

(b) Preemptive SJF(SRTF)

( c ) Round robin(quantum = 1ms)

Process Arrival time Burst time


P1 0 7
P2 1 3
P3 2 8
P4 3 5

FCFS:
Gantt chart:
P1 P2 P3 P4
0 7 10 18 23
Waiting time
Process waiting time
P1 0-0=0
P2 7-1=6


P3 10-2=8
P4 18-3=15

Average waiting time=(0+6+8+15)/4 = 7.25ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 0+7=7
P2 6+3=9
P3 8+8=16
P4 15+5=20

Average TAT=(7+9+16+20)/4=13 ms

Preemptive SJF(SRTF):

Gantt chart:
P1 P2 P2 P2 P4 P1 P3
0 1 2 3 4 9 15 23
Waiting time
Process waiting time
P1 9-0-1=8
P2 3-1-2=0
P3 15-2=13
P4 4-3=1

Average waiting time=(8+0+13+1)/4 = 5.5ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 8+7=15
P2 0+3=3
P3 13+8=21
P4 1+5=6


Average TAT=(15+3+21+6)/4=11.25 ms
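The SRTF figures can be verified with a millisecond-by-millisecond simulation (an illustrative sketch, not from the text): at each tick, the arrived process with the smallest remaining burst runs.

```python
# SRTF check for the table above.
arrival = {"P1": 0, "P2": 1, "P3": 2, "P4": 3}
burst   = {"P1": 7, "P2": 3, "P3": 8, "P4": 5}

remaining, done, t = dict(burst), {}, 0
while remaining:
    ready = [p for p in remaining if arrival[p] <= t]
    p = min(ready, key=lambda q: remaining[q])   # shortest remaining time
    remaining[p] -= 1
    t += 1
    if remaining[p] == 0:
        done[p] = t                              # completion time
        del remaining[p]

tat  = {p: done[p] - arrival[p] for p in burst}
wait = {p: tat[p] - burst[p] for p in burst}
print(wait)   # {'P1': 8, 'P2': 0, 'P3': 13, 'P4': 1}
```

The simulation reproduces the Gantt chart (P2 finishes at 4, P4 at 9, P1 at 15, P3 at 23), giving an average waiting time of 5.5 ms and an average turnaround of 11.25 ms.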

Round robin (quantum = 1ms):

Gantt chart:
P1 P2 P3 P4 P1 P2 P3 P4 P1 P2 P3 P4 P1 P3 P4 P1 P3 P4 P1 P3 P1 P3 P3
0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
Waiting time
Process waiting time
P1 20-0-6=14
P2 9-1-2=6
P3 22-2-7=13
P4 17-3-4=10

Average waiting time=(14+6+13+10)/4 = 10.75ms


Turn around time (TAT):
TAT=waiting time + burst time
Process TAT
P1 14+7=21
P2 6+3=9
P3 13+8=21
P4 10+5=15

Average TAT=(21+9+21+15)/4=16.5 ms
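These round-robin figures can be checked mechanically from the completion times read off the Gantt chart (P1 finishes at 21, P2 at 10, P3 at 23, P4 at 18), using turnaround = completion - arrival and waiting = turnaround - burst (a sketch, not part of the original answer):

```python
# Round-robin (q = 1 ms) averages from the chart's completion times.
arrival    = {"P1": 0, "P2": 1, "P3": 2, "P4": 3}
burst      = {"P1": 7, "P2": 3, "P3": 8, "P4": 5}
completion = {"P1": 21, "P2": 10, "P3": 23, "P4": 18}

tat  = {p: completion[p] - arrival[p] for p in burst}
wait = {p: tat[p] - burst[p] for p in burst}

print(sum(wait.values()) / 4)   # 10.75
print(sum(tat.values()) / 4)    # 16.5
```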


13. Write short notes on Dead lock and its characteristics?(6 Marks)
A process requests a resource; if the resource is not available at that time, the process enters a wait
state. Waiting processes may never again change state, because the resources they have requested are
held by other waiting processes. This situation is called deadlock.
Example:
Suppose a computer has one tape drive and one plotter. Process A requests the tape drive and process
B requests the plotter, and both requests are granted.
Now A requests the plotter and B requests the tape drive. Since neither process gives up the resource
it already holds, neither request can be granted. This situation is a deadlock.
Example
Semaphores A and B, initialized to 1
P0 P1
wait (A); wait(B);
wait (B); wait(A);
SYSTEM MODEL:
Under the normal mode of operation a process may utilize a resource in only the following
sequence:
1. Request:
If the request cannot be granted immediately(eg. the resource is being used by another
process), then the requesting process must wait until it can acquire the resource.
2. Use:
The process can operate on the resource (for eg.if the resource is a printer, the process can
print)
3. Release:
The process releases the resource.
Deadlock Characterization
Necessary condition:
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
 Mutual exclusion: At least one resource must be held in a non-sharable mode; that is,only one
process at a time can use a resource. If another process requests that resource, the requesting
process must be delayed until the resource has been released.

 Hold and wait: a process must be holding at least one resource is waiting to acquire additional
resources held by other processes.
 No preemption: a resource can be released only voluntarily by the process holding it, after that
process has completed its task.
 Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a
resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn-1 is waiting for a
resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
 Resource allocation graph: deadlocks can be described more precisely in terms of a directed graph
called a system resource-allocation graph.
This graph consists of a set of vertices V and a set of edges E.
The set of vertices V is partitioned into two different types:
P – the set containing all active processes.
R – the set consisting of all resource types.
(Figure: a resource-allocation graph with processes P1, P2, P3 and resource types R1, R2, R3, R4)
Request edge: a directed edge Pi -> Rj is called a request edge.
Assignment edge: a directed edge Rj -> Pi is called an assignment edge.

14. Explain Deadlock Prevention in detail? (6 Marks) (APR’15)


Deadlock prevention:
-Deadlock occurs if each of the four necessary conditions hold.
-By ensuring that at least one of these conditions cannot hold; the occurrence of deadlock can be
prevented.
 Mutual Exclusion – not required for sharable resources; must hold for non sharable resources.


 Hold and Wait – must guarantee that whenever a process requests a resource, it does not hold
any other resources.
Require process to request and be allocated all its resources before it begins execution, or allow process
to request resources only when the process has none. Low resource utilization; starvation possible.
 No Preemption – If a process that is holding some resources requests another resource that
cannot be immediately allocated to it, then all resources currently being held are released.
Preempted resources are added to the list of resources for which the process is waiting.
Process will be restarted only when it can regain its old resources, as well as the new ones that it is
requesting.
 Circular Wait – impose a total ordering of all resource types, and require that each process
requests resources in an increasing order of enumeration.
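The circular-wait prevention idea above can be illustrated at the application level (a hypothetical sketch using Python's threading module, not from the text; the lock names are invented): by acquiring locks in one fixed global order, the cycle lock_a -> lock_b -> lock_a can never form.

```python
# Breaking circular wait by imposing a total order on lock acquisition.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
log = []

def worker(name):
    # Every thread honours the same total order: lock_a first, then lock_b.
    with lock_a:
        with lock_b:
            log.append(name)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(log))   # ['t0', 't1']
```

If one worker instead took lock_b first, the two threads could each hold one lock and wait forever for the other, which is exactly the semaphore deadlock shown in question 13.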

15. Explain Deadlock Avoidance in details? (APR 2012)


Simplest and most useful model requires that each process declare the maximum number of
resources of each type that it may need.
The deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure
that there can never be a circular-wait condition.
Resource-allocation state is defined by the number of available and allocated resources, and the
maximum demands of the processes.
Safe State:
When a process requests an available resource, system must decide if immediate allocation
leaves the system in a safe state.
System is in safe state if there exists a safe sequence of all processes.
Sequence <P1, P2…, Pn> is safe if for each Pi , the resources that Pi can still request can be
satisfied by currently available resources + resources held by all the Pj , with j < i.
If Pi resource needs are not immediately available, then Pi can wait until all Pj have finished.
When Pj is finished, Pi can obtain needed resources, execute, return allocated resources, and
terminate.
When Pi terminates, Pi+1 can obtain its needed resources, and so on.
Basic Facts:
If a system is in a safe state, no deadlock can occur. If a system is in an unsafe state, there is a
possibility of deadlock.
Avoidance ensures that a system will never enter an unsafe state.
Safe, unsafe, dead lock state

Resource-Allocation Graph Algorithm


 Claim edge Pi -> Rj indicates that process Pi may request resource Rj; it is represented by a
dashed line.
 Claim edge converts to request edge when a process requests a resource.
 When a resource is released by a process, assignment edge reconverts to a claim edge.
 Resources must be claimed a priori in the system.

Resource allocation graph for dead lock avoidance (for one instance of each resource)

Unsafe state in resource allocation graph

16. Explain Banker’s Algorithm (6) (APR 2012, NOV ‘15)


 Applicable to systems with multiple instances of each resource type.


 Each process must declare the maximum number of instance required for each resource type
upon entering the system.
 When a process requests a set of resources, the system determines whether the allocation of
these resources will have the system in a safe state.
 Yes: allocate the resources
 No: the process must wait
Data Structures for the Banker’s Algorithm
 Available: Vector of length m. If available [j] = k, there are k instances of resource type Rj
available.
 Max: n x m matrix. If Max [i,j] = k, then process Pi may request at most k instances of resource
type Rj.
 Allocation: n x m matrix. If Allocation [i,j] = k, then Pi is currently allocated k instances of Rj.
 Need: n x m matrix. If Need [i,j] = k, then Pi may need k more instances of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].

Safety Algorithm
1.Let Work and Finish be vectors of length m and n, respectively.
Initialize Work = Available
Finish [i] = false for i = 1, 2, 3, …, n.
2. Find an i such that both:
(a) Finish [i] = false
(b) Needi<= Work
If no such i exists, go to step 4.
3. Work = Work + Allocation i
Finish[i] = true go to step 2.
4. If Finish [i] == true for all i, then the system is in a safe state.
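The four steps above can be transcribed directly into Python (an illustrative sketch; the example matrices at the bottom are hypothetical data, not from the text):

```python
# Safety algorithm: Work starts at Available; repeatedly find an
# unfinished process whose Need fits in Work, pretend it runs to
# completion, and reclaim its Allocation (step 3).
def is_safe(available, allocation, need):
    work = list(available)
    finish = [False] * len(allocation)
    order = []                               # one possible safe sequence
    progressed = True
    while progressed:                        # steps 2-3 until no i found
        progressed = False
        for i in range(len(allocation)):
            if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                order.append(i)
                progressed = True
    return all(finish), order                # step 4

# Hypothetical example: 5 processes, 3 resource types, Available = (3,3,2).
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
safe, seq = is_safe([3, 3, 2], allocation, need)
print(safe, seq)   # True [1, 3, 4, 0, 2]
```

Here <P1, P3, P4, P0, P2> is one safe sequence; the algorithm only guarantees *some* safe sequence exists, not a unique one.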

Resource-Request Algorithm for Process Pi


Requesti = request vector for process Pi.
If Request [j] = k then process Pi wants k instances of resource type Rj.
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has
exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate requested resources to Pi by modifying the state as follows:


Available = Available – Requesti;


Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
 If safe, the resources are allocated to Pi.
 If unsafe, Pi must wait, and the old resource-allocation state is restored.

17. Explain Deadlock Detection (10 Marks)?


In deadlock detection, the following methods are used:
 Allow system to enter deadlock state
 Detection algorithm
 Recovery scheme
Single Instance of Each Resource Type
 Maintain a wait-for graph; nodes are processes.
 An edge Pi -> Pj exists if Pi is waiting for Pj.
 Periodically invoke an algorithm that searches for a cycle in the graph.
 An algorithm to detect a cycle in a graph requires an order of n2 operations, where n is the
number of vertices in the graph.
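A cycle search over the wait-for graph can be sketched as a depth-first search (an illustrative example, not from the text; the adjacency dictionary maps each process to the processes it waits for):

```python
# Single-instance deadlock detection: DFS for a cycle in the
# wait-for graph, where an edge Pi -> Pj means Pi waits for Pj.
def has_cycle(wait_for):
    visiting, finished = set(), set()
    def dfs(p):
        if p in visiting:
            return True              # back edge: a cycle, i.e. deadlock
        if p in finished:
            return False
        visiting.add(p)
        if any(dfs(q) for q in wait_for.get(p, ())):
            return True
        visiting.remove(p)
        finished.add(p)
        return False
    return any(dfs(p) for p in list(wait_for))

print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"]}))                # False
```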
Several Instances of a Resource Type
(Figure: a resource-allocation graph and its corresponding wait-for graph)


 Available: A vector of length m indicates the number of available resources of each type.
 Allocation: An n x m matrix defines the number of resources of each type currently allocated to
each process.
 Request: An n x m matrix indicates the current request of each process.
 If Request[i,j] = k, then process Pi is requesting k more instances of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively. Initialize as follows:
(a) Work = Available
(b) For i = 1,2, …, n,
If Allocation[i] != 0, then Finish[i] = false;
Otherwise, Finish[i] = true.
2.Find an index i such that both:
(a) Finish[i] == false
(b) Requesti<=Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish[i] == false, for some i, 1 <=i <=n, then the system is in deadlock state. Moreover,
If Finish[i] == false, then Pi is deadlocked.
Detection-Algorithm Usage
 When, and how often, to invoke the detection algorithm depends on:
 How often a deadlock is likely to occur.
 How many processes will need to be rolled back (one for each disjoint cycle).
 If detection algorithm is invoked arbitrarily, there may be many cycles in the resource graph and
so we would not be able to tell which of the many deadlocked processes “caused” the deadlock.

18. Write short notes on Recovery from Deadlock (5 Marks)


Process Termination
 Abort all deadlocked processes.
 Abort one process at a time until the deadlock cycle is eliminated.
 In which order should we choose to abort? Factors include:
 Priority of the process.


 How long process has computed, and how much longer to completion.
 Resources the process has used.
 Resources process needs to complete.
 How many processes will need to be terminated.
 Is the process interactive or batch?
Recovery from Deadlock:
Resource Preemption
 Selecting a victim – minimize cost.
 Rollback – return to some safe state, restart process for that state.
 Starvation – same process may always be picked as victim, include number of rollback in cost
factor.
19. Consider the following snapshot of a system: (10)

     Allocation   Max       Available
     A B C D      A B C D   A B C D
P0   0 0 1 1      0 0 1 1   1 5 2 2
P1   1 0 0 1      1 7 5 1
P2   1 3 5 1      2 3 5 2
P3   0 5 3 1      1 6 5 2
P4   0 0 1 1      5 6 5 1

Answer the following using banker’s algorithm


a. What is the content of the matrix Need?
b. Is the system in a safe state?

a. the content of the matrix Need


A B C D
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 1
P3 1 1 2 1
P4 5 6 4 0

b. Is the system in a safe state?

No, the system is not in a safe state. Running the safety algorithm, P0, P2, P3 and P1 can finish in
that order, leaving Work = (3, 13, 11, 6), but P4's need (5, 6, 4, 0) can never be satisfied, so no safe
sequence exists.
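Both parts can be checked mechanically (a sketch, not part of the original answer): Need = Max - Allocation, followed by the safety algorithm from question 16 applied to this snapshot.

```python
# Verifying parts (a) and (b) for the snapshot above.
allocation = [[0,0,1,1], [1,0,0,1], [1,3,5,1], [0,5,3,1], [0,0,1,1]]
maximum    = [[0,0,1,1], [1,7,5,1], [2,3,5,2], [1,6,5,2], [5,6,5,1]]
available  = [1, 5, 2, 2]

# (a) Need = Max - Allocation, element by element.
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

# (b) Safety algorithm: grow Work until no further process can finish.
work, finish = list(available), [False] * 5
progressed = True
while progressed:
    progressed = False
    for i in range(5):
        if not finish[i] and all(n <= w for n, w in zip(need[i], work)):
            work = [w + a for w, a in zip(work, allocation[i])]
            finish[i] = True
            progressed = True

print(need[4])      # [5, 6, 4, 0] -> P4's need can never be met
print(all(finish))  # False: the state is not safe
```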
20 . Explain in detail about Process Scheduling in linux?
Linux uses two process-scheduling algorithms:
1.A time-sharing algorithm for fair preemptive scheduling between multiple processes
2. A real-time algorithm for tasks where absolute priorities are more important than fairness


A process's scheduling class defines which algorithm to apply. For time-sharing processes, Linux
uses a prioritized, credit-based algorithm. The crediting rule factors in both the process's history and
its priority. This crediting system automatically prioritizes interactive or I/O-bound processes. Linux
implements the FIFO and round-robin real-time scheduling classes; in both cases, each process has a
priority in addition to its scheduling class.
The scheduler runs the process with the highest priority; for equal-priority processes, it runs the
process that has been waiting longest. FIFO processes continue to run until they either exit or block. A
round-robin process will be preempted after a while and moved to the end of the scheduling queue, so
round-robin processes of equal priority automatically time-share between themselves.
Symmetric Multiprocessing
Linux 2.0 was the first Linux kernel to support SMP hardware; separate processes or threads can
execute in parallel on separate processors. To preserve the kernel’s nonpreemptible synchronization
requirements, SMP imposes the restriction, via a single kernel spin lock, that only one processor at a
time may execute kernel-mode code
Scheduling is the job of allocating CPU time to different tasks within an operating system. While
scheduling is normally thought of as the running and interrupting of processes, in Linux scheduling
also includes the running of the various kernel tasks. Running kernel tasks encompasses both tasks
that are requested by a running process and tasks that execute internally on behalf of a device driver.
Later kernels use a new scheduling algorithm – preemptive and priority-based – with:
 A real-time range
 A nice value
Kernel Synchronization
A request for kernel-mode execution can occur in two ways:
1. A running program may request an operating system service, either explicitly via a system
call, or implicitly, for example, when a page fault occurs
2. A device driver may deliver a hardware interrupt that causes the CPU to start executing a
kernel-defined handler for that interrupt.
Kernel synchronization requires a framework that will allow the kernel’s critical sections to run
without interruption by another critical section.
Linux uses two techniques to protect critical sections:
1. Normal kernel code is nonpreemptible – when a timer interrupt is received while a process
is executing a kernel system-service routine, the kernel's need_resched flag is set so that the
scheduler will run once the system call has completed and control is about to be returned to
user mode.


2. The second technique applies to critical sections that occur in an interrupt service routine By using
the processor’s interrupt control hardware to disable interrupts during a critical section, the kernel
guarantees that it can proceed without the risk of concurrent access of shared data structures.
To avoid performance penalties, Linux’s kernel uses a synchronization architecture that allows
long critical sections to run without having interrupts disabled for the critical section’s entire duration.
Interrupt service routines are separated into a top half and a bottom half.
The top half is a normal interrupt service routine and runs with recursive interrupts disabled. The
bottom half is run, with all interrupts enabled, by a miniature scheduler that ensures that bottom
halves never interrupt themselves. This architecture is completed by a mechanism for disabling
selected bottom halves while executing normal, foreground kernel code.
Interrupt Protection Levels

Top-half interrupt handlers
Bottom-half interrupt handlers
Kernel system service routines (preemptible)
User-mode programs (preemptible)

Fig 5.2 Interrupt protection levels


Each level may be interrupted by code running at a higher level, but will never be interrupted by
code running at the same or a lower level. User processes can always be preempted by another process
when a time-sharing scheduling interrupt occurs.

2 Marks
1. What is a Dispatcher?
2. Define throughput?
5. Define Aging and starvation?
6. What is context switch?
7. What are the benefits of multithreaded programming?
9. What is a thread?
10. Define CPU scheduling
12. Define Medium Term Scheduler
14. What is the difference between process and thread?
15. What are the uses of job queues, ready queue and device queue?


16. Compare user threads and kernel threads?


17. What are the requirements that a solution to the critical section problem must satisfy?

18. Write about real time scheduling?

19. Define deadlock

20. What are conditions under which a deadlock situation may arise?

21. Write the three ways to deal the deadlock problem?

11 MARKS

1. Write short notes on CPU scheduler?


2. Explain FCFS?
3. Explain Multi level queue scheduling?
4. Explain briefly about round robin scheduling with diagram.
5. Consider the following set of processes with the length of the CPU burst given in milliseconds

Process Burst time Priority


P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

The processes arrived in the order P1,P2,P3,P4,P5 at time 0.


(a) Draw Gantt charts that illustrate the execution of these processes using the following scheduling
algorithms: FCFS, SJF, non-preemptive priority (a smaller priority number implies higher
priority) and RR (quantum=1) (3)
(b) What is the turnaround time of all processes for each scheduling algorithm in (a)? (2)
(c) What is the waiting time of each process for each of the scheduling algorithms in (a)? (2)
(d) Which of the algorithms in part (a) results in the minimum average waiting time (over all
processes)? (2)
6. Calculate average waiting time and average turnaround time for the following algorithms .
(a)FCFS

(b) Preemptive SJF(SRTF)


( c ) Round robin(quantum = 1ms)

Process Arrival time Burst time


P1 0 7
P2 1 3
P3 2 8
P4 3 5

7. Write short notes on Semaphores and its types with an example


8. Consider the following set of processes with the length of the CPU burst given in milliseconds

Process Burst time Priority


P1 10 3
P2 1 1
P3 2 3
P4 1 4
P5 5 2

The processes arrived in the order P1,P2,P3,P4,P5 at time 0.


(c) Draw Gantt charts that illustrate the execution of these processes using the following scheduling
algorithms: FCFS, SJF, non-preemptive priority (a smaller priority number implies higher
priority)
(d) What is the turnaround time of all processes for each scheduling algorithm?
(Ref.Pg.No.42 Qn.No.22)
9. Consider the following set of processes with the length of the CPU burst given in milliseconds (11)
(NOV ’18)

Process Burst time


P1 6
P2 10
P3 3
P4 4
P5 2
a)Draw the Gantt charts illustrating execution of these processes for round robin scheduling
(quantum=2)& FCFS
b) Calculate waiting time for each process for each scheduling algorithm.


c) Calculate average waiting time for each scheduling algorithm.


Consider all processes arrive in order P1,P2,P3,P4,P5 at time zero.

10. Explain Deadlock Avoidance in details?


11. Describe bankers algorithms?
12. Explain Deadlock Detection?
13. Write short notes on Recovery from Deadlock?
14. Consider the following snapshot of a system: (UQ NOV’13) (Ref.Pg.No.21 Qn.No.12)
Allocation Max Available
A B C D A B C D A B C D
Po 0 0 1 2 0 0 1 2 1 5 2 0
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (0, 4, 2, 0), can the request be granted immediately?

15. Explain storage management .A system has 2 A resources 3 B and 6 C resources .5 processes their
current allocation and their maximum allocation are shown below. Is the system in a safe state? If so
,show one sequence of processes which allow the system to complete .If not, explain why.
Allocation Max
A B C A B C
Po 0 0 2 2 0 3
P1 1 1 0 2 3 5
P2 0 0 1 1 2 3
P3 1 0 0 2 0 3
P4 0 0 2 0 1 5
16. Explain Deadlock Prevention in detail?
17. What are the various address translation mechanisms used in paging?
18. Consider the following snapshot of a system:

     Allocation   Max       Available
     A B C D      A B C D   A B C D
P0   0 0 1 1      0 0 1 1   1 5 2 2
P1   1 0 0 1      1 7 5 1
P2   1 3 5 1      2 3 5 2
P3   0 5 3 1      1 6 5 2
P4   0 0 1 1      5 6 5 1

Answer the following using banker’s algorithm


a. What is the content of the matrix Need?
b. Is the system in a safe state?
