OS Unit 4

The document covers CPU scheduling and deadlock, detailing various scheduling types and algorithms such as FCFS, SJF, RR, and priority scheduling. It discusses the objectives and goals of process scheduling, including CPU utilization, fairness, and minimizing wait times. Additionally, it addresses the critical section problem and the conditions leading to deadlocks, along with strategies for prevention and recovery.


CM3101

Unit 4 - CPU SCHEDULING AND DEADLOCK

4.1 Scheduling types - Scheduling objectives, CPU and I/O burst cycles, Pre-emptive, Non-pre-emptive.
4.2 Types of scheduling algorithms - First Come First Served (FCFS), Shortest Job First (SJF), Shortest Remaining Time (SRTN), Round Robin (RR), Priority scheduling, Multilevel queue scheduling.
4.3 Critical section problem.
4.4 Deadlock - System models, Necessary conditions leading to deadlocks, Deadlock handling - Prevention, avoidance and recovery.
****************************************************************
Scheduling
• The scheduler (dispatcher) is the module that gets invoked when a
context switch needs to happen and manipulates the queues, moving jobs
to and fro.
• The scheduling algorithm determines which jobs are chosen to run next
and what queues they wait on.
• In general, the scheduler runs:
• When a job switches states (running, waiting, etc.)
• When an interrupt occurs
• When a job is created or terminated
• Scheduling algorithms operate in two contexts:
• A preemptive scheduler can interrupt a running job.
• A non-preemptive scheduler waits for the running job to block.
• Many potential goals of scheduling algorithms
• Utilization, throughput, wait time, response time, etc.
• Various algorithms to meet these goals -FCFS/FIFO, SJF, Priority, RR.

CPU Scheduling
• CPU scheduling is the process of switching the CPU among various processes; by doing this, the operating system can make the computer more productive.

1|Page 186_Shraddha Keshao Bonde_N2



• CPU scheduling is the basis of multi-programmed operating systems.


• Basic Concepts - The idea of multiprogramming is relatively simple. A
process is executed until it must wait, typically for the completion of
some I/O request. In a simple computer system, the CPU would then just
sit idle.
• Scheduling is a fundamental operating-system function. Almost all
computer resources are scheduled before use.
• This unit describes various CPU-scheduling algorithms and discusses evaluation criteria for selecting a CPU-scheduling algorithm for a particular system.
• Maximum CPU utilization obtained with multiprogramming.
• Selects from among the processes in memory that are ready to execute,
and allocates the CPU to one of them
• CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state (for example, an I/O request, or an invocation of wait for the termination of a child process).
2. Switches from the running to the ready state (for example, when an interrupt occurs).
3. Switches from the waiting to the ready state (for example, on completion of I/O).
4. Terminates.
• Scheduling under circumstances 1 and 4 above is non-pre-emptive.
• All other scheduling (under 2 and 3) is pre-emptive.

Objectives of Process Scheduling


• Utilization of the CPU at the maximum level: keep the CPU as busy as possible.
• Allocation of the CPU should be fair.
• Throughput should be maximum, i.e. the number of processes that complete their execution per time unit should be maximized.
• Minimum turnaround time, i.e. the time taken by a process to finish execution should be the least.
• There should be minimum waiting time, and no process should starve in the ready queue.
• Minimum response time: the time at which a process produces its first response should be as short as possible.


Scheduling Criteria


• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and arrival
time.
Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turnaround time and burst
time.
Waiting Time = Turn Around Time – Burst Time
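The two formulas above can be applied directly. A minimal sketch in Python (the times below are made up for illustration):

```python
# Hypothetical process: arrives at t=2, needs 5 ms of CPU, completes at t=10.
arrival_time = 2
burst_time = 5
completion_time = 10

# Turn Around Time = Completion Time - Arrival Time
turnaround_time = completion_time - arrival_time  # 8

# Waiting Time = Turn Around Time - Burst Time
waiting_time = turnaround_time - burst_time       # 3

print(turnaround_time, waiting_time)  # 8 3
```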

CPU and I/O Burst Cycle


• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU
execution and I/O wait
• The distribution of CPU burst durations varies from workload to workload.
• Processes alternate back and forth between these two states
• Process execution begins with a CPU burst, followed by an I/O burst,
then another CPU burst ... etc
• The last CPU burst will end with a system request to terminate execution
rather than with another I/O burst.
• The durations of these CPU bursts have been measured extensively.
• An I/O-bound program typically has many short CPU bursts, while a CPU-bound program might have a few very long CPU bursts.
• This can help to select an appropriate CPU-scheduling algorithm.


Pre-emptive Scheduling
• Preemptive scheduling is used when a process switches from running
state to ready state or from waiting state to ready state.
• The resources (mainly CPU cycles) are allocated to the process for the
limited amount of time and then is taken away, and the process is
again placed back in the ready queue if that process still has CPU
burst time remaining.
• That process stays in ready queue till it gets next chance to execute.

Non-Pre-emptive Scheduling
• Non-pre-emptive scheduling is used when a process terminates, or a process switches from the running to the waiting state.
• In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state.
• Non-pre-emptive scheduling does not interrupt a process running on the CPU in the middle of its execution.
• Instead, it waits till the process completes its CPU burst time and then allocates the CPU to another process.

Preemptive Scheduling Vs Non-Preemptive Scheduling


Types of scheduling algorithms

Types of Scheduling Algorithms


1. First-Come, First-Served (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time (SRTN) Scheduling
4. Round-Robin (RR) Scheduling
5. Priority Scheduling
6. Multilevel Queue Scheduling

First Come First Serve (FCFS)


• In FCFS scheduling-
• The process which arrives first in the ready queue is assigned the CPU first.
• In case of a tie, the process with the smaller process id is executed first.
• Jobs are executed on a first come, first served basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.

• Its implementation is based on FIFO queue.


• Poor in performance as average wait time is high.

Advantages-
• It is simple and easy to understand.
• It can be easily implemented using queue data structure.
• It does not lead to starvation.
Disadvantages-
• It does not consider the priority or burst time of the processes.
• It suffers from the convoy effect, i.e. processes with smaller burst time have to wait behind processes with higher burst time that arrived earlier.

FCFS Example 1
Consider the processes P1, P2, P3 given in the below table, arriving for execution in the same order, all with Arrival Time 0 and the given Burst Times:

Process   Arrival Time   Burst Time
P1        0 ms           24 ms
P2        0 ms           3 ms
P3        0 ms           3 ms

Solution: Gantt chart: P1 (0-24), P2 (24-27), P3 (27-30)

Total Wait Time = 0 + 24 + 27 = 51 ms

Average Waiting Time = (Total Wait Time) / (Total number of processes) = 51/3 = 17 ms


Total Turn Around Time: 24 + 27 + 30 = 81 ms


Average Turn Around time = (Total Turn Around Time) / (Total number of
processes) = 81 / 3 = 27 ms
Throughput = 3 jobs / 30 ms = 0.1 jobs per ms

Consider the processes P1, P2, P3, P4 given in the below table, arriving for execution in the same order, with the given Arrival Times and Burst Times:

Process   Arrival Time   Burst Time
P1        0 ms           8 ms
P2        1 ms           4 ms
P3        2 ms           9 ms
P4        3 ms           5 ms

Solution: Gantt chart: P1 (0-8), P2 (8-12), P3 (12-21), P4 (21-26)

Total Wait Time = 0 + 7 + 10 + 18 = 35 ms


Average Waiting Time = (Total Wait Time) / (Total number of
processes)= 35/4 = 8.75 ms
Total Turn Around Time: 8 + 11 + 19 + 23 = 61 ms
Average Turn Around Time = (Total Turn Around Time) / (Total number of processes) = 61/4 = 15.25 ms
Throughput = 4 jobs / 26 ms ≈ 0.154 jobs per ms
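The FCFS computation in the examples above can be sketched in code. A minimal Python simulation; the demo call reuses the arrival and burst times that produce the wait and turnaround figures above:

```python
def fcfs(processes):
    """Simulate FCFS scheduling.
    `processes`: list of (name, arrival_time, burst_time), already in
    arrival order. Returns {name: (waiting_time, turnaround_time)}."""
    time = 0
    results = {}
    for name, arrival, burst in processes:
        time = max(time, arrival)          # CPU idles until the job arrives
        waiting = time - arrival           # Waiting Time = start - arrival
        time += burst                      # run the job to completion
        results[name] = (waiting, waiting + burst)  # TAT = WT + BT
    return results

res = fcfs([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(res)  # waiting times 0, 7, 10, 18; turnaround times 8, 11, 19, 23
```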


Shortest Job First (SJF)


• This algorithm associates with each process the length of its next CPU
burst. Use these lengths to schedule the process with the shortest time.
• When the CPU is available, it is assigned to the process that has the smallest next CPU burst (hence SJF is also called the shortest-next-CPU-burst algorithm). It is the best approach to minimize waiting time.
• If two processes have the same length next CPU burst, FCFS scheduling
is used to break the tie.
• SJF is optimal – it gives the minimum average waiting time for a given set of processes (by moving a short process before a long one, the waiting time of the short process decreases more than the waiting time of the long process increases, so the average waiting time decreases).
• The difficulty with the SJF algorithm is knowing the length of the next CPU request: the processor would have to know in advance how much time the process will take.

Advantages-
• SJF is optimal and guarantees the minimum average waiting time.
• It provides a standard for other algorithms since no other algorithm
performs better than it.
Disadvantages-
• It can not be implemented practically since burst time of the processes
can not be known in advance.
• It leads to starvation for processes with larger burst time.
• Priorities can not be set for the processes.
• Processes with larger burst time have poor response time.
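Non-preemptive SJF can be sketched as follows; the burst times in the demo call are a common textbook set and are only illustrative:

```python
def sjf(processes):
    """Non-preemptive SJF: at each decision point, pick the ready process
    with the smallest next CPU burst (ties fall back to FCFS order, since
    `min` keeps the earliest-arriving candidate).
    `processes`: list of (name, arrival, burst).
    Returns {name: (waiting_time, turnaround_time)}."""
    pending = sorted(processes, key=lambda p: p[1])  # order by arrival
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                          # CPU idle until next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=lambda p: p[2])   # shortest next CPU burst
        pending.remove(job)
        name, arrival, burst = job
        waiting = time - arrival
        time += burst
        results[name] = (waiting, waiting + burst)
    return results

# All arrive at 0 with bursts 6, 8, 7, 3: execution order P4, P1, P3, P2
print(sjf([("P1", 0, 6), ("P2", 0, 8), ("P3", 0, 7), ("P4", 0, 3)]))
```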


Round Robin Scheduling (RR)


• The CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
• This fixed amount of time is called the time quantum or time slice.
• After the time quantum expires, the running process is preempted and sent to the ready queue.
• Then, the processor is assigned to the next arrived process.
• It is always preemptive in nature.
• With a decreasing value of time quantum:
→ the number of context switches increases,
→ response time decreases,
→ chances of starvation decrease.
• Thus, a smaller value of time quantum is better in terms of response time.
• With an increasing value of time quantum:
→ the number of context switches decreases,
→ response time increases,
→ chances of starvation increase.
• Thus, a higher value of time quantum is better in terms of the number of context switches.
• With an increasing value of time quantum, Round Robin scheduling tends to become FCFS scheduling.
• The performance of Round Robin scheduling heavily depends on the value of the time quantum, which should be neither too big nor too small.


Advantages-
• It gives the best performance in terms of average response time.
• It is best suited for time sharing system, client server architecture and
interactive system
Disadvantages-
• It leads to starvation for processes with larger burst time as they have to
repeat the cycle many times.
• Its performance heavily depends on time quantum.
• Priorities can not be set for the processes.
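The RR mechanism described above (run for one quantum, then requeue) can be sketched as follows. The burst times and the quantum of 4 in the demo are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin with a fixed time quantum.
    `processes`: list of (name, burst_time); all assumed to arrive at t=0.
    Returns {name: completion_time}."""
    queue = deque(processes)      # the FIFO ready queue
    time, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for at most one quantum
        time += run
        if remaining > run:
            # quantum expired: preempt and send to the back of the queue
            queue.append((name, remaining - run))
        else:
            done[name] = time
    return done

# Bursts 24, 3, 3 with quantum 4: P2 and P3 finish quickly, P1 cycles
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
```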


Priority Scheduling
• Priority Scheduling
→ Choose the next job based on priority (for example, airline check-in serves first-class passengers first).
→ SJF can be implemented as priority scheduling with priority = 1/(expected CPU burst).
→ It can be either preemptive or non-preemptive.
• Problem
→ Starvation – low priority jobs can wait indefinitely
• Solution
• “Age” processes
• Increase priority as a function of waiting time
• Decrease priority as a function of CPU consumption
• The SJF algorithm is a special case of the general priority-scheduling algorithm.
• A priority number (an integer) is associated with each process, and the CPU is allocated to the process with the highest priority (the smallest integer denotes the highest priority).
• Equal-priority processes are scheduled in FCFS order.
• There are two types of priority scheduling.
• Preemptive priority scheduling.
• Non-preemptive priority scheduling.
• SJF is a priority scheduling where priority is the predicted next CPU burst
time.
• Priorities can be defined either internally or externally.
o Internally defined priorities use some measurable quantities to
compute priority of a process. For example, time limits, memory
requirements, the number of open files etc.
o External priorities are set by criteria that are external to the operating
system, such as importance of the process and other political factors.
• Preemptive priority scheduling: A “Preemptive priority” scheduling
algorithm will preempt the CPU if the priority of the newly arrived
process is higher than the priority of the currently running process.
• Non-preemptive priority scheduling: A “non-preemptive priority” scheduling algorithm will simply put the new process at the head of the ready queue.
• Problem – “Starvation”: low-priority processes may never execute.


• Solution – “Aging”: as time progresses, increase the priority of the waiting process.

Priority Scheduling Drawback:


o Starvation
• Starvation or indefinite blocking is the major problem in priority
scheduling algorithms.
• A process that is ready to run but waiting for the CPU can be
considered blocked.
• A priority scheduling algorithm can leave some low-priority processes waiting indefinitely.
• High priority processes can prevent low- priority processes from ever
getting the CPU.
o Aging
• Aging is a solution to the problem of indefinite blocking of low –
priority processes.
• Aging is a technique of gradually increasing the priority of a process
that wait in the system for a long time.
• For example: if priorities range from 127 (low) to 0 (high), we could increase the priority of a waiting process by 1 every 15 minutes.

Priority Scheduling -Advantages &Disadvantages


Advantages-
• It considers the priority of the processes and allows the important
processes to run first.
• Priority scheduling in pre-emptive mode is best suited for real time
operating system
Disadvantages-
• Processes with lesser priority may starve for CPU.
• There is no idea of response time and waiting time.
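Non-preemptive priority scheduling, with the aging fix described above, can be sketched as follows. The aging step (one priority level per scheduling decision) and the demo priorities are assumptions for illustration:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling with simple aging.
    `processes`: list of (name, burst, priority); the smaller the integer,
    the higher the priority; all assumed to arrive at t=0.
    Each time a waiting process is passed over, its priority number is
    reduced by 1 (aging), so no process starves indefinitely.
    Returns the order in which processes are run."""
    pending = [[name, burst, prio] for name, burst, prio in processes]
    order = []
    while pending:
        pending.sort(key=lambda p: p[2])   # highest priority first
        job = pending.pop(0)               # dispatch it
        order.append(job[0])
        for p in pending:                  # age every process still waiting
            p[2] -= 1
    return order

# Hypothetical workload: P2 has the highest priority and runs first
print(priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4)]))
```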


Multilevel queue scheduling


• Multiple-level queues are not an independent scheduling algorithm. They
make use of other existing algorithms to group and schedule jobs with
common characteristics.


• Multiple queues are maintained for processes with common


characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
• For example, CPU-bound jobs can be scheduled in one queue and all I/O-
bound jobs in another queue.
• The Process Scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to the queue.
• This scheduling algorithm is created for situations in which processes are
easily classified into different groups as below-
• Foreground or interactive processes.
• Background or batch processes.
• These two types of processes have different response-time requirements
and so may have different scheduling needs.
• It is very useful for shared memory problems.
• This algorithm partitions the ready queue into several separate queues.
• The processes are permanently assigned to one queue based on properties
like process size, process priority or process type.
• Each queue has its own algorithm for example the foreground queue
might be scheduled by an RR algorithm and the background queue is
scheduled by an FCFS algorithm.
• The foreground queue has absolute priority over the background queue.
• Scheduling must be done between the queues
• Fixed priority scheduling (i.e., serve all from foreground, then from background). With fixed priority scheduling there is a possibility of starvation.
• The solution to this problem is a “time slice”: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g. 80% to the foreground queue in RR and 20% to the background queue in FCFS.
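The fixed-priority arrangement described above (foreground served round-robin, background served FCFS only when the foreground queue is empty) can be sketched as follows; queue contents and the quantum of 2 are hypothetical:

```python
from collections import deque

def multilevel_queue(foreground, background):
    """Multilevel queue with fixed priority between queues: the foreground
    queue (round-robin, quantum 2) has absolute priority over the
    background queue (FCFS). Each entry is (name, burst_time).
    Returns the execution trace as (name, start, end) slices."""
    fg, bg = deque(foreground), deque(background)
    time, trace = 0, []
    while fg or bg:
        if fg:                               # foreground first: RR
            name, rem = fg.popleft()
            run = min(2, rem)
            trace.append((name, time, time + run))
            time += run
            if rem > run:
                fg.append((name, rem - run)) # quantum expired, requeue
        else:                                # foreground empty: FCFS batch
            name, burst = bg.popleft()
            trace.append((name, time, time + burst))
            time += burst
    return trace

# The background job BG1 only runs once both foreground jobs have finished
print(multilevel_queue([("FG1", 3), ("FG2", 2)], [("BG1", 4)]))
```

Note that this sketch exhibits exactly the starvation risk the text describes: background jobs wait as long as any foreground job is runnable.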

Example of Multilevel Queue Scheduling


Let us consider an example of a multilevel
queue-scheduling algorithm with five queues:
1. System Processes
2. Interactive Processes


3. Interactive Editing Processes


4. Batch Processes
5. Student Processes
• Each queue has absolute priority over lower-priority queues.
• No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.
• If an interactive editing process entered the ready queue while a batch process was running, the batch process would be pre-empted.

Shortest Remaining Time First(SRTF)-Preemptive SJF


• The SJF algorithm may be either “preemptive” or “nonpreemptive”.
• The choice arises when a new process arrives at the ready queue while a
previous process is executing.
• The new process may have a shorter next CPU burst than what is left of
the currently executing process.
• Then a preemptive SJF algorithm will preempt the currently executing
process, where as a non-preemptive will allow the currently running
process to finish its CPU burst.
• The “preemptive” SJF scheduling is sometimes called “shortest-remaining-time-first” scheduling.
• Preemptive – if a new process arrives with CPU burst length less than
remaining time of current executing process, preempt. This scheme is
known as the Shortest-Remaining-Time-First (SRTF)

SRTF - Advantages And Disadvantages


Advantages-


• SRTF is optimal and guarantees the minimum average waiting time.
• The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead is not counted.
• Processes with a shorter burst time are executed quickly.
• Since it's a preemptive algorithm, whenever a new process is added to the
queue, it just has to compare the presently executing process and the new
one.
Disadvantages-
• Context switches occur far more often in SRTF than in SJF and consume the CPU’s valuable processing time. This adds to the processing time and reduces the advantage of fast processing.
• It has the potential for process starvation since it always selects the
shortest jobs first.
• If the shorter process is continuously added, the longer processes may be
held off indefinitely.
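SRTF can be sketched as a unit-time simulation: at every tick, the arrived process with the least remaining burst time runs. The two processes in the demo are illustrative:

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first).
    `processes`: list of (name, arrival, burst).
    Returns {name: completion_time}."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    time, done = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                  # CPU idle: nothing has arrived yet
            continue
        # The arrived process with the least remaining time wins the CPU,
        # so a newly arrived short job preempts a longer running one.
        job = min(ready, key=lambda n: remaining[n])
        remaining[job] -= 1            # run it for one time unit
        time += 1
        if remaining[job] == 0:
            del remaining[job]
            done[job] = time
    return done

# P2 arrives at t=1 with a shorter burst and preempts P1 until it finishes
print(srtf([("P1", 0, 7), ("P2", 1, 4)]))
```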


Critical Section problem


• Consider a system consisting of ‘n’ processes {P0, P1, ..., Pn-1}. Each process has a segment of code called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on.
• The important feature of the system is that -
o When one process is executing in its critical section, no other process is allowed to execute in its critical section.
o That is, no two processes are executing in their critical sections at the
same time.

Define Entry Section and Exit Section


• The critical section problem is to design a protocol
that the processes can use to cooperate.
• Each process must request permission to enter its
critical section.
• The section of the code implementing this request
is the entry section.
• The critical section is followed by an exit section.
• The remaining code is the remainder section.

• Figure: General structure of a typical process pi.

Solution To The Critical Section problem


• A solution to the critical section problem must satisfy the following three requirements:
• Mutual exclusion: If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
• Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes
that are not executing in their remainder sections can participate in
deciding which will enter its critical section next, and this selection
cannot be postponed indefinitely.
• Bounded waiting: There exists a bound, or limit, on the number of times
that other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted.
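The entry-section/critical-section/exit-section structure described above maps naturally onto a mutex lock. A minimal sketch in Python, using a shared counter as the common variable (the counter and the thread count are illustrative):

```python
import threading

counter = 0               # shared variable changed in the critical section
lock = threading.Lock()   # provides mutual exclusion

def worker():
    global counter
    for _ in range(100_000):
        lock.acquire()    # entry section: request permission to enter
        counter += 1      # critical section: update the shared variable
        lock.release()    # exit section
        # remainder section: code that never touches shared state

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: mutual exclusion makes every increment count
```

Without the lock, two threads could read the same value of `counter` and both write back the same incremented value, losing updates; the lock serializes access so no two threads are in the critical section at once.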

Deadlock
• In multiprogramming environment, several processes may compete for a
finite number of resources.
• A process requests resources; if the resources are not available at that time, the process enters a wait state. Waiting processes may never again change state, because the resources they have requested are held by other waiting processes. This situation is called a deadlock.
• When a process request for resources held by another waiting process
which in turn is waiting for resources held by another waiting process and
not a single process can execute its task, then deadlock occurs in the
system.


• Consider a system with three disk drives and three processes.


• When each process requests one disk drive, the system allocates one disk drive to each process.
• Now there are no more drives available in the system.
• If all three processes request one more disk drive, then all three processes will go into the waiting state and the system will enter a deadlock state.
• This is because any one of the three processes can execute only when one of the others releases the disk drive allocated to it.

Deadlock System Models


• A system consists of a finite number of resources to be distributed among
a number of competing processes.
• The resources may be partitioned into several types (or classes), each
consisting of some number of identical instances.
• CPU cycles, files, and I/O devices (such as printers and DVD drives) are
examples of resource types.
• If a system has two CPUs, then the resource type CPU has two instances.
• Similarly, the resource type printer may have five instances.
• Under the normal mode of operation, a process may utilize a resource in
only the following sequence:
1. Request: The process requests the resource. If the request cannot be
granted immediately (for example, if the resource is being used by
another process), then the requesting process must wait until it can
acquire the resource
2. Use: The process can operate on the resource (for example, if the
resource is a printer, the process can print on the printer).
3. Release: The process releases the resource. The request and release of
resources may be system calls. Examples are the request() and release()
device, open() and close() file, and allocate() and free() memory system
calls.

List necessary conditions for dead-locks


• Conditions under which a deadlock situation may arise are
• A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
Mutual exclusion


Hold and wait


No pre-emption
Circular wait

Necessary Conditions for dead-locks


• There are four conditions that must be met in order to achieve deadlock
as follows.
• Mutual Exclusion – At least one resource must be kept in a non-shareable
state; if another process requests it, it must wait for it to be released.
• Hold and Wait – A process must hold at least one resource while also
waiting for at least one resource that another process is currently holding.
• No preemption – Once a process holds a resource (i.e. after its request is
granted), that resource cannot be taken away from that process until the
process voluntarily releases it.
• Circular Wait – There must be a set of processes {P0, P1, P2, ..., PN} such that every Pi is waiting for P(i+1) mod (N+1). (It is important to note that this condition implies the hold-and-wait condition, but dealing with the four conditions is easier if they are considered separately.)

Methods for Handling Deadlocks


• The deadlock problem can be dealt with in one of the three ways:
• Use a protocol to prevent or avoid deadlocks, ensuring that the system
will never enter a deadlock state.
• Allow the system to enter the deadlock state, detect it and then recover.
• Ignore the problem altogether and pretend that deadlocks never occur in the system. This solution is the one used by most operating systems, including UNIX.
• Deadlock Prevention- Use a protocol to ensure that the system will never
enter a deadlock state.
• Deadlock Avoidance (Banker’s Algorithm) and
• Deadlock Detection and Recovery - Allow the system to enter the
deadlock state and then recover.
• Deadlock Ignorance (Ostrich Method) - Ignore the problem altogether and pretend that deadlocks never occur. No detection or prevention algorithm is added, since such an algorithm would impact performance and deadlocks occur rarely; this is the approach most operating systems take.


Deadlock Prevention
• Deadlock prevention is a set of methods that ensure that at least one of the four necessary conditions (mutual exclusion, hold and wait, no pre-emption and circular wait) cannot hold.
• These methods prevent deadlocks by constraining how requests for resources can be made.
• By ensuring that at least one of these conditions cannot hold, the occurrence of a deadlock can be prevented.
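One standard way to prevent deadlock is to attack the circular-wait condition by imposing a global ordering on resources. A minimal sketch in Python (lock names and thread names are illustrative):

```python
import threading

# A fixed global order on the two resources: lock_a is always requested
# before lock_b. Since every thread follows the same order, no cycle of
# waiting threads (circular wait) can ever form.
lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(name):
    with lock_a:              # always requested first
        with lock_b:          # always requested second
            results.append(name)   # critical section using both resources

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # ['t1', 't2']: both threads finished, no deadlock
```

If one thread instead took `lock_b` first while the other took `lock_a` first, each could end up holding one lock while waiting for the other: exactly the circular wait this ordering rules out.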

Deadlock Avoidance
• An alternative method for avoiding deadlocks is to require additional information about how resources are to be requested.
• For each request, the system considers the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process, to decide whether the current request can be satisfied or must wait to avoid a possible future deadlock.
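The avoidance idea above underlies the Banker's algorithm mentioned earlier. Below is a sketch of its safety check only, assuming resources are given as per-type vectors; the instance in the demo call is made up:

```python
def is_safe(available, allocation, need):
    """Banker's-algorithm safety check: return True if the processes can
    all finish in some order given the currently available resources.
    `available`: list, one entry per resource type.
    `allocation`, `need`: one list per process, same shape as `available`."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion, then releases everything
                # it currently holds back into the pool.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)   # safe iff every process could finish

# Safe state: with 3 units free, every process can eventually finish.
print(is_safe(available=[3], allocation=[[1], [2], [2]], need=[[3], [1], [5]]))
```

A request is granted only if the state that would result still passes this check; otherwise the requesting process waits.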

Deadlock Recovery
• Deadlock recovery occurs when a deadlock is detected. When a deadlock
is detected, our system stops working, and when the deadlock is resolved,
our system resumes operation.
• As a result, after detecting deadlock, a method or way must be required to
recover that deadlock in order to restart the system. The technique is
known as deadlock recovery.
• There are three basic approaches to recovering from a deadlock:
o Inform the system operator and give him/her permission to intervene
manually.
o Stop one or more of the processes involved in the deadlock.
o Prevent the use of resources.
• There are three basic methods of deadlock recovery:
1. Deadlock recovery through preemption–
o The ability to take a resource away from a process, have another
process use it, and then give it back without the process noticing. It
is highly dependent on the nature of the resource.
o Deadlock recovery through preemption is too difficult or
sometimes impossible.

2. Deadlock recovery through rollback-


o Whenever a deadlock is detected, it is easy to see which resources are needed.
o To recover from the deadlock, a process that owns a needed resource is rolled back to a point in time before it acquired that resource, by restarting it from one of its earlier checkpoints.
3. Deadlock recovery through killing processes–
o This method of deadlock recovery via killing processes is the most
basic method of deadlock recovery.
o Sometimes it is best to kill a process that can be restarted from the
beginning with no ill effects.

***************************************************************
