
OPERATING SYSTEMS

OSY - 22516

UNIT-IV
CPU SCHEDULING & ALGORITHMS

Mr. Naresh A. Kamble


POINTS TO BE COVERED
• INTRODUCTION
• SCHEDULING TYPES
• TYPES OF SCHEDULING ALGORITHM
• DEADLOCK
INTRODUCTION
• A single-processor system allows only one process to run at a time.

• The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization.

• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.

• CPU burst distribution.


SCHEDULING TYPES
• Definition of Scheduling

• "The determination of the time that is required to perform each operation and also the time required to perform the entire series of operations as routed."
SCHEDULING TYPES
• Scheduling objectives
1. To generate proper output, the sequence of operations must be properly planned.

2. To minimize the total processing time through better resource utilization.

3. To maximize capacity utilization and reduce processing cost by minimizing the idleness of machines and manpower.

4. To avoid unbalanced allocation of work among the software and hardware.
SCHEDULING TYPES
• CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait.
• CPU Bound Process – A process that spends more time in computation (with the CPU) and very rarely uses the I/O devices is called a CPU bound process.
• I/O Bound Process – A process that spends more time in I/O operations than in computation (time spent with the CPU) is called an I/O bound process.
SCHEDULING CRITERIA
• Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – number of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process
  – waiting to get into memory + waiting in the ready queue + executing on the CPU + I/O
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)
SCHEDULING TYPES
• Types of Scheduling
• A. Pre-Emptive Scheduling
• Allows a higher-priority process to replace the currently running process.

• This scheduling is based on priority.

• It allows real multiprogramming.

• But it is more complex and can lead the system to race conditions.
SCHEDULING TYPES
• B. Non-Preemptive Scheduling
• Once the CPU is assigned to a process, the processor is not released until that process completes.

• This scheduling assigns the CPU to only one process at a time.

• It is simple and cannot lead the system to race conditions.

• But it does not allow real multiprogramming.


SCHEDULING TYPES
• Dispatcher (Courier Boy)
• Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to
restart that program
SCHEDULING ALGORITHMS
1. FIRST COME FIRST SERVE (FCFS) SCHEDULING ALGORITHM

2. SHORTEST JOB FIRST (SJF) SCHEDULING ALGORITHM

3. SHORTEST REMAINING TIME NEXT (SRTN) SCHEDULING ALGORITHM

4. PRIORITY SCHEDULING ALGORITHM

5. ROUND ROBIN (RR) SCHEDULING ALGORITHM

6. MULTI LEVEL QUEUE (MLQ) SCHEDULING ALGORITHM

SCHEDULING ALGORITHMS
• FIRST COME FIRST SERVE (FCFS) SCHEDULING ALGORITHM
• The process that requests the CPU first is allocated the CPU first.

• The FCFS policy is easily managed with a FIFO queue.

• When a process enters the ready queue, its PCB is linked onto the tail of the queue.

• When the CPU is free, it is allocated to the process at the head of the queue.

• The running process is then removed from the queue.

SCHEDULING ALGORITHMS
PROCESS   CPU BURST TIME
P1        24
P2        3
P3        3

• Suppose that the processes arrive in the order: P1, P2, P3.
  The Gantt chart for the schedule is:

  P1    P2    P3
  0     24    27    30

• Waiting time for P1 = 0; P2 = 24; P3 = 27

• Average waiting time: (0 + 24 + 27)/3 = 17
SCHEDULING ALGORITHMS
• Suppose that the processes arrive in the order: P2, P3, P1.
• The Gantt chart for the schedule is:

  P2    P3    P1
  0     3     6     30

• Waiting time for P1 = 6; P2 = 0; P3 = 3

• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than the previous case
• Convoy effect: short processes wait behind a long process
• Advantage: relatively simple algorithm. It is non-preemptive.
• Disadvantage: long waiting times
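
A minimal Python sketch of the FCFS arithmetic above (the function name and process lists are ours, added for illustration): each process simply waits for the total burst time of everything queued ahead of it, so reordering the queue changes the average waiting time exactly as in the two cases shown.

# FCFS sketch: processes are served strictly in queue order, so each one
# waits for the sum of the bursts ahead of it (all arrivals assumed at time 0).
def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # waits until the CPU becomes free
        clock += burst        # then runs to completion (non-preemptive)
    return waits

order1 = fcfs_waiting_times([24, 3, 3])   # order P1, P2, P3 -> [0, 24, 27], average 17
order2 = fcfs_waiting_times([3, 3, 24])   # order P2, P3, P1 -> [0, 3, 6], average 3
print(order1, sum(order1) / len(order1))
print(order2, sum(order2) / len(order2))
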
SCHEDULING ALGORITHMS
• SHORTEST JOB FIRST (SJF) SCHEDULING ALGORITHM
• This algorithm associates with each process the length of the process's next CPU burst.

• When the CPU is available, it is assigned to the process that has the smallest next CPU burst.

• If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
SCHEDULING ALGORITHMS
• The job with the shortest next CPU burst time is selected.

• Example:
  – CPU job burst times and ready queue order:
    • P1: 20
    • P2: 12
    • P3: 8
    • P4: 16
    • P5: 4

  – Draw the Gantt chart and compute the average waiting time given SJF CPU scheduling
  – Assume 0 context switch time
SCHEDULING ALGORITHMS
• Waiting times (how long did each process wait before being scheduled on the CPU?):
  P1: 40, P2: 12, P3: 4, P4: 24, P5: 0

  P5    P3    P2    P4    P1
  0     4     12    24    40    60

Average wait time: (0 + 4 + 12 + 24 + 40)/5 = 80/5 = 16
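
A small non-preemptive SJF sketch for this example, assuming (as the slide does) that all five jobs are already in the ready queue at time 0; the helper name and dictionary are ours. Sorting by burst length gives exactly the Gantt order P5, P3, P2, P4, P1.

# Non-preemptive SJF sketch: with everything ready at time 0, the schedule is
# simply the jobs in increasing order of CPU burst.
def sjf_waiting_times(bursts):
    waits, clock = {}, 0
    for name, burst in sorted(bursts.items(), key=lambda item: item[1]):
        waits[name] = clock
        clock += burst
    return waits

jobs = {"P1": 20, "P2": 12, "P3": 8, "P4": 16, "P5": 4}
waits = sjf_waiting_times(jobs)
print(waits)                              # {'P5': 0, 'P3': 4, 'P2': 12, 'P4': 24, 'P1': 40}
print(sum(waits.values()) / len(waits))   # 16.0
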
SCHEDULING ALGORITHMS
• TYPES OF SJF SCHEDULING ALGORITHM

1. Preemptive
If a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).

2. Non-preemptive
Once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
SCHEDULING ALGORITHMS
• SJF-PREEMPTIVE ALGORITHM

PROCESS   ARRIVAL TIME   BURST TIME   REMAINING TIME   STATUS
P1        0              7            5                SCHEDULED
P2        2              4            2                SCHEDULED
P3        4              1            0                DONE
P4        5              4            0                DONE

  P1    P2    P3    P2    P4    P1
  0     2     4     5     7     11    16

• Average waiting time = [ (16-7-0) + (7-4-2) + (5-1-4) + (11-4-5) ] = [9+1+0+2] = 12/4 = 3 ms

SCHEDULING ALGORITHMS
• SJF-NON-PREEMPTIVE ALGORITHM

PROCESS   ARRIVAL TIME   BURST TIME
P1        0              10
P2        1              5
P3        2              8
P4        3              15

  P1    P2    P3    P4
  0     10    15    23    38

• Average waiting time = [ (0) + (10-1) + (15-2) + (23-3) ] = [0+9+13+20] = 42/4 = 10.5 ms

SCHEDULING ALGORITHMS
• Example 1 – SJF – Preemptive

PROCESS   ARRIVAL TIME   BURST TIME
P1        0              7
P2        1              4
P3        2              10
P4        3              6
P5        4              8

  P1    P2    P1    P4    P5    P3
  0     1     5     11    17    25    35

(P1 is preempted at t = 1 by P2, which has the shortest remaining time; after P2 finishes at t = 5, P1 has the shortest remaining time, then P4, P5, and finally P3.)

Average waiting time = [ (11-7-0) + (5-4-1) + (35-10-2) + (17-6-3) + (25-8-4) ] = [4+0+23+8+13] = 48/5 = 9.6 ms

SCHEDULING ALGORITHMS
• Example 1 – SJF – Non-Preemptive

PROCESS   ARRIVAL TIME   BURST TIME
P1        0              7
P2        1              4
P3        2              10
P4        3              6
P5        4              8

  P1    P2    P4    P5    P3
  0     7     11    17    25    35

(Only P1 has arrived at t = 0, so it runs to completion; at t = 7 the shortest waiting job is P2, then P4, P5, and finally P3.)

Average waiting time = [ (0) + (7-1) + (25-2) + (11-3) + (17-4) ] = [0+6+23+8+13] = 50/5 = 10 ms

SCHEDULING ALGORITHMS
• Example 2 – SJF – Preemptive

PROCESS   ARRIVAL TIME   BURST TIME   REMAINING TIME   STATUS
P1        0              8            7, 0             SCHEDULED
P2        1              4            3, 2, 1, 0       SCHEDULED
P3        2              9            0                DONE
P4        3              5            0                DONE

  P1    P2    P4    P1    P3
  0     1     5     10    17    26

Average waiting time = [ (17-8-0) + (5-4-1) + (26-9-2) + (10-5-3) ] = [9+0+15+2] = 26/4 = 6.5 ms


SCHEDULING ALGORITHMS
• Example 2 – SJF – Non-Preemptive

PROCESS   ARRIVAL TIME   BURST TIME
P1        0              8
P2        1              4
P3        2              9
P4        3              5

  P1    P2    P4    P3
  0     8     12    17    26

(Only P1 has arrived at t = 0, so it runs to completion; at t = 8 the shortest waiting job is P2, then P4, and P3 runs last.)

Average waiting time = [ (0) + (8-1) + (17-2) + (12-3) ] = [0+7+15+9] = 31/4 = 7.75 ms


SCHEDULING ALGORITHMS
• SHORTEST REMAINING TIME NEXT SCHEDULING (SRTN)

• In preemptive SJF, if a newly arrived process has less execution time remaining than the currently executing process, the CPU is given to the newly arrived process.

• This is called SRTN.


SCHEDULING ALGORITHMS
• EXAMPLE – SRTN

PROCESS   ARRIVAL TIME   BURST TIME   REMAINING TIME   STATUS
P1        0              10           9                SCHEDULED
P2        1              5            0                DONE
P3        2              8            0                DONE
P4        3              15           0                DONE

  P1    P2    P3    P1    P4
  0     1     6     14    23    38

• Average waiting time = [ (23-10-0) + (6-5-1) + (14-8-2) + (38-15-3) ] = [13+0+4+20] = 37/4 = 9.25 ms
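
A small step-by-step SRTN simulation, run on the example above; the function name and tuple layout are ours. At every time unit it picks the ready process with the least remaining burst, which reproduces the Gantt chart and the waiting times just computed.

def srtn(processes):
    # processes: list of (name, arrival, burst); returns name -> waiting time
    remaining = {name: burst for name, arrival, burst in processes}
    finish, clock = {}, 0
    while remaining:
        ready = [(name, arrival) for name, arrival, burst in processes
                 if arrival <= clock and name in remaining]
        if not ready:
            clock += 1
            continue
        # run the ready process with the least remaining time (ties: earlier arrival)
        name = min(ready, key=lambda p: (remaining[p[0]], p[1]))[0]
        remaining[name] -= 1
        clock += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = clock
    return {name: finish[name] - arrival - burst for name, arrival, burst in processes}

waits = srtn([("P1", 0, 10), ("P2", 1, 5), ("P3", 2, 8), ("P4", 3, 15)])
print(waits)                              # {'P1': 13, 'P2': 0, 'P3': 4, 'P4': 20}
print(sum(waits.values()) / len(waits))   # 9.25
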


SCHEDULING ALGORITHMS
• PRIORITY SCHEDULING ALGORITHM

• SJF is a special case of the general priority scheduling algorithm (the priority is the inverse of the next CPU burst).

• A priority is associated with each process, and the CPU is allocated to the process with the highest priority.

• Equal-priority processes are scheduled in FCFS order.

SCHEDULING ALGORITHMS
• EXAMPLE (a smaller priority number means higher priority)

PROCESS   BURST TIME   PRIORITY
P1        12           4
P2        10           5
P3        5            2
P4        4            1
P5        3            3

  P4    P3    P5    P1    P2
  0     4     9     12    24    34

• Average waiting time = (0 + 4 + 9 + 12 + 24)/5 = 49/5 = 9.8 ms
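
A non-preemptive priority scheduling sketch for this example (the function name and tuples are ours); all processes are assumed ready at time 0 and a smaller priority number means higher priority, so sorting by the priority field reproduces the Gantt order P4, P3, P5, P1, P2.

def priority_waiting_times(processes):
    # processes: list of (name, burst, priority); lower number = higher priority
    waits, clock = {}, 0
    for name, burst, priority in sorted(processes, key=lambda p: p[2]):
        waits[name] = clock
        clock += burst
    return waits

procs = [("P1", 12, 4), ("P2", 10, 5), ("P3", 5, 2), ("P4", 4, 1), ("P5", 3, 3)]
waits = priority_waiting_times(procs)
print(waits)                              # {'P4': 0, 'P3': 4, 'P5': 9, 'P1': 12, 'P2': 24}
print(sum(waits.values()) / len(waits))   # 9.8
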


SCHEDULING ALGORITHMS
• Priorities can be defined as

• 1. Internal Priority
– Use some measurable quantity or quantities to compute
the priority of a process.
– Like time limits, memory requirement, number of open
files.

• 2. External Priority
– They are set by criteria outside the operating system.
– Like importance of process.
SCHEDULING ALGORITHMS
• The major disadvantage of the priority scheduling algorithm is indefinite blocking or starvation.

• A process that is ready to run but waiting for the CPU can be considered blocked.

• A solution to the problem of indefinite blockage of low-priority processes is aging.

• Aging gradually increases the priority of processes that wait in the system for a long time.
SCHEDULING ALGORITHMS
• ROUND ROBIN (RR) SCHEDULING ALGORITHM
• The Round Robin (RR) scheduling algorithm is designed especially for time-sharing systems.

• It is similar to FCFS scheduling, but preemption is added to enable the system to switch between processes.

• A small unit of time, called a time quantum or time slice, is defined.

• A time quantum is generally from 10 to 100 milliseconds in length.

• The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
SCHEDULING ALGORITHMS
• WORKING
• To implement RR scheduling, the ready queue is kept as a FIFO queue of processes.

• New processes are added to the tail of the ready queue.

• The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
SCHEDULING ALGORITHMS
• EXAMPLE (time quantum = 20)

PROCESS   BURST TIME
P1        53
P2        17
P3        68
P4        24

  P1    P2    P3    P4    P1    P3     P4     P1     P3     P3
  0     20    37    57    77    97     117    121    134    154    162

SCHEDULING ALGORITHMS
• WAITING TIME OF EACH PROCESS

• P1 = 0 + (77-20) + (121-97) = 81

• P2 = 20

• P3 = 37 + (97-57) + (134-117) = 94

• P4 = 57 + (117-77) = 97

• Average waiting time

• = [81+20+94+97] = 292/4 = 73 ms
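
The round-robin trace above can be checked with a short simulation (the function name is ours): a circular ready queue, a quantum of 20, and all four processes assumed to arrive at time 0 in the order P1, P2, P3, P4.

from collections import deque

def round_robin(processes, quantum):
    # processes: list of (name, burst); returns name -> waiting time
    remaining = dict(processes)
    queue = deque(name for name, _ in processes)
    finish, clock = {}, 0
    while queue:
        name = queue.popleft()
        run = min(quantum, remaining[name])   # one quantum, or less if the job finishes
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queue.append(name)                # unfinished job goes back to the tail
    return {name: finish[name] - burst for name, burst in processes}

waits = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], quantum=20)
print(waits)                              # {'P1': 81, 'P2': 20, 'P3': 94, 'P4': 97}
print(sum(waits.values()) / len(waits))   # 73.0
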
SCHEDULING ALGORITHMS
• MULTI LEVEL QUEUE (MLQ) SCHEDULING ALGORITHM

• The ready queue is partitioned into separate queues:
  foreground (interactive)
  background (batch)

• Each queue has its own scheduling algorithm
  – foreground – RR
  – background – FCFS
SCHEDULING ALGORITHMS
• MULTI LEVEL QUEUE (MLQ) SCHEDULING ALGORITHM

• Scheduling must also be done between the queues
  – Fixed priority scheduling; i.e., serve all from foreground, then from background.
  – Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

• Each queue has absolute priority over lower-priority queues.

• A lower-priority process that is executing is preempted if a higher-priority process arrives.
SCHEDULING ALGORITHMS
• Three queues:
  – Q0 – time quantum 8 milliseconds
  – Q1 – time quantum 16 milliseconds
  – Q2 – FCFS

• Scheduling

  – A new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1.

  – At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

• Processes with a CPU burst of 8 msec or less are given highest priority.
• Those that need more than 8 but less than 24 msec are given the next highest priority.
• Anything longer sinks to the bottom queue and is serviced only when the first two queues are idle.
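
A rough sketch of the three-queue arrangement described above (process names and bursts are invented for illustration, and all jobs are assumed to arrive at time 0, so preemption of a lower queue by a new arrival is not modelled): Q0 uses a quantum of 8, Q1 a quantum of 16, Q2 is FCFS, and a job that exhausts its quantum is demoted one level.

from collections import deque

def mlfq(processes, quanta=(8, 16)):
    # processes: list of (name, burst); returns name -> completion time
    remaining = dict(processes)
    queues = [deque(name for name, _ in processes), deque(), deque()]
    finish, clock = {}, 0
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
        name = queues[level].popleft()
        slice_ = remaining[name] if level == 2 else min(quanta[level], remaining[name])
        clock += slice_
        remaining[name] -= slice_
        if remaining[name] == 0:
            finish[name] = clock
        else:
            queues[min(level + 1, 2)].append(name)           # demote on quantum expiry
    return finish

print(mlfq([("A", 5), ("B", 30), ("C", 12)]))   # {'A': 5, 'C': 41, 'B': 47}
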
DEADLOCK
• INTRODUCTION
• In a multiprogramming environment, several processes may compete for a finite number of resources.

• A process requests resources; if the resources are not available at that time, the process enters a waiting state.

• Sometimes a waiting process is never again able to change state, because the resources it has requested are held by other waiting processes.

• This situation is called a deadlock.


DEADLOCK
• A deadlock is a situation in which two computer programs
sharing the same resource are effectively preventing each
other from accessing the resource, resulting in both programs
ceasing to function.
PROBLEM OF DEADLOCK
• Deadlock explained by Traffic Case
SYSTEM MODEL
• A system has finite number of resources to be distributed
among a number of competing processes.
• The resources are partitioned into several types, each
consisting of some number of identical instances.
• The types of resources are
– Memory space.
– CPU cycles.
– Files.
– I / O devices.
• A process must request a resource before using it & must
release it after using it.
• The process may utilize a resource in following sequence
– Request.
– Use.
– Release.
DEADLOCK CONDITIONS
• In a deadlock, processes never finish executing & system
resources are tied up, preventing jobs from starting.

• Deadlock can arise if four conditions hold simultaneously.


• MUTUAL EXCLUSION
• HOLD & WAIT
• NO PREEMPTION
• CIRCULAR WAIT
DEADLOCK CONDITIONS
• MUTUAL EXCLUSION: only one process at a time can use a
resource.

Diagram: P1 is accessing the printer; P2 is waiting for P1 to release the printer.
DEADLOCK CONDITIONS
• HOLD AND WAIT: a process holding at least one resource is
waiting to acquire additional resources held by other
processes.
Diagram: P1 is accessing the fax machine and trying to acquire the printer, which P2 is accessing.
DEADLOCK CONDITIONS
• NO PREEMPTION: a resource can be released only
voluntarily by the process holding it, after that process has
completed its task.
Diagram: P1 is accessing the printer and will release it only once it finishes its task; P2 is waiting for P1 to release the printer.
DEADLOCK CONDITIONS
• CIRCULAR WAIT: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Diagram: P1 and P2 each hold a resource and wait for the resource held by the other.
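
As a small illustration (not taken from the slides), the Python fragment below shows how a circular wait can arise between two threads and two locks: T1 holds lock A and waits for lock B while T2 holds B and waits for A. Acquiring the locks in the same global order in both threads removes the cycle.

import threading, time

lock_a, lock_b = threading.Lock(), threading.Lock()

def t1():
    with lock_a:                 # T1 holds A ...
        time.sleep(0.1)
        with lock_b:             # ... and waits for B, which T2 holds
            print("T1 done")

def t2():
    with lock_b:                 # T2 holds B ...
        time.sleep(0.1)
        with lock_a:             # ... and waits for A, which T1 holds
            print("T2 done")

for fn in (t1, t2):
    threading.Thread(target=fn, daemon=True).start()
time.sleep(1)
print("still deadlocked:", lock_a.locked() and lock_b.locked())   # True
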
DEADLOCK HANDLING
• A deadlock in an operating system can be handled in the following four different ways.
• 1. Adopt methods for avoiding the deadlock.

• 2. Prevent the deadlock from occurring (use a protocol).

• 3. Ignore the deadlock.

• 4. Allow the deadlock to occur, detect it and recover from it.


DEADLOCK PREVENTION
• A deadlock in an operating system can be prevented by not allowing all four conditions to be satisfied simultaneously.

• 1. Eliminating the Mutual Exclusion Condition.

• 2. Eliminating the Hold and Wait Condition.

• 3. Eliminating the No Preemption Condition.

• 4. Eliminating the Circular Wait Condition.


DEADLOCK AVOIDANCE
• The deadlock avoidance approach ensures that deadlocks cannot arise in a system, by not allowing the conditions for deadlock to hold simultaneously.

• Deadlock avoidance requires that the operating system be given information in advance regarding the resources a process will request and use.

• This information is used by the OS to schedule the allocation of resources so that the system never enters a deadlocked state.


DEADLOCK AVOIDANCE
• SAFE STATE
• When a process requests an available resource, the system must decide if immediate allocation leaves the system in a safe state.

• The system is in a safe state if there exists a safe sequence of all processes.

• A sequence <P1, P2, …, Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources + the resources held by all the Pj, with j < i.
  – If Pi's resource needs are not immediately available, then Pi can wait until all Pj have finished.
  – When Pj is finished, Pi can obtain the needed resources, execute, return its allocated resources, and terminate.
DEADLOCK AVOIDANCE
• If a system is in a safe state ⇒ no deadlocks.

• If a system is in an unsafe state ⇒ possibility of deadlock.

• Avoidance ⇒ ensure that a system will never enter an unsafe state.

Figure: Safe, unsafe, and deadlock state spaces.


DEADLOCK AVOIDANCE
• BANKER'S ALGORITHM
• Developed by Dijkstra in 1965 for deadlock avoidance.

• It is named the Banker's algorithm because it resembles the way a banking system determines whether a loan can be granted or not.

• The Banker's algorithm lets the OS know in advance, before a resource is allocated to a process, whether the allocation can lead to deadlock (an unsafe state) or whether deadlock can certainly be avoided (a safe state).
DEADLOCK AVOIDANCE
• AIM OF BANKER'S ALGORITHM

• Each process must declare a priori its maximum claim (maximum resource use).

• When a process requests a resource, it may have to wait.

• When a process gets all its resources, it must return them in a finite amount of time.
DEADLOCK AVOIDANCE
• Data Structures for the Banker's Algorithm

• Let n = number of processes, and m = number of resource types.

• Available: Vector of length m.
  • If Available[j] = k, there are k instances of resource type Rj available.

• Max: n x m matrix.
  • If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
DEADLOCK AVOIDANCE
• Allocation: n x m matrix.
• If Allocation[i,j] = k then Pi is currently allocated k instances of
Rj.

• Need: n x m matrix.
• If Need[i,j] = k, then Pi may need k more instances of Rj to
complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
DEADLOCK AVOIDANCE
Process    MAX (A B C)    ALLOCATED (A B C)    NEED (A B C)
P0         7 5 3          0 1 0                7 4 3
P1         3 2 2          2 0 0                1 2 2
P2         9 0 2          3 0 2                6 0 0
P3         2 2 2          2 1 1                0 1 1
P4         4 3 3          0 0 2                4 3 1

AVAILABLE: A (total 10) = 3, B (total 5) = 3, C (total 7) = 2

• The content of the matrix Need is defined to be Max – Allocation.

• The system is in a safe state since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria.
DEADLOCK AVOIDANCE
• SAFETY ALGORITHM
Tells us whether or not a system is in a safe state.
1. Let Work and Finish be vectors of length m and n, respectively.
   Initialize:
   Work = Available
   Finish[i] = false for i = 1, 2, …, n.

2. Find an index i such that both:
   (a) Finish[i] = false
   (b) Needi ≤ Work
   If no such i exists, go to step 4.

3. Work = Work + Allocationi
   Finish[i] = true
   go to step 2.

4. If Finish[i] == true for all i, then the system is in a safe state.
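
A minimal Python sketch of the safety algorithm above, run on the second example (Available = (1, 6, 2)); the matrices below copy that table. Note that the returned safe sequence may differ from the one quoted on the slide, since more than one safe sequence can exist.

def is_safe(available, allocation, need):
    n, m = len(allocation), len(available)
    work = list(available)              # step 1: Work = Available
    finish = [False] * n                #         Finish[i] = false
    sequence = []
    while True:
        for i in range(n):              # step 2: find i with Finish[i] = false and Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]   # step 3
                finish[i] = True
                sequence.append("P%d" % i)
                break
        else:
            break                       # no such i -> go to step 4
    return all(finish), sequence        # step 4: safe iff all processes could finish

allocation = [[3, 0, 1], [0, 2, 1], [2, 0, 0], [2, 1, 0], [4, 0, 2]]   # P0..P4
need       = [[2, 2, 0], [3, 0, 1], [1, 2, 2], [5, 2, 1], [1, 0, 1]]
print(is_safe([1, 6, 2], allocation, need))
# (True, ['P2', 'P0', 'P1', 'P3', 'P4']) -- one valid safe sequence;
# the slide's <P2, P1, P0, P4, P3> is another.
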
DEADLOCK AVOIDANCE
Process    MAX (A B C)    ALLOCATED (A B C)    NEED (A B C)
P0         5 2 1          3 0 1                2 2 0
P1         3 2 2          0 2 1                3 0 1
P2         3 2 2          2 0 0                1 2 2
P3         7 3 1          2 1 0                5 2 1
P4         5 0 3          4 0 2                1 0 1

AVAILABLE: A (total 12) = 1, B (total 9) = 6, C (total 6) = 2

• The content of the matrix Need is defined to be Max – Allocation.

• The system is in a safe state since the sequence <P2, P1, P0, P4, P3> satisfies the safety criteria.
DEADLOCK AVOIDANCE
• Process P2
• Need <= Available: (1,2,2) <= (1,6,2) is TRUE
• Available after granting P2's remaining need = (1,6,2) – (1,2,2) = (0,4,0)
• Available after P2 finishes and releases its maximum (3,2,2) = (0,4,0) + (3,2,2) = (3,6,2)

• Process P1
• Need <= Available: (3,0,1) <= (3,6,2) is TRUE
• Available after granting P1's remaining need = (3,6,2) – (3,0,1) = (0,6,1)
• Available after P1 finishes and releases its maximum (3,2,2) = (0,6,1) + (3,2,2) = (3,8,3)

• Process P0
• Need <= Available: (2,2,0) <= (3,8,3) is TRUE
• Available after granting P0's remaining need = (3,8,3) – (2,2,0) = (1,6,3)
• Available after P0 finishes and releases its maximum (5,2,1) = (1,6,3) + (5,2,1) = (6,8,4)
DEADLOCK AVOIDANCE
• Process P4
• Need <= Available: (1,0,1) <= (6,8,4) is TRUE
• Available after granting P4's remaining need = (6,8,4) – (1,0,1) = (5,8,3)
• Available after P4 finishes and releases its maximum (5,0,3) = (5,8,3) + (5,0,3) = (10,8,6)

• Process P3
• Need <= Available: (5,2,1) <= (10,8,6) is TRUE
• Available after granting P3's remaining need = (10,8,6) – (5,2,1) = (5,6,5)
• Available after P3 finishes and releases its maximum (7,3,1) = (5,6,5) + (7,3,1) = (12,9,6)

• (Note that Available – Need + Max equals Available + Allocation, which is exactly the Work update used in the safety algorithm.)
DEADLOCK AVOIDANCE
RESOURCE REQUEST ALGORITHM
Requesti = request vector for process Pi. If Requesti[j] = k, then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must wait, since the resources are not available.
3. Pretend to allocate the requested resources to Pi by modifying the state as follows:
   Available = Available – Requesti
   Allocationi = Allocationi + Requesti
   Needi = Needi – Requesti

• If safe ⇒ the resources are allocated to Pi.

• If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored.
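
A sketch of the resource-request check above in Python (the function names are ours, and a compact copy of the safety check is included so the example is self-contained). As an illustrative assumption not stated on the slides, it tests process P1 requesting (1, 0, 2) against the table on the next slide; the pretended allocation leaves the system safe, so the request can be granted.

def is_safe(available, allocation, need):
    work, finish = list(available), [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                break
        else:
            return all(finish)

def request_resources(i, request, available, allocation, need):
    if any(r > n for r, n in zip(request, need[i])):
        raise ValueError("process exceeded its maximum claim")    # step 1
    if any(r > a for r, a in zip(request, available)):
        return False                                              # step 2: Pi must wait
    # step 3: pretend to allocate, then test whether the new state is safe
    new_available  = [a - r for a, r in zip(available, request)]
    new_allocation = [row[:] for row in allocation]
    new_need       = [row[:] for row in need]
    new_allocation[i] = [a + r for a, r in zip(allocation[i], request)]
    new_need[i]       = [n - r for n, r in zip(need[i], request)]
    return is_safe(new_available, new_allocation, new_need)       # grant iff safe

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]   # P0..P4
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(request_resources(1, [1, 0, 2], [3, 3, 2], allocation, need))    # True
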
DEADLOCK AVOIDANCE
Process    ALLOCATION (A B C)    MAX (A B C)    NEED (A B C)
P0         0 1 0                 7 5 3          7 4 3
P1         2 0 0                 3 2 2          1 2 2
P2         3 0 2                 9 0 2          6 0 0
P3         2 1 1                 2 2 2          0 1 1
P4         0 0 2                 4 3 3          4 3 1

AVAILABLE: A (total 10) = 3, B (total 5) = 3, C (total 7) = 2
