Unit 2


Scheduling

Basic Concepts
• Maximum CPU utilization obtained with
multiprogramming
• CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and I/O
wait.
• CPU burst distribution
Alternating Sequence of CPU And I/O Bursts
CPU Scheduler
• Selects from among the processes in memory
that are ready to execute, and allocates the CPU
to one of them.
• CPU scheduling decisions may take place when a
process:
1. Switches from running to waiting state.
2. Switches from running to ready state.
3. Switches from waiting to ready.
4. Terminates.
• Scheduling under 1 and 4 is nonpreemptive.
• All other scheduling is preemptive.
Dispatcher
• Dispatcher module gives control of the CPU to
the process selected by the short-term scheduler;
this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to
restart that program
• Dispatch latency – time it takes for the dispatcher
to stop one process and start another running.
Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – No. of processes that complete
their execution per time unit
• Turnaround time – amount of time to execute a
particular process
• Waiting time – amount of time a process has been
waiting in the ready queue
• Response time – amount of time it takes from
when a request was submitted until the first
response is produced, not output (for time-
sharing environment)
Optimization Criteria
• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
First-Come, First-Served (FCFS) Scheduling

Process Burst Time


P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

P1 P2 P3

0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27


• Average waiting time: (0 + 24 + 27)/3 = 17
FCFS Scheduling (Cont.)
Suppose that the processes arrive in the order
P2 , P3 , P1 .
• The Gantt chart for the schedule is:
P2 P3 P1

0 3 6 30

• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case.
• Convoy effect – short processes get stuck behind a long process.
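The FCFS arithmetic above is easy to reproduce with a few lines of code. Below is a minimal Python sketch (the function name fcfs_waiting_times and the idea of passing bursts in arrival order are my own, not from the slides); it assumes all processes arrive at time 0.

    def fcfs_waiting_times(bursts):
        """Per-process waiting times under FCFS, given CPU bursts
        listed in arrival order (all processes arrive at time 0)."""
        waits, clock = [], 0
        for burst in bursts:
            waits.append(clock)      # a process waits until all earlier ones finish
            clock += burst
        return waits

    print(fcfs_waiting_times([24, 3, 3]))   # order P1, P2, P3 -> [0, 24, 27], average 17
    print(fcfs_waiting_times([3, 3, 24]))   # order P2, P3, P1 -> [0, 3, 6],  average 3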
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU
burst. Use these lengths to schedule the process with
the shortest time.
• Two schemes:
– nonpreemptive – once CPU given to the process it cannot
be preempted until completes its CPU burst.
– preemptive – if a new process arrives with CPU burst
length less than remaining time of current executing
process, preempt. This scheme is known as the
Shortest-Remaining-Time-First (SRTF).
• SJF is optimal – gives minimum average waiting time
for a given set of processes.
Example of Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)
P1 P3 P2 P4

0 7 8 12 16

• Average waiting time = (0 + 6 + 3 + 7)/4 = 4
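This schedule can be checked with a small simulation. A minimal Python sketch follows (function and variable names are my own, not from the slides); ties on burst length are broken by arrival time, which matches the schedule shown above.

    def sjf_nonpreemptive(procs):
        """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
        remaining = sorted(procs, key=lambda p: p[1])          # order by arrival
        clock, waits = 0, {}
        while remaining:
            ready = [p for p in remaining if p[1] <= clock]
            if not ready:                                      # CPU idle until next arrival
                clock = min(p[1] for p in remaining)
                continue
            job = min(ready, key=lambda p: (p[2], p[1]))       # shortest next burst
            name, arrival, burst = job
            waits[name] = clock - arrival                      # time spent in the ready queue
            clock += burst
            remaining.remove(job)
        return waits

    w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
    print(w, sum(w.values()) / 4)    # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7}  average 4.0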


Example of Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)
P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16

• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
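The preemptive schedule can be verified with a unit-time simulation of SRTF. A minimal Python sketch (names are my own); it advances the clock one time unit at a time and always runs the ready process with the least remaining work.

    def srtf(procs):
        """Shortest-Remaining-Time-First. procs: list of (name, arrival, burst).
        Returns {name: waiting_time}."""
        arrival = {n: a for n, a, b in procs}
        burst = {n: b for n, a, b in procs}
        remaining = dict(burst)
        finish, clock = {}, 0
        while remaining:
            ready = [n for n in remaining if arrival[n] <= clock]
            if not ready:                                      # nothing has arrived yet
                clock += 1
                continue
            current = min(ready, key=lambda n: remaining[n])   # least remaining time
            remaining[current] -= 1
            clock += 1
            if remaining[current] == 0:
                finish[current] = clock
                del remaining[current]
        return {n: finish[n] - arrival[n] - burst[n] for n in finish}

    w = srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
    print(w, sum(w.values()) / 4)     # {'P3': 0, 'P2': 1, 'P4': 2, 'P1': 9}  average 3.0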


Priority Scheduling
• A priority number (integer) is associated with each
process
• The CPU is allocated to the process with the highest
priority (smallest integer ≡ highest priority).
– Preemptive
– nonpreemptive
• SJF is a priority scheduling where priority is the
predicted next CPU burst time.
• Problem ⇒ Starvation – low-priority processes may
never execute.
• Solution ⇒ Aging – as time progresses, increase the
priority of the process.
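Aging is simple to sketch: every time the dispatcher picks a process, it also lowers the priority number (i.e. raises the priority) of everything still waiting, so a low-priority process cannot wait forever. The following Python fragment is only an illustration of that idea (the function name, the age step, and the example queue are my own, not from the slides).

    def dispatch_with_aging(ready, age_step=1):
        """ready: dict mapping process name -> priority number
        (smaller number = higher priority). Picks and removes the
        highest-priority process, then ages every waiting process."""
        chosen = min(ready, key=lambda name: ready[name])
        del ready[chosen]
        for name in ready:
            ready[name] = max(0, ready[name] - age_step)   # aging step
        return chosen

    queue = {"P1": 3, "P2": 7, "P3": 1}
    while queue:
        print(dispatch_with_aging(queue))    # P3, then P1, then P2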
Round Robin (RR)
• Each process gets a small unit of CPU time (time
quantum), usually 10-100 milliseconds. After this time
has elapsed, the process is preempted and added to
the end of the ready queue.
• If there are n processes in the ready queue and the
time quantum is q, then each process gets 1/n of the
CPU time in chunks of at most q time units at once. No
process waits more than (n-1)q time units.
• Performance
– q large ⇒ RR behaves like FIFO (FCFS).
– q small ⇒ q must still be large with respect to the
context-switch time, otherwise overhead is too high.
Example of RR with Time Quantum = 20
Process Burst Time
P1 53
P2 17
P3 68
P4 24
• The Gantt chart is:

P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 37 57 77 97 117 121 134 154 162

• Typically, higher average turnaround than SJF, but better response.
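The Gantt chart above can be regenerated with a short round-robin simulation. A minimal Python sketch, assuming all four processes are in the ready queue at time 0 and ignoring context-switch cost (the function name is my own):

    from collections import deque

    def round_robin(procs, quantum):
        """procs: list of (name, burst). Returns (name, start, end) CPU slices."""
        ready = deque(procs)
        clock, slices = 0, []
        while ready:
            name, remaining = ready.popleft()
            run = min(quantum, remaining)
            slices.append((name, clock, clock + run))
            clock += run
            if remaining > run:                        # not finished: back to the tail
                ready.append((name, remaining - run))
        return slices

    for s in round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], quantum=20):
        print(s)   # reproduces the boundaries 0, 20, 37, 57, 77, 97, 117, 121, 134, 154, 162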
Time Quantum and Context Switch Time
Multilevel Queue
• Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm,
foreground – RR
background – FCFS
• Scheduling must be done between the queues.
– Fixed priority scheduling; (i.e., serve all from foreground then
from background). Possibility of starvation.
– Time slice – each queue gets a certain amount of CPU time
which it can schedule amongst its processes; e.g., 80% to
foreground in RR and 20% to background in FCFS.
Multilevel Queue Scheduling
Multilevel Feedback Queue
• A process can move between the various queues;
aging can be implemented this way.
• Multilevel-feedback-queue scheduler defined by
the following parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will
enter when that process needs service
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – time quantum 8 milliseconds
– Q1 – time quantum 16 milliseconds
– Q2 – FCFS
• Scheduling
– A new job enters queue Q0 which is served FCFS.
When it gains CPU, job receives 8 milliseconds. If it
does not finish in 8 milliseconds, job is moved to
queue Q1.
– At Q1 job is again served FCFS and receives 16
additional milliseconds. If it still does not complete, it
is preempted and moved to queue Q2.
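The three-queue example maps directly onto code. Below is a minimal Python sketch of this particular configuration (names are my own); it assumes all jobs are present at time 0 and does not model preemption of a lower queue by a newly arriving job, which a real scheduler would also need.

    from collections import deque

    def mlfq(jobs, quanta=(8, 16)):
        """jobs: list of (name, burst). Two round-robin levels (Q0, Q1) with the
        given quanta, then FCFS at Q2. Returns executed (name, start, end) slices."""
        queues = [deque(jobs), deque(), deque()]
        clock, slices = 0, []
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
            name, remaining = queues[level].popleft()
            run = remaining if level == 2 else min(quanta[level], remaining)
            slices.append((name, clock, clock + run))
            clock += run
            if remaining > run:                                  # demote the unfinished job
                queues[level + 1].append((name, remaining - run))
        return slices

    for s in mlfq([("A", 30), ("B", 6), ("C", 20)]):
        print(s)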
Multilevel Feedback Queues
Multiple-Processor Scheduling
• CPU scheduling more complex when multiple
CPUs are available.
• Homogeneous processors within a
multiprocessor.
• Load sharing
• Asymmetric multiprocessing – only one
processor accesses the system data structures,
lessening the need for data sharing.
Real-Time Scheduling
• Hard real-time systems – required to complete
a critical task within a guaranteed amount of
time.
• Soft real-time computing – requires that
critical processes receive priority over less
fortunate ones.
Deadlocks
The Deadlock Problem
• A set of blocked processes each holding a resource and
waiting to acquire a resource held by another process in the
set.
• Example
– System has 2 tape drives.
– P1 and P2 each hold one tape drive and each needs another one.
• Example
– semaphores A and B, initialized to 1

P0: wait(A); wait(B)
P1: wait(B); wait(A)
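The same circular wait can be demonstrated with two locks in Python. This is an illustrative sketch only (the thread names, the barrier, and the 1-second timeout are my own); real wait() calls with no timeout would simply block forever at this point.

    import threading

    A, B = threading.Lock(), threading.Lock()   # the two "semaphores", initialized to 1
    barrier = threading.Barrier(2)              # ensure both threads hold their first lock

    def proc(first, second, who, wanted):
        with first:                             # wait(first)
            barrier.wait()
            got = second.acquire(timeout=1)     # wait(second): blocks, then gives up
            print(f"{who} acquired {wanted}: {got}")
            if got:
                second.release()

    p0 = threading.Thread(target=proc, args=(A, B, "P0", "B"))
    p1 = threading.Thread(target=proc, args=(B, A, "P1", "A"))
    p0.start(); p1.start(); p0.join(); p1.join()
    # Both prints show False: P0 holds A and waits for B while P1 holds B and waits
    # for A -- exactly the circular wait of the example above.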
Bridge Crossing Example

• Traffic only in one direction.


• Each section of a bridge can be viewed as a resource.
• If a deadlock occurs, it can be resolved if one car backs
up (preempt resources and rollback).
• Several cars may have to be backed up if a deadlock
occurs.
• Starvation is possible.
System Model
• Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
• Each resource type Ri has Wi instances.
• Each process utilizes a resource as follows:
– request
– use
– release
Deadlock Characterization
Deadlock can arise if four conditions hold simultaneously.
• Mutual exclusion: only one process at a time can use a
resource.
• Hold and wait: a process holding at least one resource is
waiting to acquire additional resources held by other
processes.
• No preemption: a resource can be released only
voluntarily by the process holding it, after that process
has completed its task.
• Circular wait: there exists a set {P0, P1, …, Pn} of
waiting processes such that P0 is waiting for a resource
that is held by P1, P1 is waiting for a resource that is held
by P2, …, Pn–1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.
Resource-Allocation Graph
A set of vertices V and a set of edges E.
• V is partitioned into two types:
– P = {P1, P2, …, Pn}, the set consisting of all the
processes in the system.

– R = {R1, R2, …, Rm}, the set consisting of all
resource types in the system.
• request edge – directed edge Pi → Rj
• assignment edge – directed edge Rj → Pi
Resource-Allocation Graph (Cont.)
• Process

• Resource Type with 4 instances

• Pi requests an instance of Rj: request edge Pi → Rj
• Pi is holding an instance of Rj: assignment edge Rj → Pi
Example of a Resource Allocation Graph
Resource Allocation Graph With A Deadlock
Resource Allocation Graph With A Cycle But No Deadlock
Basic Facts
• If graph contains no cycles ⇒ no deadlock.
• If graph contains a cycle ⇒
– if only one instance per resource type, then deadlock.
– if several instances per resource type, possibility of deadlock.
Methods for Handling Deadlocks
• Ensure that the system will never enter a deadlock state.
• Allow the system to enter a deadlock state and then recover.
• Ignore the problem and pretend that deadlocks never occur
in the system; used by most operating systems, including UNIX.
Deadlock Prevention
Restrain the ways requests can be made.
• Mutual Exclusion – not required for sharable
resources; must hold for nonsharable
resources.

• Hold and Wait – must guarantee that whenever a process
requests a resource, it does not hold any other resources.
– Require process to request and be allocated all its
resources before it begins execution, or allow
process to request resources only when the
process has none.
– Low resource utilization; starvation possible.
Deadlock Prevention (Cont.)
• No Preemption –
– If a process that is holding some resources requests another
resource that cannot be immediately allocated to it, then all
resources currently being held are released.
– Preempted resources are added to the list of resources for
which the process is waiting.
– Process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.

• Circular Wait – impose a total ordering of all resource
types, and require that each process requests resources
in an increasing order of enumeration.
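With locks, circular-wait prevention is just "always acquire in a fixed global order". A minimal Python sketch of that discipline (the resource names, numbering, and helper names are my own, not from the slides):

    import threading

    # Total ordering of resource types; every process must request in increasing order.
    ORDER = {"tape": 1, "printer": 2, "disk": 3}
    LOCKS = {name: threading.Lock() for name in ORDER}

    def acquire_in_order(*names):
        """Acquire the named resources in their enumeration order.
        Because all processes follow the same order, no cycle of waits can form."""
        ordered = sorted(names, key=lambda n: ORDER[n])
        for n in ordered:
            LOCKS[n].acquire()
        return ordered

    def release_all(names):
        for n in reversed(names):
            LOCKS[n].release()

    held = acquire_in_order("printer", "tape")   # actually taken as tape, then printer
    # ... use the resources ...
    release_all(held)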
Deadlock Avoidance
Requires that the system has some additional a priori information
available.
• Simplest and most useful model requires that each
process declare the maximum number of resources
of each type that it may need.

• The deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that there can
never be a circular-wait condition.
• Resource-allocation state is defined by the number of
available and allocated resources, and the maximum
demands of the processes.
Safe State
• When a process requests an available resource, system
must decide if immediate allocation leaves the system in a
safe state.
• System is in safe state if there exists a safe sequence of all
processes.
• Sequence <P1, P2, …, Pn> is safe if for each Pi, the
resources that Pi can still request can be satisfied by
currently available resources + resources held by all the
Pj, with j < i.
– If Pi resource needs are not immediately available, then Pi can
wait until all Pj have finished.
– When Pj is finished, Pi can obtain needed resources, execute,
return allocated resources, and terminate.
– When Pi terminates, Pi+1 can obtain its needed resources, and so
on.
Basic Facts
• If a system is in safe state ⇒ no deadlocks.
• If a system is in unsafe state ⇒ possibility of deadlock.
• Avoidance ⇒ ensure that a system will never enter an unsafe state.
Safe, Unsafe, Deadlock State
Resource-Allocation Graph Algorithm
• Claim edge Pi → Rj indicates that process Pi may
request resource Rj; represented by a dashed line.

• Claim edge converts to request edge when a process
requests a resource.
• When a resource is released by a process, assignment
edge reconverts to a claim edge.
• Resources must be claimed a priori in the system.

Resource-Allocation Graph For Deadlock Avoidance
Unsafe State In Resource-Allocation Graph
Banker’s Algorithm
• Multiple instances.
• Each process must a priori claim maximum use.
• When a process requests a resource it may have to wait.
• When a process gets all its resources it must return
them in a finite amount of time.
Data Structures for the Banker’s Algorithm

Let n = number of processes, and m = number of resource types.


• Available: Vector of length m. If available [j] = k,
there are k instances of resource type Rj available.
• Max: n x m matrix. If Max [i,j] = k, then process
Pi may request at most k instances of resource type
Rj.
• Allocation: n x m matrix. If Allocation[i,j] = k
then Pi is currently allocated k instances of Rj.
• Need: n x m matrix. If Need[i,j] = k, then Pi may
need k more instances of Rj to complete its task.
Need [i,j] = Max[i,j] – Allocation [i,j].
Safety Algorithm
1. Let Work and Finish be vectors of length m
and n, respectively. Initialize:
Work = Available
Finish[i] = false for i = 1, 2, …, n.
2. Find an i such that both:
(a) Finish [i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish [i] == true for all i, then the system is
in a safe state.
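The four steps translate almost line for line into code. A minimal Python sketch (the function name is my own, not from the slides); it also records the order in which processes finish, which is a safe sequence when one exists.

    def is_safe(available, allocation, need):
        """Safety algorithm. available: length-m list; allocation, need: n x m lists.
        Returns (True, safe_sequence) or (False, [])."""
        n, m = len(allocation), len(available)
        work = list(available)                   # step 1: Work = Available
        finish = [False] * n
        sequence = []
        while True:
            # step 2: find i with Finish[i] == false and Need_i <= Work
            for i in range(n):
                if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                    break
            else:
                break                            # no such i: go to step 4
            # step 3: pretend Pi runs to completion and returns its resources
            work = [work[j] + allocation[i][j] for j in range(m)]
            finish[i] = True
            sequence.append(f"P{i}")
        return all(finish), sequence if all(finish) else []   # step 4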
Resource-Request Algorithm for Process Pi

Requesti = request vector for process Pi. If Requesti[j] = k,
then process Pi wants k instances of resource type Rj.
1. If Requesti ≤ Needi, go to step 2. Otherwise, raise
error condition, since process has exceeded its
maximum claim.
2. If Requesti ≤ Available, go to step 3. Otherwise Pi
must wait, since resources are not available.
3. Pretend to allocate requested resources to Pi by
modifying the state as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi – Requesti;
• If safe ⇒ the resources are allocated to Pi.
• If unsafe ⇒ Pi must wait, and the old resource-allocation
state is restored.
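A sketch of the request algorithm, reusing the is_safe function from the safety-algorithm sketch above (the function name and the in-place update/rollback style are my own):

    def request_resources(i, request, available, allocation, need):
        """Resource-request algorithm for process Pi. Grants the request only if the
        resulting state is safe; otherwise rolls back and Pi must wait. Returns True
        if the request was granted."""
        m = len(available)
        if any(request[j] > need[i][j] for j in range(m)):            # step 1
            raise ValueError("process has exceeded its maximum claim")
        if any(request[j] > available[j] for j in range(m)):          # step 2
            return False                                              # Pi must wait
        for j in range(m):                                            # step 3: pretend to allocate
            available[j] -= request[j]
            allocation[i][j] += request[j]
            need[i][j] -= request[j]
        safe, _ = is_safe(available, allocation, need)
        if not safe:                                                  # unsafe: restore old state
            for j in range(m):
                available[j] += request[j]
                allocation[i][j] -= request[j]
                need[i][j] += request[j]
        return safe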
Example of Banker’s Algorithm
• 5 processes P0 through P4; 3 resource types: A (10 instances),
B (5 instances), and C (7 instances).
• Snapshot at time T0:
       Allocation   Max      Available
       A B C        A B C    A B C
P0     0 1 0        7 5 3    3 3 2
P1     2 0 0        3 2 2
P2     3 0 2        9 0 2
P3     2 1 1        2 2 2
P4     0 0 2        4 3 3
Example (Cont.)
• The content of the matrix Need is defined to be Max – Allocation.
Need
ABC
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1
• The system is in a safe state since the sequence < P1,
P3, P4, P2, P0> satisfies safety criteria.
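Running the is_safe sketch from the Safety Algorithm section on this snapshot confirms the claim. Note that the sketch happens to find the sequence <P1, P3, P0, P2, P4>; the slide's <P1, P3, P4, P2, P0> is another valid safe sequence, since a safe state generally admits more than one.

    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
    available  = [3, 3, 2]
    need = [[maximum[i][j] - allocation[i][j] for j in range(3)] for i in range(5)]
    print(is_safe(available, allocation, need))
    # (True, ['P1', 'P3', 'P0', 'P2', 'P4'])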
Example P1 Request (1,0,2) (Cont.)
• Check that Request ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true.
       Allocation   Need     Available
       A B C        A B C    A B C
P0     0 1 0        7 4 3    2 3 0
P1     3 0 2        0 2 0
P2     3 0 2        6 0 0
P3     2 1 1        0 1 1
P4     0 0 2        4 3 1
• Executing safety algorithm shows that sequence <P1, P3, P4, P0,
P2> satisfies safety requirement.
• Can request for (3,3,0) by P4 be granted?
• Can request for (0,2,0) by P0 be granted?
Deadlock Detection
• Allow system to enter deadlock state

• Detection algorithm

• Recovery scheme
Single Instance of Each Resource Type
• Maintain wait-for graph
– Nodes are processes.
– Pi → Pj if Pi is waiting for Pj.

• Periodically invoke an algorithm that searches for
a cycle in the graph.
• An algorithm to detect a cycle in a graph requires
an order of n² operations, where n is the number
of vertices in the graph.
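For the single-instance case the detection algorithm is just cycle detection on the wait-for graph. A minimal Python sketch using depth-first search (the graph representation and function name are my own):

    def has_cycle(wait_for):
        """wait_for: dict mapping each process to the list of processes it waits for.
        Returns True if the wait-for graph contains a cycle, i.e. a deadlock."""
        WHITE, GREY, BLACK = 0, 1, 2
        color = {p: WHITE for p in wait_for}

        def dfs(p):
            color[p] = GREY
            for q in wait_for.get(p, []):
                if color.get(q, WHITE) == GREY:        # back edge: cycle found
                    return True
                if color.get(q, WHITE) == WHITE and dfs(q):
                    return True
            color[p] = BLACK
            return False

        return any(color[p] == WHITE and dfs(p) for p in wait_for)

    print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))   # True: deadlock
    print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))       # False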
Resource-Allocation Graph and Wait-for Graph

Resource-Allocation Graph Corresponding wait-for graph


Several Instances of a Resource Type
• Available: A vector of length m indicates the
number of available resources of each type.
• Allocation: An n x m matrix defines the
number of resources of each type currently
allocated to each process.
• Request: An n x m matrix indicates the current
request of each process. If Request[i,j] = k,
then process Pi is requesting k more instances
of resource type Rj.
Detection Algorithm
1. Let Work and Finish be vectors of length m and n,
respectively. Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] = false; otherwise, Finish[i] = true.
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4.


Detection Algorithm (Cont.)
3. Work = Work + Allocationi
Finish[i] = true
go to step 2.
4. If Finish[i] == false, for some i, 1 ≤ i ≤ n, then the system is in
deadlock state. Moreover, if Finish[i] == false, then Pi is
deadlocked.

Algorithm requires an order of O(m × n²) operations
to detect whether the system is in a deadlocked state.
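This is essentially the safety algorithm with two changes: it compares Request (not Need) against Work, and it starts with Finish[i] = true for any process that holds nothing. A minimal Python sketch (the function name is my own):

    def detect_deadlock(available, allocation, request):
        """Returns the list of deadlocked processes (empty list means no deadlock)."""
        n, m = len(allocation), len(available)
        work = list(available)                                        # step 1(a)
        finish = [not any(allocation[i]) for i in range(n)]           # step 1(b)
        progress = True
        while progress:                                               # steps 2-3
            progress = False
            for i in range(n):
                if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                    work = [work[j] + allocation[i][j] for j in range(m)]
                    finish[i] = True
                    progress = True
        return [f"P{i}" for i in range(n) if not finish[i]]           # step 4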
Example of Detection Algorithm
• Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances).
• Snapshot at time T0:
       Allocation   Request   Available
       A B C        A B C     A B C
P0     0 1 0        0 0 0     0 0 0
P1     2 0 0        2 0 2
P2     3 0 3        0 0 0
P3     2 1 1        1 0 0
P4     0 0 2        0 0 2
• Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true
for all i.
Example (Cont.)
• P2 requests an additional instance of type C.
Request
ABC
P0 0 0 0
P1 2 0 1
P2 0 0 1
P3 1 0 0
P4 0 0 2
• State of system?
– Can reclaim resources held by process P0, but insufficient
resources to fulfill other processes' requests.
– Deadlock exists, consisting of processes P1, P2, P3, and P4.
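Running the detect_deadlock sketch from above on the two snapshots reproduces both conclusions:

    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
    available  = [0, 0, 0]
    request1   = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
    request2   = [[0, 0, 0], [2, 0, 1], [0, 0, 1], [1, 0, 0], [0, 0, 2]]   # P2 asks for one more C
    print(detect_deadlock(available, allocation, request1))   # []  -- not deadlocked
    print(detect_deadlock(available, allocation, request2))   # ['P1', 'P2', 'P3', 'P4']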
Detection-Algorithm Usage
• When, and how often, to invoke depends on:
– How often a deadlock is likely to occur.
– How many processes will need to be rolled back
(one for each disjoint cycle).

• If detection algorithm is invoked arbitrarily,
there may be many cycles in the resource
graph and so we would not be able to tell
which of the many deadlocked processes
“caused” the deadlock.
Recovery from Deadlock: Process Termination

• Abort all deadlocked processes.


• Abort one process at a time until the deadlock cycle is
eliminated.
• In which order should we choose to abort?
– Priority of the process.
– How long process has computed, and how much longer to
completion.
– Resources the process has used.
– Resources process needs to complete.
– How many processes will need to be terminated.
– Is process interactive or batch?
Recovery from Deadlock: Resource Preemption

• Selecting a victim – minimize cost.

• Rollback – return to some safe state, restart
process from that state.
• Starvation – same process may always be
picked as victim; include number of rollbacks in
cost factor.
Combined Approach to Deadlock Handling

• Combine the three basic approaches
– prevention
– avoidance
– detection
allowing the use of the optimal approach for each class of
resources in the system.

• Partition resources into hierarchically ordered classes.
• Use most appropriate technique for handling deadlocks
within each class.
