Lec 10: Scheduling
Deadlock (cont’d)
Thread Scheduling
[Figure: resource allocation graphs over threads T1–T4 and resources R2–R4 — a simple resource allocation graph; an allocation graph with deadlock; an allocation graph with a cycle but no deadlock]
2/22/06 Joseph CS162 ©UCB Spring 2006 Lec 10.3
Review: Methods for Handling Deadlocks
• Preventing Deadlock
• Scheduling Policy goals
• Policy Options
• Implementation Considerations
Assumption: CPU Bursts
– Example: processes P1, P2, P3 with burst times 24, 3, 3, arriving in order P1, P2, P3
– The Gantt chart is: P1 (0–24), P2 (24–27), P3 (27–30)
– Waiting time for P1 = 0; P2 = 24; P3 = 27
– Average waiting time: (0 + 24 + 27)/3 = 17
– Average completion time: (24 + 27 + 30)/3 = 27
• Convoy effect: short process stuck behind long process
FCFS Scheduling (Cont.)
• Example continued:
– Suppose that processes arrive in order: P2 , P3 , P1
Now, the Gantt chart for the schedule is:
P2 (0–3), P3 (3–6), P1 (6–30)
– Waiting time for P1 = 6; P2 = 0; P3 = 3
– Average waiting time: (6 + 0 + 3)/3 = 3
– Average Completion time: (3 + 6 + 30)/3 = 13
• In second case:
– average waiting time is much better (before it was 17)
– Average completion time is better (before it was 27)
• FIFO Pros and Cons:
– Simple (+)
– Short jobs get stuck behind long ones (-)
» Safeway: getting milk, always stuck behind a cart full of small items. Upside: get to read about space aliens!
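The FCFS arithmetic above can be checked with a short sketch (a hypothetical helper, not from the lecture):

```python
def fcfs_stats(bursts):
    """Return (avg_waiting, avg_completion) for jobs run in list order."""
    t = 0
    waits, completions = [], []
    for burst in bursts:
        waits.append(t)          # job waits until the CPU frees up
        t += burst
        completions.append(t)    # finishes after its own burst
    n = len(bursts)
    return sum(waits) / n, sum(completions) / n

# Arrival order P1, P2, P3 (bursts 24, 3, 3):
print(fcfs_stats([24, 3, 3]))   # (17.0, 27.0)
# Arrival order P2, P3, P1:
print(fcfs_stats([3, 3, 24]))   # (3.0, 13.0)
```

Reordering the same three jobs drops the average wait from 17 to 3 — the convoy effect in miniature.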
Round Robin (RR)
• FCFS Scheme: Potentially bad for short jobs!
– Depends on submit order
– If you are first in line at supermarket with milk, you
don’t care who is behind you, on the other hand…
• Round Robin Scheme
– Each process gets a small unit of CPU time
(time quantum), usually 10-100 milliseconds
– After quantum expires, the process is preempted
and added to the end of the ready queue.
– n processes in ready queue and time quantum is q ⇒
» Each process gets 1/n of the CPU time
» In chunks of at most q time units
» No process waits more than (n-1)q time units
• Performance
– q large ⇒ FCFS
– q small ⇒ Interleaved (really small ⇒ hyperthreading?)
– q must be large with respect to context switch,
otherwise overhead is too high (all overhead)
Example of RR with Time Quantum = 20
• Example: Process Burst Time
P1 53
P2 8
P3 68
P4 24
– The Gantt chart is:
P1 (0–20), P2 (20–28), P3 (28–48), P4 (48–68), P1 (68–88), P3 (88–108), P4 (108–112), P1 (112–125), P3 (125–145), P3 (145–153)
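The run order in that Gantt chart can be reproduced with a minimal RR simulation (a sketch; the function name is mine, not from the lecture):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate RR; jobs is a list of (name, burst). Returns the run order."""
    queue = deque(jobs)
    order = []
    while queue:
        name, left = queue.popleft()
        order.append(name)            # this process gets the CPU
        if left > quantum:
            queue.append((name, left - quantum))  # preempted, requeued
    return order

jobs = [("P1", 53), ("P2", 8), ("P3", 68), ("P4", 24)]
print(round_robin(jobs, 20))
# ['P1', 'P2', 'P3', 'P4', 'P1', 'P3', 'P4', 'P1', 'P3', 'P3']
```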
SRTF Further discussion
• Starvation
– SRTF can lead to starvation if many small jobs!
– Large jobs never get to run
• Somehow need to predict future
– How can we do this?
– Some systems ask the user
» When you submit a job, have to say how long it will take
» To stop cheating, system kills job if takes too long
– But: Even non-malicious users have trouble predicting
runtime of their jobs
• Bottom line, can’t really know how long job will take
– However, can use SRTF as a yardstick
for measuring other policies
– Optimal, so can’t do any better
• SRTF Pros & Cons
– Optimal (average response time) (+)
– Hard to predict future (-)
– Unfair (-)
Predicting the Length of the Next CPU Burst
• Adaptive: Changing policy based on past behavior
– CPU scheduling, in virtual memory, in file systems, etc
– Works because programs have predictable behavior
» If program was I/O bound in past, likely in future
» If computer behavior were random, wouldn’t help
• Example: SRTF with estimated burst length
– Use an estimator function on previous bursts:
Let t_{n-1}, t_{n-2}, t_{n-3}, … be previous CPU burst lengths.
Estimate next burst: τ_n = f(t_{n-1}, t_{n-2}, t_{n-3}, …)
– Function f could be one of many different time-series estimation schemes (Kalman filters, etc.)
– For instance, exponential averaging:
τ_n = α·t_{n-1} + (1-α)·τ_{n-1}, with 0 < α ≤ 1
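A minimal sketch of the exponential-averaging estimator (the function name and initial estimate τ₀ are assumptions for illustration):

```python
def exp_average(bursts, alpha, tau0):
    """tau_n = alpha*t_{n-1} + (1-alpha)*tau_{n-1}; returns successive estimates."""
    tau = tau0
    estimates = []
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau   # blend new burst with old estimate
        estimates.append(tau)
    return estimates

# alpha = 0.5 weights the most recent burst heavily; tau0 = 10 is an initial guess.
print(exp_average([6, 4, 6, 4], alpha=0.5, tau0=10))  # [8.0, 6.0, 6.0, 5.0]
```

Larger α tracks recent behavior faster; smaller α smooths out noise across many bursts.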
[Figure: long-running compute tasks demoted to low priority]
[Figure: response time vs. utilization — response time grows sharply as utilization approaches 100%]
» Assuming you’re paying for worse response time in reduced productivity, customer angst, etc…
» Might think that you should buy a faster X when X is utilized 100%, but usually, response time goes to infinity as utilization ⇒ 100%
• An interesting implication of this curve:
– Most scheduling algorithms work fine in the “linear”
portion of the load curve, fail otherwise
– Argues for buying a faster X when hit “knee” of curve
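One way to see why response time blows up near 100% utilization is the classic M/M/1 queueing formula T = S/(1−U) — an illustrative assumption, not a claim from the slides:

```python
def response_time(service_time, utilization):
    """M/M/1 mean response time: T = S / (1 - U). Diverges as U -> 1."""
    assert 0 <= utilization < 1
    return service_time / (1 - utilization)

for u in (0.5, 0.9, 0.99):
    print(f"U={u:.0%}: T={response_time(1.0, u):.1f}")
# U=50%: T=2.0   U=90%: T=10.0   U=99%: T=100.0
```

Doubling utilization from 50% to 99% multiplies response time fifty-fold — hence the "knee" of the curve.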
Summary
• Scheduling: selecting a waiting process from the ready
queue and allocating the CPU to it
• FCFS Scheduling:
– Run threads to completion in order of submission
– Pros: Simple
– Cons: Short jobs get stuck behind long ones
• Round-Robin Scheduling:
– Give each thread a small amount of CPU time when it
executes; cycle between all ready threads
– Pros: Better for short jobs
– Cons: Poor when jobs are the same length (everything finishes late, and context-switch overhead adds up)