CS162: Operating Systems and Systems Programming
Lecture 10: Scheduling

Recall: Scheduling Policy Goals/Criteria
Lottery Scheduling
• Yet another alternative: Lottery Scheduling
  – Give each job some number of lottery tickets
  – On each time slice, randomly pick a winning ticket (see the sketch below)
  – On average, CPU time is proportional to number of tickets given to each job
• How to assign tickets?
  – To approximate SRTF, short running jobs get more, long running jobs get fewer
  – To avoid starvation, every job gets at least one ticket (everyone makes progress)
• Advantage over strict priority scheduling: behaves gracefully as load changes
  – Adding or deleting a job affects all jobs proportionally, independent of how many tickets each job possesses
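The ticket draw itself is easy to implement. Below is a minimal C sketch (not from the lecture): the job names, ticket counts, and the pick_winner helper are illustrative assumptions, but the draw shows why expected CPU share is proportional to tickets held.

```c
/* Minimal lottery-scheduling sketch (illustrative, not from the lecture).
 * Each job holds some number of tickets; on every time slice we draw a
 * random ticket and run the job that owns it, so expected CPU share is
 * proportional to ticket count. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

struct job {
    const char *name;
    int tickets;      /* at least 1, so every job makes progress */
};

/* Pick the winning job for this time slice. */
static struct job *pick_winner(struct job *jobs, int njobs) {
    int total = 0;
    for (int i = 0; i < njobs; i++)
        total += jobs[i].tickets;

    int winner = rand() % total;   /* winning ticket number */
    for (int i = 0; i < njobs; i++) {
        if (winner < jobs[i].tickets)
            return &jobs[i];
        winner -= jobs[i].tickets;
    }
    return &jobs[njobs - 1];       /* not reached if total is correct */
}

int main(void) {
    struct job jobs[] = { {"short-A", 10}, {"short-B", 10}, {"long-C", 1} };
    srand((unsigned)time(NULL));

    int runs[3] = {0};
    for (int slice = 0; slice < 10000; slice++) {
        struct job *w = pick_winner(jobs, 3);
        runs[w - jobs]++;
    }
    for (int i = 0; i < 3; i++)
        printf("%s ran %d slices\n", jobs[i].name, runs[i]);
    return 0;
}
```

Over many slices, the two 10-ticket jobs should each win roughly ten times as often as the 1-ticket job, yet the 1-ticket job still makes progress — the graceful, proportional behavior noted above.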
Recall: Assumption – CPU Bursts
• Execution model: programs alternate between bursts of CPU and I/O
  – Program typically uses the CPU for some period of time, then does I/O, then uses CPU again
  – Each scheduling decision is about which job to give to the CPU for use by its next CPU burst
  – With timeslicing, thread may be forced to give up CPU before finishing current CPU burst
• (Figure: distribution of CPU burst durations, weighted toward small bursts)
• Can we use Burst Time (observed) to decide which application gets CPU time?

How to Handle Simultaneous Mix of Diff Types of Apps?
• Consider mix of interactive and high throughput apps:
  – How to best schedule them?
  – How to recognize one from the other?
    » Do you trust app to say that it is “interactive”?
  – Should you schedule the set of apps identically on servers, workstations, pads, and cellphones?
• Assumptions encoded into many schedulers:
  – Apps that sleep a lot and have short bursts must be interactive apps – they should get high priority
  – Apps that compute a lot should get low(er?) priority, since they won’t notice intermittent bursts from interactive apps (see the sketch after this slide)
• Hard to characterize apps:
  – What about apps that sleep for a long time, but then compute for a long time?
  – Or, what about apps that must run under all circumstances (say, periodically)?
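As a rough illustration of the “short bursts ⇒ interactive, long bursts ⇒ CPU-bound” assumption above, here is a hedged C sketch. The 10 ms threshold, the priority convention (smaller number = higher priority), and the classify_priority helper are made-up assumptions for illustration, not part of any real scheduler described in the lecture.

```c
/* Illustrative heuristic (not from the slides): treat a thread as
 * "interactive" if its recent CPU bursts are short, and boost its
 * priority; long average bursts suggest CPU-bound, so lower priority. */
#include <stdio.h>

#define INTERACTIVE_BURST_MS 10.0   /* assumed threshold, for illustration */

/* Return an updated priority: smaller number = higher priority (assumed). */
static int classify_priority(double avg_burst_ms, int base_priority) {
    if (avg_burst_ms < INTERACTIVE_BURST_MS)
        return base_priority - 1;   /* short bursts: likely interactive */
    return base_priority + 1;       /* long bursts: likely CPU-bound */
}

int main(void) {
    printf("editor (2 ms bursts)   -> priority %d\n", classify_priority(2.0, 5));
    printf("encoder (80 ms bursts) -> priority %d\n", classify_priority(80.0, 5));
    return 0;
}
```

The hard cases on the slide (apps that sleep for a long time and then compute for a long time) are exactly the ones such a simple threshold misclassifies.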
SRTF Further Discussion
• Starvation
  – SRTF can lead to starvation if many small jobs!
  – Large jobs never get to run
• Somehow need to predict future
  – How can we do this?
  – Some systems ask the user
    » When you submit a job, have to say how long it will take
    » To stop cheating, system kills job if it takes too long
  – But: even non-malicious users have trouble predicting runtime of their jobs
• Bottom line: can’t really know how long a job will take
  – However, can use SRTF as a yardstick for measuring other policies
  – Optimal, so can’t do any better
• SRTF Pros & Cons
  – Optimal (average response time) (+)
  – Hard to predict future (-)
  – Unfair (-)

Predicting the Length of the Next CPU Burst
• Adaptive: changing policy based on past behavior
  – CPU scheduling, in virtual memory, in file systems, etc.
  – Works because programs have predictable behavior
    » If program was I/O bound in past, likely in future
    » If computer behavior were random, wouldn’t help
• Example: SRTF with estimated burst length
  – Use an estimator function on previous bursts: let t_{n-1}, t_{n-2}, t_{n-3}, etc. be previous CPU burst lengths; estimate next burst τ_n = f(t_{n-1}, t_{n-2}, t_{n-3}, …)
  – Function f could be one of many different time-series estimation schemes (Kalman filters, etc.)
  – For instance, exponential averaging: τ_n = α·t_{n-1} + (1−α)·τ_{n-1}, with 0 < α ≤ 1 (see the sketch after this slide)
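A minimal sketch of the exponential-averaging update from the slide above. The burst trace, initial estimate, and α = 0.5 are made-up illustrative values, not from the lecture.

```c
/* Sketch of exponential averaging for burst prediction, following the
 * formula on the slide: tau_n = alpha * t_{n-1} + (1 - alpha) * tau_{n-1}.
 * Burst values and alpha are made-up illustrative numbers. */
#include <stdio.h>

/* One update step: blend the newest observed burst with the old estimate. */
static double next_estimate(double alpha, double last_burst, double last_est) {
    return alpha * last_burst + (1.0 - alpha) * last_est;
}

int main(void) {
    double alpha = 0.5;                                        /* 0 < alpha <= 1 */
    double bursts[] = {6.0, 4.0, 6.0, 4.0, 13.0, 13.0, 13.0};  /* observed, ms */
    double tau = 10.0;                                         /* initial guess */

    for (int n = 0; n < 7; n++) {
        printf("estimate %.2f ms, observed %.2f ms\n", tau, bursts[n]);
        tau = next_estimate(alpha, bursts[n], tau);
    }
    return 0;
}
```

Larger α weights the most recent burst more heavily (faster reaction, noisier estimate); smaller α gives a smoother estimate that adapts more slowly.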
Multi-Level Feedback Scheduling
• (Figure: multiple priority queues; long-running compute tasks demoted to low priority)
• Another method for exploiting past behavior
  – First used in CTSS
  – Multiple queues, each with different priority
    » Higher priority queues often considered “foreground” tasks
  – Each queue has its own scheduling algorithm
    » e.g., foreground – RR, background – FCFS
    » Sometimes multiple RR priorities with quantum increasing exponentially (highest: 1ms, next: 2ms, next: 4ms, etc.)
• Adjust each job’s priority as follows (details vary; see the sketch below):
  – Job starts in highest priority queue
  – If timeout expires, drop one level
  – If timeout doesn’t expire, push up one level (or to top)

Scheduling Details
• Result approximates SRTF:
  – CPU-bound jobs drop like a rock
  – Short-running I/O-bound jobs stay near top
• Scheduling must be done between the queues
  – Fixed priority scheduling:
    » serve all from highest priority, then next priority, etc.
  – Time slice:
    » each queue gets a certain amount of CPU time
    » e.g., 70% to highest, 20% next, 10% lowest
• Countermeasure: user action that can foil intent of the OS designer
  – For multilevel feedback, put in a bunch of meaningless I/O to keep job’s priority high
  – Of course, if everyone did this, it wouldn’t work!
• Example of Othello program:
  – Playing against a competitor, so key was to do computing at higher priority than the competitors
    » Put in printf’s, ran much faster!
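A minimal sketch of the promote/demote rule from the Multi-Level Feedback slide, assuming four queues with exponentially growing quanta as on the slide. The struct names and the “promote straight to top” choice are illustrative assumptions; real schedulers vary in how far they promote.

```c
/* Multi-level feedback sketch (illustrative only): a job that uses its
 * whole quantum is demoted one level; a job that blocks before the
 * quantum expires is promoted (here, back to the top queue). */
#include <stdio.h>

#define NQUEUES 4

/* Quantum doubles per level: 1 ms, 2 ms, 4 ms, 8 ms (as on the slide). */
static const int quantum_ms[NQUEUES] = {1, 2, 4, 8};

struct job {
    const char *name;
    int level;              /* 0 = highest priority */
};

/* Adjust priority after each scheduling decision. */
static void update_level(struct job *j, int used_whole_quantum) {
    if (used_whole_quantum) {
        if (j->level < NQUEUES - 1)
            j->level++;     /* CPU-bound: drop one level */
    } else {
        j->level = 0;       /* blocked early: push back to top */
    }
}

int main(void) {
    struct job compute = {"compute", 0};
    struct job editor  = {"editor", 0};

    for (int slice = 0; slice < 5; slice++) {
        update_level(&compute, 1);   /* always burns its quantum */
        update_level(&editor, 0);    /* always blocks for I/O early */
    }
    printf("%s: level %d (quantum %d ms)\n", compute.name, compute.level,
           quantum_ms[compute.level]);
    printf("%s: level %d (quantum %d ms)\n", editor.name, editor.level,
           quantum_ms[editor.level]);
    return 0;
}
```

This also makes the countermeasure on the slide concrete: a job that sprinkles in meaningless I/O (like the Othello program’s printf’s) keeps taking the “blocked early” branch and stays in the top queue.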