
CS194-24
Advanced Operating Systems Structures and Implementation
Lecture 9

Synchronization (con’t)
Scheduling Review

February 27th, 2013

Prof. John Kubiatowicz
http://inst.eecs.berkeley.edu/~cs194-24
Goals for Today

• Synchronization (finish up)


• Scheduling

Interactive is important!
Ask Questions!

Note: Some slides and/or pictures in the following are adapted from slides ©2013


Recall: Implementing Locks with test&set: Spin Lock
• The Test&Test&Set lock:
int value = 0; // Free

Acquire() {
    while (true) {
        while (value);         // Locked, spin with reads
        if (!test&set(value))
            break;             // Success!
    }
}

Release() {
    value = 0;
}
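To make this concrete, here is a hedged sketch of the same test&test&set loop as compilable C11, with atomic_exchange standing in for the hardware test&set instruction (the type and function names are ours, not the slide’s):

#include <stdatomic.h>

typedef struct { atomic_int value; } ttas_lock;   /* 0 = free, 1 = held */

void ttas_acquire(ttas_lock *l) {
    for (;;) {
        /* Spin with plain reads while the lock looks held (cache-friendly) */
        while (atomic_load_explicit(&l->value, memory_order_relaxed))
            ;
        /* Atomic test&set: returns the old value; 0 means we got the lock */
        if (atomic_exchange_explicit(&l->value, 1, memory_order_acquire) == 0)
            break;   /* Success! */
    }
}

void ttas_release(ttas_lock *l) {
    atomic_store_explicit(&l->value, 0, memory_order_release);
}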
• Significant problems with Test&Test&Set?
– Multiple processors spinning on same memory location
» Release/Reacquire causes lots of cache invalidation traffic
» No guarantees of fairness – potential livelock
– Scales poorly with number of processors
» Because of bus traffic, average time until some processor acquires
lock grows with number of processors
• Busy-Waiting: thread consumes cycles while waiting
Recall: Mellor-Crummey-Scott Lock
• Nice properties of MCS Lock
– Lock Free internal implementation
– Never more than 2 processors spinning on one address
– Completely fair – once on queue, are guaranteed to get
your turn in FIFO order
» Alternate release procedure doesn’t use compare&swap
but doesn’t guarantee FIFO order
• Bad properties of MCS Lock
– Takes longer (more instructions) than T&T&S if no
contention
– Releaser may be forced to spin in rare circumstances
• Hardware support?
– Some proposed hardware queueing primitives such as
QOLB (Queue on Lock Bit)
– Not broadly available
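For concreteness, a minimal C11 sketch in the spirit of the MCS queue lock (the struct and function names are ours); each waiter spins only on the flag in its own queue node, and the release path shows the rare spin mentioned above:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool                locked;
} mcs_node;

typedef struct { _Atomic(mcs_node *) tail; } mcs_lock;

void mcs_acquire(mcs_lock *l, mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    mcs_node *prev = atomic_exchange(&l->tail, me);  /* join queue (FIFO) */
    if (prev) {                           /* lock held: link in and wait */
        atomic_store(&prev->next, me);
        while (atomic_load(&me->locked))  /* spin on our own node only */
            ;
    }
}

void mcs_release(mcs_lock *l, mcs_node *me) {
    mcs_node *succ = atomic_load(&me->next);
    if (!succ) {
        mcs_node *expected = me;
        /* No visible successor: try to swing tail back to empty */
        if (atomic_compare_exchange_strong(&l->tail, &expected, NULL))
            return;
        /* A successor is mid-enqueue; releaser spins briefly for the link */
        while (!(succ = atomic_load(&me->next)))
            ;
    }
    atomic_store(&succ->locked, false);   /* hand the lock to successor */
}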



Recall: Busy-wait vs Blocking
• Busy-wait: i.e., spin lock
– Keep trying to acquire the lock until successful
– Very low latency/processor overhead!
– Very high system overhead!
» Causing stress on network while spinning
» Processor is not doing anything else useful
• Blocking:
– If can’t acquire lock, deschedule process (i.e., unload state)
– Higher latency/processor overhead (1000s of cycles?)
» Takes time to unload/restart task
» Notification mechanism needed
– Low system overhead
» No stress on network
» Processor does something useful
• Hybrid:
– Spin for a while, then block
– 2-competitive: spin until have waited blocking time
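A hedged sketch of that hybrid using pThreads; SPIN_TRIES is an assumed calibration standing in for “the cost of blocking” and would be tuned per system:

#include <pthread.h>

#define SPIN_TRIES 1000   /* assumed: roughly the cost of blocking */

void hybrid_lock(pthread_mutex_t *m) {
    for (int i = 0; i < SPIN_TRIES; i++)
        if (pthread_mutex_trylock(m) == 0)
            return;            /* acquired while spinning */
    pthread_mutex_lock(m);     /* give up and block in the kernel */
}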



What about barriers?
• Barrier – global (/coordinated) synchronization
– simple use of barriers -- all threads hit the same one
work_on_my_subgrid();
barrier();
read_neighboring_values();
barrier();
– barriers are not provided in all thread libraries
• How to implement barrier?
– Global counter representing number of threads still
waiting to arrive and parity representing phase
» Initialize counter to zero, set parity variable to “even”
» Each thread that enters saves parity variable and
• Atomically increments counter if even
• Atomically decrements counter if odd
» If counter not at extreme value (i.e., num threads if “even”, zero if “odd”), spin until parity changes
» Else (last thread to arrive), flip parity and exit barrier
– Better for large numbers of processors – implement
atomic counter via combining tree
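The sketch below shows a simplified variant of the counter-plus-parity (“sense-reversing”) barrier described above in C11; for brevity the last arrival resets the counter rather than alternating increment/decrement, and the names are ours:

#include <stdatomic.h>

typedef struct {
    atomic_int count;    /* threads arrived in the current phase */
    atomic_int parity;   /* 0 = “even” phase, 1 = “odd” */
    int        nthreads;
} barrier_t;

void barrier_wait(barrier_t *b) {
    int phase = atomic_load(&b->parity);          /* save parity on entry */
    if (atomic_fetch_add(&b->count, 1) == b->nthreads - 1) {
        atomic_store(&b->count, 0);               /* last arrival resets */
        atomic_store(&b->parity, 1 - phase);      /* and flips parity */
    } else {
        while (atomic_load(&b->parity) == phase)  /* spin until flip */
            ;
    }
}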
pThreads Synchronization
• The pThreads API has a number of synchronization
options available to you
– Mutexes, Monitors/Condition Variables, Barriers
• Mutex
– Creation: Can be created either statically or dynamically
» Static: pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;
» Dynamic: pthread_mutex_init(&mutex,&attr);
pthread_mutex_destroy(&mutex);
Here, attr contains protocol for dealing with priority
inversion, priority ceiling, and process sharing properties
– Use: Simple locking and unlocking (and “try”):
» pthread_mutex_lock(&mutex);
pthread_mutex_trylock(&mutex);
pthread_mutex_unlock(&mutex);



pThreads Synchronization (Con’t)

• Condition Variables
– Creation: Can be created either statically or dynamically
» Static: pthread_cond_t mycond = PTHREAD_COND_INITIALIZER;
» Dynamic: pthread_cond_init(&cond,&attr);
pthread_cond_destroy(&cond);
Here, attr has only one element: process-shared
– Use: Simple waiting and signaling:
» pthread_cond_wait(&cond, &mutex);
pthread_cond_signal(&cond);
pthread_cond_broadcast(&cond);
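To show how these calls fit together, a small hypothetical wait/signal pattern (the ready flag and thread functions are ours); note the predicate is rechecked in a loop and the mutex is always held around the wait:

#include <pthread.h>

pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
int             ready = 0;

void *waiter(void *arg) {
    pthread_mutex_lock(&lock);
    while (!ready)                        /* guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);  /* atomically unlocks and sleeps */
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *notifier(void *arg) {
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&cond);           /* wake one waiter */
    pthread_mutex_unlock(&lock);
    return NULL;
}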
• Barriers:
– Creation: pthread_barrier_init(&barrier,&attr,count);
pthread_barrier_destroy(&barrier);
– Use: pthread_barrier_wait(&barrier);



Linux Synchronization (in Kernel)
• Atomic Operations (Love Book, Chapter 10)
– Declaration: include <asm/atomic.h>
atomic_t v;
atomic_t u = ATOMIC_INIT(0);
– Use: Simple operations
atomic_set(&v, 4); // v=4 (atomically)
atomic_add(2,&v); // v = v+2 (atomically)
atomic_inc(&v); // v = v+1 (atomically)
atomic_read(&v); // Atomically read v and return
– Atomic OP+Test: Perform op, then test
// atomically subtract value and return true if result zero
atomic_sub_and_test(value,&v);
– 64-bit Atomic versions exist, atomic bit manipulation
• Spin Locks:
– Declaration: include <asm/spinlock.h>
include <linux/spinlock.h>

DEFINE_SPINLOCK(mr_lock);
spin_lock(&mr_lock);
// Critical section
spin_unlock(&mr_lock);
Linux Synchronization (in Kernel), Con’t
• Spin Locks:
– Simple Use: include <asm/spinlock.h>
include <linux/spinlock.h>

DEFINE_SPINLOCK(mr_lock);
spin_lock(&mr_lock);

// Critical section

spin_unlock(&mr_lock);
– In Interrupt handlers:
DEFINE_SPINLOCK(mr_lock);
unsigned long flags;

// Save state of interrupts, then disable ints
spin_lock_irqsave(&mr_lock, flags);

// Critical section

spin_unlock_irqrestore(&mr_lock, flags);

– Also, Readers-Writers locks


» Multiple Readers, single writer



Linux Synchronization (In Kernel), Con’t
• Sleeping locks: Semaphores, Mutexes
– Both implemented with same mechanism
– Mutexes recommended for new code
• Completion variables
– Sorta like a condition variable, except do not sleep in critical
section
– Functionally like initializing semaphore to 0
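An illustrative fragment of the completion-variable calls in use (the calls are the kernel’s API; this is not a buildable module):

struct completion done;
init_completion(&done);

/* Thread A: block until the event has occurred */
wait_for_completion(&done);

/* Thread B: signal that the event occurred, waking A */
complete(&done);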
• Sequential locks:
– Like a Read/Write lock that favors writers (a writer can always write)
– Reader must abort and retry if it contends with a writer
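A typical seqlock usage fragment (the lock name is ours; the calls are the kernel’s seqlock API):

DEFINE_SEQLOCK(mr_seq_lock);

/* Reader: retry the read if a writer intervened */
unsigned long seq;
do {
    seq = read_seqbegin(&mr_seq_lock);
    /* ... read shared data ... */
} while (read_seqretry(&mr_seq_lock, seq));

/* Writer: always proceeds */
write_seqlock(&mr_seq_lock);
/* ... modify shared data ... */
write_sequnlock(&mr_seq_lock);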
• Big Kernel Lock (BKL)
– Methods: lock_kernel();
unlock_kernel();
kernel_locked() // Returns true if lock held
– Single lock for whole kernel
– You can sleep while holding it
– Only useful in process context (not interrupt context!)
– DON’T USE BKL!
Linux Memory Barriers
• In some rare instances (like device drivers) may need to
worry about ordering between memory operations to
different addresses
– Problem – compiler and hardware can reorder loads and
stores relative to one another
• Linux provides memory barriers to handle this
– If you find yourself using memory barriers, rethink what
you are doing first!
– Read end of Love Chapter 10!
• Operations:
rmb()                  // Prevents loads from being reordered across barrier
read_barrier_depends() // Prevents data-dependent loads from reordering
wmb()                  // Prevents stores from being reordered across barrier
mb()                   // Prevents loads and stores from reordering
barrier()              // Prevents compiler from moving loads/stores across barrier
smp_xxx()              // Does xxx() on multiprocessor, barrier() on uniprocessor



Linux Memory Barrier Example

• Here is an example of two threads with memory barriers in use. Assume a = b = 1 to start:

Thread 1        Thread 2
a = 3;
mb();
b = 4;          c = b;
                rmb();
                d = a;

• What are the valid values for c & d?


– c=1, d=1
– c=1, d=3
– c=4, d=3
• Without barriers, could have fourth option: c=4,d=1
Administrivia
• Lab 1 Code due today!
– Tomorrow, Lab 1 design document
– Also, group evaluations due tomorrow (if we get the
mechanism up)
• Lab 2 not quite ready
– Will try to post it as soon as we can
• Midterm I: Wednesday 3/13 – Two weeks from today!
– All topics up to that Monday (3/11) are fair game
– Closed book, 1 Sheet for notes (both sides, handwritten)



Review: CPU Scheduling

• Earlier, we talked about the life-cycle of a thread


– Active threads work their way from Ready queue to
Running to various waiting queues.
• Question: How is the OS to decide which of several
tasks to take off a queue?
– Obvious queue to worry about is ready queue
– Others can be scheduled as well, however
• Scheduling: deciding which threads are given access
to resources from moment to moment
Scheduling Assumptions
• CPU scheduling big area of research in early 70’s
• Many implicit assumptions for CPU scheduling:
– One program per user
– One thread per program
– Programs are independent
• Clearly, these are unrealistic but they simplify the
problem so it can be solved
– For instance: is “fair” about fairness among users or
programs?
» If I run one compilation job and you run five, you get five
times as much CPU on many operating systems
• The high-level goal: Dole out CPU time to optimize
some desired parameters of system

[Figure: timeline of CPU time-multiplexed among users — USER1, USER2, USER3, USER1, USER2]
Assumption: CPU Bursts

[Figure: histogram of CPU burst durations — weighted toward small bursts]

• Execution model: programs alternate between bursts of CPU and I/O
– Program typically uses the CPU for some period of time,
then does I/O, then uses CPU again
– Each scheduling decision is about which job to give to the
CPU for use by its next CPU burst
– With timeslicing, thread may be forced to give up CPU
before finishing current CPU burst
Scheduling Policy Goals/Criteria
• Minimize Response Time
– Minimize elapsed time to do an operation (or job)
– Response time is what the user sees:
» Time to echo a keystroke in editor
» Time to compile a program
» Real-time Tasks: Must meet deadlines imposed by World
• Maximize Throughput
– Maximize operations (or jobs) per second
– Throughput related to response time, but not identical:
» Minimizing response time will lead to more context
switching than if you only maximized throughput
– Two parts to maximizing throughput
» Minimize overhead (for example, context-switching)
» Efficient use of resources (CPU, disk, memory, etc)
• Fairness
– Share CPU among users in some equitable way
– Fairness is not minimizing average response time:
» Better average response time by making system less fair



First-Come, First-Served (FCFS) Scheduling
• First-Come, First-Served (FCFS)
– Also “First In, First Out” (FIFO) or “Run until done”
» In early systems, FCFS meant one program
scheduled until done (including I/O)
» Now, means keep CPU until thread blocks
• Example: Process Burst Time
P1 24
P2 3
P3 3
– Suppose processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:
P1 P2 P3

0 24 27 30
– Waiting time for P1 = 0; P2 = 24; P3 = 27
– Average waiting time: (0 + 24 + 27)/3 = 17
– Average Completion time: (24 + 27 + 30)/3 = 27
• Convoy effect: short process behind long process
FCFS Scheduling (Cont.)
• Example continued:
– Suppose that processes arrive in order: P2 , P3 , P1
Now, the Gantt chart for the schedule is:
P2 P3 P1

0 3 6 30
– Waiting time for P1 = 6; P2 = 0; P3 = 3
– Average waiting time: (6 + 0 + 3)/3 = 3
– Average Completion time: (3 + 6 + 30)/3 = 13
• In second case:
– average waiting time is much better (before it was 17)
– Average completion time is better (before it was 27)
• FIFO Pros and Cons:
– Simple (+)
– Short jobs get stuck behind long ones (-)
» Safeway: Getting milk, always stuck behind cart full of
small items. Upside: get to read about space aliens!
Round Robin (RR)
• FCFS Scheme: Potentially bad for short jobs!
– Depends on submit order
– If you are first in line at supermarket with milk, you
don’t care who is behind you, on the other hand…
• Round Robin Scheme
– Each process gets a small unit of CPU time
(time quantum), usually 10-100 milliseconds
– After quantum expires, the process is preempted
and added to the end of the ready queue.
– n processes in ready queue and time quantum is q ⇒
» Each process gets 1/n of the CPU time
» In chunks of at most q time units
» No process waits more than (n-1)q time units
• Performance
– q large ⇒ FCFS
– q small ⇒ interleaved (really small ⇒ hyperthreading?)
– q must be large with respect to context switch, otherwise overhead is too high (all overhead)
Example of RR with Time Quantum = 20
• Example: Process Burst Time
P1 53
P2 8
P3 68
P4 24
– The Gantt chart is:
P1 P2 P3 P4 P1 P3 P4 P1 P3 P3

0 20 28 48 68 88 108 112 125 145 153

– Waiting time for P1 = (68-20)+(112-88) = 72
                P2 = (20-0) = 20
                P3 = (28-0)+(88-48)+(125-108) = 85
                P4 = (48-0)+(108-68) = 88
– Average waiting time = (72+20+85+88)/4=66¼
– Average completion time = (125+28+153+112)/4 = 104½
• Thus, Round-Robin Pros and Cons:
– Better for short jobs, Fair (+)
– Context-switching time adds up for long jobs (-)
Round-Robin Discussion
• How do you choose time slice?
– What if too big?
» Response time suffers
– What if infinite (∞)?
» Get back FIFO
– What if time slice too small?
» Throughput suffers!
• Actual choices of timeslice:
– Initially, UNIX timeslice one second:
» Worked ok when UNIX was used by one or two people.
» What if three compilations going on? 3 seconds to echo
each keystroke!
– In practice, need to balance short-job performance
and long-job throughput:
» Typical time slice today is between 10ms – 100ms
» Typical context-switching overhead is 0.1ms – 1ms
» Roughly 1% overhead due to context-switching



Comparisons between FCFS and Round Robin
• Assuming zero-cost context-switching time, is RR
always better than FCFS?
• Simple example: 10 jobs, each takes 100s of CPU time
RR scheduler quantum of 1s
All jobs start at the same time
• Completion Times:

Job #   FIFO    RR
1        100    991
2        200    992
…        …      …
9        900    999
10      1000   1000

– Both RR and FCFS finish at the same time
– Average response time is much worse under RR!
» Bad when all jobs same length
• Also: Cache state must be shared between all jobs
with RR but can be devoted to each job with FIFO
– Total time for RR longer even for zero-cost switch!
Earlier Example with Different Time Quantum
Best FCFS:   P2 [8]   P4 [24]   P1 [53]   P3 [68]
             0        8         32        85        153

             Quantum      P1    P2    P3    P4    Average
             Best FCFS    32     0    85     8     31¼
             Q = 1        84    22    85    57     62
Wait         Q = 5        82    20    85    58     61¼
Time         Q = 8        80     8    85    56     57¼
             Q = 10       82    10    85    68     61¼
             Q = 20       72    20    85    88     66¼
             Worst FCFS   68   145     0   121     83½

             Best FCFS    85     8   153    32     69½
             Q = 1       137    30   153    81    100½
Completion   Q = 5       135    28   153    82     99½
Time         Q = 8       133    16   153    80     95½
             Q = 10      135    18   153    92     99½
             Q = 20      125    28   153   112    104½
             Worst FCFS  121   153    68   145    121¾
What if we Knew the Future?
• Could we always mirror best FCFS?
• Shortest Job First (SJF):
– Run whatever job has the least amount of
computation to do
– Sometimes called “Shortest Time to
Completion First” (STCF)
• Shortest Remaining Time First (SRTF):
– Preemptive version of SJF: if job arrives and has a
shorter time to completion than the remaining time on
the current job, immediately preempt CPU
– Sometimes called “Shortest Remaining Time to
Completion First” (SRTCF)
• These can be applied either to a whole program or
the current CPU burst of each program
– Idea is to get short jobs out of the system
– Big effect on short jobs, only small effect on long ones
– Result is better average response time
Discussion
• SJF/SRTF are the best you can do at minimizing
average response time
– Provably optimal (SJF among non-preemptive, SRTF
among preemptive)
– Since SRTF is always at least as good as SJF, focus on
SRTF
• Comparison of SRTF with FCFS and RR
– What if all jobs the same length?
» SRTF becomes the same as FCFS (i.e., FCFS is the best you can do if all jobs are the same length)
– What if jobs have varying length?
» SRTF (and RR): short jobs not stuck behind long ones



Example to illustrate benefits of SRTF

[Figure: A or B compute continuously while C alternates short CPU bursts with its disk I/O]
• Three jobs:
– A,B: both CPU bound, run for week
C: I/O bound, loop 1ms CPU, 9ms disk I/O
– If only one at a time, C uses 90% of the disk, A or B
could use 100% of the CPU
• With FIFO:
– Once A or B get in, keep CPU for two weeks
• What about RR or SRTF?
– Easier to see with a timeline



SRTF Example continued:
[Timeline figure comparing the three policies:
– RR, 100 ms time slice: C A B C … ⇒ disk utilization 9/201 ≈ 4.5%
– RR, 1 ms time slice: C A B A B … C ⇒ disk utilization ~90%, but lots of wakeups!
– SRTF: C A A A … ⇒ disk utilization 90%]



SRTF Further discussion
• Starvation
– SRTF can lead to starvation if many small jobs!
– Large jobs never get to run
• Somehow need to predict future
– How can we do this?
– Some systems ask the user
» When you submit a job, have to say how long it will take
» To stop cheating, system kills job if takes too long
– But: Even non-malicious users have trouble predicting
runtime of their jobs
• Bottom line, can’t really know how long job will take
– However, can use SRTF as a yardstick
for measuring other policies
– Optimal, so can’t do any better
• SRTF Pros & Cons
– Optimal (average response time) (+)
– Hard to predict future (-)
– Unfair (-)
Predicting the Length of the Next CPU Burst
• Adaptive: Changing policy based on past behavior
– CPU scheduling, in virtual memory, in file systems, etc
– Works because programs have predictable behavior
» If program was I/O bound in past, likely in future
» If computer behavior were random, wouldn’t help
• Example: SRTF with estimated burst length
– Use an estimator function on previous bursts:
Let t(n-1), t(n-2), t(n-3), etc. be previous CPU burst lengths.
Estimate next burst τ(n) = f(t(n-1), t(n-2), t(n-3), …)
– Function f could be one of many different time-series estimation schemes (Kalman filters, etc.)
– For instance, exponential averaging:
τ(n) = α·t(n-1) + (1-α)·τ(n-1), with 0 ≤ α ≤ 1
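This estimator is one line of code; a sketch (the function name is ours):

/* Exponential-average predictor for the next CPU burst length.
   alpha in [0,1] weights the most recently measured burst. */
double predict_next_burst(double prev_burst, double prev_estimate,
                          double alpha) {
    return alpha * prev_burst + (1.0 - alpha) * prev_estimate;
}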



Multi-Level Feedback Scheduling

[Figure: multi-level queues — long-running compute tasks demoted to low priority]

• Another method for exploiting past behavior


– First used in CTSS
– Multiple queues, each with different priority
» Higher priority queues often considered “foreground” tasks
– Each queue has its own scheduling algorithm
» e.g. foreground – RR, background – FCFS
» Sometimes multiple RR priorities with quantum increasing
exponentially (highest:1ms, next:2ms, next: 4ms, etc)
• Adjust each job’s priority as follows (details vary)
– Job starts in highest priority queue
– If timeout expires, drop one level
– If timeout doesn’t expire, push up one level (or to top)
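A toy sketch of that adjustment rule (the queue count, quantum schedule, and names are all assumptions for illustration):

#define NQUEUES 4

struct task { int level; };                        /* 0 = highest priority */

int quantum_ms(int level) { return 1 << level; }   /* 1, 2, 4, 8 ms */

void on_quantum_expired(struct task *t) {
    if (t->level < NQUEUES - 1)
        t->level++;     /* used its whole quantum: demote one level */
}

void on_blocked_early(struct task *t) {
    if (t->level > 0)
        t->level--;     /* yielded for I/O before expiry: promote */
}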
Scheduling Details
• Result approximates SRTF:
– CPU bound jobs drop like a rock
– Short-running I/O bound jobs stay near top
• Scheduling must be done between the queues
– Fixed priority scheduling:
» serve all from highest priority, then next priority, etc.
– Time slice:
» each queue gets a certain amount of CPU time
» e.g., 70% to highest, 20% next, 10% lowest
• Countermeasure: user action that can foil intent of
the OS designer
– For multilevel feedback, put in a bunch of meaningless
I/O to keep job’s priority high
– Of course, if everyone did this, wouldn’t work!
• Example of Othello program:
– Playing against a competitor, so the key was to do computing at higher priority than the competitor’s:
» Put in printf’s, ran much faster!
Scheduling Fairness
• What about fairness?
– Strict fixed-priority scheduling between queues is unfair
(run highest, then next, etc):
» long running jobs may never get CPU
» In Multics, shut down machine, found 10-year-old job
– Must give long-running jobs a fraction of the CPU even
when there are shorter jobs to run
– Tradeoff: fairness gained by hurting avg response time!
• How to implement fairness?
– Could give each queue some fraction of the CPU
» What if one long-running job and 100 short-running ones?
» Like express lanes in a supermarket—sometimes express
lanes get so long, get better service by going into one of
the other lines
– Could increase priority of jobs that don’t get service
» What is done in some variants of UNIX
» This is ad hoc—what rate should you increase priorities?
» And, as system gets overloaded, no job gets CPU time, so everyone increases in priority ⇒ interactive jobs suffer
Lottery Scheduling

• Yet another alternative: Lottery Scheduling


– Give each job some number of lottery tickets
– On each time slice, randomly pick a winning ticket
– On average, CPU time is proportional to number of
tickets given to each job
• How to assign tickets?
– To approximate SRTF, short running jobs get more,
long running jobs get fewer
– To avoid starvation, every job gets at least one
ticket (everyone makes progress)
• Advantage over strict priority scheduling: behaves
gracefully as load changes
– Adding or deleting a job affects all jobs
proportionally, independent of how many tickets each
job possesses
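A minimal sketch of the ticket draw itself (the job struct and list are assumed; a real implementation would track the total ticket count incrementally):

#include <stdlib.h>

struct job { int tickets; /* ... */ };

/* Pick a winning ticket uniformly at random, then walk the job
   list until the cumulative ticket count covers it. */
struct job *draw_winner(struct job *jobs, int njobs, int total_tickets) {
    int winner = rand() % total_tickets;
    for (int i = 0; i < njobs; i++) {
        winner -= jobs[i].tickets;
        if (winner < 0)
            return &jobs[i];   /* this job holds the winning ticket */
    }
    return NULL;               /* unreachable if ticket counts are consistent */
}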



Lottery Scheduling Example
• Lottery Scheduling Example
– Assume short jobs get 10 tickets, long jobs get 1 ticket

# short jobs /   % of CPU each    % of CPU each
# long jobs      short job gets   long job gets
1/1              91%              9%
0/2              N/A              50%
2/0              50%              N/A
10/1             9.9%             0.99%
1/10             50%              5%

– What if too many short jobs to give reasonable


response time?
» If load average is 100, hard to make progress
2/27/13
» One approach: log some
Kubiatowicz user ©UCB
CS194-24 out Fall 2013 Lec 9.36
Summary
• Scheduling: selecting a waiting process from the ready
queue and allocating the CPU to it
• FCFS Scheduling:
– Run threads to completion in order of submission
– Pros: Simple
– Cons: Short jobs get stuck behind long ones
• Round-Robin Scheduling:
– Give each thread a small amount of CPU time when it
executes; cycle between all ready threads
– Pros: Better for short jobs
– Cons: Poor when jobs are same length
• Shortest Job First (SJF)/Shortest Remaining Time
First (SRTF):
– Run whatever job has the least amount of computation to
do/least remaining amount of computation to do
– Pros: Optimal (average response time)
– Cons: Hard to predict future, Unfair



Summary (Con’t)
• Multi-Level Feedback Scheduling:
– Multiple queues of different priorities
– Automatic promotion/demotion of process priority in
order to approximate SJF/SRTF
• Lottery Scheduling:
– Give each thread a priority-dependent number of tokens (short tasks ⇒ more tokens)
– Reserve a minimum number of tokens for every thread
to ensure forward progress/fairness
• Next time: More Interesting Schedulers
– O(1) scheduler (Linux 2.6.x)
– Completely Fair Scheduler (Linux 2.6.23)
– EDF (Earliest Deadline First)
– CBS (Constant Bandwidth Scheduler)

