Chapter 3 Process Scheduling


20/09/2012

3.4 PROCESS SCHEDULING
• Scheduling issues
• General Algorithms
  – FCFS
  – SJF
  – Round Robin
  – Priority
  – Multiple Queue
  – Guaranteed scheduling
  – Lottery scheduling

A. CPU BURST TIME
- almost all processes alternate bursts of computing and I/O requests
• Burst time
  - how long a process requires the CPU
• Compute-bound process
  - CPU-bound processes spend most of their time computing
• I/O-bound process
  - processes that do a lot of I/O may spend most of their time waiting for I/O

CSI354 Operating Systems 2012 — Chapter 3 Processes and Threads

B. SCHEDULERS

• Long-term scheduler (or job scheduler)
  - selects which processes should be brought into the ready queue, based on the characteristics of the job
  - invoked very infrequently (seconds, minutes) ⇒ may be slow
  - controls the degree of multiprogramming

• Short-term scheduler (or CPU scheduler)
  - selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them
  - also decides when to interrupt processes and the appropriate queue to move them to
  - invoked very frequently (milliseconds) ⇒ must be fast
  - CPU scheduling decisions may take place when a process:
    i. switches from running to waiting state
    ii. switches from running to ready state
    iii. switches from waiting to ready state
    iv. terminates
  - scheduling can be nonpreemptive or preemptive


• Dispatcher module
  - gives control of the CPU to the process selected by the short-term scheduler
  - this involves:
    • switching context
    • switching to user mode
    • jumping to the proper location in the user program to restart that program
  - dispatch latency – the time it takes for the dispatcher to stop one process and start another running

Scheduling Criteria
• CPU utilization – keep the CPU as busy as possible
• Throughput – number of processes that complete their execution per time unit
• Turnaround time – amount of time to execute a particular process, from submission to completion
• Waiting time – amount of time a process has been waiting in the ready queue
• Response time – amount of time from when a request was submitted until the first response is produced (not until the output is complete)
• Fairness to all jobs – everyone gets an equal amount of CPU and I/O time

C. PROCESS SCHEDULING ALGORITHMS
- the process scheduler relies on a process scheduling algorithm to allocate the CPU
- early systems used non-preemptive policies
- most current systems emphasize interactive use and response time
- they therefore use algorithms that take care of the immediate requests of interactive users

1. First-Come, First-Served (FCFS) Scheduling
- historically used by many OSs
- jobs are serviced according to arrival time in the ready queue
   the earlier they arrive, the sooner they are served
- non-preemptive
- very simple to use
- okay for batch systems
- unacceptable for interactive systems
- by definition the fairest algorithm


Process   Burst Time
P1        24
P2        3
P3        3

- suppose that the processes arrive in the order: P1, P2, P3
- Gantt chart for the schedule:
  |   P1   | P2 | P3 |
  0        24   27   30
- waiting time for P1 = 0; P2 = 24; P3 = 27
- average waiting time: (0 + 24 + 27)/3 = 17

- now suppose that the processes arrive in the order: P2, P3, P1
- Gantt chart for the schedule:
  | P2 | P3 |   P1   |
  0    3    6        30
- waiting time for P1 = 6; P2 = 0; P3 = 3
- average waiting time: (6 + 0 + 3)/3 = 3
- much better than the previous case
- convoy effect: a short process arriving after a long process may have to wait a long time


2. Shortest-Job-First (SJF) Scheduling
- associate with each process the length of its next CPU burst
- use these lengths to schedule the process with the shortest time
- SJF is optimal: it gives the minimum average waiting time and shortest turnaround time for a given set of processes
- the difficulty is knowing the length of the next CPU request
  • can only estimate the length of the next CPU burst
  • can be done by using the lengths of previous CPU bursts
- can be
  • preemptive (Shortest-Remaining-Time-First, SRTF)
  • non-preemptive
- processes with long burst times may starve

Example:
Process   Burst Time
P1        6
P2        8
P3        7
P4        3
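With all arrivals at time 0, non-preemptive SJF is simply FCFS applied to the bursts sorted shortest-first. A small sketch (names are mine) for the table above:

```java
import java.util.Arrays;

// Non-preemptive SJF with all arrivals at time 0: sort the bursts
// ascending, then accumulate waiting times exactly as FCFS would.
public class Sjf {
    static double avgWait(int[] bursts) {
        int[] b = bursts.clone();
        Arrays.sort(b);                       // shortest job first
        int total = 0, elapsed = 0;
        for (int x : b) { total += elapsed; elapsed += x; }
        return (double) total / b.length;
    }

    public static void main(String[] args) {
        // P1=6, P2=8, P3=7, P4=3 → schedule P4, P1, P3, P2
        System.out.println(avgWait(new int[]{6, 8, 7, 3})); // 7.0
    }
}
```

The waiting times are 0, 3, 9, 16, giving the average of 7 time units.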


Example (preemptive SJF, i.e. SRTF):
Process   Burst Time   Arrival Time
P1        7            0
P2        4            2
P3        1            4
P4        4            5

3. Priority Scheduling
- a priority number (integer) is associated with each process
- the CPU is allocated to the process with the highest priority (e.g. in UNIX the smallest integer ≡ highest priority)
  • can be preemptive or
  • nonpreemptive
- SJF is priority scheduling where the priority is the predicted next CPU burst time
- problem ≡ starvation – low-priority processes may never execute
- solution ≡ aging – as time progresses, increase the priority of the process
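The SRTF table above can be checked with a unit-step simulation: at every time unit, run the arrived process with the least remaining burst. A sketch with names of my own choosing:

```java
// Preemptive SJF (SRTF) by unit-step simulation — adequate for small
// integer examples like the one in the table above.
public class Srtf {
    static double avgWait(int[] burst, int[] arrive) {
        int n = burst.length;
        int[] rem = burst.clone();
        int[] finish = new int[n];
        int done = 0, t = 0;
        while (done < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)       // least remaining time among arrived
                if (arrive[i] <= t && rem[i] > 0 && (pick < 0 || rem[i] < rem[pick]))
                    pick = i;
            if (pick < 0) { t++; continue; }  // CPU idle until next arrival
            if (--rem[pick] == 0) { finish[pick] = t + 1; done++; }
            t++;
        }
        int total = 0;                        // wait = finish − arrival − burst
        for (int i = 0; i < n; i++) total += finish[i] - arrive[i] - burst[i];
        return (double) total / n;
    }

    public static void main(String[] args) {
        // P1=7@0, P2=4@2, P3=1@4, P4=4@5
        System.out.println(avgWait(new int[]{7, 4, 1, 4}, new int[]{0, 2, 4, 5})); // 3.0
    }
}
```

The individual waits come out as P1 = 9, P2 = 1, P3 = 0, P4 = 2, for an average of 3.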


Example:
Process   Burst Time   Arrival Time   Priority
P1        20           0              4
P2        2            2              2
P3        2            2              1
 a low number means a high priority

4. Round Robin (RR)
- each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds
- after this time has elapsed, the process is preempted and added to the end of the ready queue
- if there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once
   no process waits more than (n−1)q time units
- performance
  • q large ⇒ behaves like FIFO
  • q small ⇒ q must still be large with respect to the context-switch time, otherwise the overhead is too high


- example with time quantum = 4:
Process   Burst Time
P1        24
P2        3
P3        3

- exercise, using a time quantum of 3:
Process   Burst Time   Arrival Time
P1        7            0
P2        4            4
P3        1            5
P4        1            9
P5        3            12
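The quantum-4 example above can be traced with a small queue-based sketch. Note this version assumes all processes arrive at time 0, so it covers the first table only; the quantum-3 exercise would additionally need arrival times handled. Names are mine:

```java
import java.util.ArrayDeque;

// Round Robin with all arrivals at time 0: rotate through a FIFO queue,
// giving each process at most q time units per turn.
public class RoundRobin {
    static double avgWait(int[] burst, int q) {
        int n = burst.length;
        int[] rem = burst.clone();
        int[] finish = new int[n];
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < n; i++) queue.add(i);
        int t = 0;
        while (!queue.isEmpty()) {
            int i = queue.poll();
            int run = Math.min(q, rem[i]);
            t += run;
            rem[i] -= run;
            if (rem[i] > 0) queue.add(i);     // not done: back of the queue
            else finish[i] = t;
        }
        int total = 0;
        for (int i = 0; i < n; i++) total += finish[i] - burst[i];
        return (double) total / n;
    }

    public static void main(String[] args) {
        // P1=24, P2=3, P3=3 with quantum 4 → waits 6, 4, 7 → avg ≈ 5.67
        System.out.println(avgWait(new int[]{24, 3, 3}, 4));
    }
}
```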


[Figure: Time quantum and context-switch time]
[Figure: Turnaround time varies with the time quantum]


5. Multilevel Queue
- the ready queue is partitioned into separate queues
- a new job is placed in one of the queues
- each queue has its own scheduling algorithm, for example
  • foreground (interactive) – RR
  • background (batch) – FCFS
- scheduling must also be done between the queues
  • fixed-priority scheduling (i.e. serve all from foreground, then from background) – possibility of starvation
  • time slicing – each queue gets a certain amount of CPU time which it can schedule amongst its processes
    e.g. 80% to foreground in RR, 20% to background in FCFS


Multilevel Feedback Queue
- a process can move between the various queues
   aging can be implemented this way
- the assumption is that a process is interactive, so it is placed in the highest-priority queue
- as the process uses up its time quantum, it moves to other queues
- a multilevel-feedback-queue scheduler is defined by the following parameters:
  • number of queues
  • scheduling algorithm for each queue
  • method used to determine when to upgrade a process
  • method used to determine when to demote a process
  • method used to determine which queue a process will enter when that process needs service

• Example
- three queues:
  • Q0 – RR with time quantum 8 milliseconds
  • Q1 – RR with time quantum 16 milliseconds
  • Q2 – FCFS
- scheduling
  • a new job enters queue Q0, which is served FCFS
  • when it gains the CPU, the job receives 8 milliseconds
  • if it does not finish in 8 milliseconds, the job is moved to queue Q1
  • at Q1 the job is again served FCFS and receives 16 additional milliseconds
  • if it still does not complete, it is preempted and moved to queue Q2
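The demotion rule in the three-queue example can be traced with a tiny helper (class and method names are hypothetical, for illustration only):

```java
// Which queue does a job finish in, given the example's rules?
// Q0 grants 8 ms, Q1 grants 16 more, Q2 serves the remainder FCFS.
public class Mlfq {
    static int finishingQueue(int burstMs) {
        if (burstMs <= 8) return 0;        // done within the Q0 quantum
        if (burstMs <= 8 + 16) return 1;   // done with Q1's additional 16 ms
        return 2;                          // remainder served FCFS in Q2
    }

    public static void main(String[] args) {
        System.out.println(finishingQueue(5));   // 0
        System.out.println(finishingQueue(20));  // 1
        System.out.println(finishingQueue(100)); // 2
    }
}
```

A short interactive burst never leaves the high-priority queue, which is exactly the behavior the design is after.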


6. Guaranteed Scheduling
- what if we want to guarantee that a process gets x% of the CPU?
   how do we write the scheduler?
- the scheduling algorithm would compute the ratio of
  • the CPU time a process has used since it began, to
  • the CPU time it is supposed to have had
- the process with the lowest ratio runs next
- difficult to implement
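The ratio rule above can be sketched in a few lines. This assumes n equally weighted processes, so each one's entitlement is elapsed/n — that entitlement formula is my assumption, not something the slides specify:

```java
// Guaranteed scheduling: run the process whose ratio
// (CPU consumed / CPU entitled) is lowest.
public class Guaranteed {
    static int next(double[] consumed, double elapsed) {
        int n = consumed.length, pick = 0;
        double entitled = elapsed / n;     // assumed equal shares
        for (int i = 1; i < n; i++)
            if (consumed[i] / entitled < consumed[pick] / entitled) pick = i;
        return pick;
    }

    public static void main(String[] args) {
        // Three processes, 30 time units elapsed → each entitled to 10;
        // the process that has consumed only 4 is furthest behind.
        System.out.println(next(new double[]{12, 4, 14}, 30)); // 1
    }
}
```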


7. Lottery Scheduling
- issues lottery tickets to processes for various resources
- the scheduler then randomly selects a lottery number
- the winning process gets to run
- the more lottery tickets you have, the better your chance of “winning”
- processes can give (or lend) their tickets to their children or to other processes
- more important processes can be given more tickets

D. ALGORITHM EVALUATION
- there are many scheduling algorithms
- we need criteria to select one for a system
- we also need ways to evaluate algorithms
- four evaluation methods:
  • deterministic modeling
  • queuing models
  • simulations
  • implementation


1. Deterministic Modeling
- also called discrete modeling
- takes a particular predetermined workload and defines the performance of each algorithm for that workload
   e.g. given a set of processes and their burst times, which of the following algorithms gives the minimum average waiting time: FCFS, SJF, or RR (quantum = 10)? Assume all processes arrive at time 0, in the order given.

Process   Burst Time
P1        10
P2        29
P3        3
P4        7
P5        12

- advantages:
  • simple and fast
  • gives exact numbers for comparison
  • useful if the same processes will be run many times
  • can identify trends (e.g. SJF results in the minimum waiting time)
- disadvantages:
  • requires exact numbers
  • results only apply to the cases that were used to evaluate the algorithms
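The workload above can be evaluated deterministically in a few lines (the class and method names are mine), reusing the waiting-time logic from the earlier sections:

```java
import java.util.ArrayDeque;
import java.util.Arrays;

// Deterministic modeling of the workload above (all arrivals at time 0):
// average waiting time under FCFS, SJF and RR(q = 10).
public class Compare {
    static double fcfs(int[] b) {
        int total = 0, t = 0;
        for (int x : b) { total += t; t += x; }
        return (double) total / b.length;
    }
    static double sjf(int[] b) {                // shortest first, then FCFS
        int[] s = b.clone();
        Arrays.sort(s);
        return fcfs(s);
    }
    static double rr(int[] b, int q) {
        int n = b.length, t = 0, total = 0;
        int[] rem = b.clone();
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < n; i++) queue.add(i);
        while (!queue.isEmpty()) {
            int i = queue.poll();
            int run = Math.min(q, rem[i]);
            t += run; rem[i] -= run;
            if (rem[i] > 0) queue.add(i); else total += t - b[i];
        }
        return (double) total / n;
    }

    public static void main(String[] args) {
        int[] w = {10, 29, 3, 7, 12};
        System.out.println(fcfs(w));   // 28.0
        System.out.println(sjf(w));    // 13.0
        System.out.println(rr(w, 10)); // 23.0
    }
}
```

For this workload SJF gives the minimum average waiting time (13), consistent with the trend noted in the advantages list.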


2. Queuing Models
- on many systems many different types of processes run, so deterministic modeling cannot be used
- however, we can analyze characteristics of the processes, e.g.
  • CPU times
  • I/O burst times
  • arrival times
  • service rates
- these characteristics are represented by distributions (mathematical formulas)
- once we have these distributions, we can compute the throughput, utilization, average queue lengths, waiting times, etc.
- let
  • W = average waiting time
  • λ = average arrival rate of new processes
  • n = average queue length, excluding the process running
- then in a steady state (i.e. the number of processes leaving the queue equals the number of processes arriving):
   n = λ × W (this is known as Little’s formula)
  i.e. during W, λ × W processes arrive in the ready queue
- Little’s formula is valid for any scheduling algorithm and arrival distribution
- suppose we want to find W and know that n = 14 and λ = 7 processes/second: what is the average waiting time?
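Rearranging Little's formula answers the question above directly: W = n/λ = 14/7 = 2 seconds. As code (names mine):

```java
// Little's formula n = λ·W, rearranged to solve for the average wait.
public class Little {
    static double waitTime(double n, double lambda) {
        return n / lambda;                 // W = n / λ
    }

    public static void main(String[] args) {
        System.out.println(waitTime(14, 7)); // 2.0 seconds
    }
}
```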


- advantages:
  • a very fast way of analyzing systems
- disadvantages:
  • the mathematics can become complex, so processes and systems are usually modeled in unrealistic but tractable ways
  • many assumptions may have to be made
  • accuracy is therefore questionable

3. Simulation
- program a model of the computer system
- software data structures represent the system components
- a clock is maintained
- as the clock is advanced, the state of the various components in the system is modified
- if a process in the real system requires 5 seconds of CPU time, it will execute in a moment in the simulation (the clock is simply advanced)
- the simulator keeps track of variables in order to calculate statistics for utilization, throughput, waiting time, etc.


- can use random-number generation
  • to model arrival times, CPU times, etc.
  • does not capture the order of events
- a trace from a real system can be used instead
  • the order of events is preserved
  • but a trace can be very large
- advantages:
  • very accurate
  • the level of detail is controlled
- disadvantages:
  • accuracy may require very complex models to be programmed, which takes a long time
  • traces may require large amounts of storage

[Figure: Evaluation of CPU schedulers by simulation]


4. Implementation
- actually build the system with the desired features: the most accurate way of evaluating performance
- use a benchmark to measure performance
  • a workload is run and performance is measured
  • the workload mimics the behavior of the actual processes that will run on the system
- big disadvantage: very expensive

E. MULTIPROCESSOR SCHEDULING
- concentrate on homogeneous processors within a multiprocessor
- CPU scheduling becomes more complex
   load sharing is needed
• Asymmetric multiprocessing
  - a master server handles all scheduling decisions
• Symmetric multiprocessing (SMP)
  - each processor is self-scheduling
  - could have all processes in a common ready queue
  - or each CPU could have its own private queue of ready processes


- some issues concerning SMP include
• Processor affinity
  - a process has affinity for the processor on which it is currently running; avoid migration of processes
  - a process has a value that indicates its preference
    • soft affinity – migration is possible
    • hard affinity – no migration
• Load balancing
  - keep the workload evenly distributed across all processors
    • push migration
    • pull migration

F. THREAD SCHEDULING
- distinction between user-level and kernel-level threads
- for the many-to-one and many-to-many models
  • the thread library schedules user-level threads to run on an LWP
  • uses process-contention scope (PCS)
  • the scheduling competition is within the process
- for the one-to-one model
  • kernel threads are scheduled onto available CPUs using system-contention scope (SCS)
  • the competition is among all threads in the system


Java Thread Scheduling
- loosely-defined scheduling policy
- each thread has a priority ranging between 1 and 10
- a thread is given a default priority of 5 when created
- higher-priority threads have preference
   FIFO is used if there are multiple threads with the same priority
- Thread.MIN_PRIORITY – minimum thread priority (1)
- Thread.MAX_PRIORITY – maximum thread priority (10)
- Thread.NORM_PRIORITY – default thread priority (5)
- priorities are not adjusted dynamically
   a priority will only change if this is done explicitly in the program
   using the setPriority() method:
      setPriority(Thread.NORM_PRIORITY + 2);
- a thread runs until:
  • its time quantum expires
  • it blocks for I/O
  • it exits its run() method
- some systems may support preemption
- the JVM schedules a thread to run when
  • the currently running thread exits the Runnable state
  • a higher-priority thread enters the Runnable state
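These constants and setPriority() are the real java.lang.Thread API and can be exercised directly (a new thread inherits the creating thread's priority, which for the main thread is NORM_PRIORITY):

```java
// Demonstrates the Thread priority constants and setPriority()
// described above, using the standard java.lang.Thread API.
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> { /* CPU-intensive work would go here */ });
        System.out.println(Thread.MIN_PRIORITY);  // 1
        System.out.println(Thread.MAX_PRIORITY);  // 10
        System.out.println(t.getPriority());      // 5 (inherited NORM_PRIORITY)
        t.setPriority(Thread.NORM_PRIORITY + 2);  // the slide's example call
        System.out.println(t.getPriority());      // 7
    }
}
```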


- since the JVM doesn’t ensure time-slicing, the yield() method may be used:
   this yields control to another thread of equal priority

while (true) {
    // perform CPU-intensive task
    ...
    Thread.yield();
}

[Figure: Relationship between Java and Win32 priorities]


G. LINUX SCHEDULING
- tries to have good response time and throughput
- increased support for SMP, as well as processor affinity and load balancing
- preemptive, priority-based scheduling algorithm
- good interactive performance: even during high load, interactive tasks should be scheduled immediately
- fairness: no process should be deprived of a time slice for a reasonable amount of time; no process should get an unfairly high amount of time slice

i. Priorities
- two priority ranges: real-time and nice values
- real-time priorities range from 0 to 99, nice values from 100 to 140
- higher-priority tasks have longer time quanta
- lower-priority tasks have shorter time quanta
- a task is eligible for execution on the CPU as long as it has time remaining in its time slice
- when its time slice is exhausted, the task is considered expired and is not eligible to run until all other tasks have exhausted their time slices


[Figure: Priorities and time-slice length]

ii. Scheduling Data Structures
- a structure called a runqueue is used to store all runnable processes
- each processor has its own runqueue data structure and schedules itself independently
- the runqueue has an active array and an expired array
  - the active array holds all tasks with time remaining in their time slices
  - the expired array holds all expired tasks
- processes in both arrays are indexed by priority
- the scheduler picks the highest-priority task from the active array
- if a process exhausts its time slice, it is placed in the expired array
- when the active array is empty, the two arrays are swapped
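A much-simplified sketch of the active/expired pair (the Runqueue class here is my illustration, not kernel code): tasks are indexed by priority, an expired task moves to the expired array, and when the active array empties, the two references are simply swapped — that pointer swap is what makes the O(1) scheduler's "recalculation" constant-time.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the Linux O(1) runqueue: 140 priority-indexed lists in an
// active array and an expired array, swapped when the active one empties.
public class Runqueue {
    static final int LEVELS = 140;
    Deque<String>[] active = newArrays(), expired = newArrays();

    @SuppressWarnings("unchecked")
    static Deque<String>[] newArrays() {
        Deque<String>[] a = new Deque[LEVELS];
        for (int i = 0; i < LEVELS; i++) a[i] = new ArrayDeque<>();
        return a;
    }
    void add(String task, int prio) { active[prio].add(task); }
    String pickHighest() {             // lowest index = highest priority
        for (Deque<String> q : active)
            if (!q.isEmpty()) return q.peek();
        return null;
    }
    void expire(int prio) {            // time slice used up → expired array
        expired[prio].add(active[prio].poll());
    }
    void swapIfActiveEmpty() {         // the O(1) "recalculation"
        if (pickHighest() == null) {
            Deque<String>[] t = active; active = expired; expired = t;
        }
    }

    public static void main(String[] args) {
        Runqueue rq = new Runqueue();
        rq.add("A", 10); rq.add("B", 5);
        System.out.println(rq.pickHighest()); // B (priority 5 beats 10)
        rq.expire(5); rq.expire(10);          // both slices exhausted
        rq.swapIfActiveEmpty();
        System.out.println(rq.pickHighest()); // B again, from the swapped array
    }
}
```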

[Figure: List of tasks indexed according to priority]

iii. Real-Time Scheduling
- real-time tasks have static priorities
- all other tasks have dynamic priorities based on their nice values ±5
- interactivity determines whether a task gets nice+5 or nice−5
- if a task has been sleeping for a long time waiting for I/O, then it gets nice−5 (since it is more interactive)
- when a task has exhausted its time slice, its dynamic priority is recalculated
- thus, when the active and expired arrays are exchanged, all tasks already have new priorities


iv. Recalculating Time Slices
- many OSs have an explicit method of recalculating time slices when they all reach zero (including older Linux versions):

for (each task in the system) {
    recalculate priority
    recalculate time slice
}

- this can take a long time
- the non-determinism of a randomly occurring recalculation of time slices is a problem for deterministic real-time programs
- with the new O(1) scheduler, recalculation is as simple as switching the active and expired arrays

v. Linux 2.4 Scheduler
- processes are assigned CPU time once each epoch
- a new epoch begins when no ready job has any CPU time left
- a process can carry over half its unused CPU time from the last epoch
- priority (known as goodness) is recalculated each time the scheduler runs
- goodness is largely determined by the unused CPU time
- preference is given to a process if it uses the same memory space as the previously running process
  - so the memory-management cache doesn’t need to be cleared
- on a multiprocessor machine, preference is given to a process if it last ran on the current processor
  - this improves cache hits


vi. Linux 2.6 Scheduler
- 140 priority levels: the first 100 are “real time”, the last 40 are for “user” tasks
- active vs. expired arrays
  - the active array holds processes to be scheduled
  - when a user process uses up its quantum, it moves to the expired array
  - its priority is then recalculated based on “interactivity”:
    • the ratio of how much it executed compared to how much it slept
    • adjusts the priority by ±5
  - the quantum is based on priority
     a better priority gets a longer quantum
    (Note: different sources quote different ranges…)
- the arrays are swapped when no active user process is left
  - like the 2.4 scheduler, this allows low-priority processes to get a chance
- separate structures for each CPU, but migration is possible

H. WINDOWS XP SCHEDULING
- schedules threads rather than processes
- uses quantum-based preemptive priority scheduling
- 32 priority levels: 31 is the highest, 0 is the lowest
  • 0 is reserved for the thread used for memory management
  • 1–15 is the variable class, for normal applications
  • 16–31 is the real-time class, for real-time processes
- there is a queue for each priority
  • the scheduler traverses the queues from highest to lowest until it finds a thread to run
  • if there is no thread to run, the idle thread is executed

- when a thread’s quantum expires, its priority is lowered
- when a thread becomes ready after waiting, it is given a priority boost
- this can also be viewed as a multilevel feedback queue algorithm
- a thread is preempted if
  • a higher-priority thread becomes ready
  • its quantum expires
  • it blocks


