05 Scheduling

The document discusses CPU scheduling in operating systems. It covers concepts like CPU bursts, preemptive vs. nonpreemptive scheduling, and scheduling algorithms like first-come first-served, shortest job first, priority scheduling, and round robin. It provides examples and comparisons of these scheduling algorithms.


Operating Systems I
Unit 5 – CPU Scheduling
Prof. Dr. Alejandro Zunino
ISISTAN - CONICET

Basic Concepts

■ Maximum CPU utilization is obtained with multiprogramming.
■ CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait.
■ CPU burst distribution

Alternating Sequence of CPU and I/O Bursts

Histogram of CPU-burst Times

CPU Scheduler

■ Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
■ CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state.
2. Switches from the running to the ready state.
3. Switches from the waiting to the ready state.
4. Terminates.
■ Scheduling under 1 and 4 is nonpreemptive.
■ All other scheduling is preemptive.

Decision Mode

■ Nonpreemptive
■ Once a process is in the running state, it continues until it terminates or blocks itself for I/O.
■ Preemptive
■ The currently running process may be interrupted and moved to the Ready state by the operating system.
■ Allows for better service, since no single process can monopolize the processor for very long.
Dispatcher

■ The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
■ switching context
■ switching to user mode
■ jumping to the proper location in the user program to restart that program
■ Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

Scheduling Criteria

■ CPU utilization – keep the CPU as busy as possible
■ Throughput – number of processes that complete their execution per time unit
■ Turnaround time – amount of time to execute a particular process
■ Waiting time – amount of time a process has been waiting in the ready queue
■ Response time – amount of time from when a request was submitted until the first response is produced, not the final output (for time-sharing environments)

Optimization Criteria

■ Max CPU utilization
■ Max throughput (processes completed per time unit)
■ Min turnaround time (interval between submission and completion)
■ Min waiting time
■ Min response time

First-Come, First-Served (FCFS) Scheduling

Process  Burst Time
P1       24
P2       3
P3       3

■ Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:

| P1: 0–24 | P2: 24–27 | P3: 27–30 |

■ Waiting time for P1 = 0; P2 = 24; P3 = 27
■ Average waiting time: (0 + 24 + 27)/3 = 17
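The FCFS waiting-time calculation above can be sketched in a few lines (a minimal sketch, not from the slides; it assumes all processes arrive at time 0, as in the example):

```python
def fcfs_waiting_times(bursts):
    """Return per-process waiting times under FCFS (all arrive at t=0)."""
    waits = []
    clock = 0
    for burst in bursts:
        waits.append(clock)   # each process waits for all earlier bursts
        clock += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3
print(waits)                             # [0, 24, 27]
print(sum(waits) / len(waits))           # 17.0
```

Running it with the arrival order P3, P2, P1 instead reproduces the much lower average shown on the next slide.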

FCFS Scheduling (cont.)

■ Suppose that the processes arrive in the order: P2, P3, P1.
■ The Gantt chart for the schedule is:

| P2: 0–3 | P3: 3–6 | P1: 6–30 |

■ Waiting time for P1 = 6; P2 = 0; P3 = 3
■ Average waiting time: (6 + 0 + 3)/3 = 3
■ Much better than the previous case.
■ Convoy effect – short processes wait behind a long process.
■ Favors CPU-bound processes: I/O-bound processes have to wait.

Shortest-Job-First (SJF) Scheduling

■ Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
■ Two schemes:
■ nonpreemptive – once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
■ preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
■ SJF is optimal – it gives the minimum average waiting time for a given set of processes.
Example of Non-Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

■ SJF (non-preemptive):

| P1: 0–7 | P3: 7–8 | P2: 8–12 | P4: 12–16 |

■ Average waiting time = (0 + 6 + 3 + 7)/4 = 4

Example of Preemptive SJF

Process  Arrival Time  Burst Time
P1       0.0           7
P2       2.0           4
P3       4.0           1
P4       5.0           4

■ SJF (preemptive):

| P1: 0–2 | P2: 2–4 | P3: 4–5 | P2: 5–7 | P4: 7–11 | P1: 11–16 |

■ Average waiting time = (9 + 1 + 0 + 2)/4 = 3
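The non-preemptive SJF example can be checked with a short simulation (a sketch with an assumed helper function, not from the slides): at each decision point, pick the shortest available burst; a process's waiting time is its start time minus its arrival time.

```python
def sjf_nonpreemptive(procs):
    """procs: list of (name, arrival, burst). Returns {name: waiting_time}."""
    remaining = list(procs)
    clock = 0
    waits = {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                        # CPU idle until next arrival
            clock = min(p[1] for p in remaining)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = clock - arrival        # time spent in the ready queue
        clock += burst                       # run the burst to completion
        remaining.remove((name, arrival, burst))
    return waits

waits = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(waits)                     # {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7}
print(sum(waits.values()) / 4)   # 4.0
```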

Determining Length of Next CPU Burst

■ Can only estimate the length.
■ Can be done by using the lengths of previous CPU bursts, using exponential averaging:
τn+1 = α tn + (1 - α) τn

Prediction of the Length of the Next CPU Burst
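The exponential-averaging update can be sketched as follows (the sample burst lengths and initial guess τ0 are made-up values for illustration):

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Fold observed burst lengths into a running prediction tau:
    tau = alpha * t + (1 - alpha) * tau for each observed burst t."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next_burst([6, 4, 6, 4], alpha=0.5, tau0=10.0))  # 5.0
print(predict_next_burst([6, 4, 6, 4], alpha=0.0, tau0=10.0))  # 10.0: history only
print(predict_next_burst([6, 4, 6, 4], alpha=1.0, tau0=10.0))  # 4.0: last burst only
```

The α = 0 and α = 1 calls reproduce the two extreme cases discussed on the next slide.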

Examples of Exponential Averaging

■ α = 0
■ τn+1 = τn
■ Recent history does not count.
■ α = 1
■ τn+1 = tn
■ Only the actual last CPU burst counts.
■ If we expand the formula, we get:
τn+1 = α tn + (1 - α) α tn-1 + … + (1 - α)^j α tn-j + … + (1 - α)^(n+1) τ0
■ Since both α and (1 - α) are less than or equal to 1, each successive term has less weight than its predecessor.

Priority Scheduling

■ A priority number (integer) is associated with each process.
■ The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority).
■ Preemptive
■ Nonpreemptive
■ SJF is priority scheduling where the priority is the predicted next CPU burst time.
■ Problem ≡ starvation – low-priority processes may never execute.
■ Solution ≡ aging – as time progresses, increase the priority of the process.
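Aging can be sketched as follows (a toy model with hypothetical priority values and unit-length rounds, not a real scheduler): each round the best process runs and has its priority reset, while every passed-over process has its priority number decremented, so a low-priority process cannot starve forever.

```python
def schedule_with_aging(priorities, rounds):
    """priorities: {name: priority number}, smaller = higher priority.
    Run `rounds` rounds; each round the best process runs once and is
    reset to its base priority, and waiting processes age. Returns the
    list of processes that ran."""
    prio = dict(priorities)
    ran = []
    for _ in range(rounds):
        chosen = min(prio, key=prio.get)
        ran.append(chosen)
        prio[chosen] = priorities[chosen]   # reset to base after running
        for name in prio:
            if name != chosen:
                prio[name] -= 1             # aging: waiting raises priority
    return ran

history = schedule_with_aging({"hi": 1, "lo": 4}, 6)
print(history)   # ['hi', 'hi', 'hi', 'hi', 'lo', 'hi']
```

Without the aging decrement, `"lo"` would never appear in the history.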
Round Robin (RR)

■ Each process gets a small unit of CPU time (time quantum), usually 10–100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
■ If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units at once. No process waits more than (n-1)q time units.
■ Performance
■ q large ⇒ FIFO
■ q small ⇒ q must be large with respect to context switch time, otherwise overhead is too high.
■ q should be large compared to context switch time: q is usually 10 to 100 milliseconds, while a context switch takes < 10 microseconds.

Example of RR with Time Quantum = 20

Process  Burst Time
P1       53
P2       17
P3       68
P4       24

■ The Gantt chart is:

| P1: 0–20 | P2: 20–37 | P3: 37–57 | P4: 57–77 | P1: 77–97 | P3: 97–117 | P4: 117–121 | P1: 121–134 | P3: 134–154 | P3: 154–162 |

■ Typically, higher average turnaround than SJF, but better response.
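Round robin maps naturally onto a FIFO ready queue; the sketch below (not from the slides) reproduces the quantum-20 Gantt chart of the example that follows:

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, burst), all ready at t=0.
    Returns the schedule as (name, start, end) slices."""
    ready = deque(procs)
    schedule = []
    clock = 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)         # run one quantum at most
        schedule.append((name, clock, clock + run))
        clock += run
        if remaining > run:                   # unfinished: back of the queue
            ready.append((name, remaining - run))
    return schedule

sched = round_robin([("P1", 53), ("P2", 17), ("P3", 68), ("P4", 24)], 20)
print([s[0] for s in sched])
# ['P1', 'P2', 'P3', 'P4', 'P1', 'P3', 'P4', 'P1', 'P3', 'P3']
print(sched[-1][2])   # 162, the total completion time
```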

Time Quantum and Context Switch Time

Multilevel Queue

■ The ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
■ Each queue has its own scheduling algorithm:
foreground – RR
background – FCFS
■ Scheduling must also be done between the queues:
■ Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
■ Time slice – each queue gets a certain amount of CPU time which it can schedule amongst its processes; e.g., 80% to foreground in RR and 20% to background in FCFS.

Multilevel Queue Scheduling

Multilevel Feedback Queue

■ A process can move between the various queues; aging can be implemented this way.
■ A multilevel-feedback-queue scheduler is defined by the following parameters:
■ number of queues
■ scheduling algorithm for each queue
■ method used to determine when to upgrade a process
■ method used to determine when to demote a process
■ method used to determine which queue a process will enter when that process needs service
■ Aging can be implemented using a multilevel feedback queue.
Example of Multilevel Feedback Queue

■ Three queues:
■ Q0 – RR with time quantum 8 milliseconds
■ Q1 – RR with time quantum 16 milliseconds
■ Q2 – FCFS
■ Scheduling:
■ A new process enters queue Q0, which is served in RR.
■ When it gains the CPU, the process receives 8 milliseconds.
■ If it does not finish in 8 milliseconds, the process is moved to queue Q1.
■ At Q1 the job is again served in RR and receives 16 additional milliseconds.
■ If it still does not complete, it is preempted and moved to queue Q2.

Multiple-Processor Scheduling

■ CPU scheduling is more complex when multiple CPUs are available.
■ The multiprocessor may be any one of the following architectures:
■ Multicore CPUs
■ Multithreaded cores
■ Heterogeneous multiprocessing
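The three-queue multilevel feedback example above can be sketched for a single CPU-bound burst (a simplification, not from the slides: it ignores competing processes and just traces how one burst is pushed down the queues):

```python
def mlfq_trace(burst, quanta=(8, 16)):
    """Return the (queue, slice_length) pieces a burst is served in:
    one quantum in Q0, one in Q1, and any remainder in FCFS Q2."""
    trace = []
    for level, q in enumerate(quanta):
        if burst <= 0:
            break
        run = min(q, burst)          # use at most this level's quantum
        trace.append((f"Q{level}", run))
        burst -= run
    if burst > 0:
        trace.append(("Q2", burst))  # whatever is left runs FCFS in Q2
    return trace

print(mlfq_trace(30))   # [('Q0', 8), ('Q1', 16), ('Q2', 6)]
print(mlfq_trace(5))    # [('Q0', 5)]  -- short jobs finish in Q0
```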

Multiple-Processor Scheduling

■ Symmetric multiprocessing (SMP) is where each processor is self-scheduling.
■ All threads may be in a common ready queue (a), or
■ each processor may have its own private queue of threads (b).

Multicore Processors

■ Multiple processor cores on the same physical chip
■ Faster and consumes less power
■ Multiple threads per core is also growing
■ Takes advantage of memory stalls to make progress on another thread while the memory retrieval happens

Multithreaded Multicore System

■ Each core has more than one hardware thread.
■ If one thread has a memory stall, switch to another thread!
■ Chip multithreading (CMT) assigns each core multiple hardware threads (Intel refers to this as hyperthreading).
■ On a quad-core system with 2 hardware threads per core, the operating system sees 8 logical processors.
Multithreaded Multicore System

■ Two levels of scheduling:
■ The operating system deciding which software thread to run on a logical CPU.
■ How each core decides which hardware thread to run on the physical core.

Load Balancing

■ On SMP systems, all CPUs need to be kept loaded for efficiency.
■ Load balancing attempts to keep the workload evenly distributed.
■ Push migration – a periodic task checks the load on each processor and, if it finds an imbalance, pushes tasks from the overloaded CPU to other CPUs.
■ Pull migration – an idle processor pulls a waiting task from a busy processor.

Processor Affinity

■ When a thread has been running on one processor, the cache of that processor stores the memory accesses made by that thread.
■ The thread is then said to have affinity for that processor (i.e., "processor affinity").
■ Load balancing may affect processor affinity: a thread may be moved from one processor to another to balance loads, yet that thread loses the contents of what it had in the cache of the processor it was moved off of.
■ Soft affinity – the operating system attempts to keep a thread running on the same processor, but makes no guarantees.
■ Hard affinity – allows a process to specify a set of processors it may run on.

Real-Time Scheduling

■ Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
■ Soft real-time computing – requires that critical processes receive priority over less fortunate ones.

Windows Scheduling

■ Windows uses priority-based preemptive scheduling.
■ The highest-priority thread runs next.
■ A thread runs until it (1) blocks, (2) uses its time slice, or (3) is preempted by a higher-priority thread.
■ Real-time threads can preempt non-real-time threads.
■ 32-level priority scheme.
■ The variable class is 1–15; the real-time class is 16–31.
■ Priority 0 is the memory-management thread.
■ There is a queue for each priority.
■ If there is no run-able thread, the idle thread runs.

Linux Scheduling v2.5: O(1)

■ Preemptive, priority based.
■ Two priority ranges: time-sharing and real-time.
■ Real-time range from 0 to 99 and nice value from 100 to 140.
■ Mapped into a global priority, with numerically lower values indicating higher priority.
■ Higher priority gets a larger q.
■ A task is run-able as long as time is left in its time slice (active).
■ If no time is left (expired), the task is not run-able until all other tasks use their slices.
■ All run-able tasks are tracked in a per-CPU runqueue data structure.
■ Two priority arrays (active, expired).
■ Tasks indexed by priority.
■ When no more active tasks remain, the arrays are exchanged.
■ Worked well, but poor response times for interactive processes.
Linux Scheduling v2.6.24: CFS

■ Scheduling classes:
■ Each has a specific priority.
■ The scheduler picks the highest-priority task in the highest scheduling class.
■ Rather than a quantum based on fixed time allotments, the quantum is based on a proportion of CPU time.
■ 2 scheduling classes included, others can be added:
■ default
■ real-time
■ The quantum is calculated based on the nice value, from -20 to +19.
■ A lower value is a higher priority.
■ CFS calculates a target latency – an interval of time during which every task should run at least once.
■ The target latency can increase if, say, the number of active tasks increases.
■ The CFS scheduler maintains a per-task virtual run time in the variable vruntime.
■ vruntime is adjusted by a decay factor based on the priority of the task – lower priority means a higher decay rate.
■ Normal default priority yields: virtual run time = actual run time.
■ To decide the next task to run, the scheduler picks the task with the lowest virtual run time.
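The pick-next rule can be sketched as a toy model (the decay values are hypothetical, not the kernel's nice-to-weight table): each task's virtual run time grows faster for lower-priority tasks, and the scheduler always runs the task with the lowest vruntime.

```python
def cfs_pick_and_run(tasks, slice_ms):
    """tasks: {name: {"vruntime": float, "decay": float}}.
    Run the lowest-vruntime task for slice_ms; return its name."""
    name = min(tasks, key=lambda n: tasks[n]["vruntime"])
    # virtual time advances faster when decay > 1 (lower priority)
    tasks[name]["vruntime"] += slice_ms * tasks[name]["decay"]
    return name

tasks = {
    "interactive": {"vruntime": 0.0, "decay": 0.5},   # high priority
    "batch":       {"vruntime": 0.0, "decay": 2.0},   # low priority
}
history = [cfs_pick_and_run(tasks, 10) for _ in range(6)]
print(history)   # the high-priority task gets most of the CPU
```

Because the low-priority task's virtual clock runs four times faster here, it falls behind quickly and is scheduled far less often, which is exactly the proportional-share behavior the slide describes.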
