
Topic: CPU Scheduling

Source: Feleke Merin (Dr. – Engr.)


Senior Asst. Professor
Objectives
After completing this module, you will be able to understand the following:
• Scheduling objectives
• Levels of scheduling
• Scheduling criteria
• Scheduling algorithms:
  - FCFS
  - Shortest Job First
  - Priority
  - Round Robin

CPU Scheduling Objectives
• Enforce fairness in allocating resources to processes.
• Enforce priorities.
• Make the best use of available system resources.
• Give preference to processes holding key resources.
• Give preference to processes exhibiting good behavior.
• Degrade gracefully under heavy loads.

Levels of Scheduling
• High-level scheduling, or job scheduling
  - Selects the jobs allowed to compete for the CPU and other system resources.
• Intermediate-level scheduling, or medium-term scheduling
  - Selects which jobs to temporarily suspend or resume in order to smooth fluctuations in system load.
• Low-level (CPU) scheduling, or dispatching
  - Selects the ready process that will be assigned the CPU.
  - The ready queue contains the PCBs of these processes.

CPU Scheduler
• Selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.
• Non-preemptive scheduling
  - Once the CPU has been allocated to a process, the process keeps the CPU until it exits or switches to the waiting state.
• Preemptive scheduling
  - A running process can be interrupted and must release the CPU.
  - Requires coordinating access to shared data.

CPU Scheduling Decisions
• CPU scheduling decisions may take place when a process:
  1. switches from the running state to the waiting state,
  2. switches from the running state to the ready state,
  3. switches from the waiting state to the ready state, or
  4. terminates.
• Scheduling under circumstances 1 and 4 only is non-preemptive.
• All other scheduling is preemptive.

Dispatcher
• The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This involves:
  - switching context,
  - switching to user mode, and
  - jumping to the proper location in the user program to restart that program.
• Dispatch latency
  - The time it takes for the dispatcher to stop one process and start another running.
  - The dispatcher must be fast.

CPU Scheduling Criteria
• CPU utilization
  - Keep the CPU and other resources as busy as possible.
• Throughput
  - Number of processes that complete their execution per time unit.
• Turnaround time
  - Amount of time to execute a particular process, measured from its entry (arrival) time.
• Waiting time
  - Amount of time a process has been waiting in the ready queue.
• Response time (in a time-sharing environment)
  - Amount of time from when a request was submitted until the first response is produced, NOT the final output.

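For the single-CPU-burst processes used in the examples later in this lecture, these criteria are related in a simple way: turnaround time = completion time - arrival time, and waiting time = turnaround time - burst time. A minimal Python sketch of this bookkeeping (the numbers are illustrative, matching P1 from the FCFS example below):

# Deriving turnaround and waiting time for a single process.
arrival, burst, completion = 0, 24, 24   # e.g. P1 in the FCFS example: arrives at 0, runs 24 ms
turnaround = completion - arrival        # total time the process spends in the system
waiting = turnaround - burst             # time spent sitting in the ready queue
print(turnaround, waiting)               # prints: 24 0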
Optimization Criteria
• Maximize CPU utilization
• Maximize throughput
• Minimize turnaround time (or completion time)
• Minimize waiting time
• Minimize response time

Observations: Scheduling Criteria
• Throughput vs. response time
  - Throughput is related to response time, but the two are not identical: minimizing response time leads to more context switching than if you only maximized throughput.
  - Two parts to maximizing throughput:
    - minimize overhead (for example, context switching)
    - use resources (CPU, disk, memory, etc.) efficiently
• Fairness vs. response time
  - Share the CPU among users in some equitable way.
  - Fairness is not the same as minimizing average response time: a better average response time can often be obtained by making the system less fair.

CPU Scheduling Policies
• First-Come, First-Served (FCFS)
• Shortest Job First (SJF)
  - non-preemptive
  - preemptive
• Priority
• Round Robin

First-Come, First-Served (FCFS) Scheduling
• Policy: the process that requests the CPU FIRST is allocated the CPU FIRST.
• FCFS is a non-preemptive algorithm.
• Implementation: a FIFO queue
  - An incoming process is added to the tail of the queue.
  - The process selected for execution is taken from the head of the queue.
• Performance metric: average waiting time in the ready queue.
• Gantt charts are used to visualize schedules.

First-Come, First-Served (FCFS) Scheduling
• Example processes:

  Process   Burst Time
  P1        24
  P2        3
  P3        3

• Suppose the processes arrive in the order P1, P2, P3.
• Gantt chart for the schedule: P1 [0-24] | P2 [24-27] | P3 [27-30]
• Waiting times: P1 = 0; P2 = 24; P3 = 27
• Average waiting time = (0 + 24 + 27)/3 = 17
• Average completion time = (24 + 27 + 30)/3 = 27

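As a minimal, illustrative sketch (not part of the original slides), the FCFS averages above can be reproduced in a few lines of Python, assuming all processes arrive at time 0:

# FCFS: processes are served strictly in list order.
def fcfs(bursts):
    waiting, completion, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)        # time spent in the ready queue before starting
        clock += burst
        completion.append(clock)     # time at which the process finishes
    return waiting, completion

w, c = fcfs([24, 3, 3])              # arrival order P1, P2, P3
print(sum(w) / len(w))               # 17.0, the average waiting time from the slide
print(sum(c) / len(c))               # 27.0, the average completion time

Reordering the list to [3, 3, 24] (arrival order P2, P3, P1) reproduces the improved averages on the next slide.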
FCFS Scheduling (cont.)
• Same processes, but suppose they now arrive in the order P2, P3, P1.
• Gantt chart for the schedule: P2 [0-3] | P3 [3-6] | P1 [6-30]
• Waiting times: P1 = 6; P2 = 0; P3 = 3
• Average waiting time = (6 + 0 + 3)/3 = 3, much better.
• Average completion time = (3 + 6 + 30)/3 = 13, also better.
• Convoy effect: short processes stuck behind a long process, e.g. one CPU-bound process followed by many I/O-bound processes.

Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst.
• Use these lengths to schedule the process with the shortest next burst.
• Two schemes:
  - Scheme 1: Non-preemptive
    - Once the CPU is given to a process, it cannot be preempted until it completes its CPU burst.
  - Scheme 2: Preemptive
    - If a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is also called Shortest-Remaining-Time-First (SRTF).

SJF and SRTF (Example)

  Process   Arrival Time   Burst Time
  P1        0              7
  P2        2              4
  P3        4              1
  P4        5              4

• Non-preemptive SJF schedule (Gantt chart): P1 [0-7] | P3 [7-8] | P2 [8-12] | P4 [12-16]
  - Average waiting time = (0 + 6 + 3 + 7)/4 = 4
• Preemptive SJF (SRTF) schedule (Gantt chart): P1 [0-2] | P2 [2-4] | P3 [4-5] | P2 [5-7] | P4 [7-11] | P1 [11-16]
  - Average waiting time = (9 + 1 + 0 + 2)/4 = 3

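A minimal non-preemptive SJF sketch (illustrative only, not the lecturer's code) that reproduces the first schedule above from (name, arrival, burst) triples:

# Non-preemptive SJF with arrival times.
def sjf_nonpreemptive(procs):
    remaining = list(procs)                         # (name, arrival, burst) triples
    clock, waiting = 0, {}
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                               # CPU idle until the next arrival
            clock = min(p[1] for p in remaining)
            continue
        job = min(ready, key=lambda p: p[2])        # pick the shortest next CPU burst
        name, arrival, burst = job
        waiting[name] = clock - arrival             # time spent in the ready queue
        clock += burst                              # runs to completion: no preemption
        remaining.remove(job)
    return waiting

w = sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
print(sum(w.values()) / len(w))                     # 4.0, as on the slide

The preemptive (SRTF) variant would instead re-evaluate the remaining burst times at every arrival, which is what yields the average of 3 above.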
SJF or SRTF Discussion
• SJF and SRTF are the best you can do at minimizing average response time.
  - Provably optimal (SJF among non-preemptive policies, SRTF among preemptive policies).
  - Since SRTF is always at least as good as SJF, focus on SRTF.
• Comparison of SRTF with FCFS and RR
  - What if all jobs have the same length? SRTF becomes the same as FCFS (i.e. FCFS is the best you can do if all jobs have the same length).
  - What if jobs have varying lengths? Under SRTF (and RR), short jobs are not stuck behind long ones.
• Starvation
  - SRTF can lead to starvation if there are many small jobs: large jobs may never get to run.
SRTF Further Discussion
• SRTF somehow needs to predict the future. How can we do this?
  - Some systems ask the user: when you submit a job, you have to say how long it will take.
    - To stop cheating, the system kills the job if it takes too long.
    - But even non-malicious users have trouble predicting the runtime of their jobs.
  - Bottom line: we can't really know how long a job will take.
• However, SRTF can be used as a yardstick for measuring other policies: it is optimal, so nothing can do better.
• SRTF pros and cons
  - Optimal average response time (+)
  - Hard to predict the future (-)
  - Unfair (-)

Determining the Length of the Next CPU Burst
• One can only estimate the length of the next burst.
• Use the lengths of previous CPU bursts and perform exponential averaging:
  - t_n = actual length of the n-th CPU burst
  - τ_{n+1} = predicted value for the next CPU burst
  - α = weighting parameter, 0 ≤ α ≤ 1
  - Define: τ_{n+1} = α·t_n + (1 − α)·τ_n

Exponential Averaging (cont.)
• α = 0
  - τ_{n+1} = τ_n: recent history does not count.
• α = 1
  - τ_{n+1} = t_n: only the actual last CPU burst counts.
• Expanding the formula:
  τ_{n+1} = α·t_n + (1 − α)·α·t_{n−1} + … + (1 − α)^j·α·t_{n−j} + … + (1 − α)^{n+1}·τ_0
• Each successive term has less weight than its predecessor.

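As a small illustrative sketch (not from the slides; the burst history, τ_0 = 10 and α = 0.5 are assumed values), the prediction can be computed iteratively:

# Exponentially averaged prediction of the next CPU burst.
def predict_next_burst(actual_bursts, tau0=10.0, alpha=0.5):
    tau = tau0                         # tau_0: initial guess before any burst is observed
    for t in actual_bursts:            # t_n: measured burst lengths, oldest first
        tau = alpha * t + (1 - alpha) * tau
    return tau                         # tau_{n+1}: prediction for the next burst

print(predict_next_burst([6, 4, 6, 4, 13, 13, 13]))   # recent bursts carry the most weight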
Priority Scheduling
• A priority value (an integer) is associated with each process.
• The CPU is allocated to the process with the highest priority.
• Priority scheduling can be:
  - Preemptive
  - Non-preemptive

Priority Scheduling
• Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with the length of the CPU burst given in milliseconds; the priorities are such that P2 has the highest priority, followed by P5, P1, P3, and P4:

  Process   Burst Time (ms)
  P1        10
  P2        1
  P3        2
  P4        1
  P5        5

• Using priority scheduling, we would schedule these processes according to the following Gantt chart: P2 [0-1] | P5 [1-6] | P1 [6-16] | P3 [16-18] | P4 [18-19]
• Waiting time Tw for each process: P1 = 6, P2 = 0, P3 = 16, P4 = 18, P5 = 1
• Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 milliseconds
• Average completion time = (16 + 1 + 18 + 19 + 6)/5 = 12 milliseconds

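A minimal non-preemptive priority sketch (illustrative; the numeric priority values 1-5 below are assumptions consistent with the ordering above, with a smaller number meaning a higher priority):

# Non-preemptive priority scheduling, all processes arriving at time 0.
def priority_schedule(procs):
    clock, waiting = 0, {}
    for name, burst, _prio in sorted(procs, key=lambda p: p[2]):   # highest priority first
        waiting[name] = clock            # everything arrived at time 0, so waiting = start time
        clock += burst
    return waiting

w = priority_schedule([("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 4),
                       ("P4", 1, 5), ("P5", 5, 2)])
print(sum(w.values()) / len(w))          # 8.2 ms, matching the slide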
Round Robin (RR)
• Each process gets a small unit of CPU time, a time quantum, usually 10-100 milliseconds.
  - After this time has elapsed, the process is preempted and added to the end of the ready queue.
• With n processes and time quantum q:
  - Each process gets 1/n of the CPU time, in chunks of at most q time units at a time.
  - No process waits more than (n - 1)q time units.
• Performance
  - Time quantum q too large: response time is poor; as q grows very large, RR reduces to FIFO behavior.
  - Time quantum q too small: the overhead of context switching becomes too expensive, and throughput is poor.

Example of RR with Time Quantum = 20 ms
• Processes:

  Process   Burst Time (ms)
  P1        53
  P2        8
  P3        68
  P4        24

• The Gantt chart is: P1 [0-20] | P2 [20-28] | P3 [28-48] | P4 [48-68] | P1 [68-88] | P3 [88-108] | P4 [108-112] | P1 [112-125] | P3 [125-145] | P3 [145-153]
• Waiting times:
  - P1 = (68 - 20) + (112 - 88) = 72 ms
  - P2 = (20 - 0) = 20 ms
  - P3 = (28 - 0) + (88 - 48) + (125 - 108) = 85 ms
  - P4 = (48 - 0) + (108 - 68) = 88 ms
• Average waiting time = (72 + 20 + 85 + 88)/4 = 66.25 ms
• Average completion time = (125 + 28 + 153 + 112)/4 = 104.5 ms
• Thus, Round Robin pros and cons:
  - Better for short jobs, fair (+)
  - Context-switching time adds up for long jobs (-)

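A minimal round-robin sketch (illustrative only), assuming all four processes arrive at time 0 and ignoring context-switch overhead:

# Round-robin scheduling with a fixed time quantum.
from collections import deque

def round_robin(bursts, quantum):
    remaining = dict(bursts)                       # CPU time still needed per process
    queue = deque(bursts)                          # ready queue, FIFO order
    clock = 0
    last_ran = {name: 0 for name in bursts}        # when each process last left the CPU
    waiting = {name: 0 for name in bursts}
    completion = {}
    while queue:
        name = queue.popleft()
        waiting[name] += clock - last_ran[name]    # time spent waiting since it last ran
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        last_ran[name] = clock
        if remaining[name] > 0:
            queue.append(name)                     # preempted: back to the tail of the queue
        else:
            completion[name] = clock
    return waiting, completion

w, c = round_robin({"P1": 53, "P2": 8, "P3": 68, "P4": 24}, quantum=20)
print(sum(w.values()) / 4, sum(c.values()) / 4)    # 66.25 and 104.5, as on the slide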
Round Robin: Effect of the Time Quantum
• The performance of the RR algorithm depends heavily on the size of the time quantum.
• If the time quantum is extremely large, the RR policy is the same as the FCFS policy.
• If the time quantum is extremely small (say, 1 millisecond), the RR approach can result in a large number of context switches.
• For example, consider a single process with a CPU burst of 10 time units:
  - If the quantum is 12 time units, the process finishes in less than 1 time quantum, with no overhead.
  - If the quantum is 6 time units, however, the process requires 2 quanta, resulting in a context switch.
  - If the time quantum is 1 time unit, then nine context switches will occur, slowing the execution of the process accordingly.

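A one-function sketch (illustrative) of how the number of context switches in that single-process example depends on the quantum:

# Preemptions suffered by one CPU burst under RR, ignoring other processes.
import math

def context_switches(burst, quantum):
    return math.ceil(burst / quantum) - 1    # a switch at the end of every quantum except the last

for q in (12, 6, 1):
    print(q, context_switches(10, q))        # prints 0, 1 and 9 for a 10-unit burst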
CPU Scheduling Exercises
• Consider the following set of processes, with the length of the CPU burst time given in milliseconds:

  Process   Burst Time (ms)   Priority
  P1        10                2
  P2        5                 1
  P3        14                5
  P4        20                4
  P5        5                 3

• Required:
  a) Draw four Gantt charts illustrating the execution of these processes using:
     i.   FCFS,
     ii.  SJF (non-preemptive case),
     iii. Priority (a smaller priority number implies a higher priority), and
     iv.  Round Robin (quantum = 5 ms) scheduling.
  b) Find the turnaround time of each process for each of the scheduling algorithms in part a.
  c) Find the waiting time of each process for each of the scheduling algorithms in part a.

============== The End! ==============
