Lecture 5 Scheduling Algorithms

Scheduling refers to how processes are assigned priorities and allocated CPU time by the scheduler. The scheduler uses scheduling algorithms like first-come, first-served (FCFS), shortest job next (SJN), or round robin (RR) to determine which ready process runs next. These algorithms aim to optimize performance metrics including throughput, waiting time, response time, and fairness. While scheduling is a difficult problem, preemptive algorithms allow the operating system to periodically reconsider decisions and learn about process behaviors.


Scheduling and

Scheduling Algorithms
IT6 – Operating Systems
Scheduling
 refers to the way processes are assigned priorities in a priority queue.
 This assignment is carried out by software known as a scheduler.
 The scheduler decides which process to run first by using a scheduling algorithm.
 Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue
 Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU
Scheduling Algorithms
 the method by which processes are given access to
system resources, usually processor time.
 The need for a scheduling algorithm arises from the requirement of most modern systems to execute more than one process at a time
 Scheduling algorithms are generally only used in a time-slice multiplexing kernel: to load-balance a system effectively, the kernel must be able to forcibly suspend the execution of one thread in order to begin executing the next.
Performance Metrics
 CPU Efficiency
 Throughput
 Turnaround time
 Waiting Time
 Response Time
 Fairness

CPU Efficiency
 Sometimes referred to also as CPU utilization
 the average percentage of the hardware (or the
CPU) that is actually used.
 If the utilization is high, you are getting more
value for the money invested in buying the
computer.
 keeping CPU busy 100% of the time
 Minimize idle times for the CPU
Throughput
 the number of jobs completed in a unit of
time.
 More completed jobs generally means more satisfied users.
 Maximize the number of jobs processed per hour.

Turnaround Time
 the average time from submitting a job until it
terminates.
 This is the sum of the time spent waiting in
the queue, and the time actually running.

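For a job that runs to completion without preemption, the relationship between these times can be checked with simple arithmetic (a minimal sketch; the timestamps are made up for illustration):

```python
# Hypothetical job: submitted at t=10, first scheduled at t=25, finishes at t=40.
arrival, start, finish = 10, 25, 40

waiting = start - arrival      # 15: time spent in the ready queue
running = finish - start       # 15: time actually on the CPU
turnaround = finish - arrival  # 30: from submission until termination

# Turnaround is the sum of the waiting time and the running time.
assert turnaround == waiting + running
print(waiting, running, turnaround)  # 15 15 30
```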
Waiting Time
 the time a job waits (in the ready state) until it
runs
 reducing the time a job waits until it runs also
reduces its response time
 As the system has direct control over the
waiting time, but little control over the actual
run time, it should focus on the wait time

Response Time
 Minimize response time by quickly dealing
with interactive requests, and letting batch
requests wait.
 This normalizes all jobs to the same scale:
long jobs can wait more, and don’t count
more than short ones.

Preemptive Scheduling
 A process scheduling strategy in which the processing of a job is interrupted and the CPU is transferred to another job; the switch from one job to the next is known as a context switch.
 Temporarily suspends logically runnable processes
Non-Preemptive Scheduling
 A job scheduling strategy in which a job captures the processor, begins execution, and runs uninterrupted until it issues an I/O request or finishes.
 Run a process to completion
Non-preemptive Scheduling Algo.
 First-come, first-served (FCFS)
 Shortest Job Next (SJN)
 Priority Scheduling

First-Come First-Served (FCFS)
 a service policy whereby the requests of
customers or clients are attended to in the
order that they arrived, without other biases or
preferences.
 behavior: what comes in first is handled first,
what comes in next waits until the first is
finished, etc.

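The behavior above can be sketched in a few lines of Python (a minimal sketch: `fcfs_waiting_times` is a hypothetical helper, and all jobs are assumed to arrive at time 0 with made-up burst times):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each job under FCFS, given burst times in
    arrival order (all jobs assumed to arrive at time 0)."""
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)  # each job waits for every job ahead of it
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])
print(waits)                    # [0, 24, 27]
print(sum(waits) / len(waits))  # average waiting time: 17.0
```

A long first job makes every later job wait (the "convoy effect"), which is why arrival order matters so much under FCFS.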
Shortest Job Next (SJN)
 Associate the length of the next CPU burst with each
process
 The job which has the shortest burst time gets the
CPU
 Assign the process with shortest CPU burst
requirement to the CPU
 Especially suitable for batch processing (long-term scheduling)
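Under the same simplifying assumption (all jobs available at time 0), SJN amounts to serving jobs in order of burst time (hypothetical helper and example values):

```python
def sjn_waiting_times(bursts):
    """Waiting time of each job when the shortest burst runs first
    (all jobs assumed available at time 0; ties served in input order)."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits = [0] * len(bursts)
    elapsed = 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

print(sjn_waiting_times([24, 3, 3]))  # [6, 0, 3] -> average 3.0
```

On the same bursts, FCFS in the given order averages 17.0, which illustrates why running the shortest job first minimizes average waiting time.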
Priority Scheduling
 Most common method in batch systems, but may
give slow turnaround to some users.
 Gives preferential treatment to important jobs.
 Jobs with the same priority are treated FCFS.
 Priority could be based on:
 Least amount of memory required
 Least number of peripheral devices needed
 Shortest estimated CPU time
 Time already spent waiting
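The dispatch order implied by these rules can be sketched by sorting on a (priority, arrival-order) key; the FCFS tie-break falls out of the secondary key (hypothetical helper and job data; lower number means higher priority here):

```python
def priority_order(jobs):
    """jobs: list of (name, priority, arrival_order) tuples.
    Lower priority number = more important; equal priorities are
    served FCFS via the arrival_order tie-breaker."""
    return [name for name, _, _ in sorted(jobs, key=lambda j: (j[1], j[2]))]

jobs = [("A", 2, 0), ("B", 1, 1), ("C", 2, 2), ("D", 1, 3)]
print(priority_order(jobs))  # ['B', 'D', 'A', 'C']
```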
Priority Function
 Memory requirements
 Important due to swapping overhead
 Smaller memory size => Less swapping overhead
 Smaller memory size => More processes can be
serviced
 Timeliness
 Dependent upon the urgency of a task
 Deadlines

Priority Function
 Total service time
 Total CPU time consumed by the process during
its lifetime
 Equals attained service time when the process
terminates
 Higher priority for shorter processes
 Preferential treatment of shorter processes reduces
the average time a process spends in the system

Priority Function
 External priorities
 Differentiate between classes of user and system
processes
 Interactive processes => Higher priority
 Batch processes => Lower priority
 Accounting for the resource utilization
 Attained service time
 Total time when the process is in the running state

Shortest Remaining Time (SRT)
 Preemptive version of shortest job next
scheduling
 Preemption occurs only when a new process arrives
 The processor is allocated to the job closest to
completion.
 Can’t be implemented on interactive systems because it requires advance knowledge of the required CPU time.
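A unit-time simulation makes the preempt-on-arrival behavior concrete (a minimal sketch; job names, arrival times, and bursts are made up):

```python
def srt_schedule(jobs):
    """jobs: {name: (arrival, burst)}. Simulates shortest-remaining-time
    in unit time steps; returns {name: completion_time}."""
    remaining = {name: burst for name, (_, burst) in jobs.items()}
    completed = {}
    t = 0
    while remaining:
        ready = [n for n in remaining if jobs[n][0] <= t]
        if not ready:
            t += 1  # CPU idle until the next arrival
            continue
        run = min(ready, key=lambda n: remaining[n])  # closest to completion
        remaining[run] -= 1
        t += 1
        if remaining[run] == 0:
            completed[run] = t
            del remaining[run]
    return completed

done = srt_schedule({"A": (0, 7), "B": (2, 4), "C": (4, 1)})
print(done)  # C finishes at t=5, B at t=7, A at t=12
```

Here B preempts A on arrival at t=2 (4 remaining vs. A's 5), and C preempts B at t=4; the long job A is pushed to the back, just as the slide describes.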
Round Robin (RR)
 Preemptive in nature
 Preemption based on time slices or time quanta;
Time quantum between 10 and 100 milliseconds
 All user processes treated to be at the same priority
 No process is allocated CPU for more than 1
quantum in a row
 Timer interrupt results in context switch and the
process is put at the rear of the ready queue

Round Robin (RR)
 Ready queue treated as a circular queue
 New processes added to the rear of the ready queue
 Preempted processes added to the rear of the ready
queue
 Scheduler picks up a process from the head of the
queue and dispatches it with a timer interrupt set after
the time quantum
 CPU burst < 1 quantum => the process releases the CPU voluntarily
Round Robin (RR)
 Large time quantum => FIFO scheduling
 Small time quantum => Large context
switching overhead
 Rule of thumb: 80% of the CPU bursts
should be shorter than the time quantum

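The circular-queue mechanics described in the preceding slides can be sketched with a deque (hypothetical helper; the burst times and quantum are illustrative):

```python
from collections import deque

def round_robin_waits(bursts, quantum):
    """Waiting times under round robin for jobs all arriving at time 0."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))  # ready queue, FIFO
    t = 0
    while queue:
        i = queue.popleft()            # dispatch from the head
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)            # preempted: back to the rear
        else:
            finish[i] = t              # burst < quantum: released voluntarily
    # waiting time = turnaround - burst (every job arrived at time 0)
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(round_robin_waits([24, 3, 3], quantum=4))  # [6, 4, 7]
```

With a very large quantum the same jobs degenerate to FCFS order; with a tiny quantum the context-switch overhead (not modeled here) would dominate, matching the trade-off above.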
Comparison of scheduling methods
 Average waiting time
 Average turnaround time

Summary
 Scheduling - which is what “resource
management” is in the context of processes -
is a hard problem.
 It requires detailed knowledge, e.g. how long a job will run, which is typically not available; and the general problem turns out to be NP-complete.
Summary
 However, this doesn’t mean that operating
systems can’t do anything. The main idea is
to use preemption.
 This allows the operating system to learn
about the behavior of different jobs, and to
reconsider its decisions periodically.

Summary
 Periodic preemptions are not guaranteed to improve average response times, but in practice they do.
 The reason is that the distribution of process
runtimes is heavy-tailed.
 This is one of the best examples of a
widespread operating system policy that is
based on an empirical observation about
workloads.
END OF LECTURE
