Lecture 3 - Scheduling
(Chapters 7 & 8)
Prepared by:
Azizur Rahman Anik
Lecturer, Department of CSE, UIU
Chapter 7- Scheduling: Introduction
THE CRUX: HOW TO DEVELOP SCHEDULING POLICY
✔ Tturnaround = Tcompletion − Tarrival
Scheduling: First In, First Out (FIFO)
✔ Now let’s relax assumption 1: we no longer assume that each job runs for
the same amount of time.
✔ How does FIFO perform now? What kind of workload could you
construct to make FIFO perform poorly?
✔ How can we develop a better algorithm to deal with our new reality of
jobs that run for different amounts of time?
✔ Shortest Job First (SJF) runs the shortest job first, then the next shortest, and so on.
✔ Tturnaround = Tcompletion − Tarrival
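A workload that makes FIFO perform poorly is one long job ahead of short ones (the convoy effect). The sketch below is illustrative, not the book's code; it assumes all jobs arrive at t=0 with known run times, so turnaround equals completion time.

```python
# Hypothetical sketch: average turnaround under FIFO when every job
# arrives at t=0 (so Tturnaround = Tcompletion - 0).

def fifo_avg_turnaround(run_times):
    """Run jobs in arrival order; return average turnaround time."""
    t, total = 0, 0
    for run in run_times:
        t += run            # this job completes at the current time + run
        total += t          # arrival is 0, so turnaround == completion
    return total / len(run_times)

# Three 10-second jobs: completions at 10, 20, 30 -> average 20.
print(fifo_avg_turnaround([10, 10, 10]))   # 20.0

# Convoy effect: a 100-second job ahead of two 10-second jobs.
print(fifo_avg_turnaround([100, 10, 10]))  # (100 + 110 + 120) / 3 = 110.0
```

Reordering the second workload shortest-first gives (10 + 20 + 120) / 3 = 50, which is why SJF improves on FIFO here.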
Scheduling: Shortest Job First (SJF)
✔ Given our assumptions about jobs all arriving at the same time, we
could prove that SJF is indeed an optimal scheduling algorithm.
✔ However, you are in a systems class, not theory or operations research;
no proofs are allowed.
✔ Let’s relax our assumption 2: All jobs arrive at the same time.
✔ Now assume that jobs can arrive at any time instead of all at once.
Scheduling: Shortest Job First (SJF)
✔ Assume job A arrives at t=0 and needs to run for 100 seconds.
✔ B and C arrive at t=10 and each needs to run for 10 seconds.
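The slide's scenario can be checked with a small simulation of non-preemptive SJF (names and structure are illustrative assumptions, not the book's code): since A is already running, B and C must wait behind it even though they are short.

```python
# Hypothetical sketch: non-preemptive SJF with jobs arriving over time.

def sjf_avg_turnaround(jobs):
    """jobs: list of (name, arrival, run_time). Returns avg turnaround."""
    pending = sorted(jobs, key=lambda j: j[1])       # order by arrival
    t, total = 0, 0
    while pending:
        ready = [j for j in pending if j[1] <= t]
        if not ready:                  # CPU idle until the next arrival
            t = pending[0][1]
            continue
        job = min(ready, key=lambda j: j[2])         # shortest ready job
        pending.remove(job)
        t += job[2]                    # runs to completion (no preemption)
        total += t - job[1]            # turnaround = completion - arrival
    return total / len(jobs)

# A (100s at t=0) holds the CPU until t=100; B and C (10s each, t=10)
# then finish at 110 and 120. Turnarounds: 100, 100, 110 -> ~103.33.
print(round(sjf_avg_turnaround([("A", 0, 100),
                                ("B", 10, 10),
                                ("C", 10, 10)]), 2))
```

This is the motivation for the preemptive variant (STCF): preempting A when B and C arrive would cut the average turnaround sharply.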
Scheduling: Round Robin (RR)
✔ Round Robin (RR) runs a job for a time slice, then switches to the next job
in the run queue; the shorter the time slice, the better the performance of
RR under the response-time metric.
✔ However, making the time slice too short is problematic: suddenly the cost
of context switching will dominate overall performance.
✔ Thus, deciding on the length of the time slice presents a trade-off to a system
designer, making it long enough to amortize the cost of switching without
making it so long that the system is no longer responsive.
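The trade-off can be made concrete with back-of-the-envelope arithmetic (the numbers below are illustrative assumptions, not measurements):

```python
# Illustrative sketch: fraction of time lost to context switching as a
# function of the time-slice length (all values in milliseconds).

def switch_overhead(time_slice_ms, switch_cost_ms):
    """Fraction of each slice-plus-switch period lost to the switch."""
    return switch_cost_ms / (time_slice_ms + switch_cost_ms)

print(switch_overhead(1, 1))    # 1 ms slice, 1 ms switch: 50% wasted
print(switch_overhead(10, 1))   # 10 ms slice: ~9% wasted
print(switch_overhead(100, 1))  # 100 ms slice: <1%, but poor response time
```

Lengthening the slice amortizes the switch cost, but past some point the system stops feeling responsive, which is exactly the trade-off the slide describes.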
Scheduling: Incorporating I/O
✔ First, we relax assumption 4: All jobs only use the CPU (i.e., they
perform no I/O)
✔ A job doesn’t use the CPU during the I/O operation
✔ Therefore, the scheduler should schedule another job on the CPU at that
time.
✔ When an I/O request is initiated, the process moves from the Running state
to the Blocked state.
✔ When the I/O completes, an interrupt is raised, and the OS moves that
process from the Blocked state back to the Ready state.
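The payoff of scheduling another job during I/O is overlap. A quick sanity check, with illustrative numbers in the spirit of the book's overlap example: assume job A alternates 10 ms CPU bursts with 10 ms I/O waits (five of each), and job B needs 50 ms of pure CPU.

```python
# Hypothetical arithmetic: total time with and without overlapping B's
# CPU work with A's I/O waits (all values in milliseconds, assumed).

A_CPU_BURSTS = 5 * [10]   # A: five 10 ms CPU bursts (50 ms CPU total)
A_IO_PER_BURST = 10       # each burst is followed by 10 ms of I/O
B_CPU = 50                # B: 50 ms of CPU, no I/O

# No overlap: the CPU sits idle during A's I/O; B only runs afterward.
no_overlap = sum(A_CPU_BURSTS) + len(A_CPU_BURSTS) * A_IO_PER_BURST + B_CPU

# Overlap: the scheduler runs B during each of A's I/O waits, so B's
# 50 ms of CPU work hides entirely inside A's 50 ms of I/O.
overlap = sum(A_CPU_BURSTS) + len(A_CPU_BURSTS) * A_IO_PER_BURST

print(no_overlap, overlap)   # 150 100
```

Treating each CPU burst as an independent "job" is what lets the scheduler interleave B and keep both the CPU and the disk busy.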
Scheduling: How to deal with a job that performs I/O?
❖ How can we design a scheduler that minimizes response time for
interactive jobs while also minimizing turnaround time, without a priori
knowledge of job length?
✔ The fundamental problem MLFQ tries to address is two-fold:
✔ First, it would like to optimize turnaround time, which is done by running
shorter jobs first.
✔ Unfortunately, the OS doesn’t generally know how long a job will run for.
✔ Second, MLFQ would like to make a system feel responsive to interactive users
(i.e., users sitting and staring at the screen, waiting for a process to finish), and
thus minimize response time
✔ Unfortunately, algorithms like Round Robin reduce response time but are
terrible for turnaround time.
MLFQ: Basic Rules
✔ The MLFQ has a number of distinct queues, each assigned a different priority
level.
✔ At any given time, a job that is ready to run is on a single queue.
✔ MLFQ uses priorities to decide which job should run at a given time: a job with
higher priority (i.e., a job on a higher queue) is chosen to run.
✔ More than one job may be on a given queue, and thus have the same priority. In
this case, we will just use round-robin scheduling among those jobs.
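These two rules (highest non-empty queue wins; round-robin within a queue) can be sketched in a few lines. This is a minimal illustration, assuming queues are indexed from highest to lowest priority; it is not the book's implementation.

```python
# Minimal sketch of MLFQ's basic selection rule: pick from the highest
# non-empty queue, rotating within it (round-robin among equal priority).
from collections import deque

def pick_next(queues):
    """queues[0] is highest priority. Return the job to run next."""
    for q in queues:
        if q:
            job = q.popleft()
            q.append(job)     # rotate: same-priority jobs share via RR
            return job
    return None               # nothing is ready to run

qs = [deque(["A", "B"]), deque(["C"]), deque()]
print([pick_next(qs) for _ in range(4)])  # ['A', 'B', 'A', 'B']
```

Note that C never runs while A and B occupy the higher queue, which previews the starvation problem discussed below.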
MLFQ: Basic Rules
✔ Rather than giving a fixed priority to each job, MLFQ varies the priority of a job
based on its observed behavior.
✔ A job that repeatedly relinquishes CPU -> interactive job -> MLFQ will give it
higher priority
✔ A job that uses CPU intensively for long periods of time -> MLFQ will reduce its
priority.
MLFQ: Basic Rules
✔ If there are “too many” interactive jobs in the system, they will combine to
consume all CPU time, and thus long-running jobs will never receive any CPU
time (they starve).
✔ We’d like to make some progress on these jobs even in this scenario.
Problems With Our Current MLFQ : Game The Scheduler
✔ Gaming the scheduler generally refers to the idea of doing something sneaky to
trick the scheduler into giving you more than your fair share of the resource.
✔ Before the time slice is over, issue an I/O operation (to some file you don’t care
about) and thus relinquish the CPU
✔ Doing so allows you to remain in the same queue, and thus gain a higher
percentage of CPU time.
✔ When done right (e.g., by running for 99% of a time slice before relinquishing
the CPU), a job could nearly monopolize the CPU.
Attempt #2: The Priority Boost
✔ To avoid the problem of starvation, periodically boost the priority of all the jobs
in the system.
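The boost itself is simple: after some period S, move every job to the topmost queue. A minimal sketch, assuming queues are plain lists of job names ordered from highest to lowest priority (illustrative, not the book's code):

```python
# Hypothetical sketch of the priority boost: after time period S, every
# job is moved to the highest-priority queue.

def priority_boost(queues):
    """Return new queues with all jobs promoted to the top queue."""
    top = [job for q in queues for job in q]   # gather every job
    return [top] + [[] for _ in queues[1:]]    # all others now empty

queues = [["A"], ["B"], ["C", "D"]]
print(priority_boost(queues))   # [['A', 'B', 'C', 'D'], [], []]
```

This guarantees long-running jobs like C and D make progress at least once per boost period, and it also lets a job that has become interactive be treated properly again.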