Lecture 3
Dynamic-Priority Scheduling

CSCE 990: Real-Time Systems
Jim Anderson
http://www.cse.unl.edu/~goddard/Courses/RealTimeSystems
[email protected]

Dynamic-Priority Scheduling
◆ The remaining scheduling disciplines we consider are priority-based.
  » Each job is assigned a priority, and the highest-priority ready job is scheduled.
◆ In a dynamic-priority system, it may be that job Ji,k of task Ti has higher priority than job Jj,m of task Tj, but job Ji,l of Ti has lower priority than job Jj,n of Tj.
Outline
◆ We consider both earliest-deadline-first (EDF) and least-laxity-first (LLF) (called least-slack-time-first by Liu) scheduling.
◆ Outline:

Optimality of EDF
Theorem 4-1: [Liu and Layland] When preemption is allowed and jobs do not contend for resources, the EDF algorithm can produce a feasible schedule of a set J of independent jobs with arbitrary release times and deadlines on a processor if and only if J has feasible schedules.
Proof of Theorem 4-1
We show that any feasible schedule of J can be systematically transformed into an EDF schedule.
Suppose parts of two jobs Ji and Jk are executed out of EDF order:
[figure: a portion of Ji runs before a portion of Jk on a timeline marking rk, dk, and di]
This situation can be corrected by performing a "swap":
[figure: the two portions exchanged, so Jk now runs before Ji]

Proof (Continued)
If we inductively repeat this procedure, we can eliminate all out-of-order violations.
The resulting schedule may still fail to be an EDF schedule because it has idle intervals where some job is ready:
[figure: a schedule with an idle interval while a job is ready]
Such idle intervals can be eliminated by moving some jobs forward:
[figure: the ready job shifted earlier to fill the idle interval]
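As a concrete companion to Theorem 4-1, here is a minimal slot-based preemptive EDF simulator in Python. It is only an illustrative sketch, not taken from the slides: the function name, the (name, release, execution, deadline) job tuples, and the restriction to integer, unit-length time slots are all assumptions.

```python
# Sketch of a preemptive EDF scheduler over unit-length time slots.
# Assumes integer release times, execution times, and deadlines.

def edf_schedule(jobs):
    """jobs: list of (name, release, execution, deadline).
    Returns the per-slot schedule and whether every job met its deadline."""
    release   = {n: r for (n, r, e, d) in jobs}
    remaining = {n: e for (n, r, e, d) in jobs}
    deadline  = {n: d for (n, r, e, d) in jobs}
    schedule, met = [], True
    for t in range(max(deadline.values())):
        ready = [n for n in remaining if release[n] <= t and remaining[n] > 0]
        if ready:
            n = min(ready, key=lambda name: deadline[name])  # earliest deadline first
            remaining[n] -= 1
            schedule.append(n)
            if remaining[n] == 0 and t + 1 > deadline[n]:
                met = False          # job finished after its deadline
        else:
            schedule.append(None)    # processor idles
    met = met and all(v == 0 for v in remaining.values())
    return schedule, met

# Example with three jobs (the same ones used later for non-preemptive EDF):
print(edf_schedule([("J1", 0, 3, 10), ("J2", 2, 6, 14), ("J3", 4, 4, 12)]))
```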
LLF Scheduling
◆ Definition: At any time t, the slack (or laxity) of a job with deadline d is equal to d − t minus the time required to complete the remaining portion of the job.
[figure: a job's remaining execution and its slack shown on a timeline ending at its deadline]

Optimality of LLF
Theorem 4-3: When preemption is allowed and jobs do not contend for resources, the LLF algorithm can produce a feasible schedule of a set J of independent jobs with arbitrary release times and deadlines on a processor if and only if J has feasible schedules.
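The slack definition translates directly into code. The sketch below (function names and argument layout are assumptions, not from the slides) computes a job's slack at time t and uses it to make an LLF selection among the ready jobs.

```python
def slack(deadline, remaining, t):
    """Slack (laxity) at time t: d - t minus the time still required
    to complete the remaining portion of the job."""
    return deadline - t - remaining

def llf_pick(ready_jobs, t):
    """ready_jobs: list of (name, deadline, remaining_execution).
    Returns the name of a least-laxity ready job at time t."""
    return min(ready_jobs, key=lambda job: slack(job[1], job[2], t))[0]

# Example: at t = 4, job "B" (deadline 8, 2 units left) has slack 2, while
# job "A" (deadline 10, 3 units left) has slack 3, so LLF picks "B".
print(llf_pick([("A", 10, 3), ("B", 8, 2)], 4))
```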
Preemptive vs. Nonpreemptive EDF
The rest of our discussion of dynamic-priority scheduling will focus on preemptive and non-preemptive EDF. We first show the following:

Theorem: Non-preemptive EDF is not optimal.

Proof: Consider a system of three jobs J1, J2, and J3 such that
(r1, e1, d1) = (0, 3, 10), (r2, e2, d2) = (2, 6, 14), (r3, e3, d3) = (4, 4, 12).
Here's a schedule:
[figure: a feasible schedule on the interval [0, 14] in which J1 runs first, then J3, then J2]

Proof (Continued)
But under non-preemptive EDF, a deadline is missed!
[figure: the non-preemptive EDF schedule on the interval [0, 14]: J1, then J2, then J3, with r1, r2, r3 and J3's deadline marked; J3 completes after its deadline]

Question: Should we conclude from this result that preemptive EDF is always better than non-preemptive EDF in practice?
• Note: The EDF optimality proof assumes there is no penalty due to preemption.
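Here is a small non-preemptive EDF simulation of this three-job example; running it reproduces the deadline miss. The function name and job-tuple layout are illustrative assumptions, not part of the proof.

```python
# Sketch: non-preemptive EDF. At each dispatch point the ready job with the
# earliest deadline runs to completion; it cannot be preempted once started.

def np_edf(jobs):
    """jobs: list of (name, release, execution, deadline)."""
    pending = list(jobs)
    t, trace, misses = 0, [], []
    while pending:
        ready = [j for j in pending if j[1] <= t]
        if not ready:
            t = min(j[1] for j in pending)   # idle until the next release
            continue
        job = min(ready, key=lambda j: j[3]) # earliest deadline among ready jobs
        name, r, e, d = job
        trace.append((name, t, t + e))
        t += e                               # runs to completion
        if t > d:
            misses.append(name)
        pending.remove(job)
    return trace, misses

# J1 runs on [0, 3], J2 on [3, 9], and J3 on [9, 13], missing its deadline at 12.
print(np_edf([("J1", 0, 3, 10), ("J2", 2, 6, 14), ("J3", 4, 4, 12)]))
```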
Utilization-based Schedulability Test for (Preemptive) EDF
Note: Whenever we say "EDF" from now on, we mean preemptive EDF, unless specified otherwise.

Theorem 6-1: [Liu and Layland] A system T of independent, preemptable, periodic tasks with relative deadlines equal to their periods can be feasibly scheduled (under EDF) on one processor if and only if its total utilization U is at most one.

Proof: The "only if" part is obvious: If U > 1, then some task clearly must miss a deadline. So, we concentrate on the "if" part.

Setting Up the Proof
We wish to show: U ≤ 1 ⇒ T is schedulable.
We prove the contrapositive, i.e., T is not schedulable ⇒ U > 1.
Assume T is not schedulable. Let Ji,k be the first job to miss its deadline.
[figure: timeline for task Ti marking t-1, ri,k, and ri,k+1]
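Theorem 6-1 yields an exact, constant-work-per-task schedulability test. The sketch below (function name and task-tuple layout are assumptions, not from the slides) checks it for implicit-deadline periodic task sets.

```python
def edf_schedulable(tasks):
    """tasks: list of (execution_time, period) pairs, with deadline == period.
    By Theorem 6-1, schedulable under preemptive EDF iff total utilization <= 1."""
    return sum(e / p for (e, p) in tasks) <= 1

# Example: U = 1/4 + 2/8 + 3/6 = 1.0, so this set is schedulable under EDF.
print(edf_schedulable([(1, 4), (2, 8), (3, 6)]))   # True
```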
Proof (Continued)
Because Ji,k missed its deadline, the demand placed on the processor in [t-1, ri,k+1) by jobs with deadlines ≤ ri,k+1 is greater than the available processor time in [t-1, ri,k+1]. Thus,

  ri,k+1 − t-1
    = available processor time in [t-1, ri,k+1]
    < demand placed on the processor in [t-1, ri,k+1) by jobs with deadlines ≤ ri,k+1
    = ∑j=1..N (number of jobs of Tj with deadlines ≤ ri,k+1 released in [t-1, ri,k+1)) · ej
    ≤ ∑j=1..N ⌊(ri,k+1 − t-1)/pj⌋ · ej
    ≤ ∑j=1..N ((ri,k+1 − t-1)/pj) · ej

Proof (Continued)
Thus, we have

  ri,k+1 − t-1 < ∑j=1..N ((ri,k+1 − t-1)/pj) · ej.

Cancelling ri,k+1 − t-1 yields

  1 < ∑j=1..N ej/pj,

i.e.,

  1 < U.

This completes the proof.

Note: This proof is actually still valid if deadlines are larger than periods.
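The key counting step above is that a task Tj can contribute at most ⌊delta/pj⌋ jobs with deadlines inside an interval of length delta. A small sketch of that demand bound (function name and task-tuple layout are assumptions, not from the slides):

```python
import math

def demand_bound(tasks, delta):
    """tasks: list of (execution_time, period) with deadline == period.
    Upper bound on the demand of jobs released in an interval of length delta
    whose deadlines also fall in the interval (here delta = ri,k+1 - t-1)."""
    return sum(math.floor(delta / p) * e for (e, p) in tasks)

# As in the proof: if demand_bound(tasks, delta) > delta for some interval, then
# demand_bound <= delta * sum(e/p), so dividing by delta gives U > 1.
```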
EDF with Deadlines < Periods
If deadlines are less than periods, then U ≤ 1 is no longer a sufficient schedulability condition.
This is easy to see. Consider two tasks such that, for both, ei = 1 and pi = 2. If both have deadlines at 1.9, then the system is not schedulable, even though U = 1.
For these kinds of systems, we work with densities instead of utilizations.
Definition: The density of task Tk is defined to be δk = ek/min(Dk, pk). The density of the system is defined to be ∆ = ∑k=1..N δk.

Deadlines < Periods (Continued)
Theorem 6-2: A system T of independent, preemptable, periodic tasks can be feasibly scheduled on one processor if its density is at most one.
The proof is similar to that for Theorem 6-1 and is left as an exercise.
Note: This theorem only gives a sufficient condition.
We refer to the following as the schedulability condition for EDF:

  ∑k=1..n ek/min(Dk, pk) ≤ 1
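The density test is equally direct to evaluate. A minimal sketch (task-tuple layout and function names are assumptions, not from the slides):

```python
def density(tasks):
    """tasks: list of (execution_time, relative_deadline, period)."""
    return sum(e / min(d, p) for (e, d, p) in tasks)

def density_test(tasks):
    """Sufficient (but, as the next slide shows, not necessary) EDF test."""
    return density(tasks) <= 1

# The example above: two tasks with e = 1, D = 1.9, p = 2 have
# density 2/1.9 ~= 1.05 > 1, so the test rejects them (and indeed they
# are not schedulable).
print(density_test([(1, 1.9, 2), (1, 1.9, 2)]))   # False
```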
Proof of Non-tightness
To see that ∆ > 1 doesn't imply non-schedulability, consider the following example.
Example: We have two tasks T1 = (2, 0.6, 1) and T2 = (5, 2.3). ∆ = 0.6/1 + 2.3/5 = 1.06. Nonetheless, we can schedule this task set under EDF:
[figure: EDF schedule of T1 and T2 over [0, 10]; the pattern repeats]

Non-preemptive EDF (Jeffay et al.)
Theorem: Let T = {T1, T2, …, Tn} be a system of independent, periodic tasks with relative deadlines equal to their periods such that the tasks in T are indexed in non-decreasing order by period (i.e., if i < j, then pi ≤ pj). T can be scheduled by the non-preemptive EDF algorithm if:

  1) ∑i=1..n ei/pi ≤ 1

  2) ∀i : 1 ≤ i ≤ n :: ∀L : p1 < L < pi :: L ≥ ei + ∑j=1..i-1 ⌊(L − 1)/pj⌋ · ej

Note: This condition is actually necessary and sufficient for "real-world" sporadic tasks.
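A sketch of this test in Python (not taken from Jeffay et al.; the function name is made up). It assumes positive integer execution times and periods, in which case checking integer values of L suffices, and it makes the pseudo-polynomial cost visible: the inner loop ranges over every L up to pi.

```python
def np_edf_test(tasks):
    """tasks: list of (e, p) with deadline == period, indexed so that
    periods are non-decreasing. Returns True if conditions (1) and (2) hold."""
    # Condition (1): utilization at most 1.
    if sum(e / p for (e, p) in tasks) > 1:
        return False
    # Condition (2): for every i and every L with p1 < L < pi,
    #   L >= ei + sum_{j < i} floor((L - 1) / pj) * ej.
    p1 = tasks[0][1]
    for i, (ei, pi) in enumerate(tasks):
        for L in range(p1 + 1, pi):
            demand = ei + sum(((L - 1) // pj) * ej for (ej, pj) in tasks[:i])
            if L < demand:
                return False
    return True

# Example: both conditions hold for this set, so non-preemptive EDF schedules it.
print(np_edf_test([(1, 4), (2, 6), (3, 12)]))   # True
```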
Explanation
The first condition is just a constraint on utilization.
In the second condition, the expression

  ei + ∑j=1..i-1 ⌊(L − 1)/pj⌋ · ej

gives an upper bound on processor demand in an interval [t, t+L].
Intuition: The "worst-case" pattern of job releases occurs when a job of some Ti begins executing (non-preemptively!) one time unit before some tasks with smaller periods begin releasing some jobs. These other jobs are blocked by the job of Ti.

Second Condition (Continued)
Here's an illustration:
[figure: worst-case release pattern, with a job of Ti starting just before jobs of T1, T2, T3, … are released]
For any L over the range p1 < L < pi, the total demand on the processor in [t, t + L] due to jobs with deadlines at or before t + L is:

  ei + ∑j=1..i-1 ⌊(L − 1)/pj⌋ · ej

For the system to be schedulable, this demand must not exceed the length of the interval (which is L).
Proof of Theorem
Suppose conditions (1) and (2) hold for T but a deadline is missed.
Let td be the earliest point in time at which a deadline is missed.
There are two cases.
Case 1: No job with a deadline after time td is scheduled prior to time td. The analysis is just like with preemptive EDF.

Proof (Continued)
Case 2: Some job with a deadline after time td is scheduled prior to time td.
Let Ti be the task with the last job with deadline after td that is scheduled prior to td. Then, we have the following:
Proof (Continued)
◆ Observe the following:
  » pi > td − ti.
    • This follows from the fact that the job of task Ti scheduled at time ti had a deadline after td.
  » No task with index greater than i is scheduled in the interval [ti, td].
  » Other than a job of task Ti, no job scheduled in [ti, td] could have been released at time ti.
  » There is no idle time in the interval [ti, td].
  » There is at least one job that is released in [ti, td] with a deadline at or before time td.

Proof (Continued)
From these facts, we conclude that demand over [ti, td] is less than or equal to

  ei + ∑j=1..i-1 ⌊(td − (ti + 1))/pj⌋ · ej.

Because the deadline at td is missed and there is no idle time in [ti, td], this demand exceeds the available processor time td − ti. Let L = td − ti. Then,

  L < ei + ∑j=1..i-1 ⌊(L − 1)/pj⌋ · ej.

This contradicts condition (2).
Notes
◆ Note that this scheduling condition requires pseudo-polynomial time to evaluate. (Why?)
◆ Using "real-world" terminology, this condition is necessary and sufficient for sporadic and non-concrete periodic task systems. (Why?)
  • "Concrete" = fixed release times (though maybe not all 0).
  • For a non-concrete task system to be feasible, it must be schedulable for any initial phasing.
◆ In the rest of the paper, it is shown that the feasibility problem for non-preemptive concrete periodic task systems is NP-hard in the strong sense.
  • Implies that a pseudo-polynomial-time feasibility test is unlikely for such systems. (We cover this result later when we consider intractability.)