Lecture 4

This document discusses static-priority scheduling of real-time systems. It describes rate-monotonic and deadline-monotonic scheduling algorithms, and proves that neither is optimal. It also defines simply periodic systems and proves a schedulability condition for such systems under rate-monotonic scheduling when total utilization is at most one.


Static-Priority Scheduling

CSCE 990: Real-Time Systems
Steve Goddard
[email protected]
http://www.cse.unl.edu/~goddard/Courses/RealTimeSystems
(Slides: Jim Anderson, Real-Time Systems)


Static-Priority Scheduling

◆ We now consider static-priority scheduling.
  » Under static-priority scheduling, different jobs of a task are assigned the same priority.
  » We will assume that tasks are indexed in decreasing priority order, i.e., Ti has higher priority than Tk if i < k.
  » Notation:
    • πi denotes the priority of Ti.
    • Ti denotes the subset of tasks with equal or higher priority than Ti.
    • Note: In some of the papers we will read, it is assumed no two tasks have the same priority. (Is this OK?)
Rate-monotonic Scheduling (Liu and Layland)

Priority Definition: Tasks with smaller periods have higher priority.

Example Schedule: Three tasks, T1 = (3,0.5), T2 = (4,1), T3 = (6,2).

[Figure: the RM schedule of T1, T2, and T3.]


Deadline-monotonic Scheduling (Leung and Whitehead)

Priority Definition: Tasks with smaller relative deadlines have higher priority.

Same as rate-monotonic if each task's relative deadline equals its period.

Example Schedule: Let's change the RM example by giving T2 a tighter deadline: T1 = (3,0.5), T2 = (4,1,2), T3 = (6,2).

The DM priority ordering becomes:
T′1 = T2
T′2 = T1
T′3 = T3

[Figure: the DM schedule of the reindexed tasks.]
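To make the two priority definitions concrete, here is a minimal Python sketch (mine, not from the slides) that orders a task set by RM and by DM priority. Tasks follow the (period, execution time[, relative deadline]) convention used in the examples above; the helper names are my own.

```python
# Each task is a tuple (name, period, wcet, relative_deadline);
# the relative deadline defaults to the period if not given.
def task(name, period, wcet, deadline=None):
    return (name, period, wcet, period if deadline is None else deadline)

def rm_order(tasks):
    """Rate-monotonic: smaller period => higher priority."""
    return sorted(tasks, key=lambda t: t[1])

def dm_order(tasks):
    """Deadline-monotonic: smaller relative deadline => higher priority."""
    return sorted(tasks, key=lambda t: t[3])

# The example above, with T2 given the tighter deadline of 2.
tasks = [task("T1", 3, 0.5), task("T2", 4, 1, 2), task("T3", 6, 2)]
print([t[0] for t in rm_order(tasks)])   # ['T1', 'T2', 'T3']
print([t[0] for t in dm_order(tasks)])   # ['T2', 'T1', 'T3'], i.e., T'1 = T2, T'2 = T1, T'3 = T3
```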
Optimality of RM and DM

Theorem: Neither RM nor DM is optimal.

Proof:
Consider T1 = (2,1) and T2 = (5, 2.5).
Total utilization is one, so the system is schedulable (e.g., by EDF).
However, under RM or DM, a deadline will be missed, regardless of how we choose to (statically) prioritize T1 and T2.
The details are left as an exercise; a small simulation sketch follows the next slide.


Simply Periodic Systems (Section 6.4 of Liu)

Definition: A system of periodic tasks is simply periodic if for every pair of tasks Ti and Tk in the system where pi < pk, pk is an integer multiple of pi.

Theorem 6-3: A system T of simply periodic, independent, preemptable tasks, whose relative deadlines are at least their periods, is schedulable on one processor according to the RM algorithm if and only if its total utilization is at most one.
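Returning to the non-optimality example above: the following discrete-time simulation sketch (mine, not from the slides) exhibits the deadline miss for either static priority assignment of T1 = (2,1) and T2 = (5,2.5). It assumes in-phase releases, relative deadline = period, and a 0.5 time quantum, which is exact here because all task parameters are multiples of 0.5.

```python
def simulate_fixed_priority(tasks, horizon, dt=0.5):
    """Preemptive fixed-priority simulation of in-phase periodic tasks.

    tasks = [(name, period, wcet), ...] in priority order (index 0 = highest).
    Returns the first missed deadline as (task name, time), or None if no
    deadline is missed up to `horizon`.  Assumes relative deadline = period
    and that all parameters are multiples of dt."""
    remaining = [0.0] * len(tasks)        # unfinished work of each task's current job
    t = 0.0
    while t < horizon:
        for i, (name, period, wcet) in enumerate(tasks):
            if t % period == 0:           # a new job is released at t
                if remaining[i] > 0:      # previous job unfinished at its deadline
                    return (name, t)
                remaining[i] = wcet
        for i in range(len(tasks)):       # run the highest-priority pending task
            if remaining[i] > 0:
                remaining[i] -= dt
                break
        t += dt
    return None

# RM priorities (T1 first) and the reverse: a deadline is missed either way.
print(simulate_fixed_priority([("T1", 2, 1), ("T2", 5, 2.5)], 20))   # ('T2', 5.0)
print(simulate_fixed_priority([("T2", 5, 2.5), ("T1", 2, 1)], 20))   # ('T1', 2.0)
```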
Proof of Theorem 6-3

We wish to show: U ≤ 1 ⇒ T is schedulable.
We prove the contrapositive, i.e., T is not schedulable ⇒ U > 1.

Assume T is not schedulable.
Let Ji,k be the first job to miss its deadline.

[Figure: a timeline for Ti marking t-1, ri,k, and ri,k+1, where t-1 is the last "idle instant" for jobs of T1, …, Ti.]


Proof (Continued)

Because Ji,k missed its deadline, the demand placed on the processor in [t-1, ri,k+1) by jobs of tasks T1, …, Ti is greater than the available processor time in [t-1, ri,k+1]. Thus,

    ri,k+1 − t-1
      = available processor time in [t-1, ri,k+1]
      < demand placed on the processor in [t-1, ri,k+1) by jobs of T1, …, Ti
      = ∑_{j=1..i} (the number of jobs of Tj released in [t-1, ri,k+1)) ⋅ ej
      ≤ ∑_{j=1..i} ((ri,k+1 − t-1)/pj) ⋅ ej

[Note: Because the system is simply periodic, (ri,k+1 − t-1)/pj is an integer.]
Proof (Continued)

Thus, we have

    ri,k+1 − t-1 < ∑_{j=1..i} ((ri,k+1 − t-1)/pj) ⋅ ej.

Cancelling ri,k+1 − t-1 yields

    1 < ∑_{j=1..i} ej/pj,

i.e.,

    1 < Ui ≤ U.

This completes the proof.


Optimality Among Fixed-Priority Algs.

Theorem 6-4: A system T of independent, preemptable periodic tasks that are in phase and have relative deadlines at most their respective periods can be feasibly scheduled on one processor according to the DM algorithm whenever it can be feasibly scheduled according to any fixed-priority algorithm.

Corollary: The RM algorithm is optimal among all fixed-priority algorithms whenever the relative deadlines of all tasks are proportional to their periods.
Proof of Theorem 6-4

Suppose T1, …, Ti are prioritized in accordance with DM.
Suppose Ti has a longer relative deadline than Ti+1, but Ti has a higher priority than Ti+1.
Then, we can interchange Ti and Ti+1 and adjust the schedule accordingly by swapping "pieces" of Ti with "pieces" of Ti+1.

[Figures: the schedule before the swap (Ti above Ti+1) and after the swap (Ti+1 above Ti), with Ti+2 unchanged.]

By induction, we can correct all such situations.
Utilization-based RM Schedulability Test (Section 6.7 of Liu)

Theorem 6-11: [Liu and Layland] A system of n independent, preemptable periodic tasks with relative deadlines equal to their respective periods can be feasibly scheduled on a processor according to the RM algorithm if its total utilization U is at most

    URM(n) = n(2^(1/n) − 1)

Note that this is only a sufficient schedulability test.


URM(n) as a Function of n

  n     URM(n) truncated to three digits
  2     0.828
  3     0.779
  4     0.756
  5     0.743
  6     0.734
  7     0.728
  8     0.724
  9     0.720
  10    0.717
  …     …
  ∞     ln 2 ≈ 0.693
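A minimal sketch (mine, not from the slides) of how this test is applied: compute URM(n) and compare it with the task set's total utilization. Tasks are (period, wcet) pairs with relative deadline = period; the helper names are my own.

```python
import math

def urm(n: int) -> float:
    """The Liu-and-Layland bound URM(n) = n(2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

def rm_utilization_test(tasks) -> bool:
    """Sufficient-only RM test: True means schedulable; False means
    the test is inconclusive (it is not a necessary condition)."""
    u = sum(e / p for p, e in tasks)
    return u <= urm(len(tasks))

for n in (2, 3, 4, 10):
    print(n, math.floor(urm(n) * 1000) / 1000)     # 0.828, 0.779, 0.756, 0.717

# The RM example from earlier: U = 0.5/3 + 1/4 + 2/6 = 0.75 <= URM(3) ≈ 0.779.
print(rm_utilization_test([(3, 0.5), (4, 1), (6, 2)]))   # True
```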
Proof Sketch for Theorem 6-11

We will assume that all priorities are distinct, i.e., p1 < p2 < … < pn.

Note: The original proof for this theorem by Liu and Layland is incorrect. For a complete, correct proof, see Ed Overton's M.S. thesis on my web page. Overton's thesis also points out where the error is in Liu and Layland's proof.

We will present our proof sketch in two parts:
• First, we consider the special case where pn ≤ 2p1.
• Then, we will remove this restriction.


Special Case: pn ≤ 2p1

Definition: A system is difficult-to-schedule if it is schedulable according to the RM algorithm, but it fully utilizes the processor for some interval of time so that any increase in the execution time or decrease in the period of some task will make the system unschedulable.

We seek the most difficult-to-schedule system, i.e., the system whose utilization is smallest among all difficult-to-schedule systems.

The proof for the special case pn ≤ 2p1 consists of four steps, described next.
Four Steps of the Proof

◆ Step 1: Identify the phases in the most difficult-to-schedule system.
◆ Step 2: Define the periods and execution times for the most difficult-to-schedule system.
◆ Step 3: Show that any difficult-to-schedule system whose parameters are not like in Step 2 has utilization that is at least that of the most difficult-to-schedule system.
◆ Step 4: Compute an expression for URM(n).


Aside: Critical Instants

Definition: A critical instant of a task Ti is a time instant such that:
(1) the job of Ti released at this instant has the maximum response time of all jobs in Ti, if the response time of every job of Ti is at most Di, the relative deadline of Ti, and
(2) the response time of the job released at this instant is greater than Di if the response time of some job in Ti exceeds Di.

Informally, a critical instant of Ti represents a worst-case scenario from Ti's standpoint.
Critical Instants in Fixed-Priority Systems

Theorem 6-5: [Liu and Layland] In a fixed-priority system where every job completes before the next job of the same task is released, a critical instant of any task Ti occurs when one of its jobs Ji,c is released at the same time with a job of every higher-priority task.

We are not saying that T1, …, Ti will all necessarily release jobs at the same time, but if this does happen, we are claiming that the time of release will be a critical instant for Ti.

We give a different (probably more hand-waving) proof of Theorem 6-5 than that found in Liu.


Proof of Theorem 6-5

Consider a system such that T1, …, Ti all release jobs together at some time instant t. Suppose t is not a critical instant for Ti, i.e., Ti has a job released at another time t′ that has a longer response time than its job released at t.

Example:

[Figure: a schedule of T1, T2, T3, T4, with T4's job released at t′.]
Proof (Continued)

Let t-1 be the latest "idle instant" for T1, …, Ti at or before t′.
Let J be Ti's job released at t′.
Let tR denote the time instant when J completes.

Example:

[Figure: a schedule of T1, …, T4 marking t-1, t′, and tR.]


Proof (Continued)

If we (artificially) redefine J's release time to be t-1, then tR remains unchanged (but J's response time may increase).

Example:

[Figure: the same schedule with J's release time moved back to t-1.]
Proof (Continued)

Starting with T1, let us "left-shift" any task whose first job is released after t-1 so that its first job is released at t-1.

With each shift, Ti's response time does not decrease. Why?

Example:

[Figures: the running example as T1 is shifted over, then T2, and so on, until the first job of every task that executes in the interval is released at t-1; t-1, t′, and tR are marked in each.]
Proof (Continued)

We have constructed a portion of the schedule that is identical to that which occurs at time t (when T1, …, Ti all release jobs together).

Moreover, the response time of Ti's job released at t is at least that of Ti's job released at t′.

This contradicts our assumption that Ti's job released at t′ has a longer response time than Ti's job released at t.

Thus, t is a critical instant.


Step 1: Phases

◆ Back to the proof of Theorem 6-11…
◆ Recall that Step 1 is to identify the phases in the most difficult-to-schedule system.
◆ By Theorem 6-5, we can assume that each task in the most difficult-to-schedule system releases its first job at time 0.
Step 2: Periods and Execution Times

◆ By Theorem 6-5, we can limit attention to the first period of each task.
◆ We need to make sure that each task's first job completes by the end of its first period.
◆ We will define the system's parameters so that the tasks keep the processor busy from time 0 until at least pn, the end of the first period of the lowest-priority task.


Step 2 (Continued)

Let us define
    ek = pk+1 − pk  for k = 1, 2, …, n−1, and
    en = pn − 2 ∑_{k=1..n−1} ek.

[Figure: the first periods of T1, T2, T3, …, Tn−1, Tn with these execution times; the processor is kept busy from 0 to pn.]
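A small numeric sketch (mine, not from the slides): build the Step-2 execution times for a chosen set of periods with pn ≤ 2p1 and compute the resulting utilization. With all adjacent period ratios equal to 2^(1/n) (the choice Step 4 will identify), the utilization comes out to URM(n).

```python
def step2_task_set(periods):
    """Step-2 candidate system: e_k = p_{k+1} - p_k for k < n,
    and e_n = p_n - 2*sum(e_1..e_{n-1}).  Assumes the periods are
    sorted and p_n <= 2*p_1."""
    n = len(periods)
    execs = [periods[k + 1] - periods[k] for k in range(n - 1)]
    execs.append(periods[-1] - 2 * sum(execs))
    return list(zip(periods, execs))

def utilization(tasks):
    return sum(e / p for p, e in tasks)

n = 4
periods = [2 ** (k / n) for k in range(n)]   # adjacent ratios all equal to 2^(1/n)
tasks = step2_task_set(periods)
print(utilization(tasks))                    # ≈ 0.7568
print(n * (2 ** (1 / n) - 1))                # URM(4) ≈ 0.7568
```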
Step 2 (Continued)

Notes:
• This task system is difficult-to-schedule. (Why?)
• The processor is fully utilized up to pn.

[Figure: the first periods of T1, T2, T3, …, Tn−1, Tn again.]


Step 3: Showing it's the Most D-T-S

We still need to show that the system from Step 2 is the most difficult-to-schedule system.

We must show that other difficult-to-schedule systems have equal or higher utilization.

Other difficult-to-schedule systems can be obtained from the one in Step 2 by systematically increasing or decreasing the execution times of some of the tasks.

We show that any small increase or decrease results in a utilization that's at least as big as that of the original task system.

(Convince yourself that this argument generalizes.)
Step 3 (Continued)

Let's increase the execution time of some task, say T1, by ε, i.e.,
    e′1 = p2 − p1 + ε = e1 + ε.
We can keep the processor busy until pn by decreasing some Tk's, k ≠ 1, execution time by ε:
    e′k = ek − ε.

[Figure: the modified first periods of T1, …, Tn.]


Step 3 (Continued)

The difference in utilization is:
    U′ − U = e′1/p1 + e′k/pk − e1/p1 − ek/pk
           = ε/p1 − ε/pk
           > 0, since p1 < pk.
Step 3 (Continued)

Let's decrease the execution time of some task, say T1, by ε, i.e.,
    e″1 = p2 − p1 − ε.
We can keep the processor busy until pn by increasing some Tk's, k ≠ 1, execution time by 2ε:
    e″k = ek + 2ε.

[Figure: the modified first periods of T1, …, Tn.]


Step 3 (Continued)

The difference in utilization is:
    U″ − U = 2ε/pk − ε/p1
           ≥ 0, since pk ≤ 2p1.
Step 4: Calculate URM(n)

Let U(n) = ∑_{k=1..n} ek/pk denote the utilization of the system in Step 2. Define q_{k,j} = pk/pj. Then,

    U(n) = q_{2,1} + q_{3,2} + ⋯ + q_{n,(n−1)} + 2/(q_{2,1} q_{3,2} ⋯ q_{n,(n−1)}) − n.

To find the minimum, we take the partial derivative of U(n) with respect to each adjacent period ratio q_{k+1,k} and set the derivative to zero. This gives us the following n − 1 equations:

    1 − 2/(q_{2,1} q_{3,2} ⋯ q_{(k+1),k}² ⋯ q_{n,(n−1)}) = 0,  for all k = 1, 2, …, n − 1.

Solving these equations for q_{(k+1),k}, we find that U(n) is at its minimum when all the n − 1 adjacent period ratios q_{k+1,k} are equal to 2^(1/n). Thus,

    U(n) = n(2^(1/n) − 1).

(A brute-force numeric check of this minimization appears after the next slide.)


Removing the pn ≤ 2p1 Restriction

Definition: The ratio qn,1 = pn/p1 is the period ratio of the system.

We have proven Theorem 6-11 only for systems with period ratios of at most 2.

To deal with systems with period ratios larger than 2, we show the following:

(1) Corresponding to every difficult-to-schedule n-task system T whose period ratio is larger than 2 there is a difficult-to-schedule n-task system T′ whose period ratio is at most 2, and

(2) T's utilization is at least T′'s.
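As a quick sanity check on the Step 4 claim (my own, not part of the proof), a brute-force search over the two adjacent period ratios for n = 3 locates the minimum of U at ratios of about 2^(1/3), with value about URM(3).

```python
import itertools

def u_of_ratios(ratios):
    """U(n) written in terms of the adjacent period ratios q_{k+1,k}, as in Step 4."""
    prod = 1.0
    for q in ratios:
        prod *= q
    return sum(ratios) + 2.0 / prod - (len(ratios) + 1)

grid = [1.0 + i / 500 for i in range(501)]                 # candidate ratios in [1, 2]
best = min(itertools.product(grid, repeat=2), key=u_of_ratios)
print(best, u_of_ratios(best))
# ≈ (1.26, 1.26) and ≈ 0.7798: both ratios ≈ 2^(1/3), and U ≈ URM(3) = 3(2^(1/3) − 1).
```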
Proof of (1)

We show we can transform T step-by-step to get T′.

At each step, we find a task Tk whose period is such that lpk < pn ≤ (l+1)pk, where l is an integer that is at least 2. We modify (only) Tk and Tn as follows:

    p′k = lpk,   e′k = ek
    p′n = pn,    e′n = en + (l−1)ek

[Figure: timelines of Tk (releases at 0, pk, 2pk, …, lpk) and Tn (one release at 0 with period pn), before and after the transformation.]

Convince yourself that
• The resulting system is difficult-to-schedule.
• We eventually will get a system with a period ratio of at most 2.
Proof of (2)

It suffices to look at the difference between the utilization of the old and new system when one of the steps in the proof of (1) is applied.

This difference is:

    ek/pk − ek/(lpk) − (l−1)ek/pn
      = (1/(lpk) − 1/pn) (l−1)ek
      > 0, because lpk < pn.

This concludes the proof of (2) and (finally!) the proof of Theorem 6-11.


Other Utilization-based Tests

◆ The book presents several other utilization-based schedulability tests.
  • Some of these tests result in higher schedulable utilizations for certain kinds of task sets.
    – For example, Theorem 6-13 considers systems in which we can partition the tasks into subsets, where in each subset, tasks are simply periodic.
  • Others of these tests allow more detail into the model.
    – For example, Theorem 6-17 considers multi-frame tasks, which are tasks where execution costs vary from job to job. (The motivation for this was MPEG video.)
◆ You will have a better understanding of why people are interested in utilization-based tests later, when we talk about intractability.
Time-Demand Analysis (Section 6.5.2 of Liu)

◆ Time-demand analysis was proposed by Lehoczky, Sha, and Ding.
◆ TDA can be applied to produce a schedulability test for any fixed-priority algorithm that ensures that each job of every task completes before the next job of that task is released.
◆ For some important task models and scheduling algorithms, this schedulability test will be necessary and sufficient.


Scheduling Condition

Definition: The time-demand function of the task Ti, denoted wi(t), is defined as follows. (Note: we are still assuming tasks are indexed by priority.)

    wi(t) = ei + ∑_{k=1..i−1} ⌈t/pk⌉ ⋅ ek,  for 0 < t ≤ pi

For any fixed-priority algorithm A that ensures that each job of every task completes by the time the next job of that task is released…

Theorem: A system T of periodic, independent, preemptable tasks is schedulable on one processor by algorithm A if

    (∀i:: (∃t: 0 < t ≤ pi:: wi(t) ≤ t))

holds. This condition is also necessary for synchronous, real-world periodic task systems and also real-world sporadic (= periodic here) task systems.
We wish to show: (∀i:: (∃t: 0 < t ≤ pi:: wi(t) ≤ t)) ⇒ T is schedulable. Because Ji,k missed its deadline…
We prove the contrapositive, i.e., at all instants t in (t-1, ri,k+1], the demand placed on the processor in
T is not schedulable ⇒ (∃ i:: (∀ t: 0 < t ≤ pi:: wi(t) > t)). [t-1, t) by jobs of tasks T1, …, Ti is greater than the available
processor time in [t-1, t].
Assume T is not schedulable.
Let Ji,k be the first job to miss its deadline. Thus, for any t in (t-1, ri,k+1],
t − t −1
= available processor time in [t -1 , t]
< demand placed on the processor in [t -1 , t) by jobs of T1 ,..., Ti
i
= ∑ (the number of jobs of Tj released in [t -1 , t)) ⋅ e j
j=1
Ti
i 
t − t −1 
t-1 ri,k ri,k+1 ≤∑   ⋅ ej
j=1  p j 
this is the last “idle instant” for jobs of T1, …, Ti
Jim Anderson Real-Time Systems Static-Priority Scheduling - 45 Jim Anderson Real-Time Systems Static-Priority Scheduling - 46

45 46
Proof (Continued)

To recapitulate, we have, for any t in (t-1, ri,k+1],

    t − t-1 < ∑_{j=1..i} ⌈(t − t-1)/pj⌉ ⋅ ej

Replacing t − t-1 by t′ in (0, ri,k+1 − t-1], we have

    t′ < ∑_{j=1..i} ⌈t′/pj⌉ ⋅ ej

Because pi ≤ ri,k+1 − t-1, we have (∃i:: (∀t′: 0 < t′ ≤ pi:: wi(t′) > t′)).


Necessity and Efficiency

◆ The condition (∀i:: (∃t: 0 < t ≤ pi:: wi(t) ≤ t)) is necessary for
  » synchronous, real-world periodic task systems, and
  » real-world sporadic (= periodic here) task systems.
  » Why?
◆ For a given i, we don't really have to consider all t in the range 0 < t ≤ pi. Two ways to avoid this (see the iteration sketch below):
  » Iterate using "t(k+1) := wi(t(k))", starting with a suitable t(0), and stopping when, for some n, t(n) ≥ wi(t(n)) or t(n) > pi.
  » Only consider t = j⋅pk, where k = 1, 2, …, i; j = 1, 2, …, ⌊min(pi, Di)/pk⌋.
    • See Liu for an explanation of this.
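A sketch (mine) of the first shortcut: fixed-point iteration on wi, which yields the worst-case response time of Ti's first job at a critical instant, or reports failure once the iterate exceeds pi.

```python
import math

def rta_first_job(tasks, i):
    """Iterate t <- w_i(t) starting from t(0) = e_1 + ... + e_i.
    Returns the least fixed point (the first job's worst-case response
    time) if it is at most p_i, else None.  tasks = [(period, wcet), ...]
    in decreasing priority order; i is a 0-based index."""
    p_i, e_i = tasks[i]
    t = e_i + sum(e for _, e in tasks[:i])          # a suitable t(0)
    while t <= p_i:
        w = e_i + sum(math.ceil(t / p) * e for p, e in tasks[:i])
        if w <= t:                                  # w_i(t) <= t: done
            return w
        t = w                                       # otherwise keep iterating
    return None                                     # iterate exceeded p_i

tasks = [(3, 0.5), (4, 1), (6, 2)]                  # the RM example from earlier
print([rta_first_job(tasks, i) for i in range(3)])  # [0.5, 1.5, 4.0]
```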
Fixed-Priority Tasks with Arbitrary Response Times (Section 6.6 of Liu)

◆ The TDA scheduling condition is valid only if each job of every task completes before the next job of that task is released.
◆ We now consider a schedulability check due to Lehoczky in which tasks may have relative deadlines larger than their periods.
  » Note: In this model, a task may have multiple ready jobs. We assume they are scheduled on a FIFO basis.


Busy Intervals

Definition: A level-πi busy interval (t0, t] begins at an instant t0 when
(1) all jobs in Ti released before this instant have completed, and
(2) a job in Ti is released.
The interval ends at the first instant t after t0 when all jobs in Ti released since t0 are complete.

Example:

[Figure: a schedule of jobs in Ti with the busy interval marked from t0 to t.]
Busy Intervals (Continued)

◆ For any t that would qualify as the end of a level-πi busy interval, a corresponding t0 exists.
  » Why?
◆ During a level-πi busy interval, the processor only executes tasks in Ti; other tasks can be ignored.
◆ Definition: We say that a level-πi busy interval is in phase if the first jobs of all tasks that execute in the interval are released at the same time.


It Ain't So Easy …

For systems in which each task's relative deadline is at most its period, we argued that an upper bound on a task's response time could be computed by considering a "critical instant" scenario in which that task releases a job together with all higher-priority tasks.

In other words, we just consider the first job of each task in an in-phase system.

For many years, people just assumed this approach would work if a task's relative deadline could exceed its period.

Lehoczky showed that this "folk wisdom" (that only each task's first job must be considered) is false by means of a counterexample.
Lehoczky's Counterexample

Consider: T1 = (70, 26), T2 = (100, 62). [Note: the book has a typo here.]

Here's a schedule:

[Figure: the in-phase schedule of T1 (releases at 0, 70, 140, …, 700) and T2 (releases at 0, 100, 200, …, 700).]

T2's seven jobs have the following response times, respectively:
114, 102, 116, 104, 118, 106, 94.

Note that the first job's response time is not the longest.

Bottom Line: We have to consider all jobs in an in-phase busy interval.


General TDA Method

Test one task at a time starting with the highest-priority task T1 in order of decreasing priority. For the purpose of determining whether a task Ti is schedulable, assume that all the tasks are in phase and the first level-πi busy interval begins at time zero.

While testing whether all the jobs in Ti can meet their deadlines (i.e., whether Ti is schedulable), consider the subset Ti of tasks with priorities πi or higher.

(i) If the first job of every task in Ti completes by the end of the first period of that task, check whether the first job Ji,1 in Ti meets its deadline. Ti is schedulable if Ji,1 completes in time. Otherwise, Ti is not schedulable.

(ii) If the first job of some task in Ti does not complete by the end of the first period of the task, do the following.
  (a) Compute the length of the in-phase level-πi busy interval by solving the equation t = ∑_{k=1..i} ⌈t/pk⌉ ⋅ ek iteratively, starting from t(1) = ∑_{k=1..i} ek until t(l+1) = t(l) for some l ≥ 1. The solution t(l) is the length of the level-πi busy interval.
  (b) Compute the maximum response times of all ⌈t(l)/pi⌉ jobs of Ti in the in-phase level-πi busy interval in the manner described below and determine whether they complete in time. Ti is schedulable if all of these jobs complete in time; otherwise Ti is not schedulable.
Computing Response Times

Computing the response time of Ti's first job is almost like before.

The time-demand function is defined as follows:

    wi,1(t) = ei + ∑_{k=1..i−1} ⌈t/pk⌉ ⋅ ek,  for 0 < t ≤ wi,1(t)

(The range of t is the only difference from before.)

The maximum response time Wi,1 of Ji,1 is equal to the smallest t that satisfies the equation t = wi,1(t).

Can do this computation iteratively, as described before. (Is termination a problem?)


Response Times (Continued)

Lemma 6-6: The maximum response time Wi,j of the j-th job of Ti in an in-phase level-πi busy period is equal to the smallest value of t that satisfies the equation

    t = wi,j(t + (j − 1)pi) − (j − 1)pi

where

    wi,j(t) = j⋅ei + ∑_{k=1..i−1} ⌈t/pk⌉ ⋅ ek,  for (j − 1)pi < t ≤ wi,j(t)

The recurrence given in the lemma can be solved iteratively, as described before. (Once again, is termination a problem?)
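A sketch (mine, not from the slides) of the Lemma 6-6 iteration applied to the counterexample: it computes the in-phase level-π2 busy-interval length, the number of T2 jobs in it, and each job's maximum response time, reproducing the seven values listed earlier.

```python
import math

def busy_interval_length(tasks, i):
    """Length of the in-phase level-pi_i busy interval: the least t with
    t = sum_{k<=i} ceil(t/p_k)*e_k, found by iteration (0-based i)."""
    t = sum(e for _, e in tasks[:i + 1])
    while True:
        nxt = sum(math.ceil(t / p) * e for p, e in tasks[:i + 1])
        if nxt == t:
            return t
        t = nxt

def response_time(tasks, i, j, cap=10**6):
    """W_{i,j} from Lemma 6-6: the least t with
    t = w_{i,j}(t + (j-1)p_i) - (j-1)p_i, where
    w_{i,j}(t) = j*e_i + sum_{k<i} ceil(t/p_k)*e_k.  j is 1-based."""
    p_i, e_i = tasks[i]
    t = e_i
    while t < cap:
        w = (j * e_i
             + sum(math.ceil((t + (j - 1) * p_i) / p) * e for p, e in tasks[:i])
             - (j - 1) * p_i)
        if w == t:
            return t
        t = w
    return None

tasks = [(70, 26), (100, 62)]                 # Lehoczky's counterexample
length = busy_interval_length(tasks, 1)
njobs = math.ceil(length / tasks[1][0])
print(length, njobs)                          # 694 7
print([response_time(tasks, 1, j) for j in range(1, njobs + 1)])
# [114, 102, 116, 104, 118, 106, 94]; the first job is not the worst one.
```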
Example

Let's apply Lemma 6-6 to our previous example: T1 = (70, 26), T2 = (100, 62).

[Figure: the schedule of T1 (releases at 0, 70, …, 700) and T2 (releases at 0, 100, …, 700).]

W2,1 = minimum t s.t.
    t = w2,1(t)
      = e2 + ∑_{k=1..i−1} ⌈t/pk⌉ ⋅ ek
      = 62 + ⌈t/70⌉ ⋅ 26

Is t = 114 a solution?
    114 = 62 + ⌈114/70⌉ ⋅ 26 = 62 + 2 ⋅ 26 = 114.  Yes!


Example (Continued)

T1 = (70, 26), T2 = (100, 62)

W2,2 = minimum t s.t.
    t = w2,2(t + p2) − p2
      = 2 ⋅ e2 + ∑_{k=1..i−1} ⌈(t + 100)/pk⌉ ⋅ ek − 100
      = 124 + ⌈(t + 100)/70⌉ ⋅ 26 − 100

Is t = 102 a solution?
    102 = 124 + ⌈202/70⌉ ⋅ 26 − 100 = 124 + 3 ⋅ 26 − 100 = 102.  Yes!
Example (Continued)

T1 = (70, 26), T2 = (100, 62)

[Figure: the schedule of T1 and T2 again.]

W2,3 = minimum t s.t.
    t = w2,3(t + 2⋅p2) − 2⋅p2
      = 3 ⋅ e2 + ∑_{k=1..i−1} ⌈(t + 200)/pk⌉ ⋅ ek − 200
      = 186 + ⌈(t + 200)/70⌉ ⋅ 26 − 200

Is t = 116 a solution?
    116 = 186 + ⌈316/70⌉ ⋅ 26 − 200 = 186 + 5 ⋅ 26 − 200 = 116.  Yes!


Correctness of the General Schedulability Test

The general schedulability test hinges upon the assumption that the job with the maximum response time occurs within an in-phase busy interval.

We must confirm that this is so.

Aside: In steps (i) and (ii)-(b), the conclusion that "Ti is not schedulable" is stated. In other words, Liu is presenting this as a necessary and sufficient schedulability test. Why is it OK to conclude that the test is necessary?
Correctness (Continued)

Correctness follows from several lemmas…

Lemma 6-7: Let t0 be a time instant at which a job of every task in Ti is released. All the jobs in Ti released prior to t0 have been completed at t0.

Proof Sketch:
Let t-1 be the beginning of the latest busy interval.
Demand created by jobs of tasks in Ti released in [t-1, t0) must be fulfilled in [t-1, t0], or else Ui exceeds 1.
This is similar to the proof of Theorem 6-3.

Note: To reach the stated conclusion about utilization, we must avoid introducing those nasty ceiling operators like in the proof of the original TDA schedulability test. Why is this not a problem here?


Correctness (Continued)

Lemma 6-8: When a system of independent, preemptive periodic tasks is scheduled on a processor according to a fixed-priority algorithm, the time-demand function wi,1(t) of a job in Ti released at the same time with a job in every higher-priority task is given by

    wi,1(t) = ei + ∑_{k=1..i−1} ⌈t/pk⌉ ⋅ ek,  for 0 < t ≤ wi,1(t).

This should be pretty obvious to you by now …
Correctness (Continued)

Theorem 6-9: The response time Wi,j of the j-th job of Ti executed in an in-phase level-πi busy interval is no less than the response time of the j-th job of Ti executed in any level-πi busy interval.

This is just the good old critical instant argument …


Correctness (Continued)

Lemma 6-10: The number of jobs in Ti that are executed in an in-phase level-πi busy interval is never less than the number of jobs in this task that are executed in a level-πi busy interval of arbitrary phase.

Intuitively, demand on the processor is maximal for a busy interval that is in phase.

Thus, an in-phase busy interval should never be "shorter" than an arbitrary busy interval.
Wrapping Up

◆ Convince yourself that Lemmas 6-7 through 6-10 imply that we need only look at the jobs that execute within an in-phase busy interval.

◆ Think carefully about necessity.
  » For which task models is the schedulability test necessary?
  » For which is it not necessary?