
SCHEDULING PERIODIC TASKS

The simple case: Cyclic execution

Repeat a set of aperiodic tasks at a specific rate (cycle)

Periodic tasks

Each instance of a task is characterized by:
  Arrival time
  R: release time
  C: computing time
  F: finishing/response time
  T: period
  D: deadline
(Figure: time line of one task instance, from arrival to finishing time.)

Periodic tasks (the simplified case)

Each instance arrives at the start of its period and is scheduled to run somewhere within the period:
  R: release time
  C: computing time
  T: period
  D: deadline
(Figure: time line of the simplified case, marking arrival time, computing time and finishing/response time.)

Assumptions on task sets

Each task is released at a given constant rate
  Given by the period T
All instances of a task have:
  The same worst case computing time: C (reasonable)
  The same relative deadline: D=T (not a restriction)
  The same relative arrival time: A=0 (not a restriction)
  The same release time: R=0, released as soon as they arrive
All tasks are independent
  No sharing of resources (consider this later)
All overheads in the kernel are assumed to be zero
  E.g. context switch etc (consider this later)

Periodic task model

A task = (C, T)
  C: worst case execution time/computing time (C <= T!)
  T: period (D = T)
The simplest case: a task = T
  C = unknown, D = T
A task set: {(Ci, Ti)}
All tasks are independent
The periods of all tasks start at 0 simultaneously (not necessary)
CPU utilization

C/T is the CPU utilization of a task
U = Σ Ci/Ti is the CPU utilization of a task set
Note that the CPU utilization is a measure of how busy the processor could be during the shortest repeating cycle, LCM(T1, T2, ..., Tn)
U > 1 (overload): some task will fail to meet its deadline no matter what algorithm you use!
U <= 1: it will depend on the scheduling algorithm
  If U = 1 and the CPU is kept busy (non-idle algorithms, e.g. EDF), all deadlines will be met

Scheduling periodic tasks

Assume a set of independent periodic tasks: {(Ci, Ti)}
Schedulability analysis:
  Is it possible to meet all deadlines in all periods?
  A task set is schedulable/feasible if it can be scheduled so that all instances of all tasks meet their deadlines
If yes,
  How to schedule all task instances to meet all deadlines?
  Optimal scheduling algorithms?
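(Illustration, not from the slides: a minimal Python sketch of the utilization computation and the overload check above. The (C, T) pairs follow the task model; Fraction is used only to keep U exact.)

from fractions import Fraction

def utilization(tasks):
    """CPU utilization U = sum(Ci/Ti) of a task set given as (C, T) pairs."""
    return sum(Fraction(c, t) for c, t in tasks)

# A task set used in later examples: (Ci, Ti) pairs.
tasks = [(20, 100), (40, 150), (100, 350)]
U = utilization(tasks)
print(float(U))   # about 0.752 (the slides round the per-task terms and get 0.753)
print(U > 1)      # False: not overloaded; U > 1 would mean some deadline must be missed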

Solutions: SCS, EDF, RMS, DMS

Static Cyclic Scheduling (SCS)
Earliest Deadline First (EDF)
Rate Monotonic Scheduling (RMS)
Deadline Monotonic Scheduling (DMS)

Static cyclic scheduling

Shortest repeating cycle = least common multiple (LCM) of the periods
Within the cycle, it is possible to construct a static schedule, i.e. a time table
Schedule task instances according to the time table within each cycle
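(Illustration, not from the slides: the cycle length is just the LCM of the periods. A one-line sketch, assuming Python 3.9+ for math.lcm.)

import math

def hyperperiod(periods):
    """Length of the shortest repeating cycle = LCM of the task periods."""
    return math.lcm(*periods)

print(hyperperiod([20, 40, 80]))   # 80: the car controller cycle used below
print(hyperperiod([5, 10, 25]))    # 50
print(hyperperiod([7, 13, 23]))    # 2093: much bigger, see the later example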

Example: the Car Controller

Activities of a car control system. Let
  C = worst case execution time
  T = (sampling) period
  D = deadline
1. Speed measurement: C=4ms, T=20ms, D=20ms
2. ABS control: C=10ms, T=40ms, D=40ms
3. Fuel injection: C=40ms, T=80ms, D=80ms
Other software with soft deadlines, e.g. audio, air conditioning etc.

The car controller: static cyclic scheduling

The cycle = 80ms
The tasks within the cycle:
(Figure: time line 0-20-40-60-80ms with Speed scheduled in every 20ms slot, ABS in every 40ms slot, and Fuel filling the remaining time.)
The car controller: time table

(Figure: the 80ms cycle drawn as a circle divided into slots: speed at 0-4, 20-24, 40-44 and 60-64; ABS at 4-14 and 44-54; fuel injection split into FUEL-1 at 14-20, FUEL-2 at 24-40, FUEL-3 at 54-60 and FUEL-4 at 64-76; soft RT tasks at 76-80. A feasible schedule!)

Static cyclic scheduling: + and –

Deterministic: predictable (+)
Easy to implement (+)
Inflexible (-)
  Difficult to modify, e.g. adding another task
  Difficult to handle external events
The table can be huge (-)
  Huge memory usage
  Difficult to construct the time table
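(Illustration, not from the slides: a minimal sketch of how such a time table could be executed at run time. The slot boundaries follow the reconstructed figure above; the job functions are hypothetical placeholders, and a real kernel would use a timer interrupt rather than sleep.)

import time

def speed(): pass            # placeholder for the real control code
def abs_control(): pass
def fuel_part(n): pass
def soft_tasks(): pass

# One 80 ms cycle, encoded as (start offset in ms, job) entries.
TIME_TABLE = [
    (0, speed), (4, abs_control), (14, lambda: fuel_part(1)),
    (20, speed), (24, lambda: fuel_part(2)),
    (40, speed), (44, abs_control), (54, lambda: fuel_part(3)),
    (60, speed), (64, lambda: fuel_part(4)), (76, soft_tasks),
]
CYCLE_MS = 80

def cyclic_executive(cycles=1):
    """Run the static schedule: start each entry at its offset within the cycle."""
    t0 = time.monotonic()
    for n in range(cycles):
        for offset_ms, job in TIME_TABLE:
            release = t0 + (n * CYCLE_MS + offset_ms) / 1000.0
            time.sleep(max(0.0, release - time.monotonic()))
            job()

cyclic_executive(cycles=2)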

Example: shortest repeating cycle

OBS: The LCM determines the size of the time table
  LCM = 50ms for tasks with periods 5ms, 10ms and 25ms
  LCM = 7*13*23 = 2093ms for tasks with periods 7ms, 13ms and 23ms (very much bigger)
So if possible, manipulate the periods so that they are multiples of each other
  Easier to find a feasible schedule, and
  Reduces the size of the static schedule, thus less memory usage

Earliest Deadline First (EDF)

Task model
  A set of independent periodic tasks (not necessarily the simplified task model)
EDF:
  Whenever a new task arrives, sort the ready queue so that the task closest to the end of its period is assigned the highest priority
  Preempt the running task if it is not placed first in the queue by the last sorting
FACT 1: EDF is optimal
  EDF can schedule the task set if any other algorithm can
FACT 2 (Schedulability test):
  Σ Ci/Ti <= 1 iff the task set is schedulable
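(Illustration, not from the slides: a small discrete-time EDF simulation, assuming integer (C, T) parameters and D = T, so the absolute deadline of a job is the end of its period. At each time unit the ready job with the earliest absolute deadline runs.)

import math

def edf_simulate(tasks, horizon=None):
    """Simulate EDF for periodic (C, T) tasks with D = T, one time unit at a time.
    Returns (task index, time) of the first deadline miss, or None."""
    if horizon is None:
        horizon = math.lcm(*(t for _, t in tasks))   # one hyperperiod
    remaining = [0] * len(tasks)                     # unfinished work of the current job
    deadline = [0] * len(tasks)                      # absolute deadline of the current job
    for now in range(horizon):
        for i, (c, t) in enumerate(tasks):
            if now % t == 0:                         # new period: release a new job
                if remaining[i] > 0:
                    return (i, now)                  # previous job missed its deadline
                remaining[i], deadline[i] = c, now + t
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            run = min(ready, key=lambda i: deadline[i])   # earliest deadline first
            remaining[run] -= 1
    return next(((i, horizon) for i in range(len(tasks)) if remaining[i] > 0), None)

print(edf_simulate([(2, 5), (4, 7)]))   # None: U = 34/35 <= 1, all deadlines met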

Example

Task set: {(2,5),(4,7)}
U = 2/5 + 4/7 = 34/35 ≈ 0.97 (schedulable!)
(Figure: the EDF schedule of the two tasks over the time lines 0..35; all deadlines are met.)

EDF: + and –

Note that this is just the simple EDF algorithm; it works for all types of tasks, periodic or non-periodic
It is simple and works nicely in theory (+)
Simple schedulability test: U <= 1 (+)
Optimal (+)
Best CPU utilization (+)
Difficult to implement in practice; it is not very often adopted, due to the dynamic priority assignment (expensive to sort the ready queue on-line), which has nothing to do with the periods of the tasks. Note that any task could get the highest priority (-)
Non-stable: if any task instance fails to meet its deadline, the system is not predictable; any instance of any task may fail (-)

We use periods to assign static priorities: RMS
Rate Monotonic Scheduling: task model

Assume a set of periodic tasks: {(Ci,Ti)}
  Di = Ti
  Tasks are always released at the start of their periods
  Tasks are independent

RMS: fixed/static-priority scheduling

Rate Monotonic Fixed-Priority Assignment:
  Tasks with smaller periods get higher priorities
Run-Time Scheduling:
  Preemptive highest priority first
FACT: RMS is optimal in the sense:
  If a task set is schedulable with any fixed-priority scheduling algorithm, it is also schedulable with RMS

Example
{(20,100),(40,150),(100,350)}, Pr(T1)=1, Pr(T2)=2, Pr(T3)=3
(Figure: the RM schedule over 0..350: T1 runs 20 at the start of each period of 100, T2 runs 40 within each period of 150, and T3 gets the remaining slots of 40, 30, 10 and 20 within its period of 350.)

Example
Task set: T1=(2,5), T2=(4,7)
U = 2/5 + 4/7 = 34/35 ≈ 0.97 (schedulable?)
RMS priority assignment: Pr(T1)=1, Pr(T2)=2
(Figure: T1 runs at 0-2, 5-7, ...; T2 runs at 2-5 and has executed only 3 of its 4 units by time 7.)
Missing the deadline!
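(Illustration, not from the slides: the same discrete-time simulation idea as the EDF sketch, but with fixed rate-monotonic priorities, i.e. the ready task with the smallest period always runs. It reproduces the deadline miss of the second example.)

import math

def rm_simulate(tasks, horizon=None):
    """Simulate preemptive fixed-priority (rate monotonic) scheduling for
    periodic (C, T) tasks with D = T. Returns the first deadline miss or None."""
    if horizon is None:
        horizon = math.lcm(*(t for _, t in tasks))
    remaining = [0] * len(tasks)
    for now in range(horizon):
        for i, (c, t) in enumerate(tasks):
            if now % t == 0:
                if remaining[i] > 0:
                    return (i, now)                  # job not finished by its deadline (= T)
                remaining[i] = c
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            run = min(ready, key=lambda i: tasks[i][1])   # smallest period = highest priority
            remaining[run] -= 1
    return next(((i, horizon) for i in range(len(tasks)) if remaining[i] > 0), None)

print(rm_simulate([(2, 5), (4, 7)]))                    # (1, 7): T2 misses its deadline at 7
print(rm_simulate([(20, 100), (40, 150), (100, 350)]))  # None: all deadlines met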

RMS: schedulability test

U < 1 doesn't imply 'schedulable' with RMS
  OBS: the previous example is schedulable by EDF but not by RMS
It is always possible in theory to construct a schedule for the shortest repeating cycle (LCM) and check whether the schedule is feasible, BUT this may be difficult/impossible in practice if the number of tasks is too big
Idea: utilization bound
  Given a task set S, find X(S) such that U <= X(S) if and only if S is schedulable by RMS (necessary and sufficient test)
  Note that the bound X(S) for EDF is 1

The famous Utilization Bound test (UB test)
[by Liu and Layland, 1973: a classic result]

Assume a set of n independent tasks: S = {(C1,T1),(C2,T2),...,(Cn,Tn)} and U = Σ Ci/Ti

FACT: if U <= n*(2^(1/n) - 1), then S is schedulable by RMS

Note that the bound depends only on the size of the task set
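(Illustration, not from the slides: the UB test as a small Python function; the three possible outcomes match the classification on the next slide.)

def liu_layland_bound(n):
    """B(n) = n * (2^(1/n) - 1), the RMS utilization bound for n tasks."""
    return n * (2 ** (1.0 / n) - 1)

def ub_test(tasks):
    """Sufficient-only UB test for RMS; tasks are (C, T) pairs with D = T."""
    u = sum(c / t for c, t in tasks)
    if u > 1:
        return "overload"
    return "schedulable" if u <= liu_layland_bound(len(tasks)) else "no conclusion"

print(round(liu_layland_bound(3), 3))                 # 0.78  (B(3) ≈ 0.779)
print(ub_test([(20, 100), (40, 150), (100, 350)]))    # schedulable (U ≈ 0.75)
print(ub_test([(1, 3), (1, 5), (1, 6), (2, 10)]))     # no conclusion (U = 0.9 > B(4))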
UB test is only sufficient, not necessary!

Three possible outcomes:
  0 <= U <= B(n): schedulable
  B(n) < U <= 1: no conclusion
  1 < U: overload
Thus, the test may be too conservative; a more precise test (in fact, an exact test) will be given later
(Unfortunately, it is not obvious how to calculate the necessary utilization bound for a given task set)

Example: Utilization bounds

Let U = Σ Ci/Ti and B(n) = n*(2^(1/n) - 1)

B(1) = 1.0    B(4) = 0.756    B(7) = 0.728
B(2) = 0.828  B(5) = 0.743    B(8) = 0.724
B(3) = 0.779  B(6) = 0.734    B(∞) = 0.693

Note that B(∞) = 0.693!

Example: applying UB Test

{(20,100),(40,150),(100,350)}

         C    T (D=T)   C/T
Task 1   20   100       0.200
Task 2   40   150       0.267
Task 3   100  350       0.286

Total utilization: U = 0.2 + 0.267 + 0.286 = 0.753 < B(3) = 0.779!
The task set is schedulable

Example: Run-time RM Scheduling

(Figure: the run-time RM schedule of {(20,100),(40,150),(100,350)} over 0..350, as in the earlier example.)

Example: UB test is sufficient, not necessary

Assume a task set: {(1,3),(1,5),(1,6),(2,10)}
CPU utilization U = 1/3 + 1/5 + 1/6 + 2/10 = 0.9
The utilization bound B(4) = 0.756
The task set fails the UB test since U > B(4)
Question: is the task set schedulable?
Answer: YES

Response times?

{(1,3),(1,5),(1,6),(2,10)}  Worst case? First period? Why?
(Figure: the four tasks drawn on time lines with periods 3, 5, 6 and 10.)
How to deal with tasks with the same period

What should we do if tasks have the same period?
Should we assign the same priority to the tasks?
How about the UB test? Is it still sufficient?
What happens at run time?

RMS: Summary

Task model:
  Periodic, independent, D=T, and a task = (Ci,Ti)
Fixed-priority assignment:
  Smaller periods = higher priorities
Run-time scheduling: Preemptive HPF
Sufficient schedulability test: U <= n*(2^(1/n) - 1)
Precise/exact schedulability test exists

RMS: + and –

Simple to understand (and remember!) (+)
Easy to implement (static/fixed priority assignment) (+)
Stable: though some of the lower-priority tasks may fail to meet their deadlines, the others may still meet theirs (+)
"Lower" CPU utilization (-)
Requires D=T (-)
Only deals with independent tasks (-)
Non-precise schedulability analysis (-)
But these are not really disadvantages; they can be fixed (+++)
  We can solve all these problems except the "lower" utilization

Critical instant: an important observation

Note that in our examples, we have assumed that all tasks are released at the same time: this is to consider the critical instant (the worst case scenario)
If tasks meet their first deadlines (in the first periods), they will do so in the future (why?)
The critical instant of a task is the time at which the release of the task will yield the largest response time. It occurs when the task is released simultaneously with the higher priority tasks.
Note that the start of a task's period is not necessarily the same as any of the other tasks' periods: but the delay between two releases should be equal to the constant period (otherwise we have jitter)

Sufficient and necessary schedulability analysis

Simple ideas [Mathai Joseph and Paritosh Pandya, 1986]:
  Critical instant: the worst case response time for all tasks is given when all tasks are released at the same time
  Calculate the worst case response time R for each task with deadline D. If R <= D, the task is schedulable/feasible.
  Repeat the same check for all tasks
  If all tasks pass the test, the task set is schedulable
  If some tasks pass the test, they will meet their deadlines even if the others don't (stable and predictable)
Question:
  How to calculate the worst case response times?

Worst case response time calculation: example

Response times? {(1,3),(1,5),(1,6),(2,10)}  Worst case? First period? Why?
(Figure: the four tasks drawn on time lines with periods 3, 5, 6 and 10 over their first periods. We did this before! What to do if there are too many instances?)
Worst case response time calculation: example

Response times? {(1,3),(1,5),(1,6),(2,10)}  Worst case? First period? Why?
(Figure: the first-period schedule of the four tasks, giving worst case response times WCR=1, WCR=2, WCR=3 and WCR=9. You don't have to check beyond this area! What to do if there are too many instances?)

Calculation of worst case response times
[Mathai Joseph and Paritosh Pandya, 1986]

Let Ri stand for the response time of task i. Then
  Ri = Ci + Σ j I(i,j)
  Ci is the computing time
  I(i,j) is the so-called interference of task j on task i
    I(i,j) = 0 if task i has higher priority than j
    I(i,j) = ⌈Ri/Tj⌉*Cj if task i has lower priority than j
  ⌈x⌉ denotes the least integer that is not smaller than x
    E.g. ⌈3.2⌉ = 4, ⌈3⌉ = 3, ⌈1.9⌉ = 2
Ri = Ci + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj

Intuition on the equation

Ri = Ci + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj
  ⌈Ri/Tj⌉ is the number of instances of task j released during Ri
  ⌈Ri/Tj⌉*Cj is the time needed to execute all instances of task j released within Ri
  Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj is the time needed to execute all instances of tasks with higher priority than task i, released during Ri
  Ri is thus the sum of the time required to execute the higher-priority task instances released during Ri and task i's own computing time

Equation solving and schedulability analysis

We need to solve the equation:
  Ri = Ci + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj
This can be done by numerical methods to compute the fixed point of the equation, e.g. by iteration: let
  Ri(0) = Ci + Σ j ∈ HP(i) Cj = C1+C2+...+Ci (the first guess)
  Ri(k+1) = Ci + Σ j ∈ HP(i) ⌈Ri(k)/Tj⌉*Cj (the (k+1)th guess)
The iteration stops when either
  Ri(m+1) > Ti: non-schedulable, or
  Ri(m+1) = Ri(m) <= Ti: schedulable
This is the so-called Precise test
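(Illustration, not from the slides: the fixed-point iteration above as a Python function, assuming integer parameters and tasks listed in priority order, highest priority first.)

def response_time(i, tasks):
    """Worst case response time Ri of task i by iteration:
    Ri = Ci + sum over higher-priority j of ceil(Ri/Tj)*Cj.
    tasks: (C, T) pairs in priority order. Returns None if Ri would exceed Ti."""
    c_i, t_i = tasks[i]
    hp = tasks[:i]                                    # higher-priority tasks
    r = c_i + sum(c for c, _ in hp)                   # first guess: C1 + ... + Ci
    while True:
        r_next = c_i + sum(((r + t - 1) // t) * c     # (r+t-1)//t = integer ceil(r/t)
                           for c, t in hp)
        if r_next > t_i:
            return None                               # exceeds the period: not schedulable
        if r_next == r:
            return r                                  # fixed point reached
        r = r_next

tasks = [(1, 3), (1, 5), (1, 6), (2, 10)]             # in RM priority order
print([response_time(i, tasks) for i in range(len(tasks))])                  # [1, 2, 3, 9]
print(all(response_time(i, tasks) is not None for i in range(len(tasks))))   # True: schedulable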

Example (response time calculation)

Assume a task set: {(1,3),(1,5),(1,6),(2,10)}
Question: is the task set schedulable?
Answer: YES
Because
  R1(1) = R1(0) = C1 = 1 (done)
  R2(0) = C2 + C1 = 2
  R2(1) = C2 + ⌈R2(0)/T1⌉*C1 = 1 + ⌈2/3⌉*1 = 2 (done)

Exercise

Calculate R3 and R4 for the above example
Construct the run-time RMS schedule and check if your calculation is correct
Example (combine UB test and precise test)

Consider the task set: {(1,3),(1,5),(1,6),(3,10)} (this is not the previous example!)
CPU utilization U = 1/3 + 1/5 + 1/6 + 3/10 = 1.0 > B(4) = 0.756
Fails the UB test!
But U(3) = 1/3 + 1/5 + 1/6 = 0.7 < B(3) = 0.779
This means that the first 3 tasks are schedulable
  Thus we do not need to calculate R1, R2, R3!
Question: is task 4 schedulable?
  R4(0) = C1+C2+C3+C4 = 6
  R4(1) = C4 + ⌈R4(0)/T1⌉*C1 + ⌈R4(0)/T2⌉*C2 + ⌈R4(0)/T3⌉*C3
        = 3 + ⌈6/3⌉*1 + ⌈6/5⌉*1 + ⌈6/6⌉*1 = 3+2+2+1 = 8
  R4(2) = C4 + ⌈R4(1)/T1⌉*C1 + ⌈R4(1)/T2⌉*C2 + ⌈R4(1)/T3⌉*C3
        = 3 + ⌈8/3⌉*1 + ⌈8/5⌉*1 + ⌈8/6⌉*1 = 3+3+2+2 = 10
  R4(3) = 3 + ⌈10/3⌉*1 + ⌈10/5⌉*1 + ⌈10/6⌉*1 = 3+4+2+2 = 11 > T4 = 10
  (task 4 is not schedulable!)

Combine UB and Precise tests

Order tasks according to their priorities (periods)
Use the UB test as far as you can, until you find the first task for which it fails
Calculate the response time for that task and all tasks with lower priority
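(Illustration, not from the slides: a sketch of this combined procedure, assuming tasks are given as (C, T) pairs in priority order with D = T.)

def combined_test(tasks):
    """Combine the UB test and the precise (response time) test.
    Walk down the priority order with the UB test; from the first task whose
    prefix fails the bound, fall back to response time calculation."""
    bound = lambda n: n * (2 ** (1.0 / n) - 1)
    ceil_div = lambda a, b: -(-a // b)
    u = 0.0
    for i, (c, t) in enumerate(tasks):
        u += c / t
        if u <= bound(i + 1):
            continue                       # tasks 0..i pass the UB test
        # UB test inconclusive from task i on: compute Ri explicitly
        r = sum(cj for cj, _ in tasks[: i + 1])
        while True:
            r_next = c + sum(ceil_div(r, tj) * cj for cj, tj in tasks[:i])
            if r_next > t:
                return False               # task i misses its deadline (D = T)
            if r_next == r:
                break                      # Ri found, within the period
            r = r_next
    return True

print(combined_test([(1, 3), (1, 5), (1, 6), (3, 10)]))   # False: task 4 is not schedulable
print(combined_test([(40, 100), (40, 150), (100, 350)]))  # True: matches the next example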

Example

         C    T    C/T
Task 1   40   100  0.400
Task 2   40   150  0.267
Task 3   100  350  0.286

Total utilization: U = 0.4 + 0.267 + 0.286 = 0.953 > B(3) = 0.779!
The UB test is inconclusive: we need the Precise test
But we do have U1 + U2 = 0.4 + 0.267 = 0.667 < B(2) = 0.828
So we need to calculate R3 only!

Calculate response time for task 3

R3(0) = C1 + C2 + C3 = 180
R3(1) = C3 + ⌈R3(0)/T1⌉*C1 + ⌈R3(0)/T2⌉*C2
      = 100 + ⌈180/100⌉*40 + ⌈180/150⌉*40
      = 100 + 2*40 + 2*40 = 260
R3(2) = C3 + ⌈R3(1)/T1⌉*C1 + ⌈R3(1)/T2⌉*C2
      = 100 + ⌈260/100⌉*40 + ⌈260/150⌉*40 = 300
R3(3) = C3 + ⌈R3(2)/T1⌉*C1 + ⌈R3(2)/T2⌉*C2
      = 100 + ⌈300/100⌉*40 + ⌈300/150⌉*40 = 300 (done)

Task 3 is schedulable, and so are the others!

Question: other priority assignments

Could we calculate the response times by the same equation for a different priority assignment?

Precedence constraints

How to handle precedence constraints?
We can always try the 'old' method: static cyclic scheduling!
Alternatively, take the precedence constraints (DAG) into account in the priority assignment: the priority ordering must satisfy the precedence constraints
The precise schedulability test remains valid: use the same method as before to calculate the response times
Summary: Three ways to check schedulability

1. UB test
2. Response time calculation
3. Construct a schedule for the first periods
   Assume the first instances arrive at time 0 (critical instant)
   Draw the schedule for the first periods
   If all tasks are finished before the end of the first periods: schedulable, otherwise NO

Summary: UB and precise test

UB test is simple but conservative
Response time test is precise
Both share the same limitations:
  D=T (deadline = period)
  Independent tasks
  No interrupts
  Zero context switch overhead: OH=0
  No synchronization between tasks!

Extensions to the basic RMS

Deadline <= Period
Interrupt handling
Non-zero overhead for context switch
Non-preemptive sections
Sharing resources

RMS for tasks with D <= T

RMS is no longer optimal (example?)
The utilization bound test must be modified
The response time test is still applicable
  Assuming that fixed-priority assignment is adopted
The principles of considering the critical instant and checking the first deadlines are still applicable

Deadline Monotonic Scheduling (DMS)
[Leung et al, 1982]

Task model: the same as for RMS but Di <= Ti
Priority assignment: tasks with shorter deadlines are assigned higher priorities
Run-time scheduling: preemptive HPF
FACTS:
  DMS is optimal
  RMS is a special case of DMS
  DMS is often referred to as Rate Monotonic Scheduling for historical reasons, and they are very similar

Example

         C    T    D
Task 1   1    4    3
Task 2   1    5    5
Task 3   2    6    4
Task 4   1    11   10

(Figure: the DMS schedule of the four tasks over their first periods.)
R1 = 1
R2 = 4
R3 = 3
R4 = 10
DMS: Schedulability analysis

UB test (sufficient):
  Σ Ci/Di <= n*(2^(1/n) - 1) implies schedulable by DMS
Precise test (exactly the same as for RMS):
  Response time calculation: Ri = Ci + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj
  Ri(0) = Ci + Σ j ∈ HP(i) Cj = C1+C2+...+Ci (the first guess)
  Ri(k+1) = Ci + Σ j ∈ HP(i) ⌈Ri(k)/Tj⌉*Cj (the (k+1)th guess)
  The iteration stops when either
    Ri(m+1) > Di: non-schedulable, or
    Ri(m+1) = Ri(m) <= Di: schedulable

DMS: Schedulability analysis

Assume that all tasks (or rather their first instances) arrive at time 0: the critical instant
Construct a schedule for the first periods: draw the diagram!
If all tasks meet their deadlines within the first periods: schedulable, otherwise not!
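(Illustration, not from the slides: the DMS precise test, i.e. the same iteration checked against the deadline Di; the (C, T, D) tuple layout is this sketch's own choice.)

def dms_response_times(tasks):
    """Precise test for DMS: tasks are (C, T, D) triples in priority order
    (shortest deadline first). Returns each task's worst case response time,
    or None for a task whose iteration exceeds its deadline."""
    results = []
    for i, (c, t, d) in enumerate(tasks):
        hp = tasks[:i]
        r = c + sum(cj for cj, _, _ in hp)
        while True:
            r_next = c + sum(((r + tj - 1) // tj) * cj for cj, tj, _ in hp)
            if r_next > d:
                r = None                   # misses its deadline
                break
            if r_next == r:
                break                      # fixed point: Ri found
            r = r_next
        results.append(r)
    return results

# The example above, ordered by deadline: T1 (D=3), T3 (D=4), T2 (D=5), T4 (D=10).
tasks = [(1, 4, 3), (2, 6, 4), (1, 5, 5), (1, 11, 10)]
print(dms_response_times(tasks))   # [1, 3, 4, 10], i.e. R1=1, R3=3, R2=4, R4=10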

Summary: 3 ways for DMS schedulability check

UB test (sufficient, may be inconclusive)
Response time calculation
Draw the schedule for the first periods

EDF for tasks with D <= T

You can always use EDF, and it is always optimal for scheduling tasks with deadlines
We have a precise UB test for EDF for tasks with Di = Ti:
  U <= 1 iff the task set is schedulable
Unfortunately, for tasks with Di <= Ti, the schedulability analysis is more complicated (out of the scope of this course; further reading: [Giorgio Buttazzo's book])
We can always check the whole LCM

Summary: schedulability analysis

Static/Fixed priority, Di = Ti: RMS
  Sufficient test: Σ Ci/Ti <= n*(2^(1/n) - 1)
  Precise test: Ri = Ci + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj, check Ri <= Ti
Static/Fixed priority, Di <= Ti: DMS
  Sufficient test: Σ Ci/Di <= n*(2^(1/n) - 1)
  Precise test: Ri = Ci + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj, check Ri <= Di
Dynamic priority, Di = Ti: EDF
  Precise test: Σ Ci/Ti <= 1
Dynamic priority, Di <= Ti: EDF
  ?

Handling context switch overheads in schedulability analysis

Assume that
  Cl is the extra time required to load the context of the new task (load contents of registers etc from the TCB)
  Cs is the extra time required to save the context of the current task (save contents of registers etc to the TCB)
Note that in most cases Cl = Cs; it is a parameter depending on the hardware
(Figure: Task 1 is preempted by Task 2; each dispatch/context switch costs Cs to save the outgoing context and Cl to load the incoming one.)
Handling context switch overheads?

Thus, the real computing time of a task should be
  Ci* = Ci + Cl + Cs
The schedulability analysis techniques we studied so far are applicable if we use the new computing time Ci*
Unfortunately this is not right

Handling context switch

Ri = Ci + 2Ccs + Σ j ∈ HP(i) ⌈Ri/Tj⌉*(Cj + 2Ccs)
  This is wrong!

Ri = Ci + 2Ccs + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj + Σ j ∈ HP(i) ⌈Ri/Tj⌉*4Ccs
   = Ci + 2Ccs + Σ j ∈ HP(i) ⌈Ri/Tj⌉*(Cj + 4Ccs)
  This is right

Handling interrupts: problem and example

Task 0 is the interrupt handler, with highest priority

            C    T=D
IH, task 0  60   200
Task 1      10   50
Task 2      40   250

(Figure: the interrupt handler runs 0-60; Task 1, released at 0 with deadline 50, cannot start before 60 and finishes at 70: it misses its deadline at 50, response time = 70.)
How to calculate the worst case response time?

Handling interrupts: solution

Whenever possible: move code from the interrupt handler to a special application task with the same rate as the interrupt handler, to make the interrupt handler (with high priority) as short as possible
Interrupt processing can be inconsistent with RM priority assignment, and therefore can affect the schedulability of the task set (previous example)
  The interrupt handler runs with high priority despite its period
  Interrupt processing may delay tasks with shorter periods (deadlines)

Handling interrupts: example

Task 0 is the interrupt handler, with highest priority

         C    T=D
IH       10   200
Task 1   10   50
Task 2   40   150
Task 3   50   200

(Figure: time lines of the interrupt handler and Tasks 1-3 over 0-200, with the shortened interrupt handler now only 10 long.)

Handling non-preemptive sections

So far, we have assumed that all tasks are fully preemptive. This is not always the case: e.g. the code for a context switch, though it may be short, and the short part of the interrupt handler we considered before
Some sections of a task may be non-preemptive regions of code
In general, we may assume an extra parameter B in the task model, which is the computing time of the non-preemptive section of a task
  Bi = computing time of the non-preemptive section of task i
Handling non-preemptive sections: Problem and Example

Task 3 is an interrupt handler with highest priority
Task 4 has a non-preemptive section of length 20

         C    T=D   blocking   blocked
Task 1   20   100   0          20
Task 2   40   150   0          20
Task 3   60   200   0          20
Task 4   40   350   20         0

(Figure: Task 4's non-preemptive/non-interruptible section of 20 delays the higher-priority tasks; Task 2 misses its deadline at 150.)

Handling non-preemptive sections: Response time calculation

The equation for response time calculation:
  Ri = Bi + Ci + Σ j ∈ HP(i) ⌈Ri/Tj⌉*Cj
where Bi is the longest time that task i can be blocked by lower-priority tasks with a non-preemptive section
  Bi = max{ Cbj | task j has lower priority than task i }
Note that a task preempts only one task with lower priority within each period

So now, we have an equation:

Ri = Bi + Ci + 2Ccs + Σ j ∈ HP(i) ⌈Ri/Tj⌉*(Cj + 4Ccs)

The Jitter Problem

So far, we have assumed that tasks are released at a constant rate (at the start of a constant period)
This is true in practice and a realistic assumption
However, there are situations where the period, or rather the release time, may 'jitter' or change a little, but the jitter is bounded by some constant J
The jitter may cause some task to miss its deadline

Jitter: Example

{(20,100),(40,150),(20, T3)}
(Figure: T1 and T2 with their RM schedules; T3 is released whenever T2 finishes within each period.)
T3 is activated by T2 when it finishes within each period
Note that because the response time of T2 is not a constant, the separation between two instances of T3 is not a constant: 170, 130

Effect of Jitters

     C    D    T     R
H    10   20   30    10
L    15   25   1000  25

(Figure: H is normally released at 0, but its first instance is released with jitter 10, at time 10; only the first instance has the jitter, the later ones come at 30, 60, ... As a result L suffers an extra preemption and misses its deadline at 27.)
Jitter: Definition

J(biggest) = maximal delay from the period start
J(smallest) = minimal delay from the period start
Jitter = J(biggest) - J(smallest)

If J(biggest) = J(smallest), then there is no influence on the other tasks with lower priorities

Jitter: Example
{(20,100),(40,150),(20, T3)}, Pr(T1)=1, Pr(T2)=2, Pr(T3)=3

(Figure: T1 and T2 as before; T3 is released whenever T2 completes, with separations 130 and 170 between consecutive releases.)
T3 is activated by T2 at the end of each of its instances
J(biggest) = R2(worst case), J(smallest) = R2(best case)
Jitter = J(biggest) - J(smallest) = 60 - 40 = 20

Jitter: Example

{(20,100),(40,150),(20, T3)}
(Figure: T1 and T2 as before; the separation between consecutive T3 releases can vary between 90 and 210.)
T3 is activated by T2 at any time during the execution of one of its instances
J(biggest) = R2(worst case), J(smallest) = R2(best case) - C2
Jitter = J(biggest) - J(smallest) = 60 - 0 = 60

The number of preemptions due to Jitter

(Figure: task L is released once at time t, with response time Rlow and period Tlow. Task H has jitter Jhigh: one release, delayed by the jitter, lands just after t (at t+ε) and preempts L; one more release due to the jitter arrives around Thigh+ε and preempts L one more time.)

Task L will be preempted at least 2 times if Rlow > Thigh - Jhigh
(Figure: one release of L at time 0 with response time Rlow within its period Tlow; H's first release is delayed by its jitter Jhigh, and its next release at Thigh still falls within Rlow, preempting L one more time.)

Task L will be preempted at least 3 times if Rlow > 2Thigh - Jhigh
(Figure: the same situation, where H's releases near 0 (delayed by Jhigh), at Thigh and at 2Thigh all fall within Rlow.)
The number of preemptions/blocking when jitters occur

Task L will be preempted at least 2 times if Rlow > Thigh - Jhigh
Task L will be preempted at least 3 times if Rlow > 2*Thigh - Jhigh
...
Task L will be preempted at least n times if Rlow > (n-1)*Thigh - Jhigh
Thus (Rlow + Jhigh)/Thigh > n-1
The largest n satisfying the condition is given by n = ⌈(Rlow + Jhigh)/Thigh⌉

Handling Jitters in schedulability analysis

Ri = Ci + Σ j ∈ HP(i) "number of preemptions" * Cj
Ri* = Ri + Ji(biggest)

If Ri* < Di, task i is schedulable, otherwise not

Handling Jitters in schedulability analysis

Ri = Ci + Σ j ∈ HP(i) ⌈(Ri+Jj)/Tj⌉*Cj
Ri* = Ri + Ji(biggest)

Why Ri + Ji(biggest)?

If Ri* < Di, task i is schedulable, otherwise not

Finally, we have an equation (why?):

Ri = Ci + 2Ccs + Bi + Σ j ∈ HP(i) ⌈(Ri+Jj)/Tj⌉*(Cj + 4Ccs)

The response time for task i:
  Ri* = Ri + Ji(biggest)
  Ji(biggest) is the "biggest jitter" of task i
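(Illustration, not from the slides: the final equation as a Python sketch. The (C, T, D, B, J) tuple layout and the ccs parameter are this sketch's own choices; the per-preemption cost of 4*Ccs follows the earlier context-switch slide.)

def response_time_full(i, tasks, ccs=0):
    """Fixed-point iteration for the equation above:
    Ri = Ci + 2*Ccs + Bi + sum over j in HP(i) of ceil((Ri+Jj)/Tj)*(Cj + 4*Ccs).
    tasks: (C, T, D, B, J) tuples in priority order (highest first), where B is
    the blocking from lower-priority non-preemptive sections and J the release
    jitter. Returns Ri* = Ri + Ji, or None if Ri* would exceed the deadline."""
    c_i, t_i, d_i, b_i, j_i = tasks[i]
    hp = tasks[:i]
    r = c_i + 2 * ccs + b_i + sum(c for c, *_ in hp)          # first guess
    while True:
        r_next = c_i + 2 * ccs + b_i + sum(
            ((r + j + t - 1) // t) * (c + 4 * ccs) for c, t, _, _, j in hp)
        if r_next + j_i > d_i:
            return None                                       # Ri* exceeds the deadline
        if r_next == r:
            return r_next + j_i                               # Ri* = Ri + Ji(biggest)
        r = r_next

# With zero overheads, no blocking and no jitter this reduces to the basic precise test:
tasks = [(1, 3, 3, 0, 0), (1, 5, 5, 0, 0), (1, 6, 6, 0, 0), (2, 10, 10, 0, 0)]
print([response_time_full(i, tasks) for i in range(len(tasks))])   # [1, 2, 3, 9]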

Summary: + and -

Static Cyclic Scheduling (SCS)
  Simple and reliable; may be difficult to construct the time table, difficult to modify (inflexible)
Earliest Deadline First (EDF)
  Simple in theory, but difficult to implement, non-stable
  No precise analysis for tasks with D < T
Rate Monotonic Scheduling (RMS)
  Simple in theory and practice, and easy to implement
Deadline Monotonic Scheduling (DMS)
  Similar to RMS
Handling overheads, blocking
