EE5903 RTS Real Time Scheduling Policies: Bharadwaj Veeravalli

The document discusses real-time scheduling policies for periodic and aperiodic tasks on uniprocessors. It outlines several real-time scheduling algorithms including Earliest Due Date (EDD), Earliest Deadline First (EDF), and Least Laxity First (LLF). It provides examples of how each algorithm works and discusses properties like optimality and time complexity. It also notes some implementation issues with LLF and that EDF is not optimal for non-preemptive scheduling.

EE5903 RTS

Chapter 4
Real Time Scheduling Policies

Bharadwaj Veeravalli
[email protected]

Acknowledgements: Parts of this material are from Hard Real-Time Computing Systems, G. C. Buttazzo, Springer, 3rd Edition; Real-Time Systems, J. W. Liu.
Outline of this Chapter
n  RT Scheduling Algorithms
-- EDD, EDF, LLF
n  Non-Preemptive Scheduling
n  Time-Triggered Systems
n  Scheduling Anomalies – Priority, # of CPUs, task sizes, resource constraints, etc.
n  Handling Overloaded Conditions
n  Imprecise task scheduling strategies (uniprocessor)
n  Handling Periodic Tasks with EDF, RMA and DMA
n  Quick note on Event Driven Schedulers
n  Precedence Constraints – Static Scheduling algorithm
n  Soft-real time issues (if time permits…)

(c) Bharadwaj V 2016 2


Real-Time Aperiodic/Periodic
Scheduling Algorithms
n  Tasks can be scheduled by
n  relative deadlines Di (static)
n  absolute deadlines di (dynamic)

Di is measured from the task's reference (arrival) time, while di is an absolute time instant; for periodic tasks, the relative deadline may coincide with the period.


RT Scheduling algorithms
n  Earliest Due Date (EDD)
n  Earliest Deadline First (EDF)
n  Least Laxity First (LLF)
n  Rate Monotonic Algorithm (RMA)

n  Classification of Scheduling Problems –


Notational ease!
α / β / γ [Graham et al., 1979]
machine infrastructure / task information / metric
Classification Examples
n  Example 1:

α / β / γ: 1 / prec / Lmax
Denotes the problem of scheduling a set of tasks with
precedence constraints on a uniprocessor machine in order
to minimize the maximum lateness.

If no additional constraints are indicated in the second field, preemption is allowed at any time, and tasks can have arbitrary arrivals.


Classification
n  Example 2:

α / β / γ: 3 / no-preem / Σfi

Denotes the problem of scheduling a set of tasks on a three-processor machine. Preemption is not allowed and the objective is to minimize the sum of the finishing times. Since no other constraints are indicated in the second field, tasks have neither precedence nor resource constraints but have arbitrary arrival times.


Classification
n  Example 3:

α / β / γ: 2 / sync / ΣLatei

Denotes the problem of scheduling a set of tasks on a two-processor machine. Tasks have synchronous arrival times and do not have other constraints.

The objective is to minimize the number of late tasks.


Earliest Due Date (EDD): 1 /synch / Lmax

n  It selects the task with the earliest relative deadline [Jackson's Algorithm].
n  All tasks arrive simultaneously
n  Fixed priority (Di is known in advance)
n  Preemption is not an issue (why?)
n  It minimizes the maximum lateness (Lmax)

In this case Di is also the absolute deadline.
EDD (Cont’d)…

Lateness: Li = fi − di

If fi > di, the task finishes after its deadline and Li > 0; if fi < di, it finishes before its deadline and Li < 0.
EDD (Cont’d)…

Maximum Lateness: Lmax=maxi(Li)

if (Lmax < 0) then no task misses its deadline

How do we prove Jackson's rule? We will use a simple interchange argument, applied repeatedly until the optimal schedule is reached.
EDD Optimality – swapping argument



EDD – Example 1
Note: Under EDD –
All tasks arrive simultaneously



EDD Example 2
This is an example wherein we demonstrate that EDD can result in an infeasible schedule.


EDD - Guaranteed feasibility
Important Note: The optimality of the EDD algorithm cannot guarantee
the feasibility of the schedule for any task set.

It only guarantees that if a feasible schedule exists for a task set, then EDD will find it. (Our Example 2 demonstrated this!)

Order the tasks by increasing deadlines. Then feasibility requires fi ≤ di for every task i.

This implies, since under EDD fi = C1 + C2 + … + Ci,

Which implies the guarantee test: for each i = 1, …, n,  Σk=1..i Ck ≤ di.
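The guarantee test above can be sketched directly in code; a minimal check, assuming each task is given as a (Ci, di) pair with synchronous arrivals (the function and variable names are illustrative, not from the slides):

```python
def edd_feasible(tasks):
    """EDD guarantee test: order tasks by increasing deadline and
    check every finishing time f_i = C_1 + ... + C_i against d_i."""
    finish = 0
    for c, d in sorted(tasks, key=lambda t: t[1]):  # EDD order
        finish += c            # f_i under EDD with synchronous arrivals
        if finish > d:
            return False       # task i would miss its deadline
    return True
```

Because EDD is optimal for this problem class, a False result here means no algorithm can feasibly schedule the set.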


EDD Optimality

Replace tasks one by one to move earlier deadlines earlier:

σ → σ' → σ'' → … → σ*

Lmax(σ) ≥ Lmax(σ') ≥ Lmax(σ'') ≥ … ≥ Lmax(σ*)

σ* = σEDD
Lmax(σEDD) is the minimum value achievable by any algorithm


Earliest Deadline First (EDF):
1 /preem/ Lmax
n  It selects the task with the earliest absolute deadline
[Horn’s Algorithm].
n  Tasks may arrive at any time
n  Dynamic priority (di depends on arrival)
n  Fully preemptive tasks
n  It minimizes the maximum lateness (Lmax)
n  Newly arrived task is inserted into a queue of ready
tasks, sorted by their absolute deadlines. Task at
head of queue is executed.
n  If a newly arrived task is inserted at the head of the
queue, the currently executing task is preempted.



EDF Example

[Gantt chart: tasks A, B, C, D scheduled under EDF; at every instant the ready task with the earliest absolute deadline executes.]
EDF Guarantee test (online)

For all i:  Σk=1..i ck(t) ≤ di − t

Note: ck(t) is the remaining execution time of task k from time t, and tasks are ordered by absolute deadline (d1 ≤ d2 ≤ … ≤ dn).
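The test translates into a few lines; a small sketch, assuming the active jobs are hypothetical (ck(t), dk) pairs (sorting by deadline supplies the ordering the sum requires):

```python
def edf_guarantee(active, t):
    """Online EDF guarantee test at time t: with jobs ordered by
    absolute deadline, check sum_{k=1..i} c_k(t) <= d_i - t for all i."""
    work = 0
    for c_rem, d in sorted(active, key=lambda j: j[1]):
        work += c_rem                  # cumulative remaining demand
        if work > d - t:
            return False               # this deadline cannot be met
    return True
```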


EDF - Example



EDF for tasks with Equal Ready Times



EDF Optimality for Identical Ready
Times: Single CPU



Proof of Optimality

EDF Properties
n  EDF is optimal for a uniprocessor schedule under preemption:
- if there exists a feasible schedule, then EDF will schedule the tasks

n  EDF achieves 100% processor utilization - if a deadline is missed under EDF, then the system is overloaded.


EDF on Multiprocessors - Example

[Figure: the example task set.]

EDF on MPS

[Figure: a feasible schedule exists - but it does not follow EDF!]

? Where exactly is the problem?
Least Laxity First (LLF)
So we have another (better?) algorithm than EDF - one that takes the remaining computation time into account.

Laxity (or slack): di - t - ci(t) (Recall – Chapter 2 definition!)

where ci(t) is the residual WCET.

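A minimal selection rule using this definition of laxity (job tuples (ci(t), di) are illustrative; ties are broken here by earlier deadline, one reasonable choice):

```python
def llf_pick(jobs, t):
    """Pick the job with the least laxity d_i - t - c_i(t), where
    c_i(t) is the residual WCET at time t; ties go to the earlier
    absolute deadline."""
    return min(jobs, key=lambda j: (j[1] - t - j[0], j[1]))
```

A dispatcher would re-evaluate this minimum at every scheduling instant, which is the O(n) per-interval overhead noted under the implementation issues.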


LLF Example



LLF – Implementation issues
n  Although the schedule gets generated, following are
the issues when we attempt to implement:

-  No look-ahead style of working - LLF decides based on the current status only. This means that when arrival intervals are very short and deadlines are tight, the schedule generated by LLF may not be optimal; it can assure only a good (!) solution.
-  Overhead in determining the laxity of every task at each scheduling interval: O(n) (n: number of tasks)
-  Space complexity - the entire status of each task needs to be maintained until it completes;
LLF – Identical ready times



Time complexity issues
n  EDD
n  O(n log n) to order the task set
n  O(n) to guarantee the whole task set
n  EDF
n  O(n) to insert a new task in the queue
n  O(n) to guarantee a new task

Remark on an important property of optimal algorithms - If an optimal algorithm (in the sense of feasibility) produces an infeasible schedule, then no algorithm can generate a feasible schedule.
EDF on Non-Preemptive Scheduling
criteria
n  Under non-preemptive execution, EDF is not optimal.
Feasible schedule: [timeline over 0-9 in which both tasks meet their deadlines]

EDF under non-preemptive mode: [once started, the first task cannot be preempted - deadline missed!]
Clairvoyant strategy

n  To achieve optimality, an algorithm should be clairvoyant, and decide to leave the CPU idle in the presence of ready tasks.

[Timeline: the CPU is left idle until the urgent task arrives, and both deadlines are met.]

Note: If we forbid leaving the CPU idle in the presence of ready tasks, then EDF is optimal.
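The clairvoyant effect can be reproduced with a toy simulation; the job set below is hypothetical, not the one in the figure. Work-conserving non-preemptive EDF starts the long job immediately and misses a deadline that an idle-inserting schedule would meet:

```python
def np_edf_misses(jobs):
    """Simulate work-conserving non-preemptive EDF on one CPU.
    jobs: list of (arrival, wcet, absolute_deadline); returns the
    number of deadline misses."""
    t, pending, misses = 0, list(jobs), 0
    while pending:
        ready = [j for j in pending if j[0] <= t]
        if not ready:                          # never idle while work exists,
            t = min(j[0] for j in pending)     # so only advance when nothing is ready
            continue
        job = min(ready, key=lambda j: j[2])   # earliest absolute deadline
        t += job[1]                            # runs to completion (no preemption)
        misses += t > job[2]
        pending.remove(job)
    return misses

# Hypothetical set: J1 = (0, 4, 7), J2 = (1, 2, 4). Idling until t = 1
# meets both deadlines, but the work-conserving rule starts J1 at t = 0.
```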
Non-Preemptive Scheduling

Non-Preemptive-EDF is optimal among work-conserving scheduling algorithms.

n  Work-conserving: defined as an algorithm that does not leave the processor idle if there is work to do, i.e., a non-idle algorithm.


Non-Preemptive Scheduling
Algorithms
n  The problem of finding a feasible schedule is
NP hard and is treated off-line with tree
search algorithms.
n  Examples of tree algorithm
n  Bratley’sAlgorithm
n  Spring algorithm (Self-learning exercise!)



Time-Triggered Systems - Non-Preemptive tasks with
arbitrary arrival times - Bratley’s Algorithm to
generate a feasible schedule

n  Time-triggered systems - systems that trigger events at predefined time instants for autonomous control
n  Assumption - arrival times (arbitrary) are known in advance; no preemption allowed
n  Key Idea - at every step of the search do the following:
- (a) Check if a task misses its deadline;
- (b) Check if you have obtained a feasible schedule;
If (a): fully abandon the search along that path (pruning technique);
If (b): a feasible solution sequence is found - backtrack;
Refer to the example shown in the next slide.
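The search can be sketched as a simplified branch-and-bound (jobs are hypothetical (arrival, wcet, deadline) triples; the pruning rule is the step (a) check applied to every remaining job, which is safe because a job can never finish earlier than running right after the current partial schedule):

```python
def bratley(jobs, seq=(), t=0):
    """Bratley-style search for a feasible non-preemptive sequence.
    Returns a tuple of job indices, or None if the subtree has none."""
    if len(seq) == len(jobs):
        return seq                               # step (b): feasible schedule
    for i, (r, c, d) in enumerate(jobs):
        if i in seq:
            continue
        finish = max(t, r) + c
        if finish > d:                           # step (a): deadline miss
            continue
        # prune the path if some remaining job can no longer make it
        if any(max(finish, rj) + cj > dj
               for j, (rj, cj, dj) in enumerate(jobs)
               if j != i and j not in seq):
            continue
        found = bratley(jobs, seq + (i,), finish)
        if found is not None:
            return found
    return None
```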
Example – Bratley’s algorithm

Arrival times are known in advance; no preemption allowed.

Time complexity?
RT Scheduling Anomalies

Claim - Real-time computing is not equivalent to fast computing!

Some questions…

1)  Does an increase of hardware always result in superior performance?
2)  Does using shortest-tasks-first lead to minimum makespan?

The situations above are Richard's Anomalies, described by Graham (1976); we will use task sets with precedence relations executed in a multiprocessor environment.
RT Scheduling Anomalies

n  Theorem (Graham, 1976): If a task set is optimally scheduled on a multiprocessor with some priority assignment, a fixed number of processors, fixed execution times, and precedence constraints, then increasing the number of processors, reducing execution times, or weakening the precedence constraints can increase the schedule length.

n  This implies that, if tasks have deadlines, then adding resources (for example, an extra processor) or relaxing constraints (fewer precedence relations among tasks or smaller execution-time requirements) can make things worse!
Example – Scheduling Anomalies

[Figure: precedence graph of the tasks; the numbers in brackets indicate the execution times of the tasks.]


Case: Anomalies under increased processors

With three processors; Criteria: the highest-priority task is assigned to the first available processor.
Tasks (execution times): T1:3, T2:2, T3:2, T4:2, T5:4, T6:4, T7:4, T8:4, T9:9

P1: T1 T9
P2: T2 T4 T5 T7
P3: T3 T6 T8
Tr = 12
Case: Anomalies under increased processors

Same task set (T1:3, T2:2, T3:2, T4:2, T5:4, T6:4, T7:4, T8:4, T9:9), now with four processors:

P1: T1 T8
P2: T2 T5 T9
P3: T3 T6
P4: T4 T7
Tr = 15 - the schedule gets longer!


Case: Anomalies when using shorter tasks

Criteria: the highest-priority task is assigned to the first available processor; the computation time of every task is reduced by 1 unit: T1:2, T2:1, T3:1, T4:1, T5:3, T6:3, T7:3, T8:3, T9:8.

[Gantt chart on three processors: P1 runs T1, T5, T8; the remaining tasks fill P2 and P3; the schedule now completes at Tr = 13, later than the original Tr = 12.]


Case: Anomalies under precedence constraints weakened/released

Removing the precedence relations between job J4 and jobs J7 and J8 (tasks: T1:3, T2:2, T3:2, T4:2, T5:4, T6:4, T7:4, T8:4, T9:9):

P1: T1 T6 T9
P2: T2 T4 T7
P3: T3 T5 T8
Tr = 16 - again longer!
Handling overloaded conditions
n  Situation - Computational demand requested by the
task set exceeds the processor capacity, and hence
not all tasks can complete within their deadlines;
n  Why does this happen?
- Bad design of the RTS (a challenge during peak loads)
- Inability to handle simultaneous events/tasks;
- Hardware faults (malfunctioning of input devices, resulting in the generation of anomalous interrupts);
- OS exceptions - too much racing with the input task arrival rates;
- Other environment-related factors;
Handling overloaded conditions

n  Workload definition:
Non-RT applications: As per queuing theory, the workload (also referred to as traffic intensity) is ρ = λ·Ĉ, where λ is the mean arrival rate of tasks and Ĉ is the mean service time of a task.

RT applications: A system is overloaded when, based on worst-case assumptions, there is no feasible schedule for the current task set, and hence one or more tasks will miss their deadlines.
Handling overloaded conditions

n  Preemptable periodic tasks: If the task set consists of n independent preemptable periodic tasks whose relative deadlines are equal to their periods, then the system load ρ is equivalent to the processor utilization factor U:

ρ = U = Σi=1..n Ci/Ti, where Ci is the execution time and Ti is the period of task i.

n  In this case, a load ρ > 1 means that the total computation time requested by the periodic activities in their hyper-period exceeds the available time on the processor; therefore, the task set cannot be scheduled by any algorithm.
Handling overloaded conditions

n  For a generic set of loads occurring in a dynamic RTS, what is an apt definition of the workload?
n  In such systems, the load varies at each job activation and is a function of the jobs' deadlines. In general, the load in a given interval [ta, tb] can be defined in terms of the processor demand g(ta, tb) in that interval. Not suitable! Why?
n  Instantaneous load ρ(t) (suitable for dynamic RTS): a practical definition that can be used to estimate the current load in dynamic real-time systems. What is it?
Handling overloaded conditions

n  Deriving the instantaneous load:
Compute the load in all intervals between the current time t and each deadline di of the active jobs. Hence, the intervals to consider are [t, d1], [t, d2], …, [t, dn]. In each interval [t, di], the partial load ρi(t) due to the first i jobs is given by:

ρi(t) = ( Σk: dk ≤ di ck(t) ) / (di − t)

where ck(t) is the remaining execution time of job Jk with deadline less than or equal to di. Therefore, the total load at time t is ρ(t) = maxi ρi(t).
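The derivation above translates directly into code; a small sketch where the active jobs are hypothetical (ck(t), dk) pairs:

```python
def instantaneous_load(jobs, t):
    """rho(t) = max_i rho_i(t), where rho_i(t) sums the remaining
    work of every job with deadline <= d_i over the window d_i - t."""
    rho = 0.0
    for _, di in jobs:
        demand = sum(c for c, dk in jobs if dk <= di)  # partial demand
        rho = max(rho, demand / (di - t))
    return rho
```

A value above 1 signals an overload: some active job must miss its deadline.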
Handling overloaded conditions

Example: For a set of 3 jobs with the deadlines shown, at t = 3 the total load is ρ(3) = maxi ρi(3).

[Figure: the three job timelines and the resulting load profile over time.]


Handling overloaded conditions

n  After obtaining the profile of the total load as shown above, the required amount of computational resources can be dispatched, provided anomalies are taken care of. This is purely in view of meeting the deadlines.

n  Definition: A computing system is said to experience an overload when the computation time demanded by the task set in a certain interval of time exceeds the available processing time in the same interval.
n  Definition: A task (or a job) is said to experience an overrun when exceeding its expected utilization. An overrun may occur either because the next job is activated before its expected arrival time (activation overrun), or because the job computation time exceeds its expected value (execution overrun).


Imprecise task scheduling
n  For certain applications, it may be advantageous not to pursue the highest possible precision of computation, so that task computation time can be reduced and overall task response time can be improved.

n  Thus, a task is viewed as having two components:
- Mandatory part (compulsory, to generate a minimum level of acceptable results)
- Optional part (depending on resource availability and deadline violations, this part can be taken up for processing)

Ref: E. K. P. Chong and W. Zhao, "Performance Evaluation of Scheduling Algorithms for Imprecise Computer Systems," J. Systems Software, 1991, Vol. 15, pp. 261-277.
Imprecise task scheduling
n  For imprecise task scheduling, relevant
performance metrics are
(1) Mean task waiting time - this metric assesses the system timing
behavior;
(2) Mean computation time a task receives;
(3) The fraction of tasks that are fully processed;

Two commonly available strategies - even hardware solutions can be implemented:
(a) Interrupt scheme
(b) Non-interrupt scheme
Imprecise task scheduling
n  System: Uniprocessor;
n  System has a finite size ready queue to hold ready
tasks; Acceptance test defines and distinguishes the
above two strategies;
n  Once a task arrives at the system, depending on the acceptance test, the task is put in the ready Q;
n  A controller, mostly a software process, is built-in as
a part of the scheduler, to monitor the task arrivals,
execution profiles and any ready Q overflow events;
n  Q overflow may adversely affect the performance
when the arrival rates spontaneously increase;



Imprecise task scheduling
n  (a) Interrupt Scheme:
Step 1: Ready queue is checked upon every task arrival;
Step 2: If (total # of tasks in the system, including the one being
executed, > M): // M: system level parameter;
Currently running task is executed at the reduced level to
minimize the computation time;
The above condition can happen even if the running task is in the
middle of its computation. In this case, the currently running task is
interrupted and resumed at the reduced level for the rest of its
execution time.

Remark: The controller is built in as a process and executed only during the execution of a task; it is possible to design a controller that executes at the time of task arrival.
Imprecise task scheduling
n  (b) Non-Interrupt Scheme:
n  Step 1: The total #of tasks in the system is only checked at the
start of the execution of a task.
n  Step 2: If (the # of tasks in the system > M):
Run the task at the reduced level;
Else:
Run the task at the full level.
Condition: While a task is in execution, the number of tasks in the
system is not monitored. A task receives a full-level computation
even if the number of tasks in the system exceeds M (due to new
arrivals during its execution), as long as at the beginning of its
execution, the number of tasks was not more than M. Also, in this
scheme, a task’s execution is not interrupted by the scheduler;

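The decision rule that separates the two schemes can be captured in a few lines (the function name and boolean flag are illustrative):

```python
def computation_level(n_now, n_at_start, M, interrupt_scheme):
    """Interrupt scheme: the level follows the current task count, so
    a running task drops to 'reduced' the moment the count exceeds M.
    Non-interrupt scheme: the level is fixed by the count observed
    when the task started executing."""
    n = n_now if interrupt_scheme else n_at_start
    return "reduced" if n > M else "full"
```

With n_now = 6, n_at_start = 3 and M = 5, the interrupt scheme degrades the running task while the non-interrupt scheme lets it finish at full level.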


Imprecise task scheduling

[Figure: performance curves for two threshold settings M1 and M2, with M2 < M1.]


Imprecise task scheduling
Difference between the schemes:
n  The interrupt scheme interrupts a running task (to switch
its execution to the reduced level mode) whenever the total
number of tasks changes from M to M + 1, but the non-
interrupt scheme never interrupts a running task.

n  The interrupt scheme controls the system in a more "responsive" way, in the sense that any time the queue length becomes larger, a reduced-level computation is performed.


Imprecise task scheduling
n  The interrupt scheme involves a high overhead because it needs to check the number of tasks in the system upon each task arrival and to interrupt running tasks.

n  Secondly, for some application programs, it may be very difficult or simply impossible to change the control (or the algorithm) in the middle of its execution.

n  Non-interrupt scheme - incurs less overhead, as it does not need to constantly poll the arrivals;
Imprecise task scheduling
n  The non-interrupt scheme may result in a situation where the queue length has become larger than M and the running task still receives the full-level computation!

n  Practically useful scheme - the non-interrupt method;


Handling Periodic Tasks using EDF

A periodic task set is schedulable under EDF if and only if it satisfies the following criterion:

Σk=1..n ek/pk = Σk=1..n uk ≤ 1

In this expression, we assumed that the period of each task is the same as its deadline. In reality this need not be true; in that case the schedulability criterion becomes:

Σk=1..n ek/min(pk, dk) ≤ 1

Thus, if pi > di then each task needs ei amount of computing time every min(pi, di) units of time. Therefore we can rewrite the denominator pi of the first expression as min(pi, di).


EDF on periodic tasks (Cont’d)

Note that if pi < di, it is possible that a set of tasks is schedulable under EDF even if the task set fails to meet the above criterion.

This means the above criterion is conservative when pi < di: it is not a necessary condition, but only a sufficient condition for EDF schedulability.

Exercise 4.1: Determine whether the task set given below is EDF schedulable.
T1 = (e1=10, p1=20)
T2 = (e2=5, p2=50)
T3 = (e3=10, p3=35)
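The criterion with the min(pi, di) denominator can be checked mechanically; a sketch where tasks are (e, p) or (e, p, d) tuples (d defaults to p):

```python
def edf_utilization_test(tasks):
    """Sum e_i / min(p_i, d_i) and compare against 1. Necessary and
    sufficient when d_i = p_i; only sufficient when some p_i > d_i."""
    u = 0.0
    for task in tasks:
        e, p = task[0], task[1]
        d = task[2] if len(task) > 2 else p   # deadline defaults to period
        u += e / min(p, d)
    return u <= 1
```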
Handling Periodic tasks: Rate Monotonic
Algorithm (RMA)
n  Under EDF there is no strict notion of priority of tasks
– It uses deadlines as the key criteria
n  In RMA, tasks are assigned priorities based on their frequency of occurrence, i.e., task arrival rates (e.g., a 20 msec task has higher priority than a 50 msec task, which in turn has higher priority than a 100 msec task)
n  Scheduling Policy: looking at the task characteristics, the programmer decides the priorities of the tasks before they are scheduled and then runs RMA to schedule them; at any point in time, only the highest-priority ready tasks are scheduled.



RMA (Cont’d)…

n  Optimal uniprocessor (single CPU/core) static-priority scheduling algorithm (di = pi):
"If RMA cannot schedule a set of periodic tasks, no other static-priority algorithm can schedule it"

n  Schedulability Criteria:
Utilization Bound 1: the sum of the utilizations of the tasks must be less than or equal to 1 (Necessary Condition):

Σi=1..n ui = Σi=1..n ei/pi ≤ 1

e: execution time; p: period of the task; n: number of tasks; ui: utilization due to task i
RMA (Cont’d)…

n  Schedulability Criteria: Utilization Bound 2 - Sufficient Condition (Liu & Layland, 1973):

Σi=1..n ui ≤ n(2^(1/n) − 1)

Q: How does the plot of this bound look? What do we infer from the trend?

Q: For very large n, what is the maximum utilization that can be achieved?
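The bound itself is one line; tabulating it answers both questions (it decreases monotonically from 1 at n = 1 toward ln 2 ≈ 0.693):

```python
def ll_bound(n):
    """Liu & Layland utilization bound for n tasks: n(2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)
```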


RMA (Cont’d)…

n  Exercise 4.2 - Test whether the following tasks can be scheduled using RMA.
Task: (execution time, period; deadline = period)
T1: (20,100); T2: (30,150); T3: (60,200)
Solution to be discussed in the lecture.

n  Liu & Layland condition - a conservative criterion! It assures only 69% (ln 2) utilization as n tends to infinity.
n  It is only a sufficient condition - that is, if the LL bound is satisfied, then the tasks are definitely schedulable using RMA;
RMA (Cont’d)…

n  However, a task set that fails the LL bound may still be schedulable! This is shown by the Completion Time Theorem (Lehoczky, Sha and Ding, 1989).

[Utilization scale: up to the LL bound - definitely schedulable; between the LL bound and UB1 - may be schedulable (use the Completion Time Theorem).]

Completion Time Theorem: If each task in a set individually meets its first deadline under zero phasing, then the task set is RMA schedulable for all task phasings. (Lehoczky criterion)


RMA (Cont’d)…

Checking the schedulability of a task set:
1.  Assume zero phasing for all tasks
2.  Derive the schedule up to the first deadline of each task
3.  Check whether each task meets its first deadline
4.  If so, the entire task set is schedulable using RMA
Phase: the delay in the occurrence of the first instance of a task i.
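The four steps amount to the standard time-demand (response-time) iteration; a sketch, assuming tasks are (e, p) pairs listed in decreasing RMA priority with d = p:

```python
import math

def completion_time_test(tasks):
    """Zero-phasing completion-time test: for each task i, iterate
    w = e_i + sum_{k < i} ceil(w / p_k) * e_k to a fixed point and
    check that the first job completes by its deadline p_i."""
    for i, (e, p) in enumerate(tasks):
        w, prev = e, 0
        while w != prev and w <= p:
            prev = w
            # demand from task i plus all higher-priority preemptions
            w = e + sum(math.ceil(w / pk) * ek for ek, pk in tasks[:i])
        if w > p:
            return False
    return True
```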
RMA (Cont’d)…

n  Exercise 4.3 - Test whether the following tasks can be scheduled using RMA.
Task: (execution time, period; deadline = period)
T1: (20,100); T2: (30,150); T3: (90,200)
Solution to be discussed in the lecture.

n  Exercise 4.4 - Can we derive a formal expression based on the observation from Exercise 4.3?
Solution will be provided; follow Ex 4.3 to generalize.


RMA - Analysis with context
switching overhead
So far the influence of overheads has been ignored. Let us include one overhead - context-switching time - to see its effect on the schedulability criterion and hence on the performance of the algorithm.

We assume that the context-switching time is constant and equal to c secs. Thus the increase in execution time of ei of each task Ti is at most 2c, i.e., ei becomes (ei + 2c). The factor 2c is the worst-case number of context switches per task (per preemption): one switch of cost c when a task preempts the currently running task, and another c when this task completes.

Thus, in the RMA schedulability criterion, we replace ei by (ei + 2c) for each Ti.

Exercise 4.5: Determine whether the task set given below is RMA schedulable. T1 = (e1=20, p1=100); T2 = (e2=30, p2=150); T3 = (e3=90, p3=200); the overhead factor c does not exceed 1 msec.
Task self-suspension

A task might suspend itself from execution when it needs to perform I/O operations or when it waits for some event/condition to occur.

So, what happens when a self-suspension occurs?

When a task suspends itself, it is removed from the ready queue and put in a blocked queue. The OS takes the next eligible ready task to run.

We will derive a formal expression for a revised schedulability criterion when a task undergoes at most a single suspension.
Task self-suspension
Consider an ordered set of tasks {T1,…,Tn} such that pri(Tj) > pri(Tj+1). When a task Ti suspends itself, we are concerned with how much time this overhead adds to the overall execution such that the given task set remains schedulable.

Assume that each higher-priority task suspends only once. Then, if bi denotes the duration of a task's suspension, the total delay bti that task Ti might incur is: bi (due to its own self-suspension) plus the (single) suspension of every higher-priority task, which influences the finish time of task Ti. The question is: by how much?
Task self-suspension

bti = bi + Σk=1..i−1 min(ek, bk)   (worst case)

Convince yourself with the following values. Let k be a higher-priority task and i a lower-priority task, and let ei = 10.

Case 1: ek = 3, bk = 5 (ek < bk)
Case 2: ek = 5, bk = 3 (ek > bk)

In Case 1, the maximum execution of task i happens during bk, and i must then wait for ek to finish; hence ek = min(ek, bk) influences the response time. In Case 2, task i can execute only up to bk (the overlap with k's suspension period), which is again min(ek, bk).

Hence, in both cases min(ek, bk) is to be considered.
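The worst-case expression is easy to evaluate; a sketch with tasks indexed from 0 in decreasing priority, where e and b are lists of execution and self-suspension times:

```python
def suspension_delay(i, e, b):
    """bt_i = b_i + sum over higher-priority tasks k of min(e_k, b_k),
    assuming each task self-suspends at most once."""
    return b[i] + sum(min(e[k], b[k]) for k in range(i))
```

Each bt_i is then added to task Ti's demand in the chosen schedulability check.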
Task self-suspension

Exercise 4.6: First derive a generalized version that takes the self-suspension overheads into account. Then verify whether the following tasks are RMA schedulable with self-suspension overheads.

T1(10,50), T2(25,150), T3(50,200); let the respective maximum task suspension times be 3, 3, and 5 msecs.

Note that in this case the tasks are already ordered by decreasing rate.


RMA for Harmonic tasks

While RMA has an interesting property for deciding task priorities, it is still difficult to know exactly the start and finish time of each task within a given period. Especially for low-priority tasks, i.e., those with long periods, the start and finish times can vary.

The hyper-period = LCM(pi : for all i) is by and large too big. However, if the task periods are in harmonic progression, i.e., multiples of one another, it is possible to find the start and finish time of each task quickly, because the resulting schedule becomes more regular.

Remark: This assumes periodic time-triggered inputs from the application, in which case the start times of tasks may be predictable with high confidence.
RMA for harmonic tasks

The task periods in a task set are said to be harmonically related iff for any two tasks i and k, whenever pi > pk, pi is an integral multiple of pk. For example, T1 = (5,30), T2 = (12,60), T3 = (8,120) are harmonically related tasks.

Theorem: For a set of harmonically related tasks, the RMA schedulability criterion is given by Σi (ei/pi) = Σi ui ≤ 1.

Proof - Exercise 4.7 - to be discussed during the lecture.
Deadline Monotonic Algorithm
(DMA)
RMA no longer remains an optimal algorithm for periodic real-time tasks when task deadlines are such that di ≠ pi.

For such cases we can employ DMA, which handles this setting better than RMA.

Basic idea of DMA - assign priorities to tasks based on their relative deadlines, rather than on task periods as done by RMA.

Thus, DMA assigns higher priorities to tasks with shorter deadlines!
Deadline Monotonic Algorithm (DMA)

Q: What can you say when the relative deadline of every task is proportional to its period?
- RMA and DMA would produce identical results.

Q: What happens when relative deadlines are arbitrary?
- DMA is more proficient than RMA in the sense that it can sometimes generate a feasible schedule when RMA fails.

Note that whenever DMA fails, RMA always fails.


Deadline Monotonic Algorithm (DMA)

Exercise 4.8: Determine whether the task set given below is RMA and/or DMA schedulable.

T1 = (e1=10, p1=50, d1=35);
T2 = (e2=15, p2=100, d2=20);
T3 = (e3=20, p3=200, d3=200);


Quick note on Real-time Event
Driven Schedulers
A foreground-background scheduler (FBS) is the simplest priority-driven preemptive scheduler.

Foreground (FG) tasks - RT periodic tasks
Background (BG) tasks - sporadic, aperiodic and non-RT tasks

A BG task can run only when none of the FG tasks is ready to run; BG tasks are treated as low-priority tasks.

Let there be n FG tasks. Let TB be the only BG task and eB the execution time of the BG task. Then the completion time CTB of the BG task is:

CTB = eB / (1 − Σi=1..n (ei/pi))
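The expression can be computed directly; a sketch where fg is a list of hypothetical (ei, pi) pairs for the foreground tasks:

```python
def bg_completion_time(e_bg, fg):
    """CT_B = e_B / (1 - sum(e_i / p_i)): the background task only
    receives the CPU share left over by the foreground tasks."""
    u_fg = sum(e / p for e, p in fg)
    if u_fg >= 1:
        raise ValueError("foreground tasks saturate the CPU")
    return e_bg / (1 - u_fg)
```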
Quick note on Real-time Event Driven Schedulers

The above expression means the following:

When any FG task is currently executing, the BG task waits.

The average CPU utilization due to an FG task Ti is ei/pi, so all FG tasks together consume Σi=1..n (ei/pi). Thus, the time available to BG tasks in every unit of time is 1 − Σi=1..n (ei/pi).


Precedence constraints – Static approach

n  List Scheduling Algorithm - many DAG scheduling algorithms use this idea.

Some scheduling algorithms that consider inter-task communication assume the availability of an unlimited number of processors - UNC (unbounded number of clusters) scheduling algorithms.

Other algorithms assume a limited number of processors - BNP (bounded number of processors) scheduling algorithms.


List scheduling – Static approach (cont’d)…

The basic idea of list scheduling is to make a scheduling list (a sequence of nodes for scheduling) by assigning the nodes priorities, and then repeatedly execute the following two steps until all the nodes in the graph are scheduled:

1) Remove the first (ready) node from the scheduling list;
2) Allocate the node to the processor that allows the earliest start time.

How do we decide the priorities?


List scheduling (cont’d)…

n  There are various ways to determine the priorities of nodes, such as HLF (Highest Level First), LP (Longest Path), LPT (Longest Processing Time) and CP (Critical Path).

n  In a traditional scheduling algorithm, the scheduling list is statically constructed before node allocation begins, and, most importantly, the sequencing of the list is not modified.

n  In contrast, after each allocation, recent algorithms re-compute the priorities of all unscheduled nodes, which are then used to rearrange the sequencing of the nodes in the list.


List scheduling (cont’d)…

The following three-step approach is used:
1) Determine new priorities of all unscheduled nodes;
2) Select the node with the highest priority for scheduling;
3) Allocate the node to the processor that allows the earliest start time.

How do we re-compute the priorities? Scheduling algorithms that employ this three-step approach can potentially generate better schedules.


List scheduling (cont’d)…

n  Two frequently used attributes for assigning priority are the t-level (top level) and the b-level (bottom level).
n  What is the t-level?
The t-level of a node n is the length of a longest path (there can be more than one longest path) from an entry node to, but excluding, node n. Here, the length of a path is the sum of all the node and edge weights along the path.

As such, the t-level of ni highly correlates with its earliest start time Ts(ni), which is determined after ni is scheduled to a processor: after scheduling, it is simply the length of the longest path reaching it.
List scheduling (cont’d)…

n  What is the b-level?

The b-level of a node n is the length of a longest path from n to an exit node. The b-level of a node is bounded from above by the length of a critical path.

A critical path (CP) of a DAG - an important structure in the DAG - is a longest path in the DAG. Clearly, a DAG can have more than one CP.


List scheduling (cont’d)…

Different algorithms use the t-level and b-level in different ways. Some assign a higher priority to a node with a smaller t-level, some to a node with a larger b-level, and still others to a node with a larger (b-level − t-level).

In general, scheduling in descending order of b-level tends to schedule critical-path nodes first, while scheduling in ascending order of t-level tends to schedule nodes in topological order.
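Both levels are longest-path computations over the DAG; a sketch where nodes maps name → weight and edges maps (u, v) → communication cost, with node names assumed to be in topological (alphabetical) order so a single sorted pass suffices:

```python
def t_levels(nodes, edges):
    """t-level of n: longest path from an entry node up to, but
    excluding, n (sum of node and edge weights along the path)."""
    t = {n: 0 for n in nodes}
    for (u, v), c in sorted(edges.items()):   # topological edge order
        t[v] = max(t[v], t[u] + nodes[u] + c)
    return t

def b_levels(nodes, edges):
    """b-level of n: longest path from n to an exit node, including
    n's own weight; its maximum equals the critical-path length."""
    b = dict(nodes)
    for (u, v), c in sorted(edges.items(), reverse=True):
        b[u] = max(b[u], nodes[u] + c + b[v])
    return b
```

Scheduling in descending b-level order then favors critical-path nodes, as noted above.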
Example - List scheduling (cont’d)…

SL - static level - the path length from a node to an exit node using only execution times along the longest execution path.
Final remarks: Achieving predictability

n  The operating system is the part most responsible for predictable behavior.
n  Concurrency control must be enforced by:
n  appropriate scheduling algorithms
n  appropriate synchronization protocols
n  efficient communication mechanisms
n  predictable interrupt handling

Chapter 6 - we will see some deadlock and synchronization protocols and interrupt handling mechanisms.
