

MODULE 4
Task constraints. Task scheduling: Aperiodic task scheduling: EDD, EDF, LDF, EDF with precedence constraints.
Periodic task scheduling: Rate monotonic and Deadline monotonic. Real-time kernel: structure, state transition diagram, kernel primitives.

Prepared by Divya Harikumar, Asst. Professor, SCTCE


TASK CONSTRAINTS

 Typical constraints that can be specified on real-time tasks are of three classes:
 Timing constraints,
 Precedence relations, and
 Mutual exclusion constraints on shared resources.

Timing constraints
 Real-time systems are characterized by computational activities with
stringent timing constraints that must be met in order to achieve the desired
behavior.
 A typical timing constraint on a task is the deadline, which represents the
time before which a process should complete its execution without causing
any damage to the system.
 Depending on the consequences of a missed deadline, real-time tasks are usually distinguished into two classes:
 Hard : A task is said to be hard if a completion after its deadline can
cause catastrophic consequences on the system. In this case, any instance
of the task should a priori be guaranteed in the worst-case scenario.
 Soft : A task is said to be soft if missing its deadline decreases the
performance of the system but does not threaten its correct behavior.
In general, a real-time task Ji can be characterized by the following parameters:
Arrival time ai :
 It is the time at which a task becomes ready for execution; it is also referred as
request time or release time and indicated by ri;
Computation time Ci:
 It is the time necessary to the processor for executing the task without
interruption;
Deadline di:
 It is the time before which a task should be complete to avoid damage to the
system;
Start time si :
 It is the time at which a task starts its execution;
Finishing time fi :
 It is the time at which a task finishes its execution;
Criticalness:
 It is a parameter related to the consequences of missing the deadline
(typically, it can be hard or soft);
Value Vi :
 It represents the relative importance of the task with respect to the other
tasks in the system;
Lateness Li :
 Li = fi - di represents the delay of a task completion with respect to its
deadline; note that if a task completes before the deadline, its lateness is
negative;
Tardiness or Exceeding time Ei:
 Ei = max{0, Li} is the time a task stays active after its deadline;
Laxity or Slack time Xi :
 Xi = di - ai - Ci is the maximum time a task can be delayed on its activation to
complete within its deadline.
 Tasks can be defined as periodic or aperiodic.
 Periodic tasks consist of an infinite sequence of identical activities, called instances
or jobs, that are regularly activated at a constant rate.
 For the sake of clarity, from now on, periodic task will be denoted by τi , whereas
an aperiodic job by Jj.
 The activation time of the first periodic instance is called phase.
 If φi is the phase of the periodic task τi , the activation time of the kth instance is
given by
φi + (k - 1)Ti, where Ti is called period of the task.
 The parameters Ci , Ti and Di are considered to be constant for each instance.
 Aperiodic tasks also consist of an infinite sequence of identical activities
(instances); however, their activations are not regular.
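The activation-time formula above can be illustrated with a tiny sketch; the phase and period values below are hypothetical:

```python
# Activation time of the k-th instance of a periodic task:
# a_{i,k} = phi_i + (k - 1) * T_i, with phase phi_i and period T_i.
phi_i, T_i = 2, 10  # hypothetical phase and period

def activation(k):
    """Return the activation time of the k-th instance (k >= 1)."""
    return phi_i + (k - 1) * T_i
```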

Precedence constraints

 In certain applications, computational activities have to respect some precedence relations defined at the design stage.
 Such precedence relations are usually described through a directed acyclic
graph G, where tasks are represented by nodes and precedence relations by
arrows.
 A precedence graph G induces a partial order on the task set.


 The figure illustrates a directed acyclic graph that describes the precedence constraints among five tasks.
 From the graph structure we observe that task J1 is the only one that can start
executing since it does not have predecessors.
 Tasks with no predecessors are called beginning tasks.
 As J1 is completed, either J2 or J3 can start.
 Task J4 can start only when J2 is completed, whereas J5 must wait for the completion of J2 and J3.
 Tasks with no successors, as J4 and J5, are called ending tasks.

Resource constraints

 From a process point of view, a resource is any software structure that can be
used by the process to advance its execution.
 Typically, a resource can be a data structure, a set of variables, a main
memory area, a file, a piece of program, or a set of registers of a peripheral
device.
 A resource dedicated to a particular process is said to be private, whereas a resource that can be used by multiple tasks is called a shared resource.
 To maintain data consistency, many shared resources do not allow
simultaneous accesses but require mutual exclusion among competing tasks.
They are called exclusive resources.


 Let R be an exclusive resource shared by tasks Ja and Jb. If A is the operation performed on R by Ja, and B is the operation performed on R by Jb, then A and B must never be executed at the same time.
 A piece of code executed under mutual exclusion constraints is called a
critical section.
 Synchronization mechanisms (such as semaphores) can be used by tasks to create critical sections of code.
 Hence, when we say that two or more tasks have resource constraints, we
mean that they have to be synchronized since they share exclusive resources.


Consider two tasks J1 and J2 that share an exclusive resource R on which two operations (such as
insert and remove) are defined.
The code implementing such operations is thus a critical section that must be executed in mutual
exclusion.
If a binary semaphore s is used for this purpose, then each critical section must begin with a wait(s)
primitive and must end with a signal(s) primitive.
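A minimal sketch of this pattern using Python's threading primitives; the insert/remove operations and the shared list are illustrative, not from the text:

```python
import threading

s = threading.Semaphore(1)  # binary semaphore protecting the shared resource R
R = []                      # the shared exclusive resource (a list, for illustration)

def insert(item):
    s.acquire()        # wait(s): enter the critical section
    R.append(item)     # operation A on R
    s.release()        # signal(s): leave the critical section

def remove():
    s.acquire()                    # wait(s)
    item = R.pop() if R else None  # operation B on R
    s.release()                    # signal(s)
    return item
```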
 If preemption is allowed and J1 has a higher priority than J2, then J1 can block in the situation depicted in the figure.
 Here, task J2 is activated first, and, after a while, it enters the critical section and locks the
semaphore.
 While J2 is executing the critical section, task J1 arrives, and, since it has a higher priority,
it preempts J2 and starts executing.



 However, at time t1, when attempting to enter its critical section, J1 is blocked on the semaphore and J2 is resumed.
 J1 is blocked until time t2, when J2 releases the critical section by executing the
signal(s) primitive, which unlocks the semaphore.
 A task waiting for an exclusive resource is said to be blocked on that resource.
 All tasks blocked on the same resource are kept in a queue associated with the semaphore,
which protects the resource.
 When a running task executes a wait primitive on a locked semaphore, it enters a waiting
state, until another task executes a signal primitive that unlocks the semaphore.
 When a task leaves the waiting state, it does not go into the running state, but into the ready state, so that the CPU can be assigned to the highest-priority task by the scheduling algorithm.
 The state transition diagram relative to the situation described above is shown in the figure.
SCHEDULING PROBLEMS

 To define a scheduling problem we need to specify three sets:
 a set of n tasks J = {J1, J2, ..., Jn},
 a set of m processors P = {P1, P2, ..., Pm}, and
 a set of s types of resources R = {R1, R2, ..., Rs}.
 Scheduling means to assign processors from P and resources from R to tasks
from J in order to complete all tasks under the imposed constraints.

Classification of scheduling algorithms

Aperiodic task scheduling

JACKSON'S ALGORITHM / EDD ALGORITHM


 The problem considered by this algorithm is 1 | sync | Lmax
 That is, a set J of n aperiodic tasks has to be scheduled on a single processor,
minimizing the maximum lateness.
 All tasks consist of a single job, have synchronous arrival times, but can have
different computation times and deadlines.
 No other constraints are considered; hence, tasks must be independent: they cannot have precedence relations and cannot share resources in exclusive mode.
 Since all tasks arrive at the same time, preemption is not an issue in this
problem.

 Assume that all tasks are activated at time t = 0, so that each job Ji can be completely characterized by two parameters:
• a computation time Ci and
• a relative deadline Di (which, in this case, is also equal to the absolute deadline).
 Thus, the task set J can be denoted as J = {Ji(Ci, Di), i = 1, ..., n}.
 A simple algorithm that solves this problem was found by Jackson. It is called Earliest Due Date (EDD) and can be expressed by Jackson's rule:
JACKSON'S RULE: Given a set of n independent tasks, any algorithm that executes the tasks in order of nondecreasing deadlines is optimal with respect to minimizing the maximum lateness.

 As illustrated in the figure, interchanging the position of Ja and Jb in σ cannot increase the maximum lateness.
 The maximum lateness between Ja and Jb in σ is Lmax(a,b) = fa − da, whereas the maximum lateness between Ja and Jb in σ' can be written as L'max(a,b) = max{La', Lb'}.
 Two cases must be considered:

 The complexity required by Jackson's algorithm to build the optimal schedule is due to the procedure that sorts the tasks by increasing deadlines.
 Hence, if the task set consists of n tasks, the complexity of the EDD algorithm is O(n log n).
 EXAMPLE 1
Consider a set of five tasks, simultaneously activated at time t = 0, whose parameters (worst-case computation times and deadlines) are indicated in the table shown in the figure. Find the maximum lateness.
 The schedule of the tasks produced by the EDD algorithm is also depicted in the figure below. The maximum lateness is equal to −1 and is due to task J4, which completes one unit of time before its deadline.
Since the maximum lateness is negative, we can conclude that all tasks have
been executed within their deadlines.


 Figure illustrates an example in which the task set cannot be feasibly scheduled.
 Still, however, EDD produces the optimal schedule that minimizes the maximum
lateness. Notice that, since J4 misses its deadline, the maximum lateness is greater than zero (Lmax = L4 = 2).



 To guarantee that a set of tasks can be feasibly scheduled by the EDD algorithm, we need to show that, in the worst case, all tasks can complete before their deadlines.
 This means that we have to show that, for each task, the worst-case finishing time fi is less than or equal to its deadline di: fi ≤ di for all i = 1, ..., n.
 If tasks have hard timing requirements, such a schedulability analysis must be done before the actual execution of the tasks.
 Without loss of generality, we can assume that tasks J1, J2, ..., Jn are listed by increasing deadlines, so that J1 is the task with the earliest deadline.
 In this case, the worst-case finishing time of task Ji can be easily computed as fi = Σk=1..i Ck.
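The EDD schedule and its guarantee test can be sketched in a few lines; the task parameters below are assumed for illustration (five tasks with synchronous arrivals):

```python
# Jackson's EDD rule: execute tasks in order of nondecreasing deadlines.
# Each task is a (C_i, D_i) pair; all tasks arrive at t = 0.
tasks = [(1, 3), (1, 10), (1, 7), (3, 8), (2, 5)]  # hypothetical values

schedule = sorted(tasks, key=lambda t: t[1])  # nondecreasing deadlines

finish, lateness = 0, []
for C, D in schedule:
    finish += C                   # f_i = sum of C_k, k = 1..i
    lateness.append(finish - D)   # L_i = f_i - d_i

L_max = max(lateness)
feasible = L_max <= 0             # feasible iff f_i <= d_i for every i
```

With these values the maximum lateness is negative, so every task meets its deadline.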




HORN'S ALGORITHM

 If tasks are not synchronous but can have arbitrary arrival times, then
preemption becomes an important factor.
 If preemption is allowed, a task can be interrupted if a more important task
arrives.
 Horn found an elegant solution to the problem of scheduling a set of n independent tasks on a uniprocessor system, when tasks may have dynamic arrivals and preemption is allowed (1 | preem | Lmax).


 The algorithm, called Earliest Deadline First (EDF), can be expressed by the following theorem: given a set of n independent tasks with arbitrary arrival times, any algorithm that at any instant executes the task with the earliest absolute deadline among the ready tasks is optimal with respect to minimizing the maximum lateness.

• The complexity of the algorithm is O(n) per task, since inserting a newly arrived task into an ordered queue (the ready queue) of n elements may require up to n steps.
• Hence, the overall complexity of EDF for the whole task set is O(n²).



EDF optimality

 If there exists a feasible schedule for a task set J, then EDF is able to find it.
 The proof can easily be extended to show that EDF also minimizes the
maximum lateness.
 An algorithm that minimizes the maximum lateness is also optimal in the sense of feasibility; the converse is not true.
 Let σ be the schedule produced by a generic algorithm A and let σEDF be the
schedule obtained by the EDF algorithm.
 Since preemption is allowed, each task can be executed in disjoint time intervals.
 Without loss of generality, the schedule σ can be divided into time slices of
one unit of time each.


 To simplify the formulation of the proof, the following abbreviations are defined:

Figure: Proof of the optimality of the EDF algorithm. (a) Schedule σ at time t = 4. (b) New schedule obtained after a transposition.
 For each time slice t, the algorithm verifies whether the task σ(t) scheduled in the slice t is the one with the earliest deadline, E(t).
 If it is, nothing is done, otherwise a transposition takes place and the slices at
t and tE are exchanged (see Figure).
 In particular, the slice of task E(t) is anticipated at time t, while the slice of task σ(t) is postponed at time tE.
 After each transposition the maximum lateness cannot increase; therefore,
EDF is optimal.



 In the figure, the first transposition occurs at time t = 4.
 At this time, in fact, the CPU is assigned to J4, but the task with the earliest deadline
is J2, which is scheduled at time tE = 6. As a consequence, the two slices in gray are
exchanged and the resulting schedule is shown in Figure b.
 The algorithm examines all slices, until t = D, performing a slice exchange when necessary.

Transformation algorithm used by Dertouzos to prove the optimality of EDF.



 If a slice of Ji is postponed at tE and σ is feasible, it must be that (tE + 1) ≤ dE, dE being the earliest deadline.
 Since dE ≤ di for any i, we have (tE + 1) ≤ di, which guarantees the schedulability of the slice postponed at tE.



Example

 An example of a schedule produced by the EDF algorithm on a set of five tasks is shown in the figure.
 At time t = 0, tasks J1 and J2 arrive and, since d1 < d2, the processor is assigned to
J1, which completes at time t = 1.
 At time t = 2, when J2 is executing, task J3 arrives and preempts J2, being d3 < d2.



 At time t = 3, the arrival of J4 does not interrupt the execution of J3, because d3 < d4.
 As J3 is completed, the processor is assigned to J2, which resumes and executes until
completion.
 Then J4 starts at t = 5, but, at time t = 6, it is preempted by J5, which has an earlier
deadline.
 Task J4 resumes at time t = 8, when J5 is completed.
 Notice that all tasks meet their deadlines and the maximum lateness is Lmax = L2 = 0.
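The schedule above can be reproduced with a small unit-slice simulation; the arrival times, computation times, and absolute deadlines below are assumed values consistent with the narrative:

```python
import heapq

# Preemptive EDF simulated over unit time slices.
# Each task: (arrival a_i, computation C_i, absolute deadline d_i).
tasks = [(0, 1, 2), (0, 2, 5), (2, 2, 4), (3, 2, 10), (6, 2, 9)]  # hypothetical

t, ready, lateness = 0, [], {}
pending = sorted(range(len(tasks)), key=lambda i: tasks[i][0])
remaining = [C for _, C, _ in tasks]

while pending or ready:
    # release tasks that have arrived by time t, keyed by absolute deadline
    while pending and tasks[pending[0]][0] <= t:
        i = pending.pop(0)
        heapq.heappush(ready, (tasks[i][2], i))
    if ready:
        d, i = ready[0]        # earliest-deadline ready task runs (preempting others)
        remaining[i] -= 1
        if remaining[i] == 0:
            heapq.heappop(ready)
            lateness[i] = (t + 1) - d   # L_i = f_i - d_i
    t += 1

L_max = max(lateness.values())
```

With these values the simulation reproduces the narrated behavior: all five tasks complete, and the maximum lateness is zero.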



Guarantee

 When tasks have dynamic activations and the arrival times are not known a
priori, the guarantee test has to be done dynamically, whenever a new task
enters the system.
 Let J be the current set of active tasks, which have been previously
guaranteed, and let Jnew be a newly arrived task.
 In order to accept Jnew in the system we have to guarantee that the new task set J' = J ∪ {Jnew} is also schedulable.
 To guarantee that the set J' is feasibly schedulable by EDF, we need to show
that, in the worst case, all tasks in J' will complete before their deadlines.
 This means that we have to show that, for each task, the worst-case finishing
time fi is less than or equal to its deadline di.
 We can assume that all tasks in J' (including Jnew) are ordered by increasing deadlines, so that J1 is the task with the earliest deadline.
 Moreover, since tasks are pre-emptable, when Jnew arrives at time t some
tasks could have been partially executed.
 Thus, let Ci(t) be the remaining worst-case execution time of task Ji.
 Hence, at time t, the worst-case finishing time of task Ji can be easily computed as fi = t + Σk=1..i Ck(t), and the guarantee test reduces to verifying fi ≤ di for each task in J'.
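A sketch of this online acceptance test; the function name and the task tuples are illustrative, not from the text:

```python
# EDF guarantee (acceptance) test at arrival time t.
# 'active' holds (d_i, c_i) pairs for previously guaranteed tasks, where c_i
# is the remaining worst-case execution time at time t; 'new' is the newcomer.
def edf_guarantee(t, active, new):
    ordered = sorted(active + [new])  # increasing absolute deadlines
    f = t
    for d, c in ordered:
        f += c           # worst-case finishing time: f_i = t + sum of c_k, k <= i
        if f > d:
            return False  # some task would miss its deadline: reject the newcomer
    return True

# Hypothetical scenario: at t = 2, two guaranteed tasks and one arrival.
accepted = edf_guarantee(2, [(6, 2), (10, 3)], (8, 1))
```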


SCHEDULING WITH PRECEDENCE CONSTRAINTS

 Two algorithms are considered that minimize the maximum lateness, assuming synchronous activations and preemptive scheduling, respectively:
 Latest Deadline First (1 | prec, sync | Lmax)
 Earliest Deadline First (1 | prec, preem | Lmax)



LATEST DEADLINE FIRST (LDF)

 Lawler presented an optimal algorithm that minimizes the maximum lateness of a set
of tasks with precedence relations and simultaneous arrival times.
 The algorithm is called Latest Deadline First (LDF) and can be executed in polynomial
time with respect to the number of tasks in the set.
 Given a set J of n tasks and a directed acyclic graph (DAG) describing their
precedence relations.
 LDF builds the scheduling queue from tail to head: among the tasks without
successors or whose successors have been all selected, LDF selects the task with the
latest deadline to be scheduled last.
 This procedure is repeated until all tasks in the set are selected.
 At run time, tasks are extracted from the head of the queue, so that the first task
inserted in the queue will be executed last, whereas the last task inserted in the queue
will be executed first.
 Let J be the complete set of tasks to be scheduled, let 𝜞 ⊆ J be the subset of tasks without successors, and let Jl be the task in 𝜞 with the latest deadline dl.
 If σ is any schedule that does not follow the LDF rule, then the last scheduled task, say Jk, will not be the one with the latest deadline; thus dk ≤ dl.
 Since Jl is scheduled before Jk, let us partition 𝜞 into four subsets, so that 𝜞 = AU {Jl}
U B U {Jk}
 In σ, the maximum lateness for 𝜞 is greater than or equal to Lk = f − dk, where f is the finishing time of task Jk.
 We show that moving Jl to the end of the schedule cannot increase the maximum lateness in 𝜞, which proves the optimality of LDF.
 To do that, let σ* be the schedule obtained from σ after moving task Jl to the end of the queue and shifting all other tasks to the left.

 In σ*, the maximum lateness for 𝜞 is given by


 Moving Jl to the end of the schedule does not increase the maximum lateness in 𝜞.
 This means that scheduling last the task Jl with the latest deadline minimizes the maximum lateness in 𝜞.
 Then, removing this task from J and repeating the argument for the remaining n - 1
tasks in the set J - {Jl}, LDF can find the second-to-last task in the schedule, and so on.
 The complexity of the LDF algorithm is O(n²), since for each of the n steps it needs to visit the precedence graph to find the subset 𝜞 of tasks with no successors.
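The tail-to-head construction can be sketched as follows; the deadlines and the precedence graph are hypothetical:

```python
# LDF: among tasks whose successors have all been selected, pick the one
# with the latest deadline to be scheduled last; repeat until done.
d = {1: 2, 2: 5, 3: 4, 4: 3, 5: 5, 6: 6}                    # deadlines (assumed)
succ = {1: [2, 3], 2: [4, 5], 3: [6], 4: [], 5: [], 6: []}  # J1->J2, J1->J3, ...

remaining = set(d)
tail = []  # schedule built from tail to head
while remaining:
    candidates = [t for t in remaining
                  if all(s not in remaining for s in succ[t])]
    last = max(candidates, key=lambda t: d[t])  # latest deadline goes last
    tail.append(last)
    remaining.remove(last)

schedule = tail[::-1]  # execution order, head first
```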



 Consider the example depicted in the figure, which shows the parameters of six tasks together with their precedence graph.


EDF with precedence constraints

 The problem of scheduling a set of n tasks with precedence constraints and dynamic activations can be solved in polynomial time complexity only if tasks are preemptable.
 The basic idea is to transform a set J of dependent tasks into a set J* of
independent tasks by an adequate modification of timing parameters.
 Then, tasks are scheduled by the Earliest Deadline First (EDF) algorithm. The
transformation algorithm ensures that J is schedulable and the precedence
constraints are obeyed if and only if J* is schedulable.
 Basically, all release times and deadlines are modified so that each task cannot
start before its predecessors and cannot preempt their successors.



Modification of the release times
 The rule for modifying tasks' release times is based on the following observation:
 Given two tasks Ja and Jb, such that Ja → Jb, then in any valid schedule that meets the precedence constraints, the following conditions must be satisfied:
 sb ≥ rb (Jb cannot start before its release time), and
 sb ≥ ra + Ca (Jb cannot start before the minimum finishing time of Ja).
 If Ja → Jb , then the release time of Jb can be replaced by max(rb , ra + Ca)
 Let rb* be the new release time of Jb. Then, rb* = max(rb, ra + Ca).


Modification of the deadlines
 The rule for modifying tasks' deadlines is based on the following observation:
 Given two tasks Ja and Jb, such that Ja → Jb, then in any feasible schedule that meets the precedence constraints, the following conditions must be satisfied:
 fa ≤ da (Ja must finish within its deadline), and
 fa ≤ db − Cb (Ja must finish early enough to allow Jb to complete within its deadline).


 Therefore, the deadline da of Ja can be replaced by the minimum between da and (db − Cb) without changing the problem.
 Let da* be the new deadline of Ja. Then, da* = min(da, db − Cb).
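Both modification rules can be sketched together as a forward pass over release times and a backward pass over deadlines; the adjacency lists, topological order, and parameter values below are assumed for illustration:

```python
# Transform dependent tasks into independent ones by adjusting r and d:
#   forward pass:  r_b* = max(r_b, r_a* + C_a) over all predecessors a of b
#   backward pass: d_a* = min(d_a, d_b* - C_b) over all successors b of a
def transform(r, C, d, succ, pred, topo):
    r_star, d_star = dict(r), dict(d)
    for b in topo:                       # predecessors processed before b
        for a in pred[b]:
            r_star[b] = max(r_star[b], r_star[a] + C[a])
    for a in reversed(topo):             # successors processed before a
        for b in succ[a]:
            d_star[a] = min(d_star[a], d_star[b] - C[b])
    return r_star, d_star

# Hypothetical pair J1 -> J2 where J2 arrives first with an earlier deadline:
r, C, d = {1: 2, 2: 0}, {1: 2, 2: 2}, {1: 7, 2: 5}
succ, pred, topo = {1: [2], 2: []}, {1: [], 2: [1]}, [1, 2]
r_star, d_star = transform(r, C, d, succ, pred, topo)
# After the transformation, EDF executes J1 first (d1* < d2*), so J1 -> J2 is respected.
```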


Proof of optimality
 To show that precedence relations in J are not violated, consider the example shown in the figure, where J1 must precede J2 (i.e., J1 → J2), but J2 arrives before J1 and has an earlier deadline.
 Clearly, if the two tasks are executed under EDF, their precedence relation cannot be met.


 If we apply the transformation algorithm, the time constraints are modified as follows:

PERIODIC TASK SCHEDULING

 When a control application consists of several concurrent periodic tasks with individual timing constraints, the operating system has to guarantee that each periodic instance is regularly activated at its proper rate and is completed within its deadline.
 Basic algorithms for handling periodic tasks are:
 Rate Monotonic
 Deadline Monotonic

Assumptions

 The classical schedulability analysis relies on the following assumptions: all instances of a periodic task τi are regularly activated with a constant period Ti; all instances have the same worst-case computation time Ci; relative deadlines equal periods (Di = Ti); and tasks are independent, with no precedence relations and no resource constraints.
 A periodic task τi is said to be feasible if all its instances finish within their deadlines.
 A task set 𝜞 is said to be schedulable (or feasible) if all tasks in 𝜞 are feasible.
Processor utilization factor

Given a set 𝜞 of n periodic tasks, the processor utilization factor U is the fraction of processor time spent in the execution of the task set: U = Σi=1..n Ci/Ti.



 The processor utilization factor provides a measure of the computational load on the CPU due to the periodic task set.
 Although the CPU utilization can be increased by increasing tasks' computation times or by decreasing their periods, there exists a maximum value of U below which 𝜞 is schedulable and above which 𝜞 is not schedulable.
 Let Uub (𝜞 , A) be the upper bound of the processor utilization factor for a
task set 𝜞 under a given algorithm A.
 When U = Uub (𝜞 , A) the set 𝜞 is said to fully utilize the processor.
 In this situation, 𝜞 is schedulable by A, but an increase in the computation
time in any of the tasks will make the set infeasible.



 For a given algorithm A, the least upper bound Ulub(A) of the processor utilization factor is the minimum of the utilization factors over all task sets that fully utilize the processor: Ulub(A) = min{Uub(𝜞, A) : 𝜞 fully utilizes the processor}.


 Ulub defines an important characteristic of a scheduling algorithm, useful for easily verifying the schedulability of a task set.
 Any task set whose processor utilization factor is less than or equal to Ulub is
schedulable by the algorithm.
 On the other hand, when Ulub < U ≤ 1.0, the schedulability can be achieved
only if the task periods are suitably related.
 If the utilization factor of a task set is greater than 1, the task set cannot be
scheduled by any algorithm.



RATE MONOTONIC SCHEDULING

 The Rate Monotonic (RM) scheduling algorithm is a simple rule that assigns
priorities to tasks according to their request rates.
 Specifically, tasks with higher request rates (that is, with shorter periods) will have
higher priorities.
 Since periods are constant, RM is a fixed-priority assignment.
 Priorities are assigned to tasks before execution and do not change over time.
 RM is intrinsically preemptive: the currently executing task is preempted by a newly
arrived task with shorter period.
 RM is optimal among all fixed-priority assignments, in the sense that no other fixed-priority algorithm can schedule a task set that cannot be scheduled by RM.



Optimality of RM Scheduling

 In order to prove the optimality of the RM algorithm, we first show that a critical
instant for any task occurs whenever the task is released simultaneously with all
higher-priority tasks.
 Let 𝜞= {τ1, τ2,... , τn} be the set of periodic tasks ordered by increasing periods, with τn
being the task with the longest period.
 According to RM, τn will also be the task with the lowest priority.
 The response time of task τn is delayed by the interference of τi with higher priority.
 Advancing the release time of τi may increase the completion time of τn.
 The response time of τn is largest (worst) when it is released simultaneously with τi.

 Consider a set of two periodic tasks τ1 and τ2, with T1 < T2.
 If priorities are not assigned according to RM, then task τ2 will receive the highest priority.
 At critical instants, the schedule is feasible if the following inequality is satisfied: C1 + C2 ≤ T1.



 On the other hand, if priorities are assigned according to RM, task τ1 will receive the highest priority.
 In this situation, in order to guarantee a feasible schedule two cases must be considered.
 CASE 1:
 Let F = ⌊T2/T1⌋ be the number of periods of τ1 entirely contained in T2.



 CASE 2:

Given two periodic tasks τ1 and τ2 , with T1< T2, if the schedule is feasible by an
arbitrary priority assignment, then it is also feasible by RM. That is, RM is
optimal.



 The Rate Monotonic algorithm has been proved to be optimal among all fixed-priority assignments, in the sense that no other fixed-priority algorithm can schedule a task set that cannot be scheduled by RM.
 Despite being optimal, RM has a limitation: the schedulable CPU utilization is bounded, so it is not always possible to fully utilize the CPU.
 For a set of n periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization is below a specific bound (depending on the number of tasks). The schedulability test for RM is:

U = Σi=1..n Ci/Ti ≤ n(2^(1/n) − 1)

 For example, U ≤ 0.8284 for two processes.


 When the number of processes tends towards infinity, this bound tends towards ln 2 ≈ 0.693.
 Therefore, a rough estimate for large task sets (n ≥ 10) is that RM can meet all of the deadlines if total CPU utilization, U, is less than roughly 69%. The remaining CPU capacity can be dedicated to lower-priority, non-real-time tasks.
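The utilization-based test can be checked directly; the task set below is hypothetical:

```python
# RM sufficient schedulability test: U <= n * (2^(1/n) - 1).
tasks = [(1, 4), (2, 6), (1, 10)]  # (C_i, T_i), hypothetical periodic tasks

n = len(tasks)
U = sum(C / T for C, T in tasks)      # processor utilization factor
U_lub = n * (2 ** (1 / n) - 1)        # least upper bound for RM

schedulable = U <= U_lub  # sufficient, not necessary: failing it proves nothing
```

Note that this condition is only sufficient: a task set that fails it may still be schedulable if the periods are suitably related.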
 The combined utilization of the three processes is less than the RM threshold for three processes, which means the task set is schedulable and satisfies the schedulability test of the algorithm.



 Process P2 will execute two times for every 5 time units, process P3 will execute two times for every 10 time units, and process P1 will execute three times in 20 time units. This has to be kept in mind for understanding the execution trace of the algorithm below.

 Disadvantages:
 It is very difficult to support aperiodic tasks alongside periodic tasks under RM.
 RM is not optimal when task periods and deadlines differ.



DEADLINE MONOTONIC

 The Deadline Monotonic (DM) priority assignment weakens the "period equals
deadline" constraint within a static priority scheduling scheme.
 This algorithm was proposed as an extension of Rate Monotonic where tasks can have
a relative deadline less than their period.
 Specifically, each periodic task τi is characterized by four parameters: a phase φi, a worst-case computation time Ci, a relative deadline Di, and a period Ti, with Ci ≤ Di ≤ Ti.
 According to the DM algorithm, each task is assigned a priority inversely proportional to its relative deadline.
 Thus, at any instant, the task with the shortest relative deadline is executed.
 Since relative deadlines are constant, DM is a static priority assignment.
 As RM, DM is preemptive; that is, the currently executing task is preempted
by a newly arrived task with shorter relative deadline.
 The Deadline-Monotonic priority assignment is optimal, meaning that if any
static priority scheduling algorithm can schedule a set of tasks with deadlines
unequal to their periods, then DM will also schedule that task set.



Schedulability Analysis

 The feasibility of a set of tasks with deadlines unequal to their periods could be guaranteed using the Rate Monotonic schedulability test, by reducing tasks' periods to relative deadlines:

Σi=1..n Ci/Di ≤ n(2^(1/n) − 1)
This test is sufficient but not necessary for guaranteeing the schedulability of the task set, since the actual interference can be smaller than Ii, as tasks may terminate earlier.


 To find a sufficient and necessary schedulability test for DM, the exact interleaving of
higher-priority tasks must be evaluated for each process.
 In general, this procedure is quite costly since, for each task τi , it requires the
construction of the schedule until Di.
 Hence, an efficient method was proposed for evaluating the exact interference on periodic tasks, from which a sufficient and necessary schedulability test for DM was derived.



Sufficient and necessary schedulability test

 The longest response time Ri of a periodic task τi is computed, at the critical instant, as the sum of its computation time and the interference Ii due to preemption by higher-priority tasks:

Ri = Ci + Ii, where Ii = Σj=1..i−1 ⌈Ri/Tj⌉ Cj   (1)



 The worst-case response time of task τi is given by the smallest value of Ri that satisfies equation (1).
 Let Rik be the kth estimate of Ri and let Iik be the interference on task τi in the interval [0, Rik]. The iteration starts from Ri0 = Ci and computes Rik = Ci + Iik−1, stopping when Rik = Rik−1.
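The fixed-point iteration can be sketched as follows; the task parameters are assumed, with tasks listed in DM priority order (increasing relative deadline):

```python
import math

# Iterative response-time analysis: R_i^(0) = C_i,
# R_i^(k) = C_i + sum over higher-priority j of ceil(R_i^(k-1) / T_j) * C_j.
tasks = [(1, 4, 3), (1, 5, 4), (2, 6, 5), (1, 11, 10)]  # (C_i, T_i, D_i), assumed

def response_time(i):
    C_i = tasks[i][0]
    R = C_i                                # initial estimate R_i^(0)
    while True:                            # converges when total utilization < 1
        interference = sum(math.ceil(R / T_j) * C_j
                           for C_j, T_j, _ in tasks[:i])
        R_next = C_i + interference
        if R_next == R:                    # fixed point: smallest R satisfying (1)
            return R
        R = R_next

schedulable = all(response_time(i) <= tasks[i][2] for i in range(len(tasks)))
```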

Example

In order to guarantee τ4, we have to calculate R4 and verify that R4 ≤ D4.

 Since R4 ≤ D4, τ4 is schedulable within its deadline.
 If Ri ≤ Di for all tasks, we conclude that the task set is schedulable by DM.
