
Module 4

Task constraints and Task scheduling


TASK CONSTRAINTS

• Constraints that can be specified on real-time tasks fall into three classes:

1. timing constraints,
2. precedence relations, and
3. mutual exclusion constraints on shared resources.
Timing constraints

• Real-time systems have strict timing constraints that characterize their computational activities.

• Timing constraints must be met in order to achieve the expected performance.

• A typical timing constraint on a task is the deadline, which represents the time before which a
process should complete its execution without causing any damage to the system.

• Depending on the consequences of a missed deadline, real-time tasks are classified into two classes:

• Hard: a task is said to be hard if completing after its deadline can cause catastrophic consequences on the system.

• Soft: a task is said to be soft if missing its deadline decreases the performance of the system but does not threaten its correct behavior.
Parameters of a Real-time Task

• Arrival time (ai): the time at which a task becomes ready for execution; also referred to as request time or release time.
• Computation time (Ci): the time necessary for the processor to execute the task without interruption.
• Deadline (di): the time before which a task should complete to avoid damage to the system.
• Start time (si): the time at which a task starts its execution.
• Finishing time (fi): the time at which a task finishes its execution.
• Criticalness: a parameter related to the consequences of missing the deadline.
• Value (vi): the relative importance of the task with respect to the other tasks in the system.
• Lateness (Li): the delay of a task's completion with respect to its deadline (Li = fi − di); if a task completes before its deadline, its lateness is negative.
• Tardiness or Exceeding time (Ei): the time a task stays active after its deadline (Ei = max(0, Li)).
• Laxity or Slack time (Xi): the maximum time a task can be delayed on its activation and still complete within its deadline (Xi = di − ai − Ci).
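The derived parameters above follow directly from the basic ones. A minimal sketch (not from the source; the numeric values are hypothetical) computing lateness, tardiness, and laxity:

```python
# Sketch (assumed): derived real-time task parameters.
from dataclasses import dataclass

@dataclass
class Task:
    a: float  # arrival time ai
    C: float  # computation time Ci
    d: float  # absolute deadline di
    f: float  # finishing time fi

    @property
    def lateness(self):   # Li = fi - di (negative if the task finishes early)
        return self.f - self.d

    @property
    def tardiness(self):  # Ei = max(0, Li)
        return max(0.0, self.lateness)

    @property
    def laxity(self):     # Xi = di - ai - Ci
        return self.d - self.a - self.C

t = Task(a=0, C=3, d=10, f=8)
print(t.lateness, t.tardiness, t.laxity)  # -2 0.0 7
```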
• Based on the regularity of their activation, tasks can be classified as periodic or aperiodic.

• Periodic tasks consist of an infinite sequence of identical activities, called instances or jobs, that are regularly activated at a constant rate.

• Aperiodic tasks also consist of an infinite sequence of identical activities (instances); however, their activations are not regular.
Precedence constraints

• In some applications, computational activities cannot be executed in arbitrary order; they must respect precedence relations defined at the design stage.

• Such precedence relations are usually described through a directed acyclic graph G, where tasks are represented by nodes and precedence relations by arrows.
• Task J1 is the only one that can start executing since it does not have predecessors.

• Tasks with no predecessors are called beginning tasks.

• As J1 is completed, either J2 or J3 can start.

• Task J4 can start only when J2 is completed, whereas J5 must wait for the completion of both J2 and J3.

• Tasks with no successors, such as J4 and J5, are called ending tasks.


Example

• Here, a number of objects moving on a conveyor belt must be recognized and classified using a stereo vision system, consisting of two cameras mounted in a suitable location.

• Suppose that the recognition process is carried out by integrating the 2D features of the top view of the objects with the height information extracted from the pixel disparity of the two images.


• The computational activities of the application can be organized by defining the following tasks:

• Tasks acq1 and acq2 can be executed in parallel before any other activity.

• Tasks edge1 and edge2 can also be executed in parallel, but each cannot start before the associated acquisition task completes.

• Task shape is based on the object contour extracted by the low-level image processing; therefore, it must wait for the termination of both edge1 and edge2.

• The same is true for task disp, which however can be executed in parallel with task shape.

• Then, task H can start only when disp completes and, finally, task rec must wait for the completion of H and shape.
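The precedence graph of the vision example can be sketched in code (an assumed illustration, not from the source): a topological sort of the DAG yields one valid execution order that respects every precedence arrow.

```python
# Sketch (assumed): the stereo-vision precedence DAG, using the
# standard-library TopologicalSorter to list a valid execution order.
from graphlib import TopologicalSorter

precedes = {               # predecessor -> list of successors
    "acq1": ["edge1"],
    "acq2": ["edge2"],
    "edge1": ["shape", "disp"],
    "edge2": ["shape", "disp"],
    "shape": ["rec"],
    "disp": ["H"],
    "H": ["rec"],
}

ts = TopologicalSorter()
for pred, succs in precedes.items():
    for s in succs:
        ts.add(s, pred)    # s cannot start before pred completes

order = list(ts.static_order())
print(order)               # acq1/acq2 first, rec necessarily last
```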
Resource constraints

• A resource is any software structure that can be used by a process to advance its execution.

• A resource can be a data structure, a set of variables, a main memory area, a file, a piece of program, or a set of registers of a peripheral device.

• A resource dedicated to a particular process is said to be private, whereas a resource that can be used by several tasks is called a shared resource.

• To maintain data consistency, many shared resources do not allow simultaneous accesses but require mutual exclusion among competing tasks. They are called exclusive resources.
• A piece of code executed under mutual exclusion constraints is called a critical section
Exclusive resources

• Let R be an exclusive resource shared by tasks Ja and Jb.

• If A is the operation performed on R by Ja, and B is the operation performed on R by Jb, then A and B must never be executed at the same time.

• To ensure sequential access to exclusive resources, operating systems provide a synchronization mechanism (such as semaphores) that can be used by tasks to create critical sections of code.

• Hence, when two or more tasks have resource constraints, they have to be synchronized, since they share exclusive resources.
• Consider two tasks J1 and J2 that share an exclusive resource R (for instance, a list), on which two operations (such as insert and remove) are defined.

• The code implementing such operations is thus a critical section that must be executed in mutual exclusion.

• If a binary semaphore s is used for this purpose, then each critical section must begin with a wait(s) primitive and must end with a signal(s) primitive.
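The wait/signal protocol above can be sketched as follows (an assumed illustration, not from the source): in Python's threading module, acquire() plays the role of wait(s) and release() the role of signal(s).

```python
# Sketch (assumed): a binary semaphore guarding the shared list R.
import threading

s = threading.Semaphore(1)   # binary semaphore protecting R
R = []                       # the exclusive shared resource

def insert(item):
    s.acquire()              # wait(s): enter the critical section
    R.append(item)
    s.release()              # signal(s): leave the critical section

def remove():
    s.acquire()              # wait(s)
    item = R.pop() if R else None
    s.release()              # signal(s)
    return item

t1 = threading.Thread(target=insert, args=(42,))
t2 = threading.Thread(target=remove)
t1.start(); t1.join()
t2.start(); t2.join()
print(R)  # []
```

A task that calls acquire() while another task holds the semaphore is blocked and queued, exactly as described for wait(s) below.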
• A task waiting for an exclusive resource is said to be blocked on that resource.

• All tasks blocked on the same resource are kept in a queue associated with the semaphore, which
protects the resource.

• When a running task executes a wait primitive on a locked semaphore, it enters a waiting state, until
another task executes a signal primitive that unlocks the semaphore.

• When a task leaves the waiting state, it does not go into the running state but into the ready state, so that the CPU can be assigned to the highest-priority task by the scheduling algorithm.
Waiting state caused by resource constraints
• If preemption is allowed and J1 has a higher priority than J2, then J1 can block in the situation depicted in the figure.
• Here, task J2 is activated first, and, after a
while, it enters the critical section and locks
the semaphore.
• While J2 is executing the critical section, task
J1 arrives, and, since it has a higher priority,
it preempts J2 and starts executing.
• But, at time t1, when attempting to enter its
critical section, it is blocked on the
semaphore and J2 is resumed.
• J1 is blocked until time t2, when J2 releases
the critical section by executing the signal(s)
primitive, which unlocks the semaphore.
Classification of scheduling algorithms

• An algorithm is said to be clairvoyant (i.e., far-sighted) if it knows the future; that is, if it knows in advance the arrival times of all the tasks.

• Although such an algorithm does not exist in reality, it can be used to compare the performance of real algorithms against the best possible one.
APERIODIC TASK SCHEDULING

Scheduling Jobs with Deadlines: Earliest Due Date

• For scheduling jobs on one machine so that all their deadlines are met, there is an optimal method called Earliest Due Date (EDD), also known as Jackson's rule; for synchronous arrivals it coincides with Earliest Deadline First (EDF).

• As the names suggest, it schedules the job with the earliest deadline first, and then repeatedly schedules the job with the earliest deadline among the remaining jobs.

• In the example, every job except J3 completes before its deadline.


• Assume that the jobs are numbered in non-decreasing order of their due dates dj.
• In order to minimize the maximum lateness, it is natural to process the jobs in this order.
• This dispatching rule is known as EDD (Earliest Due Date); it is sometimes called Jackson's rule.
• A set J of n aperiodic tasks has to be scheduled on a single processor, minimizing the maximum
lateness.

• All tasks consist of a single job, have synchronous arrival times, but can have different computation
times and deadlines.

• No other constraints are considered (e.g., there are no precedence relations, and tasks do not share resources in exclusive mode); hence the tasks are independent.

• Since all tasks arrive at the same time, preemption is not an issue in this problem.

• In fact, preemption is effective only when tasks may arrive dynamically and newly arriving tasks
have higher priority than currently executing tasks
• We assume that all tasks are activated at time t = 0, so that each job Ji can be completely
characterized by two parameters:

1. Computation time Ci

2. Relative deadline Di

• A simple algorithm that solves this problem was found by Jackson in 1955; it is called Earliest Due Date (EDD).
Earliest Due Date (EDD) - Jackson’s Rule

• Set of tasks: J = {J1, ..., Jn}, all activated at t = 0.

• Problem: minimize the maximum lateness Lmax.

• Algorithm: “select the task with the earliest relative deadline”


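The EDD rule above can be sketched in a few lines (an assumed illustration, not from the source; the task values are hypothetical): sort the synchronous tasks by relative deadline and accumulate finishing times to compute the maximum lateness.

```python
# Sketch (assumed): Jackson's EDD rule for synchronous tasks.
def edd(tasks):
    """tasks: list of (name, Ci, Di), all arriving at t = 0.
    Returns the schedule order and the maximum lateness Lmax."""
    schedule = sorted(tasks, key=lambda t: t[2])  # earliest due date first
    t, lmax = 0, float("-inf")
    for name, C, D in schedule:
        t += C                       # finishing time fi
        lmax = max(lmax, t - D)      # lateness Li = fi - Di
    return [name for name, _, _ in schedule], lmax

order, lmax = edd([("J1", 1, 3), ("J2", 2, 9), ("J3", 3, 7)])
print(order, lmax)  # ['J1', 'J3', 'J2'] -2
```

A negative Lmax means every task completes before its deadline, consistent with the definition of lateness given earlier.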
Jackson's theorem

• Let σ be a schedule produced by any algorithm A.

• If A is different from EDD, then there exist two tasks Ja and Jb, with da < db, such that Jb immediately precedes Ja in σ.

• Now, let σ' be the schedule obtained from σ by exchanging Ja with Jb, so that Ja immediately precedes Jb in σ'.

• The exchange cannot increase the maximum lateness: the completion time of the pair is unchanged, and the later finishing time is now associated with the later deadline db. Repeating such exchanges a finite number of times transforms σ into the EDD schedule without increasing Lmax; hence EDD is optimal.
Earliest Deadline First (EDF) / HORN'S ALGORITHM

• At any instant, execute the task with the earliest absolute deadline among all the ready tasks.

• If tasks are not synchronous, i.e., tasks can be activated dynamically during execution, then preemption becomes an important factor.

• A scheduling problem in which preemption is allowed is always easier than its nonpreemptive counterpart.

• In a nonpreemptive scheduling algorithm, the scheduler must ensure that a newly arriving task will never need to interrupt a currently executing task in order to meet its own deadline.

• This guarantee requires a considerable amount of searching.

• If preemption is allowed, this searching is unnecessary, since a task can be interrupted if a more important task arrives.
HORN'S ALGORITHM

• Horn found a solution to the problem of scheduling a set of n independent tasks on a uniprocessor system, when tasks may have dynamic arrivals and preemption is allowed:

(1 | preem | Lmax)
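Horn's rule can be sketched with a simplified unit-time simulation (an assumed illustration, not from the source; the task parameters are hypothetical): at every instant the ready task with the earliest absolute deadline runs, preempting the current task if needed.

```python
# Sketch (assumed): preemptive EDF as a unit-time simulation.
def edf(tasks):
    """tasks: name -> (arrival ai, computation Ci, absolute deadline di).
    Returns the name of the task executed in each time unit."""
    remaining = {n: C for n, (a, C, d) in tasks.items()}
    t, timeline = 0, []
    while any(c > 0 for c in remaining.values()):
        ready = [n for n, (a, C, d) in tasks.items()
                 if a <= t and remaining[n] > 0]
        if not ready:                  # idle slot: no task has arrived yet
            timeline.append(None)
            t += 1
            continue
        run = min(ready, key=lambda n: tasks[n][2])  # earliest deadline
        remaining[run] -= 1            # execute one unit of `run`
        timeline.append(run)
        t += 1
    return timeline

# J2 arrives at t = 1 with a tighter deadline and preempts J1.
tl = edf({"J1": (0, 2, 10), "J2": (1, 2, 4)})
print(tl)  # ['J1', 'J2', 'J2', 'J1']
```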
Latest Deadline First

• In 1973, Lawler presented an optimal algorithm that minimizes the maximum lateness of a set of
tasks with precedence relations and simultaneous arrival times.

• The algorithm is called Latest Deadline First (LDF) and can be executed in polynomial time with
respect to the number of tasks in the set.
• Given a set J of n tasks and a directed acyclic graph (DAG) describing their precedence relations,

• LDF builds the scheduling queue from tail to head:

Among (i) the tasks without successors and (ii) the tasks all of whose successors have already been selected,

LDF selects the task with the latest deadline to be scheduled last.

This procedure is repeated until all tasks in the set have been selected.

• At run time, tasks are extracted from the head of the queue, so that the first task inserted in the queue is executed last, whereas the last task inserted in the queue is executed first.
Algorithm

• Among the tasks without unselected successors, select the task with the latest deadline.

• Remove this task from the precedence graph and push it onto a stack.

• Repeat until all tasks are on the stack.

• The stack, popped from the top, represents the order in which tasks should be scheduled.
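The steps above can be sketched as follows (an assumed illustration, not from the source; the deadlines and the precedence graph are hypothetical):

```python
# Sketch (assumed): Lawler's LDF rule built from tail to head.
def ldf(deadlines, succs):
    """deadlines: name -> di; succs: name -> set of successor names.
    Returns the execution order (stack popped from the top)."""
    selected, stack = set(), []
    while len(stack) < len(deadlines):
        # eligible: tasks all of whose successors are already selected
        eligible = [n for n in deadlines
                    if n not in selected
                    and succs.get(n, set()) <= selected]
        pick = max(eligible, key=lambda n: deadlines[n])  # latest deadline
        selected.add(pick)
        stack.append(pick)            # this task is scheduled last so far
    return stack[::-1]                # last pushed runs first

order = ldf({"J1": 2, "J2": 5, "J3": 4, "J4": 3, "J5": 5, "J6": 6},
            {"J1": {"J2", "J3"}, "J2": {"J4", "J5"}, "J3": {"J4", "J6"}})
print(order)  # ['J1', 'J3', 'J2', 'J4', 'J5', 'J6']
```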


Proof of the optimality of LDF:

• Let J be the complete set of tasks to be scheduled,
• let Г ⊆ J be the subset of tasks without successors, and
• let Jl be the task in Г with the latest deadline dl.
• If σ is any schedule that does not follow the LDF rule, then the last scheduled task, say Jk, will not be the one with the latest deadline; thus dk < dl.
• Since Jl is scheduled before Jk, we can partition Г into four subsets, so that Г = A ∪ {Jl} ∪ B ∪ {Jk}.
• Because Jk is scheduled last, its finishing time is the sum of all computation times, f = Σi Ci, and the maximum lateness in Г satisfies Lmax(Г) ≥ Lk = f − dk.
• To prove optimality, we show that moving Jl to the end of the schedule cannot increase the maximum lateness in Г.
• To do that, let σ* be the schedule obtained from σ after moving task Jl to the end of the queue and shifting all other tasks to the left.
EDF with precedence constraints

• (1 | prec, preem | Lmax)

• The problem of scheduling a set of n tasks with precedence constraints and dynamic activations can be solved only if tasks are preemptable.

• In 1990, Chetto, Silly, and Bouchentouf presented an algorithm that solves this problem.

• A set J of dependent tasks is transformed into a set J* of independent tasks by an adequate modification of the timing parameters; the tasks are then scheduled by the Earliest Deadline First (EDF) algorithm.

• The transformation algorithm ensures that J is schedulable and the precedence constraints are obeyed if and only if J* is schedulable.

• That is, all release times and deadlines are modified so that each task cannot start before its predecessors and cannot preempt its successors.


Modification of the arrival / release times

• Given two tasks Ja and Jb, such that Ja → Jb, in any valid schedule that meets the precedence constraints the start time of Jb must satisfy sb ≥ rb and sb ≥ ra + Ca; hence the release time of Jb can be replaced by r*b = max(rb, r*a + Ca).

Modification of the deadlines

• Given two tasks Ja and Jb, such that Ja → Jb, in any valid schedule that meets the precedence constraints the finishing time of Ja must satisfy fa ≤ da and fa ≤ db − Cb; hence the deadline of Ja can be replaced by d*a = min(da, d*b − Cb).
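The release-time and deadline modifications can be sketched as follows (an assumed illustration following the description above, not the authors' code; the task values are hypothetical). A simple fixpoint iteration over the DAG propagates r* forward and d* backward until nothing changes.

```python
# Sketch (assumed): release/deadline modification for dependent tasks,
# r*_b = max(r_b, r*_a + C_a) and d*_a = min(d_a, d*_b - C_b).
def chetto_transform(tasks, preds):
    """tasks: name -> [r, C, d] (release, computation, deadline);
    preds: name -> set of predecessor names. Modifies r and d in place."""
    changed = True
    while changed:                    # iterate to a fixpoint over the DAG
        changed = False
        for b, ps in preds.items():
            for a in ps:
                # Jb cannot start before Ja has had time to complete
                nr = max(tasks[b][0], tasks[a][0] + tasks[a][1])
                if nr != tasks[b][0]:
                    tasks[b][0] = nr
                    changed = True
                # Ja must finish early enough to leave room for Jb
                nd = min(tasks[a][2], tasks[b][2] - tasks[b][1])
                if nd != tasks[a][2]:
                    tasks[a][2] = nd
                    changed = True
    return tasks

# Ja -> Jb: Jb's release is pushed forward, Ja's deadline pulled back.
out = chetto_transform({"Ja": [0, 2, 10], "Jb": [0, 3, 8]},
                       {"Jb": {"Ja"}})
print(out)  # {'Ja': [0, 2, 5], 'Jb': [2, 3, 8]}
```

After the transformation, the tasks can be handed to plain EDF as if they were independent.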
