Task Constraints and Task Scheduling
• An RTOS imposes strict timing constraints that characterize its computational activities.
• A typical timing constraint on a task is the deadline, the time before which the task should complete its execution without causing any damage to the system.
• Depending on the consequences of a missed deadline, tasks in an RTOS are classified into two classes:
• Hard. A task is said to be hard if completing after its deadline can cause catastrophic consequences for the system.
• Soft. A task is said to be soft if missing its deadline decreases the performance of the system but does not jeopardize its correct behavior.
Parameters of a Real-Time Task
• Arrival time (ai): the time at which a task becomes ready for execution; also referred to as request time or release time.
• Computation time (Ci): the time the processor needs to execute the task without interruption.
• Deadline (di): the time before which a task should complete to avoid damage to the system.
• Start time (si): the time at which a task starts its execution.
• Finishing time (fi): the time at which a task finishes its execution.
• Criticalness: a parameter related to the consequences of missing the deadline.
• Value (vi): the relative importance of the task with respect to the other tasks in the system.
• Lateness (Li): the delay of a task's completion with respect to its deadline (Li = fi - di); if a task completes before its deadline, its lateness is negative.
• Tardiness or Exceeding time (Ei): the time a task stays active after its deadline (Ei = max(0, Li)).
• Laxity or Slack time (Xi): the maximum time a task can be delayed on its activation and still complete within its deadline (Xi = di - ai - Ci).
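The derived parameters above can be captured in a small sketch (a hypothetical Python record; field names mirror the symbols in the list):

```python
from dataclasses import dataclass

# Hypothetical record for a single task; fields mirror a_i, C_i, d_i, f_i above.
@dataclass
class Task:
    a: float  # arrival time a_i
    C: float  # computation time C_i
    d: float  # absolute deadline d_i
    f: float  # finishing time f_i

    @property
    def lateness(self):   # L_i = f_i - d_i (negative if the task finishes early)
        return self.f - self.d

    @property
    def tardiness(self):  # E_i = max(0, L_i)
        return max(0, self.lateness)

    @property
    def laxity(self):     # X_i = d_i - a_i - C_i
        return self.d - self.a - self.C

t = Task(a=0, C=3, d=10, f=8)
print(t.lateness, t.tardiness, t.laxity)  # -2 0 7
```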
• Based on the regularity of their activations, tasks can be classified as periodic or aperiodic.
• Periodic tasks consist of an infinite sequence of identical activities, called instances or jobs, that are regularly activated at a constant rate.
• Aperiodic tasks also consist of an infinite sequence of identical activities (instances); however, their activations are not regular.
Precedence constraints
• In some applications, computational activities cannot be executed in arbitrary order but must respect precedence relations defined at the design stage.
• Such precedence relations are usually described through a directed acyclic graph G, where tasks are
represented by nodes and precedence relations by arrows.
• Task J1 is the only one that can start executing, since it has no predecessors.
• Task J4 can start only when J2 is completed, whereas J5 must wait for the completion of both J2 and J3.
• In the image-processing example, tasks edge1 and edge2 can be executed in parallel, but neither can start before its associated acquisition task completes.
• Task shape operates on the object contour extracted by the low-level image processing; therefore it must wait for the termination of both edge1 and edge2.
• The same is true for task disp, which, however, can be executed in parallel with task shape.
• Then, task H can start only when disp completes and, finally, task rec must wait for the completion of H and shape.
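The first precedence graph above can be sketched as an adjacency map (a minimal sketch; the task names J1..J5 follow the text, and the helper `runnable` is illustrative):

```python
# Arrows of the DAG as a predecessor map: task -> tasks that must finish first.
predecessors = {
    "J1": [], "J2": ["J1"], "J3": ["J1"],
    "J4": ["J2"], "J5": ["J2", "J3"],
}

def runnable(done):
    """Tasks not yet run whose predecessors have all completed."""
    return sorted(t for t, preds in predecessors.items()
                  if t not in done and all(p in done for p in preds))

print(runnable(set()))          # ['J1']  -- only J1 has no predecessors
print(runnable({"J1", "J2"}))   # ['J3', 'J4'] -- J5 still waits for J3
```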
Resource constraints
• A resource is any software structure that can be used by a process to advance its execution.
• A resource can be a data structure, a set of variables, a main memory area, a file, a piece of program, or a set of registers of a peripheral device.
• A resource dedicated to a particular process is said to be private, whereas a resource that can be used by several tasks is called a shared resource.
• To maintain data consistency, many shared resources do not allow simultaneous accesses but require mutual exclusion among competing tasks. They are called exclusive resources.
• A piece of code executed under mutual exclusion constraints is called a critical section.
Exclusive resources
• If A is the operation performed on R by Ja, and B is the operation performed on R by Jb, then A and B must never be executed at the same time.
• Hence, when two or more tasks have resource constraints, they have to be synchronized, since they share exclusive resources.
• Consider two tasks J1 and J2 that share an exclusive resource R (for instance, a list), on which two operations (such as insert and remove) are defined.
• All tasks blocked on the same resource are kept in a queue associated with the semaphore, which
protects the resource.
• When a running task executes a wait primitive on a locked semaphore, it enters a waiting state, until
another task executes a signal primitive that unlocks the semaphore.
• When a task leaves the waiting state, it does not go in the running state, but in the ready state, so that
the CPU can be assigned to the highest-priority task by the scheduling algorithm.
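The wait/signal protocol described above can be sketched with Python's `threading` module (the resource R, a list, and the insert/remove operations are the illustrative ones from the text):

```python
import threading

R = []                       # shared exclusive resource (a list)
s = threading.Semaphore(1)   # binary semaphore protecting R

def insert(item):
    s.acquire()              # wait(s): blocks while another task is in its critical section
    try:
        R.append(item)       # critical section
    finally:
        s.release()          # signal(s): unblocks one task waiting on s

def remove():
    s.acquire()
    try:
        return R.pop() if R else None
    finally:
        s.release()

t1 = threading.Thread(target=insert, args=(1,))
t2 = threading.Thread(target=insert, args=(2,))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(R))  # [1, 2] -- both insertions completed under mutual exclusion
```

Note that, as stated above, a real RTOS moves the unblocked task to the ready state rather than running it immediately; Python's scheduler hides that detail here.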
Waiting state caused by resource constraints
• If preemption is allowed and J1 has a higher priority than J2, then J1 can block in the situation depicted in the figure.
• Here, task J2 is activated first and, after a while, it enters the critical section and locks the semaphore.
• While J2 is executing the critical section, task J1 arrives and, since it has a higher priority, it preempts J2 and starts executing.
• But at time t1, when attempting to enter its critical section, J1 is blocked on the semaphore and J2 is resumed.
• J1 remains blocked until time t2, when J2 releases the critical section by executing the signal(s) primitive, which unlocks the semaphore.
Classification of scheduling algorithms
• An algorithm is said to be clairvoyant (i.e., far-sighted) if it knows the future, that is, if it knows in advance the arrival times of all the tasks.
• Although such an algorithm does not exist in reality, it can be used to compare the performance of real algorithms against the best possible one.
APERIODIC TASK SCHEDULING
Scheduling Jobs with Deadlines: Earliest Due Date
• For scheduling jobs on one machine to meet all their deadlines, there is an optimal method called
Earliest Due Date (EDD), Earliest Deadline First (EDF), or Jackson’s Rule.
• As the names suggest, it schedules the job with the earliest deadline first, and then repeatedly schedules the one with the earliest deadline among the remaining jobs.
• All tasks consist of a single job, have synchronous arrival times, but can have different computation
times and deadlines.
• No other constraints are considered (e.g., tasks have no precedence relations and do not share resources in exclusive mode); hence tasks are assumed to be independent.
• Since all tasks arrive at the same time, preemption is not an issue in this problem.
• In fact, preemption is effective only when tasks may arrive dynamically and newly arriving tasks
have higher priority than currently executing tasks
• We assume that all tasks are activated at time t = 0, so that each job Ji can be completely
characterized by two parameters:
1. Computation time Ci
2. Relative deadline Di
• A simple algorithm that solves this problem was found by Jackson in 1955. It is called Earliest Due
Date (EDD)
Earliest Due Date (EDD) - Jackson’s Rule
Set of tasks: a set J = {J1, ..., Jn} of n independent tasks with synchronous arrival times.
Problem: minimize the maximum lateness Lmax.
Algorithm: execute the tasks in order of nondecreasing deadlines.
• If a schedule σ produced by an algorithm A is different from the EDD schedule, then there exist two tasks Ja and Jb, with da < db, such that Jb immediately precedes Ja in σ.
• Now, let σ' be the schedule obtained from σ by exchanging Ja with Jb, so that Ja immediately precedes Jb in σ'.
• The exchange cannot increase the maximum lateness; hence, by a finite number of such exchanges, σ can be transformed into the EDD schedule without worsening Lmax, which proves that EDD is optimal.
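Jackson's rule above can be sketched in a few lines (job names and parameters are illustrative):

```python
def edd_schedule(jobs):
    """jobs: list of (name, C_i, D_i) with synchronous arrivals at t = 0.
    Returns the EDD order and the maximum lateness it achieves."""
    order = sorted(jobs, key=lambda job: job[2])  # nondecreasing deadlines
    t, L_max = 0, float("-inf")
    for name, C, D in order:
        t += C                                    # finishing time f_i
        L_max = max(L_max, t - D)                 # lateness L_i = f_i - D_i
    return [name for name, _, _ in order], L_max

jobs = [("J1", 1, 3), ("J2", 1, 10), ("J3", 1, 7), ("J4", 3, 8), ("J5", 2, 5)]
print(edd_schedule(jobs))  # (['J1', 'J5', 'J3', 'J4', 'J2'], -1)
```

A negative maximum lateness means every job completes before its deadline.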
Earliest Deadline First (EDF) / HORN'S ALGORITHM
• At any instant execute the task with the earliest absolute deadline among all the ready tasks
• If tasks are not synchronous but can be activated dynamically during execution, preemption becomes an important factor.
• A scheduling problem in which preemption is allowed is always easier than its nonpreemptive counterpart.
• In a nonpreemptive scheduling algorithm, the scheduler must ensure that a newly arriving task will never need to interrupt a currently executing task in order to meet its own deadline.
• If preemption is allowed, this check is unnecessary, since a task can be interrupted whenever a more urgent task arrives.
HORN'S ALGORITHM
• Horn found a solution to the problem of scheduling a set of n independent tasks on a uniprocessor
system, when tasks may have dynamic arrivals and preemption is allowed
(1 | preem | Lmax)
Example
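A minimal sketch of preemptive EDF with dynamic arrivals, simulated one time unit at a time (task names and parameters are illustrative):

```python
import heapq

def edf_simulate(tasks):
    """tasks: list of (name, a_i, C_i, d_i). Returns which task runs each time unit."""
    remaining = {name: C for name, a, C, d in tasks}
    pending = sorted(tasks, key=lambda task: task[1])   # ordered by arrival time
    ready, timeline, t = [], [], 0
    while pending or ready:
        while pending and pending[0][1] <= t:           # admit newly arrived tasks
            name, a, C, d = pending.pop(0)
            heapq.heappush(ready, (d, name))            # keyed by absolute deadline
        if not ready:
            t = pending[0][1]                           # idle until the next arrival
            continue
        d, name = heapq.heappop(ready)                  # earliest absolute deadline
        timeline.append(name)                           # run it for one time unit
        remaining[name] -= 1
        if remaining[name] > 0:
            heapq.heappush(ready, (d, name))            # may be preempted later
        t += 1
    return timeline

tasks = [("J1", 0, 1, 4), ("J2", 0, 3, 10), ("J3", 2, 2, 5)]
print(edf_simulate(tasks))  # ['J1', 'J2', 'J3', 'J3', 'J2', 'J2']
```

Here J3 arrives at t = 2 with an earlier absolute deadline than J2 and preempts it; J2 resumes once J3 completes.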
Latest Deadline First
• In 1973, Lawler presented an optimal algorithm that minimizes the maximum lateness of a set of
tasks with precedence relations and simultaneous arrival times.
• The algorithm is called Latest Deadline First (LDF) and can be executed in polynomial time with
respect to the number of tasks in the set.
• Given a set J of n tasks and a directed acyclic graph (DAG) describing their precedence relations, LDF selects the task with the latest deadline to be scheduled last. This procedure is repeated until all tasks in the set are selected.
• At run time, tasks are extracted from the top of the stack, so that the first task pushed will be executed last, whereas the last task pushed will be executed first.
Algorithm
• Among the tasks without unscheduled successors, select the one with the latest deadline.
• Remove this task from the precedence graph and push it onto a stack.
• Repeat until the graph is empty; popping the stack yields the execution order.
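The selection procedure above can be sketched as follows (the graph and deadlines are illustrative):

```python
def ldf_schedule(deadlines, successors):
    """deadlines: {task: d_i}; successors: {task: [tasks it must precede]}.
    Returns an execution order that respects the precedence graph."""
    remaining = set(deadlines)
    stack = []
    while remaining:
        # tasks all of whose successors have already been selected
        candidates = [t for t in remaining
                      if all(s not in remaining for s in successors.get(t, []))]
        latest = max(candidates, key=lambda t: deadlines[t])
        stack.append(latest)          # scheduled last among the remaining tasks
        remaining.remove(latest)
    return stack[::-1]                # popping the stack gives the execution order

deadlines = {"J1": 2, "J2": 5, "J3": 4, "J4": 3, "J5": 5, "J6": 6}
successors = {"J1": ["J2", "J3"], "J2": ["J4", "J5"], "J3": ["J6"]}
print(ldf_schedule(deadlines, successors))  # ['J1', 'J2', 'J4', 'J3', 'J5', 'J6']
```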
• The scheduling of a set of n tasks with precedence constraints and dynamic activations can be solved only if tasks are preemptable.
• In 1990, Chetto, Silly, and Bouchentouf presented an algorithm that solves this problem.
• A set J of dependent tasks was transformed into a set J* of independent tasks by an adequate modification of
timing parameters and then tasks are scheduled by the Earliest Deadline First (EDF) algorithm
• The transformation algorithm ensures that J is schedulable and the precedence constraints are obeyed if and only
if J* is schedulable.
• That is, all release times and deadlines are modified so that each task cannot start before its predecessors and must complete early enough for its successors to meet their deadlines.
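A minimal sketch of that transformation, assuming the usual EDF* rules (r*_i = max(r_i, max over predecessors j of r*_j + C_j); d*_i = min(d_i, min over successors j of d*_j - C_j)); task names and parameters are illustrative:

```python
from collections import deque

def topological_order(successors, nodes):
    """Kahn's algorithm: predecessors always appear before their successors."""
    indegree = {n: 0 for n in nodes}
    for ss in successors.values():
        for j in ss:
            indegree[j] += 1
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for j in successors.get(n, []):
            indegree[j] -= 1
            if indegree[j] == 0:
                queue.append(j)
    return order

def transform(r, C, d, successors):
    """Modify release times and deadlines so plain EDF enforces precedence."""
    predecessors = {i: [] for i in r}
    for i, ss in successors.items():
        for j in ss:
            predecessors[j].append(i)
    order = topological_order(successors, set(r))
    r_star, d_star = dict(r), dict(d)
    for i in order:                       # releases: process predecessors first
        for j in predecessors[i]:
            r_star[i] = max(r_star[i], r_star[j] + C[j])
    for i in reversed(order):             # deadlines: process successors first
        for j in successors.get(i, []):
            d_star[i] = min(d_star[i], d_star[j] - C[j])
    return r_star, d_star

r = {"A": 0, "B": 0}; C = {"A": 2, "B": 2}; d = {"A": 10, "B": 6}
succ = {"A": ["B"]}                       # A must precede B
print(transform(r, C, d, succ))  # ({'A': 0, 'B': 2}, {'A': 4, 'B': 6})
```

After the transformation, B cannot be released before A can finish, and A's deadline is tightened so that meeting it leaves B enough time; EDF on the modified set then respects the precedence constraint automatically.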