Priority Based Scheduling

Earliest-Deadline-First Scheduling
Earliest Deadline First (EDF) is one of the best-known algorithms for real-time
processing. It is an optimal dynamic-priority algorithm: in dynamic-priority
algorithms, the priority of a task can change during its execution. EDF produces a
valid schedule whenever one exists.
EDF is a preemptive scheduling algorithm that dispatches the process with
the earliest deadline. If an arriving process has an earlier deadline than the running
process, the system preempts the running process and dispatches the arriving
process.
A task with a shorter deadline has a higher priority, and EDF always executes
the job with the earliest deadline. Task sets that cannot be scheduled by the rate
monotonic algorithm can often still be scheduled by EDF.
EDF is optimal among all scheduling algorithms that do not keep the processor
idle while work is pending. The upper bound of processor utilization is 100 %.
Whenever a new task arrives, the ready queue is re-sorted so that the task
closest to the end of its period is assigned the highest priority. The system
preempts the running task if it is no longer at the head of the queue after the
re-sort.
If two tasks have the same absolute deadlines, choose one of the two at
random (ties can be broken arbitrarily). The priority is dynamic since it changes
for different jobs of the same task.
EDF can also be applied to aperiodic task sets; its optimality guarantees
that the maximum lateness is minimized when EDF is applied.
Many real-time systems do not provide hardware preemption, so another
algorithm must be employed.
In scheduling theory, a real-time system comprises a set of real-time tasks;
each task consists of an infinite or finite stream of jobs. The task set can be
scheduled by a number of policies including fixed priority or dynamic priority
algorithms.
The success of a real-time system depends on whether all the jobs of all the
tasks can be guaranteed to complete their executions before their deadlines. If
they can, then we say the task set is schedulable.
The schedulability condition is that the total utilization of the task set must
be less than or equal to 1.
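For periodic tasks whose relative deadlines equal their periods, this utilization condition is trivial to check in code. The following Python sketch illustrates the test; the function name and the example task set are illustrative, not from the text:

```python
def edf_schedulable(tasks):
    """EDF schedulability test for periodic tasks whose relative
    deadlines equal their periods: total utilization must be <= 1.

    tasks: list of (computation_time, period) pairs.
    """
    return sum(c / p for c, p in tasks) <= 1.0

# U = 2/5 + 3/10 + 1/4 = 0.95 <= 1, so the set is schedulable
print(edf_schedulable([(2, 5), (3, 10), (1, 4)]))  # -> True
```

Because the test is both necessary and sufficient for EDF (under these assumptions), any task set failing it is unschedulable by any algorithm on a single processor.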
Implementation of earliest deadline first: Is it really infeasible to
implement EDF scheduling?

Task   Arrival   Duration   Deadline
T1     0         10         33
T2     4         3          28
T3     5         10         29
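Feeding the table above into a minimal discrete-time EDF simulator shows the dispatching decisions directly: T2 preempts T1 at t = 4, and T3 runs before T1 resumes. This is a Python sketch under a unit-time-step assumption; the function name is illustrative:

```python
def edf_schedule(tasks, horizon):
    """Simulate preemptive EDF in unit time steps.

    tasks: list of (name, arrival, duration, absolute_deadline).
    Returns a dict mapping task name to finish time; raises if a
    deadline is missed within the horizon.
    """
    arrival = {n: a for n, a, d, dl in tasks}
    remaining = {n: d for n, a, d, dl in tasks}
    deadline = {n: dl for n, a, d, dl in tasks}
    finish = {}
    for t in range(horizon):
        ready = [n for n in remaining
                 if remaining[n] > 0 and arrival[n] <= t]
        if not ready:
            continue
        # dispatch the ready job with the earliest absolute deadline
        job = min(ready, key=lambda n: deadline[n])
        remaining[job] -= 1
        if remaining[job] == 0:
            if t + 1 > deadline[job]:
                raise RuntimeError(f"{job} missed its deadline")
            finish[job] = t + 1
    return finish

tasks = [("T1", 0, 10, 33), ("T2", 4, 3, 28), ("T3", 5, 10, 29)]
print(edf_schedule(tasks, 40))  # -> {'T2': 7, 'T3': 17, 'T1': 23}
```

All three jobs meet their deadlines: T2 finishes at 7, T3 at 17, and T1 at 23.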

Problems for implementation:

1. Absolute deadlines change for each new task instance, so the priority
needs to be updated every time the task moves back to the ready queue.
2. More importantly, absolute deadlines always increase: how can a finite
priority value be associated with an ever-increasing deadline value?
3. Most importantly, absolute deadlines are impossible to compute a priori.
EDF properties:
1. EDF is optimal with respect to feasibility (i.e. schedulability).
2. EDF is optimal with respect to minimizing the maximum lateness.
Advantages
1. It is an optimal algorithm.
2. Periodic, aperiodic and sporadic tasks can all be scheduled with EDF.
3. It gives the best CPU utilization.
Disadvantages
1. Needs a priority queue for storing deadlines.
2. Needs dynamic priorities.
3. Typically no OS support.
4. Behaves badly under overload.
5. Difficult to implement.
Rate Monotonic Scheduling
Rate Monotonic (RM) priority assignment is a static-priority preemptive
scheduling algorithm.
In this algorithm, priority increases with the rate at which a process must
be scheduled: the process with the shortest period gets the highest priority.
Priorities are assigned to tasks before execution and do not change over
time. RM scheduling is preemptive, i.e., a task can be preempted by a task with
higher priority.
In RM, the assigned priority is never modified during the runtime of the
system. RM assigns priorities simply according to periods: the shorter the
period (i.e., the higher the activation rate), the higher the priority. RM is
therefore a scheduling algorithm for periodic task sets.
If a lower-priority process is running and a higher-priority process becomes
available to run, it preempts the lower-priority process. Each periodic task is
assigned a priority inversely related to its period:
1. The shorter the period, the higher the priority.
2. The longer the period, the lower the priority.
The algorithm was proven under the following assumptions:
1. Tasks are periodic.
2. Each task must be completed before the next request occurs.
3. All tasks are independent.
4. Run time of each task request is constant.
5. Any non-periodic task in the system has no required deadlines.
RMS is optimal among all fixed priority scheduling algorithms for
scheduling periodic tasks where the deadlines of the tasks equal their periods.
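The optimality claim above comes with a quantitative guarantee: the Liu-Layland bound states that n periodic tasks are schedulable under RM if total utilization does not exceed n(2^(1/n) - 1), which approaches about 69 % as n grows (this explains the "lower CPU utilization" disadvantage listed below). A Python sketch of this sufficient (not necessary) test; the function name and example values are illustrative:

```python
def rm_utilization_test(tasks):
    """Sufficient (not necessary) RM schedulability test using the
    Liu-Layland bound: n periodic tasks are schedulable under RM
    if total utilization <= n * (2**(1/n) - 1).

    tasks: list of (computation_time, period) pairs.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization, bound, utilization <= bound

# U = 0.25 + 0.20 + 0.20 = 0.65, bound for n = 3 is about 0.780
u, bound, ok = rm_utilization_test([(1, 4), (1, 5), (2, 10)])
print(ok)  # -> True
```

A task set that fails this test may still be schedulable; an exact answer requires response-time analysis.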
Advantages:
1. Simple to understand.
2. Easy to implement.
3. Stable algorithm.
Disadvantages:
1. Lower CPU utilization.
2. Deals only with independent tasks.
3. Non-precise schedulability analysis.
Comparison between RMS and EDF

Parameter                                              RMS      EDF
Priorities                                             Static   Dynamic
Works with an OS with fixed priorities                 Yes      No
Uses the full computational power of the processor     No       Yes
Exploits full power without provisioning for slack     No       Yes
Priority Inversion
Priority inversion occurs when a low-priority job executes while some
ready higher-priority job waits.
Consider three tasks T1, T2 and T3 with decreasing priorities. Tasks T1 and
T3 share some data or resource that requires exclusive access, while T2 does not
interact with either of the other two tasks.
Task T3 starts at time t0 and locks semaphore S at time t1. At time t2, T1
arrives and preempts T3 inside its critical section. After a while, T1 requests
the shared resource by attempting to lock S, but it gets blocked, as T3 is
currently using it. Hence, at time t3, T3 continues to execute inside its critical
section. Next, when T2 arrives at time t4, it preempts T3, as it has a higher
priority and does not interact with either T1 or T3.

The execution time of T2 increases the blocking time of T1, as it is no
longer dependent solely on the length of the critical section executed by T3.
When tasks share resources, there may be priority inversions.
Priority inversion is not avoidable; however, in some cases the priority
inversion could be too large.


Simple solutions:
1. Make critical sections non-preemptable.
2. Execute critical sections at the highest priority of the task that could use it.
The solution to the problem is rather simple: while the low-priority task
blocks a higher-priority task, it inherits the priority of the higher-priority
task; in this way, no medium-priority task can preempt it.
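The inheritance rule just described can be sketched with a toy Python model. The class and method names are illustrative (real implementations live in the kernel's mutex code), and this sketch handles only a single waiter with no nesting:

```python
class Task:
    """Toy task model: only priorities, no real execution."""
    def __init__(self, name, base_priority):
        self.name = name
        self.base = base_priority       # assigned priority
        self.active = base_priority     # effective (possibly inherited)

class InheritanceMutex:
    """Toy mutex with basic priority inheritance (single waiter,
    no queues, no nesting -- an illustration only)."""
    def __init__(self):
        self.owner = None

    def lock(self, task):
        if self.owner is None:
            self.owner = task           # enter the critical section
        else:
            # Blocked: the owner inherits the waiter's priority if it
            # is higher, so medium-priority tasks cannot preempt the
            # owner inside its critical section.
            self.owner.active = max(self.owner.active, task.active)

    def unlock(self, task):
        task.active = task.base         # drop any inherited priority
        self.owner = None

t1, t3 = Task("T1", 10), Task("T3", 1)  # T1 high, T3 low priority
m = InheritanceMutex()
m.lock(t3)          # T3 locks the semaphore
m.lock(t1)          # T1 blocks; T3 now runs at priority 10
print(t3.active)    # -> 10
m.unlock(t3)        # T3 leaves; its priority reverts
print(t3.active)    # -> 1
```

In the scenario above, T2 (priority between 1 and 10) could no longer preempt T3 while T3 holds the lock, bounding T1's blocking time by the length of T3's critical section.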
Timing anomalies
As seen, contention for resources can cause timing anomalies due to
priority inversion and deadlock. Unless controlled, these anomalies can be of
arbitrary duration and can seriously disrupt system timing.
These anomalies cannot be eliminated, but several protocols exist to
control them:
1. Priority inheritance protocol
2. Basic priority ceiling protocol
3. Stack-based priority ceiling protocol
Wait-for graph
A wait-for graph is used to represent the dynamic blocking relationships
among jobs. In the wait-for graph of a system, every job that requires some
resource is represented by a vertex labeled with the name of the job.
At any time, the wait-for graph contains an (ownership) edge with label x
from a resource vertex to a job vertex if x units of the resource are allocated to
that job at the time.
The wait-for graph models resource contention: every serially reusable
resource is modeled, and every job waiting for a resource is represented by a
vertex with an edge pointing towards the resource.
Every resource held by a job is represented by an edge pointing away from
the resource and towards the job. A cyclic path in a wait-for graph indicates
deadlock.
For example, J3 has locked the single unit of resource R while J2 is waiting
to lock it. A minimum of two system resources is required for a deadlock.
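The cycle test for deadlock can be sketched as a depth-first search over the graph. This is a Python sketch; the function name is illustrative, and the vertex names follow the J3/R example above:

```python
def has_deadlock(graph):
    """Detect a cycle in a wait-for graph.

    graph: dict mapping each vertex (job or resource name) to the
    list of vertices it points to. A cycle indicates deadlock.
    """
    WHITE, GREY, BLACK = 0, 1, 2        # unvisited / on stack / done
    colour = {v: WHITE for v in graph}

    def dfs(v):
        colour[v] = GREY
        for w in graph.get(v, ()):
            if colour.get(w, WHITE) == GREY:
                return True             # back edge: cycle found
            if colour.get(w, WHITE) == WHITE and dfs(w):
                return True
        colour[v] = BLACK
        return False

    return any(colour[v] == WHITE and dfs(v) for v in graph)

# J3 holds R (edge R -> J3) and J2 waits for R (edge J2 -> R):
# no cycle, so no deadlock.
print(has_deadlock({"J2": ["R"], "R": ["J3"], "J3": []}))  # -> False

# If J3 in turn waits for a resource R2 held by J2, the cycle
# J2 -> R -> J3 -> R2 -> J2 means deadlock.
g2 = {"J2": ["R"], "R": ["J3"], "J3": ["R2"], "R2": ["J2"]}
print(has_deadlock(g2))  # -> True
```

The two-resource minimum stated above is visible here: the shortest possible cycle through two jobs must pass through two distinct resource vertices.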
