Priority Driven Scheduling of Periodic Tasks
Kizheppatt Vipin
Assumptions
Tasks are independent

There are no aperiodic/sporadic tasks

All jobs are preemptible

Cost of preemption is negligible

Every job is ready for execution on release

Scheduling decisions are made as soon as jobs are released and completed (online, event-triggered scheduling)

Period = minimum inter-release time


General Architecture

[Figure: each application submits a task with parameters (pi, ei, Di) to the scheduler, which maintains an accepted-task list (T1, T2, T3, ...) and a priority queue of ready jobs (J1,1, J2,1, J3,1, ...)]

The application (software) requests the scheduler to schedule a task and provides all relevant parameters (period, execution time, relative deadline)

The scheduler runs an acceptance test to check feasibility; if the task is accepted, it is put into the accepted-task list

The currently ready jobs are kept in a priority queue, and the job at the head of the queue is always the one executed

When a higher-priority job arrives, it may move to the head of the queue, effectively preempting the currently running job

Once a job is completed, it is removed from the queue

Fixed Priority vs Dynamic Priority
We classify algorithms for scheduling periodic tasks into two types: fixed priority and
dynamic priority.

A fixed-priority algorithm assigns the same priority to all jobs of a task
- Priority of a task is fixed relative to the other tasks
- Eg: the RM algorithm, where the task with the smaller period has the higher priority

A job-level fixed, task-level dynamic priority algorithm assigns a fixed priority to each job, but at
the task level the priority can vary at run time
- Priority of a job is fixed at the time of its release and does not change afterwards
- Priority of a task relative to the other tasks changes as jobs are released and
completed

A job-level and task-level dynamic priority algorithm assigns different priorities to individual
jobs of each task
- Priority of a job can vary at run time as other jobs are released
Rate Monotonic (RM) Algorithm
Priorities are assigned based on period: the lower the period, the higher the priority

Rate = inverse of period = number of jobs released per unit time

The higher the rate, the higher the priority
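A minimal sketch (in Python, with a hypothetical Task record) of how a fixed-priority scheduler could assign RM priorities; sorting by relative deadline instead of period would give DM:

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: float      # p_i, the minimum inter-release time
    wcet: float        # e_i, worst-case execution time
    priority: int = 0  # 1 = highest

def assign_rm_priorities(tasks):
    # Rate Monotonic: shorter period (higher rate) -> higher priority
    for prio, task in enumerate(sorted(tasks, key=lambda t: t.period), start=1):
        task.priority = prio
    return tasks

tasks = [Task("T1", 7, 2), Task("T2", 16, 4), Task("T3", 31, 7)]
assign_rm_priorities(tasks)
print([(t.name, t.priority) for t in tasks])   # [('T1', 1), ('T2', 2), ('T3', 3)]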


Rate Monotonic Algorithm (RM Algorithm)

Task   Period   Execution time
T1     7        2
T2     16       4
T3     31       7

Task   Priority   Period   Execution time
T1     1          7        2
T2     2          16       4
T3     3          31       7
Rate Monotonic Algorithm (RM Algorithm)

Task   Priority   Period   Execution time
T1     1          7        2
T2     2          16       4
T3     3          31       7

[Figure: RM schedule of T1, T2, T3 over the interval 0 to 21, shown per task and from the single-processor (P1) perspective]
Optimality of RM algorithm
Fixed-priority algorithms cannot be optimal: even if RM cannot find a feasible
schedule, some other algorithm (such as EDF) might be able to find one

RM is optimal in a special case, where the system is simply periodic

A system of periodic tasks is simply periodic if for every pair of tasks Ti and Tk in the
system with pi < pk, pk is an integer multiple of pi

Eg: T1 = (5, 3), T2 = (10, 3), T3 = (30, 3)


Schedulability of RM algorithm
As for any algorithm, U ≤ 1 is a necessary but not sufficient condition

Sufficient condition for schedulability: U ≤ n(2^(1/n) − 1), where n is the number of independent,
preemptible periodic tasks in the system (Liu & Layland utilization bound)

Sufficient condition: even if this condition is not satisfied, it might still be possible to
schedule the tasks; if it is satisfied, they can definitely be scheduled

For large values of n: U ≤ ln(2) ≈ 0.69

Simulation studies show that schedulability can usually be achieved if U ≤ 0.88
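A minimal sketch of the Liu & Layland utilization-bound test (sufficient only); tasks are given as hypothetical (period, execution time) pairs:

def rm_utilization_test(tasks):
    # tasks: list of (period, wcet); returns (U, bound, passes_sufficient_test)
    n = len(tasks)
    U = sum(e / p for p, e in tasks)
    bound = n * (2 ** (1.0 / n) - 1)   # tends to ln(2) ~ 0.693 as n grows
    return U, bound, U <= bound

# The RM example task set: T1 = (7, 2), T2 = (16, 4), T3 = (31, 7)
U, bound, ok = rm_utilization_test([(7, 2), (16, 4), (31, 7)])
print(round(U, 3), round(bound, 3), ok)   # 0.762 0.78 True -> definitely schedulable under RM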


Deadline Monotonic (DM) Algorithm
The algorithm assigns priorities based on the relative deadlines of tasks

If the relative deadline of every task equals its period, DM becomes RM

Eg: T1 = (50, 50, 25, 100), T2 = (0, 62.5, 10, 20), and T3 = (0, 125, 25, 50), given as (phase, period, execution time, relative deadline)

T2 has the highest priority, followed by T3 and then T1


Deadline Monotonic (DM) Algorithm

Task   Priority   p      e    D
T1     3          50     25   100
T2     1          62.5   10   20
T3     2          125    25   50
Deadline Monotonic (DM) Algorithm
When relative deadlines are arbitrary, DM performs better than RM: in some cases
DM can produce a feasible schedule where RM cannot
In the previous example, under RM, T1 has the highest priority followed by T2 and then T3
EDF Algorithm
Dynamic priority algorithm

Priority is based on the absolute deadlines of jobs: the earlier the deadline, the higher the
priority

For more details, refer to the previous lectures


Optimality of EDF Algorithm
EDF is an optimal algorithm

For details, refer to the previous lectures


Schedulability of EDF Algorithm
When every relative deadline ≥ period, the necessary and sufficient condition for schedulability is
∑ ei/pi = U ≤ 1

If Di < pi for some tasks, a sufficient condition is

∑ ei/min(pi, Di) ≤ 1

Sufficient condition: even if this condition is not satisfied, we might still be able to
schedule the tasks
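A minimal sketch of both EDF conditions, with tasks as hypothetical (period, execution time, relative deadline) triples:

def edf_utilization_test(tasks):
    # Necessary and sufficient when every D_i >= p_i: U = sum(e/p) <= 1
    return sum(e / p for p, e, d in tasks) <= 1.0

def edf_density_test(tasks):
    # Sufficient (not necessary) when some D_i < p_i: sum(e / min(p, D)) <= 1
    return sum(e / min(p, d) for p, e, d in tasks) <= 1.0

print(edf_utilization_test([(4, 1, 4), (5, 1.5, 5), (9, 2, 9)]))        # True: U ~ 0.77 <= 1
print(edf_density_test([(50, 25, 100), (62.5, 10, 20), (125, 25, 50)])) # False: density 1.5 > 1,
# so the sufficient test is inconclusive for the DM example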
LST Algorithm
Dynamic priority algorithm

Priority is based on the slack of jobs: the smaller the slack, the higher the priority

For more details, refer to the previous lectures


Optimality of DM Algorithm
Although fixed-priority scheduling is not optimal in general, we still
use it because it leads to a more predictable and stable system

Among fixed-priority algorithms, the best one is the DM algorithm

A system T of independent, preemptible periodic tasks that are in
phase and have relative deadlines equal to or less than their respective periods can be
feasibly scheduled on one processor according to the DM algorithm whenever it can
be feasibly scheduled according to any fixed-priority algorithm

Proof: any feasible fixed-priority schedule can be converted into a DM schedule, using
the same steps as in the EDF optimality proof
SCHEDULABILITY TEST FOR FIXED-PRIORITY TASKS
WITH SHORT RESPONSE TIMES
Assumptions

The response times of the jobs are smaller than or equal to their respective periods:
every job completes before the next job in the same task is released (relative
deadline ≤ period)

Utilization U ≤ 1

We represent the set of all tasks by T and the i-th task in T as Ti

The tasks in T are indexed in decreasing order of priority: priority of Ti >
priority of Tj if i < j
Critical Instant
Input to the schedulability test: the sets {pi} and {ei} of periods and execution
times of the tasks in T

The test checks one task Ti at a time to determine whether the response times of all its
jobs are equal to or less than its relative deadline Di

We must first identify the worst-case combination of release times of any job Ji,c in Ti
and all the jobs that have higher priorities than Ji,c

This combination is the worst because the response time of a job Ji,c released under
this condition is the largest

We call this release time the critical instant


Critical Instant
The response time of a job in Ti released at a critical instant is the maximum
(possible) response time of the task; we denote it by Wi

Theorem:
In a fixed-priority system where every job completes before the next job in the same
task is released, a critical instant of any task Ti occurs when one of its jobs Ji,c is
released at the same time as a job in every higher-priority task, that is,
ri,c = rk,l for some l, for every k = 1, 2, . . . , i − 1
Critical Instant
RM schedule of the three tasks (2, 0.6), (2.5, 0.2), and (3, 1.2). Time 0 is a critical instant
for both T2 and T3

[Figure: RM schedule of T1, T2, T3 over the interval 0 to 12]

Response times of the jobs in (2.5, 0.2): 0.8, 0.3, 0.2, 0.2, 0.8
Response times of the jobs in (3, 1.2): 2, 1.8, 2, 2
Critical Instant
RM schedule of the three tasks (2, 0.6), (1, 2.5, 0.2, 2.5), and (3, 1.2). Time 6 is a critical
instant for both T2 and T3, since the phase φ of T2 is 1

[Figure: RM schedule of T1, T2, T3 over the interval 0 to 12]

Maximum response time of the jobs in (2.5, 0.2): 0.8
Maximum response time of the jobs in (3, 1.2): 2
Time Demand Analysis
To determine whether a task can meet all its deadlines, we first compute the total
demand for processor time by a job released at a critical instant of the task and by all
the higher-priority tasks as a function of time from the critical instant

Then check whether this demand can be met before the deadline of the job. For this
reason, we name this test a time-demand analysis.

To carry out the time-demand analysis on T, we consider one task at a time, starting
from the task T1 with the highest priority in order of decreasing priority
Time Demand Analysis
Suppose a job of Ti is released at time t0 (a critical instant)
At time t0 + t, for t ≥ 0, the processor-time demand of this job and of all higher-priority jobs released
in [t0, t0 + t] is given by

wi(t) = ei + Σ_{k=1}^{i-1} ⌈t/pk⌉ ek   for 0 < t ≤ pi

This job of Ti can meet its deadline t0 + Di if at some time t0 + t the available processor time t is at least
the demand wi(t)

In other words, Ti can meet its deadline if we can find a t such that wi(t) ≤ t, with t ≤
Di and Di ≤ pi
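A minimal sketch of wi(t) in Python; tasks are hypothetical (period, execution time) pairs indexed from the highest priority, using the four-task example that follows:

import math

def time_demand(tasks, i, t):
    # w_i(t) = e_i + sum_{k=1}^{i-1} ceil(t/p_k) * e_k ; tasks[0] has the highest priority
    p_i, e_i = tasks[i]
    return e_i + sum(math.ceil(t / p_k) * e_k for p_k, e_k in tasks[:i])

tasks = [(3, 1), (5, 1.5), (7, 1.25), (9, 0.5)]
print(time_demand(tasks, 2, 5))   # w_3(5) = 1.25 + 2*1 + 1*1.5 = 4.75 <= 5, so T3 meets its deadline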
Time Demand Analysis

Task   Priority   p   e
T1     1          3   1
T2     2          5   1.5
T3     3          7   1.25
T4     4          9   0.5

U = 0.867
Liu & Layland bound = 4(2^(1/4) − 1) = 0.757
Time Demand Analysis

[Figure: the time-demand functions w1(t), w2(t), w3(t), and w4(t) of the four tasks, built up one task at a time and plotted against the processor-time supply line y = t]
Time Demand Analysis

Task   Priority   p    e
T1     1          3    1
T2     2          5    1.5
T3     3          7    1.25
T4     4          9    0.5
T5     5          10   1

U = 0.967
Time Demand Analysis

[Figure: the time-demand functions w1(t) through w5(t) with the fifth task added, plotted against the supply line y = t]
Time Demand Analysis
You will notice that the time-demand function wi(t) is a staircase function

The rises in the function occur at time instants that are integer multiples of the periods
of the higher-priority tasks

Steps in time-demand analysis (see the sketch below)

1. Compute the time-demand function wi(t)

2. Check whether the inequality

wi(t) ≤ t

is satisfied for values of t that are equal to
t = j·pk ;  k = 1, 2, . . . , i ;  j = 1, 2, . . . , ⌊min(pi, Di)/pk⌋
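A sketch of this check at the test points t = j·pk, applied to the five-task example above; tasks are (period, execution time) pairs, highest priority first, with Di = pi assumed:

import math

def time_demand(tasks, i, t):
    p_i, e_i = tasks[i]
    return e_i + sum(math.ceil(t / p_k) * e_k for p_k, e_k in tasks[:i])

def tda_schedulable(tasks, i):
    # Evaluate w_i(t) <= t only at t = j*p_k, k = 1..i, j = 1..floor(min(p_i, D_i)/p_k)
    p_i, _ = tasks[i]
    d_i = p_i                                   # relative deadline assumed equal to the period
    points = {j * p_k
              for p_k, _ in tasks[:i + 1]
              for j in range(1, int(min(p_i, d_i) // p_k) + 1)}
    return any(time_demand(tasks, i, t) <= t for t in sorted(points))

tasks = [(3, 1), (5, 1.5), (7, 1.25), (9, 0.5), (10, 1)]
for i in range(len(tasks)):
    print("T%d" % (i + 1), tda_schedulable(tasks, i))
# T1..T4 pass; T5 fails the test even though U = 0.967 <= 1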
Worst Case Simulation Method
We can also determine whether a system of independent, preemptible tasks is
schedulable by simply simulating it from a critical instant and observing whether any
deadline is missed

It is enough to simulate up to the largest period among the tasks, since the
simulation starts from a critical instant

Under RM, the task with the largest period has the lowest priority, so if its job released at a
critical instant can be scheduled, every job can be scheduled

Under DM, at least one job's deadline falls within one period of the largest-period
task
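A minimal sketch of such a worst-case simulation (preemptive, fixed priority, all first jobs released together at time 0); tasks are (period, execution time) pairs with Di = pi assumed, highest priority first. It reports only deadline misses detected at job completions within the simulated horizon:

def simulate_from_critical_instant(tasks):
    horizon = max(p for p, _ in tasks)      # simulate up to the largest period
    n = len(tasks)
    next_release = [0.0] * n                # release time of each task's next job
    remaining = [0.0] * n                   # remaining execution of each task's current job
    abs_deadline = [0.0] * n
    missed = []
    t = 0.0
    while t < horizon:
        for i, (p, e) in enumerate(tasks):  # release jobs that are due at time t
            if next_release[i] <= t:
                remaining[i] = e
                abs_deadline[i] = next_release[i] + p
                next_release[i] += p
        ready = [i for i in range(n) if remaining[i] > 0]
        next_event = min(r for r in next_release if r > t)
        if not ready:
            t = next_event                  # idle until the next release
            continue
        i = min(ready)                      # lowest index = highest priority
        run = min(remaining[i], next_event - t)
        t += run
        remaining[i] -= run
        if remaining[i] == 0 and t > abs_deadline[i]:
            missed.append(("T%d" % (i + 1), abs_deadline[i], t))
    return missed

print(simulate_from_critical_instant([(3, 1), (5, 1.5), (7, 1.25), (9, 0.5)]))   # [] -> no deadline missed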
SCHEDULABILITY TEST FOR FIXED-PRIORITY TASKS
WITH ARBITRARY RESPONSE TIMES
Relative deadlines may be larger than the respective periods

Since the response time of a task may be larger than its period, the task may have more than
one job ready for execution at a time

Ready jobs of the same task are scheduled on a FIFO basis
Busy Intervals
A level-πi busy interval (t0, t] (π is used to indicate priority) begins at an instant t0 when
(1) all jobs with priority πi or higher released before the instant have completed and
(2) a job with priority πi or higher is released

The interval ends at the first instant t after t0 when all the jobs with priority πi or higher released since t0
are complete

In the interval (t0, t], the processor is busy all the time executing
jobs with priorities πi or higher

All the jobs with priority πi or higher executed in the busy interval are released in the
interval, and at the end of the interval there is no backlog of jobs to be executed
afterwards
Busy Intervals
When computing the response times of jobs in Ti , we can consider every level-πi busy
interval independently from other level-πi busy intervals

A level-πi busy interval is in phase if the first jobs of all tasks that have priorities equal
to or higher than priority πi and are executed in this
interval have the same release time
Busy Intervals

T1 = (2, 1), T2 = (3, 1.25), and T3 = (5, 0.25); π1 = 1, π2 = 2, and π3 = 3

[Figure: the schedule of the three tasks with the level-1, level-2, and level-3 busy intervals marked]
General Schedulability Test
Here also we consider the case where all tasks are in phase (e.g., all tasks have φ = 0)

To determine whether a task Ti is schedulable, we must examine all the jobs of Ti that
are executed in the first level-πi busy interval

The first level-πi busy interval is in phase when the tasks are in phase

If the response times of all these jobs are no greater than the relative deadline of Ti, then Ti
is schedulable; otherwise, Ti may not be schedulable
General Time-Demand Analysis Method
Test one task at a time, starting from the highest-priority task T1, in order of decreasing
priority

To determine whether a task Ti is schedulable, assume that all the tasks are in phase
and the first level-πi busy interval begins at time 0
General Time-Demand Analysis Method
To test whether all the jobs in Ti can meet their deadlines (i.e., whether Ti is
schedulable), consider the subset of tasks with priorities πi or higher

(i) If the first job of every task in this subset completes by the end of the first period of that
task, check whether the first job Ji,1 of Ti meets its deadline. Ti is schedulable if Ji,1
completes in time; otherwise, Ti is not schedulable
General Time-Demand Analysis Method
(ii) If the first job of some task in this subset does not complete by the end of the first period of
the task, do the following:

(a) Compute the length of the in-phase level-πi busy interval by solving the equation
t = Σ_{k=1}^{i} ⌈t/pk⌉ ek iteratively (see the sketch below)

Start with t = Σ_{k=1}^{i} ek; for the next iteration, use the resulting value as the t on the right-hand side

Continue until you get the same t value in two consecutive steps, i.e., t(l) = t(l+1) for some
iteration number l ≥ 1

(b) Compute the maximum response times of all ⌈t(l)/pi⌉ jobs of Ti in the in-phase level-πi
busy interval and determine whether they complete in time

Ti is schedulable if all these jobs complete in time; otherwise Ti is not
schedulable.
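A minimal sketch of step (a), the fixed-point iteration for the busy-interval length, with tasks as (period, execution time) pairs in decreasing order of priority:

import math

def busy_interval_length(tasks, i):
    # Smallest t > 0 with t = sum_{k=1}^{i} ceil(t/p_k)*e_k, found by iteration
    t = sum(e for _, e in tasks[:i + 1])          # start with the sum of execution times
    while True:
        t_next = sum(math.ceil(t / p) * e for p, e in tasks[:i + 1])
        if t_next == t:                           # exact for these values; use a tolerance for general floats
            return t
        t = t_next

tasks = [(2, 1), (3, 1.25), (5, 0.25)]            # the example on the following slides
L = busy_interval_length(tasks, 2)
print(L, math.ceil(L / tasks[2][0]))              # 6.0 2: the level-3 busy interval has length 6
                                                  # and contains 2 jobs of T3 to examine in step (b)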
General Time-Demand Analysis Method
T1 = (2, 1), T2 = (3, 1.25), and T3 = (5, 0.25); π1 = 1, π2 = 2, and π3 = 3

[Figure: the time-demand functions w1,1(t), w2,1(t), and w3,1(t) of the first jobs, built up one task at a time and plotted against the supply line y = t]
General Time-Demand Analysis Method
T1 = (2, 1), T2 = (3, 1.25, 3.5), and T3 = (5, 0.25, 6); π1 = 1, π2 = 2, and π3 = 3

[Figure: the time-demand functions w1,1(t), w2,1(t), and w3,1(t) of the first jobs]

J2,1 completes at 3.25; response time = 3.25 − 0 = 3.25
J3,1 completes at 5.75; response time = 5.75 − 0 = 5.75
General Time-Demand Analysis Method
T1 = (2, 1), T2 = (3, 1.25), and T3 = (5, 0.25); π1 = 1, π2 = 2, and π3 = 3

[Figure: the time-demand functions w1,2(t) and w2,2(t) of the second jobs in the busy interval]
General Time-Demand Analysis Method
T1 = (2, 1), T2 = (3, 1.25, 3.5), and T3 = (5, 0.25, 6); π1 = 1, π2 = 2, and π3 = 3

[Figure: the time-demand functions w1,2(t), w2,2(t), and w3,2(t) of the second jobs]

J2,2 completes at 5.5; response time = 5.5 − 3 = 2.5
J3,2 completes at 6; response time = 6 − 5 = 1
General Time-Demand Analysis Method
The worst-case response time Wi,j of the j-th job of Ti in an in-phase level-πi busy period is
found by solving

wi,j(t) = j·ei + Σ_{k=1}^{i-1} ⌈(t + ri,j)/pk⌉ ek − ri,j

where ri,j = (j − 1)pi is the release time of the j-th job measured from the start of the busy period

We can solve it iteratively, starting with t = ei

Eg: for W2,2, start with t = 1.25
We get W2,2 = 2·1.25 + ⌈(1.25 + 3)/2⌉·1 − 3 = 2.5
Next substitute t = 2.5; we still get 2.5, so we can stop the iteration

We can say the task Ti is schedulable if the worst-case response time among all its jobs in an
in-phase level-πi busy period (we can always take the first level-πi busy period) is no greater than its
relative deadline

Wi ≤ Di
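A minimal sketch of that iteration; tasks are (period, execution time) pairs, highest priority first:

import math

def worst_case_response(tasks, i, j):
    # Fixed point of t <- j*e_i + sum_{k<i} ceil((t + r_ij)/p_k)*e_k - r_ij, with r_ij = (j-1)*p_i
    p_i, e_i = tasks[i]
    r_ij = (j - 1) * p_i
    t = e_i                                   # start with t = e_i as on the slide
    while True:
        t_next = j * e_i + sum(math.ceil((t + r_ij) / p_k) * e_k
                               for p_k, e_k in tasks[:i]) - r_ij
        if t_next == t:
            return t
        t = t_next

tasks = [(2, 1), (3, 1.25), (5, 0.25)]
print(worst_case_response(tasks, 1, 2))   # W_2,2 = 2.5, as computed above
print(worst_case_response(tasks, 2, 1))   # W_3,1 = 5.75, matching the earlier figure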
General Schedulability through Simulation
T1 = (2, 1), T2 = (3, 1.25), and T3 = (5, 0.25); π1 = 1, π2 = 2, and π3 = 3

[Figure: simulated schedule of the three tasks]
Practical Factors
Till now we assumed

- every job is preemptable at any time
- once a job is released, it never suspends itself
- scheduling and context-switch overhead is negligible
- the scheduler is event-driven and acts immediately upon event occurrences
- every task (or job) has a distinct priority
- every job in a fixed priority system is scheduled at a constant priority

These assumptions are often not valid!!


Nonpreemptability
Reasons:

The job is using a resource that must be accessed in a mutually exclusive manner

Preemption may be too costly. Eg: the job is using the disk

This may cause blocking

Blocking: a (ready) job Ji is blocked when it is prevented from executing by a lower-
priority job
Nonpreemptability
Consider the fixed-priority task set

T1 = (ε, 4, 1)
T2 = (ε, 5, 1.5)
T3 = (9, 2)

U = 0.772

3·(2^(1/3) − 1) = 0.779
Nonpreemptability

[Figure: the time-demand functions W1(t), W2(t), and W3(t) when all three tasks are preemptible]
Non-preemptability
Now consider that T3 is non-preemptible

When the jobs J1,1 and J2,1 become ready at ε, the first job J3,1 of T3 is already executing

Since J3,1 cannot be preempted, both J1,1 and J2,1 are blocked and have to wait until
J3,1 completes

So there is a priority inversion during (ε, 2)

Due to this, J2,1 will not meet its deadline

[Figure: the resulting schedule over 0 to 12: J3,1, then J1,1, then J2,1, which is preempted by J1,2 and finishes late]

One way to account for this is to add the priority-inversion duration (2 units) to the time
demand of the higher-priority tasks
Nonpreemptability

[Figure: the time-demand functions W1(t) and W2(t) with blocking included; the test now reports "not schedulable"]
Effect of Nonpreemptability on Schedulability
We use θi (θi ≤ ei) to denote the maximum execution time of the longest non-
preemptable portion of the jobs in task Ti

The blocking time due to non-preemptability, denoted by bi(np), is the maximum
total duration for which each job in task Ti may be delayed by lower-priority jobs

bi(np) = max_{i+1 ≤ k ≤ n} θk

The time-demand functions then become

wi(t) = ei + bi + Σ_{k=1}^{i-1} ⌈t/pk⌉ ek   for 0 < t ≤ min(Di, pi)

wi,j(t) = j·ei + bi + Σ_{k=1}^{i-1} ⌈(t + ri,j)/pk⌉ ek − ri,j
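A minimal sketch of the test with blocking included, applied to the example above (θ3 = 2, all other θk = 0):

import math

def time_demand_with_blocking(tasks, thetas, i, t):
    # w_i(t) = e_i + b_i(np) + sum_{k<i} ceil(t/p_k)*e_k, with b_i(np) = max theta_k over k > i
    p_i, e_i = tasks[i]
    b_i = max(thetas[i + 1:], default=0)
    return e_i + b_i + sum(math.ceil(t / p_k) * e_k for p_k, e_k in tasks[:i])

tasks = [(4, 1), (5, 1.5), (9, 2)]    # T1, T2, T3 as (period, execution time)
thetas = [0, 0, 2]                    # only T3 has a nonpreemptable section
print(time_demand_with_blocking(tasks, thetas, 1, 4))   # w_2(4) = 1.5 + 2 + 1 = 4.5 > 4
print(time_demand_with_blocking(tasks, thetas, 1, 5))   # w_2(5) = 1.5 + 2 + 2 = 5.5 > 5, so T2 fails the test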
Self Suspension
While executing, a job may invoke some external operation, for example, an I/O
operation

Self-blocking or self-suspension occurs when the job is suspended and waits until such
an operation completes before its execution can continue

While it waits, the operating system removes it from the ready queue
and places it in a blocked queue

We assume that the maximum amount of time each external operation takes to
complete and, hence, the maximum duration of each self-suspension, is known
Schedulability Under Self Suspension

Condition 1: every job in a task Ti self-suspends for x units of time immediately
after it is released (e.g., while waiting for input data transmission)

The job is ready for execution only x time units after it is released

Hence, the time from the instant when the job is ready to its deadline
is only Di − x, not Di

To determine whether the task Ti is schedulable, we use the shortened deadline Di − x
in the schedulability test
Schedulability Under Self Suspension

Condition 2: different jobs of task Ti may self-suspend for different amounts of time

As a consequence, the task no longer behaves as a periodic task

T1 = (4, 2.5)
T2 = (3, 7, 2)

Only J1,1 suspends, at the beginning, for 1.5 units of time


Schedulability Under Self Suspension
T1 = (4, 2.5), T2 = (3, 7, 2)

[Figure: schedule over 0 to 21 with no self-suspension, and schedule when J1,1 self-suspends for 1.5 units; in the latter case T2's deadline is broken]
Schedulability Under Self Suspension

bi(ss) = (maximum self-suspension time of Ti) + Σ_{k=1}^{i-1} min(ek, maximum self-suspension time of Tk)

In a system where some tasks are non-preemptable, the effect of self-suspension is
even more severe

The reason is that every time a job suspends itself, it loses the processor

It may be blocked again by a non-preemptive lower-priority job when it resumes
after the suspension

If each job in a task Ti can self-suspend at most Ki times after it
starts execution, its total blocking time bi is given by

bi = bi(ss) + (Ki + 1)·bi(np)
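A minimal sketch of these blocking terms; max_ss and K hold the (hypothetical) maximum self-suspension time and maximum number of self-suspensions of each task, in priority order:

def blocking_np(thetas, i):
    # b_i(np): longest nonpreemptable section among lower-priority tasks (indices > i)
    return max(thetas[i + 1:], default=0)

def blocking_ss(tasks, max_ss, i):
    # b_i(ss) = max self-suspension of T_i + sum_{k<i} min(e_k, max self-suspension of T_k)
    return max_ss[i] + sum(min(e_k, max_ss[k]) for k, (_, e_k) in enumerate(tasks[:i]))

def total_blocking(tasks, thetas, max_ss, K, i):
    # b_i = b_i(ss) + (K_i + 1) * b_i(np)
    return blocking_ss(tasks, max_ss, i) + (K[i] + 1) * blocking_np(thetas, i)

tasks = [(4, 1), (5, 1.5), (9, 2)]
print(total_blocking(tasks, [0, 0, 2], [0, 0.5, 0], [0, 1, 0], 1))
# 4.5 = 0.5 + (1 + 1)*2 for a made-up T2 that suspends at most once, for 0.5 units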


Context Switches
To account for context switching, we need to know the number of preemptions

Task   Priority   Period   Execution time
T1     1          7        2
T2     2          16       4
T3     3          31       7

[Figure: RM schedule of T1, T2, T3 over 0 to 21, showing where preemptions occur]
Context Switches
We can account for the context-switch overhead in a schedulability test by including
the time spent on the two context switches at the start and completion of each job as
part of the execution time of the job

Include the context-switch time in the execution time of the preempting job

Let CS denote the context-switch time of the system

ei' = ei + 2·CS

If a job self-suspends, it incurs two more context switches each time it self-
suspends

ei' = ei + 2(Ki + 1)·CS
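A one-line sketch of this inflation (CS = 0.1 is a made-up context-switch time):

def inflated_wcet(e, cs, k=0):
    # Two context switches per job, plus two more for each of the K self-suspensions
    return e + 2 * (k + 1) * cs

print(inflated_wcet(4, 0.1))        # 4.2 = e + 2*CS for a job that never self-suspends
print(inflated_wcet(4, 0.1, k=2))   # 4.6 = e + 2*(K + 1)*CS with K = 2 self-suspensions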
Limited Priority Levels
A real-life system can support only a limited number of priority levels

As a consequence, tasks (or jobs) may have non-distinct priorities

When tasks in a fixed-priority system have non-distinct priorities, a job Ji,j of Ti may be
delayed by a job of an equal-priority task Tk

Jobs of equal priority are scheduled either on a FIFO basis or on a round-robin basis

Let TE(i) denote the subset of tasks, other than Ti, that have the
same priority as Ti, and TH(i) the subset of tasks with higher priority. In the worst case

wi(t) = ei + bi + Σ_{Tk ∈ TE(i)} ek + Σ_{Tk ∈ TH(i)} ⌈t/pk⌉ ek   for 0 < t ≤ min(Di, pi)
Tick Scheduling
In the previous discussions we assumed the scheduler is event driven

Hence every job is inserted into the ready-job queue as soon as it becomes ready

But this assumption is often not valid

We mostly implement the scheduler as clock driven, where the execution of the scheduler
is triggered by a timer interrupt

- A ready job is not noticed until the next clock interrupt
- A ready job not yet noticed by the scheduler must be held in a separate queue
(the pending queue)

When the scheduler executes, it moves the jobs in the pending queue to the ready-job
queue and places them there in order of their priorities
Tick Scheduling
The time the scheduler takes to move the jobs introduces additional scheduling
overhead

We can model the scheduler as a periodic task T0 whose period p0 is the tick size

This task has the highest priority

Its execution time e0 is the amount of time the scheduler takes to service the clock
interrupt

CS0 is the time the scheduler takes to move one job from the pending queue to the ready queue


Tick Scheduling
T1 = (0.1, 4, 1), T2 = (0.1, 5, 1.8), T3 = (20, 5), where T3 is non-preemptible at the beginning for 1.1 units
p0 = 1, e0 = 0.05, CS0 = 0.06; relative deadlines are the same as the periods

[Figure: tick-scheduled timeline of the three tasks; a deadline is broken]
Tick Scheduling
The additional time demand introduced by tick scheduling is accounted for in a fixed-priority system by
modified task parameters in the computation of the time-demand function of task Ti (sketched in code below):
1. include the task T0 = (p0, e0) in the set of higher-priority tasks;
2. add (Kk + 1)·CS0 to the execution time ek of every higher-priority task Tk (i.e., for k =
1, 2, . . . , i), where Kk is the number of times Tk may self-suspend;
3. for every lower-priority task Tk, k = i + 1, . . . , n, add a task (pk, CS0) to the set of
higher-priority tasks; and
4. make the blocking time bi(np) due to nonpreemptability of Ti equal to

(⌈ max_{i+1 ≤ k ≤ n} θk / p0 ⌉ + 1)·p0

where θk is the maximum execution time of the non-preemptable sections of the
lower-priority task Tk
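A minimal sketch of building these modified parameters (steps 1 to 4), assuming the step-4 blocking term (⌈θ/p0⌉ + 1)·p0 as reconstructed above:

import math

def tick_scheduling_model(tasks, thetas, K, p0, e0, cs0):
    # tasks: (p_k, e_k) in decreasing priority order; returns, for each task i,
    # (inflated e_i, list of higher-priority (period, execution) pairs, modified b_i(np))
    models = []
    for i, (p_i, e_i) in enumerate(tasks):
        higher = [(p0, e0)]                                   # 1. the tick task T0 = (p0, e0)
        higher += [(p_k, e_k + (K[k] + 1) * cs0)              # 2. inflate higher-priority execution times
                   for k, (p_k, e_k) in enumerate(tasks[:i])]
        higher += [(p_k, cs0) for p_k, _ in tasks[i + 1:]]    # 3. one (p_k, CS0) task per lower-priority task
        theta_max = max(thetas[i + 1:], default=0)
        b_np = (math.ceil(theta_max / p0) + 1) * p0           # 4. blocking rounded up to tick boundaries (assumed form)
        models.append((e_i + (K[i] + 1) * cs0, higher, b_np))
    return models

# The example task set: T1 = (4, 1), T2 = (5, 1.8), T3 = (20, 5) with theta_3 = 1.1
print(tick_scheduling_model([(4, 1), (5, 1.8), (20, 5)], [0, 0, 1.1], [0, 0, 0], 1, 0.05, 0.06))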
