
Is It Time Yet?

Wing On Chan

Distributed Systems Chapter 18 - Scheduling


Hermann Kopetz

Scheduling Algorithm Classifications


Real-Time Scheduling
  Soft
  Hard
    Dynamic
      Preemptive
      Non-preemptive
    Static
      Preemptive
      Non-preemptive
3

Scheduling Problem
Distributed hard real-time systems
Execute a set of concurrent RT transactions such that all time-critical transactions meet their deadlines
Transactions need resources: computational, communication, and data
The problem is decomposed into scheduling within a node and scheduling of the communication resources
4

Hard RT Vs Soft RT Scheduling


Hard RT Systems
Deadlines must be guaranteed before execution starts; a high probability that a transaction finishes before its deadline is not enough
Requires off-line schedulability tests and feasible static schedules

Hard RT Vs Soft RT Scheduling


Soft RT Systems
Violation of timing constraints is not critical
Cheaper, resource-inadequate solutions can be used
Under adverse conditions, it is tolerable that some transactions do not meet their timing constraints

Dynamic Vs Static Scheduling


Dynamic (On-line) Scheduling
Considers only
The actual requests and their execution time parameters

Costly to find a schedule

Static Scheduling (Off-line)


Complete knowledge of:
Maximum execution times
Precedence constraints
Mutual exclusion constraints
Deadlines
7

Preemptive Vs Non-Preemptive
Preemptive
Can be interrupted by more urgent tasks
Safety assertions

Non-Preemptive
No interruptions of a running task
Worst-case response time of the shortest task = execution time of the longest task + execution time of the shortest task
Reasonable for scenarios with many short tasks
8

Central Vs Distributed
Dynamic distributed RT systems
Central scheduling or distributed algorithms
Requires up-to-date information about all nodes; significant communication costs

Schedulability Test
Determines whether a schedule exists
Three kinds of test: exact, necessary, and sufficient

Optimal scheduler
Optimal if schedule can be found when the exact schedulability test says there is one

Exact schedulability test


Belongs to the class of NP-complete problems
10

Schedulability Test
Sufficient schedulability test
Sufficient but not necessary condition

Necessary schedulability test


Necessary but not sufficient condition
The laxity, i.e. the difference between the deadline di and the computation time ci, must be non-negative for every task

11

Periodic Tasks
Periodic Tasks
After the initial task request
All future request times are known: they are obtained by adding integer multiples of the known period to the initial request time

12

Periodic Tasks
Task set {Ti} of periodic tasks
Periods pi, deadlines di, processing requirements ci

It is sufficient to examine schedules whose length equals the least common multiple of the periods in {Ti} (see the sketch below)
13
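As a small illustration of this point, the following Python sketch computes the length of the schedule that must be examined; the function name and task representation are illustrative, not taken from the slides.

from math import lcm  # Python 3.9+

def hyperperiod(periods):
    # Length of the schedule that must be examined: the least common
    # multiple of all task periods in {Ti}.
    return lcm(*periods)

# Example: periods 5, 10 and 20 give a hyperperiod of 20 time units.
print(hyperperiod([5, 10, 20]))  # -> 20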

Periodic Tasks
Necessary schedulability test
The sum of the utilization factors μi must be less than or equal to n, where n is the number of processors (a check is sketched below):

μ = Σ (ci / pi) <= n

μi = ci / pi = percentage of time that task Ti requires the service of a CPU


14
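A minimal sketch of this necessary test in Python, assuming a simple (c, p) task representation; the names are illustrative, not from the slides.

from collections import namedtuple

Task = namedtuple("Task", "c p")  # computation time c, period p

def passes_necessary_test(tasks, n_processors=1):
    # Necessary (but not sufficient) condition: total utilization <= n.
    mu = sum(t.c / t.p for t in tasks)
    return mu <= n_processors

# Task set from the adversary-argument example: T1 (c=2, p=4), T2 (c=1, p=4).
print(passes_necessary_test([Task(2, 4), Task(1, 4)]))  # 0.75 <= 1 -> True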

Sporadic Tasks
Request times are not known beforehand
There must be a minimum interval pi between any two request times of a sporadic task
If such a pi does not exist, the necessary schedulability test will fail
Aperiodic tasks: no constraints on request times
15

Optimal Dynamic Scheduling


Consider a dynamic scheduler with full past knowledge only
An exact schedulability test is impossible, so a new definition of an optimal dynamic scheduler is needed:
Optimal if it can find a schedule whenever a clairvoyant scheduler can find a schedule

16

Adversary Argument
If there are mutual exclusion constraints between periodic and sporadic tasks, then in general, it is impossible to find an optimal totally online dynamic scheduler.

17

Adversary Argument

18

Adversary Argument
Necessary schedulability test
μ = Σ (ci / pi) = (2/4) + (1/4) = 3/4 <= n = 1

Suppose that when T1 starts, T2 requests service


Because T1 and T2 are mutually exclusive, T2 cannot preempt T1
T2 has a laxity of d2 - c2 = 1 - 1 = 0
T2 will therefore miss its deadline
19

Adversary Argument
Clairvoyant scheduler
Schedules the periodic task between the sporadic requests
Since the laxity of the periodic task is greater than the execution time of the sporadic task, the clairvoyant scheduler will always find a schedule

20

Adversary Argument
If the on-line scheduler has no future knowledge about sporadic tasks, scheduling becomes unsolvable. Predictable hard RT systems are only feasible if there are regularity assumptions

21

Dynamic Scheduling
Dynamic scheduling algorithm
Determines the next task to run after the occurrence of a significant event, based on the current task requests

22

Rate Monotonic Algorithm


Classic algorithm for hard RT systems with a single CPU
Dynamic preemptive algorithm
Uses static task priorities

23

Rate Monotonic Algorithm


Assumptions
1. All requests in the set {Ti} are periodic
2. All tasks are independent: no precedence or mutual exclusion constraints
3. di = pi
4. The maximum ci is known and constant
5. Context switching time is ignored
6. μ = Σ (ci / pi) <= n (2^(1/n) - 1)   [the bound approaches ln 2, about 0.7, for large n]
(A check of condition 6 is sketched below.)
24
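Assumption 6 is the well-known utilization bound; a hedged sketch of the corresponding sufficient check, assuming a list of (c, p) pairs (the names are illustrative):

def rate_monotonic_sufficient(tasks):
    # tasks: list of (c, p) pairs with deadline d = p (assumption 3).
    # Under RM, priorities themselves are fixed: shorter period -> higher priority.
    n = len(tasks)
    mu = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1 / n) - 1)  # approaches ln 2 ~= 0.69 as n grows
    return mu <= bound

# Two tasks with total utilization 0.75 <= 2*(2**0.5 - 1) ~= 0.83 -> schedulable.
print(rate_monotonic_sufficient([(1, 4), (2, 4)]))  # -> True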

Rate Monotonic Algorithm


Algorithm defines task priorities
Tasks with short periods pi get higher priority
Tasks with longer periods pi get lower priority
At run-time, the highest-priority ready task is always executed

If all assumptions are met, all tasks Ti meet their deadlines
Optimal among fixed-priority algorithms for single-processor systems
25

Earliest-Deadline-First Algorithm
Optimal dynamic preemptive algorithm on a single processor
Uses dynamic priorities
Assumptions 1-5 of the rate monotonic algorithm must also hold
The processor utilization μ can go up to 1, even with tasks whose periods pi are not multiples of the shortest period
After a significant event:
Task with the shortest di gets the highest dynamic priority
26
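A minimal single-processor EDF simulation over one hyperperiod, assuming integer time units and deadlines equal to periods (assumptions 1-5 above); this is an illustrative sketch, not the algorithm as presented in the slides:

from math import lcm  # Python 3.9+

def edf_schedule(tasks):
    # tasks: list of (c, p) with deadline = period.  Returns the task index
    # running in each time unit (None = idle), or None if a deadline is missed.
    horizon = lcm(*(p for _, p in tasks))
    remaining = [0] * len(tasks)     # remaining computation of current instance
    deadline = [0] * len(tasks)      # absolute deadline of current instance
    timeline = []
    for t in range(horizon):
        for i, (c, p) in enumerate(tasks):
            if t % p == 0:           # new periodic request (significant event)
                if remaining[i] > 0:
                    return None      # previous instance missed its deadline (= now)
                remaining[i], deadline[i] = c, t + p
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            i = min(ready, key=lambda j: deadline[j])  # earliest deadline first
            remaining[i] -= 1
            timeline.append(i)
        else:
            timeline.append(None)
    if any(r > 0 for r in remaining):
        return None                  # deadlines at the end of the hyperperiod missed
    return timeline

print(edf_schedule([(2, 5), (2, 10)]))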

Least-Laxity Algorithm
Optimal in single-processor systems
Same assumptions as the earliest-deadline-first algorithm
At each scheduling decision point:
Task with the shortest laxity (di ci) is given the highest dynamic priority

In multiprocessor systems
The earliest-deadline-first and least-laxity algorithms are no longer optimal
The least-laxity algorithm can handle some task scenarios that the earliest-deadline-first algorithm cannot

27
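A sketch of the least-laxity decision rule at a single scheduling decision point; the task representation is an illustrative assumption:

def least_laxity_pick(ready_tasks, now):
    # ready_tasks: list of (task_id, remaining_c, absolute_deadline).
    # Laxity = time to deadline minus remaining computation time.
    def laxity(task):
        _, remaining_c, deadline = task
        return (deadline - now) - remaining_c
    return min(ready_tasks, key=laxity)[0] if ready_tasks else None

# At t = 0: task "A" (2 units left, deadline 6) has laxity 4,
# task "B" (1 unit left, deadline 3) has laxity 2 -> B is chosen.
print(least_laxity_pick([("A", 2, 6), ("B", 1, 3)], now=0))  # -> 'B'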

Scheduling Dependent Tasks


In practice, the analysis of tasks with precedence and mutual exclusion constraints is more useful, since such tasks compete for shared resources
Possible solutions:
Provide extra resources, so that simpler sufficient schedulability tests and algorithms can be used
Divide the problem into two parts:
One solved at compile time
One solved during run-time (the simpler of the two)

Add restricting regularity assumptions


28

Kernelized Monitor
For a set of short critical sections, where the longest critical section is shorter than a given duration q, processor time is allocated in uninterruptible quanta of length q
Assumes every critical section can be started and completed within a single uninterruptible quantum
A process may only be interrupted at times x*q, where x is an integer (a sketch of this dispatching rule follows below)
29
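A hedged sketch of the dispatching rule only: scheduling decisions are taken solely at quantum boundaries, so a critical section that fits within one quantum is never preempted. The helper names and the pick_task callback are assumptions for illustration:

def kernelized_dispatch(q, horizon, pick_task):
    # pick_task(t) returns the task to run for the next quantum (for example
    # an earliest-deadline choice); it is consulted only at times x*q, so a
    # critical section started at a quantum boundary and shorter than q is
    # never preempted.
    timeline = []
    for t in range(0, horizon, q):
        task = pick_task(t)
        timeline.extend([task] * q)
    return timeline

# With q = 2 and a trivial picker, decisions are made only at t = 0, 2, 4, ...
print(kernelized_dispatch(2, 10, pick_task=lambda t: "T1" if t < 4 else "T2"))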

Kernelized Monitor
Example:
Assume there are two periodic tasks
T1: c1 = 2, d1 = 2, p1 = 5
T2: c21 = 2, c22 = 2, d2 = 10, p2 = 10
T2 consists of two scheduling blocks
C22 of T2 is mutually exclusive to T1

q=2

30

Kernelized Monitor

At t = 5, the earliest-deadline algorithm needs to schedule T1 again, but it cannot: T22 is still inside the critical section it shares with T1, so T1 is blocked
31

Kernelized Monitor

The region before the second activation of T1 must be blocked: the critical section of T2 must not be started there


Forbidden region
The dispatcher must know about all forbidden regions at compile time
32

Priority Inversion
Consider three tasks T1, T2, and T3 with T1 having the highest priority
Scheduled with the rate-monotonic algorithm
T1 and T3 require exclusive access to a common resource protected by a semaphore S

33

Priority Inversion
T3 starts and gains exclusive access to the resource (locks S)
T1 requests service but is blocked by T3
T2 requests service and is granted the processor, preempting T3
T2 finishes
T3 finishes and releases S
T1 starts and finishes
The actual completion order is T2, T3, then T1: the medium-priority task T2 has delayed the highest-priority task T1
Solution: the priority ceiling protocol
34

Priority Ceiling Protocol


The priority ceiling (PC) of a semaphore S = the priority of the highest-priority task that may lock S
A task T may only enter a new critical section if its priority is higher than the priority ceilings of all semaphores currently locked by tasks other than T (this admission rule is sketched below)
T runs at its assigned priority, unless it is in a critical region and blocks higher-priority tasks:
It then inherits the highest priority of the tasks it blocks while in the critical region
It returns to its assigned priority when exiting the critical region
35
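A minimal sketch of the admission rule above, assuming dictionaries for lock ownership, ceilings, and priorities; the concrete ceilings are an assumption chosen to match the worked example on the following slides:

def may_enter_critical_section(task, locked_by, ceiling, priority):
    # locked_by: dict semaphore -> task currently holding it.
    # ceiling:   dict semaphore -> its priority ceiling.
    # priority:  dict task -> current priority (higher number = higher priority).
    ceilings_of_others = [ceiling[s] for s, holder in locked_by.items() if holder != task]
    return all(priority[task] > c for c in ceilings_of_others)

# Situation at step 7 of the example below: T3 holds S2 and S3.  The ceilings
# are an illustrative assumption (S1 and S2 may be locked by T1, S3 by T2).
prio    = {"T1": 3, "T2": 2, "T3": 1}
ceiling = {"S1": 3, "S2": 3, "S3": 2}
locked  = {"S2": "T3", "S3": "T3"}
print(may_enter_critical_section("T1", locked, ceiling, prio))  # -> False: T1 is blocked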

Priority Ceiling Protocol

1. T3 starts
2. T3 locks S3
3. T2 starts and preempts T3
4. T2 is blocked when it tries to lock S3; T3 resumes at the inherited priority of T2
36

Priority Ceiling Protocol

5. T3 enters a nested critical region and locks S2
6. T1 starts and preempts T3
7. T1 is blocked when it tries to lock S1, because its priority is not higher than the ceiling of the locked S2; T3 resumes
8. T3 unlocks S2; T1 awakens and preempts T3, then locks S1
37

Priority Ceiling Protocol

9. T1 unlocks S1
10. T1 locks S2
11. T1 unlocks S2
12. T1 completes; T3 resumes at the priority of T2


38

Priority Ceiling Protocol

13. T3 unlocks S3; T2 preempts T3 and locks S3
14. T2 unlocks S3
15. T2 completes; T3 resumes
16. T3 completes
39

Priority Ceiling Protocol


One sufficient schedulability test (sketched in code below):
For a set of n periodic tasks {Ti} with periods pi, computation times ci, and worst-case blocking time Bi caused by lower-priority tasks:

for all i, 1 <= i <= n:
(c1/p1) + (c2/p2) + ... + (ci/pi) + (Bi/pi) <= i (2^(1/i) - 1)

This is not the only test; more complex ones exist
The priority ceiling protocol is a predictable, though non-deterministic, scheduling protocol
40
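A sketch of this sufficient test, assuming tasks are given in priority order as (c, p, B) triples; the example values are illustrative, not from the slides:

def pcp_sufficient_test(tasks):
    # tasks: list of (c, p, B) ordered from highest to lowest priority
    # (i.e. shortest period first under rate-monotonic assignment); B is the
    # worst-case blocking time caused by lower-priority tasks.
    for i in range(1, len(tasks) + 1):
        util = sum(c / p for c, p, _ in tasks[:i])
        _, p_i, b_i = tasks[i - 1]
        if util + b_i / p_i > i * (2 ** (1 / i) - 1):
            return False
    return True

# Illustrative task set (c, p, B); every per-task condition holds here.
print(pcp_sufficient_test([(1, 4, 1), (1, 8, 1), (2, 16, 0)]))  # -> True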

Dynamic Scheduling In Distributed Systems


It is hard to guarantee deadlines even in single-processor systems
It is even harder in distributed or multiprocessor systems because of the communication system
Applications are required to tolerate transient faults, such as message losses, and to detect permanent faults
41

Dynamic Scheduling In Distributed Systems


Positive Acknowledgment or Retransmission (PAR)
Large temporal uncertainty between the shortest and the longest execution time; the worst case must assume the longest time
Results in poor responsiveness of the system

Masking Protocols
Send each message k + 1 times if tolerance of k message losses is required
No temporal problem, but permanent faults cannot be detected, because the communication is unidirectional
42

Dynamic Scheduling In Distributed Systems


Solutions?
No idea; providing good temporal performance is a fashionable research topic

43

Static Scheduling
A static schedule that guarantees all deadlines, based on known resource, precedence, and synchronization requirements, is calculated off-line
Requires strong regularity assumptions
The times at which external events will be serviced are known in advance
44

Static Scheduling
System design
Maximum delay time until request is recognized + maximum transaction response time < service deadline

Time
Generally a periodic, time-triggered schedule
The time line is divided into a sequence of granules (the cycle time)
Only one interrupt: a periodic clock interrupt signalling the start of a new granule
In distributed systems, the clocks are synchronized to a precision better than a granule
45

Static Scheduling
Tasks are periodic, with each pi a multiple of the basic granule
Schedule period = least common multiple of all pi
All scheduling decisions are made at compile time and only executed at run-time
Finding an optimal schedule in a distributed system is NP-complete
46

Search Tree
Precedence Graph
Tasks = Nodes, Edges = dependencies

Search Tree
Each level = one unit of time; depth = the schedule period
A path to a leaf node = a complete schedule
Goal: find a complete schedule that observes all precedence and mutual exclusion constraints and completes before the deadline
47

Heuristic Function
Two terms: the actual cost of the path so far, plus the estimated cost to the goal
Example:
Estimate the time needed to complete the precedence graph, the time until response (TUR)
A necessary estimate of the TUR = sum of the maximum execution times plus the communication delays
If the necessary estimate exceeds the deadline, prune the branches of this node and backtrack to the parent (a simplified search with this pruning rule is sketched below)
48
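A much simplified sketch of this idea: a depth-first search over the precedence graph on a single processor, pruning a branch as soon as the necessary estimate (sum of the remaining maximum execution times, ignoring communication and mutual exclusion) exceeds the deadline. Everything here is an illustrative assumption:

def find_schedule(tasks, deadline):
    # tasks: dict name -> (max_exec_time, set of predecessor names).
    # Returns a feasible execution order on one processor, or None.
    def search(done, t, order):
        if len(order) == len(tasks):
            return order
        remaining = sum(c for n, (c, _) in tasks.items() if n not in done)
        if t + remaining > deadline:      # necessary TUR estimate exceeds deadline
            return None                   #  -> prune this branch and backtrack
        for name, (c, preds) in tasks.items():
            if name not in done and preds <= done:   # precedence constraints met
                result = search(done | {name}, t + c, order + [name])
                if result is not None:
                    return result
        return None
    return search(set(), 0, [])

# A precedes B and C; with deadline 6, one feasible order is A, B, C.
print(find_schedule({"A": (2, set()), "B": (2, {"A"}), "C": (2, {"A"})}, deadline=6))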

Increasing Adaptability
Weakness of static scheduling: the assumption of strictly periodic tasks
Proposed solutions for more flexibility:
Transformation of sporadic requests into periodic requests
Sporadic server tasks
Mode changes
49

Transformation Of Sporadic Requests To Periodic Requests


A schedule can be found if the sporadic task has a laxity
One solution: replace the sporadic task T = (c, d, p) by a pseudo-periodic task T':

c' = c,  d' = c,  p' = min(d - c + 1, p)

A sporadic task with a short laxity forces a small pseudo-period p', so it demands a lot of reserved resources even though it issues requests only infrequently (see the sketch below)
50
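A sketch of the transformation as reconstructed above; the reconstruction d' = c, p' = min(d - c + 1, p) is an assumption consistent with the laxity remark on this slide:

def to_pseudo_periodic(c, d, p):
    # Transform a sporadic task (computation c, deadline d, minimum
    # interarrival p) into a pseudo-periodic task (c', d', p').  On a
    # discrete time grid, a request arriving at any instant waits at most
    # p' - 1 = d - c units for the next pseudo-periodic instance, which then
    # finishes within d' = c, so the original deadline d is met.
    return c, c, min(d - c + 1, p)

# A sporadic task with c = 1, d = 3, p = 100 becomes (1, 1, 3): one unit out
# of every three is reserved, although actual requests are at least 100 apart.
print(to_pseudo_periodic(1, 3, 100))  # -> (1, 1, 3)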

Sporadic Server Task


A periodic server task of high priority is created
It maintains an execution-time budget for the duration of the server's period
When a sporadic request arrives, it is serviced at the server's priority, depleting the budget
The consumed execution time is replenished one server period after the server becomes active
The sporadic server task is thus dynamically scheduled in response to a sporadic request, within statically reserved capacity
51
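A heavily simplified sketch of the budget bookkeeping only; the real sporadic server has more precise replenishment rules, so everything below is an illustrative assumption:

class SimpleSporadicServer:
    # Simplified budget bookkeeping for a sporadic server: capacity units are
    # consumed when sporadic requests are served at the server's priority, and
    # each consumed chunk is replenished one server period after the time it
    # was consumed.
    def __init__(self, capacity, period):
        self.capacity = capacity
        self.period = period
        self.budget = capacity
        self.pending = []            # (replenish_time, amount)

    def serve(self, now, demand):
        self._replenish(now)
        used = min(self.budget, demand)
        if used:
            self.budget -= used
            self.pending.append((now + self.period, used))
        return used                  # work actually served at the server's priority

    def _replenish(self, now):
        due = [(t, a) for t, a in self.pending if t <= now]
        self.pending = [(t, a) for t, a in self.pending if t > now]
        self.budget += sum(a for _, a in due)

server = SimpleSporadicServer(capacity=2, period=10)
print(server.serve(now=0, demand=1))   # -> 1 (budget left: 1)
print(server.serve(now=3, demand=2))   # -> 1 (budget exhausted)
print(server.serve(now=14, demand=2))  # -> 2 (both chunks replenished at t = 10 and t = 13)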

Mode Change
During system design, identify all operating modes
For each mode, generate a static schedule off-line
Analyze the possible mode changes and develop a mode-change schedule
At run-time, when a mode change is requested, switch to the corresponding static schedule
52

Comparisons
Predictability
Static Scheduling
Accurate planning of schedule, so precise predictability

Dynamic Scheduling
No schedulability tests exist for distributed systems with mutual exclusion and precedence relations
Because of its dynamic nature, timeliness cannot be guaranteed

53

Comparisons
Testability
Static Scheduling
The measured performance of every task can be compared with the established plan
Testing is systematic and constructive, since all input cases can be observed

Dynamic Scheduling
Confidence in timeliness is based on simulations
Real loads are not enough, since rare events do not occur often
Are the simulated loads representative of the real loads?
54

Comparisons
Resource Utilization
Static Scheduling
Planned for peak load, with the time reserved for each task being at least its maximum execution time
With many operating modes, this can lead to a combinatorial explosion of static schedules

Dynamic Scheduling
The processor is made available again more quickly
Resources are needed for the dynamic scheduling itself
55

Comparisons
Resource Utilization
Dynamic Scheduling (Contd)
If the load is low, utilization is better than with a static schedule
If the load is high, more resources are used for dynamic scheduling and fewer remain for the execution of tasks

56

Comparisons
Extensibility
Static Scheduling
If a new task is added or a maximum execution time is modified, the schedule must be recalculated
If a new node sends information into the system, the communication schedule must be recalculated
A static schedule cannot be calculated if the number of tasks changes dynamically during run-time
57

Comparisons
Extensibility
Dynamic Scheduling
Tasks are easy to add or modify
But a change can ripple through the system
The probability of change and the system test time are proportional to the number of tasks; assessing the consequences of a change grows more than linearly with the number of tasks
This scales poorly for large applications
58
