Is It Time Yet?: Wing On Chan
Wing On Chan
Scheduling Problem
Distributed hard real-time systems
Execute a set of concurrent RT transactions such that all time-critical transactions meet their deadlines
Transactions need resources (computational, communication, and data)
Decomposition: scheduling within a node, and scheduling of the communication resources
Preemptive Vs Non-Preemptive
Preemptive
Can be interrupted by more urgent tasks
Safety assertions
Non-Preemptive
No interruptions once a task has started
Worst-case response time of the shortest task = execution time of the longest task + execution time of the shortest task
Reasonable for scenarios with many short tasks
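As a quick illustration of that bound, a small Python sketch with hypothetical execution times:

```python
# Worst-case response time of a short task under non-preemptive scheduling:
# it may arrive just after the longest task has started, so it must wait for
# that task to finish and then run itself. (Hypothetical times, in ms.)
exec_ms = {"long_logging_task": 50, "short_control_task": 2}

worst_case = max(exec_ms.values()) + exec_ms["short_control_task"]
print(worst_case)  # 52 ms
```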
Central Vs Distributed
Dynamic distributed RT systems
Central scheduling vs. distributed scheduling algorithms
Central scheduling in a distributed system requires up-to-date state information in all nodes and incurs significant communication costs
Schedulability Test
Determine if a schedule exists
A schedulability test can be exact, necessary, or sufficient
Optimal scheduler
A scheduler is optimal if it finds a schedule whenever the exact schedulability test says one exists
Schedulability Test
Sufficient schedulability test
Sufficient but not necessary condition
Periodic Tasks
Periodic Tasks
After the initial task request, all future request times are known
They are obtained by adding integer multiples of the known period to the initial request time
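A tiny sketch of this, with a hypothetical initial request time and period:

```python
def request_times(initial, period, horizon):
    """Request times of a periodic task: the initial request plus every
    integer multiple of the period (hypothetical values below)."""
    t = initial
    while t < horizon:
        yield t
        t += period

print(list(request_times(initial=3, period=5, horizon=25)))  # [3, 8, 13, 18, 23]
```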
Periodic Tasks
Task set {Ti} of periodic tasks
Periods pi, deadlines di, processing requirements ci
Sufficient to examine schedules with length equal to the least common multiple of the periods in {Ti}
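A minimal sketch of computing that schedule length (the hyperperiod) in Python:

```python
from functools import reduce
from math import gcd

def hyperperiod(periods):
    """Least common multiple of the periods in {Ti}: a schedule only needs
    to be examined over an interval of this length."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

print(hyperperiod([4, 5, 10]))  # 20 time units
```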
Periodic Tasks
Necessary schedulability test
Sum of the utilization factors µi = ci / pi must be less than or equal to n, where n is the number of processors
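A minimal sketch of the necessary test, using a hypothetical helper and the same (ci, pi) values that appear in the adversary-argument example later in the deck:

```python
def necessary_test(tasks, n_processors=1):
    """Necessary (but not sufficient) schedulability test for periodic tasks:
    the total utilization sum(ci / pi) must not exceed the processor count."""
    mu = sum(c / p for c, p in tasks)
    return mu <= n_processors

print(necessary_test([(2, 4), (1, 4)]))  # 2/4 + 1/4 = 0.75 <= 1 -> True
```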
Sporadic Tasks
Request times are not known beforehand
There must be a minimum interval pi between any two request times of a sporadic task
If no such pi exists, the necessary schedulability test will fail
Aperiodic tasks = no constraints on request times
Adversary Argument
If there are mutual exclusion constraints between periodic and sporadic tasks, then in general, it is impossible to find an optimal totally online dynamic scheduler.
Adversary Argument
Adversary Argument
Necessary schedulability test
Sum of (ci / pi) = (2/4) + (1/4) = 3/4 <= n = 1
Adversary Argument
Clairvoyant scheduler
Schedules the periodic task between the sporadic tasks
The laxity of the periodic task is greater than the execution time of the sporadic task, so the clairvoyant scheduler can always find a schedule
Adversary Argument
If the on-line scheduler has no future knowledge about sporadic tasks, scheduling becomes unsolvable. Predictable hard RT systems are only feasible if there are regularity assumptions
Dynamic Scheduling
Dynamic scheduling algorithm
Determines the next task to run after the occurrence of a significant event
Based on the current task requests
Rate Monotonic Algorithm
If all assumptions are met, all tasks Ti meet their deadlines
Optimal for single-processor systems
Earliest-Deadline-First Algorithm
Optimal dynamic preemptive algorithm that uses dynamic priorities
Assumptions 1-5 of the Rate Monotonic Algorithm must also hold
The processor utilization µ can go up to 1, even with tasks whose periods pi are not multiples of the shortest period
After a significant event, the task with the closest deadline di gets the highest dynamic priority
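A minimal sketch of the EDF selection rule; the task records and field names are hypothetical, and this is only the priority decision, not a full scheduler:

```python
def edf_pick(ready_tasks):
    """Earliest-deadline-first: after a significant event, run the ready task
    whose absolute deadline is closest."""
    return min(ready_tasks, key=lambda t: t["deadline"])

ready = [{"name": "T1", "deadline": 12}, {"name": "T2", "deadline": 9}]
print(edf_pick(ready)["name"])  # T2
```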
Least-Laxity Algorithm
Optimal in single-processor systems
Same assumptions as the Earliest-Deadline-First algorithm
At each scheduling decision point, the task with the shortest laxity (di - ci) is given the highest dynamic priority
In multiprocessor systems
Neither the earliest-deadline-first nor the least-laxity algorithm is optimal
The least-laxity algorithm can handle task scenarios that the earliest-deadline-first algorithm cannot
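A minimal sketch of the laxity rule, using the common run-time form of laxity (time to the deadline minus remaining execution, with an explicit current time); the task records are hypothetical, and the example also shows a case where laxity order differs from deadline order:

```python
def laxity(task, now):
    """Slack of a ready task: time left until its deadline minus the
    execution time it still needs."""
    return (task["deadline"] - now) - task["remaining"]

ready = [{"name": "T1", "deadline": 12, "remaining": 6},
         {"name": "T2", "deadline": 9,  "remaining": 1}]

# EDF would pick T2 (deadline 9 < 12); least-laxity picks T1, whose slack
# of 1 is tighter than T2's slack of 3.
print(min(ready, key=lambda t: laxity(t, now=5))["name"])  # T1
```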
Kernelized Monitor
For a set of short critical sections, with the longest critical section shorter than a given duration q, the kernelized monitor allocates processor time in uninterruptible quanta of length q
It assumes every critical section can be started and completed within a single uninterruptible quantum
A process may only be interrupted at times x*q, where x is an integer
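A minimal sketch of the quantization rule, assuming a hypothetical helper:

```python
def next_decision_point(now, q):
    """The kernelized monitor only reschedules at integer multiples of the
    quantum q, so a running quantum (and any critical section completed
    inside it) is never interrupted."""
    return (now // q + 1) * q

print(next_decision_point(now=5, q=2))  # 6: earliest time preemption may occur
```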
Kernelized Monitor
Example:
Assume there are two periodic tasks
T1: c1 = 2, d1 = 2, p1 = 5
T2: c21 = 2, c22 = 2, d2 = 10, p2 = 10 (T2 has two scheduling blocks)
Block c22 of T2 is mutually exclusive with T1
q = 2
Kernelized Monitor
At t = 5, the earliest-deadline-first algorithm needs to schedule T1 again, but it cannot, because block c22 of T2 is executing inside the critical section it shares with T1
Kernelized Monitor
Priority Inversion
Consider three tasks T1, T2, and T3 with T1 having the highest priority
Scheduled with the rate-monotonic algorithm
T1 and T3 require exclusive access to a resource protected by a semaphore S
Priority Inversion
T3 starts and has exclusive access to the resource
T1 requests service but is blocked by T3
T2 requests service and is granted service
T2 finishes
T3 finishes and releases S
T1 starts and finishes
Actual execution order is T2, T3, then T1
Solution: Priority Ceiling Protocol
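A minimal sketch of the priority-inheritance step that the ceiling family of protocols builds on: while a low-priority task holds a semaphore that a higher-priority task is blocked on, it temporarily runs at the blocked task's priority, so a medium-priority task can no longer delay it the way T2 does above. The classes and numeric priorities are hypothetical, and the full priority ceiling protocol additionally assigns each semaphore a ceiling, which is not shown here.

```python
class Task:
    def __init__(self, name, priority):
        self.name, self.base, self.priority = name, priority, priority

class InheritanceSemaphore:
    def __init__(self):
        self.holder = None

    def lock(self, task):
        if self.holder is None:
            self.holder = task          # enter the critical section
            return True
        # Blocked: the holder inherits the blocked task's priority if higher,
        # so a medium-priority task can no longer preempt it indefinitely.
        self.holder.priority = max(self.holder.priority, task.priority)
        return False

    def unlock(self):
        self.holder.priority = self.holder.base  # drop back to base priority
        self.holder = None

t1, t3 = Task("T1", priority=3), Task("T3", priority=1)
s = InheritanceSemaphore()
s.lock(t3)          # T3 enters its critical section
s.lock(t1)          # T1 blocks; T3 now runs at T1's priority
print(t3.priority)  # 3
s.unlock()          # T3 leaves the critical section and drops back to 1
```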
1. T3 starts
2. T3 locks S3
3. T2 starts and preempts T3
4. T2 is blocked when locking S3; T3 resumes at T2's inherited priority
5. T3 enters a nested critical region and locks S1
6. T1 starts and preempts T3
7. T1 is blocked when locking S1; T3 resumes
8. T3 unlocks S2; T1 awakens, preempts T3, and locks S1
13. T3 unlocks S3; T2 preempts T3 and locks S3
14. T2 unlocks S3
15. T2 completes; T3 resumes
16. T3 completes
Not the only schedulability test; there are more complex ones
The priority ceiling protocol is a predictable, nondeterministic scheduling protocol
Masking Protocols
Send k + 1 copies of each message if tolerance of k faults is required
No temporal problem, but permanent faults cannot be detected because the communication is unidirectional
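A minimal sketch of the k + 1 rule, with a hypothetical send callback:

```python
def send_masked(message, k, send):
    """Fault masking by redundancy: transmit k + 1 copies so that up to k
    lost or corrupted copies are tolerated without any acknowledgement,
    keeping the communication unidirectional."""
    for _ in range(k + 1):
        send(message)

send_masked(b"setpoint=42", k=2, send=lambda m: print("tx", m))
```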
Static Scheduling
A static schedule that guarantees all deadlines, based on known resource, precedence, and synchronization requirements, is calculated off-line
Requires strong regularity assumptions
The times at which external events will be serviced are known
Static Scheduling
System design
Maximum delay time until request is recognized + maximum transaction response time < service deadline
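A trivial check of that design inequality with hypothetical numbers:

```python
max_recognition_delay = 2   # ms until the request is recognized (hypothetical)
max_response_time = 7       # ms worst-case transaction response (hypothetical)
service_deadline = 10       # ms required by the application (hypothetical)

assert max_recognition_delay + max_response_time < service_deadline  # 9 < 10
```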
Time
Generally a periodic time-triggered schedule
The time line is divided into a sequence of granules (the cycle time)
Only one interrupt: a periodic clock interrupt for the start of a new granule
In distributed systems, the clocks are synchronized to a precision better than one granule
Static Scheduling
Tasks are periodic, with each pi a multiple of the basic granule
Schedule period = least common multiple of all pi
All scheduling decisions are made at compile time and only executed at run time
Finding an optimal schedule in a distributed system is NP-complete
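A minimal sketch of a time-triggered cyclic executive built from such a compile-time table; the granule length, table, and task names are hypothetical:

```python
import itertools
import time

GRANULE_S = 0.010                              # basic granule (cycle time)
TABLE = ["sample", "control", "idle", "send"]  # one schedule period, 4 granules
ACTIONS = {name: (lambda: None) for name in TABLE}   # stand-ins for real tasks

def run(periods=2):
    """Dispatch tasks from the precomputed table; the sleep stands in for
    the periodic clock interrupt that starts each new granule."""
    for slot in itertools.islice(itertools.cycle(TABLE), periods * len(TABLE)):
        ACTIONS[slot]()
        time.sleep(GRANULE_S)

run()
```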
Search Tree
Precedence Graph
Tasks = Nodes, Edges = dependencies
Search Tree
Level = one unit of time, depth = the schedule period
A path to a leaf node = a complete schedule
Goal: find a complete schedule that observes all precedence and mutual exclusion constraints before the deadline
Heuristic Function
Two terms: the actual cost of the path so far and the estimated cost to the goal
Example:
Estimate the time needed to complete the precedence graph, the Time Until Response (TUR)
A necessary estimate of the TUR = maximum execution time + communication time
If this necessary estimate exceeds the deadline, prune the branches of the node and backtrack to the parent
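A minimal sketch of the pruning rule over a hypothetical precedence graph; communication times are omitted from the estimate for brevity:

```python
# Hypothetical precedence graph: task -> (execution time, set of predecessors).
TASKS = {"A": (2, set()), "B": (3, {"A"}), "C": (1, {"A"}), "D": (2, {"B", "C"})}
DEADLINE = 8

def search(schedule, elapsed):
    remaining = [t for t in TASKS if t not in schedule]
    if not remaining:
        return schedule                                  # complete schedule found
    # Necessary estimate of the time until response: time already used plus
    # all remaining execution time. If even this exceeds the deadline, prune.
    if elapsed + sum(TASKS[t][0] for t in remaining) > DEADLINE:
        return None
    for t in remaining:
        if TASKS[t][1] <= set(schedule):                 # all predecessors done
            result = search(schedule + [t], elapsed + TASKS[t][0])
            if result:
                return result
    return None

print(search([], 0))  # e.g. ['A', 'B', 'C', 'D']
```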
Increasing Adaptability
Weakness: the assumption of strictly periodic tasks
Proposed solutions for more flexibility:
Transformation of sporadic requests into periodic requests
Sporadic server task
Mode changes
A sporadic task with a short latency requirement, when transformed into a periodic task, demands a lot of reserved resources even though it requests service only infrequently
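A small worked example (hypothetical numbers) of why the transformation is expensive: to guarantee a latency d for a sporadic request with execution time c, a polling periodic task needs a period of roughly d - c, so the reserved utilization c/p can be large even if requests are rare:

```python
c = 1.0   # execution time of the sporadic service routine (ms, hypothetical)
d = 5.0   # required latency from request to completion (ms, hypothetical)

p = d - c           # polling period needed so a request is never served too late
reserved = c / p    # fraction of the processor reserved on every cycle
print(p, reserved)  # 4.0 ms period, 0.25 of the CPU, even if requests are rare
```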
Mode Change
During system design, identify all operating modes
For each mode, generate a static schedule off-line
Analyze the mode changes and develop a mode-change schedule
At run time, when a mode change is requested, switch to the corresponding static schedule
Comparisons
Predictability
Static Scheduling
Accurate planning of schedule, so precise predictability
Dynamic Scheduling
No schedulability tests exist for distributed systems with mutual exclusion and precedence relations
The dynamic nature cannot guarantee timeliness
Comparisons
Testability
Static Scheduling
Performance tests of every task can be compared with the established plan
Systematic and constructive, since all input cases can be observed
Dynamic Scheduling
Confidence in timeliness is based on simulations
Testing under real loads is not enough, since rare events don't occur often
Are the simulated loads representative of real loads?
Comparisons
Resource Utilization
Static Scheduling
Planned for peak load, with the time allotted to each task at least its maximum execution time
With many operating modes, this can lead to a combinatorial explosion of static schedules
Dynamic Scheduling
The processor becomes available more quickly
Resources are needed to perform the dynamic scheduling itself
Comparisons
Resource Utilization
Dynamic Scheduling (cont'd)
If loads are low, better utilization than a static schedule
If loads are high, more resources go to the dynamic scheduling itself and fewer to the execution of tasks
Comparisons
Extensibility
Static Scheduling
If a new task is added or a maximum execution time is modified, the schedule must be recalculated
If a new node sends information into the system, the communication schedule must be recalculated
A static schedule cannot be calculated if the number of tasks changes dynamically at run time
Comparisons
Extensibility
Dynamic Scheduling
Easy to add or modify tasks
A change can ripple through the system
The probability of change and the system test time grow with the number of tasks; assessing the consequences of a change increases more than linearly with the number of tasks
Scales poorly for large applications