5 - Task Scheduling
Task Scheduling
● Timeliness is as important as functional correctness; tasks must complete before their deadlines to
avoid failure.
● Proper scheduling ensures optimal CPU utilization, predictability, and system responsiveness.
● In critical applications like flight control or life-support, even small scheduling errors can have
catastrophic consequences.
● Efficient scheduling improves system throughput and reduces power consumption, especially in embedded
environments.
● Understanding task scheduling helps developers choose the right RTOS, algorithm, and system
architecture.
Types of Real-Time Tasks: Periodic, Aperiodic, Sporadic
● Periodic tasks arrive at regular intervals and have predictable execution patterns (e.g., reading a
sensor every 100ms).
● Aperiodic tasks do not follow a fixed pattern but still require timely processing (e.g., user input or
network packets).
● Sporadic tasks arrive irregularly like aperiodic tasks but have a minimum inter-arrival time
guarantee.
● Scheduling mechanisms must consider these task types and assign priority or time slots accordingly.
● Many real-time systems combine all three types, requiring hybrid scheduling strategies to manage
complexity.
Real-World Example: Fire Alarm as a Sporadic Task
Aspect | Sporadic Task | Aperiodic Task
Arrival Pattern | Irregular, but with a minimum inter-arrival time | Completely irregular, no guaranteed spacing
Deadline Requirements | Often strict, hard deadlines | May or may not have deadlines
Use Case Examples | Emergency stop, fire alarm, hardware failure | Logging, diagnostics, non-critical inputs
Classification of Scheduling Algorithms
● Scheduling algorithms can be classified as static or dynamic, depending on whether decisions are
made at compile-time or run-time.
● Static (clock-driven) algorithms are used when task behavior is predictable and timing is
guaranteed.
● Dynamic (priority-driven) algorithms make decisions based on current task states, deadlines, and
system load.
● Algorithms may also be preemptive (tasks can be interrupted) or non-preemptive (tasks run to
completion).
● Each scheduling class has strengths and trade-offs based on system complexity, responsiveness,
and resource availability.
Key Assumptions in Real-Time Scheduling Theory
● Most scheduling theory assumes tasks are independent, meaning they don’t block or wait on
shared resources.
● Execution times are considered known and deterministic, which simplifies analysis but isn’t always
realistic.
● Tasks are typically assumed to be ready at time zero or at predictable intervals.
● The system is often considered single-processor with no resource contention, which may differ
from real-world setups.
● While idealized, these assumptions help develop baseline algorithms, which can later be extended
to handle complexities.
Hard vs. Soft Scheduling Constraints
● Hard real-time constraints require tasks to meet deadlines without exception; failure can lead to
system breakdown or physical harm.
● Soft real-time constraints allow occasional deadline misses but may degrade performance or user
experience (e.g., video streaming).
● Systems may have mixed criticality, where both hard and soft tasks coexist and need coordinated
scheduling.
● Scheduling strategies differ significantly between the two—hard real-time requires predictability,
soft real-time tolerates flexibility.
● Understanding task criticality is essential for prioritizing resources and selecting appropriate
scheduling algorithms.
Definition Review: Task Period, Deadline, Execution Time
● Period (T) is the time interval between consecutive releases of a periodic task (e.g., every 50 ms).
● Deadline (D) is the time by which a task must complete after it becomes ready to avoid failure.
● Execution time (C) is the CPU time required to finish the task once it starts running.
● In many systems, D = T, but in some, deadlines may be less than or greater than the task period.
● These three parameters form the foundation of schedulability analysis and algorithm selection.
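As a minimal sketch, these three parameters can be captured in a small C structure; the field names and the 50 ms sensor task below are illustrative, not taken from any particular RTOS.

```c
#include <stdint.h>

/* Illustrative task descriptor holding the three core timing parameters. */
typedef struct {
    const char *name;
    uint32_t period_ms;    /* T: interval between consecutive releases */
    uint32_t deadline_ms;  /* D: relative deadline (often D == T)      */
    uint32_t wcet_ms;      /* C: worst-case execution time             */
} rt_task_t;

/* Example: a sensor-read task released every 50 ms with D = T and an
 * assumed worst-case execution time of 12 ms. */
static const rt_task_t sensor_task = { "sensor_read", 50u, 50u, 12u };
```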
Scheduling Goals and Performance Metrics
● The primary goal is to ensure all critical tasks meet their deadlines under all system conditions.
● Key performance metrics include CPU utilization, task lateness, response time, and schedulability.
● A good scheduler aims for predictability and fairness, especially in soft real-time systems.
● In safety-critical systems, zero deadline misses is the top metric, often sacrificing utilization for
reliability.
● Metrics guide design choices like algorithm type, priority assignments, and resource allocation.
Timing Diagrams and Task Execution Timelines
● Timing diagrams visualize task start, execution, preemption, and completion over a shared timeline.
● These diagrams help explain how scheduling decisions impact task overlaps and deadlines.
● Tasks may be delayed by higher-priority tasks in preemptive environments, visible in these
visualizations.
● Timing diagrams are crucial tools in debugging schedulability issues and visualizing worst-case
scenarios.
● Engineers use them in design reviews, simulation tools, and documentation to validate system
timing.
Preemptive vs. Non-Preemptive Scheduling
● In preemptive scheduling, a higher-priority task can interrupt a running lower-priority task at any
time.
● In non-preemptive scheduling, once a task starts, it runs until completion, regardless of incoming
task priorities.
● Preemption increases responsiveness and deadline adherence but introduces context switch
overhead and complexity.
● Non-preemptive scheduling reduces overhead but may cause deadline misses for higher-priority
tasks.
● The choice depends on task criticality, system overhead tolerance, and responsiveness
requirements.
Introduction to Clock-Driven Scheduling
● In static scheduling, task behavior and timing requirements are fully known at compile time.
● A fixed schedule table is created that defines exactly when each task will execute.
● This approach is used in flight control, nuclear plant monitoring, and medical systems where
predictability is key.
● There is no runtime scheduling overhead, which improves performance and reduces jitter.
● However, even small changes to task parameters require a complete re-analysis of the schedule.
Cyclic Executive: Concept and Design
● A cyclic executive is a loop that repeatedly runs a set of functions at predefined intervals.
● It divides time into slots (minor cycles), fitting tasks into these slots in a repeating structure (major
cycle).
● Task allocation is based on multiples of the system tick, ensuring timing alignment.
● The main control loop executes in a non-preemptive fashion, simplifying timing analysis.
● This structure is best suited for systems with stable, periodic behavior and no aperiodic task
requirements.
Constructing the Scheduling Table
● The schedule table defines exact start times and order of tasks for each frame within the major
cycle.
● To build it, we calculate least common multiple (LCM) of task periods to determine the major cycle.
● Each task is assigned to one or more minor cycles, based on its frequency and execution time.
● The process requires careful balancing to avoid missed deadlines or underutilization.
● Tools and spreadsheets are often used to simulate execution and verify feasibility before
deployment.
Hyperperiod Concept and Calculation
● The hyperperiod is the least common multiple (LCM) of all task periods in the system.
● It defines the total length over which the entire schedule repeats identically.
● Helps identify task release times, determine frame alignment, and avoid schedule drift.
● For three tasks with periods 20ms, 40ms, and 100ms, the hyperperiod is LCM(20, 40, 100) =
200ms.
● A well-chosen hyperperiod simplifies scheduling table construction and visualization.
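A minimal C helper for this calculation is sketched below; the gcd/lcm functions are generic, and the 20/40/100 ms periods reproduce the example above.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Greatest common divisor and least common multiple of two periods. */
static uint32_t gcd_u32(uint32_t a, uint32_t b)
{
    while (b != 0u) { uint32_t t = a % b; a = b; b = t; }
    return a;
}

static uint32_t lcm_u32(uint32_t a, uint32_t b)
{
    return (a / gcd_u32(a, b)) * b;
}

/* Hyperperiod = LCM of all task periods. */
static uint32_t hyperperiod(const uint32_t *periods_ms, size_t n)
{
    uint32_t h = periods_ms[0];
    for (size_t i = 1; i < n; i++)
        h = lcm_u32(h, periods_ms[i]);
    return h;
}

int main(void)
{
    const uint32_t periods[] = { 20u, 40u, 100u };
    printf("hyperperiod = %u ms\n", (unsigned)hyperperiod(periods, 3));  /* prints 200 ms */
    return 0;
}
```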
Example: Cyclic Executive for Three Periodic Tasks
● Consider three tasks with periods of 20ms, 40ms, and 100ms and known execution times.
● The hyperperiod (LCM) is calculated to find the overall repetition cycle—e.g., 200ms.
● Divide this into minor cycles (e.g., 20ms) and fit tasks into the slots based on their frequency.
● A table-based layout shows when each task runs within each minor cycle (see the sketch below).
● A timing diagram illustrates how the cyclic executive runs predictably and without preemption.
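A hedged sketch of such a cyclic executive follows. It assumes a 20 ms minor cycle and a 200 ms major cycle (10 frames); wait_for_20ms_tick() and the three task functions are hypothetical placeholders, and execution times are assumed to fit within each frame.

```c
/* Cyclic executive sketch for tasks with periods 20, 40 and 100 ms.
 * Minor cycle = 20 ms, major cycle = hyperperiod = 200 ms (10 frames). */
extern void wait_for_20ms_tick(void);  /* blocks until the next minor cycle */
extern void task_20ms(void);           /* released every frame              */
extern void task_40ms(void);           /* released every 2nd frame          */
extern void task_100ms(void);          /* released every 5th frame          */

void cyclic_executive(void)
{
    unsigned frame = 0;                        /* 0..9 within the major cycle */
    for (;;) {
        wait_for_20ms_tick();
        task_20ms();
        if ((frame % 2u) == 0u) task_40ms();
        if ((frame % 5u) == 0u) task_100ms();
        frame = (frame + 1u) % 10u;            /* wrap at the hyperperiod     */
    }
}
```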
Limitations of Clock-Driven Scheduling
● Does not adapt well to runtime variations or system disturbances (e.g., aperiodic events).
● Adding or modifying tasks requires full regeneration and validation of the schedule table.
● Cannot handle tasks with dynamic execution times or event-driven behavior efficiently.
● Typically assumes known and fixed periods, making it unsuitable for adaptive systems.
● May lead to underutilized CPU resources due to conservative timing allocations.
Time Slotting and Minor/Major Cycles
● Time is divided into minor cycles, the smallest time unit that aligns with all task periods.
● The major cycle, or hyperperiod, is the LCM of all task periods and defines when the schedule
repeats.
● Each task is assigned to execute in one or more minor cycle frames across the major cycle.
● Accurate slotting ensures no task overlaps or deadline misses in the precomputed schedule.
● Selection of minor cycle length must balance task granularity, system tick, and overhead.
Clock-Driven Scheduling in Industrial Systems
● Used in programmable logic controllers (PLCs) and embedded systems in manufacturing, robotics,
and avionics.
● Ensures precise task triggering, crucial for synchronized machinery or sensor-actuator control.
● Enables certification and validation under safety standards like DO-178C, IEC 61508, ISO 26262.
● Well-suited for systems with minimal runtime variability and strict compliance needs.
● Often used with hardware timers, watchdogs, and RTOS cyclic handlers for accurate task release.
Introduction to Priority-Driven Scheduling
● Priority-driven scheduling makes runtime decisions based on task priorities, allowing for more
flexible execution.
● Tasks can be assigned fixed or dynamic priorities, depending on the algorithm used.
● Unlike clock-driven methods, these schedulers handle task arrivals dynamically and are more
adaptive.
● This category includes famous algorithms like RMS, EDF, and LLF, widely used in real-time systems.
● It is more suitable for complex systems with mixed task sets and varying timing needs.
Dynamic vs. Fixed Priority Scheduling
● In fixed-priority scheduling, a task's priority remains unchanged throughout its lifecycle (e.g., RMS).
● In dynamic-priority scheduling, the priority can change over time based on deadline or slack (e.g.,
EDF, LLF).
● Fixed-priority is easier to analyze but may lead to lower CPU utilization.
● Dynamic-priority systems offer better schedulability but are harder to verify and more
computationally expensive.
● The choice depends on system needs—simplicity and predictability vs. flexibility and utilization.
Rate Monotonic Scheduling (RMS): Overview
● RMS is a fixed-priority algorithm where shorter-period tasks are given higher priority.
● It assumes independent, periodic, preemptive tasks with constant execution times.
● It’s simple to implement and widely accepted in industrial and automotive RTOS.
● RMS works well for hard real-time tasks when total utilization is within schedulable bounds.
● If all assumptions hold, RMS offers provable guarantees for deadline adherence.
RMS Assumptions and Preconditions
● All tasks are periodic, with deadlines equal to their periods (i.e., D = T).
● Tasks are preemptible, with no blocking or resource sharing delays.
● Execution times, periods, and deadlines are known at design time.
● Tasks are independent, i.e., no task depends on the completion of another.
● There is zero context switch overhead assumed in theoretical analysis (relaxed in practice).
RMS Utilization Bound Theorem
● RMS can guarantee schedulability if total CPU utilization is below a specific bound.
● The bound for n tasks is U ≤ n(2^(1/n) – 1), which approaches ln 2 ≈ 0.693 (~69%) as n → ∞.
● For example, 3 tasks can be guaranteed if total utilization is ≤ 0.779.
● This is a sufficient but not necessary condition—some task sets above the bound may still be
schedulable.
● The theorem provides a quick analytical check before deeper simulation or testing.
RMS Schedulability Test
● The utilization-based test checks whether the total CPU usage stays within the RMS bound for n
tasks.
● If the condition ∑(Ci/Ti) ≤ n(2^(1/n) – 1) holds, the task set is guaranteed to be schedulable under
RMS.
● This test is sufficient but not exact—some sets that exceed the bound might still meet all deadlines.
● For tighter analysis, we can apply Response Time Analysis (RTA) to validate each task’s worst-case
start time.
● Engineers often use both the utilization test and response-time simulation to ensure reliability.
Example: RMS Schedulability Analysis
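As an illustrative sketch only, the program below applies the utilization test to an assumed three-task set (C = 10/15/30 ms, T = 50/100/200 ms, values chosen purely for demonstration).

```c
#include <math.h>
#include <stdio.h>

/* RMS utilization test for an assumed task set (illustrative values). */
int main(void)
{
    const double C[] = { 10.0, 15.0, 30.0 };   /* execution times (ms) */
    const double T[] = { 50.0, 100.0, 200.0 }; /* periods (ms)         */
    const int n = 3;

    double U = 0.0;
    for (int i = 0; i < n; i++)
        U += C[i] / T[i];                      /* total utilization    */

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);  /* n(2^(1/n) - 1)   */

    printf("U = %.3f, RMS bound = %.3f -> %s\n", U, bound,
           (U <= bound) ? "schedulable under RMS"
                        : "inconclusive: apply response-time analysis");
    return 0;
}
```

Compiled with -lm, this prints U = 0.500 against a bound of roughly 0.780, so the sufficient test passes; a set above the bound would need response-time analysis before any conclusion.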
Earliest Deadline First (EDF): Overview
● EDF is a dynamic-priority scheduling algorithm that always executes the task with the earliest
absolute deadline.
● It can drive CPU utilization up to 100% under ideal conditions.
● EDF handles periodic and aperiodic tasks more gracefully than RMS in many systems.
● Priorities change dynamically as deadlines approach, offering high responsiveness.
● It is optimal in single-processor systems but requires careful timing management.
EDF Schedulability Test and Proof
● The primary schedulability condition for EDF is: Total CPU utilization ≤ 1.0.
● If ∑(Ci/Ti) ≤ 1, the system is guaranteed to be schedulable under EDF (in theory).
● Unlike RMS, EDF doesn’t have a decreasing bound—it is more efficient for high-utilization systems.
● The proof is based on the processor demand analysis and the greedy nature of the algorithm.
● However, EDF is less predictable under overload conditions—deadline misses may cascade.
Example: EDF Task Set Evaluation
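As an illustrative sketch only, the check below evaluates an assumed two-task set (C = 25/35 ms, T = D = 50/80 ms) whose utilization exceeds the two-task RMS bound of about 0.828 yet still passes the EDF test.

```c
#include <stdio.h>

/* EDF utilization test for an assumed two-task set (illustrative values). */
int main(void)
{
    const double C[] = { 25.0, 35.0 };   /* execution times (ms)     */
    const double T[] = { 50.0, 80.0 };   /* periods = deadlines (ms) */

    double U = 0.0;
    for (int i = 0; i < 2; i++)
        U += C[i] / T[i];                /* 0.5000 + 0.4375 = 0.9375 */

    printf("U = %.4f -> %s\n", U,
           (U <= 1.0) ? "schedulable under EDF (assuming D = T)"
                      : "not schedulable under EDF");
    return 0;
}
```

The same set exceeds the two-task RMS bound, so the RMS utilization test is inconclusive for it; this illustrates EDF's higher utilization ceiling.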
Least Laxity First (LLF): Overview
● LLF assigns priority based on task laxity (slack), defined as deadline – current time – remaining execution time.
● Tasks with least slack time are given the highest priority, ensuring urgent tasks are completed first.
● LLF recalculates laxity at every time unit, making it highly responsive but computationally
expensive.
● LLF is optimal like EDF but suffers from frequent priority changes and context switching.
● It is best suited for systems with frequent deadline changes and fine-grained preemption.
Example: LLF Scheduling of Two Tasks
● Task Set:
○ T1: C=2, D=5 (starts at t=0)
○ T2: C=1, D=4 (starts at t=0)
● Initial Laxities at t=0:
○ T1 Laxity = 5 – 2 = 3
○ T2 Laxity = 4 – 1 = 3
○ (Same laxity → break tie arbitrarily)
● At t=1, T2 has executed 1 unit (completed),
○ T1 remains: C=2, Laxity = 5 – 2 – 1 = 2
● At t=1 onward, T1 runs for 2 units
○ Laxity reduces but remains positive → finishes before deadline
● Outcome:
○ Both tasks meet deadlines using LLF, but note how priorities change over time dynamically based on remaining execution.
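A minimal helper for the laxity computation used above is sketched below (naming is illustrative).

```c
/* Laxity (slack) at time t for a job with absolute deadline d and
 * remaining execution time c_rem. Under LLF, the ready job with the
 * smallest laxity runs next; laxity must be recomputed as time advances. */
static long laxity(long t, long d, long c_rem)
{
    return d - t - c_rem;   /* e.g., for T1 at t = 1: 5 - 1 - 2 = 2 */
}
```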
Comparison – EDF vs. RMS vs. LLF
● RMS (Rate Monotonic Scheduling) assigns fixed priorities based on task frequency—simple but
less flexible.
● EDF (Earliest Deadline First) is dynamic and fully utilizes CPU up to 100%, but can collapse under
overload.
● LLF (Least Laxity First) is theoretically optimal, but impractical due to frequent recalculation and
jitter.
● RMS is predictable and analyzable, EDF is more efficient, LLF is theoretically tight but
computationally heavy.
● The choice depends on system type—certified systems prefer RMS, while soft real-time or
adaptive systems benefit from EDF.
Handling Aperiodic and Sporadic Tasks
● Aperiodic tasks arrive without a fixed period but must still be serviced with minimal delay.
● Sporadic tasks have a minimum inter-arrival time, making them easier to budget into schedules.
● These tasks are typically handled using server-based mechanisms to prevent interference with
periodic tasks.
● Real-time systems allocate reserved CPU bandwidth or time slots for such tasks.
● Efficient handling ensures responsiveness without compromising the timing guarantees of periodic
tasks.
Polling Server Mechanism
● A polling server is a periodic task that checks for aperiodic jobs during its scheduled execution.
● If no aperiodic task is present, the server remains idle for its slot—this leads to wasted CPU cycles.
● It's simple to implement but inefficient if aperiodic events are rare or unpredictable.
● Polling servers are often non-preemptive and scheduled like other periodic tasks.
● Suitable for low-priority or non-critical background events.
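A hedged FreeRTOS-style sketch of a polling server follows; aperiodic_queue, aperiodic_job_t, and the jobs-per-period budget are assumed application-level constructs, not FreeRTOS APIs.

```c
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* Assumed application-level job descriptor and queue filled by ISRs/tasks. */
typedef struct { void (*run)(void *arg); void *arg; } aperiodic_job_t;
extern QueueHandle_t aperiodic_queue;

#define SERVER_PERIOD_MS   100u
#define SERVER_BUDGET_JOBS 2u     /* crude budget: jobs serviced per period */

/* Polling server: a periodic task that checks for aperiodic work once per
 * period. If the queue is empty, its slot is simply given up (wasted). */
void vPollingServerTask(void *pvParameters)
{
    TickType_t last_wake = xTaskGetTickCount();
    (void)pvParameters;

    for (;;) {
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(SERVER_PERIOD_MS));

        aperiodic_job_t job;
        for (unsigned n = 0; n < SERVER_BUDGET_JOBS; n++) {
            if (xQueueReceive(aperiodic_queue, &job, 0) != pdPASS)
                break;                /* nothing pending: yield the slot */
            job.run(job.arg);
        }
    }
}
```

A deferrable server, covered next, would instead keep this budget available for the whole period and service the queue as soon as a job arrives.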
Deferrable Server Algorithm
● A deferrable server retains its execution budget for the entire period, enabling quicker aperiodic
response.
● It is scheduled as a periodic task but defers execution until an aperiodic job arrives.
● Improves responsiveness without sacrificing periodic task guarantees.
● However, improper budget configuration can lead to deadline violations or CPU overuse.
● More efficient than polling servers but requires careful budget tuning.
Sporadic Server Algorithm
● A sporadic server is designed to replenish its capacity only after actual execution, ensuring better
schedulability.
● It behaves like a real task, but its capacity is consumed and replenished based on task arrival and
use.
● More suitable for systems with unpredictable but critical aperiodic events.
● It maintains better utilization bounds and avoids the timing anomalies of deferrable servers.
● Commonly used in RTOS implementations like VxWorks and FreeRTOS for aperiodic load handling.
Processor Demand vs. Time Demand Analysis
● Processor Demand Analysis (PDA) calculates whether the CPU can handle all tasks in a time
window.
● It ensures that the sum of execution times of all active jobs never exceeds the time available.
● PDA is especially useful in EDF systems where utilization can approach 100%.
● Time Demand Analysis (TDA) is more common in RMS—checks if each task completes before its
deadline.
● Both are critical for formal schedulability testing and are often implemented in RTOS validation
tools.
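A minimal sketch of the time-demand (response-time) iteration for fixed-priority tasks is shown below; the task parameters are assumed for illustration, and tasks are indexed in decreasing priority order.

```c
#include <stdio.h>

#define N_TASKS 3
/* Assumed task set, index 0 = highest priority (shortest period). */
static const unsigned C[N_TASKS] = { 10, 15, 30 };    /* WCET (ms)     */
static const unsigned T[N_TASKS] = { 50, 100, 200 };  /* period (ms)   */
static const unsigned D[N_TASKS] = { 50, 100, 200 };  /* deadline (ms) */

static unsigned ceil_div(unsigned a, unsigned b) { return (a + b - 1u) / b; }

/* Iterate R_i = C_i + sum_{j<i} ceil(R_i / T_j) * C_j until it converges
 * or exceeds the deadline. */
int main(void)
{
    for (unsigned i = 0; i < N_TASKS; i++) {
        unsigned R = C[i], prev = 0u;
        while (R != prev && R <= D[i]) {
            prev = R;
            R = C[i];
            for (unsigned j = 0; j < i; j++)   /* higher-priority interference */
                R += ceil_div(prev, T[j]) * C[j];
        }
        printf("task %u: worst-case response %u ms -> %s\n",
               i, R, (R <= D[i]) ? "meets deadline" : "deadline miss");
    }
    return 0;
}
```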
Scheduling Overheads and Context Switch Costs
● Real-time scheduling incurs overhead through context switches, timer interrupts, and preemption.
● These non-functional costs can erode available CPU time, leading to unexpected deadline misses.
● Preemptive schedulers (RMS, EDF, LLF) suffer more context-switch overhead than
non-preemptive ones.
● The more dynamic the scheduler (like LLF), the higher the computational burden at runtime.
● Accurate analysis must include system call costs, timer latency, and scheduling jitter.
Scheduling in Multi-Processor RTS – Part 1
● Modern systems often have multiple cores, requiring scheduling strategies that go beyond single
CPU assumptions.
● Partitioned scheduling assigns tasks to specific cores—each with its own local scheduler (like RMS).
● Global scheduling treats all tasks and processors as a single unit, where the highest-priority task
wins any available core.
● Partitioned methods are easier to analyze, but may lead to underutilized processors.
● Global methods are more efficient, but significantly harder to predict and verify.
Scheduling in Multi-Processor RTS – Part 2
● Migration overhead is a concern in global scheduling, where tasks can jump across CPUs.
● Cache misses, pipeline flushes, and context reloads add latency and jitter in real-time systems.
● Hybrid methods like semi-partitioned scheduling balance efficiency with predictability.
● Algorithms like Global EDF or Pfair are popular but have complex implementation requirements.
● RTOS support for multi-core must handle affinity, inter-core communication, and load balancing
carefully.
Scheduling in Multi-Processor RTS – Part 3
● In complex embedded platforms, multi-core systems are often heterogeneous—each core may
have different performance or role.
● Real-time scheduling must consider task affinity, mapping high-load tasks to faster cores.
● Some systems dedicate a core to critical periodic tasks, isolating them from shared jitter.
● Advanced approaches use performance-aware and energy-aware scheduling together.
● Multi-processor scheduling must also protect shared resources to avoid blocking or priority
inversion.
Task Modeling in RTOS-Based Systems
● In RTOS-based systems, tasks must be modeled with period, deadline, WCET (worst-case
execution time), and priority.
● Tasks may also have dependencies, resource needs (semaphores, I/O), and state transitions.
● RTOS APIs like xTaskCreate() (FreeRTOS) or taskSpawn() (VxWorks) require precise task
definitions.
● Mischaracterizing task behavior can lead to schedulability failure even if the system seems idle.
● RTOS-level modeling tools (e.g., Tracealyzer, Cheddar, Simulink) help validate and simulate
behavior before deployment.
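A hedged FreeRTOS sketch of such a task definition follows; read_sensor(), the stack depth, and the priority value are illustrative choices, not prescribed by FreeRTOS.

```c
#include "FreeRTOS.h"
#include "task.h"

extern void read_sensor(void);   /* assumed application function */

/* A periodic task modeled with T = 50 ms and D = T; the WCET of
 * read_sensor() must stay below the period for the model to hold. */
static void vSensorTask(void *pvParameters)
{
    const TickType_t period = pdMS_TO_TICKS(50);
    TickType_t last_wake = xTaskGetTickCount();
    (void)pvParameters;

    for (;;) {
        read_sensor();
        vTaskDelayUntil(&last_wake, period);   /* release every 50 ms */
    }
}

void create_tasks(void)
{
    /* Fixed priority: a larger number means higher priority in FreeRTOS. */
    xTaskCreate(vSensorTask, "sensor", 256, NULL, tskIDLE_PRIORITY + 3, NULL);
}
```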
Rate vs. Deadline Miss Handling Policies
● Rate-miss occurs when a periodic task is not released at its intended interval (e.g., every 50ms).
● Deadline-miss occurs when a task finishes execution after its deadline has passed.
● Systems handle these differently—hard real-time systems may trigger a fail-safe or emergency
mode.
● Soft real-time systems may allow skipping tasks, reducing quality, or rescheduling.
● Engineers must decide between discarding late tasks, re-executing them, or degrading service
gracefully.
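A minimal sketch of one such policy (skip the next release after a miss instead of letting lateness accumulate) is shown below; now_ms(), sleep_until_ms(), and process_frame() are assumed platform hooks.

```c
#include <stdint.h>

extern uint32_t now_ms(void);            /* assumed monotonic ms clock */
extern void sleep_until_ms(uint32_t t);  /* assumed blocking sleep     */
extern void process_frame(void);         /* the periodic work          */

#define PERIOD_MS 50u

/* Soft real-time policy: on a miss, resynchronize to "now" and drop the
 * backlog (graceful degradation) rather than re-executing late frames. */
void soft_periodic_loop(void)
{
    uint32_t release = now_ms();
    for (;;) {
        process_frame();
        release += PERIOD_MS;              /* next intended release          */
        if (now_ms() > release)
            release = now_ms();            /* deadline/rate miss: skip ahead */
        else
            sleep_until_ms(release);
    }
}
```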