5 - Task Scheduling

The document discusses the importance of task scheduling in real-time systems, emphasizing the need for timely task completion to avoid catastrophic failures. It categorizes real-time tasks into periodic, aperiodic, and sporadic types, and outlines various scheduling strategies, including static and dynamic algorithms. Key concepts such as minimum inter-arrival time, hard vs. soft constraints, and the Rate Monotonic Scheduling (RMS) algorithm are also explained, highlighting their significance in ensuring system reliability and performance.


CENG453: Real-Time Systems

Task Scheduling

Dr. Hüseyin TEMUÇİN


Gazi University Computer Engineering Department

Why Scheduling Matters in Real-Time Systems

● Timeliness is as important as functional correctness; tasks must complete before their deadlines to
avoid failure.
● Proper scheduling ensures optimal CPU utilization, predictability, and system responsiveness.
● In critical applications like flight control or life-support, even small scheduling errors can have
catastrophic consequences.
● Efficient scheduling improves system throughput and power consumption, especially in embedded
environments.
● Understanding task scheduling helps developers choose the right RTOS, algorithm, and system
architecture.
Types of Real-Time Tasks: Periodic, Aperiodic, Sporadic

● Periodic tasks arrive at regular intervals and have predictable execution patterns (e.g., reading a
sensor every 100ms).
● Aperiodic tasks do not follow a fixed pattern but still require timely processing (e.g., user input or
network packets).
● Sporadic tasks arrive irregularly like aperiodic tasks but have a minimum inter-arrival time
guarantee.
● Scheduling mechanisms must consider these task types and assign priority or time slots accordingly.
● Many real-time systems combine all three types, requiring hybrid scheduling strategies to manage
complexity.
Real-World Example: Fire Alarm as a Sporadic Task

● Triggered by external sensors (e.g., smoke, heat, flame detectors).


● Treated as sporadic due to unpredictable yet rare occurrences.
● System sets a minimum inter-arrival time, e.g., 10 seconds, to avoid overload from sensor jitter or
repeated triggers.
● Requires guaranteed response—system must react within a tight deadline (e.g., start evacuation in
≤1 second).
● May be scheduled using a Sporadic Server to ensure critical response time is always reserved.
Understanding Minimum Inter-Arrival Time

● What Does Minimum Inter-Arrival Time (T_min) Mean?
○ It's the shortest allowed time between two activations of a sporadic task.
● Why Is T_min Important?
○ Prevents system overload from repeated triggers.
○ Helps the scheduler reserve enough CPU capacity.
● Design Factors That Influence T_min:
○ Sensor debounce logic (to avoid false triggers).
○ Recovery time after handling the first event.
○ CPU load and task schedulability under worst-case conditions.
● Fire Alarm as a Sporadic Task Example:
○ Treated as a sporadic task with T_min = 10 seconds.
○ System guarantees: “No two alarms processed < 10s apart.”
● Clarification – Real Fires vs. Real-Time Model:
○ Multiple fires can still occur—but the system assumes it won’t be required to respond faster than T_min.
○ If alarms exceed this rate, tasks are queued, skipped, or handled by a reserved server.
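The T_min rule above amounts to a simple admission check at each trigger; a minimal sketch in Python (the function name and the caller's drop/queue policy are illustrative, not from any particular system):

```python
def admit(event_time, last_accepted, t_min=10.0):
    """Enforce the minimum inter-arrival time for a sporadic task.

    event_time:    timestamp of the new trigger (seconds)
    last_accepted: timestamp of the last accepted trigger, or None
    t_min:         minimum inter-arrival time (10 s in the fire-alarm example)

    Returns True if the trigger may be accepted; rejected triggers would be
    queued, skipped, or passed to a reserved server per system policy.
    """
    return last_accepted is None or event_time - last_accepted >= t_min
```

With T_min = 10 s, a trigger arriving 5 s after the last accepted one is rejected, while one arriving 12 s later is accepted.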
Sporadic vs. Aperiodic Tasks

Feature                | Sporadic Task                                      | Aperiodic Task
Arrival Pattern        | Irregular, but with a minimum inter-arrival time   | Completely irregular, no guaranteed spacing
Deadline Requirements  | Often strict, hard deadlines                       | May or may not have deadlines
Priority / Criticality | Typically high-criticality                         | Usually medium/low priority
Scheduling Strategy    | Requires reserved capacity (e.g., sporadic server) | Served opportunistically
Use Case Examples      | Emergency stop, fire alarm, hardware failure       | Logging, diagnostics, non-critical inputs
Classification of Scheduling Algorithms

● Scheduling algorithms can be classified as static or dynamic, depending on whether decisions are
made at compile-time or run-time.
● Static (clock-driven) algorithms are used when task behavior is predictable and timing is
guaranteed.
● Dynamic (priority-driven) algorithms make decisions based on current task states, deadlines, and
system load.
● Algorithms may also be preemptive (tasks can be interrupted) or non-preemptive (tasks run to
completion).
● Each scheduling class has strengths and trade-offs based on system complexity, responsiveness,
and resource availability.
Key Assumptions in Real-Time Scheduling Theory

● Most scheduling theory assumes tasks are independent, meaning they don’t block or wait on
shared resources.
● Execution times are considered known and deterministic, which simplifies analysis but isn’t always
realistic.
● Tasks are typically assumed to be ready at time zero or at predictable intervals.
● The system is often considered single-processor with no resource contention, which may differ
from real-world setups.
● While idealized, these assumptions help develop baseline algorithms, which can later be extended
to handle complexities.
Hard vs. Soft Scheduling Constraints

● Hard real-time constraints require tasks to meet deadlines without exception; failure can lead to
system breakdown or physical harm.
● Soft real-time constraints allow occasional deadline misses but may degrade performance or user
experience (e.g., video streaming).
● Systems may have mixed criticality, where both hard and soft tasks coexist and need coordinated
scheduling.
● Scheduling strategies differ significantly between the two—hard real-time requires predictability,
soft real-time tolerates flexibility.
● Understanding task criticality is essential for prioritizing resources and selecting appropriate
scheduling algorithms.
Definition Review: Task Period, Deadline, Execution Time

● Period (T) is the time interval between consecutive releases of a periodic task (e.g., every 50 ms).
● Deadline (D) is the time by which a task must complete after it becomes ready to avoid failure.
● Execution time (C) is the CPU time required to finish the task once it starts running.
● In many systems, D = T, but in some, deadlines may be less than or greater than the task period.
● These three parameters form the foundation of schedulability analysis and algorithm selection.
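These three parameters can be captured in a small model; a minimal sketch (the Task class and field names are illustrative, not tied to any RTOS API):

```python
from dataclasses import dataclass

@dataclass
class Task:
    C: float          # execution time (CPU time needed per job)
    T: float          # period (interval between releases)
    D: float = None   # relative deadline; defaults to T (implicit deadline)

    def __post_init__(self):
        if self.D is None:
            self.D = self.T  # the common D = T case from the slide

    @property
    def utilization(self):
        """Fraction of the CPU this task demands: C / T."""
        return self.C / self.T
```

For a task with C = 1 and T = 4, the deadline defaults to 4 and the utilization is 0.25 — the building block of the schedulability tests later in this lecture.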
Scheduling Goals and Performance Metrics

● The primary goal is to ensure all critical tasks meet their deadlines under all system conditions.
● Key performance metrics include CPU utilization, task lateness, response time, and schedulability.
● A good scheduler aims for predictability and fairness, especially in soft real-time systems.
● In safety-critical systems, zero deadline misses is the top metric, often sacrificing utilization for
reliability.
● Metrics guide design choices like algorithm type, priority assignments, and resource allocation.
Timing Diagrams and Task Execution Timelines

● Timing diagrams visualize task start, execution, preemption, and completion over a shared timeline.
● These diagrams help explain how scheduling decisions impact task overlaps and deadlines.
● Tasks may be delayed by higher-priority tasks in preemptive environments, visible in these
visualizations.
● Timing diagrams are crucial tools in debugging schedulability issues and visualizing worst-case
scenarios.
● Engineers use them in design reviews, simulation tools, and documentation to validate system
timing.
Preemptive vs. Non-Preemptive Scheduling

● In preemptive scheduling, a higher-priority task can interrupt a running lower-priority task at any
time.
● In non-preemptive scheduling, once a task starts, it runs until completion, regardless of incoming
task priorities.
● Preemption increases responsiveness and deadline adherence but introduces context switch
overhead and complexity.
● Non-preemptive scheduling reduces overhead but may cause deadline misses for higher-priority
tasks.
● The choice depends on task criticality, system overhead tolerance, and responsiveness
requirements.
Introduction to Clock-Driven Scheduling

● Clock-driven (or time-driven) scheduling relies on a precomputed schedule based on task


periodicity.
● All decisions are made offline during system design, using known task parameters such as period
and execution time.
● The system uses a timer interrupt or system clock to trigger task releases at pre-scheduled
moments.
● This approach is predictable and highly deterministic, making it ideal for safety-critical
applications.
● However, clock-driven scheduling lacks flexibility for dynamic or aperiodic tasks.
Static Scheduling: Overview & Applications

● In static scheduling, task behavior and timing requirements are fully known at compile time.
● A fixed schedule table is created that defines exactly when each task will execute.
● This approach is used in flight control, nuclear plant monitoring, and medical systems where
predictability is key.
● There is no runtime scheduling overhead, which improves performance and reduces jitter.
● However, even small changes to task parameters require a complete re-analysis of the schedule.
Cyclic Executive: Concept and Design

● A cyclic executive is a loop that repeatedly runs a set of functions at predefined intervals.
● It divides time into slots (minor cycles), fitting tasks into these slots in a repeating structure (major
cycle).
● Task allocation is based on multiples of the system tick, ensuring timing alignment.
● The main control loop executes in a non-preemptive fashion, simplifying timing analysis.
● This structure is best suited for systems with stable, periodic behavior and no aperiodic task
requirements.
Constructing the Scheduling Table

● The schedule table defines exact start times and order of tasks for each frame within the major
cycle.
● To build it, we calculate least common multiple (LCM) of task periods to determine the major cycle.
● Each task is assigned to one or more minor cycles, based on its frequency and execution time.
● The process requires careful balancing to avoid missed deadlines or underutilization.
● Tools and spreadsheets are often used to simulate execution and verify feasibility before
deployment.
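The table-construction steps above can be sketched in a few lines; assuming every period is an integer multiple of the minor cycle, the helper below assigns each task to the frames that match its frequency (function and parameter names are illustrative — a real tool would also check that each frame's total execution time fits within the minor cycle):

```python
from math import lcm

def build_schedule(tasks, minor_ms):
    """Build a cyclic-executive schedule table.

    tasks:    list of (func, period_ms); each period must be a
              multiple of minor_ms
    minor_ms: minor cycle length in ms
    Returns a list of frames (one per minor cycle in the major cycle),
    each frame holding the functions to run in that slot.
    """
    major_ms = lcm(*(period for _, period in tasks))   # major cycle = LCM
    frames = major_ms // minor_ms
    table = [[] for _ in range(frames)]
    for func, period in tasks:
        step = period // minor_ms          # task runs every `step` frames
        for f in range(0, frames, step):
            table[f].append(func)
    return table
```

For the lecture's three tasks (periods 20 ms, 40 ms, 100 ms) with a 20 ms minor cycle, this yields 10 frames per 200 ms major cycle, with all three tasks in frame 0.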
Hyperperiod Concept and Calculation

● The hyperperiod is the least common multiple (LCM) of all task periods in the system.
● It defines the total length over which the entire schedule repeats identically.
● Helps identify task release times, determine frame alignment, and avoid schedule drift.
● For three tasks with periods 20ms, 40ms, and 100ms, the hyperperiod is LCM(20, 40, 100) =
200ms.
● A well-chosen hyperperiod simplifies scheduling table construction and visualization.
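In code, the hyperperiod is just the LCM of the periods; the slide's three-task example:

```python
from math import lcm

# Hyperperiod of tasks with periods 20 ms, 40 ms, and 100 ms
hyperperiod = lcm(20, 40, 100)  # 200 ms: the schedule repeats every 200 ms
```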
Example: Cyclic Executive for Three Periodic Tasks

● Consider three tasks with periods of 20ms, 40ms, and 100ms and known execution times.
● The hyperperiod (LCM) is calculated to find the overall repetition cycle—e.g., 200ms.
● Divide this into minor cycles (e.g., 20ms) and fit tasks into the slots based on their frequency.
● Show a table-based layout of when each task is scheduled within each minor cycle.
● Use a timing diagram to illustrate how the cyclic executive runs predictably and without
preemption.
Advantages of Clock-Driven Scheduling

● Provides deterministic execution, ensuring tasks run at known, predictable times.


● Enables simple verification and timing analysis, ideal for certification in safety-critical systems.
● Has no runtime decision-making overhead, making it highly efficient during execution.
● Eliminates the need for dynamic schedulers, reducing jitter and scheduling complexity.
● Works well in resource-constrained embedded systems where behavior is highly regular.
Limitations and Rigidity of Static Scheduling

● Does not adapt well to runtime variations or system disturbances (e.g., aperiodic events).
● Adding or modifying tasks requires full regeneration and validation of the schedule table.
● Cannot handle tasks with dynamic execution times or event-driven behavior efficiently.
● Typically assumes known and fixed periods, making it unsuitable for adaptive systems.
● May lead to underutilized CPU resources due to conservative timing allocations.
Time Slotting and Minor/Major Cycles

● Time is divided into minor cycles, the smallest time unit that aligns with all task periods.
● The major cycle, or hyperperiod, is the LCM of all task periods and defines when the schedule
repeats.
● Each task is assigned to execute in one or more minor cycle frames across the major cycle.
● Accurate slotting ensures no task overlaps or deadline misses in the precomputed schedule.
● Selection of minor cycle length must balance task granularity, system tick, and overhead.
Clock-Driven Scheduling in Industrial Systems

● Used in programmable logic controllers (PLCs) and embedded systems in manufacturing, robotics,
and avionics.
● Ensures precise task triggering, crucial for synchronized machinery or sensor-actuator control.
● Enables certification and validation under safety standards like DO-178C, IEC 61508, ISO 26262.
● Well-suited for systems with minimal runtime variability and strict compliance needs.
● Often used with hardware timers, watchdogs, and RTOS cyclic handlers for accurate task release.
Introduction to Priority-Driven Scheduling

● Priority-driven scheduling makes runtime decisions based on task priorities, allowing for more
flexible execution.
● Tasks can be assigned fixed or dynamic priorities, depending on the algorithm used.
● Unlike clock-driven methods, these schedulers handle task arrivals dynamically and are more
adaptive.
● This category includes famous algorithms like RMS, EDF, and LLF, widely used in real-time systems.
● It is more suitable for complex systems with mixed task sets and varying timing needs.
Dynamic vs. Fixed Priority Scheduling

● In fixed-priority scheduling, a task's priority remains unchanged throughout its lifecycle (e.g., RMS).
● In dynamic-priority scheduling, the priority can change over time based on deadline or slack (e.g.,
EDF, LLF).
● Fixed-priority is easier to analyze but may lead to lower CPU utilization.
● Dynamic-priority systems offer better schedulability but are harder to verify and more
computationally expensive.
● The choice depends on system needs—simplicity and predictability vs. flexibility and utilization.
Rate Monotonic Scheduling (RMS): Overview

● RMS is a fixed-priority algorithm where shorter-period tasks are given higher priority.
● It assumes independent, periodic, preemptive tasks with constant execution times.
● It’s simple to implement and widely accepted in industrial and automotive RTOS.
● RMS works well for hard real-time tasks when total utilization is within schedulable bounds.
● If all assumptions hold, RMS offers provable guarantees for deadline adherence.
RMS Assumptions and Preconditions

● All tasks are periodic, with deadlines equal to their periods (i.e., D = T).
● Tasks are preemptible, with no blocking or resource sharing delays.
● Execution times, periods, and deadlines are known at design time.
● Tasks are independent, i.e., no task depends on the completion of another.
● There is zero context switch overhead assumed in theoretical analysis (relaxed in practice).
RMS Utilization Bound Theorem

● RMS can guarantee schedulability if total CPU utilization is below a specific bound.
● The bound for n tasks is U ≤ n(2^(1/n) − 1), which approaches ln 2 ≈ 0.693 (~69%) as n → ∞.
● For example, 3 tasks can be guaranteed if total utilization is ≤ 0.779.
● This is a sufficient but not necessary condition—some task sets above the bound may still be
schedulable.
● The theorem provides a quick analytical check before deeper simulation or testing.
RMS Schedulability Test

● The utilization-based test checks whether the total CPU usage stays within the RMS bound for n
tasks.
● If the condition ∑(Ci/Ti) ≤ n(2^(1/n) – 1) holds, the task set is guaranteed to be schedulable under
RMS.
● This test is sufficient but not exact—some sets that exceed the bound might still meet all deadlines.
● For tighter analysis, we can apply Response Time Analysis (RTA) to validate each task’s worst-case
start time.
● Engineers often use both the utilization test and response-time simulation to ensure reliability.
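The utilization-based test translates directly into code; a minimal sketch of the Liu–Layland bound and check (function names are illustrative):

```python
def rms_bound(n):
    """Liu & Layland utilization bound for n tasks: n(2^(1/n) - 1)."""
    return n * (2 ** (1 / n) - 1)

def rms_utilization_test(tasks):
    """Sufficient (not necessary) RMS schedulability test.

    tasks: list of (C, T) pairs with D = T.
    Returns True if total utilization is within the RMS bound.
    """
    utilization = sum(C / T for C, T in tasks)
    return utilization <= rms_bound(len(tasks))
```

For n = 3 the bound evaluates to about 0.7798, matching the 0.779 figure on the previous slide; a set that fails this test may still be schedulable and needs deeper analysis.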
Example: RMS Schedulability Analysis

● Consider three tasks:


a. T1 (C=1, T=4),
b. T2 (C=2, T=5),
c. T3 (C=3, T=20).

● Calculate utilization: U = 1/4 + 2/5 + 3/20 = 0.25 + 0.4 + 0.15 = 0.8.


● For n = 3, the RMS bound is 0.779, so this task set exceeds the theoretical bound.
● However, by checking actual execution timelines, we may find it still meets deadlines.
● This illustrates why real-time analysis often requires simulation beyond theoretical thresholds.
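The "check actual execution timelines" step can be automated by stepping through one hyperperiod; a minimal preemptive fixed-priority simulator (integer time units, D = T assumed, zero context-switch cost as in the RMS assumptions):

```python
from math import lcm

def rms_simulate(tasks):
    """Simulate preemptive RMS over one hyperperiod.

    tasks: list of (C, T) with D = T; shorter period => higher priority.
    Returns True if every job meets its deadline.
    """
    tasks = sorted(tasks, key=lambda ct: ct[1])      # RMS priority order
    hyperperiod = lcm(*(T for _, T in tasks))
    remaining = [0] * len(tasks)   # remaining work of each task's current job
    for t in range(hyperperiod):
        for i, (C, T) in enumerate(tasks):
            if t % T == 0:                 # new job released
                if remaining[i] > 0:       # previous job missed deadline (D = T)
                    return False
                remaining[i] = C
        for i in range(len(tasks)):        # run the highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)
```

For the slide's set — (C=1, T=4), (C=2, T=5), (C=3, T=20) — the simulation over the 20-unit hyperperiod confirms all deadlines are met even though U = 0.8 exceeds the 0.779 bound.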
Earliest Deadline First (EDF): Overview

● EDF is a dynamic-priority scheduling algorithm that always executes the task with the earliest
absolute deadline.
● It can fully utilize the CPU up to 100% utilization under ideal conditions.
● EDF handles periodic and aperiodic tasks more gracefully than RMS in many systems.
● Priorities change dynamically as deadlines approach, offering high responsiveness.
● It is optimal in single-processor systems but requires careful timing management.
EDF Schedulability Test and Proof

● The primary schedulability condition for EDF is: Total CPU utilization ≤ 1.0.
● If ∑(Ci/Ti) ≤ 1, the system is guaranteed to be schedulable under EDF (in theory).
● Unlike RMS, EDF doesn’t have a decreasing bound—it is more efficient for high-utilization systems.
● The proof is based on the processor demand analysis and the greedy nature of the algorithm.
● However, EDF is less predictable under overload conditions—deadline misses may cascade.
Example: EDF Task Set Evaluation

● Consider two tasks:


○ T1 (C=2, T=5),
○ T2 (C=3, T=10).
● Total utilization: U = 2/5 + 3/10 = 0.4 + 0.3 = 0.7, which is less than 1 → schedulable under EDF.
● In EDF, priorities will shift dynamically depending on task release times and deadlines.
● Build a timeline of task arrivals, deadlines, and execution slots to visualize scheduling.
● This example shows EDF’s flexibility: EDF stays schedulable all the way up to U = 1, beyond the point where RMS’s utilization bound no longer gives a guarantee.
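The EDF utilization test from the previous slide reduces to one comparison; checking the example set exactly with rationals (exact Fractions avoid any floating-point rounding in the ≤ 1 comparison):

```python
from fractions import Fraction

def edf_schedulable(tasks):
    """EDF utilization test for implicit-deadline periodic tasks (D = T):
    schedulable iff the total utilization sum(C/T) <= 1."""
    return sum(Fraction(C, T) for C, T in tasks) <= 1

# The slide's example: T1 (C=2, T=5), T2 (C=3, T=10)
U = sum(Fraction(C, T) for C, T in [(2, 5), (3, 10)])  # 2/5 + 3/10 = 7/10
```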
Least Laxity First (LLF) Scheduling

● LLF assigns priority based on task laxity, defined as (Deadline – Current Time – Remaining Execution Time).
● Tasks with least slack time are given the highest priority, ensuring urgent tasks are completed first.
● LLF recalculates laxity at every time unit, making it highly responsive but computationally
expensive.
● LLF is optimal like EDF but suffers from frequent priority changes and context switching.
● It is best suited for systems with frequent deadline changes and fine-grained preemption.
Least Laxity First (LLF): Worked Example

● Task Set:
○ T1: C=2, D=5 (starts at t=0)
○ T2: C=1, D=4 (starts at t=0)
● Initial Laxities at t=0:
○ T1 Laxity = 5 – 2 = 3
○ T2 Laxity = 4 – 1 = 3
○ (Same laxity → break tie arbitrarily)
● At t=1, T2 has executed 1 unit (completed),
○ T1 remains: C=2, Laxity = 5 – 2 – 1 = 2
● At t=1 onward, T1 runs for 2 units
○ Laxity reduces but remains positive → finishes before deadline
● Outcome:
○ Both tasks meet deadlines using LLF, but note how priorities change over time dynamically based on remaining execution.
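The laxity values in this walkthrough follow directly from laxity = deadline − current time − remaining execution; a one-line helper reproduces them (the function name is illustrative):

```python
def laxity(deadline, remaining, now):
    """Slack of a job: absolute deadline - current time - remaining work.
    LLF gives the highest priority to the job with the least laxity."""
    return deadline - now - remaining

# The example above: at t=0 both tasks have laxity 3; at t=1, T1 has laxity 2.
l_t1_start = laxity(deadline=5, remaining=2, now=0)  # 3
l_t2_start = laxity(deadline=4, remaining=1, now=0)  # 3
l_t1_later = laxity(deadline=5, remaining=2, now=1)  # 2
```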
Comparison – EDF vs. RMS vs. LLF

● RMS (Rate Monotonic Scheduling) assigns fixed priorities based on task frequency—simple but
less flexible.
● EDF (Earliest Deadline First) is dynamic and fully utilizes CPU up to 100%, but can collapse under
overload.
● LLF (Least Laxity First) is theoretically optimal, but impractical due to frequent recalculation and
jitter.
● RMS is predictable and analyzable, EDF is more efficient, LLF is theoretically tight but
computationally heavy.
● The choice depends on system type—certified systems prefer RMS, while soft real-time or
adaptive systems benefit from EDF.
Handling Aperiodic and Sporadic Tasks

● Aperiodic tasks arrive without a fixed period but must still be serviced with minimal delay.
● Sporadic tasks have a minimum inter-arrival time, making them easier to budget into schedules.
● These tasks are typically handled using server-based mechanisms to prevent interference with
periodic tasks.
● Real-time systems allocate reserved CPU bandwidth or time slots for such tasks.
● Efficient handling ensures responsiveness without compromising the timing guarantees of periodic
tasks.
Polling Server Mechanism

● A polling server is a periodic task that checks for aperiodic jobs during its scheduled execution.
● If no aperiodic task is present, the server remains idle for its slot—this leads to wasted CPU cycles.
● It's simple to implement but inefficient if aperiodic events are rare or unpredictable.
● Polling servers are often non-preemptive and scheduled like other periodic tasks.
● Suitable for low-priority or non-critical background events.
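A toy sketch of the polling idea (the queue layout and budget accounting are illustrative; in a real RTOS the server body would run as an ordinary periodic task, and unused budget is simply discarded until the next period):

```python
from collections import deque

# Pending aperiodic jobs as (cost, callable) pairs, hypothetical layout
aperiodic_queue = deque()

def polling_server(budget):
    """One periodic activation of a polling server.

    Serves queued aperiodic jobs while they fit in the remaining budget;
    if the queue is empty or the next job does not fit, the rest of the
    budget is wasted (the polling server's key inefficiency).
    """
    while aperiodic_queue and aperiodic_queue[0][0] <= budget:
        cost, job = aperiodic_queue.popleft()
        job()
        budget -= cost
```

With a budget of 1 unit and two 1-unit jobs queued, a single activation serves only the first job; the second must wait for the server's next period.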
Deferrable Server Algorithm

● A deferrable server retains its execution budget for the entire period, enabling quicker aperiodic
response.
● It is scheduled as a periodic task but defers execution until an aperiodic job arrives.
● Improves responsiveness without sacrificing periodic task guarantees.
● However, improper budget configuration can lead to deadline violations or CPU overuse.
● More efficient than polling servers but requires careful budget tuning.
Sporadic Server Algorithm

● A sporadic server is designed to replenish its capacity only after actual execution, ensuring better
schedulability.
● It behaves like a real task, but its capacity is consumed and replenished based on task arrival and
use.
● More suitable for systems with unpredictable but critical aperiodic events.
● It maintains better utilization bounds and avoids the timing anomalies of deferrable servers.
● Commonly used in RTOS implementations like VxWorks and FreeRTOS for aperiodic load handling.
Processor Demand vs. Time Demand Analysis

● Processor Demand Analysis (PDA) calculates whether the CPU can handle all tasks in a time
window.
● It ensures that the sum of execution times of all active jobs never exceeds the time available.
● PDA is especially useful in EDF systems where utilization can approach 100%.
● Time Demand Analysis (TDA) is more common in RMS—checks if each task completes before its
deadline.
● Both are critical for formal schedulability testing and are often implemented in RTOS validation
tools.
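Time Demand Analysis is commonly run as the standard response-time iteration R = C_i + Σ_j ceil(R / T_j)·C_j over all higher-priority tasks j; a compact sketch (D = T assumed, as elsewhere in this lecture):

```python
from math import ceil

def response_time(tasks, i):
    """Worst-case response time of tasks[i] via response-time analysis.

    tasks: list of (C, T) sorted by decreasing priority (RMS order).
    Returns the converged response time, or None if it exceeds the
    deadline (here D = T), i.e., the task is unschedulable.
    """
    C, T = tasks[i]
    R = C
    while True:
        # Interference from every higher-priority task over window R
        R_new = C + sum(ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_new == R:
            return R          # fixed point reached: worst-case response time
        if R_new > T:
            return None       # misses its deadline
        R = R_new
```

For the earlier RMS example — (1, 4), (2, 5), (3, 20) — this yields worst-case response times 1, 3, and 10, all within the deadlines, confirming the timeline-based conclusion.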
Scheduling Overheads and Context Switch Costs

● Real-time scheduling incurs overhead through context switches, timer interrupts, and preemption.
● These non-functional costs can erode available CPU time, leading to unexpected deadline misses.
● Preemptive schedulers (RMS, EDF, LLF) suffer more context-switch overhead than
non-preemptive ones.
● The more dynamic the scheduler (like LLF), the higher the computational burden at runtime.
● Accurate analysis must include system call costs, timer latency, and scheduling jitter.
Scheduling in Multi-Processor RTS – Part 1

● Modern systems often have multiple cores, requiring scheduling strategies that go beyond single
CPU assumptions.
● Partitioned scheduling assigns tasks to specific cores—each with its own local scheduler (like RMS).
● Global scheduling treats all tasks and processors as a single unit, where the highest-priority task
wins any available core.
● Partitioned methods are easier to analyze, but may lead to underutilized processors.
● Global methods are more efficient, but significantly harder to predict and verify.
Scheduling in Multi-Processor RTS – Part 2

● Migration overhead is a concern in global scheduling, where tasks can jump across CPUs.
● Cache misses, pipeline flushes, and context reloads add latency and jitter in real-time systems.
● Hybrid methods like semi-partitioned scheduling balance efficiency with predictability.
● Algorithms like Global EDF or Pfair are popular but have complex implementation requirements.
● RTOS support for multi-core must handle affinity, inter-core communication, and load balancing
carefully.
Scheduling in Multi-Processor RTS – Part 3

● In complex embedded platforms, multi-core systems are often heterogeneous—each core may
have different performance or role.
● Real-time scheduling must consider task affinity, mapping high-load tasks to faster cores.
● Some systems dedicate a core to critical periodic tasks, isolating them from shared jitter.
● Advanced approaches use performance-aware and energy-aware scheduling together.
● Multi-processor scheduling must also protect shared resources to avoid blocking or priority
inversion.
Task Modeling in RTOS-Based Systems

● In RTOS-based systems, tasks must be modeled with period, deadline, WCET (worst-case
execution time), and priority.
● Tasks may also have dependencies, resource needs (semaphores, I/O), and state transitions.
● RTOS APIs like xTaskCreate() (FreeRTOS) or taskSpawn() (VxWorks) require precise task
definitions.
● Mischaracterizing task behavior can lead to schedulability failure even if the system seems idle.
● RTOS-level modeling tools (e.g., Tracealyzer, Cheddar, Simulink) help validate and simulate
behavior before deployment.
Rate vs. Deadline Miss Handling Policies

● Rate-miss occurs when a periodic task is not released at its intended interval (e.g., every 50ms).
● Deadline-miss occurs when a task finishes execution after its deadline has passed.
● Systems handle these differently—hard real-time systems may trigger a fail-safe or emergency
mode.
● Soft real-time systems may allow skipping tasks, reducing quality, or rescheduling.
● Engineers must decide between discarding late tasks, re-executing them, or degrading service
gracefully.

Questions
