The document discusses different scheduling approaches for real-time embedded systems, including: 1) Task slicing which breaks large tasks into smaller ones to meet scheduling constraints. 2) Scheduling aperiodic tasks by allowing them to steal slack time from periodic tasks or using a polling server approach. 3) Scheduling sporadic tasks by testing if newly arrived tasks can meet their deadlines given existing schedules, and notifying the system if not.


Real Time Embedded System

Presented by: Zaman Najii


4.1.2.2 Task Slicing

It is possible that, for a set of periodic tasks, there is no feasible frame size that satisfies all three
constraints. This typically happens because one or more tasks have large execution times, so the first
constraint may conflict with the third. One way to resolve the conflict is to slice one or more big
tasks (tasks with long execution times) into several smaller subtasks.
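The effect of slicing can be sketched in code. This is a minimal sketch assuming the standard three cyclic-executive constraints — (1) the frame is at least as long as the largest execution time, (2) the frame divides the hyperperiod, and (3) 2f − gcd(f, p) ≤ D for every task — and the task parameters below are illustrative, not from the text:

```python
from math import gcd

def hyperperiod(tasks):
    """Least common multiple of the task periods."""
    h = 1
    for p, e, d in tasks:
        h = h * p // gcd(h, p)
    return h

def feasible_frames(tasks):
    """Frame sizes f satisfying the three cyclic-executive constraints:
    (1) f >= max execution time, (2) f divides the hyperperiod, and
    (3) 2f - gcd(f, p) <= D for every task (p, e, D)."""
    h = hyperperiod(tasks)
    emax = max(e for p, e, d in tasks)
    return [f for f in range(1, h + 1)
            if f >= emax and h % f == 0
            and all(2 * f - gcd(f, p) <= d for p, e, d in tasks)]

# Tasks given as (period, execution time, relative deadline) -- illustrative.
tasks = [(4, 1, 4), (5, 2, 7), (20, 5, 20)]
print(feasible_frames(tasks))   # []: the 5-unit task forces f >= 5,
                                # which violates constraint (3)

# Slice the long task into three subtasks of sizes 1, 3, and 1.
sliced = [(4, 1, 4), (5, 2, 7), (20, 1, 20), (20, 3, 20), (20, 1, 20)]
print(feasible_frames(sliced))  # [4]: slicing makes a frame size feasible
```

Slicing the 5-unit task lowers the bound imposed by the first constraint enough that f = 4 also satisfies the third.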

Example 4.5
4.2.2 Scheduling Aperiodic Tasks
Aperiodic tasks typically result from external events. They do not have hard deadlines. However, it is
desirable to service aperiodic tasks as soon as possible in order to reduce latency and improve the
system's performance.

Example 4.6 Slack Stealing


Figure 4.9a is a copy of the schedule in Example 4.4. Figure 4.9b shows the arrival of two aperiodic
tasks: the first arrives at time 2 with 1.5 units of execution time, and the second arrives at time 7 with
0.5 units. If we schedule them in the background of the schedule in Figure 4.9a, the first task
completes at 10 and the second at 12, as shown in Figure 4.9c.
However, if we move the execution of the second instance of T2 to the interval [9, 10] and the third
instance of T2 to [11, 12], we can complete the first aperiodic task at 8 and the second
aperiodic task at 11, as shown in Figure 4.9d.
4.2.3 Scheduling Sporadic Tasks
• Sporadic tasks have hard deadlines, but their parameters (release time, execution time, and
deadline) are unknown a priori, so there is no way for the scheduler to guarantee that they meet
their deadlines.
• When a sporadic task is released with a known deadline, the scheduler runs an acceptance test to
see whether the task can be scheduled so that its deadline is met, given that all periodic tasks, and
possibly other sporadic tasks, have already been scheduled. If the newly released sporadic task
passes the test, it is scheduled for execution. Otherwise, the scheduler rejects the task and notifies
the system so that it can take the necessary recovery action.
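The acceptance test can be sketched as follows. This is a simplified version that ignores the demand of the periodic tasks and assumes the processor (or the slack left over for sporadic work) is dedicated to the sporadic queue; the task parameters are illustrative:

```python
def accept(sporadic, new, now):
    """EDF-based acceptance test on a processor dedicated to sporadic tasks
    (a simplification: slack consumed by periodic tasks is ignored).
    Tasks are (absolute_deadline, remaining_execution_time) pairs."""
    candidate = sorted(sporadic + [new])     # order by earliest deadline first
    work = 0.0
    for deadline, exec_time in candidate:
        work += exec_time                    # cumulative demand by this deadline
        if work > deadline - now:            # demand exceeds the time available
            return False                     # reject the newly released task
    return True

queue = [(10.0, 2.0), (14.0, 3.0)]           # already-accepted sporadic tasks
print(accept(queue, (12.0, 1.0), now=4.0))   # True: the new task fits
print(accept(queue, (6.0, 3.0), now=4.0))    # False: only 2 units before t = 6
```

On rejection, the scheduler would notify the system as described above and leave the already-admitted queue unchanged.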

Example 4.7 Scheduling Sporadic Tasks


In general, when periodic, aperiodic, and sporadic tasks are mixed together, the cyclic scheduler places
all aperiodic tasks in one queue and all sporadic tasks in another. The sporadic task queue is a
priority queue: the task with the earliest deadline is placed at its head, and only sporadic tasks that
pass the acceptance test are admitted to the queue. Periodic tasks should be scheduled as close to
their deadlines as the deadlines allow, so that aperiodic tasks can steal slack and complete execution
as early as possible.
4.3 Round-Robin Approach

• The round-robin approach is a time-sharing scheduling algorithm.


• A time slice is assigned to each task, in circular order.
• A FIFO (first-in, first-out) queue stores all tasks in the ready state.
• If a task is not finished within its assigned time slice, it is placed at the tail of the FIFO queue to
wait for its next turn.
• Since each task gets only a small portion of its execution done per turn, all tasks are assumed to be
preemptable.
• An improved version of round-robin is weighted round-robin, in which each task's share of
processor time is proportional to its weight.
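The queue discipline above can be sketched as a small simulation. The task names and parameters are illustrative, all tasks are assumed ready at time 0, and context-switch cost is ignored:

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate round-robin scheduling of tasks given as (name, execution
    time); returns each task's completion time."""
    ready = deque(tasks)                     # FIFO queue of ready tasks
    t = 0.0
    completion = {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)        # run for at most one time slice
        t += run
        remaining -= run
        if remaining > 0:
            ready.append((name, remaining))  # unfinished: back to the tail
        else:
            completion[name] = t
    return completion

print(round_robin([("A", 3.0), ("B", 1.0), ("C", 2.0)], quantum=1.0))
# {'B': 2.0, 'C': 5.0, 'A': 6.0}
```

A weighted round-robin variant would simply give each task a quantum proportional to its weight.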
4.4 Priority-Driven Scheduling Algorithms
• Scheduling decisions are made when a new task (instance) is released or a task (instance) is
completed.
• A priority is assigned to each task. Priority assignment can be done statically, or dynamically while
the system is running.
• A scheduling algorithm that assigns priorities to tasks statically is called a static-priority or
fixed-priority algorithm; an algorithm that assigns priorities dynamically is called a dynamic-priority
algorithm. Priority-driven scheduling is easy to implement.
• System properties assumed in this chapter:
1) There are only periodic tasks in the system under consideration.
2) The relative deadline of each task is the same as its period.
3) All tasks are independent; there are no precedence constraints.
4) All tasks are preemptable, and the cost of context switches is negligible.
4.4.1 Fixed-Priority Algorithms

• The most well-known fixed-priority algorithm is the rate-monotonic (RM) algorithm.

• The algorithm assigns priorities to tasks based on their periods: the shorter the period, the higher the priority.

Example 4.8 RM Scheduling


The figure shows an RM-based schedule of three periodic tasks:
T1 = (4, 1), T2 = (5, 1), T3 = (10, 3).

Since T1 has the shortest period and T3 the longest, the priority order is
P1 > P2 > P3 (that is, T3 < T2 < T1).
Example 4.9 Task Missing Its Deadline under RM

Figure 4.13 shows that when we schedule the three periodic tasks

T1 = (4, 1), T2 = (5, 2), T3 = (10, 3.1)

with the RM algorithm, T3 misses its deadline.
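A tick-level simulation can confirm both examples: the task set of Example 4.8 meets all its deadlines under RM, while this one (with utilization 1/4 + 2/5 + 3.1/10 = 0.96) does not. This is a sketch that assumes deadlines equal periods and simulates in ticks of 0.1 time units:

```python
from functools import reduce
from math import gcd

def rm_schedulable(tasks, scale=10):
    """Simulate rate-monotonic scheduling over one hyperperiod, in ticks of
    1/scale time units. tasks = [(period, execution_time), ...]; relative
    deadlines equal periods, and a shorter period means a higher priority."""
    per = [round(p * scale) for p, e in tasks]
    exe = [round(e * scale) for p, e in tasks]
    h = reduce(lambda a, b: a * b // gcd(a, b), per)         # hyperperiod
    order = sorted(range(len(tasks)), key=lambda i: per[i])  # RM priorities
    remaining = [0] * len(tasks)
    for t in range(h):
        for i, p in enumerate(per):
            if t % p == 0:
                if remaining[i] > 0:      # previous instance still unfinished
                    return False          # its deadline (= period) was missed
                remaining[i] = exe[i]
        for i in order:                   # run the highest-priority ready task
            if remaining[i] > 0:
                remaining[i] -= 1
                break
    return all(r == 0 for r in remaining)

print(rm_schedulable([(4, 1), (5, 1), (10, 3)]))    # True: Example 4.8
print(rm_schedulable([(4, 1), (5, 2), (10, 3.1)]))  # False: T3 misses at 10
```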
4.4.1.1 Schedulability Test Based on Time Demand Analysis (TDA)
Example 4.10 Schedulability Test
Consider the following four periodic tasks: T1 = (3, 1), T2 = (4, 1), T3 = (6, 1), T4 = (12, 1).
We want to test their schedulability based on the TDA.
Task T1 is schedulable, because u1 = 0.33 < 1.
To test T2, we first list the time instants at which instances of T1 are released in [0, 4]: they are 0 and 3.
At t = 3, the time demand is w2(3) = e2 + ⌈3/p1⌉·e1 = 1 + 1 = 2 ≤ 3, so T2 meets its deadline.
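The TDA test of this example can be sketched in code. This is a minimal version assuming tasks are listed highest priority first and deadlines equal periods; the check points are the higher-priority release instants plus the task's own deadline:

```python
from math import ceil

def tda_schedulable(tasks):
    """Time-demand analysis for fixed-priority periodic tasks listed highest
    priority first, tasks = [(period, execution_time), ...], deadline = period.
    Task i is schedulable if w_i(t) = e_i + sum(ceil(t/p_k) * e_k for k < i)
    satisfies w_i(t) <= t at some check point t in (0, p_i]."""
    for i, (p_i, e_i) in enumerate(tasks):
        points = {p_i}                        # the task's own deadline...
        for p_k, _ in tasks[:i]:              # ...plus higher-priority releases
            points.update(range(p_k, p_i + 1, p_k))
        if not any(e_i + sum(ceil(t / p_k) * e_k for p_k, e_k in tasks[:i]) <= t
                   for t in points):
            return False
    return True

# Example 4.10: all four tasks pass the test.
print(tda_schedulable([(3, 1), (4, 1), (6, 1), (12, 1)]))   # True
# The task set of Example 4.9 fails: T3's demand exceeds t at every check point.
print(tda_schedulable([(4, 1), (5, 2), (10, 3.1)]))         # False
```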
4.4.2 Dynamic-Priority Algorithms

4.4.2.1 Earliest-Deadline-First (EDF) Algorithm

Example 4.12 EDF Scheduling of One-Shot Tasks

Example 4.13 EDF Scheduling of Periodic Tasks

T1 = (4, 1), T2 = (5, 2), T3 = (10, 3.1)
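Under EDF, this task set — the one that misses a deadline under RM in Example 4.9 — is schedulable, since its total utilization is 1/4 + 2/5 + 3.1/10 = 0.96 ≤ 1. A tick-level simulation sketch (deadlines assumed equal to periods):

```python
from functools import reduce
from math import gcd

def edf_schedulable(tasks, scale=10):
    """Simulate earliest-deadline-first scheduling over one hyperperiod, in
    ticks of 1/scale time units. tasks = [(period, execution_time), ...];
    relative deadlines equal periods."""
    per = [round(p * scale) for p, e in tasks]
    exe = [round(e * scale) for p, e in tasks]
    h = reduce(lambda a, b: a * b // gcd(a, b), per)
    remaining = [0] * len(tasks)
    deadline = [0] * len(tasks)
    for t in range(h):
        for i, p in enumerate(per):
            if t % p == 0:
                if remaining[i] > 0:
                    return False          # previous instance missed its deadline
                remaining[i] = exe[i]
                deadline[i] = t + p       # absolute deadline of the new instance
        ready = [i for i in range(len(tasks)) if remaining[i] > 0]
        if ready:
            run = min(ready, key=lambda i: deadline[i])  # earliest deadline first
            remaining[run] -= 1
    return all(r == 0 for r in remaining)

print(edf_schedulable([(4, 1), (5, 2), (10, 3.1)]))  # True: U = 0.96 <= 1
```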


Optimality of EDF:

■ EDF is an optimal uniprocessor scheduling algorithm.

■ As long as a set of tasks has a feasible schedule, EDF can produce a feasible
schedule.
■ If S is not an EDF schedule, there must be a situation as illustrated in Figure 4.17a:
a task Tj with a later deadline is scheduled, in some interval I1, to run before a task Ti
with an earlier deadline, which is scheduled in a later interval I2. Any such pair can be swapped.
We consider three cases:
1. Interval I1 is shorter than I2. In this case, we partition Ti into two subtasks Ti1 and
Ti2, where Ti1 is the first portion of Ti and its execution time is equal to the length
of I1. We place Ti1 in I1, move Ti2 to the beginning of I2, and place Tj right after Ti2,
as shown in Figure 4.17b.
2. Interval I1 is longer than I2. In this case, we partition Tj into two subtasks Tj1 and
Tj2 , where Tj1 is the first portion of Tj and its execution time is equal to the
difference in the lengths of I1 and I2. We place Ti and Tj1 in I1 and place Tj2 in I2,
as shown in Figure 4.17c.
3. Intervals I1 and I2 are equally long.
In this case, we simply switch the two tasks.
How do we test the schedulability of a set of tasks by the EDF algorithm?

■ A set of independent, preemptable periodic tasks with relative deadlines equal to their
periods is schedulable by EDF if and only if its total utilization is at most 1.
■ The necessary condition is obvious, because the utilization of a single processor
cannot exceed 1.
■ We focus on the proof of the sufficient condition.
■ We prove the contrapositive: if, in an EDF schedule, some task of a system fails to
meet its deadline, then the total utilization of the system must be greater than 1.
■ Assume that the system starts execution at time 0 and that the task Ti misses its
deadline at time t; let ri be the release time of the instance that misses. Assume that
the processor never idles prior to t.
Consider two cases:
1. The current period of every task starts at or after ri.
2. The current periods of some tasks begin before ri.
1- That Ti misses its deadline at t indicates two things:
• Any current task instance whose deadline is after t is not
given any processor time to execute before t.
• The total processor time required to complete the instance of Ti,
and all other task instances with deadlines at or before t,
exceeds the total supply of processor time, t
(this is what the schedulability test measures).

2- Denote by T′ the set of tasks whose current instances were
released before ri and have deadlines after t.
• It is possible that processor time before ri was given to
the current instances of some tasks in T′; Tk is such a
task in the figure. Let t−1 be the last instant before t at
which an instance of a task in T′ completes execution.
• Then, during the interval [t−1, t], no instance of a task
in T − T′ whose deadline is after t is given processor
time.
• Because the current instance of Ti misses its deadline,
we must have
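The inequality that this argument leads to can be reconstructed as follows (a sketch in the notation above, following the standard utilization-bound proof):

```latex
% Supply in (t_{-1}, t] is less than the demand of the tasks in T - T';
% at most \lfloor (t - t_{-1})/p_k \rfloor instances of T_k have
% deadlines inside this interval:
t - t_{-1} \;<\; \sum_{T_k \in \mathbf{T} - \mathbf{T}'}
    \left\lfloor \frac{t - t_{-1}}{p_k} \right\rfloor e_k
  \;\le\; (t - t_{-1}) \sum_{k} \frac{e_k}{p_k}
  \;=\; (t - t_{-1})\, U
```

Hence U > 1, contradicting the assumption that the total utilization is at most 1.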
Scheduling of Aperiodic Tasks

■ Aperiodic tasks have either soft deadlines or no deadlines.


■ The simplest algorithm is to execute aperiodic tasks in the slack times of the schedule of periodic tasks.
■ This approach assigns the lowest priority to aperiodic tasks.
■ With it, there is no guarantee on the response time of aperiodic tasks.
■ Another popular approach is polling: a polling server is introduced into the system as a periodic
task Ts = (ps, es), where ps is the polling period and es is its execution time.
■ The scheduler treats the polling server as a periodic task, assigning it a priority based
on its polling period.
■ The poller stops running when it has executed for es units of time or when the aperiodic task it
is executing finishes.
■ If, at the beginning of a polling period, no aperiodic tasks are available for execution, the poller
suspends itself immediately and waits for the beginning of the next period.
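The key difference between the polling server above and the deferrable server discussed next is what happens to unused budget at the start of a period. A minimal bookkeeping sketch — the class and its methods are hypothetical, not from the text:

```python
class Server:
    """Budget bookkeeping for a periodic server Ts = (ps, es). A polling
    server discards its budget when no aperiodic work is pending at the
    start of a period; a deferrable server keeps it until the period ends.
    (Class and parameter names are illustrative.)"""

    def __init__(self, ps, es, deferrable):
        self.ps, self.es, self.deferrable = ps, es, deferrable
        self.budget = 0.0

    def replenish(self, pending_work):
        # At each period boundary the budget is reset to es; a polling
        # server forfeits it immediately if the aperiodic queue is empty.
        self.budget = self.es if (pending_work or self.deferrable) else 0.0

    def serve(self, amount):
        # Execute pending aperiodic work, limited by the remaining budget;
        # returns how much work was actually done.
        used = min(amount, self.budget)
        self.budget -= used
        return used

polling = Server(ps=2.5, es=0.5, deferrable=False)
deferred = Server(ps=2.5, es=0.5, deferrable=True)

# A period starts at t = 0 with no aperiodic work; a task arrives at t = 0.2.
polling.replenish(pending_work=False)
deferred.replenish(pending_work=False)
print(polling.serve(0.5))   # 0.0: the polling server already gave up its budget
print(deferred.serve(0.5))  # 0.5: the deferrable server kept it for the arrival
```

This retained budget is exactly why the deferrable server in Example 4.14 can start serving A as soon as it arrives.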
Scheduling of Aperiodic Tasks

■ If A is scheduled with a deferrable server Ts = (2.5, 0.5), its execution finishes at
5.5, as shown in Figure 4.19c. In this case, A executes immediately after it is
released at 0.2.
■ Unlike a polling server, a deferrable server preserves any unused budget until the end of its
period. The worst impact on periodic tasks occurs when an aperiodic task with execution time
no less than 2es arrives at m·ps − es: the server then executes for 2es units of time
consecutively and causes the longest possible delay to periodic tasks.
Example 4.14 Scheduling of Aperiodic Tasks
■ T1 = (3, 1), T2 = (10, 3). An aperiodic task A
with an execution time of 1.3 is released at 0.2.
a) Aperiodic task scheduled in the background of the
periodic tasks: A is completed at 7.3.
b) Aperiodic task scheduled with a simple polling
server Ts = (2.5, 0.5): A is finished at 7.8.
c) Aperiodic task scheduled with a deferrable
server: A's execution is finished at 5.5.
Scheduling of Sporadic Tasks:

■ Sporadic tasks have hard deadlines.

■ One way of handling sporadic tasks is to treat every sporadic task as a periodic task.
■ Another way is to introduce a deferrable server, in the same way we handle aperiodic tasks.
