RTOS Class PPT
• Multitasking,
• Context switching,
• Dispatcher, and
• Scheduling algorithms.
Schedulable Entities
• A schedulable entity is a kernel object that can compete for
execution time on a system, based on a predefined
scheduling algorithm.
• Tasks and processes are examples of schedulable entities
found in most kernels.
• Note that message queues and semaphores are not
schedulable entities. These items are inter-task
communication objects used for synchronization and
communication.
Multitasking
• Multitasking is the ability of the operating system to handle multiple
activities within set deadlines. A real-time kernel might have multiple
tasks that it has to schedule to run.
• In this scenario, the kernel multitasks in such a way that many threads
of execution appear to be running concurrently; however, the kernel is
actually interleaving executions sequentially, based on a preset
scheduling algorithm. The scheduler must ensure that the appropriate
task runs at the right time.
• An important point to note here is that tasks follow the kernel's
scheduling algorithm, while interrupt service routines (ISRs) are
triggered to run because of hardware interrupts and their established
priorities.
• As the number of tasks to schedule increases, so do CPU
performance requirements, because of the increased
switching between the contexts of the different threads of
execution.
The Context Switch
Each task has its own context, which is the state of the CPU
registers required each time it is scheduled to run. A
context switch occurs when the scheduler switches from
one task to another.
Every time a new task is created, the kernel also creates
and maintains an associated task control block (TCB).
TCBs are system data structures that the kernel uses to
maintain task-specific information.
TCBs contain everything a kernel needs to know about a
particular task. When a task is running, its context is highly
dynamic.
This dynamic context is maintained in the TCB. When the
task is not running, its context is frozen within the TCB, to
be restored the next time the task runs.
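To make this concrete, below is a minimal sketch in C of what a TCB might contain. All field names and the layout are illustrative assumptions, not taken from any particular kernel.

/* Illustrative TCB sketch: field names and layout are hypothetical,
 * not taken from any particular kernel. */
#include <stdint.h>

typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;

typedef struct tcb {
    uint32_t     *stack_ptr;   /* saved stack pointer; the CPU registers
                                  are pushed onto the task's stack when
                                  it is switched out */
    task_state_t  state;       /* ready, running, or blocked */
    uint8_t       priority;    /* scheduling priority */
    uint32_t      task_id;     /* unique ID assigned by the kernel */
    const char   *name;        /* human-readable task name */
    struct tcb   *next;        /* linkage for the kernel's ready list */
} tcb_t;

/* On a context switch, the scheduler saves the outgoing task's CPU
 * context (typically on its own stack), stores the stack pointer in its
 * TCB, then loads the incoming task's stack pointer from its TCB and
 * restores that task's context. */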
Real-Time Scheduling
• Determines the order of real-time task executions
• Static-priority scheduling
• Dynamic-priority scheduling
[Figure: execution timelines over 0–15 for tasks T1 (4,1), T2 (5,2), and T3 (7,2), where Ti (p, e) denotes a task with period p and execution time e]
RM (Rate Monotonic)
• Optimal static-priority scheduling
• It assigns priority according to period
• A task with a shorter period has a higher priority
• Executes a job with the shortest period
[Figure: RM schedule for T1 (4,1), T2 (5,2), T3 (7,2) over 0–15; T1 has the shortest period and thus the highest priority]
RM (Rate Monotonic)
• Under RM, this task set is not schedulable: T3 runs at [3,4), is
preempted by T1's release at t = 4 and T2's release at t = 5, and still
needs one execution unit when its deadline arrives at t = 7.
Deadline miss!
[Figure: RM schedule for T1 (4,1), T2 (5,2), T3 (7,2) showing T3 missing its deadline at t = 7]
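The deadline miss above can be anticipated with the classic Liu & Layland utilization bound for RM, U <= n(2^(1/n) - 1), which is a sufficient (not necessary) schedulability test. This test is standard theory rather than something stated on the slides; the small C sketch below applies it to the slides' task set.

#include <math.h>
#include <stdio.h>

/* Sufficient (not necessary) RM schedulability test: Liu & Layland
 * utilization bound. Task i has period[i] and worst-case execution
 * time wcet[i]. */
static int rm_bound_ok(const double *period, const double *wcet, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += wcet[i] / period[i];
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    printf("U = %.3f, bound = %.3f\n", u, bound);
    return u <= bound;
}

int main(void)
{
    /* The task set from the slides: T1 (4,1), T2 (5,2), T3 (7,2). */
    double period[] = {4, 5, 7}, wcet[] = {1, 2, 2};
    /* U = 0.25 + 0.40 + 0.286 = 0.936 > bound 0.780, so the test
     * fails; the timeline above shows T3 indeed missing its deadline. */
    printf("RM bound %s\n", rm_bound_ok(period, wcet, 3) ? "passed" : "failed");
    return 0;
}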
Response Time
• Response time
– Duration from a job's release time to its finish time
[Figure: response times under RM for T1 (4,1), T2 (5,2), T3 (10,2); with all tasks released at t = 0, T3's first job starts at t = 3, is preempted by T1 and T2, and finishes at t = 8, a response time of 8]
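Worst-case response times under fixed-priority scheduling such as RM can also be computed analytically with the standard response-time recurrence R = Ci + sum over higher-priority tasks j of ceil(R/Tj) * Cj, iterated to a fixed point. This analysis is textbook material rather than something on the slides; the sketch below applies it to T1 (4,1), T2 (5,2), T3 (10,2).

#include <math.h>
#include <stdio.h>

/* Worst-case response time of task i under fixed-priority scheduling,
 * via the classic fixed-point recurrence. Tasks are indexed in
 * descending priority order: index 0 is the highest priority. */
static double response_time(const double *T, const double *C, int i)
{
    double R = C[i], prev;
    do {
        prev = R;
        R = C[i];
        for (int j = 0; j < i; j++)          /* higher-priority demand */
            R += ceil(prev / T[j]) * C[j];
    } while (R != prev && R <= T[i]);        /* converge, or overrun the period */
    return R;
}

int main(void)
{
    /* Task set from the slides: T1 (4,1), T2 (5,2), T3 (10,2). */
    double T[] = {4, 5, 10}, C[] = {1, 2, 2};
    for (int i = 0; i < 3; i++)
        printf("R%d = %g\n", i + 1, response_time(T, C, i));
    /* Prints R1 = 1, R2 = 3, R3 = 8, matching the timeline above. */
    return 0;
}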
EDF (Earliest Deadline First)
• Optimal dynamic-priority scheduling
• A task with an earlier deadline has a higher priority
• Executes the job with the earliest absolute deadline
[Figure: EDF schedule for T1 (4,1), T2 (5,2), T3 (7,2) over 0–15]
EDF (Earliest Deadline First)
• Optimal scheduling algorithm
– If any schedule exists for a set of real-time tasks, then EDF can
also schedule it.
[Figure: complete EDF schedule for T1 (4,1), T2 (5,2), T3 (7,2) over 0–15; all deadlines are met]
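As a sanity check of the EDF behavior described above, here is a minimal discrete-time EDF simulation in C for the same task set, with implicit deadlines (deadline = period). The simulation structure and variable names are illustrative, not from the slides.

#include <stdio.h>

#define NTASKS  3
#define HORIZON 15

int main(void)
{
    int period[NTASKS] = {4, 5, 7};   /* T1, T2, T3 periods   */
    int wcet[NTASKS]   = {1, 2, 2};   /* execution times      */
    int remaining[NTASKS] = {0};      /* work left in current job */
    int deadline[NTASKS]  = {0};      /* absolute deadline of current job */

    for (int t = 0; t < HORIZON; t++) {
        /* Release a new job of each task at multiples of its period. */
        for (int i = 0; i < NTASKS; i++) {
            if (t % period[i] == 0) {
                if (remaining[i] > 0)
                    printf("t=%d: T%d missed its deadline!\n", t, i + 1);
                remaining[i] = wcet[i];
                deadline[i]  = t + period[i];
            }
        }
        /* EDF rule: run the ready job with the earliest absolute deadline. */
        int pick = -1;
        for (int i = 0; i < NTASKS; i++)
            if (remaining[i] > 0 &&
                (pick < 0 || deadline[i] < deadline[pick]))
                pick = i;
        if (pick >= 0) {
            printf("t=%d: run T%d (deadline %d)\n", t, pick + 1, deadline[pick]);
            remaining[pick]--;
        }
    }
    /* With U = 1/4 + 2/5 + 2/7 = 0.936 <= 1, EDF meets every deadline
     * for this set, unlike RM above. */
    return 0;
}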
Scheduling Algorithms
Two common scheduling algorithms are:
1. preemptive priority-based scheduling, and
2. round-robin scheduling.
Preemptive Priority-Based Scheduling
• In preemptive priority-based scheduling, the highest-priority ready
task always runs; a lower-priority task is preempted as soon as a
higher-priority task becomes ready.

Message Queues
• Message queues are buffer-like data structures that can be used for
synchronization, mutual exclusion, and data exchange by passing
messages between tasks. Developers creating real-time embedded
applications can combine these basic kernel objects (as well as others not
mentioned here) to solve common real-time design problems, such as
concurrency, activity synchronization, and data communication.
Services
• Along with objects, most kernels provide services that help
developers create applications for real-time embedded
systems.
• These services comprise sets of API calls that can be used to
perform operations on kernel objects or can be used in
general to facilitate timer management, interrupt handling,
device I/O, and memory management.
• Other services might also be provided; the services listed here are
those most commonly found in RTOS kernels.
Key Characteristics of an RTOS
• Reliability,
• Predictability,
• Performance,
• Compactness, and
• Scalability.
Tasks
• Concurrent design requires developers to decompose an
application into small, schedulable, and sequential program
units. When done correctly, concurrent design allows system
multitasking to meet performance and timing requirements
for a real-time system.
• Most RTOS kernels provide task objects and task
management services to facilitate designing concurrency
within an application.
Defining a Task
• A task is an independent thread of execution that can
compete with other concurrent tasks for processor execution
time.
• That is, developers decompose applications into multiple
concurrent tasks to optimize the handling of inputs and
outputs within set time constraints.
• A task is schedulable. The task is able to compete for
execution time on a system, based on a predefined
scheduling algorithm.
A task is defined by its distinct set of parameters and
supporting data structures.
Specifically, upon creation, each task has an associated name,
a unique ID, a priority (if part of a preemptive scheduling
plan), a task control block (TCB), a stack, and a task routine.
Together, these components make up what is known as the
task object.
When the kernel first starts, it creates its own set of
system tasks and allocates the appropriate priority for
each from a set of reserved priority levels.
System tasks include, for example, an initialization or startup task
that initializes the system and creates and starts other system tasks.

When a task makes a blocking call (the cases described below), it is
moved from the running state to the blocked state.
Blocked State
• The possibility of blocked states is extremely important in
real-time systems because without blocked states, lower
priority tasks could not run.
• If higher priority tasks are not designed to block, CPU
starvation can result.
• CPU starvation occurs when higher priority tasks use all of
the CPU execution time and lower priority tasks do not get to
run.
• A task can only move to the blocked state by making a
blocking call, requesting that some blocking condition be
met. It is unblocked when that condition occurs, for example when:
• a semaphore token for which the task is waiting is released.
• Developers typically create a task using one or two operations, depending on the
kernel's API (see the sketch after these bullets).
• Some kernels allow developers first to create a task and then start it. In this case, the
task is first created and put into a suspended state;
• then, the task is moved to the ready state when it is started (made ready to run).
• Starting a task does not make it run immediately; it puts the task on the task-ready list.
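As one concrete example, FreeRTOS uses the one-step model: xTaskCreate() creates a task and places it directly on the task-ready list, while kernels with the two-step model expose separate create and start calls. The task name, stack size, and priority below are illustrative choices, not values from the slides.

#include "FreeRTOS.h"
#include "task.h"

/* The task routine: an independent thread of execution. */
static void vBlinkTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        /* ... do periodic work ... */
        vTaskDelay(pdMS_TO_TICKS(500));   /* blocking call: task sleeps */
    }
}

void create_blink_task(void)
{
    TaskHandle_t xHandle = NULL;

    /* One-step creation: the task is made ready immediately.
     * Parameters: routine, name, stack depth (in words), argument,
     * priority, and an optional handle for later operations such as
     * deletion. */
    xTaskCreate(vBlinkTask, "Blink", configMINIMAL_STACK_SIZE,
                NULL, tskIDLE_PRIORITY + 1, &xHandle);
}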
• Many kernel implementations allow any task to delete any other task.
• During the deletion process, a kernel terminates the task and frees memory by deleting
the task’s TCB and stack.
• However, when tasks execute, they can acquire memory or
access resources using other kernel objects.
• If the task is deleted incorrectly, the task might not get to
release these resources
Abrupt deletion of a running task can therefore result in unreleased
resources, for example a semaphore token that is never returned,
leaving other tasks blocked indefinitely.
• Tasks typically make a request to acquire a semaphore in one of the following
ways (see the sketch after this list):
• Wait forever: the task remains blocked until it is able to acquire the semaphore.
• Wait with a timeout: the task remains blocked until it is able to acquire the
semaphore or until a set interval of time, called the timeout interval, passes. At
that point, the task is removed from the semaphore's task-waiting list and put in
either the ready state or the running state.
• Do not wait: the task makes a request to acquire a semaphore token but, if one is
not available, the task does not block.
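For illustration, the three request modes map directly onto the timeout argument of a typical acquire call. The sketch below uses FreeRTOS's xSemaphoreTake(); the three calls are independent examples of each mode, not a sequence meant to run as-is.

#include "FreeRTOS.h"
#include "semphr.h"

void acquire_examples(SemaphoreHandle_t xSem)
{
    /* Wait forever: block until the semaphore token is available. */
    xSemaphoreTake(xSem, portMAX_DELAY);

    /* Wait with a timeout: block for at most 100 ms, then give up. */
    if (xSemaphoreTake(xSem, pdMS_TO_TICKS(100)) == pdFALSE) {
        /* The timeout interval passed: the task was removed from the
         * semaphore's task-waiting list without acquiring the token. */
    }

    /* Do not wait: return immediately if the token is unavailable. */
    if (xSemaphoreTake(xSem, 0) == pdFALSE) {
        /* Token not available; the task continues without blocking. */
    }
}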
Clearing Semaphore Task-Waiting Lists
• The flush operation is useful for broadcast signaling to a group of
tasks. For example, a developer might design multiple tasks to
complete certain activities first and then block while trying to
acquire a common semaphore that is made unavailable.
• After the last task finishes doing what it needs to, the task can
execute a semaphore flush operation on the common semaphore.
This operation frees all tasks waiting in the semaphore's task-waiting
list.
• The synchronization scenario just described is also called thread
rendezvous: multiple tasks' executions need to meet at some
point in time to synchronize execution control.
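Not every kernel provides a semaphore flush operation. Where it is missing, a comparable broadcast rendezvous can be built from other primitives; for example, FreeRTOS event groups provide xEventGroupSync() for exactly this pattern. The bit assignments and task body below are illustrative assumptions.

#include "FreeRTOS.h"
#include "event_groups.h"

#define TASK_A_DONE (1 << 0)   /* illustrative bit assignments */
#define TASK_B_DONE (1 << 1)
#define ALL_DONE    (TASK_A_DONE | TASK_B_DONE)

/* Created elsewhere with xEventGroupCreate(). */
static EventGroupHandle_t xRendezvous;

/* Each participating task sets its own bit, then blocks until all bits
 * are set. When the last task arrives, every waiter is released at
 * once, giving the same broadcast effect as a semaphore flush. */
void task_a_rendezvous_point(void)
{
    xEventGroupSync(xRendezvous, TASK_A_DONE, ALL_DONE, portMAX_DELAY);
    /* ... all participating tasks proceed together from here ... */
}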
Getting Semaphore Information
• Kernels typically also provide calls for obtaining information about a
semaphore, such as its current count and the tasks waiting on it.

Creating Message Queues
When a message queue is first created, it is assigned an associated queue control
block (QCB), a message queue name, a unique ID, memory buffers, a queue length,
a maximum message length, and one or more task-waiting lists.
It is the kernel's job to assign a unique ID to a message queue and to create its
QCB and task-waiting list.
The kernel also takes developer-supplied parameters such as the length of the
queue and the maximum message length to determine how much memory is
required for the message queue.
After the kernel has this information, it allocates memory for the message queue
from either a pool of system memory or some private memory space.
The message queue itself consists of a number of elements, each of which can
hold a single message. The elements holding the first and last messages are called
the head and tail respectively.
Some elements of the queue may be empty (not containing a message).
The total number of elements (empty or not) in the queue is the total length of the
queue; the developer specifies the queue length when the queue is created.
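Below is a minimal C sketch of a QCB over a circular buffer, tying together the head, tail, queue-length, and maximum-message-length parameters just described. Field names are hypothetical, not from any particular kernel.

#include <stdint.h>
#include <stddef.h>

/* Illustrative QCB sketch over a circular buffer of fixed-size
 * elements; field names are hypothetical. */
typedef struct {
    const char *name;         /* message queue name */
    uint32_t    id;           /* unique ID assigned by the kernel */
    uint8_t    *buffer;       /* queue_length * max_msg_len bytes, from a
                                 system pool or private memory space */
    size_t      queue_length; /* total number of elements, empty or not */
    size_t      max_msg_len;  /* maximum message length in bytes */
    size_t      head;         /* index of the element holding the first message */
    size_t      tail;         /* index of the element holding the last message */
    size_t      count;        /* number of elements currently holding messages */
    /* ... plus one or more task-waiting lists for senders and receivers ... */
} qcb_t;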
[Figure: the state diagram for a message queue]
[Figure: message copying and memory use for sending and receiving messages]
Message Queue Storage
System Pools
• Using a system pool can be advantageous if it is certain that all message queues
will never be filled to capacity at the same time, because system pools typically
save on memory use. The downside is that a message queue with large messages
can easily use most of the pooled memory, leaving too little for other message
queues. Indications of this problem include a message queue that is not full
rejecting messages sent to it, or a full message queue that continues to accept
more messages.
Private Buffers
• Using private buffers, on the other hand, requires enough reserved memory area
for the full capacity of every message queue that will be created. This approach
clearly uses up more memory; however, it also ensures that messages do not get
overwritten and that room is available for all messages, resulting in better
reliability than the pool approach.
Typical Message Queue Operations
Typical message queue operations include the following:
• creating and deleting message queues,
• sending and receiving messages, and
• obtaining message queue information.
[Figure: sending messages in FIFO or LIFO order]
[Figure: FIFO and priority-based task-waiting lists]
Typical Message Queue Use
• non-interlocked, one-way data communication,
• interlocked, one-way data communication,
• interlocked, two-way data communication, and
• broadcast communication.
Non-interlocked, one-way data communication:

tSourceTask ()
{
   :
   Send message to message queue
   :
}

tSinkTask ()
{
   :
   Receive message from message queue
   :
}

Interlocked, one-way data communication:

tSourceTask ()
{
   :
   Send message to message queue
   Acquire binary semaphore
   :
}

tSinkTask ()
{
   :
   Receive message from message queue
   Give binary semaphore
   :
}
Interlocked, two-way data communication:

tClientTask ()
{
   :
   Send a message to the requests queue
   Wait for message from the server queue
   :
}

tServerTask ()
{
   :
   Receive a message from the requests queue
   Send a message to the client queue
   :
}

Broadcasting messages:

tBroadcastTask ()
{
   :
   Send broadcast message to queue
   :
}

tSignalTask ()
{
   :
   Receive message on queue
   :
}
Note: similar code for tSignalTasks 1, 2, and 3.
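As a concrete rendering of the interlocked, one-way pattern above, here is a minimal FreeRTOS sketch using a message queue plus a binary semaphore as the acknowledgment. Queue depth, message type, stack sizes, and priorities are illustrative assumptions, not values from the slides.

#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"
#include "semphr.h"

static QueueHandle_t     xMsgQueue;
static SemaphoreHandle_t xAck;   /* binary semaphore used as the ack */

/* tSourceTask: sends a message, then blocks until the sink acknowledges. */
static void tSourceTask(void *arg)
{
    (void)arg;
    int msg = 42;
    for (;;) {
        xQueueSend(xMsgQueue, &msg, portMAX_DELAY);   /* send message */
        xSemaphoreTake(xAck, portMAX_DELAY);          /* acquire binary semaphore */
    }
}

/* tSinkTask: receives a message, processes it, then acknowledges. */
static void tSinkTask(void *arg)
{
    (void)arg;
    int msg;
    for (;;) {
        xQueueReceive(xMsgQueue, &msg, portMAX_DELAY); /* receive message */
        /* ... process msg ... */
        xSemaphoreGive(xAck);                          /* give binary semaphore */
    }
}

void start_demo(void)
{
    xMsgQueue = xQueueCreate(8, sizeof(int));
    xAck      = xSemaphoreCreateBinary();
    xTaskCreate(tSourceTask, "Source", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    xTaskCreate(tSinkTask,   "Sink",   configMINIMAL_STACK_SIZE, NULL, 2, NULL);
}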