
Introduction To

Real-Time Operating Systems


Defining an RTOS
• A real-time operating system (RTOS) is a program that schedules execution in
a timely manner, manages system resources, and provides a consistent
foundation for developing application code.
• Application code designed on an RTOS can be quite diverse, ranging from a
simple application for a digital stopwatch to a much more complex
application for aircraft navigation.
• At a minimum, an RTOS comprises a kernel, which is the core supervisory software
that provides minimal logic, scheduling, and resource-management algorithms.
Every RTOS has a kernel.
• On the other hand, an RTOS can be a combination of various modules,
including the kernel, a file system, networking protocol stacks, and other
components required for a particular application.
High-level view of an RTOS, its kernel, and other components found in embedded systems
• Most RTOS kernels contain the following components:

• Scheduler: contained within each kernel, the scheduler follows a set of
algorithms that determines which task executes when. Some common
examples of scheduling algorithms include round-robin and
preemptive scheduling.
• Objects: special kernel constructs that help developers create
applications for real-time embedded systems. Common kernel objects
include tasks, semaphores, and message queues.
• Services: operations that the kernel performs on an object or,
more generally, operations such as timing, interrupt handling, and
resource management.
Common components in an RTOS kernel including objects, the scheduler, and some services.
The Scheduler
• The scheduler is the heart of every kernel. A scheduler provides the
algorithms needed to determine which task executes when.
• Concepts covered in this section:
– Schedulable entities,
– Multitasking,
– Context switching,
– The dispatcher, and
– Scheduling algorithms.
Schedulable Entities
• A schedulable entity is a kernel object that can compete for
execution time on a system, based on a predefined
scheduling algorithm.
• Tasks and processes are examples of schedulable entities
found in most kernels.
• Note that message queues and semaphores are not
schedulable entities. These items are inter-task
communication objects used for synchronization and
communication.
Multitasking
• Multitasking is the ability of the operating system to handle multiple
activities within set deadlines. A real-time kernel might have multiple
tasks that it has to schedule to run.
• In this scenario, the kernel multitasks in such a way that many threads
of execution appear to be running concurrently; however, the kernel is
actually interleaving executions sequentially, based on a preset
scheduling algorithm. The scheduler must ensure that the appropriate
task runs at the right time.
• An important point to note here is that tasks follow the kernel's
scheduling algorithm, while interrupt service routines (ISRs) are
triggered to run because of hardware interrupts and their established
priorities.
• As the number of tasks to schedule increases, so do CPU
performance requirements. This fact is due to increased
switching between the contexts of the different threads of
execution.
The Context Switch
Each task has its own context, which is the state of the CPU
registers required each time it is scheduled to run. A
context switch occurs when the scheduler switches from
one task to another.
Every time a new task is created, the kernel also creates
and maintains an associated task control block (TCB).
TCBs are system data structures that the kernel uses to
maintain task-specific information.
TCBs contain everything a kernel needs to know about a
particular task. When a task is running, its context is highly
dynamic.
This dynamic context is maintained in the TCB. When the
task is not running, its context is frozen within the TCB, to
be restored
the next time the task runs.

The time it takes for the scheduler to switch from one
task to another is the context switch time.
Multitasking using a context switch.
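
To make the idea concrete, a TCB can be sketched as a C structure. This is a minimal sketch, not any particular kernel's layout; every field name below is illustrative, and real kernels keep additional, architecture-specific state.

/* Hypothetical sketch of a task control block (TCB). */
#include <stdint.h>

#define MAX_REGS 16

typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;

typedef struct tcb {
    uint32_t      task_id;        /* unique ID assigned at creation    */
    task_state_t  state;          /* ready, running, or blocked        */
    uint8_t       priority;       /* scheduling priority               */
    uint32_t      regs[MAX_REGS]; /* saved CPU registers (the context) */
    uint32_t     *stack_ptr;      /* saved stack pointer               */
    struct tcb   *next;           /* link in a ready or waiting list   */
} tcb_t;

On a context switch, the kernel saves the outgoing task's registers and stack pointer into its TCB and restores the incoming task's saved values, which is exactly the freeze-and-restore behavior described above.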
The Dispatcher
• The dispatcher is the part of the scheduler that performs context switching
and changes the flow of execution.
• At any time an RTOS is running, the flow of execution, also known as flow of
control, is passing through one of three areas: through an application task,
through an ISR, or through the kernel.
• When a task or ISR makes a system call, the flow of control passes to the
kernel to execute one of the system routines provided by the kernel. When
it is time to leave the kernel, the dispatcher is responsible for passing
control to one of the tasks in the user's application.
• It will not necessarily be the same task that made the system call. The
scheduling algorithms of the scheduler determine which task executes
next; it is the dispatcher that does the actual work of context switching
and passing execution control.
Process-Based Scheduling
• Scheduling approaches:
– Fixed-Priority Scheduling (FPS)
– Earliest Deadline First (EDF)
– Value-Based Scheduling (VBS)
Fixed-Priority Scheduling (FPS)
• This is the most widely used approach and is the main focus of
this course
• Each process has a fixed, static priority which is computed pre-
run-time
• The runnable processes are executed in the order determined by
their priority
• In real-time systems, the “priority” of a process is derived from
its temporal requirements, not its importance to the correct
functioning of the system or its integrity
Earliest Deadline First (EDF) Scheduling
• The runnable processes are executed in the order determined
by the absolute deadlines of the processes
• The next process to run being the one with the shortest
(nearest) deadline
• Although it is usual to know the relative deadlines of each
process (e.g. 25ms after release), the absolute deadlines are
computed at run time and hence the scheme is described as
dynamic.
Value-Based Scheduling (VBS)
• If a system can become overloaded then the use of simple
static priorities or deadlines is not sufficient; a more adaptive
scheme is needed
• This often takes the form of assigning a value to each process
and employing an on-line value-based scheduling algorithm to
decide which process to run next
Preemption and Non-preemption
• With priority-based scheduling, a high-priority process may be released during
the execution of a lower priority one
• In a preemptive scheme, there will be an immediate switch to the higher-priority
process
• With non-preemption, the lower-priority process will be allowed to complete
before the other executes
• Preemptive schemes enable higher-priority processes to be more reactive, and
hence they are preferred
• Alternative strategies allow a lower priority process to continue to execute for a
bounded time
• These schemes are known as deferred preemption or cooperative dispatching
• Schemes such as EDF and VBS can also take on a preemptive or non-preemptive
form
Schedulability
• Property indicating whether a real-time system (a set of real-
time tasks) can meet its deadlines

Example task set (period, execution time): T1 = (4,1), T2 = (5,2), T3 = (7,2).
Real-Time Scheduling
• Determines the order of real-time task executions
• Static-priority scheduling
• Dynamic-priority scheduling

Timeline figure: the task set T1 = (4,1), T2 = (5,2), T3 = (7,2) plotted over time 0 to 15.
RM (Rate Monotonic)
• Optimal static-priority scheduling
• It assigns priority according to period
• A task with a shorter period has a higher priority
• Executes a job with the shortest period

Timeline figure: RM schedule of T1 (4,1), T2 (5,2), T3 (7,2).
RM (Rate Monotonic)
• Executes a job with the shortest period
• Deadline Miss! With this task set, T3 (7,2) misses its deadline under RM.
Timeline figure: RM schedule of T1 (4,1), T2 (5,2), T3 (7,2), showing the deadline miss for T3.
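
This miss is consistent with the Liu and Layland utilization bound (a standard result, stated here as added context): n periodic tasks are guaranteed schedulable under RM if the total utilization satisfies

\[
U \;=\; \sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right).
\]

For this task set, U = 1/4 + 2/5 + 2/7 ≈ 0.936, while the bound for n = 3 is 3(2^{1/3} - 1) ≈ 0.780. The test fails; since the bound is sufficient but not necessary, that alone does not prove a miss, but the timeline confirms that T3 does miss its deadline under RM.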
Response Time
• Response time: the duration from a task's release time to its finish time.
Timeline figure: RM schedule of T1 (4,1), T2 (5,2), T3 (10,2), illustrating the response time of T3 from release to finish.
EDF (Earliest Deadline First)
• Optimal dynamic priority scheduling
• A task with a shorter deadline has a higher priority
• Executes a job with the earliest deadline

Timeline figure: EDF schedule of T1 (4,1), T2 (5,2), T3 (7,2).
EDF (Earliest Deadline First)
• Optimal dynamic-priority scheduling algorithm
– if a feasible schedule exists for a set of real-time tasks,
EDF can schedule it.

Timeline figure: EDF schedules T1 (4,1), T2 (5,2), T3 (7,2) with no deadline misses.
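
By contrast (again a standard result added for context), for periodic tasks whose deadlines equal their periods, EDF meets all deadlines if and only if total utilization is at most 1:

\[
U \;=\; \frac{1}{4} + \frac{2}{5} + \frac{2}{7} \;\approx\; 0.936 \;\le\; 1,
\]

so the same task set that misses under RM is schedulable under EDF, matching the timeline.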
Scheduling Algorithms
Two common scheduling algorithms are:
1. preemptive priority-based scheduling, and
2. round-robin scheduling.
Preemptive Priority-Based Scheduling

• With a preemptive priority-based scheduler, each task has a priority, and
the highest-priority task runs first: the task that gets to run at any point
is the task with the highest priority among all tasks ready to run in the
system.
• Real-time kernels generally support 256 priority levels, in which 0 is the
highest and 255 the lowest. Some kernels assign the priorities in reverse
order, where 255 is the highest and 0 the lowest.
• If a task with a priority higher than the current task becomes ready to run,
the kernel immediately saves the current task's context in its TCB and
switches to the higher-priority task. As shown in the figure, task 1 is
preempted by higher-priority task 2, which is then preempted by task 3.
When task 3 completes, task 2 resumes; likewise, when task 2 completes,
task 1 resumes.
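
As one concrete example, POSIX SCHED_FIFO provides preemptive, fixed-priority scheduling; the sketch below assumes a POSIX-capable kernel, and task_routine is an illustrative name. Note that POSIX numeric priorities run opposite to the 0-is-highest convention above (a higher number means a higher priority).

/* Sketch: creating a fixed-priority, preemptively scheduled thread. */
#include <pthread.h>
#include <sched.h>
#include <stddef.h>

static void *task_routine(void *arg)
{
    (void)arg;
    /* ... task work ... */
    return NULL;
}

int create_priority_task(pthread_t *tid, int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };

    pthread_attr_init(&attr);
    /* Preemptive fixed-priority policy: the highest-priority ready
     * thread always runs, preempting lower-priority ones. */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    /* Apply our attributes instead of inheriting the creator's. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    return pthread_create(tid, &attr, task_routine, NULL);
}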
Round-Robin Scheduling
With time slicing, each task executes for a defined interval, or
time slice, in an ongoing cycle, which is the round robin. A run-
time counter tracks the time slice for each task, incrementing
on every clock tick. When one task's time slice completes, the
counter is cleared, and the task is placed at the end of the
cycle. Newly added tasks of the same priority are placed at the
end of the cycle, with their run-time counters initialized to 0.

• Round-robin scheduling provides each task an equal share of the CPU
execution time.
Pure round-robin scheduling cannot satisfy real-time system requirements
because in real-time systems, tasks perform work of varying degrees of
importance.
Instead, preemptive, priority-based scheduling can be augmented with
round-robin scheduling, which uses time slicing to achieve equal allocation
of the CPU for tasks of the same priority, as shown in the figure.
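
As a concrete counterpart, POSIX SCHED_RR applies round-robin time slicing among threads of equal priority. The sketch below, which assumes the optional realtime scheduling support is present, simply queries the platform's round-robin time slice.

/* Sketch: inspecting the SCHED_RR time slice. */
#include <stdio.h>
#include <sched.h>
#include <time.h>

int main(void)
{
    struct timespec slice;

    /* Time slice granted to SCHED_RR threads; pid 0 means "self". */
    if (sched_rr_get_interval(0, &slice) == 0) {
        printf("round-robin time slice: %ld.%09ld s\n",
               (long)slice.tv_sec, slice.tv_nsec);
    }
    return 0;
}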
Objects
• Tasks are concurrent and independent threads of execution that can
compete for CPU execution time.

• Semaphores are token-like objects that can be incremented or
decremented by tasks for synchronization or mutual exclusion.

• Message queues are buffer-like data structures that can be used for
synchronization, mutual exclusion, and data exchange by passing
messages between tasks.
• Developers creating real-time embedded applications can combine these
basic kernel objects (as well as others not mentioned here) to solve
common real-time design problems, such as concurrency, activity
synchronization, and data communication.
Services
• Along with objects, most kernels provide services that help
developers create applications for real-time embedded
systems.
• These services comprise sets of API calls that can be used to
perform operations on kernel objects or can be used in
general to facilitate timer management, interrupt handling,
device I/O, and memory management.
• Again, other services might be provided; these services are
those most commonly found in RTOS kernels.
Key Characteristics of an RTOS
• Reliability,

• Predictability,

• Performance,

• Compactness, and

• Scalability.
Tasks
• Concurrent design requires developers to decompose an
application into small, schedulable, and sequential program
units. When done correctly, concurrent design allows system
multitasking to meet performance and timing requirements
for a real-time system.
• Most RTOS kernels provide task objects and task
management services to facilitate designing concurrency
within an application.
Defining a Task
• A task is an independent thread of execution that can
compete with other concurrent tasks for processor execution
time.
• That is, developers decompose applications into multiple
concurrent tasks to optimize the handling of inputs and
outputs within set time constraints.
• A task is schedulable. The task is able to compete for
execution time on a system, based on a predefined
scheduling algorithm.
A task is defined by its distinct set of parameters and
supporting data structures.
Specifically, upon creation, each task has an associated name,
a unique ID, a priority (if part of a preemptive scheduling
plan), a task control block (TCB), a stack, and a task routine.
Together, these components make up what is known as the
task object.
When the kernel first starts, it creates its own set of
system tasks and allocates the appropriate priority for
each from a set of reserved priority levels.
System tasks include:
 an initialization or startup task, which initializes the system and creates and
starts system tasks,
 an idle task, which uses up processor idle cycles when no other activity is
present,
 a logging task, which logs system messages,
 an exception-handling task, which handles exceptions, and
 a debug agent task, which allows debugging with a host debugger.

Task States and Scheduling
• Ready state: the task is ready to run but cannot because a
higher priority task is executing.
• Blocked state: the task has requested a resource that is
not available, has requested to wait until some event
occurs, or has delayed itself for some duration.
• Running state: the task is the highest priority task and is
running.
Running State
• Unlike a ready task, a running task can move to the blocked state
in any of the following ways:
• by making a call that requests an unavailable resource,

• by making a call that requests to wait for an event to occur, and

• by making a call to delay the task for some duration.

In each of these cases, the task is moved from the running state to
the blocked state.
Blocked State
• The possibility of blocked states is extremely important in
real-time systems because without blocked states, lower
priority tasks could not run.
• If higher priority tasks are not designed to block, CPU
starvation can result.
• CPU starvation occurs when higher priority tasks use all of
the CPU execution time and lower priority tasks do not get to
run.
• A task can only move to the blocked state by making a
blocking call, requesting that some blocking condition be
met, for example:
• a semaphore token for which a task is waiting is released,
• a message on which the task is waiting arrives in a message queue, or
• a time delay imposed on the task expires.
• When a task becomes unblocked, the task might move from the blocked
state to the ready state if it is not the highest priority task. The task is
then put into the task-ready list at the appropriate priority-based
location,
• However, if the unblocked task is the highest priority task, the task
moves directly to the running state (without going through the ready
state) and preempts the currently running task. The preempted task is
then moved to the ready state and put into the appropriate priority-
based location in the task-ready list.
Typical Task Operations
• creating and deleting tasks,

• controlling task scheduling, and

• obtaining task information.


Task Creation and Deletion
Operation: Description
Create: creates a task
Delete: deletes a task

• Developers typically create a task using one or two operations, depending on the
kernel's API.
• Some kernels allow developers first to create a task and then start it. In this case, the
task is first created and put into a suspended state;
• then, the task is moved to the ready state when it is started (made ready to run).
• Starting a task does not make it run immediately; it puts the task on the task-ready list.
• Many kernel implementations allow any task to delete any other task.
• During the deletion process, a kernel terminates the task and frees memory by deleting
the task’s TCB and stack.
• However, when tasks execute, they can acquire memory or
access resources using other kernel objects.
• If the task is deleted incorrectly, the task might not get to
release these resources
Abrupt deletion of an operating task can result in:
• a corrupt data structure, due to an incomplete write operation,
• an unreleased semaphore, which will not be available for other
tasks that might need to acquire it, and
• an inaccessible data structure, due to the unreleased semaphore.
• As a result, premature deletion of a task can result in memory or
resource leaks.
• A memory leak occurs when memory is acquired but not
released, which causes the system to run out of memory
eventually.
• A resource leak occurs when a resource is acquired but
never released, which results in a memory leak because
each resource takes up space in memory.
• Many kernels provide task-deletion locks, a pair of calls that
protect a task from being prematurely deleted during a
critical section of code.
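
As an analogy only (POSIX has no "task-deletion lock" by that name), thread cancellation control can play the same role: disabling cancellation around a critical section prevents the thread from being destroyed mid-update. A minimal sketch:

/* Sketch: a deletion-lock analogue using POSIX cancelability. */
#include <pthread.h>

void update_shared_structure(void)
{
    int oldstate;

    /* "Deletion lock": this thread cannot be cancelled in here. */
    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);

    /* ... complete the write, release semaphores and memory ... */

    /* "Deletion unlock": restore the previous cancelability. */
    pthread_setcancelstate(oldstate, NULL);
}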
Task Scheduling
• Many kernels provide a set of API calls that allow developers to
control when a task moves to a different state. This capability is
called manual scheduling.
Obtaining Task Information
Synchronization, Communication, and
Concurrency
• Kernel objects used for synchronization, communication, and
concurrency include semaphores, message queues, signals, and
pipes, as well as other types of objects.
Semaphores
• A semaphore (sometimes called a semaphore token) is a
kernel object that one or more threads of execution can
acquire or release for the purposes of synchronization or
mutual exclusion.
• When a semaphore is first created, the kernel assigns to it
an associated semaphore control block (SCB), a unique ID, a
value (binary or a count), and a task-waiting list, as shown
in the figure.
• A semaphore is like a key that allows a task to carry out
some operation or to access a resource.
• If the task can acquire the semaphore, it can carry out the
intended operation or access the resource.
• A single semaphore can be acquired a finite number of
times.
• When a semaphore's limit is reached, it can no longer be
acquired until someone gives a key back or releases the
semaphore.
• The kernel tracks the number of times a semaphore has
been acquired or released by maintaining a token count,
which is initialized to a value when the semaphore is created.
• If the token count reaches 0, the semaphore has no tokens left. A
requesting task, therefore, cannot acquire the semaphore, and the task
blocks if it chooses to wait for the semaphore to become available.
• The task-waiting list tracks all tasks blocked while waiting on an
unavailable semaphore.
• These blocked tasks are kept in the task-waiting list in either first in/first
out (FIFO) order or highest priority first order
• When an unavailable semaphore becomes available, the kernel allows the
first task in the task-waiting list to acquire it.
• The kernel moves this unblocked task either to the running state, if it is
the highest priority task, or to the ready state, until it becomes the
highest priority task and is able to run.
• Note that the exact implementation of a task-waiting list can vary from
one kernel to another.
A kernel can support many different types of semaphores, including binary, counting, and mutual-exclusion
(mutex) semaphores.

• A binary semaphore can have a value of either 0 or 1. When
a binary semaphore's value is 0, the semaphore is
considered unavailable (or empty); when the value is 1, the
binary semaphore is considered available (or full).
• Note that when a binary semaphore is first created, it can be
initialized to either available or unavailable (1 or 0,
respectively).
Counting Semaphores
• A counting semaphore uses a count to allow it to be acquired or released multiple
times.
• When creating a counting semaphore, assign the semaphore a count that denotes
the number of semaphore tokens it has initially. If the initial count is 0, the
counting semaphore is created in the unavailable state.
• If the count is greater than 0, the semaphore is created in the available state, and
the number of tokens it has equals its count.
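
A minimal sketch of a counting semaphore using the POSIX API; the name buffer_tokens and the initial count of 4 are illustrative:

/* Sketch: a counting semaphore guarding four identical buffers. */
#include <semaphore.h>

#define NUM_BUFFERS 4

static sem_t buffer_tokens;

void init_buffers(void)
{
    /* Initial count 4: created in the available state with 4 tokens.
     * An initial count of 0 would create it unavailable. */
    sem_init(&buffer_tokens, 0, NUM_BUFFERS);
}

void use_buffer(void)
{
    sem_wait(&buffer_tokens);   /* acquire one token (may block) */
    /* ... use one of the buffers ... */
    sem_post(&buffer_tokens);   /* release the token             */
}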
Mutual Exclusion (Mutex) Semaphores
• A mutual exclusion (mutex) semaphore is a special binary semaphore
that supports ownership, recursive access, task deletion safety, and
one or more protocols for avoiding problems inherent to mutual
exclusion.
• As opposed to the available and unavailable states in binary and
counting semaphores, the states of a mutex are unlocked or locked (0
or 1, respectively). A mutex is initially created in the unlocked state, in
which it can be acquired by a task. After being acquired, the mutex
moves to the locked state. Conversely, when the task releases the
mutex, the mutex returns to the unlocked state.
• Note that some kernels might use the terms lock and unlock for a
mutex instead of acquire and release.
Ownership of a mutex is gained when a task first locks the mutex by acquiring it. Conversely, a task loses ownership
of the mutex when it unlocks it by releasing it. When a task owns the mutex, it is not possible for any other task to
lock or unlock that mutex. Contrast this concept with the binary semaphore, which can be released by any task, even
a task that did not originally acquire the semaphore.
Many mutex implementations also support recursive locking, which allows the task that owns the mutex to acquire
it multiple times in the locked state. Depending on the implementation, recursion within a mutex can be automatically
built into the mutex, or it might need to be enabled explicitly when the mutex is first created.
The mutex with recursive locking is called a recursive mutex. This type of mutex is most useful when a task
requiring exclusive access to a shared resource calls one or more routines that also require access to the same
resource.
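
With POSIX mutexes, for example, recursion is one of the attributes that must be enabled explicitly when the mutex is created; a minimal sketch, with illustrative function names:

/* Sketch: explicit creation and use of a recursive mutex. */
#include <pthread.h>

static pthread_mutex_t res_mutex;

void init_recursive_mutex(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&res_mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}

void routine_a(void)
{
    pthread_mutex_lock(&res_mutex);   /* same owner: count -> 2  */
    /* ... access the shared resource ... */
    pthread_mutex_unlock(&res_mutex); /* count -> 1              */
}

void access_task(void)
{
    pthread_mutex_lock(&res_mutex);   /* count -> 1              */
    routine_a();                      /* recursive lock is legal */
    pthread_mutex_unlock(&res_mutex); /* count -> 0: unlocked    */
}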
Some mutex implementations also have built-in task deletion safety. Premature task deletion is avoided by
using task deletion locks when a task locks and unlocks a mutex. Enabling this capability within a mutex
ensures that while a task owns the mutex, the task cannot be deleted.
• Priority inversion commonly happens in poorly designed real-time embedded
applications.
• Priority inversion occurs when a higher priority task is blocked and is waiting
for a resource being used by a lower priority task, which has
itself been preempted by an unrelated medium-priority task. In this situation, the
higher priority task's priority level has effectively been inverted to the lower
priority task's level.
• The priority inheritance protocol ensures that the priority level of the lower
priority task that has acquired the mutex is raised to that of the higher priority
task that has requested the mutex when inversion happens. The priority of the
raised task is lowered to its original value after the task releases the mutex that
the higher priority task requires.
• The ceiling priority protocol ensures that the priority level of the task that acquires
the mutex is automatically set to the highest priority of all possible tasks that
might request that mutex, from the moment it is first acquired until it is released.
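
Both protocols appear in POSIX as mutex attributes; a minimal sketch, assuming the implementation supports the priority-protection options (the ceiling value of 90 is illustrative):

/* Sketch: choosing an inversion-avoidance protocol per mutex. */
#include <pthread.h>

void init_protocol_mutexes(pthread_mutex_t *pi_mutex,
                           pthread_mutex_t *pc_mutex)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);

    /* Priority inheritance: the owner is boosted to the priority of
     * the highest-priority task blocked on the mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(pi_mutex, &attr);

    /* Ceiling priority: the owner runs at the mutex's fixed ceiling
     * for as long as it holds the mutex. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
    pthread_mutexattr_setprioceiling(&attr, 90);
    pthread_mutex_init(pc_mutex, &attr);

    pthread_mutexattr_destroy(&attr);
}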
Typical Semaphore Operations
• creating and deleting semaphores,
• acquiring and releasing semaphores,
• clearing a semaphore's task-waiting list, and
• getting semaphore information.

• Tasks typically make a request to acquire a semaphore in one of the following ways
(see the sketch after this list):
• Wait forever: the task remains blocked until it is able to acquire the semaphore.
• Wait with a timeout: the task remains blocked until it is able to acquire the semaphore or
until a set interval of time, called the timeout interval, passes. At this point, the
task is removed from the semaphore's task-waiting list and put in either the ready
state or the running state.
• Do not wait: the task makes a request to acquire a semaphore token, but, if one is not
available, the task does not block.
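
The three request styles map naturally onto the POSIX calls sem_wait, sem_timedwait, and sem_trywait; a minimal sketch (the 2-second timeout is illustrative, and each successfully acquired token is returned immediately just to keep the demo balanced):

/* Sketch: wait forever, wait with timeout, and do-not-wait requests. */
#include <semaphore.h>
#include <time.h>
#include <errno.h>

void acquire_examples(sem_t *sem)
{
    struct timespec deadline;

    /* Wait forever. */
    sem_wait(sem);
    sem_post(sem);                      /* give the token back */

    /* Wait with a timeout (absolute deadline ~2 s from now). */
    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 2;
    if (sem_timedwait(sem, &deadline) == -1 && errno == ETIMEDOUT) {
        /* timeout interval passed without acquiring the token */
    } else {
        sem_post(sem);
    }

    /* Do not wait. */
    if (sem_trywait(sem) == -1 && errno == EAGAIN) {
        /* token unavailable; the task does not block */
    } else {
        sem_post(sem);
    }
}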
Clearing Semaphore Task-Waiting Lists
• The flush operation is useful for broadcast signaling to a group of
tasks. For example, a developer might design multiple tasks to
complete certain activities first and then block while trying to
acquire a common semaphore that is made unavailable.
• After the last task finishes doing what it needs to, the task can
execute a semaphore flush operation on the common semaphore.
This operation frees all tasks waiting in the semaphore's task-waiting
list.
• The synchronization scenario just described is also called thread
rendezvous, when multiple tasks' executions need to meet at some
point in time to synchronize execution control.
Getting Semaphore Information

• At some point in the application design, developers need to
obtain semaphore information to perform monitoring or
debugging.
Typical Semaphore Use
Semaphores are useful either for synchronizing execution of
multiple tasks or for coordinating access to a shared resource.
Typical uses include:
• wait-and-signal synchronization,
• multiple-task wait-and-signal synchronization,
• credit-tracking synchronization,
• single shared-resource-access synchronization,
• recursive shared-resource-access synchronization, and
• multiple shared-resource-access synchronization.
Figure: Wait-and-signal synchronization between two tasks.

tWaitTask ()
{
   :
   Acquire binary semaphore token
   :
}

tSignalTask ()
{
   :
   Release binary semaphore token
   :
}

Figure: Wait-and-signal synchronization between multiple tasks.

tWaitTask ()
{
   :
   Do some processing specific to task
   Acquire binary semaphore token
   :
}

tSignalTask ()
{
   :
   Do some processing
   Flush binary semaphore's task-waiting list
   :
}
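
Rendering the two-task pattern concretely: a minimal, runnable POSIX sketch (task names mirror the pseudocode; an initial count of 0 creates the semaphore unavailable, so the wait task blocks until signalled):

/* Sketch: wait-and-signal synchronization with a POSIX semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sync_sem;

static void *wait_task(void *arg)
{
    (void)arg;
    sem_wait(&sync_sem);              /* blocks until signalled */
    puts("tWaitTask: resumed");
    return NULL;
}

static void *signal_task(void *arg)
{
    (void)arg;
    puts("tSignalTask: releasing token");
    sem_post(&sync_sem);              /* release the token */
    return NULL;
}

int main(void)
{
    pthread_t w, s;

    sem_init(&sync_sem, 0, 0);        /* created unavailable */
    pthread_create(&w, NULL, wait_task, NULL);
    pthread_create(&s, NULL, signal_task, NULL);
    pthread_join(w, NULL);
    pthread_join(s, NULL);
    sem_destroy(&sync_sem);
    return 0;
}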
Figure: Credit-tracking synchronization between two tasks.

tWaitTask ()
{
   :
   Acquire counting semaphore token
   :
}

tSignalTask ()
{
   :
   Release counting semaphore token
   :
}

Figure: Single shared-resource-access synchronization.

tAccessTask ()
{
   :
   Acquire binary semaphore token
   Read or write to shared resource
   Release binary semaphore token
   :
}
Figure: Recursive shared-resource-access synchronization.

tAccessTask ()
{
   :
   Acquire mutex
   Access shared resource
   Call Routine A
   Release mutex
   :
}

Routine A ()
{
   :
   Acquire mutex
   Access shared resource
   Call Routine B
   Release mutex
   :
}

Routine B ()
{
   :
   Acquire mutex
   Access shared resource
   Release mutex
   :
}
Figure: Multiple shared-resource-access synchronization.

tAccessTask1 ()
{
   :
   Acquire a counting semaphore token
   Read or Write to shared resource
   Release a counting semaphore token
   :
}

tAccessTask2 ()
{
   :
   Acquire first mutex in non-blocking way
   If not successful then acquire 2nd mutex in a blocking way
   Read or Write to shared resource
   Release the acquired mutex
   :
}
Message Queues
• A message queue is a buffer-like object through which tasks
and ISRs send and receive messages to communicate and
synchronize with data. A message queue is like a pipeline.
• It temporarily holds messages from a sender until the
intended receiver is ready to read them.
• This temporary buffering decouples a sending and receiving
task; that is, it frees the tasks from having to send and
receive messages simultaneously.
A message queue, its associated parameters, and supporting data structures

When a message queue is first created, it is assigned an associated queue control block (QCB), a message queue name,
a unique ID, memory buffers, a queue length, a maximum message length, and one
or more task-waiting lists.
It is the kernel's job to assign a unique ID to a message queue and to create its
QCB and task-waiting list.
The kernel also takes developer-supplied parameters such as the length of the
queue and the maximum message length to determine how much memory is
required for the message queue.
After the kernel has this information, it allocates memory for the message queue
from either a pool of system memory or some private memory space.
The message queue itself consists of a number of elements, each of which can
hold a single message. The elements holding the first and last messages are called
the head and tail, respectively.
Some elements of the queue may be empty (not containing a message).
The total number of elements (empty or not) in the queue is the total length of the
queue. The developer specifies the queue length when the queue is created.
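
As a concrete counterpart, a minimal sketch using POSIX message queues, assuming the platform implements them; the queue name /demo_q, the length of 8 elements, and the 64-byte maximum message length are illustrative developer-supplied parameters like those described above:

/* Sketch: create a queue, send one message, receive it back. */
#include <mqueue.h>
#include <fcntl.h>

int mq_demo(void)
{
    struct mq_attr attr = {
        .mq_maxmsg  = 8,     /* queue length (number of elements) */
        .mq_msgsize = 64     /* maximum message length in bytes   */
    };
    mqd_t q = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1)
        return -1;

    const char msg[] = "sensor reading";
    char buf[64];                       /* must hold mq_msgsize bytes */

    mq_send(q, msg, sizeof msg, 0);     /* final 0 = message priority */
    mq_receive(q, buf, sizeof buf, NULL);

    mq_close(q);
    mq_unlink("/demo_q");
    return 0;
}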
The state diagram for a message queue.
Message copying and memory use for sending and receiving
messages.
Message Queue Storage

System Pools
• Using a system pool can be advantageous if it is certain that all message queues
will never be filled to capacity at the same time. The advantage occurs because
system pools typically save on memory use. The downside is that a message
queue with large messages can easily use most of the pooled memory, not leaving
enough memory for other message queues. Indications that this problem is
occurring include a message queue that is not full that starts rejecting messages
sent to it or a full message queue that continues to accept more messages.
• Private Buffers
• Using private buffers, on the other hand, requires enough reserved memory area
for the full capacity of every message queue that will be created. This approach
clearly uses up more memory; however, it also ensures that messages do not get
overwritten and that room is available for all messages, resulting in better
reliability than the pool approach.
Typical Message Queue Operations
Typical message queue operations include the following:
• creating and deleting message queues,
• sending and receiving messages, and
• obtaining message queue information.
Figure: Sending messages in FIFO or LIFO order.
Figure: FIFO and priority-based task-waiting lists.
Typical Message Queue Use
• non-interlocked, one-way data communication,
• interlocked, one-way data communication,
• interlocked, two-way data communication, and
• broadcast communication.
Figure: Non-interlocked, one-way data communication.

tSourceTask ()
{
   :
   Send message to message queue
   :
}

tSinkTask ()
{
   :
   Receive message from message queue
   :
}

Figure: Interlocked, one-way data communication.

tSourceTask ()
{
   :
   Send message to message queue
   Acquire binary semaphore
   :
}

tSinkTask ()
{
   :
   Receive message from message queue
   Give binary semaphore
   :
}
Figure: Interlocked, two-way data communication.

tClientTask ()
{
   :
   Send a message to the requests queue
   Wait for message from the server queue
   :
}

tServerTask ()
{
   :
   Receive a message from the requests queue
   Send a message to the client queue
   :
}

Figure: Broadcasting messages.

tBroadcastTask ()
{
   :
   Send broadcast message to queue
   :
}

tSignalTask ()
{
   :
   Receive message on queue
   :
}

Note: similar code for tSignalTasks 1, 2, and 3.
