Chapter Three
A software framework is a concrete or conceptual platform where common code with generic functionality can be selectively specialized or overridden by developers.
Frameworks take the form of libraries, where a well-defined application program interface (API) is reusable anywhere within the software under development.
A framework can make your job easier and save you the trouble of writing your code totally from scratch. The right framework can also reduce your development time by making it faster to design and troubleshoot.
Real-Time Embedded Framework
A Real-Time Embedded Framework (RTEF) is an implementation of the Active Object design
pattern specifically designed for real-time embedded systems. RTEF provides the event-driven
infrastructure for executing Active Objects based on a real-time kernel (RTOS kernel) to
ensure deterministic, real-time performance.
Active Objects can be combined with a wide variety of threading models, including real-time kernels (RTOS kernels). In the latter case, the combination, carefully designed for deterministic performance, is called a Real-Time Embedded Framework (RTEF).
The main difference between an RTOS and an RTEF is that when you use a "naked" RTOS, you write the main body of each and every thread in the application, and from there you call the various RTOS services (e.g., a time delay or a semaphore).
When you use an RTEF, you reuse the overall architecture (such as the event loop for all private threads of Active Objects) and you only write the code that the RTEF calls. This leads to inversion of control, which allows the RTEF to automatically enforce the best practices of concurrent programming. In contrast, a "naked" RTOS lets you do anything and offers no help or automation for the best practices.
Smaller than a Traditional RTOS
In resource-constrained embedded systems, the biggest concern has always been the size and efficiency of such RTEFs, especially since the frameworks accompanying various UML tools have traditionally been built on top of a conventional RTOS, which can only add memory footprint and CPU overhead to the final solution.
However, it turns out that an RTEF can actually be smaller than a traditional RTOS. This is possible because Active Objects don't need to block internally, so most blocking mechanisms (e.g., semaphores) of a conventional RTOS are not needed.
RTOS (blocking): time delays, threads, semaphores, message queues, mutexes, event-flags, mailboxes
RTEF (non-blocking): time events, active objects, event pools, publish-subscribe, state machines
Preemptive, Non-Blocking Real-Time Kernels
While the Active Object model can work with traditional blocking RTOS kernels, it can also work with other types of real-time kernels. Specifically, the non-blocking, run-to-completion Active Objects can work with much simpler, fully preemptive, non-blocking, run-to-completion kernels (see also basic tasks in OSEK/VDX).
The QP RTEFs natively include a selection of such super-simple and super-fast kernels (QK and QXK), which support fully preemptive multithreading using a single stack for all Active Object threads.
Higher Level of Abstraction at Lower Cost
A real-time operating system depends entirely on clock interrupts.
An RTOS implements a priority system for executing all types of processes.
The entire RTOS is synchronized with its processes, and the processes can communicate with one another.
A real-time operating system (RTOS) is an operating system intended to serve real-time applications that process data as it comes in, mostly without buffering delays.
In an RTOS, processing-time requirements are measured in tenths of seconds or shorter increments of time.
In this type of system, processing must be done within the specified constraints; otherwise, the system will fail.
❖ When we hear the words "operating system," what first comes to mind is the operating system used in laptops and computers.
❖ Generally, we use different desktop operating systems such as Windows XP, Linux, Ubuntu, and Windows 7, 8, 8.1, and 10.
❖ In smartphones, the operating systems are versions such as KitKat, Jelly Bean, Marshmallow, and Nougat.
❖ In a digital electronic device, there is some sort of operating system implemented by the microcontroller program.
❖ There are different types of operating systems developed for microcontrollers, but here we discuss the real-time operating system.
3.2 Features of RTOS
• Operation in an unpredictable environment
• The kernel saves the state of the interrupted task and then determines which task it should run next.
• The kernel restores the state of that task and passes control of the CPU to it.
Characteristics of Real-Time Operating Systems
The characteristics differ between hard RTOS and soft RTOS.
The prime function of an RTOS is better management of RAM and the processor, as well as providing access to all system resources.
Higher-Priority Scheduler:
A real-time OS provides many priority levels (typically in the range of 32 to 256) for executing tasks. The scheduler activates the process with the highest priority: if a task with a higher priority than the currently executing task becomes ready, the scheduler switches to that higher-priority task.
The system clock interrupt routine helps perform highly time-sensitive instructions in an RTOS using system clocks, allotting time frames for performing specific tasks.
• Deterministic Nature:
• A real-time OS behaves deterministically even with large numbers of tasks (say, 100 to 1000); it determines the next highest-priority task and executes it without delay.
• For communication among all the tasks of one system and with other systems, an RTOS uses synchronization and messaging. Event flags synchronize internal activities, while text messages can be sent using mailboxes, pipes, and message queues.
Types of RTOS
There are three different types of RTOS, which are described in the following sections to some extent. Examples of devices that use them are digital cameras and mobile phones.
[Figure: Operator, Real-Time Computer System, Controlled Object]
Hard Real-Time and Firm Real-Time Operating Systems
Hard Real-Time Operating System
This is also a type of OS, and it is bound by a deadline; the predicted deadlines must be met, reacting at a time t = 0. Some examples of this operating system are air-bag control in cars, anti-lock brakes, and engine control systems.
In firm real-time, an operating system has certain time constraints that are not strict, and missing them may cause undesired effects. An example of this operating system is visual inspection in industrial automation.
An RTOS provides several basic functionalities, described in the following sections.
Blocked
In this state, if a task does not have enough resources to run, it is sent to the blocked state.
Three techniques are used to schedule tasks; they are described below.
Cooperative Scheduling
In this type of scheduling, the task runs until its execution is completed.
Round-Robin Scheduling
In this scheduling, each process is assigned a fixed time slot and the process needs to complete its execution in it, or else the task loses its flow and data generation.
Preemptive Scheduling
Generally, 256 priority levels are used and each task has a unique priority level. Some systems support more priority levels, and in those systems multiple tasks may share a priority.
System Clock Interrupt Routine
To perform time-sensitive operations, the RTOS provides some sort of system clock. For example, with a 1 ms system clock you can schedule a task to complete in 50 ms. Usually there is an API that allows you to say "wake me up in 50 ms"; the task then sleeps until the RTOS wakes it up. Note that being woken up does not ensure the task runs at exactly that time: it depends on priority, and if a higher-priority task is currently running, the woken task would be delayed.
Deterministic Behavior
The RTOS goes to great lengths to ensure that whether you have 10 tasks or 100, it makes no difference in the time it takes to switch context and determine the next highest-priority task. Another prime area of determinism in an RTOS is interrupt handling: when an interrupt line is signaled, the RTOS immediately takes action, executes the correct interrupt service routine (ISR), and the interrupt is handled without delay.
Note that the developers of the project write the hardware-specific ISRs. The RTOS typically supplies ISRs for serial ports, system clocks, and perhaps networking hardware, but anything specialized, such as pacemaker signals or actuators, is not part of the RTOS.
These are gross generalizations, and there is a large variety of implementations among RTOSes; some operate differently from the descriptions above.
Synchronization and Messaging
Synchronization and messaging provide communication between tasks of one system and another, and the messaging services are listed below. To synchronize internal activities, event flags are used; to send text messages we can use mailboxes, pipes, and message queues. Semaphores are used to protect common data areas.
• Semaphores
• Event flags
• Mailboxes
• Pipes
• Message queues
RTOS Service
• Scheduler
• Synchronization mechanism
• Memory Management
• Function Library
• Interrupt Service Mechanism
• I/O Management
• Fast dispatch latency
• User-defined data objects and classes
• Hardware abstraction layer
• Development environment
• Board support package
Scheduling Algorithms for RTOS
• Unique features: A good RTOS should be capable and have extra features, such as how it operates to execute a command, efficient protection of system memory, etc.
• 24/7 performance: An RTOS is ideal for applications that must run 24/7.
Difference between GPOS and RTOS
❖ GPOS: used for desktop PCs and laptops. RTOS: applied only to embedded applications.
❖ GPOS: no priority-inversion mechanism is present. RTOS: a priority-inversion mechanism is present.
❖ GPOS: the kernel's operations may or may not be preemptible. RTOS: the kernel's operations can be preempted.
Disadvantages of an RTOS
1. Limited tasks – Very few tasks run simultaneously, and the system concentrates on few applications to avoid errors.
2. Heavy use of system resources – An RTOS can consume considerable system resources, which are expensive as well.
3. Complex algorithms – The algorithms are very complex and difficult for the designer to write.
4. Thread priority – It is not good to set thread priorities, as these systems rarely switch tasks.
Inter-process communication (IPC) is a set of programming interfaces that allows a programmer to coordinate activities among various program processes that can run concurrently in an operating system.
This allows a specific program to handle many user requests at the same time.
Since every single user request may result in multiple processes running in the operating system, the processes may need to communicate with each other.
Each IPC approach has its own advantages and limitations, so it is not unusual for a single program to use all of the IPC methods.
Approaches for Inter-Process Communication
Pipes
A pipe is a unidirectional channel used for IPC; it works like a bucket: data written into the pipe by one process is retrieved by another. With a POSIX pipe, write() uses descriptor p[1] and read() uses descriptor p[0].
Message Passing: Message Queue
An RTOS kernel manages a microcontroller (MCU), a microprocessing unit (MPU), or even a digital signal processor (DSP) as efficiently as possible. Most RTOS kernels are written in C and require a small portion of code written in assembly language to adapt the kernel to different CPU architectures.
An RTOS kernel provides many useful services to a programmer, such as multitasking, interrupt management, inter-task communication through message queues, signaling, resource management, time management, memory-partition management, and more.
The application (i.e., the end product) is basically split into multiple tasks, each one responsible for a portion of the application. A task is a simple program that thinks it has the CPU all to itself. Each task is assigned a priority based on its importance.
Direct Communication:
In this type of inter-process communication process, should name each other
explicitly. In this method, a link is established between one pair of communicating
processes, and between each pair, only one link exists.
Indirect Communication:
In indirect communication, messages are sent to and received from mailboxes, with each pair of processes possibly sharing several communication links. A link can communicate with many processes, and it may be bi-directional or unidirectional.
Shared Memory:
Shared memory is a region of memory shared between two or more processes; communication is established by using that shared region among all the processes.
This type of memory requires the processes to be protected from each other by synchronizing access across all the processes.
FIFO:
A FIFO (named pipe) is like a pipe, but it has a name in the file system, so unrelated processes can use it to communicate.
1. Periodic Tasks
o Φi – the phase of the task. The phase is the release time of the first job in the task. If the phase is not mentioned, the release time of the first job is assumed to be zero.
o Pi – the period of the task, i.e., the time interval between the release times of two consecutive jobs.
For example, consider the task Ti with period = 5 and execution time = 3.
The phase is not given, so assume the release time of the first job is zero. The first job of this task is released at t = 0; it executes for 3 s, then the next job is released at t = 5, which executes for 3 s, and the next job is released at t = 10. So jobs are released at t = 5k, where k = 0, 1, ..., n.
2. Dynamic Tasks
• Dynamic tasks arrive at arbitrary times in response to events external or internal to the system. Dynamically arriving tasks can be categorized by their criticality and by knowledge about their occurrence times.
1. Aperiodic Tasks: In this type of task, jobs are released at arbitrary time intervals, i.e., randomly. Aperiodic tasks have soft deadlines or no deadlines.
2. Sporadic Tasks: They are similar to aperiodic tasks, i.e., they repeat at random instants. The only difference is that sporadic tasks have hard deadlines. A sporadic task is denoted by a three-tuple Ti = (ei, gi, Di), where
ei – the execution time of the task,
gi – the minimum separation between the occurrences of two consecutive instances of the task,
Di – the relative deadline of the task.
3.6 Dynamic Allocation of Tasks
One common way of implementing hard real-time systems is to use a cyclic executive. Real-time scheduling approaches include cyclic scheduling and priority scheduling.
3.7.1 Cyclic scheduling
Advantages
Simple implementation (no real-time operating system is required).
Low run-time overhead.
It allows jitter control.
Disadvantages
It is not robust during overloads.
It is difficult to expand the schedule.
It is not easy to handle non-periodic activities.
3.7.2 Priority-based scheduling
In this algorithm, the scheduler selects the tasks to work as per the priority.
The processes with higher priority should be carried out first, whereas jobs with equal priorities are carried
out on a round-robin or FCFS basis.
Preemptive Scheduling
In Preemptive Scheduling, the tasks are mostly assigned with their priorities. Sometimes it is important to run a
task with a higher priority before another lower priority task, even if the lower priority task is still running. The
lower priority task holds for some time and resumes when the higher priority task finishes its execution.
Non-Preemptive Scheduling
In this type of scheduling method, the CPU has been allocated to a specific process. The process that keeps the
CPU busy, will release the CPU either by switching context or terminating. It is the only method that can be
used for various hardware platforms. That’s because it doesn’t need special hardware (for example, a timer)
like preemptive scheduling.
Characteristics of Priority Scheduling
• A CPU algorithm that schedules processes based on priority.
• If two jobs having the same priority are READY, it works on a first-come, first-served basis.
• In priority scheduling, a number is assigned to each process that indicates its priority level.
• The lower the number, the higher the priority.
• In this type of scheduling algorithm, if a newer process arrives that has a higher priority than the currently running process, then the currently running process is preempted.
Example
Process  Priority  Burst Time  Arrival Time
P1       1         4           0
P2       2         3           0
P3       1         7           6
P4       3         4           11
P5       2         2           12
Step 0) At time = 0, processes P1 and P2 arrive. P1 has higher priority than P2. Execution begins with process P1, which has burst time 4.
Priority Scheduling
Advantages of priority scheduling
• Easy-to-use scheduling method.
• Processes are executed on the basis of priority, so a high-priority process does not need to wait long, which saves time.
• This method provides a good mechanism in which the relative importance of each process may be precisely defined.
• Suitable for applications with fluctuating time and resource requirements.
Disadvantages of priority scheduling
• Lower-priority processes may starve and be postponed for an indefinite time.
• This scheduling algorithm may leave some low-priority processes waiting indefinitely.
• A process will be blocked when it is ready to run but has to wait for the CPU because some other process is running currently.
• If new higher-priority processes keep coming into the ready queue, then a process in the waiting state may need to wait for a long duration of time.
• If the system eventually crashes, all low-priority processes get lost.
Mutual exclusion can be implemented by enabling/disabling interrupts, preventing the scheduler from switching tasks (a.k.a. locking/unlocking the scheduler), semaphores, and mutual-exclusion semaphores.
If the application is accessing simple variables and can do so in just a few CPU clock cycles, then it is probably best to disable interrupts, access the variables, and re-enable interrupts.
If accessing the resource will require thousands of CPU cycles, then it is best to use mutual-exclusion semaphores (a.k.a. mutexes) because they avoid unbounded priority inversions, an issue that can occur when using (regular) semaphores as they were defined by Edsger Dijkstra in the 1960s.
11. Synchronization techniques among threads
When you create code that is thread-safe but still benefits from sharing data or resources between threads, synchronization becomes essential.
Synchronization is the cooperative act of two or more threads that ensures each thread reaches a known point of operation in relationship to other threads before continuing. Attempting to share resources without correctly using synchronization is the most common cause of damage to application data.
Typically, synchronizing two threads involves the use of one or more synchronization primitives. Synchronization primitives are low-level functions or application objects (not IBM® i objects) that your application uses or creates to provide the synchronization behavior that the application requires.
• Here are the most common synchronization primitives, in order of least to most computationally expensive:
• Object locks
3.11 Synchronization techniques in Distributed Systems
Distributed System is a collection of computers connected via the high-speed
communication network.
In the distributed system, the hardware and software components communicate and
coordinate their actions by message passing.
Each node in a distributed system can share its resources with other nodes. So there is a need for proper allocation of resources to preserve the state of resources and to help coordinate among the several processes.
To resolve such conflicts, synchronization is used. Synchronization in distributed systems is achieved via clocks.
Physical clocks are used to adjust the time of nodes. Each node in the system can share its local time with other nodes in the system. The time is set based on UTC (Coordinated Universal Time).
UTC is used as a reference time clock for the nodes in the system.
Clock synchronization can be achieved in two ways: external and internal clock synchronization.
Centralized synchronization uses a single time server; examples of centralized algorithms are the Berkeley Algorithm, Passive Time Server, Active Time Server, etc.
Distributed algorithms are those in which no centralized time server is present. Instead, the nodes adjust their time by using their local time and then taking the average of the differences of time with other nodes. Distributed algorithms overcome issues of centralized algorithms such as scalability and the single point of failure.
Examples of distributed algorithms are the Global Averaging Algorithm, Localized Averaging Algorithm, NTP (Network Time Protocol), etc.
Logical Clocks
Logical clocks refer to implementing a protocol on all machines within your distributed system, so that the machines are able to maintain a consistent ordering of events within some virtual timespan. A logical clock is a mechanism for capturing chronological and causal relationships in a distributed system. Distributed systems may have no physically synchronous global clock, so a logical clock allows a global ordering of events from different processes in such systems.
3.12 Real-Time Applications
❖ A real-time system is a system subject to real-time constraints, which means a response is obtained within a specified timing constraint, or the system meets the specified deadline.
❖ Real-time systems are of two types – hard and soft. Both are used in different cases.
❖ Hard real-time systems are used where even a delay of some nanoseconds or microseconds is not allowed.
❖ Soft real-time systems provide some relaxation in the time constraints.
5. Defense applications:
In the new atomic era, defense forces are able to produce missiles with dangerous power and great destructive ability. All these systems are real-time systems, providing both systems to attack and systems to defend. Some defense applications of real-time systems are missile guidance systems, anti-missile systems, satellite missile systems, etc.
6. Aerospace applications:
The most powerful use of real-time systems is in aerospace applications. Basically, hard real-time systems are used in aerospace applications: here the delay of even some nanoseconds is not allowed, and if it happens, the system fails. Some applications of real-time systems in aerospace are satellite tracking systems, avionics, flight simulation, etc.
3.13 RTOS SUPPORT: SEMAPHORES, QUEUES AND EVENTS
In 1965, to manage concurrent processes in operating systems, the Dutch scientist Edsger Dijkstra proposed a new technique known as the semaphore.
Semaphores are integer variables that are used to solve the critical-section problem. A semaphore can be accessed through two atomic operations: wait and signal.
Semaphores can be considered abstract data types that can be used to solve race conditions in programs. They can also be used for accessing files in shared memory, and for coordinating multiple concurrent threads of execution within an application.
If a task can acquire the semaphore, it can carry out the intended operation or access the resource.
In this sense, acquiring a semaphore is like acquiring the duplicate of a key from an apartment manager: when the apartment manager runs out of duplicates, the manager can give out no more keys.
Likewise, when a semaphore's limit is reached, it can no longer be acquired until someone gives a key back, i.e., releases the semaphore.
The kernel tracks the number of times a semaphore has been acquired or released by maintaining a token count, which is initialized to a value when the semaphore is created.
As a task acquires the semaphore, the token count is decremented; as a task releases the semaphore, the count is incremented.
If the count reaches zero, a requesting task cannot acquire the semaphore, and the task blocks if it chooses to wait for the semaphore to become available.
The task-waiting list tracks all tasks blocked while waiting on an unavailable semaphore.
These blocked tasks are kept in the task-waiting list in either first-in/first-out (FIFO) order or highest-priority-first order.
When an unavailable semaphore becomes available, the kernel allows the first task in the task-waiting list to acquire it.
The kernel moves this unblocked task either to the running state, if it is the highest-priority task, or to the ready state, until it becomes the highest-priority task and is able to run.
Note that the exact implementation of a task-waiting list can vary from one kernel to another.
A kernel can support many different types of semaphores, including binary, counting, and mutual-exclusion (mutex) semaphores.
1. Binary Semaphores
A binary semaphore can have a value of either 0 or 1.
When a binary semaphore's value is 0, the semaphore is considered unavailable (or empty); when the value is 1, the binary semaphore is considered available (or full).
Note that when a binary semaphore is first created, it can be initialized to either available or unavailable (1 or 0, respectively).
The state diagram of a binary semaphore is shown in the accompanying figure.
Binary semaphores are treated as global resources, which means they are shared among all tasks that need them.
Making the semaphore a global resource allows any task to release it, even if the task did not initially acquire it.
2. Counting Semaphores
A counting semaphore uses a count to allow it to be acquired or released multiple times.
When creating a counting semaphore, assign the semaphore a count that denotes the number of semaphore tokens it has initially.
If the initial count is 0, the counting semaphore is created in the unavailable state.
If the count is greater than 0, the semaphore is created in the available state, and the number of tokens it has equals its count, as shown in the accompanying figure.
Con …
One or more tasks can continue to acquire tokens from the counting semaphore until no tokens are left.
When all the tokens are gone, the count equals 0, and the counting semaphore moves from the
available state to the unavailable state.
To move from the unavailable state back to the available state, a semaphore token must be
released by any task.
Note that, as with binary semaphores, counting semaphores are global resources that can be
shared by all tasks that need them.
• This feature allows any task to release a counting semaphore token.
• Each release operation increments the count by one, even if the task making this call did not acquire a token in the first place.
• Some implementations of counting semaphores might allow the count to be bounded.
• A bounded count is a count in which the initial count set for the counting semaphore, determined when the semaphore was first created, acts as the maximum count for the semaphore.
• An unbounded count allows the counting semaphore to count beyond the initial count, up to the maximum value that can be held by the count's data type (e.g., an unsigned integer or an unsigned long value).
Implementation: Binary Semaphore

struct semaphore {
    enum value(0, 1);
    // q contains all Process Control Blocks (PCBs)
    // corresponding to processes that got blocked
    // while performing the down operation.
    Queue<process> q;
};

P(semaphore s)
{
    if (s.value == 1) {
        s.value = 0;
    }
    else {
        // add the process to the waiting queue
        q.push(P);
        sleep();
    }
}

V(semaphore s)
{
    if (s.q is empty) {
        s.value = 1;
    }
    else {
        // select a process from the waiting queue
        Process p = q.front();
        // remove the process from waiting as it has been sent for its CS
        q.pop();
        wakeup(p);
    }
}
• Counting Semaphores: They can have any value and are not restricted over a certain
domain. They can be used to control access to a resource that has a limitation on the
number of simultaneous accesses. The semaphore can be initialized to the number of
instances of the resource. Whenever a process wants to use that resource, it checks if
the number of remaining instances is more than zero, i.e., the process has an instance
available. Then, the process can enter its critical section thereby decreasing the value
of the counting semaphore by 1. After the process is over with the use of the instance
of the resource, it can leave the critical section thereby adding 1 to the number of
available instances of the resource.
Implementation: Counting Semaphore

struct Semaphore {
    int value;
    Queue<process> q;
};

P(Semaphore s)
{
    s.value = s.value - 1;
    if (s.value < 0) {
        // add the process to the waiting queue and block it
        q.push(p);
        block();
    }
    else
        return;
}

V(Semaphore s)
{
    s.value = s.value + 1;
    if (s.value <= 0) {
        // wake up a process from the waiting queue
        Process p = q.pop();
        wakeup(p);
    }
    else
        return;
}
Events are similar to semaphores in the sense that each instance of an Event object actually contains a semaphore. The added advantage of using Events lies in the fact that tasks can be notified of specific events in a thread-safe manner.
Queue: a FIFO buffer that allows for passing arbitrary messages to tasks.
Typically, each queue has just one specific receiver task and one or several sender tasks.
Queues are often used as input for server-style tasks that provide multiple services/commands. A common design pattern in that case is to have a common data structure for such messages, consisting of a command code and parameters, and to use a switch statement in the receiver task to handle the different message codes.
If using a union structure for the parameters, or even just a void pointer, the parameters can be defined separately for each command code.
Thank you!