
UNIT-II RTOS and its Overview

Introduction
Definition: An operating system designed to handle events or data as they
occur, providing timely and predictable responses.

Key Features:

Deterministic behavior: Ensures tasks are completed within a specific time frame.

Multitasking: Manages multiple tasks simultaneously.

Prioritization: Assigns priority levels to tasks to ensure critical tasks are executed first.

Types of Real-Time Systems

Hard Real-Time Systems: Missing a deadline can lead to catastrophic consequences (e.g., pacemakers).

Soft Real-Time Systems: Missing a deadline may degrade performance but is not catastrophic (e.g., video streaming).

Structure of a Real-Time System

RTOS Architecture:
For simpler applications, the RTOS is usually just a kernel, but as complexity increases, various modules such as networking protocol stacks, debugging facilities, and device I/O are included in addition to the kernel.

[Figure: general architecture of an RTOS]


Kernel: The RTOS kernel acts as an abstraction layer between the hardware and the applications. There are three broad categories of kernels.

Monolithic kernel

Monolithic kernels are part of Unix-like operating systems such as Linux and FreeBSD. A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task. It runs all basic system services (process and memory management, interrupt handling, I/O communication, the file system, etc.) and provides powerful abstractions of the underlying hardware. The number of context switches and the amount of messaging involved are greatly reduced, which makes a monolithic kernel run faster than a microkernel.

Microkernel

A microkernel runs only basic process communication (messaging) and I/O control. It normally provides only minimal services such as memory protection, inter-process communication, and process management. Other functions, such as running hardware processes, are not handled directly by the microkernel. Thus, microkernels provide a smaller set of simple hardware abstractions. A microkernel is more stable than a monolithic kernel because the kernel is unaffected even if a server (e.g., the file system) fails. Microkernels are part of operating systems such as AIX, BeOS, Mach, Mac OS X, MINIX, and QNX.

Hybrid Kernel

Hybrid kernels are extensions of microkernels with some properties of monolithic kernels. They are similar to microkernels, except that they include additional code in kernel space so that such code can run more swiftly than it would in user space. Hybrid kernels are part of operating systems such as Microsoft Windows NT, 2000, and XP, and DragonFly BSD.

Common services offered by an RTOS, covered in the sections below, include task management, scheduling, interrupt handling, timers, and inter-process communication.

Task Management
Task Object: In an RTOS, the application is decomposed into small, schedulable, and sequential program units known as tasks. A task is the basic unit of execution and is governed by three time-critical properties: release time, deadline, and execution time. Release time refers to the point in time from which the task can be executed. Deadline is the point in time by which the task must complete. Execution time denotes the time the task takes to execute.

Each task may exist in one of the following states:

· Dormant: Task doesn't require computer time
· Ready: Task is ready to go to the active state, waiting for processor time
· Active: Task is running
· Suspended: Task is put on hold temporarily
· Pending: Task is waiting for a resource


During the execution of an application program, individual tasks continuously change from one state to another. However, only one task is in the running state (i.e., given CPU control) at any point in the execution. In the process where CPU control is changed from one task to another, the context of the to-be-suspended task is saved while the context of the to-be-executed task is retrieved; this process is referred to as context switching.

A task object is defined by the following set of components:

Task Control Block (TCB): A task uses its TCB to remember its context. TCBs are data structures residing in RAM, accessible only by the RTOS.

Task Stack: Stacks reside in RAM and are accessed via the stack pointer.

Task Routine: The program code, residing in ROM.
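
To make the task object concrete, here is a minimal sketch in C of what a TCB might contain. The struct layout, field names, and the TaskState enum are illustrative assumptions, not taken from any particular RTOS:

    /* Hypothetical task states, mirroring the state list above. */
    typedef enum {
        TASK_DORMANT,
        TASK_READY,
        TASK_ACTIVE,
        TASK_SUSPENDED,
        TASK_PENDING
    } TaskState;

    /* A minimal Task Control Block: the per-task data the kernel saves
       and restores on each context switch. Real TCBs carry much more. */
    typedef struct {
        unsigned int *stack_pointer;        /* top of the task's stack in RAM */
        void        (*task_routine)(void);  /* entry point; code lives in ROM */
        TaskState     state;                /* current scheduling state       */
        unsigned int  priority;             /* consulted by the scheduler     */
        unsigned long release_time;         /* earliest time the task may run */
        unsigned long deadline;             /* time by which it must finish   */
    } TaskControlBlock;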


Scheduler
Scheduling is the process of deciding which task should be executed at any
point in time based on a predefined algorithm. The logic for the scheduling is
implemented in a functional unit called the scheduler.

Types of Scheduling Algorithms

There are many scheduling algorithms that can be used for scheduling task execution on a CPU. They can be classified into two main types:

a) preemptive scheduling algorithms
b) non-preemptive scheduling algorithms

Preemptive Scheduling
Preemptive scheduling allows the interruption of a currently running task, so
another one with more “urgent” status can be run. The interrupted task is
involuntarily moved by the scheduler from running state to ready state. This
dynamic switching between tasks that this algorithm employs is, in fact, a
form of multitasking. It requires assigning a priority level for each task. A
running task can be interrupted if a task with a higher priority enters the
queue.

As an example, consider three tasks called Task 1, Task 2 and Task 3. Task 1 has the lowest priority and Task 3 has the highest priority. Their arrival times and execution times are listed in the table below.

Task 1 is the first to start executing, as it is the first one to arrive (at t = 10 μs). Task 2 arrives at t = 40 μs and, since it has a higher priority, the scheduler interrupts the execution of Task 1 and puts Task 2 into the running state. Task 3, which has the highest priority, arrives at t = 60 μs. At this moment Task 2 is interrupted and Task 3 is put into the running state. As the highest priority task, it runs until it completes at t = 100 μs. Then Task 2 resumes its operation as the current highest priority task. Task 1 is the last to complete its operation.
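
Since the original table of arrival and execution times is not reproduced here, the following self-contained C simulation assumes execution times of 50, 40, and 40 μs for Tasks 1-3 (consistent with Task 3 finishing at t = 100 μs). It reproduces the preemption sequence described above by re-selecting the highest-priority ready task at every microsecond:

    #include <stdio.h>

    /* One simulated task: static parameters plus the work remaining. */
    typedef struct {
        const char *name;
        unsigned prio;       /* higher number = higher priority         */
        unsigned arrival;    /* time of entry into the ready queue (us) */
        unsigned remaining;  /* execution time still needed (us)        */
    } Task;

    int main(void) {
        /* Arrival times follow the example; execution times are assumed. */
        Task tasks[] = {
            { "Task 1", 1, 10, 50 },
            { "Task 2", 2, 40, 40 },
            { "Task 3", 3, 60, 40 },
        };
        const int n = 3;
        const char *running = "";

        /* Advance time in 1-us steps. At each step, run the highest-priority
           task that has arrived and still has work left; preemption falls out
           of re-evaluating this choice on every step. */
        for (unsigned t = 0; t < 200; t++) {
            Task *pick = NULL;
            for (int i = 0; i < n; i++)
                if (t >= tasks[i].arrival && tasks[i].remaining > 0 &&
                    (pick == NULL || tasks[i].prio > pick->prio))
                    pick = &tasks[i];
            if (pick == NULL)
                continue;
            if (running != pick->name)               /* a context switch */
                printf("t = %3u us: %s running\n", t, pick->name);
            running = pick->name;
            if (--pick->remaining == 0)
                printf("t = %3u us: %s completed\n", t + 1, pick->name);
        }
        return 0;
    }

Under these assumed execution times the output matches the narrative: Task 3 completes at t = 100 μs, Task 2 resumes and completes at t = 120 μs, and Task 1 completes last at t = 140 μs.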
Non-preemptive Scheduling: In non-preemptive scheduling, the scheduler has more restricted control over the tasks. It can only start a task, and then it has to wait for the task to finish or for the task to voluntarily return control. A running task can't be stopped by the scheduler.

Scheduling Algorithms
The algorithms most used in practical RTOSes are non-preemptive scheduling, round-robin scheduling, and preemptive priority scheduling.

First Come, First Served (FCFS)

FCFS is a non-preemptive scheduling algorithm that has no priority levels assigned to the tasks. The task that arrives first in the scheduling queue (i.e., enters the ready state) is put into the running state first and starts utilizing the CPU. It is a relatively simple scheduling algorithm in which all the tasks will get executed eventually. The response time is high, as this is a non-preemptive type of algorithm.

Shortest Job First (SJF)

In the shortest job first scheduling algorithm, the scheduler must obtain information about the execution time of each task; it then schedules the one with the shortest execution time to run next.

SJF is a non-preemptive algorithm, but it also has a preemptive version. In the preemptive version of the algorithm (aka shortest remaining time), the parameter on which the scheduling is based is the remaining execution time of a task. If a task is running, it can be interrupted if another task with a shorter remaining execution time enters the queue.

A disadvantage of this algorithm is that it requires the total execution time of a task to be known before it is run.
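
A minimal sketch of non-preemptive SJF in C, assuming all jobs are ready at t = 0 and using made-up burst times: the scheduler simply orders the ready jobs by execution time and runs each to completion. Removing the qsort() call (i.e., keeping arrival order) turns the same loop into FCFS:

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { const char *name; unsigned burst; } Job;

    /* qsort comparator: shorter execution time first. */
    static int by_burst(const void *a, const void *b) {
        return (int)((const Job *)a)->burst - (int)((const Job *)b)->burst;
    }

    int main(void) {
        Job jobs[] = { { "A", 30 }, { "B", 10 }, { "C", 20 } };  /* assumed */
        qsort(jobs, 3, sizeof jobs[0], by_burst);   /* the SJF ordering step */

        unsigned t = 0;
        for (int i = 0; i < 3; i++) {  /* non-preemptive: run to completion */
            printf("t = %2u: job %s runs for %u, finishes at %u\n",
                   t, jobs[i].name, jobs[i].burst, t + jobs[i].burst);
            t += jobs[i].burst;
        }
        return 0;
    }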
Priority Scheduling
Priority scheduling is one of the most popular scheduling algorithms. Each
task is assigned a priority level. The basic principle is that the task with the
highest priority will be given the opportunity to use the CPU.
In the preemptive version of the algorithm, a running task can be stopped if
a higher priority task enters the scheduling queue. In the non-preemptive
version of the algorithm once a task is started it can’t be interrupted by a
higher priority task.

Round-Robin Scheduling
Round-robin is a preemptive type of scheduling algorithm. There are no priorities assigned to the tasks. Each task is put into the running state for a fixed predefined time, commonly referred to as a time-slice. A task cannot run longer than the time-slice. In case a task has not completed by the end of its dedicated time-slice, it is interrupted, so the next task from the scheduling queue can be run in the following time-slice. A preempted task has an opportunity to complete its operation once it's again its turn to use a time-slice.
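
The following sketch simulates round-robin in C with an assumed 2-unit time-slice and made-up execution times. A task that does not finish within its slice is preempted and gets another turn on the next pass through the queue:

    #include <stdio.h>

    /* Round-robin with a fixed time-slice: tasks take equal turns, and a
       task that doesn't finish within its slice is preempted and re-queued. */
    int main(void) {
        const char *names[] = { "Task A", "Task B", "Task C" };
        unsigned remaining[] = { 7, 3, 5 };   /* assumed execution times */
        const unsigned slice = 2;             /* time-slice length       */
        unsigned t = 0, left = 3;

        while (left > 0) {
            for (int i = 0; i < 3; i++) {
                if (remaining[i] == 0) continue;   /* already finished   */
                unsigned run = remaining[i] < slice ? remaining[i] : slice;
                printf("t = %2u: %s runs for %u\n", t, names[i], run);
                t += run;
                remaining[i] -= run;
                if (remaining[i] == 0) {
                    printf("t = %2u: %s completed\n", t, names[i]);
                    left--;
                }
            }
        }
        return 0;
    }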

Interrupt Service Routine (ISR): Handles hardware interrupts.

Timer: Keeps track of time, crucial for task scheduling and deadlines.

Inter-Process Communication (IPC) and Synchronization of Processes

IPC Mechanisms:

Message Queues: Allow tasks to send and receive messages.

Semaphores: Synchronize access to shared resources.

Mutexes: Ensure mutual exclusion, allowing only one task to access a resource at a time.

Event Flags: Used for task synchronization.

Synchronization Techniques:

Spinlocks: Busy-waiting locks for short critical sections (see the sketch below).

Condition Variables: Block a thread until a specific condition is met.
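
As a concrete illustration of the spinlock idea, here is a minimal C11 sketch built on a standard atomic flag (not tied to any particular RTOS; on a single-core RTOS, interrupt masking would typically be used as well):

    #include <stdatomic.h>

    /* A spinlock built on a C11 atomic flag: a waiter busy-loops ("spins")
       until the holder clears the flag, so it should protect only very
       short critical sections. */
    static atomic_flag lock_flag = ATOMIC_FLAG_INIT;

    void spin_lock(void) {
        while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire)) {
            /* busy-wait */
        }
    }

    void spin_unlock(void) {
        atomic_flag_clear_explicit(&lock_flag, memory_order_release);
    }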


Message Queues: A message queue is an inter-process communication (IPC) mechanism that allows processes to exchange data in the form of messages. It allows processes to communicate asynchronously by sending messages to each other; the messages are stored in a queue, waiting to be processed, and are deleted after being processed.

The message queue is a buffer used in non-shared-memory environments, where tasks communicate by passing messages to each other rather than by accessing shared variables. Tasks share a common buffer pool. The message queue is an unbounded FIFO queue that is protected from concurrent access by different threads.

Events are asynchronous. When a class sends an event to another class, rather than sending it directly to the target reactive class, it passes the event to the operating system message queue. The target class retrieves the event from the head of the message queue when it is ready to process it. Synchronous events can be passed using triggered operations instead.

Many tasks can write messages into the queue, but only one can read
messages from the queue at a time. The reader waits in the message queue
until there is a message to process. Messages can be of any size.
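
As one concrete example, here is a minimal sketch of a sender task and a receiver task communicating through a queue, written against the FreeRTOS API (the queue depth, message type, delay, and task priorities are arbitrary choices):

    #include "FreeRTOS.h"
    #include "task.h"
    #include "queue.h"

    static QueueHandle_t xMsgQueue;

    /* Producer: posts an integer message, blocking if the queue is full. */
    static void vSenderTask(void *pvParameters) {
        int msg = 0;
        (void)pvParameters;
        for (;;) {
            xQueueSend(xMsgQueue, &msg, portMAX_DELAY);
            msg++;
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    /* Consumer: blocks until a message arrives, then processes it. */
    static void vReceiverTask(void *pvParameters) {
        int msg;
        (void)pvParameters;
        for (;;) {
            if (xQueueReceive(xMsgQueue, &msg, portMAX_DELAY) == pdPASS) {
                /* process msg here */
            }
        }
    }

    int main(void) {
        xMsgQueue = xQueueCreate(10, sizeof(int));   /* 10 messages deep */
        xTaskCreate(vSenderTask, "Tx", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        xTaskCreate(vReceiverTask, "Rx", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        vTaskStartScheduler();   /* hands control to the RTOS; never returns */
        for (;;) {}
    }

Note that FreeRTOS queues copy fixed-size items; queues that accept variable-size messages, as described above, exist in other RTOSes.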

Semaphore: A semaphore is a synchronization tool used to control access to shared resources in operating systems. It is essentially a variable that regulates access to a shared resource, such as a file, memory, or a network connection. Semaphores are used to prevent race conditions, for example by ensuring that only one process or thread can access a shared resource at a time.

When a process or thread wants to access the shared resource, it must first request the semaphore. If the semaphore value is greater than zero, the process or thread is allowed to access the shared resource and the semaphore value is decremented. If the semaphore value is zero, the process or thread must wait until the semaphore value becomes greater than zero.

Types of Semaphores: There are different types of semaphores, such as binary semaphores, counting semaphores, and named semaphores. Binary semaphores have two states (0 or 1) and are used to control access to a single resource. Counting semaphores have an integer value and are used to control access to multiple resources. Named semaphores are used to synchronize processes across different systems.

Binary Semaphores:

A binary semaphore acts as a simple lock mechanism to ensure mutual exclusion and synchronization between tasks. It has two states:

 Taken (0): Indicates that the resource is currently in use or the condition is not met.
 Available (1): Indicates that the resource is free or the condition is met.

There are typically two main operations associated with binary semaphores:

1. Take (or Wait, P operation):
o The task attempts to take the semaphore. If the semaphore is available (1), it is taken, and the state changes to 0.
o If the semaphore is already taken (0), the task is blocked until the semaphore is released by another task.
2. Give (or Signal, V operation):
o The task releases the semaphore, changing its state from 0 to 1.
o If other tasks are waiting on the semaphore, one of them is unblocked and allowed to take the semaphore.
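
These two operations look as follows in a minimal FreeRTOS-style sketch, where one task signals an event and another blocks until the semaphore is given (the task and semaphore names are made up):

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xBinarySem;  /* xBinarySem = xSemaphoreCreateBinary(); */

    /* Another task "gives" the semaphore to signal that the event occurred. */
    void vSignalEvent(void) {
        xSemaphoreGive(xBinarySem);               /* V operation: 0 -> 1 */
    }

    /* The handler "takes" the semaphore, blocking while it is unavailable. */
    void vHandlerTask(void *pvParameters) {
        (void)pvParameters;
        for (;;) {
            if (xSemaphoreTake(xBinarySem, portMAX_DELAY) == pdTRUE) {
                /* P operation succeeded (1 -> 0): handle the event here. */
            }
        }
    }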

Advantages of Binary Semaphores

 Simplicity: Easy to understand and implement, providing a straightforward way to achieve mutual exclusion and task synchronization.
 Efficiency: Suitable for scenarios where resource access times are short and predictable.

Limitations

 Priority Inversion: A lower-priority task holding a semaphore can block a higher-priority task, leading to potential priority inversion. This can be mitigated using priority inheritance mechanisms.
 Binary Nature: Only allows two states, making it less flexible than counting semaphores for scenarios requiring counting beyond binary states.

Counting Semaphores

A counting semaphore is a synchronization tool used in operating systems to control access to shared resources. It is a type of semaphore that allows more than two processes to access the shared resource at the same time. A counting semaphore is represented by an integer value that can be incremented or decremented by the processes.

The key difference between binary and counting semaphores is that binary semaphores can only take on two values, indicating either that a resource is available or unavailable, while counting semaphores can take on multiple values, indicating the number of available resources.
How does a counting semaphore work?
A counting semaphore allows multiple processes to access a shared resource simultaneously while ensuring that the maximum number of processes accessing the resource at any given time does not exceed a predefined limit.

Working Procedure

 Initialize the counting semaphore with a value that represents the maximum number of resources that can be accessed simultaneously.
 When a process attempts to access the shared resource, it first attempts to acquire the semaphore using the `wait()` or `P()` function.
 The semaphore value is checked. If it is greater than zero, the process is allowed to proceed and the value of the semaphore is decremented by one. If it is zero, the process is blocked and added to a queue of waiting processes.
 When a process finishes accessing the shared resource, it releases the semaphore using the `signal()` or `V()` function.
 The value of the semaphore is incremented by one, and any waiting processes are unblocked and allowed to proceed.
 Multiple processes can access the shared resource simultaneously as long as the value of the semaphore is greater than zero.
 The counting semaphore provides a way to manage access to shared resources and ensure that conflicts are avoided, while also allowing multiple processes to access the resource at the same time.
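
A minimal sketch of this procedure using the FreeRTOS counting-semaphore API, assuming a pool of three identical resources (the task name and delay are made up):

    #include "FreeRTOS.h"
    #include "semphr.h"
    #include "task.h"

    /* At startup: at most 3 tasks may hold a pooled resource at once.  */
    /* xPoolSem = xSemaphoreCreateCounting(3, 3);  (max count, initial) */
    static SemaphoreHandle_t xPoolSem;

    static void vWorkerTask(void *pvParameters) {
        (void)pvParameters;
        for (;;) {
            xSemaphoreTake(xPoolSem, portMAX_DELAY);  /* wait()/P(): count--   */
            /* ... use one of the three pooled resources ... */
            xSemaphoreGive(xPoolSem);                 /* signal()/V(): count++ */
            vTaskDelay(pdMS_TO_TICKS(10));
        }
    }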

Mutex
A mutex is a mutual exclusion object that synchronizes access to a resource. It is created with a unique name at the start of a program. A mutex is a locking mechanism that ensures only one thread can acquire it at a time and enter the critical section; that thread releases the mutex only when it exits the critical section.

It is a special type of binary semaphore used for controlling access to a shared resource. It includes a priority inheritance mechanism to avoid extended priority inversion problems, allowing a blocked higher-priority task to be kept waiting for the shortest time possible. However, priority inheritance does not eliminate priority inversion; it only minimizes its effect.
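
A minimal FreeRTOS-style sketch of mutex usage (the function and handle names are made up); FreeRTOS mutexes apply the priority inheritance described above while the mutex is held:

    #include "FreeRTOS.h"
    #include "semphr.h"

    static SemaphoreHandle_t xMutex;   /* xMutex = xSemaphoreCreateMutex(); */

    void vAccessSharedResource(void) {
        if (xSemaphoreTake(xMutex, portMAX_DELAY) == pdTRUE) {
            /* critical section: only the mutex holder executes this code */
            xSemaphoreGive(xMutex);  /* released by the same task that took it */
        }
    }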

Event Bits (Event Flags)

Event bits are used to indicate whether an event has occurred. Event bits are often referred to as event flags. For example, an application may:

 Define a bit (or flag) that means "A message has been received and is
ready for processing" when it is set to 1, and "there are no messages
waiting to be processed" when it is set to 0.
 Define a bit (or flag) that means "The application has queued a
message that is ready to be sent to a network" when it is set to 1, and
"there are no messages queued ready to be sent to the network" when
it is set to 0.
 Define a bit (or flag) that means "It is time to send a heartbeat
message onto a network" when it is set to 1, and "it is not yet time to
send another heartbeat message" when it is set to 0.

Event Groups
An event group is a set of event bits. Individual event bits within an event
group are referenced by a bit number. Expanding the example provided
above:

 The event bit that means "A message has been received and is ready
for processing" might be bit number 0 within an event group.
 The event bit that means "The application has queued a message that
is ready to be sent to a network" might be bit number 1 within the
same event group.
 The event bit that means "It is time to send a heartbeat message onto
a network" might be bit number 2 within the same event group.
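
The example above maps directly onto the FreeRTOS event-group API. A minimal sketch follows; the bit assignments match the list above, while the function and handle names are made up:

    #include "FreeRTOS.h"
    #include "event_groups.h"

    /* Bit numbers taken from the example above. */
    #define MSG_RECEIVED_BIT   (1 << 0)
    #define MSG_QUEUED_BIT     (1 << 1)
    #define HEARTBEAT_DUE_BIT  (1 << 2)

    static EventGroupHandle_t xEvents;  /* xEvents = xEventGroupCreate(); */

    /* A producer sets a bit when its event occurs. */
    void vOnMessageReceived(void) {
        xEventGroupSetBits(xEvents, MSG_RECEIVED_BIT);
    }

    /* A task blocks until any of the interesting bits is set. */
    void vDispatcherTask(void *pvParameters) {
        (void)pvParameters;
        for (;;) {
            EventBits_t bits = xEventGroupWaitBits(
                xEvents,
                MSG_RECEIVED_BIT | HEARTBEAT_DUE_BIT,
                pdTRUE,            /* clear the bits on exit            */
                pdFALSE,           /* wait for ANY bit, not all of them */
                portMAX_DELAY);
            if (bits & MSG_RECEIVED_BIT)  { /* process the received message */ }
            if (bits & HEARTBEAT_DUE_BIT) { /* send the heartbeat           */ }
        }
    }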
