UNIT IV - Real Time Operating System

A Real Time Operating System (RTOS) is designed for applications with strict timing constraints, ensuring tasks are completed within specified deadlines. It differs from General Purpose Operating Systems (GPOS) by prioritizing response times and bounded latencies for processes and threads. RTOS can be classified into hard, firm, and soft real-time systems, each with varying consequences for missed deadlines, and includes features like context switching, task management, and scheduling policies.

Real Time Operating System (RTOS)
RTOS
Why we go for RTOS
Requirements for RTOS
RTOS
• A real-time operating system (RTOS) is a multitasking operating system for applications with hard or soft real-time constraints.
• An RTOS is an OS for response-time-controlled and event-controlled processes.
REAL TIME SYSTEM
• A system is said to be real time if it is required to complete its work and deliver its services on time.
• Example – Flight Control System
– All tasks in that system must execute on time.
GPOS vs RTOS
• A GPOS is used for systems/applications that are not time critical.
• In a GPOS, task scheduling is not based on priority; it is designed to achieve high throughput.
• A GPOS is made for high-end, general-purpose systems, whereas an RTOS is usually designed for a low-end, stand-alone device.
• All processes and threads in an RTOS have bounded latencies, which means a process/thread will be executed within a specified time limit.
Classification of Real-Time Systems (RTS)
• Hard Real Time System
• Firm Real Time System
• Soft Real Time System
Hard Real Time
• Missing an individual deadline results in catastrophic failure of the system, which may also cause great financial loss.
• Examples of hard real-time systems:
Air traffic control
Nuclear power plant control
Firm Real Time
• Missing a deadline results in an unacceptable quality reduction. Technically there is no difference from hard real time, but economically the disaster risk is limited.
• Examples of firm real-time systems:
Failure of ignition in an automobile
Video conferencing
Satellite-based tracking of enemy movement
Soft Real Time
• Here the deadline may not be met, and the system can recover from it. The reduction in system quality and performance is at an acceptable level.
• Examples of soft real-time systems:
 Multimedia transmission and reception
 Networking, telecom (mobile) networks
 Websites and services
 Computer games
• Late completion of jobs is undesirable but not fatal.
Architecture of RTOS
Good RTOS
Features of an RTOS:
• Context switching latency should be short. This means that the time taken to save the context of the current task and switch over to another task should be short.
• The time taken between executing the last instruction of an interrupted task and executing the first instruction of the interrupt handler should be predictable and short. This is known as interrupt latency.
• Similarly, the time taken between executing the last instruction of the interrupt handler and executing the next task should also be short and predictable. This is known as interrupt dispatch latency.
In general, an operating system performs the following tasks:

Memory management
Multitasking
Task Management
Time Management
Context switching
Scheduling
Semaphore
Inter Task Communication
Interrupt Handling
Tasks, Process and Threads
• Task – something that needs to be done (a job)
• A task is defined as a program in execution together with the related information maintained by the OS for that program
• Process: a program, or part of it, in execution
Process
• A process is a program, or part of it, in execution.
• A process requires various system resources, such as the CPU for execution, memory for storing the code, and I/O devices for information exchange.
Process
The context of a process includes:
• Stack Pointer
• Working Registers
• Process Status Registers
• Code Memory
• Program Counter corresponding to the process
The period of a process is the time between successive executions.
Process State
S.N. Process State & Description
1 Start – This is the initial state when a process is first started/created.
2 Ready – The process is waiting to be assigned to a processor. Ready processes are waiting for the processor to be allocated to them by the operating system so that they can run. A process may come into this state after the Start state, or while running if it is interrupted by the scheduler to assign the CPU to some other process.
3 Running – Once the process has been assigned to a processor by the OS scheduler, the process state is set to running and the processor executes its instructions.
4 Waiting – The process moves into the waiting state if it needs to wait for a resource, such as waiting for user input, or waiting for a file to become available.
5 Terminated or Exit – Once the process finishes its execution, or it is terminated by the operating system, it is moved to the terminated state, where it waits to be removed from main memory.
Process Control Block (PCB)
• A Process Control Block is a data structure
maintained by the Operating System for every
process.
• The PCB is identified by an integer process ID
(PID).
• A PCB keeps all the information needed to
keep track of a process as listed below in the
table :
S.N. Information & Description
1 Process State – The current state of the process, i.e., whether it is ready, running, waiting, etc.
2 Process Privileges – Required to allow/disallow access to system resources.
3 Process ID – Unique identification for each process in the operating system.
4 Pointer – A pointer to the parent process.
5 Program Counter – A pointer to the address of the next instruction to be executed for this process.
6 CPU Registers – The various CPU registers whose contents must be saved so that the process can resume in the running state.
7 CPU Scheduling Information – Process priority and other scheduling information required to schedule the process.
8 Memory Management Information – Information such as the page table, memory limits and segment table, depending on the memory scheme used by the operating system.
9 Accounting Information – The amount of CPU time used for process execution, time limits, execution ID, etc.
10 I/O Status Information – A list of I/O devices allocated to the process.
Process Scheduling
• The process scheduling is the activity of the
process manager that handles the removal of the
running process from the CPU and the selection of
another process on the basis of a particular
strategy.
• Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Process Scheduling Queues

• The OS maintains all PCBs in process scheduling queues. The OS maintains a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new state queue.
• The Operating System maintains the following important
process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing
in main memory, ready and waiting to execute. A new process is
always put in this queue.
• Device queues − The processes which are blocked due to
unavailability of an I/O device constitute this queue.
THREAD
• A thread is a flow of execution through the process code, with its own program counter that keeps track of which instruction to execute next, system registers which hold its current working variables, and a stack which contains the execution history.
• A thread shares with its peer threads information such as the code segment, data segment and open files. When one thread alters a code segment memory item, all other threads see that.
• A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism. Threads represent a software approach to improving operating system performance by reducing overhead; a thread is equivalent to a classical process.
• Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors.
S.N. Process | Thread
1 Process is heavyweight or resource intensive. | Thread is lightweight, taking fewer resources than a process.
2 Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 Multiple processes without using threads use more resources. | Multithreaded processes use fewer resources.
6 In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread

• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a greater scale and efficiency.
Types of Thread

• Threads are implemented in the following two ways:
• User Level Threads – user-managed threads.
• Kernel Level Threads – operating-system-managed threads acting on the kernel, an operating system core.
S.N. User-Level Threads | Kernel-Level Threads
1 User-level threads are faster to create and manage. | Kernel-level threads are slower to create and manage.
2 Implementation is by a thread library at the user level. | The operating system supports creation of kernel threads.
3 A user-level thread is generic and can run on any operating system. | A kernel-level thread is specific to the operating system.
4 Multi-threaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
User Level Threads
• In this case, the thread management kernel is
not aware of the existence of threads.
• The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
• The application starts with a single thread.
Advantages:
• Thread switching does not require Kernel mode
privileges.
• User level thread can run on any operating system.
• Scheduling can be application specific in the user
level thread.
• User level threads are fast to create and manage.
Disadvantages:
• In a typical operating system, most system calls are
blocking.
• Multithreaded application cannot take advantage
of multiprocessing.
Kernel Level Threads

• In this case, thread management is done by the Kernel. There is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded. All of the threads within an application are supported within a single process.
• The Kernel maintains context information for the process as a whole and for individual threads within the process. Scheduling by the Kernel is done on a thread basis. The Kernel performs thread creation, scheduling and management in Kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages:
• The Kernel can simultaneously schedule multiple threads from the same process on multiple processors.
• If one thread in a process is blocked, the Kernel can
schedule another thread of the same process.
• Kernel routines themselves can be multithreaded.
Disadvantages:
• Kernel threads are generally slower to create and
manage than the user threads.
• Transfer of control from one thread to another within
the same process requires a mode switch to the
Kernel.
Multithreading Models
• Some operating systems provide a combined user-level thread and kernel-level thread facility. Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.
• There are three multithreading models:
Many-to-many relationship.
Many-to-one relationship.
One-to-one relationship.
Many to Many Model

• The many-to-many model multiplexes any number of user threads onto an equal or smaller number of kernel threads.
• In this model, developers can create as many
user threads as necessary and the corresponding
Kernel threads can run in parallel on a
multiprocessor machine. This model provides the
best accuracy on concurrency and when a thread
performs a blocking system call, the kernel can
schedule another thread for execution.
Many to One Model
• The many-to-one model maps many user-level threads to one kernel-level thread. Thread management is done in user space by the thread library. When a thread makes a blocking system call, the entire process is blocked. Only one thread can access the Kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
One to One Model
• There is a one-to-one relationship of user-level threads to kernel-level threads. This model provides more concurrency than the many-to-one model. It also allows another thread to run when a thread makes a blocking system call. It supports multiple threads executing in parallel on multiprocessors.
TASK
Task States
Task States Contd..,
Task and its Data
Scheduling
Scheduling in RTOS
Interrupts Handling in RTOS
Applications of RTOS
SCHEDULING
Basic Concept
CPU Scheduler
Scheduling by OS
Scheduling Policy
Preemptive vs. Non-preemptive scheduling

• Non-preemptive scheduling:
– The running process keeps the CPU until it voluntarily gives up the CPU:
• the process exits (transition 4, Running → Terminated), or
• it switches to the blocked state (transition 1, Running → Blocked).
– Transition 3 (Running → Ready) happens only voluntarily.
• Preemptive scheduling:
– The running process can be interrupted and must release the CPU (it can be forced to give up the CPU).
[State diagram: Running, Ready, Blocked and Terminated states with numbered transitions]
Why we use pre-emptive Scheduling
Pre-emptive Scheduling
Simple Scheduling
Round Robin
Pre-emption
Context Switch
Context Switch between Process
Context Switch
Steps in Context Switching
Example of Context Switch
Priorities in Scheduling
Priority driven Scheduling
Priority driven Scheduling Contd.,
Assigning Task Priorities
Rate Monotonic Scheduling (RMS)
RMS - Example
Deadline Miss with RMS
Example
Task | Execution Time | Period
T1 | 3 | 20
T2 | 2 | 5
T3 | 2 | 10
(Assumed priority order: T2 > T3 > T1.)
Earliest Deadline First (EDF) Scheduling
Example 1 - EDF Scheduling
Example 2 - EDF Scheduling
Example
Task | Execution Time | Deadline | Period
T1 | 3 | 7 | 20
T2 | 2 | 4 | 5
T3 | 2 | 8 | 10
EDF – Overload Conditions
RMS vs EDF
Interprocess Communication
SEMAPHORE

• Consider a situation where two people want to share a bike. At one time only one person can use the bike. The one who has the bike key gets the chance to use it, and only when this person hands the key to the second person can the second person use the bike.

• A semaphore is just like this key, and the bike is the shared resource. Whenever a task wants access to the shared resource, it must acquire the semaphore first. The task should release the semaphore after it is done with the shared resource. Until then, all other tasks that need the shared resource have to wait, since the semaphore is not available. Even if a task trying to acquire the semaphore has higher priority than the task holding it, it will stay in the wait state until the semaphore is released by the lower-priority task.
Use of Semaphore
1. Managing Shared Resource

2. Task Synchronization
Apart from managing a shared resource, task synchronization can also be performed with the help of a semaphore. In this case the semaphore acts like a flag, not a key.

 Unilateral Rendezvous
– This is one way synchronization which uses a semaphore as a flag to
signal another task.
 Bilateral Rendezvous
– This is two way synchronization performed using two semaphores. A
bilateral rendezvous is similar to a unilateral rendezvous, except both
tasks must synchronize with one another before proceeding.
Types of semaphore
1. Binary Semaphore
• Binary semaphore is used when there is only
one shared resource.
2. Counting Semaphore
• To handle more than one shared resource of the same type, a counting semaphore is used.
3. Mutual Exclusion Semaphore or Mutex
• To avoid extended priority inversion, mutexes
can be used.
SEMAPHORE
• A semaphore is an integer variable, initialized to a value from 0 to a predefined maximum, each semaphore being associated with a queue for process suspension. The order of process activation from the queue must be fair.
• Two atomic operations that are indivisible and uninterruptible are defined for a semaphore:
 (a) WAIT: decrease the counter by one; if it becomes negative, block the process and enter this process's ID in the waiting-processes queue.
 (b) SIGNAL: increase the counter by one; if it is still zero or negative (i.e., processes are still waiting), unblock the first process of the waiting-processes queue, removing its ID from the queue.
SEMAPHORE - Steps
• Many cooperating processes - reading records from and
writing records to a single data file
• File access to be strictly coordinated
Step by step operation:
• A semaphore with an initial value of 1 could be used
• The first process to access the file would try to
decrement the semaphore’s value and it would succeed,
the semaphore’s value now being 0.
• This process can now go ahead and use the data file
• If another process wishing to use the file now tries to
decrement the semaphore’s value, it would fail as the
result would be -1.
Step by step operation (contd.,)
• That process will be suspended until the first process has finished with the data file.
• When the first process has finished with the data file, it will increment the semaphore's value, making it 1 again.
• Now the waiting process can be woken, and this time its attempt to decrement the semaphore will succeed.
Example

• P() is wait
• V() is Signal
Deadlock
• Deadlock is the situation in which multiple
concurrent threads of execution in a system
are blocked permanently because of resource
requirements that can never be satisfied.
Deadlock
Deadlock Conditions
• Mutual exclusion-A resource can be accessed by only
one task at a time, i.e., exclusive access mode.
• No preemption-A non-preemptible resource cannot
be forcibly removed from its holding task. A resource
becomes available only when its holder voluntarily
relinquishes claim to the resource.
• Hold and wait-A task holds already-acquired
resources, while waiting for additional resources to
become available.
• Circular wait-A circular chain of two or more tasks
exists, in which each task holds one or more
resources being requested by a task next in the chain.
Avoid Deadlock
• Deadlocks can be avoided by avoiding at least one of the four conditions:

• Mutual Exclusion: Resources shared such as read-only files do not lead to deadlocks, but resources such as printers and tape drives require exclusive access by a single process.
• Hold and Wait: Processes must be prevented from holding one or more resources while simultaneously waiting for one or more others.
• No Preemption: Preemption of process resource allocations can avoid the condition of deadlock, wherever possible.
• Circular Wait: Circular wait can be avoided if we number all resources and require that processes request resources only in strictly increasing (or decreasing) order.
Handling Deadlock
• Following three strategies can be used to remove deadlock
after its occurrence.

• Pre-emption: We can take a resource from one process and give it to another. This will resolve the deadlock situation, but sometimes it causes problems.
• Rollback: In situations where deadlock is a real possibility,
the system can periodically make a record of the state of
each process and when deadlock occurs, roll everything
back to the last checkpoint, and restart, but allocating
resources differently so that deadlock does not occur.
• Kill one or more processes: This is the simplest way, but it
works.
What is a Livelock?
• There is a variant of deadlock called livelock.
• This is a situation in which two or more
processes continuously change their state in
response to changes in the other process(es)
without doing any useful work.
• This is similar to deadlock in that no progress is
made but differs in that neither process is
blocked or waiting for anything.
Example of Livelock
• A human example of livelock would be two
people who meet face-to-face in a corridor
and each moves aside to let the other pass,
but they end up swaying from side to side
without making any progress because they
always move the same way at the same time.
Starvation
• It is the condition in which a process does not get the resources required to continue its execution for a long time.
• As time progresses, the process starves for resources.

• The situation where two or more processes are reading or writing some shared data and the final result depends on exactly who runs when is called a race condition.
Dining Philosophers Problem
• The dining philosopher's problem involves the
allocation of limited resources to a group of
processes in a deadlock-free and starvation-free
manner.
• There are five philosophers sitting around a table, with five chopsticks/forks kept beside them and a bowl of rice in the centre. When a philosopher wants to eat, he uses two chopsticks: one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks back in their original place.
Dining Philosophers Problem
Solution
• From the problem statement, it is clear that a
philosopher can think for an indefinite amount
of time. But when a philosopher starts eating,
he has to stop at some point of time. The
philosopher is in an endless cycle of thinking
and eating.
• An array of five semaphores, stick[5], is used – one for each of the five chopsticks.
• When a philosopher wants to eat, he waits for the chopstick on his left and picks it up. Then he waits for the right chopstick to become available and picks it up too. After eating, he puts both chopsticks down.
• But if all five philosophers are hungry
simultaneously, and each of them pickup one
chopstick, then a deadlock situation occurs
because they will be waiting for another
chopstick forever.
• The possible solutions for this are:
• A philosopher must be allowed to pick up the
chopsticks only if both the left and right
chopsticks are available.
• Allow only four philosophers to sit at the table.
That way, if all the four philosophers pick up
four chopsticks, there will be one chopstick left
on the table. So, one philosopher can start
eating and eventually, two chopsticks will be
available.
• In this way, deadlocks can be avoided.
Other possible solutions are:
• First Try
• P1 and P3 higher priority (eat whenever
needed)
• P2,P4,P5 lower priority (no able to eat)
• P2,P4,P5- Starves.
• Second Try
• All pick up the right fork simultaneously.
• P1 waits for P2, P2 waits for P3, P3 waits for P4, P4 waits for P5, P5 waits for P1.
• The circular wait leads to deadlock.
• Third Try
• All take the Right fork first.
• Check for left one
– If available – EAT
– Else put back the right fork and sleep for fixed interval.
– Then try again.

• If all start simultaneously, they repeatedly pick up and put back forks together – starvation (a livelock).

• Fourth Try
• sleep for Random interval
• Reduces the possibility for starvation
• Uses Mutex (Lock and UnLock)
• Prevents deadlock

• One Philosopher can eat at a time


• Not Efficient.
• In the case of a mutex, only the thread that locked or acquired the mutex can unlock it.
• In the case of a semaphore, a thread waiting on a semaphore can be signaled by a different thread.
Multitasking
• The process of having a computer perform
multiple tasks simultaneously.
• During multitasking, tasks such as listening to
a CD or browsing the Internet can be
performed in the background while using
other programs in the foreground such as an
e-mail client.
Advantages of multitasking
• Data can be copied and moved between programs.
• Increases productivity, since dozens of programs can be running at once.
• Any changes or updates are seen immediately. For example, if a new e-mail is received, you know immediately.

Disadvantages of multitasking
• Requires more system resources.
• On a laptop or portable device, it uses more battery power.
Action Plan – 1. Specifications of the system
• Product Functions and Tasks: Eg., Robot
• Delivery Time Schedule: Tight Schedule- High speed
rapid development model
• Product Life Cycle: Life cycle of the product
• Human Machine Interaction: Eg., Remote controller for
a TV
• Operating Environment: Operating Temp, humidity
• Sensors: Sensitivity, Precision, resolution & accuracy
• Power Requirement and Environment: greater load will
need greater Power
• System Cost: Maximum bearable cost
Action Plan – 2. Conceptual Design
• UML diagrams – use case diagram, object diagram, sequence diagram, state diagram, class diagram, activity diagram.
Action Plan – 3. Software and Hardware
Layout Design
• Independent Design Approach followed by
Integration
• Concurrent Co-design approach
• Example: Washing Machine
Use of Target System/Emulator & ICE
• Target system has a processor, memory,
peripherals and interfaces.
• Emulator: Emulates the target system with
extended memory and with codes
downloading ability during the edit-test-debug
cycles.
• ICE: uses another circuit with a card that connects to the target processor through a socket.
Use of Software Tools for Development of
an Embedded System
• Interpreter: line-by-line translation to machine-executable codes
• Compiler: translates the complete set of expressions
• Disassembler: translates the object codes into the mnemonic form of assembly language
• Assembler: translates the assembly mnemonics into binary opcodes and instructions, i.e., an executable file (object file)
• Linker: links the needed object code files and library code files
• Cross assembler: converts object codes of one microprocessor/controller to codes for another microprocessor/controller
Integrated Development Environment (IDE)
