RTOS Based Embedded System Design
(EC 604)
Prepared by:
Dr. Hemant S. Goklani
ECE Department,
IIIT, Surat
1
Contents
Operating System Basics
Types of Operating Systems
Tasks
Process and Threads
Multiprocessing and Multitasking
Task Scheduling
Task Operations
Structure, Synchronization
Communication and Concurrency
2
Contents
Interrupts and Timers
Exceptions
Interrupt Applications
Processing of Exceptions and Spurious Interrupts
Real Time Clocks
Programmable Timers
Timer Interrupt Service Routines (ISR)
Soft Timers
3
Operating System (OS)
It acts as a bridge between user applications/tasks and the underlying
system resources through a set of system functionalities and services
It manages the system resources and makes them available to the
user applications/tasks on a requirement basis
4
The Kernel
It is the core of OS and is responsible for managing the system
resources and the communication among the hardware and other
system devices
Acts as the abstraction layer between system resources and user
applications
Contains a set of system libraries and services
5
The architecture of OS
(Figure: the architecture of the OS - user applications, kernel services and the underlying hardware)
6
The architecture of OS
General purpose OS contains different services for the following:
Primary Memory Management
Process Management
Time management
File System management
I/O System (Device) Management
Secondary Storage Management
Protection
Interrupt Handling
7
Primary Memory Management
It refers to the volatile memory (RAM), where processes are loaded
and the variables and shared data associated with each process are stored
The Memory Management Unit (MMU) of the kernel is responsible
for:
Keeping track of which part of memory area is currently used by
which process
Allocating and de-allocating memory space on requirement basis
(Dynamic Memory Allocation)
8
Process Management
It deals with managing the processes/tasks
By setting up the memory space for the process
Loading the process’s code into the memory space
Allocating system resources
Scheduling and managing execution of process
Setting up and managing the Process Control Block (PCB)
Inter-process communication and synchronization
Process termination/deletion etc.
9
Time Management
It deals with deciding which process is to be scheduled at what time
For what duration a process is to be executed
After what time a process needs to be preempted
10
File System Management
A file is a collection of related information
A file could be a program, text file, image file, word documents,
audio/video files etc.
Each of these files differs in the kind of information it holds and the
way in which the information is stored
The file system management service of kernel is responsible for:
Creation, deletion and alteration of files/directories
Saving of files in secondary storage memory
Providing automatic allocation of file space based on amount of free
space available
Providing a flexible naming convention for the files
The various file system management operations are OS dependent
11
I/O System Management
Kernel is responsible for routing the I/O request coming from different
user applications to the appropriate I/O devices of the system
In a well structured OS, direct access to I/O devices is not allowed;
access to them is provided through Application Programming
Interfaces (APIs) exposed by the kernel
The kernel maintains a list of the I/O devices of the system. This list may
be available well in advance, at the time of building the kernel, or the
kernel may dynamically update the list when a new device is
installed
The service 'Device Manager' is responsible for handling all I/O device
related operations
The kernel talks to I/O devices through a set of low level system calls,
which are implemented in services called device drivers. These drivers
are specific to a device or a class of devices
12
I/O System Management
The device manager is responsible for:
Loading and unloading of device drivers
Exchanging information and the system specific controls to and
from the device
13
Secondary Storage Management
Deals with management of secondary storage devices
Secondary memory is used as a backup medium for program and
data since the main memory is volatile
In most of the systems the secondary storage is in the form of Hard
disks
The secondary storage management services of kernel deals with:
Disk storage allocation
Disk scheduling
Free disk space management
14
Protection System
Most modern OSs are designed in such a way that they support
multiple users with different levels of access permissions
Protection deals with implementing the security polices to restrict
the access to both user and system resources by different
applications or processes or users
In a multiuser OS, one user may not be allowed to view or modify
another user's data or profile details
Also some applications may not be granted permission to make use
of certain system resources
15
Interrupt Handler
The kernel provides a handler mechanism for all external/internal
interrupts generated by the system
These are some of the services offered by kernel of an OS
Depending on the type of OS, a kernel may contain fewer or more
services/components
In addition to the above mentioned services, many OSs offer a number of
add-on services to the kernel:
Network communication
Network management
User-interface graphics
Timer services
Error handler
Database management etc.
16
Kernel Space and User Space
The program code corresponding to the kernel applications/services is kept
in a contiguous area (OS dependent) of primary (working) memory and is
protected from unauthorized access by user programs/applications
The memory space at which the kernel code is located is known as ‘Kernel
Space’
All user applications are loaded to a specific area of primary memory and this
memory area is referred to as 'User Space'
The partitioning of memory into kernel and user space is purely Operating
System dependent
An operating system with virtual memory support, loads the user applications
into its corresponding virtual memory space with demand paging technique
Most of the operating systems keep the kernel application code in main
memory and it is not swapped out into the secondary memory
17
Monolithic Kernel
All kernel services run in the kernel space
All kernel modules run within the same memory space under a single
kernel thread
The tight internal integration of kernel modules in monolithic kernel
architecture allows the effective utilization of the low-level features of
the underlying system
The major drawback of monolithic kernel is that any error or failure
in any one of the kernel modules leads to the crashing of the entire
kernel application
LINUX, SOLARIS, MS-DOS kernels are examples of monolithic kernel
18
The Monolithic Kernel Model
19
Microkernel
The microkernel design incorporates only the essential
set of Operating System services into the kernel
Rest of the Operating System services are implemented in
programs known as 'Servers', which run in user space
The kernel design is highly modular and provides an OS-neutral
abstraction
Memory management, process management, timer
systems and interrupt handlers are examples of essential
services which form part of the microkernel
QNX and Minix 3 kernels are examples of microkernels
20
The Microkernel Model
21
Benefits of Microkernel
1. Robustness: If a problem is encountered in any of the
services running as 'servers', the corresponding server
can be reconfigured and re-started without the need
for re-starting the entire OS
2. Configurability: Any service, which runs as a 'server'
application, can be changed without the need to restart
the whole system
22
Types of Operating Systems
Depending on the type of kernel and kernel
services, purpose and type of computing systems
where the OS is deployed and the responsiveness
to applications, Operating Systems are classified
into
1. General Purpose Operating System (GPOS)
2. Real Time Operating System (RTOS)
23
General Purpose Operating System
(GPOS)
Operating Systems, which are deployed in general computing systems
The kernel is more generalized and contains all the required
services to execute generic applications
Need not be deterministic in execution behavior
May inject random delays into application software and thus cause
slow responsiveness of an application at unexpected times
Usually deployed in computing systems where deterministic
behavior is not an important criterion
Personal Computer/Desktop system is a typical example of a system
where a GPOS is deployed
Windows XP/MS-DOS etc are examples of General Purpose Operating
System
24
Real Time Operating System
(RTOS)
Operating Systems, which are deployed in embedded systems
demanding real-time response
Deterministic in execution behavior; consumes only a known amount
of time for kernel services
Implements scheduling policies for executing the highest priority
task/application always
Implements policies and rules concerning time-critical allocation of a
system’s resources
Examples of Real Time Operating Systems (RTOS):
Windows CE (Windows Embedded Compact)
QNX (is a commercial Unix-like real-time operating system released by Quantum
Software Systems, aimed primarily at the embedded systems market)
VxWorks , MicroC/OS-II etc.
25
The Real Time Kernel
The kernel of a Real Time Operating System is referred to as the Real Time
kernel
In contrast to the conventional OS kernel, the Real Time kernel is
highly specialized and contains only the minimal set of services
required for running the user applications/tasks
The basic functions of a Real Time kernel are
i. Task/Process management
ii. Task/Process scheduling
iii. Task/Process synchronization
iv. Error/Exception handling
v. Memory Management
vi. Interrupt handling
vii. Time management
26
Real Time Kernel Task/Process
Management
Deals with setting up the memory space for the
tasks
Loading the task’s code into the memory space
Allocating system resources
Setting up a Task Control Block (TCB) for the task
and task/process termination/deletion
A Task Control Block (TCB) is used for holding the
information corresponding to a task
27
Real Time Kernel Task/Process
Management
TCB usually contains the following set of
information:
Task ID: Task Identification Number
Task State: The current state of the task (e.g. State =
'Ready' for a task which is ready to execute)
Task Type: Indicates the type of the task. The task
can be a hard real time, soft real time or background
task
Task Priority: The priority of the task (e.g. Task Priority = 1 for a task with priority 1)
Task Context Pointer: Pointer for context saving
28
Real Time Kernel Task/Process
Management
Task Memory Pointers: Pointers to the code memory,
data memory and stack memory for the task
Task System Resource Pointers: Pointers to system
resources (semaphores, mutex etc) used by the task
Task Pointers: Pointers to other TCBs (TCBs for
preceding, next and waiting tasks)
Other Parameters: Other relevant task parameters
29
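To make the TCB concrete, here is a rough C sketch of how such a structure might look. This is an illustration only; every field name and type below is an assumption, not the layout of any particular kernel:

    /* Illustrative only: field names and types are assumptions, not the
       layout of any specific kernel. */
    typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED, TASK_COMPLETED } task_state_t;
    typedef enum { TASK_HARD_RT, TASK_SOFT_RT, TASK_BACKGROUND } task_type_t;

    typedef struct tcb {
        unsigned int  task_id;      /* Task Identification Number               */
        task_state_t  state;        /* current state (e.g. TASK_READY)          */
        task_type_t   type;         /* hard real time/soft real time/background */
        unsigned char priority;     /* e.g. 0 = highest priority                */
        void         *context;      /* Task Context Pointer for context saving  */
        void         *code_mem;     /* pointer to code memory                   */
        void         *data_mem;     /* pointer to data memory                   */
        void         *stack_mem;    /* pointer to stack memory                  */
        void         *resources;    /* semaphores, mutexes etc. used by task    */
        struct tcb   *prev, *next;  /* TCBs of preceding/next/waiting tasks     */
    } tcb_t;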
Real Time Kernel Task/Process
Management
The parameters and implementation of the TCB are
kernel dependent
The TCB parameters vary across different kernels,
based on the task management implementation
30
Task/Process Scheduling
Deals with sharing the CPU among various
tasks/processes
A kernel application called ‘Scheduler’ handles
the task scheduling
Scheduler is nothing but an algorithm
implementation, which performs the efficient
and optimal scheduling of tasks to provide a
deterministic behavior
31
Task/Process Synchronization
Deals with synchronizing the concurrent access of
a resource, which is shared across multiple tasks
and the communication between various tasks
32
Error/Exception handling
Deals with registering and handling the errors/exceptions
raised during the execution of tasks
Insufficient memory, timeouts, deadlocks, deadline
missing, bus error, divide by zero, unknown instruction
execution etc, are examples of errors/exceptions
Errors/Exceptions can happen at the kernel level services
or at task level
Deadlock is an example of a kernel level exception,
whereas timeout is an example of a task level exception
The OS kernel gives the information about the error in the
form of a system call (API)
33
Memory Management
The memory management function of an RTOS kernel is slightly
different compared to the General Purpose Operating Systems
The memory allocation time increases depending on the size of
the block of memory that needs to be allocated and the state of the
allocated memory block (an initialized memory block consumes
more allocation time than an uninitialized one)
Since predictable timing and deterministic behavior are the
primary focus of an RTOS, it achieves this by
compromising the effectiveness of memory allocation
RTOS generally uses ‘block’ based memory allocation
technique, instead of the usual dynamic memory allocation
techniques used by the GPOS
34
Memory Management
The RTOS kernel uses fixed-size blocks of dynamic memory, and
a block is allocated to a task on a need basis. The free blocks are
kept in a 'Free buffer Queue'
Most of the RTOS kernels allow tasks to access any of the
memory blocks without any memory protection to achieve
predictable timing and avoid the timing overheads
RTOS kernels assume that the whole design is proven correct
and protection is unnecessary. Some commercial RTOS kernels
allow memory protection as optional and the kernel enters a
fail-safe mode when an illegal memory access occurs
35
Memory Management
A few RTOS kernels implement Virtual Memory concept for
memory allocation if the system supports secondary memory
storage (like HDD and FLASH memory)
In 'block' based memory allocation, a fixed-size block of
memory is always allocated to a task on a need basis and is
treated as a unit. Hence, there will not be any (external) memory
fragmentation issues
The memory allocation can be implemented as constant-time
functions and thereby consumes a fixed amount of time for
memory allocation. This leaves the deterministic behavior of
the RTOS kernel untouched
36
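A minimal C sketch of such a 'block' based allocator, assuming a pool carved into fixed-size blocks kept in a 'Free buffer Queue' (the block size, pool size and all names are illustrative, not from any commercial RTOS). Both allocation and release run in constant time, which is what preserves the kernel's deterministic behavior:

    #include <stddef.h>

    #define BLOCK_SIZE 64                   /* fixed block size (assumption)     */
    #define NUM_BLOCKS 32                   /* pool size (assumption)            */

    static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
    static void *free_queue[NUM_BLOCKS];    /* the 'Free buffer Queue'           */
    static int   free_top = -1;

    void pool_init(void)                    /* put every block on the free queue */
    {
        for (int i = 0; i < NUM_BLOCKS; i++)
            free_queue[++free_top] = pool[i];
    }

    void *block_alloc(void)                 /* O(1): fixed, predictable time     */
    {
        return (free_top < 0) ? NULL : free_queue[free_top--];
    }

    void block_free(void *blk)              /* O(1) release back to the queue    */
    {
        free_queue[++free_top] = blk;
    }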
Interrupt Handling
Interrupts inform the processor that an external device or an
associated task requires immediate attention of the CPU
Interrupts can be either Synchronous or Asynchronous
Interrupts which occur in sync with the currently executing
task are known as Synchronous interrupts. Usually the software
interrupts fall under the Synchronous Interrupt category. Divide
by zero, memory segmentation error etc. are examples of
Synchronous interrupts
For synchronous interrupts, the interrupt handler runs in the
same context of the interrupting task
37
Interrupt Handling
Asynchronous interrupts are interrupts which occur at any
point of execution of any task, and are not in sync with the
currently executing task
The interrupts generated by external devices (by asserting the
Interrupt line of the processor/controller to which the interrupt
line of the device is connected) connected to the
processor/controller, timer overflow interrupts, serial data
reception/ transmission interrupts etc are examples for
asynchronous interrupts
38
Interrupt Handling
For asynchronous interrupts, the interrupt handler is usually
written as a separate task (depending on the OS kernel
implementation) and it runs in a different context. Hence, a
context switch happens while handling asynchronous
interrupts
Priority levels can be assigned to the interrupts and each
interrupt can be enabled or disabled individually
Most RTOS kernels implement a 'Nested Interrupts'
architecture. Interrupt nesting allows the pre-emption
(interruption) of an Interrupt Service Routine (ISR), servicing an
interrupt, by a higher priority interrupt
39
Time Management
Accurate time management is essential for providing precise
time reference for all applications
The time reference to kernel is provided by a high-resolution
Real Time Clock (RTC) hardware chip (hardware timer)
The hardware timer is programmed to interrupt the
processor/controller at a fixed rate. This timer interrupt is
referred as ‘Timer tick’
40
Time Management
The ‘Timer tick’ is taken as the timing reference by the kernel.
Its interval may vary depending on the hardware timer and
usually it varies in the microseconds range
The time parameters for tasks are expressed as the multiples of
the ‘Timer tick’. The System time is updated based on the
‘Timer tick’
If the System time register is 32 bits wide and the 'Timer tick'
interval is 1 μs, the System time register will reset in 2^32 × 1 μs
≈ 71.6 minutes
If the 'Timer tick' interval is 1 ms, the System time register will
reset in 2^32 × 1 ms ≈ 49.7 days
41
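The reset intervals above follow directly from the 32-bit register width (2^32 tick counts); a short C program to check the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        double ticks = 4294967296.0;   /* 2^32 counts of a 32-bit register       */
        printf("1 us tick: %.1f minutes\n", ticks * 1e-6 / 60.0);    /* ~71.6    */
        printf("1 ms tick: %.1f days\n",    ticks * 1e-3 / 86400.0); /* ~49.7    */
        return 0;
    }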
Time Management
The ‘Timer tick’ interrupt is handled by the ‘Timer Interrupt’
handler of kernel. The ‘Timer tick’ interrupt can be utilized for
implementing the following actions:
Save the current context (Context of the currently executing task)
Increment the System time register by one. Generate timing error
and reset the System time register if the timer tick count is greater
than the maximum range available for System time register
Update the timers implemented in kernel (Increment or decrement
the timer registers for each timer depending on the count direction
setting for each register. Increment registers with count direction
setting = ‘count up’ and decrement registers with count direction
setting = ‘count down’)
42
Time Management
Activate the periodic tasks, which are in the idle state
Invoke the scheduler and schedule the tasks again based on the
scheduling algorithm
Delete all the terminated tasks and their associated data
structures (TCBs)
Load the context for the first task in the ready queue. Due to the
rescheduling, the ready task might be changed to a new one from
the task, which was pre-empted by the ‘Timer Interrupt’ task
43
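A hedged C sketch of how a kernel's 'Timer tick' handler might sequence the actions listed above. Every helper function and variable named here is hypothetical; the actual names and ordering are kernel specific:

    /* Sketch only: all helpers and variables below are hypothetical. */
    extern void  save_context(void *task);
    extern void  load_context(void *task);
    extern void  update_kernel_timers(void);      /* count timers up/down       */
    extern void  activate_ready_periodic_tasks(void);
    extern void  delete_terminated_tasks(void);   /* reclaim TCBs               */
    extern void  schedule(void);                  /* run scheduling algorithm   */
    extern void *current_task, *ready_queue_head;
    extern volatile unsigned long system_time;
    #define SYSTEM_TIME_MAX 0xFFFFFFFFUL

    void timer_tick_isr(void)
    {
        save_context(current_task);          /* save the running task's context */
        if (system_time == SYSTEM_TIME_MAX)  /* increment the System time       */
            system_time = 0;                 /* ...resetting it on overflow     */
        else
            system_time++;
        update_kernel_timers();              /* update the kernel timers        */
        activate_ready_periodic_tasks();     /* idle periodic tasks -> ready    */
        schedule();                          /* re-run the scheduler            */
        delete_terminated_tasks();           /* drop finished tasks and TCBs    */
        load_context(ready_queue_head);      /* dispatch head of ready queue    */
    }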
Hard Real-time System
A Real Time Operating Systems which strictly adheres to the
timing constraints for a task
A Hard Real Time system must meet the deadlines for a task
without any slippage
Missing any deadline may produce catastrophic results for Hard
Real Time Systems, including permanent data loss and
irrecoverable damage to the system/users
Emphasizes the principle 'A late answer is a wrong answer'
Air bag control systems and Anti-lock Brake Systems (ABS) of
vehicles are typical examples of Hard Real Time Systems
44
Hard Real-time System
As a rule of thumb, Hard Real Time Systems do not implement
the virtual memory model for handling the memory. This
eliminates the delay in swapping in and out the code
corresponding to the task to and from the primary memory
The presence of Human in the loop (HITL) for tasks introduces
unexpected delays in the task execution. Most of the Hard Real
Time Systems are automatic and do not contain a 'human in
the loop'
45
Soft Real-time System
Real Time Operating Systems that do not guarantee meeting
deadlines, but offer the best effort to meet them
Missing deadlines for tasks is acceptable if the frequency of
deadline misses is within the compliance limit of the Quality of
Service (QoS)
A Soft Real Time system emphasizes the principle 'A late
answer is an acceptable answer, but it could have been done a bit
faster'
Soft Real Time systems most often have a ‘human in the loop
(HITL)’
46
Soft Real-time System
Automatic Teller Machine (ATM) is a typical example of a Soft Real
Time System. If the ATM takes a few seconds more than the ideal
operation time, nothing fatal happens
An audio video playback system is another example of a Soft Real
Time system. No potential damage arises if a sample comes late
by a fraction of a second for playback
47
Tasks, Processes & Threads
In the Operating System context, a task is defined as the program
in execution and the related information maintained by the
Operating system for the program
Task is also known as ‘Job’ in the operating system context
A program or part of it in execution is also called a ‘Process’
The terms ‘Task’, ‘job’ and ‘Process’ refer to the same entity in
the Operating System context and most often they are used
interchangeably
A process requires various system resources like CPU for
executing the process, memory for storing the code
corresponding to the process and associated variables, I/O
devices for information exchange etc
48
The structure of a Process
The concept of ‘Process’ leads to concurrent execution (pseudo
parallelism) of tasks and thereby the efficient utilization of the
CPU and other system resources
Concurrent execution is achieved through the sharing of CPU
among the processes
A process mimics a processor in properties and holds a set of
registers, process status, a Program Counter (PC) to point to the
next executable instruction of the process, a stack for holding the
local variables associated with the process and the code
corresponding to the process
49
50
The structure of a Process
A process, which inherits all the properties of the CPU, can be
considered as a virtual processor, awaiting its turn to have its
properties switched into the physical processor
When the process gets its turn, its registers and Program Counter
register become mapped to the physical registers of the CPU
51
Memory organization of Processes
The memory occupied by the process is segregated into three
regions namely
1. Stack memory: It holds all temporary data such as variables local
to the process
2. Data memory: It holds all global data for the process
3. Code memory: It contains the program code (instructions)
corresponding to the process
On loading a process into the main memory, a specific area of
memory is allocated for the process
The stack memory usually starts at the highest memory address
from the memory area allocated for the process (Depending on
the OS kernel implementation)
52
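A tiny C program indicating which of these regions each kind of object occupies (the malloc() line assumes the platform supports dynamic allocation from a heap):

    #include <stdlib.h>

    int g_count = 0;                  /* global variable -> data memory          */

    int main(void)
    {
        int  local = 5;               /* local variable  -> stack memory         */
        int *p = malloc(sizeof *p);   /* dynamic block   -> heap (if supported)  */
        if (p != NULL) {
            *p = local + g_count;
            free(p);
        }
        return 0;                     /* main()'s instructions -> code memory    */
    }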
53
Process States & State Transition
The journey from the creation of a process to its termination is not a
single step operation
The process traverses through a series of states during its
transition from the newly created state to the terminated state
The cycle through which a process changes its state from ‘newly
created’ to ‘execution completed’ is known as ‘Process Life Cycle’.
The various states through which a process traverses
during a Process Life Cycle indicate the current status of the
process with respect to time and also provide information on
what it is allowed to do next
54
Process States & State Transition
Created State: The state at which a process is being created is
referred to as the 'Created State'. The Operating System recognizes a
process in the ‘Created State’ but no resources are allocated to
the process
Ready State: The state, where a process is incepted into the
memory and awaiting the processor time for execution, is known
as ‘Ready State’. At this stage, the process is placed in the ‘Ready
list’ queue maintained by the OS
Running State: The state where the instructions
corresponding to the process are being executed is called the
'Running State'. Running state is the state at which the process
execution happens
55
Process States & State Transition
Blocked State/Wait State: Refers to a state where a running
process is temporarily suspended from execution and does not
have immediate access to resources. The blocked state might
be invoked by various conditions, like the process entering a wait
state for an event to occur (e.g. waiting for user inputs such as
keyboard input) or waiting to get access to a shared resource
like a semaphore, mutex etc.
Completed State: A state where the process completes its
execution
56
57
Process States & State Transition
The transition of a process from one state to another is known as
‘State transition’
When a process changes its state from 'Ready' to 'Running', from
'Running' to 'Blocked' or 'Terminated', or from 'Blocked' to 'Ready',
the CPU allocation for the process may also change
58
Threads
A thread is the primitive that can execute code
A thread is a single sequential flow of control within a
process
‘Thread’ is also known as lightweight process
A process can have many threads of execution
Different threads, which are part of a process, share the
same address space; meaning they share the data memory,
code memory and heap memory area
Threads maintain their own thread status (CPU register
values), Program Counter (PC) and stack
59
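Assuming a POSIX platform, the sharing of data memory between the threads of a process can be illustrated with the pthread API (compile with -pthread; the unsynchronized access to the shared variable is deliberate here, synchronization is taken up later):

    #include <pthread.h>
    #include <stdio.h>

    int shared = 0;            /* lives in data memory: visible to all threads  */

    void *worker(void *arg)
    {
        (void)arg;
        shared += 1;           /* both threads touch the same variable          */
        return NULL;           /* each thread still has its own stack and PC    */
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared = %d\n", shared);  /* typically 2; access unsynchronized */
        return 0;
    }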
Memory organization of process and
its associated Threads
60
Thread vs Process
62
Advantages of Threads
1. Better memory utilization: Multiple threads of the
same process share the address space for data
memory. This also reduces the complexity of inter
thread communication since variables can be shared
across the threads
2. Efficient CPU utilization: The CPU is engaged all the time
3. Speeds up the execution of the process: The process
is split into different threads; when one thread enters
a wait state, the CPU can be utilized by other threads
of the process that are not waiting for that event
63
Multiprocessing & Multitasking
The ability to execute multiple processes simultaneously is
referred to as multiprocessing, and systems with such capability
are known as multiprocessor systems
Multiprocessor systems possess multiple CPUs and can
execute multiple processes simultaneously
The ability of the Operating System to have multiple
programs in memory, which are ready for execution, is
referred to as multiprogramming
Multitasking refers to the ability of an operating system to
hold multiple processes in memory and switch the
processor (CPU) from executing one process to another
process
64
Multiprocessing & Multitasking
Multitasking involves ‘Context switching’, ‘Context saving’
and ‘Context retrieval’
Context switching refers to the switching of the execution
context from one task to another
When a task/process switch happens, the current
context of execution should be saved (Context saving) so that it
can be retrieved at a later point of time, when the CPU resumes
the execution of the process which is currently being switched
out
During context switching, the context of the task to be
executed is retrieved from the saved context list. This is
known as Context retrieval
65
How to Choose an RTOS
The choice of an RTOS for an embedded design is very
critical
A lot of factors need to be analyzed carefully before making
a decision on the selection of an RTOS.
These factors can be either
1. Functional requirements
2. Non-functional requirements
66
Functional Requirements
1. Processor support:
It is not necessary that all RTOSs support all kinds of
processor architectures
It is essential to ensure the processor support by the RTOS
2. Memory Requirements:
The RTOS requires ROM memory for holding the OS files and
it is normally stored in a non-volatile memory like FLASH
The OS also requires working memory (RAM) for loading the OS
services
Since embedded systems are memory constrained, it is
essential to evaluate the minimal RAM and ROM
requirements for the OS under consideration
67
Functional Requirements
3. Real-Time Capabilities:
It is not mandatory that the OS for all embedded systems
be Real Time; not all embedded OSs are 'Real-Time'
in behavior
The task/process scheduling policies play an important role
in the Real Time behavior of an OS
4. Kernel and Interrupt Latency:
The kernel of the OS may disable interrupts while executing
certain services and it may lead to interrupt latency
For an embedded system whose response requirements
are high, this latency should be minimal
68
Functional Requirements
5. Inter process Communication (IPC) and Task
Synchronization:
The implementation of IPC and Synchronization is OS kernel
dependent
6. Modularization Support:
Most of the OS’s provide a bunch of features
It is very useful if the OS supports modularization, wherein
the developer can choose the essential modules and
re-compile the OS image for functioning
69
Functional Requirements
7. Support for Networking and Communication:
The OS kernel may provide stack implementation and driver
support for a bunch of communication interfaces and
networking
Ensure that the OS under consideration provides support for
all the interfaces required by the embedded product
8. Development Language Support:
Certain OS’s include the run time libraries required for
running applications written in languages like JAVA and C++
The OS may include these components as built-in
component, if not , check the availability of the same from a
third party
70
Types of Multitasking
Depending on how the task/process execution switching
is implemented, multitasking is classified into
Co-operative Multitasking: Co-operative multitasking is the
most primitive form of multitasking in which a task/process
gets a chance to execute only when the currently executing
task/process voluntarily relinquishes the CPU
In this method, any task/process can avail the CPU for as much
time as it wants. Since this type of implementation depends on
the mercy of the tasks towards each other for getting the CPU time
for execution, it is known as co-operative multitasking. If the
currently executing task is non-cooperative, the other tasks
may have to wait for a long time to get the CPU
72
Types of Multitasking
Preemptive Multitasking: Preemptive multitasking
ensures that every task/process gets a chance to
execute
When and how much time a process gets is dependent
on the implementation of the preemptive scheduling
As the name indicates, in preemptive multitasking, the
currently running task/process is preempted to give a
chance to other tasks/process to execute. The
preemption of task may be based on time slots or
task/process priority
73
Types of Multitasking
Non-preemptive Multitasking: The process/task, which is
currently given the CPU time, is allowed to execute until it
terminates (enters the ‘Completed’ state) or enters the
‘Blocked/Wait’ state, waiting for an I/O
Co-operative and non-preemptive multitasking differ in
their behavior when the process is in the 'Blocked/Wait' state
In co-operative multitasking, the currently executing
process/task need not relinquish the CPU when it enters the
'Blocked/Wait' state, waiting for an I/O, or a shared resource
access or an event to occur whereas in non-preemptive
multitasking the currently executing task relinquishes the
CPU when it waits for an I/O
74
Task Scheduling
In a multitasking system, there should be some mechanism in
place to share the CPU among the different tasks and to decide
which process/task is to be executed at a given point of time
Determining which task/process is to be executed at a given
point of time is known as task/process scheduling. Task
scheduling forms the basis of multitasking
Scheduling policies form the guidelines for determining which
task is to be executed when
The scheduling policies are implemented in an algorithm and it
is run by the kernel as a service
The kernel service/application, which implements the
scheduling algorithm, is known as ‘Scheduler’
75
Task Scheduling
The task scheduling policy can be pre-emptive, non-preemptive
or cooperative
Depending on the scheduling policy the process scheduling
decision may take place when a process switches its state to
‘Ready’ state from ‘Running’ state
‘Blocked/Wait’ state from ‘Running’ state
‘Ready’ state from ‘Blocked/Wait’ state
‘Completed’ state
76
Task Scheduling - Scheduler Selection:
The selection of a scheduling criteria/algorithm should consider
CPU Utilization: The scheduling algorithm should always make
the CPU utilization high. CPU utilization is a direct measure of
how much percentage of the CPU is being utilized
Throughput: This gives an indication of the number of processes
executed per unit of time. The throughput for a good scheduler
should always be high
Turnaround Time: It is the amount of time taken by a process for
completing its execution. It includes the time spent by the
process for waiting for the main memory, time spent in the
ready queue, time spent on completing the I/O operations, and
the time spent in execution. The turnaround time should be a
minimum for a good scheduling algorithm
77
Task Scheduling - Scheduler Selection:
Waiting Time: It is the amount of time spent by a process in
the ‘Ready’ queue waiting to get the CPU time for execution.
The waiting time should be minimal for a good scheduling
algorithm
Response Time: It is the time elapsed between the submission
of a process and the first response. For a good scheduling
algorithm, the response time should be as low as possible
78
Task Scheduling - Queues
The various queues maintained by OS in association with CPU
scheduling are
Job Queue: Job queue contains all the processes in the system
Ready Queue: Contains all the processes, which are ready for
execution and waiting for CPU to get their turn for execution.
The Ready queue is empty when there is no process ready for
running
Device Queue: Contains the set of processes, which are waiting
for an I/O device
79
Non-preemptive scheduling – First Come
First Served (FCFS)/First In First Out (FIFO)
Scheduling:
Allocates CPU time to the processes based on the order in which they
enter the 'Ready' queue
The first entered process is serviced first
It is the same as any real world situation where queue systems are used;
e.g. ticketing
Drawbacks:
Favors monopoly of processes. A process which does not contain any I/O
operation continues its execution until it finishes its task
In general, FCFS favors CPU bound processes and I/O bound processes
may have to wait until the completion of CPU bound process, if the
currently executing process is a CPU bound process. This leads to poor
device utilization
The average waiting time is not minimal for FCFS scheduling algorithm
81
EXAMPLE
Three processes with process IDs P1, P2, P3 with
estimated completion times 10, 5, 7 milliseconds
respectively enter the ready queue together in the order
P1, P2, P3.
Calculate the waiting time and Turn Around Time (TAT) for
each process and the Average waiting time and Turn
Around Time (Assuming there is no I/O waiting for the
processes).
82
Solution
The sequence of execution of the processes by the CPU is:
P1 (0 to 10 ms), P2 (10 to 15 ms), P3 (15 to 22 ms)
83
Solution
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1
and P2)
Average waiting time
= (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (0+10+15)/3 = 25/3 = 8.33 milliseconds
84
Solution
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue +
Execution Time)
Turn Around Time (TAT) for P2 = 15 ms
Turn Around Time (TAT) for P3 = 22 ms
Average Turn Around Time
= (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (10+15+22)/3 = 47/3
= 15.66 milliseconds
85
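The same waiting time and TAT arithmetic can be cross-checked with a short C program; the burst times are hard-coded from this example, and all processes are assumed to arrive together at t = 0:

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {10, 5, 7};              /* P1, P2, P3 in arrival order  */
        int n = 3, start = 0;
        double wait_sum = 0, tat_sum = 0;

        for (int i = 0; i < n; i++) {
            int waiting = start;               /* time spent in 'Ready' queue  */
            int tat     = start + burst[i];    /* waiting + execution time     */
            printf("P%d: waiting=%d ms, TAT=%d ms\n", i + 1, waiting, tat);
            wait_sum += waiting;
            tat_sum  += tat;
            start    += burst[i];
        }
        printf("Average waiting=%.2f ms, Average TAT=%.2f ms\n",
               wait_sum / n, tat_sum / n);
        return 0;
    }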
Non-preemptive scheduling – Last Come
First Served (LCFS)/Last In First Out (LIFO)
Scheduling
Allocates CPU time to the processes based on the order in which they
are entered in the ‘Ready’ queue
The last entered process is serviced first
LCFS scheduling is also known as Last In First Out (LIFO) where the
process, which is put last into the ‘Ready’ queue, is serviced first
Drawbacks:
Favors monopoly of processes. A process which does not contain any I/O
operation continues its execution until it finishes its task
In general, LCFS favors CPU bound processes and I/O bound processes
may have to wait until the completion of CPU bound process, if the
currently executing process is a CPU bound process. This leads to poor
device utilization
The average waiting time is not minimal for LCFS scheduling algorithm
86
Example
Three processes with process IDs P1, P2, P3 with estimated completion
times 10, 5, 7 milliseconds respectively enter the ready queue together
in the order P1, P2, P3 (assume only P1 is present in the 'Ready' queue
when the scheduler picks it up, and P2, P3 entered the 'Ready' queue after
that). Now a new process P4 with estimated completion time 6 ms
enters the 'Ready' queue after 5 ms of scheduling P1.
Calculate the waiting time and Turn Around Time (TAT) for each process
and the Average waiting time and Turn Around Time (Assuming there is
no I/O waiting for the processes).Assume all the processes contain only
CPU operation and no I/O operations are involved
87
Solution
Initially there is only P1 available in the Ready queue and the
scheduling sequence will be P1, P3, P2. P4 enters the queue during the
execution of P1 and becomes the last process entered the ‘Ready’
queue. Now the order of execution changes to P1, P4, P3, and P2 as
given below.
88
The waiting time for all the processes are given as
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P4 = 5 ms (P4 starts executing after completing P1. But
P4 arrived after 5ms of execution of P1. Hence its waiting time =
Execution start time – Arrival Time = 10-5 = 5)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1
and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4
and P3)
Average waiting time
= (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4
= 11 milliseconds
89
Turn Around Time (TAT) for P1
= 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4
= 11 ms (Time spent in Ready Queue + Execution Time
= (Execution Start Time – Arrival Time) + Estimated Execution Time
= (10-5) + 6 = 5 +6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue +
Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue +
Execution Time)
Average Turn Around Time
= (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4 = 72/4 = 18 milliseconds
90
Non-preemptive scheduling –
Shortest Job First (SJF) Scheduling
Allocates CPU time to the processes based on the execution
completion time for tasks
The average waiting time for a given set of processes is minimal in SJF
scheduling
Optimal compared to other non-preemptive scheduling like FCFS
91
Drawbacks
A process whose estimated execution completion time is high may
not get a chance to execute if more and more processes with shorter
estimated execution times enter the 'Ready' queue before the
process with the longest estimated execution time starts its execution
May lead to the ‘Starvation’ of processes with high estimated
completion time
It is difficult to know in advance the next shortest process in the 'Ready'
queue for scheduling, since new processes with different estimated
execution times keep entering the 'Ready' queue at any point of time
92
Non-preemptive scheduling – Priority
based Scheduling
A priority, which may be unique or the same across tasks, is associated with each task
The priority of a task is expressed in different ways, like a priority
number, the time required to complete the execution etc
In number based priority assignment the priority is a number ranging
from 0 to the maximum priority supported by the OS. The maximum
level of priority is OS dependent.
Windows CE supports 256 levels of priority (0 to 255 priority numbers,
with 0 being the highest priority)
The priority is assigned to the task on creating it. It can also be changed
dynamically (If the Operating System supports this feature)
The non-preemptive priority based scheduler sorts the ‘Ready’ queue
based on the priority and picks the process with the highest level of
priority for execution
93
EXAMPLE
Three processes with process IDs P1, P2, P3 with estimated
completion times 10, 5, 7 milliseconds and priorities 0, 3, 2 (0 - highest
priority, 3 - lowest priority) respectively enter the ready queue
together.
Calculate the waiting time and Turn Around Time (TAT) for each
process and the Average waiting time and Turn Around Time
(Assuming there is no I/O waiting for the processes) in priority based
scheduling algorithm
94
Solution
The scheduler sorts the 'Ready' queue based on the priority and
schedules the process with the highest priority (P1 with priority
number 0) first, the next highest priority process (P3 with priority
number 2) second, and so on. The order in which the processes are
scheduled for execution is: P1 (0 to 10 ms), P3 (10 to 17 ms), P2 (17 to 22 ms)
95
The waiting time for all the processes are given as
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P3 = 10 ms (P3 starts executing after completing P1)
Waiting Time for P2 = 17 ms (P2 starts executing after completing P1
and P3)
Average waiting time
= (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P3+P2)) / 3
= (0+10+17)/3 = 27/3
= 9 milliseconds
96
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue +
Execution Time)
Turn Around Time (TAT) for P3 = 17 ms
Turn Around Time (TAT) for P2 = 22 ms
Average Turn Around Time= (Turn Around Time for all processes) / No.
of Processes
= (Turn Around Time for (P1+P3+P2)) / 3
= (10+17+22)/3 = 49/3
= 16.33 milliseconds
97
Preemptive scheduling
Employed in systems which implement the preemptive multitasking model
Every task in the ‘Ready’ queue gets a chance to execute. When and how
often each process gets a chance to execute (gets the CPU time) is
dependent on the type of preemptive scheduling algorithm used for
scheduling the processes
The scheduler can preempt (stop temporarily) the currently executing
task/process and select another task from the ‘Ready’ queue for execution
When to pre-empt a task and which task is to be picked up from the ‘Ready’
queue for execution after preempting the current task is purely dependent
on the scheduling algorithm
A task which is preempted by the scheduler is moved to the 'Ready' queue.
The act of moving a 'Running' process/task into the 'Ready' queue by the
scheduler, without the process requesting it, is known as 'Preemption'
Time-based preemption and priority-based preemption are the two
important approaches adopted in preemptive scheduling
98
Preemptive scheduling – Preemptive SJF
Scheduling/ Shortest Remaining Time
(SRT):
The non-preemptive SJF scheduling algorithm sorts the 'Ready' queue
only after the current process completes execution or enters a wait
state, whereas the preemptive SJF scheduling algorithm sorts the
'Ready' queue when a new process enters the 'Ready' queue and
checks whether the execution time of the new process is shorter
than the remaining execution time of the currently executing
process
If the execution time of the new process is less, the currently
executing process is preempted and the new process is scheduled for
execution
Always compares the execution completion time (i.e., the remaining
execution time) of a new process entering the
'Ready' queue with the remaining time for completion of the
currently executing process, and schedules the process with the
shortest remaining time for execution
99
EXAMPLE
Three processes with process IDs P1, P2, P3 with estimated
completion times 10, 5, 7 milliseconds respectively enter the ready
queue together. A new process P4 with estimated completion time
2 ms enters the 'Ready' queue after 2 ms.
Assume all the processes contain only CPU operation and no I/O
operations are involved
100
Solution
At the beginning, there are only three processes (P1, P2 and P3) available in
the ‘Ready’ queue and the SRT scheduler picks up the process with the
Shortest remaining time for execution completion (In this example P2 with
remaining time 5ms) for scheduling. Now process P4 with estimated
execution completion time 2ms enters the ‘Ready’ queue after 2ms of start
of execution of P2.
The processes are re-scheduled for execution in the following order:
P2 (0 to 2 ms), P4 (2 to 4 ms), P2 (4 to 7 ms), P3 (7 to 14 ms), P1 (14 to 24 ms)
101
The waiting time for all the processes are given as
Waiting Time for P2 = 0 ms + (4 -2) ms
= 2ms (P2 starts executing first and is interrupted by
P4 and has to wait till the completion of P4 to get the next CPU slot)
Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2 since the
execution time for completion of P4 (2ms) is less than that of the Remaining
time for execution completion of P2 (Here it is 3ms))
Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)
Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and
P3)
102
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds
Turn Around Time (TAT) for P2
= 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4
= 2 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time –
Arrival Time) + Estimated Execution Time = (2-2) + 2)
Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of
Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4
= 11.75 milliseconds
103
104
Preemptive scheduling – Round Robin
(RR) Scheduling:
Each process in the ‘Ready’ queue is executed for a pre-defined time slot
The execution starts with picking up the first process in the ‘Ready’ queue. It
is executed for a pre-defined time
When the pre-defined time elapses or the process completes (before the
predefined time slice), the next process in the ‘Ready’ queue is selected for
execution.
This is repeated for all the processes in the ‘Ready’ queue
Once each process in the ‘Ready’ queue is executed for the pre-defined time
period, the scheduler comes back and picks the first process in the ‘Ready’
queue again for execution
Round Robin scheduling is similar to the FCFS scheduling and the only
difference is that a time slice based preemption is added to switch the
execution between the processes in the ‘Ready’ queue
105
Round Robin Scheduling
106
EXAMPLE
Three processes with process IDs P1, P2, P3 with estimated
completion times 6, 4, 2 milliseconds respectively enter the ready
queue together in the order P1, P2, P3.
Calculate the waiting time and Turn Around Time (TAT) for each
process and the Average waiting time and Turn Around Time
(Assuming there is no I/O waiting for the processes) in RR algorithm
with Time slice= 2ms
107
Solution
The scheduler sorts the ‘Ready’ queue based on the FCFS policy and picks up
the first process P1 from the ‘Ready’ queue and executes it for the time slice
2 ms. When the time slice expires, P1 is preempted and P2 is scheduled for
execution.
The Time slice expires after 2ms of execution of P2. Now P2 is preempted
and P3 is picked up for execution. P3 completes its execution within the time
slice and the scheduler picks P1 again for execution for the next time slice.
This procedure is repeated till all the processes are serviced. The order
in which the processes are scheduled for execution is: P1 (0 to 2 ms),
P2 (2 to 4 ms), P3 (4 to 6 ms), P1 (6 to 8 ms), P2 (8 to 10 ms), P1 (10 to 12 ms)
108
The waiting time for all the processes are given as
Waiting Time for P1 = 0 + (6-2) + (10-8)
= 0+4+2= 6ms (P1 starts executing first and waits for
two time slices to get execution back and again 1 time slice for getting CPU
time)
Waiting Time for P2 = (2-0) + (8-4)
= 2+4 = 6ms (P2 starts executing after P1 executes for 1
time slice and waits for two time slices to get the CPU time)
Waiting Time for P3 = (4 -0) = 4ms (P3 starts executing after completing the
first time slices for P1 and P2 and completes its execution in a single time
slice.)
Average waiting time
= (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (6+6+4)/3 = 16/3 = 5.33 milliseconds
109
Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue +
Execution Time)
Turn Around Time (TAT) for P2 = 10 ms
Turn Around Time (TAT) for P3 = 6 ms
Average Turn Around Time
= (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (12+10+6)/3 = 28/3
= 9.33 milliseconds.
110
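A small C simulation of this Round Robin example (burst times and the 2 ms slice hard-coded from the problem statement) reproduces the same waiting times and TATs:

    #include <stdio.h>

    int main(void)
    {
        int burst[] = {6, 4, 2};               /* P1, P2, P3                   */
        int rem[]   = {6, 4, 2};               /* remaining execution time     */
        int finish[3], n = 3, slice = 2, t = 0, left = n;

        while (left > 0)                       /* cycle through 'Ready' queue  */
            for (int i = 0; i < n; i++) {
                if (rem[i] == 0) continue;     /* process already finished     */
                int run = rem[i] < slice ? rem[i] : slice;
                t += run;                      /* run for (up to) one slice    */
                rem[i] -= run;
                if (rem[i] == 0) { finish[i] = t; left--; }
            }

        /* All processes arrive at t = 0, so TAT = finish time and
           waiting = TAT - burst time */
        for (int i = 0; i < n; i++)
            printf("P%d: TAT=%d ms, waiting=%d ms\n",
                   i + 1, finish[i], finish[i] - burst[i]);
        return 0;
    }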
Preemptive scheduling – Priority
based Scheduling
Same as that of the non-preemptive priority based scheduling except
for the switching of execution between tasks
In preemptive priority based scheduling, any high priority process
entering the ‘Ready’ queue is immediately scheduled for execution
whereas in the non-preemptive scheduling any high priority process
entering the ‘Ready’ queue is scheduled only after the currently
executing process completes its execution or only when it
voluntarily releases the CPU
The priority of a task/process in preemptive priority based scheduling
is indicated in the same way as that of the mechanisms adopted for
non preemptive multitasking
111
EXAMPLE
Three processes with process IDs P1, P2, P3 with estimated completion times
10, 5, 7 milliseconds and priorities 1, 3, 2 (0 - highest priority, 3 - lowest
priority) respectively enter the ready queue together. A new process P4 with
estimated completion time 6 ms and priority 0 enters the 'Ready' queue
after 5 ms of start of execution of P1.
Assume all the processes contain only CPU operation and no I/O operations
are involved
112
Solution
At the beginning, there are only three processes (P1, P2 and P3) available in
the ‘Ready’ queue and the scheduler picks up the process with the highest
priority (In this example P1 with priority 1) for scheduling
Now process P4 with estimated execution completion time 6 ms and priority
0 enters the 'Ready' queue after 5 ms of start of execution of P1. The
processes are re-scheduled for execution in the following order: P1 (0 to 5 ms),
P4 (5 to 11 ms), P1 (11 to 16 ms), P3 (16 to 23 ms), P2 (23 to 28 ms)
113
The waiting time for all the processes are given as
Waiting Time for P1 = 0 + (11-5) = 0+6 =6 ms (P1 starts executing first and
gets Preempted by P4 after 5ms and again gets the CPU time after
completion of P4)
Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the
‘Ready’ queue, by preempting P1)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and
P3)
Average waiting time
= (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (6 + 0 + 16 + 23)/4 = 45/4
= 11.25 milliseconds
114
Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue +
Execution Time)
Turn Around Time (TAT) for P4 = 6ms (Time spent in Ready Queue +
Execution Time = (Execution Start Time – Arrival Time) + Estimated Execution
Time = (5-5) + 6 = 0 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue +
Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue +
Execution Time)
Average Turn Around Time
= (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (16+6+23+28)/4 = 73/4
= 18.25 milliseconds
115
Task Communication
In a multitasking system, multiple tasks/processes run concurrently and the
processes may or may not interact with each other
Based on the degree of interaction, the processes running on an OS are
classified as:
Co-operating Processes: In the co-operating interaction model one
process requires the inputs from other processes to complete its
execution
Competing Processes: The competing processes do not share
anything among themselves but they share the system resources
The competing processes compete for the system resources such as
file, display device, etc
116
Co-operating Processes
Co-operating processes exchange information and communicate
through the following methods:
Co-operation through Sharing: The co-operating processes exchange
data through some shared resources
Co-operation through Communication: No data is shared between the
processes. But they communicate for synchronization
The mechanism through which processes/tasks communicate with each other is
known as Inter Process/Task Communication (IPC)
Inter Process Communication is essential for process co-ordination
The various types of Inter Process Communication (IPC) mechanisms
adopted by processes are kernel (Operating System) dependent
117
IPC Mechanisms: Shared Memory
Shared Memory: The processes share some area of the memory to communicate
among themselves
Information to be communicated by the process is written to the shared memory area
Other processes which require this information can read the same from the shared
memory area
It is the same as the real world example where a 'Notice Board' is used by a
corporation to publish public information among its employees
The implementation of shared memory concept is kernel dependent. Different
mechanisms are adopted by different kernels for implementing this
118
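On a POSIX system, one concrete way to realize the shared memory concept is shm_open() plus mmap(); the following writer-side sketch is illustrative only (the name "/notice_board" is made up, error checks are omitted for brevity, and Linux needs -lrt when linking):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = shm_open("/notice_board", O_CREAT | O_RDWR, 0666);
        ftruncate(fd, 4096);                      /* size the shared region     */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        strcpy(p, "hello from the writer");       /* publish on the 'board'     */
        munmap(p, 4096);                          /* a reader maps the same     */
        close(fd);                                /* name to see the data       */
        return 0;
    }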
Different mechanisms to implement
shared memory
Pipes: ‘Pipe’ is a section of the shared memory used by processes for communicating
A process which creates a pipe is known as a pipe server, and a process which
connects to a pipe is known as a pipe client
A pipe can be considered as a conduit for information flow and has two conceptual
ends
A unidirectional pipe allows the process connecting at one end of the pipe to write to
the pipe and the process connected at the other end of the pipe to read the data,
whereas a bi-directional pipe allows both reading and writing at both ends
119
Pipes
The implementation of ‘Pipes’ is also OS dependent. Microsoft®
Windows Desktop Operating Systems support two types of ‘Pipes’ for
Inter Process Communication:
Anonymous Pipes: The anonymous pipes are unnamed, unidirectional pipes
used for data transfer between two processes
Named Pipes: Named pipe is a named, unidirectional or bi-directional pipe for
data exchange between processes
Like anonymous pipes, the process which creates the named pipe is known as
pipe server. A process which connects to the named pipe is known as pipe client
With named pipes, any process can act as both client and server allowing point-
to-point communication
Named pipes can be used for communicating between processes running on the
same machine or between processes running on different machines connected
to a network
120
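A minimal sketch of an anonymous, unidirectional POSIX pipe between a parent (pipe server, writing) and its child (pipe client, reading); error handling is omitted for brevity:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int  fds[2];                 /* fds[0] = read end, fds[1] = write end   */
        char buf[32] = {0};

        pipe(fds);                   /* create the pipe before forking          */
        if (fork() == 0) {           /* child: reads from the pipe              */
            read(fds[0], buf, sizeof buf - 1);
            printf("child read: %s\n", buf);
        } else {                     /* parent: writes into the pipe            */
            write(fds[1], "ping", 5);
        }
        return 0;
    }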
Memory Mapped Objects
Memory mapped object is a shared memory technique adopted by certain
Real-Time Operating Systems for allocating a shared block of memory which
can be accessed by multiple processes simultaneously
A mapping object is created and physical storage for it is reserved and
committed
A process can map the entire committed physical area or a block of it to its
virtual address space
All read and write operations to this virtual address space by a process are
directed to its committed physical area
Any process which wants to share data with other processes can map the
physical memory area of the mapped object to its virtual memory space and
use it for sharing the data
121
IPC Mechanisms: Message Passing
Message Passing: Message passing is an asynchronous information
exchange mechanism used for Inter Process/Thread Communication
The major difference between the shared memory and message passing
techniques is that through shared memory lots of data can be
shared, whereas only a limited amount of info/data is passed through
message passing
Also message passing is relatively fast and free from the
synchronization overheads compared to shared memory
122
Classification of Message Passing
Message Queue: Usually the process which
wants to talk to another process posts the
message to a First-In-First-Out (FIFO) queue
called ‘Message queue’, which stores the
messages temporarily in a system defined
memory object, to pass it to the desired
process
Messages are sent and received through send
(Name of the process to which the message is to be
sent, message) and receive (Name of the process
from which the message is to be received, message)
methods
The messages are exchanged through a message
queue
The implementation of the message queue, send
and receive methods are OS kernel dependent
123
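Assuming a POSIX real-time platform, the send/receive model above maps naturally onto the POSIX message queue API; a self-contained sketch (the queue name and sizes are illustrative; link with -lrt on Linux):

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t q = mq_open("/mq_demo", O_CREAT | O_RDWR, 0666, &attr);
        char buf[64];

        mq_send(q, "hello", 6, 0);             /* post to the FIFO queue       */
        mq_receive(q, buf, sizeof buf, NULL);  /* fetch the oldest message     */
        printf("received: %s\n", buf);
        mq_close(q);
        mq_unlink("/mq_demo");                 /* remove the kernel object     */
        return 0;
    }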
Classification of Message Passing
Mailbox: Mailbox is an alternate form of ‘Message
queues’ and it is used in certain Real-Time
Operating Systems for IPC
Mailbox technique for IPC in RTOS is usually used
for one way messaging
The task/thread which wants to send a message to
other tasks/threads creates a mailbox for posting
the message
The threads which are interested in receiving the
messages posted to the mailbox by the mailbox
creator thread can subscribe to the mailbox
The thread which creates the mailbox is known as
‘mailbox server’ and the threads which subscribe
to the mailbox are known as ‘mailbox clients’
124
Classification of Message Passing
The mailbox server posts messages to the
mailbox and notifies it to the clients which are
subscribed to the mailbox
The clients read the message from the mailbox
on receiving the notification
The mailbox creation, subscription, message
reading and writing are achieved through OS
kernel provided API calls
125
Classification of Message Passing
Signalling: Signalling is a primitive way of communication between
processes/threads
Signals are used for asynchronous notifications where one
process/thread fires a signal, indicating the occurrence of a scenario
which the other process(es)/thread(s) is waiting for
Signals are not queued and they do not carry any data
126
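A minimal sketch with the POSIX signal API (here the process signals itself with SIGUSR1 for brevity; a second process would pass the receiver's PID to kill()). Note the handler receives no data, matching the point above:

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t event_occurred = 0;

    static void handler(int signo)
    {
        event_occurred = 1;          /* just note that the event happened */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = handler;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);   /* register interest in the signal */

        kill(getpid(), SIGUSR1);         /* fire the signal (no payload)    */

        while (!event_occurred)
            pause();                     /* sleep until a signal arrives    */
        printf("event notified via SIGUSR1\n");
        return 0;
    }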
IPC Mechanisms: Remote Procedure Call (RPC) and Sockets
RPC is the Inter Process Communication (IPC) mechanism used by a
process to call a procedure of another process running on the same
CPU or on a different CPU which is interconnected in a network
In object oriented language terminology RPC is also known as
Remote Invocation or Remote Method Invocation (RMI)
RPC is mainly used for distributed applications like client server
applications
With RPC it is possible to communicate over a heterogeneous network
(i.e. a network where the client and server applications run on
different operating systems)
The CPU/process containing the procedure which needs to be invoked
remotely is known as server
The CPU/process which initiates an RPC request is known as client
127
IPC Mechanisms: Remote Procedure Call (RPC) and Sockets
128
IPC Mechanisms: Remote Procedure Call (RPC) and Sockets
It is possible to implement RPC communication with different invocation
interfaces
In order to make RPC communication compatible across all platforms, it
should adhere to certain standard formats
Interface Definition Language (IDL) defines the interfaces for RPC. Microsoft
Interface Definition Language (MIDL) is the IDL implementation from Microsoft
for all Microsoft platforms
The RPC communication can be either Synchronous (Blocking) or Asynchronous
(Non-blocking)
In the Synchronous communication, the process which calls the remote
procedure is blocked until it receives a response back from the other process
In asynchronous RPC calls, the calling process continues its execution while the
remote process performs the execution of the procedure
The result from the remote procedure is returned back to the caller through
mechanisms like callback functions
129
IPC Mechanisms: Remote Procedure Call (RPC) and Sockets
On security front, RPC employs authentication mechanisms to protect the
systems against vulnerabilities
The client applications (processes) should authenticate themselves with the
server for getting access
Authentication mechanisms like IDs and cryptographic techniques (e.g. DES
and 3DES, which are symmetric-key ciphers, or public key cryptography) are
used by the client for authentication. Without authentication, any client
can access the remote procedure. This may lead to potential security risks
Sockets are used for RPC communication. Socket is a logical endpoint in a two-
way communication link between two applications running on a network
A port number is associated with a socket so that the network layer of the
communication channel can deliver the data to the designated application
Sockets are of different types, namely, Internet sockets (INET), UNIX sockets, etc.
The INET socket works on the internet communication protocol (IP). TCP, UDP, etc. are
the communication protocols used by INET sockets
130
Classification of INET Sockets
INET sockets are classified into:
1. Stream sockets
2. Datagram sockets
Stream sockets are connection oriented and use TCP to establish a
reliable connection
Datagram sockets are connectionless and use UDP. UDP is unreliable
when compared to TCP
131
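A minimal sketch of a stream (TCP) socket client in C; the loopback address 127.0.0.1 and port 8080 are illustrative assumptions:

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* AF_INET + SOCK_STREAM selects a connection-oriented TCP socket */
        int sock = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in server = { 0 };
        server.sin_family = AF_INET;
        server.sin_port   = htons(8080);           /* designated port      */
        inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

        /* TCP performs a handshake, giving a reliable connection */
        connect(sock, (struct sockaddr *)&server, sizeof(server));
        send(sock, "hello", 5, 0);

        close(sock);
        return 0;
    }

A datagram socket would instead use SOCK_DGRAM and sendto()/recvfrom(), with no connection establishment.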
TASK SYNCHRONISATION
In a multitasking environment, multiple processes run concurrently (in pseudo
parallelism) and share the system resources
Each process runs within its own boundary and communicates with others
through IPC mechanisms such as shared memory and shared variables
Imagine a situation where two processes try to access display hardware
connected to the system, or where one process tries to write to a shared
memory location while another process is trying to read from it. What could
be the result in these scenarios? Obviously unexpected results. How can
these issues be addressed? The solution is to make each process aware of
the access of a shared resource, either directly or indirectly
The act of making processes aware of the access of shared resources by each
process to avoid conflicts is known as ‘Task/ Process Synchronisation’
Various synchronisation issues may arise in a multitasking environment if
processes are not synchronized properly
132
Task Communication/Synchronisation
Issues
133
Racing
Racing or Race condition is the situation in which multiple processes
compete (race) with each other to access and manipulate shared data
concurrently
In a race condition the final value of the shared data depends on the
process which acts on the data last
Example: two threads incrementing a shared counter (see the figure and
the code sketch below)
134
Racing condition
135
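A minimal sketch of the race, assuming POSIX threads (names are illustrative): counter++ compiles to a read-modify-write sequence, so increments from the two threads can interleave and be lost, making the final value timing dependent:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;             /* shared data */

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                   /* read, add 1, write back - not atomic */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* expected 2000000, usually less */
        return 0;
    }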
Deadlock
A race condition produces incorrect results whereas a deadlock condition
creates a situation where none of the processes are able to make any
progress in their execution, resulting in a set of deadlocked processes
A situation very similar to our traffic jam issues in a junction
136
Deadlock
137
A deadlock can arise when the following four conditions hold simultaneously
Mutual Exclusion:
The criterion that only one process can hold a resource at a time, meaning
processes should access shared resources with mutual exclusion
Typical example is the accessing of display hardware in an embedded device
Hold and Wait:
The condition in which a process holds a shared resource (by acquiring the
lock controlling the shared access) while waiting for additional resources
held by other processes
No Resource Preemption:
The criterion that the operating system cannot take back a resource from a
process which is currently holding it; the resource can only be released
voluntarily by the process holding it
138
Circular Wait
A process is waiting for a resource which is currently held by another
process which in turn is waiting for a resource held by the first process
In general, there exists a set of waiting processes P0, P1, …, Pn such that
P0 is waiting for a resource held by P1, P1 is waiting for a resource held
by P2, …, Pn−1 is waiting for a resource held by Pn, and Pn is waiting for
a resource held by P0
This forms a circular wait queue
139
Deadlock handling – Techniques to detect
and prevent deadlock conditions
A smart OS may foresee the deadlock condition and will act
proactively to avoid such a situation
140
Ignore Deadlocks
Always assume that the system design is deadlock free
This is acceptable when the cost of removing a deadlock is large compared
to the chance of a deadlock happening
UNIX is an example of an OS following this principle
A life critical system cannot pretend that it is deadlock free for any reason
141
Detect and Recover
This is similar to the deadlock condition that
may arise at a traffic junction: when the
vehicles from different directions compete to
cross the junction, a deadlock (traffic jam)
results
Once a deadlock (traffic jam) has occurred at
the junction, the only solution is to back up the
vehicles from one direction and allow the
vehicles from the opposite direction to cross the
junction
If the traffic is too high, lots of vehicles may
have to be backed up to resolve the traffic jam
This technique is also known as ‘back up cars’
technique
142
Detect and Recover
Operating systems keep a resource graph in their memory
The resource graph is updated on each resource request and release
A deadlock condition can be detected by analysing the resource graph by
graph analyser algorithms
Once a deadlock condition is detected, the system can terminate a process
or preempt the resource to break the deadlocking cycle
143
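The detection step can be sketched as a cycle search on a wait-for graph: an edge i → j means process i waits for a resource held by process j, and any cycle indicates deadlock. The adjacency matrix and process count below are illustrative, not an actual OS data structure:

    #include <stdio.h>

    #define N 4
    static int waits_for[N][N];          /* waits_for[i][j] = 1: i waits on j */

    /* depth-first search; a back edge to a node on the current path = cycle */
    static int dfs(int node, int *visiting, int *done)
    {
        visiting[node] = 1;
        for (int next = 0; next < N; next++) {
            if (!waits_for[node][next]) continue;
            if (visiting[next]) return 1;                 /* cycle found    */
            if (!done[next] && dfs(next, visiting, done)) return 1;
        }
        visiting[node] = 0;
        done[node] = 1;
        return 0;
    }

    int main(void)
    {
        /* P0 waits on P1, P1 waits on P2, P2 waits on P0: circular wait */
        waits_for[0][1] = waits_for[1][2] = waits_for[2][0] = 1;

        int visiting[N] = { 0 }, done[N] = { 0 };
        for (int p = 0; p < N; p++)
            if (!done[p] && dfs(p, visiting, done)) {
                printf("deadlock detected\n");
                return 0;
            }
        printf("no deadlock\n");
        return 0;
    }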
Avoid Deadlocks
Deadlock is avoided by the careful resource allocation techniques by the
Operating System
It is similar to the traffic light mechanism at junctions to avoid the traffic
jams
144
Prevent Deadlocks
Prevent the deadlock condition by negating one of the four conditions
favouring the deadlock situation
1. Ensure that a process does not hold any other resources when it requests
a resource
This can be achieved by implementing the following set of rules/guidelines
in allocating resources to processes
i. A process must request all its required resources, and the resources
should be allocated before the process begins its execution
ii. Grant resource allocation requests from processes only if the process
does not currently hold a resource
145
Deadlock handling – Techniques to detect
and prevent deadlock conditions
2. Ensure that resource preemption (resource releasing) is possible at the
operating system level. This can be achieved by implementing the
following set of rules/guidelines in resource allocation and releasing
i. Release all the resources currently held by a process if a request made
by the process for a new resource cannot be fulfilled immediately
ii. Add the resources which are preempted (released) to a resource list
describing the resources which the process requires to complete its
execution
iii. Reschedule the process for execution only when the process gets both
its old resources and the new resource which it requested
Imposing these criteria may introduce negative impacts like low
resource utilization and starvation of processes
146
Deadlock handling – Techniques to detect
and prevent deadlock conditions
Livelock: The livelock condition is similar to the deadlock condition
except that a process in livelock changes its state with time.
While in deadlock a process enters a wait state for a resource and
continues in that state forever without making any progress in its
execution, in a livelock condition a process always does something but is
unable to make any progress towards completing its execution
The livelock condition is better explained with the real world example, two
people attempting to cross each other in a narrow corridor
Both the persons move towards each side of the corridor to allow the
opposite person to cross. Since the corridor is narrow, none of them are
able to cross each other. Here both of the persons perform some action but
still they are unable to achieve their target, cross each other
147
Deadlock handling – Techniques to detect
and prevent deadlock conditions
Starvation: In the multitasking context, starvation is the condition in
which a process does not get the resources required to continue its
execution for a long time
As time progresses the process starves for resources. Starvation may arise
due to various conditions, like being a byproduct of deadlock prevention
measures, or scheduling policies favouring high priority tasks and tasks
with the shortest execution time, etc.
148
The Dining Philosophers’ Problem
Five philosophers (It can be ‘n’. The number 5 is taken for illustration) are
sitting around a round table, involved in eating and brainstorming
At any point of time each philosopher will be in any one of the three states:
eating, hungry or brainstorming. (While eating the philosopher is not
involved in brainstorming and while brainstorming the philosopher is not
involved in eating)
For eating, each philosopher requires 2 forks. There are only 5 forks
available on the dining table (‘n’ for ‘n’ philosophers) and they
are arranged such that there is one fork in between every two philosophers
The philosopher can only use the forks on his/her immediate left and right,
in the order: pick up the left fork first and then the right fork
Analyse the situation and explain the possible outcomes of this scenario
149
The Dining Philosophers’ Problem
150
Various scenarios
151
Scenario 1
All the philosophers engage in brainstorming together
and try to eat together
Each philosopher picks up the left fork and is unable to
proceed since two forks are required for eating the
spaghetti present in the plate
Philosopher 1 thinks that Philosopher 2 sitting to the
right of him/her will put the fork down and waits for it
Philosopher 2 thinks that Philosopher 3 sitting to the
right of him/her will put the fork down and waits for it,
and so on
This forms a circular chain of un-granted requests
If the philosophers continue in this state waiting for
the fork from the philosopher sitting to the right of
each, they will not make any progress in eating and
this will result in starvation of the philosophers and
deadlock
152
Scenario 2
All the philosophers start brainstorming
together
One of the philosophers becomes hungry and he/she
picks up the left fork
When the philosopher is about to pick up the
right fork, the philosopher sitting to his/her right
also becomes hungry and tries to grab his/her own
left fork, which is the right fork of the first
philosopher, resulting in a ‘Race condition’
153
Scenario 3
All the philosophers engage in brainstorming together
and try to eat together
Each philosopher picks up the left fork and is unable to
proceed, since two forks are required for eating the
spaghetti present in the plate
Each of them anticipates that the adjacently sitting
philosopher will put his/her fork down and waits for a
fixed duration and after this puts the fork down
Each of them again tries to lift the fork after a fixed
duration of time. Since all philosophers are trying to lift
the fork at the same time, none of them will be able to
grab two forks
This condition leads to livelock and starvation of the
philosophers, where each philosopher tries to do
something but is unable to make any progress in
achieving the target
154
Solution
Solution 1: Imposing rules on how the philosophers access the forks, like
The philosopher should put down the fork he/she already has in hand (left fork)
after waiting for a fixed duration for the second fork (right fork), and should wait
for a fixed time before making the next attempt
This solution works fine to some extent, but if all the philosophers try to lift the
forks at the same time, a livelock situation results
155
Solution
Solution 2: Each philosopher acquires a semaphore (mutex) before picking up any fork
When a philosopher feels hungry he/she checks whether the philosophers sitting to
his/her left and right are already using the forks, by checking the state of the
associated semaphores
If the forks are in use by the neighbouring philosophers, the philosopher waits till the
forks are available
A philosopher when finished eating puts the forks down and informs the philosophers
sitting to his/her left and right, who are hungry (waiting for the forks), by signalling
the semaphores associated with the forks
In the operating system context, the dining philosophers represent the processes and
forks represent the resources
The dining philosophers’ problem is an analogy of processes competing for shared
resources and the different problems like racing, deadlock, starvation and livelock
arising from the competition
156
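A hedged sketch of the idea with POSIX threads, using one mutex (binary semaphore) per fork. For brevity it breaks the circular wait by having each philosopher pick up the lower-numbered fork first (resource ordering), a classic variant that differs slightly from Solution 2 above:

    #include <pthread.h>
    #include <stdio.h>

    #define N 5
    static pthread_mutex_t fork_mtx[N];   /* one binary semaphore per fork */

    static void *philosopher(void *arg)
    {
        long id = (long)arg;
        int left = id, right = (id + 1) % N;
        int first  = left < right ? left  : right;   /* lower-numbered fork  */
        int second = left < right ? right : left;

        for (int round = 0; round < 3; round++) {
            /* brainstorming ... then hungry */
            pthread_mutex_lock(&fork_mtx[first]);    /* acquire both forks   */
            pthread_mutex_lock(&fork_mtx[second]);
            printf("philosopher %ld eating\n", id);  /* eating               */
            pthread_mutex_unlock(&fork_mtx[second]); /* put the forks down,  */
            pthread_mutex_unlock(&fork_mtx[first]);  /* waking any waiter    */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[N];
        for (int i = 0; i < N; i++) pthread_mutex_init(&fork_mtx[i], NULL);
        for (long i = 0; i < N; i++)
            pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }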
Producer-Consumer/ Bounded Buffer
Problem
Producer-Consumer problem is a common data sharing problem where two
processes concurrently access a shared buffer with fixed size
A thread/process which produces data is called ‘Producer thread/process’
and a thread/process which consumes the data produced by a producer
thread/process is known as ‘Consumer thread/process’
Imagine a situation where the producer thread keeps on producing data
and puts it into the buffer and the consumer thread keeps on consuming
the data from the buffer and there is no synchronization between the two
The producer may produce data at a faster rate than the rate at which it is
consumed by the consumer. This will lead to ‘buffer overrun’, where the
producer tries to put data into a full buffer
If the consumer consumes data at a faster rate than the rate at which it is
produced by the producer, it will lead to the situation ‘buffer under-run’ in
which the consumer tries to read from an empty buffer
Both of these conditions will lead to inaccurate data and data loss
157
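The classic fix is the bounded-buffer solution with two counting semaphores; a minimal sketch with POSIX semaphores and threads (buffer size and item counts are illustrative): 'empty' blocks the producer on a full buffer (no overrun) and 'filled' blocks the consumer on an empty buffer (no under-run):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define SIZE 8
    static int buffer[SIZE];
    static int in = 0, out = 0;
    static sem_t empty, filled;
    static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

    static void *producer(void *arg)
    {
        for (int item = 0; item < 32; item++) {
            sem_wait(&empty);                 /* wait for a free slot      */
            pthread_mutex_lock(&mtx);
            buffer[in] = item; in = (in + 1) % SIZE;
            pthread_mutex_unlock(&mtx);
            sem_post(&filled);                /* announce one more item    */
        }
        return NULL;
    }

    static void *consumer(void *arg)
    {
        for (int i = 0; i < 32; i++) {
            sem_wait(&filled);                /* wait for an item          */
            pthread_mutex_lock(&mtx);
            int item = buffer[out]; out = (out + 1) % SIZE;
            pthread_mutex_unlock(&mtx);
            sem_post(&empty);                 /* announce one free slot    */
            printf("consumed %d\n", item);
        }
        return NULL;
    }

    int main(void)
    {
        sem_init(&empty, 0, SIZE);            /* SIZE free slots initially */
        sem_init(&filled, 0, 0);              /* no items initially        */
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL); pthread_join(c, NULL);
        return 0;
    }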
Readers-Writers Problem
The Readers-Writers problem is a common issue observed in processes
competing for limited shared resources
The Readers-Writers problem is characterised by multiple processes trying
to read and write shared data concurrently
A typical real-world example for the Readers-Writers problem is the banking
system where one process tries to read the account information like
available balance and the other process tries to update the available
balance for that account
This may result in inconsistent results
If multiple processes read shared data concurrently it may not
create any problem, whereas when multiple processes write and read
concurrently it will definitely create inconsistent results
Proper synchronisation techniques should be applied to avoid the readers-
writers problem
158
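One common synchronisation technique is a readers-writer lock; a minimal sketch with POSIX rwlocks (the account balance is illustrative): many readers may hold the lock at once, but a writer gets exclusive access, so the balance is never seen half-updated:

    #include <pthread.h>
    #include <stdio.h>

    static long balance = 1000;              /* shared account data */
    static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

    static void *reader(void *arg)
    {
        pthread_rwlock_rdlock(&rw);          /* shared lock: readers coexist   */
        printf("balance = %ld\n", balance);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    static void *writer(void *arg)
    {
        pthread_rwlock_wrlock(&rw);          /* exclusive lock: no readers now */
        balance += 500;                      /* update cannot be half-seen     */
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void)
    {
        pthread_t r1, r2, w;
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&w,  NULL, writer, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(r1, NULL); pthread_join(w, NULL); pthread_join(r2, NULL);
        return 0;
    }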
Priority Inversion
Priority inversion is the byproduct of the combination of blocking based
(lock based) process synchronization and pre-emptive priority scheduling
‘Priority inversion’ is the condition in which a high priority task needs to
wait for a low priority task to release a resource which is shared between
the high priority task and the low priority task, while a medium priority
task which doesn’t require the shared resource continues its execution by
preempting the low priority task
159
Priority Inversion
160
Priority Inversion
Priority inversion may be sporadic in nature but can lead to potential
damages as a result of missing critical deadlines
Literally speaking, priority inversion ‘inverts’ the priority of a high priority
task with that of a low priority task
Proper workaround mechanism should be adopted for handling the priority
inversion problem
161
Priority Inheritance
A low-priority task that is currently accessing (by holding the lock) a shared
resource requested by a high-priority task temporarily ‘inherits’ the priority
of that high-priority task, from the moment the high-priority task raises the
request
Boosting the priority of the low priority task to that of the task
requesting the shared resource prevents the low priority task from being
preempted by other tasks whose priority lies below that of the requesting
task, and thereby reduces the delay the high priority task experiences in
waiting for the resource
The priority of the low priority task which is temporarily boosted to high is
brought to the original value when it releases the shared resource
162
Priority Inheritance
163
Priority Inheritance
Priority inheritance is only a workaround; it does not eliminate the delay
the high priority task experiences in waiting for the resource held by the
low priority task
It merely helps the low priority task to continue its execution and
release the shared resource as soon as possible
The moment the low priority task releases the shared resource, the high
priority task preempts the low priority task and grabs the CPU
Priority inheritance handles priority inversion at the cost of run-time
overhead at the scheduler
It imposes the overhead of checking the priorities of all tasks which try
to access shared resources and adjusting the priorities dynamically
164
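Where the pthreads implementation supports it, priority inheritance is exposed as a mutex attribute; a minimal sketch (the lock name is illustrative):

    #include <pthread.h>

    pthread_mutex_t shared_lock;

    void init_pi_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* request the priority-inheritance protocol: while a higher
           priority task blocks on this mutex, the holder's priority is
           temporarily boosted to that of the blocked task */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&shared_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }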
Priority Ceiling
In ‘Priority Ceiling’, a priority is associated with each shared resource
The priority associated with each resource is the priority of the highest
priority task which uses the shared resource
This priority level is called ‘ceiling priority’. Whenever a task accesses a
shared resource, the scheduler elevates the priority of the task to that of
the ceiling priority of the resource
If the task which accesses the shared resource is a low priority task, its
priority is temporarily boosted to the priority of the highest priority
task with which the resource is shared
This eliminates the pre-emption of the task by other medium priority tasks
leading to priority inversion
The priority of the task is brought back to the original level once the task
completes the accessing of the shared resource
165
Priority Ceiling
‘Priority Ceiling’ brings the added advantage of sharing resources without
the need for synchronisation techniques like locks
Since the priority of a task accessing a shared resource is boosted to the
highest priority among the tasks sharing that resource, concurrent
access to the shared resource is automatically handled
Another advantage of ‘Priority Ceiling’ technique is that all the overheads
are at compile time instead of run-time
166
Priority Ceiling
167
Priority Ceiling
The biggest drawback of ‘Priority Ceiling’ is that it may produce hidden
priority inversion
With the ‘Priority Ceiling’ technique, the priority of a task is always
elevated, regardless of whether another task wants the shared resource
This unnecessary priority elevation always boosts the priority of a low
priority task to that of the highest priority task among which the resource
is shared, and other tasks with priorities higher than that of the low
priority task are not allowed to preempt it when it is accessing a
shared resource
This always gives the low priority task the luxury of running at high priority
when accessing shared resources
168
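For comparison with the priority-inheritance sketch above, POSIX exposes the ceiling protocol as PTHREAD_PRIO_PROTECT; the ceiling value 80 below is an illustrative assumption and should be the priority of the highest priority task that shares the resource:

    #include <pthread.h>

    pthread_mutex_t ceiling_lock;

    void init_ceiling_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        /* any task locking this mutex runs at the ceiling priority */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_PROTECT);
        pthread_mutexattr_setprioceiling(&attr, 80);   /* ceiling priority */
        pthread_mutex_init(&ceiling_lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }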
Task Synchronization Techniques
Process/Task synchronization is essential for
1. Avoiding conflicts in resource access (racing, deadlock, starvation,
livelock, etc.) in a multitasking environment
2. Ensuring proper sequence of operation across processes.
Example: In the producer-consumer problem, accessing the shared
buffer by different processes is not the issue; the issue is that the
producer process should write to the shared buffer only if the
buffer is not full and the consumer process should not read from
the buffer if it is empty. Hence proper synchronization should be
provided to implement this sequence of operations
3. Communicating between processes
169
Task Synchronization Techniques
The code memory area which holds the program instructions for accessing a
shared resource is known as ‘critical section’
In order to synchronize the access to shared resources, the access to the
critical section should be exclusive
The exclusive access to critical section of code is provided through mutual
exclusion mechanism
Let us have a look at how mutual exclusion is important in concurrent
access. Consider two processes Process A and Process B running on a
multitasking system. Process A is currently running and it enters its critical
section. Before Process A completes its operation in the critical section, the
scheduler preempts Process A and schedules Process B for execution
(Process B is of higher priority compared to Process A). Process B also
contains access to the critical section which is already in use by Process
A. If Process B continues its execution and enters the critical section, a
race condition will result
170
Task Synchronization Techniques
A mutual exclusion policy enforces mutually exclusive access of
critical sections
Mutual exclusions can be enforced in different ways
Mutual exclusion blocks a process that attempts to enter its critical
section while another process is inside
Based on the behavior of the blocked process, mutual exclusion
methods can be classified into two categories
1.Mutual Exclusion through Busy Waiting/ Spin Lock
2.Mutual Exclusion through Sleep & Wakeup
171
Mutual Exclusion through Busy Waiting/
Spin Lock
The ‘Busy waiting’ technique uses a lock variable for implementing
mutual exclusion and each process/ thread checks this lock variable
before entering the critical section
The lock is set to ‘1’ by a process/ thread if the process/thread is
already in its critical section; otherwise the lock is set to ‘0’
The major challenge in implementing the lock variable based
synchronization is the non-availability of a single atomic instruction
(Instruction whose execution is uninterruptible) which combines the
reading, comparing and setting of the lock variable
Most often the three different operations related to the lock, viz.
reading the lock variable, checking its present value and setting it,
are achieved with multiple low level instructions
173
Mutual Exclusion through Busy Waiting/
Spin Lock
The above issue can be effectively tackled by combining the actions of
reading the lock variable, testing its state and setting the lock into a
single step
This can be achieved with the combined hardware and software
support. Most of the processors support a single instruction ‘Test and
Set Lock (TSL)’ for testing and setting the lock variable
The ‘Test and Set Lock (TSL)’ instruction atomically copies the value of
the lock variable and sets it to a nonzero value
It should be noted that the implementation and usage of the ‘Test and Set
Lock (TSL)’ instruction is processor architecture dependent
The Intel 486 and above family of processors support the ‘Test and
Set Lock (TSL)’ operation with the special instruction CMPXCHG—
Compare and Exchange
The usage of CMPXCHG instruction is given below
179
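The original slides' CMPXCHG listing is not reproduced here; as a hedged sketch, the same test-and-set idea can be written with the GCC builtin __atomic_compare_exchange_n, which compiles to CMPXCHG on Intel 486 and later processors:

    /* Busy-waiting spin lock sketch using an atomic compare-exchange */
    static volatile int lock = 0;        /* 0 = free, 1 = taken */

    void spin_lock(void)
    {
        int expected;
        do {
            expected = 0;                /* we can take it only if it is 0 */
            /* atomically: if lock == expected, set lock = 1; reading,
               comparing and setting happen in one uninterruptible step */
        } while (!__atomic_compare_exchange_n(&lock, &expected, 1, 0,
                                              __ATOMIC_ACQUIRE,
                                              __ATOMIC_RELAXED));
        /* the loop spins until the exchange succeeds */
    }

    void spin_unlock(void)
    {
        __atomic_store_n(&lock, 0, __ATOMIC_RELEASE);  /* release the lock */
    }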
Mutual Exclusion through Busy Waiting/
Spin Lock
The lock based mutual exclusion implementation always checks the
state of a lock and waits till the lock is available
This keeps the processes/threads always busy and forces the
processes/threads to wait for the availability of the lock for
proceeding further
Hence this synchronization mechanism is popularly known as ‘Busy
waiting’
The ‘Busy waiting’ technique can also be visualized as a lock around
which the process/ thread spins, checking for its availability
Spin locks are useful in handling scenarios where the processes/threads
are likely to be blocked only for a short period while waiting for the lock,
as they avoid the OS overheads of context saving and process re-scheduling
185
Mutual Exclusion through Busy Waiting/
Spin Lock
Another drawback of spin lock based synchronization is that if the lock
is held for a long time by a process which gets preempted by the
OS, the other threads waiting for this lock may have to spin for a longer
time to get it
The ‘Busy waiting’ mechanism keeps the processes/threads always active,
performing work which is not useful, leading to wastage of
processor time and high power consumption
186
Mutual Exclusion through Sleep &
Wakeup
The ‘Busy waiting’ mutual exclusion enforcement mechanism used by
processes makes the CPU always busy by checking the lock to see
whether they can proceed
This results in the wastage of CPU time and leads to high power
consumption
This is not affordable in battery-powered embedded systems, since it
affects the battery backup time of the device
An alternative to ‘busy waiting’ is the ‘Sleep & Wakeup’ mechanism
When a process is not allowed to access the critical section, which is
currently being locked by another process, the process undergoes
‘Sleep’ and enters the ‘blocked’ state
The process which is blocked on waiting for access to the critical
section is awakened by the process which currently owns the critical
section
189
Mutual Exclusion through Sleep &
Wakeup
The process which owns the critical section sends a wakeup message
to the process, which is sleeping as a result of waiting for the access to
the critical section, when the process leaves the critical section
The ‘Sleep & Wakeup’ policy for mutual exclusion can be implemented
in different ways
Implementation of this policy is OS kernel dependent
190
Important techniques for ‘Sleep & Wakeup’ policy
implementation for mutual exclusion by Windows NT/CE OS
kernels
191
Semaphore
Semaphore is a sleep and wakeup based mutual exclusion
implementation for shared resource access
Semaphore is a system resource; a process which wants to access a shared
resource first acquires this system object, to indicate to the other
processes wanting the shared resource that it is currently acquired
The resources shared among processes can either be for exclusive use by
one process or for use by a limited number of processes at a
time
The display device of an embedded system is a typical example of a
shared resource which needs exclusive access by a process
The hard disk (secondary storage) of a system is a typical example of a
resource shared among a limited number of multiple processes
192
Semaphore
Various processes can access the different sectors of the hard-disk
concurrently
Based on the implementation of the sharing limitation of the shared
resource, semaphores are classified into two, namely ‘Binary
Semaphore’ and ‘Counting Semaphore’
193
Binary Semaphore and Counting Semaphore
The binary semaphore provides exclusive access to shared resource by
allocating the resource to a single process at a time and not allowing
the other processes to access it when it is being owned by a process
The implementation of binary semaphore is OS kernel dependent
Under certain OS kernel it is referred as mutex
Unlike a binary semaphore, the ‘Counting Semaphore’ limits the
access of a resource to a fixed number of processes/threads
‘Counting Semaphore’ maintains a count between zero and a maximum
value
It limits the usage of the resource to the maximum value of the count
supported by it
The state of the counting semaphore object is set to ‘signalled’ when
the count of the object is greater than zero
194
Binary Semaphore and Counting Semaphore
The count associated with a ‘Semaphore object’ is decremented by one
when a process/thread acquires it and the count is incremented by one
when a process/thread releases the ‘Semaphore object’
The state of the ‘Semaphore object’ is set to non-signalled when the
semaphore is acquired by the maximum number of processes/threads
that the semaphore can support (i.e. when the count associated with
the ‘Semaphore object’ becomes zero)
195
Binary Semaphore and Counting Semaphore
A real world example for the counting semaphore concept is the dormitory
system for accommodation
A dormitory contains a fixed number of beds (say 5) and at any point of time it
can be shared by the maximum number of users supported by the dormitory
If a person wants to avail the dormitory facility, he/she can contact the
dormitory caretaker for checking the availability
If beds are available in the dorm the caretaker will hand over the keys to the
user
If beds are not available currently, the user can register his/her name to get
notifications when a slot is available
Those who avail the dormitory share the dorm facilities like TV,
telephone, toilet, etc
When a dorm user vacates, he/she gives the keys back to the caretaker
The caretaker informs the users, who booked in advance, about the dorm
availability
196
197
Binary Semaphore and Counting Semaphore
Counting Semaphores are similar to Binary Semaphores in operation
The only difference between Counting Semaphore and Binary Semaphore is that
Binary Semaphore can only be used for exclusive access, whereas Counting
Semaphores can be used for both exclusive access (by restricting the maximum
count value associated with the semaphore object to one (1) at the time of
creation of the semaphore object) and limited access (by restricting the
maximum count value associated with the semaphore object to the limited
number at the time of creation of the semaphore object)
198
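A minimal counting semaphore sketch with POSIX semaphores; the initial count of 5 mirrors the dormitory's 5 beds, and the names and thread count are illustrative:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t beds;                   /* counting semaphore */

    static void *user(void *arg)
    {
        sem_wait(&beds);                 /* acquire: count--, blocks at 0   */
        printf("user %ld got a bed\n", (long)arg);
        /* ... use the shared resource ... */
        sem_post(&beds);                 /* release: count++, wakes a waiter */
        return NULL;
    }

    int main(void)
    {
        sem_init(&beds, 0, 5);           /* at most 5 simultaneous users    */
        pthread_t t[8];
        for (long i = 0; i < 8; i++) pthread_create(&t[i], NULL, user, (void *)i);
        for (int i = 0; i < 8; i++) pthread_join(t[i], NULL);
        sem_destroy(&beds);
        return 0;
    }

Creating the semaphore with an initial count of 1 instead would make it behave as a binary semaphore.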
Binary Semaphore (Mutex)
Binary Semaphore (Mutex) is a synchronization object provided by the OS for process/thread
synchronization
Any process/thread can create a ‘mutex object’ and other processes/threads of the system
can use this ‘mutex object’ for synchronizing the access to critical sections
Only one process/thread can own the ‘mutex object’ at a time. The state of a mutex object
is set to signalled when it is not owned by any process/thread, and set to non-signalled
when it is owned by any process/thread
A real world example for the mutex concept is the hotel accommodation system (lodging
system)
The rooms in a hotel are shared with the public. Any user who pays and follows the norms of
the hotel can avail the rooms for accommodation. A person who wants to avail the hotel room
facility can contact the hotel reception to check the room availability. If a room is
available the receptionist will hand over the room key to the user. If no room is available
currently, the user can book a room to get notified when one becomes available. When a
person gets a room he/she is granted exclusive access to the room facilities like TV,
telephone, toilet, etc. When a user vacates the room, he/she gives the keys back to the
receptionist
The receptionist informs the users, who booked in advance, about the room’s availability
199
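A minimal sketch of a mutex guarding a critical section with POSIX threads (the variable names are illustrative); only the owner may be inside, and other tasks sleep until it is released:

    #include <pthread.h>

    static pthread_mutex_t room = PTHREAD_MUTEX_INITIALIZER;
    static int occupancy = 0;            /* shared state */

    void enter_critical_section(void)
    {
        pthread_mutex_lock(&room);       /* acquire: blocks if already owned */
        occupancy++;                     /* exclusive access to shared state */
        pthread_mutex_unlock(&room);     /* release: wakes one waiting task  */
    }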
200
Device Driver
Device driver is a piece of software that acts as a bridge between the
operating system and the hardware
In an operating system based product architecture, the user applications
talk to the Operating System kernel for all necessary information exchange
including communication with the hardware peripherals
The architecture of the OS kernel will not allow direct device access from
the user application
All the device related access should flow through the OS kernel, and the OS
kernel routes it to the concerned hardware peripheral. The OS provides
interfaces in the form of Application Programming Interfaces (APIs) for
accessing the hardware
The device driver abstracts the hardware from user applications
The topology of user applications and hardware interaction in an RTOS
based system is depicted in the figure below
201
The topology of user applications and hardware
interaction in an RTOS based system
202
Device Driver
Device drivers are responsible for initiating and managing the communication
with the hardware peripherals
They are responsible for establishing the connectivity, initializing the hardware
(setting up various registers of the hardware device) and transferring data
An embedded product may contain different types of hardware components like
Wi-Fi module, File systems, Storage device interface, etc.
The initialisation of these devices and the protocols required for communicating
with these devices may be different
All these requirements are implemented in drivers and a single driver will not be
able to satisfy all these. Hence each hardware (more specifically each class of
hardware) requires a unique driver component
Certain drivers come as part of the OS kernel and certain drivers need to be
installed on the fly
The implementation of driver is OS dependent
203
How to choose an RTOS
Functional Requirements
Processor Support
Memory Requirements
Real-time Capabilities
Kernel and Interrupt Latency
Inter Process Communication and Task Synchronization
Modularization Support
Support for Networking and Communication
Development Language Support
204
How to choose an RTOS
Non-Functional Requirements
Custom Developed or Off the Shelf
Cost
Development and Debugging Tools Availability
Ease of Use
After Sales
205
Thank You
207