
Unit 2: Real Time Operating Systems

-Shibu K V “Introduction to Embedded Systems”


1. OPERATING SYSTEM BASICS
THE KERNEL

• Process management
- setting up the memory space for the process, loading the process code into the memory
space, allocating system resources, scheduling and managing the execution of the
process, setting up and managing the Process Control Block (PCB), inter process
communication and synchronization, process termination etc.

• Primary memory management


- The primary memory refers to the RAM where processes are loaded and related data is
stored.
- The Memory Management Unit of the kernel is responsible for
• Keeping track of which part of the memory area is currently used by which process
• Allocating and deallocating memory space
THE KERNEL
• File system management
-File is a collection of related information
- Files differ in the kind of information they hold and the way the information is stored
- It is responsible for creating, deleting and altering files, directories, saving files in secondary storage memory,
automatic allocation of file space, flexible naming convention for the files.

• I/O system management


- Routing the I/O requests from user applications to I/O devices
- Direct access to I/O devices is not allowed; access is possible only through the Application Programming Interface (API).
- Kernel maintains a list of all the I/O devices of the system.
- Device manager of the kernel is responsible for handling all the I/O device related operations.
- The kernel talks to devices through low level system calls, which are implemented in services called device drivers.
- Device manager : Loading/ Unloading the device drivers, exchanging information and system specific control
signals to and from the device.
THE KERNEL
• Secondary Storage Management
- It is used as a backup medium for programs and data since the main memory is volatile. Secondary storage is kept on
disks (hard disks)
- It deals with disk storage allocation, disk scheduling, free disk space management

• Protection system
- Multiple users with different levels of access permissions (Admin, restricted, standard)

• Interrupt handler
- Handler mechanism for all external/internal interrupts

Kernel space and user space


- The program code corresponding to the kernel applications/services are in contiguous area of primary memory and it
is protected from unauthorized access by the user programs.
- Kernel code is located in “Kernel Space”
- User code is saved in “User Space”.
- The act of loading the code into and out of the main memory is
termed as “Swapping”. Swapping happens between the main
memory and secondary memory.
- Each process will have certain privilege levels on accessing the
memory of other processes
Microkernel
- It incorporates only the essential set of operating services into the kernel. The rest of the OS services
are implemented in programs known as servers which run in user space .

Ex. Mach, Minix 3, QNX


Benefits of microkernel
-Robustness: if a problem is encountered in any service running as a server application, it can be
reconfigured and re-started without the need for re-starting the entire OS.
-Configurability: Any services, which run as server application can be changed without the need to
restart the whole system
TYPES OF OPERATING SYSTEMS

1. General Purpose Operating System (GPOS)


-Deployed in general computing systems; non-deterministic in nature

Ex: Personal computer- Windows

2. Real Time Operating System (RTOS)


•The real time kernel
- Task/Process management

- Task/ process scheduling

- Task/process synchronization

- Error/exception handling

- Memory management

- Interrupt handling

- Time management
REAL TIME KERNEL

• Task/ process management


- Setting up the memory for task, allocating system resources, setting up a task control block(TCB)
- TCB has the following information
Task ID, Task state, Task type, Task priority, Task context pointer, Task memory pointer, Task system
resource pointers, Task pointers(to other TCB), other task parameters.
Task management
- Creates the TCB for every task
- Deletes the TCB of a task when the task is terminated
- Maintains the TCB state
- Updates the TCB parameters
- Modifies the TCB priority
REAL TIME KERNEL

Task/ Process Scheduling


-A kernel application called the “Scheduler” handles the task scheduling.

Task/ Process synchronization


- Sync between various processes
REAL TIME KERNEL

• Error/Exception handling
- Registering and handling the errors occurred/exceptions raised during the execution of tasks.
- Insufficient memory, timeouts, deadlocks, deadline missing, bus error, divide by zero etc. are examples of errors/exceptions.
- A watchdog timer is loaded with the maximum expected wait time for an event and raises a timeout if the event does not occur in time.

• Memory management
- It makes use of a ‘block’ based memory allocation system instead of the dynamic memory allocation used in a GPOS
- Blocks of fixed size are allocated for a task on a need basis; free blocks are held in a ‘Free Buffer Queue’
- Kernel enters a fail-safe mode when an illegal memory access occurs.
• Interrupt Handling
- Interrupts inform the processor that an external device or an associated task requires immediate attention of
the CPU.
- Synchronous interrupts- software interrupts (interrupts are in sync with the executing task), Asynchronous
interrupts- external device.
REAL TIME KERNEL
• Time Management
- Real Time Clock (RTC): the periodic timer interrupt is referred to as the ‘Timer tick’ (microsecond range)
- If the system time register is 32 bits wide and the ‘Timer tick’ interval is 1 microsecond, the system time register will reset in about 1.19 hours:
2^32 * 10^-6 / (24*60*60) = 0.0497 days
If the timer tick is 1 millisecond, the system time register will reset in about 50 days.
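The rollover arithmetic above can be checked with a few lines (a sketch; the 32-bit register width and the two tick intervals are taken from the slide):

```python
# 32-bit system time register rollover for two timer tick intervals
REGISTER_MAX = 2**32                     # distinct tick counts before wrap-around
seconds_per_day = 24 * 60 * 60

# 1 microsecond tick: register wraps after 2^32 microseconds
rollover_us = REGISTER_MAX * 1e-6        # total seconds before reset
print(rollover_us / 3600)                # ~1.19 hours
print(rollover_us / seconds_per_day)     # ~0.0497 days

# 1 millisecond tick: register wraps after 2^32 milliseconds
rollover_ms = REGISTER_MAX * 1e-3
print(rollover_ms / seconds_per_day)     # ~49.7 days, i.e. roughly 50 days
```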
Timer tick can be utilized for the following
- Save the current context
- Increment the system time register by one.
- Update the timer count
- Activate the periodic tasks, which are in idle state
- Invoke the scheduler and schedule tasks
- Delete the terminated tasks
- Load the context for the first task in the ready queue.
REAL TIME KERNEL

• Hard real time


- Missing deadline will result in catastrophic results.
- A late answer is a wrong answer
- Ex: Air bag control and anti-lock braking systems
- There is no Human In The Loop (HITL); the systems are fully automatic

• Soft real time


- Does not guarantee meeting deadlines, missing the deadline is acceptable
- A late answer is an acceptable answer, but it could have been delivered a bit faster
Ex: ATM Machine
TASKS, PROCESS AND THREADS
A task is defined as the program in execution and the related information maintained by the operating system for the program.

Process
A process is a program or part of it in execution. Process is also known as an instance of a program in execution. Multiple
instances of the same program can execute simultaneously.
•The structure of a process
-Concurrent execution; sharing of CPU among the processes.
-A process mimics a processor in properties and holds a set of registers, program status, program counter, stack and the
code.
-Memory is segregated into 3 regions
Stack memory, data memory and code memory
-Stack holds all temporary data as variables local

to process, starts at the highest address.


-Data memory holds global data for the process
-Code memory has the program code
PROCESS

Process states and state transition


-The cycle through which a process changes its state from
‘newly created’ to ‘execution completed’ is known as the
‘Process Life Cycle’.
-State at which a process is being created is referred
as ‘Created State’

Process Management
-Creation of the process, setting up the memory space, loading
the process’s code into memory space, allocating system resources,
setting up a Process Control Block (PCB)
THREADS

- A thread is the primitive that can execute code.


-A thread is a single sequential flow of control within a process.
-Thread is also known as lightweight process.
-A process can have many threads of execution
-Different threads share the same address space; same data,
code memory and heap memory
Concept of Multithreading
-Better memory utilization: threads share the process's memory, so inter-thread communication is cheap
-Process split into different threads
-Efficient CPU utilization
Thread Pre-emption: the act of pre-empting the currently running thread; switching among threads
is known as ‘Thread context switching’
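Threads sharing the same address space can be sketched as follows (Python's threading module stands in for OS-level threads; the worker names and the lock are illustrative):

```python
import threading

# All threads of a process share the same data and heap memory:
# every worker appends to the same list object.
results = []
lock = threading.Lock()   # guards the shared list during concurrent appends

def worker(thread_id):
    with lock:
        results.append(thread_id)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))    # [0, 1, 2, 3] -- every thread mutated the same list
```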

-User level thread:do not have kernel/OS support and they exist solely in the running process. If a process contains
multiple user level threads, the OS treats it as single thread and will not switch the execution among different threads
of it.

-Kernel /System level thread: are individual units of execution, which the OS treats as separate threads and are pre-
emptive (switching execution)

-Many-to-one model: Many user level threads are mapped to a single kernel thread. Switching among user level
thread happens when a currently executing user level thread voluntarily blocks itself

-One-to-One model: each user level thread is bound to a kernel/system level thread.

-Many-to-many model: Many user threads are allowed to be mapped to many kernel threads.

Threads Vs Process – Self Study


MULTIPROCESSING AND
MULTITASKING

Types of multitasking
a. Co-operative multitasking
b. Preemptive multitasking
c. Non-preemptive multitasking
RTOS
The Real Time Kernel: Tasks/Processes and threads

Types of Multitasking
Depending on how the task/process execution switching act is implemented, multitasking can be
classified into
• Co-operative Multitasking: Co-operative multitasking is the most primitive form of multitasking in
which a task/process gets a chance to execute only when the currently executing task/process voluntarily
relinquishes the CPU. In this method, any task/process can hold the CPU for as much time as it wants. Since
this type of implementation depends on the mercy of the tasks towards each other for getting CPU time for
execution, it is known as co-operative multitasking. If the currently executing task is non-cooperative, the
other tasks may have to wait for a long time to get the CPU
• Preemptive Multitasking: Preemptive multitasking ensures that every task/process gets a chance to
execute. When and how much time a process gets is dependent on the implementation of preemptive
scheduling. As the name indicates, in preemptive multitasking, the currently running task/process is
preempted to give a chance to other tasks/process to execute. The preemption of task may be based on
time slots or task/process priority
• Non-preemptive Multitasking: The process/task, which is currently given the CPU time, is allowed
to execute until it terminates (enters the ‘Completed’ state) or enters the ‘Blocked/Wait’ state, waiting for
an I/O. The co-operative and non-preemptive multitasking differs in their behaviour when they are in the
‘Blocked/Wait’ state. In co-operative multitasking, the currently executing process/task need not relinquish
the CPU when it enters the ‘Blocked/Wait’ state, waiting for an I/O, or a shared resource access or an
event to occur whereas in non-preemptive multitasking the currently executing task relinquishes the CPU
when it waits for an I/O.
RTOS
Task/Process Scheduling

• In a multitasking system, there should be some mechanism in place to share the CPU among the
different tasks and to decide which process/task is to be executed at a given point in time
• Determining which task/process is to be executed at a given point of time is known as
task/process scheduling
• Task scheduling forms the basis of multitasking
• Scheduling policies form the guidelines for determining which task is to be executed when
• The scheduling policies are implemented in an algorithm and it is run by the kernel as a service
• The kernel service/application, which implements the scheduling algorithm, is known as the
‘Scheduler’
• The task scheduling policy can be pre-emptive, non-preemptive or co-operative
• Depending on the scheduling policy the process scheduling decision may take place when a
process switches its state to
• ‘Ready’ state from ‘Running’ state: preemptive scheduling
• ‘Blocked/Wait’ state from ‘Running’ state: occurs under both preemptive and non-preemptive scheduling
• ‘Ready’ state from ‘Blocked/Wait’ state: preemptive scheduling
• ‘Completed’ state: occurs under both preemptive and non-preemptive scheduling
TASK SCHEDULING

Task scheduling decision may take place when a process switches its state.
•CPU utilization
•Throughput
•Turnaround time
•Waiting time
•Response time
•Job queue
•Ready queue
•Device queue
RTOS
Task/Process Scheduling
Scheduler Selection:
The selection of a scheduling criteria/algorithm should consider
•CPU Utilization: The scheduling algorithm should always make the CPU utilization high.
CPU utilization is a direct measure of how much percentage of the CPU is being utilized.
•Throughput: This gives an indication of the number of processes executed per unit of
time. The throughput for a good scheduler should always be higher.
•Turnaround Time: It is the amount of time taken by a process for completing its
execution. It includes the time spent by the process for waiting for the main memory, time
spent in the ready queue, time spent on completing the I/O operations, and the time spent
in execution. The turnaround time should be a minimum for a good scheduling algorithm.
•Waiting Time: It is the amount of time spent by a process in the ‘Ready’ queue waiting
to get the CPU time for execution. The waiting time should be minimal for a good
scheduling algorithm.
•Response Time: It is the time elapsed between the submission of a process and the first
response. For a good scheduling algorithm, the response time should be as low as
possible.
RTOS
Task/Process Scheduling
Queues:
The various queues maintained by OS in association with CPU scheduling are
• Job Queue: Job queue contains all the processes in the system
• Ready Queue: Contains all the processes, which are ready for execution and waiting
for CPU to get their turn for execution. The Ready queue is empty when there is no
process ready for running.
• Device Queue: Contains the set of processes, which are waiting for an I/O device
TASK SCHEDULING

1. Non preemptive scheduling


• First Come First Served (FCFS)/FIFO scheduling: allocates CPU time in the
order in which processes enter the ready queue
• Last Come First Served (LCFS)/LIFO scheduling: the last entered
process is serviced first
• Shortest Job First (SJF) scheduling: sorts the ready queue by estimated
execution time each time a process relinquishes the CPU
• Priority based scheduling: the highest priority process in the ready
queue is serviced first. Priority may be based on estimated time of
completion or a priority number assigned during process creation.
RTOS
Task/Process Scheduling
Non-preemptive scheduling – First Come First Served (FCFS)/First In First Out
(FIFO) Scheduling

•Allocates CPU time to the processes based on the order in which they enter the ‘Ready’
queue
•The first entered process is serviced first
•It is the same as any real-world application where queue systems are used; E.g.
Ticketing
Drawbacks:
•Favors monopoly of process. A process, which does not contain any I/O operation,
continues its execution until it finishes its task
•In general, FCFS favors CPU-bound processes and I/O bound processes may have to
wait until the completion of CPU bound process, if the currently executing process is a
CPU-bound process. This leads to poor device utilization.
•The average waiting time is not minimal for the FCFS scheduling algorithm
RTOS
Task/Process Scheduling
Non-preemptive scheduling – First Come First Served (FCFS)/First In First Out
(FIFO) Scheduling
•Three processes with process IDs P1, P2, P3 with estimated completion time 10, 5, 7
milliseconds respectively enters the ready queue together in the order P1, P2, P3. Calculate
the waiting time and Turn Around Time (TAT) for each process and the Average waiting time
and Turn Around Time (Assuming there is no I/O waiting for the processes).
The sequence of execution of the processes by the CPU is represented as

| P1 (10 ms) | P2 (5 ms) | P3 (7 ms) |
0           10          15          22

Assuming the CPU is readily available at the time of arrival of P1, P1 starts executing
without any waiting in the ‘Ready’ queue. Hence the waiting time for P1 is zero.
RTOS
Task/Process Scheduling: Non-preemptive scheduling – First Come First Served (FCFS)/First In First Out
(FIFO) Scheduling
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)
Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P2+P3)) / 3
= (0+10+15)/3 = 25/3 = 8.33 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 15 ms (-Do-)
Turn Around Time (TAT) for P3 = 22 ms (-Do-)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P2+P3)) / 3
= (10+15+22)/3 = 47/3
= 15.66 milliseconds
Average Turn Around Time (TAT) is the sum of average waiting time and average execution time.
The average Execution time = (Execution time for all processes)/No. of processes
= (Execution time for (P1+P2+P3))/3
= (10+5+7)/3 = 22/3 = 7.33 milliseconds
Average Turn Around Time = Average Waiting time + Average Execution time
= 8.33 + 7.33
= 15.66 milliseconds
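The FCFS computation above can be reproduced with a short sketch (the process names and burst times are the slide's; all processes are assumed to arrive at t = 0):

```python
def fcfs(bursts):
    """bursts: list of (pid, burst_ms) in arrival order; all arrive at t = 0."""
    t, waiting, tat = 0, {}, {}
    for pid, burst in bursts:
        waiting[pid] = t        # time spent in the Ready queue
        t += burst
        tat[pid] = t            # waiting time + execution time
    return waiting, tat

waiting, tat = fcfs([("P1", 10), ("P2", 5), ("P3", 7)])
print(waiting)                              # {'P1': 0, 'P2': 10, 'P3': 15}
print(tat)                                  # {'P1': 10, 'P2': 15, 'P3': 22}
print(sum(waiting.values()) / 3)            # 8.33... ms average waiting time
print(sum(tat.values()) / 3)                # 15.66... ms average turn around time
```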
RTOS
Task/Process Scheduling
Non-preemptive scheduling – Last Come First Served (LCFS)/Last In First Out
(LIFO) Scheduling

•Allocates CPU time to the processes based on the order in which they are
entered in the ‘Ready’ queue
•The last entered process is serviced first
•LCFS scheduling is also known as Last In First Out (LIFO) where the process,
which is put last into the ‘Ready’ queue, is serviced first
Drawbacks:
•Favors monopoly of process. A process, which does not contain any I/O
operation, continues its execution until it finishes its task
•In general, LCFS favors CPU bound processes and I/O bound processes may have
to wait until the completion of CPU bound process, if the currently executing
process is a CPU bound process. This leads to poor device utilization.
•The average waiting time is not minimal for LCFS scheduling algorithm
RTOS
Task/Process Scheduling: Non-preemptive scheduling – Last Come First Served (LCFS)/Last In First Out
(LIFO) Scheduling

• Three processes with process IDs P1, P2, P3 with estimated completion time 10, 5, 7
milliseconds respectively enters the ready queue together in the order P1, P2, P3
(Assume only P1 is present in the ‘Ready’ queue when the scheduler picks it up and
P2, P3 entered ‘Ready’ queue after that). Now a new process P4 with estimated
completion time 6ms enters the ‘Ready’ queue after 5ms of scheduling P1. Calculate
the waiting time and Turn Around Time (TAT) for each process and the Average waiting
time and Turn Around Time (Assuming there is no I/O waiting for the
processes).Assume all the processes contain only CPU operation and no I/O operations
are involved.
RTOS
Task/Process Scheduling:Non-preemptive scheduling – Last Come First Served (LCFS)/Last In First Out
(LIFO) Scheduling

Initially there is only P1 available in the Ready queue and the scheduling sequence will
be P1, P3, P2. P4 enters the queue during the execution of P1 and becomes the last
process entered the ‘Ready’ queue. Now the order of execution changes to P1, P4, P3,
and P2 as given below.

| P1 (10 ms) | P4 (6 ms) | P3 (7 ms) | P2 (5 ms) |
0           10          16          23          28
RTOS
Task/Process Scheduling: Non-preemptive scheduling – Last Come First Served (LCFS)/Last In First Out
(LIFO) Scheduling
The waiting time for all the processes are given as
Waiting Time for P1 = 0 ms (P1 starts executing first)
Waiting Time for P4 = 5 ms (P4 starts executing after completing P1. But P4 arrived
after 5ms of execution of P1. Hence its waiting time = Execution start time – Arrival Time = 10-5 = 5)
Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)
Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)
Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4
= (0 + 5 + 16 + 23)/4 = 44/4
= 11 milliseconds
Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue + Execution Time = (Execution Start Time – Arrival Time) + Estimated
Execution Time = (10-5) + 6 = 5 + 6)
Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4
= (10+11+23+28)/4 = 72/4
= 18 milliseconds
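The LCFS figures above can be checked by feeding the service order worked out on the slide through a generic waiting/TAT calculation (a sketch; P4's 5 ms arrival offset is the only non-zero arrival):

```python
bursts  = {"P1": 10, "P2": 5, "P3": 7, "P4": 6}    # estimated completion times, ms
arrival = {"P1": 0, "P2": 0, "P3": 0, "P4": 5}
order   = ["P1", "P4", "P3", "P2"]                 # LIFO service order from the slide

t, waiting, tat = 0, {}, {}
for pid in order:
    t = max(t, arrival[pid])          # CPU may not start before the process arrives
    waiting[pid] = t - arrival[pid]   # execution start time - arrival time
    t += bursts[pid]
    tat[pid] = t - arrival[pid]       # completion time - arrival time

print(waiting)                        # {'P1': 0, 'P4': 5, 'P3': 16, 'P2': 23}
print(sum(waiting.values()) / 4)      # 11.0 ms average waiting time
print(sum(tat.values()) / 4)          # 18.0 ms average turn around time
```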
RTOS
Task/Process Scheduling: Non-preemptive scheduling
Shortest Job First (SJF) Scheduling:

• Allocates CPU time to the processes based on the execution completion time for
tasks
• The average waiting time for a given set of processes is minimal in SJF scheduling
• Optimal compared to other non-preemptive scheduling like FCFS
Drawbacks:
• A process whose estimated execution completion time is high may not get a chance
to execute if more and more processes with least estimated execution time enters
the ‘Ready’ queue before the process with longest estimated execution time starts
its execution
• May lead to the ‘Starvation’ of processes with high estimated completion time
• Difficult to know in advance the next shortest process in the ‘Ready’ queue for
scheduling since new processes with different estimated execution time keep
entering the ‘Ready’ queue at any point of time.
RTOS
Task/Process Scheduling: Non-preemptive scheduling
Shortest Job First (SJF) Scheduling:

• Three processes with process IDs P1, P2, P3 with estimated completion time
10, 5, 7 milliseconds respectively enters the ready queue together. Calculate
the waiting time and Turn Around Time (TAT) for each process and the
Average waiting time and Average Turn Around Time (Assuming there is no I/O
waiting for the processes) in SJF algorithm.
RTOS
Task/Process Scheduling: Non-preemptive scheduling
Shortest Job First (SJF) Scheduling:

• Three processes with process IDs P1, P2, P3 with estimated completion time
10, 5, 7 milliseconds respectively enters the ready queue together. Calculate
the waiting time and Turn Around Time (TAT) for each process and the
Average waiting time and Average Turn Around Time (Assuming there is no I/O
waiting for the processes) in SJF algorithm.

• Calculate the waiting time and TAT for each process and the Average WT and
TAT for the above example if a new process P4 with a completion time of 2ms
enters the ready queue after 2ms of execution of P2.
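For the first question (all three processes arriving together), a non-preemptive SJF sketch like the following can be used to check your answer (the P4 variant needs arrival-time handling and is left as posed):

```python
def sjf(bursts):
    """Non-preemptive SJF; bursts: {pid: burst_ms}, all arriving at t = 0."""
    order = sorted(bursts, key=bursts.get)   # shortest estimated burst first
    t, waiting, tat = 0, {}, {}
    for pid in order:
        waiting[pid] = t
        t += bursts[pid]
        tat[pid] = t
    return waiting, tat

waiting, tat = sjf({"P1": 10, "P2": 5, "P3": 7})
print(waiting)                       # {'P2': 0, 'P3': 5, 'P1': 12}
print(sum(waiting.values()) / 3)     # 5.66... ms average waiting time
print(sum(tat.values()) / 3)         # 13.0 ms average turn around time
```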
RTOS
Task/Process Scheduling: Non-preemptive scheduling – Priority based Scheduling

• A priority, which may be unique or shared, is associated with each task


• The priority of a task is expressed in different ways, like a priority number,
the time required to complete the execution etc.
• In number based priority assignment the priority is a number ranging from 0
to the maximum priority supported by the OS. The maximum level of priority
is OS dependent.
• Windows CE supports 256 levels of priority (0 to 255 priority numbers, with 0
being the highest priority)
• The priority is assigned to the task on creating it. It can also be changed
dynamically (If the Operating System supports this feature)
• The non-preemptive priority based scheduler sorts the ‘Ready’ queue based
on the priority and picks the process with the highest level of priority for
execution
RTOS
Task/Process Scheduling: Non-preemptive scheduling – Priority based Scheduling

• Three processes with process IDs P1, P2, P3 with estimated completion time 10, 5, 7
milliseconds and priorities 0, 3, 2 (0- highest priority, 3 lowest priority) respectively
enters the ready queue together. Calculate the waiting time and Turn Around Time
(TAT) for each process and the Average waiting time and Turn Around Time (Assuming
there is no I/O waiting for the processes) in priority based scheduling algorithm.

• The scheduler sorts the ‘Ready’ queue based on the priority and schedules the
process with the highest priority (P1 with priority number 0) first and the next high
priority process (P3 with priority number 2) as second and so on. The order in which
the processes are scheduled for execution is represented as

| P1 (10 ms) | P3 (7 ms) | P2 (5 ms) |
0           10          17          22
RTOS
Task/Process Scheduling: Non-preemptive scheduling – Priority based Scheduling

The waiting time for all the processes are given as

Waiting Time for P1 = 0 ms (P1 starts executing first)


Waiting Time for P3 = 10 ms (P3 starts executing after completing P1)
Waiting Time for P2 = 17 ms (P2 starts executing after completing P1 and P3)

Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P3+P2)) / 3
= (0+10+17)/3 = 27/3
= 9 milliseconds

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P3 = 17 ms (-Do-)
Turn Around Time (TAT) for P2 = 22 ms (-Do-)

Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P3+P2)) / 3
= (10+17+22)/3 = 49/3
= 16.33 milliseconds
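The priority-based result above follows from a small variation of the same idea, sorting by priority number instead of burst time (0 = highest priority; all processes arrive at t = 0):

```python
def priority_np(procs):
    """Non-preemptive priority scheduling; procs: {pid: (burst_ms, priority)}."""
    order = sorted(procs, key=lambda pid: procs[pid][1])   # lowest number first
    t, waiting, tat = 0, {}, {}
    for pid in order:
        waiting[pid] = t
        t += procs[pid][0]
        tat[pid] = t
    return waiting, tat

waiting, tat = priority_np({"P1": (10, 0), "P2": (5, 3), "P3": (7, 2)})
print(waiting)                       # {'P1': 0, 'P3': 10, 'P2': 17}
print(sum(waiting.values()) / 3)     # 9.0 ms average waiting time
print(sum(tat.values()) / 3)         # 16.33... ms average turn around time
```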
RTOS
Task/Process Scheduling: Non-preemptive scheduling – Priority based Scheduling

Drawbacks:
• Similar to the SJF scheduling algorithm, the non-preemptive priority-based
algorithm also possesses the drawback of ‘Starvation’ where a process whose
priority is low may not get a chance to execute if more and more processes
with higher priorities enter the ‘Ready’ queue before the process with lower
priority starts its execution.
• ‘Starvation’ can be effectively tackled in priority-based non-preemptive
scheduling by dynamically raising the priority of the low-priority task/process
which is under starvation (waiting in the ready queue for a longer time for
getting the CPU time)
• The technique of gradually raising the priority of processes which are waiting
in the ‘Ready’ queue as time progresses, for preventing ‘Starvation’, is known
as ‘Aging’.
TASK SCHEDULING
2. Preemptive scheduling
Every task in the ready queue gets a chance to execute. The act of moving a ‘Running’ process/task into the ‘Ready’
queue by the scheduler, without the process requesting it, is known as ‘Preemption’.

•Preemptive SJF scheduling/Shortest Remaining Time (SRT): sorts the ‘Ready’ queue whenever a
new process enters it; the currently executing process is preempted and the new process
starts execution if its remaining time is shorter.

•Round Robin(RR) scheduling: Each process in the ‘Ready’ queue is executed for a pre-defined time
slot. Time slice based pre-emption is added to switch the execution between the processes in the
‘Ready’ queue. If the process terminates before its time slice elapses, it voluntarily releases the CPU and
the next process continues execution.

•Priority Based scheduling: any high priority process is immediately scheduled for execution whereas
in non preemptive scheduling higher priority process is scheduled only after the currently executing
process completes execution or voluntarily relinquishes the CPU.
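Of the three, Round Robin is the easiest to trace in code. A sketch with a 2 ms time slice and illustrative burst times (not from the slide; all processes arrive at t = 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {pid: burst_ms}; all arrive at t = 0; returns completion times."""
    remaining = dict(bursts)
    ready = deque(bursts)            # circular Ready queue
    t, completion = 0, {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        t += run                     # process runs for one time slice (or less)
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = t      # terminated within the slice: CPU released
        else:
            ready.append(pid)        # preempted: back to the end of the queue
    return completion

print(round_robin({"P1": 6, "P2": 4, "P3": 2}, quantum=2))
# {'P3': 6, 'P2': 10, 'P1': 12}
```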
RTOS
Task/Process Communication
Inter Process (Task) Communication (IPC)

 IPC refers to the mechanism through which tasks/processes communicate with each other
 IPC is essential for task /process execution co-ordination and synchronization
 Implementation of IPC mechanism is OS kernel dependent
 Some important IPC mechanisms adopted by OS kernels are:
 Shared Memory
 Global Variables
 Pipes (Named & Un-named)
 Memory mapped Objects
 Message Passing
 Message Queues
 Mailbox
 Mail slot
 Signals
 Remote Procedure Calls (RPC)
RTOS
Task/Process Communication

IPC(Inter process communication)


Shared Memory
 Processes share same area of the memory to communicate among them
 Information to be communicated by the process is written to the shared
memory area
 Processes which require this information can read the same from the shared
memory area
 Same as the real-world concept where a ‘Notice Board’ is used by the college
to publish information for students (the only exception is that only the college
has the right to modify the information published on the notice board and
students are given ‘Read’ only access, meaning it is only a one-way channel)

Process 1 <--> Shared Memory Area <--> Process 2

Concept of Shared Memory


RTOS
Task/Process Communication
IPC – Shared Memory:
1. Pipes
‘Pipe’ is a section of the shared memory used by processes for communicating
Pipes follow the client-server architecture
A process which creates a pipe is known as pipe server and a process which connects to a pipe is known as pipe
client
It can be unidirectional or bidirectional
There are two types of ‘Pipes’ for Inter Process Communication. Namely;
•Anonymous Pipes: The anonymous pipes are unnamed, unidirectional pipes used for data transfer between
two processes.
•Named Pipes: With named pipes, any process can act as both client and server, allowing point-to-point communication.
•Named pipes can be used for communicating between processes running on the same machine or between
processes running on different machines connected to a network

Process 1 --Write--> Pipe (Named/Un-named) --Read--> Process 2

Concept of Pipes
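An anonymous (un-named) pipe can be sketched with the POSIX-style calls in Python's os module; one end is written, the other read (the payload is illustrative):

```python
import os

# Create an anonymous pipe: r is the read end, w is the write end.
r, w = os.pipe()

os.write(w, b"hello via pipe")   # producer side writes bytes into the pipe
os.close(w)                      # closing the write end signals EOF to the reader

data = os.read(r, 1024)          # consumer side reads what was written
os.close(r)

print(data)                      # b'hello via pipe'
```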


RTOS
Task/Process Communication
IPC – Shared Memory:
2. Memory mapped objects
-Shared block of memory which can be accessed by multiple process
simultaneously
-A mapping object is created and physical storage for it is reserved and committed
-A process can map the entire physical area or a block of it to its virtual address
space
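Python's multiprocessing.shared_memory offers a comparable named, mapped block (a sketch; the 16-byte size and payload are illustrative, and the second handle stands in for a second process):

```python
from multiprocessing import shared_memory

# Create a named shared block; another process could map it by name.
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"                 # write through the first mapping

# A second handle attaches to the same physical block by its name.
peer = shared_memory.SharedMemory(name=shm.name)
snapshot = bytes(peer.buf[:5])         # read through the second mapping
print(snapshot)                        # b'hello' -- same underlying memory

peer.close()
shm.close()
shm.unlink()                           # release the physical storage
```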
RTOS
Task/Process Communication
IPC – Message Passing
1.Message queue
-A message queue is a FIFO buffer which stores messages temporarily in a system
defined memory object to pass them to the desired process.
-The messages are exchanged through send and receive
-Thread will post the message to system message queue
-The kernel will pick up the message from system message queue one at a time
and examines the message for finding the destination thread and then posts the
message to message queue of the corresponding thread
-Asynchronous messaging: the posting thread does not wait for an acceptance (return)
from the thread to which the message is posted.
-Synchronous messaging: the posting thread waits for the message result from the thread
to which the message is posted.
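The post/deliver flow can be sketched with a FIFO queue shared between two threads (Python's queue.Queue stands in for the kernel's message queue; the message strings and shutdown sentinel are illustrative):

```python
import queue
import threading

msg_q = queue.Queue()            # FIFO message queue (stand-in for the kernel's)
delivered = []

def receiver():
    while True:
        msg = msg_q.get()        # blocks until a message is posted
        if msg is None:          # illustrative shutdown sentinel
            break
        delivered.append(msg)    # message delivered to the destination thread

t = threading.Thread(target=receiver)
t.start()

# Asynchronous posting: the sender does not wait for the receiver.
msg_q.put("sensor-ready")
msg_q.put("buffer-full")
msg_q.put(None)
t.join()

print(delivered)                 # ['sensor-ready', 'buffer-full']
```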
RTOS
Task/Process Communication
IPC – Message Passing
2. Mailbox
-A mailbox is an alternate form of message queue used in RTOSs.
-It is used for one-way messaging.
-Tasks interested in receiving the messages posted to the mailbox by
the mailbox creator thread can subscribe to the mailbox.
-Thread that creates mailbox is known as ‘mailbox server’ and threads which
subscribe to the mailbox are known as ‘mailbox clients’
-The mailbox creation, subscription, message reading and writing are achieved
through OS kernel provided API calls
-Mailbox is similar to message queue but mailbox is used for exchanging a single
message between two tasks or ISR and a task.
RTOS
Task/Process Communication
IPC – Message Passing
3. Signalling
-It is used for asynchronous notification where one process/thread fires a signal,
indicating the occurrence of a scenario which the other process/ thread is waiting
-Signals are not queued and do not carry any data.
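On a POSIX system this can be sketched with Python's signal module (SIGUSR1 is illustrative; note the handler receives only a signal number, matching "signals carry no data"):

```python
import os
import signal

fired = []

def handler(signum, frame):
    # Only the signal number arrives -- there is no data payload.
    fired.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register interest in the event
os.kill(os.getpid(), signal.SIGUSR1)     # another process/thread would fire this

print(fired)                             # the handler ran once with SIGUSR1's number
```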
RTOS
Task/Process Synchronization
• Multiple processes may try to access and modify shared resources in a multitasking
environment. This may lead to conflicts and inconsistent results
• Processes should be made aware of the access of a shared resource by each process
and should not be allowed to access a shared resource when it is currently being
accessed by another process
• The act of making processes aware of the access of shared resources by each process
to avoid conflicts is known as ‘Task/Process Synchronization’
• Task Synchronization is essential for avoiding conflicts in shared resource access and
ensuring a specified sequence for task execution
• Various synchronization issues may arise in a multitasking environment if processes
are not synchronized properly in shared resource access
• The process synchronization problem arises in the case of cooperative processes,
because resources are shared among them.
RTOS
Task/Process Synchronization Issues
1. Race Condition:
•When two processes/threads execute the same code or access the same
memory or a shared variable concurrently, there is a possibility that the
output goes wrong.
•The regions of a program that access shared resources and may cause race
conditions are called critical sections.
•That resource may be any resource in a computer like a memory location, Data
structure, CPU or any IO device.
•To avoid a race condition among processes, we must ensure that only one process
at a time executes within the critical section.
•All other processes have to wait to enter their critical sections.
RTOS
Task/Process Synchronization Issues: Race Condition
RTOS
Task/Process Synchronization Issues: Race Condition
• Process synchronization is needed:
- When multiple processes execute concurrently sharing some system resources.
- To avoid inconsistent results.
• One primary solution to avoid the race condition is Mutual Exclusion (MUTEX).
• The solution must provide mutual exclusion: if one process is executing
inside its critical section, then no other process may enter the critical
section.
RTOS
Task/Process Synchronization: MUTEX
• Thread synchronization is a mechanism which ensures multiple threads do
not access the same shared resource of a process (like common memory
space) in order to avoid race condition.
• Mutual Exclusion (MUTEX):
• The most popular method of achieving synchronization between tasks/processes.
• Basically a lock that is set before using a shared resource and released after using
the shared resource.
• When the lock is set, no other thread/process can access the shared resource.
• A MUTEX is needed in concurrent programming whenever tasks share a critical
section (CS), since a context switch can occur at any point inside it.
RTOS
Task/Process Synchronization Issues
2. Deadlock
Deadlock is the condition in which a process is waiting for a resource held by another
process which is waiting for a resource held by the first process
Process A holds a resource ‘x’ and it wants a resource ‘y’ held by Process B. Process B is
currently holding resource ‘y’ and it wants the resource ‘x’ which is currently held by
Process A. Both hold their respective resources and compete with each other to get the
resource held by the other process.
[Figure: Scenarios leading to deadlock, and deadlock visualization – Process A holds
resource ‘x’ and wants resource ‘y’; Process B holds resource ‘y’ and wants resource ‘x’]
RTOS
Task/Process Synchronization Issues - Deadlock
Conditions favouring deadlock:
Mutual Exclusion: The criterion that only one process can hold a resource at a time,
meaning processes should access shared resources with mutual exclusion. A typical example
is accessing the display device in an embedded device.
Hold & Wait: The condition in which a process holds a shared resource, by acquiring the
lock controlling the shared access, while waiting for additional resources held by other
processes.
No Resource Preemption: The criterion that the Operating System cannot take back a
resource from a process which is currently holding it; the resource can only be released
voluntarily by the process holding it.
Circular Wait: A process is waiting for a resource which is currently held by another
process, which in turn is waiting for a resource held by the first process. In general there
exists a set of waiting processes P0, P1, …, Pn such that P0 is waiting for a resource held
by P1, P1 is waiting for a resource held by P2, …, and Pn is waiting for a resource held by
P0. This forms a circular wait chain.
‘Deadlock’ is a result of the combined occurrence of these four conditions listed above.
RTOS
Task/Process Synchronization Issues – Livelock and Starvation
3. Livelock
• The Livelock condition is similar to the deadlock condition except that a process in livelock
condition changes its state with time
•In deadlock, a process enters in wait state for a resource and continues in that state forever
without making any progress in the execution
• In a livelock condition, a process always does something but is unable to make any
progress towards completing its execution
4. Starvation
•In the task synchronization context, starvation is the condition in which a process does
not get the resources required to continue its execution for a prolonged period.
• Starvation may arise from various causes, such as a byproduct of deadlock prevention
measures, or scheduling policies favouring high priority tasks and tasks with the shortest
execution time.
RTOS
Task/Process Synchronization Issues
1. Dining Philosophers Problem
Five philosophers are sitting around a round table, involved in eating and brainstorming. At any point,
each philosopher will be in any of the three states: eating, hungry or brainstorming. (While eating
the philosopher is not involved in brainstorming and while brainstorming the philosopher is not
involved in eating). For eating, each philosopher requires 2 forks. There are only 5 forks available on
the dining table and they are arranged in a fashion that one fork is in between two philosophers.
The philosopher can only use the forks on his/her immediate left and right, and only in the
order: pick up the left fork first and then the right fork. Analyze the situation and explain
the possible outcomes of this scenario.
RTOS
Task/Process Synchronization Issues
Task Synchronization Issues – Dining Philosophers Problem
• Scenario 1: All philosophers brainstorm together and then try to eat together. Each
philosopher picks up the left fork and is unable to proceed, since two forks are required
for eating. Philosopher-1 thinks that Philosopher-2, sitting to his/her right, will put the
fork down and waits for it. Philosopher-2 thinks that Philosopher-3 will put the fork down
and waits for it, and so on. This forms a circular chain of un-granted requests. If the
philosophers continue waiting for the fork from the philosopher sitting to their right,
they will make no progress in eating, and this will result in starvation of the
philosophers and deadlock.
• Scenario 2: All philosophers start brainstorming together. One philosopher becomes
hungry and picks up the left fork. When that philosopher is about to pick up the right
fork, the philosopher sitting to his/her right also becomes hungry and tries to grab
his/her own left fork, which is the right fork of the neighbouring philosopher who is
trying to lift it, resulting in a ‘race condition’.
• Scenario 3: All philosophers brainstorm together and try to eat together. Each
philosopher picks up the left fork and is unable to proceed, since two forks are required
for eating. Each anticipates that the adjacent philosopher will put his/her fork down,
waits for a fixed duration, and then puts his/her own fork down. After another fixed
duration, each tries to lift the fork again. Since all philosophers try to lift the forks
together, none of them is able to grab two forks. This condition leads to livelock and
starvation of the philosophers: each philosopher keeps doing something, yet none makes
progress towards the goal.
RTOS
Task/Process Synchronization Issues
[Figure: The ‘Real Problems’ in the ‘Dining Philosophers problem’ –
(a) Starvation & Deadlock: each philosopher holds one fork and wants the fork held by a neighbour;
(b) Racing: two philosophers try to grab the same fork together;
(c) Livelock & Starvation]
RTOS
Task/Process Synchronization Issues
4. Producer Consumer/Bounded Buffer Problem

• A common data sharing problem where two tasks/processes concurrently access a
shared buffer with fixed size
• A thread/process which produces data is called ‘Producer thread/process’ and a
thread/process which consumes the data produced by a producer thread/process is
known as ‘Consumer thread/process’
• There may be cases in which the producer produces data at a faster rate than the
rate at which it is consumed by the consumer. This leads to ‘buffer overrun’, where
the producer tries to put data into a full buffer
• If the consumer consumes data at a faster rate than the rate at which it is produced
by the producer, it leads to ‘buffer under-run’, in which the consumer tries to read
from an empty buffer and may end up reading old data
RTOS
Task/Process Synchronization Issues - Producer Consumer/Bounded Buffer Problem
The buffer is limited in its size.

The problem is to:
• Make sure that the producer won't try to add data into the buffer if it's full
• Make sure that the consumer won't try to remove data from an empty buffer
RTOS
Task/Process Synchronization Issues
5. Readers-Writers Problem

• A common issue observed in processes competing for limited shared resources
• Characterized by multiple processes trying to read and write shared data concurrently
• A typical real world example of the Readers-Writers problem is a banking system,
where one process reads the account information, such as the available balance, while
another process updates the available balance for that account. Without
synchronization, this can produce inconsistent results.
• If multiple processes read shared data concurrently it may not create any problem,
whereas when multiple processes write and read concurrently it will definitely create
inconsistent results
RTOS
Task/Process Synchronization Issues
6. Priority Inversion
• Priority inversion is the byproduct of the combination of blocking based (lock based)
process synchronization and pre-emptive priority scheduling
• ‘Priority inversion’ is the condition in which a high priority task needs to wait for a low
priority task to release a resource which is shared between the high priority task and
the low priority task and a medium priority task which doesn’t require the shared
resource for its execution preempts the low priority task and continue its execution
• Priority based preemptive scheduling technique ensures that a high priority task is
always executed first, whereas the lock based process synchronization mechanism
(like mutex, semaphore etc) ensures that a process will not access a shared resource,
which is currently in use by another process
• The synchronization technique is only interested in avoiding conflicts that may arise
due to the concurrent access of the shared resources and not at all bothered about
the priority of the process which tries to access the shared resource.
Self reading – Thread-safe and reentrant functions
RTOS
Concepts: Threads: POSIX Threads (pthreads)
For threads creation and management, the following steps are needed.
•Define thread reference variable/thread id.
•Creating a thread function.
•Creating the thread
•Joining everything for execution. (Thread Management)
In the C programming language, POSIX threads (pthreads) are used for thread creation
and management. We must include the header file pthread.h, which declares all the
functions for thread creation and management.
RTOS
Concepts: Threads: POSIX Threads (pthreads)
For threads creation and management, the following steps are needed.
•Define thread reference variable.
• For defining a thread reference variable, we must use the following statement.
• pthread_t th_id;
• For every thread that needs to be created, a unique id must be defined using the above
statement.
•Creating a thread function.
• Threads need to execute or perform some operation, hence a thread function is needed.
• Syntax:
void *th_fn(void *argu)
{
    /* function body */
    return NULL; /* a thread function must return a void pointer */
}
RTOS
Concepts: Threads: POSIX Threads (pthreads)

For threads creation and management, the following steps are needed (continued).
•Creating the thread.
• Using the below function:
int pthread_create(pthread_t *th_id, const pthread_attr_t *attr, void *(*th_fn)(void *),
void *argu);
• The first argument is a pointer to the thread id (thread reference variable).
• The second argument is NULL when the default thread attributes are used.
• Third argument is a pointer to a thread function.
• Fourth argument is a pointer to the argument that we want to pass to the thread function.
• The function returns 0 if the thread is created successfully, and a non-zero error
code otherwise.
•Joining everything for execution. (Thread Management)
• To wait for a created thread to finish execution, use the following statement.
• pthread_join (th_id, status); // status can receive the thread’s return value; pass NULL if it is not needed
RTOS
Concepts: Threads: POSIX Threads (pthreads): Example
#include <stdio.h>
#include <pthread.h> // Header file for thread creation and management

void *thfn(void *argu) // Thread execution function
{
    printf("Inside Thread Function.\n");
    return NULL; // A thread function must return a void pointer
}

int main()
{
    pthread_t th1; // Thread id
    int ret;
    ret = pthread_create(&th1, NULL, &thfn, NULL); // Thread creation function
    if (ret == 0)
        printf("Thread creation success.\n");
    else
        return 0;
    pthread_join(th1, NULL); // Wait for the thread to finish execution
    return 0;
}
• Self study : Thread Vs Process
• Thread Pre-emption
- User level thread
- kernel/ system level thread
- many to one model
- one to one model
- many to many model
THANK YOU