Module 3 RTOS

The document provides an overview of operating system basics, highlighting the role of the OS as an intermediary between user applications and system resources. It discusses the kernel's functions, including process and memory management, and contrasts monolithic and microkernel architectures. Additionally, it categorizes operating systems into general-purpose and real-time systems, detailing their characteristics and the importance of task management and scheduling in real-time environments.

Operating System Basics:

The Operating System acts as a bridge between the user applications/tasks and the underlying system resources through a set of system functionalities and services.

The OS manages the system resources and makes them available to the user applications/tasks on a need basis.

The primary functions of an Operating System are:

Make the system convenient to use

Organize and manage the system resources efficiently and correctly

Figure 1: The Architecture of Operating System

The Kernel:

The kernel is the core of the operating system

It is responsible for managing the system resources and the communication between the hardware and other system services.

Kernel acts as the abstraction layer between system resources and user
applications

Kernel contains a set of system libraries and services.

For a general purpose OS, the kernel contains different services like

Process Management

Primary Memory Management

File System management

I/O System (Device) Management

Secondary Storage Management

Protection

Time management

Interrupt Handling

Kernel Space and User Space:

The program code corresponding to the kernel applications/services is kept in a contiguous area (OS dependent) of primary (working) memory and is protected from unauthorized access by user programs/applications. This memory area is referred to as Kernel Space.

All user applications are loaded to a specific area of primary memory and this memory area is referred to as User Space.

The partitioning of memory into kernel and user space is purely Operating
System dependent

An operating system with virtual memory support loads the user applications into their corresponding virtual memory space with the demand paging technique.

Most of the operating systems keep the kernel application code in main
memory and it is not swapped out into the secondary memory

Monolithic Kernel:

All kernel services run in the kernel space

All kernel modules run within the same memory space under a single kernel thread

The tight internal integration of kernel modules in the monolithic kernel architecture allows effective utilization of the low-level features of the underlying system

The major drawback of the monolithic kernel is that any error or failure in any one of the kernel modules leads to the crashing of the entire kernel

LINUX, SOLARIS, MS-DOS kernels are examples of monolithic kernels

Figure 2: The Monolithic Kernel Model

Microkernel

The microkernel design incorporates only the essential set of Operating System services into the kernel

The rest of the Operating System services are implemented in programs known as Servers, which run in user space

The kernel design is highly modular and provides OS-neutral abstraction

Memory management, process management, timer systems and interrupt handlers are examples of essential services which form part of the microkernel

Figure 3: The Microkernel Model

QNX, Minix 3 kernels are examples for microkernel.


Benefits of Microkernel:
1. Robustness: If a problem is encountered in any of the services running as servers, the corresponding server can be reconfigured and re-started without the need for re-starting the entire OS.
2. Configurability: Any service running as a server can be changed without the need to restart the whole system.
Types of Operating Systems:
Depending on the type of kernel and kernel services, purpose and type of
computing systems where the OS is deployed and the responsiveness to applications,
Operating Systems are classified into

1. General Purpose Operating System (GPOS)

2. Real-Time Operating System (RTOS)

1. General Purpose Operating System (GPOS):

Operating Systems which are deployed in general computing systems

The kernel is more generalized and contains all the required services to execute generic applications

Need not be deterministic in execution behavior

May inject random delays into application software and thus cause slow responsiveness of an application at unexpected times

Usually deployed in computing systems where deterministic behavior is not an important criterion

Personal Computer/Desktop system is a typical example of a system where GPOSs are deployed

Windows XP, MS-DOS etc. are examples of General Purpose Operating Systems

2. Real-Time Operating System (RTOS):

Operating Systems which are deployed in embedded systems demanding real-time response

Deterministic in execution behavior. Consumes only a known amount of time for kernel applications

Implements scheduling policies for always executing the highest priority task/application

Implements policies and rules concerning the time-critical allocation of a system's resources

Windows CE, QNX, VxWorks, MicroC/OS-II etc. are examples of Real-Time Operating Systems (RTOS)

The Real Time Kernel: The kernel of a Real-Time Operating System is referred to as the Real Time kernel. In contrast to the conventional OS kernel, the Real Time kernel is highly specialized and contains only the minimal set of services required for running the user applications/tasks. The basic functions of a Real Time kernel are:
a) Task/Process management

b) Task/Process scheduling

c) Task/Process synchronization

d) Error/Exception handling

e) Memory Management

f) Interrupt handling

g) Time management

Real Time Kernel - Task/Process Management: Deals with setting up memory space for the tasks, loading the task's code into memory, allocating system resources, setting up a Task Control Block (TCB) for the task, and task/process termination/deletion. A Task Control Block (TCB) is used for holding the information corresponding to a task. A TCB usually contains the following set of information:

Task ID: Task Identification Number

Task State: The current state of the task (e.g. State = 'Ready' for a task which is ready to execute)

Task Type: Indicates the type of this task. The task can be a hard real-time, soft real-time or background task.

Task Priority: Task priority (e.g. Task priority = 1 for a task with priority = 1)

Task Context Pointer: Pointer for context saving

Task Memory Pointers: Pointers to the code memory, data memory and
stack memory for the task

Task System Resource Pointers: Pointers to system resources (semaphores,


mutex etc) used by the task

Task Pointers: Pointers to other TCBs (TCBs for preceding, next and
waiting tasks)

Other Parameters: Other relevant task parameters

The parameters and implementation of the TCB is kernel dependent. The TCB
parameters vary across different kernels, based on the task management
implementation
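Since the TCB layout is kernel dependent, the fields above can only be sketched generically. The structure below is an illustrative C rendering of such a TCB; the type names, field names and widths are assumptions for illustration, not taken from any particular kernel.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative task states and task types from the discussion above. */
typedef enum { TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state;
typedef enum { TASK_HARD_RT, TASK_SOFT_RT, TASK_BACKGROUND } task_type;

/* A minimal Task Control Block sketch; real kernels vary widely in
 * which parameters they keep and how they keep them. */
typedef struct tcb {
    uint32_t    task_id;        /* Task Identification Number          */
    task_state  state;          /* current state of the task           */
    task_type   type;           /* hard/soft real-time or background   */
    uint8_t     priority;       /* task priority                       */
    void       *context;        /* pointer for context saving          */
    void       *code_mem;       /* pointer to the task's code memory   */
    void       *data_mem;       /* pointer to the task's data memory   */
    void       *stack_mem;      /* pointer to the task's stack memory  */
    void       *resources;      /* semaphores, mutexes etc. in use     */
    struct tcb *prev, *next;    /* TCBs of preceding/next tasks        */
} tcb_t;
```

A kernel would allocate one such record per task at creation time and reclaim it at task deletion.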

Task/Process Scheduling: Deals with sharing the CPU among various tasks/processes. A kernel application called the Scheduler handles the task scheduling. The Scheduler is nothing but an algorithm implementation which performs efficient and optimal scheduling of tasks to provide deterministic behavior.

Task/Process Synchronization: Deals with synchronizing the concurrent access to a resource which is shared across multiple tasks, and the communication between various tasks.

Error/Exception Handling: Deals with registering and handling the errors that occur and the exceptions that are raised during the execution of tasks. Insufficient memory, timeouts, deadlocks, deadline missing, bus error, divide by zero, unknown instruction execution etc. are examples of errors/exceptions. Errors/exceptions can happen at the kernel-level services or at the task level. Deadlock is an example of a kernel-level exception, whereas timeout is an example of a task-level exception. The OS kernel gives the information about the error in the form of a system call (API).

Memory Management:

The memory management function of an RTOS kernel is slightly different from that of the General Purpose Operating Systems.

In general, memory allocation time increases depending on the size of the block of memory to be allocated and the state of the allocated memory block (an initialized memory block consumes more allocation time than an un-initialized memory block).

Since predictable timing and deterministic behavior are the primary focus of an RTOS, the RTOS achieves this by compromising the effectiveness of memory allocation: the RTOS kernel uses fixed-size blocks of dynamic memory, held in a Free Buffer Queue, instead of the usual dynamic memory allocation techniques used by the GPOS.

Most of the RTOS kernels allow tasks to access any of the memory blocks without any memory protection, to achieve predictable timing and avoid the timing overheads.

RTOS kernels assume that the whole design is proven correct and protection is unnecessary. Some commercial RTOS kernels allow memory protection as an option, and the kernel enters a fail-safe mode when an illegal memory access occurs.

A few RTOS kernels implement the Virtual Memory concept for memory allocation if the system supports secondary memory storage (like HDD and FLASH memory).

A block is always allocated to a task on a need basis and is taken as a unit. Hence, there will not be any memory fragmentation issues.

The memory allocation can be implemented as constant functions, and thereby it consumes a fixed amount of time for memory allocation. This leaves the deterministic behavior of the RTOS kernel untouched.
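A fixed-size block allocator of this kind can be sketched as below. This is a minimal illustration, not the allocator of any particular RTOS kernel; the pool size, block size and function names are assumptions.

```c
#include <assert.h>
#include <stddef.h>

#define BLOCK_SIZE 32   /* illustrative fixed block size (bytes) */
#define NUM_BLOCKS 8    /* illustrative pool capacity            */

/* The pool: NUM_BLOCKS fixed-size blocks, plus a free-buffer queue
 * implemented here as a simple stack of free block indices. */
static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static int free_queue[NUM_BLOCKS];
static int free_top = -1;

/* Put every block on the free-buffer queue. */
void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        free_queue[++free_top] = i;
}

/* O(1) allocation: pop a block index from the free queue.
 * Constant time, so the kernel's timing stays deterministic. */
void *block_alloc(void) {
    if (free_top < 0) return NULL;   /* pool exhausted */
    return pool[free_queue[free_top--]];
}

/* O(1) release: push the block's index back onto the free queue. */
void block_free(void *p) {
    int idx = (int)(((unsigned char (*)[BLOCK_SIZE])p) - pool);
    free_queue[++free_top] = idx;
}
```

Because every allocation hands out a whole fixed-size block, no external fragmentation can occur; the trade-off is internal fragmentation inside each block when a task needs less than BLOCK_SIZE bytes.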

Interrupt Handling:

Interrupts inform the processor that an external device or an associated task requires immediate attention of the CPU.

Interrupts can be either Synchronous or Asynchronous.

Interrupts which occur in sync with the currently executing task are known as Synchronous interrupts. Usually the software interrupts fall under the Synchronous Interrupt category. Divide by zero, memory segmentation error etc. are examples of Synchronous interrupts.

For synchronous interrupts, the interrupt handler runs in the same context as the interrupting task.

Asynchronous interrupts are interrupts which occur at any point in the execution of any task and are not in sync with the currently executing task.

The interrupts generated by external devices connected to the processor/controller (by asserting the interrupt line of the processor/controller to which the interrupt line of the device is connected), timer overflow interrupts, serial data reception/transmission interrupts etc. are examples of asynchronous interrupts.

For asynchronous interrupts, the interrupt handler is usually written as a separate task (depending on the OS kernel implementation) and it runs in a different context. Hence, a context switch happens while handling asynchronous interrupts.

Priority levels can be assigned to the interrupts and each interrupt can be enabled or disabled individually.

Nested Interrupts
Interrupt nesting allows the pre-emption (interruption) of an Interrupt
Service Routine (ISR), servicing an interrupt, by a higher priority interrupt.

Time Management:

Accurate time management is essential for providing a precise time reference for all applications.

The time reference to the kernel is provided by a high-resolution Real Time Clock (RTC) hardware chip (hardware timer).

The hardware timer is programmed to interrupt the processor/controller at a fixed rate. This timer interrupt is referred to as the 'Timer tick'. The 'Timer tick' interval varies in the microseconds range.

The time parameters for tasks are expressed as multiples of the 'Timer tick'.

The System time is updated based on the 'Timer tick'. If the System time register is 32 bits wide and the 'Timer tick' interval is 1 microsecond, the System time register will reset in

2^32 * 10^-6 / (24 * 60 * 60) = 0.0497 days ≈ 1.19 hours

If the interval is 1 millisecond, the System time register will reset in

2^32 * 10^-3 / (24 * 60 * 60) = 49.7 days ≈ 50 days

The 'Timer tick' interrupt is serviced by the Timer tick handler of the kernel. The Timer tick interrupt can be utilized for implementing the following actions:

Save the current context (context of the currently executing task)

Increment the System time register by one. Generate a timing error and reset the System time register if the timer tick count is greater than the maximum range available for the System time register

Update the timers implemented in the kernel (increment or decrement the timer registers for each timer depending on the count direction setting for each timer: increment registers with count direction setting 'count up' and decrement registers with count direction setting 'count down')

Activate the periodic tasks which are in the idle state

Invoke the scheduler and schedule the tasks again based on the scheduling algorithm

Delete all the terminated tasks and their associated data structures (TCBs)

Load the context for the first task in the ready queue. Due to the re-scheduling, the ready task might be changed to a new one from the task which was pre-empted by the 'Timer tick' interrupt
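The system-time bookkeeping among the actions above can be sketched as below. Only the increment-and-rollover step is implemented; the other actions are indicated as comments, and the variable and function names are illustrative assumptions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative 32-bit System time register, in timer-tick units. */
static uint32_t system_time;
static bool timing_error;

/* Minimal timer-tick handler sketch. */
void timer_tick_handler(void) {
    /* 1. Save the context of the currently executing task (not shown). */

    /* 2. Increment the System time register; flag a timing error and
     *    reset the register when it exceeds its 32-bit range. */
    if (system_time == UINT32_MAX) {
        timing_error = true;
        system_time = 0;
    } else {
        system_time++;
    }

    /* 3. Update kernel timers (count up / count down) - not shown. */
    /* 4. Activate idle periodic tasks - not shown.                 */
    /* 5. Invoke the scheduler - not shown.                         */
    /* 6. Delete terminated tasks and their TCBs - not shown.       */
    /* 7. Load the context of the first ready task - not shown.     */
}
```

With a 1 ms tick, this 32-bit register wraps after roughly 50 days, matching the rollover arithmetic above.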

Hard Real-time System:

A Real-Time Operating System which strictly adheres to the timing constraints for a task

A Hard Real-Time system must meet the deadlines for a task without any slippage

Missing any deadline may produce catastrophic results for Hard Real-Time systems, including permanent data loss and irrecoverable damage to the system/users

Emphasizes the principle 'A late answer is a wrong answer'

Air bag control systems and Anti-lock Brake Systems (ABS) of vehicles are typical examples of Hard Real-Time systems

As a rule of thumb, Hard Real-Time systems do not implement the virtual memory model for handling memory. This eliminates the delay in swapping the code corresponding to a task in and out of primary memory

The presence of a Human in the Loop (HITL) for tasks introduces unexpected delays in task execution. Most Hard Real-Time systems are automatic and do not contain a 'human in the loop'

Soft Real-time System:

Real-Time Operating Systems that do not guarantee meeting deadlines, but offer the best effort to meet the deadline

Missing deadlines for tasks is acceptable if the frequency of deadline missing is within the compliance limit of the Quality of Service (QoS)

Emphasizes the principle 'A late answer is an acceptable answer, but it could have done a bit faster'

Soft Real-Time systems most often have a human in the loop (HITL)

An Automatic Teller Machine (ATM) is a typical example of a Soft Real-Time system. If the ATM takes a few seconds more than the ideal operation time, nothing fatal happens.

An audio-video playback system is another example of a Soft Real-Time system. No potential damage arises if a sample comes late by a fraction of a second for playback.

Tasks, Processes & Threads:

In the Operating System context, a task is defined as the program in execution and the related information maintained by the Operating System for the program

A task is also known as a 'Job' in the Operating System context

A program, or part of it, in execution is also called a 'Process'

The terms 'Task', 'Job' and 'Process' refer to the same entity in the Operating System context and most often they are used interchangeably

A process requires various system resources like the CPU for executing the process, memory for storing the code corresponding to the process and associated variables, I/O devices for information exchange etc.

The structure of a Process: The concept of 'Process' leads to concurrent execution of multiple tasks and thereby the efficient utilization of the CPU and other system resources

Concurrent execution is achieved through the sharing of the CPU among the processes

A process mimics a processor in properties and holds a set of registers, process status, a Program Counter (PC) to point to the next executable instruction of the process, a stack for holding the local variables associated with the process, and the code corresponding to the process

A process, which inherits all the properties of the CPU, can be considered as a virtual processor, awaiting its turn to have its properties switched into the physical processor.

Figure 4: Structure of a Process (Stack (Stack Pointer), Working Registers, Status Registers, Program Counter (PC), and the Code Memory corresponding to the Process)

When the process gets its turn, its registers and Program Counter register become mapped to the physical registers of the CPU.

Memory organization of Processes:

The memory occupied by the process is segregated into three regions, namely Stack memory, Data memory and Code memory

The Stack memory holds all temporary data such as variables local to the process. Stack memory grows downwards.

Data memory holds all global data for the process. Data memory grows upwards.

The Code memory contains the program code (instructions) corresponding to the process

Figure 5: Memory organization of a Process


On loading a process into the main memory, a specific area of memory is
allocated for the process

The stack memory usually starts at the highest memory address from the
memory area allocated for the process (Depending on the OS kernel
implementation)

Process States & State Transition

The creation of a process to its termination is not a single step operation

The process traverses through a series of states during its transition from the newly created state to the terminated state

The cycle through which a process changes its state from 'newly created' to 'execution completed' is known as the Process Life Cycle. The state a process is in at a given instant indicates the current status of the process with respect to time and also provides information on what it is allowed to do next.

Process States & State Transition:

Created State: The state at which a process is being created is referred to as the Created State. The OS recognizes a process in the Created State, but no resources are allocated to the process.

Ready State: The state where a process is incepted into the memory and awaiting the processor time for execution is known as the Ready State. At this stage, the process is placed in the 'Ready list' queue maintained by the OS.

Running State: The state where the source code instructions corresponding to the process are being executed is called the Running State. The Running state is the state at which the process execution happens.

Blocked State/Wait State: Refers to a state where a running process is temporarily suspended from execution and does not have immediate access to resources. The blocked state might be invoked by various conditions like: the process enters a wait state for an event to occur (e.g. waiting for user input such as keyboard input) or is waiting to get access to a shared resource like a semaphore, mutex etc.

Figure 6: Process states and state transition (Created → incepted into memory → Ready → Running → execution completion → Completed, with transitions between Running and Blocked)

Completed State: A state where the process completes its execution

The transition of a process from one state to another is known as State transition

When a process changes its state from Ready to Running, or from Running to Blocked or Terminated, or from Blocked to Running, the CPU allocation for the process may also change
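The life cycle above can be sketched as a small transition check in C; the enum and function names are illustrative, and the allowed transitions follow the states and transitions described in the text.

```c
#include <assert.h>
#include <stdbool.h>

/* Process life-cycle states as described above. */
typedef enum { CREATED, READY, RUNNING, BLOCKED, COMPLETED } proc_state;

/* Returns true when the transition is one of those described in the
 * text: Created->Ready, Ready->Running, Running->Ready/Blocked/
 * Completed, and (as the text lists) Blocked->Ready or Running. */
bool valid_transition(proc_state from, proc_state to) {
    switch (from) {
    case CREATED: return to == READY;
    case READY:   return to == RUNNING;
    case RUNNING: return to == READY || to == BLOCKED || to == COMPLETED;
    case BLOCKED: return to == READY || to == RUNNING;
    default:      return false;   /* COMPLETED is terminal */
    }
}
```

A kernel's state machine would reject any transition for which this predicate is false, e.g. dispatching a task straight from Created to Running.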

Threads

A thread is the primitive that can execute code

A thread is a single sequential flow of control within a process

Thread is also known as lightweight process

A process can have many threads of execution

Different threads, which are part of a process, share the same address space; meaning they share the data memory, code memory and heap memory area

Threads maintain their own thread status (CPU register values), Program Counter (PC) and stack

Figure 7 Memory organization of process and its associated Threads

The Concept of multithreading:

Use of multiple threads to execute a process brings the following advantages.

Better memory utilization: Multiple threads of the same process share the address space for data memory. This also reduces the complexity of inter-thread communication, since variables can be shared across the threads.

Speeds up the execution of the process: Since the process is split into different threads, when one thread enters a wait state, the CPU can be utilized by other threads of the process that do not require the event which the other thread is waiting for. This speeds up the execution of the process.

Efficient CPU utilization: The CPU is engaged all the time.

The multi-threaded process of Figure 8 creates two child threads with the Win32 API (reconstructed from the code fragment in the figure):

void main (void)
{
    //Create child thread 1
    CreateThread(NULL, 1000,
        (LPTHREAD_START_ROUTINE) ChildThread1,
        NULL, 0, &dwThreadID);

    //Create child thread 2
    CreateThread(NULL, 1000,
        (LPTHREAD_START_ROUTINE) ChildThread2,
        NULL, 0, &dwThreadID);
}

int ChildThread1 (void)
{
    //Do something
}

int ChildThread2 (void)
{
    //Do something
}

Figure 8: Process with multi-threads (the threads share the Code Memory and Data Memory of the Task/Process; each thread holds its own Stack and Registers)

Thread V/s Process

Thread: A thread is a single unit of execution and is part of a process.
Process: A process is a program in execution and contains one or more threads.

Thread: A thread does not have its own data memory and heap memory. It shares the data memory and heap memory with other threads of the same process.
Process: A process has its own code memory, data memory and stack memory.

Thread: A thread cannot live independently; it lives within the process.
Process: A process contains at least one thread.

Thread: There can be multiple threads in a process. The first thread (main thread) calls the main function and occupies the start of the stack memory of the process.
Process: Threads within a process share the code, data and heap memory. Each thread holds a separate memory area for its stack (sharing the total stack memory of the process).

Thread: Threads are very inexpensive to create.
Process: Processes are very expensive to create; creation involves a lot of OS overhead.

Thread: Context switching is inexpensive and fast.
Process: Context switching is complex, involves a lot of OS overhead and is comparatively slower.

Thread: If a thread expires, its stack is reclaimed by the process.
Process: If a process dies, the resources allocated to it are reclaimed by the OS and all the associated threads of the process also die.

Advantages of Threads:

1. Better memory utilization: Multiple threads of the same process share the
address space for data memory. This also reduces the complexity of inter
thread communication since variables can be shared across the threads.

2. Efficient CPU utilization: The CPU is engaged all time.

3. Speeds up the execution of the process: The process is split into different threads; when one thread enters a wait state, the CPU can be utilized by other threads of the process that do not require the event which the other thread is waiting for.

Multiprocessing & Multitasking

The ability to execute multiple processes simultaneously is referred to as multiprocessing

Systems which are capable of performing multiprocessing are known as multiprocessor systems

Multiprocessor systems possess multiple CPUs and can execute multiple processes simultaneously

The ability of the Operating System to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming

Multitasking refers to the ability of an operating system to hold multiple processes in memory and switch the processor (CPU) from executing one process to another process

Multitasking involves Context switching, Context saving and Context retrieval

Context switching refers to the switching of execution context from one task to another

When a task/process switch happens, the current context of execution should be saved (Context saving) so that it can be retrieved at a later point of time when the CPU resumes executing the process that is currently interrupted due to the execution switch

During context switching, the context of the task to be executed is retrieved from the saved context list. This is known as Context retrieval.

Multitasking Context Switching:

Figure 9: Context Switching (Process 1 runs, waits in the queue while Process 2 runs, then runs again; each process alternates between Running, waiting in the queue and Idle)

Multiprogramming: The ability of the Operating System to have multiple programs in memory, which are ready for execution, is referred to as multiprogramming.

Types of Multitasking:

Depending on how the task/process execution switching act is implemented, multitasking is classified into Co-operative, Preemptive and Non-preemptive multitasking.

Co-operative Multitasking: Co-operative multitasking is the most primitive form of multitasking, in which a task/process gets a chance to execute only when the currently executing task/process voluntarily relinquishes the CPU. In this method, any task/process can hold the CPU for as much time as it wants. Since this type of implementation relies on the mercy of the tasks towards each other for getting CPU time for execution, it is known as co-operative multitasking. If the currently executing task is non-cooperative, the other tasks may have to wait for a long time to get the CPU.

Preemptive Multitasking: Preemptive multitasking ensures that every task/process gets a chance to execute. When, and for how much time, a process gets to execute is dependent on the implementation of the preemptive scheduling. As the name indicates, in preemptive multitasking the currently running task/process is preempted to give a chance to other tasks/processes to execute. The preemption of a task may be based on time slots or task/process priority.

Non-preemptive Multitasking: The process/task which is currently given the CPU is allowed to execute until it terminates (enters the 'Completed' state) or enters the 'Blocked/Wait' state, waiting for an I/O. Co-operative and non-preemptive multitasking differ in their behavior when they are in the 'Blocked/Wait' state. In co-operative multitasking, the currently executing process/task need not relinquish the CPU when it enters the 'Blocked/Wait' state, waiting for an I/O, a shared resource access or an event to occur, whereas in non-preemptive multitasking the currently executing task relinquishes the CPU when it waits for an I/O.

Task Scheduling:
In a multitasking system, there should be some mechanism in place to share
the CPU among the different tasks and to decide which process/task is to be
executed at a given point of time

Determining which task/process is to be executed at a given point of time is


known as task/process scheduling

Task scheduling forms the basis of multitasking

Scheduling policies form the guidelines for determining which task is to be executed when

The scheduling policies are implemented in an algorithm, and it is run by the kernel as a service

The kernel service/application which implements the scheduling algorithm is known as the Scheduler

The task scheduling policy can be pre-emptive, non-preemptive or co-operative

Depending on the scheduling policy, the process scheduling decision may take place when a process switches its state to

Ready state from Running state

Blocked/Wait state from Running state

Ready state from Blocked/Wait state

Completed state
Task Scheduling - Scheduler Selection:
The selection of a scheduling criterion/algorithm should consider the following:
CPU Utilization: The scheduling algorithm should always make the CPU
utilization high. CPU utilization is a direct measure of how much percentage
of the CPU is being utilized.
Throughput: This gives an indication of the number of processes executed
per unit of time. The throughput for a good scheduler should always be higher.
Turnaround Time: It is the amount of time taken by a process for completing
its execution. It includes the time spent by the process for waiting for the
main memory, time spent in the ready queue, time spent on completing the
I/O operations, and the time spent in execution. The turnaround time should
be a minimum for a good scheduling algorithm.

Waiting Time: It is the amount of time spent by a process in the Ready queue waiting to get the CPU time for execution. The waiting time should be minimal for a good scheduling algorithm.
Response Time: It is the time elapsed between the submission of a process and the first response. For a good scheduling algorithm, the response time should be as small as possible.

To summarize, a good scheduling algorithm has high CPU utilization, minimum


Turn Around Time (TAT), maximum throughput and least response time.

Task Scheduling - Queues


The various queues maintained by OS in association with CPU scheduling are
Job Queue: Job queue contains all the processes in the system
Ready Queue: Contains all the processes, which are ready for execution and
waiting for CPU to get their turn for execution. The Ready queue is empty
when there is no process ready for running.
Device Queue: Contains the set of processes, which are waiting for an I/O
device
Task Scheduling - Task transition through various Queues:

Figure 10: Process transition through various queues. An admitted process moves from the Job queue to the Ready queue; the Scheduler dispatches a process from the Ready queue to the CPU, where it either runs to completion, is preempted back into the Ready queue, or moves to the Device queue to wait for an I/O device (managed by the Device Manager) and re-enters the Ready queue when the I/O is completed.
Non-preemptive scheduling - First Come First Served (FCFS)/First In First Out (FIFO) Scheduling:

Allocates CPU time to the processes based on the order in which they enter the Ready queue
The first entered process is serviced first
It is the same as any real-world application where queue systems are used; e.g. ticketing
Drawbacks:
Favors monopoly of process. A process, which does not contain any I/O
operation, continues its execution until it finishes its task
In general, FCFS favors CPU bound processes and I/O bound processes may
have to wait until the completion of CPU bound process, if the currently
executing process is a CPU bound process. This leads to poor device
utilization.
The average waiting time is not minimal for FCFS scheduling algorithm

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion time 10, 5, 7 milliseconds respectively enters the ready queue together
in the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for
each process and the Average waiting time and Turn Around Time (Assuming there
is no I/O waiting for the processes).

Solution: The sequence of execution of the processes by the CPU is P1 (0 to 10 ms), P2 (10 to 15 ms), P3 (15 to 22 ms).

Assuming the CPU is readily available at the time of arrival of P1, P1 starts executing immediately and its waiting time is zero.

Waiting Time for P1 = 0 ms (P1 starts executing first)

Waiting Time for P2 = 10 ms (P2 starts executing after completing P1)

Waiting Time for P3 = 15 ms (P3 starts executing after completing P1 and P2)

Average waiting time = (Waiting time for all processes) / No. of Processes

= (Waiting time for (P1+P2+P3)) / 3

= (0+10+15)/3 = 25/3 = 8.33 milliseconds

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue +


Execution Time)

Turn Around Time (TAT) for P2 = 15 ms (-Do-)

Turn Around Time (TAT) for P3 = 22 ms (-Do-)

Average Turn Around Time= (Turn Around Time for all processes) / No. of
Processes

= (Turn Around Time for (P1+P2+P3)) / 3

= (10+15+22)/3 = 47/3

= 15.66 milliseconds
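The waiting-time and TAT arithmetic above can be checked with a short sketch (the `fcfs` helper is illustrative, not part of the text):

```python
def fcfs(bursts):
    """FCFS for processes that arrive together, serviced in the order given.
    Returns (waiting_times, turnaround_times) in milliseconds."""
    waiting, tat, clock = [], [], 0
    for burst in bursts:
        waiting.append(clock)   # time already spent in the Ready queue
        clock += burst          # the process runs to completion
        tat.append(clock)       # TAT = waiting time + execution time
    return waiting, tat

# P1 = 10 ms, P2 = 5 ms, P3 = 7 ms, entering in the order P1, P2, P3
w, t = fcfs([10, 5, 7])         # w = [0, 10, 15], t = [10, 15, 22]
```

Averaging gives 25/3 = 8.33 ms and 47/3 = 15.66 ms, matching the values computed above.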

Non-preemptive scheduling - Last Come First Served (LCFS) / Last In First Out (LIFO) Scheduling:

Allocates CPU time to the processes based on the order in which they enter
the Ready queue

The last entered process is serviced first

LCFS scheduling is also known as Last In First Out (LIFO) where the process,
which is put last into Ready queue, is serviced first

Drawbacks:

Favors monopolization of the CPU by a process: a process which does not contain any I/O
operation continues its execution until it finishes its task

In general, LCFS favors CPU bound processes and I/O bound processes may
have to wait until the completion of CPU bound process, if the currently
executing process is a CPU bound process. This leads to poor device
utilization.

The average waiting time is not minimal for LCFS scheduling algorithm

EXAMPLE: Three processes with process IDs P1, P2, P3 and estimated
completion times of 10, 5 and 7 milliseconds respectively enter the Ready queue in
the order P1, P2, P3. A new process P4 with estimated completion time of 6 milliseconds
enters the Ready queue after 5 ms of scheduling P1. Calculate the waiting time and Turn
Around Time (TAT) for each process, and the average waiting time and average Turn
Around Time (assuming there is no I/O waiting for the processes). Assume all the
processes contain only CPU operation and no I/O operations are involved.

Solution: Initially there is only P1 available in the Ready queue and the scheduling
sequence will be P1, P3, P2 (the last process to enter is serviced first). P4 enters the
queue during the execution of P1 and becomes the last process entering the Ready queue,
so the scheduling sequence changes to P1, P4, P3, P2 as given below.

P1 P4 P3 P2

0 10 16 23 28

10 6 7 5

The waiting times for all the processes are given as

Waiting Time for P1 = 0 ms (P1 starts executing first)

Waiting Time for P4 = 5 ms (P4 starts executing after completing P1. But P4
arrived after 5 ms of execution of P1. Hence its waiting time = Execution start time -
Arrival time = 10 - 5 = 5)

Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)

Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)

Average waiting time = (Waiting time for all processes) / No. of Processes
= (Waiting time for (P1+P4+P3+P2)) / 4

= (0 + 5 + 16 + 23)/4 = 44/4

= 11 milliseconds

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P4 = 11 ms (Time spent in Ready Queue +
Execution Time = (Execution Start Time - Arrival Time) + Estimated Execution Time = (10 - 5) + 6 = 5 + 6)

Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)
Average Turn Around Time = (Turn Around Time for all processes) / No. of Processes
= (Turn Around Time for (P1+P4+P3+P2)) / 4

= (10+11+23+28)/4 = 72/4

= 18 milliseconds

Non-preemptive scheduling - Shortest Job First (SJF) Scheduling:
Allocates CPU time to the processes based on the estimated execution completion
time of the tasks

The average waiting time for a given set of processes is minimal in SJF
scheduling

Optimal compared to other non-preemptive scheduling like FCFS

Drawbacks:

A process whose estimated execution completion time is high may not get a
chance to execute if more and more processes with lower estimated execution
times enter the Ready queue before the process with the longest execution
time starts its execution

It is difficult to know in advance the shortest process in the Ready queue
for scheduling, since new processes with different estimated execution times
keep entering the Ready queue at any point of time.
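The claim that SJF minimizes the average waiting time can be illustrated with a small sketch (the `sjf` helper is illustrative, not from the text). For the same burst set used in the FCFS example above, SJF drops the average wait from 8.33 ms to about 5.67 ms:

```python
def sjf(bursts):
    """Non-preemptive SJF for processes that arrive together.
    Returns the waiting time of each process, in input order."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting, clock = [0] * len(bursts), 0
    for i in order:             # service in ascending burst-time order
        waiting[i] = clock
        clock += bursts[i]
    return waiting

w = sjf([10, 5, 7])             # service order P2, P3, P1 -> w = [12, 0, 5]
avg = sum(w) / len(w)           # ~5.67 ms, versus 8.33 ms under FCFS
```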

Non-preemptive scheduling - Priority based Scheduling:

A priority, which may be unique or shared, is associated with each task

The priority of a task is expressed in different ways, like a priority number,
the time required to complete the execution etc.

In number based priority assignment the priority is a number ranging from 0
to the maximum priority supported by the OS. The maximum level of priority
is OS dependent.

Windows CE supports 256 levels of priority (0 to 255 priority numbers, with
0 being the highest priority)

The priority is assigned to the task on creating it. It can also be changed
dynamically (If the Operating System supports this feature)

The non-preemptive priority based scheduler sorts the Ready queue based
on the priority and picks the process with the highest level of priority for
execution

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times of 10, 5, 7 milliseconds and priorities 0, 3, 2 (0 - highest priority, 3 -
lowest priority) respectively enter the Ready queue together. Calculate the waiting
time and Turn Around Time (TAT) for each process, and the average waiting time
and average Turn Around Time (assuming there is no I/O waiting for the processes) in
the priority based scheduling algorithm.

Solution: The scheduler sorts the Ready queue based on the priority and schedules
the process with the highest priority (P1, with priority number 0) first, the next
highest priority process (P3, with priority number 2) second, and so on. The order in
which the processes are scheduled for execution is represented as

P1 P3 P2

0 10 17 22
10 7 5

The waiting times for all the processes are given as

Waiting Time for P1 = 0 ms (P1 starts executing first)

Waiting Time for P3 = 10 ms (P3 starts executing after completing P1)

Waiting Time for P2 = 17 ms (P2 starts executing after completing P1 and P3)

Average waiting time = (Waiting time for all processes) / No. of Processes

= (Waiting time for (P1+P3+P2)) / 3

= (0+10+17)/3 = 27/3

= 9 milliseconds

Turn Around Time (TAT) for P1 = 10 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P3 = 17 ms (-Do-)

Turn Around Time (TAT) for P2 = 22 ms (-Do-)

Average Turn Around Time= (Turn Around Time for all processes) / No. of Processes

= (Turn Around Time for (P1+P3+P2)) / 3

= (10+17+22)/3 = 49/3

= 16.33 milliseconds

Drawbacks:

Similar to the SJF scheduling algorithm, the non-preemptive priority based
algorithm suffers from 'Starvation': a process whose priority is low may not
get a chance to execute if more and more processes with higher priority enter
the Ready queue before the low priority process starts its execution.

Starvation can be effectively tackled in priority based non-preemptive
scheduling by dynamically raising the priority of the low priority task/process
which is under starvation (waiting in the ready queue for a longer time for
getting the CPU time)

The technique of gradually raising the priority of processes which are waiting
in the Ready queue and facing starvation is known as 'Aging'
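Aging can be sketched as a periodic pass over the Ready queue (the `apply_aging` helper and its dict layout are hypothetical, not from the text). Since 0 is the highest priority in the convention above, "raising" a waiting process's priority means lowering its priority number:

```python
def apply_aging(ready_queue, step=1, highest=0):
    """Raise the priority of every process still waiting in the Ready queue.
    Priorities are clamped at `highest` (0 = highest priority)."""
    for proc in ready_queue:
        proc["priority"] = max(highest, proc["priority"] - step)

ready = [{"pid": "P2", "priority": 3}, {"pid": "P3", "priority": 2}]
apply_aging(ready)              # after one pass: P2 -> 2, P3 -> 1
```

A real scheduler would run such a pass on each scheduling tick, so a starving process eventually reaches a priority high enough to be picked for execution.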

Preemptive scheduling:
Employed in systems which implement the preemptive multitasking model

Every task in the Ready queue gets a chance to execute; when and how often
each process gets a chance to execute (gets the CPU time) is dependent on the
type of preemptive scheduling algorithm used for scheduling the processes

The scheduler can preempt (stop temporarily) the currently executing
task/process and select another task from the Ready queue for execution

When to preempt a task, and which task from the Ready queue is to be picked up
for execution after preempting the current task, is purely dependent on
the scheduling algorithm

The act of moving a 'Running' process into the 'Ready' state by the
scheduler, without the process requesting it, is known as 'Preemption'

Time-based preemption and priority-based preemption are the two important
approaches adopted in preemptive scheduling

Preemptive scheduling - Preemptive SJF Scheduling / Shortest Remaining Time (SRT):

The non-preemptive SJF scheduler sorts the Ready queue only
after the current process completes execution or enters the wait state, whereas
the preemptive SJF scheduler sorts the Ready queue whenever a new
process enters the queue, and checks whether the execution time
of the new process is shorter than the remaining of the total estimated
execution time of the currently executing process

If the execution time of the new process is less, the currently executing
process is preempted and the new process is scheduled for execution

Always compares the execution completion time (i.e. the remaining execution
time) of a new process entering the Ready queue with
the remaining time for completion of the currently executing process, and
schedules the process with the shortest remaining time for execution.

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times of 10, 5, 7 milliseconds respectively enter the Ready queue together.
A new process P4 with estimated completion time of 2 milliseconds enters the Ready queue
after 2 ms. Calculate the waiting time and Turn Around Time (TAT) for each process, and
the average waiting time and average Turn Around Time. Assume all the processes
contain only CPU operation and no I/O operations are involved.

Solution: At the beginning, there are only three processes (P1, P2 and P3) available
in the Ready queue, and the SRT scheduler picks up the process with the shortest
remaining time for execution completion (in this example P2, with remaining time
5 ms) for scheduling. Process P4, with estimated execution completion time 2 ms,
enters the Ready queue after 2 ms of the start of execution of P2. The processes are
re-scheduled for execution in the following order

P2 P4 P2 P3 P1

0 2 4 7 14 24
2 2 3 7 10

The waiting times for all the processes are given as

Waiting Time for P2 = 0 ms + (4 -2) ms = 2ms (P2 starts executing first and is
interrupted by P4 and has to wait till the completion of
P4 to get the next CPU slot)
Waiting Time for P4 = 0 ms (P4 starts executing by preempting P2 since the
execution time for completion of P4 (2ms) is less
than that of the Remaining time for execution
completion of P2 (Here it is 3ms))
Waiting Time for P3 = 7 ms (P3 starts executing after completing P4 and P2)

Waiting Time for P1 = 14 ms (P1 starts executing after completing P4, P2 and P3)
Average waiting time = (Waiting time for all the processes) / No. of Processes
= (Waiting time for (P4+P2+P3+P1)) / 4
= (0 + 2 + 7 + 14)/4 = 23/4
= 5.75 milliseconds
Turn Around Time (TAT) for P2 = 7 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P4 = 2 ms
(Time spent in Ready Queue + Execution Time = (Execution Start Time - Arrival
Time) + Estimated Execution Time = (2 - 2) + 2)

Turn Around Time (TAT) for P3 = 14 ms (Time spent in Ready Queue + Execution Time)
Turn Around Time (TAT) for P1 = 24 ms (Time spent in Ready Queue +
Execution Time)
Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes
= (Turn Around Time for (P2+P4+P3+P1)) / 4
= (7+2+14+24)/4 = 47/4
= 11.75 milliseconds
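The SRT schedule above can be reproduced with a minimal 1 ms tick simulation (the `srt` helper and its data layout are illustrative, not from the text):

```python
def srt(procs):
    """Shortest Remaining Time, simulated in 1 ms ticks.
    procs: {pid: (arrival_ms, burst_ms)} -> {pid: (waiting, tat)} in ms."""
    remaining = {pid: burst for pid, (_, burst) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= clock]
        if not ready:                 # CPU idle until the next arrival
            clock += 1
            continue
        run = min(ready, key=lambda p: remaining[p])  # shortest remaining time
        remaining[run] -= 1           # execute one 1 ms tick
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    return {p: (finish[p] - a - b, finish[p] - a)
            for p, (a, b) in procs.items()}

# P1, P2, P3 arrive at t = 0; P4 (2 ms) arrives at t = 2 and preempts P2.
result = srt({"P1": (0, 10), "P2": (0, 5), "P3": (0, 7), "P4": (2, 2)})
# result["P2"] == (2, 7), result["P4"] == (0, 2),
# result["P3"] == (7, 14), result["P1"] == (14, 24)
```

Averaging the waiting times and TATs gives 23/4 = 5.75 ms and 47/4 = 11.75 ms, as computed above.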

Preemptive scheduling - Round Robin (RR) Scheduling:

Each process in the Ready queue is executed for a pre-defined time slot.

The execution starts with picking up the first process in the Ready queue and
executing it for a pre-defined time slice.

Figure 11: Round Robin scheduling (Processes 1 to 4 execute in turn, with an execution switch between each)

Gandhi Lakavath, EEE Page 35


When the pre-defined time elapses or the process completes (before the pre-
defined time slice), the next process in the queue is picked up for
execution.

This is repeated for all the processes in the queue

After executing the last process in the Ready queue for the pre-defined time
period, the scheduler comes back and picks the first process in the
queue again for execution.

Round Robin scheduling is similar to the FCFS scheduling and the only
difference is that a time slice based preemption is added to switch the
execution between the processes in the queue

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times of 6, 4, 2 milliseconds respectively enter the Ready queue together in
the order P1, P2, P3. Calculate the waiting time and Turn Around Time (TAT) for
each process, and the average waiting time and average Turn Around Time (assuming there
is no I/O waiting for the processes) in the RR algorithm with a time slice of 2 ms.

Solution: The scheduler picks up the first process (P1) from the Ready queue and
executes it for the time slice of 2 ms.
When the time slice expires, P1 is preempted and P2 is scheduled for execution.
The time slice expires after 2 ms of execution of P2. Now P2 is preempted and P3
is picked up for execution. P3 completes its execution within the time slice and the
scheduler picks P1 again for execution for the next time slice. This procedure is
repeated till all the processes are serviced. The order in which the processes are
scheduled for execution is represented as

P1 P2 P3 P1 P2 P1

0 2 4 6 8 10 12
2 2 2 2 2 2

The waiting times for all the processes are given as

Waiting Time for P1 = 0 + (6-2) + (10-8) = 0+4+2= 6ms (P1 starts executing first
and waits for two time slices to get execution back and
again 1 time slice for getting CPU time)
Waiting Time for P2 = (2-0) + (8-4) = 2+4 = 6ms (P2 starts executing after P1
executes for 1 time slice and waits for two time
slices to get the CPU time)

Waiting Time for P3 = (4 -0) = 4ms (P3 starts executing after completing the first
time slices for P1 and P2 and completes its execution in a single time slice.)

Average waiting time = (Waiting time for all the processes) / No. of Processes

= (Waiting time for (P1+P2+P3)) / 3

= (6+6+4)/3 = 16/3

= 5.33 milliseconds

Turn Around Time (TAT) for P1 = 12 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P2 = 10 ms (-Do-)

Turn Around Time (TAT) for P3 = 6 ms (-Do-)

Average Turn Around Time = (Turn Around Time for all the processes) / No. of Processes

= (Turn Around Time for (P1+P2+P3)) / 3

= (12+10+6)/3 = 28/3

= 9.33 milliseconds.
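The slice-by-slice walkthrough above can be condensed into a small simulation (the `round_robin` helper is illustrative, not from the text):

```python
from collections import deque

def round_robin(procs, time_slice=2):
    """Round Robin for processes that all arrive at t = 0.
    procs: list of (pid, burst_ms) in arrival order.
    Returns {pid: (waiting_time, turnaround_time)} in milliseconds."""
    remaining = dict(procs)
    queue = deque(pid for pid, _ in procs)
    clock, finish = 0, {}
    while queue:
        pid = queue.popleft()
        run = min(time_slice, remaining[pid])   # run one slice (or less)
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            finish[pid] = clock                 # process done
        else:
            queue.append(pid)                   # back of the Ready queue
    # all arrivals are at t = 0, so TAT = finish time, waiting = TAT - burst
    return {pid: (finish[pid] - burst, finish[pid]) for pid, burst in procs}

rr = round_robin([("P1", 6), ("P2", 4), ("P3", 2)], time_slice=2)
# rr["P1"] == (6, 12), rr["P2"] == (6, 10), rr["P3"] == (4, 6)
```

Averaging gives 16/3 = 5.33 ms waiting and 28/3 = 9.33 ms TAT, matching the example.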

Preemptive scheduling - Priority based Scheduling:
Same as that of the non-preemptive priority based scheduling except for the
switching of execution between tasks

In preemptive priority based scheduling, any high priority process entering
the Ready queue is immediately scheduled for execution, whereas in the non-
preemptive priority based scheduling, any high priority process entering the Ready queue
is scheduled only after the currently executing process completes its execution
or only when it voluntarily releases the CPU

The priority of a task/process in preemptive priority based scheduling is
indicated in the same way as in the mechanisms adopted for non-preemptive
multitasking.

EXAMPLE: Three processes with process IDs P1, P2, P3 with estimated
completion times of 10, 5, 7 milliseconds and priorities 1, 3, 2 (0 - highest priority, 3 -
lowest priority) respectively enter the Ready queue together. A new process P4 with
estimated completion time of 6 milliseconds and priority 0 enters the Ready queue after 5 ms of the
start of execution of P1. Calculate the waiting time and Turn Around Time (TAT) for each
process, and the average waiting time and average Turn Around Time. Assume all the
processes contain only CPU operation and no I/O operations are involved.

Solution: At the beginning, there are only three processes (P1, P2 and P3) available
in the Ready queue, and the scheduler picks up the process with the highest priority
(in this example P1, with priority 1) for scheduling. Process P4, with estimated
execution completion time 6 ms and priority 0, enters the Ready queue after 5 ms of the
start of execution of P1. The processes are re-scheduled for execution in the
following order

P1 P4 P1 P3 P2

0 5 11 16 23 28
5 6 5 7 5

The waiting times for all the processes are given as

Waiting Time for P1 = 0 + (11-5) = 0+6 =6 ms (P1 starts executing first and gets
Preempted by P4 after 5ms and again gets the CPU time
after completion of P4)

Waiting Time for P4 = 0 ms (P4 starts executing immediately on entering the
Ready queue, by preempting P1)

Waiting Time for P3 = 16 ms (P3 starts executing after completing P1 and P4)

Waiting Time for P2 = 23 ms (P2 starts executing after completing P1, P4 and P3)

Average waiting time = (Waiting time for all the processes) / No. of Processes

= (Waiting time for (P1+P4+P3+P2)) / 4

= (6 + 0 + 16 + 23)/4 = 45/4

= 11.25 milliseconds

Turn Around Time (TAT) for P1 = 16 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P4 = 6 ms (Time spent in Ready Queue + Execution Time
= (Execution Start Time - Arrival Time) + Estimated Execution Time = (5 - 5) + 6 = 0 + 6)

Turn Around Time (TAT) for P3 = 23 ms (Time spent in Ready Queue + Execution Time)

Turn Around Time (TAT) for P2 = 28 ms (Time spent in Ready Queue + Execution Time)

Average Turn Around Time= (Turn Around Time for all the processes) / No. of Processes

= (Turn Around Time for (P2+P4+P3+P1)) / 4

= (16+6+23+28)/4 = 73/4

= 18.25 milliseconds
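A 1 ms tick simulation reproduces this schedule as well (the `preemptive_priority` helper and its data layout are illustrative, not from the text):

```python
def preemptive_priority(procs):
    """Preemptive priority scheduling, simulated in 1 ms ticks.
    procs: {pid: (arrival_ms, burst_ms, priority)}; 0 is the highest
    priority. Returns {pid: (waiting_time, turnaround_time)} in ms."""
    remaining = {pid: b for pid, (_, b, _) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= clock]
        if not ready:               # CPU idle until the next arrival
            clock += 1
            continue
        # lowest priority number = highest priority process
        run = min(ready, key=lambda p: procs[p][2])
        remaining[run] -= 1         # execute one 1 ms tick
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    return {p: (finish[p] - a - b, finish[p] - a)
            for p, (a, b, _) in procs.items()}

# P1, P2, P3 arrive at t = 0; P4 (6 ms, priority 0) arrives at t = 5
# and immediately preempts P1.
r = preemptive_priority({"P1": (0, 10, 1), "P2": (0, 5, 3),
                         "P3": (0, 7, 2), "P4": (5, 6, 0)})
# r["P1"] == (6, 16), r["P4"] == (0, 6),
# r["P3"] == (16, 23), r["P2"] == (23, 28)
```

Averaging gives 45/4 = 11.25 ms waiting and 73/4 = 18.25 ms TAT, as computed above.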

How to choose an RTOS:
The choice of an RTOS for an embedded design is very critical.

A lot of factors need to be analyzed carefully before making a decision on
the selection of an RTOS.

These factors can be either:

1. Functional requirements

2. Non-functional requirements

1. Functional Requirements:

1. Processor support:
It is not necessary that all RTOSes support all kinds of processor
architectures.

It is essential to ensure the processor support by the RTOS

2. Memory Requirements:

The RTOS requires ROM memory for holding the OS files and it is
normally stored in a non-volatile memory like FLASH.

The OS also requires working memory (RAM) for loading the OS services.

Since embedded systems are memory constrained, it is essential to evaluate
the minimal RAM and ROM requirements for the OS under consideration.

3. Real-Time Capabilities:

It is not mandatory that the OS for all embedded systems be Real-Time, and
not all embedded Operating Systems exhibit Real-Time behavior.

The task/process scheduling policies play an important role in the Real-Time
behavior of an OS.

4. Kernel and Interrupt Latency:

The kernel of the OS may disable interrupts while executing certain services
and it may lead to interrupt latency.

For an embedded system whose response requirements are high, this latency
should be minimal.

5. Inter Process Communication (IPC) and Task Synchronization:

The implementation of IPC and synchronization is OS kernel dependent.

6. Modularization Support:

Most of the RTOSes provide a bunch of features.

It is very useful if the OS supports modularization, wherein the
developer can choose the essential modules and re-compile the OS image for
functioning.

7. Support for Networking and Communication:

The OS kernel may provide stack implementation and driver support for a
bunch of communication interfaces and networking.

Ensure that the OS under consideration provides support for all the
interfaces required by the embedded product.

8. Development Language Support:

Applications may be written in languages like JAVA and C++, which require additional runtime components (such as a Java Virtual Machine or a C++ runtime library).

The OS may include these components as built-in components; if not, check
the availability of the same from a third party.

2. Non-Functional Requirements:

1. Custom Developed or Off the Shelf:

It is possible to go for the complete development of an OS suiting the
embedded system needs, or to use an off-the-shelf, readily available OS.

It may be possible to build the required features by customizing an open
source OS.

The decision on which to select is purely dependent on the development cost,
licensing fees for the OS, development time and availability of skilled
resources.

2. Cost:

The total cost for developing or buying the OS and maintaining it in terms of
commercial product and custom build needs to be evaluated before taking a
decision on the selection of OS.

3. Development and Debugging tools Availability:

The availability of development and debugging tools is a critical decision
making factor in the selection of an OS for embedded design.

An OS may be superior in performance, but the availability of tools for
supporting the development may be limited.

4. Ease of Use:

How easy it is to use a commercial RTOS is another important feature that
needs to be considered in the RTOS selection.

5. After Sales:

For a commercial embedded RTOS, after sales in the form of e-mail, on-call
services etc. for bug fixes, critical patch updates and support for production
issues etc. should be analyzed thoroughly.
