
UNIT - II Processes and Scheduling

Definition - Process Relationship - Different states of a Process - Process State transitions,


Process Control Block (PCB), Context switching-Thread: Definition, Various states, Benefits of
threads, Types of threads, Concept of multithreads-Process Scheduling-Foundation and
Scheduling objectives - Types of Schedulers, Scheduling criteria-CPU utilization, Throughput,
Turnaround Time, Waiting Time, Response Time-Scheduling algorithms- Pre-emptive and Non
pre-emptive, FCFS, SJF, RR-Multiprocessor scheduling-Real Time scheduling-RM and EDF.

What is Process Scheduling?

Process Scheduling is an OS task that schedules processes of different states like ready, waiting, and
running.

Process scheduling allows the OS to allocate a time interval of CPU execution to each process. Another
important reason for using a process scheduling system is that it keeps the CPU busy all the time. This
allows you to get the minimum response time for programs.

Process Scheduling Queues

Process Scheduling Queues help you to maintain a distinct queue for each process state, holding the
PCBs of the processes. All processes in the same execution state are placed in the same queue. Therefore,
whenever the state of a process changes, its PCB is unlinked from its current queue and linked to the
queue of its new state.

Three types of operating system queues are:

1. Job queue – It stores all the processes in the system.

2. Ready queue – It holds every process residing in main memory that is ready and waiting to execute.

3. Device queues – These hold the processes that are blocked waiting for an I/O device; there is typically
one queue per device.
In the above-given Diagram,

 Rectangle represents a queue.

 Circle denotes the resource

 Arrow indicates the flow of the process.

1. Every new process is first put in the ready queue, where it waits until it is selected for
execution (dispatched).

2. One of the processes is allocated the CPU and starts executing.

3. While executing, the process may issue an I/O request, in which case it is placed in an I/O
(device) queue until the request completes.

4. The process may create a new subprocess and wait for that subprocess to terminate.

5. The process may be removed forcibly from the CPU as a result of an interrupt; once the
interrupt is handled, it is put back into the ready queue.

Two State Process Model

Two-state process models are:

 Running

 Not Running

Running

In the operating system, whenever a new process is created, it enters the system; the process that the
CPU is currently executing is said to be running.

Not Running

The processes that are not running are kept in a queue, waiting for their turn to execute. Each
entry in the queue is a pointer to a particular process.

Scheduling Objectives

Here are the important objectives of process scheduling:

 Maximize the number of interactive users within acceptable response times.

 Achieve a balance between response and utilization.

 Avoid indefinite postponement and enforce priorities.

 It should also give preference to the processes holding key resources.

Type of Process Schedulers


A scheduler is a type of system software that allows you to handle process scheduling.

There are mainly three types of Process Schedulers:

1. Long Term

2. Short Term

3. Medium Term

Long Term Scheduler

The long-term scheduler is also known as the job scheduler. It selects processes from the job queue and
loads them into memory for execution. It also regulates the degree of multiprogramming.

However, the main goal of this type of scheduler is to offer a balanced mix of jobs, such as CPU-bound
and I/O-bound jobs, which keeps multiprogramming manageable.

Medium Term Scheduler

Medium-term scheduling is an important part of swapping. It enables you to handle the swapped-out
processes.

A running process can become suspended if it makes an I/O request. A suspended process cannot make
any progress towards completion. In order to remove the process from memory and make space for
other processes, the suspended process is moved to secondary storage.

Short Term Scheduler

Short-term scheduling is also known as CPU scheduling. The main goal of this scheduler is to boost
system performance according to the chosen criteria. It selects one process from the group of processes
that are ready to execute and allocates the CPU to it. The dispatcher then gives control of the CPU to the
process selected by the short-term scheduler.

Difference between Schedulers

Long-Term Vs. Short Term Vs. Medium-Term

Long-Term Scheduler:

 It is also known as the job scheduler.

 It is either absent or minimal in a time-sharing system.

 Its speed is less compared to the short-term scheduler.

 It selects processes from the job pool and loads them into memory for execution.

 It offers full control.

Short-Term Scheduler:

 It is also known as the CPU scheduler.

 It is insignificant in a time-sharing system.

 Its speed is the fastest compared to the long-term and medium-term schedulers.

 It selects only from among the processes that are in the ready state.

 It offers less control.

Medium-Term Scheduler:

 It is also called the swapping scheduler.

 It is an element of time-sharing systems.

 It offers medium speed.

 It swaps processes out of memory and sends them back into memory.

 It reduces the degree of multiprogramming.

An operating system uses two types of process scheduling: preemptive and non-preemptive.

1. Preemptive scheduling:
In a preemptive scheduling policy, a low-priority process has to suspend its execution if a high-priority
process is waiting in the same queue for its execution.

2. Non-preemptive scheduling:


In a non-preemptive scheduling policy, processes are executed on a first-come, first-served basis, which
means the next process is executed only when the currently running process finishes its execution.

The operating system performs the task of scheduling processes using the following algorithms:

1) First come first serve (FCFS)


In this scheduling algorithm, the process that enters the queue first is processed first.

2) Shortest job first (SJF)


In this scheduling algorithm, the process that requires the shortest CPU time to execute is processed first.

3) Shortest Remaining Time First (SRTF) scheduling


This scheduling Algorithm is the preemptive version of the SJF scheduling algorithm. In this, the process
which is left with the least processing time is executed first.

4) Longest Job First (LJF)


In this type of scheduling algorithm, the process with the maximum time required to execute is
scheduled first. This type of scheduling is not widely used because it is not a very effective way of
scheduling, as the average turnaround time and the average waiting time are at their maximum in this case.

5) Longest Remaining Time First (LRTF)


As SRTF is to SJF, LRTF is the preemptive version of the LJF scheduling algorithm.

6) Priority scheduling
In this scheduling algorithm, a priority is assigned to every process, and the process with the highest
priority is executed first. Priority assignment may be based on internal factors such as CPU and memory
requirements, or on external factors such as the user's choice. The priority scheduling algorithm supports
both preemptive and non-preemptive scheduling policies.

7) Round Robin (RR) scheduling


In this algorithm, each process is allocated the CPU for a specific time period called the time slice (or
time quantum), which is normally 10 to 100 milliseconds. If the process completes its execution within
this time slice, it is removed from the queue; otherwise it is preempted and placed at the back of the
ready queue to wait for another time slice.
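As a concrete illustration of FCFS, the short C sketch below computes waiting and turnaround times for three processes that all arrive at time 0 (the burst times 24, 3, and 3 are illustrative values, not taken from the text): each process waits for the sum of the bursts of the processes served before it.

/* FCFS sketch: serve processes in arrival order and accumulate
 * waiting and turnaround times. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              /* CPU burst times, in arrival order P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_turnaround = 0;

    for (int i = 0; i < n; i++) {
        int turnaround = wait + burst[i];  /* completion time, since all arrive at t = 0 */
        printf("P%d: waiting = %2d, turnaround = %2d\n", i + 1, wait, turnaround);
        total_wait += wait;
        total_turnaround += turnaround;
        wait += burst[i];                  /* the next process also waits for this burst */
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           (double)total_wait / n, (double)total_turnaround / n);
    return 0;
}

With this order the average waiting and turnaround times are 17 and 27; serving the two short bursts first (as SJF would) drops the average waiting time to 3, which is why FCFS alone is rarely ideal.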

What is Context switch?

It is a mechanism to store and restore the state (context) of the CPU in the PCB, so that process execution
can be resumed from the same point at a later time. Context switching is essential for a multitasking OS.

Summary:

 Process scheduling is an OS task that schedules the processes of different states like ready,
waiting, and running.

 Two-state process models are 1) Running and 2) Not Running.

 Process scheduling maximizes the number of interactive users, within acceptable response
times.

 A scheduler is a type of system software that allows you to handle process scheduling.

 Three types of the scheduler are 1) Long term 2) Short term 3) Medium-term

 The long-term scheduler selects processes from the queue and loads them into memory for
execution.

 The medium-term scheduler enables you to handle the swapped out-processes.

 The main goal of short term scheduler is to boost the system performance according to set
criteria

 Long term is also known as a job scheduler, whereas the short term is also known as CPU
scheduler, and the medium-term is also called swapping scheduler.

What is a Process?

A process is a program in execution. A process is not the same as the program code; it is much more than
that. A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity.
Attributes held by a process include hardware state, memory, CPU, etc.

Process memory is divided into four sections for efficient working :

 The Text section is made up of the compiled program code, read in from non-volatile storage
when the program is launched.

 The Data section is made up of the global and static variables, allocated and initialized prior to
executing main.
 The Heap is used for the dynamic memory allocation, and is managed via calls to new, delete,
malloc, free, etc.

 The Stack is used for local variables. Space on the stack is reserved for local variables when they
are declared.
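The following small C program makes these four sections concrete; the variable names are illustrative,
and the exact addresses printed will differ from run to run:

/* Where each kind of object lives in process memory. */
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;                          /* data section: initialized globals/statics */

int main(void) {                              /* main's machine code sits in the text section */
    int local_var = 7;                        /* stack: reserved when declared, freed on return */
    int *heap_var = malloc(sizeof *heap_var); /* heap: dynamic allocation via malloc */
    *heap_var = 99;

    printf("text  (code)  : %p\n", (void *)main);
    printf("data  (global): %p\n", (void *)&global_var);
    printf("heap  (malloc): %p\n", (void *)heap_var);
    printf("stack (local) : %p\n", (void *)&local_var);

    free(heap_var);                           /* heap space is released explicitly */
    return 0;
}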

Different Process States

Processes in the operating system can be in any of the following states:

 NEW- The process is being created.

 READY- The process is waiting to be assigned to a processor.

 RUNNING- Instructions are being executed.

 WAITING- The process is waiting for some event to occur(such as an I/O completion or reception
of a signal).

 TERMINATED- The process has finished execution.
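These states can be recorded in code with a simple enumeration. The sketch below is only illustrative
(the names are not from any real kernel); it prints one typical lifecycle in which a process is admitted,
dispatched, blocks for I/O, becomes ready again, and finally terminates:

/* A sketch of the five process states and one possible lifecycle. */
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } process_state;

static const char *state_name(process_state s) {
    static const char *names[] = { "NEW", "READY", "RUNNING", "WAITING", "TERMINATED" };
    return names[s];
}

int main(void) {
    process_state history[] = { NEW, READY, RUNNING, WAITING, READY, RUNNING, TERMINATED };
    size_t steps = sizeof history / sizeof history[0];

    for (size_t i = 0; i < steps; i++)
        printf("%s%s", state_name(history[i]), i + 1 < steps ? " -> " : "\n");
    return 0;
}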

Process Control Block

There is a Process Control Block for each process, enclosing all the information about the process. It is a
data structure, which contains the following:

 Process State: It can be running, waiting etc.

 Process ID and the parent process ID.

 CPU registers and Program Counter. Program Counter holds the address of the next instruction
to be executed for that process.

 CPU Scheduling information: Such as priority information and pointers to scheduling queues.

 Memory Management information: For example, page tables or segment tables.

 Accounting information: The User and kernel CPU time consumed, account numbers, limits, etc.

 I/O Status information: Devices allocated, open file tables, etc.
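As a rough picture of how these fields might be grouped, here is a simplified PCB declaration in C; the
field names, sizes, and types are purely illustrative and are not taken from any particular kernel:

/* A simplified sketch of a Process Control Block. */
#include <stdint.h>

#define MAX_OPEN_FILES 16

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

struct pcb {
    int         pid;                        /* process ID                                */
    int         ppid;                       /* parent process ID                         */
    proc_state  state;                      /* running, waiting, etc.                    */
    uint64_t    program_counter;            /* address of the next instruction           */
    uint64_t    registers[16];              /* saved CPU registers                       */
    int         priority;                   /* CPU-scheduling information                */
    struct pcb *next_in_queue;              /* link used by the ready and device queues  */
    void       *page_table;                 /* memory-management information             */
    uint64_t    user_time, kernel_time;     /* accounting information                    */
    int         open_files[MAX_OPEN_FILES]; /* I/O status information                    */
};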


What is Process Control Block (PCB)?

Process Control Block is a data structure that contains information of the process related to it. The
process control block is also known as a task control block, entry of the process table, etc.

It is very important for process management as the data structuring for processes is done in terms of the
PCB. It also defines the current state of the operating system.

Structure of the Process Control Block

The process control block stores many data items that are needed for efficient process management.
Some of these data items are explained with the help of the given diagram −
The following are the data items −

Process State

This specifies the process state i.e. new, ready, running, waiting or terminated.

Process Number

This shows the number of the particular process.

Program Counter

This contains the address of the next instruction that needs to be executed in the process.

Registers

This specifies the registers that are used by the process. They may include accumulators, index registers,
stack pointers, general purpose registers etc.

List of Open Files


These are the different files that are associated with the process

CPU Scheduling Information

The process priority, pointers to scheduling queues etc. is the CPU scheduling information that is
contained in the PCB. This may also include any other scheduling parameters.

Memory Management Information

The memory management information includes the page tables or the segment tables depending on the
memory system used. It also contains the value of the base registers, limit registers etc.

I/O Status Information

This information includes the list of I/O devices used by the process, the list of files etc.

Accounting information

The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the PCB
accounting information.

Location of the Process Control Block

The process control block is kept in a memory area that is protected from the normal user access. This is
done because it contains important process information. Some of the operating systems place the PCB
at the beginning of the kernel stack for the process as it is a safe location.

States of a Process in Operating Systems




States of a process are as following:
 New (Create) – In this step, the process is about to be created but not yet created, it is the
program which is present in secondary memory that will be picked up by OS to create the
process.

 Ready – (New -> Ready) After the creation of a process, the process enters the ready state,
i.e. the process is loaded into main memory. The process here is ready to run and is waiting
to get CPU time for its execution. Processes that are ready for execution by the CPU are
maintained in a queue of ready processes.

 Run – The process is chosen by CPU for execution and the instructions within the process are
executed by any one of the available CPU cores.

 Blocked or wait – Whenever the process requests access to I/O or needs input from the user or
needs access to a critical region(the lock for which is already acquired) it enters the blocked or
wait state. The process continues to wait in the main memory and does not require CPU. Once
the I/O operation is completed the process goes to the ready state.

 Terminated or completed – Process is killed as well as PCB is deleted.

 Suspend ready – Processes that were initially in the ready state but were swapped out of main
memory (refer to the Virtual Memory topic) and placed in external storage by the scheduler are
said to be in the suspend ready state. A process transitions back to the ready state whenever it
is brought back into main memory.

 Suspend wait or suspend blocked – Similar to suspend ready, but applies to processes that were
performing an I/O operation and were moved to secondary memory due to a lack of main memory.
When the work is finished, the process may go to the suspend ready state.
CPU and IO Bound Processes:
If the process is intensive in terms of CPU operations then it is called CPU bound process. Similarly, If the
process is intensive in terms of I/O operations then it is called IO bound process.

Types of schedulers:

1. Long term – performance – Makes a decision about how many processes should be made to
stay in the ready state, this decides the degree of multiprogramming. Once a decision is taken it
lasts for a long time hence called long term scheduler.

2. Short term – Context switching time – The short-term scheduler decides which process is to be
executed next and then calls the dispatcher. The dispatcher is the software that moves a process
from ready to running and vice versa; in other words, it performs the context switching.

3. Medium term – Swapping time – Suspension decision is taken by medium term scheduler.
Medium term scheduler is used for swapping that is moving the process from main memory to
secondary and vice versa.

Multiprogramming – We have many processes ready to run. There are two types of multiprogramming:

1. Pre-emption – The process is forcefully removed from the CPU. Pre-emption is also called time
sharing or multitasking.

2. Non pre-emption – Processes are not removed until they complete the execution.

Degree of multiprogramming –
The maximum number of processes that can reside in the ready state determines the degree of
multiprogramming; e.g., if the degree of multiprogramming = 100, then at most 100 processes can reside
in the ready state.

What is a Process?

A process is the execution of a program that performs the actions specified in that program. It can be
defined as an execution unit where a program runs. The OS helps you to create, schedule, and terminate
the processes that the CPU executes. A process created by the main process is called a child process.

Process operations can be easily controlled with the help of PCB(Process Control Block). You can
consider it as the brain of the process, which contains all the crucial information related to processing
like process id, priority, state, CPU registers, etc.

What is Process Management?

Process management involves various tasks like creation, scheduling, and termination of processes, and
handling deadlocks. A process is a program under execution, and processes are an important part of
modern-day operating systems. The OS must allocate resources that enable processes to share and
exchange information. It must also protect the resources of each process from other processes and allow
synchronization among processes.

It is the job of the OS to manage all the running processes of the system. It handles operations by
performing tasks like process scheduling and resource allocation.

Process Architecture


Here is an architecture diagram of the process:

 Stack: The Stack stores temporary data like function parameters, returns addresses, and local
variables.

 Heap: Memory that is dynamically allocated to the process during its run time.

 Data: It contains the global and static variables.

 Text: Text Section includes the current activity, which is represented by the value of the
Program Counter.

Process Control Blocks

PCB is the full form of Process Control Block. It is a data structure maintained by the operating system
for every process. The PCB is identified by an integer process ID (PID). It stores all the information
required to keep track of a running process.

It is also responsible for storing the contents of the processor registers. These are saved when the process
leaves the running state and restored when it returns to it. The information in the PCB is updated by the
OS as soon as the process makes a state transition.

Process States
Process States Diagram

A process state is a condition of the process at a specific instant of time. It also defines the current
position of the process.

There are mainly seven stages of a process which are:

 New: A new process is created when a program is loaded from secondary memory (hard disk)
into primary memory (RAM).

 Ready: In a ready state, the process should be loaded into the primary memory, which is ready
for execution.

 Waiting: The process is waiting for the allocation of CPU time and other resources for execution.

 Executing: The process is in the execution state.

 Blocked: It is a time interval when a process is waiting for an event like I/O operations to
complete.

 Suspended: Suspended state defines the time when a process is ready for execution but has not
been placed in the ready queue by OS.

 Terminated: Terminated state specifies the time when a process is terminated

After the process completes execution, all the resources it used are released and its memory becomes free.

Process Control Block(PCB)

Every process is represented in the operating system by a process control block, which is also called a
task control block.

Here, are important components of PCB


Process Control Block(PCB)

 Process state: A process can be new, ready, running, waiting, etc.

 Program counter: The program counter lets you know the address of the next instruction, which
should be executed for that process.

 CPU registers: This component includes accumulators, index and general-purpose registers, and
information of condition code.

 CPU scheduling information: This component includes a process priority, pointers for scheduling
queues, and various other scheduling parameters.

 Accounting and business information: It includes the amount of CPU time and real time used,
time limits, job or process numbers, etc.

 Memory-management information: This information includes the value of the base and limit
registers, the page, or segment tables. This depends on the memory system, which is used by
the operating system.

 I/O status information: This block includes a list of open files, the list of I/O devices that are
allocated to the process, etc.

Summary:

 A process is defined as the execution of a program that performs the actions specified in that
program.

 Process management involves various tasks like creation, scheduling, termination of processes,
and a dead lock.

 The important elements of Process architecture are 1)Stack 2) Heap 3) Data, and 4) Text

 The PCB is a full form of Process Control Block. It is a data structure that is maintained by the
Operating System for every process

 A process state is a condition of the process at a specific instant of time.

 Every process is represented in the operating system by a process control block, which is also
called a task control block.
3.1 Process Concept

 A process is an instance of a program in execution.

 Batch systems work in terms of "jobs". Many modern process concepts are still expressed in
terms of jobs, ( e.g. job scheduling ), and the two terms are often used interchangeably.

3.1.1 The Process

 Process memory is divided into four sections as shown in Figure 3.1 below:

o The text section comprises the compiled program code, read in from non-volatile
storage when the program is launched.

o The data section stores global and static variables, allocated and initialized prior to
executing main.

o The heap is used for dynamic memory allocation, and is managed via calls to new,
delete, malloc, free, etc.

o The stack is used for local variables. Space on the stack is reserved for local variables
when they are declared ( at function entrance or elsewhere, depending on the
language ), and the space is freed up when the variables go out of scope. Note that the
stack is also used for function return values, and the exact mechanisms of stack
management may be language specific.

o Note that the stack and the heap start at opposite ends of the process's free space and
grow towards each other. If they should ever meet, then either a stack overflow error
will occur, or else a call to new or malloc will fail due to insufficient memory available.

 When processes are swapped out of memory and later restored, additional information must
also be stored and restored. Key among them are the program counter and the value of all
program registers.
Figure 3.1 - A process in memory

3.1.2 Process State

 Processes may be in one of 5 states, as shown in Figure 3.2 below.

o New - The process is in the stage of being created.

o Ready - The process has all the resources available that it needs to run, but the CPU is
not currently working on this process's instructions.

o Running - The CPU is working on this process's instructions.

o Waiting - The process cannot run at the moment, because it is waiting for some
resource to become available or for some event to occur. For example the process may
be waiting for keyboard input, disk access request, inter-process messages, a timer to go
off, or a child process to finish.

o Terminated - The process has completed.

 The load average reported by the "w" command indicates the average number of processes in
the "Ready" state over the last 1, 5, and 15 minutes, i.e. processes that have everything they
need to run but cannot because the CPU is busy doing something else.

 Some systems may have other states besides the ones listed here.
Figure 3.2 - Diagram of process state

3.1.3 Process Control Block

For each process there is a Process Control Block, PCB, which stores the following ( types of ) process-
specific information, as illustrated in Figure 3.1. ( Specific details may vary from system to system. )

 Process State - Running, waiting, etc., as discussed above.

 Process ID, and parent process ID.

 CPU registers and Program Counter - These need to be saved and restored when swapping
processes in and out of the CPU.

 CPU-Scheduling information - Such as priority information and pointers to scheduling queues.

 Memory-Management information - E.g. page tables or segment tables.

 Accounting information - user and kernel CPU time consumed, account numbers, limits, etc.

 I/O Status information - Devices allocated, open file tables, etc.

Figure 3.3 - Process control block ( PCB )


Figure 3.4 - Diagram showing CPU switch from process to process

3.1.4 Threads

 Modern systems allow a single process to have multiple threads of execution, which execute
concurrently. Threads are covered extensively in the next chapter.

3.2 Process Scheduling

 The two main objectives of the process scheduling system are to keep the CPU busy at all times
and to deliver "acceptable" response times for all programs, particularly for interactive ones.

 The process scheduler must meet these objectives by implementing suitable policies for
swapping processes in and out of the CPU.

 ( Note that these objectives can be conflicting. In particular, every time the system steps in to
swap processes it takes up time on the CPU to do so, which is thereby "lost" from doing any
useful productive work. )

3.2.1 Scheduling Queues

 All processes are stored in the job queue.

 Processes in the Ready state are placed in the ready queue.

 Processes waiting for a device to become available or to deliver data are placed in device
queues. There is generally a separate device queue for each device.

 Other queues may also be created and used as needed.


Figure 3.5 - The ready queue and various I/O device queues

3.2.2 Schedulers

 A long-term scheduler is typical of a batch system or a very heavily loaded system. It runs
infrequently ( such as when one process ends and another is selected to be loaded in from disk in its
place ), and can afford to take the time to implement intelligent and advanced scheduling
algorithms.

 The short-term scheduler, or CPU Scheduler, runs very frequently, on the order of 100
milliseconds, and must very quickly swap one process out of the CPU and swap in another one.

 Some systems also employ a medium-term scheduler. When system loads get high, this
scheduler will swap one or more processes out of the ready queue system for a few seconds, in
order to allow smaller faster jobs to finish up quickly and clear the system. See the differences in
Figures 3.7 and 3.8 below.

 An efficient scheduling system will select a good process mix of CPU-bound processes and I/O
bound processes.
Figure 3.6 - Queueing-diagram representation of process scheduling

Figure 3.7 - Addition of a medium-term scheduling to the queueing diagram

3.2.3 Context Switch

 Whenever an interrupt arrives, the CPU must do a state-save of the currently running process,
then switch into kernel mode to handle the interrupt, and then do a state-restore of the
interrupted process.

 Similarly, a context switch occurs when the time slice for one process has expired and a new
process is to be loaded from the ready queue. This will be instigated by a timer interrupt, which
will then cause the current process's state to be saved and the new process's state to be
restored.

 Saving and restoring states involves saving and restoring all of the registers and program
counter(s), as well as the process control blocks described above.

 Context switching happens VERY VERY frequently, and the overhead of doing the switching is
just lost CPU time, so context switches ( state saves & restores ) need to be as fast as possible.
Some hardware has special provisions for speeding this up, such as a single machine instruction
for saving or restoring all registers at once.

 Some Sun hardware actually has multiple sets of registers, so the context switching can be
speeded up by merely switching which set of registers are currently in use. Obviously there is a
limit as to how many processes can be switched between in this manner, making it attractive to
implement the medium-term scheduler to swap some processes out as shown in Figure 3.8
above.
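The save/restore idea can be played with in user space through the POSIX <ucontext.h> calls. The sketch
below switches between two execution contexts; it is only an analogy for a real kernel context switch,
which also saves privileged state into the PCB:

/* Saving one context and restoring another with getcontext/makecontext/swapcontext. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];

static void task(void) {
    printf("task: running on its own stack\n");
    /* Returning resumes main_ctx because of uc_link below. */
}

int main(void) {
    getcontext(&task_ctx);                    /* capture a starting context        */
    task_ctx.uc_stack.ss_sp = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;             /* where to resume when task returns */
    makecontext(&task_ctx, task, 0);

    printf("main: saving my registers, loading task's\n");
    swapcontext(&main_ctx, &task_ctx);        /* the "state save" + "state restore" */
    printf("main: my context has been restored\n");
    return 0;
}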

In the operating system, a process is something that is currently under execution, so an active program
can be called a process. For example, when you want to search for something on the web, you start a
browser; that browser is a process. Another example of a process is starting your music player to listen
to some cool music of your choice.

A Process has various attributes associated with it. Some of the attributes of a Process are:

 Process Id: Every process will be given an id called Process Id to uniquely identify that process
from the other processes.

 Process state: Each and every process has some states associated with it at a particular instant
of time. This is denoted by process state. It can be ready, waiting, running, etc.

 CPU scheduling information: Each process is executed using some process scheduling
algorithm like FCFS, Round Robin, SJF, etc.

 I/O information: Each process needs some I/O devices for its execution, so the information
about the devices allocated and the devices needed is crucial.

States of a Process

During the execution of a process, it undergoes a number of states. So, in this section of the blog, we will
learn various states of a process during its lifecycle.

 New State: This is the state when the process is just created. It is the first state of a process.

 Ready State: After the creation of the process, when the process is ready for its execution then
it goes in the ready state. In a ready state, the process is ready for its execution by the CPU but it
is waiting for its turn to come. There can be more than one process in the ready state.

 Ready Suspended State: There can be more than one process in the ready state but due to
memory constraint, if the memory is full then some process from the ready state gets placed in
the ready suspended state.

 Running State: Amongst the process present in the ready state, the CPU chooses one process
amongst them by using some CPU scheduling algorithm. The process will now be executed by
the CPU and it is in the running state.

 Waiting or Blocked State: During the execution of the process, the process might require some
I/O operation like writing on file or some more priority process might come. In these situations,
the running process will have to go into the waiting or blocked state and the other process will
come for its execution. So, the process is waiting for something in the waiting state.
 Waiting Suspended State: When the waiting queue of the system becomes full then some of
the processes will be sent to the waiting suspended state.

 Terminated State: After the complete execution of the process, the process comes into the
terminated state and the information related to this process is deleted.

The following image will show the flow of a process from the new state to the terminated state.

In the above image, you can see that when a process is created then it goes into the new state. After the
new state, it goes into the ready state. If the ready queue is full, then the process will be shifted to the
ready suspended state. From the ready state, the CPU will choose the process and the process will be
executed by the CPU and will be in the running state. During the execution of the process, the process
may need some I/O operation to perform. So, it has to go into the waiting state and if the waiting state
is full then it will be sent to the waiting suspended state. From the waiting state, the process can go to
the ready state after performing I/O operations. From the waiting suspended state, the process can go
to waiting or ready suspended state. At last, after the complete execution of the process, the process
will go to the terminated state and the information of the process will be deleted.

Thread in Operating System


What is a Thread?
A thread is a path of execution within a process. A process can contain multiple threads.
Why Multithreading?
A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a process
into multiple threads. For example, in a browser, multiple tabs can be different threads. MS Word uses
multiple threads: one thread to format the text, another thread to process inputs, etc. More advantages
of multithreading are discussed below
Process vs Thread?
The primary difference is that threads within the same process run in a shared memory space, while
processes run in separate memory spaces.
Threads are not independent of one another like processes are, and as a result threads share with other
threads their code section, data section, and OS resources (like open files and signals). But, like process,
a thread has its own program counter (PC), register set, and stack space.
Advantages of Thread over Process
1. Responsiveness: If the process is divided into multiple threads, then when one thread completes its
execution, its output can be returned immediately.

2. Faster context switch: Context switch time between threads is lower compared to process context
switch. Process context switching requires more overhead from the CPU.

3. Effective utilization of multiprocessor system: If we have multiple threads in a single process, then we
can schedule multiple threads on multiple processors. This makes process execution faster.

4. Resource sharing: Resources like code, data, and files can be shared among all threads within a
process.
Note: stack and registers can’t be shared among the threads. Each thread has its own stack and
registers.

5. Communication: Communication between multiple threads is easier, as the threads share a common
address space, while processes must use a specific inter-process communication technique to
communicate with each other.

6. Enhanced throughput of the system: If a process is divided into multiple threads, and each thread
function is considered as one job, then the number of jobs completed per unit of time is increased, thus
increasing the throughput of the system.

Types of Threads
There are two types of threads.
User Level Thread
Kernel Level Thread

Threads and its types in Operating System

A thread is a single sequential stream of execution within a process. Threads have some of the same
properties as processes, so they are called lightweight processes. Threads are executed one after another
but give the illusion that they are executing in parallel. Each thread has different states. Each thread has

1. A program counter

2. A register set

3. A stack space

Threads are not independent of each other as they share the code, data, OS resources etc.

Similarity between Threads and Processes –

 Only one thread or process is active at a time

 Within a process, both execute sequentially

 Both can create children

Differences between Threads and Processes –


 Threads are not independent, processes are.

 Threads are designed to assist each other, processes may or may not do it

Types of Threads:

1. User Level thread (ULT) –


User-level threads are implemented in a user-level library; they are not created using system calls.
Thread switching does not need to call the OS or cause an interrupt to the kernel. The kernel does not
know about user-level threads and manages them as if they were single-threaded processes.

Advantages of ULT –

 Can be implemented on an OS that doesn't support multithreading.

 Simple representation, since a thread has only a program counter, register set, and stack space.

 Simple to create, since no kernel intervention is needed.

 Thread switching is fast since no OS calls need to be made.

Disadvantages of ULT –

 No or less co-ordination among the threads and Kernel.

 If one thread causes a page fault, the entire process blocks.

2. Kernel Level Thread (KLT) –


The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel
itself has a thread table (a master one) that keeps track of all the threads in the system. In addition,
the kernel also maintains the traditional process table to keep track of processes. The OS kernel
provides system calls to create and manage threads.

Advantages of KLT –

 Since the kernel has full knowledge of the threads in the system, the scheduler may decide
to give more time to processes having a large number of threads.

 Good for applications that frequently block.

Disadvantages of KLT –

 Slow and inefficient.

 It requires thread control block so it is an overhead.

Summary:

1. For ULTs, the process itself keeps track of its threads using a thread table.

2. For KLTs, the kernel maintains a thread table (TCBs) as well as the process table (PCBs).
What are Threads?

A thread is an execution unit that consists of its own program counter, stack, and set of registers.
Threads are also known as lightweight processes. Threads are a popular way to improve application
performance through parallelism. The CPU switches rapidly back and forth among the threads, giving
the illusion that the threads are running in parallel.

As each thread has its own independent resources for execution, multiple parts of a process can be
executed in parallel by increasing the number of threads.

Types of Thread

There are two types of threads:

1. User Threads

2. Kernel Threads

User threads are implemented above the kernel, without kernel support. These are the threads that
application programmers use in their programs.

Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel level
threads, allowing the kernel to perform multiple simultaneous tasks and/or to service multiple kernel
system calls simultaneously.

Multithreading Models

The user threads must be mapped to kernel threads, by one of the following strategies:

 Many to One Model


 One to One Model

 Many to Many Model

Many to One Model

 In the many to one model, many user-level threads are all mapped onto a single kernel thread.

 Thread management is handled by the thread library in user space, which is efficient in nature.

One to One Model

 The one to one model creates a separate kernel thread to handle each and every user thread.

 Most implementations of this model place a limit on how many threads can be created.

 Linux and Windows from 95 to XP implement the one-to-one model for threads.
Many to Many Model

 The many to many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads, combining the best features of the one-to-one and many-to-one
models.

 Users can create any number of threads.

 A blocking kernel system call made by one thread does not block the entire process.

 Processes can be split across multiple processors.


What are Thread Libraries?

Thread libraries provide programmers with API for creation and management of threads.

Thread libraries may be implemented either in user space or in kernel space. The user space involves API
functions implemented solely within the user space, with no kernel support. The kernel space involves
system calls, and requires a kernel with thread library support.

Three types of Thread

1. POSIX Pthreads may be provided as either a user or kernel library, as an extension to the POSIX
standard.

2. Win32 threads are provided as a kernel-level library on Windows systems.

3. Java threads: Since Java generally runs on a Java Virtual Machine, the implementation of
threads is based upon whatever OS and hardware the JVM is running on, i.e. either Pthreads or
Win32 threads depending on the system.

Benefits of Multithreading

1. Responsiveness

2. Resource sharing, hence allowing better utilization of resources.

3. Economy. Creating and managing threads becomes easier.

4. Scalability. One thread runs on one CPU. In Multithreaded processes, threads can be distributed
over a series of processors to scale.

5. Context Switching is smooth. Context switching refers to the procedure followed by CPU to
change from one task to another.

Multithreading Issues

Below we have mentioned a few issues related to multithreading. Well, it's an old saying, All good
things, come at a price.

Thread Cancellation

Thread cancellation means terminating a thread before it has finished working. There can be two
approaches for this, one is Asynchronous cancellation, which terminates the target thread immediately.
The other is Deferred cancellation, which allows the target thread to periodically check whether it should
be cancelled.

Signal Handling
Signals are used in UNIX systems to notify a process that a particular event has occurred. Now, when a
multithreaded process receives a signal, to which thread should it be delivered? It can be delivered to all
threads, or to a single thread.

fork() System Call

fork() is a system call executed in the kernel through which a process creates a copy of itself. The
problem in a multithreaded process is: if one thread forks, will the entire process be copied, or not?
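A minimal single-threaded fork() example in C is shown below (for the multithreaded case, POSIX
specifies that only the thread that called fork() is replicated in the child):

/* fork(): the parent duplicates itself; both copies continue from fork()'s return. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                     /* create a copy of this process */

    if (pid < 0) {
        perror("fork");                     /* creation failed               */
        return 1;
    } else if (pid == 0) {
        printf("child:  pid=%d\n", getpid());        /* runs only in the child */
    } else {
        waitpid(pid, NULL, 0);              /* parent waits for the child    */
        printf("parent: pid=%d, child was %d\n", getpid(), pid);
    }
    return 0;
}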

Security Issues

Yes, there can be security issues because of extensive sharing of resources between multiple threads.

There are many other issues that you might face in a multithreaded process, but there are appropriate
solutions available for them. Pointing out some issues here was just to study both sides of the coin.

Introduction to Threads in Operating System

In this article, we will discuss threads in operating systems. A thread is the execution of process code,
following the instructions of the process one after the other. Threads control the process, and each
thread represents a separate flow of control through the code. Tasks can be run in parallel, and a thread
is a subset of the process. We can call a thread a lightweight process, or LWP. Different threads do
different tasks, and each flow of control is separate. Threads can make an application faster by doing
different things at the same time. Threads in the same process share the address space and memory,
which are easily accessible whenever needed.

What are Threads?

 The program is divided into different tasks using threads. These tasks give the illusion of running
in parallel but are carried out one after the other. Each thread has a program counter, registers, and
a stack: the program counter keeps track of the next instruction to execute, the registers hold the
current working variables, and the stack stores the execution history.

 Information such as code, data, and open files is shared between threads. When changes are made
in one thread, the alterations are seen by the other threads as well. Threads in the same process
share the memory space and address space.

 Every thread has a parent process, and no thread runs without a process. Servers, be they network
or web servers, run using threads.

 When processors share a common memory, threads help in the parallel execution of processes and
programs. Threads need fewer resources and always depend on their process. Threads share the
work of the process so that it proceeds smoothly; if one thread is blocked, another thread can take
up the task.

 A single thread in the process can read, write, or update the data of another thread. Also, multiple
threads within one process require less memory space than the same threads spread across different
processes.
 When a thread takes up the task of another thread or updates shared information, the OS does not
need to be informed, and the tasks are still carried out accurately.

Types of Threads in Operating System

Threads are classified based on the way they are managed. There are two types of threads.

1. User Threads

These threads are implemented and managed in a user-level library; they cannot be created using system
calls. During thread switching the OS is not called, so there is no interruption of the kernel. The kernel
treats these threads as single-threaded processes. They can be implemented even on systems whose
kernel does not support multithreading. They are represented simply by a program counter, a register
set, and a stack. Creating a user thread does not require any kernel intervention, and switching is fast
because no OS calls are made. However, there is no coordination between the threads and the kernel, so
if one thread blocks, the entire process is blocked.

2. Kernel Threads
The kernel manages the threads and knows about each and every thread; this is the multithreading type
supported by the OS itself. The kernel maintains a table to track the threads in each process, as well as a
separate process table that is updated whenever changes are made. The OS makes the changes to threads
while creating and managing them. Knowledge of the threads is available to the kernel, so the time for
each process can be scheduled according to its execution. Kernel threads are useful for applications that
frequently block. Kernel threads are slow compared with user threads, and a thread control block is
needed to manage them.

Advantages of Threads in Operating System

1. The context switching time is reduced by using threads. With traditional methods, it takes longer to
switch contexts between different processes, even though they belong to the same OS. This helps in
managing the time taken by tasks.

2. While using threads, one task after the other is carried out without being instructed always.
Hence concurrency is achieved for the entire process using threads.

3. Communication between threads, and between processes, is made more efficient with the help of
threads. This helps to manage the process without separately tracking every part of it, which reduces
cost.

4. Since it is easy to do context switching, the cost is less and hence the entire process is
economical to create and manage and switch the threads between the processes.

5. Multiprocessors can be exploited at large scale by threads, since different threads of the same
process can run on different processors at the same time.
6. When multithreading is employed, it responds to the user every now and then and hence
customer satisfaction is achieved.

Disadvantages of Threads in Operating System

1. Global variables are shared between all the threads of a process. This creates a security issue,
as a global variable can be accessed and modified by any thread.

2. When the entire application depends on threads, if a single thread fails, the entire process can be
blocked and the application crashes. This is a particular risk when the application runs the process
with a single thread; when many threads are used, they can also interfere with each other, making
communication difficult.

3. Threads depend on the system and on the process to run; they are not independent. Also, executing
a process via many threads can be time-consuming to design, yet processes cannot run without threads.

4. Threads are not reusable across applications, and using them well requires changes to the application
from the ground up. Threads cannot work without a process, as they do not have their own address
space.

Threads are important to the process and hence to the system. Threads are even used in designing the
operating systems. The threads to be used should be carefully determined based on user thread or
kernel thread. Safety and security of the codes must be considered as threads share global variables to
other threads in the same address space.

Multithreading in Operating System


A thread is a path that is followed during a program's execution. The majority of programs written
nowadays run as a single thread. Let's say, for example, that a program is not capable of reading
keystrokes while making drawings; these tasks cannot be executed by the program at the same time.
This problem can be solved through multitasking, so that two or more tasks can be executed simultaneously.

Multitasking is of two types: Processor based and thread based. Processor based multitasking is totally
managed by the OS, however multitasking through multithreading can be controlled by the programmer
to some extent.

The concept of multi-threading needs proper understanding of these two terms – a process and a
thread. A process is a program being executed. A process can be further divided into independent units
known as threads.

A thread is like a small light-weight process within a process. Or we can say a collection of threads is
what is known as a process.
Applications –
Threading is used widely in almost every field. Most widely it is seen on the internet nowadays, where
transaction processing of every type (recharges, online transfers, banking, etc.) relies on it. Threading
divides the code into small parts that are very lightweight and place less burden on CPU and memory, so
that the work can be carried out easily and the goal achieved in the desired field. The concept of threading
was developed to cope with fast and regular changes in technology; following the saying that "necessity
is the mother of invention", the idea of threads was introduced to enhance the capability of programming.

What is a thread?

Thread is the basic unit of CPU utilization i.e. how an application uses CPU. Let's understand it through
an example.
Let's say we open a web-page, we often see the text contents being loaded before the image and video
contents. Here loading the web-page is a process and the process contains 2 threads, one for loading
the text contents and the other for loading the images contents.

Let's take another example consisting of a basic coding problem (see the sketch below). Suppose we want
to get the sum of the numbers in an array of length N. We can make this simple process of adding numbers
multithreaded by using 2 threads for the summing: one for the first half of the array and a second for the
other half, and then adding the sums of the two halves. Here the threads summing the two halves are
child threads, and the thread producing the final sum is called the parent thread.
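A sketch of that two-thread array sum using Pthreads is shown below (array contents and names are
illustrative; compile with -pthread):

/* Two child threads each sum half of the array; the parent adds the halves. */
#include <pthread.h>
#include <stdio.h>

#define N 8

static int numbers[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct range { int start, end, sum; };

static void *sum_range(void *arg) {
    struct range *r = arg;
    r->sum = 0;
    for (int i = r->start; i < r->end; i++)   /* each child sums its half */
        r->sum += numbers[i];
    return NULL;
}

int main(void) {
    struct range halves[2] = { {0, N / 2, 0}, {N / 2, N, 0} };
    pthread_t tid[2];

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, sum_range, &halves[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);           /* the parent waits for both children */

    printf("total = %d\n", halves[0].sum + halves[1].sum);   /* parent combines the halves */
    return 0;
}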

A process can have as many threads as required (limited by hardware and efficiency overheads). Thus,
it is clear that the code, data, and files belonging to a particular process are common to all the threads
in a multithreaded process.

But each thread has its own unique thread ID, program counter, register set, and stack memory, as
illustrated in the following diagram:

Why do we need multi threading?

1. Responsiveness - Let's say that in the aforementioned example, while loading a web page, there is
some large image being loaded that takes its time. As the whole process is multithreaded, the
loading of the image will not block the loading of the text content, making the page more responsive
to the user.

2. Resource sharing - Threads share the memory and resources of the process by default thus
allowing application to have several different threads within same address space.

3. Economy - As threads share the same memory and resources as their process, it is economical to
create and context-switch threads compared with creating processes.

What are the challenges of multi threading?


1. Identifying tasks to multi thread to make the application efficient.

2. Maintaining data integrity, as there may be situations where the same data is being manipulated
by different threads.

3. Balancing cost - It is important to share the workload of the application equally among the
different threads; otherwise some threads will do less work than others, creating overhead.

4. It is much easier to test and debug a single-threaded application than a multithreaded one.

Parallelism

We would come across two terms very frequently, parallelism and concurrency. Generally, these two go
hand-in-hand. But what exactly do they mean?

Parallelism vs Concurrency

A system is parallel if it can perform more than one task simultaneously.

Concurrency is when more than one task makes progress. Even if the system is single-core, the CPU
scheduler rapidly switches between processes, creating the illusion of a parallel system and thus
allowing progress for different tasks.

Therefore, it is important to note that concurrency can occur without parallelism.

Types of parallelism

1. Data parallelism - Here, the data is divided into subsets, and those subsets are operated on by
the same operation on different cores.

2. Task parallelism - Different operations are performed on the same data on different cores.

Multi-threading models

Many to one model


Many user-level threads (threads created in an application by the user using a thread library, explained
later) are mapped onto a single kernel-level thread. This model has the following problems:

 A blocking call on one user-level thread blocks all the threads.

 No true concurrency.

 Inefficient use of multi-core architectures.

One to one model

A single user-level thread is mapped onto a single kernel-level thread. It has the following advantages
over the many-to-one model:

 A blocking call on one thread doesn't block any other thread.

 True concurrency.

 Efficient use of multi-core systems.

But it has the problem of the overhead of creating that many kernel-level threads.
Many to many model

Many user-level threads are mapped onto an equal or smaller number of kernel-level threads. This solves
the problem of the overhead of creating kernel-level threads.

This model has a variant, namely the two-level model, which combines many-to-many as well as
one-to-one mapping. In it, certain threads of a process are bound to a particular kernel-level thread until
they finish execution.

Thread Library

A thread library provides an API for the programmer to create and manage threads in their applications.

There can be two approaches for implementing thread library:-

1. The first approach is to provide a library entirely in user space with no kernel support. All code
and data structures for the library exist in user space. This means that invoking a function in the
library results in a local function call in user space and not a system call.

2. The second approach is to implement a kernel-level library supported directly by the operating
system. In this case, code and data structures for the library exist in kernel space. Invoking a
function in the API for the library typically results in a system call to the kernel.

There are 3 main thread libraries:-

1. POSIX Pthreads - May be a user-level or kernel-level library. Mostly used by Linux/Unix-based OSes.

2. Windows - Kernel level.

3. Java - Threads are created and managed directly in Java programs. As the JVM itself runs on an OS,
they are implemented using a thread library present on the host OS.

Thread creation
There are 2 strategies for thread creation:-

1. Asynchronous - The parent thread creates the child and then executes independently. This implies
little data sharing between parent and child threads.

2. Synchronous - The parent thread waits for the child thread to finish its execution. More data
sharing is done here.

Pthreads

 POSIX standard for thread creation and synchronization.

 These are mere specifications of thread behavior, not an implementation.

 Mostly implemented by UNIX type system.

 Windows doesn't support it natively.

Windows threads

 Similar to Pthread creation and management in many ways.

 Differences in function names; for example, the pthread_join() function is implemented here
using WaitForSingleObject().

Java threads

2 techniques for implementing Java threads:-

1. Create a new class derived from the Thread class and override its run method.

2. Implement the runnable interface.

The JVM hides the implementation details of the underlying OS and provides a consistent, abstract
environment that allows Java programs to operate on any platform.

All of the above thread-library approaches fall into the category of explicit threading, where the programmer
creates and manages threads.
Another way is to transfer the creation and management of threads from application developers to compilers and
run-time libraries. This strategy is known as implicit threading.

2 common strategies of implicit threading are:-

1. Thread Pool

2. Open MP

Thread Pool

There were a few difficulties with explicit threading:-

 How many threads to create in order to use a multi-core architecture efficiently?

 The time taken to create each thread.


The general idea behind a thread pool is to create a number of threads at process startup and place
them into a pool, where they sit and wait for work. When a server receives a request, it awakens a
thread from this pool—if one is available—and passes it the request for service. Once the thread
completes its service, it returns to the pool and awaits more work. If the pool contains no available
thread, the server waits until one becomes free.

Benefits of thread pool:-

 Servicing a request with an existing thread is faster than waiting to create a thread.

 The pool limits the number of threads that exist at any one time, which benefits systems that cannot support a
large number of threads. (A minimal pool sketch follows.)
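
The idea can be sketched in a few dozen lines using C++11 threads; the class name ThreadPool, the pool size and the task type below are illustrative choices for this sketch, not a standard library API:

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A tiny fixed-size thread pool: workers are created once at startup and
// then repeatedly pick tasks from a shared queue.
class ThreadPool {
public:
    explicit ThreadPool(size_t n) {
        for (size_t i = 0; i < n; ++i)
            workers.emplace_back([this] { workerLoop(); });
    }
    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lock(m);
            stopping = true;
        }
        cv.notify_all();
        for (auto &w : workers) w.join();     // wait for pending work to drain
    }
    void submit(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m);
            tasks.push(std::move(task));
        }
        cv.notify_one();                      // wake one sleeping worker
    }
private:
    void workerLoop() {
        while (true) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [this] { return stopping || !tasks.empty(); });
                if (stopping && tasks.empty()) return;
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();                           // run the request outside the lock
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool stopping = false;
};

int main() {
    ThreadPool pool(4);                       // create 4 threads at process startup
    for (int i = 0; i < 8; ++i)
        pool.submit([i] { std::cout << "request " << i << " serviced\n"; });
}   // output lines from different workers may interleave

Servicing each request with an already-created worker avoids the per-request thread-creation cost, and the fixed pool size bounds the total number of threads, which matches the two benefits listed above.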

Open MP

 OpenMP is a set of compiler directives as well as an API that provides support for parallel programming in
shared-memory environments.

 It identifies parallel regions in the code and executes them in parallel.

 We can also control the number of threads created and the data shared between the threads (a small example
follows).
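
For example, a single directive can ask the compiler and run-time to split a loop across threads; the array and its size below are arbitrary, and the program assumes it is compiled with OpenMP support (e.g. g++ -fopenmp):

#include <omp.h>
#include <cstdio>

int main() {
    const int N = 8;
    int a[N];

    // The directive asks the compiler/run-time to divide the loop
    // iterations among the available threads.
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = i * i;

    // omp_get_max_threads() reports how many threads OpenMP may use.
    printf("computed with up to %d threads\n", omp_get_max_threads());
    for (int i = 0; i < N; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}

If the code is compiled without OpenMP support, the pragma is simply ignored and the loop runs serially, which is part of OpenMP's appeal.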

Finally, it's time to look at some issues that arise with threading.

Threading issues

fork() and exec() system calls.

The issue with fork() is whether all the threads of the process should be duplicated, or whether the new process
should contain only the single thread that called fork(); some systems provide both variants.
exec() works much the same way as usual, i.e. the program specified as its parameter replaces the whole process,
including all of its threads.

Signal handling

Signals are used to notify a process that a particular event has occurred while it is executing.


There are 2 types of handler:-

 Default signal handler - run by the kernel when handling the signal.

 User-defined signal handler - a handler defined by the user that overrides the default signal handler.

In single threaded program, all the signals are delivered to the process.
In multi threaded program, there are 4 options:-

 Deliver the signal to the thread to which the signal applies.

 Deliver the signal to every thread in the process.

 Deliver the signal to certain threads in the process.

 Assign a specific thread to receive all signals for the process.


In the case of synchronous signals, the signal needs to be delivered to the thread to which the signal applies.
In the case of asynchronous signals, if the signal affects all the threads, it is delivered to every thread; if it
affects only certain threads, it is delivered to those threads.
Windows doesn't explicitly provide signal handling but emulates it through asynchronous procedure calls (APCs).
An APC is delivered to a particular thread rather than to the process.
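
On POSIX systems, delivering a signal to one particular thread can be illustrated with pthread_kill(); the handler and worker functions below are hypothetical, and for brevity the sketch uses sleep() to sidestep the start-up race between thread creation and signal delivery:

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* User-defined handler: overrides the default action for SIGUSR1. */
static void handler(int sig)
{
    (void)sig;
    const char msg[] = "SIGUSR1 handled by the worker thread\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);   /* write() is async-signal-safe */
}

static void *worker(void *arg)
{
    (void)arg;
    pause();            /* sleep until a signal is delivered to this thread */
    return NULL;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = handler;
    sigaction(SIGUSR1, &sa, NULL);   /* install the user-defined handler */

    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    sleep(1);                        /* crude way to let the worker reach pause() */
    pthread_kill(tid, SIGUSR1);      /* deliver the signal to that one thread */

    pthread_join(tid, NULL);
    return 0;
}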

Thread cancellation

The thread which is to be cancelled is known as target thread.


There are 2 strategies for thread cancellation:-

1. Asynchronous cancellation- One thread immediately terminates the target thread resulting in
abrupt termination.

2. Deferred cancellation - the target thread periodically checks whether it should terminate, and thus
terminates in an orderly fashion.

pthread_cancel(tid) only requests cancellation of a thread. Whether and when cancellation actually happens
depends on how the target thread is set up to handle the request, i.e. deferred or asynchronous (see the sketch below).
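
A small sketch of deferred cancellation with Pthreads is shown below; the target thread marks its own cancellation points with pthread_testcancel(), so it terminates only at those well-defined places (the loop body is just a placeholder workload):

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical target thread: does work in a loop and checks for a pending
   cancellation request at a well-defined point in each iteration. */
static void *target(void *arg)
{
    (void)arg;
    for (;;) {
        /* ... one unit of work ... */
        pthread_testcancel();   /* deferred cancellation point */
        usleep(1000);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, target, NULL);

    sleep(1);
    pthread_cancel(tid);        /* only *requests* cancellation of the target */
    pthread_join(tid, NULL);    /* returns once the target has actually exited */
    puts("target thread cancelled");
    return 0;
}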

Thread local Storage

Threads belonging to a process share the data of the process. However, in some circumstances, each
thread might need its own copy of certain data. We will call such data thread-local storage(TLS). Most
thread libraries—including Windows and Pthreads—provide some form of support for thread-local
storage. Java provides support as well.
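
In C++, the thread_local keyword gives each thread its own copy of a variable; Pthreads expose the same idea through pthread_key_create() and pthread_setspecific(). A tiny sketch using the C++ keyword (the counter variable is arbitrary):

#include <iostream>
#include <thread>

// Each thread gets its own independent copy of this counter.
thread_local int counter = 0;

void work(int id) {
    for (int i = 0; i < 3; ++i)
        ++counter;                       // modifies only this thread's copy
    std::cout << "thread " << id << " counter = " << counter << "\n";
}

int main() {
    std::thread t1(work, 1), t2(work, 2);
    t1.join();
    t2.join();
    std::cout << "main counter = " << counter << "\n";  // still 0 in main
    return 0;
}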

Scheduler activation

For user-level threads to be executed, the thread library has to communicate with the kernel. One scheme for this
communication is known as scheduler activation.
 The kernel provides the application with a set of virtual processors known as lightweight processes (LWPs).

 The application can schedule user threads onto an available LWP.

 The kernel must inform the application about certain events; this is called an upcall.

 Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run
on a virtual processor.

 For example, when a thread is about to block, the kernel makes an upcall to the application informing it
that a thread is about to block and identifying the specific thread.

 The kernel then allocates a new virtual processor to the application.

 The application runs an upcall handler on this new virtual processor, which saves the state of
the blocking thread and relinquishes the virtual processor on which the blocking thread is
running.

 The upcall handler then schedules another thread that is eligible to run on the new virtual
processor.

 When the event that the blocking thread was waiting for occurs, the kernel makes another
upcall to the thread library informing it that the previously blocked thread is now eligible to run.

 The upcall handler for this event also requires a virtual processor, and the kernel may allocate a
new virtual processor.
 After marking the unblocked thread as eligible to run, the application schedules an eligible
thread to run on an available virtual processor.

Benefits of Multithreading in Operating System

The benefits of multi threaded programming can be broken down into four major categories:

1. Responsiveness –
Multithreading in an interactive application may allow a program to continue running even if a
part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to
the user.

In a non-multithreaded environment, a server listens on a port for a request; when a request arrives, it processes
that request and only then resumes listening for another one. The time taken to process a request makes other users
wait unnecessarily. A better approach is to pass the request to a worker thread and continue listening on the port.

For example, a multithreaded web browser allows user interaction in one thread while a video is being loaded in
another thread, so instead of waiting for the whole web page to load, the user can continue viewing part of the page.

2. Resource Sharing –
Processes may share resources only through techniques such as-

 Message Passing

 Shared Memory

Such techniques must be explicitly organized by the programmer. However, threads share the memory and
the resources of the process to which they belong by default.
The benefit of sharing code and data is that it allows an application to have several threads of activity
within the same address space.

3. Economy –
Allocating memory and resources for process creation is a costly job in terms of time and space.
Since threads share the memory and resources of the process to which they belong, it is more economical to
create and context-switch threads. Generally, much more time is consumed in creating and managing processes
than threads.
In Solaris, for example, creating a process is about thirty times slower than creating a thread, and context
switching between processes is about five times slower.

4. Scalability –
The benefits of multithreading increase greatly on a multiprocessor architecture, where threads may run in
parallel on multiple processors. With only one thread it is not possible to divide a process into smaller
tasks that different processors can perform; a single-threaded process can run on only one processor
regardless of how many processors are available.
Multithreading on a multi-CPU machine increases parallelism.
Scheduling Objectives

 Be Fair

 Maximize throughput

 Maximize the number of users receiving acceptable response times.

 Be predictable

 Balance resource use

 Avoid indefinite postponement

 Enforce Priorities

 Give preference to processes holding key resources

 Give better service to processes that have desirable behaviour patterns

 Degrade gracefully under heavy loads

Scheduling and Performance Criteria

 CPU Utilisation - keep the CPU as busy as possible (is the workload CPU-bound or I/O-bound?).

 Throughput - How many processes are executed in a unit of time?

 Turnaround time - how long does it take for a program to exit the system from when it is
submitted?

 Waiting Time - how much time does the process spend waiting for the CPU in total?

 Response Time - How long does it take for the system to respond to an interactive request?

 The above should be prioritised, and a scheduling algorithm which satisfies the criteria should
be used.

CPU Scheduling Criteria

Different CPU scheduling algorithms have different properties and the choice of a particular algorithm
depends on the various factors. Many criteria have been suggested for comparing CPU scheduling
algorithms.

The criteria include the following:

1. CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it typically varies from
about 40 percent (lightly loaded) to 90 percent (heavily loaded), depending on the load on the system.
2. Throughput –
A measure of the work done by CPU is the number of processes being executed and completed
per unit time. This is called throughput. The throughput may vary depending upon the length or
duration of processes.

3. Turnaround time –
For a particular process, an important criterion is how long it takes to execute that process. The
time elapsed from the time of submission of a process to the time of completion is known as the
turnaround time. Turn-around time is the sum of times spent waiting to get into memory,
waiting in ready queue, executing in CPU, and waiting for I/O.

4. Waiting time –
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in the
ready queue.

5. Response time –
In an interactive system, turn-around time is not the best criteria. A process may produce some
output fairly early and continue computing new results while previous results are being output
to the user. Thus another criteria is the time taken from submission of the process of request
until the first response is produced. This measure is called response time.

What is Burst time, Arrival time, Exit time, Response time, Waiting time, Turnaround time, and
Throughput?

When we are dealing with CPU scheduling algorithms, we encounter some confusing terms like burst time, arrival
time, exit time, waiting time, response time, turnaround time, and throughput. These parameters are used to
measure the performance of a system. So, in this blog, we will learn about these parameters. Let's get started
one by one.

Burst time

Every process in a computer system requires some amount of time for its execution. This time is both
the CPU time and the I/O time. The CPU time is the time taken by CPU to execute the process. While the
I/O time is the time taken by the process to perform some I/O operation. In general, we ignore the I/O
time and we consider only the CPU time for a process. So, Burst time is the total time taken by the
process for its execution on the CPU.

Arrival time

Arrival time is the time when a process enters the ready state and is ready for execution.
For example, if three processes enter the ready state at 0 ms, 1 ms, and 2 ms, those are their respective arrival times.

Exit time

Exit time is the time when a process completes its execution and exits the system.

Response time

Response time is the time from when a process enters the ready state until it gets the CPU for the first time.
For example, consider three processes P1, P2, and P3 with arrival times 0 ms, 1 ms, and 2 ms and burst times of
8 ms and 7 ms for P1 and P2, scheduled with the First Come First Serve CPU scheduling algorithm:

Here, the response time of all the 3 processes are:

 P1: 0 ms

 P2: 7 ms, because P2 has to wait until P1 finishes at 8 ms before it gets the CPU for the first time. Since
the arrival time of P2 is 1 ms, the response time is 8 - 1 = 7 ms.

 P3: 13 ms, because P3 has to wait for the execution of P1 and P2, i.e. until 8 + 7 = 15 ms, before the CPU
is allocated to it for the first time. Since the arrival time of P3 is 2 ms, the response time is
15 - 2 = 13 ms.

Response time = Time at which the process gets the CPU for the first time - Arrival time

Waiting time
Waiting time is the total time spent by the process in the ready state waiting for the CPU. For example,
consider the same three processes with arrival times 0 ms, 0 ms, and 2 ms and burst times of 8 ms and 7 ms for
P1 and P2, scheduled with the First Come First Serve algorithm.

Then the waiting time for all the 3 processes will be:

 P1: 0 ms

 P2: 8 ms, because P2 has to wait for the complete execution of P1 and the arrival time of P2 is 0 ms.

 P3: 13 ms, because P3 will be executed after P1 and P2, i.e. after 8 + 7 = 15 ms, and the arrival time of
P3 is 2 ms. So, the waiting time of P3 will be 15 - 2 = 13 ms.

Waiting time = Turnaround time - Burst time

In the above example, each process has to wait only once. But in many other scheduling algorithms, the CPU may be
allocated to a process for some time, the process may then be moved back to the ready (or waiting) state, and
after some time it gets the CPU again, and so on.

There is a difference between waiting time and response time. Response time is the time spent between
the ready state and getting the CPU for the first time. But the waiting time is the total time taken by the
process in the ready state. Let's take an example of a round-robin scheduling algorithm. The time
quantum is 2 ms.
In such an example, the response time of the process P2 is 2 ms, because after 2 ms the CPU is allocated to P2,
while the waiting time of P2 is turnaround time - burst time, e.g. 10 - 6 = 4 ms.

Turnaround time

Turnaround time is the total amount of time spent by the process from coming in the ready state for the
first time to its completion.

Turnaround time = Burst time + Waiting time

or

Turnaround time = Exit time - Arrival time

For example, take the First Come First Serve scheduling algorithm with processes arriving in the order P1, P2, P3
at time 0 and taking 2, 5, and 10 seconds respectively. The turnaround time of P1 is 2 seconds: it arrives at the
0th second, the CPU is allocated to it immediately, so its waiting time is 0 and its turnaround time equals its
burst time of 2 seconds. The turnaround time of P2 is 7 seconds, because P2 has to wait 2 seconds for the
execution of P1; after those 2 seconds the CPU is given to P2, so its turnaround time is 2 + 5 = 7 seconds.
Similarly, the turnaround time of P3 is 17 seconds, because the waiting time of P3 is 2 + 5 = 7 seconds and its
burst time is 10 seconds, giving 7 + 10 = 17 seconds.

Different CPU scheduling algorithms produce different turnaround time for the same set of processes.
This is because the waiting time of processes differ when we change the CPU scheduling algorithm.

Throughput

Throughput is a way to measure the efficiency of a CPU. It is defined as the number of processes completed per
unit of time. For example, if process P1 takes 3 seconds to execute, P2 takes 5 seconds, and P3 takes 10 seconds,
then all three finish after 3 + 5 + 10 = 18 seconds, so the throughput is 3/18, i.e. one process completed every
6 seconds on average.
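
The relations above can be captured in a few lines of code; the struct fields and sample values below are illustrative (they correspond to P2 in the FCFS response-time example, where waiting time equals response time because the algorithm is non-preemptive):

#include <iostream>

struct Times {
    int arrival;     // when the process entered the ready queue
    int firstRun;    // when it got the CPU for the first time
    int burst;       // total CPU time it needed
    int exit;        // when it finished
};

int main() {
    Times p{1, 8, 7, 15};   // hypothetical values for one process

    int turnaround = p.exit - p.arrival;        // 15 - 1 = 14
    int waiting    = turnaround - p.burst;      // 14 - 7 = 7
    int response   = p.firstRun - p.arrival;    // 8  - 1 = 7

    std::cout << "turnaround = " << turnaround
              << ", waiting = " << waiting
              << ", response = " << response << "\n";
    return 0;
}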
What is the difference between Preemptive and Non-Preemptive scheduling?

In the Operating System, the process scheduling algorithms can be divided into two broad categories i.e.
Preemptive Scheduling and Non-Preemptive Scheduling. In this blog, we will learn the difference
between these two. So, let's get started.

Preemptive Scheduling

In preemptive scheduling, the CPU executes a process only for a limited period of time, after which the process
has to wait for its next turn. The scheduler can therefore change the state of a process: a process may move from
the running state to the ready state, or from the waiting state to the ready state. Resources are allocated to the
process for a limited amount of time and are then taken back; the process returns to the ready queue if it still
has some CPU burst time remaining.
Some preemptive scheduling algorithms are Round Robin, SJF (preemptive), etc.

Non-preemptive Scheduling

In non-preemptive scheduling, if a resource is allocated to a process, that resource is not taken back until the
process completes. Other processes in the ready queue have to wait for their turn and cannot forcefully take the
CPU. Once the CPU is allocated to a process, it is held by that process until it completes its execution or moves
to the waiting state for an I/O operation.

Difference between Preemptive and Non-preemptive scheduling

 In preemptive scheduling, the CPU can be taken back from the process at any time during the
execution of the process. But in non-preemptive scheduling, if the CPU is allocated, then it will
not be taken back until the process completes its execution.

 In preemptive scheduling, a process can be interrupted by some high priority process but in non-
preemptive scheduling no interruption by other processes is allowed.

 The preemptive approach is flexible in nature while the non-preemptive approach is rigid in
nature.

 In preemptive scheduling, the CPU utilization is higher than in the non-preemptive approach.

 In preemptive scheduling, response times are generally shorter, because a running process can be interrupted
in favour of waiting ones, at the cost of extra context-switch overhead; in non-preemptive scheduling, a short
process may have to wait a long time for the current process to finish.

 In a preemptive approach, if higher-priority processes keep arriving, a low-priority process can starve. In a
non-preemptive approach such as SJF, if processes with shorter burst times keep arriving, a process with a
long burst time can starve.
Different process scheduling algorithms.
First Come First Serve (FCFS)

As the name suggests, the process that enters the ready state first will be executed first by the CPU,
irrespective of its burst time or priority. This is implemented using a First In First Out (FIFO) queue: when a
process enters the ready state, its PCB is linked to the tail of the queue, and the CPU executes processes by
taking them from the head of the queue. Once the CPU is allocated to a process, it cannot be taken back until
that process finishes its execution.

Example:

In this example, we have three processes P1, P2, and P3 with burst times 18 ms, 7 ms, and 10 ms, entering the
ready state at 0 ms, 2 ms, and 2 ms respectively. Based on arrival time, process P1 is executed for the first
18 ms, then process P2 is executed for 7 ms, and finally process P3 is executed for 10 ms. One thing to note here
is that if the arrival times of two processes are the same, the CPU can select either of them.

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 0ms | 18ms |

| P2 | 16ms | 23ms |

| P3 | 23ms | 33ms |

---------------------------------------------

Total waiting time: (0 + 16 + 23) = 39ms

Average waiting time: (39/3) = 13ms


Total turnaround time: (18 + 23 + 33) = 74ms

Average turnaround time: (74/3) = 24.66ms

Advantages of FCFS:

 It is the most simple scheduling algorithm and is easy to implement.

Disadvantages of FCFS:

 The algorithm is non-preemptive, so a process must be executed fully before other processes are allowed to
execute.

 Throughput is not efficient.

 FCFS suffers from the Convoy effect: if a process with a very long burst time arrives first, it is executed
first even though a process with a much shorter burst time is waiting in the ready state.

First Come First Serve Scheduling

In the "First come first serve" scheduling algorithm, as the name suggests, the process which arrives
first, gets executed first, or we can say that the process which requests the CPU first, gets the CPU
allocated first.

 First Come First Serve, is just like FIFO(First in First out) Queue data structure, where the data
element which is added to the queue first, is the one who leaves the queue first.

 This is used in Batch Systems.

 It's easy to understand and implement programmatically, using a Queue data structure, where
a new process enters through the tail of the queue, and the scheduler selects process from
the head of the queue.

 A perfect real-life example of FCFS scheduling is buying tickets at a ticket counter.

Calculating Average Waiting Time

For every scheduling algorithm, average waiting time is a crucial parameter to judge its performance.

AWT or Average waiting time is the average of the waiting times of the processes in the queue, waiting
for the scheduler to pick them for execution.

Lower the Average Waiting Time, better the scheduling algorithm.

Consider the processes P1, P2, P3, P4 arriving for execution in that order, all with arrival time 0 and with
burst times 21 ms, 3 ms, 6 ms, and 2 ms respectively; let's find the average waiting time using the FCFS
scheduling algorithm.
The average waiting time will be 18.75 ms.

For the above given processes, first P1 will be provided with the CPU resources,

 Hence, waiting time for P1 will be 0

 P1 requires 21 ms for completion, hence waiting time for P2 will be 21 ms

 Similarly, waiting time for process P3 will be execution time of P1 + execution time for P2, which
will be (21 + 3) ms = 24 ms.

 For process P4 it will be the sum of execution times of P1, P2 and P3.

The GANTT chart above perfectly represents the waiting time for each process.

Problems with FCFS Scheduling

Below we have a few shortcomings or problems with the FCFS scheduling algorithm:

1. It is a non-preemptive algorithm, which means process priority doesn't matter.

If a low-priority process that takes a long time (such as a routine daily backup) is executing and a high-priority
process suddenly arrives (such as an interrupt needed to avoid a system crash), the high-priority process has to
wait, and in this case the system may crash simply because of improper process scheduling.
2. Not optimal Average Waiting Time.

3. Resources cannot be utilised in parallel, which leads to the Convoy Effect and hence poor resource (CPU,
I/O, etc.) utilisation.

What is Convoy Effect?

Convoy Effect is a situation where many processes, who need to use a resource for short time are
blocked by one process holding that resource for a long time.

This essentially leads to poor utilisation of resources and hence poor performance.

Program for FCFS Scheduling

Here we have a simple C++ program for processes with arrival time as 0.

In the program, we will be calculating the Average waiting time and Average turn around time for a
given array of Burst times for the list of processes.

/* Simple C++ program for implementation of FCFS scheduling */
#include<iostream>
using namespace std;

// function to find the waiting time for all processes
void findWaitingTime(int processes[], int n, int bt[], int wt[])
{
    // waiting time for first process will be 0
    wt[0] = 0;

    // calculating waiting time
    for (int i = 1; i < n; i++)
    {
        wt[i] = bt[i-1] + wt[i-1];
    }
}

// function to calculate turn around time
void findTurnAroundTime(int processes[], int n, int bt[], int wt[], int tat[])
{
    // calculating turnaround time by adding bt[i] + wt[i]
    for (int i = 0; i < n; i++)
    {
        tat[i] = bt[i] + wt[i];
    }
}

// function to calculate average time
void findAverageTime(int processes[], int n, int bt[])
{
    int wt[n], tat[n], total_wt = 0, total_tat = 0;

    // function to find waiting time of all processes
    findWaitingTime(processes, n, bt, wt);

    // function to find turn around time for all processes
    findTurnAroundTime(processes, n, bt, wt, tat);

    // display processes along with all details
    cout << "Processes " << " Burst time " << " Waiting time " << " Turn around time\n";

    // calculate total waiting time and total turn around time
    for (int i = 0; i < n; i++)
    {
        total_wt = total_wt + wt[i];
        total_tat = total_tat + tat[i];
        cout << " " << i+1 << "\t\t" << bt[i] << "\t " << wt[i] << "\t\t " << tat[i] << endl;
    }

    cout << "Average waiting time = " << (float)total_wt / (float)n;
    cout << "\nAverage turn around time = " << (float)total_tat / (float)n;
}

// main function
int main()
{
    // process ids
    int processes[] = { 1, 2, 3, 4 };
    int n = sizeof processes / sizeof processes[0];

    // burst time of all processes
    int burst_time[] = {21, 3, 6, 2};

    findAverageTime(processes, n, burst_time);

    return 0;
}
Processes Burst time Waiting time Turn around time

1 21 0 21

2 3 21 24

3 6 24 30

4 2 30 32
Average waiting time = 18.75

Average turn around time = 26.75

Here we have simple formulae for calculating various times for given processes:

Completion Time: Time taken for the execution to complete, starting from arrival time.

Turn Around Time: Time taken to complete after arrival. In simple words, it is the difference between
the Completion time and the Arrival time.

Waiting Time: Total time the process has to wait before its execution begins. It is the difference
between the Turn Around time and the Burst time of the process.

For the program above, we have considered the arrival time to be 0 for all the processes, try to
implement a program with variable arrival times.

Shortest Job First (Non-preemptive)

With FCFS, we saw that if a process with a very high burst time arrives first, then other processes with very
low burst times have to wait for their turn. To remove this problem, we use a new approach: Shortest Job First,
or SJF.

In this technique, the process having the minimum burst time at a particular instant of time is executed first.
It is a non-preemptive approach, i.e. once a process starts its execution it runs to completion before another
process is selected.

Example:
In this example, at 0 ms we have only one process, P2, so P2 is executed for 4 ms. After 4 ms there are two new
processes, P1 and P3. The burst time of P1 is 5 ms and that of P3 is 2 ms, so P3 is executed first because its
burst time is smaller; P3 is executed for 2 ms. After 6 ms we have two processes, P1 and P4 (P4 arrived at 5 ms).
Of these two, P4 has the smaller burst time, so P4 is executed for 4 ms and then P1 is executed for 5 ms. The
waiting time and turnaround time of these processes will be:

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 7ms | 12ms |

| P2 | 0ms | 4ms |

| P3 | 0ms | 2ms |

| P4 | 1ms | 5ms |

---------------------------------------------

Total waiting time: (7 + 0 + 0 + 1) = 8ms

Average waiting time: (8/4) = 2ms

Total turnaround time: (12 + 4 + 2 + 5) = 23ms

Average turnaround time: (23/4) = 5.75ms

Advantages of SJF (non-preemptive):

 Short processes will be executed first.

Disadvantages of SJF (non-preemptive):

 It may lead to starvation of long processes if processes with short burst times keep arriving in the ready
state.

Shortest Job First (Preemptive)

This is the preemptive approach of the Shortest Job First algorithm. Here, at every instant of time, the
CPU will check for some shortest job. For example, at time 0ms, we have P1 as the shortest process. So,
P1 will execute for 1ms and then the CPU will check if some other process is shorter than P1 or not. If
there is no such process, then P1 will keep on executing for the next 1ms and if there is some process
shorter than P1 then that process will be executed. This will continue until the process gets executed.

This algorithm is also known as Shortest Remaining Time First i.e. we schedule the process based on the
shortest remaining time of the processes.
Example:

In this example, at time 1 ms there are two processes, P1 and P2. Process P1 has a remaining burst time of 6 ms
and process P2 has 8 ms, so P1 is executed first. Since this is a preemptive approach, we re-check at every time
unit. At 2 ms we have three processes: P1 (5 ms remaining), P2 (8 ms), and P3 (7 ms). Of these three, P1 has the
least remaining time, so it continues its execution. After 3 ms we have four processes: P1 (4 ms remaining),
P2 (8 ms), P3 (7 ms), and P4 (3 ms). Of these four, P4 has the least remaining time, so it is executed; P4 keeps
executing for the next three milliseconds because it always has the shortest remaining time. After 6 ms we have
three processes: P1 (4 ms remaining), P2 (8 ms), and P3 (7 ms), so P1 is selected and executed. This comparison
continues until all the processes have executed. The waiting and turnaround times of the processes will be:

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 3ms | 9ms |

| P2 | 16ms | 24ms |

| P3 | 8ms | 15ms |

| P4 | 0ms | 3ms |

---------------------------------------------

Total waiting time: (3 + 16 + 8 + 0) = 27ms

Average waiting time: (27/4) = 6.75ms


Total turnaround time: (9 + 24 + 15 + 3) = 51ms

Average turnaround time: (51/4) = 12.75ms

Advantages of SJF (preemptive):

 Short processes will be executed first.

Disadvantages of SJF (preemptive):

 It may result in starvation if short processes keep on coming.

Shortest Job First(SJF) Scheduling

Shortest Job First scheduling works on the process with the shortest burst time or duration first.

 This is the best approach to minimize waiting time.

 This is used in Batch Systems.

 It is of two types:

1. Non Pre-emptive

2. Pre-emptive

 To successfully implement it, the burst time/duration time of the processes should be known to
the processor in advance, which is practically not feasible all the time.

 This scheduling algorithm is optimal if all the jobs/processes are available at the same time.
(either Arrival time is 0 for all, or Arrival time is same for all)

Non Pre-emptive Shortest Job First

Consider the same four processes as in the FCFS example, all available in the ready queue with arrival time 0 and
burst times 21 ms, 3 ms, 6 ms, and 2 ms.
As the Gantt chart would show, process P4 is picked up first as it has the shortest burst time, then P2, followed
by P3, and at last P1.

We scheduled the same set of processes using the First come first serve algorithm in the previous
tutorial, and got average waiting time to be 18.75 ms, whereas with SJF, the average waiting time comes
out 4.5 ms.

Problem with Non Pre-emptive SJF

If the arrival times of the processes are different, i.e. not all processes are available in the ready queue at
time 0 and some jobs arrive later, then a process with a short burst time sometimes has to wait for the current
process's execution to finish, because in non-preemptive SJF the arrival of a short job does not halt or stop the
job that is already executing.

More significantly, this can lead to the problem of starvation: a process with a long burst time may wait
indefinitely if shorter jobs keep arriving. This can be addressed using the concept of aging.

Pre-emptive Shortest Job First


In Preemptive Shortest Job First Scheduling, jobs are put into ready queue as they arrive, but as a
process with short burst time arrives, the existing process is preempted or removed from execution,
and the shorter job is executed first.

With processes P1, P2, P3, and P4 arriving at 0, 1, 2, and 3 ms with burst times 21, 3, 6, and 2 ms, the average
waiting time will be (11 + 2 + 4 + 0)/4 = 4.25 ms.

The average waiting time for preemptive Shortest Job First scheduling is less than for both non-preemptive SJF
scheduling and FCFS scheduling.

As the Gantt chart would show, P1 arrives first, so its execution starts immediately, but just after 1 ms process
P2 arrives with a burst time of 3 ms, which is less than the remaining time of P1; hence P1 (1 ms done, 20 ms
left) is preempted and P2 is executed.

While P2 is executing, after 1 ms P3 arrives, but it has a burst time greater than the remaining time of P2, so
the execution of P2 continues. After another millisecond P4 arrives with a burst time of 2 ms; as a result
P2 (2 ms done, 1 ms left) is preempted and P4 is executed.

After the completion of P4, process P2 is picked up and finishes, then P3 is executed, and at last P1.

The Pre-emptive SJF is also known as Shortest Remaining Time First, because at any given point of time,
the job with the shortest remaining time is executed first.

Program for SJF Scheduling

In the below program, we consider the arrival time of all the jobs to be 0.

Also, in the program, we will sort all the jobs based on their burst time and then execute them one by
one, just like we did in FCFS scheduling program.

// C++ program to implement Shortest Job First

#include<bits/stdc++.h>
using namespace std;

struct Process
{
    int pid; // process ID
    int bt;  // burst Time
};

/*
this function is used for sorting all
processes in increasing order of burst time
*/
bool comparison(Process a, Process b)
{
    return (a.bt < b.bt);
}

// function to find the waiting time for all processes
void findWaitingTime(Process proc[], int n, int wt[])
{
    // waiting time for first process is 0
    wt[0] = 0;

    // calculating waiting time
    for (int i = 1; i < n; i++)
    {
        wt[i] = proc[i-1].bt + wt[i-1];
    }
}

// function to calculate turn around time
void findTurnAroundTime(Process proc[], int n, int wt[], int tat[])
{
    // calculating turnaround time by adding bt[i] + wt[i]
    for (int i = 0; i < n; i++)
    {
        tat[i] = proc[i].bt + wt[i];
    }
}

// function to calculate average time
void findAverageTime(Process proc[], int n)
{
    int wt[n], tat[n], total_wt = 0, total_tat = 0;

    // function to find waiting time of all processes
    findWaitingTime(proc, n, wt);

    // function to find turn around time for all processes
    findTurnAroundTime(proc, n, wt, tat);

    // display processes along with all details
    cout << "\nProcesses " << " Burst time "
         << " Waiting time " << " Turn around time\n";

    // calculate total waiting time and total turn around time
    for (int i = 0; i < n; i++)
    {
        total_wt = total_wt + wt[i];
        total_tat = total_tat + tat[i];
        cout << " " << proc[i].pid << "\t\t"
             << proc[i].bt << "\t " << wt[i]
             << "\t\t " << tat[i] << endl;
    }

    cout << "Average waiting time = "
         << (float)total_wt / (float)n;
    cout << "\nAverage turn around time = "
         << (float)total_tat / (float)n;
}

// main function
int main()
{
    Process proc[] = {{1, 21}, {2, 3}, {3, 6}, {4, 2}};
    int n = sizeof proc / sizeof proc[0];

    // sorting processes by burst time.
    sort(proc, proc + n, comparison);

    cout << "Order in which process gets executed\n";
    for (int i = 0; i < n; i++)
    {
        cout << proc[i].pid << " ";
    }

    findAverageTime(proc, n);

    return 0;
}

Order in which process gets executed

4 2 3 1
Processes Burst time Waiting time Turn around time

4 2 0 2

2 3 2 5

3 6 5 11

1 21 11 32

Average waiting time = 4.5

Average turn around time = 12.5

Try implementing the program for SJF with variable arrival time for different jobs, yourself.

Round-Robin

In this approach to CPU scheduling, we have a fixed time quantum and the CPU is allocated to a process for only
that amount of time at once. For example, if we have three processes P1, P2, and P3 and the time quantum is 2 ms,
then P1 is given 2 ms for its execution, then P2 is given 2 ms, then P3 is given 2 ms. After one cycle, P1 is again
given 2 ms, then P2, and so on until all the processes complete their execution.

It is generally used in time-sharing environments, and there is no starvation in round-robin.

Example:
In this example, suppose P1, P2, and P3 all arrive at 0 ms with burst times 10 ms, 5 ms, and 8 ms, and the time
quantum is 2 ms. Every process is given 2 ms per turn: P1 is executed for 2 ms, then P2 for 2 ms, then P3 for
2 ms; again P1 is executed for 2 ms, then P2, and so on. The waiting time and turnaround time of the processes
will be:

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 13ms | 23ms |

| P2 | 10ms | 15ms |

| P3 | 13ms | 21ms |

---------------------------------------------

Total waiting time: (13 + 10 + 13) = 36ms

Average waiting time: (36/3) = 12ms

Total turnaround time: (23 + 15 + 21) = 59ms

Average turnaround time: (59/3) = 19.66ms

Advantages of round-robin:
 There is no starvation in round-robin, because every process regularly gets a chance to execute.

 Used in time-sharing systems.

Disadvantages of round-robin:

 A lot of context switching has to be performed, and the time spent switching is overhead during which the CPU
does no useful work.

Round Robin Scheduling


 A fixed time, called the quantum, is allotted to each process for execution.

 Once a process has executed for the given time period, it is preempted and another process executes for its
time period.

 Context switching is used to save the states of preempted processes. (A short simulation sketch follows.)
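
A minimal Round-Robin simulation in the style of the FCFS and SJF programs above is sketched below; it assumes all processes arrive at time 0 and reuses the illustrative burst times (10, 5 and 8 ms) and the 2 ms quantum from the example, so it reproduces the table shown earlier:

#include <algorithm>
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

int main() {
    // arrival time 0 for all processes; bursts chosen to match the example above
    vector<int> burst = {10, 5, 8};    // P1, P2, P3
    int quantum = 2;
    int n = burst.size();

    vector<int> remaining = burst, completion(n, 0);
    queue<int> ready;
    for (int i = 0; i < n; i++) ready.push(i);

    int time = 0;
    while (!ready.empty()) {
        int p = ready.front(); ready.pop();
        int slice = min(quantum, remaining[p]);
        time += slice;                 // run p for one time slice
        remaining[p] -= slice;
        if (remaining[p] > 0)
            ready.push(p);             // not finished: back to the tail of the queue
        else
            completion[p] = time;      // finished: record its completion time
    }

    double total_wt = 0, total_tat = 0;
    cout << "Process  Burst  Waiting  Turnaround\n";
    for (int i = 0; i < n; i++) {
        int tat = completion[i];       // arrival is 0, so turnaround = completion time
        int wt = tat - burst[i];
        total_wt += wt; total_tat += tat;
        cout << "P" << i + 1 << "\t " << burst[i] << "\t" << wt << "\t " << tat << "\n";
    }
    cout << "Average waiting time = " << total_wt / n << "\n";
    cout << "Average turnaround time = " << total_tat / n << "\n";
    return 0;
}

Running it prints an average waiting time of 12 ms and an average turnaround time of approximately 19.67 ms, matching the table above.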

Priority Scheduling (Non-preemptive)

In this approach, we have a priority number associated with each process and based on that priority
number the CPU selects one process from a list of processes. The priority number can be anything. It is
just used to identify which process is having a higher priority and which process is having a lower
priority. For example, you can denote 0 as the highest priority process and 100 as the lowest priority
process. Also, the reverse can be true i.e. you can denote 100 as the highest priority and 0 as the lowest
priority.

Example:

In this example, at 0 ms we have only one process, P1, so P1 executes for 5 ms because we are using a
non-preemptive technique here. After 5 ms there are three processes in the ready state: P2, P3, and P4. Of these
three, process P4 has the highest priority, so it is executed for 6 ms; after that, process P2 is executed for
3 ms, followed by process P3. The waiting and turnaround times of the processes will be:

---------------------------------------------

| Process | Waiting Time | Turnaround Time |

---------------------------------------------

| P1 | 0ms | 5ms |

| P2 | 10ms | 13ms |

| P3 | 12ms | 20ms |

| P4 | 2ms | 8ms |

---------------------------------------------

Total waiting time: (0 + 10 + 12 + 2) = 24ms

Average waiting time: (24/4) = 6ms


Total turnaround time: (5 + 13 + 20 + 8) = 46ms

Average turnaround time: (46/4) = 11.5ms

Advantages of priority scheduling (non-preemptive):

 Higher priority processes like system processes are executed first.

Disadvantages of priority scheduling (non-preemptive):

 It can lead to starvation if only higher-priority processes keep coming into the ready state.

 If the priorities of two or more processes are the same, then we have to use some other scheduling policy to
break the tie.

Priority CPU Scheduling

In this tutorial we will understand the priority scheduling algorithm, how it works and its advantages and
disadvantages.

In the Shortest Job First scheduling algorithm, the priority of a process is generally the inverse of the
CPU burst time, i.e. the larger the burst time the lower is the priority of that process.

In case of priority scheduling the priority is not always set as the inverse of the CPU burst time, rather it
can be internally or externally set, but yes the scheduling is done on the basis of priority of the process
where the process which is most urgent is processed first, followed by the ones with lesser priority in
order.

Processes with same priority are executed in FCFS manner.

The priority of a process, when internally defined, can be decided based on memory requirements, time limits,
the number of open files, the ratio of I/O burst to CPU burst, etc.

External priorities, on the other hand, are set based on criteria outside the operating system, such as the
importance of the process, funds paid for computer resource use, market factors, etc.

Types of Priority Scheduling Algorithm

Priority scheduling can be of two types:

1. Preemptive Priority Scheduling: If a new process arriving at the ready queue has a higher priority than the
currently running process, the CPU is preempted, which means the processing of the current process is
stopped and the incoming process with higher priority gets the CPU for its execution.

2. Non-Preemptive Priority Scheduling: In the non-preemptive priority scheduling algorithm, if a new process
arrives with a higher priority than the currently running process, the incoming process is put at the head
of the ready queue, which means it will be processed after the execution of the current process.
Example of Priority Scheduling Algorithm

Consider a table of processes with their respective CPU burst times and priorities.

As the Gantt chart would show, the processes are given CPU time purely on the basis of their priorities.

Problem with Priority Scheduling Algorithm

In the priority scheduling algorithm, there is a chance of indefinite blocking, or starvation.

A process is considered blocked when it is ready to run but has to wait for the CPU as some other
process is running currently.

But in the case of priority scheduling, if new higher-priority processes keep coming into the ready queue, then
lower-priority processes waiting in the ready queue may have to wait for long durations before getting the CPU
for execution.

In 1973, when the IBM 7094 at MIT was shut down, a low-priority process was found that had been submitted in 1967
and had not yet been run.
Using Aging Technique with Priority Scheduling

To prevent starvation of any process, we can use the concept of aging, where we keep increasing the priority of a
low-priority process based on its waiting time.

For example, if we decide the aging factor to be 0.5 for each day of waiting, and a process with priority 20
(comparatively low, if 0 is the highest priority) enters the ready queue, then after one day of waiting its
priority is increased to 19.5, and so on.

Doing so, we can ensure that no process will have to wait for indefinite time for getting CPU time for
processing.

Implementing Priority Scheduling Algorithm in C++

Implementing priority scheduling algorithm is easy. All we have to do is to sort the processes based on
their priority and CPU burst time, and then apply FCFS Algorithm on it.

Here is the C++ code for priority scheduling algorithm:


// Implementation of Priority scheduling algorithm
#include<bits/stdc++.h>
using namespace std;

struct Process
{
    // this is the process ID
    int pid;
    // the CPU burst time
    int bt;
    // priority of the process
    int priority;
};

// sort the processes based on priority
bool sortProcesses(Process a, Process b)
{
    return (a.priority > b.priority);
}

// Function to find the waiting time for all processes
void findWaitingTime(Process proc[], int n, int wt[])
{
    // waiting time for first process is 0
    wt[0] = 0;

    // calculating waiting time
    for (int i = 1; i < n; i++)
        wt[i] = proc[i-1].bt + wt[i-1];
}

// Function to calculate turn around time
void findTurnAroundTime(Process proc[], int n, int wt[], int tat[])
{
    // calculating turnaround time by adding bt[i] + wt[i]
    for (int i = 0; i < n; i++)
        tat[i] = proc[i].bt + wt[i];
}

// Function to calculate average time
void findavgTime(Process proc[], int n)
{
    int wt[n], tat[n], total_wt = 0, total_tat = 0;

    // Function to find waiting time of all processes
    findWaitingTime(proc, n, wt);

    // Function to find turn around time for all processes
    findTurnAroundTime(proc, n, wt, tat);

    // Display processes along with all details
    cout << "\nProcesses " << " Burst time "
         << " Waiting time " << " Turn around time\n";

    // Calculate total waiting time and total turn around time
    for (int i = 0; i < n; i++)
    {
        total_wt = total_wt + wt[i];
        total_tat = total_tat + tat[i];
        cout << " " << proc[i].pid << "\t\t"
             << proc[i].bt << "\t " << wt[i]
             << "\t\t " << tat[i] << endl;
    }

    cout << "\nAverage waiting time = "
         << (float)total_wt / (float)n;
    cout << "\nAverage turn around time = "
         << (float)total_tat / (float)n;
}

void priorityScheduling(Process proc[], int n)
{
    // Sort processes by priority
    sort(proc, proc + n, sortProcesses);

    cout << "Order in which processes gets executed \n";
    for (int i = 0; i < n; i++)
        cout << proc[i].pid << " ";

    findavgTime(proc, n);
}

// Driver code
int main()
{
    Process proc[] = {{1, 10, 2}, {2, 5, 0}, {3, 8, 1}};
    int n = sizeof proc / sizeof proc[0];
    priorityScheduling(proc, n);
    return 0;
}

Multiprocessor Operating system


A multiprocessor system consists of several processors that share memory. In a multiprocessor there is more than
one processor in the system. The reason we use a multiprocessor is that sometimes the load on the processor is
very high, while extra input-output and other facilities are not required. This type of operating system is more
reliable: even if one processor goes down, the others can still continue to work. The system is relatively cheap
because only the processors are duplicated, while other devices such as input-output and memory are shared. In a
multiprocessor system, all the processors operate under a single operating system. The multiplicity of processors,
and how the processors work together, is transparent to the user.

In this arrangement the user does not know on which processor their process runs. A process may be divided into
several smaller processes that run independently on different processors. A system can be both multiprogrammed,
by having multiple programs running at the same time, and multiprocessing, by having more than one physical
processor.

In such a system there is more than one CPU and they share a common memory.
Multiprocessing scheduling

In multiprocessor scheduling, multiple CPUs share the load so that various processes run simultaneously. In
general, multiprocessor scheduling is more complex than single-processor scheduling. When the processors are
identical, we can run any process on any processor at any time.

The multiple CPUs in the system are in close communication and share a common bus, memory and other peripheral
devices, so we can say that the system is tightly coupled. These systems are used when we want to process a bulk
amount of data, and they are mainly used in fields such as satellite systems and weather forecasting.

Multiprocessing systems often work on the symmetric multiprocessing model, in which each processor runs an
identical copy of the operating system and these copies communicate with each other. With such a system we can
save money because peripherals, power supplies and other devices are shared. Most importantly, we can do more
work in a shorter period of time. If one processor fails in a multiprocessor system, the whole system does not
halt; only the overall speed slows down. The performance of the multiprocessing system is managed by the
operating system, which assigns different tasks to the different processors in the system. In a multiprocessing
system, a process is broken into threads which can run independently, and these systems allow the threads to run
on more than one processor simultaneously. Since the various processes run in parallel, this is also called
parallel processing: the ability of the CPUs to run various processes simultaneously. In a multiprocessing
system, resources are dynamically shared among the various processors.

A multiprocessor operating system is a kind of regular OS: it handles many system calls at the same time, does
memory management, provides file management, and manages the input-output devices.

There are some extra features which multiprocessor perform:

 Process synchronization

 Resource management

 Scheduling

There are various organizations of multiprocessor operating system:

1. Each CPU has its own OS

In this type of organization there are multiple central processing units in the system, and each CPU has its own
private operating system; memory is shared among all the processors, and the input-output system is also shared.
The whole system is connected by a single bus.

2. Master slave multiprocessor

In this type of multiprocessor model, there is a single data structure that keeps track of the ready processes.
One central processing unit works as the master and the other central processing units work as slaves. All the
processors are handled by the single processor called the master server: the master runs the operating-system
processes and the slaves run the user processes. The memory and input-output devices are shared among all the
processors, and all the processors are connected to a common bus. This system is simple and reduces data sharing,
so it is called asymmetric multiprocessing.
3. Symmetric multiprocessor

Symmetric multiprocessing (SMP) is the third model. In this model there is one copy of the OS in memory, but any
central processing unit can run it. When a system call is made, the CPU on which the system call was made traps
to the kernel and then processes that system call. This model balances processes and memory dynamically. In this
approach each processor is self-scheduling: the scheduler for each processor examines the ready queue and selects
a process to execute. It is possible for all processes to be kept in a common ready queue, or for each processor
to have its own private queue of ready processes.

There are mainly three sources of contention that can be found in a multiprocessor operating system.

 Locking system
As the resources are shared in a multiprocessor system, they need to be protected for safe access among the
multiple processors. The main purpose of a locking scheme is to serialize access to the resources by the
multiple processors.

 Shared data
When multiple processors access the same data at the same time, there may be a chance of data inconsistency,
so to protect against this we have to use some protocol or locking scheme.

 Cache coherence
Shared resource data may be stored in multiple local caches. Suppose two clients have a cached copy of a
memory block and one client changes that block; the other client could be left with an invalid cache without
being notified of the change. This kind of conflict is resolved by maintaining a coherent view of the data.

Earliest Deadline First (EDF) CPU scheduling algorithm


Earliest Deadline First (EDF) is an optimal dynamic priority scheduling algorithm used in real-time
systems.
It can be used for both static and dynamic real-time scheduling.

EDF assigns priorities to jobs for scheduling. It assigns priorities according to the absolute deadline: the task
whose deadline is closest gets the highest priority, and the priorities are assigned and changed dynamically. EDF
is very efficient compared to other scheduling algorithms in real-time systems: it can drive CPU utilization to
about 100% while still guaranteeing the deadlines of all the tasks.

EDF does, however, add some overhead in the kernel. In EDF, if the CPU usage is less than 100%, it means that all
the tasks have met their deadlines. EDF finds an optimal feasible schedule; a feasible schedule is one in which
all the tasks in the system execute within their deadlines. If EDF is not able to find a feasible schedule for all
the tasks in the real-time system, then no other task scheduling algorithm can give a feasible schedule either.
Every task that is ready for execution should announce its deadline to EDF when it becomes runnable.

The EDF scheduling algorithm does not require the tasks or processes to be periodic, nor does it require a fixed
CPU burst time for each task. In EDF, an executing task is preempted whenever another instance with an earlier
deadline becomes ready and active; preemption is thus allowed in the Earliest Deadline First scheduling algorithm.

Example:
Consider two processes P1 and P2.

Let the period of P1 be T1 = 50 and the processing time of P1 be C1 = 25.

Let the period of P2 be T2 = 75 and the processing time of P2 be C2 = 30.
Steps for solution:

1. The deadline of P1 is earlier, so the priority of P1 is higher than that of P2.

2. Initially P1 runs and completes its execution of 25 time units.

3. After 25 time units, P2 starts to execute and runs until time 50, when the next instance of P1 arrives.

4. Now, comparing the deadlines (P1, P2) = (100, 75), P2 continues to execute.

5. P2 completes its processing at time 55.

6. P1 then executes until time 75, when the next instance of P2 arrives.

7. Now, again comparing the deadlines (P1, P2) = (100, 150), P1 continues to execute.

8. The above steps repeat.

9. Finally, at time 150 both P1 and P2 have the same deadline, so P2 continues to execute for its processing
time, after which P1 starts to execute.

Limitations of EDF scheduling algorithm:

 Transient Overload Problem

 Resource Sharing Problem

 Efficient Implementation Problem

EARLIEST DEADLINE FIRST (EDF) SCHEDULING ALGORITHM


Earliest deadline first (EDF) is a dynamic-priority scheduling algorithm for real-time embedded systems. EDF
selects tasks according to their deadlines, so that the task with the earliest deadline has the highest priority;
in other words, the priority of a task is inversely proportional to its absolute deadline. Because which task has
the earliest absolute deadline depends on the current instant, every task arrival is a scheduling event in EDF. A
task that has the highest priority at one instant because of its early deadline may have a lower priority at the
next instant because another task now has an earlier deadline. EDF typically executes in preemptive mode, i.e. the
currently executing task is preempted whenever another task with an earlier deadline becomes active.


EDF is an optimal algorithm, which means that if a task set is feasible then it is guaranteed to be scheduled
successfully by EDF. EDF does not make any specific assumption about the periodicity of tasks, so it is
independent of task periods and can therefore be used to schedule aperiodic tasks as well. If two tasks have the
same absolute deadline, one of them is chosen arbitrarily.


Example of EARLIEST DEADLINE FIRST (EDF) SCHEDULING ALGORITHM

An example of EDF is given below for task set of table-2.

Task Release time(ri) Execution Time(Ci) Deadline (Di) Time Period(Ti)

T1 0 1 4 4

T2 0 2 6 6

T3 0 3 8 8

Table 2. Task set

U = 1/4 + 2/6 + 3/8 = 0.25 + 0.333 + 0.375 ≈ 0.958 = 95.8%

As processor utilization is less than 1 or 100% so task set is surely schedulable by EDF.
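
The schedulability test used here, namely that the total utilization U = sum(Ci/Ti) must not exceed 1 when deadlines equal periods, can be sketched in a few lines; the task values are those of Table 2:

#include <iostream>
#include <vector>
using namespace std;

struct Task { double C; double T; };   // execution time and period (deadline = period)

int main() {
    vector<Task> set = {{1, 4}, {2, 6}, {3, 8}};   // the task set of Table 2

    double U = 0;
    for (const auto &t : set)
        U += t.C / t.T;                // utilization contributed by each task

    cout << "U = " << U << "\n";       // about 0.958
    cout << (U <= 1.0 ? "schedulable under EDF\n"
                      : "not schedulable under EDF\n");
    return 0;
}

For independent periodic tasks with deadlines equal to their periods, this utilization test is an exact (necessary and sufficient) condition for EDF schedulability.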
Figure 2. Earliest deadline first scheduling of task set in Table -2

1. At t=0 all the tasks are released, but priorities are decided according to their absolute deadlines
so T1 has higher priority as its deadline is 4 earlier than T2 whose deadline is 6 and T3 whose
deadline is 8, that’s why it executes first.

2. At t=1 again absolute deadlines are compared and T2 has shorter deadline so it executes and
after that T3 starts execution but at t=4 T1 comes in the system and deadlines are compared, at
this instant both T1 and T3 has same deadlines so ties are broken randomly so we continue to
execute T3.

3. At t=6 T2 is released; now the deadline of T1 is earlier than that of T2, so T1 starts execution and
after it finishes T2 begins to execute. At t=8 T1 is released again, and T1 and T2 have the same
deadline, i.e. t=12, so the tie is broken randomly; T2 continues its execution and then T1 completes.
At t=12 T1 and T2 arrive in the system simultaneously; comparing absolute deadlines, T1 and the
running task T3 have the same deadline (t=16), so the tie is broken randomly and we continue to
execute T3.

4. At t=13 T1 begins its execution and ends at t=14. Now T2 is the only task in the system, so it
completes its execution.

5. At t=16 T1 and T3 are released together; priorities are decided according to absolute deadlines, so
T1 executes first as its deadline is t=20 and T3's deadline is t=24. After T1 completes, T3 starts and
runs until t=18, when T2 arrives in the system; by deadline comparison both have the same deadline
t=24, so the tie is broken randomly and we continue to execute T3.

6. At t=20 both T1 and T2 are in the system and both have the same deadline t=24, so again the tie is
broken randomly and T2 executes. After that T1 completes its execution. In the same way the system
continues to run without any problem by following the EDF algorithm.

Transient Overload Condition & Domino Effect in Earliest Deadline First

Transient overload is a short-term overload on the processor. A transient overload condition occurs when
the computation time demanded by the task set at an instant exceeds the processor time available at that
instant. Due to transient overload, tasks miss their deadlines. Transient overload may occur for many
reasons, such as changes in the environment, simultaneous arrival of asynchronous jobs, or system
exceptions. In real-time operating systems under EDF, when a task misses its deadline during a transient
overload, each of the other tasks may start missing its deadline one after the other in sequence; such an
effect is called the domino effect. It jeopardizes the behavior of the whole system. An example of such a
condition is given below.

Task Release time(ri) Execution Time(Ci) Deadline (Di) Period(Ti)

T1 0 2 5 5

T2 0 2 6 6

T3 0 2 7 7
T4 0 2 8 8

Figure 3. Domino effect under Earliest deadline first

As shown in the above figure, at t=15 T1 misses its deadline, after which at t=16 T4 misses its deadline,
then T2 and finally T3, so the whole system collapses. This shows that EDF has a shortcoming due to the
domino effect, and as a result critical tasks may miss their deadlines. The solution suggested for this
problem is another scheduling algorithm, least laxity first (LLF), which is also an optimal scheduling
algorithm. The demand bound function and demand bound analysis are also used for schedulability
analysis of a given set of tasks.
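
The overload in this example is visible from the utilization alone; a quick check (illustrative sketch, with the task parameters taken from the table above):

```python
# Utilization of the domino-effect task set: every task needs 2 units per period.
tasks = [(2, 5), (2, 6), (2, 7), (2, 8)]      # (Ci, Ti) for T1..T4
U = sum(c / t for c, t in tasks)
print(f"U = {U:.3f}")                         # ~1.27: demand exceeds processor capacity,
                                              # so deadline misses are unavoidable and EDF
                                              # lets them cascade into the domino effect.
```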

Advantages of EDF over rate monotonic

 No need to define priorities offline

 It has less context switching than rate monotonic

 It can utilize the processor up to a 100% utilization factor, compared to the lower bound of rate monotonic

Disadvantages of EDF over rate monotonic

 It is less predictable, because the response times of tasks are variable under EDF, whereas they are
much more stable under rate monotonic or other fixed-priority algorithms.

 EDF provides less control over the execution of individual tasks

 It has high overheads

Rate-monotonic scheduling
Rate monotonic scheduling is a priority algorithm that belongs to the static priority scheduling category
of real-time operating systems. It is preemptive in nature. The priority is decided according to the cycle
time (period) of the processes involved: the process with the shortest period gets the highest priority.
Thus, if a process with a higher priority becomes ready, it will preempt any lower-priority running
process. In other words, the priority of a process is inversely proportional to its time period.

A set of processes is guaranteed to be schedulable under rate monotonic if it satisfies the following
equation:

U = C1/T1 + C2/T2 + ... + Cn/Tn ≤ n( 2^(1/n) - 1 )

where n is the number of processes in the process set, Ci is the computation time of process i, Ti is the
time period of process i, and U is the processor utilization.

Example:
An example to understand the working of Rate monotonic scheduling algorithm.

Processes Execution Time (C) Time period (T)

P1 3 20

P2 2 5

P3 2 10

n( 2^(1/n) - 1 ) = 3( 2^(1/3) - 1 ) ≈ 0.7798

U = 3/20 + 2/5 + 2/10 = 0.75

The combined utilization U = 0.75 is below the threshold of 0.7798 for these three processes, which means
the above set of processes is schedulable and thus satisfies the above equation of the algorithm.
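
The test used above can be packaged as a small helper. The sketch below (the function names rm_bound and rm_schedulable are hypothetical) applies the same bound, which is a sufficient but not a necessary condition:

```python
def rm_bound(n):
    """Liu & Layland utilization bound for n periodic tasks under rate monotonic."""
    return n * (2 ** (1 / n) - 1)

def rm_schedulable(tasks):
    """tasks: list of (Ci, Ti). Returns (U, bound, guaranteed_schedulable)."""
    U = sum(c / t for c, t in tasks)
    bound = rm_bound(len(tasks))
    return U, bound, U <= bound

U, bound, ok = rm_schedulable([(3, 20), (2, 5), (2, 10)])   # P1, P2, P3 above
print(f"U = {U:.3f}, bound = {bound:.4f}, guaranteed schedulable: {ok}")
# U = 0.750, bound = 0.7798, guaranteed schedulable: True
```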

1. Scheduling time –
For calculating the scheduling time of the algorithm we take the LCM of the time periods of
all the processes. LCM( 20, 5, 10 ) for the above example is 20. Thus we can analyse the
schedule over 20 time units.
2. Priority –
As discussed above, the priority is highest for the process with the shortest time period.
Thus P2 will have the highest priority, after that P3 and lastly P1.

P2 > P3 > P1

3. Representation and flow –

The above figure shows that process P2 executes for 2 time units in every period of 5 time units, process
P3 executes for 2 time units in every period of 10 time units, and process P1 executes for 3 time units in
every period of 20 time units. This has to be kept in mind while following the execution of the algorithm
below.

Process P2 will run first for 2 time units because it has the highest priority. After completing its two
units, P3 will get the chance and thus it will run for 2 time units.

Since process P2 must run for 2 time units in each interval of 5 time units and process P3 for 2 time units
in each interval of 10 time units, both have fulfilled their demand for now, so process P1, which has the
lowest priority, gets the CPU and runs for 1 time unit. At this point the first interval of five time units has
completed, so P2 is released again and, because of its priority, preempts P1 and runs for 2 time units. As
P3 has already completed its 2 time units for its 10 time unit interval, P1 gets the CPU again and runs for
its remaining 2 time units, completing the 3 units of execution it needs within 20 time units.

The interval 9-10 remains idle as no process needs it. At time 10, process P2 runs for 2 time units,
satisfying its demand for the third interval (10-15), and process P3 then runs for 2 time units, completing
its execution. The interval 14-15 again remains idle for the same reason. At time 15, process P2 executes
for 2 time units, completing its execution. This is how rate monotonic scheduling works.
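
The timeline just described can be reproduced with a small fixed-priority simulation. The sketch below (a hypothetical rm_schedule helper, assuming unit time steps and deadlines equal to periods) runs the highest-priority released job in each time unit:

```python
# Minimal rate-monotonic simulation sketch for P1(C=3,T=20), P2(C=2,T=5), P3(C=2,T=10).
# Assumptions: unit time steps, deadlines equal to periods, priorities fixed by period.

def rm_schedule(tasks, horizon):
    """tasks: list of (name, exec_time, period); returns the per-unit execution trace."""
    remaining = {name: 0 for name, _, _ in tasks}
    by_priority = sorted(tasks, key=lambda task: task[2])    # shortest period first
    trace = []
    for t in range(horizon):
        for name, exec_time, period in tasks:
            if t % period == 0:
                remaining[name] = exec_time                  # a new job is released
        running = next((name for name, _, _ in by_priority if remaining[name] > 0), "idle")
        if running != "idle":
            remaining[running] -= 1
        trace.append(running)
    return trace

print(" ".join(rm_schedule([("P1", 3, 20), ("P2", 2, 5), ("P3", 2, 10)], 20)))
# P2 P2 P3 P3 P1 P2 P2 P1 P1 idle P2 P2 P3 P3 idle P2 P2 idle idle idle
```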

Conditions :
The analysis of Rate monotonic scheduling assumes a few properties that every process should possess.
They are:

1. Processes involved should not share the resources with other processes.
2. Deadlines must be equal to the time periods. Deadlines are deterministic.

3. The process with the highest priority that needs to run will preempt all the other processes.

4. Priorities must be assigned to all the processes according to the protocol of Rate monotonic
scheduling.

Advantages :

1. It is easy to implement.

2. If any static priority assignment algorithm can meet the deadlines then rate monotonic
scheduling can also do the same. It is optimal.

3. It takes the time periods of the processes into account, unlike time-sharing algorithms such as Round
Robin, which neglect the scheduling needs of the processes.

Disadvantages :

1. It is very difficult to support aperiodic and sporadic tasks under RMA.

2. RMA is not optimal when task periods and deadlines differ.

RATE MONOTONIC (RM) SCHEDULING ALGORITHM
The Rate Monotonic scheduling algorithm is a simple rule that assigns priorities
to different tasks according to their time periods: the task with the smallest time
period has the highest priority and the task with the longest time period has the
lowest priority for execution. As the time period of a task does not change, its
priority does not change over time either; therefore Rate Monotonic is a fixed
priority algorithm. The priorities are decided before the start of execution and
they do not change afterwards.
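
In code, this fixed priority assignment is simply a sort by period. A minimal sketch (illustrative only), using the periods of the task set analysed below, where priority 1 is the highest:

```python
# Fixed RM priority assignment: shorter time period -> higher priority (lower number).
periods = {"T1": 3, "T2": 4, "T3": 6}          # task -> time period (see Table 1 below)
priority = {name: rank
            for rank, (name, _) in enumerate(sorted(periods.items(),
                                                    key=lambda item: item[1]), start=1)}
print(priority)                                 # {'T1': 1, 'T2': 2, 'T3': 3}
```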

Introduction to RATE MONOTONIC (RM) SCHEDULING ALGORITHM
The rate monotonic scheduling algorithm works on the principle of preemption.
Preemption occurs on a given processor when a higher priority task blocks a
lower priority task from execution. This blocking occurs due to the priority levels
of the different tasks in a given task set. Rate monotonic is a preemptive
algorithm, which means that if a task with a shorter period arrives during
execution it gains a higher priority and can block, or preempt, the currently
running task. In RM, priorities are assigned according to time period: the priority
of a task is inversely proportional to its time period, so the task with the lowest
time period has the highest priority and the task with the highest period has the
lowest priority. A given task set is schedulable under the rate monotonic
scheduling algorithm if it satisfies the following equation:

U = C1/T1 + C2/T2 + ... + Cn/Tn ≤ n( 2^(1/n) - 1 )

where n is the number of tasks in the task set, Ci is the execution time and Ti the
time period of task i.

Example of RATE MONOTONIC (RM) SCHEDULING ALGORITHM
For example, we have a task set that consists of three tasks as follows

Tasks Release time(ri) Execution time(Ci) Deadline (Di) Time period(Ti)

T1 0 0.5 3 3

T2 0 1 4 4

T3 0 2 6 6

Table 1. Task set


U = 0.5/3 + 1/4 + 2/6 = 0.167 + 0.25 + 0.333 = 0.75

As the processor utilization 0.75 is below the bound 3( 2^(1/3) - 1 ) ≈ 0.7798 (and
well below 100%), the task set is schedulable and also satisfies the above equation
of the rate monotonic scheduling algorithm.

Figure 1. RM scheduling of Task set in table 1.


The RM schedule of the task set given in Table 1 is shown in Figure 1. The
explanation of the schedule is as follows:
1. According to the RM scheduling algorithm, the task with the shorter period has
the higher priority, so T1 has the highest priority, T2 has intermediate priority
and T3 has the lowest priority. At t=0 all the tasks are released; T1 has the
highest priority, so it executes first till t=0.5.
2. At t=0.5 task T2 has higher priority than T3, so it executes first for one time
unit till t=1.5. After its completion only one task remains in the system, T3,
so it starts its execution and executes till t=3.
3. At t=3 T1 is released; as it has higher priority than T3, it preempts (blocks)
T3 and executes till t=3.5. After that the remaining part of T3 executes.
4. At t=4 T2 is released and completes its execution, as there is no other task
running in the system at this time.
5. At t=6 both T1 and T3 are released at the same time, but T1 has higher
priority due to its shorter period, so it executes first till t=6.5; after
that T3 starts running and executes till t=8.
6. At t=8 T2 with higher priority than T3 releases so it preempts T3 and starts
its execution.
7. At t=9 T1 is released again; it has higher priority than the waiting T3, so it
executes first, and at t=9.5 T3 executes its remaining part. Similarly, the
execution goes on.

In simple words, "the task with the shortest periodicity executes with the
highest priority."

Rate-monotonic is a priority-based scheduling scheme. The scheme is pre-emptive:
it ensures that a task is pre-empted if another task with a shorter period is
expected to run.

This scheme is typically used in embedded systems where the nature of the
scheduling is deterministic. When implementing RMS scheduling in applications,
the rates should be designed/picked such that utilization of the system is high.

In other words, the task periods and execution times should be designed such that
all tasks get a fair chance to execute, or at least get a chance to run when they
are expected to run, because the scheduler always gives priority to the tasks with
shorter periods.

Consider two tasks: task1 with a rate (period) of 10 ms and task2 with a rate of
20 ms. As per RMS, task1 should always execute at its 10 ms rate, as it is the task
with the shorter period. Task2 will execute at its 20 ms rate only when task1 is
not executing.

Consider a case in which the tasks are implemented such that the execution time of
task1 is 10 ms and that of task2 is also 10 ms. In this scenario task2 will never
execute, because task1 occupies every 10 ms period completely. So the tasks need to
be designed such that the other tasks at least get a chance to execute. If task1
instead takes 8 ms of execution time and task2 takes around 10 ms, then task2 does
make progress: task1 leaves 2 ms of CPU free in every 10 ms period, so one 10 ms
instance of task2 completes only after roughly 50 ms. (Since the combined demand
8/10 + 10/20 = 1.3 exceeds the processor capacity, task2 still cannot meet a 20 ms
deadline.)
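
A quick budget calculation, using the numbers from this example, makes the point (an illustrative sketch, not a formal response-time analysis):

```python
# task1: C = 8 ms, T = 10 ms (higher RM priority); task2 needs about 10 ms of work.
c1, t1 = 8, 10
task2_work, t2 = 10, 20
slack_per_period = t1 - c1                            # 2 ms of CPU left free every 10 ms
periods_needed = -(-task2_work // slack_per_period)   # ceiling division -> 5 periods
print(f"task2 finishes one instance after about {periods_needed * t1} ms")   # ~50 ms
print(f"total demand U = {c1 / t1 + task2_work / t2:.1f}")                   # 1.3 > 1
```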

Both the execution times and the rates of the tasks must be examined before
implementing an RMS scheme for an application.
