OS Notes
The content is described in detail and should answer most of your queries. The tutorial also contains numerical examples based on previous years' GATE questions, which will help you approach problems in a practical manner.
The Operating System Tutorial is divided into various parts based on its functions, such as Process Management, Process Synchronization, Deadlocks, and File Management.
We need a system that can act as an intermediary and manage all the processes and resources present in the system.
An Operating System can be defined as an interface between the user and the hardware. It is responsible for the execution of all processes, resource allocation, CPU management, file management, and many other tasks.
Processor Management
In a multi-programming environment, the OS decides the order in which
processes have access to the processor, and how much processing time
each process has. This function of OS is called Process Scheduling. An
Operating System performs the following activities for Processor
Management.
An operating system manages the processor’s work by allocating various
jobs to it and ensuring that each process receives enough time from the
processor to function properly.
Keeps track of the status of processes. The program which performs this task is known as the traffic controller.
Allocates the CPU (that is, the processor) to a process.
De-allocates the processor when a process is no longer required.
Device Management
An OS manages device communication via its respective drivers. It performs
the following activities for device management.
Keeps track of all devices connected to the system. Designates a program responsible for every device, known as the Input/Output controller.
Decides which process gets access to a certain device and for how long.
Allocates devices effectively and efficiently. De-allocates devices when they are no longer required.
There are various input and output devices. An OS controls the working
of these input-output devices.
It receives the requests from these devices, performs a specific task, and
communicates back to the requesting process.
File Management
A file system is organized into directories for efficient and easy navigation and usage. These directories may contain other directories and other files. An
Operating System carries out the following file management activities. It
keeps track of where information is stored, user access settings, the status
of every file, and more. These facilities are collectively known as the file
system. An OS keeps track of information regarding the creation, deletion,
transfer, copy, and storage of files in an organized way. It also maintains the
integrity of the data stored in these files, including the file directory structure,
by protecting against unauthorized access.
An operating system performs the following functions:
o Processor Management
o Act as a Resource Manager
o Memory Management
o File Management
o Security
o Device Management
o Input devices / Output devices
o Deadlock Prevention
o Time Management
o Coordinate with system software or hardware
When the first electronic computers were developed in the 1940s, they were created without any operating system. In those early days, users had full access to the computer machine and wrote a program for each task in absolute machine language. Programmers could perform and solve only simple mathematical calculations in that computer generation, and such calculations did not require an operating system.
The first operating system (OS) was created in the early 1950s and was known as GMOS; General Motors developed it for IBM computers. Second-generation operating systems were based on single-stream batch processing: similar jobs were collected in groups or batches and submitted to the operating system on punch cards, to be completed one after another. On each job's completion (whether normal or abnormal), control transferred back to the operating system, which cleaned up after the finished job and then read in and initiated the next job from the punch cards. The new machines of this era were called mainframes; they were very big and were used by professional operators.
During the late 1960s, operating system designers developed systems that could perform multiple tasks on a single computer simultaneously, a technique called multiprogramming. The introduction of multiprogramming played a very important role in the development of operating systems, as it allows the CPU to be kept busy at all times by switching among different tasks. The third generation also saw the phenomenal growth of minicomputers, starting in 1961 with the DEC PDP-1. These PDPs led to the creation of personal computers in the fourth generation.
Advantages of Multiprocessor Systems
1. It is a very reliable system because multiple processors can share the work between them, and the work is completed through collaboration.
2. Parallel processing is achieved via multiprocessing.
3. When multiple processors work at the same time, the throughput may increase.
4. Multiple processors can execute multiple processes simultaneously.
Disadvantages of Multiprocessor Systems
1. It requires a complex configuration.
Here, you will learn the main differences between the Multiprocessor and
Multicore systems. Various differences between the Multiprocessor and
Multicore system are as follows:
Reliability: A multiprocessor system is more reliable than a multicore system; if one of the processors in the system fails, the other processors will not be affected. A multicore system is less reliable than a multiprocessor system.
Traffic: A multiprocessor system has higher traffic than a multicore system; a multicore system has less traffic than a multiprocessor system.
In a Batch operating system, access is given to more than one person; they submit their respective jobs to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and then executes them one by one. The users collect their respective outputs when all the jobs have been executed.
The purpose of this operating system was mainly to transfer control from one
job to another as soon as the job was completed. It contained a small set of
programs called the resident monitor that always resided in one part of the
main memory. The remaining part is used for servicing jobs.
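To make the first-come, first-served behaviour concrete, here is a minimal sketch in C (the job names and burst times are hypothetical, not from the text) that computes when each queued job starts, finishes, and how long it waited:

    #include <stdio.h>

    /* A batch queue served first come, first served: jobs run strictly in
     * arrival order, and each job waits for every job ahead of it. */
    struct job { const char *name; int burst; };   /* burst time in seconds */

    int main(void) {
        struct job queue[] = { {"J1", 6}, {"J2", 2}, {"J3", 4} };  /* hypothetical */
        int n = sizeof queue / sizeof queue[0];
        int clock = 0;                             /* current time */
        for (int i = 0; i < n; i++) {
            printf("%s waits %d, runs from %d to %d\n",
                   queue[i].name, clock, clock, clock + queue[i].burst);
            clock += queue[i].burst;               /* next job starts when this ends */
        }
        return 0;
    }

Because J1 runs first regardless of its length, a very long J1 delays everything behind it, which is exactly the starvation problem described below.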
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates CPU idle time between two jobs.
Disadvantages of Batch OS
1. Starvation
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution
time of J1 is very high, then the other four jobs will never be executed, or
they will have to wait for a very long time. Hence the other processes get
starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's
input. If a job requires the input of two numbers from the console, then it will
never get it in the batch processing scenario since the user is not present at
the time of execution.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various system resources are used efficiently, but they do not provide any user interaction with the computer system.
The main duty of the Operating System is to complete each given process within the stipulated time. The term process is therefore central to the subject of Operating Systems. Now, let us learn about the term process in depth.
Definition of Process
Basically, a process is a program in execution.
When a program is loaded into memory, it may be divided into four components (stack, heap, text, and data) to form a process. The simplified depiction of a process in main memory is shown in the diagram below.
Now, let us understand the Process Control Block with the help of the
components present in the Process Control Block.
1. Process ID
2. Process State
3. Program Counter
4. CPU Registers
5. CPU Scheduling Information
6. Accounting Information
7. Memory Management Information
8. Input Output Status Information
Now, let us understand each component in detail.
1) Process ID
It is an identification mark assigned to the process. The operating system uses it to find and identify the process.
2) Process State
Now, let us look at each of the process states in detail.
i) New State
A process is in the new state when it has just been created and has not yet been admitted into main memory.
ii) Ready State
The ready state, where a process waits for the CPU to be assigned, is the first state a process enters after being admitted to main memory. The operating system pulls new processes from secondary memory and places them all in main memory.
The term "ready state processes" refers to processes that are in the main memory and are prepared for execution. Numerous processes could be ready at any given moment.
iii) Running State
The Operating System selects one of the processes from the ready state based on the scheduling mechanism. As a result, if our system has only one CPU, there will only ever be one process running at any given moment. We can execute n processes concurrently in the system if there are n processors.
iv) Waiting or Blocked State
A process moves to the waiting (blocked) state when it must wait for an event, such as the completion of an I/O operation, and it returns to the ready state once that event occurs.
v) Terminated State
A process enters the termination state once it has completed its execution.
The operating system will end the process and erase the whole context of
the process (Process Control Block).
3) Program Counter
The program counter holds the address of the next instruction to be executed for this process.
4) CPU Registers
When the process is in a running state, here is where the contents of the
processor registers are kept. Accumulators, index and general-purpose
registers, instruction registers, and condition code registers are the many
categories of CPU registers.
8) Input Output Status Information
This Input Output Status Information section consists of input- and output-related information, which includes the list of I/O devices allocated to the process, the status of its open files, and so on.
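Putting the eight components together, a Process Control Block can be pictured as a single record the kernel keeps per process. The C struct below is only an illustrative sketch; the field names, types, and sizes are assumptions and do not match any real kernel's layout:

    #include <stdint.h>

    /* Illustrative PCB; real kernels keep far more state, but the shape
     * mirrors the eight components listed above. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int             pid;              /* 1) Process ID */
        enum proc_state state;            /* 2) Process State */
        uint64_t        program_counter;  /* 3) address of the next instruction */
        uint64_t        regs[16];         /* 4) saved CPU registers */
        int             priority;         /* 5) CPU scheduling information */
        long            cpu_time_used;    /* 6) accounting information */
        void           *page_table;       /* 7) memory management information */
        int             open_files[16];   /* 8) I/O status information */
    };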
An operating system designed for fault tolerance cannot be disrupted by a single point of failure. It ensures business continuity and the high availability of crucial applications and systems regardless of any failures.
Fault tolerance is reliant on aspects like load balancing and failover, which remove the
risk of a single point of failure. It will typically be part of the operating system’s interface,
which enables programmers to check the performance of data throughout a transaction.
Normal functioning
This describes a situation when a fault-tolerant system encounters a fault but continues to
function as usual. This means the system sees no change in performance metrics like
throughput or response time.
Graceful degradation
When a fault prevents full operation, a fault-tolerant system may instead continue at a reduced level of performance rather than failing completely; this is known as graceful degradation.
Diversity
If a system’s main electricity supply fails, potentially due to a storm that causes a power
outage or affects a power station, it will not be possible to access alternative electricity
sources. In this event, fault tolerance can be sourced through diversity, which provides
electricity from sources like backup generators that take over when a main power failure
occurs.
Some diverse fault-tolerance options result in the backup not having the same level of
capacity as the primary source. This may, in some cases, require the system to ensure
graceful degradation until the primary power source is restored.
Redundancy
Fault-tolerant systems use redundancy to remove the single point of failure. The system is
equipped with one or more power supply units (PSUs), which do not need to power the
system when the primary PSU functions as normal. In the event the primary PSU fails or
suffers a fault, it can be removed from service and replaced by a redundant PSU, which
takes over system function and performance.
Replication
Replication provides fault tolerance by running multiple identical instances of a system or component in parallel, keeping them synchronized so that any one of them can take over or cross-check the others.
Hardware systems
Hardware systems can be backed up by systems that are identical or equivalent to them. A
typical example is a server made fault-tolerant by deploying an identical server that runs in
parallel to it and mirrors all its operations, such as the redundant array of inexpensive disks
(RAID), which combines physical disk components to achieve redundancy and improved
performance.
Software systems
Software systems can be made fault-tolerant by backing them up with other software. A
common example is backing up a database that contains customer data to ensure it can
continuously replicate onto another machine. As a result, in the event that a primary
database fails, normal operations will continue because they are automatically replicated
and redirected onto the backup database.
Power sources
Power sources can also be made fault-tolerant by using alternative sources to support
them. One approach is to run devices on an uninterruptible power supply (UPS). Another is
to use backup power generators that ensure storage and hardware, heating, ventilation, and
air conditioning (HVAC) continue to operate as normal if the primary power source fails.
Boy A decides upon some clothes to buy and heads to the changing room to try them out. Now, while boy A is inside the changing room, there is an ‘occupied’ sign on it, indicating that no one else can come in. Boy B has to use the changing room too, so he has to wait till boy A is done using the changing room.
Once boy A comes out of the changing room, the sign on it changes
from ‘occupied’ to ‘vacant’ – indicating that another person can use
it. Hence, boy B proceeds to use the changing room, while the sign
displays ‘occupied’ again.
The changing room is nothing but the critical section, boy A and boy
B are two different processes, while the sign outside the changing
room indicates the process synchronization mechanism being used.
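The ‘occupied/vacant’ sign behaves exactly like a binary semaphore. Below is a minimal POSIX threads sketch in C; the changing_room semaphore and try_clothes function are invented names for illustration:

    #include <stdio.h>
    #include <pthread.h>
    #include <semaphore.h>

    sem_t changing_room;              /* 1 = vacant, 0 = occupied */

    void *try_clothes(void *who) {
        sem_wait(&changing_room);     /* wait until the sign says 'vacant' */
        printf("%s is inside; sign now reads 'occupied'\n", (char *)who);
        /* ... trying on clothes (the critical section) ... */
        printf("%s leaves; sign now reads 'vacant'\n", (char *)who);
        sem_post(&changing_room);     /* flip the sign back */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&changing_room, 0, 1);          /* room starts vacant */
        pthread_create(&a, NULL, try_clothes, "Boy A");
        pthread_create(&b, NULL, try_clothes, "Boy B");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        sem_destroy(&changing_room);
        return 0;
    }

sem_wait blocks boy B while the count is 0 (occupied), and sem_post flips the sign back to vacant so he can enter.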
Introduction to Process Synchronization
Overview
In the world of modern computing, operating systems (OS) play a critical role in ensuring that a computer can perform multiple tasks simultaneously. One of the key techniques used to achieve this is concurrency. Concurrency in OS allows multiple tasks or processes to run concurrently, providing simultaneous execution and significantly improving system efficiency. However, the implementation of concurrency in operating systems brings its own set of challenges and complexities. In this article, we will explore the concept of concurrency in OS, covering its principles, advantages, limitations, and the problems it presents.
Multitasking involves the execution of multiple tasks by rapidly switching between them. Each
task gets a time slot, and the OS switches between them so quickly that it seems as if they are
running simultaneously.
Multithreading takes advantage of modern processors with multiple cores. It allows different
threads of a process to run on separate cores, enabling true parallelism within a single process.
Multiprocessing goes a step further by distributing multiple processes across multiple physical
processors or cores, achieving parallel execution at a higher level.
The need for concurrent execution arises from the desire to utilize computer resources
efficiently. Here are some key reasons why concurrent execution is essential:
Resource Utilization:
Concurrency ensures that the CPU, memory, and other resources are used optimally. Without
concurrency, a CPU might remain idle while waiting for I/O operations to complete, leading to
inefficient resource utilization.
Responsiveness:
Concurrent systems are more responsive. Users can interact with multiple applications
simultaneously, and the OS can switch between them quickly, providing a smoother user
experience.
Throughput:
Concurrency increases the overall throughput of the system. Multiple tasks can progress
simultaneously, allowing more work to be done in a given time frame.
Real-Time Processing:
Certain applications, such as multimedia playback and gaming, require real-time processing.
Concurrency ensures that these applications can run without interruptions, delivering a
seamless experience.
Process Isolation:
Each process should have its own memory space and resources to prevent interference between
processes. This isolation is critical to maintain system stability.
Synchronization:
Concurrency introduces the possibility of data races and conflicts. Synchronization mechanisms
like locks, semaphores, and mutexes are used to coordinate access to shared resources and
ensure data consistency.
Deadlock Avoidance:
OSs implement algorithms to detect and avoid deadlock situations where processes are stuck
waiting for resources indefinitely. Deadlocks can halt the entire system.
Fairness:
The OS should allocate CPU time fairly among processes to prevent any single process from
monopolizing system resources.
Problems in Concurrency
While concurrency offers numerous benefits, it also introduces a range of challenges and
problems:
Race Conditions:
They occur when multiple threads or processes access shared resources simultaneously without
proper synchronization. In the absence of synchronization mechanisms, race conditions can lead
to unpredictable behavior and data corruption. This can result in data inconsistencies, application crashes, or even security vulnerabilities if sensitive data is involved.
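A classic way to observe a race condition is to let two threads increment a shared counter with no synchronization. This minimal C sketch (the counts are arbitrary) will usually print a total below 200000 because increments from the two threads interleave:

    #include <stdio.h>
    #include <pthread.h>

    long counter = 0;                 /* shared resource, no lock */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;                /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("expected 200000, got %ld\n", counter);  /* often less */
        return 0;
    }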
Deadlocks:
A deadlock arises when two or more processes or threads become unable to progress as they
are mutually waiting for resources that are currently held by each other. This situation can bring
the entire system to a standstill, causing disruptions and frustration for users.
Priority Inversion:
Priority inversion occurs when a lower-priority task temporarily holds a resource that a higher-
priority task needs. This can lead to delays in the execution of high-priority tasks, reducing
system efficiency and responsiveness.
Resource Starvation:
Resource starvation occurs when some processes are unable to obtain the resources they need,
leading to poor performance and responsiveness for those processes. This can happen if the OS
does not manage resource allocation effectively or if certain processes monopolize resources.
Advantages of Concurrency
Concurrency in operating systems offers several distinct advantages:
Improved Performance:
Concurrency significantly enhances system performance by effectively utilizing available
resources. With multiple tasks running concurrently, the CPU, memory, and I/O devices are
continuously engaged, reducing idle time and maximizing overall throughput.
Responsiveness:
Concurrency ensures that users enjoy fast response times, even when juggling multiple
applications. The ability of the operating system to swiftly switch between tasks gives the
impression of seamless multitasking and enhances the user experience.
Scalability:
Concurrency allows systems to scale horizontally by adding more processors or cores, making it
suitable for both single-core and multi-core environments.
Fault Tolerance:
Concurrency contributes to fault tolerance, a critical aspect of system reliability. In
multiprocessor systems, if one processor encounters a failure, the remaining processors can
continue processing tasks. This redundancy minimizes downtime and ensures uninterrupted
system operation.
Limitations of Concurrency
Despite its advantages, concurrency has its limitations:
Complexity:
Debugging and testing concurrent code is often more challenging than sequential code. The
potential for hard-to-reproduce bugs necessitates careful design and thorough testing.
Overhead:
Synchronization mechanisms introduce overhead, which can slow down the execution of
individual tasks, especially in scenarios where synchronization is excessive.
Race Conditions:
Dealing with race conditions requires careful consideration during design and rigorous testing to
prevent data corruption and erratic behavior.
Resource Management:
Balancing resource usage to prevent both resource starvation and excessive contention is a
critical task. Careful resource management is vital to maintain system stability.
Issues of Concurrency
Concurrency introduces several critical issues that OS designers and developers must address:
Security:
Concurrent execution may inadvertently expose data to unauthorized access or data leaks.
Managing access control and data security in a concurrent environment is a non-trivial task that demands thorough consideration.
Compatibility:
Compatibility issues can arise when integrating legacy software into concurrent environments,
potentially limiting their performance.
Testing and Debugging:
Debugging concurrent code is a tough task. Identifying and reproducing race conditions and
other concurrency-related bugs can be difficult.
Scalability:
While concurrency can improve performance, not all applications can be easily parallelized.
Identifying tasks that can be parallelized and those that cannot is crucial in optimizing system
performance.
Mutual Exclusion
The mutual exclusion condition states that some resources can only be
accessed by one process at a time. This means that once a process acquires
a resource, other processes are denied access until it is released. For
example, in a multi-threaded application, a critical section of code may be
protected by a lock that can only be held by one thread at a time.
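As a sketch of such a lock, the racy shared counter from the earlier example can be fixed with a pthread mutex, so that only one thread executes the critical section at a time (a minimal fragment, meant to replace the worker shown earlier):

    #include <pthread.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter the critical section */
            counter++;                    /* now a safe read-modify-write */
            pthread_mutex_unlock(&lock);  /* leave the critical section */
        }
        return NULL;
    }

With the mutex in place, the earlier program prints exactly 200000.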
Hold and Wait
The hold and wait condition arises when a process is holding at least one
resource and is waiting to acquire additional resources. In this scenario, a
process may acquire a resource but cannot proceed with its execution
because it requires other resources that are currently held by other processes.
This leads to a situation where processes are stuck, waiting for resources to
be released.
No Preemption
The no preemption condition means that a resource cannot be forcibly taken away from a process; it can only be released voluntarily by the process holding it, after that process has finished using it.
Circular Wait
The circular wait condition occurs when a set of processes is circularly waiting
for resources. In other words, each process waits for a resource that another
process holds in the set, creating a circular dependency. This circular wait
cycle prevents any process from acquiring the necessary resources to
continue its execution.
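A minimal circular wait needs only two locks acquired in opposite orders. In this sketch (the lock and thread names are invented), thread 1 holds lock A and waits for B while thread 2 holds B and waits for A, so neither can proceed:

    #include <unistd.h>
    #include <pthread.h>

    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    void *thread1(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_a);   /* holds A ... */
        sleep(1);                      /* give thread 2 time to grab B */
        pthread_mutex_lock(&lock_b);   /* ... and waits for B: stuck */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    void *thread2(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock_b);   /* holds B ... */
        sleep(1);
        pthread_mutex_lock(&lock_a);   /* ... and waits for A: stuck */
        pthread_mutex_unlock(&lock_a);
        pthread_mutex_unlock(&lock_b);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, thread1, NULL);
        pthread_create(&t2, NULL, thread2, NULL);
        pthread_join(t1, NULL);        /* never returns: the threads deadlock */
        pthread_join(t2, NULL);
        return 0;
    }

Imposing a global lock order (always A before B) removes the cycle; that is the spirit of the prevention strategies discussed next.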
Deadlock Prevention
Preventing deadlocks is a proactive approach that focuses on eliminating one or more of the Coffman conditions to ensure that deadlocks cannot occur. Let's explore some strategies for preventing deadlock in an operating system:
Hold and Wait
The hold and wait condition can be avoided by adopting a strategy where a
process requests all the required resources before starting its execution. This
ensures that a process does not hold any resources while waiting for
additional resources, reducing the chances of a deadlock. However, this
approach may result in resource underutilization and decreased system
efficiency.
No Preemption
The no preemption condition can be eliminated by allowing the operating system to preempt resources: if a process holding some resources requests another resource that cannot be granted immediately, the resources it currently holds may be taken back and reallocated until it can obtain everything it needs.
Deadlock recovery can be attempted in two main ways:
1. Process Termination
2. Resource Preemption
Deadlock recovery strategies aim to resolve deadlocks and restore the system
to a consistent state. However, both process termination and resource
preemption strategies have their limitations and may result in performance
degradation or data loss. Therefore, careful planning and consideration are
necessary when implementing deadlock recovery mechanisms.
Deadlock
Deadlock occurs when two or more processes are unable to proceed because each
process is waiting for a resource that is being held by another process.
Deadlock is an infinite waiting situation where processes are stuck in a circular
dependency and cannot progress.
All the necessary conditions for deadlock, including mutual exclusion, hold and wait, no
preemption, and circular wait, must be fulfilled for a deadlock to occur.
Deadlock can result in system crashes, frozen applications, and unresponsive user
interfaces.
Starvation
Starvation occurs when a low-priority process is continuously denied access to a
resource while high-priority processes are granted access.
Starvation is a long waiting situation but is not infinite like a deadlock in OS.
Starvation can occur due to uncontrolled priority and resource management, where
certain processes are consistently prioritized over others.
Starvation does not necessarily lead to a system crash or frozen applications, but it can
result in decreased system performance and unfair resource allocation.
Advantages of Deadlock in OS
Deadlocks can be useful in certain scenarios, such as processes that perform a single
burst of activity and do not require resource sharing.
Deadlocks can provide simplicity and efficiency in systems where the correctness of
data is more important than overall system performance.
Deadlocks can be enforced via compile-time checks, eliminating the need for runtime
computation and reducing the chances of unexpected system behavior.
Disadvantages of Deadlock in OS
Deadlocks can cause system crashes, frozen applications, and unresponsive user
interfaces, leading to a poor user experience.
Deadlocks may result in delays in process initiation and overall system performance
degradation.
Deadlocks can preclude incremental resource requests and disallow processes from
making progress.
Deadlocks may require inherent preemption, resulting in losses and potential data
integrity issues.
Suppose the number of account holders in a particular bank is 'n', and the total money in the bank is 'T'. If an account holder applies for a loan, the bank first subtracts the loan amount from the total cash and then checks that the remaining cash is still sufficient to satisfy the possible demands of the other account holders before approving the loan. These steps are taken so that if another person applies for a loan or withdraws some amount from the bank, the bank can manage and operate everything without any disruption to the functionality of the banking system.
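That check is the heart of the Banker's algorithm. The following C sketch (a hypothetical single resource, cash, with three account holders) tests whether a state is safe, i.e. whether some order exists in which everyone can finish and return their money:

    #include <stdio.h>
    #include <stdbool.h>

    #define N 3   /* hypothetical account holders / processes */

    /* Single resource type (cash): the state is safe if, in some order,
     * every process's remaining need can be met from what is available. */
    bool is_safe(int available, const int max[N], const int alloc[N]) {
        bool done[N] = { false };
        for (int finished = 0; finished < N; ) {
            bool progress = false;
            for (int i = 0; i < N; i++) {
                int need = max[i] - alloc[i];
                if (!done[i] && need <= available) {
                    available += alloc[i];   /* i finishes, returns its money */
                    done[i] = true;
                    finished++;
                    progress = true;
                }
            }
            if (!progress) return false;     /* nobody can finish: unsafe */
        }
        return true;
    }

    int main(void) {
        int max[N] = { 8, 5, 4 }, alloc[N] = { 3, 1, 2 };
        int total = 10, available = total - (3 + 1 + 2);
        printf("state is %s\n", is_safe(available, max, alloc) ? "safe" : "unsafe");
        return 0;
    }

A request is granted only if the state that would result from granting it still passes this safety check.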
Disadvantages
1. It requires a fixed number of processes, and no additional processes can be started in the system while it is executing.
2. The algorithm does not allow processes to change their maximum resource needs while processing their tasks.
3. Each process has to know and state its maximum resource requirement in advance for the system.
4. Resource requests are guaranteed to be granted in finite time, but the time limit for allocating the resources may be as long as one year.
Fence Register
In this approach, the operating system keeps track of the first and last locations available for the allocation of the user program.
The operating system is loaded either at the bottom or at the top of memory.
Interrupt vectors are often loaded in low memory; therefore, it makes sense to load the operating system in low memory.
Sharing of data and code does not make much sense in a single-process environment.
The operating system can be protected from user programs with the help of a fence register.
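The protection itself amounts to one comparison on every memory reference made by a user program; this small C fragment sketches the idea (the fence value and return convention are invented for illustration):

    /* Conceptual hardware check: every address generated by a user
     * program is compared against the fence register. */
    #define FENCE 0x4000          /* hypothetical end of the OS region */

    int check_access(unsigned long addr) {
        if (addr < FENCE) {
            /* trap: user program tried to touch operating-system memory */
            return -1;
        }
        return 0;                 /* access permitted */
    }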
Advantages of Memory Management
It is a simple management approach
Disadvantages of Memory Management
It does not support multiprogramming
Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)
A memory partition scheme with a fixed number of partitions was introduced to support multiprogramming. This scheme is based on contiguous allocation.
Each partition is a block of contiguous memory.
Memory is partitioned into a fixed number of partitions.
Each partition is of a fixed size.
Example: As shown in the figure, memory is partitioned into 5 regions; one region is reserved for the operating system, and the remaining four partitions are for user programs.
Fixed Size Partitioning (figure): memory is divided into an Operating System region and four user partitions p1, p2, p3, and p4.
Partition Table
Once partitions are defined, the operating system keeps track of the status of memory partitions through a data structure called a partition table.
Sample Partition Table
Starting Address of Partition | Size of Partition | Status
0k                            | 200k              | allocated
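In code, the partition table is simply an array of records like the row above. This C sketch (sizes in KB, entries invented) shows how the operating system might scan the table for a free partition large enough for a request:

    #include <stdio.h>

    struct partition {
        int start_kb;     /* starting address of partition */
        int size_kb;      /* size of partition */
        int allocated;    /* status: 1 = allocated, 0 = free */
    };

    /* Return the index of the first free partition big enough, else -1. */
    int find_free(struct partition t[], int n, int need_kb) {
        for (int i = 0; i < n; i++)
            if (!t[i].allocated && t[i].size_kb >= need_kb)
                return i;
        return -1;
    }

    int main(void) {
        struct partition table[] = {
            { 0, 200, 1 }, { 200, 100, 0 }, { 300, 150, 0 }
        };
        int i = find_free(table, 3, 120);   /* skips the 100k partition */
        if (i >= 0) printf("load process at %dk\n", table[i].start_kb);
        return 0;
    }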
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is determined by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
Variable partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory. This available memory is known as a "Hole". When a process arrives and needs memory, we search for a hole that is large enough to store the process. If the requirement is fulfilled, we allocate memory to the process, otherwise keeping the rest available to satisfy future requests.
While allocating memory, dynamic storage allocation problems sometimes occur, which concern how to satisfy a request of size n from a list of free holes. There are some solutions to this problem:
First Fit
In the First Fit, the first available free hole that fulfils the requirement of the process is allocated.
Best Fit
In the Best Fit, the smallest free hole that is large enough for the process is allocated. Here in this example, we first traverse the complete list and find that the last hole, 25KB, is the best suitable hole for Process A (size 25KB). In this method, memory utilization is maximum as compared to other memory allocation techniques.
Worst Fit
In the Worst Fit, the largest available hole is allocated to the process. This method produces the largest leftover hole.
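The three strategies differ only in which hole they choose from the free list. The C sketch below (hole sizes taken loosely from the 25KB example above, strategy codes invented) compares them on the same request:

    #include <stdio.h>

    /* Pick a hole for a request of `need` KB from `holes[n]`.
     * strategy: 0 = first fit, 1 = best fit, 2 = worst fit.  Returns index or -1. */
    int pick_hole(const int holes[], int n, int need, int strategy) {
        int chosen = -1;
        for (int i = 0; i < n; i++) {
            if (holes[i] < need) continue;            /* too small */
            if (strategy == 0) return i;              /* first fit: stop at once */
            if (chosen == -1 ||
                (strategy == 1 && holes[i] < holes[chosen]) ||  /* best: smallest */
                (strategy == 2 && holes[i] > holes[chosen]))    /* worst: largest */
                chosen = i;
        }
        return chosen;
    }

    int main(void) {
        int holes[] = { 100, 50, 30, 120, 25 };       /* free holes in KB */
        printf("first fit -> hole %d\n", pick_hole(holes, 5, 25, 0)); /* 0 (100KB) */
        printf("best  fit -> hole %d\n", pick_hole(holes, 5, 25, 1)); /* 4 (25KB)  */
        printf("worst fit -> hole %d\n", pick_hole(holes, 5, 25, 2)); /* 3 (120KB) */
        return 0;
    }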
The main visible advantage of virtual memory is that programs can be larger than physical memory. Virtual memory serves two purposes. First, it allows us to
extend the use of physical memory by using disk. Second, it allows us to have
memory protection, because each virtual address is translated to a physical
address.
Following are the situations when the entire program does not need to be fully loaded in main memory.
User-written error handling routines are used only when an error occurs in the data or computation.
Certain options and features of a program may be used rarely.
Many tables are assigned a fixed amount of address space even though
only a small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
Fewer I/O operations would be needed to load or swap each user program into memory.
A program would no longer be constrained by the amount of physical memory that is available.
Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into main memory. Instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced.
When the page that was selected for replacement and paged out is referenced again, it has to be read in from disk, and this requires waiting for I/O completion. This wait determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm.
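Counting faults over a reference string is how such comparisons are made in practice. Here is a minimal FIFO page replacement sketch in C; the frame count and reference string are made up for illustration:

    #include <stdio.h>
    #include <stdbool.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4 };   /* hypothetical references */
        int n = sizeof refs / sizeof refs[0];
        int frame[FRAMES], next = 0, faults = 0;
        for (int i = 0; i < FRAMES; i++) frame[i] = -1;   /* empty frames */

        for (int i = 0; i < n; i++) {
            bool hit = false;
            for (int f = 0; f < FRAMES; f++)
                if (frame[f] == refs[i]) { hit = true; break; }
            if (!hit) {                       /* page fault: fetch from disk */
                frame[next] = refs[i];        /* evict the oldest page (FIFO) */
                next = (next + 1) % FRAMES;
                faults++;
            }
        }
        printf("%d page faults for %d references\n", faults, n);
        return 0;
    }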
Characteristics of SJF
Shortest Job first has the advantage of having a minimum
average waiting time among all operating system scheduling
algorithms.
Each task is associated with a unit of time it requires to complete.
It may cause starvation if shorter processes keep coming. This
problem can be solved using the concept of ageing.
Advantages of SJF
As SJF reduces the average waiting time thus, it is better than the
first come first serve scheduling algorithm.
SJF is generally used for long term scheduling
Disadvantages of SJF
One of the demerits of SJF is starvation.
Many times it becomes complicated to predict the length of the upcoming CPU request.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on Shortest Job First.
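In the meantime, here is a minimal non-preemptive SJF simulation in C (arrival and burst times are hypothetical): at each step it picks the shortest job among those that have already arrived and runs it to completion:

    #include <stdio.h>
    #include <stdbool.h>

    struct proc { const char *name; int arrival, burst; bool done; };

    int main(void) {
        struct proc p[] = { {"P1",0,7,false}, {"P2",2,4,false}, {"P3",4,1,false} };
        int n = 3, completed = 0, clock = 0, total_wait = 0;
        while (completed < n) {
            int pick = -1;
            for (int i = 0; i < n; i++)      /* shortest burst among arrived */
                if (!p[i].done && p[i].arrival <= clock &&
                    (pick == -1 || p[i].burst < p[pick].burst))
                    pick = i;
            if (pick == -1) { clock++; continue; }   /* idle until next arrival */
            total_wait += clock - p[pick].arrival;
            printf("%s runs %d-%d\n", p[pick].name, clock, clock + p[pick].burst);
            clock += p[pick].burst;          /* non-preemptive: run to completion */
            p[pick].done = true;
            completed++;
        }
        printf("average waiting time = %.2f\n", (double)total_wait / n);
        return 0;
    }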
3. Longest Job First(LJF)
Longest Job First (LJF) scheduling is just the opposite of Shortest Job First (SJF); as the name suggests, this algorithm is based on the fact that the process with the largest burst time is processed first. Longest Job First is non-preemptive in nature.
Characteristics of LJF
Among all the processes waiting in a waiting queue, CPU is
always assigned to the process having largest burst time.
If two processes have the same burst time then the tie is broken
using FCFS i.e. the process that arrived first is processed first.
LJF CPU Scheduling can be of both preemptive and non-
preemptive types.
Advantages of LJF
No other process can be scheduled until the longest job or process executes completely.
All the jobs or processes finish at approximately the same time.
Disadvantages of LJF
Generally, the LJF algorithm gives a very high average waiting time and average turn-around time for a given set of processes. This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the Longest job first
scheduling.
4. Priority Scheduling
Preemptive Priority CPU Scheduling is a preemptive CPU scheduling algorithm that works based on the priority of a process. In this algorithm, each process is assigned a priority, and the most important (highest-priority) process must be executed first. In the case of any conflict, that is, where there is more than one process with equal priority, the algorithm falls back on the FCFS (First Come First Serve) algorithm.
Characteristics of Priority Scheduling
Schedules tasks based on priority.
When higher-priority work arrives while a task with lower priority is executing, the higher-priority process takes the place of the lower-priority process, and the latter is suspended until the execution of the former is complete.
The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling
The average waiting time is less than FCFS
Less complex
Disadvantages of Priority Scheduling
One of the most common demerits of the Preemptive Priority CPU scheduling algorithm is the starvation problem: a process may have to wait a long time before it gets scheduled onto the CPU.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on Priority Preemptive Scheduling
algorithm.
5. Round Robin
Round Robin is a CPU scheduling algorithm where each process is
cyclically assigned a fixed time slot. It is the preemptive version
of First come First Serve CPU Scheduling algorithm. Round Robin
CPU Algorithm generally focuses on Time Sharing technique.
Characteristics of Round robin
It's simple, easy to use, and starvation-free, as all processes get a balanced CPU allocation.
It is one of the most widely used methods in CPU scheduling.
It is considered preemptive, as each process is given the CPU for only a limited time.
Advantages of Round robin
Round robin seems to be fair as every process gets an equal
share of CPU.
The newly created process is added to the end of the ready
queue.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the Round robin Scheduling
algorithm.
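In the meantime, here is a minimal Round Robin sketch in C (the quantum and burst times are hypothetical). Cycling over an array approximates the ready queue when all processes arrive at time 0:

    #include <stdio.h>

    int main(void) {
        const int quantum = 3;                     /* fixed time slot */
        int burst[] = { 5, 8, 2 };                 /* remaining burst times */
        const char *name[] = { "P1", "P2", "P3" };
        int n = 3, left = n, clock = 0;

        while (left > 0) {
            for (int i = 0; i < n; i++) {          /* cycle through the queue */
                if (burst[i] == 0) continue;       /* already finished */
                int slice = burst[i] < quantum ? burst[i] : quantum;
                printf("%s runs %d-%d\n", name[i], clock, clock + slice);
                clock += slice;
                burst[i] -= slice;                 /* preempted after its slice */
                if (burst[i] == 0) left--;
            }
        }
        return 0;
    }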
6. Shortest Remaining Time First(SRTF)
Shortest remaining time first is the preemptive version of the
Shortest job first which we have discussed earlier where the
processor is allocated to the job closest to completion. In SRTF the
process with the smallest amount of time remaining until completion
is selected to execute.
Characteristics of SRTF
The SRTF algorithm makes the processing of jobs faster than the SJF algorithm, provided its overhead charges are not counted.
The context switch is done a lot more times in SRTF than in SJF
and consumes the CPU’s valuable time for processing. This adds
up to its processing time and diminishes its advantage of fast
processing.
Advantages of SRTF
In SRTF the short processes are handled very fast.
The system also requires very little overhead since it only makes
a decision when a process completes or a new process is added.
Disadvantages of SRTF
Like the shortest job first, it also has the potential for process
starvation.
Long processes may be held off indefinitely if short processes are
continually added.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the shortest remaining time
first.
7. Longest Remaining Time First(LRTF)
The longest remaining time first is a preemptive version of the
longest job first scheduling algorithm. This scheduling algorithm is
used by the operating system to program incoming processes for
use in a systematic way. This algorithm schedules those processes
first which have the longest processing time remaining for
completion.
Characteristics of LRTF
Among all the processes waiting in a waiting queue, the CPU is
always assigned to the process having the largest burst time.
If two processes have the same burst time then the tie is broken
using FCFS i.e. the process that arrived first is processed first.
LRTF CPU Scheduling can be of both preemptive and non-preemptive types.
No other process can execute until the longest task executes
completely.
All the jobs or processes finish at the same time approximately.
Advantages of LRTF
Maximizes Throughput for Long Processes.
Reduces Context Switching.
Simplicity in Implementation.
Disadvantages of LRTF
This algorithm gives a very high average waiting
time and average turn-around time for a given set of processes.
This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm,
please refer to our detailed article on the longest remaining time
first.
8. Highest Response Ratio Next(HRRN)
Highest Response Ratio Next is a non-preemptive CPU scheduling algorithm, and it is considered one of the most optimal scheduling algorithms. The name itself states that we need to find the response ratio of all available processes and select the one with the highest response ratio. A process once selected will run till completion.
Characteristics of HRRN
The criteria for HRRN is Response Ratio, and
the mode is Non-Preemptive.
HRRN is considered a modification of Shortest Job First to reduce the problem of starvation.
In comparison with SJF, during the HRRN scheduling algorithm the CPU is allotted to the next process which has the highest response ratio, and not simply to the process having the least burst time.
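The response ratio being maximized is (waiting time + burst time) / burst time. The short C sketch below (process data invented) applies the selection rule; note how a long wait lifts the ratio of even a long job, which is how HRRN reduces starvation:

    #include <stdio.h>

    /* Response Ratio = (waiting time + burst time) / burst time */
    double response_ratio(int wait, int burst) {
        return (double)(wait + burst) / burst;
    }

    int main(void) {
        int wait[]  = { 9, 4, 1 };     /* hypothetical waiting times so far */
        int burst[] = { 6, 2, 8 };     /* estimated burst times */
        int pick = 0;
        for (int i = 1; i < 3; i++)    /* choose the highest response ratio */
            if (response_ratio(wait[i], burst[i]) >
                response_ratio(wait[pick], burst[pick]))
                pick = i;
        printf("run P%d next (ratio %.2f)\n", pick + 1,
               response_ratio(wait[pick], burst[pick]));
        return 0;
    }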
Scheduling in Real Time Systems