OS Exam Notes


What is an Operating System and what are the goals and functions of an Operating
System?

An Operating System is software that acts as an intermediary between the hardware and the user. It is a kind of resource manager that manages both the hardware and software resources of a system.

A system contains many resources, and managing them manually is a very difficult task. So, we use the Operating System to manage all the resources present in the system.

Apart from resource management, the Operating System also provides a platform on which other application programs can be installed and run. The following is the conceptual view of a common computer system.

In the above image, we can see that at level 0 the computer hardware is present, and to access this hardware you need the help of the Operating System, which is present at level 1. At the upper level, level 2, there are various application software (used by users to perform specific tasks, like MS Word or VLC media player) and system software (used to manage system resources, like assemblers and compilers). So, the Operating System enables this software to communicate with the hardware.

Goals of the Operating System


An Operating System has two kinds of goals, i.e. a Primary Goal and a Secondary Goal.

 Primary Goal: The primary goal of an Operating System is to provide a user-friendly and convenient environment. It is not compulsory to use an Operating System, but things become much harder when the user has to perform all the process scheduling manually, and converting user code into machine code is also very difficult. So, we use an Operating System to act as an intermediary between us and the hardware. All you need to do is give commands to the Operating System and it will do the rest for you. So, the Operating System should be convenient to use.
 Secondary Goal: The secondary goal of an Operating System is efficiency. The Operating System should manage all the resources in such a way that they are fully utilised and no resource is held idle while a request for it is pending.
So, in order to achieve the above primary and secondary goals, the Operating System
performs a number of functions. Let's see them.

Functions of an Operating System


To achieve the goals of an Operating system, the Operating System performs a number of
functionalities. They are:

 Process Management: At a particular instant of time, the CPU may have a number of processes in the ready state, but a processor can execute only one process at a time. So, the CPU should apply some kind of algorithm that provides uniform and efficient access to the CPU for all processes. The CPU should not favour only one process; it should make sure that every process in the ready state eventually gets executed. Some of the CPU scheduling algorithms are First Come First Serve, Round Robin, Shortest Job First, Priority Scheduling, etc.
 Memory Management: For a process to execute, it is loaded into the main memory; after the process finishes its execution, the memory is freed and can be used for other processes. So, it is the duty of the Operating System to manage memory by allocating and deallocating it for processes.
 I/O Device Management: There are various I/O devices present in a system. Various processes require access to these devices, but a process should not access them directly. So, it is the duty of the Operating System to mediate the use of I/O devices by the various processes that require them.
 File Management: There are various files, folders, and directory structures in a computer. All of these are maintained and managed by the Operating System. File-related information can be maintained using a structure such as a File Allocation Table (FAT), which stores every detail related to a file, i.e. filename, file size, file type, etc. It is also the duty of the Operating System to make sure that files cannot be opened through unauthorized access.
 Virtual Memory: When the size of the program is larger than the main memory
then it is the duty of the Operating System to load only frequently used pages in
the main memory. This is called Virtual Memory.
What are the types of Operating Systems?

Batch Operating System


In the Batch Operating System, similar types of jobs are grouped together into batches.

This grouping into batches is done with the help of an operator. When the batches are ready, they are executed one by one, i.e. batch-wise.

The best way to understand this is with an example:

Suppose that we have 100 programs to execute. Those programs are in two languages: Java and C++.

55 programs are in Java and 45 programs are in C++. Here, two batches can be created: one for all the Java
programs, and one for all the C++ programs.

Here, if we execute in batches, we get the benefit of loading the Java compiler only once, not 55 times.

Similarly, the C++ compiler will be loaded only once, not 45 times.

As we load each compiler only once and execute that particular batch, we have a clear advantage.

This is the working model of the Batch Operating System.

Advantages:

1. It allows multiple users to use it at the same time.


2. Reduction in the time taken by the system to execute all the programs.

Disadvantages:

1. If a job fails, the other jobs will have to wait for an unknown time.
2. Batch systems are sometimes costly.
3. Difficult to debug.

Time-Sharing Operating System


Here, more than one task gets executed at a particular time with the help of the time-sharing concept. Each task is given a quantum, i.e. the amount of time allotted to it for execution.

A quantum is the short, fixed duration of time decided for each task in a time-sharing operating system.

Suppose we have 4 tasks.


 Task 1
 Task 2
 Task 3
 Task 4

It starts with Task 1, which executes for that fixed amount of time. Then Task 2 gets its chance of execution for the same fixed amount of time, followed by Task 3 and then Task 4. After that, the cycle starts again with Task 1, Task 2, Task 3, and so on.

This is the working model of the Time-Sharing Operating System.

Advantages:

1. Each task gets an equal opportunity for execution.


2. Reduction in the idle time of the CPU.

Disadvantages:

1. The data of each task should be handled properly so that they don’t get mixed during the execution.
2. No way to give an advantage to the higher-priority task that needs immediate execution.

Distributed Operating System


In this Operating System, we have many systems and all these systems have their own CPU, main memory,
secondary memory, and resources.

As the name suggests, these systems are connected to each other over a network.

Because the systems are networked, one user can access the data of another system. So, remote access is the major highlight of this operating system.

And also each system can perform its task individually.

Now, let’s talk about the advantages and disadvantages of Distributed Operating Systems.

Advantages:

1. No single point of failure: since we have multiple systems, if one fails, another system can execute the task.
2. Resources are shared with each other, hence, increasing availability across the entire system.
3. It helps in the reduction of execution time.

Disadvantages:

1. Since the data is shared across the systems, it needs extra handling to manage the overall infrastructure.
2. It is difficult to provide adequate security in distributed systems because the nodes as well as the connections
need to be secured.
3. Network failure needs to be handled.

Embedded Operating System


An embedded operating system is a specialized operating system designed to perform a specific task for a
device that is not a computer.

Examples of Embedded Operating Systems:

 The operating system of ATMs.


 The operating system in Elevators.

The operating system used in the ATMs or in the Elevators is dedicated to that specialized type of task.

Advantages:

1. It is fast as it does the specialized type of task only.

Disadvantages:

1. Only one task can be performed.

Real-time Operating System


A real-time operating system is an operating system for real-time applications that process data and events with
critically defined time constraints. It is used when we are dealing with real-time data.

Examples:

 Air Traffic Control Systems


 Medical systems

There are two types of Real-time Operating Systems:

1. Hard Real-time: When a slight delay can lead to a big problem, we use this Hard Real-time operating system.
The time constraints are very strict.
2. Soft Real-time: When a slight delay is manageable and does not impact anything as such, we use this Soft Real-
time operating system.
Advantages:

1. Maximum utilization of devices and resources.


2. These systems are almost error-free.

Disadvantages:

1. The algorithms used in a Real-time Operating System are very complex.
2. Limited tasks.
3. They use heavy system resources.

Multiprogramming vs Multiprocessing vs Multitasking?

Multiprogramming
A process executing in a computer system mainly requires two things i.e. CPU time and
I/O time. The CPU time is the time taken by CPU to execute a process and I/O time is
the time taken by the process for I/O operations such as some file operation like read
and write. Generally, our computer system needs to execute a number of processes at a time, but a processor can run only one process at a time, and this can cause some problems.

Suppose we have a single processor system and we have 5 processes P1, P2, P3, P4, and
P5 that is to be executed by the CPU. Since the CPU can execute only one process at a
time, so it starts with the process P1 and after some time of execution of process P1, the
process P1 requires some I/O operation. So, it leaves the CPU and starts performing that
I/O operation. Now, the CPU will wait for the process P1 to come back for its execution
and the CPU will be in an idle state for that period of time. But at the same time, other
processes i.e. P2, P3, P4, and P5 are waiting for their execution. Our CPU is idle, and an idle CPU is a very expensive thing. So, why keep the CPU in the idle state? What we can do is: if the process P1 wants to perform some I/O operation, then let P1 do the I/O job and, at the same time, give the CPU to the process P2. If the process P2 also requires some I/O operation, then the CPU will be given to process P3, and so on. This switching is called Context Switching. Once a process finishes its I/O work, the CPU can resume the working of that process (i.e. the process P1 or P2), and by doing so the CPU will never go into the idle state. This concept of effective CPU utilization is called Multiprogramming.

So, in a multiprogramming system, the CPU executes some part of one program, then some part of another program, and so on. By doing so, the CPU will never go into the idle state unless there is no process ready to execute at the time of Context Switching.

Advantages of Multiprogramming

 Very high CPU utilization as the CPU will never be idle unless there is no process
to execute.
 Less waiting time for the processes.
 Can be used in a Multiuser system. A Multiuser system allows different users on different computers to access the same CPU, and this, in turn, results in Multiprogramming.
Disadvantages of Multiprogramming

 Since you have to perform Context Switching, so you need to have some process
scheduling technique that will tell the CPU which process to take for execution
and it is difficult.
 Here, the CPU executes some part of one process, then some part of another, and so on. In this case, the memory is divided into small parts, as each process requires some memory, and this results in memory fragmentation, leaving little or no contiguous memory available.

Multiprocessing
As we know that in a uni-processor system, the processor can execute only one process at
a time. But when your system has a lot of work to do and one processor is not enough to finish all that work in the required time, then we can use more than one processor in the same system.

So, a system with two or more processors in the same computer, sharing the system bus, memory, and other I/O, is said to be a Multiprocessing System.

Suppose, we are having 5 processes P1, P2, P3, P4, and P5. In a uni-processor system,
only one process can be executed at a time and after its execution, the next process will
be executed, and so on. But in a multiprocessor system, different processes can be assigned to different processors, and this, in turn, decreases the overall process execution time of the system. A dual-processor system can execute two processes at a time, while a quad-processor system can execute four processes at a time.

Advantages of Multiprocessing

 Since more than one processor is working at a time, more work is done in a shorter period of time, so throughput is increased.
 We have more than one processor, so if one processor is not working then the job
can be done with the help of other processors. This, in turn, increases reliability.
 If you put all the work on one processor, it results in more battery drain. But if the work is divided among various processors, it gives better battery efficiency.
 Multiprocessing is an example of true parallel processing, i.e. more than one process executing at the same time.
Disadvantages of Multiprocessing

 As more than one processor is working at a particular instant of time, the coordination between them is very complex.
 Since the buses, memory, and I/O devices are shared, if some processor is using an I/O device, then another processor has to wait for its turn, and this results in a reduction of throughput.
 For all the processors to work efficiently at a time, we need a large main memory, and this, in turn, increases the cost.

Multitasking
If the CPU is allocated to such a process that is taking a lot of time then other processes
will have to wait for the execution of that process and this will result in long waiting of
processes for resource allocation.

For example, if process P1 is taking 20 seconds of CPU time and the CPU is allocated to
P1. Now, if some process P2 comes that requires 1 second of CPU time, then P2 has to wait for 20 seconds, irrespective of the fact that it requires only 1 second of CPU time.

What we can do here is, we can set a time quantum and CPU will be given to each
process for that amount of time only and after that, the CPU will be given to some other
process that is ready for execution. So, in our above example, if the decided time
quantum is 2 seconds, then initially, the process P1 will be allocated the CPU for 2
seconds and then it will be given to process P2. The process P2 will complete its
execution in 1 second and then the CPU will be given to process P1 again. Since there is
no other process available for execution, the process P1 can continue to execute for its
remaining time i.e. 18 seconds. This is called time-sharing, and the concept of time-sharing between various processes is called Multitasking.

Multitasking is Multiprogramming with time-sharing.

Here the switching between processes is so quick that it gives an illusion that all the
processes are being executed at the same time.

For multitasking, firstly there should be multiprogramming and secondly, there should
be time-sharing.

Advantages of Multitasking

 Since each process is given a particular time quantum for execution, starvation is reduced.
 It provides an illusion to the user that he/she is using multiple programs at the same time.
Disadvantages of Multitasking

 Every process will be given a fixed time quantum in one cycle. So, the high priority
process will also have to wait.
 If the processor is slow and the work is very large, then it can't be run smoothly. It
requires more processing power.

Multiprogramming vs Multiprocessing vs Multitasking


We have seen the concepts of Multiprogramming, Multiprocessing, and Multitasking. When we perform context switching between various processes, it is called a multiprogramming system. It is done for better CPU utilization, and it makes sure that the CPU never goes into the idle state. Multitasking is multiprogramming with a time-sharing concept, where every process is given some time quantum and, after that quantum, the CPU is given to another process. On the other hand, Multiprocessing is the use of more than one processor in the same system so that true parallel processing can be achieved.

What is a Kernel in an Operating System? What are Kernel Mode and User Mode, and what are the functions of a Kernel?

A Kernel is a computer program that is the heart and core of an Operating System. Since the Operating System has control over the system, the Kernel also has control over everything in the system. It is the most important part of an Operating System. Whenever a system starts, the Kernel is the first program loaded after the bootloader, because the Kernel has to handle the rest of the things for the Operating System. The Kernel remains in memory until the Operating System is shut down.

The Kernel is responsible for low-level tasks such as disk management, memory
management, task management, etc. It provides an interface between the user and the
hardware components of the system. When a process makes a request to the Kernel, that request is called a System Call.

A Kernel is provided with a protected Kernel Space which is a separate area of memory
and this area is not accessible by other application programs. So, the code of the Kernel
is loaded into this protected Kernel Space. Apart from this, the memory used by other
applications is called the User Space. Since these are two different spaces in memory, communication between them is a bit slower.

Functions of a Kernel
Following are the functions of a Kernel:

 Access Computer Resources: A Kernel can access various computer resources like the CPU, I/O devices, and other resources. It acts as a bridge between the user and the resources of the system.
 Resource Management: It is the duty of a Kernel to share the resources between various processes in such a way that every process has uniform access to the resources.
 Memory Management: Every process needs some memory space, so memory must be allocated and deallocated for its execution. All this memory management is done by the Kernel.
 Device Management: The peripheral devices connected in the system are used
by the processes. So, the allocation of these devices is managed by the Kernel.

Kernel Mode and User Mode


There are certain instructions that need to be executed by Kernel only. So, the CPU
executes these instructions in the Kernel Mode only. For example, memory management
should be done in Kernel-Mode only. While in the User Mode, the CPU executes the
processes that are given by the user in the User Space.
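
For example, even simple library calls are requests to the kernel. The following Python sketch (standard library only) shows two such calls; each traps into the kernel, the CPU switches to Kernel Mode to do the work, and then returns to User Mode with the result.

import os

# Each call below is a thin wrapper over a system call: the CPU enters
# kernel mode, the kernel services the request, and control returns to
# user mode with the result.
pid = os.getpid()        # ask the kernel for this process's id
data = os.urandom(8)     # ask the kernel for 8 random bytes
print(f"process {pid} received {len(data)} bytes from the kernel")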

What is Spooling in Operating System?

Initially, when Operating Systems came into existence, we had to give the input to the CPU, the CPU executed the instructions, and finally it gave us the output. But there was a problem with this approach. In a normal situation, we have to deal with a number of processes, and we know that the time taken for an I/O operation is very large compared to the time taken by the CPU to execute instructions. So, in the old approach, one process gives its input with the help of an input device, and during this period of time the CPU is in an idle state. Then the CPU executes the instructions, and the output is given to some output device; at this time too, the CPU is in an idle state. After showing the output, the next process starts its execution. So, most of the time the CPU is idle, and this is the worst condition that we can have in an Operating System. Here, the concept of Spooling comes into play. Let's learn more about it.

Spooling
Spooling stands for "Simultaneous Peripheral Operations Online". With spooling, more than one I/O operation can be performed simultaneously, i.e. while the CPU is executing some process, more than one I/O operation can also be done at the same time. The following image will help us understand the concept in a better way:

From the above image, we can see that the input data is stored in some kind of secondary
device and this data is then fetched by the main memory. The benefit of this approach is
that, in general, the CPU works on the data stored in the main memory. Since we can
have a number of input devices at a time, so all these input devices can put the data into
the disk or secondary memory. Then, the main memory will fetch the data one by one
from the secondary memory and the CPU will execute some instruction on that data.
Both the main memory and secondary memory are digital in nature, so transferring data between them is very fast. Also, when the CPU is executing some task, the input devices need not wait for their turn: they can directly put their data into the secondary memory. By doing so, the CPU will be in the execution phase most of the time, so the CPU will not be idle in this case.

When the CPU generates some output, then that output is first stored in the main
memory and the main memory transfers that output to the secondary memory and from
the secondary memory, the output will be provided to some output devices. By doing so,
again we are saving time because now the CPU doesn't have to wait for the output device
to show the output and this, in turn, increases the overall execution speed of the system.
The CPU will not be held idle in this case.

For example, in printer spooling, there can be more than one document that needs to be printed. The documents are stored in the spool, and the printer fetches and prints them one by one.
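
The following Python sketch illustrates the printer-spooling idea. The names spool, submit, and printer_drain are made up for illustration; a real spooler would keep the queue on disk.

from collections import deque

# A toy print spool: submitters enqueue documents and return immediately,
# so they never wait for the slow printer.
spool = deque()

def submit(document: str) -> None:
    # Called by any process; returns as soon as the document is queued.
    spool.append(document)

def printer_drain() -> None:
    # The printer fetches and prints spooled documents one by one.
    while spool:
        print("printing:", spool.popleft())

submit("report.pdf")
submit("invoice.txt")
printer_drain()   # prints report.pdf, then invoice.txt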

Advantages of Spooling
 Since the CPU does not have to interact with the I/O devices directly, it need not wait for the I/O operations, which take a large amount of time, to complete.
 The CPU is kept busy most of the time and hence is not in the idle state, which is a good situation to have.
 More than one I/O device can work simultaneously.

Difference between Spooling and Buffering


We all know that a buffer is an area in the main memory that is used to store and hold
data temporarily. This data can be transferred between two devices or between a device
and an application. The main aim of buffers is to match the speed of data streaming
between a sender and receiver. There is a difference between Spooling and Buffering.

 In spooling, the I/O of one job can be handled along with some operations of
another job. While in buffering, only one job is handled at a time.
 Spooling is more efficient than buffering.
 In buffering, there is a small separate area in the memory known as a buffer. But spooling can make use of the whole memory.

What is a Process in Operating System and what are the different states of a Process?

In the Operating System, a Process is something that is currently under execution. So, an active program can be called a Process. For example, when you want to search for something on the web, you start a browser: that browser is a process. Another example of a process is starting your music player to listen to some music of your choice.

A Process has various attributes associated with it. Some of the attributes of a Process
are:

 Process Id: Every process is given an id, called the Process Id, to uniquely identify it from the other processes.
 Process state: Each and every process has some states associated with it at a
particular instant of time. This is denoted by process state. It can be ready,
waiting, running, etc.
 CPU scheduling information: Each process is executed using some process scheduling algorithm like FCFS, Round-Robin, SJF, etc.
 I/O information: Each process may need some I/O devices for its execution. So, the information about the devices allocated to and needed by the process is crucial.

States of a Process
During the execution of a process, it undergoes a number of states. So, in this section of
the blog, we will learn various states of a process during its lifecycle.

 New State: This is the state when the process is just created. It is the first state of
a process.
 Ready State: After the creation of the process, when the process is ready for its
execution then it goes in the ready state. In a ready state, the process is ready for
its execution by the CPU but it is waiting for its turn to come. There can be more
than one process in the ready state.
 Ready Suspended State: There can be more than one process in the ready state
but due to memory constraint, if the memory is full then some process from the
ready state gets placed in the ready suspended state.
 Running State: Amongst the process present in the ready state, the CPU chooses
one process amongst them by using some CPU scheduling algorithm. The process
will now be executed by the CPU and it is in the running state.
 Waiting or Blocked State: During the execution of the process, the process
might require some I/O operation like writing on file or some more priority
process might come. In these situations, the running process will have to go into
the waiting or blocked state and the other process will come for its execution. So,
the process is waiting for something in the waiting state.
 Waiting Suspended State: When the waiting queue of the system becomes full
then some of the processes will be sent to the waiting suspended state.
 Terminated State: After the complete execution of the process, the process
comes into the terminated state and the information related to this process is
deleted.
The following image will show the flow of a process from the new state to the terminated
state.

In the above image, you can see that when a process is created then it goes into the new
state. After the new state, it goes into the ready state. If the ready queue is full, then the
process will be shifted to the ready suspended state. From the ready state, the CPU will
choose the process and the process will be executed by the CPU and will be in the
running state. During the execution of the process, the process may need some I/O
operation to perform. So, it has to go into the waiting state and if the waiting state is full
then it will be sent to the waiting suspended state. From the waiting state, the process
can go to the ready state after performing I/O operations. From the waiting suspended
state, the process can go to waiting or ready suspended state. At last, after the complete
execution of the process, the process will go to the terminated state and the information
of the process will be deleted.

This is the whole life cycle of a process.
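
As a quick illustration, the following Python sketch encodes the transitions described above as a table and walks one process through a typical lifecycle. The state names mirror the text; the table itself is illustrative.

# Allowed transitions between the process states described above.
TRANSITIONS = {
    "new":               {"ready"},
    "ready":             {"running", "ready suspended"},
    "ready suspended":   {"ready"},
    "running":           {"waiting", "ready", "terminated"},
    "waiting":           {"ready", "waiting suspended"},
    "waiting suspended": {"waiting", "ready suspended"},
    "terminated":        set(),
}

def move(state: str, new_state: str) -> str:
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = "new"
for nxt in ("ready", "running", "waiting", "ready", "running", "terminated"):
    state = move(state, nxt)   # new -> ready -> running -> ... -> terminated
print(state)                   # terminated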



Process Control Block in Operating System?

A Process Control Block, or simply PCB, is a data structure that is used to store the information about a process that might be needed to manage the scheduling of that particular process.

So, each process is given a PCB, which is a kind of identification card for the process. Every process present in the system has a PCB associated with it, and all these PCBs are connected in a linked list.

Attributes of a Process Control Block


There are various attributes of a PCB that help the CPU execute a particular process. These attributes are:

 Process Id: A process id is the unique identity of a process. Each process is identified with the help of its process id.
 Program Counter: The program counter points to the next instruction that is to be executed by the CPU. It is used to find the next instruction to execute.
 Process State: A process can be in any one of the possible process states. The CPU needs to know the current state of a process so that its execution can be managed easily.
 Priority: There is a priority associated with each process. Based on that priority
the CPU finds which process is to be executed first. Higher priority process will be
executed first.
 General-purpose Registers: During its execution, a process uses and changes a number of data values. In most cases, we have to stop the execution of one process to start another, and after some time the previous process should be resumed. Since the previous process was working with certain data and had changed it, when the process resumes it should continue with exactly that data. These values are stored in storage units called registers.
 CPU Scheduling Information: It indicates the information about the process
scheduling algorithms that are being used by the CPU for the process.
 List of opened files: A process can deal with a number of files, so the CPU
should maintain a list of files that are being opened by a process to make sure that
no other process can open the file at the same time.
 List of I/O devices: A process may need a number of I/O devices to perform
various tasks. So, a proper list should be maintained that shows which I/O device
is being used by which process.
These are the attributes of a Process Control Block. These pieces of information give detailed information about a process and, in turn, result in better execution of the process.
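
As a rough sketch, a PCB can be modelled as a record type. The Python dataclass below follows the attributes listed above; the field names are illustrative, not a real kernel structure.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PCB:
    pid: int                                        # Process Id
    program_counter: int = 0                        # next instruction to execute
    state: str = "new"                              # current process state
    priority: int = 0                               # scheduling priority
    registers: dict = field(default_factory=dict)   # saved general-purpose registers
    open_files: List[str] = field(default_factory=list)
    io_devices: List[str] = field(default_factory=list)
    next: Optional["PCB"] = None                    # link to the next PCB in the list

head = PCB(pid=1, priority=2)
head.next = PCB(pid=2, priority=1)   # PCBs chained in a linked list
print(head.pid, "->", head.next.pid)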

What is Context Switching in Operating System?

What is Context Switching?


Context switching is the procedure of switching the CPU from one process or task to another. In this phenomenon, the execution of the process that is present in the running state is suspended by the kernel, and another process that is present in the ready state is executed by the CPU.

It is one of the essential features of a multitasking operating system. The processes are switched so fast that it gives the user an illusion that all the processes are being executed at the same time.

But the context switching process involves a number of steps that need to be followed. You can't directly switch a process from the running state to the ready state: you have to save the context of that process first. If you do not save the context of a process P, then when P comes back to the CPU for execution, it will start executing from the beginning. But in reality, it should continue from the point where it left the CPU in its previous execution. So, the context of the process should be saved before putting any other process in the running state.

A context is the contents of a CPU's registers and program counter at any point in time.
Context switching can happen due to the following reasons:

 When a process of high priority comes in the ready state. In this case, the
execution of the running process should be stopped and the higher priority
process should be given the CPU for execution.
 When an interrupt occurs, the process in the running state should be stopped and the CPU should handle the interrupt before doing anything else.
 When a transition between the user mode and kernel mode is required then you
have to perform the context switching.

Steps involved in Context Switching


The process of context switching involves a number of steps. The following diagram
depicts the process of context switching between the two processes P1 and P2.

In the above figure, you can see that initially the process P1 is in the running state and the process P2 is in the ready state. Now, when an interrupt occurs, you have to switch the process P1 from the running state to the ready state after saving its context, and the process P2 from the ready state to the running state. The following steps will be performed:

1. Firstly, the context of the process P1 i.e. the process present in the running state
will be saved in the Process Control Block of process P1 i.e. PCB1.
2. Now, you have to move the PCB1 to the relevant queue i.e. ready queue, I/O
queue, waiting queue, etc.
3. From the ready state, select the new process that is to be executed i.e. the process
P2.
4. Now, update the Process Control Block of process P2 i.e. PCB2 by setting the
process state to running. If the process P2 was earlier executed by the CPU, then
you can get the position of last executed instruction so that you can resume the
execution of P2.
5. Similarly, if you want to execute the process P1 again, then you have to follow the
same steps as mentioned above(from step 1 to 4).
For context switching to happen, two processes are at least required in general, and in
the case of the round-robin algorithm, you can perform context switching with the help
of one process only.

The time involved in switching the context from one process to another is called the Context Switching Time.
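
The save/restore steps above can be sketched in a few lines of Python, reusing the PCB record from the previous section. Here the "context" is just a program counter and registers; this is an illustration, not how a real kernel does it.

# cpu is a dict standing in for the real CPU's program counter and registers.
def context_switch(cpu: dict, pcb_old, pcb_new) -> None:
    # Step 1: save the running process's context into its PCB.
    pcb_old.program_counter = cpu["pc"]
    pcb_old.registers = dict(cpu["regs"])
    # Step 2: move the old PCB to the relevant queue (here: mark it ready).
    pcb_old.state = "ready"
    # Steps 3-4: select the new process and restore its saved context,
    # so it resumes from the instruction where it last stopped.
    pcb_new.state = "running"
    cpu["pc"] = pcb_new.program_counter
    cpu["regs"] = dict(pcb_new.registers)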

Advantage of Context Switching


Context switching is used to achieve multitasking, i.e. multiprogramming with time-sharing. Multitasking gives an illusion to the users that more than one process is being executed at the same time. But in reality, only one task is being executed at a particular instant of time by a processor. The context switching is so fast that the user feels the CPU is executing more than one task at the same time.

Disadvantage of Context Switching


The disadvantage of context switching is that it takes time, i.e. the context switching time. Time is required to save the context of the process that is in the running state and then to load the context of the process that is about to enter the running state. During that time, no useful work is done by the CPU from the user's perspective. So, context switching is pure overhead in this condition.

What is Long-Term, Short-Term, and Medium-Term Scheduler?

In the Operating System, CPU schedulers are used to handle the scheduling of the various processes that are coming for execution by the CPU. Schedulers are responsible for transferring a process from one state to another. Basically, we have three types of schedulers, i.e.

1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
In this blog, we will learn about these schedulers and we will see the difference between
them. Also, at the end of the blog, we will look where these schedulers are placed in the
Process State Diagram. So, let's get started.

Long-Term Scheduler
Long-Term schedulers are those schedulers whose decision will have a long-term effect
on the performance. The duty of the long-term scheduler is to bring the process from the
JOB pool to the Ready state for its execution.

Long-Term Scheduler is also called the Job Scheduler and is responsible for controlling the Degree of Multiprogramming, i.e. the total number of processes present in the ready state.

So, the long-term scheduler decides which process is to be created to put into the ready
state.

Effect on performance

 The long term scheduler is responsible for creating a balance between the I/O
bound(a process is said to be I/O bound if the majority of the time is spent on the
I/O operation) and CPU bound(a process is said to be CPU bound if the majority
of the time is spent on the CPU). So, if we create processes which are all I/O
bound then the CPU might not be used and it will remain idle for most of the time.
This is because the majority of the time will be spent on the I/O operation.
 So, if we create CPU-bound processes as well, maintaining a proper balance between I/O-bound and CPU-bound processes, the overall performance of the system will be increased.

Short-Term Scheduler
Short-term schedulers are those schedulers whose decision will have a short-term effect
on the performance of the system. The duty of the short-term scheduler is to schedule
the process from the ready state to the running state. This is the place where all the
scheduling algorithms are used i.e. it can be FCFS or Round-Robin or SJF or any other
scheduling algorithm.

Short-Term Scheduler is also known as the CPU Scheduler and is responsible for selecting one process from the ready state and scheduling it onto the running state.

Effect on performance

 The choice of the short-term scheduler is very important for the performance of the system. If the short-term scheduler only selects processes with a very high burst time, then the other processes may go into starvation. So, choose the short-term scheduling algorithm carefully, because the performance of the system is our highest priority.
The following image shows the scheduling of processes using the long-term and short-
term schedulers.

Medium-Term Schedulers
Sometimes, you need to send the running process to the ready state or to the wait/block
state. For example, in the round-robin process, after a fixed time quantum, the process is
again sent to the ready state from the running state. So, these things are done with the
help of Medium-Term schedulers.
Medium-term schedulers are those schedulers whose decision will have a
mid-term effect on the performance of the system. It is responsible for
swapping of a process from the Main Memory to Secondary Memory and
vice-versa.

It is helpful in maintaining a perfect balance between the I/O bound and the CPU bound.
It reduces the degree of multiprogramming.

The following diagram will give a brief about the working of the medium-term
schedulers.

Long-term vs Short-term vs Medium-term Schedulers


Following is the difference between the long-term, short-term, and medium-term
schedulers:

Place of schedulers in the Process State Diagram


The Process State Diagram is used to display the flow of processes from one state to
other. In this portion of the blog, we will see the position of long-term, short-term, and
medium-term schedulers in the Process State Diagram. Following is the image of the
same:

Difference between Scheduler and Dispatcher

Dispatcher
When the processes are in the ready state, the CPU applies some process scheduling algorithm and chooses one process from the list of processes to be executed at a particular instant of time. This is done by a scheduler, i.e. selecting one process from a number of processes is the job of the scheduler.

Now, the selected process has to be transferred from the current state to the desired or
scheduled state. So, it is the duty of the dispatcher to dispatch or transfer a process from
one state to another. A dispatcher is responsible for context switching and for switching to user mode.

For example, suppose we have three processes P1, P2, and P3 in the ready state. The arrival times of these processes are T0, T1, and T2 respectively. If we are using the First Come First Serve approach, then the scheduler will
first select the process P1 and the dispatcher will transfer the process P1 from the ready
state to the running state. After completion of the execution of the process P1, the
scheduler will then select the process P2 and the dispatcher will transfer the process P2
from ready to running state and so on.

Difference between Dispatcher and Scheduler


Till now, we are familiar with the concept of dispatcher and scheduler. Now in this
section of the blog, we will see the difference between a dispatcher and a scheduler.

 The scheduler selects a process from a list of processes by applying some process
scheduling algorithm. On the other hand, the dispatcher transfers the process
selected by the short-term scheduler from one state to another.
 The scheduler works independently, while the dispatcher has to be dependent on
the scheduler i.e. the dispatcher transfers only those processes that are selected by
the scheduler.
 For selecting a process, the scheduler uses some process scheduling algorithm like
FCFS, Round-Robin, SJF, etc. But the dispatcher doesn't use any kind of
scheduling algorithms.
 The only duty of a scheduler is to select a process from a list of processes. But
apart from transferring a process from one state to another, the dispatcher can
also be used for switching to user mode. Also, the dispatcher can be used to jump
to a proper location when the process is restarted.

Process scheduling algorithms in the Operating System?

In a system, there are a number of processes present in different states at a particular time. Some processes may be in the waiting state, others may be in the running state, and so on. Have you ever wondered how the CPU selects one process out of so many for execution? The CPU uses process scheduling algorithms to select one process for execution from among the many available. These algorithms aim to maximize CPU utilization and increase throughput. In this blog, we will learn about the various process scheduling algorithms used by the CPU to schedule a process.

First Come First Serve (FCFS)


As the name suggests, the process coming first in the ready state will be executed first by
the CPU irrespective of the burst time or the priority. This is implemented by using
the First In First Out (FIFO) queue. So, what happens is that when a process enters the ready state, the PCB of that process is linked to the tail of the queue, and the CPU executes processes by taking them from the head of the queue. Once the CPU is allocated to a process, it can't be taken back until that process finishes its execution.

Example:

In the above example, you can see that we have three processes P1, P2, and P3, and they
are coming in the ready state at 0ms, 2ms, and 2ms respectively. So, based on the arrival
time, the process P1 will be executed for the first 18ms. After that, the process P2 will be
executed for 7ms and finally, the process P3 will be executed for 10ms. One thing to be
noted here is that if the arrival time of the processes is the same, then the CPU can select
any process.
---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 0ms | 18ms |
| P2 | 16ms | 23ms |
| P3 | 23ms | 33ms |
---------------------------------------------
Total waiting time: (0 + 16 + 23) = 39ms
Average waiting time: (39/3) = 13ms

Total turnaround time: (18 + 23 + 33) = 74ms


Average turnaround time: (74/3) = 24.66ms
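
The table above can be reproduced with a short simulation. The following Python sketch uses the arrival times (0ms, 2ms, 2ms) and burst times (18ms, 7ms, 10ms) from the example.

# FCFS: serve strictly in arrival order, non-preemptively.
procs = [("P1", 0, 18), ("P2", 2, 7), ("P3", 2, 10)]   # (name, arrival, burst)

procs.sort(key=lambda p: p[1])
clock = 0
for name, arrival, burst in procs:
    start = max(clock, arrival)       # CPU may sit idle until the process arrives
    clock = start + burst             # runs to completion once started
    turnaround = clock - arrival
    waiting = turnaround - burst
    print(name, "waiting:", waiting, "turnaround:", turnaround)
# P1 waiting: 0 turnaround: 18
# P2 waiting: 16 turnaround: 23
# P3 waiting: 23 turnaround: 33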
Advantages of FCFS:

 It is the simplest scheduling algorithm and is easy to implement.


Disadvantages of FCFS:

 This algorithm is non-preemptive, so a process must be executed fully before other processes are allowed to execute.
 Throughput is not efficient.
 FCFS suffers from the Convoy effect, i.e. if a process with a very high burst time comes first, it will be executed first, even if a process with a much smaller burst time is waiting in the ready state.

Shortest Job First (Non-preemptive)


In FCFS, we saw that if a process with a very high burst time comes first, then processes with very low burst times have to wait for their turn. To remove this problem, we use a new approach, i.e. Shortest Job First or SJF.

In this technique, the process having the minimum burst time at a particular instant of time is executed first. It is a non-preemptive approach, i.e. once a process starts its execution, it is executed fully before another process is taken up.

Example:

In the above example, at 0ms, we have only one process i.e. process P2, so the process P2
will be executed for 4ms. Now, after 4ms, there are two new processes i.e. process P1 and
process P3. The burst time of P1 is 5ms and that of P3 is 2ms. So, amongst these two, the
process P3 will be executed first because its burst time is less than P1. P3 will be
executed for 2ms. Now, after 6ms, we have two processes with us i.e. P1 and P4 (because
we are at 6ms and P4 comes at 5ms). Amongst these two, the process P4 is having a less
burst time as compared to P1. So, P4 will be executed for 4ms and after that P1 will be
executed for 5ms. So, the waiting time and turnaround time of these processes will be:

---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 7ms | 12ms |
| P2 | 0ms | 4ms |
| P3 | 0ms | 2ms |
| P4 | 1ms | 5ms |
---------------------------------------------
Total waiting time: (7 + 0 + 0 + 1) = 8ms
Average waiting time: (8/4) = 2ms

Total turnaround time: (12 + 4 + 2 + 5) = 23ms


Average turnaround time: (23/4) = 5.75ms
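
The following Python sketch simulates non-preemptive SJF on this example. Burst times are stated in the walkthrough; the arrival times (P1 = 3ms, P3 = 4ms) are inferred from the table, so treat them as assumptions: P1 = 3/5, P2 = 0/4, P3 = 4/2, P4 = 5/4 (arrival/burst in ms).

# Non-preemptive SJF: among the arrived processes, pick the shortest burst.
procs = [("P1", 3, 5), ("P2", 0, 4), ("P3", 4, 2), ("P4", 5, 4)]

clock, remaining = 0, list(procs)
while remaining:
    ready = [p for p in remaining if p[1] <= clock]
    if not ready:                                   # CPU idle until next arrival
        clock = min(p[1] for p in remaining)
        continue
    job = min(ready, key=lambda p: p[2])            # shortest burst first
    remaining.remove(job)
    name, arrival, burst = job
    clock += burst                                  # runs to completion
    print(name, "waiting:", clock - arrival - burst, "turnaround:", clock - arrival)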
Advantages of SJF (non-preemptive):

 Short processes will be executed first.


Disadvantages of SJF (non-preemptive):
 It may lead to starvation if only short burst time processes keep coming into the ready state.

Shortest Job First (Preemptive)


This is the preemptive approach of the Shortest Job First algorithm. Here, at every
instant of time, the CPU will check for some shortest job. For example, at time 0ms, we
have P1 as the shortest process. So, P1 will execute for 1ms and then the CPU will check if
some other process is shorter than P1 or not. If there is no such process, then P1 will keep on executing for the next 1ms, and if there is some process shorter than P1, then that process will be executed. This continues until all the processes get executed.

This algorithm is also known as Shortest Remaining Time First i.e. we schedule the
process based on the shortest remaining time of the processes.

Example:

In the above example, at time 1ms, there are two processes, i.e. P1 and P2. Process P1 has a burst time of 6ms and process P2 has 8ms. So, P1 will be executed first. Since it is a preemptive approach, we have to check at every time quantum. At 2ms, we have three processes, i.e. P1 (5ms remaining), P2 (8ms), and P3 (7ms). Out of these three, P1 has the least remaining time, so it will continue its execution. After 3ms, we have four processes, i.e. P1 (4ms remaining), P2 (8ms), P3 (7ms), and P4 (3ms). Out of these four, P4 has the least burst time, so it will be executed. The process P4 keeps executing for the next 3ms because it has the shortest remaining time. After 6ms, we have 3 processes, i.e. P1 (4ms remaining), P2 (8ms), and P3 (7ms). So, P1 will be selected and executed. This process of time comparison will continue until all the processes are executed. So, the waiting and turnaround times of the processes will be:

---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 3ms | 9ms |
| P2 | 16ms | 24ms |
| P3 | 8ms | 15ms |
| P4 | 0ms | 3ms |
---------------------------------------------
Total waiting time: (3 + 16 + 8 + 0) = 27ms
Average waiting time: (27/4) = 6.75ms

Total turnaround time: (9 + 24 + 15 + 3) = 51ms


Average turnaround time: (51/4) = 12.75ms
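
A millisecond-by-millisecond simulation of this example is sketched below in Python. The burst times come from the walkthrough; the arrival times are inferred from it and the table, so treat them as assumptions: P1 = 1/6, P2 = 1/8, P3 = 2/7, P4 = 3/3 (arrival/burst in ms).

# Preemptive SJF (Shortest Remaining Time First): re-decide every 1 ms.
procs = {"P1": (1, 6), "P2": (1, 8), "P3": (2, 7), "P4": (3, 3)}
remaining = {name: burst for name, (arr, burst) in procs.items()}
finish, clock = {}, 0

while remaining:
    ready = {n: r for n, r in remaining.items() if procs[n][0] <= clock}
    if not ready:
        clock += 1                       # no process has arrived yet
        continue
    run = min(ready, key=ready.get)      # shortest remaining time wins
    remaining[run] -= 1                  # run it for one time unit
    clock += 1
    if remaining[run] == 0:
        del remaining[run]
        finish[run] = clock

for name, (arrival, burst) in procs.items():
    tat = finish[name] - arrival
    print(name, "waiting:", tat - burst, "turnaround:", tat)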
Advantages of SJF (preemptive):

 Short processes will be executed first.


Disadvantages of SJF (preemptive):

 It may result in starvation if short processes keep on coming.

Round-Robin
In this approach of CPU scheduling, we have a fixed time quantum, and the CPU is allocated to a process for only that amount of time at once. For example, if we have three processes P1, P2, and P3, and our time quantum is 2ms, then P1 will be given 2ms for its execution, then P2 will be given 2ms, then P3 will be given 2ms. After one cycle, again P1 will be given 2ms, then P2, and so on, until the processes complete their execution.

It is generally used in time-sharing environments, and there will be no starvation in the case of round-robin.

Example:

In the above example, every process will be given 2ms in one turn because we have taken
the time quantum to be 2ms. So process P1 will be executed for 2ms, then process P2 will
be executed for 2ms, then P3 will be executed for 2 ms. Again process P1 will be executed
for 2ms, then P2, and so on. The waiting time and turnaround time of the processes will
be:

---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 13ms | 23ms |
| P2 | 10ms | 15ms |
| P3 | 13ms | 21ms |
---------------------------------------------
Total waiting time: (13 + 10 + 13) = 36ms
Average waiting time: (36/3) = 12ms

Total turnaround time: (23 + 15 + 21) = 59ms


Average turnaround time: (59/3) = 19.66ms
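
The following Python sketch simulates this round-robin example with a 2ms quantum. The burst times are inferred from the table (P1 = 10ms, P2 = 5ms, P3 = 8ms, all arriving at 0ms), so treat them as assumptions; with them, the numbers come out the same.

from collections import deque

QUANTUM = 2
burst = {"P1": 10, "P2": 5, "P3": 8}
remaining = dict(burst)
queue = deque(["P1", "P2", "P3"])
finish, clock = {}, 0

while queue:
    name = queue.popleft()
    run = min(QUANTUM, remaining[name])   # run for a quantum, or until done
    clock += run
    remaining[name] -= run
    if remaining[name] > 0:
        queue.append(name)                # back to the tail of the ready queue
    else:
        finish[name] = clock

for name in burst:
    tat = finish[name]                    # arrival is 0, so turnaround = finish time
    print(name, "waiting:", tat - burst[name], "turnaround:", tat)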
Advantages of round-robin:

 No starvation will be there in round-robin because every process will get chance
for its execution.
 Used in time-sharing systems.
Disadvantages of round-robin:

 We have to perform a lot of context switching here, and the context switch time is pure overhead for the CPU.

Priority Scheduling (Non-preemptive)


In this approach, we have a priority number associated with each process and based on
that priority number the CPU selects one process from a list of processes. The priority
number can be anything. It is just used to identify which process is having a higher
priority and which process is having a lower priority. For example, you can denote 0 as
the highest priority process and 100 as the lowest priority process. Also, the reverse can
be true i.e. you can denote 100 as the highest priority and 0 as the lowest priority.

Example:

In the above example, at 0ms we have only one process, P1. So P1 will execute for 5ms, because we are using a non-preemptive technique here. After 5ms, there are three processes in the ready state, i.e. process P2, process P3, and process P4. Out of these three processes, the process P4 has the highest priority, so it will be executed for 6ms; after that, process P2 will be executed for 3ms, followed by the process P3. The waiting and turnaround times of the processes will be:
---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 0ms | 5ms |
| P2 | 10ms | 13ms |
| P3 | 12ms | 20ms |
| P4 | 2ms | 8ms |
---------------------------------------------
Total waiting time: (0 + 10 + 12 + 2) = 24ms
Average waiting time: (24/4) = 6ms

Total turnaround time: (5 + 13 + 20 + 8) = 46ms


Average turnaround time: (46/4) = 11.5ms
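
A Python sketch of this example is given below. The arrival times, P3's burst time, and the priority numbers are inferred from the walkthrough and the table, so treat them as assumptions (smaller number = higher priority): P1 = 0/5/3, P2 = 1/3/2, P3 = 2/8/4, P4 = 3/6/1 (arrival/burst/priority).

# Non-preemptive priority scheduling: among arrived processes,
# pick the one with the highest priority (smallest number here).
procs = [("P1", 0, 5, 3), ("P2", 1, 3, 2), ("P3", 2, 8, 4), ("P4", 3, 6, 1)]

clock, remaining = 0, list(procs)
while remaining:
    ready = [p for p in remaining if p[1] <= clock]
    if not ready:                                   # CPU idle until next arrival
        clock = min(p[1] for p in remaining)
        continue
    job = min(ready, key=lambda p: p[3])            # highest priority first
    remaining.remove(job)
    name, arrival, burst, _ = job
    clock += burst                                  # runs to completion once chosen
    print(name, "waiting:", clock - arrival - burst, "turnaround:", clock - arrival)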
Advantages of priority scheduling (non-preemptive):

 Higher priority processes like system processes are executed first.


Disadvantages of priority scheduling (non-preemptive):

 It can lead to starvation if only higher-priority processes keep coming into the ready state.
 If the priorities of two or more processes are the same, then we have to use some other scheduling algorithm to break the tie.

Multilevel Queue Scheduling


In multilevel queue scheduling, we divide the whole processes into some batches or
queues and then each queue is given some priority number. For example, if there are
four processes P1, P2, P3, and P4, then we can put process P1 and P4 in queue1 and
process P2 and P3 in queue2. Now, we can assign some priority to each queue. So, we
can take the queue1 as having the highest priority and queue2 as the lowest priority. So,
all the processes of queue1 will be executed first, followed by queue2. Inside queue1, we can apply some other scheduling algorithm for the execution of its processes, and the same holds for queue2.

So, multiple queues for processes are maintained that are having common characteristics
and each queue has its own priority and there is some scheduling algorithm used in each
of the queues.

Example:

In the above example, we have two queues, i.e. queue1 and queue2. Queue1 has the higher priority and uses the FCFS approach, while queue2 uses the round-robin approach (time quantum = 2ms).

Since the priority of queue1 is higher, queue1 will be executed first. In queue1, we have two processes, i.e. P1 and P4, and we are using FCFS. So, P1 will be executed followed by P4. Now, the job of queue1 is finished. After this, the execution of the processes of queue2 will be started using the round-robin approach.

Multilevel Feedback Queue Scheduling


Multilevel feedback queue scheduling is similar to multilevel queue scheduling but here
the processes can change their queue also. For example, if a process is in queue1 initially
then after partial execution of the process, it can go into some other queue.

In a multilevel feedback queue, we have a list of queues with some priority, and the higher-priority queue is always executed first. Let's assume that we have two queues, i.e. queue1 and queue2, and that we are using round-robin in both, with a time quantum of 2ms for queue1 and 3ms for queue2. Now, if a process starts executing in queue1 and gets fully executed within 2ms, that's fine; its priority will not be changed. But if the execution of the process is not completed within the time quantum of queue1, then the priority of that process is reduced and it is placed in the lower-priority queue, i.e. queue2, and this process continues.

While a lower-priority queue is executing, if a process arrives in the higher-priority queue, then the execution of the lower-priority queue is stopped and the execution of the higher-priority queue is started. This can lead to starvation, because if processes keep arriving in the higher-priority queue, the lower-priority queue keeps on waiting for its turn. The sketch after this paragraph illustrates the demotion idea.
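
Here is a minimal Python sketch of the two-queue setup described above (quantum 2ms for queue1, 3ms for queue2). The process names and burst times are made up for illustration.

from collections import deque

q1 = deque([("A", 1), ("B", 5)])   # (name, remaining burst in ms)
q2 = deque()

def run_level(queue, quantum, lower):
    # Run every process in this queue for at most one quantum each.
    while queue:
        name, remaining = queue.popleft()
        used = min(quantum, remaining)
        print(f"{name} runs {used}ms in this queue")
        if remaining > quantum:
            # Not finished within the quantum: demote to the lower queue
            # (or requeue here if this is already the lowest level).
            target = lower if lower is not None else queue
            target.append((name, remaining - used))
        else:
            print(f"{name} finished")

run_level(q1, 2, q2)    # A finishes in 1ms; B uses 2ms and is demoted with 3ms left
run_level(q2, 3, None)  # B finishes in queue2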

What is the difference between Preemptive and Non-Preemptive scheduling?

In the Operating System, the process scheduling algorithms can be divided into two broad categories, i.e. Preemptive Scheduling and Non-Preemptive Scheduling.

Preemptive Scheduling
In preemptive scheduling, the CPU executes a process for a limited period of time, after which the process has to wait for its next turn. That is, in preemptive scheduling, the state of a process gets changed: the process may go from the running state to the ready state, or from the waiting state to the ready state. The resources are allocated to the process for a limited amount of time, after which they are taken back, and the process goes to the ready queue if it still has some CPU burst time remaining. Some of the preemptive scheduling algorithms are Round-robin, SJF (preemptive), etc.

Non-preemptive Scheduling
In non-preemptive scheduling, if some resource is allocated to a process, then that resource will not be taken back until the process completes. Other processes present in the ready queue have to wait for their turn and cannot forcefully take the CPU. Once the CPU is allocated to a process, it is held by that process until it completes its execution or goes into the waiting state for an I/O operation.

Difference between Preemptive and Non-preemptive


scheduling
 In preemptive scheduling, the CPU can be taken back from the process at any time
during the execution of the process. But in non-preemptive scheduling, if the CPU
is allocated, then it will not be taken back until the process completes its
execution.
 In preemptive scheduling, a process can be interrupted by some high priority
process but in non-preemptive scheduling no interruption by other processes is
allowed.
 The preemptive approach is flexible in nature while the non-preemptive approach
is rigid in nature.
 In preemptive scheduling, the CPU utilization is more as compared to the non-
preemptive approach.
 In preemptive scheduling, the waiting and response time are generally lower, while in non-preemptive scheduling, the waiting and response time tend to be higher.
 In a preemptive approach, if higher priority processes keep on coming, it can lead to starvation of the lower priority processes. But in a non-preemptive approach, if processes with shorter burst times keep on coming (as in Shortest Job First), then processes with longer burst times can starve.

What is Burst time, Arrival time, Exit time, Response time, Waiting time, Turnaround
time, and Throughput?

When we are dealing with CPU scheduling algorithms, we encounter some confusing terms like Burst time, Arrival time, Exit time, Waiting time, Response time, Turnaround time, and Throughput. These parameters are used to measure the performance of a system.

Burst time
Every process in a computer system requires some amount of time for its execution. This time is both the CPU time and the I/O time. The CPU time is the time taken by the CPU to execute the process, while the I/O time is the time taken by the process to perform some I/O operation. In general, we ignore the I/O time and consider only the CPU time for a process. So, Burst time is the total time taken by the process for its execution on the CPU.

Arrival time
Arrival time is the time when a process enters into the ready state and is ready for its
execution.

For example, if three processes enter the ready state at 0 ms, 1 ms, and 2 ms, then their arrival times are 0 ms, 1 ms, and 2 ms respectively.

Exit time
Exit time is the time when a process completes its execution and exit from the system.

Response time
Response time is the time between a process entering the ready state and getting the CPU for the first time. For example, here we are using the First Come First Serve CPU scheduling algorithm for the below 3 processes:

Here, the response time of all the 3 processes are:

 P1: 0 ms
 P2: 7 ms because the process P2 has to wait for 8 ms during the execution of P1 and only then gets the CPU for the first time. Also, the arrival time of P2 is 1 ms. So, the response time will be 8-1 = 7 ms.
 P3: 13 ms because the process P3 has to wait for the execution of P1 and P2, i.e. for 8+7 = 15 ms, before the CPU is allocated to the process P3 for the first time. Also, the arrival time of P3 is 2 ms. So, the response time for P3 will be 15-2 = 13 ms.
Response time = Time at which the process gets the CPU for the first time -
Arrival time

Waiting time
Waiting time is the total time spent by the process in the ready state waiting for CPU. For
example, consider the arrival time of all the below 3 processes to be 0 ms, 0 ms, and 2
ms and we are using the First Come First Serve scheduling algorithm.

Then the waiting time for all the 3 processes will be:
 P1: 0 ms
 P2: 8 ms because P2 has to wait for the complete execution of P1 and the arrival time of P2 is 0 ms.
 P3: 13 ms because P3 will be executed after P1 and P2, i.e. after 8+7 = 15 ms, and the arrival time of P3 is 2 ms. So, the waiting time of P3 will be: 15-2 = 13 ms.
Waiting time = Turnaround time - Burst time

In the above example, the processes have to wait only once. But in many other
scheduling algorithms, the CPU may be allocated to the process for some time and then
the process will be moved to the waiting state and again after some time, the process will
get the CPU and so on.

There is a difference between waiting time and response time. Response time is the time
spent between the ready state and getting the CPU for the first time. But the waiting time
is the total time taken by the process in the ready state. Let's take an example of a round-robin scheduling algorithm. The time quantum is 2 ms.

In the above example, the response time of the process P2 is 2 ms because after 2 ms, the
CPU is allocated to P2 and the waiting time of the process P2 is 4 ms i.e turnaround time
- burst time (10 - 6 = 4 ms).

Turnaround time
Turnaround time is the total amount of time spent by the process from coming in the
ready state for the first time to its completion.

Turnaround time = Burst time + Waiting time

or
Turnaround time = Exit time - Arrival time

For example, if we take the First Come First Serve scheduling algorithm, the order of arrival of the processes is P1, P2, P3, and the processes take 2, 5, and 10 seconds respectively, then the turnaround time of P1 is 2 seconds: when it comes at the 0th second, the CPU is allocated to it, so the waiting time of P1 is 0 sec and the turnaround time will be the burst time only, i.e. 2 seconds. The turnaround time of P2 is 7 seconds because the process P2 has to wait for 2 seconds for the execution of P1 and hence the waiting time
of P2 will be 2 seconds. After 2 seconds, the CPU will be given to P2 and P2 will execute
its task. So, the turnaround time will be 2+5 = 7 seconds. Similarly, the turnaround time
for P3 will be 17 seconds because the waiting time of P3 is 2+5 = 7 seconds and the burst
time of P3 is 10 seconds. So, turnaround time of P3 is 7+10 = 17 seconds.

Different CPU scheduling algorithms produce different turnaround time for the same set
of processes. This is because the waiting time of processes differ when we change the
CPU scheduling algorithm.
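As a quick check of these formulas, here is a small C sketch that computes the FCFS waiting and turnaround times for the worked example above (burst times 2, 5, and 10 seconds, all processes arriving at time 0, as in the example); the array layout is an illustrative choice.

#include <stdio.h>

/* A minimal FCFS metrics sketch for the worked example above. With all
   arrivals at 0, waiting time = completion time of the previous processes
   and turnaround time = exit time. */

int main(void) {
    int burst[] = {2, 5, 10};
    int n = 3;
    int completion = 0;

    for (int i = 0; i < n; i++) {
        int waiting = completion;        /* time spent before getting the CPU */
        completion += burst[i];          /* FCFS: each process runs to completion */
        int turnaround = completion;     /* exit time - arrival time (arrival = 0) */
        printf("P%d: waiting = %d s, turnaround = %d s\n",
               i + 1, waiting, turnaround);
    }
    return 0;
}

Running it prints waiting times 0, 2, 7 and turnaround times 2, 7, 17, matching the example.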

Throughput
Throughput is a way to measure the efficiency of a CPU. It can be defined as the number of processes executed by the CPU in a given amount of time. For example, let's say the process P1 takes 3 seconds for execution, P2 takes 5 seconds, and P3 takes 10 seconds. So, the three processes take 3+5+10 = 18 seconds in total, and the throughput will be 3/18 = 1/6 process per second, i.e. on average one process completes every 6 seconds.

What is Starvation and Aging?

Brief Overview of Priority Scheduling Algorithm


Before getting into the details of Starvation, let's have a quick overview of Priority
Scheduling algorithms.

In Priority scheduling technique, we assign some priority to every process we have and
based on that priority, the CPU will be allocated and the process will be executed. Here,
the CPU will be allocated to the process that has the highest priority. We don't care about the burst time here. Even if another process has a smaller burst time, the CPU will be allocated to the process having the highest priority.

In the above image, we can see the priority of process P1 is the highest, followed by P3
and P2. So, the CPU will be allocated to process P1, then to process P3 and then to
process P2.

NOTE: In our example, we are taking 0 as the highest priority number and 100 or more
as the lowest priority number. You can take the reverse of it also but the concept will be
the same i.e. higher priority process will be allocated the CPU first.

Starvation
If you closely look at the concept of Priority scheduling, then you might have noticed one
thing. What if the priority of some process is very low and the higher priority processes
keep on coming and the CPU is allocated to that higher priority processes and the low
priority process keeps on waiting for its turn. Let's have an example:

In the above example, the process P2 has the highest priority and the process P1 has the lowest priority. In general, we have a number of processes in the ready state waiting for execution. So, as time passes, if the processes arriving at the CPU all have a higher priority than the process P1, then the process P1 will keep on waiting for its turn for CPU allocation and it will never get the CPU, because all the other processes have higher priority than P1. This is called Starvation.

Starvation is a phenomenon in which a process that is present in the ready state and has low priority keeps on waiting for CPU allocation because other processes with higher priority keep arriving over time.

So, starvation should be removed because if some process is in the ready state then we should eventually provide the CPU to it. Since the process is of low priority, we can take our time for CPU allocation to that process, but we must ensure that the CPU is eventually allocated.

Aging
To avoid starvation, we use the concept of Aging. In Aging, after some fixed amount of
time quantum, we increase the priority of the low priority processes. By doing so, as time
passes, the lower priority process becomes a higher priority process.

For example, if a process P has a priority number of 75 at 0 ms, then after every 5 ms (you can use any time quantum), we can decrease the priority number of the process P by 1 (instead of 1, you can also take any other number). So, after 5 ms, the priority
of the process P will be 74. Again after 5 ms, we will decrease the priority number of
process P by 1. So, after 10 ms, the priority of the process P will become 73 and this
process will continue. After a certain period of time, the process P will become a high
priority process when the priority number comes closer to 0 and the process P will get
the CPU for its execution. In this way, the lower priority process also gets the CPU. No
doubt the CPU is allocated after a very long time but since the priority of the process is
very low so, we are not that much concerned about the response time of the process. The
only thing that we are taking care of is starvation.

So, we are Aging our low priority process to make it a high priority process
and as a result, to allocate the CPU for it.

Whenever you are using the Priority scheduling algorithm or the Shortest Job First algorithm, make sure to use the concept of Aging, otherwise your processes may end up in starvation. A small sketch of aging is given below.
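Here is a minimal sketch of the aging rule described above, assuming a lower priority number means a higher priority; the Process struct is hypothetical, and the 5 ms step and starting priority number of 75 follow the example.

#include <stdio.h>

/* A minimal aging sketch: every 5 ms spent waiting lowers the priority
   number by 1, which raises the effective priority (0 = highest). */

typedef struct {
    const char *name;
    int priority;   /* priority number; decremented while waiting */
} Process;

void age_waiting_process(Process *p, int elapsed_ms) {
    int steps = elapsed_ms / 5;   /* one step per 5 ms of waiting */
    p->priority -= steps;
    if (p->priority < 0)
        p->priority = 0;          /* clamp at the highest possible priority */
}

int main(void) {
    Process p = {"P", 75};
    age_waiting_process(&p, 10);  /* after 10 ms in the ready queue */
    printf("%s now has priority number %d\n", p.name, p.priority);   /* 73 */
    return 0;
}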

What is a Thread in OS and what are the differences between a Process and a Thread?

Thread
A thread is an execution unit that has its own program counter, a stack, and a set of registers, and that resides within a process. Threads can't exist outside any process, and each thread belongs to exactly one process. Information like the code segment, files, and data segment can be shared by the different threads.

Threads are popularly used to improve an application through parallelism. On a single processor, only one thread is executed at a time, but the CPU switches rapidly between the threads to give the illusion that the threads are running in parallel.

Threads are also known as light-weight processes.

The diagram above shows the single-threaded process and the multi-threaded process.
A single-threaded process is a process with a single thread. A multi-threaded
process is a process with multiple threads. As the diagram shows, each of the multiple threads has its own registers, stack, and counter, but they share the code and data segment.
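For a concrete picture, here is a minimal POSIX threads sketch in which two threads of one process share a global counter (the data segment) while each keeps its own local variable on its own stack; the worker function and the counter are illustrative, not from the text.

#include <pthread.h>
#include <stdio.h>

/* Two threads in one process: `shared_counter` lives in the shared data
   segment, `local` lives on each thread's private stack. */

int shared_counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    long id = (long)arg;
    int local = 0;                    /* private to this thread's stack */
    for (int i = 0; i < 1000; i++) {
        local++;
        pthread_mutex_lock(&lock);
        shared_counter++;             /* shared, so it must be protected */
        pthread_mutex_unlock(&lock);
    }
    printf("thread %ld: local = %d\n", id, local);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)(long)1);
    pthread_create(&t2, NULL, worker, (void *)(long)2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* 2000 */
    return 0;
}

Compile with a pthreads-enabled compiler, e.g. gcc file.c -lpthread.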

Types of Thread
User-Level Thread

1. The user-level threads are managed by users and the kernel is not aware of it.
2. These threads are faster to create and manage.
3. The kernel manages the process as if it were single-threaded.
4. It is implemented using user-level libraries and not by system calls. So, no call to
the operating system is made when a thread switches the context.
5. Each process has its own private thread table to keep the track of the threads.
Kernel-Level Thread

1. The kernel knows about the thread and is supported by the OS.
2. The threads are created and implemented using system calls.
3. The thread table is not present here for each process. The kernel has a thread table
to keep the track of all the threads present in the system.
4. Kernel-level threads are slower to create and manage as compared to user-level
threads.

Advantages of threads
1. Performance: Threads improve the overall performance(throughput,
computational speed, responsiveness) of a program.
2. Resource sharing: As the threads can share the memory and resources of any
process it allows any application to perform multiple activities inside the same
address space.
3. Utilization of Multiple Processor Architecture: The different threads can
run parallel on the multiple processors hence, this enables the utilization of the
processor to a large extent and efficiency.
4. Reduced Context Switching Time: The threads minimize the context
switching time as in Thread Switching, the virtual memory space remains the
same.
5. Concurrency: Thread provides concurrency within a process.
6. Parallelism: Parallel programming techniques are easier to implement.

Difference between process and thread


1. Definition: Process means a program that is currently under execution, whereas
thread is an entity that resides within a process that can be scheduled for
execution.
2. Termination Time: The processes take more time to terminate, whereas
threads take less time to terminate.
3. Creation Time: The process creation time takes more time as compared to
thread creation time.
4. Context Switching Time: Process context switching takes more time as
compared to the thread context switching.
5. Communication: The communication between threads requires less time as
compared to the communication between processes.
6. Resources: Processes are also called heavyweight processes as they use more
resources. The threads are called light-weight processes as they share resources.
7. Memory: A Process is run in separate memory space, whereas threads run in
shared memory space.
8. Sharing Data: Different processes have different copies of data, files, and codes
whereas threads share the same copy of data, file and code segments.
9. Example: Opening a new browser (say Chrome, etc) is an example of creating a
process. At this point, a new process will start to execute. On the contrary, opening
multiple tabs in the browser is an example of creating the thread.

What is the concept of Multithreading in OS and what are its benefits?

Multithreading
Multithreading is a phenomenon of executing multiple threads at the same time. To
understand the concept of multithreading, you must understand what is a thread and
a process .

A process is a program in execution. A process can further be divided into sub-processes known as threads.

Example: Playing a video and downloading it at the same time is an example of multithreading.

As we have two types of threads, i.e. user-level threads and kernel-level threads, there must exist a relationship between them for these threads to function together. This relation is established by using Multithreading Models. There are three common ways of establishing this relationship.

1. Many-to-One Model
2. One-to-One Model
3. Many-to-Many Model

Many-to-One Model
As the name suggests, there is a many-to-one relationship between user threads and a kernel thread. Here, multiple user threads are associated or mapped with one kernel thread. The thread management is done at the user level, so it is more efficient.

Drawbacks
1. As multiple user threads are mapped to one kernel thread, if one user thread makes a blocking system call (like read(), where the thread or process has to wait until the read event is completed), it will block the kernel thread, which in turn blocks all the other threads.
2. As only one thread can access the kernel thread at a time, multiple threads are unable to run in parallel in a multiprocessor system. Even though we have multiple processors, one kernel thread will run on only one processor. Hence, the user threads will also run only on the processor on which the mapped kernel thread is running.

One-to-One Model
From the name itself, we can understand that one user thread is mapped to one kernel thread.

Advantages over Many-to-One Model


1. In this model, the first drawback of the Many-to-One model is solved. As each user
thread is mapped to different kernel threads so even if any user thread makes a
blocking system call, the other user threads won't be blocked.
2. The second drawback is also overcome. This model allows the threads to run in parallel on a multiprocessor: since each user thread is mapped to its own kernel thread, the kernel threads can run on different processors, and hence each user thread can run on one of the processors.

Disadvantages
1. Each time we create a user thread we have to create a kernel thread. So, the
overhead of creating a kernel thread can affect the performance of the application.
2. Also, in a multiprocessor system, there is a limit on how many threads can run at a time. Suppose there are four processors in the system; then at most four threads can run at the same time. So, if you have 5 threads and try to run them all at once, it may not work. Therefore, the application should restrict the number of kernel threads that it supports.

Many-to-Many Model
From the name itself, we can understand that many user threads are mapped to a smaller or equal number of kernel threads. The number of kernel threads is specific to a particular application or machine.

Advantages over the other two models


1. Whenever a user thread makes a blocking system call other threads are not
blocked.
2. There can be as many user threads as necessary. Also, the threads can run parallel
on multiple processors.
3. Here we don't need to create as many kernel threads as the user thread. So, there
is no problem with any extra overhead which was caused due to creating kernel
thread.
4. The number of kernel threads supported here is specific to the application or
machine.
So, this is the best model that we can have in a multithreading system to establish the
relationship between user-thread and kernel thread.

Benefits of MultiThreading
1. Resource sharing: As the threads can share the memory and resources of any
process it allows any application to perform multiple activities inside the same
address space.
2. Utilization of Multiple Processor Architecture: The different threads can
run parallel on the multiple processors hence, this enables the utilization of the
processor to a large extent and efficiency.
3. Reduced Context Switching Time: The threads minimize the context
switching time as in Thread Context Switching, the virtual memory space remains
the same.
4. Economical: The allocation of memory and resources during process creation comes with a cost. As threads can share the resources of their process, it is more economical to create and context-switch threads.

What is Process Synchronization in Operating System?

In the Operating System, there are a number of processes present in a particular state. At
the same time, we have a limited amount of resources present, so those resources need to
be shared among various processes. But you should make sure that no two processes are
using the same resource at the same time because this may lead to data inconsistency.
So, synchronization of processes should be there in the Operating System. These processes
that are sharing resources between each other are called Cooperative Processes and
the processes whose execution does not affect the execution of other processes are
called Independent Processes .

In this blog, we will learn about Process Synchronization in Operating System. We will
learn the two important concepts that are related to process synchronization i.e. Race
Condition and Critical Section .

Race Condition
In an Operating System, we have a number of processes and these processes require a
number of resources. Now, think of a situation where we have two processes and these
processes are using the same variable "a". They are reading the variable and then
updating the value of the variable and finally writing the data in the memory.

SomeProcess(){
...
read(a) //instruction 1
a = a + 5 //instruction 2
write(a) //instruction 3
...
}
In the above, you can see that a process after doing some operations will have to read the
value of "a", then increment the value of "a" by 5 and at last write the value of "a" in the
memory. Now, we have two processes P1 and P2 that needs to be executed. Let's take the
following two cases and also assume that the value of "a" is 10 initially.

1. In this case, process P1 will be executed fully (i.e. all the three instructions) and
after that, the process P2 will be executed. So, the process P1 will first read the
value of "a" to be 10 and then increment the value by 5 and make it to 15. Lastly,
this value will be updated in the memory. So, the current value of "a" is 15. Now,
the process P2 will read the value i.e. 15, increment with 5(15+5 = 20) and finally
write it to the memory i.e. the new value of "a" is 20. Here, in this case, the final
value of "a" is 20.
2. In this case, let's assume that the process P1 starts executing. So, it reads the value
of "a" from the memory and that value is 10(initial value of "a" is taken to be 10).
Now, at this time, context switching happens between process P1 and P2. Now, P2
will be in the running state and P1 will be in the waiting state and the context of
the P1 process will be saved. As the process P1 didn't change the value of "a", so,
P2 will also read the value of "a" to be 10. It will then increment the value of "a" by
5 and make it to 15 and then save it to the memory. After the execution of the
process P2, the process P1 will be resumed and the context of the P1 will be read.
So, the process P1 is having the value of "a" as 10(because P1 has already executed
the instruction 1). It will then increment the value of "a" by 5 and write the final
value of "a" in the memory i.e. a = 15. Here, the final value of "a" is 15.
In the above two cases, after the execution of the two processes P1 and P2, the final value
of "a" is different i.e. in 1st case it is 20 and in 2nd case, it is 15. What's the reason behind
this?

The processes are using the same resource here i.e. the variable "a". In the first
approach, the process P1 executes first and then the process P2 starts executing. But in
the second case, the process P1 was stopped after executing one instruction and after
that the process P2 starts executing. And here both the processes are operating on the same resource, i.e. the variable "a", at the same time.

Here, the order of execution of the processes changes the output. All these processes are in a race, each claiming that its output is the correct one. This is called a race condition.
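The same race can be reproduced with real threads. Below is a minimal sketch using POSIX threads, where the read-increment-write on "a" is deliberately left unprotected, so a context switch between the read and the write can produce 15 instead of 20; the thread function is an illustrative stand-in for P1 and P2.

#include <pthread.h>
#include <stdio.h>

/* A minimal race-condition sketch: two threads each perform the three
   instructions from the text (read, increment by 5, write) on the shared
   variable `a` without any synchronization. */

int a = 10;                       /* shared variable, initially 10 */

void *add_five(void *arg) {
    (void)arg;
    int tmp = a;                  /* instruction 1: read(a)   */
    tmp = tmp + 5;                /* instruction 2: a = a + 5 */
    a = tmp;                      /* instruction 3: write(a)  */
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, add_five, NULL);
    pthread_create(&p2, NULL, add_five, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    /* Usually prints 20, but a context switch between the read and the
       write can make it print 15, which is exactly the second case above. */
    printf("a = %d\n", a);
    return 0;
}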

Critical Section
The code in the above part is accessed by all the processes and this can lead to data
inconsistency. So, this code should be placed in the critical section. The critical section
code can be accessed by only one process at a time and no other process can access that
critical section code. All the shared variables or resources are placed in the critical
section that can lead to data inconsistency.

All the Critical Section problems need to satisfy the following three conditions:
 Mutual Exclusion: If a process is in the critical section, then other processes
shouldn't be allowed to enter into the critical section at that time i.e. there must be
some mutual exclusion between processes.
 Progress: If no process is executing in the critical section, then the processes that want to enter the critical section should be able to do so in finite time, i.e. the decision of which process enters next cannot be postponed indefinitely.
 Bounded Waiting: There must be a limit on the number of times a process can enter the critical section while others are waiting, i.e. there must be some upper bound. If no upper bound were there, the same process would be allowed to go into the critical section again and again and other processes would never get a chance to enter the critical section.
So, in order to remove the problem of race condition, there must be synchronization
between various processes present in the system for its execution otherwise, it may lead
to data inconsistency i.e. a proper order should be defined in which the processes can
execute.

What is semaphore and what are its types?

In a system, we have a limited amount of resources that are being shared between various processes. A resource should be used by only one process at a time; ensuring this is called process synchronization. So, in an Operating System, we must have synchronization between various processes. This synchronization between processes can be achieved with the help of a semaphore. So, in this blog, we will learn about semaphores and we will also look at the types of semaphores.

Semaphore
A semaphore is a variable that indicates the number of resources that are available in a
system at a particular time and this semaphore variable is generally used to achieve the
process synchronization. It is generally denoted by " S ". You can use any other variable
name of your choice.

A semaphore uses two functions i.e. wait() and signal() . Both these functions are
used to change the value of the semaphore but the value can be changed by only one
process at a particular time and no other process should change the value
simultaneously.

The wait() function is used to decrement the value of the semaphore variable " S " by one if the value of the semaphore variable is positive. If the value of the semaphore variable is 0, the process busy-waits in the while loop until the value becomes positive, and only then decrements it.

wait(S) {
while (S == 0); // busy-wait here until S becomes positive (note the ";")
S--;
}
The signal() function is used to increment the value of the semaphore variable by one.

signal(S) {
S++;
}

Types of Semaphore
There are two types of semaphores:

 Binary Semaphores: In Binary semaphores, the value of the semaphore


variable will be 0 or 1. Initially, the value of semaphore variable is set to 1 and if
some process wants to use some resource then the wait() function is called and
the value of the semaphore is changed to 0 from 1. The process then uses the
resource and when it releases the resource then the signal() function is called and
the value of the semaphore variable is increased to 1. If at a particular instant of
time, the value of the semaphore variable is 0 and some other process wants to use
the same resource then it has to wait for the release of the resource by the previous
process. In this way, process synchronization can be achieved.
 Counting Semaphores: In Counting semaphores, firstly, the semaphore
variable is initialized with the number of resources available. After that, whenever
a process needs some resource, then the wait() function is called and the value of
the semaphore variable is decreased by one. The process then uses the resource
and after using the resource, the signal() function is called and the value of the
semaphore variable is increased by one. So, when the value of the semaphore variable goes to 0, i.e. all the resources are taken by processes and there is no resource left to be used, then any other process that wants to use a resource has to wait for its turn. In this way, we achieve process synchronization. A sketch using the POSIX semaphore API follows this list.
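For reference, here is a minimal sketch using the POSIX semaphore API, where sem_wait() plays the role of wait() and sem_post() the role of signal(); the pool size of 3 resources and the 5 worker threads are arbitrary illustrative choices.

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

/* A counting-semaphore sketch: the semaphore `pool` is initialized to the
   number of available resources (3), so at most 3 threads can hold a
   resource at the same time. */

sem_t pool;

void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);              /* wait(S): acquire one resource */
    printf("thread %ld acquired a resource\n", id);
    /* ... use the resource ... */
    sem_post(&pool);              /* signal(S): release it */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, 3);        /* counting semaphore, S = 3 */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}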

Advantages of semaphore
 The mutual exclusion principle is followed when you use semaphores because a binary semaphore allows only one process to enter the critical section at a time.
 Here, you need not verify that a process should be allowed to enter into the critical
section or not. So, processor time is not wasted here.

Disadvantages of semaphore
 While using semaphore, if a low priority process is in the critical section, then no
other higher priority process can get into the critical section. So, the higher
priority process has to wait for the complete execution of the lower priority
process.
 The wait() and signal() functions need to be implemented in the correct order. So,
the implementation of a semaphore is quite difficult.

What is Deadlock and what are its four necessary conditions?

What is Deadlock?
Deadlock is a situation where two or more processes are waiting for each other. For
example, let us assume, we have two processes P1 and P2. Now, process P1 is holding the
resource R1 and is waiting for the resource R2. At the same time, the process P2 is
having the resource R2 and is waiting for the resource R1. So, the process P1 is waiting
for process P2 to release its resource and at the same time, the process P2 is waiting for
process P1 to release its resource. And no one is releasing any resource. So, both are
waiting for each other to release the resource. This leads to infinite waiting and no work
is done here. This is called Deadlock.

If a process is in the waiting state and is unable to change its state because the resources required by the process are held by some other waiting process, then the system is said to be in Deadlock.

Let's take one real-life example to understand the concept of Deadlock in a better way.
Suppose, you are studying in a school and you are using the bus service also. So, you
have to pay two fees i.e. bus fee and tuition fee. Now, think of a situation, when you go
for submitting the bus fee and the accountant says that you have to submit the tuition fee
first and then the bus fee. So, you go to submit the tuition fees on the other counter and
the accountant there said that you have to first submit the bus fees and then the tuition
fees. So, what will you do here? You are in a situation of deadlock here. You don't know
what to submit first, bus fees or tuition fees?

Necessary Conditions of Deadlock


There are four different conditions that result in Deadlock. These four conditions are
also known as Coffman conditions and these conditions are not mutually exclusive. Let's
look at them one by one.
 Mutual Exclusion: A resource can be held by only one process at a time. In
other words, if a process P1 is using some resource R at a particular instant of
time, then some other process P2 can't hold or use the same resource R at that
particular instant of time. The process P2 can make a request for that resource R
but it can't use that resource simultaneously with process P1.

 Hold and Wait: A process can hold a number of resources at a time and at the
same time, it can request for other resources that are being held by some other
process. For example, a process P1 can hold two resources R1 and R2 and at the
same time, it can request some resource R3 that is currently held by process P2.

 No preemption: A resource can't be forcefully preempted from a process by another process. For example, if a process P1 is using some resource R, then some other process P2 can't forcefully take that resource (if that were allowed, what would be the need for the various scheduling algorithms?). The process P2 can request the resource R and wait for that resource to be freed by the process P1.
 Circular Wait: Circular wait is a condition when the first process is waiting for
the resource held by the second process, the second process is waiting for the
resource held by the third process, and so on. At last, the last process is waiting for
the resource held by the first process. So, every process is waiting for each other to
release the resource and no one is releasing their own resource. Everyone is
waiting here for getting the resource. This is called a circular wait.

Deadlock will happen only if all the above four conditions hold simultaneously.

Difference between Deadlock and Starvation


There is a difference between a Deadlock and Starvation. You shouldn't get confused
between these. In the case of Deadlock, each and every process is waiting for each other
to release the resource. But in the case of starvation, the high priority processes keep on
executing and the lower priority processes keep on waiting for their execution. So, every
deadlock is always starvation, but every starvation is not a deadlock. Deadlock is infinite
waiting but starvation is not an infinite waiting. Starvation is long waiting. If the higher
priority processes don't come, then the lower priority process will get a chance to be
executed in case of starvation. So, in the case of starvation, we have long waiting and not
infinite waiting.

What are Deadlock handling techniques in Operating System?

So, you know what Deadlock is and what are those four necessary conditions. Cool. The
four conditions of deadlock are:

1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
To remove deadlock from our system, we need to avoid any one of the above four
conditions of deadlock. So, there are various ways of deadlock handling. Let's see all of
them, one by one.

1. Deadlock Prevention
In this method, the system will prevent any deadlock condition from happening, i.e. the system will make sure that at least one of the four conditions of deadlock is violated. We prevent one of the four conditions by applying some techniques, and these techniques can be very costly. So, you should apply deadlock prevention only in those situations where a deadlock would cause a drastic change in the system.

For example, in hospitals, we have generators or inverters installed. So that in case of a


power cut, no life-saving machines should stop otherwise it can lead to the death of a
patient. There can be chances that in the area of the hospital, the power cut happens
rarely. But since it is a case of life-death, then you must Prevent this by installing
generators or inverters. No doubt, you have to bear the cost of generators. Now, think of
other situation, if there is a temple in the same area, then you need not install generators
because here we are not dealing with some life-death situation and the power cut in the
area is also very rare. So, prevention technique should be applied only when there will be
a drastic change if deadlock happens. So, before using the deadlock prevention
mechanism, make sure that a deadlock, if it happens, would actually have an adverse effect on your system.

Let's see how we can avoid the four conditions of deadlock by using the deadlock
prevention technique.

 Mutual Exclusion: Mutual exclusion says that a resource can only be held by
one process at a time. If another process is also demanding the same resource then
it has to wait for the allocation of that resource. So, practically, we can't violate the
mutual exclusion for a process because in general, one resource can perform the
work of one process at a time. For example, a printer can't print documents of two
users at the same time.
 Hold and Wait: Hold and wait arises when a process holds some resources and
is waiting for some other resources that are being held by some other waiting
process. To avoid this, the process can acquire all the resources that it needs,
before starting its execution and after that, it starts its execution. In this way, the
process need not wait for some resources during its execution. But this method is
not practical because we can't know the resources required by a process in
advance, before its execution. So, another way of avoiding hold and wait can be
the "Do not hold" technique. For example, if the process needs 10 resources R1,
R2, R3,...., R10. At a particular time, we can provide R1, R2, R3, and R4. After
performing the jobs on these resources, the process needs to release these
resources and then the other resources will be provided to the process. In this way,
we can avoid the hold and wait condition.
 No Preemption: This is a technique in which a process can't forcefully take the
resources of other processes. But if we find some resource due to which deadlock is happening in the system, then we can forcefully preempt that resource from the process that is holding it. By doing so, we can remove the deadlock, but certain things should be kept in mind before using this forceful approach. Only a very high priority process or a system process should be allowed to forcefully preempt the resources of other processes. Also, try to preempt the resources of processes that are in the waiting state.
 Circular Wait: Circular wait is a condition in which the first process is waiting
for the resource held by the second process, the second process is waiting for the
resource held by the third process and so on. At last, the last process is waiting for
the resource held by the first process. So, every process is waiting for each other to
release the resource. This is called a circular wait. To avoid this, what we can do is,
we can list the resources required by a process and assign some number or priority to each resource (in our case, we are using R1, R2, R3, and so on). Now, every process must acquire the resources in ascending order. For example, if the processes P1 and P2 both require the resources R1 and R2, then initially both processes will demand the resource R1, only one of them will get R1 at that time, and the other process will have to wait for its turn. So, in this way, the two processes will not be waiting for each other: one of them will be executing and the other will wait for its turn, and there is no circular wait here. A lock-ordering sketch of this idea follows this list.
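Here is a minimal sketch of this resource-ordering idea, modeling the two resources R1 and R2 as POSIX mutexes; since every thread locks them in the same ascending order, a circular wait can never form. The process function is illustrative.

#include <pthread.h>
#include <stdio.h>

/* Resource ordering: both resources are numbered (R1 < R2) and every
   thread acquires them in ascending order, which prevents circular wait. */

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;   /* resource number 1 */
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;   /* resource number 2 */

void *process(void *arg) {
    long id = (long)arg;
    /* Ascending order: always R1 before R2, in every thread. */
    pthread_mutex_lock(&R1);
    pthread_mutex_lock(&R2);
    printf("process %ld holds R1 and R2\n", id);
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, process, (void *)(long)1);
    pthread_create(&p2, NULL, process, (void *)(long)2);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}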

2. Deadlock Avoidance
In the deadlock avoidance technique, we try to avoid deadlock to happen in our system.
Here, the system wants to be in a safe state always. So, the system maintains a set of data
and using that data it decides whether a new request should be entertained or not. If taking a new request would put the system into an unsafe state, the system will reject the request. So, if a request is made for a resource, that request should only be approved if the resulting state of the system is safe, i.e. not heading into deadlock.

3. Detection and Recovery


In this approach, the CPU assumes that at some point of time, a deadlock will happen in
the system and after that, the CPU will apply some recovery technique to get rid of that
deadlock. The CPU periodically checks for the deadlock. The Resource Allocation Graphs
are used to detect the deadlock in a system.

For recovery, the CPU may forcefully take the resource allocated to some process and
give it to some other process but that process should be of high priority or that process
must be a system process.

4. Deadlock Ignorance
In most systems, deadlock happens rarely. So, why apply so many detection and recovery techniques, or why apply some method to prevent deadlock? As these deadlock prevention mechanisms are costly, the Operating System assumes that a deadlock is never going to happen and simply ignores it. This is the most widely used method of deadlock handling.

We have to compromise between correctness and performance. In the above three methods, the correctness is good but the performance of the system is low, because the CPU has to check for deadlock at regular intervals. But if we ignore the deadlock, there might be cases where a deadlock happens, though that is the rarest of rare cases. We can simply restart the system and get rid of the deadlock if one happens in our system. But at the same time, you will lose any data that has not been saved.

So, you have to think that you want correctness or performance. If you want
performance, then your system should ignore deadlock otherwise you can apply some
deadlock prevention technique. It totally depends on the need of the situation. If your
system is dealing with some very very important data and you can't lose that if deadlock
happens then you should definitely go for deadlock prevention.

What is Banker’s algorithm?

Banker’s Algorithm
Banker’s Algorithm is a deadlock avoidance algorithm. It is also used for deadlock detection. This algorithm tells whether a system can go into a deadlock or not by analyzing the currently allocated resources and the resources it will require in the future.
There are various data structures which are used to implement this algorithm. So, let's
learn about these first.

Data Structures used to implement Banker’s Algorithm


1. Available: It is a 1-D array that tells the number of each resource type (instance
of resource type) currently available. Example: Available[R1]= A, means that
there are A instances of R1 resources are currently available.
2. Max: It is a 2-D array that tells the maximum number of each resource type
required by a process for successful execution. Example: Max[P1][R1] = A,
specifies that the process P1 needs a maximum of A instances of resource R1 for
complete execution.
3. Allocation: It is a 2-D array that tells the number of instances of each resource type that have been allocated to the process. Example: Allocation[P1][R1] = A, means that A instances of resource type R1 have been allocated to the process P1.
4. Need: It is a 2-D array that tells the number of remaining instances of each
resource type required for execution. Example: Need[P1][R1]= A tells
that A instances of R1 resource type are required for the execution of process P1.
Need[i][j] = Max[i][j] - Allocation[i][j], where i corresponds to any process P(i) and j corresponds to any resource type R(j)
The Bankers Algorithm consists of the following two algorithms

1. Resource-Request Algorithm
2. Safety Algorithm

Resource-Request Algorithm


Whenever a process makes a request of the resources then this algorithm checks that if
the resource can be allocated or not.

It includes three steps:


1. The algorithm checks whether the request made is valid or not. A request is valid if the number of requested instances of each resource type is less than or equal to the Need (which was declared previously by the process). If it is a valid request, then step 2 is executed, else it is aborted.
2. Here, the algorithm checks whether the number of requested instances of each resource type is less than or equal to the available resources. If not, the process has to wait until sufficient resources are available, else go to step 3.
3. Now, the algorithm assumes that the resources have been allocated and modifies
the data structure accordingly.
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)
After the allocation of resources, the new state formed may or may not be a safe state. So,
the safety algorithm is applied to check whether the resulting state is a safe state or
not.

Safe state: A safe state is a state in which all the processes can be executed in some
arbitrary order with the available resources such that no deadlock occurs.

1. If it is a safe state, then the requested resources are allocated to the process in
actual.
2. If the resulting state is an unsafe state then it rollbacks to the previous state and
the process is asked to wait longer.

Safety Algorithm
The safety algorithm is applied to check whether a state is in a safe state or not.

This algorithm involves the following four steps:

1. Suppose currently all the processes are to be executed. Define two vectors, Work of length m (where m is the length of the Available vector, i.e. the number of resource types) and Finish of length n (where n is the number of processes to be executed).
Work = Available
Finish[i] = false for i = 0, 1, ..., n-1.
2. This algorithm will look for a process that has a Need value less than or equal to the Work. So, in this step, we will find an index i such that

Finish[i] == false && Need[i] <= Work

If no such i is present then go to step 4, else go to step 3.
3. The process i selected in the above step runs and finishes its execution. Also, the resources allocated to it get freed. The freed resources are added to the Work and Finish[i] of the process is set to true. The following operations are performed:

Work = Work + Allocation[i]
Finish[i] = true

After performing the 3rd step, go to step 2.

4. If all the processes can be executed in some sequence, then the state is said to be a safe state. Or, we can say that if

Finish[i] == true for all i,

then the system is said to be in a safe state.

Let's take an example to understand this more clearly.

Example

Suppose we have 3 processes (A, B, C) and 3 resource types (R1, R2, R3), each having 5 instances. Suppose at some time t the snapshot of the system taken is as follows; find whether the system is in a safe state or not.

So, the total allocated resources (total_alloc) are [5, 4, 3]. Therefore, the Available (the resources that are currently available) resources are

Available = [0, 1, 2]
Now, we will make the Need Matrix for the system according to the given conditions. As
we know, Need(i) = Max(i) - Allocation(i), so the resultant Need matrix will be as follows:

Now, we will apply the safety algorithm to check whether the given state is a safe state or not.

1. Work = Available = [0, 1, 2]
2. Also, Finish[i] = false for i = 0, 1, 2, as none of these processes has been executed yet.
3. Now, we check that Need[i] ≤ Work. By looking at the above Need matrix, we can tell that only process B [0, 1, 2] can be executed. So, process B (i = 1) is allocated the resources and completes its execution. After completing the execution, it frees up its resources.
4. Then, Work = Work + Allocation(B), i.e. Work = [0, 1, 2] + [2, 0, 1] = [2, 1, 3], and Finish[1] = true.
5. Now, as we have more instances of resources available, we check whether any other process's resource needs can be satisfied. With the currently available resources [2, 1, 3], we can see that only process A [1, 2, 1] can be executed. So, process A (i = 0) is allocated the resources and completes its execution. After completing the execution, it frees up its resources.
6. Again, Work = Work + Allocation(A), i.e. Work = [2, 1, 3] + [1, 2, 1] = [3, 3, 4], and Finish[0] = true.
7. Now, as we have more instances of resources available, we check whether the remaining last process's resource requirement can be satisfied. With the currently available resources [3, 3, 4], we can see that process C [2, 2, 1] can be executed. So, process C (i = 2) is allocated the resources and completes its execution. After completing the execution, it frees up its resources.
8. Finally, Work = Work + Allocation(C), i.e. Work = [3, 3, 4] + [2, 2, 1] = [5, 5, 5], and Finish[2] = true.
9. Finally, all the resources are free and there exists a safe sequence B, A, C in which all the processes can be executed. So, the system is in a safe state and deadlock will not occur. A C sketch of this safety check follows this list.
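Below is a compact C sketch of the safety algorithm. The Available and Allocation values follow the worked example above; the Need rows for A and C are assumptions (the full tables in the original are figures), chosen so that the safe sequence B, A, C emerges.

#include <stdio.h>
#include <stdbool.h>

/* Safety algorithm sketch for 3 processes (A, B, C) and 3 resource types.
   Available and Allocation follow the example; the Need rows for A and C
   are assumed values for illustration. */

#define P 3   /* number of processes      */
#define R 3   /* number of resource types */

int main(void) {
    int available[R] = {0, 1, 2};
    int alloc[P][R]  = {{1, 2, 1},    /* A */
                        {2, 0, 1},    /* B */
                        {2, 2, 1}};   /* C */
    int need[P][R]   = {{1, 0, 1},    /* A (assumed) */
                        {0, 1, 2},    /* B */
                        {2, 2, 1}};   /* C (assumed) */
    const char *name[P] = {"A", "B", "C"};

    int work[R];
    bool finish[P] = {false};
    for (int j = 0; j < R; j++) work[j] = available[j];

    int done = 0;
    while (done < P) {
        bool progressed = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { can_run = false; break; }
            if (can_run) {
                /* Process i runs to completion and frees its resources. */
                for (int j = 0; j < R; j++) work[j] += alloc[i][j];
                finish[i] = true;
                progressed = true;
                done++;
                printf("%s finishes, Work = [%d, %d, %d]\n",
                       name[i], work[0], work[1], work[2]);
            }
        }
        if (!progressed) { printf("Unsafe state!\n"); return 1; }
    }
    printf("Safe state\n");
    return 0;
}

With these values it prints the completion order B, A, C and reports a safe state.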
Partition Allocation Methods in Memory Management

In the operating system, the following are four common memory management techniques.
Single contiguous allocation: Simplest allocation method used by MS-DOS. All memory
(except some reserved for OS) is available to a process.
Partitioned allocation: Memory is divided into different blocks or partitions. Each process is
allocated according to the requirement.
Paged memory management: Memory is divided into fixed-sized units called page frames,
used in a virtual memory environment.
Segmented memory management: Memory is divided into different segments (a segment is a logical grouping of the process's data or code). In this management, allocated memory doesn't have to be contiguous.
Most of the operating systems (for example Windows and Linux) use Segmentation with
Paging. A process is divided into segments and individual segments have pages.
In Partition Allocation, when there is more than one partition freely available to
accommodate a process’s request, a partition must be selected. To choose a particular
partition, a partition allocation method is needed. A partition allocation method is considered
better if it avoids internal fragmentation.
When it is time to load a process into the main memory and if there is more than one free
block of memory of sufficient size then the OS decides which free block to allocate.
There are different Placement Algorithms:
A. First Fit
B. Best Fit
C. Worst Fit
D. Next Fit
1. First Fit: In the first fit, the first sufficient block from the top of main memory is allocated. It scans memory from the beginning and chooses the first available block that is large enough. Thus it allocates the first hole that is large enough.

2. Best Fit: Allocate the process to the smallest sufficient partition among the free available partitions. It searches the entire list of holes to find the smallest hole whose size is greater than or equal to the size of the process.

3. Worst Fit: Allocate the process to the largest sufficient partition among the freely available partitions in the main memory. It is the opposite of the best-fit algorithm: it searches the entire list of holes to find the largest hole and allocates it to the process.

4. Next Fit: Next fit is similar to the first fit but it searches for the first sufficient partition starting from the last allocation point. A sketch of the first-fit and best-fit strategies follows.
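Here is a minimal C sketch of the first-fit and best-fit strategies over a list of free holes; the hole sizes and the 212 KB request are illustrative values, not from the text.

#include <stdio.h>

/* First fit returns the first hole large enough; best fit scans the whole
   list and returns the smallest hole that is still large enough. */

#define NUM_HOLES 4

int first_fit(const int holes[], int n, int size) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= size) return i;   /* first sufficient hole */
    return -1;                            /* no hole fits */
}

int best_fit(const int holes[], int n, int size) {
    int best = -1;
    for (int i = 0; i < n; i++)           /* search the entire list */
        if (holes[i] >= size && (best == -1 || holes[i] < holes[best]))
            best = i;                     /* smallest sufficient hole so far */
    return best;
}

int main(void) {
    int holes[NUM_HOLES] = {100, 500, 200, 300};   /* free blocks in KB */
    int process = 212;                             /* request in KB */

    printf("first fit -> hole %d\n", first_fit(holes, NUM_HOLES, process));
    printf("best fit  -> hole %d\n", best_fit(holes, NUM_HOLES, process));
    return 0;
}

For this input, first fit picks the 500 KB hole (index 1) while best fit picks the 300 KB hole (index 3), which illustrates why best fit wastes less space but does more searching.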
Is Best-Fit really best?
Although best fit minimizes wasted space, it consumes a lot of processor time searching for the block closest to the required size. Also, best-fit may perform worse than other algorithms in some cases.
Comparison of Partition Allocation Methods:
1. Fixed Partition
   Advantages: Simple, easy to use, no complex algorithms needed.
   Disadvantages: Memory waste, inefficient use of memory resources.
2. Dynamic Partition
   Advantages: Flexible, more efficient, partitions allocated as required.
   Disadvantages: Requires complex algorithms for memory allocation.
3. Best-fit Allocation
   Advantages: Minimizes memory waste, allocates the smallest suitable partition.
   Disadvantages: More computational overhead to find the smallest sufficient hole.
4. Worst-fit Allocation
   Advantages: Ensures larger processes have sufficient memory.
   Disadvantages: May result in substantial memory waste.
5. First-fit Allocation
   Advantages: Quick, efficient, less computational work.
   Disadvantages: Risk of memory fragmentation.

What is the difference between logical and physical address wrt Operating System?

Addresses identify a location in the memory. In the operating system, when we talk about memory, we mean a location where the actual code and data reside in the system. Just as we have the address of our house so that anyone can reach out to us, in the same way we store data in memory at different locations with addresses, so that we can access the data again whenever required in the future. There are two types of addresses that are used for memory in the operating system.

Physical Address
The physical address refers to a location in the memory. It allows access to data in the
main memory. A physical address is not directly accessible to the user program; hence, a logical address needs to be mapped to it to make the address accessible. This mapping is done by the MMU. The Memory Management Unit (MMU) is a hardware component responsible for translating a logical address to a physical address.

Logical Address
A logical address or virtual address is an address that is generated by the CPU during
program execution. A logical address doesn't exist physically. The logical address is used
as a reference to access the physical address. A logical address usually ranges from zero
to a maximum (max). The user program that generates the logical address assumes that the process runs on locations between 0 and max. This logical address (generated by the CPU) is combined with the base address held by the MMU to form the physical address.

The diagram below explains how the mapping between logical and physical addresses is
done.

1. The CPU generates the logical address (here, 324).
2. The MMU holds the base address (here, 2000) in the Relocation Register.
3. The value of the Relocation Register (here, 2000) is added to the logical address to get the physical address, i.e. 2000 + 324 = 2324 (Physical Address), as the sketch below shows.
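A tiny sketch of this computation, using the values from the figure:

#include <stdio.h>

/* Relocation-register translation: physical = base + logical. The base
   value 2000 and the logical address 324 come from the example above. */

int main(void) {
    int relocation_register = 2000;   /* base address held by the MMU */
    int logical_address = 324;        /* generated by the CPU */

    int physical_address = relocation_register + logical_address;
    printf("physical address = %d\n", physical_address);   /* 2324 */
    return 0;
}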

Difference between the physical and logical address


1. The fundamental difference between a physical address and the logical address is
that logical address is generated by the CPU while the program is running whereas
the physical address is a location in memory.
2. The logical address is generated by the CPU whereas physical address is computed
by the MMU.
3. The logical address does not exist physically in the memory hence it is sometimes
known as virtual address whereas the physical address is a location in the memory
unit.
4. The logical address is used as a reference to access the physical address. The
physical address cannot be accessed directly.
5. Users can view the logical address of a program. But, they cannot view the physical
address of a program.

6. The set of all the logical addresses generated in reference to a program by the CPU
is called Logical Address Space whereas the set of all the physical addresses
mapped to the logical address is called Physical Address Space .

What is Fragmentation and what are its types?

In contiguous memory allocation whenever the processes come into RAM, space is
allocated to them. These spaces in RAM are divided either on the basis of fixed
partitioning (the size of partitions are fixed before the process gets loaded into RAM)
or dynamic partitioning (the size of the partition is decided at the run time according
to the size of the process). As the process gets loaded and removed from the memory
these spaces get broken into small pieces of memory that it can’t be allocated to the
coming processes. This problem is called fragmentation . In this blog, we will study
how these free space and fragmentations occur in memory. So, let's get started.

Fragmentation
Fragmentation is an unwanted problem where the memory blocks cannot be allocated to
the processes due to their small size and the blocks remain unused. It can also be
understood as when the processes are loaded and removed from the memory they create
free space or hole in the memory and these small blocks cannot be allocated to new
upcoming processes and results in inefficient use of memory. Basically, there are two
types of fragmentation:

 Internal Fragmentation
 External Fragmentation

Internal Fragmentation
In this fragmentation, the process is allocated a memory block of a size larger than the size of the process. Due to this, some part of the memory is left unused, and this causes internal fragmentation.

Example: Suppose there is fixed partitioning (i.e. the memory blocks are of fixed sizes)
is used for memory allocation in RAM. These sizes are 2MB, 4MB, 4MB, 8MB. Some part
of this RAM is occupied by the Operating System (OS).

Now, suppose a process P1 of size 3MB comes and it gets memory block of size 4MB. So,
the 1MB that is free in this block is wasted and this space can’t be utilized for allocating
memory to some other process. This is called internal fragmentation .

How to remove internal fragmentation?


This problem is occurring because we have fixed the sizes of the memory blocks. This
problem can be removed if we use dynamic partitioning for allocating space to the
process. In dynamic partitioning, the process is allocated only that much amount of
space which is required by the process. So, there is no internal fragmentation.

External Fragmentation
In this fragmentation, although we have total space available that is needed by a process
still we are not able to put that process in the memory because that space is not
contiguous. This is called external fragmentation.

Example: Suppose in the above example, if three new processes P2, P3, and P4 come of
sizes 2MB, 3MB, and 6MB respectively. Now, these processes get memory blocks of size
2MB, 4MB and 8MB respectively allocated.

So, now if we closely analyze this situation, then process P3 (unused 1MB) and P4 (unused 2MB) are again causing internal fragmentation. So, a total of 4MB (1MB due to process P1 + 1MB due to process P3 + 2MB due to process P4) is unused due to internal fragmentation.
Now, suppose a new process of 4 MB comes. Though we have a total space of
4MB still we can’t allocate this memory to the process. This is called external
fragmentation .

How to remove external fragmentation?


This problem occurs because we allocate memory contiguously to the processes. So, if we remove this condition, external fragmentation can be reduced. This is what is done in paging and segmentation (non-contiguous memory allocation techniques), where memory is allocated non-contiguously to the processes. We will learn about paging and segmentation in the next blog.

Another way to remove external fragmentation is compaction. When dynamic partitioning is used for memory allocation, external fragmentation can be reduced by merging all the free memory together in one large block. This technique is also called defragmentation. This larger block of memory is then used for allocating space according to the needs of the new processes.

What are Paging and Segmentation?

External fragmentation occurs because we allocate memory contiguously to the processes. Due to this, space is left over and memory remains unused, which causes external fragmentation. So, to tackle this problem, the concept of paging was introduced, where we divide the process into small pages and these pages are allocated memory non-contiguously in the RAM.

Non-Contiguous Memory Allocation Technique


In the non-contiguous memory allocation technique, different parts of the same process
are stored in different places of the main memory. Types:

1. Paging
2. Segmentation

Paging
Paging is a non-contiguous memory allocation technique in which secondary memory
and main memory are divided into equal-size partitions. The partitions of the
secondary memory are called pages, while the partitions of the main memory are
called frames. They are divided into equal-size partitions to maximize the utilization
of the main memory and avoid external fragmentation.

Example: We have a process P with process size 4B and page size 1B. Therefore,
there will be four pages (say, P0, P1, P2, P3), each of size 1B. When this process goes
into the main memory for execution, then, depending upon availability, its pages may
be stored in the main memory frames in a non-contiguous fashion, as shown below:

This is how paging is done.



Translation of logical Address into physical Address


The CPU always generates a logical address, but we need a physical address for accessing
the main memory. This mapping is done by the MMU (Memory Management Unit) with
the help of the page table. Let's first understand some of the basic terms, then we will
see how this translation is done.

 Logical Address: The logical address consists of two parts: the page number and the page offset.
1. Page Number: It tells the exact page of the process which the CPU wants to access.

2. Page Offset: It tells the exact word on that page which the CPU wants to read.

Logical Address = Page Number + Page Offset

 Physical Address: The physical address consists of two parts: the frame number and the page offset.
1. Frame Number: It tells the exact frame where the page is stored in physical
memory.

2. Page Offset: It tells the exact word on that page which the CPU wants to read. It
requires no translation: since the page size is the same as the frame size, the position of
the word within the page does not change.

Physical Address = Frame Number + Page Offset

 Page table: A page table contains the frame number corresponding to each page
number of a specific process. So, each process has its own page table. A
register called the Page Table Base Register (PTBR) holds the base address of the
page table.
Now, let's see how the translation is done.

How is the translation done?


The CPU generates the logical address, which contains the page number and the page
offset. The PTBR register contains the address of the page table. The page table is used
to determine the frame number corresponding to the page number. Then, with the help
of the frame number and the page offset, the physical address is determined and the
page is accessed in the main memory.
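
To make the lookup concrete, here is a minimal sketch in C under assumed values: a single-level page table stored as a plain array indexed by page number, with PAGE_SIZE, NUM_PAGES, and the sample mapping all illustrative rather than taken from the text.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096   /* assumed page size: 4 KB, so the offset fits in 12 bits */
#define NUM_PAGES 16     /* illustrative number of pages per process */

/* Hypothetical page table: index = page number, value = frame number. */
static uint32_t page_table[NUM_PAGES];

uint32_t translate(uint32_t logical_address) {
    uint32_t page_number  = logical_address / PAGE_SIZE; /* upper bits of the address */
    uint32_t page_offset  = logical_address % PAGE_SIZE; /* lower bits, unchanged */
    uint32_t frame_number = page_table[page_number];     /* page-table lookup */
    /* Physical address = frame number combined with the unchanged offset. */
    return frame_number * PAGE_SIZE + page_offset;
}

int main(void) {
    page_table[2] = 7;                 /* say page 2 lives in frame 7 */
    uint32_t la = 2 * PAGE_SIZE + 123; /* page 2, offset 123 */
    printf("physical = %u\n", (unsigned)translate(la)); /* 7*4096 + 123 = 28795 */
    return 0;
}
```

A real MMU does the same arithmetic in hardware with bit shifts and caches recent translations in a TLB, but the mapping logic is exactly this.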

Advantages of Paging
1. There is no external fragmentation as it allows us to store the data in a non-
contiguous way.
2. Swapping is easy between equal-sized pages and frames.

Disadvantages of Paging
1. As the size of the frame is fixed, it may suffer from internal fragmentation: a
process's last page may be too small to occupy an entire frame.
2. The access time increases because of paging, as the main memory now has to be
accessed two times. First, we access the page table, which is itself stored in
the main memory; second, we combine the frame number with the page offset and
access the page at the resulting physical address, which is again in the main
memory.
3. For every process, we have an independent page table and maintaining the page
table is extra overhead.

Segmentation
In paging, we blindly divide the process into pages of fixed sizes, but in
segmentation, we divide the process into modules for a better logical view of the process.
Here, each segment or module consists of the same type of functions. For example, the
main function is included in one segment, the library functions are kept in another
segment, and so on. As the sizes of segments may vary, memory is divided into
variable-size parts.

Translation of logical Address into physical Address


The CPU always generates a logical address, but we need a physical address for accessing
the main memory. This mapping is done by the MMU (Memory Management Unit) with
the help of the segment table.

Let's first understand some of the basic terms, then we will see how this translation is
done.

 Logical Address: The logical address consists of two parts: the segment number and the segment offset.
1. Segment Number: It tells the specific segment of the process from which the CPU
wants to read the data.

2. Segment Offset: It tells the exact word in that segment which the CPU wants to
read.

Logical Address = Segment Number + Segment Offset

 Physical Address: The physical address is obtained by adding the base address of the segment to the segment offset.
 Segment table: A segment table stores the base address of each segment in the
main memory. Each entry has two parts, i.e. Base and Limit. Here, the base indicates
the base (starting) address of the segment in the main memory, and the limit tells
the size of that segment. A register called the Segment Table Base Register (STBR)
holds the base address of the segment table. The segment table is itself stored
in the main memory.

How is the translation done?


The CPU generates the logical address, which contains the segment number and the
segment offset. The STBR register contains the address of the segment table. The
segment table is used to determine the base address of the segment corresponding
to the segment number. The segment offset is then compared with the limit of that
segment. If the segment offset is not less than the limit, it is an invalid address,
because the CPU would be trying to access a word beyond the size of the segment
itself, which is not possible. If the segment offset is within the limit, the request is
accepted and the physical address is generated by adding the base address of the
segment to the segment offset.
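
The base-and-limit check can be sketched in C as below. This is only an illustration: the segment table is assumed to be a small in-memory array, and NUM_SEGMENTS plus the sample base/limit values are made up for the example.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical segment-table entry: base (starting address) and limit (size). */
typedef struct {
    uint32_t base;
    uint32_t limit;
} segment_entry;

#define NUM_SEGMENTS 4
static segment_entry segment_table[NUM_SEGMENTS];

/* Returns -1 for an invalid address; valid offsets run from 0 to limit - 1. */
int64_t translate(uint32_t segment_number, uint32_t segment_offset) {
    if (segment_number >= NUM_SEGMENTS) return -1;
    segment_entry e = segment_table[segment_number];
    if (segment_offset >= e.limit) return -1;  /* trap: beyond segment size */
    return (int64_t)e.base + segment_offset;   /* base + offset */
}

int main(void) {
    segment_table[1] = (segment_entry){ .base = 4000, .limit = 1000 };
    printf("%lld\n", (long long)translate(1, 250));  /* 4250: valid access */
    printf("%lld\n", (long long)translate(1, 1500)); /* -1: offset beyond limit */
    return 0;
}
```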

Advantages of Segmentation
1. The size of the segment table is generally smaller than the size of the page table.
2. There is no internal fragmentation.

Disadvantages of Segmentation
1. When processes are loaded into and removed from the main memory (during
swapping), the free memory space is broken into small pieces, and this causes
external fragmentation.
2. Here also, the time to access the data increases, as due to segmentation the main
memory now has to be accessed two times. First, we access the segment
table, which is itself stored in the main memory; second, we add the base
address of the segment to the segment offset and access the data at the resulting
physical address, which is again in the main memory.

What are demand-paging and pre-paging?

According to the concept of virtual memory, in order to execute a process, it is not
necessary that the whole process be present in the main memory at a given time. The
process can be executed even if only some of its pages are present in the main memory.
But how can we decide beforehand which pages should be present in the main memory
at a particular time and which should not?

To resolve this problem, the concept of Demand Paging came into play. This concept says
we should not load any page into the main memory until it is required, i.e. we should keep
all the pages in secondary memory until they are demanded. In contrast, in Pre-Paging,
the OS guesses in advance which pages the process will require and pre-loads them into
the memory.

Demand Paging
Demand paging is a technique used in virtual memory systems where pages are
brought into the main memory only when required or demanded by the CPU. Hence,
the pager is also known as a lazy swapper, because the swapping of pages is done only
when required by the CPU.

How does demand paging work?


Let us understand this with the help of an example. Suppose we have to execute a
process P having four pages P0, P1, P2, and P3. Currently, the page table contains
pages P1 and P3.

1. Now, if the CPU wants to access page P2 of process P, it will first look up the page
in the page table.
2. As the page table does not contain this page, a trap or page fault is generated. As
soon as the trap is generated, context switching happens and control goes
to the operating system.
3. The OS puts the process into the waiting/blocked state and searches for that page
in the backing store or secondary memory.
4. The OS then reads the page from the backing store and loads it into the main
memory.
5. Next, the OS updates the page table entry accordingly.
6. Finally, control is taken back from the OS and the execution of the process is
resumed.
Hence whenever a page fault occurs these steps are followed by the operating system and
the required page is brought into memory.
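
These steps can be summarised in a schematic, runnable C sketch. Everything here is a stand-in: the pte structure, the stub functions, and the frame number are hypothetical simplifications of what a real OS does.

```c
#include <stdio.h>

#define NUM_PAGES 4

typedef struct { int frame; int valid; } pte; /* simplified page-table entry */
static pte page_table[NUM_PAGES];

/* Stubs standing in for real OS services. */
static int  find_free_frame(void) { return 5; } /* may run a replacement algorithm */
static void read_from_backing_store(int page, int frame) {
    printf("loading page %d into frame %d\n", page, frame); /* disk I/O */
}

/* Steps 2-6 above, in code form. */
void handle_page_fault(int page_number) {
    /* the faulting process would be moved to the waiting/blocked state here */
    int frame = find_free_frame();
    read_from_backing_store(page_number, frame);
    page_table[page_number] = (pte){ frame, 1 }; /* update the page-table entry */
    /* the process returns to the ready state and the instruction is restarted */
}

int main(void) {
    if (!page_table[2].valid) /* reference to page 2 misses: page fault */
        handle_page_fault(2);
    return 0;
}
```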

Page Fault Service time

So, whenever a page fault occurs, all the above steps (2–6) are performed. The time taken
to service the page fault is called the Page fault service time.

Effective Memory Access time

When the page fault rate is ‘p’ while executing any process then the effective memory
access time is calculated as follows:

Effective Memory Access time = (p)*(s) + (1-p)*(m)


where p is the page fault rate.
s is the page fault service time.
m is the main memory access time.
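
For instance (all numbers assumed purely for illustration): if m = 100 ns, s = 8 ms (8,000,000 ns), and p = 0.0001, then Effective Memory Access time = 0.0001 × 8,000,000 + 0.9999 × 100 ≈ 800 + 100 = 900 ns. Even one fault per ten thousand accesses makes memory appear nine times slower, which is why keeping the page fault rate low matters so much.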

Advantages
 It increases the degree of multiprogramming as many processes can be present in
the main memory at the same time.
 There is more efficient use of memory, as processes larger than the size of the
main memory can also be executed using this mechanism, because we are not
loading the whole process at a time.

Disadvantages
 The amount of processor overhead and the number of tables used for handling the
page faults are greater than in simple page management techniques.

PrePaging
In demand paging, only the page that is actually demanded during the execution of the
process is brought into the main memory. But in pre-paging, pages other than the one
demanded by the CPU are also brought in. The OS guesses in advance which pages the
process will require and pre-loads them into the memory.

For example, only one page may be referenced or demanded by the CPU, yet three
more pages are pre-paged by the OS. The OS tries to predict which page will be
required next by the processor and brings that page proactively into the main
memory.

Advantages
 It saves time when large contiguous structures are used. Consider
an example where the process requests consecutive addresses. In such cases,
the operating system can guess the next pages, and if the guesses are right, fewer
page faults will occur and the effective memory access time will decrease.

Disadvantages
 There is a wastage of time and memory if those pre-paged pages are unused.

What are the Page Replacement Algorithms?

This lesson will introduce you to the concept of page replacement, which is used in
memory management. You will understand the definition, need and various algorithms
related to page replacement.

A computer system has a limited amount of memory. Adding more memory physically is
very costly. Therefore most modern computers use a combination of both hardware and
software to allow the computer to address more memory than the amount physically
present on the system. This extra memory is actually called Virtual Memory.

Virtual Memory is a storage allocation scheme used by the Memory Management
Unit (MMU) to compensate for the shortage of physical memory by transferring data
from RAM to disk storage. It addresses secondary memory as though it were a part of
the main memory. Virtual Memory makes the memory appear larger than is actually
present, which helps in the execution of programs that are larger than the physical
memory.

Virtual Memory can be implemented using two methods :

 Paging
 Segmentation
In this blog, we will learn about the paging part.

Paging
Paging is a process of reading data from, and writing data to, secondary storage. It is
a memory management scheme that is used to retrieve processes from the secondary
memory in the form of pages and store them in the primary memory. The main objective
of paging is to divide each process into pages of fixed size. These pages are
stored in the main memory in frames. Pages of a process are brought from the
secondary memory to the main memory only when they are needed.

When an executing process refers to a page, it is first searched in the main memory. If it
is not present in the main memory, a page fault occurs.

** Page Fault is the condition in which a running process refers to a page that is not
loaded in the main memory.

In such a case, the OS has to bring the page from the secondary storage into the main
memory. This may cause some pages in the main memory to be replaced due to limited
storage. A Page Replacement Algorithm is required to decide which page needs to be
replaced.

Page Replacement Algorithm


A Page Replacement Algorithm decides which page to remove (swap out) when a
new page needs to be loaded into the main memory. Page replacement happens when a
requested page is not present in the main memory and the available space is not
sufficient to allocate it.

When a page that was selected for replacement is paged out and then referenced again,
it has to be read back in from disk, which requires waiting for I/O completion. This
determines the quality of the page replacement algorithm: the less time spent waiting
for page-ins, the better the algorithm.

A page replacement algorithm tries to select which pages should be replaced so as to
minimize the total number of page misses. There are many different page replacement
algorithms. These algorithms are evaluated by running them on a particular string of
memory references and computing the number of page faults. The fewer the page faults,
the better the algorithm for that situation.

** If a process requests a page and that page is found in the main memory, then it is
called a page hit; otherwise, it is a page miss or page fault.

Some Page Replacement Algorithms:


 First In First Out (FIFO)
 Least Recently Used (LRU)
 Optimal Page Replacement

First In First Out (FIFO)


This is the simplest page replacement algorithm. In this algorithm, the OS maintains a
queue that keeps track of all the pages in memory, with the oldest page at the front and
the most recent page at the back.

When there is a need for page replacement, the FIFO algorithm swaps out the page at
the front of the queue, i.e. the page which has been in the memory for the longest
time.

For Example:

Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3, with
4 frames (i.e. at most 4 pages can be in memory at a time).

Total Page Fault = 9

Initially, all 4 slots are empty, so when 1, 2, 3, 4 come, they are allocated to the empty
slots in order of their arrival. These are page faults, as 1, 2, 3, 4 are not available in memory.

When 5 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 1.

When 1 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 2.

When 3,1 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

When 6 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 3.

When 3 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 4.

When 2 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 5.

When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

Page Fault ratio = 9/12 i.e. total miss/total possible cases
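
The trace above can be reproduced with a short simulation. Here is a minimal FIFO sketch in C, written specifically for this reference string and frame count; the circular index oldest stands in for the arrival-order queue.

```c
#include <stdio.h>
#include <stdbool.h>

#define FRAMES 4

int main(void) {
    int refs[] = {1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3};
    int n = sizeof refs / sizeof refs[0];
    int frames[FRAMES] = {-1, -1, -1, -1}; /* -1 marks an empty slot */
    int oldest = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {                        /* page fault: evict the oldest page */
            frames[oldest] = refs[i];
            oldest = (oldest + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults); /* prints 9 for this string */
    return 0;
}
```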

Advantages

 Simple and easy to implement.


 Low overhead.
Disadvantages

 Poor performance.
 Doesn’t consider the frequency of use or last used time, simply replaces the oldest
page.
 Suffers from Belady’s Anomaly(i.e. more page faults when we increase the number
of page frames).

Least Recently Used (LRU)


Least Recently Used page replacement algorithm keeps track of page usage over a short
period of time. It works on the idea that the pages that have been most heavily used in
the past are most likely to be used heavily in the future too.

In LRU, whenever page replacement happens, the page which has not been used for the
longest amount of time is replaced.

For Example

Total Page Fault = 8

Initially, all 4 slots are empty, so when 1, 2, 3, 4 come, they are allocated to the empty
slots in order of their arrival. These are page faults, as 1, 2, 3, 4 are not available in memory.

When 5 comes, it is not available in memory so page fault occurs and it replaces 1 which
is the least recently used page.

When 1 comes, it is not available in memory so page fault occurs and it replaces 2.
When 3,1 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

When 6 comes, it is not available in memory so page fault occurs and it replaces 4.

When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

When 2 comes, it is not available in memory so page fault occurs and it replaces 5.

When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.

Page Fault ratio = 8/12
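
As with FIFO, this trace can be checked by simulation. The sketch below keeps a last-used timestamp per frame and evicts the smallest one; note that real kernels only approximate LRU (e.g. with hardware reference bits), since exact timestamps are too costly.

```c
#include <stdio.h>

#define FRAMES 4

int main(void) {
    int refs[] = {1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3};
    int n = sizeof refs / sizeof refs[0];
    int frames[FRAMES], last_used[FRAMES];
    for (int j = 0; j < FRAMES; j++) { frames[j] = -1; last_used[j] = -1; }
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int slot = -1;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[t]) { slot = j; break; } /* page hit? */
        if (slot < 0) {                                    /* page fault */
            faults++;
            slot = 0;                                      /* find the LRU slot */
            for (int j = 1; j < FRAMES; j++)
                if (last_used[j] < last_used[slot]) slot = j;
            frames[slot] = refs[t];
        }
        last_used[slot] = t;                               /* refresh recency */
    }
    printf("LRU page faults: %d\n", faults);               /* prints 8 */
    return 0;
}
```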

Advantages

 Efficient.
 Doesn't suffer from Belady’s Anomaly.
Disadvantages

 Complex Implementation.
 Expensive.
 Requires hardware support.

Optimal Page Replacement


Optimal Page Replacement algorithm is the best page replacement algorithm as it gives
the least number of page faults. It is also known as OPT, clairvoyant replacement
algorithm, or Belady’s optimal page replacement policy.

In this algorithm, pages are replaced which would not be used for the longest duration of
time in the future, i.e., the pages in the memory which are going to be referred farthest in
the future are replaced.

This algorithm was introduced long ago and is difficult to implement because it requires
future knowledge of the program's behaviour. However, it is possible to implement
optimal page replacement on a second run by using the page reference information
collected on the first run.

For Example

Total Page Fault = 6

Initially, all 4 slots are empty, so when 1, 2, 3, 4 come, they are allocated to the empty
slots in order of their arrival. These are page faults, as 1, 2, 3, 4 are not available in memory.

When 5 comes, it is not available in memory, so a page fault occurs and it replaces 4,
which is the page used farthest in the future (in fact, never again) among 1, 2, 3, 4.

When 1, 3, 1 come, they are available in the memory, i.e., Page Hits, so no replacement
occurs.

When 6 comes, it is not available in memory so page fault occurs and it replaces 1.

When 3, 2, 3 come, they are available in the memory, i.e., Page Hits, so no replacement
occurs.

Page Fault ratio = 6/12
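
For completeness, here is a matching sketch of OPT in C. The helper next_use scans ahead in the reference string, which is only possible in a simulation where the whole string is known in advance; that is precisely why OPT cannot be used directly in a live system.

```c
#include <stdio.h>

#define FRAMES 4

/* Index of the next reference to `page` at or after `from`; n means never. */
static int next_use(int refs[], int n, int from, int page) {
    for (int k = from; k < n; k++)
        if (refs[k] == page) return k;
    return n;
}

int main(void) {
    int refs[] = {1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3};
    int n = sizeof refs / sizeof refs[0];
    int frames[FRAMES] = {-1, -1, -1, -1};
    int faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = 0;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == refs[t]) { hit = 1; break; }
        if (hit) continue;
        faults++;
        int victim = 0, farthest = -1;
        for (int j = 0; j < FRAMES; j++) {
            if (frames[j] == -1) { victim = j; break; }       /* fill empty slot */
            int nu = next_use(refs, n, t + 1, frames[j]);
            if (nu > farthest) { farthest = nu; victim = j; } /* used farthest ahead */
        }
        frames[victim] = refs[t];
    }
    printf("OPT page faults: %d\n", faults); /* prints 6 */
    return 0;
}
```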

Advantages

 Simple to understand and to simulate when the reference string is known.
 Simple data structures are used.
 Highly efficient.
Disadvantages

 Requires future knowledge of the program.

 Time-consuming.

What is Virtual Memory and how is it implemented?

We all know that a process is divided into various pages and these pages are used during
the execution of the process. The whole process is stored in the secondary memory. But
to make the execution of a process faster, we use the main memory of the system and
store the process pages in it. However, there is a limitation: the space in the main
memory is limited. So, what if the size of the process is larger than the size of the main
memory? Here, the concept of Virtual Memory comes into play.

Virtual Memory is a way of using the secondary memory in such a way


that it feels like we are using the main memory.

So, the benefit of using Virtual Memory is that if we have a program larger than the
size of the main memory, then instead of loading all its pages we load only some
important pages.

In general, when we execute a program, the entire program is not required to be
loaded fully in the main memory, because only a few portions of the program are
being executed at a time. For example, the error handling part of a program is called
only when there is some error, and errors happen rarely. So why load this part of the
program into the main memory and fill up memory space? Another example is a
large-sized array. Generally, we have over-sized arrays because we reserve space for
worst-case scenarios, but in reality only a small part of the array is used regularly.
So why put the whole array in the main memory?

So, what we do is put the frequently used pages of the program in the main memory,
and this results in fast execution of the program, because whenever those pages are
needed they are served from the main memory directly. The other pages remain in the
secondary memory. Now, if a request comes for a page that is not in the main memory,
this situation is called a Page Miss or Page Fault. In this situation, we remove one page
from the main memory and load the desired page from the secondary memory into the
main memory at run time, i.e. swapping of pages is performed. By doing so, the user
feels like they have a lot of memory in their system, but in reality we are just keeping
the frequently used part of the process in memory. The following figure shows the
working in brief:

In the above image, we can see that the whole process is divided into 6 pages and out of
these 6 pages, 2 pages are frequently used and due to this, these 2 pages are put into the
physical memory. If there is some request for the pages present in the physical memory,
then it is directly served otherwise if the page is not present in the physical memory then
it is called a page fault and whenever page fault occurs, then we load the demanded page
in the memory and this process is known as Demand Paging.

Demand Paging
Whenever a page fault occurs, the process of loading the required page into the memory
is called demand paging. So, in demand paging, we load a page only when we need it.
Initially, when a process starts executing, only those pages are loaded which are
required for the initial execution of the process, and no other pages. With time, when
there is a need for other pages, the OS will find each such page in the secondary
memory and load it into the main memory.

Following are the steps involved in demand paging:

1. The CPU first tries to find the requested page in the main memory. If it is
found, it is provided immediately; otherwise, an interrupt is generated
that indicates a memory access fault.
2. The process is sent to the blocked/waiting state, because for the execution of
the process we need to fetch the required page from the secondary memory.
3. The logical address of the page is translated to its location in
secondary memory, because without that location you can't find the page
there.
4. If the main memory is full, a page replacement algorithm is applied to decide
which page to swap out to make room for the new page.
5. The page table is then updated: the entry for the old page is removed
and the entry for the new page is added.
6. At last, the CPU provides the page to the process and the process moves from
the waiting/blocked state back to the running state.
So, in this way, we can implement the concept of Virtual Memory with the help of
Demand Paging.
What are the various Disk Scheduling Algorithms in Operating System?

In the ever-accelerating world of computing, where speed is paramount, Disk Scheduling


algorithms emerge as unsung heroes, orchestrating the intricate dance of data retrieval. These
algorithms are the silent conductors within Operating Systems, serving as traffic controllers for
your computer's hard drive, ensuring that data access is not a chaotic rush but a streamlined,
efficient process.


As we embark on this journey through the world of Disk Scheduling algorithms, we'll uncover the
inner workings of these digital maestros. These algorithms don various hats, each tailored to
address specific data access scenarios. From the fundamental question of "What are Disk
Scheduling algorithms?" to hands-on applications like "Disk Scheduling algorithms in C program,"
we'll explore their nuances and real-world impact.

So, buckle up and be prepared to dive deep into the intricacies of Disk Scheduling algorithms. By
the end of this journey, you'll be equipped with the knowledge to fine-tune your computer's data
access, optimizing it for peak performance in our digital age. Join us in demystifying the world of
Disk Scheduling algorithms, where efficiency meets technology in perfect harmony.

What are Disk Scheduling Algorithms?



In the fascinating world of computing, Disk Scheduling algorithms take center stage. Within the
operating system, these algorithms are like conductors orchestrating the movements of your
computer's hard drive. But let's break it down to the basics.

Understanding the Basics of Disk Scheduling

What is disk scheduling? At its core, Disk Scheduling is about efficiently fetching data from the
hard drive. Picture your computer's hard drive as a vast library, with each piece of data as a book.
When you open an application or access a file, your computer must find and retrieve the relevant
data. This is where Disk Scheduling algorithms step in.

They're the organizers, ensuring that data is fetched swiftly and logically. These algorithms
decide which data requests get priority and in what order, minimizing waiting times and keeping
your computer running smoothly.

The Crucial Role of Disk Scheduling Algorithms in OS

What is disk scheduling in OS? Now, let's talk about the big picture. In the realm of operating
systems, Disk Scheduling algorithms play a mission-critical role. They ensure that your computer
juggles multiple data requests efficiently. When your operating system handles tasks like saving a
file, streaming a video, or loading an application, these algorithms optimize data access.

Think of them as traffic controllers on a busy intersection, keeping the flow smooth and ensuring
everyone gets where they need to go. This optimization is vital for a computer's overall
performance, making Disk Scheduling algorithms an indispensable part of the computing
landscape.
Disk Scheduling Algorithms in Action
In this section, we're putting Disk Scheduling algorithms into practical scenarios and exploring
their real-world impact.

Disk Scheduling Algorithms in OS

When discussing Disk Scheduling algorithms in an operating system (OS), we're peeking behind the
curtain of your computer's multitasking wizardry. These algorithms are the conductors, ensuring
that data requests from various programs are handled efficiently. Think of it like a traffic controller
orchestrating the data flow to prevent bottlenecks and keep your computer running smoothly.

Disk Scheduling Algorithms in C

Now, let's delve into the programming world, specifically "Disk Scheduling algorithms in C." Here,
these algorithms aren't just theoretical concepts; they're the tools programmers use to optimize
data access. Imagine them as the architects of efficiency, guiding your code to retrieve data
swiftly and intelligently. In coding, these algorithms differentiate between a program that stutters
and one that runs seamlessly.

Disk Scheduling Algorithms Examples: Real-World Applications

But where does the rubber meet the road? In our daily lives, we encounter Disk Scheduling
algorithm examples in various forms. Think of your favorite streaming service, where they ensure
your binge-watching experience is uninterrupted. Or consider online banking, where these
algorithms safeguard your financial data while ensuring quick access.

These are tangible examples of Disk Scheduling algorithms at work.

In logistics and warehousing, these algorithms optimize the movement of goods, reducing delivery
times and costs. In healthcare, they ensure patient records are accessible when needed,
potentially saving lives through swift diagnosis and treatment decisions.

So, whether you're navigating the complexities of an operating system, writing code in C, or
simply enjoying a seamless online experience, Disk Scheduling algorithms are there, quietly
ensuring things run smoothly. They're not just theoretical concepts but the unsung heroes of
efficiency in our digital world.

Types of Disk Scheduling Algorithm in OS




In the world of Disk Scheduling, where optimizing data access is paramount, various algorithms
take on the challenge with distinct technical approaches. Let's explain various Disk Scheduling
algorithms:

FCFS Disk Scheduling Algorithm:

FCFS, or First-Come-First-Serve, operates on a straightforward principle. When a data
request arrives, it joins the queue, and the algorithm serves the requests in the order they
arrived. While conceptually simple, FCFS can lead to inefficiencies, mainly when requests are
scattered across the disk, resulting in a phenomenon known as "head-thrashing."

SSTF Disk Scheduling Algorithm:

SSTF, or Shortest Seek Time First, prioritizes minimizing seek times. It chooses the request closest
to the current position of the disk arm, optimizing data retrieval by reducing the
arm's movement. However, it favors nearby requests, potentially causing distant
requests to wait indefinitely in specific scenarios, a problem known as "starvation."

SCAN and C-SCAN Disk Scheduling Algorithm:

SCAN and C-SCAN, often called "elevator algorithms," mimic the motion of an elevator within the
disk. SCAN disk scheduling moves the arm toward one end, servicing requests along the way, and
then reverses direction. C-SCAN disk scheduling adds predictability by servicing requests in one
direction only and returning to the start without servicing requests on the way back, reducing
variability in waiting times. These algorithms are efficient and ensure all requests
eventually get served.

LOOK and C-LOOK Disk Scheduling Algorithm:

LOOK and C-LOOK fine-tune the elevator approach. Instead of travelling all the way to the ends of
the disk, the arm only goes as far as the last pending request in each direction before reversing
(LOOK) or jumping back (C-LOOK). This minimizes unnecessary movement and optimizes data
retrieval, balancing speed and fairness.
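
To see the difference in numbers, here is a minimal sketch in C that computes the total head movement for FCFS and SSTF on an assumed request queue and start cylinder (a classic textbook-style example; none of these values come from this text).

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 8

/* FCFS: serve requests strictly in arrival order. */
int fcfs(const int reqs[], int head) {
    int moved = 0;
    for (int i = 0; i < N; i++) {
        moved += abs(reqs[i] - head);
        head = reqs[i];
    }
    return moved;
}

/* SSTF: repeatedly serve the pending request nearest to the head. */
int sstf(const int reqs[], int head) {
    bool done[N] = {false};
    int moved = 0;
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (best < 0 || abs(reqs[i] - head) < abs(reqs[best] - head)))
                best = i;
        moved += abs(reqs[best] - head);
        head = reqs[best];
        done[best] = true;
    }
    return moved;
}

int main(void) {
    int reqs[N] = {98, 183, 37, 122, 14, 124, 65, 67}; /* assumed request queue */
    int head = 53;                                     /* assumed start cylinder */
    printf("FCFS total head movement: %d\n", fcfs(reqs, head)); /* 640 */
    printf("SSTF total head movement: %d\n", sstf(reqs, head)); /* 236 */
    return 0;
}
```

On this queue, SSTF cuts the head movement to roughly a third of FCFS's, which is exactly the kind of gain these algorithms are designed for.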
Comparing and Contrasting Different Disk Scheduling Methods

It's essential to compare these scheduling algorithms on technical criteria like seek time,
rotational latency, and overall efficiency. Factors such as queue management, prioritization, and
starvation prevention differ among these methods. The right choice of algorithm depends on the
system requirements and data access patterns.

In the realm of Disk Scheduling algorithms, technical nuances drive their effectiveness. Different
types of Disk Scheduling algorithms address optimizing data access on your computer's hard
drive. These algorithms, such as SSTF, SCAN, C-SCAN, LOOK, and C-LOOK, each offer a
unique solution to the problem. They range from minimizing seek times in SSTF to the
predictability of SCAN and C-SCAN, as well as the refined efficiency of LOOK and C-LOOK.


How Disk Scheduling Algorithms Work


Have you ever wondered What happens behind the scenes when you click that file or launch an
application? Disk Scheduling algorithms are the unsung heroes ensuring it all runs smoothly. Let's
unravel the technical magic.

Behind the Scenes: How Disk Scheduling Algorithms Function

At the core, Disk Scheduling algorithms manage the requests for data stored on the hard drive.
Picture the hard drive as a library with countless books (data blocks). When you request a book,
the librarian (the algorithm) has to find and retrieve it efficiently.

Here's how it works technically:

1. Seek Time: This is the time it takes for the disk arm (like a needle on a vinyl record) to move to
the right track where the data is located. The algorithm aims to minimize this seek time.
2. Rotational Latency: Once the disk arm is on the right track, the disk platter must rotate to
bring the desired data under the read/write head. Again, the algorithm tries to minimize this
rotational latency.
3. Transfer Time: Finally, the data is read from or written to the disk. This transfer time depends
on the amount of data and the disk's transfer speed.
4. Disk Access Time: The total time for these three steps is the Disk Access Time, which is the
metric Disk Scheduling algorithms aim to optimize.
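
As a rough illustration (all numbers assumed): with an average seek time of 5 ms, an average rotational latency of about 4 ms (half a rotation on a 7200 RPM disk), and a transfer time of roughly 0.1 ms for a small block, the Disk Access Time comes to about 5 + 4 + 0.1 ≈ 9.1 ms. Seek and rotation dominate, which is exactly why scheduling algorithms concentrate on minimizing head movement.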

Disk Scheduling Algorithms in OS with Examples


Let’s take a look into various Disk Scheduling algorithms in OS. These algorithms are like traffic
controllers in operating systems, deciding which data request gets served next. Consider a
scenario where multiple applications are vying for disk access simultaneously. The algorithm
ensures fair and efficient access for each.

For instance, in a real-world example, think of your computer running a web browser, a video
game, and an antivirus scan simultaneously. Disk Scheduling algorithms ensure these diverse
requests are handled effectively, preventing slowdowns or freezing.
Let's not forget about coding. In programming, these algorithms are implemented to optimize data
access. Say you're developing software that loads large files or processes extensive databases.
The correct Disk Scheduling Algorithm can significantly impact the software's performance.

So, whether you're navigating the complexities of an operating system, coding a new software
application, or just using your computer for everyday tasks, Disk Scheduling algorithms are
silently working to ensure your data access is efficient and seamless.

Choosing the Right Disk Scheduling Algorithm


Regarding Disk Scheduling algorithms, selecting the ideal one isn't a one-size-fits-all affair. It
involves considering several factors to ensure optimal data access. Let's explore the art of making
this crucial choice.

Factors Influencing the Selection of Disk Scheduling Methods

1. Workload Characteristics: The nature of the data requests matters. Is it a server handling
database queries or a personal computer running everyday applications? Different workloads
benefit from specific algorithms.
2. Seek Time vs. Throughput: If you prioritize reducing seek times for individual requests,
algorithms like SSTF or LOOK may be ideal. Conversely, if you aim for overall throughput,
SCAN or C-SCAN might be better.
3. Starvation Tolerance: Some algorithms, like FCFS, ensure every request eventually gets
serviced, preventing starvation. Others, like SSTF, might favor specific requests, potentially
leaving some waiting indefinitely.
4. Queue Management: How the algorithm manages the queue of pending requests can impact
fairness and efficiency. A well-optimized queue management strategy can prevent bottlenecks.

The Art of Optimizing Disk Access: Picking the Ideal Algorithm

Imagine a web server handling requests from various users. If it prioritizes serving the nearest
requests first (SSTF), it can reduce latency for many users. However, distant requests might wait
indefinitely if closer ones keep arriving. In this case, a more balanced algorithm like
SCAN might be preferable.

In a different scenario, consider a scientific computing cluster processing vast datasets. Here,
throughput matters more than individual seek times. Algorithms like SCAN or C-SCAN, which
optimize data transfer efficiency, would be a better fit.

The ideal Disk Scheduling Algorithm choice involves a deep understanding of the system's
requirements and characteristics. It's a balancing act between minimizing latency, optimizing
throughput, and ensuring fairness.

So, whether managing a data center, designing software, or fine-tuning your computer, choosing
the correct Disk Scheduling Algorithm is about aligning technical needs with algorithmic
capabilities. It's a critical step in achieving efficient and responsive data access.

Advantages and Limitations of Disk Scheduling Algorithms


Understanding the pros and cons of different Disk Scheduling algorithms is crucial in making
informed decisions about which to employ. Let's delve into the advantages and limitations of
various approaches to optimize your data access.

The Pros and Cons of Various Disk Scheduling Approaches

Advantages:

1. Improved Efficiency: Disk Scheduling algorithms can significantly enhance efficiency by


reducing seek times and minimizing rotational latency, resulting in faster data retrieval.
2. Customization: Different algorithms cater to diverse needs. You can choose an algorithm that
aligns with your workload requirements, prioritizing individual seek times or overall throughput.
3. Fairness: Some algorithms, like FCFS, ensure fairness by servicing requests in the order they
arrive, preventing any request from waiting indefinitely.
4. Predictability: Elevator algorithms like SCAN and C-SCAN offer predictability regarding wait
times, making them suitable for scenarios where fairness and user experience are paramount.

Limitations:

1. Starvation: Certain algorithms may prioritize requests close to the disk's current position,
potentially leaving others waiting indefinitely. This is known as starvation and can be a
limitation in some situations.
2. Complexity: Implementing and managing Disk Scheduling algorithms can be complex. To
properly optimize a system, one must thoroughly comprehend its inherent characteristics and
workload patterns.
3. No Universal Solution: It's impossible to have a single solution that fits everyone. Each
algorithm has strengths and weaknesses; choosing the wrong one for a particular scenario can
lead to inefficiencies.

When to Use Which Disk Scheduling Algorithm


Choosing the correct Disk Scheduling Algorithm involves considering your specific needs:

 For systems where fairness is essential, FCFS ensures every request eventually gets serviced,
but at the cost of potential inefficiencies.
 When minimizing seek times is the goal, SSTF shines, but be cautious of potential starvation.
 SCAN and C-SCAN are excellent for predictable wait times and efficient data access but may
not be suitable for all workloads.
 LOOK and C-LOOK balance speed and fairness, making them versatile choices.

Ultimately, the key to making informed decisions lies in understanding your system's
requirements, workload patterns, and the advantages and limitations of each Disk Scheduling
Algorithm. By aligning these factors, you can optimize data access and enhance system
performance effectively.
