OS Exam Notes
What is an Operating System and what are the goals and functions of an Operating
System?
A computer system has many resources, and managing them manually is a very difficult task. So, we use the Operating System to manage all the resources present in the system.
Apart from resource management, the other thing that the Operating System does is provide a platform on which other application programs can be installed and run. The following is the conceptual view of a common computer system.
In the above image, we can see that at level 0 the computer hardware is present, and to access this hardware you need to take the help of the Operating System, which is present at level 1. At the upper level, or level 2, various application software (software used by users to perform a specific task, like MS Word, VLC media player, etc.) and system software (software used to manage the system resources, like an assembler, compiler, etc.) are present. So, the Operating System enables this software to communicate with the hardware.
This grouping of jobs into batches is done with the help of the operator. When the batches are ready, they are executed one by one, i.e. batch-wise.
Suppose that we have 100 programs to execute, written in two languages: Java and C++.
55 programs are in Java and 45 programs are in C++. Here, two batches can be created: one for all the Java programs and one for all the C++ programs.
Here, if we execute in batches, the Java compiler needs to be loaded only once instead of 55 times. Similarly, the C++ compiler is loaded only once instead of 45 times.
Since each compiler is loaded only once and its whole batch is executed together, we save a lot of loading time. This is the main advantage.
Advantages:
Disadvantages:
1. If a job fails, the other jobs will have to wait for an unknown time.
2. Batch systems are sometimes costly.
3. Difficult to debug.
A quantum is the short, fixed duration of time that is allotted to each task in a time-sharing operating system.
It starts with Task 1, it gets executed for that particular fixed amount of time.
Then, Task 2 gets the chance of execution for that particular fixed amount of time.
After that, Task 3 gets the chance of execution for that particular fixed amount of time.
And finally, Task 4 gets the chance of execution for that particular fixed amount of time.
And then again the Task 1, Task 2, Task 3, and so on. The cycle goes on.
Advantages:
Disadvantages:
1. The data of each task should be handled properly so that they don’t get mixed during the execution.
2. No way to give an advantage to the higher-priority task that needs immediate execution.
As the name suggests, in a distributed operating system the systems are connected to each other over a network. Since the systems are connected over the network, one user can access the data of another system. So, remote access is the major highlight of this operating system.
Now, let’s talk about the advantages and disadvantages of Distributed Operating Systems.
Advantages:
1. No single point of failure as we have multiple systems, if one fails, another system can execute the task.
2. Resources are shared with each other, hence, increasing availability across the entire system.
3. It helps in the reduction of execution time.
Disadvantages:
1. Since the data is shared across the systems, it needs extra handling to manage the overall infrastructure.
2. It is difficult to provide adequate security in distributed systems because the nodes as well as the connections
need to be secured.
3. Network failure needs to be handled.
The operating system used in ATMs or in elevators is dedicated to that specialized type of task.
Advantages:
Disadvantages:
Examples:
1. Hard Real-time: When even a slight delay can lead to a big problem, we use a Hard Real-time operating system. The time constraints are very strict.
2. Soft Real-time: When a slight delay is manageable and does not have a serious impact, we use a Soft Real-time operating system.
Advantages:
Disadvantages:
Multiprogramming
A process executing in a computer system mainly requires two things i.e. CPU time and
I/O time. The CPU time is the time taken by CPU to execute a process and I/O time is
the time taken by the process for I/O operations such as some file operation like read
and write. Generally, our computer system wants to execute a number of processes at a
time. But it is not possible. You can run only one process at a time in a processor. But
this can result in some problems.
Suppose we have a single-processor system and five processes P1, P2, P3, P4, and P5 that are to be executed by the CPU. Since the CPU can execute only one process at a time, it starts with process P1, and after some time of execution, process P1 requires some I/O operation. So, it leaves the CPU and starts performing that I/O operation. Now, the CPU will wait for process P1 to come back for its execution and will be in an idle state for that period of time. But at the same time, the other processes, i.e. P2, P3, P4, and P5, are waiting for their execution. Keeping the CPU idle is a very expensive thing. So, why keep the CPU in the idle state? What we can do is: if process P1 wants to perform some I/O operation, then let P1 do the I/O job and, at the same time, give the CPU to process P2; if process P2 also requires some I/O operation, then the CPU will be given to process P3, and so on. This switching is called Context Switching. Once a process finishes its I/O work, the CPU can resume executing it (i.e. process P1 or P2), and by doing so the CPU will never go into the idle state. This concept of effective CPU utilization is called Multiprogramming.
Advantages of Multiprogramming
Very high CPU utilization as the CPU will never be idle unless there is no process
to execute.
Less waiting time for the processes.
Can be used in a Multiuser system. A Multiuser system allows different users that are on different computers to access the same CPU and this, in turn, results in multiprogramming.
Disadvantages of Multiprogramming
Since context switching has to be performed, you need some process scheduling technique that tells the CPU which process to take up for execution next, and designing this is difficult.
Here, the CPU executes some part of one process, then some part of another, and so on. In this case, the memory gets divided into small parts because each process requires some memory, and this results in memory fragmentation. So, little or no contiguous memory will be available.
Multiprocessing
As we know, in a uni-processor system the processor can execute only one process at a time. But when your system has a lot of work to do and one processor is not enough to finish all that work in the required amount of time, we can use more than one processor in the same system.
So, two or more processors present in the same computer, sharing the system bus, memory, and other I/O devices, make up a Multiprocessing system.
Suppose, we are having 5 processes P1, P2, P3, P4, and P5. In a uni-processor system,
only one process can be executed at a time and after its execution, the next process will
be executed, and so on. But in a multiprocessor system, different processes can be assigned to different processors and this, in turn, decreases the overall process execution
time by the system. A dual-processor system can execute two processes at a time while a
quad-processor can execute four processes at a time.
Advantages of Multiprocessing
Since more than one processor is working at a time, more work is done in a shorter period of time, and throughput is increased.
We have more than one processor, so if one processor is not working then the job
can be done with the help of other processors. This, in turn, increases reliability.
If all the work is put on one processor, it results in more battery drain. But if the work is divided among various processors, then it provides better battery efficiency.
Multiprocessing is an example of true parallel processing i.e. more than one process executing at the same time.
Disadvantages of Multiprocessing
As more than one processor is working at a particular instant of time, the coordination between them is very complex.
Since the buses, memory, and I/O devices are shared, if some processor is using an I/O device then another processor has to wait for its turn, and this results in a reduction of throughput.
To have all the processors working efficiently at a time, we need to have a large main memory, and this, in turn, increases the cost.
Multitasking
If the CPU is allocated to a process that takes a lot of time, then the other processes will have to wait for the execution of that process, and this will result in a long wait for resource allocation.
For example, if process P1 is taking 20 seconds of CPU time and the CPU is allocated to P1, and some process P2 comes along that requires 1 second of CPU time, then P2 has to wait for 20 seconds irrespective of the fact that it requires only 1 second of CPU time.
What we can do here is set a time quantum: the CPU will be given to each process for that amount of time only, and after that, the CPU will be given to some other process that is ready for execution. So, in our above example, if the decided time quantum is 2 seconds, then initially the process P1 will be allocated the CPU for 2 seconds and then it will be given to process P2. The process P2 will complete its execution in 1 second and then the CPU will be given to process P1 again. Since there is no other process available for execution, the process P1 can continue to execute for its remaining time i.e. 18 seconds. This is called time-sharing. And the concept of time-sharing between various processes is called Multitasking.
Here the switching between processes is so quick that it gives an illusion that all the
processes are being executed at the same time.
For multitasking, firstly there should be multiprogramming and secondly, there should
be time-sharing.
Advantages of Multitasking
Since each process is given a particular time quantum for execution, starvation is reduced.
It provides an illusion to the user that he/she is using multiple programs at the same time.
Disadvantages of Multitasking
Every process will be given a fixed time quantum in one cycle. So, the high priority
process will also have to wait.
If the processor is slow and the work is very large, then it can't be run smoothly. It
requires more processing power.
What is a Kernel in an Operating System, what are Kernel Mode and User Mode, and what are the functions of the Kernel?
A Kernel is a computer program that is the heart and core of an Operating System. Since
the Operating System has control over the system so, the Kernel also has control over
everything in the system. It is the most important part of an Operating System.
Whenever a system starts, the Kernel is the first program that is loaded after the bootloader because the Kernel has to handle the rest of the things of the system for the Operating System. The Kernel remains in memory until the Operating System is shut down.
The Kernel is responsible for low-level tasks such as disk management, memory management, task management, etc. It provides an interface between the user and the hardware components of the system. When a process makes a request to the Kernel, that request is called a System Call.
A Kernel is provided with a protected Kernel Space, which is a separate area of memory that is not accessible by other application programs. So, the code of the Kernel is loaded into this protected Kernel Space. The memory used by other applications is called the User Space. As these are two different spaces in memory, communication between them is a bit slower.
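To make the user mode / kernel mode boundary concrete, here is a minimal C sketch (assuming a POSIX system such as Linux): the program itself runs in user space, and the write() system call is the point where control crosses into the kernel, which then accesses the hardware on the program's behalf.

#include <unistd.h>     /* write(): a thin wrapper around the kernel's write system call */
#include <string.h>

int main(void) {
    const char *msg = "Hello from user space\n";

    /* Everything up to here executes in user mode, inside user space.   */
    /* The write() call traps into the kernel: the CPU switches to       */
    /* kernel mode, kernel code copies the buffer and talks to the       */
    /* device driver, and then control returns to user mode.             */
    write(STDOUT_FILENO, msg, strlen(msg));

    return 0;
}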
Functions of a Kernel
Following are the functions of a Kernel:
Initially, when Operating Systems came into existence, we had to give the input to the CPU, the CPU executed the instructions, and finally it gave us the output. But there was a problem with this approach. In a normal situation, we have to deal with a number of processes, and we know that the time taken in an I/O operation is very large compared to the time taken by the CPU for the execution of instructions. So, in the old approach, one process would give the input with the help of an input device, and during this period of time the CPU was in an idle state. Then the CPU executed the instructions, and the output was again given to some output device; at this time also, the CPU was in an idle state. After showing the output, the next process would start its execution. So, most of the time the CPU was in an idle state, and this is the worst condition that we can have in Operating Systems. Here, the concept of Spooling comes into play. Let's learn more about it.
Spooling
Spooling stands for "Simultaneous Peripheral Operations Online". So, with spooling, more than one I/O operation can be performed simultaneously, i.e. while the CPU is executing some process, more than one I/O operation can also be done at the same time. The following image will help us in understanding the concept in a better way:
From the above image, we can see that the input data is stored on some kind of secondary device and this data is then fetched by the main memory. The benefit of this approach is that, in general, the CPU works on the data stored in the main memory. Since we can have a number of input devices at a time, all these input devices can put their data into the disk or secondary memory. Then, the main memory fetches the data one by one from the secondary memory and the CPU executes some instructions on that data. Both the main memory and secondary memory are digital in nature, so transferring data between the secondary memory and the main memory is very fast. Also, while the CPU is executing some task, the input devices need not wait for their turn. They can directly put their data into the secondary memory without waiting. By doing so, the CPU will be in the execution phase most of the time. So, the CPU will not be idle in this case.
When the CPU generates some output, then that output is first stored in the main
memory and the main memory transfers that output to the secondary memory and from
the secondary memory, the output will be provided to some output devices. By doing so,
again we are saving time because now the CPU doesn't have to wait for the output device
to show the output and this, in turn, increases the overall execution speed of the system.
The CPU will not be held idle in this case.
For example, in printer spooling, there can be more than one document that needs to be printed. So, the documents can be stored in the spool, and the printer can fetch and print them one by one.
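As a rough illustration of the printer-spooling idea (the names and sizes here are hypothetical, not a real spooler), the sketch below keeps submitted documents in a simple in-memory spool queue: submitting a document is fast and independent of the slow printer, which later drains the spool one document at a time.

#include <stdio.h>
#include <string.h>

#define SPOOL_CAPACITY 10

/* The spool: a simple FIFO queue of document names waiting to be printed. */
static char spool[SPOOL_CAPACITY][64];
static int head = 0, tail = 0, count = 0;

/* Fast operation: just drop the document into the spool and return. */
static int submit_document(const char *name) {
    if (count == SPOOL_CAPACITY) return -1;            /* spool full */
    strncpy(spool[tail], name, sizeof spool[tail] - 1);
    spool[tail][sizeof spool[tail] - 1] = '\0';
    tail = (tail + 1) % SPOOL_CAPACITY;
    count++;
    return 0;
}

/* Slow operation: the printer fetches documents one by one from the spool. */
static void printer_drain_spool(void) {
    while (count > 0) {
        printf("printing: %s\n", spool[head]);
        head = (head + 1) % SPOOL_CAPACITY;
        count--;
    }
}

int main(void) {
    submit_document("report.pdf");     /* users submit documents and move on */
    submit_document("notes.txt");
    submit_document("slides.ppt");
    printer_drain_spool();             /* the printer empties the spool later */
    return 0;
}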
Advantages of Spooling
Since there is no direct interaction of the I/O devices with the CPU, the CPU need not wait for the I/O operations to take place. The I/O operations take a large amount of time.
The CPU is kept busy most of the time and hence is not in the idle state, which is a good situation to have.
More than one I/O device can work simultaneously.
In spooling, the I/O of one job can be handled along with some operations of another job, while in buffering, only one job is handled at a time.
Spooling is more efficient than buffering.
In buffering, there is a small separate area in the memory known as a buffer. But spooling can make use of the whole memory.
What is a Process in Operating System and what are the different states of a Process?
In the Operating System, a Process is something that is currently under execution. So, an active program can be called a Process. For example, when you want to search for something on the web, you start a browser; that running browser is a process. Another example of a process is starting your music player to listen to some cool music of your choice.
A Process has various attributes associated with it. Some of the attributes of a Process
are:
States of a Process
During the execution of a process, it undergoes a number of states. So, in this section of
the blog, we will learn various states of a process during its lifecycle.
New State: This is the state when the process is just created. It is the first state of
a process.
Ready State: After the creation of the process, when the process is ready for its
execution then it goes in the ready state. In a ready state, the process is ready for
its execution by the CPU but it is waiting for its turn to come. There can be more
than one process in the ready state.
Ready Suspended State: There can be more than one process in the ready state
but due to memory constraint, if the memory is full then some process from the
ready state gets placed in the ready suspended state.
Running State: Amongst the processes present in the ready state, the CPU chooses one by using some CPU scheduling algorithm. That process is now being executed by the CPU and is in the running state.
Waiting or Blocked State: During the execution of the process, the process might require some I/O operation, like writing to a file, or some higher-priority process might come. In these situations, the running process will have to go into the waiting or blocked state and the other process will come in for execution. So, in the waiting state, the process is waiting for something.
Waiting Suspended State: When the waiting queue of the system becomes full
then some of the processes will be sent to the waiting suspended state.
Terminated State: After the complete execution of the process, the process
comes into the terminated state and the information related to this process is
deleted.
The following image will show the flow of a process from the new state to the terminated
state.
In the above image, you can see that when a process is created then it goes into the new
state. After the new state, it goes into the ready state. If the ready queue is full, then the
process will be shifted to the ready suspended state. From the ready state, the CPU will
choose the process and the process will be executed by the CPU and will be in the
running state. During the execution of the process, the process may need some I/O
operation to perform. So, it has to go into the waiting state and if the waiting state is full
then it will be sent to the waiting suspended state. From the waiting state, the process
can go to the ready state after performing I/O operations. From the waiting suspended
state, the process can go to waiting or ready suspended state. At last, after the complete
execution of the process, the process will go to the terminated state and the information
of the process will be deleted.
A Process Control Block, or simply PCB, is a data structure that is used to store the information about a process that might be needed to manage the scheduling of that particular process.
So, each process is given a PCB, which is a kind of identification card for the process. All the processes present in the system have a PCB associated with them, and all these PCBs are connected in a Linked List.
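A minimal sketch of what a PCB might contain, written as a C struct (the field names are illustrative, not taken from any particular kernel); the next pointer reflects the point above that all the PCBs are connected in a linked list.

#include <stdint.h>

/* Possible process states, matching the state diagram described earlier. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Illustrative Process Control Block: the "identification card" of a process. */
struct pcb {
    int              pid;              /* process identifier                    */
    enum proc_state  state;            /* current state of the process          */
    uint64_t         program_counter;  /* where to resume execution             */
    uint64_t         registers[16];    /* saved CPU register contents           */
    int              priority;         /* used by the scheduler                 */
    void            *memory_info;      /* e.g. page table / memory limits       */
    void            *open_files;       /* accounting and I/O status information */
    struct pcb      *next;             /* PCBs are chained together in a list   */
};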
It is one of the essential features of a multitasking operating system. The processes are switched so fast that it gives the illusion to the user that all the processes are being executed at the same time.
But the context switching process involves a number of steps that need to be followed. You can't directly switch a process from the running state to the ready state: you have to save the context of that process. If you do not save the context of a process P, then after some time, when the process P comes back to the CPU for execution, it will start executing from the beginning. But in reality, it should continue from the point where it left the CPU in its previous execution. So, the context of the process should be saved before putting any other process in the running state.
A context is the contents of a CPU's registers and program counter at any point in time.
Context switching can happen due to the following reasons:
When a process of high priority comes in the ready state. In this case, the
execution of the running process should be stopped and the higher priority
process should be given the CPU for execution.
When an interrupt occurs, the process in the running state should be stopped and the CPU should handle the interrupt before doing something else.
When a transition between the user mode and kernel mode is required then you
have to perform the context switching.
In the above figure, you can see that initially the process P1 is in the running state and the process P2 is in the ready state. Now, when some interrupt occurs, you have to switch the process P1 from the running to the ready state after saving the context, and the process P2 from the ready to the running state. The following steps will be performed:
1. Firstly, the context of the process P1 i.e. the process present in the running state
will be saved in the Process Control Block of process P1 i.e. PCB1.
2. Now, you have to move the PCB1 to the relevant queue i.e. ready queue, I/O
queue, waiting queue, etc.
3. From the ready state, select the new process that is to be executed i.e. the process
P2.
4. Now, update the Process Control Block of process P2 i.e. PCB2 by setting the
process state to running. If the process P2 was earlier executed by the CPU, then
you can get the position of last executed instruction so that you can resume the
execution of P2.
5. Similarly, if you want to execute the process P1 again, then you have to follow the
same steps as mentioned above(from step 1 to 4).
In general, at least two processes are required for context switching to happen; in the case of the round-robin algorithm, context switching can even be performed with the help of a single process.
The time involved in switching the context from one process to another is called the Context Switching Time.
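The steps above can be summarised in a small, highly simplified C sketch (only illustrative; a real kernel does this in architecture-specific assembly): save the running process's context into its PCB, move that PCB to the ready queue, pick the next process, and restore its saved context.

/* Illustrative context-switch sketch, reusing the PCB idea from above. */
struct context { unsigned long registers[16]; unsigned long program_counter; };

struct pcb {
    int            pid;
    int            state;            /* 0 = ready, 1 = running, ...            */
    struct context ctx;              /* saved CPU context of this process      */
};

/* Stubs standing in for the hardware-level save/restore of CPU registers. */
static void save_cpu_context(struct context *c)    { (void)c; /* copy CPU regs into c  */ }
static void restore_cpu_context(struct context *c) { (void)c; /* copy c into CPU regs  */ }
static void enqueue_ready(struct pcb *p)            { (void)p; /* put PCB on ready queue */ }

/* Switch the CPU from process 'curr' to process 'next'. */
void context_switch(struct pcb *curr, struct pcb *next) {
    save_cpu_context(&curr->ctx);    /* step 1: save the context into PCB of curr      */
    curr->state = 0;
    enqueue_ready(curr);             /* step 2: move the PCB to the relevant queue     */

    next->state = 1;                 /* steps 3-4: the chosen process becomes running  */
    restore_cpu_context(&next->ctx); /* resume 'next' from where it previously left off */
}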
In the Operating System, CPU schedulers are used to handle the scheduling of the various processes that are coming for their execution by the CPU. Schedulers are responsible for transferring a process from one state to another. Basically, we have three types of schedulers i.e.
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler
In this blog, we will learn about these schedulers and we will see the difference between
them. Also, at the end of the blog, we will look where these schedulers are placed in the
Process State Diagram. So, let's get started.
Long-Term Scheduler
Long-Term schedulers are those schedulers whose decision will have a long-term effect
on the performance. The duty of the long-term scheduler is to bring the process from the
JOB pool to the Ready state for its execution.
So, the long-term scheduler decides which processes are to be admitted and put into the ready state.
Effect on performance
The long term scheduler is responsible for creating a balance between the I/O
bound(a process is said to be I/O bound if the majority of the time is spent on the
I/O operation) and CPU bound(a process is said to be CPU bound if the majority
of the time is spent on the CPU). So, if we create processes which are all I/O
bound then the CPU might not be used and it will remain idle for most of the time.
This is because the majority of the time will be spent on the I/O operation.
So, if we create a mix of CPU-bound and I/O-bound processes, i.e. a proper balance between the two, then the overall performance of the system will be increased.
Short-Term Scheduler
Short-term schedulers are those schedulers whose decision will have a short-term effect
on the performance of the system. The duty of the short-term scheduler is to schedule
the process from the ready state to the running state. This is the place where all the
scheduling algorithms are used i.e. it can be FCFS or Round-Robin or SJF or any other
scheduling algorithm.
Effect on performance
The choice of the short-term scheduler is very important for the performance of the system. If the short-term scheduler only selects processes that have a very high burst time, then the other processes may go into starvation.
So, be careful when choosing the short-term scheduling algorithm, because the performance of the system is our highest priority.
The following image shows the scheduling of processes using the long-term and short-
term schedulers.
Medium-Term Schedulers
Sometimes, you need to send the running process to the ready state or to the wait/block
state. For example, in the round-robin process, after a fixed time quantum, the process is
again sent to the ready state from the running state. So, these things are done with the
help of Medium-Term schedulers.
Medium-term schedulers are those schedulers whose decision will have a
mid-term effect on the performance of the system. It is responsible for
swapping of a process from the Main Memory to Secondary Memory and
vice-versa.
It is helpful in maintaining a perfect balance between the I/O bound and the CPU bound.
It reduces the degree of multiprogramming.
The following diagram will give a brief about the working of the medium-term
schedulers.
Dispatcher
When the processes are in the ready state, the CPU applies some process scheduling algorithm and chooses one process from the list of processes that will be executed at a particular instant of time. This is done by the scheduler, i.e. selecting one process from a number of processes is the job of the scheduler.
Now, the selected process has to be transferred from its current state to the desired or scheduled state. So, it is the duty of the dispatcher to dispatch or transfer a process from one state to another. A dispatcher is responsible for the context switch and for switching to user mode.
For example, suppose we have three processes P1, P2, and P3 in the ready state, with arrival times T0, T1, and T2 respectively. If we are using the First Come First Serve approach, then the scheduler will first select the process P1 and the dispatcher will transfer the process P1 from the ready state to the running state. After the completion of the execution of process P1, the scheduler will then select the process P2 and the dispatcher will transfer the process P2 from the ready state to the running state, and so on.
The scheduler selects a process from a list of processes by applying some process
scheduling algorithm. On the other hand, the dispatcher transfers the process
selected by the short-term scheduler from one state to another.
The scheduler works independently, while the dispatcher has to be dependent on
the scheduler i.e. the dispatcher transfers only those processes that are selected by
the scheduler.
For selecting a process, the scheduler uses some process scheduling algorithm like
FCFS, Round-Robin, SJF, etc. But the dispatcher doesn't use any kind of
scheduling algorithms.
The only duty of a scheduler is to select a process from a list of processes. But
apart from transferring a process from one state to another, the dispatcher can
also be used for switching to user mode. Also, the dispatcher can be used to jump
to a proper location when the process is restarted.
Example:
In the above example, you can see that we have three processes P1, P2, and P3, and they
are coming in the ready state at 0ms, 2ms, and 2ms respectively. So, based on the arrival
time, the process P1 will be executed for the first 18ms. After that, the process P2 will be
executed for 7ms and finally, the process P3 will be executed for 10ms. One thing to be
noted here is that if the arrival time of the processes is the same, then the CPU can select
any process.
---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 0ms | 18ms |
| P2 | 16ms | 23ms |
| P3 | 23ms | 33ms |
---------------------------------------------
Total waiting time: (0 + 16 + 23) = 39ms
Average waiting time: (39/3) = 13ms
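The waiting and turnaround times in this table can be reproduced with a short C program (a sketch, with the process data of the example above hard-coded: arrival times 0, 2, 2 ms and burst times 18, 7, 10 ms):

#include <stdio.h>

int main(void) {
    /* Example data: P1, P2, P3 arrive at 0, 2, 2 ms with bursts 18, 7, 10 ms. */
    int arrival[] = {0, 2, 2};
    int burst[]   = {18, 7, 10};
    int n = 3, time = 0;

    printf("Process  Waiting  Turnaround\n");
    for (int i = 0; i < n; i++) {                    /* FCFS: serve in order of arrival   */
        if (time < arrival[i]) time = arrival[i];    /* CPU idles until the process arrives */
        int waiting = time - arrival[i];             /* time spent in the ready queue     */
        time += burst[i];                            /* run the process to completion     */
        int turnaround = time - arrival[i];          /* completion time - arrival time    */
        printf("P%d       %2d ms    %2d ms\n", i + 1, waiting, turnaround);
    }
    return 0;
}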
This algorithm is non-preemptive so you have to execute the process fully and
after that other processes will be allowed to execute.
Throughput is not efficient.
FCFS suffers from the Convoy effect i.e. if a process with a very high burst time comes first, then it will be executed first irrespective of the fact that a process with a very small burst time is waiting in the ready state.
Shortest Job First (SJF)
In this technique, the process having the minimum burst time at a particular instant of time will be executed first. It is a non-preemptive approach i.e. if a process starts its execution then it will be executed fully and only after that will some other process get the CPU.
Example:
In the above example, at 0ms, we have only one process i.e. process P2, so the process P2
will be executed for 4ms. Now, after 4ms, there are two new processes i.e. process P1 and
process P3. The burst time of P1 is 5ms and that of P3 is 2ms. So, amongst these two, the
process P3 will be executed first because its burst time is less than P1. P3 will be
executed for 2ms. Now, after 6ms, we have two processes with us i.e. P1 and P4 (because we are at 6ms and P4 comes at 5ms). Amongst these two, the process P4 has a smaller burst time compared to P1. So, P4 will be executed for 4ms and after that P1 will be executed for 5ms. So, the waiting time and turnaround time of these processes will be:
---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 7ms | 12ms |
| P2 | 0ms | 4ms |
| P3 | 0ms | 2ms |
| P4 | 1ms | 5ms |
---------------------------------------------
Total waiting time: (7 + 0 + 0 + 1) = 8ms
Average waiting time: (8/4) = 2ms
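A similar sketch for non-preemptive SJF is given below. The exact arrival times of P1 and P3 are not stated in the example, so the values 3 ms and 4 ms used here are assumptions chosen so that the output matches the table above; the processes are printed in the order in which they are executed.

#include <stdio.h>

int main(void) {
    /* Example data: bursts from the example above; arrival times of P1 and P3
       (3 ms and 4 ms) are assumed so that the results match the table.        */
    int arrival[] = {3, 0, 4, 5};        /* P1, P2, P3, P4 */
    int burst[]   = {5, 4, 2, 4};
    int done[4]   = {0};
    int n = 4, completed = 0, time = 0;

    printf("Process  Waiting  Turnaround\n");
    while (completed < n) {
        int pick = -1;
        /* Non-preemptive SJF: among the arrived, unfinished processes,
           pick the one with the smallest burst time.                    */
        for (int i = 0; i < n; i++)
            if (!done[i] && arrival[i] <= time &&
                (pick == -1 || burst[i] < burst[pick]))
                pick = i;

        if (pick == -1) { time++; continue; }        /* nothing has arrived yet: CPU idles   */

        int waiting = time - arrival[pick];
        time += burst[pick];                         /* run the chosen process to completion */
        printf("P%d       %2d ms    %2d ms\n", pick + 1, waiting, time - arrival[pick]);
        done[pick] = 1;
        completed++;
    }
    return 0;
}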
The preemptive version of SJF is known as Shortest Remaining Time First (SRTF), i.e. we schedule the processes based on their shortest remaining time.
Example:
In the above example, at time 1ms, there are two processes i.e. P1 and P2. Process P1 has a burst time of 6ms and process P2 has 8ms. So, P1 will be executed first. Since it is a preemptive approach, we have to re-check at every unit of time. At 2ms, we have three processes i.e. P1 (5ms remaining), P2 (8ms), and P3 (7ms). Out of these three, P1 has the least remaining time, so it will continue its execution. After 3ms, we have four processes i.e. P1 (4ms remaining), P2 (8ms), P3 (7ms), and P4 (3ms). Out of these four, P4 has the least remaining time, so it will be executed. The process P4 keeps executing for the next 3ms because it has the shortest remaining time. After 6ms, we have 3 processes i.e. P1 (4ms remaining), P2 (8ms), and P3 (7ms). So, P1 will be selected and executed. This comparison continues until all the processes are executed. So, the waiting and turnaround times of the processes will be:
---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 3ms | 9ms |
| P2 | 16ms | 24ms |
| P3 | 8ms | 15ms |
| P4 | 0ms | 3ms |
---------------------------------------------
Total waiting time: (3 + 16 + 8 + 0) = 27ms
Average waiting time: (27/4) = 6.75ms
Round-Robin
In this approach of CPU scheduling, we have a fixed time quantum and the CPU is allocated to a process for only that amount of time at a stretch. For example, if we have three processes P1, P2, and P3, and our time quantum is 2ms, then P1 will be given 2ms for its execution, then P2 will be given 2ms, then P3 will be given 2ms. After one cycle, again P1 will be given 2ms, then P2 will be given 2ms, and so on until the processes complete their execution.
Example:
In the above example, every process will be given 2ms in one turn because we have taken
the time quantum to be 2ms. So process P1 will be executed for 2ms, then process P2 will
be executed for 2ms, then P3 will be executed for 2 ms. Again process P1 will be executed
for 2ms, then P2, and so on. The waiting time and turnaround time of the processes will
be:
---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 13ms | 23ms |
| P2 | 10ms | 15ms |
| P3 | 13ms | 21ms |
---------------------------------------------
Total waiting time: (13 + 10 + 13) = 36ms
Average waiting time: (36/3) = 12ms
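The round-robin schedule above can be simulated with the short C program below. The individual burst times are not stated in the example, so the values 10, 5, and 8 ms (with all processes arriving at 0 ms) are assumptions chosen because they reproduce the waiting and turnaround times in the table.

#include <stdio.h>

#define QUANTUM 2   /* time quantum of 2 ms, as in the example */

int main(void) {
    /* Assumed data: all processes arrive at 0 ms; burst times of 10, 5 and 8 ms. */
    int burst[]     = {10, 5, 8};
    int remaining[] = {10, 5, 8};
    int completion[3];
    int n = 3, time = 0, finished = 0;

    int queue[64], head = 0, tail = 0;            /* simple FIFO ready queue          */
    for (int i = 0; i < n; i++) queue[tail++] = i;

    while (finished < n) {
        int p = queue[head++];                    /* take the process at the front    */
        int run = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
        time += run;                              /* run it for one quantum (or less) */
        remaining[p] -= run;
        if (remaining[p] > 0)
            queue[tail++] = p;                    /* not done: back to the ready queue */
        else {
            completion[p] = time;                 /* done: record its completion time  */
            finished++;
        }
    }

    printf("Process  Waiting  Turnaround\n");
    for (int i = 0; i < n; i++)
        printf("P%d       %2d ms    %2d ms\n",
               i + 1, completion[i] - burst[i], completion[i]);
    return 0;
}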
No starvation will occur in round-robin because every process will get a chance for its execution.
Used in time-sharing systems.
Disadvantages of round-robin:
We have to perform a lot of context switching here, which keeps the CPU busy with switching rather than with useful execution.
Example:
In the above example, at 0ms, we have only one process P1. So P1 will execute for 5ms because we are using the non-preemptive technique here. After 5ms, there are three processes in the ready state i.e. process P2, process P3, and process P4. Out of these three processes, the process P4 has the highest priority, so it will be executed for 6ms; after that, process P2 will be executed for 3ms, followed by process P3. The waiting and turnaround times of the processes will be:
---------------------------------------------
| Process | Waiting Time | Turnaround Time |
---------------------------------------------
| P1 | 0ms | 5ms |
| P2 | 10ms | 13ms |
| P3 | 12ms | 20ms |
| P4 | 2ms | 8ms |
---------------------------------------------
Total waiting time: (0 + 10 + 12 + 2) = 24ms
Average waiting time: (24/4) = 6ms
It can lead to starvation if only higher-priority processes keep coming into the ready state.
If the priorities of two or more processes are the same, then we have to use some other scheduling algorithm to break the tie.
So, multiple queues are maintained for processes with common characteristics; each queue has its own priority, and some scheduling algorithm is used within each of the queues.
Example:
In the above example, we have two queues i.e. queue1 and queue2. Queue1 has the higher priority; queue1 uses the FCFS approach and queue2 uses the round-robin approach (time quantum = 2ms).
Since the priority of queue1 is higher, so queue1 will be executed first. In the queue1, we
have two processes i.e. P1 and P4 and we are using FCFS. So, P1 will be executed
followed by P4. Now, the job of the queue1 is finished. After this, the execution of the
processes of queue2 will be started by using the round-robin approach.
In a multilevel feedback queue, we have a list of queues having some priority and the
higher priority queue is always executed first. Let's assume that we have two queues i.e.
queue1 and queue2 and we are using round-robin for these i.e. time quantum for queue1
is 2 ms and for queue2 is 3ms. Now, if a process starts executing in the queue1 then if it
gets fully executed in 2ms then it is ok, its priority will not be changed. But if the
execution of the process will not be completed in the time quantum of queue1, then the
priority of that process will be reduced and it will be placed in the lower priority queue
i.e. queue2 and this process will continue.
While executing a lower priority queue, if a process comes into the higher priority queue,
then the execution of that lower priority queue will be stopped and the execution of the
higher priority queue will be started. This can lead to starvation because if the process
keeps on going into the higher priority queue then the lower priority queue keeps on
waiting for its turn.
Preemptive Scheduling
In preemptive scheduling, the CPU executes a process only for a limited period of time, and after that the process has to wait for its next turn, i.e. in preemptive scheduling, the state of a process can be changed: the process may go from the running state to the ready state, or from the waiting state to the ready state. The resources are allocated to the process for a limited amount of time and are then taken back, and the process goes to the ready queue if it still has some CPU burst time remaining. Some of the preemptive scheduling algorithms are Round-robin, SJF (preemptive), etc.
Non-preemptive Scheduling
In non-preemptive scheduling, if some resource is allocated to a process, then that resource will not be taken back until the completion of the process. Other processes that are present in the ready queue have to wait for their turn and cannot forcefully get the CPU. Once the CPU is allocated to a process, it will be held by that process until it completes its execution or goes into the waiting state for an I/O operation.
What is Burst time, Arrival time, Exit time, Response time, Waiting time, Turnaround
time, and Throughput?
Burst time
Every process in a computer system requires some amount of time for its execution. This
time is both the CPU time and the I/O time. The CPU time is the time taken by CPU to
execute the process. While the I/O time is the time taken by the process to perform some
I/O operation. In general, we ignore the I/O time and we consider only the CPU time for
a process. So, Burst time is the total time taken by the process for its execution
on the CPU.
Arrival time
Arrival time is the time when a process enters into the ready state and is ready for its
execution.
Here in the above example, the arrival time of all the 3 processes are 0 ms, 1 ms, and 2
ms respectively.
Exit time
Exit time is the time when a process completes its execution and exits from the system.
Response time
Response time is the time spent between a process entering the ready state and getting the CPU for the first time. For example, here we are using the First Come First Serve CPU scheduling algorithm for the below 3 processes:
P1: 0 ms
P2: 7 ms, because the process P2 has to wait for 8 ms during the execution of P1 and only after that will it get the CPU for the first time. Also, the arrival time of P2 is 1 ms. So, the response time will be 8 - 1 = 7 ms.
P3: 13 ms, because the process P3 has to wait for the execution of P1 and P2, i.e. after 8 + 7 = 15 ms the CPU will be allocated to the process P3 for the first time. Also, the arrival time of P3 is 2 ms. So, the response time for P3 will be 15 - 2 = 13 ms.
Response time = Time at which the process gets the CPU for the first time -
Arrival time
Waiting time
Waiting time is the total time spent by the process in the ready state waiting for the CPU. For example, consider the arrival times of all the below 3 processes to be 0 ms, 0 ms, and 2 ms, and suppose we are using the First Come First Serve scheduling algorithm.
Then the waiting time for all the 3 processes will be:
P1: 0 ms
P2: 8 ms, because P2 has to wait for the complete execution of P1, and the arrival time of P2 is 0 ms.
P3: 13 ms, because P3 will be executed after P1 and P2, i.e. after 8 + 7 = 15 ms, and the arrival time of P3 is 2 ms. So, the waiting time of P3 will be 15 - 2 = 13 ms.
Waiting time = Turnaround time - Burst time
In the above example, the processes have to wait only once. But in many other scheduling algorithms, the CPU may be allocated to the process for some time, then the process is moved back to the ready state, and again after some time it gets the CPU, and so on.
There is a difference between waiting time and response time. Response time is the time spent between entering the ready state and getting the CPU for the first time. But the waiting time is the total time spent by the process in the ready state. Let's take an example of the round-robin scheduling algorithm. The time quantum is 2 ms.
In the above example, the response time of the process P2 is 2 ms because after 2 ms, the
CPU is allocated to P2 and the waiting time of the process P2 is 4 ms i.e turnaround time
- burst time (10 - 6 = 4 ms).
Turnaround time
Turnaround time is the total amount of time spent by the process from coming in the
ready state for the first time to its completion.
or
Turnaround time = Exit time - Arrival time
For example, if we take the First Come First Serve scheduling algorithm, and the order of
arrival of processes is P1, P2, P3 and each process is taking 2, 5, 10 seconds. Then the
turnaround time of P1 is 2 seconds because when it comes at 0th second, then the CPU is
allocated to it and so the waiting time of P1 is 0 sec and the turnaround time will be the
Burst time only i.e. 2 seconds. The turnaround time of P2 is 7 seconds because the process P2 has to wait for 2 seconds for the execution of P1, and hence the waiting time
of P2 will be 2 seconds. After 2 seconds, the CPU will be given to P2 and P2 will execute
its task. So, the turnaround time will be 2+5 = 7 seconds. Similarly, the turnaround time
for P3 will be 17 seconds because the waiting time of P3 is 2+5 = 7 seconds and the burst
time of P3 is 10 seconds. So, turnaround time of P3 is 7+10 = 17 seconds.
Different CPU scheduling algorithms produce different turnaround time for the same set
of processes. This is because the waiting time of processes differ when we change the
CPU scheduling algorithm.
Throughput
Throughput is a way to measure the efficiency of a CPU. It can be defined as the number of processes executed by the CPU in a given amount of time. For example, let's say the process P1 takes 3 seconds for execution, P2 takes 5 seconds, and P3 takes 10 seconds. So, in this case, the three processes are completed in 3 + 5 + 10 = 18 seconds, and the throughput is 3 processes per 18 seconds, i.e. one process every 6 seconds on average.
In Priority scheduling technique, we assign some priority to every process we have and
based on that priority, the CPU will be allocated and the process will be executed. Here,
the CPU will be allocated to the process that is having the highest priority. We don't
care about the burst time here. Even if the burst time is low, the CPU will be
allocated to the process having the highest priority.
In the above image, we can see the priority of process P1 is the highest, followed by P3
and P2. So, the CPU will be allocated to process P1, then to process P3 and then to
process P2.
NOTE: In our example, we are taking 0 as the highest priority number and 100 or more
as the lowest priority number. You can take the reverse of it also but the concept will be
the same i.e. higher priority process will be allocated the CPU first.
Starvation
If you look closely at the concept of Priority scheduling, then you might have noticed one thing: what if the priority of some process is very low, higher-priority processes keep on coming, and the CPU is always allocated to those higher-priority processes, so the low-priority process keeps waiting for its turn? Let's have an example:
In the above example, the process P2 has the highest priority and the process P1 has the lowest priority. In general, we have a number of processes in the ready state waiting for execution. So, as time passes, if only processes with a higher priority than P1 keep arriving, then the process P1 will keep waiting for its turn and will never get the CPU, because all the other processes have a higher priority than P1. This is called Starvation.
So, starvation should be removed because if some process is in the ready state then we
should provide CPU to it. Since the process is of low priority so we can take our time for
CPU allocation to that process but we must ensure that the CPU is allocated.
Aging
To avoid starvation, we use the concept of Aging. In Aging, after some fixed amount of
time quantum, we increase the priority of the low priority processes. By doing so, as time
passes, the lower priority process becomes a higher priority process.
For example, suppose a process P has a priority number of 75 at 0 ms. Then after every 5 ms (you can use any time quantum), we can decrease the priority number of the process P by 1 (here also, instead of 1, you can take any other number). So, after 5 ms, the priority number of the process P will be 74. Again after 5 ms, we will decrease the priority number of process P by 1. So, after 10 ms, the priority number of the process P will become 73, and this process will continue. After a certain period of time, the process P will become a high-priority process as its priority number comes closer to 0, and the process P will get the CPU for its execution. In this way, the lower-priority process also gets the CPU. No doubt the CPU is allocated after a very long time, but since the priority of the process is very low, we are not that concerned about the response time of the process. The only thing that we are taking care of is starvation.
So, we are Aging our low priority process to make it a high priority process
and as a result, to allocate the CPU for it.
Whenever you are using the Priority scheduling algorithm or the Shortest Job First algorithm, make sure to use the concept of Aging; otherwise, low-priority processes may end up in starvation.
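A tiny sketch of this aging rule (the numbers 75, 5 ms, and the decrement of 1 are just the illustrative values used above, with 0 as the highest priority):

#include <stdio.h>

int main(void) {
    int priority = 75;           /* lower number = higher priority (0 is highest) */
    int waited_ms = 0;

    /* Every 5 ms that the process keeps waiting in the ready queue,
       its priority number is decreased by 1, so it slowly "ages"
       into a high-priority process and cannot starve forever.        */
    while (priority > 0) {
        waited_ms += 5;
        priority -= 1;
        if (waited_ms % 100 == 0)
            printf("after %3d ms waiting, priority number is %d\n",
                   waited_ms, priority);
    }
    printf("after %d ms the process has reached the highest priority\n", waited_ms);
    return 0;
}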
What is a Thread in OS and what are the differences between a Process and a Thread?
Thread
A thread is an execution unit that has its own program counter, stack, and set of registers and that resides within a process. Threads can't exist outside a process, and each thread belongs to exactly one process. Information like the code segment, files, and data segment can be shared by the different threads.
Threads are popularly used to improve an application through parallelism. Actually, only one thread is executed at a time by a single CPU, but the CPU switches rapidly between the threads to give an illusion that the threads are running in parallel.
The diagram above shows a single-threaded process and a multi-threaded process. A single-threaded process is a process with a single thread. A multi-threaded process is a process with multiple threads. As the diagram clearly shows, each of the multiple threads has its own registers, stack, and program counter, but they all share the code and data segments.
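A small POSIX threads example in C (assuming a system with pthreads) makes this concrete: the two threads below belong to one process and share the global data segment, while each has its own stack and its own flow of control.

#include <stdio.h>
#include <pthread.h>

int results[2];                      /* data segment: shared by both threads          */

void *worker(void *arg) {
    int id = *(int *)arg;            /* 'id', 'sum', 'i' live on this thread's own stack */
    int sum = 0;
    for (int i = 1; i <= 5; i++)
        sum += i * id;
    results[id - 1] = sum;           /* write into the shared array                   */
    printf("thread %d finished, sum = %d\n", id, sum);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    /* Two threads of the same process: same code and data, separate stacks. */
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);

    pthread_join(t1, NULL);          /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("shared results: %d and %d\n", results[0], results[1]);
    return 0;
}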
Types of Thread
User-Level Thread
1. The user-level threads are managed by user-level code and the kernel is not aware of them.
2. These threads are faster to create and manage.
3. The kernel manages them as if it was a single-threaded process.
4. It is implemented using user-level libraries and not by system calls. So, no call to
the operating system is made when a thread switches the context.
5. Each process has its own private thread table to keep the track of the threads.
Kernel-Level Thread
1. The kernel knows about these threads, and they are supported by the OS.
2. The threads are created and implemented using system calls.
3. The thread table is not present here for each process. The kernel has a thread table
to keep the track of all the threads present in the system.
4. Kernel-level threads are slower to create and manage as compared to user-level
threads.
Advantages of threads
1. Performance: Threads improve the overall performance(throughput,
computational speed, responsiveness) of a program.
2. Resource sharing: As the threads can share the memory and resources of any
process it allows any application to perform multiple activities inside the same
address space.
3. Utilization of Multiple Processor Architecture: The different threads can
run parallel on the multiple processors hence, this enables the utilization of the
processor to a large extent and efficiency.
4. Reduced Context Switching Time: The threads minimize the context
switching time as in Thread Switching, the virtual memory space remains the
same.
5. Concurrency: Thread provides concurrency within a process.
6. Parallelism: Parallel programming techniques are easier to implement.
Multithreading
Multithreading is a phenomenon of executing multiple threads at the same time. To
understand the concept of multithreading, you must understand what is a thread and
a process .
We have two types of threads, i.e. user-level threads and kernel-level threads. For these threads to function together, there must exist a relationship between them. This relationship is established by using Multithreading Models. There are three common ways of establishing this relationship.
1. Many-to-One Model
2. One-to-One Model
3. Many-to-Many Model
Many-to-One Model
As the name suggests, there is a many-to-one relationship between the threads. Here, multiple user threads are associated or mapped with one kernel thread. The thread management is done at the user level, so it is more efficient.
Drawbacks
1. As multiple user threads are mapped to one kernel thread, if one user thread makes a blocking system call (for example a read() call, where the thread or process has to wait until the read event is completed), it will block the kernel thread, which will in turn block all the other threads.
2. As only one thread can use the kernel thread at a time, multiple threads are unable to run in parallel on a multiprocessor system. Even though we have multiple processors, one kernel thread will run on only one processor. Hence, the user threads will also run only on that processor on which the mapped kernel thread is running.
One-to-One Model
From the name itself, we can understand that one user thread is mapped to one kernel thread.
Disadvantages
1. Each time we create a user thread we have to create a kernel thread. So, the
overhead of creating a kernel thread can affect the performance of the application.
2. Also, in a multiprocessor system, there is a limit to how many threads can run at a time. Suppose there are four processors in the system; then at most four threads can run at a time. So, if you have 5 threads and try to run them all at once, it may not work. Therefore, the application should restrict the number of kernel threads that it supports.
Many-to-Many Model
From the name itself, we can understand that many user threads are mapped to a smaller or equal number of kernel threads. The number of kernel threads is specific to a particular application or machine.
Benefits of MultiThreading
1. Resource sharing: As the threads can share the memory and resources of any
process it allows any application to perform multiple activities inside the same
address space.
2. Utilization of MultipleProcessor Architecture: The different threads can
run parallel on the multiple processors hence, this enables the utilization of the
processor to a large extent and efficiency.
3. Reduced Context Switching Time: The threads minimize the context
switching time as in Thread Context Switching, the virtual memory space remains
the same.
4. Economical: The allocation of memory and resources during process creation comes with a cost. As threads can share the resources of their process, it is more economical to create and context-switch threads.
In the Operating System, there are a number of processes present in a particular state. At the same time, we have a limited number of resources, so those resources need to be shared among various processes. But you should make sure that no two processes are using the same resource at the same time, because this may lead to data inconsistency. So, there should be process synchronization in the Operating System. Processes that share resources with each other are called Cooperative Processes, and processes whose execution does not affect the execution of other processes are called Independent Processes.
In this blog, we will learn about Process Synchronization in Operating System. We will
learn the two important concepts that are related to process synchronization i.e. Race
Condition and Critical Section .
Race Condition
In an Operating System, we have a number of processes and these processes require a
number of resources. Now, think of a situation where we have two processes and these
processes are using the same variable "a". They are reading the variable and then
updating the value of the variable and finally writing the data in the memory.
SomeProcess(){
...
read(a) //instruction 1
a = a + 5 //instruction 2
write(a) //instruction 3
...
}
In the above code, you can see that a process, after doing some operations, will have to read the value of "a", then increment the value of "a" by 5, and at last write the value of "a" to memory. Now, we have two processes P1 and P2 that need to be executed. Let's take the following two cases and also assume that the value of "a" is 10 initially.
1. In this case, process P1 will be executed fully (i.e. all the three instructions) and
after that, the process P2 will be executed. So, the process P1 will first read the
value of "a" to be 10 and then increment the value by 5 and make it to 15. Lastly,
this value will be updated in the memory. So, the current value of "a" is 15. Now,
the process P2 will read the value i.e. 15, increment with 5(15+5 = 20) and finally
write it to the memory i.e. the new value of "a" is 20. Here, in this case, the final
value of "a" is 20.
2. In this case, let's assume that the process P1 starts executing. So, it reads the value
of "a" from the memory and that value is 10(initial value of "a" is taken to be 10).
Now, at this time, context switching happens between process P1 and P2. Now, P2
will be in the running state and P1 will be in the waiting state and the context of
the P1 process will be saved. As the process P1 didn't change the value of "a", so,
P2 will also read the value of "a" to be 10. It will then increment the value of "a" by
5 and make it to 15 and then save it to the memory. After the execution of the
process P2, the process P1 will be resumed and the context of the P1 will be read.
So, the process P1 is having the value of "a" as 10(because P1 has already executed
the instruction 1). It will then increment the value of "a" by 5 and write the final
value of "a" in the memory i.e. a = 15. Here, the final value of "a" is 15.
In the above two cases, after the execution of the two processes P1 and P2, the final value
of "a" is different i.e. in 1st case it is 20 and in 2nd case, it is 15. What's the reason behind
this?
The processes are using the same resource here, i.e. the variable "a". In the first case, the process P1 executes first and then the process P2 starts executing. But in the second case, the process P1 was stopped after executing one instruction, and after that the process P2 started executing. And here both the processes are working on the same resource, i.e. the variable "a", at the same time.
Here, the order of execution of the processes changes the output. All these processes are in a race to say that their output is correct. This is called a race condition.
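The race described above can be demonstrated with two POSIX threads standing in for the processes P1 and P2 (a sketch; on most runs the final value is smaller than expected precisely because the read / increment / write sequences of the two threads get interleaved):

#include <stdio.h>
#include <pthread.h>

int a = 10;                          /* the shared variable "a" */

void *add_five_many_times(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        int tmp = a;                 /* instruction 1: read(a)   */
        tmp = tmp + 5;               /* instruction 2: a = a + 5 */
        a = tmp;                     /* instruction 3: write(a)  */
    }
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, add_five_many_times, NULL);
    pthread_create(&p2, NULL, add_five_many_times, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);

    /* Without synchronization the threads interleave their read/update/write
       steps, so the result is usually less than the expected 10 + 2*100000*5. */
    printf("final value of a = %d (expected %d)\n", a, 10 + 2 * 100000 * 5);
    return 0;
}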
Critical Section
The code in the above part is accessed by all the processes, and this can lead to data inconsistency. So, this code should be placed in the critical section. The critical section code can be accessed by only one process at a time, and no other process can access that critical section code at the same time. All the shared variables or resources that can lead to data inconsistency are placed in the critical section.
All the Critical Section problems need to satisfy the following three conditions:
Mutual Exclusion: If a process is in the critical section, then other processes
shouldn't be allowed to enter into the critical section at that time i.e. there must be
some mutual exclusion between processes.
Progress: If there is no process currently executing in the critical section, then a process that needs to enter the critical section must be able to do so within a finite time.
Bounded Waiting: There must be some limit to the number of times a process
can go into the critical section i.e. there must be some upper bound. If no upper
bound is there then the same process will be allowed to go into the critical section
again and again and other processes will never get a chance to get into the critical
section.
So, in order to remove the problem of race condition, there must be synchronization
between various processes present in the system for its execution otherwise, it may lead
to data inconsistency i.e. a proper order should be defined in which the processes can
execute.
Semaphore
A semaphore is a variable that indicates the number of resources available in a system at a particular time, and this semaphore variable is generally used to achieve process synchronization. It is generally denoted by "S". You can use any other variable name of your choice.
A semaphore uses two functions i.e. wait() and signal() . Both these functions are
used to change the value of the semaphore but the value can be changed by only one
process at a particular time and no other process should change the value
simultaneously.
The wait() function is used to decrement the value of the semaphore variable " S " by
one if the value of the semaphore variable is positive. If the value of the semaphore
variable is 0, the process keeps waiting (in the implementation below, it busy-waits in the
while loop) until the value becomes positive.
wait(S) {
    while (S == 0);   // busy-wait (note the trailing ";") until S becomes positive
    S--;              // decrement S to acquire the resource
}
The signal() function is used to increment the value of the semaphore variable by one.
signal(S) {
S++;
}
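To make this concrete, here is a minimal sketch (not part of the original notes) using POSIX semaphores in C. A binary semaphore plays the role of " S ", and sem_wait()/sem_post() correspond to wait() and signal(); the shared variable "a" is the same idea as in the race-condition example above:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;        /* the semaphore "S", initialised to 1 (binary semaphore)       */
int a = 10;     /* shared variable, like "a" in the race-condition example      */

void *add_five(void *arg) {
    sem_wait(&s);   /* wait(S): blocks while S == 0, then decrements S          */
    a = a + 5;      /* critical section: only one thread executes this at a time */
    sem_post(&s);   /* signal(S): increments S, letting another thread enter    */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);                       /* S starts at 1                  */
    pthread_create(&t1, NULL, add_five, NULL);
    pthread_create(&t2, NULL, add_five, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a = %d\n", a);                    /* always 20, never 15            */
    sem_destroy(&s);
    return 0;
}

Because the semaphore serialises the two increments, the lost-update outcome (a = 15) from the race-condition discussion can no longer happen.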
Types of Semaphore
There are two types of semaphores:
1. Binary Semaphore: Its value can be either 0 or 1. It is used when only one process should be allowed into the critical section at a time.
2. Counting Semaphore: Its value can be any non-negative integer. It is used when a resource has multiple instances, and the value indicates how many instances are still available.
Advantages of semaphore
The mutual exclusion principle is followed when you use semaphores because
semaphores allow only one process to enter into the critical section.
Here, you need not separately verify whether a process should be allowed to enter the
critical section or not; the semaphore value decides this. So, processor time is not wasted
on that check.
Disadvantages of semaphore
While using semaphore, if a low priority process is in the critical section, then no
other higher priority process can get into the critical section. So, the higher
priority process has to wait for the complete execution of the lower priority
process.
The wait() and signal() functions need to be used in the correct order, otherwise the
synchronization can break. So, implementing synchronization with semaphores correctly
is quite difficult.
What is Deadlock?
Deadlock is a situation where two or more processes are waiting for each other. For
example, let us assume, we have two processes P1 and P2. Now, process P1 is holding the
resource R1 and is waiting for the resource R2. At the same time, the process P2 is
holding the resource R2 and is waiting for the resource R1. So, the process P1 is waiting
for process P2 to release its resource and at the same time, the process P2 is waiting for
process P1 to release its resource. And no one is releasing any resource. So, both are
waiting for each other to release the resource. This leads to infinite waiting and no work
is done here. This is called Deadlock.
If a process is in the waiting state and is unable to change its state because the
resources required by the process are held by some other waiting process, then the
system is said to be in Deadlock.
Let's take one real-life example to understand the concept of Deadlock in a better way.
Suppose, you are studying in a school and you are using the bus service also. So, you
have to pay two fees i.e. bus fee and tuition fee. Now, think of a situation, when you go
for submitting the bus fee and the accountant says that you have to submit the tuition fee
first and then the bus fee. So, you go to submit the tuition fees at the other counter and
the accountant there says that you have to first submit the bus fees and then the tuition
fees. So, what will you do here? You are in a situation of deadlock here. You don't know
what to submit first, bus fees or tuition fees?
Hold and Wait: A process can hold a number of resources at a time and at the
same time, it can request for other resources that are being held by some other
process. For example, a process P1 can hold two resources R1 and R2 and at the
same time, it can request some resource R3 that is currently held by process P2.
Deadlock will happen only if all four of the necessary conditions hold simultaneously.
So, you know what Deadlock is; now let's name those four necessary conditions. Cool. The
four conditions of deadlock are:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
To remove deadlock from our system, we need to break at least one of the above four
conditions of deadlock. So, there are various ways of deadlock handling. Let's see all of
them, one by one.
1. Deadlock Prevention
In this method, the system will prevent any deadlock from happening i.e. the system
will make sure that at least one of the four conditions of deadlock is violated. The
techniques used to prevent one of these four conditions from holding can be very costly,
so you should apply deadlock prevention only in those situations where a deadlock would
have a drastic effect on the system.
Let's see how we can break each of the four conditions of deadlock by using the deadlock
prevention technique.
Mutual Exclusion: Mutual exclusion says that a resource can only be held by
one process at a time. If another process is also demanding the same resource then
it has to wait for the allocation of that resource. So, practically, we can't violate the
mutual exclusion for a process because in general, one resource can perform the
work of one process at a time. For example, a printer can't print documents of two
users at the same time.
Hold and Wait: Hold and wait arises when a process holds some resources and
is waiting for some other resources that are being held by some other waiting
process. To avoid this, the process can acquire all the resources that it needs,
before starting its execution and after that, it starts its execution. In this way, the
process need not wait for some resources during its execution. But this method is
not practical because we can't know the resources required by a process in
advance, before its execution. So, another way of avoiding hold and wait can be
the "Do not hold" technique. For example, if the process needs 10 resources R1,
R2, R3,...., R10. At a particular time, we can provide R1, R2, R3, and R4. After
performing the jobs on these resources, the process needs to release these
resources and then the other resources will be provided to the process. In this way,
we can avoid the hold and wait condition.
No Preemption: This condition says that a process can't forcefully take the
resources of other processes. But if we find some resource due to which deadlock
is happening in the system, then we can forcefully preempt that resource from the
process that is holding it. By doing so, we can remove the deadlock, but there are
certain things that should be kept in mind before using this forceful approach.
Only a process that has a very high priority or is a system process should be
allowed to forcefully preempt the resources of other processes. Also, try to
preempt the resources of those processes which are in the waiting state.
Circular Wait: Circular wait is a condition in which the first process is waiting
for the resource held by the second process, the second process is waiting for the
resource held by the third process and so on. At last, the last process is waiting for
the resource held by the first process. So, every process is waiting for each other to
release the resource. This is called a circular wait. To avoid this, what we can do is
number or prioritize the resources (in our case, we are using R1, R2, R3, and so
on) and require every process to request resources only in ascending order of that
numbering. For example, if the processes P1 and P2 both require the resources R1 and
R2, then initially both processes will demand the resource R1; only one of them will get
R1 at that time and the other process will have to wait for its turn. So, in this way, the
two processes will never be waiting for each other: one of them will be executing and the
other will wait for its turn. Hence, there is no circular wait here.
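As a rough illustration of this resource-ordering idea (this sketch is not from the notes; the resources are modelled as pthread mutexes), both threads below always lock R1 before R2, so a circular wait can never form:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

/* Every thread acquires resources in ascending order: R1 first, then R2.
 * Because no thread ever holds R2 while waiting for R1, a cycle of
 * "waiting for each other" can never form. */
void *worker(void *name) {
    pthread_mutex_lock(&R1);
    pthread_mutex_lock(&R2);
    printf("%s is using R1 and R2\n", (char *)name);
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}

int main(void) {
    pthread_t p1, p2;
    pthread_create(&p1, NULL, worker, "P1");
    pthread_create(&p2, NULL, worker, "P2");
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    return 0;
}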
2. Deadlock Avoidance
In the deadlock avoidance technique, we try to avoid deadlock from happening in our system.
Here, the system always wants to be in a safe state. So, the system maintains a set of data
and, using that data, it decides whether a new request should be entertained or not. If
taking that new request would move the system into an unsafe (bad) state, then the system
will avoid that kind of request and will ignore it. So, if a request is made for a resource
from the system, then that request should only be approved if the resulting state of the
system is safe i.e. not heading into deadlock.
3. Deadlock Detection and Recovery
In this method, the system lets a deadlock occur, detects it, and then recovers from it.
For recovery, the CPU may forcefully take the resource allocated to some process and
give it to some other process, but that process should be of high priority or it
must be a system process.
4. Deadlock Ignorance
In most systems, deadlock happens rarely. So, why apply so many detection and
recovery techniques, or why apply some method to prevent deadlock? As these deadlock
prevention methods are costly, the Operating System simply assumes that deadlock is
never going to happen and ignores it. This is the most widely used method of deadlock
handling.
So, you have to decide whether you want correctness or performance. If you want
performance, then your system should ignore deadlock; otherwise, you can apply some
deadlock prevention technique. It totally depends on the needs of the situation. If your
system is dealing with some very important data that you can't afford to lose if a deadlock
happens, then you should definitely go for deadlock prevention.
Banker’s Algorithm
Banker’s Algorithm is a deadlock avoidance algorithm. It is also used for deadlock
detection. This algorithm tells whether a system can go into a deadlock or not by
analyzing the currently allocated resources and the resources it will require in the future.
Several data structures are used to implement this algorithm, such as the Available
vector and the Max, Allocation, and Need matrices. The algorithm itself consists of two
parts:
1. Resource-Request Algorithm
2. Safety Algorithm
Safe state: A safe state is a state in which all the processes can be executed in some
arbitrary order with the available resources such that no deadlock occurs.
1. If the resulting state is a safe state, then the requested resources are actually
allocated to the process.
2. If the resulting state is an unsafe state, then the system rolls back to the previous
state and the process is asked to wait longer.
Safety Algorithm
The safety algorithm is applied to check whether a state is in a safe state or not.
1. Suppose currently all the processes are to be executed. Define two data structures,
Work and Finish, as vectors of length m (where m is the length of the Available vector,
i.e. the number of resource types) and n (the number of processes to be executed)
respectively. Initially,
Work = Available
Finish[i] = false for i = 0, 1, ..., n - 1.
2. This algorithm will look for a process whose Need value is less than or equal to Work.
So, in this step, we will find an index i such that
Finish[i] == false and Need[i] <= Work.
If no such i exists, go to step 4.
3. The process i found in step 2 can run to completion and release its resources, so we set
Work = Work + Allocation[i] and Finish[i] = true,
and go back to step 2.
4. If all the processes can be executed in some sequence, then the state is said to be a safe
state. Or, we can say that if Finish[i] == true for all i, then the system is in a safe state.
Example
So, the total allocated resources (total_alloc) are [5, 4, 3]. Therefore, the Available (the
resources that are currently available) resources are
Available = [0, 1, 2]
Now, we will make the Need Matrix for the system according to the given conditions. As
we know, Need(i)=Max(i)-Allocation(i) , so the resultant Need matrix will be as
follows:
Now, we will apply the safety algorithm to check that if the given state is a safe state or
not.
1. Work=Available=[0, 1, 2]
2. Finish[i] = false for i = 0, 1, 2, as none of these processes have been executed yet.
3. Now, we check that Need[i]≤Work . By seeing the above Need matrix we can
tell that only B[0, 1, 2] process can be executed. So, process B( i=1 )is allocated the
resources and it completes its execution. After completing the execution, it frees
up the resources.
4. Again, Work = Work + Allocation(B) i.e. Work = [0, 1, 2] + [2, 0, 1] = [2, 1, 3] and
Finish[1] = true.
5. Now, as we have more instances of resources available, we will check whether any
other process's resource needs can be satisfied. With the currently available
resources [2, 1, 3], we can see that only process A [1, 2, 1] can be executed. So,
process A( i=0 ) is allocated the resources and it completes its execution. After
completing the execution, it frees up the resources.
6. Again, Work = Work + Allocation(A) i.e. Work = [2, 1, 3] + [1, 2, 1] = [3, 3, 4] and
Finish[0] = true.
7. Now, as we have more instances of resources available, we will check whether the
resource requirement of the last remaining process can be satisfied. With the currently
available resources [3, 3, 4], we can see that process C [2, 2, 1] can be executed. So,
process C ( i=2 ) is allocated the resources and it completes its execution. After
completing the execution, it frees up the resources.
8. Finally, Work = Work + Allocation(C) i.e. Work = [3, 3, 4] + [2, 2, 1] = [5, 5, 5] and
Finish[2] = true.
9. Finally, all the resources are free and there exists a safe sequence B, A, C in which
all the processes can be executed. So, the system is in a safe state and deadlock will
not occur.
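The safety check described above can be written as a short function. The sketch below is only an illustration: the matrices in main() are hypothetical placeholders (the full Need matrix of the example was given in a figure that is not reproduced here). The function simply repeats the steps Work = Available, find a process with Need <= Work, then Work = Work + Allocation[i]:

#include <stdbool.h>
#include <stdio.h>

#define N 3   /* number of processes      */
#define M 3   /* number of resource types */

/* Returns true if the state described by Available, Allocation and Need
 * is safe, i.e. some order exists in which all processes can finish.    */
bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finish[N] = { false };
    for (int r = 0; r < M; r++) work[r] = available[r];      /* Work = Available */

    for (int pass = 0; pass < N; pass++) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool can_run = true;
            for (int r = 0; r < M; r++)
                if (need[i][r] > work[r]) { can_run = false; break; }
            if (can_run) {                                   /* Need[i] <= Work  */
                for (int r = 0; r < M; r++)
                    work[r] += allocation[i][r];             /* release resources */
                finish[i] = true;
                found = true;
            }
        }
        if (!found) break;       /* no runnable process left in this pass */
    }
    for (int i = 0; i < N; i++)
        if (!finish[i]) return false;
    return true;
}

int main(void) {
    /* Hypothetical placeholder matrices; the Need rows for A and C are
     * made up for illustration, not taken from the example above.      */
    int available[M]     = { 0, 1, 2 };
    int allocation[N][M] = { {1, 2, 1}, {2, 0, 1}, {2, 2, 1} };
    int need[N][M]       = { {1, 0, 1}, {0, 1, 2}, {2, 2, 1} };
    printf(is_safe(available, allocation, need) ? "Safe state\n" : "Unsafe state\n");
    return 0;
}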
Partition Allocation Methods in Memory Management
In the operating system, the following are four common memory management techniques.
Single contiguous allocation: Simplest allocation method used by MS-DOS. All memory
(except some reserved for OS) is available to a process.
Partitioned allocation: Memory is divided into different blocks or partitions. Each process is
allocated according to the requirement.
Paged memory management: Memory is divided into fixed-sized units called page frames,
used in a virtual memory environment.
Segmented memory management: Memory is divided into different segments (a segment is
a logical grouping of the process's data or code). In this management, allocated memory
doesn't have to be contiguous.
Many operating systems (for example, Windows and Linux) combine Segmentation with
Paging: a process is divided into segments and individual segments have pages.
In Partition Allocation, when there is more than one partition freely available to
accommodate a process’s request, a partition must be selected. To choose a particular
partition, a partition allocation method is needed. A partition allocation method is considered
better if it avoids internal fragmentation.
When it is time to load a process into the main memory and if there is more than one free
block of memory of sufficient size then the OS decides which free block to allocate.
There are different Placement Algorithms:
A. First Fit
B. Best Fit
C. Worst Fit
D. Next Fit
1. First Fit: In the first fit, the partition is allocated which is the first sufficient block from the
top of Main Memory. It scans memory from the beginning and chooses the first available
block that is large enough. Thus it allocates the first hole that is large enough.
2. Best Fit: Allocate the process to the partition which is the smallest sufficient partition
among the freely available partitions. It searches the entire list of holes to find the smallest
hole whose size is greater than or equal to the size of the process.
3. Worst Fit: Allocate the process to the partition which is the largest sufficient partition
among the freely available partitions in the main memory. It is the opposite of the best-fit
algorithm. It searches the entire list of holes to find the largest hole and allocates it to the
process.
4. Next Fit: Next fit is similar to the first fit but it will search for the first sufficient partition from
the last allocation point.
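As a small illustration of the first-fit idea (the block sizes and process size below are hypothetical, not taken from the notes), the function scans the free blocks from the top of memory and returns the first one that is large enough:

#include <stdio.h>

/* First Fit: scan the free blocks from the top of memory and return the
 * index of the first block that is large enough; -1 if none fits.       */
int first_fit(int block_size[], int num_blocks, int process_size) {
    for (int i = 0; i < num_blocks; i++)
        if (block_size[i] >= process_size)
            return i;
    return -1;
}

int main(void) {
    int blocks[] = { 100, 500, 200, 300, 600 };   /* hypothetical free blocks (KB) */
    int process  = 212;                           /* hypothetical process size (KB) */
    int idx = first_fit(blocks, 5, process);
    if (idx >= 0)
        printf("Process of %d KB placed in block %d (%d KB)\n",
               process, idx, blocks[idx]);
    else
        printf("No sufficiently large block found\n");
    return 0;
}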
Is Best-Fit really best?
Although best fit minimizes the wasted space, it consumes a lot of processor time searching
for the block which is closest to the required size. Also, best-fit may perform worse than
other algorithms in some cases.
Comparison of Partition Allocation Methods:
(Comparison table: Sl. No. | Partition Allocation Method | Advantages | Disadvantages)
What is the difference between logical and physical address wrt Operating System?
Physical Address
The physical address refers to a location in the memory. It allows access to data in the
main memory. A physical address is not directly accessible to the user program hence, a
logical address needs to be mapped to it to make the address accessible. This mapping is
done by the MMU . Memory Management Unit(MMU) is a hardware component
responsible for translating a logical address to a physical address.
Logical Address
A logical address or virtual address is an address that is generated by the CPU during
program execution. A logical address doesn't exist physically. The logical address is used
as a reference to access the physical address. A logical address usually ranges from zero
to maximum (max). The user program that generates the logical address assumes that
the process runs in locations between 0 and max. The MMU adds this logical address
(generated by the CPU) to the base address held in its relocation register to form the
physical address.
The mapping between logical and physical addresses is performed by the MMU; a small
sketch of the idea follows.
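This toy sketch is only an illustration (the base and limit values are hypothetical, and a real MMU does this in hardware on every memory access): the translate() function adds the base register to the logical address after checking it against the limit register.

#include <stdio.h>
#include <stdlib.h>

unsigned int base_reg  = 14000;   /* where the process is loaded in RAM          */
unsigned int limit_reg = 3000;    /* size of the process's logical address space */

unsigned int translate(unsigned int logical) {
    if (logical >= limit_reg) {            /* protection check                   */
        fprintf(stderr, "Trap: address out of range\n");
        exit(1);
    }
    return base_reg + logical;             /* physical = base + logical          */
}

int main(void) {
    printf("Logical 0    -> physical %u\n", translate(0));
    printf("Logical 1234 -> physical %u\n", translate(1234));
    return 0;
}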
The set of all the logical addresses generated in reference to a program by the CPU
is called the Logical Address Space, whereas the set of all the physical addresses
mapped to these logical addresses is called the Physical Address Space.
In contiguous memory allocation whenever the processes come into RAM, space is
allocated to them. These spaces in RAM are divided either on the basis of fixed
partitioning (the size of partitions are fixed before the process gets loaded into RAM)
or dynamic partitioning (the size of the partition is decided at the run time according
to the size of the process). As processes get loaded into and removed from the memory,
these spaces get broken into small pieces of memory that can't be allocated to the
incoming processes. This problem is called fragmentation. In this blog, we will study
how this free space and fragmentation occur in memory. So, let's get started.
Fragmentation
Fragmentation is an unwanted problem where the memory blocks cannot be allocated to
the processes due to their small size and the blocks remain unused. It can also be
understood as when the processes are loaded and removed from the memory they create
free space or hole in the memory and these small blocks cannot be allocated to new
upcoming processes and results in inefficient use of memory. Basically, there are two
types of fragmentation:
Internal Fragmentation
External Fragmentation
Internal Fragmentation
In this fragmentation, the process is allocated a memory block of a size greater than the
size of that process. Due to this, some part of the memory is left unused and this causes
internal fragmentation.
Example: Suppose fixed partitioning (i.e. the memory blocks are of fixed sizes) is used
for memory allocation in RAM. These sizes are 2MB, 4MB, 4MB, and 8MB. Some part
of this RAM is occupied by the Operating System (OS).
Now, suppose a process P1 of size 3MB comes and it gets memory block of size 4MB. So,
the 1MB that is free in this block is wasted and this space can’t be utilized for allocating
memory to some other process. This is called internal fragmentation .
External Fragmentation
In this fragmentation, although the total space needed by a process is available, we are
still not able to put that process in the memory because that space is not contiguous.
This is called external fragmentation.
Example: Suppose in the above example, three new processes P2, P3, and P4 come, of
sizes 2MB, 3MB, and 6MB respectively. Now, these processes get allocated memory blocks
of size 2MB, 4MB, and 8MB respectively.
So, now if we closely analyze this situation then process P3 (unused 1MB)and P4(unused
2MB) are again causing internal fragmentation. So, a total of 4MB (1MB (due to process
P1) + 1MB (due to process P3) + 2MB (due to process P4)) is unused due to internal
fragmentation.
Now, suppose a new process of 4 MB comes. Though we have a total space of
4MB still we can’t allocate this memory to the process. This is called external
fragmentation .
The following non-contiguous memory allocation techniques are used to avoid this problem:
1. Paging
2. Segmentation
Paging
Paging is a non-contiguous memory allocation technique in which secondary memory
and the main memory is divided into equal size partitions. The partitions of the
secondary memory are called pages while the partitions of the main memory are
called frames . They are divided into equal size partitions to have maximum utilization
of the main memory and avoid external fragmentation.
Example: We have a process P with a process size of 4B and a page size of 1B. Therefore,
there will be four pages (say, P0, P1, P2, P3), each of size 1B. Also, when this process goes
into the main memory for execution, then depending upon the availability, it may be
stored in a non-contiguous fashion in the main memory frames as shown below:
1. Page Number: It tells the specific page of the process from which the CPU wants to
read the data. It is used as an index into the page table.
2. Page Offset: It tells the exact word on that page which the CPU wants to read. It
requires no translation as the page size is the same as the frame size, so the place of the
word which the CPU wants to access does not change.
Page table: A page table contains the frame number corresponding to each page
number of a specific process. So, each process will have its own page table. A
register called the Page Table Base Register (PTBR) holds the base address of the
page table.
Now, let's see how the translation is done.
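Since the original diagram is not reproduced here, the following is a minimal sketch of the idea (the page table contents and the page size are hypothetical): the page number indexes the page table to obtain a frame number, and the physical address is frame number × page size + page offset.

#include <stdio.h>

#define PAGE_SIZE 4          /* hypothetical page/frame size in bytes             */

/* Hypothetical page table for one process: page number -> frame number */
int page_table[4] = { 5, 2, 7, 0 };

unsigned int translate(unsigned int logical) {
    unsigned int page_number = logical / PAGE_SIZE;  /* which page                */
    unsigned int offset      = logical % PAGE_SIZE;  /* word within the page      */
    unsigned int frame       = page_table[page_number];
    return frame * PAGE_SIZE + offset;               /* physical address          */
}

int main(void) {
    unsigned int logical = 6;    /* page 1, offset 2 -> frame 2, physical 10 */
    printf("Logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}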
Advantages of Paging
1. There is no external fragmentation as it allows us to store the data in a non-
contiguous way.
2. Swapping is easy between equal-sized pages and frames.
Disadvantages of Paging
1. As the size of the frame is fixed, it may suffer from internal fragmentation. It
may happen that the process is too small and does not occupy an entire frame.
2. The access time increases because of paging as the main memory has to be now
accessed two times. First, we need to access the page table which is also stored in
the main memory and second, combine the frame number with the page offset and
then get the physical address of the page which is again stored in the main
memory.
3. For every process, we have an independent page table and maintaining the page
table is extra overhead.
Segmentation
In paging, we were blindly dividing the process into pages of fixed sizes, but in
segmentation, we divide the process into modules for better visualization of the process.
Here, each segment or module consists of the same type of functions. For example, the
main function is included in one segment, the library functions are kept in another segment,
and so on. As the size of segments may vary, memory is divided into variable-sized parts.
Let's first understand some of the basic terms and then we will see how this translation is
done.
1. Segment Number: It tells the specific segment of the process from which the CPU
wants to read the data. It is used as an index into the segment table.
2. Segment Offset: It tells the exact word in that segment which the CPU wants to
read.
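Again, as the translation diagram is not reproduced here, the following is a minimal sketch (the segment table values are hypothetical): the segment number indexes the segment table, the offset is checked against that segment's limit, and the physical address is the segment's base plus the offset.

#include <stdio.h>
#include <stdlib.h>

/* Each segment table entry stores the base address and the limit (size)
 * of one segment. Values below are hypothetical.                        */
struct segment { unsigned int base, limit; };

struct segment seg_table[3] = {
    { 1400, 1000 },   /* segment 0: e.g. main function */
    { 6300,  400 },   /* segment 1: e.g. library code  */
    { 4300, 1100 },   /* segment 2: e.g. data          */
};

unsigned int translate(unsigned int seg_no, unsigned int offset) {
    if (offset >= seg_table[seg_no].limit) {   /* protection check */
        fprintf(stderr, "Trap: offset outside segment %u\n", seg_no);
        exit(1);
    }
    return seg_table[seg_no].base + offset;    /* base + offset */
}

int main(void) {
    printf("(2, 53) -> physical %u\n", translate(2, 53));   /* 4300 + 53 = 4353 */
    return 0;
}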
Advantages of Segmentation
1. The size of the segment table is less compared to the size of the page table.
2. There is no internal fragmentation.
Disadvantages of Segmentation
1. When the processes are loaded and removed ( during swapping ) from the main
memory then free memory spaces are broken into smaller pieces and this causes
external fragmentation.
2. Here also the time to access the data increases as due to segmentation the main
memory has to be now accessed two times. First, we need to access the segment
table which is also stored in the main memory and second, combine the base
address of the segment with the segment offset and then get the physical address
which is again stored in the main memory.
To resolve this problem, the Demand Paging concept came into play. This concept says we
should not load any page into the main memory until it is required, i.e. we should keep all
the pages in secondary memory until they are demanded. In contrast, in Pre-Paging, the OS
guesses in advance which pages the process will require and pre-loads them into the
memory.
Demand Paging
Demand paging is a technique used in virtual memory systems where the pages are
brought in the main memory only when required or demanded by the CPU. Hence, it is
also named as lazy swapper because the swapping of pages is done only when
required by the CPU.
1. Now, if the CPU wants to access page P2 of a process P, first it will search the page
in the page table.
2. As the page table does not contain this page, a trap or page fault is generated. As
soon as the trap is generated, context switching happens and the control goes
to the operating system.
3. The OS will put the process in a waiting/blocked state. The OS will now search for
that page in the backing store or secondary memory.
4. The OS will then read the page from the backing store and load it into the main
memory.
5. Next, the OS will update the page table entry accordingly.
6. Finally, the control is taken back from the OS and the execution of the process is
resumed.
Hence whenever a page fault occurs these steps are followed by the operating system and
the required page is brought into memory.
So, whenever a page fault occurs, all the above steps (2-6) are performed. The time taken
to service the page fault is called the Page Fault Service Time.
When the page fault rate is 'p' while executing any process, the effective memory
access time is calculated as follows:
Effective Access Time = (1 - p) × (memory access time) + p × (page fault service time)
For example, with a (hypothetical) memory access time of 100 ns, a page fault service time
of 8 ms, and p = 0.001, the effective access time is about 0.999 × 100 ns + 0.001 × 8,000,000 ns ≈ 8.1 microseconds.
Advantages
It increases the degree of multiprogramming as many processes can be present in
the main memory at the same time.
There is more efficient use of memory, as processes whose size is greater than the
size of the main memory can also be executed using this mechanism, because we
are not loading the whole process at a time.
Disadvantages
The amount of processor overhead and the number of tables used for handling the
page faults is greater than in simple page management techniques.
PrePaging
In demand paging, that page is brought to the main memory which is actually demanded
during the execution of the process. But, in pre-paging pages other than the demanded
by the CPU are also brought in. The OS guesses in advance which page the process will
require and pre-loads them into the memory.
The diagram above shows that only one page was referenced or demanded by the CPU
but three more pages were pre-paged by the OS. The OS tries to predict which page
would be next required by the processor and brings that page proactively into the main
memory.
Advantages
It saves time when large contiguous structures are used. Consider
an example where the process accesses consecutive addresses. In such cases,
the operating system can guess the next pages. And, if the guesses are right, fewer
page faults will occur and the effective memory access time will decrease.
Disadvantages
There is a wastage of time and memory if those pre-paged pages are unused.
This lesson will introduce you to the concept of page replacement, which is used in
memory management. You will understand the definition, need and various algorithms
related to page replacement.
A computer system has a limited amount of memory. Adding more memory physically is
very costly. Therefore most modern computers use a combination of both hardware and
software to allow the computer to address more memory than the amount physically
present on the system. This extra memory is actually called Virtual Memory.
Virtual memory is commonly implemented using:
Paging
Segmentation
In this blog, we will learn about the paging part.
Paging
Paging is a process of reading data from, and writing data to, the secondary storage. It is
a memory management scheme that is used to retrieve processes from the secondary
memory in the form of pages and store them in the primary memory. The main objective
of paging is to divide each process in the form of pages of fixed size. These pages are
stored in the main memory in frames. Pages of a process are only brought from the
secondary memory to the main memory when they are needed.
When an executing process refers to a page, it is first searched in the main memory. If it
is not present in the main memory, a page fault occurs.
** Page Fault is the condition in which a running process refers to a page that is not
loaded in the main memory.
In such a case, the OS has to bring the page from the secondary storage into the main
memory. This may cause some pages in the main memory to be replaced due to limited
storage. A Page Replacement Algorithm is required to decide which page needs to be
replaced.
When the page that was selected for replacement is paged out and then referenced again, it
has to be read in from the disk, and this requires waiting for I/O completion. This
determines the quality of the page replacement algorithm: the less time spent waiting for
page-ins, the better the algorithm.
** If a process requests a page and that page is found in the main memory, then it is
called a page hit; otherwise, it is a page miss or page fault.
FIFO (First In First Out) Page Replacement
When there is a need for page replacement, the FIFO algorithm swaps out the page at
the front of the queue, that is, the page which has been in the memory for the longest
time.
For Example:
Consider the page reference string of size 12: 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with frame size
4(i.e. maximum 4 pages in a frame).
Initially, all 4 slots are empty, so when 1, 2, 3, 4 come, they are allocated to the empty
slots in order of their arrival. These are page faults as 1, 2, 3, 4 are not available in memory.
When 5 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 1.
When 1 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 2.
When 3,1 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
When 6 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 3.
When 3 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 4.
When 2 comes, it is not available in memory so page fault occurs and it replaces the
oldest page in memory, i.e., 5.
When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
Advantages
Simple to understand and implement; pages are replaced strictly in the order in which
they arrived.
Disadvantages
Poor performance.
Doesn't consider the frequency of use or last used time; it simply replaces the oldest
page.
Suffers from Belady's Anomaly (i.e. more page faults may occur when we increase the
number of page frames).
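The FIFO example above can also be checked with a small simulation. The sketch below (not part of the original notes) runs the same reference string 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 with 4 frames and should report 9 page faults and 3 hits:

#include <stdio.h>
#include <stdbool.h>

#define FRAMES 4

int main(void) {
    int ref[] = { 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 };   /* reference string */
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES];
    int next = 0;        /* index of the oldest frame (FIFO pointer) */
    int loaded = 0;      /* how many frames are filled so far        */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < loaded; j++)
            if (frame[j] == ref[i]) { hit = true; break; }
        if (!hit) {                      /* page fault                     */
            if (loaded < FRAMES) {       /* free slot available            */
                frame[loaded++] = ref[i];
            } else {                     /* replace the oldest page (FIFO) */
                frame[next] = ref[i];
                next = (next + 1) % FRAMES;
            }
            faults++;
        }
    }
    printf("Page faults = %d, hits = %d\n", faults, n - faults);  /* 9 and 3 */
    return 0;
}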
LRU (Least Recently Used) Page Replacement
In LRU, whenever page replacement happens, the page which has not been used for the
longest amount of time is replaced.
For Example
Initially, all 4 slots are empty, so when 1, 2, 3, 4 come, they are allocated to the empty
slots in order of their arrival. These are page faults as 1, 2, 3, 4 are not available in memory.
When 5 comes, it is not available in memory so page fault occurs and it replaces 1 which
is the least recently used page.
When 1 comes, it is not available in memory so page fault occurs and it replaces 2.
When 3,1 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
When 6 comes, it is not available in memory so page fault occurs and it replaces 4.
When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
When 2 comes, it is not available in memory so page fault occurs and it replaces 5.
When 3 comes, it is available in the memory, i.e., Page Hit, so no replacement occurs.
Advantages
Efficient.
Doesn't suffer from Belady’s Anomaly.
Disadvantages
Complex Implementation.
Expensive.
Requires hardware support.
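LRU can be simulated in the same way by remembering when each resident page was last used. The sketch below (again not from the original notes) uses the same reference string with 4 frames and should report 8 page faults, matching the walkthrough above:

#include <stdio.h>

#define FRAMES 4

int main(void) {
    int ref[] = { 1, 2, 3, 4, 5, 1, 3, 1, 6, 3, 2, 3 };
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES];
    int last_used[FRAMES];   /* time of the most recent reference per frame */
    int loaded = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int pos = -1;
        for (int j = 0; j < loaded; j++)
            if (frame[j] == ref[t]) { pos = j; break; }
        if (pos == -1) {                          /* page fault                 */
            faults++;
            if (loaded < FRAMES) {
                pos = loaded++;                   /* fill a free frame          */
            } else {                              /* evict least recently used  */
                pos = 0;
                for (int j = 1; j < FRAMES; j++)
                    if (last_used[j] < last_used[pos]) pos = j;
            }
            frame[pos] = ref[t];
        }
        last_used[pos] = t;                       /* mark as most recently used */
    }
    printf("Page faults = %d\n", faults);         /* 8 for this string          */
    return 0;
}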
Optimal Page Replacement
In this algorithm, the page that will not be used for the longest duration of time in the
future is replaced, i.e., the pages in memory that are going to be referenced farthest in
the future are replaced.
This algorithm was introduced long back and is difficult to implement because it requires
future knowledge of the program behaviour. However, it is possible to implement
optimal page replacement on the second run by using the page reference information
collected on the first run.
For Example
Initially, all 4 slots are empty, so when 1, 2, 3, 4 come, they are allocated to the empty
slots in order of their arrival. These are page faults as 1, 2, 3, 4 are not available in memory.
When 5 comes, it is not available in memory so page fault occurs and it replaces 4 which
is going to be used farthest in the future among 1, 2, 3, 4.
When 1,3,1 comes, they are available in the memory, i.e., Page Hit, so no replacement
occurs.
When 6 comes, it is not available in memory so page fault occurs and it replaces 1.
Advantages
Gives the lowest possible number of page faults, so it is highly efficient.
Often used as a benchmark to compare other page replacement algorithms.
Disadvantages
Difficult to implement, as it requires future knowledge of the reference string.
Time-consuming.
We all know that a process is divided into various pages and these pages are used during
the execution of the process. The whole process is stored in the secondary memory. But
to make the execution of a process faster, we use the main memory of the system and
store the process pages into it. But there is a limitation with the main memory: we have
only a limited amount of space in the main memory. So, what if the size of the process is
larger than the size of the main memory? Here, the concept of Virtual Memory comes
into play.
So, the benefit of using the Virtual Memory is that if we are having some program that is
larger than the size of the main memory then instead of loading all the pages we load
some important pages.
In general, when we execute a program, then the entire program is not required to be
loaded fully in the main memory. This is because only a few portions of the program are
being executed at a time. For example, the error handling part of any program is called
only when there is some error, and we know that errors happen rarely. So, why
load this part of the program into the main memory and fill the memory space? Another
example can be a large-sized array. Generally, we have over-sized arrays because we
reserve the space for worst-case scenarios. But in reality, only a small part of the array is
used regularly. So, why put the whole array in the main memory?
So, what we do is we put the frequently used pages of the program in the main memory
and this will result in fast execution of the program because whenever those pages will be
needed then it will be served from the main memory only. Other pages are still in the
secondary memory. Now, if some request to the page that is not in the main memory
comes, then this situation is called a Page Miss or Page Fault. In this situation, we
remove one page from the main memory and load the desired page from the secondary
memory to the main memory at run time, i.e. swapping of pages will be performed
here. By doing so, the user feels like they have a lot of memory in their system, but in
reality, we are just putting the part of the process that is frequently used in the memory.
The following figure shows the working in brief:
In the above image, we can see that the whole process is divided into 6 pages and out of
these 6 pages, 2 pages are frequently used and due to this, these 2 pages are put into the
physical memory. If there is some request for the pages present in the physical memory,
then it is directly served otherwise if the page is not present in the physical memory then
it is called a page fault and whenever page fault occurs, then we load the demanded page
in the memory and this process is known as Demand Paging.
Demand Paging
Whenever a page fault occurs, the process of loading the required page into the memory is
called demand paging. So, in demand paging, we load a page only when we need it.
Initially, when a process comes into execution, only those pages are loaded which are
required for the initial execution of the process, and no other pages are loaded. But with
time, when there is a need for other pages, the CPU will find the required page in the
secondary memory and load it into the main memory.
1. The CPU first tries to find the requested page in the main memory and if it is
found then it will be provided immediately otherwise an interrupt is generated
that indicates memory access fault.
2. Now, the process is sent to the blocked/waiting state because, for the execution of
the process, we need to find the required page from the secondary memory.
3. The logical address of the process will be converted to the physical address
because without having the physical address, you can't locate the page in
secondary memory.
4. Now, we apply some page replacement algorithms that can be used to swap the
pages from the main memory to secondary memory and vice-versa.
5. Finally, the page table will be updated with the removal of the address of the old
page and the addition of the address of the new page.
6. At last, the CPU provides the page to the process and the process comes to the
running state from the waiting/block state.
So, in this way, we can implement the concept of Virtual Memory with the help of
Demand Paging.
What are the various Disk Scheduling Algorithms in Operating System?
As we embark on this journey through the world of Disk Scheduling algorithms, we'll uncover the
inner workings of these digital maestros. These algorithms don various hats, each tailored to
address specific data access scenarios. From the fundamental question of "What are Disk
Scheduling algorithms?" to hands-on applications like "Disk Scheduling algorithms in C program,"
we'll explore their nuances and real-world impact.
So, buckle up and be prepared to dive deep into the intricacies of Disk Scheduling algorithms. By
the end of this journey, you'll be equipped with the knowledge to fine-tune your computer's data
access, optimizing it for peak performance in our digital age. Join us in demystifying the world of
Disk Scheduling algorithms, where efficiency meets technology in perfect harmony.
In the fascinating world of computing, Disk Scheduling algorithms take center stage. These
algorithms, such as "Disk Scheduling algorithms in OS," are like conductors orchestrating the
movements of your computer's hard drive. But let's break it down to the basics.
What is disk scheduling? At its core, Disk Scheduling is about efficiently fetching data from the
hard drive. Picture your computer's hard drive as a vast library, with each piece of data as a book.
When you open an application or access a file, your computer must find and retrieve the relevant
data. This is where Disk Scheduling algorithms step in.
They're the organizers, ensuring that data is fetched swiftly and logically. These algorithms, like
"Disk Scheduling algorithms in C program," decide which data requests get priority and in what
order, minimizing waiting times and keeping your computer running smoothly.
What is disk scheduling in OS? Now, let's talk about the big picture. In the realm of operating
systems, Disk Scheduling algorithms play a mission-critical role. They ensure that your computer
juggles multiple data requests efficiently. When your operating system handles tasks like saving a
file, streaming a video, or loading an application, these algorithms optimize data access.
Think of them as traffic controllers on a busy intersection, keeping the flow smooth and ensuring
everyone gets where they need to go. This optimization is vital for a computer's overall
performance, making Disk Scheduling algorithms an indispensable part of the computing
landscape.
Disk Scheduling Algorithms in Action
In this section, we're putting Disk Scheduling algorithms into practical scenarios and exploring
their real-world impact.
When discussing Disk Scheduling algorithms in operating system (OS), we're peeking behind the
curtain of your computer's multitasking wizardry. These algorithms are the conductors, ensuring
that data requests from various programs are handled efficiently. Think of it like a traffic controller
orchestrating the data flow to prevent bottlenecks and keep your computer running smoothly.
Now, let's delve into the programming world, specifically "Disk Scheduling algorithms in C." Here,
these algorithms aren't just theoretical concepts; they're the tools programmers use to optimize
data access. Imagine them as the architects of efficiency, guiding your code to retrieve data
swiftly and intelligently. In coding, these algorithms differentiate between a program that stutters
and one that runs seamlessly.
But where does the rubber meet the road? In our daily lives, we encounter Disk Scheduling
algorithm examples in various forms. Think of your favorite streaming service, where they ensure
your binge-watching experience is uninterrupted. Or consider online banking, where these
algorithms safeguard your financial data while ensuring quick access.
In logistics and warehousing, these algorithms optimize the movement of goods, reducing delivery
times and costs. In healthcare, they ensure patient records are accessible when needed,
potentially saving lives through swift diagnosis and treatment decisions.
So, whether you're navigating the complexities of an operating system, writing code in C, or
simply enjoying a seamless online experience, Disk Scheduling algorithms are there, quietly
ensuring things run smoothly. They're not just theoretical concepts but the unsung heroes of
efficiency in our digital world.
In the world of Disk Scheduling, where optimizing data access is paramount, various algorithms
take on the challenge with distinct technical approaches. Let's explain various Disk Scheduling
algorithms:
SSTF, Shortest Seek Time First, prioritizes minimizing seek times. It chooses the request closest
to the current position of the disk arm. This algorithm optimizes data retrieval by reducing the
arm's movement. However, it can favor the nearer requests, potentially causing some
requests to wait indefinitely in specific scenarios, a problem known as "starvation."
SCAN and C-SCAN, often called "elevator algorithms," mimic the motion of an elevator within the
disk. SCAN disk scheduling starts at one end, services requests along the way, and then reverses
direction. C-SCAN disk scheduling adds predictability by not servicing requests on the return
sweep, reducing variability in waiting times. These algorithms are efficient, ensuring all requests
eventually get served.
LOOK and C-LOOK fine-tune the elevator approach. They only travel as far as the last pending
request in each direction instead of going all the way to the ends of the disk, where requests are
less frequent. This minimizes unnecessary movement and optimizes data retrieval, balancing
speed and fairness.
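To make one of these concrete, here is a minimal SSTF sketch in C (the request queue and the initial head position are hypothetical values, not taken from this article): it repeatedly services the pending request closest to the current head position and adds up the total head movement.

#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>

int main(void) {
    int requests[] = { 98, 183, 37, 122, 14, 124, 65, 67 };  /* hypothetical queue */
    int n = sizeof(requests) / sizeof(requests[0]);
    bool done[8] = { false };
    int head = 53;              /* hypothetical initial head position */
    int total_movement = 0;

    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++) {           /* find closest pending request */
            if (done[i]) continue;
            if (best == -1 || abs(requests[i] - head) < abs(requests[best] - head))
                best = i;
        }
        total_movement += abs(requests[best] - head);
        head = requests[best];                  /* move the arm there */
        done[best] = true;
        printf("Service %d\n", requests[best]);
    }
    printf("Total head movement = %d cylinders\n", total_movement);
    return 0;
}

Swapping the selection rule (e.g. taking the requests strictly in arrival order for FCFS, or sweeping in one direction for SCAN) is enough to turn this same skeleton into the other algorithms discussed here.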
Comparing and Contrasting Different Disk Scheduling Methods
It's essential to compare these scheduling algorithms based on technical criteria like seek time,
rotational latency, and overall efficiency. Factors such as queue management, prioritization, and
starvation prevention differ among these methods. The choice of algorithm depends on the system
requirements and data access patterns.
In the realm of Disk Scheduling algorithms, technical nuances drive their effectiveness. Different
types of Disk Scheduling algorithms address optimizing data access on your computer's hard
drive. These algorithms, such as SSTF, SCAN, C-SCAN, LOOK, and C-LOOK, each offer a
unique solution to the problem. They range from minimizing seek times in SSTF to the
predictability of SCAN and C-SCAN, as well as the refined efficiency of LOOK and C-LOOK.
At the core, Disk Scheduling algorithms manage the requests for data stored on the hard drive.
Picture the hard drive as a library with countless books (data blocks). When you request a book,
the librarian (the algorithm) has to find and retrieve it efficiently.
1. Seek Time: This is the time it takes for the disk arm (like a needle on a vinyl record) to move to
the right track where the data is located. The algorithm aims to minimize this seek time.
2. Rotational Latency: Once the disk arm is on the right track, the disk platter must rotate to
bring the desired data under the read/write head. Again, the algorithm tries to minimize this
rotational latency.
3. Transfer Time: Finally, the data is read from or written to the disk. This transfer time depends
on the amount of data and the transfer rate of the disk.
4. Disk Access Time: The total time for these three steps is the Disk Access Time, which is the
metric that Disk Scheduling algorithms aim to optimize.
For instance, in a real-world example, think of your computer running a web browser, a video
game, and an antivirus scan simultaneously. Disk Scheduling algorithms ensure these diverse
requests are handled effectively, preventing slowdowns or freezing.
Let's not forget about coding. In programming, these algorithms are implemented to optimize data
access. Say you're developing software that loads large files or processes extensive databases.
The correct Disk Scheduling Algorithm can significantly impact the software's performance.
So, whether you're navigating the complexities of an operating system, coding a new software
application, or just using your computer for everyday tasks, Disk Scheduling algorithms are
silently working to ensure your data access is efficient and seamless.
1. Workload Characteristics: The nature of the data requests matters. Is it a server handling
database queries or a personal computer running everyday applications? Different workloads
benefit from specific algorithms.
2. Seek Time vs. Throughput: If you prioritize reducing seek times for individual requests,
algorithms like SSTF or LOOK may be ideal. Conversely, if you aim for overall throughput,
SCAN or C-SCAN might be better.
3. Starvation Tolerance: Some algorithms, like FCFS, ensure every request eventually gets
serviced, preventing starvation. Others, like SSTF, might favor specific requests, potentially
leaving some waiting indefinitely.
4. Queue Management: How the algorithm manages the queue of pending requests can impact
fairness and efficiency. A well-optimized queue management strategy can prevent bottlenecks.
Imagine a web server handling requests from various users. If it prioritizes the requests closest
to the current head position (SSTF), it can reduce latency for many users. However, requests far
from the head might wait indefinitely if closer requests keep arriving. In this case, a more
balanced algorithm like SCAN might be preferable.
In a different scenario, consider a scientific computing cluster processing vast datasets. Here,
throughput matters more than individual seek times. Algorithms like SCAN or C-SCAN, which
optimize data transfer efficiency, would be a better fit.
The ideal Disk Scheduling Algorithm choice involves a deep understanding of the system's
requirements and characteristics. It's a balancing act between minimizing latency, optimizing
throughput, and ensuring fairness.
So, whether managing a data center, designing software, or fine-tuning your computer, choosing
the correct Disk Scheduling Algorithm is about aligning technical needs with algorithmic
capabilities. It's a critical step in achieving efficient and responsive data access.
Advantages:
Limitations:
1. Starvation: Certain algorithms may prioritize requests close to the disk's current position,
potentially leaving others waiting indefinitely. This is known as starvation and can be a
limitation in some situations.
2. Complexity: Implementing and managing Disk Scheduling algorithms can be complex. To
properly optimize a system, one must thoroughly comprehend its inherent characteristics and
workload patterns.
3. No Universal Solution: It's impossible to have a single solution that fits everyone. Each
algorithm has strengths and weaknesses; choosing the wrong one for a particular scenario can
lead to inefficiencies.
For systems where fairness is essential, FCFS ensures every request eventually gets serviced,
but at the cost of potential inefficiencies.
When minimizing seek time matters most, SSTF shines, but be cautious of potential starvation.
SCAN and C-SCAN are excellent for predictable wait times and efficient data access but may
not be suitable for all workloads.
LOOK and C-LOOK balance speed and fairness, making them versatile choices.
Ultimately, the key to making informed decisions lies in understanding your system's
requirements, workload patterns, and the advantages and limitations of each Disk Scheduling
Algorithm. By aligning these factors, you can optimize data access and enhance system
performance effectively.