Unit 2 Os
Process scheduling is an integral part of process management in an operating system. It refers to the
mechanism used by the operating system to determine which process to run next. The goal of process
scheduling is to improve overall system performance by maximizing CPU utilization and throughput and by
minimizing waiting time and response time.
• Process Creation and Termination : Process creation involves creating a Process ID, setting up
Process Control Block, etc. A process can be terminated either by the operating system or by the
parent process. Process termination involves clearing all resources allocated to it.
• CPU Scheduling : In a multiprogramming system, multiple processes need to get the CPU. It is the
job of Operating System to ensure smooth and efficient execution of multiple processes.
• Deadlock Handling : Making sure that the system does not reach a state where two or more processes
cannot proceed due to a circular dependency on each other.
• Inter-Process Communication : Operating System provides facilities such as shared memory and
message passing for cooperating processes to communicate.
A process is a program in execution. For example, when we write a program in C or C++ and compile it, the
compiler creates binary code. The original code and binary code are both programs. When we actually run
the binary code, it becomes a process.
• A single program can create many processes when run multiple times; for example, when we open a
.exe or binary file multiple times, multiple instances begin (multiple processes are created).
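This can be sketched with Python's standard `multiprocessing` module; the worker function and instance names below are illustrative, not part of any real program.

```python
# Sketch: one program creating multiple processes, each with its own PID.
# The worker function and instance names are made-up examples.
import os
from multiprocessing import Process

def worker(name):
    # Each child runs in its own address space with a unique process ID.
    print(f"{name} running with PID {os.getpid()}")

if __name__ == "__main__":
    procs = [Process(target=worker, args=(f"instance-{i}",)) for i in range(3)]
    for p in procs:
        p.start()        # the OS creates a new process for each instance
    for p in procs:
        p.join()         # wait for each child to terminate
```

Each `start()` call asks the OS to create a new process from the same program, so three separate PIDs are printed.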
A process in memory is divided into several distinct sections, each serving a different purpose :
• Text Section: The text or code segment contains the executable instructions. It is typically a read-only
section.
• Data Section: Contains global and static variables that are initialized before the program starts
executing.
• Stack: The stack contains temporary data, such as function parameters, return addresses, and
local variables.
• Heap Section: Memory dynamically allocated to the process during its run time.
Advantages of Process Management
• Running Multiple Programs: Process management lets you run multiple applications at the same
time, for example, listen to music while browsing the web.
• Process Isolation: It ensures that different programs don’t interfere with each other, so a problem in
one program won’t crash another.
• Fair Resource Use: It makes sure resources like CPU time and memory are shared fairly among
programs, so even lower-priority programs get a chance to run.
• Smooth Switching: It efficiently handles switching between programs, saving and loading their
states quickly to keep the system responsive and minimize delays.
Disadvantages of Process Management
• Overhead: Process management uses system resources because the OS needs to keep track of
various data structures and scheduling queues. This requires CPU time and memory, which can
affect the system’s performance.
• Complexity: Designing and maintaining an OS is complicated due to the need for complex
scheduling algorithms and resource allocation methods.
• Deadlocks: To keep processes running smoothly together, the OS uses mechanisms like
semaphores and mutex locks. However, these can lead to deadlocks, where processes get stuck
waiting for each other indefinitely.
Priority Scheduling
Processes with the same priority are executed on a first-come, first-served basis. Priority can be decided
based on memory requirements, time requirements or any other resource requirement. Priority can also be
decided from the ratio of average I/O burst time to average CPU burst time.
In Non-Preemptive Priority Scheduling, the CPU is not taken away from the running process. Even if a
higher-priority process arrives, the currently running process will complete first.
In Preemptive Priority Scheduling, the CPU can be taken away from the currently running process if a new
process with a higher priority arrives.
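The non-preemptive variant can be sketched as follows. The process tuples and the convention that a lower number means higher priority are illustrative assumptions, not a standard fixed by the text.

```python
# Sketch of non-preemptive priority scheduling (lower number = higher priority).
# Process tuples (name, arrival, burst, priority) are made-up example data.

def priority_schedule(processes):
    """Return the execution order; once started, a process runs to completion."""
    remaining = sorted(processes, key=lambda p: p[1])  # sort by arrival time
    time, order = 0, []
    while remaining:
        # among processes that have already arrived, pick the highest priority
        ready = [p for p in remaining if p[1] <= time]
        if not ready:
            time = remaining[0][1]          # CPU idles until the next arrival
            continue
        nxt = min(ready, key=lambda p: p[3])
        remaining.remove(nxt)
        order.append(nxt[0])
        time += nxt[2]                      # run to completion (non-preemptive)
    return order

print(priority_schedule([("P1", 0, 4, 2), ("P2", 1, 3, 1), ("P3", 2, 1, 3)]))
```

Here P1 starts first because it is alone at time 0; once it finishes, the higher-priority P2 is chosen before P3, which is exactly the non-preemptive behaviour described above.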
Process Control Block / Attributes of Process :
A Process Control Block (PCB) is used by the operating system to manage information about a process. The
PCB keeps track of crucial data needed to manage processes efficiently, such as registers, quantum, and
priority. In operating systems, managing processes and scheduling them properly play the most significant
role in the efficient usage of memory and other system resources.
With the creation of a process, a PCB is created which controls how that process is being carried out. The
PCB is created with the aim of helping the OS to manage the enormous amounts of tasks that are being
carried out in the system.
• Process State: The state of the process is stored in the PCB which helps to manage the processes
and schedule them. There are different states for a process which are “running,” “waiting,” “ready,” or
“terminated.”
• Process ID: The OS assigns a unique identifier to every process as soon as it is created which is
known as Process ID, this helps to distinguish between processes.
• Program Counter: When a context switch occurs, the address of the next instruction to be executed is
stored in the program counter field, which helps in resuming execution of the process from where it
left off.
• CPU Registers: The CPU registers of a process help restore its state, so the PCB stores a copy of
them.
• Memory Information: The information like the base address or total memory allocated to a process
is stored in PCB which helps in efficient memory allocation to the processes.
• Process Scheduling Information: The priority of the processes or the algorithm of scheduling is
stored in the PCB to help in making scheduling decisions of the OS.
• Accounting Information: The information such as CPU time, memory usage, etc helps the OS to
monitor the performance of the process.
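The fields above can be sketched as a simple data structure. The field names and state values below are illustrative assumptions, not an actual kernel layout.

```python
# Illustrative sketch of a PCB as a data structure; field names and state
# values are assumptions, not an actual kernel layout.
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                       # Process ID
    state: str = "ready"                           # running / waiting / ready / terminated
    program_counter: int = 0                       # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved CPU registers
    base_address: int = 0                          # memory information
    priority: int = 0                              # scheduling information
    cpu_time_used: float = 0.0                     # accounting information

pcb = PCB(pid=42, priority=5)
print(pcb.state)   # a newly created PCB starts in the ready state
```

Grouping all per-process bookkeeping in one record is what lets the OS treat "the process" as a single schedulable object.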
Role of the Process Control Block
• Process Scheduling: Information such as process priority, process state, and resources used can be
used by the OS to schedule a process for execution. The scheduler checks the priority and other
information to decide when the process will be executed.
• Context Switching: When a context switch happens, the state of the running process (its registers
and program counter) is saved into its PCB. When the CPU later switches back to that process, it
fetches those values from the PCB and restores the previous state of the process.
• Resource Sharing: The PCB stores information about the resources a process is using, such as open
files and allocated memory. This information helps the OS coordinate the sharing of resources
among processes.
Advantages of Using Process Control Block
• As the PCB stores all the information about a process, it lets the operating system perform tasks like
process scheduling and context switching.
• Using PCB helps in scheduling the processes and it ensures that the CPU resources are allocated
efficiently.
• The resource utilization information stored in the PCB helps the OS use and share resources
efficiently.
• The CPU registers and stack pointers information helps the OS to save the process state which helps
in Context switching.
• The process table and PCB can be used to synchronize processes in an operating system. The PCB
contains information about each process’s synchronization state, such as its waiting status and the
resources it is waiting for.
• The process table and PCB can be used to schedule processes for execution. By keeping track of
each process’s state and resource usage, the operating system can determine which processes
should be executed next.
Disadvantages of Using Process Control Block
• Storing a PCB for each and every process consumes significant memory, since a large number of
processes may exist in the OS simultaneously. Using PCBs therefore adds extra memory usage.
• Managing PCBs adds complexity to the OS, which makes the system harder to scale further.
• The process table and PCB can introduce overhead and reduce system performance. The operating
system must maintain the process table and PCB for each process, which can consume system
resources.
• The process table and PCB can increase system complexity and make it more challenging to develop
and maintain operating systems. The need to manage and synchronize multiple processes can make
it more difficult to design and implement system features and ensure system stability.
Operations On Process :
Process operations refer to the actions or activities performed on processes in an operating system. These
operations include creating, terminating, suspending, resuming, and communicating between processes.
Process Schedulers in Operating System
A process is the instance of a computer program in execution.
• One of the key responsibilities of an Operating System (OS) is to decide which programs will execute
on the CPU.
Categories of Scheduling
Scheduling falls into one of two categories:
• Non-Preemptive: In this case, a process’s resources cannot be taken away before the process has
finished running. The CPU is reassigned only when the running process finishes or transitions to the
waiting state.
• Preemptive: In this case, the OS can switch a process from running state to ready state. This
switching happens because the CPU may give other processes priority and substitute the currently
active process for the higher priority process.
Context Switching
In order for a process execution to be continued from the same point at a later time, context switching is a
mechanism to store and restore the state or context of a CPU in the Process Control block. A context
switcher makes it possible for multiple processes to share a single CPU using this method. A multitasking
operating system must include context switching among its features.
The state of the currently running process is saved into the process control block when the scheduler
switches the CPU from executing one process to another.
During a context switch, the following information is saved in the process control block:
• Program Counter
• Scheduling information
• Changed State
• Accounting information
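The save-and-restore step can be sketched as follows; the CPU and PCB structures are simplified illustrations, not real hardware state.

```python
# Sketch of a context switch: save the running process's context into its
# PCB, then restore another process's saved context. All structures here
# are simplified illustrations.

def context_switch(cpu, old_pcb, new_pcb):
    # save the state of the currently running process into its PCB
    old_pcb["program_counter"] = cpu["pc"]
    old_pcb["registers"] = dict(cpu["regs"])
    old_pcb["state"] = "ready"
    # restore the next process's saved state onto the CPU
    cpu["pc"] = new_pcb["program_counter"]
    cpu["regs"] = dict(new_pcb["registers"])
    new_pcb["state"] = "running"

cpu = {"pc": 120, "regs": {"r0": 7}}
p1 = {"program_counter": 0, "registers": {}, "state": "running"}
p2 = {"program_counter": 300, "registers": {"r0": 1}, "state": "ready"}
context_switch(cpu, p1, p2)
print(cpu["pc"])   # 300: execution resumes where P2 left off
```

Because P1's program counter and registers were saved into its PCB, a later switch back to P1 can resume it from exactly the same point.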
Types of Process Schedulers
1. Long-Term Scheduler (Job Scheduler)
The long-term scheduler loads a process from disk into main memory for execution and brings the new
process to the ‘Ready State’.
• It controls the Degree of Multi-programming, i.e., the number of processes present in a ready state or
in main memory at any point in time.
• It is important that the long-term scheduler makes a careful selection of both I/O-bound and
CPU-bound processes. I/O-bound tasks are those that spend much of their time in input and output
operations, while CPU-bound processes are those that spend their time on the CPU. The job
scheduler increases efficiency by maintaining a balance between the two.
2. Short-Term Scheduler (CPU Scheduler)
The CPU scheduler is responsible for selecting one process from the ready state and assigning the CPU
to it.
• STS (Short Term Scheduler) must select a new process for the CPU frequently to avoid starvation.
• The CPU scheduler uses different scheduling algorithms to balance the allocation of CPU time.
3. Medium-Term Scheduler
Medium Term Scheduler (MTS) is responsible for moving a process from memory to disk (or swapping).
• A running process may become suspended if it makes an I/O request. A suspended process
cannot make any progress towards completion. In this condition, to remove the process from
memory and make space for other processes, the suspended process is moved to secondary
storage. This process is called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix (of CPU-bound and I/O-bound processes).
• When needed, the medium-term scheduler brings the process back into memory, and the process
picks up right where it left off.
Comparison of Preemptive and Non-Preemptive Scheduling:
• Starvation: In preemptive scheduling, if high-priority processes frequently arrive in the ready queue, a
low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is
running on the CPU, a later-arriving process with a shorter CPU burst time may starve.
• Overhead: Preemptive scheduling has the overhead of scheduling processes; non-preemptive
scheduling does not have this overhead.
• Response Time: Preemptive scheduling has a lower response time; non-preemptive scheduling has a
higher response time.
• Decision Making: In preemptive scheduling, decisions are made by the scheduler based on priority
and time slice allocation. In non-preemptive scheduling, decisions are made by the process itself, and
the OS just follows the process’s instructions.
Preemptive Scheduling
The operating system can interrupt or preempt a running process to allocate CPU time to another process,
typically based on priority or time-sharing policies. Mainly, a process is switched from the running state to
the ready state. Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining
Time First (SRTF), Priority (preemptive version), etc.
Advantages of Preemptive Scheduling
• Because a process may not monopolize the processor, it is a more reliable method and does not
allow a denial-of-service attack.
• The average response time is improved. Utilizing this method in a multi-programming environment is
more advantageous.
• Most modern operating systems (Windows, Linux and macOS) implement preemptive
scheduling.
Disadvantages of Preemptive Scheduling
• Suspending the running process, changing the context, and dispatching the new incoming process
all take extra time.
• Might cause starvation : A low-priority process might be preempted again and again if multiple high-
priority processes arrive.
• Causes Concurrency Problems as processes can be stopped when they were accessing shared
memory (or variables) or resources.
Non-Preemptive Scheduling
Algorithms based on non-preemptive scheduling are First Come First Serve (FCFS), Shortest Job First
(SJF, non-preemptive version) and Priority (non-preemptive version), etc.
Advantages of Non-Preemptive Scheduling
• It is easy to implement in an operating system. It was used in Windows 3.11 and early macOS.
Disadvantages of Non-Preemptive Scheduling
• It is open to denial of service attack. A malicious process can take CPU forever.
• Since round robin cannot be implemented, the average response time becomes worse.
CPU Scheduling in Operating Systems
CPU scheduling is the method the operating system uses to decide which task or process gets to use the
CPU at a particular time. This is important because a CPU can only handle one task at a time, but there are
usually many tasks that need to be processed. The following are the main purposes and criteria of CPU
scheduling.
CPU scheduling decides which process will own the CPU while another process is suspended. The main
function of CPU scheduling is to ensure that whenever the CPU is idle, the OS selects one of the processes
available in the ready queue.
In multiprogramming, if the long-term scheduler selects multiple I/O-bound processes, then most of the
time the CPU remains idle. An effective scheduler improves resource utilization.
• Arrival Time: The time at which the process arrives in the ready queue.
• Completion Time: The time at which the process completes its execution.
• Turn Around Time (TAT): Time difference between completion time and arrival time (TAT = CT − AT).
• Waiting Time (WT): Time difference between turnaround time and burst time (WT = TAT − BT).
• CPU Utilization: The main purpose of any CPU algorithm is to keep the CPU as busy as possible.
• Throughput: The number of processes completed per unit of time is called throughput. Throughput
may vary depending on the length or duration of the processes.
• Turnaround Time: For a particular process, an important criterion is how long it takes to execute
that process. The time elapsed from submission of the process to its completion is known as
turnaround time.
• Waiting Time: The scheduling algorithm does not affect the time required to complete a process
once it has started running. It only affects the waiting time of the process, i.e., the time it spends
waiting in the ready queue.
• Response Time: In an interactive system, turnaround time is not the best criterion. A process may
produce some output early and continue computing new results while previous results are being
presented to the user. Therefore, another measure is the time from submission of a request until the
first response is produced. This measure is called response time.
FCFS – First Come First Serve CPU Scheduling
First Come, First Serve (FCFS) is one of the simplest types of CPU scheduling algorithms. It is exactly what
it sounds like: processes are attended to in the order in which they arrive in the ready queue, much like
customers lining up at a grocery store.
FCFS Scheduling is a non-preemptive algorithm, meaning once a process starts running, it cannot be
stopped until it voluntarily relinquishes the CPU, typically when it terminates or performs I/O.
Advantages of FCFS
• Every process gets a chance to execute in the order of its arrival. This ensures that no process is
arbitrarily prioritized over another.
Disadvantages of FCFS
• As it is a Non-preemptive CPU Scheduling Algorithm, FCFS can result in long waiting times,
especially if a long process arrives before a shorter one. This is known as the convoy effect, where
shorter processes are forced to wait behind longer processes, leading to inefficient execution.
• The average waiting time in FCFS is often much higher than in other algorithms.
• Processes at the end of the queue have to wait longer to finish.
• It is not suitable for time-sharing operating systems where each process should get the same
amount of CPU time.
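FCFS and the metrics defined earlier (TAT = CT − AT, WT = TAT − BT) can be sketched together; the (arrival, burst) values are made-up example data.

```python
# Sketch: FCFS scheduling with made-up (arrival, burst) times, computing
# completion time, turnaround time (TAT = CT - AT) and waiting time
# (WT = TAT - BT) for each process.

def fcfs(processes):
    """processes: list of (name, arrival, burst); served strictly in arrival order."""
    time, rows = 0, []
    for name, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival)        # CPU idles if no process has arrived yet
        completion = time + burst
        tat = completion - arrival       # turnaround time
        wt = tat - burst                 # waiting time
        rows.append((name, completion, tat, wt))
        time = completion
    return rows

for row in fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]):
    print(row)
```

With these numbers P2 and P3 wait behind P1 even though they arrived early, which is a small instance of the convoy effect described above.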
Shortest Job First (SJF) Scheduling
• SJF selects the process with the minimum burst time among the arrived processes; ties are broken
by minimum arrival time.
• SJF is better than the First Come First Serve (FCFS) algorithm as it reduces the average waiting time.
• It is suitable for jobs running in batches, where run times are known in advance.
• It often becomes complicated to predict the length of the upcoming CPU request.
Shortest Remaining Time First (Preemptive SJF) Scheduling Algorithm
Shortest Remaining Time First (SRTF) is the preemptive version of Shortest Job First (SJF) scheduling. In
SRTF, the process with the least time left to finish is selected to run. The running process continues until it
finishes or a new process with a shorter remaining time arrives. This way, the process that can finish the
fastest is always given priority.
Advantages of SRTF
1. Minimizes Average Waiting Time: SRTF reduces the average waiting time by prioritizing processes
with the shortest remaining execution time.
2. Efficient for Short Processes: Shorter processes get completed faster, improving overall system
responsiveness.
3. Ideal for Time-Critical Systems: It ensures that time-sensitive processes are executed quickly.
Disadvantages of SRTF
1. Starvation of Long Processes: Longer processes may be delayed indefinitely if shorter processes
keep arriving.
2. Difficult to Predict Burst Times: Accurate prediction of process burst times is challenging and
affects scheduling decisions.
3. High Overhead: Frequent context switching can increase overhead and slow down system
performance.
4. Not Suitable for Real-Time Systems: Real-time tasks may suffer delays due to frequent
preemptions.
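A unit-time simulation of SRTF can be sketched as follows; the (arrival, burst) values are illustrative, and the one-tick loop stands in for the scheduler re-evaluating on every arrival.

```python
# Sketch of SRTF (preemptive SJF): at every time unit, run the arrived
# process with the least remaining time. Example processes are made up.

def srtf(processes):
    """processes: list of (name, arrival, burst); returns completion times."""
    remaining = {name: burst for name, arrival, burst in processes}
    arrival = {name: a for name, a, b in processes}
    done, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1                    # CPU idles until the next arrival
            continue
        # preemption: always pick the smallest remaining time among arrived
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1
        time += 1
        if remaining[n] == 0:
            del remaining[n]
            done[n] = time
    return done

print(srtf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
```

P1 is preempted as soon as P2 arrives with a shorter remaining time, and P3 preempts P2 in turn, so the shortest jobs finish first while P1 completes last.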
Round Robin (RR) Scheduling
In Round Robin, each process gets the CPU for a fixed time slice (time quantum) and is then moved to the
back of the ready queue.
• Responsiveness: Round Robin can handle multiple processes without significant delays, making it
ideal for time-sharing systems.
• Overhead: Switching between processes can lead to high overhead, especially if the quantum is too
small.
• Underutilization: If the quantum is too large, it can cause the CPU to feel unresponsive as it waits for
a process to finish its time.
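The quantum-based rotation can be sketched as follows; the burst times and quantum are illustrative, and all processes are assumed to arrive at time 0.

```python
# Sketch of Round Robin with a fixed quantum; processes and the quantum
# value are illustrative. All processes are assumed to arrive at time 0.
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, burst); returns the sequence of CPU slices."""
    queue = deque(processes)
    slices = []
    while queue:
        name, burst = queue.popleft()
        run = min(quantum, burst)        # run at most one quantum
        slices.append((name, run))
        if burst > run:
            queue.append((name, burst - run))  # unfinished: back of the queue
    return slices

print(round_robin([("P1", 5), ("P2", 3)], quantum=2))
```

A small quantum makes the interleaving finer (more slices and more context-switch overhead), while a very large quantum degenerates into FCFS, matching the trade-off in the bullets above.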
Introduction of Deadlock in Operating System
A deadlock is a situation where a set of processes is blocked because each process is holding a resource
and waiting for another resource acquired by some other process.
• Deadlock is a situation in computing where two or more processes are unable to proceed because
each is waiting for the other to release resources.
• Key concepts include mutual exclusion, resource holding, circular wait, and no preemption.
Deadlock can arise if the following four conditions hold simultaneously (Necessary Conditions)
• Mutual Exclusion: Only one process can use a resource at any given time i.e. the resources are non-
sharable.
• Hold and Wait: A process is holding at least one resource at a time and is waiting to acquire other
resources held by some other process.
• No Preemption: A resource cannot be taken from a process unless the process releases the
resource.
• Circular Wait: A set of processes are waiting for each other in a circular fashion. For example, say
there is a set of processes {P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2, P2
depends on P3, and P3 depends on P0. This creates a circular relation among all these processes,
and they have to wait forever to be executed.
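Two threads taking two locks can satisfy all four conditions at once if they acquire the locks in opposite orders. The sketch below uses a single global lock order, which breaks the circular-wait condition; the thread names and lock variables are illustrative.

```python
# Sketch: two locks that could deadlock if acquired in opposite orders.
# Acquiring them in one consistent global order (lock_a before lock_b)
# breaks the circular-wait condition, so this version always completes.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def safe_worker(name):
    # Both threads take lock_a first, then lock_b: no circular wait.
    with lock_a:
        with lock_b:
            print(f"{name} acquired both locks")

t1 = threading.Thread(target=safe_worker, args=("T1",))
t2 = threading.Thread(target=safe_worker, args=("T2",))
t1.start(); t2.start()
t1.join(); t2.join()
```

If T1 instead took lock_a then lock_b while T2 took lock_b then lock_a, unlucky timing would leave each thread holding one lock and waiting on the other forever, which is exactly the circular-wait scenario above.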
Banker’s Algorithm in Operating System
Banker’s Algorithm is a resource allocation and deadlock avoidance algorithm used in operating systems. It
ensures that a system remains in a safe state by carefully allocating resources to processes while avoiding
unsafe states that could lead to deadlocks.
• The Banker’s Algorithm is a smart way for computer systems to manage how programs use
resources, like memory or CPU time.
• It helps prevent situations where programs get stuck and cannot finish their tasks. This condition is
known as deadlock.
• By keeping track of what resources each program needs and what’s available, the Banker’s Algorithm
makes sure that programs only get what they need in a safe order.
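The safety check at the heart of the Banker's Algorithm can be sketched as follows: the system is safe if some order exists in which every process can obtain its remaining need and finish. The matrices below are example data chosen for illustration.

```python
# Sketch of the Banker's safety check: the system is safe if some order
# exists in which every process can finish. The matrices are example data.

def is_safe(available, max_need, allocation):
    """available: free units per resource type;
    max_need / allocation: one row (list) per process."""
    n = len(max_need)
    # remaining need of each process = max demand - currently allocated
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work, finished = list(available), [False] * n
    while True:
        progressed = False
        for i in range(n):
            # process i can finish if its remaining need fits in work
            if not finished[i] and all(nd <= w for nd, w in zip(need[i], work)):
                # when it finishes, it releases everything it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# A 5-process, 3-resource example (values assumed for illustration):
print(is_safe([3, 3, 2],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]))
```

In this example a safe sequence such as P1, P3, P4, P0, P2 exists, so the check returns True; the OS would only grant a new request if the resulting state still passes this check.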