
Process Management

Process scheduling is an integral part of process management in an operating system. It refers to the mechanism the operating system uses to determine which process runs next. The goal of process scheduling is to improve overall system performance by maximizing CPU utilization, maximizing throughput, and minimizing turnaround and response times.

CPU-Bound vs I/O-Bound Processes


A CPU-bound process requires more CPU time or spends more time in the running state. An I/O-bound
process requires more I/O time and less CPU time. An I/O-bound process spends more time in the waiting
state.

Process Management Tasks

Process management is a key part of operating systems that support multiprogramming or multitasking.

• Process Creation and Termination: Process creation involves assigning a Process ID, setting up a Process Control Block, etc. A process can be terminated either by the operating system or by its parent process. Process termination involves releasing all resources allocated to it. A minimal creation/termination sketch follows this list.

• CPU Scheduling: In a multiprogramming system, multiple processes need to get the CPU. It is the job of the Operating System to ensure smooth and efficient execution of multiple processes.

• Deadlock Handling: Making sure that the system does not reach a state where two or more processes cannot proceed due to a circular dependency on each other.

• Inter-Process Communication: The Operating System provides facilities such as shared memory and message passing for cooperating processes to communicate.

• Process Synchronization: Process synchronization is the coordination of the execution of multiple processes in a multiprogramming system to ensure that they access shared resources (like memory) in a controlled and predictable manner.
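The process creation and termination tasks above map directly onto the POSIX process API. The following is a minimal C sketch, assuming a POSIX system: the parent creates a child with fork() and then collects its exit status with waitpid().

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a child process */
    if (pid < 0) {
        perror("fork");              /* creation failed */
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: the OS has assigned a new PID and set up a PCB */
        printf("child  pid=%d\n", getpid());
        exit(42);                    /* terminate; the OS reclaims resources */
    } else {
        int status;
        waitpid(pid, &status, 0);    /* parent reaps the child's exit status */
        if (WIFEXITED(status))
            printf("parent pid=%d saw child exit with code %d\n",
                   getpid(), WEXITSTATUS(status));
    }
    return 0;
}
```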

Process in Operating System

A process is a program in execution. For example, when we write a program in C or C++ and compile it, the
compiler creates binary code. The original code and binary code are both programs. When we actually run
the binary code, it becomes a process.

• A process is an 'active' entity, in contrast to a program, which is considered a 'passive' entity.

• A single program can create many processes when run multiple times; for example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes are created).

Process Structure in Memory

A process in memory is divided into several distinct sections, each serving a different purpose (a sketch follows this list):

• Text Section: The text or code segment contains the executable instructions. It is typically a read-only section.

• Stack: The stack contains temporary data, such as function parameters, return addresses, and local variables.

• Data Section: Contains global and static variables.

• Heap Section: Memory dynamically allocated to the process during its run time.
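As a rough illustration, the following C sketch marks where each section shows up in an ordinary program (the names are made up for the example):

```c
#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                   /* data section: globals/statics */

int square(int x) {                       /* code lives in the text section */
    int local = x * x;                    /* stack: locals, parameters,
                                             return addresses */
    return local;
}

int main(void) {
    int *buf = malloc(16 * sizeof *buf);  /* heap: run-time allocation */
    if (buf == NULL) return 1;
    buf[0] = square(5);
    global_counter++;
    printf("text: %p  data: %p  stack: %p  heap: %p\n",
           (void *)square, (void *)&global_counter,
           (void *)&buf, (void *)buf);
    free(buf);
    return 0;
}
```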
Advantages of Process Management

• Running Multiple Programs: Process management lets you run multiple applications at the same
time, for example, listen to music while browsing the web.

• Process Isolation: It ensures that different programs don’t interfere with each other, so a problem in
one program won’t crash another.

• Fair Resource Use: It makes sure resources like CPU time and memory are shared fairly among
programs, so even lower-priority programs get a chance to run.

• Smooth Switching: It efficiently handles switching between programs, saving and loading their
states quickly to keep the system responsive and minimize delays.

Disadvantages of Process Management

• Overhead: Process management uses system resources because the OS needs to keep track of
various data structures and scheduling queues. This requires CPU time and memory, which can
affect the system’s performance.

• Complexity: Designing and maintaining an OS is complicated due to the need for complex
scheduling algorithms and resource allocation methods.

• Deadlocks: To keep processes running smoothly together, the OS uses mechanisms like
semaphores and mutex locks. However, these can lead to deadlocks, where processes get stuck
waiting for each other indefinitely.

• Increased Context Switching: In multitasking systems, the OS frequently switches between processes. Storing and loading the state of each process (context switching) takes time and computing power, which can slow down the system.

Priority Scheduling in Operating System


Priority scheduling is one of the most common scheduling algorithms used by the operating system to schedule processes based on their priority. Each process is assigned a priority; the process with the highest priority is executed first, and so on.

Processes with the same priority are executed on a first-come, first-served basis. Priority can be decided based on memory requirements, time requirements, or any other resource requirement. Priority can also be decided based on the ratio of average I/O burst time to average CPU burst time.

Priority Scheduling can be implemented in two ways:

• Non-Preemptive Priority Scheduling

• Preemptive Priority Scheduling

Non-Preemptive Priority Scheduling

In Non-Preemptive Priority Scheduling, the CPU is not taken away from the running process. Even if a
higher-priority process arrives, the currently running process will complete first.

Preemptive Priority Scheduling

In Preemptive Priority Scheduling, the CPU can be taken away from the currently running process if a new
process with a higher priority arrives.
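A minimal sketch of the non-preemptive variant in C, using a hypothetical process table where a lower number means higher priority and all processes arrive at time 0:

```c
#include <stdio.h>

struct proc { const char *name; int burst; int priority; };

int main(void) {
    struct proc p[] = { {"P1", 10, 3}, {"P2", 1, 1}, {"P3", 2, 4}, {"P4", 5, 2} };
    int n = 4, done[4] = {0}, t = 0;
    for (int run = 0; run < n; run++) {
        int best = -1;
        for (int i = 0; i < n; i++)   /* pick the highest-priority unfinished process */
            if (!done[i] && (best < 0 || p[i].priority < p[best].priority))
                best = i;
        printf("%s (priority %d) runs [%d, %d)\n",
               p[best].name, p[best].priority, t, t + p[best].burst);
        t += p[best].burst;           /* non-preemptive: runs to completion */
        done[best] = 1;
    }
    return 0;
}
```

Under preemptive priority scheduling, the same selection would be re-evaluated whenever a new process arrives, possibly taking the CPU away from the running one.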
Process Control Block / Attributes of a Process

A Process Control Block (PCB) is used by the operating system to manage information about a process. The PCB keeps track of crucial data needed to manage processes efficiently, such as registers, quantum, priority, etc. In operating systems, managing processes and scheduling them properly play the most significant role in the efficient usage of memory and other system resources.

With the creation of a process, a PCB is created which controls how that process is carried out. The PCB is created with the aim of helping the OS manage the enormous number of tasks being carried out in the system.

Information Stored in the PCB

• Process State: The state of the process is stored in the PCB, which helps to manage and schedule processes. A process can be in one of several states, such as “running,” “waiting,” “ready,” or “terminated.”

• Process ID: The OS assigns a unique identifier to every process as soon as it is created, known as the Process ID; this helps to distinguish between processes.

• Program Counter: When a context switch occurs, the address of the next instruction to be executed is stored in the program counter, which helps in resuming the execution of the process from where it left off.

• CPU Registers: The PCB stores a copy of the process’s CPU registers, which helps to restore the state of the process.

• Memory Information: The information like the base address or total memory allocated to a process
is stored in PCB which helps in efficient memory allocation to the processes.

• Process Scheduling Information: The priority of the processes or the algorithm of scheduling is
stored in the PCB to help in making scheduling decisions of the OS.

• Accounting Information: Information such as CPU time, memory usage, etc. helps the OS monitor the performance of the process.
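As a rough sketch, the PCB can be pictured as a C structure holding these attributes. The layout and field names below are illustrative only; real kernels define their own (for example, Linux uses task_struct):

```c
#include <stdint.h>

enum proc_state { READY, RUNNING, WAITING, TERMINATED };

/* Illustrative PCB: one field per attribute listed above. */
struct pcb {
    int             pid;              /* Process ID                     */
    enum proc_state state;            /* Process state                  */
    uint64_t        program_counter;  /* next instruction to execute    */
    uint64_t        registers[16];    /* saved copies of CPU registers  */
    uint64_t        base, limit;      /* memory information             */
    int             priority;         /* process scheduling information */
    uint64_t        cpu_time_used;    /* accounting information         */
    struct pcb     *next;             /* link for ready/waiting queues  */
};
```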

Operations Carried out through PCB

• Process Scheduling: Information such as process priority, process state, and resources used can be used by the OS to schedule the process for execution. The scheduler checks the priority and other information to decide when the process will be executed.

• Multitasking: Resource allocation, process scheduling, and process synchronization together help the OS to multitask and run different processes simultaneously.

• Context Switching: When a context switch happens, the OS saves the state of the running process (including its CPU registers) into its PCB. When the CPU switches to another process and later switches back, it fetches those values from the PCB and restores the previous state of the process.

• Resource Sharing: The PCB stores information about the resources a process is using, such as open files and allocated memory. This information helps the OS coordinate the sharing of resources between processes.
Advantages of Using Process Control Block

• As the PCB stores all the information about a process, it lets the operating system perform tasks like process scheduling, context switching, etc.

• Using the PCB helps in scheduling processes and ensures that CPU resources are allocated efficiently.

• The resource-utilization information stored in the PCB helps the OS use and share resources efficiently.

• The CPU register and stack pointer information helps the OS save the process state, which enables context switching.

• The process table and PCB can be used to synchronize processes in an operating system. The PCB
contains information about each process’s synchronization state, such as its waiting status and the
resources it is waiting for.

• The process table and PCB can be used to schedule processes for execution. By keeping track of
each process’s state and resource usage, the operating system can determine which processes
should be executed next.

Disadvantages of using Process Control Block

• Storing a PCB for each and every process uses a significant amount of memory, since a large number of processes may exist simultaneously in the OS. Using PCBs therefore adds extra memory usage.

• Using PCBs reduces scalability, as managing them adds complexity to the system, making it tougher to scale further.

• The process table and PCB can introduce overhead and reduce system performance. The operating
system must maintain the process table and PCB for each process, which can consume system
resources.

• The process table and PCB can increase system complexity and make it more challenging to develop and maintain operating systems. The need to manage and synchronize multiple processes can make it more difficult to design and implement system features and ensure system stability.

Operations on Processes

Process operations refer to the actions or activities performed on processes in an operating system. These
operations include creating, terminating, suspending, resuming, and communicating between processes.
Process Schedulers in Operating System
A process is the instance of a computer program in execution.

• Scheduling is important in operating systems with multiprogramming, as multiple processes might be eligible for running at a time.

• One of the key responsibilities of an Operating System (OS) is to decide which programs will execute
on the CPU.

What is Process Scheduling?


Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process based on a particular strategy. Throughout its lifetime, a process moves between various scheduling queues, such as the ready queue, waiting queue, or device queue.

Categories of Scheduling
Scheduling falls into one of two categories:

• Non-Preemptive: In this case, a process’s resources cannot be taken away before the process has finished running. The CPU is switched to another process only when the running process finishes or transitions to the waiting state.

• Preemptive: In this case, the OS can switch a process from the running state to the ready state. This switching happens because the CPU may be given to a higher-priority process, replacing the currently active process.

Context Switching

Context switching is a mechanism to store and restore the state or context of a CPU in the Process Control Block so that a process’s execution can be continued from the same point at a later time. A context switcher makes it possible for multiple processes to share a single CPU. A multitasking operating system must include context switching among its features.

The state of the currently running process is saved into the process control block when the scheduler
switches the CPU from executing one process to another.

The context saved in the Process Control Block includes:

• Program Counter

• Scheduling information

• The base and limit register value

• Currently used register

• Changed State

• I/O State information

• Accounting information
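A simplified sketch of what a scheduler does at a context switch, reusing the illustrative struct pcb from the earlier sketch. The actual register save/restore is architecture-specific assembly, so the hypothetical helpers are left as comments:

```c
/* Assumes the illustrative 'struct pcb' defined earlier. */
struct pcb *current;                    /* process that owns the CPU now */

void context_switch(struct pcb *next) {
    /* 1. save the outgoing process's context into its PCB      */
    /*    save_registers(current->registers);  -- hypothetical  */
    current->state = READY;

    /* 2. restore the incoming process's context from its PCB   */
    /*    load_registers(next->registers);     -- hypothetical  */
    next->state = RUNNING;
    current = next;                     /* the CPU now runs 'next' */
}
```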
Types of Process Schedulers

There are three types of process schedulers:

1. Long Term or Job Scheduler

The Long-Term Scheduler loads a process from disk into main memory for execution and brings the new process to the ‘Ready State’.

• It mainly moves processes from Job Queue to Ready Queue.

• It controls the Degree of Multi-programming, i.e., the number of processes present in a ready state or
in main memory at any point in time.

• It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks spend much of their time on input and output operations, while CPU-bound processes spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two.

• It is the slowest among the three (which is why it is called long-term).

2. Short-Term or CPU Scheduler

CPU Scheduler is responsible for selecting one process from the ready state for running (or assigning CPU
to it).

• STS (Short Term Scheduler) must select a new process for the CPU frequently to avoid starvation.

• The CPU scheduler uses different scheduling algorithms to balance the allocation of CPU time.

• It picks a process from the ready queue.

• Its main objective is to make the best use of the CPU.

• It mainly invokes the dispatcher.

• It is the fastest among the three (which is why it is called short-term).

3. Medium-Term Scheduler

Medium Term Scheduler (MTS) is responsible for moving a process from memory to disk (or swapping).

• It reduces the degree of multiprogramming (Number of processes present in main memory).

• A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix (of CPU-bound and I/O-bound processes).

• When needed, it brings the process back into memory, and execution picks up right where it left off.

• It is faster than long term and slower than short term.


Differences Between Preemptive and Non-Preemptive Scheduling

• Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.

• Interrupt: A preemptively scheduled process can be interrupted in between. A non-preemptively scheduled process cannot be interrupted until it terminates itself or its time is up.

• Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later-arriving process with a shorter burst time may starve.

• Overhead: Preemptive scheduling has the overhead of scheduling processes; non-preemptive scheduling does not.

• Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.

• Cost: Preemptive scheduling has an associated cost; non-preemptive scheduling does not.

• Response Time: Preemptive scheduling has a lower response time; non-preemptive scheduling has a higher response time.

• Decision Making: In preemptive scheduling, decisions are made by the scheduler based on priority and time-slice allocation. In non-preemptive scheduling, decisions are made by the process itself, and the OS just follows the process’s instructions.

• Process Control: With preemptive scheduling, the OS has greater control over the scheduling of processes; with non-preemptive scheduling, it has less control.

• Context-Switching Overhead: Preemptive scheduling has higher overhead due to frequent context switching; non-preemptive scheduling has lower overhead since context switching is less frequent.

• Concurrency Overhead: Preemptive scheduling has more, as a process might be preempted while it was accessing a shared resource; non-preemptive scheduling has less, as a process is never preempted.

• Examples: Preemptive scheduling algorithms include Round Robin and Shortest Remaining Time First; non-preemptive algorithms include First Come First Serve and Shortest Job First.
Preemptive Scheduling

The operating system can interrupt or preempt a running process to allocate CPU time to another process,
typically based on priority or time-sharing policies. Mainly a process is switched from the running state to
the ready state. Algorithms based on preemptive scheduling are Round Robin (RR) , Shortest Remaining
Time First (SRTF) , Priority (preemptive version) , etc.

Advantages of Preemptive Scheduling

• Because a process may not monopolize the processor, it is a more reliable method and does not allow denial of service.

• Preemption prevents any single ongoing task from indefinitely delaying the completion of others.

• The average response time is improved. Utilizing this method in a multi-programming environment is
more advantageous.

• Most modern operating systems (Windows, Linux, and macOS) implement preemptive scheduling.

Disadvantages of Preemptive Scheduling

• More Complex to implement in Operating Systems.

• Suspending the running process, changing the context, and dispatching the new incoming process all take more time.

• Might cause starvation: A low-priority process might be preempted again and again if multiple high-priority processes arrive.

• Causes Concurrency Problems as processes can be stopped when they were accessing shared
memory (or variables) or resources.

Non-Preemptive Scheduling

In non-preemptive scheduling, a running process cannot be interrupted by the operating system; it voluntarily relinquishes control of the CPU. Once the resources (CPU cycles) are allocated to a process, the process holds the CPU until it terminates or reaches a waiting state.

Algorithms based on non-preemptive scheduling include First Come First Serve, Shortest Job First (the non-preemptive version), and Priority (the non-preemptive version).

Advantages of Non-Preemptive Scheduling

• It is easy to implement in an operating system. It was used in Windows 3.11 and early macOS.

• It has a minimal scheduling burden.

• Less computational resources are used.

Disadvantages of Non-Preemptive Scheduling

• It is open to denial-of-service attack: a malicious process can hold the CPU forever.

• Since we cannot implement round robin, the average response time becomes high.
CPU Scheduling in Operating Systems

CPU scheduling is the process the operating system uses to decide which task or process gets to use the CPU at a particular time. This is important because a CPU can only handle one task at a time, but there are usually many tasks that need to be processed. CPU scheduling has the following goals:

• Maximize the CPU utilization

• Minimize the response and waiting time of the process.

What is the Need for a CPU Scheduling Algorithm?

CPU scheduling is the process of deciding which process will own the CPU while another process is suspended. The main function of CPU scheduling is to ensure that whenever the CPU is idle, the OS selects one of the processes available in the ready queue.

In multiprogramming, if the long-term scheduler selects only I/O-bound processes, then most of the time the CPU remains idle. The goal of effective scheduling is to improve resource utilization.

Terminologies Used in CPU Scheduling

• Arrival Time: The time at which the process arrives in the ready queue.

• Completion Time: The time at which the process completes its execution.

• Burst Time: Time required by a process for CPU execution.

• Turnaround Time: The time difference between completion time and arrival time.

• Waiting Time (W.T.): The time difference between turnaround time and burst time.
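A quick worked example with hypothetical numbers: if a process arrives at time 2, needs a CPU burst of 4 units, and completes at time 10, its turnaround time is 10 − 2 = 8 units and its waiting time is 8 − 4 = 4 units.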

Things to Take Care While Designing a CPU Scheduling Algorithm


Different CPU Scheduling algorithms have different structures and the choice of a particular algorithm
depends on a variety of factors.

• CPU Utilization: The main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible.

• Throughput: The number of processes completed per unit time is called throughput. Throughput may vary depending on the length or duration of the processes.

• Turnaround Time: For a particular process, an important measure is how long it takes to complete. The time elapsed from the submission of a process to its completion is known as the turnaround time.

• Waiting Time: The scheduling algorithm does not affect the time required to complete a process once it has started running. It only affects the waiting time of the process, i.e. the time the process spends waiting in the ready queue.

• Response Time: In an interactive system, turnaround time is not the best measure. A process may produce some output early and continue computing new results while previous results are shown to the user. Therefore, another measure is the time from the submission of a request until the first response is produced. This measure is called response time.
FCFS – First Come First Serve CPU Scheduling

First Come, First Serve (FCFS) is one of the simplest types of CPU scheduling algorithms. It is exactly what
it sounds like: processes are attended to in the order in which they arrive in the ready queue, much like
customers lining up at a grocery store.

FCFS Scheduling is a non-preemptive algorithm, meaning once a process starts running, it cannot be
stopped until it voluntarily relinquishes the CPU, typically when it terminates or performs I/O.
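A minimal FCFS simulation sketch in C with made-up burst times, assuming all processes arrive at time 0 in queue order. Note how the long first burst makes the two short jobs wait, which is the convoy effect described below:

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* hypothetical CPU bursts */
    int n = 3, t = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {        /* serve in arrival order */
        printf("P%d waits %2d, runs [%2d, %2d)\n", i + 1, t, t, t + burst[i]);
        total_wait += t;                 /* waiting = start - arrival (0) */
        t += burst[i];                   /* non-preemptive: run to completion */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```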

Advantages of FCFS

• It is the simplest and most basic form of CPU scheduling algorithm.

• Every process gets a chance to execute in the order of its arrival. This ensures that no process is
arbitrarily prioritized over another.

• Easy to implement, it doesn’t require complex data structures.

Disadvantages of FCFS

• As it is a Non-preemptive CPU Scheduling Algorithm, FCFS can result in long waiting times,
especially if a long process arrives before a shorter one. This is known as the convoy effect, where
shorter processes are forced to wait behind longer processes, leading to inefficient execution.

• The average waiting time in FCFS is much higher than in many other algorithms.

• Processes at the end of the queue have to wait longer to finish.

• It is not suitable for time-sharing operating systems where each process should get the same
amount of CPU time.

Shortest Job First or SJF CPU Scheduling


Shortest Job First (SJF) or Shortest Job Next (SJN) is a scheduling policy that selects the waiting process with the smallest execution time to execute next. This scheduling method may or may not be preemptive. It significantly reduces the average waiting time for other processes waiting to be executed.

Implementation of SJF Scheduling

• Sort all the processes according to their arrival time.

• Then, among the processes that have arrived, select the one with the minimum burst time (see the sketch after this list).
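A minimal non-preemptive SJF simulation sketch in C, using hypothetical arrival and burst times:

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2, 3}, burst[] = {8, 4, 9, 5};
    int n = 4, done[4] = {0}, t = 0;
    for (int run = 0; run < n; run++) {
        int best = -1;
        for (int i = 0; i < n; i++)      /* shortest burst among arrived jobs */
            if (!done[i] && arrival[i] <= t &&
                (best < 0 || burst[i] < burst[best]))
                best = i;
        if (best < 0) { t++; run--; continue; }  /* CPU idle: nothing has arrived */
        printf("P%d runs [%2d, %2d)\n", best + 1, t, t + burst[best]);
        t += burst[best];                /* non-preemptive: run to completion */
        done[best] = 1;
    }
    return 0;
}
```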

Advantages of SJF Scheduling

• SJF is better than the First Come First Serve (FCFS) algorithm as it reduces the average waiting time.

• SJF is generally used for long term scheduling.

• It is suitable for the jobs running in batches, where run times are already known.

Disadvantages of SJF Scheduling

• SJF may cause very long turn-around times or starvation.

• In SJF, the job completion time must be known in advance.

• Many times it becomes complicated to predict the length of the upcoming CPU request.
Shortest Remaining Time First (Preemptive SJF) Scheduling Algorithm

Shortest Remaining Time First (SRTF) is the preemptive version of Shortest Job First (SJF) scheduling. In SRTF, the process with the least time left to finish is selected to run. The running process continues until it finishes or a new process with a shorter remaining time arrives. This way, the process that can finish the fastest is always given priority.
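A minimal SRTF simulation sketch in C with hypothetical arrival and burst times. It advances one time unit at a time so that preemption can occur whenever a shorter job arrives:

```c
#include <stdio.h>

int main(void) {
    int arrival[] = {0, 1, 2}, remaining[] = {8, 4, 2};
    int n = 3, finished = 0;
    for (int t = 0; finished < n; t++) {
        int best = -1;
        for (int i = 0; i < n; i++)      /* re-evaluated every tick: preemptive */
            if (remaining[i] > 0 && arrival[i] <= t &&
                (best < 0 || remaining[i] < remaining[best]))
                best = i;
        if (best < 0) continue;          /* CPU idle this tick */
        printf("t=%2d: P%d runs\n", t, best + 1);
        if (--remaining[best] == 0) finished++;
    }
    return 0;
}
```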

Advantages of SRTF Scheduling

1. Minimizes Average Waiting Time: SRTF reduces the average waiting time by prioritizing processes
with the shortest remaining execution time.

2. Efficient for Short Processes: Shorter processes get completed faster, improving overall system
responsiveness.

3. Ideal for Time-Critical Systems: It ensures that time-sensitive processes are executed quickly.

Disadvantages of SRTF Scheduling

1. Starvation of Long Processes: Longer processes may be delayed indefinitely if shorter processes
keep arriving.

2. Difficult to Predict Burst Times: Accurate prediction of process burst times is challenging and
affects scheduling decisions.

3. High Overhead: Frequent context switching can increase overhead and slow down system
performance.

4. Not Suitable for Real-Time Systems: Real-time tasks may suffer delays due to frequent
preemptions.

Round Robin Scheduling in Operating System


Round Robin Scheduling is a method used by operating systems to manage the execution time of multiple
processes that are competing for CPU attention. It is called "round robin" because the system rotates
through all the processes, allocating each of them a fixed time slice or "quantum", regardless of their
priority. The primary goal of this scheduling method is to ensure that all processes are given an equal
opportunity to execute, promoting fairness among tasks.
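A minimal Round Robin simulation sketch in C; the burst times and quantum are made-up values, and all processes are assumed to arrive at time 0:

```c
#include <stdio.h>

int main(void) {
    int remaining[] = {5, 3, 8};         /* hypothetical CPU bursts */
    int n = 3, quantum = 2, t = 0, left = n;
    while (left > 0) {
        for (int i = 0; i < n; i++) {    /* rotate through the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("P%d runs [%2d, %2d)\n", i + 1, t, t + slice);
            t += slice;
            remaining[i] -= slice;       /* preempted when the quantum expires */
            if (remaining[i] == 0) left--;
        }
    }
    return 0;
}
```

If the quantum is made very large, each process finishes within its first slice and the schedule degenerates into FCFS; if it is made very small, context-switching overhead dominates.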

Advantages of Round Robin Scheduling

• Fairness: Each process gets an equal share of the CPU.

• Simplicity: The algorithm is straightforward and easy to implement.

• Responsiveness: Round Robin can handle multiple processes without significant delays, making it
ideal for time-sharing systems.

Disadvantages of Round Robin Scheduling

• Overhead: Switching between processes can lead to high overhead, especially if the quantum is too
small.

• Underutilization: If the quantum is too large, the system can feel unresponsive, as every other process waits for the running process to finish its long time slice.
Introduction to Deadlock in Operating Systems

A deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process.

• Deadlock is a situation in computing where two or more processes are unable to proceed because
each is waiting for the other to release resources.

• Key concepts include mutual exclusion, resource holding, circular wait, and no preemption.

Necessary Conditions for Deadlock in OS

Deadlock can arise if the following four conditions hold simultaneously (the necessary conditions):

• Mutual Exclusion: Only one process can use a resource at any given time, i.e. the resources are non-sharable.

• Hold and Wait: A process is holding at least one resource and is waiting to acquire additional resources that are held by other processes.

• No Preemption: A resource cannot be taken from a process unless the process releases the
resource.

• Circular Wait: A set of processes are waiting for each other in a circular fashion. For example, say there is a set of processes {P0, P1, P2, P3} such that P0 depends on P1, P1 depends on P2, P2 depends on P3, and P3 depends on P0. This creates a circular relation between all these processes, and they have to wait forever to be executed (see the sketch after this list).
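The circular-wait condition is easy to reproduce. In the C sketch below (assuming POSIX threads), two threads acquire the same two locks in opposite order, so each ends up holding one resource while waiting for the other, and the program hangs:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

void *t1(void *arg) {
    pthread_mutex_lock(&a);          /* hold a ...                */
    sleep(1);                        /* give t2 time to grab b    */
    pthread_mutex_lock(&b);          /* ... and wait for b: stuck */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return arg;
}

void *t2(void *arg) {
    pthread_mutex_lock(&b);          /* hold b ...                */
    sleep(1);
    pthread_mutex_lock(&a);          /* ... and wait for a: stuck */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return arg;
}

int main(void) {
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);           /* never returns: deadlock */
    pthread_join(y, NULL);
    return 0;
}
```

All four conditions hold here: the mutexes are non-sharable, each thread holds one lock while waiting for the other, locks cannot be forcibly taken away, and the wait is circular.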

Banker’s Algorithm in Operating System

Banker’s Algorithm is a resource allocation and deadlock avoidance algorithm used in operating systems. It
ensures that a system remains in a safe state by carefully allocating resources to processes while avoiding
unsafe states that could lead to deadlocks.

• The Banker’s Algorithm is a smart way for computer systems to manage how programs use
resources, like memory or CPU time.

• It helps prevent situations where programs get stuck and cannot finish their tasks. This condition is known as deadlock.

• By keeping track of what resources each program needs and what’s available, the Banker’s Algorithm makes sure that programs only get what they need, in a safe order.
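A sketch of the safety check at the heart of the Banker’s Algorithm, run on a small made-up state (the Allocation, Need, and Available values are arbitrary). The state is safe if every process can finish in some order, with each finisher returning its allocation to the available pool:

```c
#include <stdio.h>

#define P 3  /* number of processes      */
#define R 2  /* number of resource types */

int main(void) {
    int alloc[P][R] = {{1, 0}, {2, 1}, {1, 1}};   /* currently held       */
    int need[P][R]  = {{2, 2}, {1, 0}, {1, 1}};   /* max demand - alloc   */
    int work[R]     = {2, 1};                     /* currently available  */
    int finish[P]   = {0}, order[P], count = 0;

    while (count < P) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int fits = 1;
            for (int j = 0; j < R; j++)           /* can i's need be met?   */
                if (need[i][j] > work[j]) fits = 0;
            if (!fits) continue;
            for (int j = 0; j < R; j++)           /* i finishes and         */
                work[j] += alloc[i][j];           /* releases its resources */
            finish[i] = 1;
            order[count++] = i;
            progressed = 1;
        }
        if (!progressed) { puts("UNSAFE state"); return 1; }
    }
    printf("SAFE, order:");
    for (int i = 0; i < P; i++) printf(" P%d", order[i]);
    putchar('\n');
    return 0;
}
```

A resource request is granted only if the state that would result still passes this check; otherwise the requesting process waits.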
