Process Management
Introduction
• Process management involves the execution of various tasks such as
the creation of processes, scheduling of processes, management of
deadlock, and termination of processes.
• It is the responsibility of the operating system to manage all the running
processes of the system. The operating system manages processes by
performing tasks such as resource allocation and process
scheduling. When a process runs on a computer, the memory and
CPU of the computer are utilized.
Process Concept
• A process can be thought of as a program in execution. A process
will need certain resources such as CPU time, memory, files, and I/O
devices to accomplish its task. These resources are allocated to the
process either when it is created or while it is executing.
• A program is a passive entity (a file containing a list of instructions,
stored on disk). A process, in contrast, is an active entity, with a
program counter specifying which instruction is to be executed
next. A program becomes a process when it is loaded into main
memory.
Process Concept
• A process in itself contains:
• Program Counter: representing its current activity
• Text Section or Code Section: where the compiled code of the program resides
• Stack: which contains temporary data, local variables, etc.
• Data Section: containing global variables
• Heap Section: memory allocated dynamically at run time
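These sections can be illustrated with a small, purely conceptual Python sketch. CPython manages memory differently under the hood, so the comments are an analogy for the classical process layout, not a real memory map:

```python
# Conceptual mapping of process sections onto a Python program
# (illustrative analogy only; not how CPython actually lays out memory).

counter = 0              # data section: a global variable, alive for the whole run

def handle_request(n):   # text section: the compiled code of this function
    local_total = n * 2  # stack: a local variable, freed when the call returns
    buffer = [0] * n     # heap: memory allocated dynamically at run time
    return local_total + len(buffer)

print(handle_request(4))  # → 12
```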
Process Concept
• The operating system is responsible for the following activities in
connection with Process Management:
• Scheduling processes and threads on the CPUs.
• Creating and deleting both user and system processes.
• Suspending and resuming processes.
• Providing mechanisms for process synchronization.
• Providing mechanisms for process communication.
Attributes of a process
• Process ID: When a process is created, a unique id is assigned to the
process which is used for unique identification of the process in the
system.
• Program Counter: The program counter stores the address of the next
instruction to be executed. When a process is suspended, this value is
saved so that the CPU can resume the process from the same point.
• Process State: The process, from its creation to its completion, goes
through various states: new, ready, running, and waiting. We
will discuss these in detail later.
Attributes of a process
• Priority: Every process has its own priority. The process with the highest
priority among the processes gets the CPU first. This is also stored on the
process control block.
• General Purpose Registers: Every process has its own set of registers which
are used to hold the data which is generated during the execution of the
process.
• List of open files: During execution, every process uses some files,
which need to be present in main memory. The OS maintains the list of
open files in the PCB.
• List of open devices: The OS also maintains the list of all devices
opened and used during the execution of the process.
Process Control Block
• The Process Control Block is a data structure that contains the
information related to a process. The process control block is also known
as a task control block, an entry of the process table, etc.
• A process control block (PCB) contains information about the process,
i.e. registers, quantum, priority, etc. The process table is an array of
PCBs, meaning it logically contains a PCB for each of the current
processes in the system.
Structure of the Process Control Block
• Process State: This specifies the process state i.e. new, ready, running,
waiting or terminated.
• Process ID : This shows the ID of the particular process.
• Program Counter: This contains the address of the next instruction
that needs to be executed in the process.
• Registers: This specifies the registers that are used by the process.
They may include accumulators, index registers, stack pointers,
general purpose registers etc.
Structure of the Process Control Block
• List of Open Files: These are the different files that are associated with the
process.
• CPU Scheduling Information: The process priority, pointers to scheduling
queues etc. is the CPU scheduling information that is contained in the PCB.
This may also include any other scheduling parameters.
• Memory Management Information: The memory management
information includes the page tables or the segment tables depending on
the memory system used. It also contains the value of the base registers,
limit registers etc.
• I/O Status Information: This information includes the list of I/O devices
used by the process, the list of files etc.
Structure of the Process Control Block
• Accounting information: The time limits, account numbers, amount
of CPU used, process numbers etc. are all a part of the PCB
accounting information.
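The PCB fields above can be collected into a simple data structure. This is an illustrative Python sketch, not any real kernel's PCB layout; all field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block (not a real kernel layout)."""
    pid: int
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)
    priority: int = 0              # CPU scheduling information
    open_files: list = field(default_factory=list)
    memory_limits: tuple = (0, 0)  # base and limit register values
    cpu_time_used: float = 0.0     # accounting information

# The process table is logically an array of PCBs, indexed here by PID.
process_table = {p.pid: p for p in [PCB(pid=1), PCB(pid=2, priority=5)]}
print(process_table[2].priority)  # → 5
```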
Process State Diagram
• As a process executes, it changes state. The state of a process is
defined in part by the current activity of that process. A process may
be in one of the following states: new, ready, running, waiting, or terminated.
Thread
• A thread is the basic unit of CPU utilization. A thread is a single sequential
stream of execution within a process. Threads have some of the same
properties as processes, so they are called lightweight processes.
• Each thread has its own program counter, register set, and stack space.
• Threads provide a way to improve application performance through
parallelism.
• A thread shares some information with its peer threads, such as the code
segment, data segment, and open files.
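A minimal sketch of peer threads sharing data while each keeps its own stack, using Python's standard threading module (the names are illustrative):

```python
import threading

shared = []              # shared with peer threads (like a shared data segment)
lock = threading.Lock()  # synchronizes access to the shared data

def worker(name):
    local = name.upper()  # each thread has its own stack (local variables)
    with lock:
        shared.append(local)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("a", "b", "c")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # → ['A', 'B', 'C']
```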
Process Vs Thread
• A process is heavyweight or resource intensive; a thread is lightweight,
taking fewer resources than a process.
• Process switching needs interaction with the operating system; thread
switching does not need to interact with the operating system.
• In multiple processing environments, each process executes the same
code but has its own memory and file resources; all threads can share
the same set of open files and child processes.
Process Vs Thread
• If one process is blocked, then no other process can execute until the
first process is unblocked; while one thread is blocked and waiting, a
second thread in the same task can run.
• Multiple processes without using threads use more resources;
multithreaded processes use fewer resources.
• In multiple processes, each process operates independently of the
others; one thread can read, write or change another thread's data.
Advantages of Thread
• Threads minimize the context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• It is more economical to create and context switch threads.
• Threads allow utilization of multiprocessor architectures to a
greater scale and efficiency.
Inter-process Communication
• Inter Process Communication (IPC) refers to a mechanism, where
the operating systems allow various processes to communicate with
each other. This involves synchronizing their actions and managing
shared data.
• Inter-process communication (IPC) refers specifically to the
mechanisms an operating system provides to allow processes to
manage shared data.
• Processes executing concurrently may be independent or
cooperating.
Cooperating Processes
• Cooperating processes are those that can affect or are affected by
other processes running on the system. Cooperating processes may
share data with each other.
• A process may be either an independent or a cooperating one.
Cooperating Processes
• A process is said to be independent when it cannot affect or be
affected by any other process running in the system. Clearly, any
process that does not share any data (temporary or persistent) with
any other process is independent.
• A cooperating process is one that can affect or be affected by another
process running on the computer. A cooperating process is one that
shares data with another process.
Reasons for needing cooperating processes
• Modularity: Modularity involves dividing complicated tasks into
smaller subtasks. These subtasks can be completed by different
cooperating processes. This leads to faster and more efficient
completion of the required tasks.
• Information Sharing: Sharing of information between multiple
processes can be accomplished using cooperating processes. This may
include access to the same files. A mechanism is required so that the
processes can access the files in parallel to each other.
Reasons for needing cooperating processes
• Convenience: There are many tasks that a user needs to do such as
compiling, printing, editing etc. It is convenient if these tasks can be
managed by cooperating processes.
• Computation Speedup: Subtasks of a single task can be performed in
parallel using cooperating processes. This increases the computation
speedup, as the task can be executed faster. However, this is only
possible if the system has multiple processing elements.
Process Scheduling
• The act of determining which process is in the ready state, and
should be moved to the running state is known as Process
Scheduling.
• The prime aim of the process scheduling system is to keep the CPU
busy all the time and to deliver minimum response time for all
programs. For achieving this, the scheduler must apply appropriate
rules for swapping processes in and out of CPU.
Process Queuing Diagram
• All processes, upon entering into the system, are stored in the Job
Queue. All processes in the ready state are placed in ready queue.
Processes waiting for a device to become available are placed in
Device Queues. There are unique device queues available for each
I/O device.
• A new process is initially put in the Ready queue. It waits in the ready
queue until it is selected for execution(or dispatched).
Process Queuing Diagram
• Once the process is assigned to the CPU and is executing, one of the
following several events can occur:
• The process could issue an I/O request, and then be placed in the I/O queue.
• The process could create a new subprocess and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an
interrupt, and be put back in the ready queue.
Process Queuing Diagram
• In the first two cases, the process eventually switches from the
waiting state to the ready state, and is then put back in the ready
queue.
• A process continues this cycle until it terminates, at which time it is
removed from all queues and has its PCB and resources deallocated.
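The queueing cycle above can be sketched with plain FIFO queues; the event sequence here is illustrative, not a full simulator:

```python
from collections import deque

# Toy queueing-diagram walk: a process moves between the ready queue
# and a device queue (the sequence of events is illustrative).
ready, device_q, log = deque(["P1", "P2"]), deque(), []

def dispatch():
    return ready.popleft()  # scheduler picks the head of the ready queue

p = dispatch()                                   # P1 starts executing
device_q.append(p); log.append((p, "I/O wait"))  # P1 issues an I/O request
p = dispatch()                                   # CPU switches to P2
ready.append(p); log.append((p, "preempted"))    # interrupt: P2 back to ready
ready.append(device_q.popleft())                 # P1's I/O done: waiting -> ready
print(list(ready))  # → ['P2', 'P1']
```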
Scheduler
• A process migrates among various scheduling queues throughout its
lifetime. Scheduling is done by the OS through appropriate
schedulers.
• Schedulers are special system software which handle process
scheduling in various ways. Their main task is to select the jobs to
be submitted into the system and to decide which process to run.
• Schedulers are of three types: 1) Long-Term Scheduler, 2) Short-Term
Scheduler, 3) Medium-Term Scheduler.
Long term Scheduler
• It is also called a job scheduler. A long-term scheduler determines
which programs are admitted to the system for processing. It selects
processes from the queue and loads them into memory for
execution, making them available for CPU scheduling.
• The primary objective of the job scheduler is to provide a balanced
mix of jobs, such as I/O bound and processor bound. It also controls
the degree of multiprogramming.
Long term Scheduler
• If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to the
average departure rate of processes leaving the system.
Short term Scheduler
• It is also called the CPU scheduler. Its main objective is to increase system
performance in accordance with the chosen set of criteria. It carries out
the transition of a process from the ready state to the running state.
• The CPU scheduler selects a process from among the processes that are
ready to execute and allocates the CPU to it. It selects from
processes that are in main memory and ready to execute.
• It is sometimes also known as the dispatcher; it executes most frequently
and makes the fine-grained decision of which process to execute next.
The short-term scheduler is faster than the long-term scheduler.
CPU Bound & I/O Bound Processes
• An I/O-bound process is one that spends more of its time doing I/O
than it spends doing computations.
• A CPU-bound process, in contrast, generates I/O requests
infrequently, using more of its time doing computations.
• It is important that the long-term scheduler select a good process
mix of I/O-bound and CPU-bound processes.
CPU Bound & I/O Bound Processes
• If all processes are I/O bound, the ready queue will almost always
be empty, and the short-term scheduler will have little to do.
• If all processes are CPU bound, the I/O waiting queue will almost
always be empty, devices will go unused, and again the system will
be unbalanced.
• The system with the best performance will thus have a combination
of CPU-bound and I/O-bound processes.
Medium Term Scheduler
• The key idea behind a medium-term scheduler is that
sometimes it can be advantageous to remove a process from
memory (and from active contention for the CPU) and thus
reduce the degree of multiprogramming.
• Later, the process can be reintroduced into memory, and its
execution can be continued where it left off. This scheme is
called swapping.
• The process is swapped out, and is later swapped in, by the
medium-term scheduler. Swapping may be necessary to
improve the process mix or because a change in memory
requirements has overcommitted available memory, requiring
memory to be freed up.
CPU Scheduling
• Scheduling falls into one of two general categories:
• Pre-emptive Scheduling: the operating system decides to
favour another process, pre-empting the currently executing
process. Here, resources allocated to a process can be taken away
by the OS in the middle of its execution.
• Non Pre-emptive Scheduling: the currently executing process
gives up the CPU voluntarily. Here, resources cannot be taken away
in the middle of execution.
Context Switch
• A context switch is the mechanism to store and restore the state or
context of a CPU in Process Control block so that a process
execution can be resumed from the same point at a later time.
• Using this technique, a context switcher enables multiple processes
to share a single CPU. Context switching is an essential feature of a
multitasking operating system.
Context Switch
• When the scheduler switches the CPU from executing one process
to executing another, the state of the currently running process is
stored in its process control block.
• After this, the state for the process to run next is loaded from its
own PCB and used to set the PC, registers, etc. At that point, the
second process can start executing.
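A toy sketch of this save/restore step, with the "CPU state" and the PCBs modelled as dictionaries (all names are illustrative):

```python
# Toy context switch: save the running process's CPU state into its PCB,
# then restore the next process's saved state (names are illustrative).

cpu = {"pc": 0, "regs": {"ax": 0}}

def context_switch(cpu, old_pcb, new_pcb):
    old_pcb["pc"], old_pcb["regs"] = cpu["pc"], dict(cpu["regs"])  # save state
    cpu["pc"], cpu["regs"] = new_pcb["pc"], dict(new_pcb["regs"])  # restore state

p1 = {"pc": 100, "regs": {"ax": 1}}
p2 = {"pc": 200, "regs": {"ax": 2}}

cpu["pc"], cpu["regs"] = p1["pc"], dict(p1["regs"])  # p1 is running
cpu["pc"] = 104                                      # p1 makes progress
context_switch(cpu, p1, p2)                          # scheduler picks p2
print(p1["pc"], cpu["pc"])  # → 104 200 (p1 can later resume at 104)
```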
Dispatcher
• The dispatcher is the module that gives control of the CPU to the
process selected by the short-term scheduler. This function involves
the following:
• Switching context
• Switching to user mode
• Jumping to the proper location in the user program to restart that program
• The dispatcher should be as fast as possible, since it is invoked
during every process switch. The time it takes for the dispatcher to
stop one process and start another running is known as the dispatch
latency.
Scheduling Criteria
• CPU Utilization: To make the best use of the CPU and not waste
any CPU cycles, the CPU should be kept working most of the time
(ideally 100% of the time). In a real system, CPU usage should range
from 40% (lightly loaded) to 90% (heavily loaded).
• Throughput: It is the total number of processes completed per unit
time, or rather, the total amount of work done in a unit of time. This
may range from 10/second to 1/hour depending on the specific
processes.
Scheduling Criteria
• Turnaround Time: It is the amount of time taken to execute a
particular process, i.e. The interval from time of submission of the
process to the time of completion of the process(Wall clock time).
• Waiting Time: The sum of the periods a process spends waiting in
the ready queue to get control of the CPU.
• Load Average: It is the average number of processes residing in the
ready queue waiting for their turn to get into the CPU.
Scheduling Criteria
• Response Time: Amount of time it takes from when a
request was submitted until the first response is produced.
Remember, it is the time till the first response and not the
completion of process execution(final response).
CPU Scheduling Algorithms
• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and
arrival time. Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turn around time and
burst time. Waiting Time = Turn Around Time – Burst Time
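These two formulas can be checked directly; the numbers below are illustrative, not taken from the slides:

```python
# Turnaround and waiting time from arrival, burst, and completion times.

def metrics(arrival, burst, completion):
    turnaround = completion - arrival  # Turn Around Time = Completion - Arrival
    waiting = turnaround - burst       # Waiting Time = Turn Around Time - Burst
    return turnaround, waiting

print(metrics(arrival=0, burst=4, completion=7))  # → (7, 3)
```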
CPU Scheduling: Types
• Preemptive Scheduling: In preemptive scheduling, the tasks are
mostly assigned priorities. Sometimes it is important to
run a task with a higher priority before another, lower priority task,
even if the lower priority task is still running. The lower priority task
is put on hold for some time and resumes when the higher priority
task finishes its execution.
• Non-Preemptive Scheduling: In this type of scheduling method, once
the CPU has been allocated to a specific process, that process keeps
the CPU until it releases it, either by switching to the waiting state or
by terminating.
Objectives of Process Scheduling Algorithm
• Max CPU utilization [Keep CPU as busy as possible]
• Fair allocation of CPU.
• Max throughput [Number of processes that complete their
execution per time unit]
• Min turnaround time [Time taken by a process to finish execution]
• Min waiting time [Time a process waits in ready queue]
• Min response time [Time when a process produces first response]
First Come First Serve Scheduling
• In the "First come first serve" scheduling algorithm, as the name
suggests, the process which arrives first, gets executed first, or we
can say that the process which requests the CPU first, gets the CPU
allocated first.
• First Come First Serve works just like a FIFO (First In First Out) queue
data structure, where the data element added to the queue first is
the one that leaves the queue first. It is used in batch systems.
• It is easy to understand and implement programmatically using a
queue data structure, where a new process enters through the tail
of the queue and the scheduler selects the process at the head of
the queue.
FCFS Example
• Consider the processes P1, P2, P3, P4 given in the table below,
which arrive for execution in the same order, with given Arrival
Time and Burst Time. Let's find the average waiting time using
the FCFS scheduling algorithm.
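The slide's own table is shown as an image and is not reproduced here; the following sketch runs FCFS on illustrative data to show how the average waiting time is computed:

```python
# FCFS simulation with illustrative data (not the slide's own table).
# Processes are served strictly in arrival order, non-preemptively.

def fcfs(procs):  # procs: list of (name, arrival, burst), sorted by arrival
    time, waits = 0, {}
    for name, arrival, burst in procs:
        time = max(time, arrival)     # CPU may idle until the next arrival
        waits[name] = time - arrival  # time spent in the ready queue
        time += burst                 # run to completion
    return waits

waits = fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 6)])
avg = sum(waits.values()) / len(waits)
print(waits, avg)  # → {'P1': 0, 'P2': 4, 'P3': 6, 'P4': 13} 5.75
```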
Problems with FCFS Scheduling
• It is Non Pre-emptive algorithm, which means the process priority
doesn't matter.
• Not optimal Average Waiting Time.
• Resources utilization in parallel is not possible, which leads to
Convoy Effect, and hence poor resource(CPU, I/O etc) utilization.
• Convoy Effect is a situation where many processes that need a
resource for a short time are blocked by one process holding that
resource for a long time.
Shortest Job First(SJF) Scheduling
• Shortest Job First scheduling works on the process with the shortest
burst time or duration first.
• To successfully implement it, the burst time/duration time of the
processes should be known to the processor in advance, which is
practically not feasible all the time.
• This scheduling algorithm is optimal if all the jobs/processes are
available at the same time. (either Arrival time is 0 or all, or Arrival
time is same for all)
Shortest Job First(SJF) Scheduling
• Advantages of SJF: Maximum throughput & Minimum average
waiting and turnaround time
• Disadvantages of SJF: It may suffer from the problem of starvation,
and it is not practically implementable because the exact burst time
of a process cannot be known in advance.
• It is of two types: Non Pre-emptive and Pre-emptive
Non Pre-emptive Shortest Job First
• If the arrival times of the processes are different, meaning all the
processes are not available in the ready queue at time 0 and some
jobs arrive after some time, then a process with a short burst time
sometimes has to wait for the current process's execution to finish.
• This leads to the problem of Starvation, where a shorter process has
to wait a long time until the current longer process gets executed.
This happens if shorter jobs keep coming, but it can be solved using
the concept of aging.
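A minimal non-preemptive SJF sketch with differing arrival times (illustrative data): among the processes that have already arrived, the one with the shortest burst time runs next.

```python
# Non-preemptive SJF: of the processes that have arrived by the current
# time, run the one with the shortest burst to completion (toy data).

def sjf(procs):  # procs: list of (name, arrival, burst)
    pending, time, order = sorted(procs, key=lambda p: p[1]), 0, []
    while pending:
        # processes already in the ready queue; if none, jump to next arrival
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        name, arrival, burst = min(ready, key=lambda p: p[2])
        time = max(time, arrival) + burst  # run the chosen job to completion
        order.append(name)
        pending.remove((name, arrival, burst))
    return order

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# → ['P1', 'P3', 'P2', 'P4']
```

Note that P2 and P4, which arrive while P1 runs, must wait for P1 to finish: the waiting that leads to starvation when short jobs keep arriving.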
Pre-emptive Shortest Job First
• Also called Shortest Remaining Time First. Here, jobs are put into the
ready queue as they arrive, but when a process with a shorter burst
time arrives, the existing process is preempted, i.e. removed from
execution, and the shorter job is executed first.
Priority CPU Scheduling
• Priority scheduling, in its basic form, is a non-preemptive algorithm
where each process is assigned a priority. The process with the highest
priority is executed first, and so on. Processes with the same priority
are executed on a first come first served basis.
• The priority of a process, when internally defined, can be decided
based on memory requirements, time limits, the number of open files,
the ratio of I/O burst to CPU burst, etc. External priorities, in contrast,
are set based on criteria outside the operating system, such as the
importance of the process, funds paid for computer resource use,
market factors, etc.
Example of Priority Scheduling Algorithm
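The slide's worked example is an image and is not reproduced here; the following sketch runs non-preemptive priority scheduling on illustrative data, assuming the common convention that a lower number means a higher priority (the slides do not fix one):

```python
# Non-preemptive priority scheduling sketch (lower number = higher
# priority, an assumed convention). Ties fall back to first come first
# served, which the stable sort preserves. Data is illustrative.

def priority_schedule(procs):  # procs: list of (name, priority), all arrived
    return [name for name, _ in sorted(procs, key=lambda p: p[1])]

order = priority_schedule([("P1", 3), ("P2", 1), ("P3", 2), ("P4", 1)])
print(order)  # → ['P2', 'P4', 'P3', 'P1']  (P2 before P4: same priority, FCFS)
```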
Problem with Priority Scheduling Algorithm
• In the priority scheduling algorithm, there is a chance of indefinite
blocking, or starvation.
• In case of priority scheduling if new higher priority processes keeps
coming in the ready queue then the processes waiting in the ready
queue with lower priority may have to wait for long durations
before getting the CPU for execution. This state is called starvation.
Aging Technique
• To prevent starvation of any process, we can use the concept of aging,
where we keep increasing the priority of a low-priority process
based on its waiting time.
• For example, if we decide the aging factor to be 0.5 for each day of
waiting, then consider a process with priority 20 (which is a comparatively
low priority) in the ready queue. After one day of waiting, its
priority is increased to 19.5, and so on.
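The aging example can be checked directly; this sketch assumes, as the example implies, that a lower priority number means a higher priority, so the value drops as the process waits:

```python
# Aging sketch matching the slide's example: the priority value drops by
# an aging factor (0.5) per day of waiting, so the process gradually
# becomes higher priority (lower number = higher priority, as assumed).

def aged_priority(base_priority, days_waiting, factor=0.5):
    return base_priority - factor * days_waiting

print(aged_priority(20, 1))  # → 19.5 (after one day of waiting)
print(aged_priority(20, 4))  # → 18.0 (after four days of waiting)
```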
Round Robin Scheduling