Unit 2
SYSTEM PRINCIPLES
UNIT-2
PROCESS MANAGEMENT
Process in OS
• A process in an operating system is a program in execution,
with its own address space and resources, managed by the
operating system for efficient and secure operation of the
system.
• A single process consists of the following sections:
– Text (holds the executable code)
– Data (holds global variables)
– Heap (memory dynamically allocated during execution)
– Stack (temporary storage for function invocation:
parameters, return addresses, local variables)
Process Scheduling
• The process scheduler is responsible for selecting an
available process so that CPU utilization is maximized,
which is the main purpose of multiprogramming.
• The number of processes currently residing in memory is
known as the degree of multiprogramming.
• Processes can be broadly classified as:
– I/O bound (more I/O than computation), e.g., online
media players
– CPU bound (more computation than I/O), e.g., multimedia
processing software
Cont..
• During its lifetime, a process moves through the following:
– Scheduling queues (the process waits for resources and is
placed in the ready queue when runnable)
– CPU scheduling (the scheduler picks a process from the
ready queue)
– Context switching (the system saves the state of one
process and restores another's to toggle between them)
Scheduling Algorithms
• First Come First Served
• Shortest Job First
• Round Robin
• Priority
• Multilevel Queue
• Multilevel Feedback Queue
First Come First Served
– In FCFS, the CPU is allocated to the first process that
arrives, and the process runs until it completes its
execution or gets blocked.
– FCFS is a non-preemptive scheduling algorithm, which
means that once a process starts executing, it cannot be
preempted until it completes its execution or blocks.
– FCFS is suitable for batch processing and low-traffic
systems where the response time is not critical.
– The average waiting time for the FCFS algorithm can be
long, especially if there are many long-running
processes.
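The behavior above can be worked through with a short sketch; the function name and the assumption that all processes arrive at time 0 are illustrative, not part of any standard API.

```python
# Minimal FCFS sketch (assumption: all processes arrive at time 0).
# Each process waits for the total burst time of everything ahead of it.
def fcfs_waiting_times(burst_times):
    waits, clock = [], 0
    for burst in burst_times:
        waits.append(clock)   # time spent waiting before this process runs
        clock += burst        # the CPU is held until the process finishes
    return waits

# Bursts 24, 3, 3 in arrival order: waits are 0, 24, 27 (average 17).
print(fcfs_waiting_times([24, 3, 3]))
```

Note how one long job at the front of the queue inflates the average for everyone behind it (the convoy effect): reordering the same bursts as 3, 3, 24 drops the waits to 0, 3, 6.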
Shortest Job First
– SJF can be either preemptive or non-preemptive. In
preemptive SJF, a process with a smaller burst time can
preempt a currently executing process with a longer burst
time. In non-preemptive SJF, a process with a shorter
burst time has to wait until the currently executing
process completes its execution.
– SJF can suffer from starvation: a process with a long
burst time may have to wait indefinitely if shorter
processes keep arriving.
– SJF requires knowledge of the burst time of all the
processes in advance, which may not be feasible in real-
time systems.
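A minimal sketch of the non-preemptive variant, under exactly the assumption the last point flags as unrealistic: every burst time is known in advance, and all processes are ready at time 0.

```python
# Non-preemptive SJF sketch (assumptions: all processes ready at time 0
# and burst times known in advance).
def sjf_waiting_times(burst_times):
    # run the job with the shortest burst first
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waits, clock = [0] * len(burst_times), 0
    for i in order:
        waits[i] = clock          # waiting time of process i
        clock += burst_times[i]
    return waits

# Bursts 6, 8, 7, 3 run in the order 3, 6, 7, 8, giving waits
# [3, 16, 9, 0] and an average of 7 (FCFS order would average 10.25).
print(sjf_waiting_times([6, 8, 7, 3]))
```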
Round Robin
– RR is a preemptive scheduling algorithm, which means
that a process can be preempted after its time slice
expires, and the CPU can be allocated to another
process.
– In RR, each process is allocated a fixed time slice or
quantum, which can range from a few milliseconds to
several seconds, depending on the system configuration.
– After a process completes its time slice, it is preempted
and placed at the end of the ready queue, and the next
process in the queue is allocated the CPU.
Cont…
– RR provides fairness in CPU allocation, as each process is
given an equal chance to execute, regardless of its priority
or burst time.
– RR can suffer from high context switching overhead,
especially if the time slice is too small or the number of
processes in the queue is large.
– RR is suitable for interactive systems and systems with a
mix of short and long-running processes.
– The time slice in RR should be chosen carefully to balance
the trade-off between fairness and overhead.
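The preempt-and-requeue cycle described above can be simulated in a few lines; this sketch assumes all processes arrive at time 0 and ignores context-switch cost.

```python
from collections import deque

# Round-robin sketch (assumptions: all processes arrive at time 0 and
# context-switch overhead is ignored). Returns each process's completion time.
def round_robin_completion(burst_times, quantum):
    remaining = dict(enumerate(burst_times))
    queue = deque(remaining)            # ready queue, in arrival order
    clock, finish = 0, {}
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = clock           # the process is done
        else:
            queue.append(i)             # preempted: back of the ready queue
    return [finish[i] for i in range(len(burst_times))]

# Bursts 24, 3, 3 with quantum 4 finish at times 30, 7, and 10.
print(round_robin_completion([24, 3, 3], quantum=4))
```

Rerunning this with a smaller quantum shows the trade-off directly: fairness barely changes, but the number of preemptions (and hence context switches) grows.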
Priority
– The priority of a process is typically determined by its
characteristics, such as its time-criticality, importance,
and resource requirements.
– The process with the highest priority is allocated the CPU
first, and if two or more processes have the same priority,
then the scheduling algorithm may use other criteria,
such as first-come-first-served (FCFS) or round-robin
scheduling.
– In preemptive priority scheduling, the CPU can be taken
away from a running process if a higher priority process
arrives.
– In non-preemptive priority scheduling, a running process
keeps the CPU until it finishes or voluntarily gives up the
CPU.
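The selection rule with its FCFS tie-break can be sketched as a sort; the convention that a lower number means a higher priority is an assumption here (systems differ), as is the function name.

```python
# Non-preemptive priority sketch (assumptions: lower number = higher
# priority, all processes ready at time 0, FCFS breaks ties).
def priority_order(priorities):
    # sort by (priority, arrival index); the index preserves FCFS ties
    return sorted(range(len(priorities)), key=lambda i: (priorities[i], i))

# Priorities 3, 1, 4, 1: processes 1 and 3 tie at priority 1, and the
# earlier arrival (process 1) runs first.
print(priority_order([3, 1, 4, 1]))  # -> [1, 3, 0, 2]
```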
Multilevel Queue
– Multilevel queue scheduling is a CPU scheduling algorithm
that divides processes into separate queues, each with its
own scheduling algorithm.
– Each queue is typically assigned a different priority level
based on the type of process or its priority. For example,
one queue may be dedicated to time-critical processes,
while another may be for background processes.
– The multilevel queue scheduling algorithm uses a
combination of scheduling techniques, such as FCFS,
round-robin, and priority scheduling, to schedule processes
in each queue.
– The scheduling algorithm can be either preemptive or non-
preemptive, depending on the requirements of the system.
Multilevel Feedback Queue
– In the MLFQ algorithm, each process is initially assigned to the
highest priority queue, and the CPU is allocated to the process
in that queue.
– If a process uses up its allocated time slice in a given queue, it
is moved to a lower-priority queue.
– If a process continues to use up its time slice in a lower-priority
queue, it is moved down to an even lower-priority queue.
Cont…
– This process of moving a process down the priority
levels is called demotion.
– If a process releases the CPU before its time slice
is used up, it can move up to a higher-priority
queue. This process of moving a process up the
priority levels is called promotion.
– The purpose of the feedback mechanism is to keep
interactive processes that use little CPU time in the
higher-priority queues, while CPU-bound processes that
repeatedly consume their full time slices move down to
lower-priority queues.
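The demotion/promotion rule can be sketched as a tiny state transition; the three-queue layout and the "yielding early promotes one level" policy are illustrative assumptions (real MLFQ variants differ, e.g. some promote only via periodic aging).

```python
# MLFQ level-transition sketch (assumptions: three queues numbered
# 0 (highest) to 2 (lowest); exhausting the time slice demotes one
# level, yielding the CPU early promotes one level).
def next_level(level, used_full_quantum, lowest=2):
    if used_full_quantum:
        return min(level + 1, lowest)   # demotion: CPU-bound behavior
    return max(level - 1, 0)            # promotion: interactive behavior

print(next_level(0, used_full_quantum=True))    # 1: demoted
print(next_level(2, used_full_quantum=False))   # 1: promoted
print(next_level(2, used_full_quantum=True))    # 2: already at the bottom
```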
Thread Scheduling
– Thread scheduling is the process of assigning CPU time to
different threads of a process in a multi-threaded
environment.
– Round Robin: Each thread is allocated a fixed time slice or
quantum of CPU time, after which it is preempted and
replaced by the next thread in the queue.
– Priority-based scheduling: Threads are allocated CPU time
based on their priority, with higher-priority threads
scheduled ahead of lower-priority threads.
– Thread-specific scheduling: The operating system can use
different scheduling algorithms for different threads of a
process, based on their requirements.
Multiprocessor Scheduling
– Multiprocessor scheduling is the process of allocating tasks to
multiple processors or cores in a parallel computing
environment.
– Task decomposition: In task decomposition, a large task is
divided into smaller subtasks that can be executed in parallel.
– Priority scheduling: In priority scheduling, tasks are assigned
priorities based on their importance or criticality, and the
highest-priority task is dispatched to an available processor
or core first.
– Round-robin scheduling: In round-robin scheduling, each
processor or core is allocated a fixed time slice or quantum of
CPU time, and tasks are assigned to processors or cores in a
rotating fashion.
The Critical Section Problem
– The critical section refers to a portion of the code that
accesses shared data, resources, or variables.
– The goal of the problem is to ensure that only one thread or
process at a time executes the critical section, to avoid race
conditions, inconsistencies, and data corruption.
– The Critical Section Problem is an essential concept in
concurrent programming and plays a crucial role in ensuring
correct and reliable operation of multi-threaded and multi-
process programs.
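A minimal sketch of the problem and its standard fix: several threads update one shared counter, and a mutual-exclusion lock around the critical section makes the result deterministic.

```python
import threading

# Critical-section sketch: four threads increment a shared counter.
# The update is the critical section; the lock enforces mutual
# exclusion, so the final count is exactly 4 * 100000 rather than
# a smaller, racy value.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:          # entry section: acquire the lock
            counter += 1    # critical section: shared-data access
                            # exit section: lock released on block exit

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```

Removing the `with lock:` line recreates the race condition: the read-modify-write of `counter` can interleave between threads, losing updates.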
Semaphores
– Semaphores are a type of synchronization mechanism used
in multi-threaded or multi-process programs to control
access to shared resources or critical sections of code.
– A semaphore is a variable that can be accessed by multiple
threads or processes and used to coordinate their access to
shared resources.
– Semaphores are of two types:
• Binary Semaphores: Binary semaphores can take only
two values, 0 or 1, and are used to indicate the
availability of a shared resource.
• Counting Semaphores: Counting semaphores can take
any non-negative integer value and are used to control
the number of threads or processes that can access a
shared resource.
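A counting-semaphore sketch using Python's `threading.Semaphore`; the three-instance resource and the high-water-mark bookkeeping are illustrative assumptions.

```python
import threading

# Counting-semaphore sketch (assumption: a resource with 3 identical
# instances). The semaphore admits at most 3 threads at a time.
slots = threading.Semaphore(3)          # counting semaphore, initial value 3
active, peak = 0, 0
guard = threading.Lock()                # protects the two counters above

def use_resource():
    global active, peak
    with slots:                         # wait (P) on entry, signal (V) on exit
        with guard:
            active += 1
            peak = max(peak, active)    # record the high-water mark
        with guard:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 3)  # the semaphore never admitted more than 3 at once
```

A binary semaphore is simply the special case with an initial value of 1, which then behaves like a mutex lock.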
Monitor
– A monitor is a high-level synchronization mechanism used in
concurrent programming languages to provide a structured
way of controlling access to shared resources or critical
sections of code.
– A monitor is implemented as an abstract data type that
encapsulates shared resources and provides methods or
procedures for accessing and modifying them.
Cont…
– Features:
• Mutual Exclusion: A monitor ensures that only one
thread can execute a critical section of code at a time,
preventing race conditions and ensuring data
consistency.
• Condition Variables: A monitor provides condition
variables, which allow threads to wait for certain
conditions to be met before proceeding with their
execution.
• Data Abstraction: A monitor encapsulates shared
resources and provides methods or procedures for
accessing and modifying them.
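The three features can be illustrated with a monitor-style bounded buffer, sketched here with Python's `threading.Condition` (which bundles a mutual-exclusion lock with a condition variable); the class and capacity are illustrative.

```python
import threading

# Monitor-style bounded buffer sketch: the class encapsulates the
# shared list (data abstraction); the Condition supplies both the
# mutual-exclusion lock and the condition variable threads wait on.
class BoundedBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.cond = threading.Condition()   # lock + condition variable

    def put(self, item):
        with self.cond:                     # mutual exclusion
            while len(self.items) >= self.capacity:
                self.cond.wait()            # wait until there is room
            self.items.append(item)
            self.cond.notify_all()          # wake any waiting consumers

    def get(self):
        with self.cond:
            while not self.items:
                self.cond.wait()            # wait until an item arrives
            item = self.items.pop(0)
            self.cond.notify_all()          # wake any waiting producers
            return item

buf = BoundedBuffer(2)
buf.put("a")
buf.put("b")
print(buf.get(), buf.get())  # a b
```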
Deadlock
– A deadlock is a situation where two or more processes are
blocked or waiting for each other to release resources that
they are holding, preventing them from making progress.
– These deadlocks typically occur in systems where processes
are competing for a finite set of resources, such as shared
memory, file access, or network connections.
– To prevent deadlocks, operating systems use various
techniques such as resource allocation algorithms, process
scheduling algorithms, and deadlock detection and recovery
algorithms.
Deadlock Characterization
– The following four conditions are necessary for a deadlock
to occur:
• Mutual exclusion: At least one resource must be held in a
non-sharable mode, meaning only one process can access
the resource at a time.
• Hold and wait: A process must be holding at least one
resource and waiting for another resource that is currently
being held by another process.
• No preemption: Resources cannot be preempted or
forcibly taken away from a process that is holding them.
• Circular wait: A circular chain of two or more processes
exists, where each process is waiting for a resource that is
held by the next process in the chain.
Methods for Handling Deadlocks
– Deadlocks can be handled by deadlock prevention, deadlock
avoidance, or deadlock detection and recovery.
Deadlock Prevention
– To prevent a deadlock, at least one of the four necessary
conditions must be prevented from holding.
• The mutual-exclusion condition cannot generally be denied,
because some resources (such as a printer) are intrinsically
non-sharable; sharable resources (such as read-only files)
never require mutually exclusive access in the first place.
• To ensure that the hold-and-wait condition never occurs in the
system, a protocol must be used that requires each thread to
request and be allocated all its resources before it begins
execution.
• The third necessary condition for deadlocks is that there be no
preemption of resources that have already been allocated. To
deny this condition, a protocol can be used: if a thread requests
resources that cannot be immediately allocated, all resources it
is currently holding are preempted and added to the list of
resources for which the thread is waiting. The thread is
restarted only when it can regain its old resources as well as
the new ones it is requesting.
• One way to ensure that circular wait never holds is to impose a
total ordering of all resource types and to require that each
thread requests resources in an increasing order of enumeration.
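The total-ordering rule for preventing circular wait can be sketched as a simple admission check; the numeric ranks assigned to resource types are a hypothetical convention for illustration.

```python
# Circular-wait prevention sketch (assumption: each resource type has
# a numeric rank, and every thread must acquire resources in strictly
# increasing rank order).
def request_allowed(held_ranks, requested_rank):
    """A request respects the total ordering only if its rank is higher
    than every rank the thread already holds."""
    return all(requested_rank > r for r in held_ranks)

print(request_allowed([1, 2], 3))  # True: ascending order is respected
print(request_allowed([3], 2))     # False: would permit a circular wait
```

Because every thread climbs the same ordering, no cycle of threads can each hold a resource ranked above one another, so circular wait is impossible.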
Deadlock Avoidance
– Deadlock avoidance is a technique used to prevent deadlocks
from occurring by dynamically assessing the safety of each
resource request before granting it.
– It requires the system to have a prior knowledge of the
maximum resources needed, which can be difficult to obtain
in some cases.
– However, it is a useful technique for handling deadlocks in
systems where deadlock prevention is not feasible or
practical.
– The most popular algorithm for deadlock avoidance is the
Banker's algorithm. It tracks the available, maximum,
allocated, and still-needed resources of each process to
determine whether a resource request can be granted while
keeping the system in a safe state.
Banker’s Algorithm
– The Banker's algorithm considers the following inputs:
• The total number of resources of each type in the system.
• The number of resources of each type that are currently
available.
• The maximum demand of each process, which is the
maximum number of resources of each type that a process
may need.
• The number of resources of each type currently allocated
to each process.
Cont…
– To determine if a request for resources can be granted, the
Banker's algorithm uses the following steps:
• The process makes a request for a certain number of
resources.
• The system checks if the request can be granted by verifying
that the number of available resources is greater than or
equal to the number of resources requested by the process.
• The system temporarily allocates the requested resources to
the process.
• The system checks if the resulting state is safe by simulating
the allocation of resources to all processes. If the system can
allocate resources to all processes and avoid deadlock, then
the request is granted. Otherwise, the request is denied, and
the system returns to its previous state.
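The safety simulation in the last step can be sketched directly; the input matrices below are a textbook-style example (5 processes, 3 resource types), with need defined as maximum demand minus current allocation.

```python
# Banker's-algorithm safety check sketch. Inputs mirror the slide:
# available resources per type, each process's current allocation,
# and its remaining need (maximum demand minus allocation).
def is_safe(available, allocation, need):
    work = list(available)              # resources free right now
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # process i can run to completion; reclaim its resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return all(finish)                  # safe iff every process can finish

# Textbook-style example: 5 processes, 3 resource types.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```

Granting a request then amounts to tentatively subtracting it from the available vector, adding it to that process's allocation, rerunning this check, and rolling back if the resulting state is unsafe.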
Deadlock detection
– Deadlock detection is a technique used in computer systems to identify
situations where multiple processes are waiting for each other to release
resources that they need in order to proceed.
– There are several algorithms for detecting deadlocks in computer
systems, including the banker's algorithm, wait-for graph algorithm, and
resource allocation graph algorithm.
– The banker's algorithm is a resource allocation and deadlock avoidance
algorithm that ensures that the system will be in a safe state before
allocating resources to a process.
– The wait-for graph algorithm uses a directed graph to represent the wait-
for relationships between processes.
– Finally, the resource allocation graph algorithm uses a directed graph to
represent the allocation of resources to processes.
– Once a deadlock has been detected, various techniques can be
used to resolve it, such as process termination, resource
preemption, or rolling processes back to a safe state.
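The wait-for graph algorithm mentioned above reduces deadlock detection to cycle detection; here is a minimal depth-first sketch over a hypothetical graph encoded as a dictionary.

```python
# Wait-for-graph sketch: an edge P -> Q means process P is waiting for
# a resource held by Q; a cycle in the graph indicates a deadlock.
# Depth-first search with gray/black coloring detects cycles.
def has_deadlock(wait_for):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(p):
        color[p] = GRAY                     # p is on the current DFS path
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True                 # back edge: cycle found
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK                    # fully explored, no cycle via p
        return False

    return any(color.get(p, WHITE) == WHITE and visit(p) for p in wait_for)

print(has_deadlock({"P1": ["P2"], "P2": ["P1"]}))  # True: mutual wait
print(has_deadlock({"P1": ["P2"], "P2": []}))      # False: no cycle
```

This simple form assumes a single instance of each resource; with multiple instances per type, a cycle is necessary but not sufficient, and a detection variant of the Banker's safety check is used instead.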
Recovery from Deadlock
– There are two options for breaking a deadlock.
• Process and Thread Termination
– Abort all deadlocked processes
– Abort one process at a time until the deadlock cycle is
eliminated
• Resource Preemption
– Victim selection
– Roll back the process to a safe state and restart it
– Starvation: if the same process is repeatedly chosen as
the victim and never obtains its required resources, it
can starve