
UNIVERSITY POLYTECHNIC

ASSIGNMENT-01
OPERATING SYSTEM (PCO604C)
UNIT-2: PROCESS MANAGEMENT

Name: MAYANK PARASHAR
Faculty No.: 21DPCO177
Enrollment No.: GM0491
Semester: VI SEMESTER
Class: Diploma in Computer Engineering

INTRODUCTION TO PROCESS MANAGEMENT


■ A process is a program in execution. For example, when we write a program in C or C++ and compile it, the compiler creates binary code. The original code and the binary code are both programs. When we actually run the binary code, it becomes a process.
■ A process is an 'active' entity, in contrast to a program, which is a 'passive' entity. A single program can create many processes when run multiple times; for example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes are created).
■ Process management refers to the techniques and strategies used by organizations to
design, monitor, and control their business processes to achieve their goals efficiently
and effectively. It involves identifying the steps involved in completing a task, assessing
the resources required for each step, and determining the best way to execute the task.
■ Process management can help organizations improve their operational efficiency, reduce
costs, increase customer satisfaction, and maintain compliance with regulatory
requirements. It involves analyzing the performance of existing processes, identifying
bottlenecks, and making changes to optimize the process flow.
■ Process management includes various tools and techniques such as process mapping,
process analysis, process improvement, process automation, and process control. By
applying these tools and techniques, organizations can streamline their processes,
eliminate waste, and improve productivity.
■ Overall, process management is a critical aspect of modern business operations and can
help organizations achieve their goals and stay competitive in today's rapidly changing environment.
PROCESS MANAGEMENT
■ If the operating system supports multiple users, then the services in this category are very
important. In this regard, the operating system has to keep track of all the running
processes, schedule them, and dispatch them one after another, while each user should
feel that he has full control of the CPU.
■ Some of the system calls in this category are as follows (a short C sketch using a few of these calls appears after the list).
 Create a child process identical to the parent
 Terminate a process
 Wait for a child process to terminate
 Change the priority of the process
 Block the process
 Ready the process
 Dispatch a process
 Suspend a process
 Resume a process
 Delay a process
 Fork a process
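
 A minimal sketch in C, assuming a Unix-like system with the POSIX API, showing a few of these calls: fork creates a child, nice changes its priority, exit terminates it, and waitpid waits for it.

/* Sketch of a few process-management system calls on a
   Unix-like system (POSIX assumed). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();          /* create a child identical to the parent */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {              /* child */
        nice(5);                 /* lower the child's priority */
        printf("child %d running\n", getpid());
        exit(0);                 /* terminate the child process */
    }
    waitpid(pid, NULL, 0);       /* parent waits for the child to terminate */
    printf("parent %d: child finished\n", getpid());
    return 0;
}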

PROCESS CONCEPT
■ A question that arises in discussing operating systems involves what to call all the CPU
activities. A batch system executes jobs, whereas a time-shared system has user programs, or
tasks. Even on a single-user system such as Microsoft Windows, a user may be able to run
several programs at one time: a word processor, a web browser, and an e-mail package. Even if
the user can execute only one program at a time, the operating system may need to support its
own internal programmed activities, such as memory management. In many respects, all these
activities are similar, so we call all of them processes. The terms job and process are used
almost interchangeably in this text. Although we personally prefer the term process, much of
operating-system theory and terminology was developed during a time when the major activity
of operating systems was job processing. It would be misleading to avoid the use of commonly
accepted terms that include the word job (such as job scheduling) simply because process has
superseded job.
THE PROCESS
■ Informally, as mentioned earlier, a process is a program in execution. A process is
more than the program code, which is sometimes known as the text section. It also
includes the current activity, as represented by the value of the program counter and
the contents of the processor’s registers. A process generally also includes the
process stack, which contains temporary data (such as function parameters, return
addresses, and local variables), and a data section, which contains global variables.

Fig 2.1
EXPLANATION OF PROCESS
1. Text Section: Contains the program code (hence the name). The current activity is
represented by the value of the Program Counter and the contents of the processor's registers.
2. Stack: Contains temporary data, such as function parameters, return
addresses, and local variables.
3. Data Section: Contains the global variables.
4. Heap Section: Memory dynamically allocated to the process during its run time.
ATTRIBUTES OR CHARACTERISTICS OF A PROCESS
A process has the following attributes.
 Process Id: A unique identifier assigned by the operating system
 Process State: Can be ready, running, etc.
 CPU registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
 Accounting information: Amount of CPU used for process execution, time limits,
execution ID, etc.
 I/O status information: For example, devices allocated to the process, open files,
etc.
 CPU scheduling information: For example, priority (different processes may have
different priorities; for instance, a shorter process may be assigned a higher priority
under shortest-job-first scheduling).

 All of the above attributes of a process are also known as the context of the process.
Every process has its own process control block (PCB), i.e. each process has a
unique PCB. All of the above attributes are part of the PCB (sketched as a C struct below).
Fig 2.2
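
 How a PCB might look as a C struct is sketched below; the field names and sizes are illustrative assumptions, not any real kernel's layout.

#include <stdint.h>
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Illustrative Process Control Block; not a real kernel's layout. */
struct pcb {
    int             pid;             /* process id                  */
    enum proc_state state;           /* process state               */
    uint64_t        program_counter; /* saved program counter       */
    uint64_t        registers[16];   /* saved CPU registers         */
    int             priority;        /* CPU scheduling information  */
    unsigned long   cpu_time_used;   /* accounting information      */
    int             open_files[16];  /* I/O status information      */
    struct pcb     *next;            /* link for a scheduling queue */
};

int main(void) {
    struct pcb p = { .pid = 42, .state = READY, .priority = 10 };
    printf("pid %d, state %d, priority %d\n", p.pid, p.state, p.priority);
    return 0;
}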
STATES OF PROCESS
A process is in one of the following states:
1. New: The process is being created.
2. Ready: After creation, the process moves to the Ready state, i.e. it is ready for execution.
3. Run: The process is currently running on the CPU (only one process at a time can be
under execution on a single processor).
4. Wait (or Block): The process has requested I/O access and is waiting for it.
5. Complete (or Terminated): The process has completed its execution.
6. Suspended Ready: When the ready queue becomes full, some processes are moved
to the suspended ready state.
7. Suspended Block: When the waiting queue becomes full, some waiting processes are
moved to the suspended block state.
Fig 2.3
CRITICAL SECTION PROBLEM
 The critical section is the part of a program that tries to access shared resources. That
resource may be any resource in the computer, such as a memory location, a data
structure, the CPU, or any I/O device.
 The critical section cannot be executed by more than one process at the same time;
the operating system faces difficulty in deciding when to allow and disallow processes
from entering the critical section.
 The critical section is a code segment where shared variables can be accessed. An
atomic action is required in a critical section, i.e. only one process can execute in its
critical section at a time; all the other processes have to wait to execute in their
critical sections.
 A typical layout places an entry section before the critical section and an exit section
after it. The entry section handles entry into the critical section: it acquires the
resources needed for execution by the process. The exit section handles the exit from
the critical section: it releases the resources and also informs the other processes that
the critical section is free.

SOLUTION TO THE CRITICAL SECTION PROBLEM


 The critical section problem needs a solution to synchronize the different processes.
The solution to the critical section problem must satisfy the following conditions –
MUTUAL EXCLUSION
 Mutual exclusion implies that only one process can be inside the critical section at
any time. If any other processes require the critical section, they must wait until it is
free.
Fig 2.4
PROGRESS
 Progress means that if a process is not using the critical section, then it should not
stop any other process from accessing it. In other words, any process can enter a
critical section if it is free.
BOUNDED WAITING
 Bounded waiting means that each process must have a limited waiting time. It should
not wait endlessly to access the critical section.

SYNCHRONIZATION
 Synchronization is the way in which processes that share the same memory space
are managed in an operating system. It helps maintain the consistency of data by
using variables or hardware so that only one process can make changes to the shared
memory at a time. There are various solutions for this, such as semaphores,
mutex locks, and synchronization hardware.
 An operating system is software that manages all applications on a device and
helps in the smooth functioning of the computer. Because of this, the operating
system has to perform many tasks, sometimes simultaneously. This isn't usually a
problem unless these simultaneously occurring processes use a common resource.
 For example, consider a bank that stores the account balance of each customer in the
same database. Now suppose you initially have x rupees in your account. Now, you
take out some amount of money from your bank account, and at the same time,
someone tries to look at the amount of money stored in your account. As you are
taking out some money from your account, after the transaction, the total balance left
will be lower than x. But, the transaction takes time, and hence the person reads x as
your account balance, which leads to inconsistent data. If in some way we could
make sure that only one process occurs at a time, we could ensure consistent data.
Fig 2.5
 In the above image, if Process 1 and Process 2 happen at the same time, user 2 will get
the wrong account balance Y, because Process 1 is still being transacted while the
balance is X.
 Inconsistency of data can occur when various processes share a common resource in
a system, which is why there is a need for process synchronization in the operating
system.
 For example, if process 1 is trying to read the data present in a memory location while
process 2 is trying to change the data at the same location, there is a high chance that
the data read by process 1 will be incorrect.
Fig 2.6
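
 The inconsistency above can be reproduced in a few lines of C using POSIX threads; this is a demonstrative sketch, with the balance and loop counts as made-up values.

/* Race condition sketch: two threads update a shared balance with
   no synchronization, so updates are lost (POSIX threads assumed). */
#include <pthread.h>
#include <stdio.h>

long balance = 100000;                 /* the shared resource */

void *withdraw(void *arg) {
    for (int i = 0; i < 10000; i++) {
        long tmp = balance;            /* read   */
        tmp -= 1;                      /* modify */
        balance = tmp;                 /* write: another thread may have
                                          changed balance in the meantime */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 80000, but lost updates usually leave it higher. */
    printf("final balance = %ld\n", balance);
    return 0;
}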
 Let us look at the different elements/sections of a program (a code skeleton follows this list):
 Entry Section: Decides whether a process may enter the critical section.
 Critical Section: Allows and makes sure that only one process at a time
modifies the shared data.
 Exit Section: Handles the entry of other processes into the shared data after
one process finishes its execution.
 Remainder Section: Contains the remaining part of the code, which is not
categorized in the sections above.
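
 A skeleton of these sections in C; the entry and exit routines here are empty stubs standing in for a real mechanism such as a semaphore or mutex.

/* Skeleton of the classic critical-section structure. */
#include <stdio.h>

static void entry_section(void)     { /* acquire permission to enter   */ }
static void exit_section(void)      { /* release; notify other processes */ }
static void remainder_section(void) { /* non-shared work               */ }

int main(void) {
    for (int i = 0; i < 3; i++) {
        entry_section();
        printf("in critical section (iteration %d)\n", i); /* shared access */
        exit_section();
        remainder_section();
    }
    return 0;
}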
SEMAPHORES
• Semaphores are just normal variables used to coordinate the activities of multiple
processes in a computer system. They are used to enforce mutual exclusion, avoid
race conditions and implement synchronization between processes.
• Semaphores provide two operations: wait (P) and signal (V). The wait operation
decrements the value of the semaphore, and the signal operation increments it. When
the value of the semaphore is zero, any process that performs a wait operation will be
blocked until another process performs a signal operation.
• Semaphores are used to implement critical sections, which are regions of code that
must be executed by only one process at a time. By using semaphores, processes can
coordinate access to shared resources, such as shared memory or I/O devices.
SEMAPHORES ARE OF TWO TYPES:
 BINARY SEMAPHORE –
 This is also known as a mutex lock. It can have only two values – 0 and 1. Its
value is initialized to 1. It is used to implement the solution of critical section
problems with multiple processes.
 COUNTING SEMAPHORE –
 Its value can range over an unrestricted domain. It is used to control access to a
resource that has multiple instances.

Fig 2.7
 Some points regarding P and V operation:
 P operation is also called wait, sleep, or down operation, and V operation is also
called signal, wake-up, or up operation.
 Both operations are atomic, and the semaphore is typically initialized to one. Here
atomic means that the read, modify, and update of the variable happen together,
with no preemption, i.e. in between the read, modify, and update no other
operation is performed that may change the variable.
 A critical section is surrounded by both operations to implement process
synchronization. See the below image. The critical section of Process P is in
between P and V operation.
Fig 2.8
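
 A minimal sketch of P and V guarding a critical section, using POSIX unnamed semaphores (sem_wait is P, sem_post is V; a Linux-like system is assumed).

/* P/V on a binary semaphore guarding a critical section. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                 /* the semaphore, initialized to 1 */
int shared_counter = 0;  /* shared data */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);        /* P: decrement, block if value is 0 */
        shared_counter++;    /* critical section */
        sem_post(&s);        /* V: increment, wake a waiting process */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);      /* pshared=0: shared between threads; value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expected 200000)\n", shared_counter);
    sem_destroy(&s);
    return 0;
}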
PROCESS GENERATION
 Process generation typically refers to the creation or instantiation of new processes
within an operating system. Processes are fundamental units of execution in a
computer system, and they represent running programs. Here’s an overview of the
process generation in process management:
1. Request for a New Process: The process generation begins when a user or
application requests the execution of a new program or task. This request could
be initiated through various means, such as opening a new application or
launching a background service.
2. Process Creation: Once a request is received, the operating system’s process
manager creates a new process. This involves allocating necessary resources
and setting up data structures to manage the new process.
3. Resource Allocation: The operating system allocates various resources to the
new process, including CPU time, memory space, file handles, and
input/output devices. These resources are essential for the proper execution of
the process.
4. Initialization: The newly created process is initialized with the program code,
data, and other necessary information. This includes setting the program
counter to the starting point of the program and initializing data structures.
5. Process Scheduling: Depending on the scheduling algorithm used by the
operating system, the newly created process may be added to a queue for
execution. The scheduler decides when and for how long each process will run
on the CPU.
6. Execution: The process is now in the "ready to run" state and can execute its
code on the CPU when its turn comes, based on the scheduling algorithm.
7. Termination: Once the process completes its task or is terminated by the user or
system, its resources are deallocated, and the process is removed from the
system.
8. Cleanup: Any remaining resources or data associated with the terminated
process are cleaned up to prevent memory leaks and resource wastage.
 Process generation is a crucial part of process management as it ensures that multiple
tasks or programs can run concurrently on a computer system without interfering
with each other. Properly managing the generation, execution, and termination of
processes is essential for efficient and stable system operation.
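
 On Unix-like systems these steps map onto the classic fork/exec/wait pattern; a sketch follows, where running ls is just an arbitrary example program.

/* Sketch of the process lifecycle on a Unix-like system:
   creation (fork), initialization with a new program (execlp),
   waiting, and termination/cleanup (wait + exit status). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* step 2: process creation */
    if (pid < 0) { perror("fork"); exit(EXIT_FAILURE); }

    if (pid == 0) {
        /* steps 3-4: child is initialized with new program code */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");             /* reached only if exec fails */
        exit(EXIT_FAILURE);
    }

    int status;
    waitpid(pid, &status, 0);         /* steps 6-7: wait for termination */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;                         /* OS reclaims remaining resources */
}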

PROCESS SCHEDULING
 Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process on the basis
of a particular strategy.
 Process scheduling is an essential part of a Multiprogramming operating system.
Such operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.
 CATEGORIES IN SCHEDULING
 Scheduling falls into one of two categories:
1. Non-preemptive: In this case, a process's resources cannot be taken away before the
process has finished running. Resources are switched only when the running
process terminates and transitions to a waiting state.
2. Preemptive: In this case, the OS assigns resources to a process for a
predetermined period of time. The process switches from the running state to the ready
state, or from the waiting state to the ready state, during resource allocation. This
switching happens because the CPU may give other processes priority and
substitute the currently active process with a higher-priority process.
 There are three types of process schedulers.
LONG TERM OR JOB SCHEDULER
 It brings the new process to the ‘Ready State’. It controls the Degree of Multi-
programming, i.e., the number of processes present in a ready state at any point in
time. It is important that the long-term scheduler makes a careful selection of both I/O-bound
and CPU-bound processes.
 I/O-bound tasks are those that spend much of their time on input and output operations,
while CPU-bound processes are those that spend their time on the CPU. The job
scheduler increases efficiency by maintaining a balance between the two. Long-term
schedulers operate at a high level and are typically used in batch-processing systems.
SHORT-TERM OR CPU SCHEDULER
 It is responsible for selecting one process from the ready state and scheduling it into the
running state. Note: the short-term scheduler only selects the process to schedule; it
doesn't actually load the process onto the CPU. This is where all the scheduling
algorithms are used.
 The CPU scheduler is responsible for ensuring that no starvation occurs due to
processes with high burst times. The dispatcher is responsible for loading the process
selected by the short-term scheduler onto the CPU (ready to running state); context
switching is done by the dispatcher only.
 A dispatcher does the following:
 Switching context.
 Switching to user mode.
 Jumping to the proper location in the newly loaded program.
MEDIUM-TERM SCHEDULER
 It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements
has overcommitted available memory, requiring memory to be freed up. It is helpful
in maintaining a balance between the I/O-bound and the CPU-bound processes. It
reduces the degree of multiprogramming.
 Some Other Schedulers
 I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use
various algorithms to determine the order in which I/O operations are executed,
such as FCFS (First-Come, First-Served) or RR (Round Robin).
 Real-time schedulers: In real-time systems, real-time schedulers ensure that
critical tasks are completed within a specified time frame. They can prioritize
and schedule tasks using various algorithms such as EDF (Earliest Deadline
First) or RM (Rate Monotonic).
PROCESS SCHEDULING QUEUES
 The OS maintains all Process Control Blocks (PCBs) in Process Scheduling Queues.
The OS maintains a separate queue for each of the process states and PCBs of all
processes in the same execution state are placed in the same queue. When the state of
a process is changed, its PCB is unlinked from its current queue and moved to its
new state queue.
 The Operating System maintains the following important process scheduling queues:

 Job queue – This queue keeps all the processes in the system.
 Ready queue – This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
 Device queues – The processes which are blocked due to unavailability of an
I/O device constitute this queue.
Fig 2.9
 The OS can use different policies to manage each queue (FIFO, Round Robin,
Priority, etc.). The OS scheduler determines how to move processes between the
ready and run queues which can only have one entry per processor core on the
system; in the above diagram, it has been merged with the CPU.
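
 A FIFO ready queue of PCBs can be sketched as a singly linked list; this is purely illustrative and not any real kernel's data structure.

/* Illustrative FIFO ready queue of PCBs as a singly linked list. */
#include <stdio.h>
#include <stdlib.h>

struct pcb { int pid; struct pcb *next; };

struct queue { struct pcb *head, *tail; };

void enqueue(struct queue *q, int pid) {   /* process becomes ready */
    struct pcb *p = malloc(sizeof *p);
    p->pid = pid; p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

struct pcb *dequeue(struct queue *q) {     /* scheduler picks next process */
    struct pcb *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

int main(void) {
    struct queue ready = {NULL, NULL};
    enqueue(&ready, 1); enqueue(&ready, 2); enqueue(&ready, 3);
    for (struct pcb *p; (p = dequeue(&ready)); free(p))
        printf("dispatch pid %d\n", p->pid);   /* FIFO order: 1, 2, 3 */
    return 0;
}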

CPU SCHEDULING
 CPU scheduling is a process that allows one process to use the CPU while the
execution of another process is on hold (in the waiting state) due to the unavailability
of a resource such as I/O, thereby making full use of the CPU. The aim of CPU
scheduling is to make the system efficient, fast, and fair.
 Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue to be executed. The selection process is carried out by
the short-term scheduler (or CPU scheduler). The scheduler selects from among the
processes in memory that are ready to execute and allocates the CPU to one of them.
CPU SCHEDULING: DISPATCHER
 Another component involved in the CPU scheduling function is the Dispatcher. The
dispatcher is the module that gives control of the CPU to the process selected by the
short-term scheduler. This function involves:
 Switching context
 Switching to user mode
 Jumping to the proper location in the user program to restart that program from
where it left off last time.
 The dispatcher should be as fast as possible, given that it is invoked during every
process switch. The time taken by the dispatcher to stop one process and start another
process is known as the Dispatch Latency. Dispatch Latency can be explained using
the below figure:
Fig 2.10
 TYPES OF CPU SCHEDULING
 CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for an I/O
request, or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example,
on completion of I/O).
4. When a process terminates.

 In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if
one exists in the ready queue) must be selected for execution. There is a choice,
however, in circumstances 2 and 3.
SCHEDULING CONCEPT
 Scheduling in operating systems refers to the process of determining which tasks or
processes get access to the CPU (Central Processing Unit) and for how long. It plays
a crucial role in ensuring efficient resource utilization and system responsiveness.
Here are some key concepts related to scheduling in operating systems:
1. Process: A process is a program in execution. The operating system manages
multiple processes concurrently, and scheduling determines the order in which
they execute.
2. CPU Scheduler: The CPU scheduler is responsible for selecting the next
process to run from the queue of ready processes. It aims to optimize factors
like CPU utilization, throughput, response time, and fairness.
3. Preemptive vs. Non-preemptive Scheduling: Preemptive scheduling allows the
scheduler to interrupt a running process and move it to the ready queue to give
CPU time to another process with higher priority. Non-preemptive scheduling
only switches processes voluntarily, such as when a process terminates or
enters the waiting state.
4. Priority Scheduling: Processes are assigned priorities, and the scheduler selects the
process with the highest priority to execute next. This can be preemptive or non-
preemptive.
5. Round Robin Scheduling: Each process is assigned a fixed time slice (quantum) to
run on the CPU. When the time slice expires, the process is moved to the end of the
queue, allowing other processes to run.
6. First-Come-First-Serve (FCFS) Scheduling: Processes are executed in the order they
arrive in the ready queue. It’s a non-preemptive scheduling algorithm.
7. Shortest Job First (SJF) Scheduling: The scheduler selects the process with the
smallest burst time (execution time) to run next. It can be preemptive or non-
preemptive.
8. Multi-level Queue Scheduling: Processes are categorized into multiple priority
queues, and each queue may have its scheduling algorithm. Processes move between
queues based on their priority or characteristics.
9. Multi-level Feedback Queue Scheduling: Similar to multi-level queues but with the
ability for processes to move between queues based on their behavior and execution
history. This provides flexibility in handling varying workloads.
10. Aging: A mechanism in some scheduling algorithms to prevent starvation, where
lower-priority processes gradually gain higher priority over time.
 Scheduling algorithms aim to strike a balance between different performance
metrics, such as fairness, response time, and throughput, depending on the specific
requirements of the system and the applications it runs. The choice of scheduling
algorithm can significantly impact system performance and user experience.
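
 As a concrete illustration, the sketch below computes FCFS waiting times for made-up burst times (24, 3, 3), showing how one long process ahead of short ones inflates the average wait.

/* FCFS scheduling sketch: each process waits for the total burst
   time of the processes ahead of it. Burst times are example values. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* example burst times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d: waiting time = %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];                /* next process waits this long too */
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

 For this data it prints an average waiting time of 17.00 (waits of 0, 24, and 27).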
DEADLOCK
 A deadlock is a situation where a set of processes are blocked because each process
is holding a resource and waiting for another resource acquired by some other
process.
 Consider an example when two trains are coming toward each other on the same
track and there is only one track, none of the trains can move once they are in front
of each other. A similar situation occurs in operating systems when there are two or
more processes that hold some resources and wait for resources held by other(s). For
example, in the below diagram, Process 1 is holding Resource 1 and waiting for
resource 2 which is acquired by process 2, and process 2 is waiting for resource 1.
Fig 2.11
EXAMPLES OF DEADLOCK
 The system has 2 tape drives. P1 and P2 each hold one tape drive and each needs
another one.
 Semaphores A and B, initialized to 1, P0, and P1 are in deadlock as follows:
 P0 executes wait(A) and is then preempted.
 P1 executes wait(B).
 Now P0 and P1 are deadlocked (a runnable sketch of this scenario appears after these examples).
 Assume 200K bytes of memory are available for allocation, and two processes each
hold part of it while requesting more than what remains; neither request can be
satisfied, so both wait forever.
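
 The semaphore example above can be reproduced with two POSIX mutexes acquired in opposite orders; running this sketch usually hangs, demonstrating the deadlock.

/* Two mutexes acquired in opposite orders: circular wait. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

void *p0(void *arg) {
    pthread_mutex_lock(&A);   /* P0: wait(A) */
    sleep(1);                 /* give P1 time to grab B */
    pthread_mutex_lock(&B);   /* P0: wait(B) -- blocks forever */
    pthread_mutex_unlock(&B);
    pthread_mutex_unlock(&A);
    return NULL;
}

void *p1(void *arg) {
    pthread_mutex_lock(&B);   /* P1: wait(B) */
    sleep(1);
    pthread_mutex_lock(&A);   /* P1: wait(A) -- blocks forever */
    pthread_mutex_unlock(&A);
    pthread_mutex_unlock(&B);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, p0, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t0, NULL);   /* never returns: circular wait */
    pthread_join(t1, NULL);
    puts("finished (never printed)");
    return 0;
}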

 Deadlock can arise if the following four conditions hold simultaneously (the necessary
conditions):
1. Mutual Exclusion: Two or more resources are non-shareable (only one process
can use a resource at a time).
2. Hold and Wait: A process is holding at least one resource and waiting for
additional resources held by other processes.
3. No Preemption: A resource cannot be taken from a process unless the process
releases the resource.
4. Circular Wait: A set of processes waiting for each other in circular form.

DEADLOCK PREVENTION:
■ The idea is to not let the system enter a deadlock state. The system makes sure
that the four conditions mentioned above do not arise. These techniques are very costly,
so we use them in cases where our priority is making the system deadlock-free.
■ Prevention is done by negating one of the above-mentioned necessary conditions for
deadlock, and can be done in four different ways:
MUTUAL EXCLUSION
 Mutual exclusion, from the resource point of view, means that a resource can never
be used by more than one process simultaneously. That is fair enough, but it is the
main reason behind deadlock: if a resource could be used by more than one
process at the same time, a process would never have to wait for it.
 However, if we can make resources behave in a non-mutually-exclusive manner,
then deadlock can be prevented.
SPOOLING
 For a device like a printer, spooling can work. There is memory associated with the
printer which stores jobs from each process. Later, the printer collects all
the jobs and prints each one of them according to FCFS. By using this mechanism, a
process doesn't have to wait for the printer and can continue whatever it was
doing; later, it collects the output when it is produced.
Fig 2.12
 Although spooling can be an effective approach to violating mutual exclusion, it
suffers from two kinds of problems.
1. It cannot be applied to every resource.
2. After some time, a race condition may arise between the processes competing
for space in the spool.
 We cannot force a resource to be used by more than one process at the same time,
since it would not be fair and serious performance problems could arise.
Therefore, we cannot practically violate mutual exclusion for a process.
 Eliminate Hold and Wait: Allocate all required resources to the process before the
start of its execution; this way, the hold-and-wait condition is eliminated, but it leads to
low device utilization. For example, if a process requires a printer only at a later time and
we have allocated the printer before the start of its execution, the printer will remain
blocked until the process has completed its execution. The process must make a new
request for resources after releasing the current set of resources. This solution may lead
to starvation.
Fig 2.13
 Eliminate No Preemption: Preempt resources from a process when they are
required by other high-priority processes.
 Eliminate Circular Wait: Each resource is assigned a numerical order, and a
process may request resources only in increasing order of numbering. For
example, if process P1 has been allocated resource R5, then a subsequent request by P1
for R4 or R3 (numbered lower than R5) will not be granted; only requests for resources
numbered higher than R5 will be granted (see the lock-ordering sketch below).
Fig 2.14
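
 The resource-ordering rule is easy to express in code: every process acquires locks in ascending resource number only. In this sketch the mutexes r[3] and r[5] stand in for resources R3 and R5.

/* Circular-wait prevention sketch: all threads acquire mutexes in
   ascending resource number, so no cycle can form. */
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t r[6];   /* resources R0..R5 */

void *worker(void *arg) {
    /* Always lock the lower-numbered resource first. */
    pthread_mutex_lock(&r[3]);
    pthread_mutex_lock(&r[5]);
    printf("thread %lu holds R3 and R5\n", (unsigned long)pthread_self());
    pthread_mutex_unlock(&r[5]);
    pthread_mutex_unlock(&r[3]);
    return NULL;
}

int main(void) {
    for (int i = 0; i < 6; i++) pthread_mutex_init(&r[i], NULL);
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}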
DEADLOCK DETECTION AND RECOVERY
■ Deadlock detection and recovery is the process of detecting and resolving deadlocks
in an operating system. A deadlock occurs when two or more processes are blocked,
waiting for each other to release the resources they need. This can lead to a system-
wide stall, where no process can make progress.
■ There are two main approaches to deadlock detection and recovery:
 Prevention: The operating system takes steps to prevent deadlocks from
occurring by ensuring that the system is always in a safe state, where deadlocks
cannot occur. This is achieved through resource allocation algorithms such as
the Banker’s Algorithm.
 Detection and Recovery: If deadlocks do occur, the operating system must
detect and resolve them. Deadlock detection algorithms, such as the Wait-For
Graph, are used to identify deadlocks, and recovery algorithms, such as the
Rollback and Abort algorithm, are used to resolve them. The recovery algorithm
releases the resources held by one or more processes, allowing the system to
continue to make progress.
 Difference Between Prevention and Detection/Recovery: Prevention aims to avoid
deadlocks altogether by carefully managing resource allocation, while detection and
recovery aim to identify and resolve deadlocks that have already occurred.
 Deadlock detection and recovery is an important aspect of operating system design
and management, as it affects the stability and performance of the system. The choice
of deadlock detection and recovery approach depends on the specific requirements of
the system and the trade-offs between performance, complexity, and risk tolerance.
The operating system must balance these factors to ensure that deadlocks are
effectively detected and resolved.
 Deadlock prevention was discussed above, and deadlock avoidance is discussed later in
this unit; here, the deadlock detection and recovery technique for handling deadlock is discussed.
DEADLOCK DETECTION:
1. If resources have a single instance –
 In this case for Deadlock detection, we can run an algorithm to check for the
cycle in the Resource Allocation Graph. The presence of a cycle in the graph is
a sufficient condition for deadlock.
2. If there are multiple instances of resources –
 Detection of a cycle is necessary but not a sufficient condition for deadlock;
in this case, the system may or may not be in deadlock, depending on the
situation.
3. Wait-For Graph Algorithm –
 The Wait-For Graph algorithm is a deadlock detection algorithm used in
systems where each resource has a single instance. It works by constructing a
wait-for graph, a directed graph in which an edge from one process to another
means the first is waiting for a resource held by the second; a cycle in this
graph indicates a deadlock. A DFS cycle-detection sketch follows.
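
 Deadlock detection on a wait-for graph reduces to cycle detection; below is a small DFS sketch over a hard-coded example graph (the edges are made-up data).

/* Wait-for graph cycle detection by DFS. An edge i -> j means
   process i is waiting for process j. Example graph: 0 -> 1 -> 2 -> 0
   forms a cycle (deadlock). */
#include <stdio.h>

#define N 4
int wfg[N][N] = {
    {0,1,0,0},   /* P0 waits for P1 */
    {0,0,1,0},   /* P1 waits for P2 */
    {1,0,0,0},   /* P2 waits for P0 -> cycle */
    {0,0,0,0},   /* P3 waits for nobody */
};

/* state: 0 = unvisited, 1 = on current DFS path, 2 = done */
int state[N];

int dfs(int u) {
    state[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!wfg[u][v]) continue;
        if (state[v] == 1) return 1;          /* back edge: cycle */
        if (state[v] == 0 && dfs(v)) return 1;
    }
    state[u] = 2;
    return 0;
}

int main(void) {
    for (int u = 0; u < N; u++)
        if (state[u] == 0 && dfs(u)) {
            puts("deadlock detected (cycle in wait-for graph)");
            return 0;
        }
    puts("no deadlock");
    return 0;
}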
DEADLOCK RECOVERY:
 A traditional operating system such as Windows doesn’t deal with deadlock recovery
as it is a time and space-consuming process. Real-time operating systems use
Deadlock recovery.
 Killing the process
 Kill all the processes involved in the deadlock, or kill them one by one: after
killing each process, check for deadlock again and keep repeating until the
system recovers. Killing the processes one by one helps the system break the
circular wait condition.
 Resource Preemption
 Resources are preempted from the processes involved in the deadlock, and the
preempted resources are allocated to other processes so that there is a possibility
of recovering the system from the deadlock. In this case, however, the preempted
processes may suffer starvation.
 Concurrency Control
 Concurrency control mechanisms are used to prevent data inconsistencies in systems
with multiple concurrent processes. These mechanisms ensure that concurrent
processes do not access the same data at the same time, which can lead to
inconsistencies and errors. Deadlocks can occur in concurrent systems when two or
more processes are blocked, waiting for each other to release the resources they need.
This can result in a system-wide stall, where no process can make progress.
Concurrency control mechanisms can help prevent deadlocks by managing access to
shared resources and ensuring that concurrent processes do not interfere with each
other.
ADVANTAGES AND DISADVANTAGES:
Advantages of Deadlock Detection and Recovery in Operating Systems:
 Improved System Stability: Deadlocks can cause system-wide stalls, and detecting
and resolving deadlocks can help to improve the stability of the system.
 Better Resource Utilization: By detecting and resolving deadlocks, the operating
system can ensure that resources are efficiently utilized and that the system remains
responsive to user requests.
 Better System Design: Deadlock detection and recovery algorithms can provide
insight into the behavior of the system and the relationships between processes and
resources, helping to inform and improve the design of the system.
Disadvantages of Deadlock Detection and Recovery in Operating Systems:
 Performance Overhead: Deadlock detection and recovery algorithms can introduce
a significant overhead in terms of performance, as the system must regularly check
for deadlocks and take appropriate action to resolve them.
 Complexity: Deadlock detection and recovery algorithms can be complex to
implement, especially if they use advanced techniques such as the Resource
Allocation Graph or Time stamping.
 False Positives and Negatives: Deadlock detection algorithms are not perfect and
may produce false positives or negatives, indicating the presence of deadlocks when
they do not exist or failing to detect deadlocks that do exist.
 Risk of Data Loss: In some cases, recovery algorithms may require rolling back the
state of one or more processes, leading to data loss or corruption.
 Overall, the choice of deadlock detection and recovery approach depends on the
specific requirements of the system, the trade-offs between performance, complexity,
and accuracy, and the risk tolerance of the system. The operating system must
balance these factors to ensure that deadlocks are effectively detected and resolved.
DEADLOCK AVOIDANCE
 In complex systems involving multiple processes and shared resources, the potential
for deadlocks arises when processes wait for each other to release resources, causing
a standstill. The resulting deadlocks can cause severe issues in computer systems,
such as performance degradation and even system crashes.
 To prevent such problems, the technique of deadlock avoidance is employed. It
entails scrutinizing the requests made by processes for resources and evaluating the
available resources to determine if the grant of such requests would lead to a
deadlock.
 In cases where granting a request would result in a deadlock, the system denies the
request. Deadlock avoidance is a crucial aspect of operating system design and plays
an indispensable role in upholding the dependability and steadiness of computer
systems.
Safe State and Unsafe State
 A safe state refers to a system state where the allocation of resources to each process
ensures the avoidance of deadlock. The successful execution of all processes is
achievable, and the likelihood of a deadlock is low. The system attains a safe state
when a suitable sequence of resource allocation enables the successful completion of
all processes.
 Conversely, an unsafe state is a system state from which a deadlock may occur. The
successful completion of all processes is not assured, and the risk of deadlock is
high. The system is in an unsafe state when no sequence of resource allocations
ensures the successful execution of all processes.
Deadlock Avoidance Algorithms
 When resource categories have only single instances of their resources, Resource-
Allocation Graph Algorithm is used. In this algorithm, a cycle is a necessary and
sufficient condition for deadlock.
 When resource categories have multiple instances of their resources, Banker’s
Algorithm is used. In this algorithm, a cycle is a necessary but not a sufficient
condition for deadlock.
Resource-Allocation Graph Algorithm
 Resource Allocation Graph (RAG) is a popular technique used for deadlock
avoidance. It is a directed graph that represents the processes in the system, the
resources available, and the relationships between them. A process node in the RAG
has two types of edges, request edges, and assignment edges. A request edge
represents a request by a process for a resource, while an assignment edge represents
the assignment of a resource to a process.
 To determine whether the system is in a safe state or not, the RAG is analyzed to
check for cycles. If there is a cycle in the graph, it means that the system is in an
unsafe state, and granting a resource request can lead to a deadlock. In contrast, if
there are no cycles in the graph, it means that the system is in a safe state, and
resource allocation can proceed without causing a deadlock.
Fig 2.15
 The RAG technique is straightforward to implement and provides a clear visual
representation of the processes and resources in the system. It is also an effective
way to identify the cause of a deadlock if one occurs. However, one of the main
limitations of the RAG technique is that it assumes that all resources in the system
are allocated at the start of the analysis. This assumption can be unrealistic in
practice, where resource allocation can change dynamically during system operation.
Therefore, other techniques such as the Banker’s Algorithm are used to overcome
this limitation.

RECOVERY FROM DEADLOCK


Manual Intervention:
 When a deadlock is detected, one option is to inform the operator and let them
handle the situation manually. While this approach allows for human judgment and
decision-making, it can be time-consuming and may not be feasible in large-scale
systems.
Automatic Recovery:
 An alternative approach is to enable the system to recover from deadlock
automatically. This method involves breaking the deadlock cycle by either aborting
processes or preempting resources. Let’s delve into these strategies in more detail.
Resource Preemption:
Selecting a victim:
 Resource preemption involves choosing which resources and processes should be
preempted to break the deadlock. The selection order aims to minimize the overall
cost of recovery. Factors considered for victim selection may include the number of
resources held by a deadlocked process and the amount of time the process has
consumed.
BANKER’S ALGORITHM
 The Banker's algorithm is a deadlock avoidance algorithm used in operating systems.
It was proposed by Edsger Dijkstra in 1965. The Banker's algorithm works on the
principle of ensuring that the system has enough resources to allocate to each process
so that the system never enters a deadlock state.
 It works by keeping track of the total number of resources available in the system
and the number of resources allocated to each process.
 The algorithm is used to prevent deadlocks that can occur when multiple processes
are competing for a finite set of resources. The resources can be of different types,
such as memory, CPU cycles, or I/O devices.
 It works by first analyzing the current state of the system and determining if granting
a resource request from a process will result in a safe state. A state is considered safe
if there is at least one sequence of resource allocations that can satisfy all processes
without causing a deadlock.
 The Banker's algorithm assumes that each process declares its maximum resource
requirements upfront. Based on this information, the algorithm allocates resources to
each process such that the total number of allocated resources never exceeds the total
number of available resources.
 The algorithm does not grant access to resources that could potentially lead to a
deadlock situation.
 The Banker’s algorithm uses a matrix called the “allocation matrix” to keep track of
the resources allocated to each process, and a “request matrix” to keep track of the
resources requested by each process.
 It also uses a “need matrix” to represent the resources that each process still needs to
complete its execution.
 To determine if a request can be granted, the algorithm checks if there are enough
available resources to satisfy the request, and then checks if granting the request will
still result in a safe state.
 If the request can be granted safely, the algorithm grants the resources and updates
the allocation matrix, request matrix, and need matrix accordingly. If the request
cannot be granted safely, the process must wait until sufficient resources become
available.
1. Initialize the system
 Define the number of processes and resource types.
 Define the total number of available resources for each resource type.
 Create a matrix called the “allocation matrix” to represent the current resource
allocation for each process.
 Create a matrix called the “need matrix” to represent the remaining resource
needs for each process.
2. Define a request
 A process requests a certain number of resources of a particular type.
3. Check if the request can be granted
 Check if the requested resources are available.
 If the requested resources are not available, the process must wait.
 If the requested resources are available, go to the next step.
4. Check if the system is in a safe state
 Simulate the allocation of the requested resources to the process.
 Check if this allocation results in a safe state, meaning there is a sequence of
allocations that can satisfy all processes without leading to a deadlock.
 If the state is safe, grant the request by updating the allocation matrix and the
need matrix.
 If the state is not safe, do not grant the request and let the process wait.
5. Release the resources
 When a process has finished its execution, it releases its allocated resources by
updating the allocation matrix and the need matrix.

 The above steps are repeated for each resource request made by any process in the
system. Overall, the Banker's algorithm is an effective way to avoid deadlocks in
resource-constrained systems by carefully managing resource allocations and
predicting potential conflicts before they arise. A sketch of the safety check follows.
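
 A sketch in C of the safe-state check at the heart of the Banker's algorithm; the 5-process, 3-resource matrices below are made-up example data, not part of the algorithm itself.

/* Banker's safety-algorithm sketch: repeatedly look for a process
   whose remaining Need fits in Work; if all can finish, the state
   is safe. Example data: 5 processes, 3 resource types. */
#include <stdio.h>

#define P 5
#define R 3

int main(void) {
    int alloc[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int max[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int avail[R]    = {3,3,2};

    int need[P][R], finish[P] = {0}, safe_seq[P], count = 0;
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Alloc */

    int work[R];
    for (int j = 0; j < R; j++) work[j] = avail[j]; /* Work = Available */

    while (count < P) {
        int found = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int fits = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = 0; break; }
            if (fits) {                      /* Pi can run to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];  /* Pi releases its resources */
                finish[i] = 1;
                safe_seq[count++] = i;
                found = 1;
            }
        }
        if (!found) { puts("system is NOT in a safe state"); return 1; }
    }

    printf("safe sequence:");
    for (int i = 0; i < P; i++) printf(" P%d", safe_seq[i]);
    putchar('\n');
    return 0;
}

 For this example data it prints the safe sequence P1, P3, P4, P0, P2.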
REFERENCE
 https://www.google.com/amp/s/www.geeksforgeeks.org/introduction-of-process-man
agement/amp/
 https://padakuu.com/process-concept-56-article
 https://www.javatpoint.com/os-critical-section-problem
 https://www.scaler.com/topics/operating-system/process-synchronization-in-os/
 https://www.geeksforgeeks.org/semaphores-in-process-synchronization/
 https://www.tutorialspoint.com/operating_system/os_process_scheduling.Html
 https://www.geeksforgeeks.org/cpu-scheduling-in-operating-systems/
 https://www.techtarget.com/whatis/definition/deadlock
 https://www.geeksforgeeks.org/introduction-of-deadlock-in-operating-system/
 https://www.geeksforgeeks.org/bankers-algorithm-in-operating-system-2/
