Unit V - New

The document discusses topics related to distributed scheduling and deadlocks. It covers load distributing algorithms, task migration issues, and distributed deadlock algorithms. Distributed scheduling requires coordinating tasks across multiple nodes, which can involve global or local schedulers. An effective scheduling strategy balances workloads to optimize resource usage and system efficiency.

Unit V

Topics covered in Unit V

Distributed Scheduling and Deadlock: Distributed Scheduling - Introduction,
Clocks, events and process states, Logical time and logical clocks, Global
states, Coordination and Agreement - Introduction, Distributed mutual
exclusion, Issues in Load Distributing, Components of Load Distributing
Algorithms, Load Distributing Algorithms, Task Migration and its issues.
Deadlock - Issues in deadlock detection and resolution, Deadlock Handling
Strategies, Distributed Deadlock Algorithms.
Distributed Scheduling and Deadlock: Distributed Scheduling

• Implementing a distributed system incurs costs for hardware support and agreements
on service expectations. Optimizing tasks through proper scheduling helps reduce the
overall cost of computation.

• A task is a logical unit of work that makes progress toward a unified goal of the system. One
such set of tasks could be the images that a distributed model is training or predicting on.

• Managing task allocation, deciding where and to whom each task goes, is the responsibility of
a scheduler. The scheduler of a distributed system plays a role akin to the process
scheduler in any operating system.
Distributed Scheduling and Deadlock: Distributed Scheduling

• Scheduling tasks from the primary to the workers can occur at multiple stages. There can be a
global scheduler that directs all tasks to all connected workers. There can also be local
schedulers that handle incoming and outgoing tasks and replies on both the primary and the worker
nodes. Organizing this dynamic sequence of events is a major component of the system's
efficiency.

• In a centralized distributed system, it is common to see a global scheduler managing
the allocation of resources and tasks between the primary and the workers. In a
decentralized distributed system, however, there may be multiple schedulers, each responsible
for part of the overall system.
Distributed Scheduling
Scheduling in Distributed Systems:

The techniques that are used for scheduling the processes in distributed systems are as follows:

1. Task Assignment Approach: In the Task Assignment Approach, the user-submitted process is
composed of multiple related tasks which are scheduled to appropriate nodes in a system to
improve the performance of a system.

2. Load Balancing Approach: In the Load Balancing Approach, as the name implies, the workload
is balanced among the nodes of the system.

3. Load Sharing Approach: In the Load Sharing Approach, it is ensured that no node sits idle
while processes elsewhere are waiting to be processed.
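As a toy illustration of the load balancing approach, the sketch below greedily assigns each task to the currently least-loaded node. The node names, cost model, and greedy policy are assumptions for illustration, not a prescribed algorithm:

```python
import heapq

def balance(tasks, node_loads):
    """Assign each task cost to the currently least-loaded node.

    tasks: list of task costs; node_loads: dict node -> current load.
    Returns a dict node -> list of assigned task costs.
    (Illustrative greedy scheme; real load balancers also weigh
    transfer cost and the staleness of load information.)
    """
    heap = [(load, node) for node, load in node_loads.items()]
    heapq.heapify(heap)
    assignment = {node: [] for node in node_loads}
    for cost in sorted(tasks, reverse=True):  # place largest tasks first
        load, node = heapq.heappop(heap)      # least-loaded node so far
        assignment[node].append(cost)
        heapq.heappush(heap, (load + cost, node))
    return assignment
```

Running `balance([5, 3, 2], {"A": 0, "B": 0})` spreads the work so both nodes finish with equal load, which is the stated goal of the load balancing approach.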
Characteristics of a Good Scheduling Algorithm:
The following are the required characteristics of a Good Scheduling Algorithm:
1. The scheduling algorithms that require prior knowledge about the properties and resource
requirements of a process submitted by a user put a burden on the user. Hence, a good
scheduling algorithm does not require prior specification regarding the user-submitted
process.
2. A good scheduling algorithm must exhibit the dynamic scheduling of processes as the
initial allocation of the process to a system might need to be changed with time to balance
the load of the system.
3. The algorithm must be flexible enough to make process migration decisions when there is a
change in the system load.
4. The algorithm must possess stability so that processors are utilized optimally. This is
possible only when thrashing overhead is minimized and no time is wasted in process
migration.
Characteristics of a Good Scheduling Algorithm:
5. An algorithm that makes decisions quickly is preferable. For example, heuristic methods
require less computational work and give near-optimal results, in comparison to an
exhaustive search that provides an optimal solution but takes more time.
6. A good scheduling algorithm gives balanced system performance by maintaining minimum
global state information (such as CPU load), since overhead grows with the amount of
global state information collected.
7. The algorithm should not be affected by the failure of one or more nodes of the system.
Furthermore, even if a link fails and the nodes of a group get separated into two or more
groups, the algorithm should not break down. So the algorithm must possess decentralized
decision-making capability, in which only the available nodes are considered when taking
a decision, thus providing fault tolerance.
Characteristics of a Good Scheduling Algorithm:
8. A good scheduling algorithm is scalable: it remains usable as the number of nodes in the
system increases. An algorithm that inquires about the workload of all nodes and then
selects the one with the least load is not a good approach, because it scales poorly: the
inquirer receives many replies almost simultaneously, and the time spent processing reply
messages grows too large as the number of nodes (N) increases. A straightforward remedy
is to examine only m of the N nodes.
9. A good scheduling algorithm must provide fairness of service. In an attempt to balance
the workload across all nodes of the system, heavily loaded nodes may benefit at the
expense of lightly loaded ones, which then suffer poorer response times than stand-alone
systems would. Hence the solution lies in the concept of load sharing, in which a node
shares some of its resources only as long as its own users are not affected.
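The "examine only m of N nodes" idea from point 8 can be sketched as follows. The probing interface is hypothetical (a real system would exchange probe messages over the network); the point is that the message count stays O(m) instead of O(N):

```python
import random

def pick_node(nodes, get_load, m=3, seed=None):
    """Select a target node by probing only m of the N nodes.

    nodes: list of node ids; get_load: callable node -> current load
    (a stand-in for sending a probe message and reading the reply).
    Probing a small random sample keeps the message count at O(m)
    rather than O(N), which is what makes the scheme scalable.
    """
    rng = random.Random(seed)
    probed = rng.sample(nodes, min(m, len(nodes)))  # m random candidates
    return min(probed, key=get_load)                # least-loaded probed node
```

With m fixed (say 2 or 3), the selection cost no longer grows with the system size, at the price of sometimes missing the globally least-loaded node.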
Load Balancing in Distributed Systems:

The Load Balancing approach refers to the division of load among the processing elements of
a distributed system. The excess load of one processing element is distributed to other
processing elements that have less load, according to defined limits. In other words, the
load at each processing element is maintained such that it is neither overloaded nor idle
during the execution of a program, in order to maximize system throughput, which is the
ultimate goal of distributed systems. This approach keeps all processing elements equally
busy, speeding up the overall work so that all processors complete their tasks at
approximately the same time.
Types of Load Balancing Algorithms:
• Static Load Balancing Algorithm: In the Static Load Balancing Algorithm, while distributing
load the current state of the system is not taken into account. These algorithms are simpler in
comparison to dynamic load balancing algorithms. Types of Static Load Balancing Algorithms
are as follows:
• Deterministic: In Deterministic Algorithms, the properties of the nodes and processes are taken
into account when allocating processes to nodes. Because of this deterministic
characteristic, the algorithm is difficult to optimize for better results and also costs
more to implement.
• Probabilistic: In Probabilistic Algorithms, statistical attributes of the system, such as the
number of nodes and the topology, are used to form process placement rules. These
algorithms are simpler but do not give better performance.
Types of Load Balancing Algorithms:
Dynamic Load Balancing Algorithm: Dynamic Load Balancing Algorithm takes into account the current
load of each node or computing unit in the system, allowing for faster processing by dynamically
redistributing workloads away from overloaded nodes and toward underloaded nodes. Dynamic
algorithms are significantly more difficult to design, but they can give superior results, especially when
execution durations for distinct jobs vary greatly. Furthermore, because dedicated nodes for task
distribution are not required, a dynamic load balancing architecture is frequently more modular. Types of
Dynamic Load Balancing Algorithms are as follows:
• Centralized: In Centralized Load Balancing Algorithms, the task of handling requests for process
scheduling is carried out by a centralized server node. The benefit of this approach is efficiency,
as all the information is held at a single node, but it suffers from a reliability problem because
of its lower fault tolerance. Moreover, the central node can become a bottleneck as the number of
requests increases.
• Distributed: In Distributed Load Balancing Algorithms, the decision task of assigning processes is
distributed physically among the individual nodes of the system. Unlike Centralized Load Balancing
Algorithms, there is no need to hold all state information at a single node, so decisions are made quickly.
Types of Load Balancing Algorithms:
Cooperative: In Cooperative Load Balancing Algorithms, as the name implies, scheduling
decisions are taken with the cooperation of the entities in the system. The benefit lies in the
stability of this approach. The drawback is the added complexity, which leads to more overhead
than Non-cooperative algorithms.

Non-cooperative: In Non-cooperative Load Balancing Algorithms, scheduling decisions are
taken by the individual entities of the system, which act as autonomous entities. The benefit
is that only minor overhead is involved, owing to the independent nature of the decisions. The
drawback is that these algorithms may be less stable than Cooperative algorithms.
Issues in Designing Load-balancing Algorithms
Many issues need to be taken into account while designing Load-balancing Algorithms:

1. Load Estimation Policy: Determining the load of a node in a distributed system.

2. Process Transfer Policy: Deciding whether a process should be executed locally or remotely.

3. State Information Exchange: Determining the strategy for exchanging system load information
among the nodes in a distributed system.

4. Location Policy: Determining the selection of destination nodes for the migration of a process.

5. Priority Assignment: Determining whether priority is given to a local or a remote process on a
node for execution.

6. Migration Limit Policy: Determining the limit on the number of times a process may migrate.
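A minimal sketch of a process transfer policy (issue 2 above), assuming a simple double-threshold scheme; the threshold values here are made-up tuning parameters, not prescribed by the text:

```python
def transfer_policy(local_load, high=0.8, low=0.2):
    """Double-threshold transfer policy (illustrative sketch).

    Returns 'sender' if the node should try to off-load work,
    'receiver' if it should solicit work from overloaded peers,
    and 'ok' if it should keep executing locally.
    high/low are assumed tuning parameters (fractions of capacity).
    """
    if local_load > high:
        return "sender"      # overloaded: candidate to migrate work away
    if local_load < low:
        return "receiver"    # underloaded: candidate to accept work
    return "ok"              # in the comfortable band: do nothing
```

The two-threshold band between `low` and `high` is what gives such policies stability: a node near the middle of the band does not oscillate between sending and receiving.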
Gang scheduling

In computer science, gang scheduling is a scheduling algorithm for parallel systems that
schedules related threads or processes to run simultaneously on different processors.

A gang is a set of related tasks. Gang scheduling, then, is the scheduling of such gangs in
an efficient manner. What separates gang scheduling from other forms of scheduling is that it
treats the gang as the scheduling quantum: the members of a gang are scheduled together. A
gang might consist of multiple processes, multiple threads, or a combination of both.
Gang scheduling uses an Ousterhout matrix as a data
structure to facilitate the scheduling. It is a
two-dimensional matrix in which a row represents a time
slice and a column represents a processor.

              P1  P2  P3  P4  P5
Time Slice 0  J1  J1  J1  J1  J1
Time Slice 1  J2  J2  J2  J2  J2
Time Slice 2  J3  J3  J4  J4  J4

J1-J4 represent gangs and P1-P5 represent processors.
Gang scheduling
As gang scheduling involves multiple processes and threads, there is a need for synchronization.
Broadly, there are two synchronization methods:
• Concurrent gang scheduling: A synchronization module composes all the scheduling. All the gangs run for
a specific time interval 't'; each is then interrupted so that another gang can begin.
• SHARE scheduling system: Gangs with the same resource utilization are collected and executed for a fixed
time period. The fixed period may change each time, and it is greater than or equal to the minimum
time for which the tasks can run non-preemptively.
Like process scheduling, there are various types of gang scheduling, namely:
1. Bag of gangs (BoG)
2. Adapted first come first served (AFCFS)
3. Largest gang first served (LGFS)
4. Paired gang scheduling
CLOCKS, EVENTS AND PROCESS STATES

Each process executes on a single processor, and the processors do not share memory.

Each process pi has a state si that, in general, it transforms as it executes. The process's state
includes the values of all the variables within it. Its state may also include the values of any
objects in its local operating system environment that it affects, such as files. We assume that
processes cannot communicate with one another in any way except by sending messages through
the network.

We define an event to be the occurrence of a single action that a process carries out as it executes:
a communication action or a state-transforming action. The sequence of events within a single
process pi can be placed in a single, total ordering, which we denote by the relation →i between the
events.
CLOCKS, EVENTS AND PROCESS STATES

We have assumed that each process executes on a single processor. Now we can define the
history of process pi to be the series of events that take place within it, ordered as we have
described by the relation →i.

Clocks: We have seen how to order the events at a process, but not how to timestamp them,
i.e., to assign to them a date and time of day. Computers each contain their own physical
clocks. These clocks are electronic devices that count oscillations occurring in a crystal at a
definite frequency, and typically divide this count and store the result in a counter register.
Clock devices can be programmed to generate interrupts at regular intervals so that, for
example, time slicing can be implemented; however, we shall not concern ourselves with this
aspect of clock operation.
Logical time and logical clocks

A logical clock is a mechanism for capturing chronological and causal relationships in a


distributed system. Distributed systems may have no physically synchronous global
clock, so a logical clock allows global ordering on events from different processes in
such systems.

Logical local time is used by the process to mark its own events, and logical global time
is the local information about global time. A special protocol is used to update logical
local time after each local event, and logical global time when processes exchange data.
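One concrete protocol of this kind is Lamport's logical clock. The sketch below is a minimal single-process illustration of the two update rules (tick on a local event, merge on receive); a real system would attach the timestamp to each outgoing message:

```python
class LamportClock:
    """Minimal Lamport logical clock sketch.

    Local events increment the counter; a message carries its
    sender's timestamp, and the receiver jumps past it. This yields
    timestamps consistent with the causal (happened-before) order.
    """
    def __init__(self):
        self.time = 0

    def tick(self):
        """Rule 1: increment before each local event."""
        self.time += 1
        return self.time

    def send(self):
        """A send is a local event; its timestamp travels with the message."""
        return self.tick()

    def receive(self, msg_time):
        """Rule 2: on receive, jump past the message timestamp."""
        self.time = max(self.time, msg_time) + 1
        return self.time
```

For example, if process A sends at logical time 1 and process B (still at 0) receives that message, B's clock jumps to 2, so the receive is correctly ordered after the send.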
List of Time Zones and Abbreviations
Name Description Relative to GMT
GMT Greenwich Mean Time GMT
UTC Universal Coordinated Time GMT
ECT European Central Time GMT+1:00
EET Eastern European Time GMT+2:00
ART (Arabic) Egypt Standard Time GMT+2:00
EAT Eastern African Time GMT+3:00
MET Middle East Time GMT+3:30
NET Near East Time GMT+4:00
PLT Pakistan Lahore Time GMT+5:00
IST India Standard Time GMT+5:30
BST Bangladesh Standard Time GMT+6:00
VST Vietnam Standard Time GMT+7:00
CTT China Taiwan Time GMT+8:00
JST Japan Standard Time GMT+9:00
ACT Australia Central Time GMT+9:30
AET Australia Eastern Time GMT+10:00
SST Solomon Standard Time GMT+11:00
Logical time and logical clocks

If we go out, we make a full plan of which place we will visit first, second, and so on. We
do not go to the second place first and then the first. We always follow the procedure or
organization planned beforehand. In a similar way, the operations on our PCs should be done
one by one, in an organized way.

Suppose we have more than 10 PCs in a distributed system, each doing its own work: how do we
make them work together in an agreed order? The solution to this is the LOGICAL CLOCK.
Global states –Coordination and Agreement

The global state of a distributed system is the set of local states of each
individual process involved in the system, plus the state of the
communication channels.

Coordination and Agreement: Mutual Exclusion in DS. Mutual exclusion is a
mechanism that prevents interference and ensures consistency when a collection
of processes accesses shared resources. In a distributed system, shared variables
(e.g., semaphores) and a local kernel cannot be used to implement mutual exclusion.
Global states –Coordination and Agreement

Many algorithms used in distributed systems require a coordinator
that performs functions needed by the other processes in the system. Election
algorithms are designed to choose such a coordinator: they select one process
from a group of processes to act as the coordinator.

Agreement may be as simple as achieving the shared goal of the distributed system. This is
more complicated than it sounds, since all the processes must not only agree, but be
confident that their peers agree.
Distributed mutual exclusion
Mutual exclusion is a concurrency control property introduced to prevent race
conditions. It is the requirement that a process cannot enter its critical section while
another concurrent process is executing in its critical section, i.e., only one process
is allowed to execute the critical section at any given instant of time.

Mutual exclusion ensures that concurrent access of processes to a shared resource or data is
serialized, that is, executed in a mutually exclusive manner. Mutual exclusion in a distributed
system states that only one process is allowed to execute the critical section (CS) at any given
time.
Mutual exclusion in single computer
system Vs. distributed system:
In a single computer system, memory and other resources are shared between different processes. The
status of the shared resources and of the users is readily available in shared memory, so the
mutual exclusion problem can easily be solved with shared variables (for example, semaphores).

In distributed systems, we have neither shared memory nor a common physical clock, and therefore we
cannot solve the mutual exclusion problem using shared variables. To eliminate the mutual exclusion
problem in a distributed system, an approach based on message passing is used.
Solution to distributed mutual exclusion
Token Based Approach:

• A unique token is shared among all the sites.

• If a site possesses the unique token, it is allowed to enter its critical section.

• This approach uses sequence numbers to order requests for the critical section.

• Each request for the critical section contains a sequence number, which is used to
distinguish old requests from current ones.

• This approach ensures mutual exclusion because the token is unique.

(Example: Suzuki-Kasami's Broadcast Algorithm)
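As a simplified illustration of the token-based idea (a plain token ring rather than Suzuki-Kasami's broadcast algorithm, which is named only as an example above), the unique token circulates among the sites and a site may enter its critical section only while holding it:

```python
def simulate_token_ring(ring, requests, steps):
    """Tiny token-ring mutual exclusion sketch (illustrative only).

    The unique token circulates around `ring`; a site enters its
    critical section only while holding the token, so mutual
    exclusion follows directly from the token's uniqueness.
    ring: list of site ids; requests: set of sites wanting the CS;
    steps: number of token hand-offs to simulate.
    Returns the order in which sites entered the critical section.
    """
    entered = []
    holder = 0  # index into ring of the current token holder
    remaining = set(requests)
    for _ in range(steps):
        site = ring[holder]
        if site in remaining:
            entered.append(site)        # enter and leave the CS
            remaining.discard(site)
        holder = (holder + 1) % len(ring)  # pass the token to the next site
    return entered
```

Because only one site holds the token at any instant, at most one site is ever in its critical section, regardless of how requests are timed.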
Solution to distributed mutual exclusion
Non-token Based Approach:
• A site communicates with other sites to determine which site should execute the
critical section next. This requires the exchange of two or more successive rounds of
messages among the sites.
• This approach uses timestamps instead of sequence numbers to order requests for the
critical section.
• Whenever a site makes a request for the critical section, it gets a timestamp. The
timestamp is also used to resolve any conflict between critical section requests.
• All algorithms that follow the non-token based approach maintain a logical clock. The
logical clocks are updated according to Lamport's scheme.

(Example: Lamport's algorithm, Ricart-Agrawala algorithm)


Solution to distributed mutual exclusion

Quorum Based Approach:

• Instead of requesting permission to execute the critical section from all other sites,
each site requests permission only from a subset of the sites, called a quorum.

• Any two quorums contain at least one common site.

• This common site is responsible for ensuring mutual exclusion.

(Example: Maekawa's Algorithm)
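Maekawa-style quorums can be formed from a grid, which makes the pairwise intersection property easy to see: a site's quorum is its row plus its column, and any row crosses any column. The construction below is a standard sketch with the site numbering assumed:

```python
import math

def grid_quorum(site, n):
    """Maekawa-style grid quorum sketch.

    Arrange the n sites (numbered 0..n-1) in a k x k grid with
    k = ceil(sqrt(n)); the quorum of `site` is its row plus its
    column. Any two such quorums intersect, which is exactly the
    property the quorum-based approach relies on for mutual
    exclusion, and each quorum has only O(sqrt(n)) members.
    """
    k = math.ceil(math.sqrt(n))
    row, col = divmod(site, k)
    quorum = {r * k + col for r in range(k) if r * k + col < n}   # column
    quorum |= {row * k + c for c in range(k) if row * k + c < n}  # row
    return quorum
```

With n = 9 each quorum has 5 members instead of 9, yet every pair of quorums shares at least one site that can arbitrate between conflicting requests.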
Deadlock in Distributed System
A Distributed System is a Network of Machines that can exchange information with each other through
Message-passing. It can be very useful as it helps in resource sharing. In such an environment, if the
sequence of resource allocation to processes is not controlled, a deadlock may occur. In principle,
deadlocks in distributed systems are similar to deadlocks in centralized systems. Therefore, the
description of deadlocks presented above holds good both for centralized and distributed systems.
However, handling of deadlocks in distributed systems is more complex than in centralized systems
because the resources, the processes, and other relevant information are scattered on different nodes of
the system.
Three commonly used strategies to handle deadlocks are as follows:
• Avoidance: Resources are carefully allocated to avoid deadlocks.
• Prevention: Constraints are imposed on the ways in which processes request resources in order to
prevent deadlocks.
• Detection and recovery: Deadlocks are allowed to occur and a detection algorithm is used to detect
them. After a deadlock is detected, it is resolved by certain means.
Types of Distributed Deadlock

There are two types of Deadlocks in Distributed System:


• Resource Deadlock
• Communication Deadlock
Resource Deadlock

A resource deadlock occurs when two or more processes wait permanently for
resources held by each other.
• A process requires certain resources for its execution and cannot proceed
until it has acquired all of them.
• It proceeds to execution only when it has acquired all the required resources.
• This can be represented using an AND condition, since the process executes
only if it holds all the required resources.
• Example: Process 1 holds R1 and R2, and requests resource R3. It will not
execute if any one of them is missing; it proceeds only when it has acquired
all requested resources, i.e. R1, R2, and R3.
Communication Deadlock
A communication deadlock occurs among a set of processes when they are blocked waiting for
messages from other processes in the set in order to start execution but there are no messages in
transit between them. When there are no messages in transit between any pair of processes in the set,
none of the processes will ever receive a message. This implies that all processes in the set are
deadlocked. Communication deadlocks can be easily modeled by using WFGs to indicate which
processes are waiting to receive messages from which other processes. Hence, the detection of
communication deadlocks can be done in the same manner as that for systems having only one unit
of each resource type.
• In the communication model, a process requires resources for its execution and proceeds
when it has acquired at least one of the resources it has requested.
• Here a "resource" stands for a process to communicate with.
• A process waits to communicate with another process in a set of processes. A situation in
which each process in the set is waiting to communicate with another process, which is
itself waiting to communicate with some other process, is called a communication deadlock.
Communication Deadlock

• For two processes to communicate, each one should be in the unblocked state.
• It can be represented using an OR condition, since a process requires only
at least one of the requested resources to continue.
• Example: In a Distributed System network, Process 1 is
trying to communicate with Process 2, Process 2 is
trying to communicate with Process 3 and Process 3 is
trying to communicate with Process 1. In this situation,
none of the processes will get unblocked and a
communication deadlock occurs.
Deadlock handling strategy in distributed system

Transaction processing in a distributed database system is also distributed,


i.e. the same transaction may be processing at more than one site. The
two main deadlock handling concerns in a distributed database system
that are not present in a centralized system are transaction location and
transaction control. Once these concerns are addressed, deadlocks are
handled through any of deadlock prevention, deadlock avoidance or
deadlock detection and removal.
Transaction Location

• Transactions in a distributed database system are processed at multiple sites
and use data items at multiple sites. The amount of data processing is not
uniformly distributed among these sites, and the time period of processing also
varies. Thus the same transaction may be active at some sites and inactive at
others. When two conflicting transactions are located at a site, it may happen
that one of them is in an inactive state. This condition does not arise in a
centralized system. This concern is called the transaction location issue.
Transaction Control
Transaction control is concerned with designating and controlling the sites
required for processing a transaction in a distributed database system. There
are many options regarding the choice of where to process the transaction
and how to designate the center of control, such as:

• One server may be selected as the center of control.

• The center of control may travel from one server to another.

• The responsibility of controlling may be shared by a number of servers.


Strategies used for Deadlock Handling in Distributed System are:
• Deadlock Prevention
• Deadlock Avoidance
• Deadlock Detection and Recovery
Distributed Deadlock Prevention
The site where the transaction enters is designated as the controlling site. The controlling site sends messages to the
sites where the data items are located to lock the items. Then it waits for confirmation. When all the sites have
confirmed that they have locked the data items, transaction starts. If any site or communication link fails, the
transaction has to wait until they have been repaired.

• Though the implementation is simple, this approach has some drawbacks:

• Pre-acquisition of locks incurs long communication delays, which increases the time required for the
transaction.

• In case of site or link failure, a transaction has to wait for a long time so that the sites recover. Meanwhile, in the
running sites, the items are locked. This may prevent other transactions from executing.

• If the controlling site fails, it cannot communicate with the other sites. These sites continue to keep the locked data
items in their locked state, thus resulting in blocking.
Distributed Deadlock Prevention
Collective Requests:

In this strategy, all processes declare the resources they require for their execution
beforehand and are allowed to execute only if all the required resources are available.
Resources are released only when the process finishes. Hence, the hold-and-wait condition
of deadlock is prevented. The issue is that the initial resource requirements of a process,
declared before it starts, are based on an assumption rather than on actual need. So
resources may be occupied unnecessarily by a process, and prior allocation of resources
also reduces potential concurrency.
Distributed Deadlock Prevention
Ordered Requests:

• In this strategy, an ordering is imposed on the resources, and processes request resources in
increasing order. Hence, the circular wait condition of deadlock can be prevented.

• The ordering strictly implies that a process never asks for a lower-ordered resource while holding a
higher-ordered one.

• There are two more ways of dealing with global timing and transactions in distributed systems, both
based on the principle of assigning a global timestamp to each transaction as soon as it begins.

• During execution, if a process appears to be blocked because of a resource acquired by another
process, the timestamps of the two processes are compared to identify which has the larger
timestamp. In this way, cyclic waiting can be prevented.

• It is better to give priority to older processes: because of their long existence, they are likely
to be holding more resources.

• This also eliminates starvation, as the younger transaction will eventually be out of the system.
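The ordered requests idea can be sketched with ranked locks: if every process acquires locks in increasing rank order, a circular wait cannot form. The ranking scheme and helper names below are illustrative, not a standard library API:

```python
import threading

# Assumed convention: every lock is created with a global rank, and all
# holders acquire in increasing rank, so no process ever requests a
# lower-ranked lock while holding a higher one (no circular wait).
LOCK_ORDER = {}

def make_lock(rank):
    """Create a lock and register its position in the global ordering."""
    lock = threading.Lock()
    LOCK_ORDER[id(lock)] = rank
    return lock

def acquire_in_order(*locks):
    """Acquire all requested locks in increasing rank order."""
    for lock in sorted(locks, key=lambda l: LOCK_ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()
```

Even if two threads name the same locks in different orders at the call site, `acquire_in_order` sorts them first, so both threads contend in the same sequence and deadlock is prevented.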
Distributed Deadlock Prevention
Preemption:

Resource allocation strategies that reject the no-preemption condition can be used to
avoid deadlocks.
• Wait-die: If an older process requests a resource held by a younger process, the older
process waits. If a younger process requests a resource held by an older process, the
younger process is killed (it "dies") and is restarted later.
• Wound-wait: If an older process requests a resource held by a younger process, the young
process is preempted ("wounded") and killed, and the older process proceeds. If a younger
process requests a resource held by an older process, it waits.
Deadlock Avoidance
In this strategy, deadlock is avoided by examining the state of the system at every step.
The distributed system reviews the allocation of resources, and whenever it finds an unsafe
state, it backtracks one step to return to a safe state. Because of this, resource
allocation takes time whenever a process makes a request: the system first analyzes whether
granting the resources would leave it in a safe or an unsafe state, and only then is the
allocation made.
• A safe state is one in which the system is not deadlocked and an ordering exists in which
every process's requests can be granted.
• An unsafe state is one for which no safe sequence exists. A safe sequence is an ordering
of the processes such that all of them can run to completion from a safe state.
Deadlock Detection and Recovery
In this strategy, deadlock is detected, and an attempt is made to resolve the deadlock state of
the system. These approaches rely on a Wait-For-Graph (WFG), which is generated and
evaluated for cycles in some methods. The following two requirements must be met by a
deadlock detection algorithm:
Progress: The algorithm must detect all existing deadlocks in finite time; no deadlock in
the system should remain undetected. To put it another way, once all the wait-for
dependencies of a deadlock have formed, the algorithm should not have to wait for any
additional events in order to detect that deadlock.
No False Deadlocks: The algorithm should not report deadlocks that do not exist, which are
called phantom or false deadlocks.
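Cycle detection on the WFG can be sketched with a depth-first search. The graph encoding below (each process mapped to the set of processes it waits for) is an assumed representation; in the one-resource-per-type and communication models described above, a cycle in this graph is a deadlock:

```python
def has_deadlock(wfg):
    """Detect a cycle in a wait-for graph via iterative DFS.

    wfg: dict mapping each process to the set of processes it is
    waiting for. Returns True iff the WFG contains a cycle, which
    corresponds to a deadlock in the single-unit-resource and
    communication models.
    """
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {p: WHITE for p in wfg}

    def visit(start):
        stack = [(start, iter(wfg.get(start, ())))]
        color[start] = GRAY
        while stack:
            node, it = stack[-1]
            for nxt in it:
                c = color.get(nxt, WHITE)
                if c == GRAY:               # back edge: a wait cycle exists
                    return True
                if c == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(wfg.get(nxt, ()))))
                    break
            else:                           # all successors explored
                color[node] = BLACK
                stack.pop()
        return False

    return any(color[p] == WHITE and visit(p) for p in wfg)
```

For example, `{"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}` is the three-process communication deadlock from the earlier slide, and the function reports it; an acyclic graph, where some process can always proceed, is reported deadlock-free.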
