3 Synchronization

The document discusses synchronization in distributed systems, focusing on clock synchronization methods such as physical clocks, logical clocks, and election algorithms. It details various algorithms including Cristian's, Berkeley's, and NTP for physical clock synchronization, as well as logical clock mechanisms like Lamport's and vector clocks for event ordering. Additionally, it covers mutual exclusion algorithms and their requirements, including centralized and distributed approaches to ensure safe access to critical sections.


3. Synchronization
By Sanjana Satpute
Clock synchronization
The ordering of events taking place in the processes of a distributed system according to their temporal association.
Required to ensure mutual exclusion and to guarantee serialization of concurrent access to shared resources.
Types:
1. Physical clock
2. Logical clock
3. Mutual exclusion
4. Election algorithm
Physical clock
1. Synchronization in distributed systems is achieved via clocks. Physical clocks are used to adjust the time of nodes. Each node in the system can share its local time with the other nodes, and time is set based on UTC (Coordinated Universal Time).
2. Cristian's Algorithm
3. Berkeley's Algorithm
4. NTP (Network Time Protocol) Algorithm
Christian’ Algorithm
Cristian’s Algorithm
Uses a time server to synchronize clocks
Mainly designed for LAN
Time server keeps the reference time (UTC-Universal Coordinated Time)
A client asks the time server for time, the server responds with
its current time T, and the client uses the received value T to set
its clock
But network round-trip time introduces an error
RTT = response-received-time – request-sent time (measurable
at client)
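The client-side exchange can be sketched as follows; `request_server_time` is a hypothetical stand-in for the real network call to the time server.

```python
import time

def cristian_sync(request_server_time):
    """One Cristian's-algorithm exchange (sketch).

    request_server_time() stands in for the network request that
    returns the server's clock reading T.
    """
    t0 = time.monotonic()                # request-sent time
    server_time = request_server_time()  # server replies with T
    t1 = time.monotonic()                # response-received time
    rtt = t1 - t0
    # Assume symmetric delays: the reply travelled for RTT/2, so the
    # server's clock now reads roughly T + RTT/2.
    return server_time + rtt / 2
```

The RTT/2 correction is exact only when the outbound and return delays are equal; any asymmetry goes straight into the error.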
Berkeley’s Algorithm
Berkeley’s Algorithm
Network Time Protocol (NTP)
Network Time Protocol (NTP) is a protocol that synchronizes the clocks of computers in a network. It is an application-layer protocol responsible for the synchronization of hosts on a TCP/IP network.
Some features of NTP:
▪ NTP servers have access to highly precise atomic clocks and GPS clocks.
▪ It uses Coordinated Universal Time (UTC) to synchronize CPU clock time.
▪ It minimizes vulnerabilities in the exchange of time information.
▪ It provides consistent timekeeping for file servers.
NTP
At the topmost level there are highly accurate time sources, e.g. atomic or GPS clocks. These clock sources are called stratum 0 servers, and they are linked to the NTP servers below them, called stratum 1, 2, 3 and so on. These servers then provide the accurate date and time so that communicating hosts are synchronized with each other.
What are stratum levels?
Degrees of separation from the UTC source are defined as strata. The various strata include the following:
Stratum 0. A reference clock receives true time from a dedicated
transmitter or satellite navigation system. It is categorized as
stratum 0.
Stratum 1. A device is directly linked to the reference clock.
Stratum 2. A device receives its time from a stratum 1 computer.
Stratum 3. A device receives its time from a stratum 2 computer.
Logical Clock in Distributed System
Logical Clocks refer to implementing a protocol on all machines within
your distributed system, so that the machines are able to maintain
consistent ordering of events within some virtual time span. A logical
clock is a mechanism for capturing chronological and causal
relationships in a distributed system. Distributed systems may have no
physically synchronous global clock, so a logical clock allows global
ordering on events from different processes in such systems.
Example:
When we plan an outing, we decide in advance which place to visit first, second, and so on; we do not visit the second place before the first. We always follow the procedure planned beforehand. In a similar way, the operations on our machines should be performed one by one in an organized way.
Suppose we have more than 10 PCs in a distributed system, each doing its own work. How do we make them work together? The solution is the LOGICAL CLOCK.
Method-1:
To order events across processes, one approach is to try to synchronize the physical clocks.
Method-2:
Another approach is to assign timestamps to events.
Taking the example into consideration, this means that if we label the first place 1, the second place 2, the third place 3, and so on, we always know that the first place comes first, and so on. Similarly, if we give each event its own number, the events become organized: event 1 completes first, then event 2, and so on.
BUT timestamps only work as long as they obey causality.
On a single PC, if two events A and B occur one after the other, then TS(A) < TS(B): if A has a timestamp of 1, B must have a timestamp greater than 1 for the happened-before relationship to hold.
With two PCs, event A on P1 and event B on P2, the condition TS(A) < TS(B) must likewise hold whenever A causally precedes B. For example, if you send a message to someone at 2:00:00 pm and the other person receives it at 2:00:02 pm, then obviously TS(sender) < TS(receiver).
Happen-Before Relationship
Happened before relation ( causal ordering)
◦ If a and b are events in the same process, and a
occurs before b then a→ b is true.
◦ If a is the event of a message being sent by one
process, and b is the event of the message being
received by another process, then a → b is also
true.
◦ If a → b and b → c, then a → c (transitive).
◦ Concurrent Events - events a and b are concurrent
(a||b) if neither a → b nor b → a is true.
Lamport’s Algorithm
Each process increments its clock counter between
every two consecutive events.
For two successive events a and b of the same
process Pi
● Ci(b)=Ci(a)+1;
If a is sending message m event at Pi and b is
receiving of the same message b at Pj
Cj(b)=max ((Ci(a)+1, Cj(b))
Shortcut Note:
Send🡪 ts=+1
Receive🡪ts=max[own ts, senders ts]+1
LAMPORT’s Algo
Shortcut Note:
Send🡪 ts=+1
Receive🡪ts=max[own ts, senders ts]+1
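The send/receive rules above can be sketched as a small class:

```python
class LamportClock:
    """Lamport logical clock implementing the shortcut rules (sketch)."""

    def __init__(self):
        self.ts = 0

    def local_event(self):
        # Internal event: just tick the counter.
        self.ts += 1
        return self.ts

    def send(self):
        # Send: tick, then attach the timestamp to the outgoing message.
        self.ts += 1
        return self.ts

    def receive(self, sender_ts):
        # Receive: ts = max(own ts, sender's ts) + 1.
        self.ts = max(self.ts, sender_ts) + 1
        return self.ts
```

For example, if P1 sends its first message (timestamp 1) to P2, P2's clock jumps to 2 even if P2 has seen no events of its own yet.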
Vector Clocks in Distributed Systems
A vector clock is an algorithm that generates a partial ordering of events and detects causality violations in a distributed system. Vector clocks expand on scalar (Lamport) time to provide a causally consistent view of the distributed system: they detect whether one event has causally affected another. In essence, they capture all the causal relationships.
The algorithm labels each process with a vector (a list of integers), containing one integer for the local clock of every process in the system. So for N processes, each vector has size N.
How does the vector clock algorithm work?
Initially, all the clocks are set to zero.
Every time an internal event occurs in a process, that process increments its own entry in its vector by 1.
Every time a process sends a message, it increments its own entry in its vector by 1 (and attaches the vector to the message).
Every time a process receives a message, it increments its own entry by 1 and, in addition, updates every element of its vector to the maximum of the value in its own vector clock and the value in the vector carried by the received message.
Example:
Consider N processes, each holding a vector of size N; the rules above are executed by each process's vector clock.
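Those rules can be sketched as:

```python
class VectorClock:
    """Vector clock for process `pid` among `n` processes (sketch).

    On receive, the element-wise max is taken first and the local
    entry incremented afterwards, matching the rules listed above.
    """

    def __init__(self, pid, n):
        self.pid = pid
        self.v = [0] * n

    def internal(self):
        self.v[self.pid] += 1

    def send(self):
        self.v[self.pid] += 1
        return list(self.v)          # a copy travels with the message

    def receive(self, msg_v):
        self.v = [max(a, b) for a, b in zip(self.v, msg_v)]
        self.v[self.pid] += 1
```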
Election Algorithms
🡪 Many distributed algorithms employ a
coordinator process that performs functions
needed by the other processes in the system
◦ enforcing mutual exclusion
◦ maintaining a global wait-for graph for
deadlock detection
◦ replacing a lost token
◦ controlling an input/output device in the
system
🡪 If the coordinator process fails due to the failure
of the site at which it resides, a new coordinator
must be selected through an election algorithm.
Solution – an Election
🡪 All processes currently involved get together to choose a coordinator.
🡪 If the coordinator crashes or becomes isolated, elect a new coordinator.
🡪 If a previously crashed or isolated process comes back online, a new election may have to be held.

■ A process begins an election when it notices, through timeouts, that the coordinator has failed.
■ Three types of messages:
1. An election message is sent to announce an election.
2. An ok message is sent in response to an election message.
3. A coordinator message is sent to announce the identity of the elected process (the "new coordinator").
The Bully Election Algorithm (Cont.)
The process that knows it has the highest identifier can elect itself as the coordinator simply by sending a coordinator message to all processes with lower identifiers.
A process with a lower identifier begins an election by sending an election message to those processes that have a higher identifier, and awaits an ok message in response.
◦ If none arrives within time T, the process considers itself the coordinator and sends a coordinator message to all processes with lower identifiers.
◦ If a reply arrives, the process waits a further period T′ for a coordinator message to arrive from the new coordinator. If none arrives, it begins another election.
The Bully Election Algorithm (Cont.)
Three types of messages are used in this algorithm: election, ok, coordinator.
If a process receives an election message, it sends back an ok message and begins another election (unless it has begun one already).
If a process receives a coordinator message, it sets its variable coordinator-id to the identifier of the coordinator contained within it.
The Bully Election Algorithm (Cont.)
What happens if a crashed process recovers and immediately initiates an election?
If it has the highest process identifier (for example, P7 in the previous slide), then it will decide that it is the coordinator and may choose to announce this to the other processes.
◦ It will become the coordinator even though the current coordinator is functioning (hence the name "bully").
◦ This may take place concurrently with the sending of a coordinator message by another process which has previously detected the crash.
The Bully Algorithm
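One election round can be sketched as below; message passing is simulated by direct set-membership tests on the set of alive processes, which is an assumption of this sketch, not part of the slides.

```python
def bully_election(initiator, alive, ids):
    """One round of the bully algorithm from `initiator`'s view (sketch).

    `ids` is the full set of process identifiers and `alive` the
    subset currently responding (initiator included).
    """
    higher = [p for p in ids if p > initiator]
    # Send ELECTION to every higher-id process; alive ones answer OK.
    responders = [p for p in higher if p in alive]
    if not responders:
        return initiator      # no OK arrives within T: initiator wins
    # Otherwise the highest alive process eventually bullies its way in
    # and broadcasts the COORDINATOR message.
    return max(responders)
```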
Ring Algorithm
All processes are organized in a logical ring.
● Independent of process numbering.
Suppose P notices there is no coordinator:
● P sends an election message to its successor with its own process number in the body of the message.
● (If the successor is down, skip to the next process, and so on.)
Suppose Q receives an election message:
● Q adds its own process number to the list in the message body.

Election Algorithms
Ring Algorithm: Example

Initiation:
1. Process 4 sends an ELECTION message to its successor (or the next alive process) with its ID.
2. Each process adds its own ID and forwards the ELECTION message.
Leader Election:
3. The message comes back to the initiator (here, process 4).
4. The initiator announces the winner by sending another message around the ring.
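The circulation of the ELECTION message can be sketched as:

```python
def ring_election(processes, initiator):
    """Ring election (sketch).

    `processes` is the ring order of alive process ids and
    `initiator` an index into it; the downed-successor case is
    assumed already filtered out of the list.
    """
    n = len(processes)
    collected = []
    i = initiator
    # The ELECTION message circulates once around the ring,
    # each process appending its own id to the body.
    for _ in range(n):
        collected.append(processes[i])
        i = (i + 1) % n
    # Back at the initiator: the highest id wins, and a COORDINATOR
    # message would now circulate to announce it.
    return max(collected)
```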
Mutual Exclusion
Mutual exclusion is a concurrency-control property introduced to prevent race conditions. It is the requirement that a process must not enter its critical section while another concurrent process is executing in its own critical section, i.e., only one process is allowed to execute the critical section at any given instant.
In single-processor systems, critical regions are protected using semaphores and monitors, which guarantee mutual exclusion on a uniprocessor.
In distributed systems this can be achieved using either
1. centralized mutual exclusion algorithms, or
2. distributed mutual exclusion algorithms.
Requirements of a Mutual Exclusion Algorithm:
1. No Deadlock:
Two or more sites should not endlessly wait for messages that will never arrive.
2. No Starvation:
Every site that wants to execute the critical section should get an opportunity to execute it in finite time. No site should wait indefinitely to execute the critical section while other sites repeatedly execute it.
3. Fairness:
Each site should get a fair chance to execute the critical section. Requests to execute the critical section must be served in the order they are made, i.e., in the order of their arrival in the system.
4. Fault Tolerance:
In case of a failure, the algorithm should be able to recognize the failure by itself and continue functioning without disruption.
Performance metrics of mutual exclusion:
1. Synchronization delay: the time interval between one process exiting the CS and the next process entering it.
2. System throughput: the rate at which requests for the CS get executed.
3. Message complexity: the number of messages required per CS execution by a process.
4. Response time: the time interval from when a request is sent until its CS execution is completed.
1. The Centralized Algorithm
One of the processes in the system is chosen to coordinate
the entry to the critical section.
A process that wants to enter its critical section sends a
request message to the coordinator.
The coordinator decides which process can enter the critical
section next, and it sends that process a reply message.
When the process receives a reply message from the
coordinator, it enters its critical section.
After exiting its critical section, the process sends a release
message to the coordinator and proceeds with its execution.
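The coordinator's bookkeeping can be sketched as below; the request/grant/release messages are modelled as method calls and return values, which is an assumption of this sketch.

```python
from collections import deque

class Coordinator:
    """Centralized mutual-exclusion coordinator (sketch)."""

    def __init__(self):
        self.holder = None          # process currently in the CS
        self.queue = deque()        # waiting requesters, FIFO

    def request(self, pid):
        if self.holder is None:
            self.holder = pid
            return True             # reply (grant) sent immediately
        self.queue.append(pid)
        return False                # no reply yet: requester blocks

    def release(self, pid):
        assert pid == self.holder
        self.holder = self.queue.popleft() if self.queue else None
        return self.holder          # next process granted, if any
```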
Mutual Exclusion:
A Centralized Algorithm

a) Process 1 asks the coordinator for permission to enter a critical region. Permission is granted.
b) Process 2 then asks permission to enter the same critical region. The coordinator does not reply.
c) When process 1 exits the critical region, it tells the coordinator; the coordinator then replies to process 2.
A Centralized Algorithm
[Message flow: a process sends Request to the coordinator; the coordinator sends Grant; the process enters its critical section; on exit it sends Release to the coordinator.]
Advantages: it is fair, easy to implement, and requires only three messages per use of a critical region (request, grant, release).
Disadvantages: single point of failure.
Distributed Mutual Exclusion
Algorithms
• Non-token based:
• A site/process can enter a critical section when an
assertion (condition) becomes true.
• Algorithm should ensure that the assertion will be true
in only one site/process.

• Token based:
• A unique token (a known, unique message) is shared
among cooperating sites/processes.
• Possessor of the token has access to critical section.
• Need to take care of conditions such as loss of token,
crash of token holder, possibility of multiple tokens, etc.

Classification of Distributed
Mutual Exclusion Algorithms
• Non-token-based algorithm:
1. Lamport’s Distributed Mutual Algorithm
2. Ricart–Agrawala Algorithm
3. Maekawa’s Algorithm
• Token-based algorithm:
1. Suzuki–Kasami’s Broadcast Algorithm
2. Singhal’s Heuristic Algorithm
3. Raymond’s Tree-Based Algorithm
Lamport’s Algorithm for mutual
exclusion
◦ Requesting CS:
1. Send REQUEST(tsi, i). (tsi,i): Request time stamp. Place
REQUEST in request_queuei.
2. On receiving the message; sj sends time-stamped REPLY
message to si. Si’s request placed in request_queuej.
◦ Executing CS:
1. Si has received a message with time stamp larger than
(tsi,i) from all other sites.
2. Si’s request is the top most one in request_queuei.
◦ Releasing CS:
1. Exiting CS: send a time stamped RELEASE message to all
sites in its request set.
2. Receiving RELEASE message: Sj removes Si’s request
from its queue.

46
Lamport’s Distributed Mutual
Exclusion Algorithm
Lamport’s Algorithm:
Example
Step
1: S1 (2,1
)

S2

(1,2
)
S3

Step
2: S1 (1,2)
(2,1)
S2 enters
S2 CS
(1,2)
(2,1)
S3
(1,2)
(2,1) 50
Performance Parameters
Lamport's algorithm has a message overhead of 3(N − 1) messages per CS invocation:
• N − 1 REQUEST messages (to all processes except itself),
• N − 1 REPLY messages,
• N − 1 RELEASE messages.
The synchronization delay is T (one message delay). Throughput is 1/(T + E), where E is the CS execution time.
The algorithm has been proven to be fair and correct. It can also be optimized by reducing the number of RELEASE messages sent.
Ricart-Agrawala Algorithm
Requesting the critical section:
◦ Si sends a time-stamped REQUEST message.
◦ Sj sends a REPLY to Si if:
● Sj is neither requesting nor executing the CS, or
● Sj is requesting the CS and Si's timestamp is smaller than that of Sj's own request.
● Otherwise, the request is deferred.
Executing the CS: Si enters after it has received a REPLY from all sites in its request set.
Releasing the CS: send a REPLY to all deferred requests.
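The receive-side decision rule can be sketched as a single function; timestamps are (Lamport clock, site id) pairs compared lexicographically, the usual tie-breaking convention.

```python
def ra_on_request(my_state, my_ts, req_ts):
    """Ricart-Agrawala receive-side rule (sketch).

    my_state is 'idle', 'requesting' or 'executing'; my_ts is this
    site's own pending request timestamp (None when idle).
    Returns 'reply' or 'defer'.
    """
    if my_state == 'idle':
        return 'reply'               # neither requesting nor executing
    if my_state == 'requesting' and req_ts < my_ts:
        return 'reply'               # requester has the older request
    return 'defer'                   # executing, or our request is older
```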
Ricart-Agrawala: Example
[Step 1: S1 broadcasts REQUEST (2,1) and S2 broadcasts REQUEST (1,2). Step 2: since (1,2) is the smaller timestamp, S1 and S3 reply to S2 while S2 defers its reply to (2,1); S2 enters the CS. Step 3: on leaving the CS, S2 sends the deferred REPLY for (2,1), and S1 enters the CS.]
Ricart-Agrawala:
1. The algorithm does not use an explicit RELEASE message; dequeuing is done on receipt of the REPLY itself. Thus the total message overhead is 2(N − 1) messages per CS invocation: (N − 1) REQUEST messages and (N − 1) REPLY messages.
2. The failure of any process almost halts the algorithm (recovery measures are needed), since it requires replies from all sites.
Maekawa’s Algorithm
Maekawa’s Algorithm is quorum based approach to ensure mutual
exclusion in distributed systems. As we know, In permission based
algorithms like Lamport’s Algorithm, Ricart-Agrawala Algorithm
etc. a site request permission from every other site but in quorum
based approach, A site does not request permission from every other
site but from a subset of sites which is called quorum.
In this algorithm:
Three type of messages ( REQUEST, REPLY and RELEASE) are
used.
1. A site send a REQUEST message to all other site in its request set
or quorum to get their permission to enter critical section.
2. A site send a REPLY message to requesting site to give its
permission to enter the critical section.
3. A site send a RELEASE message to all other site in its request set
or quorum upon exiting the critical section.
The construction of request set or Quorum:
A request set or Quorum in Maekawa’s algorithm must satisfy
the following properties:
To enter the critical section:
◦ When a site Si wants to enter the critical section, it sends a REQUEST(i) message to all sites in its request set Ri.
◦ When a site Sj receives REQUEST(i) from site Si, it returns a REPLY message to Si if it has not sent a REPLY to any site since it received the last RELEASE message; otherwise, it queues up the request.
To execute the critical section:
◦ A site Si can enter the critical section once it has received a REPLY message from every site in its request set Ri.
To release the critical section:
◦ When a site Si exits the critical section, it sends a RELEASE(i) message to all sites in its request set Ri.
◦ When a site Sj receives RELEASE(i) from site Si, it sends a REPLY message to the next site waiting in its queue and deletes that entry from the queue.
◦ If the queue is empty, site Sj updates its state to show that it has not sent any REPLY message since the receipt of the last RELEASE message.
Message Complexity:
Maekawa's algorithm requires 3√N messages per critical-section execution, since the size of a request set is √N. These 3√N messages comprise:
√N REQUEST messages,
√N REPLY messages,
√N RELEASE messages.
Drawbacks of Maekawa's Algorithm:
This algorithm is deadlock-prone, because a site can be exclusively locked by other sites and requests are not prioritized by their timestamps.
Performance:
The synchronization delay is equal to twice the message propagation delay.
It requires 3√N messages per critical-section execution.
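A common way to build pairwise-intersecting request sets is a grid: each site's quorum is its row plus its column, giving |R| ≈ 2√N. (Maekawa's original construction achieves ≈ √N; this simpler grid is shown only to illustrate the intersection property.)

```python
import math

def grid_quorum(site, n):
    """Grid-based request set for `site` among n sites (sketch).

    Sites 0..n-1 are arranged in a √N × √N grid; a site's quorum is
    its whole row plus its whole column, so any two quorums meet at
    least at the cell where one's row crosses the other's column.
    """
    k = int(math.isqrt(n))
    row, col = divmod(site, k)
    quorum = {row * k + c for c in range(k)}       # the row
    quorum |= {r * k + col for r in range(k)}      # the column
    return quorum
```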
Suzuki-Kasami Algorithm
https://www.youtube.com/watch?v=Hvz20dUB1RY
■ Completely connected network of processes.
■ There is one token in the network; the owner of the token has permission to enter the CS.
■ The token moves from one process to another based on demand.
Suzuki-Kasami Algorithm
If a site without the token needs to enter a CS, it broadcasts a REQUEST-for-token message to all other sites.
Token: (a) a queue of requesting sites; (b) an array LN[1..N], where LN[j] is the sequence number of the most recent CS execution by site j.
The token holder sends the token to a requestor if it is not inside the CS; otherwise, it sends it after exiting the CS.
The token holder can make multiple CS accesses while holding the token.
◦ Design issues:
◦ Distinguishing outdated REQUEST messages:
● Format: REQUEST(j, n) 🡪 site j making its nth request.
● Each site keeps an array RNi[1..N], where RNi[j] is the largest sequence number of any request from j.
◦ Determining which site has an outstanding token request:
● If LN[j] = RNi[j] − 1, then Sj has an outstanding request.
B. Prabhakaran
Suzuki-Kasami Algorithm ...
Passing the token:
◦ After finishing the CS (assuming Si has the token): LN[i] := RNi[i].
◦ The token consists of Q and LN, where Q is a queue of requesting sites.
◦ The token holder checks, for each j, whether RNi[j] = LN[j] + 1; if so, it places j in Q.
◦ It then sends the token to the site at the head of Q.
Suzuki-Kasami Algorithm
■ When a process i receives a request (k, num) from process k, it sets req[k] to max(req[k], num) and enqueues the request in its Q.
■ When process i sends the token to the head of Q, it sets last[i] := its own num and passes the array last, as well as the tail of Q, along with the token.
req: array [0..n−1] of integer
last: array [0..n−1] of integer
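The per-site state and the token hand-off can be sketched as below; network messages are replaced by direct method calls between site objects, which is an assumption of this sketch.

```python
from collections import deque

class SuzukiKasamiSite:
    """One site of the Suzuki-Kasami algorithm (sketch)."""

    def __init__(self, sid, n):
        self.sid = sid
        self.rn = [0] * n       # RN: highest request seq seen per site
        self.token = None       # {'q': deque, 'ln': [...]} when held

    def request(self, sites):
        self.rn[self.sid] += 1
        for s in sites:         # broadcast REQUEST(sid, num)
            if s is not self:
                s.on_request(self.sid, self.rn[self.sid])

    def on_request(self, j, num):
        self.rn[j] = max(self.rn[j], num)
        # Token holder queues j if its request is outstanding.
        if self.token and self.rn[j] == self.token['ln'][j] + 1:
            self.token['q'].append(j)

    def release(self, sites):
        tok = self.token
        tok['ln'][self.sid] = self.rn[self.sid]
        for j, r in enumerate(self.rn):   # pick up outstanding requests
            if j not in tok['q'] and r == tok['ln'][j] + 1:
                tok['q'].append(j)
        if tok['q']:
            nxt = tok['q'].popleft()
            self.token = None
            sites[nxt].token = tok        # pass the token on
```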
Example
[Five processes 0-4; process 0 initially holds the token, with last = [0,0,0,0,0] and an empty queue Q, and every site's req = [1,0,0,0,0].
1 and 2 send requests: every site's req becomes [1,1,1,0,0].
0 prepares to exit the CS: it sets last[0] = 1 and builds Q = (1, 2).
0 passes the token (Q and last) to 1, leaving Q = (2).
0 and 3 send requests: req becomes [2,1,1,1,0] everywhere and the token queue grows to Q = (2, 0, 3).
1 finishes, sets last[1] = 1, and sends the token to 2, leaving Q = (0, 3).]
Raymond’s Algorithm
Sites are arranged in a logical directed tree. Root: token holder.
Edges: directed towards root.
Every site has a variable holder that points to an immediate
neighbor node, on the directed path towards root. (Root’s holder
point to itself).
Requesting CS
◦ If Si does not hold token and request CS, sends REQUEST
upwards provided its request_q is empty. It then adds its request
to request_q.
◦ Non-empty request_q -> REQUEST message for top entry in q
(if not done before).
◦ Site on path to root receiving REQUEST -> propagate it up, if
its request_q is empty. Add request to request_q.
◦ Root on receiving REQUEST -> send token to the site that
forwarded the message. Set holder to that forwarding site.
◦ Any Si receiving token -> delete top entry from request_q, send
token to that site, set holder to point to it. If request_q is
non-empty now, send REQUEST message to the holder site.
B. Prabhakaran 74
Raymond’s Algorithm …
Executing CS: getting token with the site at the top of
request_q. Delete top of request_q, enter CS.

Releasing CS
◦ If request_q is non-empty, delete top entry from q,
send token to that site, set holder to that site.
◦ If request_q is non-empty now, send REQUEST
message to the holder site.

B. Prabhakaran 75
Raymond’s Algorithm:
Example
Step S Toke
1: 1 n
holde
S Toke r S
2 n 3
reque
S Sst S6 S
4 5 7
Step
2: S
1

S S
2 Toke 3
n
S S S6 S
4 5 7

B. Prabhakaran 76
Raymond’s Algm.:
Example…
Step
3: S
1

S S
2 3

S S S6 S
4 5 7

Toke
n
holde
r

B. Prabhakaran 77
Example:
Deadlock
A deadlock is a situation in which each of a set of processes waits for a resource that is assigned to another process in the set. None of the processes can proceed, since the resource each one needs is held by some other process that is itself waiting for yet another resource to be released.
Necessary conditions for deadlock
Mutual Exclusion
A resource can be used only in a mutually exclusive manner: two processes cannot use the same resource at the same time.
Hold and Wait
A process waits for some resources while holding other resources at the same time.
No Preemption
A resource, once allocated, cannot be forcibly taken away from a process; it is released only voluntarily after the process has finished with it.
Circular Wait
The processes wait for resources in a cyclic manner, so that the last process waits for a resource held by the first process.
Deadlock Detection and Recovery
Resource Allocation Graph Algorithm
(RAG)
Example of Single Instances RAG

In the above single-instance RAG example, it can be seen that P2 is holding R1 and waiting for R3, P1 is holding R2 and waiting for R1, and P3 is holding R3 and waiting for R2. So it can be concluded that none of the processes will get executed: the system is deadlocked.
It can also be expressed in the form of an allocation and a request matrix:

            Allocation      Request
Process    R1  R2  R3     R1  R2  R3
P1          0   1   0      1   0   0
P2          1   0   0      0   0   1
P3          0   0   1      0   1   0
Algorithm to check for deadlock
1. First, find the currently available instances of each resource.
2. Check for a process that can be executed using its allocated resources plus the available resources.
3. Add the allocated resources of that executable process to the available resources and terminate the process.
4. Repeat steps 2 and 3 until every process has been executed.
5. If at any step none of the remaining processes can be executed, there is a deadlock in the system.
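The five steps above can be sketched directly on the matrices:

```python
def detect_deadlock(available, allocation, request):
    """Matrix-based deadlock detection (the steps above, sketched).

    available: free instances per resource; allocation/request: one
    row per process. Returns the list of processes that can never
    finish (an empty list means no deadlock).
    """
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, req) in enumerate(zip(allocation, request)):
            if not finished[i] and all(r <= w for r, w in zip(req, work)):
                # Process i can run to completion: reclaim its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finished[i] = True
                progress = True
    return [i for i, f in enumerate(finished) if not f]
```

Running it on the two tables in this section: the three-process table deadlocks (every process stuck), while the four-process table completes in the order P4 → P2 → P1 → P3.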
Example of Multiple Instances RAG

To determine the state of the system, we write it in the form of an allocation and a request matrix:

            Allocation      Request
Process    R1  R2  R3     R1  R2  R3
P1          0   1   0      1   0   0
P2          1   0   0      0   0   1
P3          0   0   1      0   1   0
P4          0   0   1      0   0   0
Using the above algorithm, we will get that there is no deadlock in the
above-given example, and their sequence of execution can be P4 → P2
→ P1 → P3.
Deadlock avoidance
The deadlock-avoidance method is used by the operating system to check whether the system is in a safe or an unsafe state. To avoid deadlocks, each process must tell the operating system in advance the maximum number of resources it may request in order to complete its execution.
Algorithms:
1. Banker's Algorithm
2. Safety algorithm
3. Resource-request algorithm
Deadlock Detection: Centralized Approach,
Chandy-Misra-Haas Algorithm
Distributed deadlocks can occur when distributed transactions or concurrency control are used in distributed systems. They may be identified either by a distributed technique such as edge chasing, or by constructing a global wait-for graph (WFG) from the local wait-for graphs at a deadlock detector. Phantom deadlocks are deadlocks that are "detected" in a distributed system but do not actually exist, owing to internal system delays.
In a distributed system, deadlock can in practice be neither prevented nor avoided, because the system is too vast; only deadlock detection is feasible. A distributed deadlock detection technique must satisfy two requirements:
1. Progress: the method must detect all deadlocks in the system in finite time.
2. Safety: the method must not report deadlocks that do not exist (no phantom deadlocks).
Approaches to detect deadlock in the distributed system
1. Centralized Approach
In the centralized approach, only one node is responsible for detecting deadlock; it is simple and easy to implement. The disadvantages are the excessive workload on that single node and the single point of failure (the entire system depends on one node, and if that node fails, deadlock detection stops), making the system less reliable.
2. Distributed Approach
In the distributed approach, all nodes cooperate to detect deadlocks. There is no single point of failure, the workload is spread equally among the nodes, and detection can be faster.
Chandy-Misra-Haas’s
Distributed Deadlock Detection
Algorithm
https://www.youtube.com/watch?v=EZvzw4xSk9E
Another fully distributed deadlock detection algorithm was given by Chandy, Misra, and Haas (1983). It is an edge-chasing, probe-based algorithm, and is considered one of the best deadlock detection algorithms for distributed systems.
If a process's request for a resource fails or times out, the process generates a probe message and sends it to each of the processes holding one or more of its requested resources.
Each probe message contains the following information:
1. the id of the process that is blocked (the one that initiated the probe message);
2. the id of the process sending this particular version of the probe message; and
3. the id of the process that should receive this probe message.
When a process receives a probe message, it checks whether it is also waiting for resources. If not, it is currently using the needed resource and will eventually finish and release it. If it is waiting for resources, it passes the probe message on to all processes it knows to be holding resources it has itself requested, first modifying the probe message to change the sender and receiver ids.
If a process receives a probe message that it recognizes as having initiated, it knows there is a cycle in the system and thus deadlock.
The following example is based on the same data used in the
Silberschatz-Galvin algorithm example. In this case P1 initiates the
probe message, so that all the messages shown have P1 as the
initiator. When the probe message is received by process P3, it
modifies it and sends it to two more processes. Eventually, the probe
message returns to process P1. Deadlock!
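The probe-forwarding rule can be sketched as below; the wait-for edges are given directly as a dictionary, standing in for the knowledge each blocked process has about its resource holders.

```python
def chandy_misra_haas(initiator, waits_for):
    """Edge-chasing probe propagation (sketch).

    waits_for[p] is the set of processes holding resources that p is
    waiting for. Returns True if a probe started by `initiator` comes
    back to it, i.e. there is a cycle and hence deadlock.
    """
    # Each queued probe is (initiator, sender, receiver).
    probes = [(initiator, initiator, q) for q in waits_for.get(initiator, ())]
    seen = set()
    while probes:
        init, sender, receiver = probes.pop()
        if receiver == init:
            return True                  # probe returned: deadlock
        if (sender, receiver) in seen:
            continue                     # don't chase the same edge twice
        seen.add((sender, receiver))
        # A blocked receiver forwards the probe, rewriting sender/receiver.
        for nxt in waits_for.get(receiver, ()):
            probes.append((init, receiver, nxt))
    return False
```

On the cycle P1 → P2 → P3 → P1 the probe returns to P1 and deadlock is reported; removing the P3 → P1 edge makes the probe die out at a non-blocked process.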
1. Why is clock synchronization required in DS? Explain physical clocks in detail. [10M]
2. What is logical clock synchronization? Explain logical clocks in detail. [10M]
3. What are the election algorithms? Explain with an example. [10M]
4. What is mutual exclusion? Requirements and performance measures of mutual exclusion algorithms. [5M]
5. Explain Ricart-Agrawala's algorithm for mutual exclusion in DS. [10M]
6. Explain Lamport's algorithm for mutual exclusion in DS. [10M]
7. Explain the centralized algorithm, with an example, for mutual exclusion in DS. [10M]
8. What are token-based and non-token-based algorithms for mutual exclusion? Explain Suzuki-Kasami's broadcast algorithm for mutual exclusion in DS. [10M]
9. Explain Raymond's tree-based algorithm for mutual exclusion in DS. [10M]
10. Explain deadlock in DS.
11. Write a short note on "Deadlock Prevention and Avoidance" in DS.
12. Explain the centralized approach for deadlock detection in DS.
13. Explain with an example the working of the Chandy-Misra-Haas algorithm.
