
RPC in Go Algorithms

You want RPC-based implementations in Go for the following distributed system algorithms:

1. Scalar time calculation
2. Vector time calculation
3. Lamport’s Distributed Mutual Exclusion Algorithm
4. Chandy–Misra–Haas Deadlock Detection (AND model)
5. Chandy–Misra–Haas Deadlock Detection (OR model)

Shall I go step-by-step, starting with scalar time calculation using RPC in Go?

Let's start with Scalar Time Calculation using RPC in Go.

🚀 Scalar Time Calculation (RPC-Based)


Scalar time (Lamport's logical clock) is a single integer that each process increments on every local event and updates on every incoming message. We'll implement an RPC-based service that updates and syncs the scalar time between distributed nodes.

Server Code (`server.go`)

go

package main

import (
    "fmt"
    "net"
    "net/rpc"
    "sync"
)

// Clock holds the scalar (Lamport) time.
type Clock struct {
    Time int
    mu   sync.Mutex // guards Time: net/rpc may serve calls concurrently
}

// TimeRequest carries the sender's timestamp.
type TimeRequest struct {
    Timestamp int
}

// TimeResponse returns the updated scalar time.
type TimeResponse struct {
    NewTime int
}

// IncrementTime applies Lamport's rule: take the maximum of the local
// clock and the received timestamp, then increment.
func (c *Clock) IncrementTime(req *TimeRequest, res *TimeResponse) error {
    c.mu.Lock()
    defer c.mu.Unlock()

    if req.Timestamp > c.Time {
        c.Time = req.Timestamp
    }
    c.Time++
    res.NewTime = c.Time
    fmt.Printf("Updated scalar time: %d\n", c.Time)
    return nil
}

func main() {
    clock := new(Clock)
    rpc.Register(clock)

    listener, err := net.Listen("tcp", ":1234")
    if err != nil {
        fmt.Println("Error starting server:", err)
        return
    }
    defer listener.Close()

    fmt.Println("Scalar Time RPC Server started on port 1234")

    for {
        conn, err := listener.Accept()
        if err != nil {
            fmt.Println("Connection error:", err)
            continue
        }
        go rpc.ServeConn(conn)
    }
}

Client Code (`client.go`)

go

package main

import (
    "fmt"
    "net/rpc"
)

type TimeRequest struct {
    Timestamp int
}

type TimeResponse struct {
    NewTime int
}

func main() {
    client, err := rpc.Dial("tcp", "localhost:1234")
    if err != nil {
        fmt.Println("Connection error:", err)
        return
    }
    defer client.Close()

    req := TimeRequest{Timestamp: 5}
    var res TimeResponse

    err = client.Call("Clock.IncrementTime", req, &res)
    if err != nil {
        fmt.Println("Error calling RPC:", err)
        return
    }

    fmt.Printf("Updated scalar time from server: %d\n", res.NewTime)
}

✅ How It Works
1. The server holds a scalar time value.
2. The client sends a timestamp.
3. The server increments its scalar time based on the Lamport Clock rule:
If the received timestamp is greater than the local clock, the local clock is updated.
The clock is incremented.
4. The server returns the new scalar time to the client.
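For example, with a fresh server whose clock starts at 0, the client's timestamp of 5 produces max(0, 5) + 1 = 6: the server prints `Updated scalar time: 6` and the client prints `Updated scalar time from server: 6`.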

Shall I move to Vector Time Calculation next? 😎


Next up: Vector Time Calculation using RPC in Go.

🌐 Vector Time Calculation (RPC-Based)


Vector clocks maintain an array of timestamps, with each element representing the logical clock of a particular process. They help in tracking the
causality of events in a distributed system.

Server Code (`vector_server.go`)

go

package main

import (
    "fmt"
    "net"
    "net/rpc"
    "sync"
)

// Clock holds the vector clock, one entry per process.
type Clock struct {
    Time []int
    mu   sync.Mutex // guards Time: net/rpc may serve calls concurrently
}

// Request and Response structs
type TimeRequest struct {
    Pid       int
    Timestamp []int
}

type TimeResponse struct {
    NewTime []int
}

// IncrementTime merges the incoming vector (element-wise maximum) and
// then increments the entry of the process that sent the request.
func (c *Clock) IncrementTime(req *TimeRequest, res *TimeResponse) error {
    c.mu.Lock()
    defer c.mu.Unlock()

    // Guard against malformed requests (wrong vector size or bad pid).
    if len(req.Timestamp) != len(c.Time) || req.Pid < 0 || req.Pid >= len(c.Time) {
        return fmt.Errorf("bad request: pid %d, vector length %d", req.Pid, len(req.Timestamp))
    }

    for i := range c.Time {
        if req.Timestamp[i] > c.Time[i] {
            c.Time[i] = req.Timestamp[i]
        }
    }
    c.Time[req.Pid]++ // increment the entry of the requesting process

    res.NewTime = append([]int{}, c.Time...) // return a copy of the updated clock

    fmt.Printf("Updated vector time: %v\n", c.Time)
    return nil
}

func main() {
    // Initialize the vector clock for 3 processes.
    clock := &Clock{Time: make([]int, 3)}
    rpc.Register(clock)

    listener, err := net.Listen("tcp", ":1235")
    if err != nil {
        fmt.Println("Error starting server:", err)
        return
    }
    defer listener.Close()

    fmt.Println("Vector Time RPC Server started on port 1235")

    for {
        conn, err := listener.Accept()
        if err != nil {
            fmt.Println("Connection error:", err)
            continue
        }
        go rpc.ServeConn(conn)
    }
}

Client Code (`vector_client.go`)

go

package main

import (
    "fmt"
    "net/rpc"
)

type TimeRequest struct {
    Pid       int
    Timestamp []int
}

type TimeResponse struct {
    NewTime []int
}

func main() {
    client, err := rpc.Dial("tcp", "localhost:1235")
    if err != nil {
        fmt.Println("Connection error:", err)
        return
    }
    defer client.Close()

    req := TimeRequest{
        Pid:       1,
        Timestamp: []int{1, 2, 3}, // example vector time from the client
    }
    var res TimeResponse

    err = client.Call("Clock.IncrementTime", req, &res)
    if err != nil {
        fmt.Println("Error calling RPC:", err)
        return
    }

    fmt.Printf("Updated vector time from server: %v\n", res.NewTime)
}

✅ How It Works
1. Server holds a vector clock of size equal to the number of processes.
2. Client sends a request containing:
Process ID
Current vector timestamp
3. Server applies vector clock update rules:
Takes the element-wise maximum of the client's timestamp and local clock.
Increments the element corresponding to the process ID.
4. Server returns the updated vector clock to the client.
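For example, with the server's clock initialized to [0, 0, 0] and the client (Pid 1) sending [1, 2, 3], the element-wise maximum gives [1, 2, 3]; incrementing entry 1 yields [1, 3, 3], which is what both sides print.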

Shall I proceed with Lamport's Distributed Mutual Exclusion next? 😎


Next up: Lamport's Distributed Mutual Exclusion Algorithm using RPC in Go.

🔐 Lamport's Distributed Mutual Exclusion Algorithm


Lamport's algorithm uses:

Logical timestamps to order requests.
A queue to handle competing requests.

A process can enter the critical section only when:

Its request is at the front of the queue.
It has received replies (OK) from all other processes.

Server Code (`mutex_server.go`)

go

package main

import (
    "fmt"
    "net"
    "net/rpc"
    "sort"
    "sync"
)

type Request struct {
    Pid       int
    Timestamp int
}

type Response struct {
    Ok bool
}

type Mutex struct {
    queue []Request
    clock int
    mu    sync.Mutex
}

func (m *Mutex) RequestAccess(req *Request, res *Response) error {
    m.mu.Lock()
    defer m.mu.Unlock()

    // Update the local clock based on the request timestamp (Lamport rule).
    if req.Timestamp > m.clock {
        m.clock = req.Timestamp
    }
    m.clock++

    // Add the request to the queue, ordered by (timestamp, pid).
    m.queue = append(m.queue, *req)
    sort.Slice(m.queue, func(i, j int) bool {
        if m.queue[i].Timestamp == m.queue[j].Timestamp {
            return m.queue[i].Pid < m.queue[j].Pid
        }
        return m.queue[i].Timestamp < m.queue[j].Timestamp
    })

    fmt.Printf("Updated Queue: %+v\n", m.queue)

    // Demo simplification: reply OK immediately (see the notes below).
    res.Ok = true
    return nil
}

func (m *Mutex) ReleaseAccess(req *Request, res *Response) error {
    m.mu.Lock()
    defer m.mu.Unlock()

    // Remove the process's request from the queue after execution.
    for i, r := range m.queue {
        if r.Pid == req.Pid {
            m.queue = append(m.queue[:i], m.queue[i+1:]...)
            break
        }
    }

    fmt.Printf("Queue after release: %+v\n", m.queue)

    res.Ok = true
    return nil
}

func main() {
    mutex := &Mutex{}
    rpc.Register(mutex)

    listener, err := net.Listen("tcp", ":1236")
    if err != nil {
        fmt.Println("Error starting server:", err)
        return
    }
    defer listener.Close()

    fmt.Println("Mutex RPC Server started on port 1236")

    for {
        conn, err := listener.Accept()
        if err != nil {
            fmt.Println("Connection error:", err)
            continue
        }
        go rpc.ServeConn(conn)
    }
}

Client Code (`mutex_client.go`)

go

package main

import (
    "fmt"
    "net/rpc"
)

type Request struct {
    Pid       int
    Timestamp int
}

type Response struct {
    Ok bool
}

func main() {
    client, err := rpc.Dial("tcp", "localhost:1236")
    if err != nil {
        fmt.Println("Connection error:", err)
        return
    }
    defer client.Close()

    // Step 1: Request access to the critical section.
    req := Request{
        Pid:       1,
        Timestamp: 5, // example timestamp
    }
    var res Response

    fmt.Println("Requesting access to critical section...")
    err = client.Call("Mutex.RequestAccess", req, &res)
    if err != nil || !res.Ok {
        fmt.Println("Error requesting access:", err)
        return
    }
    fmt.Println("Access granted to critical section!")

    // Simulate critical section work.
    fmt.Println("Executing critical section...")

    // Step 2: Release access after the work is done.
    err = client.Call("Mutex.ReleaseAccess", req, &res)
    if err != nil || !res.Ok {
        fmt.Println("Error releasing access:", err)
        return
    }
    fmt.Println("Access released from critical section.")
}

✅ How It Works
1. Request Access
Client sends a request with its logical timestamp.
Server adds the request to a priority queue (sorted by timestamp).
Server updates the logical clock.
2. Grant Access
Client gets access when:
Its request is at the front of the queue.
It has received OK from all other processes (simulated by the server).
3. Release Access
Once the critical section execution is complete, the client sends a release signal.
Server removes the request from the queue.
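Note that in this single-server demo, `RequestAccess` replies OK immediately, so the "replies from all other processes" condition is only simulated. In a faithful multi-process deployment, each client would track replies itself before entering. Below is a minimal sketch of that entry check, reusing the `Request` type from above; the `canEnter` helper, `replies` map, and `peers` list are illustrative names, not part of the code above.

go

package main

import "fmt"

type Request struct {
    Pid       int
    Timestamp int
}

// canEnter reports whether process myPid may enter the critical section
// under Lamport's rules: its request heads the timestamp-ordered queue
// and every peer has replied OK.
func canEnter(queue []Request, myPid int, replies map[int]bool, peers []int) bool {
    if len(queue) == 0 || queue[0].Pid != myPid {
        return false // another request is ahead of ours
    }
    for _, p := range peers {
        if !replies[p] {
            return false // still waiting for an OK from peer p
        }
    }
    return true
}

func main() {
    queue := []Request{{Pid: 1, Timestamp: 5}, {Pid: 2, Timestamp: 7}}
    replies := map[int]bool{2: true, 3: true}
    fmt.Println(canEnter(queue, 1, replies, []int{2, 3})) // true: head of queue, all OKs received
}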

Shall I continue with Chandy–Misra–Haas Deadlock Detection (AND model)? 😎


Next up: Chandy–Misra–Haas Deadlock Detection (AND model) using RPC in Go.

🔎 Chandy–Misra–Haas Algorithm (AND Model)


In the AND model, a blocked process waits for resources from multiple processes and can proceed only when all of them respond:

It cannot continue until every process it waits on releases its resource.
It is therefore deadlocked exactly when it lies on a cycle in the wait-for graph.

Approach

1. Each process sends a probe message when it detects a resource wait.
2. A probe message includes:
   Initiating process ID
   Current holder process ID
   Target process ID
3. If a probe returns to its initiator (initiating process = target process), a deadlock is confirmed.

Server Code (`and_server.go`)

go

package main

import (
    "fmt"
    "net"
    "net/rpc"
    "sync"
)

// Probe carries one wait-for edge on behalf of an initiator.
type Probe struct {
    Initiator int
    Holder    int
    Target    int
}

type Response struct {
    Deadlock bool
}

type ANDModel struct {
    waitGraph map[int][]int // adjacency list: waitGraph[p] = processes p waits on
    mu        sync.Mutex
}

// SendProbe records the wait-for edge Holder -> Target, then checks
// whether the initiator now lies on a cycle (deadlock in the AND model).
func (a *ANDModel) SendProbe(probe *Probe, res *Response) error {
    a.mu.Lock()
    defer a.mu.Unlock()

    // Add the edge to the wait graph.
    a.waitGraph[probe.Holder] = append(a.waitGraph[probe.Holder], probe.Target)

    fmt.Printf("Wait Graph: %+v\n", a.waitGraph)

    // Check for a cycle (deadlock).
    if a.detectCycle(probe.Initiator) {
        fmt.Println("Deadlock detected!")
        res.Deadlock = true
    } else {
        res.Deadlock = false
    }

    return nil
}

// detectCycle runs a DFS from the initiator; reaching a node that is
// already on the current DFS stack means a cycle exists.
func (a *ANDModel) detectCycle(initiator int) bool {
    visited := make(map[int]bool)
    stack := make(map[int]bool)

    var dfs func(node int) bool
    dfs = func(node int) bool {
        if stack[node] {
            return true
        }
        if visited[node] {
            return false
        }
        visited[node] = true
        stack[node] = true

        for _, neighbor := range a.waitGraph[node] {
            if dfs(neighbor) {
                return true
            }
        }

        stack[node] = false
        return false
    }

    return dfs(initiator)
}

func main() {
    model := &ANDModel{
        waitGraph: make(map[int][]int),
    }
    rpc.Register(model)

    listener, err := net.Listen("tcp", ":1237")
    if err != nil {
        fmt.Println("Error starting server:", err)
        return
    }
    defer listener.Close()

    fmt.Println("AND Model RPC Server started on port 1237")

    for {
        conn, err := listener.Accept()
        if err != nil {
            fmt.Println("Connection error:", err)
            continue
        }
        go rpc.ServeConn(conn)
    }
}

Client Code (`and_client.go`)

go

package main

import (
    "fmt"
    "net/rpc"
)

type Probe struct {
    Initiator int
    Holder    int
    Target    int
}

type Response struct {
    Deadlock bool
}

func main() {
    client, err := rpc.Dial("tcp", "localhost:1237")
    if err != nil {
        fmt.Println("Connection error:", err)
        return
    }
    defer client.Close()

    // Each probe reports one wait-for edge (Holder -> Target). The first
    // probe records the initiator's own wait; the last one closes the
    // cycle 1 -> 2 -> 3 -> 1 and triggers deadlock detection.
    probes := []Probe{
        {Initiator: 1, Holder: 1, Target: 2}, // process 1 waits on 2
        {Initiator: 1, Holder: 2, Target: 3}, // process 2 waits on 3
        {Initiator: 1, Holder: 3, Target: 1}, // process 3 waits on 1: creates a cycle
    }

    for _, probe := range probes {
        var res Response
        fmt.Printf("Sending probe %+v...\n", probe)

        if err := client.Call("ANDModel.SendProbe", probe, &res); err != nil {
            fmt.Println("Error sending probe:", err)
            return
        }

        if res.Deadlock {
            fmt.Println("Deadlock detected!")
        } else {
            fmt.Println("No deadlock.")
        }
    }
}

✅ How It Works
1. Probe Generation
   Client sends a `Probe` message for each wait-for edge, carrying the initiator, holder, and target process IDs.
2. Graph Construction
   Server adds the edge (Holder → Target) to the directed wait graph.
3. Cycle Detection
   DFS-based cycle detection from the initiator:
   If a cycle is detected, a deadlock is reported.
   If no cycle is found, the system state is considered safe.
4. Example

   Process 1 -> Process 2
   Process 2 -> Process 3
   Process 3 -> Process 1 (cycle → deadlock)
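Note that each probe contributes exactly one directed edge (Holder → Target) to the wait graph, which is why the client above also reports the initiator's own wait (1 → 2); without that edge, the DFS starting from process 1 would never reach the cycle.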

Shall I proceed with Chandy–Misra–Haas Deadlock Detection (OR model) next? 😎


Next up: Chandy–Misra–Haas Deadlock Detection (OR model) using RPC in Go.

🔎 Chandy–Misra–Haas Algorithm (OR Model)


In the OR model:

A process waits for responses from multiple processes but needs only one of them to respond in order to proceed.
It is deadlocked only when none of the processes it depends on, directly or transitively, can ever respond.

Approach

1. Each process sends a probe message when it detects a resource wait.
2. A probe message includes:
   Initiating process ID
   Current holder process ID
   Target process ID
3. If a probe returns to its initiator (initiating process = target process), a deadlock is reported.
4. In the OR model, any single response from a waited-on process is enough to break the deadlock.

Server Code (`or_server.go`)

go

package main

import (
    "fmt"
    "net"
    "net/rpc"
    "sync"
)

type Probe struct {
    Initiator int
    Holder    int
    Target    int
}

type Response struct {
    Deadlock bool
}

type ORModel struct {
    waitGraph map[int][]int // adjacency list: waitGraph[p] = processes p waits on
    mu        sync.Mutex
}

// SendProbe records the wait-for edge Holder -> Target, then checks
// whether the initiator can reach a cycle.
func (o *ORModel) SendProbe(probe *Probe, res *Response) error {
    o.mu.Lock()
    defer o.mu.Unlock()

    // Add the edge to the wait graph.
    o.waitGraph[probe.Holder] = append(o.waitGraph[probe.Holder], probe.Target)

    fmt.Printf("Wait Graph: %+v\n", o.waitGraph)

    // Check for a cycle (deadlock).
    if o.detectCycle(probe.Initiator) {
        fmt.Println("Deadlock detected!")
        res.Deadlock = true
    } else {
        res.Deadlock = false
    }

    return nil
}

// detectCycle runs a DFS from the initiator. Note: this simplified demo
// uses the same cycle check as the AND server; a full OR-model detector
// would instead look for a knot, since a single reachable process that
// can still reply is enough to avoid deadlock in the OR model.
func (o *ORModel) detectCycle(initiator int) bool {
    visited := make(map[int]bool)
    stack := make(map[int]bool)

    var dfs func(node int) bool
    dfs = func(node int) bool {
        if stack[node] {
            return true
        }
        if visited[node] {
            return false
        }
        visited[node] = true
        stack[node] = true

        for _, neighbor := range o.waitGraph[node] {
            if dfs(neighbor) {
                return true
            }
        }

        stack[node] = false
        return false
    }

    return dfs(initiator)
}

func main() {
    model := &ORModel{
        waitGraph: make(map[int][]int),
    }
    rpc.Register(model)

    listener, err := net.Listen("tcp", ":1238")
    if err != nil {
        fmt.Println("Error starting server:", err)
        return
    }
    defer listener.Close()

    fmt.Println("OR Model RPC Server started on port 1238")

    for {
        conn, err := listener.Accept()
        if err != nil {
            fmt.Println("Connection error:", err)
            continue
        }
        go rpc.ServeConn(conn)
    }
}

Client Code (`or_client.go`)

go

package main

import (
    "fmt"
    "net/rpc"
)

type Probe struct {
    Initiator int
    Holder    int
    Target    int
}

type Response struct {
    Deadlock bool
}

func main() {
    client, err := rpc.Dial("tcp", "localhost:1238")
    if err != nil {
        fmt.Println("Connection error:", err)
        return
    }
    defer client.Close()

    // Each probe reports one wait-for edge (Holder -> Target). The first
    // probe records the initiator's own wait; the last one closes the
    // cycle 1 -> 2 -> 3 -> 1 and triggers deadlock detection.
    probes := []Probe{
        {Initiator: 1, Holder: 1, Target: 2}, // process 1 waits on 2
        {Initiator: 1, Holder: 2, Target: 3}, // process 2 waits on 3
        {Initiator: 1, Holder: 3, Target: 1}, // process 3 waits on 1: creates a cycle
    }

    for _, probe := range probes {
        var res Response
        fmt.Printf("Sending probe %+v...\n", probe)

        if err := client.Call("ORModel.SendProbe", probe, &res); err != nil {
            fmt.Println("Error sending probe:", err)
            return
        }

        if res.Deadlock {
            fmt.Println("Deadlock detected!")
        } else {
            fmt.Println("No deadlock.")
        }
    }
}

✅ How It Works
1. Probe Generation
   Client sends a `Probe` message for each wait-for edge, carrying the initiator, holder, and target process IDs.
2. Graph Construction
   Server adds the edge (Holder → Target) to the directed wait graph.
3. Cycle Detection
   DFS-based cycle detection from the initiator:
   If a cycle is detected → deadlock.
   If no cycle → the system is stable.
   Since this is the OR model, only one process needs to respond to prevent a deadlock.
4. Example

   Process 1 → Process 2
   Process 2 → Process 3
   Process 3 → Process 1 (cycle → deadlock)

🌟 Difference Between AND and OR Model

| Feature              | AND Model                            | OR Model                                    |
|----------------------|--------------------------------------|---------------------------------------------|
| Condition to proceed | All dependent processes must respond | At least one dependent process must respond |
| Complexity           | Higher                               | Lower                                       |
| Flexibility          | Strict                               | More flexible                               |
| Detection            | Direct cycle-based                   | Single response suffices                    |

🔥 All done! Anything else you’d like to add or modify? 😎


Let’s go step-by-step through the explanations of each implementation:

1. ✅ Scalar Time Calculation


🔎 What is Scalar Time?
Scalar time (or Lamport's Logical Clock) is a single integer value used to track the order of events in a distributed system.
Since different processes in a distributed system have no shared clock, Lamport proposed a logical clock to maintain a consistent event
order.

📌 Concept
Every process maintains a scalar clock:
Incremented on each local event.
Updated when receiving a message based on the sender’s timestamp.
Lamport's Condition:
If event `a` happened before event `b` → C(a) < C(b)
If `a` and `b` are concurrent → no direct ordering.
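For example, if a process's clock is 3 and it receives a message stamped 5, it sets its clock to max(3, 5) + 1 = 6; a purely local event would simply advance the clock from 3 to 4.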

🛠️ Implementation Breakdown
1. Server:
Maintains a single integer value (`clock`).
When a request is received:
If the request's timestamp is greater than the current clock → update the clock.
Increment the clock.
Return the updated time to the client.
2. Client:
Sends a timestamp to the server.
Server updates and sends back the new time.

🔥 Key Insight
Scalar time only orders events partially.
If two events are concurrent → scalar time cannot determine the order.

2. ✅ Vector Time Calculation


🔎 What is Vector Time?
Vector clocks extend scalar clocks to provide causal ordering of events in distributed systems.
Each process maintains a vector of size equal to the number of processes.
The vector reflects the state of other processes as observed by the current process.

📌 Concept
Initial state – Each element in the vector is initialized to zero.
Event Handling:
On a local event → increment the local vector clock.
On receiving a message → update the vector clock using element-wise maximum and increment the local element.
Happened-Before Relationship:
If two events `a` and `b` are represented by vectors `V(a)` and `V(b)`, then V(a) < V(b) iff:
∀i : V(a)[i] ≤ V(b)[i], and
∃i : V(a)[i] < V(b)[i].
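For example, V(a) = [2, 1, 0] and V(b) = [3, 1, 0] satisfy both conditions, so `a` happened before `b`; V(a) = [2, 1, 0] and V(b) = [1, 2, 0] are incomparable, so those two events are concurrent.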

🛠️ Implementation Breakdown
1. Server:
Maintains a vector clock.
On receiving a request:
Take element-wise maximum between server and client vector.
Increment the local element based on the process ID.
Return the updated vector to the client.
2. Client:
Sends its current vector state to the server.
Server updates the vector and sends it back.

🔥 Key Insight
Vector clocks give a happened-before relationship.
If vectors are not comparable → events are concurrent.

3. ✅ Lamport's Distributed Mutual Exclusion Algorithm


🔎 What is Lamport’s Mutual Exclusion?
Ensures that only one process can enter the critical section at a time.
Based on:
Logical timestamps
Total ordering of requests

📌 Concept
1. Request:
   A process sends a request with its timestamp to all other processes.
   Each receiving process:
   Queues the request.
   Replies with a timestamped "OK" acknowledgment.
2. Execution:
   A process can enter the critical section if:
   Its request is at the head of the queue.
   It has received a reply from all other processes.
3. Release:
   After execution, the process removes its request from the queue and notifies the other processes.

🛠️ Implementation Breakdown
1. Server:
Maintains a priority queue for requests.
On receiving a request:
Add to the queue.
Sort by timestamp (and PID in case of tie).
Reply with "OK".
2. Client:
Sends a request.
Waits for replies from all other processes.
Executes critical section when allowed.
Sends release message.

🔥 Key Insight
Total ordering of timestamps ensures fair execution.
Resolves conflicts using timestamps and process ID.

4. ✅ Chandy–Misra–Haas Algorithm (AND Model)


🔎 What is the AND Model?
A process is in a deadlock if it is waiting for multiple processes AND none of them can proceed.
Cycle detection in a resource allocation graph → Confirms a deadlock.

📌 Concept
1. Each process maintains a wait graph.

2. On detecting a wait state:


Send a probe containing:
Initiator (the process that started the request)
Holder (the process holding the resource)
Target (the process that needs the resource)
3. Cycle Detection:
If the initiator appears as the target → Deadlock confirmed.

🛠️ Implementation Breakdown
1. Server:
On receiving a probe:
Update the wait graph.
Use DFS to detect a cycle.
Reply if deadlock detected.
2. Client:
Sends a probe to the server.
Server updates graph and detects deadlock.

🔥 Key Insight
AND Model = All dependencies must be resolved for a deadlock to clear.
No single dependency breaking will resolve the deadlock.

5. ✅ Chandy–Misra–Haas Algorithm (OR Model)


🔎 What is the OR Model?
A process is in a deadlock state if it’s waiting for multiple processes OR at least one must release the resource.
If even one dependent process releases the resource → Deadlock avoided.

📌 Concept
1. Similar to the AND model:
Each process sends a probe on detecting a wait state.
Probe includes:
Initiator
Holder
Target
2. Detection Strategy:
If the initiator appears as the target → Deadlock confirmed.
In OR model, one resolved dependency is enough to avoid deadlock.
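Strictly speaking, a cycle alone does not imply deadlock in the OR model: the initiator is deadlocked only if it is in a knot, meaning no process reachable from it can ever reply. The demo server above approximates this with the same cycle check as the AND model, which suffices for the single-cycle example it constructs.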

🛠️ Implementation Breakdown
1. Server:
   On receiving a probe:
   Update the wait graph.
   Use DFS to detect a cycle from the initiator.
   Reply with the result (deadlock only if a cycle is found).
2. Client:
   Sends probes describing wait-for edges.
   Server reports a deadlock once the cycle is complete.

🔥 Key Insight
OR Model = Single resolved dependency is enough to avoid deadlock.
More relaxed than the AND model.

💡 Difference Between AND and OR Models

| Feature                       | AND Model                     | OR Model                                  |
|-------------------------------|-------------------------------|-------------------------------------------|
| Dependency resolution         | All dependencies must resolve | Any single dependency resolving is enough |
| Flexibility                   | Strict                        | More relaxed                              |
| Deadlock detection complexity | Higher                        | Lower                                     |
| Suitability                   | High-stakes resources         | Less critical resources                   |

🌟 Summary

| Algorithm               | Purpose                 | Key Insight                                   |
|-------------------------|-------------------------|-----------------------------------------------|
| Scalar Time             | Event ordering          | Partial ordering only                         |
| Vector Time             | Causal ordering         | Captures happened-before; detects concurrency |
| Lamport’s Mutex         | Critical section access | Total order using timestamps                  |
| Chandy–Misra–Haas (AND) | Deadlock detection      | All dependencies must resolve                 |
| Chandy–Misra–Haas (OR)  | Deadlock detection      | A single resolved dependency is enough        |

🔥 Now you’ve got a solid grasp on all the algorithms! Want to refine or modify anything? 😎