DISTRIBUTED MUTUAL EXCLUSION: CLASSIFICATION – REQUIREMENTS –
MEASURING PERFORMANCE – LAMPORT’S ALGORITHM – RICART–AGRAWALA
ALGORITHM – SUZUKI–KASAMI’S BROADCAST ALGORITHM.
MUTUAL EXCLUSION IN SINGLE-COMPUTER SYSTEM
VS DISTRIBUTED SYSTEM
• Solutions to the mutual exclusion problem can be easily implemented using shared variables in a
single-computer system.
• Because of the shared memory, the status of shared resources and users is readily available.
• But in distributed systems, shared resources and users are distributed and shared memory does not
exist.
• So approaches based on shared variables are not applicable; approaches based on message
passing must be used.
CLASSIFICATION OF MUTUAL EXCLUSION
ALGORITHMS
• Mutual exclusion algorithms in distributed systems are classified mainly into two classes:
• Token based
• Non-token based
• In the token-based approach, a unique token is shared among the sites; a site may enter the CS
only if it possesses the token.
• In the non-token-based approach, two or more successive rounds of messages are exchanged
among the processes to determine which process will enter the CS next.
• A process enters the critical section (CS) when an assertion, defined on its local variables,
becomes true.
REQUIREMENTS OF MUTUAL EXCLUSION
ALGORITHMS
• The primary objective of a mutual exclusion algorithm is to guarantee that only one request
accesses the CS at a time.
• In addition, the following characteristics are also considered important:
1. Freedom from deadlocks: Two or more sites/processes should not endlessly wait for
messages that will never arrive.
2. Freedom from starvation: A site must not wait indefinitely to execute the CS while other
sites are repeatedly executing the CS.
MEASURING PERFORMANCE
• Message Complexity
• Synchronization delay
• Response time
• System throughput
• Message complexity: The number of messages required per CS execution by a site.
• Synchronization delay: The time required, after a site leaves the CS, before the next site
enters the CS.
• Response time: The time interval a request waits for its CS execution to be over after its
request messages have been sent out.
• System throughput: The rate at which the system executes requests for the CS.
• If SD is the synchronization delay and E is the average critical section execution
time, then the throughput is given by the following equation:
System throughput = 1/(SD+E)
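• Worked example (the numbers are illustrative, not from the source): if SD = 2 ms and E = 8 ms, then system throughput = 1/(2 ms + 8 ms) = 100 CS executions per second.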
LOW AND HIGH LOAD PERFORMANCE
• Performance of a mutual exclusion algorithm depends upon the load.
• Under low load conditions, there is seldom more than one request for the
critical section present in the system simultaneously.
• Under heavy load conditions, there is always a pending request for the critical
section at a site.
BEST AND WORST CASE PERFORMANCE
• The best and worst cases coincide with low and high loads, respectively.
NON TOKEN BASED ALGORITHMS
• In non-token-based algorithms, a site communicates with a set of other sites to
decide who should execute the CS next.
• For a site Si, the request set Ri contains the ids of all those sites from which Si must
acquire permission before entering the CS.
• Non-token-based algorithms use timestamps to order requests for the CS and to
resolve conflicts between simultaneous requests for the CS.
• These algorithms maintain logical clocks and update them according to Lamport’s
scheme. Each request for the CS gets a timestamp, and requests with smaller
timestamps get priority over requests with larger timestamps.
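As a minimal sketch (the representation is an assumption, not from the source), requests can be held as (timestamp, site id) pairs and compared lexicographically, so that the smaller timestamp wins and ties are broken by the smaller site id:

```python
# Totally ordering CS requests by (timestamp, site id):
# smaller timestamp = higher priority; ties broken by smaller site id.

def higher_priority(req_a, req_b):
    """Return True if request req_a = (ts, site_id) precedes req_b."""
    return req_a < req_b  # tuple comparison: timestamp first, then site id

requests = [(5, 2), (3, 7), (3, 1)]
requests.sort()
print(requests)  # [(3, 1), (3, 7), (5, 2)] -> site 1's request is served first
```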
LAMPORT’S ALGORITHM
• Lamport developed a distributed mutual exclusion algorithm as an illustration of his
clock synchronization scheme. Requests for the CS are executed in the order of their
timestamps, and time is determined by logical clocks.
• When a site processes a request for the CS, it updates its local clock and assigns the
request a timestamp. The algorithm executes CS requests in increasing order of
timestamps.
LAMPORT’S ALGORITHM: Requesting the critical section
• When a site Si wants to enter the CS, it broadcasts a REQUEST(tsi, i) message to all
other sites and places the request on its own request_queuei.
• When a site Sj receives the REQUEST(tsi, i) message from site Si, it places
site Si’s request on request_queuej and returns a timestamped REPLY
message to Si.
LAMPORT’S ALGORITHM: Executing the critical section
• Site Si enters the CS when the following two conditions hold:
• L1: Si has received a message with timestamp larger than (tsi, i) from all
other sites.
• L2: Si’s own request is at the top of request_queuei.
LAMPORT’S ALGORITHM: Releasing the critical section
• Site Si, upon exiting the CS, removes its request from the top of its
request queue and broadcasts a timestamped RELEASE message to all
other sites.
• When a site Sj receives a RELEASE message from site Si, it removes Si’s
request from its request queue.
• When a site removes a request from its request queue, its own request
may come at the top of the queue, enabling it to enter the CS.
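The following is a minimal single-site sketch of Lamport’s algorithm, assuming an abstract message layer: the `broadcast` and `send` callables are placeholders supplied by the caller, not part of the source. It only illustrates the request queue, the logical clock update, and the two entry conditions L1 and L2.

```python
# One site in Lamport's mutual exclusion algorithm (message transport assumed abstract).
import heapq

class LamportSite:
    def __init__(self, site_id, all_ids, broadcast, send):
        self.id = site_id
        self.others = [i for i in all_ids if i != site_id]
        self.clock = 0
        self.queue = []                                  # min-heap of (timestamp, site_id)
        self.last_msg_ts = {i: 0 for i in self.others}   # highest timestamp heard per site
        self.my_request = None
        self.broadcast, self.send = broadcast, send

    def _tick(self, received_ts=0):
        self.clock = max(self.clock, received_ts) + 1    # Lamport clock update rule

    def request_cs(self):
        self._tick()
        self.my_request = (self.clock, self.id)
        heapq.heappush(self.queue, self.my_request)
        self.broadcast(("REQUEST", self.my_request))

    def on_request(self, ts, from_id):
        self._tick(ts)
        heapq.heappush(self.queue, (ts, from_id))
        self.send(from_id, ("REPLY", self.clock, self.id))

    def on_reply(self, ts, from_id):
        self._tick(ts)
        self.last_msg_ts[from_id] = max(self.last_msg_ts[from_id], ts)

    def on_release(self, ts, from_id):
        self._tick(ts)
        self.last_msg_ts[from_id] = max(self.last_msg_ts[from_id], ts)
        self.queue = [r for r in self.queue if r[1] != from_id]   # drop Si's request
        heapq.heapify(self.queue)

    def can_enter_cs(self):
        if self.my_request is None:
            return False
        # L1: a message with larger timestamp received from every other site.
        l1 = all(ts > self.my_request[0] for ts in self.last_msg_ts.values())
        # L2: own request is at the top of the request queue.
        l2 = bool(self.queue) and self.queue[0] == self.my_request
        return l1 and l2

    def release_cs(self):
        self.queue = [r for r in self.queue if r != self.my_request]
        heapq.heapify(self.queue)
        self.my_request = None
        self._tick()
        self.broadcast(("RELEASE", self.clock, self.id))
```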
LAMPORT’S ALGORITHM: Example
[Figures omitted: S2 entering the CS; later, S1 entering the CS]
LAMPORT’S ALGORITHM: Performance
• For each CS execution, Lamport’s algorithm requires N-1 REQUEST messages, N-1
REPLY messages, and N-1 RELEASE messages, i.e., 3(N-1) messages in total.
• Lamport’s algorithm can be optimized so that the number of messages per CS
execution lies between 2(N-1) and 3(N-1), by suppressing REPLY messages in
certain situations.
• A site sends a REPLY message to another site to give that site its permission to
enter the CS.
RICART–AGRAWALA ALGORITHM
• The Ricart–Agrawala algorithm uses Lamport-style logical clocks to assign a timestamp
to each critical section request.
• For every pair of concurrent requests, the site with the higher priority (smaller
timestamp) defers its REPLY to the request of the lower-priority site until it has
finished executing the CS.
• Each CS execution requires 2(N-1) messages: N-1 REQUEST messages and N-1
REPLY messages.
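A minimal sketch of the deferred-reply decision a site makes when a REQUEST arrives (the `site` object with fields `my_request`, `in_cs`, `deferred`, `id`, and `send` is an assumption; the message transport is left abstract):

```python
# Ricart-Agrawala decision at site i on receiving REQUEST(ts, j):
# defer the REPLY if this site is in the CS, or is requesting with higher priority.

def on_request(site, ts, j):
    incoming = (ts, j)
    requesting = site.my_request is not None
    defer = site.in_cs or (requesting and site.my_request < incoming)
    if defer:
        site.deferred.append(j)           # reply later, when leaving the CS
    else:
        site.send(j, ("REPLY", site.id))  # grant permission immediately

def release_cs(site):
    site.in_cs = False
    for j in site.deferred:               # send all deferred REPLY messages
        site.send(j, ("REPLY", site.id))
    site.deferred.clear()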
TOKEN BASED ALGORITHMS
• In token-based algorithms, a unique token is shared among the sites; a site may enter
the CS only if it possesses the token.
• A site that possesses the token sends it to a site that sends it a REQUEST message.
• If the site possessing the token is executing the CS, it sends the token only after it has
exited the CS.
• A site holding the token can repeatedly enter the critical section until it sends the
token to some other site.
SUZUKI–KASAMI’S BROADCAST ALGORITHM
• The main design issues in this algorithm are: (i) how to distinguish an outdated REQUEST
message from a current REQUEST message, and (ii) how to determine which site has an
outstanding request for the CS.
• A site Si keeps an array of integers RNi[1, … ,N], where RNi[j] denotes the largest
sequence number received in a REQUEST message so far from site Sj.
• The token contains a queue Q of requesting sites and an array of integers LN[1, … ,N],
where LN[j] is the sequence number of the request that site Sj executed most recently.
• After executing the CS, a site Si sets LN[i] = RNi[i] and then checks the condition
RNi[j] = LN[j] + 1 for all j to determine all the sites that are requesting the token; it
places their ids in the queue Q if these ids are not already present in Q.
• Finally, the site sends the token to the site whose id is at the head of Q.
REQUESTING THE CRITICAL SECTION:
• If requesting site Si does not have the token, it increments its sequence
number RNi[i] and sends a REQUEST(i, sn) message to all other sites.
(“sn” is the updated value of RNi[i])
• When a site receives the token, it enters the CS.
RELEASING THE CRITICAL SECTION:
• After executing the CS, site Si sets LN[i] = RNi[i] and updates the token queue as
described above.
• If the token queue is nonempty after the above update, Si deletes the top site id
from the token queue and sends the token to the site indicated.
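A single-site sketch of Suzuki–Kasami under the same assumptions as the earlier sketches (abstract `broadcast`/`send` placeholders; the token is modelled as a dictionary holding Q and LN):

```python
# One site in the Suzuki-Kasami broadcast algorithm (transport assumed abstract).
from collections import deque

class SuzukiKasamiSite:
    def __init__(self, site_id, n, broadcast, send):
        self.id = site_id
        self.n = n
        self.RN = [0] * n        # RN[j]: largest sequence number received from site j
        self.token = None        # {"Q": deque of waiting site ids, "LN": [...]} when held
        self.in_cs = False
        self.broadcast, self.send = broadcast, send

    def request_cs(self):
        if self.token is not None:
            self.in_cs = True    # the token holder may enter the CS directly
        else:
            self.RN[self.id] += 1
            self.broadcast(("REQUEST", self.id, self.RN[self.id]))

    def on_request(self, j, sn):
        self.RN[j] = max(self.RN[j], sn)
        # A current (non-outdated) request satisfies RN[j] == LN[j] + 1.
        if self.token is not None and not self.in_cs and \
                self.RN[j] == self.token["LN"][j] + 1:
            self._send_token(j)

    def on_token(self, token):
        self.token = token
        self.in_cs = True

    def release_cs(self):
        self.in_cs = False
        self.token["LN"][self.id] = self.RN[self.id]   # record the request just executed
        for j in range(self.n):                        # queue all outstanding requesters
            if j not in self.token["Q"] and self.RN[j] == self.token["LN"][j] + 1:
                self.token["Q"].append(j)
        if self.token["Q"]:
            self._send_token(self.token["Q"].popleft())

    def _send_token(self, j):
        token, self.token = self.token, None
        self.send(j, ("TOKEN", token))
```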
SECURITY
Potential Security Violations – Design Principles for Secure Systems –
The Access Matrix Model and Implementation – The Access Control List Method.
Potential Security Violations
• Protection and security deal with the control of unauthorized access to, and use of, the
software and hardware resources of a computer.
• Potential security violations are classified into three categories:
• Unauthorized information release
• Unauthorized information modification
• Unauthorized denial of use
Design Principles for Secure Systems
• Least common mechanism: Mechanisms common to more than one user should be
minimized, because shared mechanisms represent potential information paths
between users and thus a threat to security.
The Access Matrix Model
• Current objects: The finite set of entities to which access is controlled, denoted
by ‘O’. (Examples: files, memory segments.)
• Current subjects: The finite set of entities that access current objects, denoted
by ‘S’. (Example: a process.) Usually S is a subset of O.
• P is a matrix, called the access matrix, with a row for every current subject and
a column for every current object; the entry P[s,o] is the set of access rights
that subject s holds for object o.
The access matrix can be implemented using one of the following three methods:
1. Capabilities
2. The Access Control List Method
3. The Lock-Key Method
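As a minimal sketch (the subjects, objects, and rights below are illustrative assumptions), the access matrix P can be modelled as a nested dictionary indexed by subject and object:

```python
# Hypothetical access matrix: P[s][o] = set of access rights of subject s to object o.
P = {
    "process_1": {"file_a": {"read", "write"}, "file_b": {"read"}},
    "process_2": {"file_a": {"read"},          "file_b": set()},
}

def is_allowed(subject, right, obj):
    """Check whether `subject` may perform `right` on `obj`."""
    return right in P.get(subject, {}).get(obj, set())

print(is_allowed("process_1", "write", "file_a"))  # True
print(is_allowed("process_2", "write", "file_a"))  # False
```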
Implementation of Access Matrix: Capabilities
• This method corresponds to the row-wise decomposition of the access matrix.
• Each subject s is assigned a list of tuples (o, P[s,o]) for all objects o that it is
allowed to access.
• At any time a subject is authorized to access only those objects for which it has
capabilities.
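A short sketch of the row-wise decomposition (the matrix here is a small hypothetical example):

```python
# Row-wise decomposition of a hypothetical access matrix into a capability list:
# each subject s keeps the pairs (o, P[s, o]) for the objects it may access.
P = {"process_1": {"file_a": {"read", "write"}, "file_b": {"read"}, "file_c": set()}}

def capability_list(P, subject):
    return [(obj, rights) for obj, rights in P[subject].items() if rights]

print(capability_list(P, "process_1"))
# [('file_a', {'read', 'write'}), ('file_b', {'read'})]
```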
Capabilities
• A capability has two fields: an object descriptor and the access rights allowed for the object.
• A request to access a word within an object has the form (capability id, offset).
• The capability id is used to search the capability list of the user to locate the capability, which
contains the allowed access rights and the object descriptor.
• The system checks whether the requested access is permitted by checking the capability.
• The base address of the object is obtained from the object table by using the object descriptor.
• The base address is added to the offset in the request to access the exact memory location of the word.
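A sketch of how an access of the form (capability id, offset) might be resolved; the table layouts and values below are assumptions for illustration only:

```python
# Hypothetical structures for capability-based addressing.
cap_list = {                                   # a user's capability list, keyed by capability id
    7: {"rights": {"read"}, "object_descriptor": 42},
}
object_table = {42: {"base": 0x4000}}          # object descriptor -> base address of the object

def resolve(cap_id, offset, requested_right):
    cap = cap_list[cap_id]                     # 1. locate the capability via its id
    if requested_right not in cap["rights"]:   # 2. check that the requested access is allowed
        raise PermissionError("access not permitted by capability")
    base = object_table[cap["object_descriptor"]]["base"]   # 3. object table lookup
    return base + offset                       # 4. exact memory location of the word

print(hex(resolve(7, 0x10, "read")))           # 0x4010
```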
Capability Based Addressing
• There are two important features of capability-based addressing:
• Relocatability: An object can be relocated in memory without changing the
capabilities that refer to it, because capabilities contain object descriptors
rather than physical addresses; only the base address in the object table
needs to be updated.
• Sharing: Sharing is made easy, as several programs can share the same object
with different names (object descriptors) for the object.
Capability: Implementation
• There are two ways to implement capabilities:
• Tagged approach
• Partitioned approach
• In the tagged approach, one or more bits are attached to each memory
location and to every processor register.
• If the tag bit is 1 (ON), it indicates the presence of a capability in the memory
word or register.
Capability: Implementation
• In the partitioned approach, capabilities and ordinary data are partitioned,
i.e., stored separately.
• There are two segments for every object: one segment stores only the
ordinary data and the other stores only the capabilities of the object.
• The processor also has two sets of registers: one for ordinary data and the
other for capabilities.
The Access Control List Method
• This method corresponds to the column-wise decomposition of the access matrix.
• Each object o is assigned a list of pairs (s, P[s,o]) for all subjects s
that are allowed to access the object.
• When a subject s requests access α to object o, the system searches the access
control list of o to find out whether an entry (s, Φ) exists for subject s.
• If an entry (s, Φ) exists for subject s, then the system checks whether the
requested access is permitted (i.e., α ∈ Φ).
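A minimal sketch of this check (the ACL contents and names are assumptions):

```python
# Column-wise decomposition: each object o keeps entries (subject, rights).
# Checking a request (s, alpha, o) against the ACL of o.
acl = {"file_a": {"process_1": {"read", "write"}, "process_2": {"read"}}}

def check_access(subject, alpha, obj):
    entry = acl[obj].get(subject)                 # look for an entry (s, Φ) in the ACL of o
    return entry is not None and alpha in entry   # permitted iff α ∈ Φ

print(check_access("process_2", "write", "file_a"))  # False
```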
Access Control List Method: Problems
• Efficiency of execution: Every access to an object requires a search of its access
control list for the requesting subject, which can be slow.
• Efficiency of storage: Since the list contains every subject that may access the
protected object along with its access rights, the list can require a huge amount
of storage.
Access Control List Method: Implementation
Efficiency of execution: Solution
Shadow register:
• Stores the access rights of a subject w.r.t. an object when the object is first
accessed.
• Subsequent accesses of the object can refer to the shadow register for the
access rights.
• When revoking the access rights of a particular subject, the corresponding
shadow register should also be cleared.
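A rough software model of this idea (the real mechanism is a hardware register; the dictionary and names below are assumptions made purely for illustration):

```python
# Shadow-register style caching of access rights, modelled as a dictionary.
acl = {"file_a": {"process_1": {"read"}}}
shadow = {}                                    # (subject, object) -> cached rights

def access(subject, alpha, obj):
    if (subject, obj) not in shadow:           # first access: search the ACL
        shadow[(subject, obj)] = acl[obj].get(subject, set())
    return alpha in shadow[(subject, obj)]     # later accesses use the cached rights

def revoke(subject, obj):
    acl[obj].pop(subject, None)
    shadow.pop((subject, obj), None)           # the shadow entry must also be cleared
```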
Access Control List Method: Implementation
Efficiency of Storage: Solution
Protection group:
• A solution to the large storage requirement caused by a large number of users.
• Subjects are divided into protection groups, and the access control list consists
of the names of groups along with their access rights.
• All subjects in a protection group have identical access rights to the object.
• A subject supplies its protection group name along with the requested access
to the system.
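A short sketch of a group-based ACL check (group names, membership, and rights are assumptions):

```python
# Protection groups: the ACL stores group names; all members of a group get the
# same rights. A subject presents its group name with the requested access.
groups = {"students": {"alice", "bob"}}
acl = {"grades.txt": {"students": {"read"}}}

def check_group_access(subject, group, alpha, obj):
    return subject in groups.get(group, set()) and alpha in acl[obj].get(group, set())

print(check_group_access("alice", "students", "read", "grades.txt"))  # True
```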
Access Control List Method:
Authority to change Access control list
• There are two methods to control the propagation of access rights:
• Self Control
• Hierarchical control
• Self Control Policy: The owner process of an object has a special access right by which
it can modify the access control list of the object.
• The owner is the creator of the object.
• Disadvantage: Control is centralized in one process.
Access Control List Method:
Authority to change Access control list
• Hierarchical Control Policy: When an object is created, its owner specifies the set of
processes that have the right to modify the access control list of the new object.
• Processes are arranged in a hierarchy.
• A process can modify the access control lists associated with all processes below it
in the hierarchy.
The Lock-Key Method
• A hybrid of the capability-based method and the access control list method that
includes features of both.
• Every subject has a capability list containing tuples of the form (o, k), where k is a
key for object o; every object o has an access control list containing lock entries of
the form (l, Ψ), where l is a lock and Ψ is a set of access rights.
• When a subject s requests access α to object o, the system checks the capability
list of s for a tuple (o, k). If no such tuple is found, the access is not permitted.
• Otherwise, access is permitted only if there exists a lock entry (l, Ψ) in the
access control list of the object o such that k = l and α ∈ Ψ.
• To revoke the access rights of a subject to an object, simply delete the lock
entry corresponding to the key of the subject.
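A minimal sketch of the lock-key check (the data layouts, key values, and names are assumptions):

```python
# Lock-key check: a subject presents a capability (o, k); access alpha is granted
# only if the ACL of o has a lock entry (l, Ψ) with k == l and alpha ∈ Ψ.
capabilities = {"process_1": {"file_a": 1234}}       # subject -> {object: key k}
locks = {"file_a": {1234: {"read", "write"}}}        # object  -> {lock l: rights Ψ}

def lock_key_access(subject, alpha, obj):
    k = capabilities.get(subject, {}).get(obj)       # key from the subject's capability list
    return k in locks.get(obj, {}) and alpha in locks[obj][k]

def revoke(subject, obj):
    k = capabilities.get(subject, {}).get(obj)
    locks.get(obj, {}).pop(k, None)                  # delete the lock entry matching the key

print(lock_key_access("process_1", "write", "file_a"))  # True
```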