
KGiSL Institute of Technology

(Approved by AICTE, New Delhi; Affiliated to Anna University, Chennai)


Recognized by UGC, Accredited by NBA (IT)
365, KGiSL Campus, Thudiyalur Road, Saravanampatti, Coimbatore – 641035.

Department of Artificial Intelligence & Data Science


Name of the Faculty : Ms.T.Suganya

Subject Name & Code : CS3551 / DISTRIBUTED COMPUTING

Branch & Department : B.Tech & AI&DS

Year & Semester : III / V

Academic Year : 2022-23 (ODD)


SYLLABUS

UNIT I INTRODUCTION 8
Introduction: Definition-Relation to Computer System Components – Motivation – Message - Passing Systems versus Shared
Memory Systems – Primitives for Distributed Communication – Synchronous versus Asynchronous Executions – Design Issues and
Challenges; A Model of Distributed Computations: A Distributed Program – A Model of Distributed Executions – Models of
Communication Networks – Global State of a Distributed System.

UNIT II LOGICAL TIME AND GLOBAL STATE 10


Logical Time: Physical Clock Synchronization: NTP – A Framework for a System of Logical Clocks – Scalar Time – Vector Time;
Message Ordering and Group Communication: Message Ordering Paradigms – Asynchronous Execution with Synchronous
Communication – Synchronous Program Order on Asynchronous System – Group Communication – Causal Order – Total Order;
Global State and Snapshot Recording Algorithms: Introduction – System Model and Definitions – Snapshot Algorithms for FIFO
Channels.

UNIT III DISTRIBUTED MUTEX AND DEADLOCK 10


Distributed Mutual exclusion Algorithms: Introduction – Preliminaries – Lamport’s algorithm – Ricart- Agrawala’s Algorithm –– Token-
Based Algorithms – Suzuki-Kasami’s Broadcast Algorithm; Deadlock Detection in Distributed Systems: Introduction – System Model
– Preliminaries – Models of Deadlocks – Chandy-Misra-Haas Algorithm for the AND model and OR Model.

UNIT IV CONSENSUS AND RECOVERY 10


Consensus and Agreement Algorithms: Problem Definition – Overview of Results – Agreement in a Failure-Free
System(Synchronous and Asynchronous) – Agreement in Synchronous Systems with Failures; Checkpointing and Rollback
Recovery: Introduction – Background and Definitions – Issues in Failure Recovery – Checkpoint-based Recovery – Coordinated
Checkpointing Algorithm - - Algorithm for Asynchronous Checkpointing and Recovery

UNIT V CLOUD COMPUTING 7


Definition of Cloud Computing – Characteristics of Cloud – Cloud Deployment Models – Cloud Service Models – Driving Factors and
Challenges of Cloud – Virtualization – Load Balancing – Scalability and Elasticity – Replication – Monitoring
Course Outcomes

OUTCOMES:
Upon the completion of this course, the student will be able to
CO1: Explain the foundations of distributed systems (K2)
CO2: Solve synchronization and state consistency problems (K3)
CO3: Use resource sharing techniques in distributed systems (K3)
CO4: Apply working model of consensus and reliability of distributed systems (K3)
CO5: Explain the fundamentals of cloud computing (K2)
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

1. BLOCKING/NON-BLOCKING, SYNCHRONOUS/ASYNCHRONOUS PRIMITIVES


 Message send and message receive communication primitives are denoted Send() and Receive(), respectively.

 A Send primitive has at least two parameters: the destination, and the buffer in the user space containing the data to be sent.

 A Receive primitive has at least two parameters: the source from which the data is to be received, and the user buffer into which the data is to be received.

 There are two ways of sending data when the Send primitive is invoked: the buffered option and the unbuffered option.
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

1. BLOCKING/NON-BLOCKING, SYNCHRONOUS/ASYNCHRONOUS PRIMITIVES

 The buffered option, which is the standard option, copies the data from the user buffer to the kernel buffer.

 The data later gets copied from the kernel buffer onto the network.

 In the unbuffered option, the data gets copied directly from the user buffer onto the network.

 For the Receive primitive, the buffered option is usually required, because the data may already have arrived when the primitive is invoked and needs a storage place in the kernel.
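The buffered/unbuffered distinction can be illustrated with a toy Python sketch (not from the slides): plain lists stand in for the kernel's message store, and a bytearray plays the role of the user buffer.

```python
# Toy illustration (not from the slides) of buffered vs unbuffered Send.
# Plain Python lists stand in for the kernel's message store.

def buffered_send(user_buffer, kernel):
    # Buffered option: the data is copied from the user buffer into the
    # kernel buffer, so the user may safely reuse the buffer afterwards.
    kernel.append(bytes(user_buffer))      # bytes() makes an independent copy

def unbuffered_send(user_buffer, kernel):
    # Unbuffered option: the kernel keeps a reference to the user buffer;
    # the data is read directly from user space when it goes on the network.
    kernel.append(user_buffer)

user = bytearray(b"hello")
kernel_buffered, kernel_unbuffered = [], []
buffered_send(user, kernel_buffered)
unbuffered_send(user, kernel_unbuffered)

user[0:5] = b"HELLO"                       # the process reuses its buffer

print(bytes(kernel_buffered[0]))           # b'hello' - the copy is unaffected
print(bytes(kernel_unbuffered[0]))         # b'HELLO' - the overwrite leaked
```

This is why the unbuffered option is only safe when the sender does not touch the buffer until the data has actually left user space.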
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

Blocking primitives
The primitive waits for the message to be delivered; the execution of the invoking process is blocked.

The sending process must wait after a send until an acknowledgement is made by the receiver.

The receiving process must wait for the expected message from the sending process.

Receipt is detected by polling a common buffer or via an interrupt.

This is a form of synchronization, or synchronous communication.

A primitive is blocking if control returns to the invoking process only after the processing for the primitive completes.
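As a minimal sketch (not from the slides; two threads in one process stand in for the two communicating processes), blocking Send/Receive can be modelled with Python's queue plus an acknowledgement event:

```python
# Toy sketch of blocking Send/Receive between two threads.
# queue.Queue.get() blocks the receiver until a message arrives; the
# sender blocks on an acknowledgement Event, mirroring "the sending
# process must wait until an acknowledgement is made by the receiver".
import queue
import threading

channel = queue.Queue()
ack = threading.Event()
log = []

def blocking_send(msg):
    channel.put(msg)
    ack.wait()                 # sender is blocked until the receiver acks
    log.append("send complete")

def receiver():
    msg = channel.get()        # blocks until the expected message arrives
    log.append(f"received {msg}")
    ack.set()                  # acknowledge receipt, unblocking the sender

t = threading.Thread(target=receiver)
t.start()
blocking_send("m1")
t.join()
print(log)                     # ['received m1', 'send complete']
```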
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

Non-blocking primitives

If Send is non-blocking, it returns control to the caller immediately, before the message is sent.

The advantage of this scheme is that the sending process can continue computing in parallel with the message transmission, instead of having the CPU go idle.

This is a form of asynchronous communication.

A primitive is non-blocking if control returns to the invoking process immediately after invocation, even though the operation has not completed.

For a non-blocking Send, control returns to the process even before the data is copied out of the user buffer.

For a non-blocking Receive, control returns to the process even before the data may have arrived from the sender.
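A minimal Python sketch of this overlap between computation and transmission (not from the slides; the background thread stands in for the network):

```python
# Toy sketch of a non-blocking Send: put_nowait() returns control
# immediately, so the caller keeps computing while a background
# "network" thread drains the channel in parallel.
import queue
import threading

channel = queue.Queue()
delivered = []

def network():                      # stands in for message transmission
    delivered.append(channel.get())

t = threading.Thread(target=network)
t.start()

channel.put_nowait("m1")            # non-blocking: returns immediately
partial_sums = [sum(range(n)) for n in range(5)]   # keep computing

t.join()
print(delivered)        # ['m1']
print(partial_sums)     # [0, 0, 1, 3, 6]
```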
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

Figure: (a) a blocking Send primitive; (b) a non-blocking Send primitive.
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

Synchronous

A Send or a Receive primitive is synchronous if both the Send() and Receive() handshake with each other.

The processing for the Send primitive completes only after the invoking processor learns that the corresponding Receive primitive has also been invoked and that the receive operation has been completed.

The processing for the Receive primitive completes when the data to be received is copied into the receiver's user buffer.
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

Asynchronous

A Send primitive is asynchronous if control returns to the invoking process after the data item to be sent has been copied out of the user-specified buffer.

For non-blocking primitives, a return parameter on the primitive call returns a system-generated handle which can later be used to check the status of completion of the call.
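The handle mechanism can be sketched with Python's concurrent.futures, where the Future object plays the role of the system-generated handle (an illustrative analogy, not the slides' API):

```python
# Toy sketch of the "system-generated handle" idea. submit() returns
# immediately with a Future handle; the caller later checks completion
# and collects the result.
import time
from concurrent.futures import ThreadPoolExecutor

def transmit(msg):
    time.sleep(0.05)           # stands in for copying data out / network delay
    return f"sent {msg}"

pool = ThreadPoolExecutor(max_workers=1)
handle = pool.submit(transmit, "m1")   # control returns at once

# ... the process continues computing here ...

print(handle.done())           # may still be False right after the call
print(handle.result())         # blocks (like a Wait) until completion
pool.shutdown()
```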
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

 There are four versions of the Send primitive:
 synchronous blocking,
 synchronous non-blocking,
 asynchronous blocking,
 asynchronous non-blocking.
 For the Receive primitive, there are the blocking synchronous and non-blocking synchronous versions.
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

 Synchronous blocking Send:

 The data gets copied from the user buffer to the kernel buffer and is then sent over the network.

 After the data is copied to the receiver's system buffer and a Receive call has been issued, an acknowledgement back to the sender causes control to return to the process that invoked the Send operation, completing the Send.
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

 Synchronous non-blocking Send:

 Control returns to the invoking process as soon as the copy of data from the user buffer to the kernel buffer is initiated.

 A parameter in the non-blocking call also gets set with the handle of a location that the user process can later check for the completion of the synchronous send operation.
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

 Asynchronous blocking Send:

 The user process that invokes the Send is blocked until the data is copied from the user's buffer to the kernel buffer.

 Asynchronous non-blocking Send:

 The user process that invokes the Send is blocked only until the transfer of the data from the user's buffer to the kernel buffer is initiated.

 Control returns to the user process as soon as this transfer is initiated, and a parameter in the non-blocking call also gets set with the handle of a location that the user process can later check, using the Wait operation, for the completion of the asynchronous Send operation.
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

 Blocking synchronous Receive: the Receive call blocks until the expected data arrives and is written into the specified user buffer; then control is returned to the user process.

 Non-blocking synchronous Receive: the Receive call causes the kernel to register the call and return the handle of a location that the user process can later check for the completion of the non-blocking Receive operation.
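A toy Python sketch of the non-blocking Receive pattern (not from the slides; queue.Queue stands in for the kernel's message store, and polling the queue stands in for checking the handle):

```python
# Toy sketch of a non-blocking synchronous Receive. get_nowait() returns
# immediately; if nothing has arrived it raises Empty, so the process
# polls until the data is in place.
import queue

channel = queue.Queue()
received = None

try:
    received = channel.get_nowait()   # nothing sent yet: raises queue.Empty
except queue.Empty:
    pass                              # control returned without the data

channel.put("m1")                     # the message arrives later

while received is None:               # later check for completion
    try:
        received = channel.get_nowait()
    except queue.Empty:
        pass

print(received)                       # m1
```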
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

2. PROCESSOR SYNCHRONY
Similar to the synchronous and asynchronous communication primitives, there is also a classification of synchronous versus asynchronous processors.

Processor synchrony indicates that all the processors execute in lock-step with their clocks synchronized.

Since distributed systems do not follow a common clock, this abstraction is implemented using some form of barrier synchronization, which ensures that no processor begins executing the next step of code until all the processors have completed executing the previous steps of code assigned to each of them.
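The barrier idea maps directly onto Python's threading.Barrier; a minimal sketch (threads stand in for processors, not from the slides):

```python
# Toy sketch of barrier synchronization enforcing lock-step rounds:
# no thread starts round r+1 until every thread has finished round r,
# which is exactly the processor-synchrony abstraction.
import threading

N_PROCS, N_ROUNDS = 3, 2
barrier = threading.Barrier(N_PROCS)
log = []
lock = threading.Lock()

def processor(pid):
    for r in range(N_ROUNDS):
        with lock:
            log.append((r, pid))   # this round's step of code
        barrier.wait()             # wait for ALL processors to finish round r

threads = [threading.Thread(target=processor, args=(p,)) for p in range(N_PROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

rounds = [r for r, _ in log]
print(rounds == sorted(rounds))    # True: all round-0 steps precede round 1
```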
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

3. LIBRARIES AND STANDARDS

There exists a wide range of primitives for message passing. The Message Passing Interface (MPI) library and the PVM (Parallel Virtual Machine) library are used largely by the scientific community.

Many commercial software products (banking, payroll, etc.) use proprietary primitive libraries supplied with the software marketed by the vendors.

Remote Procedure Call (RPC): In RPC, the called procedure need not exist in the same address space as the calling procedure. The two processes may be on the same system, or they may be on different systems with a network connecting them.
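A minimal RPC sketch using Python's standard-library xmlrpc modules (not from the slides; server and client run in one process here only for brevity):

```python
# Toy RPC sketch: the client calls add() as if it were local, but the
# procedure actually executes in the server's address space.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]        # OS-assigned free port

t = threading.Thread(target=server.serve_forever, daemon=True)
t.start()

client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)              # looks local; runs in the server
print(result)                          # 5

server.shutdown()
```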
PRIMITIVES FOR DISTRIBUTED COMMUNICATION

3. LIBRARIES AND STANDARDS

Message Passing Interface (MPI): MPI primarily addresses the message-passing parallel programming model: data is moved from the address space of one process to that of another through cooperative operations on each process.

Parallel Virtual Machine (PVM): PVM is designed to allow a network of heterogeneous Unix and/or Windows machines to be used as a single distributed parallel processor.

Remote Method Invocation (RMI): RMI allows a programmer to write object-oriented programs in which objects on different computers can interact across a distributed network.
SYNCHRONOUS VERSUS ASYNCHRONOUS EXECUTIONS
The two types of process execution:
Synchronous
Asynchronous

Asynchronous execution:
Here, every communicating process can have a different observation of the order of the messages being exchanged.

In an asynchronous execution:
• there is no processor synchrony and there is no bound on the drift rate of processor clocks
• message delays (propagation time + transmission time) are finite but unbounded
• there is no upper bound on the time taken by a process to execute a step
SYNCHRONOUS VERSUS ASYNCHRONOUS EXECUTIONS

Synchronous execution

Communication among processes is considered synchronous when every process observes the same order of messages within the system.

In the same manner, the execution is considered synchronous when every individual process in the system observes the same total order of all the events that happen within it.

In a synchronous execution:
• processors are synchronized and the clock drift rate between any two processors is bounded
• message delivery times are such that delivery occurs in one logical step or round
• there is a known upper bound on the time taken by a process to execute a step
SYNCHRONOUS VERSUS ASYNCHRONOUS EXECUTIONS
Emulating an asynchronous system by a synchronous system (A → S):
• An asynchronous program can be emulated on a synchronous system.
• The synchronous system is a special case of an asynchronous system: all communication finishes within the same round in which it is initiated.
Emulating a synchronous system by an asynchronous system (S → A):
• A synchronous program can be emulated on an asynchronous system using a tool called a synchronizer.
Emulations for a failure-free system:
• If system A can be emulated by system B, denoted A/B, and a problem is not solvable in B, then it is also not solvable in A.
• If a problem is solvable in A, it is also solvable in B.
• Hence, in a sense, all four classes are equivalent in terms of computability in failure-free systems.
