DSCC Unit 1 PDF

This document provides an overview of distributed systems, defining them as collections of independent entities that cooperate to solve problems. It discusses the characteristics, motivations, and models of distributed systems, including communication methods and global states. Additionally, it compares distributed systems with centralized systems and explores concepts such as message-passing, shared memory, and various communication models.


UNIT I INTRODUCTION

Definition - Relation to computer system components - Message-passing systems versus shared memory systems - Primitives for distributed communication - Synchronous versus asynchronous executions. A model of distributed computations: A distributed program - A model of distributed executions - Models of communication networks - Global state of a distributed system.

Distributed Systems Definition

A distributed system is a collection of independent entities that cooperate to solve a problem that cannot be solved by any one of them individually.

Features of distributed systems

 No common physical clock
 No shared memory
 Geographical separation
 Autonomy and heterogeneity

Differences between centralized and distributed systems

Centralized systems                                             | Distributed systems
Centralized systems have non-autonomous components.             | Distributed systems have autonomous components.
Centralized systems are built using homogeneous components.     | Distributed systems are built using heterogeneous components.
Centralized systems have a single point of control and failure. | Distributed systems have multiple points of control and failure.

Explain Relation to computer system components

Relation to computer system components

 A typical distributed system is shown in Figure 1.1.
 Each computer has a memory-processing unit, and the computers are connected by a communication network.
 Figure 1.2 shows the relationships of the software components that run on each of the computers and use the local operating system and network protocol stack for functioning.
 The distributed software is also termed middleware.
 A distributed execution is the execution of processes across the distributed system.
 An execution is also sometimes termed a computation or a run.
 The middleware is the distributed software that drives the distributed system, while providing transparency of heterogeneity at the platform level.
 Examples of middleware:
1. Object Management Group’s (OMG) Common Object Request Broker Architecture (CORBA)
2. Remote Procedure Call (RPC)
3. Message Passing Interface (MPI)

Motivation to Distributed Systems

 Inherently distributed computations
 Resource sharing
 Access to geographically remote data and resources
 Enhanced reliability
 Increased performance/cost ratio
 Scalability
Explain in detail about Parallel Systems with a neat example

Relation to Parallel Systems

A parallel system is a system in which multiple processors have direct access to shared memory that forms a common address space.

Characteristics of parallel systems

Multiprocessor System

 A multiprocessor system is a parallel system.
 The multiple processors have direct access to shared memory, which forms a common address space.
 A multiprocessor is a set of processors connected by a communication network.
 There are two standard architectures for parallel systems:
(a) Uniform memory access (UMA) multiprocessor.
(b) Non-uniform memory access (NUMA) multiprocessor.

Omega network

 An Omega network is a network configuration used in parallel computing architectures.
 An Omega network is a multistage interconnection network.
 The outputs of each stage are connected to the inputs of the next stage.
 A multistage Omega network is formed from 2×2 switches.
 Each 2×2 switch can route data arriving on either of its two input wires to either of its two output wires.
 The Omega network connecting n processors with n memory units has n/2 switching elements of size 2×2 arranged in log2 n stages.
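The stage-to-stage wiring of an Omega network follows a perfect shuffle, i.e., a one-bit left rotation of the input's label. A minimal sketch (the function name `perfect_shuffle` is our own; n is assumed to be a power of two):

```python
# Sketch of the perfect-shuffle wiring used between stages of an Omega network.
def perfect_shuffle(i, n):
    """Left-rotate the log2(n)-bit label of input i by one bit."""
    bits = n.bit_length() - 1        # log2(n) bits in each label
    msb = (i >> (bits - 1)) & 1      # the bit that wraps around
    return ((i << 1) & (n - 1)) | msb

# Wiring for n = 8: input i of one stage feeds position perfect_shuffle(i, 8)
# of the next stage.
print([perfect_shuffle(i, 8) for i in range(8)])   # [0, 2, 4, 6, 1, 3, 5, 7]
```

Note how inputs 0–3 spread to the even positions and inputs 4–7 to the odd positions, which is what lets log2 n stages of 2×2 switches connect any input to any output.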
Butterfly network

 Unlike the Omega network, the interconnection pattern between a pair of adjacent stages of a Butterfly network depends not only on n but also on the stage number s.
 The recursive expression is as follows. Let there be M = n/2 switches per stage, and let stage s ∈ [0, log2 n − 1]. Then switch ⟨x, s⟩ is connected to switch ⟨y, s+1⟩ if either x = y (a straight edge) or x XOR y has exactly one 1 bit, in the (s+1)th most significant bit position (a cross edge).

Multicomputer parallel System

A multicomputer parallel system is a parallel system in which the multiple processors do not have direct access to shared memory. The memory of the multiple processors may or may not form a common address space. Such computers usually do not have a common clock.
Figure 5(a) shows a wrap-around 4×4 mesh. For a k×k mesh, which contains k^2 processors, the maximum path length between any two processors is 2(k/2 − 1). Routing can be done along the Manhattan grid. Figure 5(b) shows a four-dimensional hypercube. A k-dimensional hypercube has 2^k processor-and-memory units. Each such unit is a node in the hypercube and has a unique k-bit label. Each of the k dimensions is associated with a bit position in the label.
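The k-bit labelling can be illustrated with a short sketch (the function name `hypercube_neighbours` is our own): flipping one bit of a node's label yields its neighbour along that dimension, so every node has exactly k neighbours.

```python
# Sketch: neighbours of a node in a k-dimensional hypercube.
# Each node has a k-bit label; flipping bit d gives the neighbour
# along dimension d.
def hypercube_neighbours(label, k):
    return [label ^ (1 << d) for d in range(k)]

# Node 000 in a 3-cube is adjacent to 001, 010, and 100:
print(hypercube_neighbours(0b000, 3))   # [1, 2, 4]
```

Routing in a hypercube follows the same idea: correct the differing bits of the source and destination labels one at a time, so any two nodes are at most k hops apart.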

Array processor

 An array processor uses multiple synchronized arithmetic logic units to achieve spatial parallelism. It is also called a synchronous multiprocessor.
 Such a multiprocessor system consists of several processors, some of which may be I/O processors; a common, fast-access local data store; and a common, slow-access main store.
 These components are interconnected by a common bus that carries data and control information.

Explain Flynn’s Taxonomy in detail

Flynn’s taxonomy
 Flynn's taxonomy is a classification of parallel computer architectures based on the number of concurrent instruction streams and data streams available in the architecture.
 The four classes in Flynn's taxonomy are:
 Single instruction, single data stream (SISD)
 Multiple instruction, single data stream (MISD)
 Single instruction, multiple data stream (SIMD)
 Multiple instruction, multiple data stream (MIMD)

Single instruction, single data stream (SISD)

 An SISD computing system is a uniprocessor machine which is capable of executing a single instruction, operating on a single data stream.
 In SISD, machine instructions are processed in a sequential manner.
Multiple instruction, single data stream (MISD)

An MISD system is a multiprocessor machine capable of executing different instructions on all of its CPUs while operating on the same data set.

Single instruction, multiple data stream (SIMD)

An SIMD system is a multiprocessor machine capable of executing the same instruction on all the CPUs but
operating on different data streams.

Multiple instruction, multiple data stream (MIMD)

An MIMD system is a multiprocessor machine which is capable of executing multiple instructions on multiple data sets. Each processor in the MIMD model has separate instruction and data streams.
Explain message-passing systems versus shared memory systems.

Message passing systems:

 Message passing allows multiple processes to read from and write to a message queue without being directly connected to each other.
 Messages are stored on the queue until their recipient retrieves them.
 Message queues are quite useful for interprocess communication and are used by most operating
systems.

Shared memory systems:

 The shared memory is the memory that can be simultaneously accessed by multiple processes. This
is done so that the processes can communicate with each other.
 Semaphores and monitors are common synchronization mechanisms on shared memory systems.
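The contrast between the two models can be sketched with Python's standard multiprocessing module (the worker functions and message text below are illustrative only): the first worker communicates through a message queue, the second through a shared value guarded by a lock.

```python
import multiprocessing as mp

def worker_mp(q):
    # Message passing: the worker sends a message; no memory is shared.
    q.put("hello via message queue")

def worker_shm(counter, lock):
    # Shared memory: the worker updates a value both processes can see,
    # using a lock (a semaphore-style mechanism) for synchronization.
    with lock:
        counter.value += 1

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=worker_mp, args=(q,))
    p.start()
    print(q.get())                  # message retrieved from the queue
    p.join()

    counter = mp.Value("i", 0)      # shared integer in a common address space
    lock = mp.Lock()
    p = mp.Process(target=worker_shm, args=(counter, lock))
    p.start()
    p.join()
    print(counter.value)            # 1
```

The queue version works unchanged if the two processes are on different machines (with a network transport behind it); the shared-memory version requires a common address space.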
Explain primitives for distributed communication.

 Message send and message receive communication primitives are done through Send() and
Receive(), respectively.
 A Send primitive has two parameters: the destination and the buffer in the user space that holds
the data to be sent.
 The Receive primitive also has two parameters: the source from which the data is to be received
and the user buffer into which the data is to be received.

There are two ways of sending data when the Send primitive is invoked:

Buffered: The standard option copies the data from the user buffer to the kernel buffer. The data later gets
copied from the kernel buffer onto the network.

Unbuffered: The data gets copied directly from the user buffer onto the network.

The sender and the receiver can each be blocking or non-blocking. Three combinations are commonly used:

1. Blocking send, blocking receive.

2. Non-blocking send, blocking receive.

3. Non-blocking send, non-blocking receive.

1. Blocking send, blocking receive

Both the sender and the receiver are blocked until the message is delivered.

2. Non-blocking send, blocking receive

The sender may continue on; the receiver is blocked until the requested message arrives.

3. Non-blocking send, non-blocking receive

Both the sender and the receiver processes can continue their execution without waiting for the message transfer to complete.

Types of Send and Receive operations:

Send Operations

1. Blocking Synchronous Send:

 Data is copied from user buffer to kernel buffer and sent over the network.

 Control returns to the process after the data has been copied to the receiver's system buffer and an acknowledgement has been received.


2. Non-Blocking Synchronous Send:

 Control returns to the sender immediately after the data copy from user buffer to kernel buffer starts.

 A handle is provided to track the completion of the send operation.

3. Blocking Asynchronous Send:

 The process is blocked until the data is copied from the user's buffer to the kernel buffer

4. Non-Blocking Asynchronous Send:

 Control returns to the sender as soon as the data transfer from the user buffer to the kernel buffer
starts.

Receive Operations

1. Blocking Receive:

 The process is blocked until the expected data arrives.

2. Non-Blocking Receive:

 A non-blocking receive allows the process to request data and continue executing other tasks without waiting for the data to arrive.
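The blocking versus non-blocking receive behaviour can be sketched with Python's standard queue and threading modules, using threads on one machine as a stand-in for communicating processes:

```python
import queue
import threading

q = queue.Queue()

# Non-blocking receive: returns immediately, raising Empty if no data yet,
# so the process can go on with other work.
try:
    q.get_nowait()
except queue.Empty:
    print("no message yet; process continues with other work")

# A sender delivers a message a little later.
threading.Timer(0.1, q.put, args=("data",)).start()

# Blocking receive: the caller is suspended until the message arrives.
msg = q.get()            # blocks until "data" is put on the queue
print("received:", msg)
```

The non-blocking variant corresponds to being handed a "not yet" answer (or a handle to poll later), while the blocking variant suspends the process until delivery.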


Write short notes on Models of communication networks

The three main types of communication models in distributed systems are:

 FIFO (first-in, first-out)


 Non-FIFO (N-FIFO)
 Causal Ordering (CO)

FIFO (first-in, first-out)

Each channel acts as a FIFO message queue, and message ordering is preserved by the channel.

Non-FIFO (N-FIFO)

A channel acts like a set in which the sender process adds messages and the receiver process removes messages in random order.

Causal Ordering (CO)

The “causal ordering” model is based on Lamport’s “happens before” relation. A system that supports the causal ordering model satisfies the following property: for any two messages mij and mkj sent to the same process pj, if send(mij) → send(mkj), then rec(mij) → rec(mkj).
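As a toy illustration of the FIFO versus non-FIFO distinction (the channel classes below are our own sketch, not a standard API): a FIFO channel delivers messages in send order, while a non-FIFO channel may hand them over in any order.

```python
import random
from collections import deque

class FIFOChannel:
    """Channel that preserves per-channel message order."""
    def __init__(self):
        self.q = deque()
    def send(self, m):
        self.q.append(m)
    def deliver(self):
        return self.q.popleft()        # always the oldest message

class NonFIFOChannel:
    """Channel that behaves like a set: delivery order is arbitrary."""
    def __init__(self):
        self.msgs = []
    def send(self, m):
        self.msgs.append(m)
    def deliver(self):
        return self.msgs.pop(random.randrange(len(self.msgs)))

ch = FIFOChannel()
for m in ("m1", "m2", "m3"):
    ch.send(m)
print([ch.deliver() for _ in range(3)])   # ['m1', 'm2', 'm3']
```

Replacing `FIFOChannel` with `NonFIFOChannel` in the last lines may print the three messages in any order, which is exactly the guarantee FIFO channels add.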

Explain the Model of distributed computations in detail.

A Distributed Program

 A distributed program is composed of a set of asynchronous processes that communicate by message passing over the communication network.
 Each process may run on a different processor.
 The processes do not share a global memory.
 Process execution and message transfer are asynchronous: a process sending a message does not wait for the delivery of the message to be complete.

A model of distributed executions

 The execution of a process consists of a sequential execution of its actions.
 The actions of a process are modeled as three types of events: internal events, message send events, and message receive events.
 A send event changes the state of the process that sends the message and the state of
the channel on which the message is sent.
 A receive event changes the state of the process that receives the message and the
state of the channel on which the message is received.

The distributed execution is depicted by a space–time diagram. Figure shows the space–time
diagram of a distributed execution involving three processes. A horizontal line represents the
progress of the process; a dot indicates an event; a slant arrow indicates a message transfer.
The execution of an event takes a finite amount of time. In this figure, for process p1, the
second event is a message send event, the third event is an internal event, and the fourth event
is a message receive event.

Causal precedence relation

Causal message ordering is a partial ordering of messages in a distributed computing environment. It is the delivery of messages to a process in the order in which they were transmitted to that process.

Happen Before Relation

We say A → B if A happens before B. A → B is defined using the following rules:

 Local ordering: A and B occur on the same process and A occurs before B.
 Messages: send(m) → receive(m) for any message m.
 Transitivity: if A → C and C → B, then A → B.
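The happens-before rules are exactly what Lamport's logical clocks preserve: every event increments a local counter, and a receive advances past the timestamp carried by the message. A minimal sketch (the Process class and event names here are illustrative):

```python
# Sketch: Lamport logical clocks, which respect the happens-before rules
# (local ordering, send(m) -> receive(m), and transitivity).
class Process:
    def __init__(self):
        self.clock = 0

    def internal(self):
        self.clock += 1              # local ordering: each event ticks the clock
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock            # the timestamp travels with the message

    def receive(self, msg_ts):
        # Jump past the sender's timestamp so send(m) -> receive(m) holds.
        self.clock = max(self.clock, msg_ts) + 1
        return self.clock

p1, p2 = Process(), Process()
ts = p1.send()                       # p1 sends m with timestamp 1
p2.internal()                        # p2's clock becomes 1
rcv = p2.receive(ts)                 # max(1, 1) + 1 = 2
print(ts, rcv)                       # 1 2
```

The guarantee is one-directional: if A → B then clock(A) < clock(B), but equal or ordered timestamps alone do not prove a causal relation between concurrent events.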
Write short notes on models of Process Communication

There are two basic models of process communications

 Synchronous
 Asynchronous

Synchronous

 The sender process blocks until the message has been received by the receiver process.
 The sender process resumes after the receiver process has accepted the message.
 The sender and the receiver processes must synchronize to exchange a message.

Asynchronous

 It is non-blocking communication, where the sender and the receiver do not synchronize to exchange a message.
 The sender process does not wait for the message to be delivered to the receiver process.
Explain Global state of a distributed system in detail.

Global state

 The global state of a distributed system is a collection of the local states of the processes and the
channels.

 The state of a process at any time is defined by the contents of its processor registers, stacks, local memory, etc.

 The state of a channel is given by the set of messages in transit in the channel.

Notationally, the global state GS is defined as GS = {∪i LSi, ∪i,j SCij}, where LSi is the local state of process pi and SCij is the state of the channel Cij from process pi to process pj.

Consistent global state


A message cannot be received if it was not sent; that is, the state should not violate causality. Such states are
called consistent global states.

Inconsistent global state


An inconsistent global state in a distributed system is a snapshot of the system that violates causality and does
not represent a valid state the system could achieve during its execution.
Requirement of Global States

Distributed Garbage Collection:

Distributed Garbage Collection refers to the process of reclaiming memory occupied by objects that are no
longer needed in a distributed system.

Distributed Deadlock Detection:

Distributed deadlock detection is the process of identifying deadlocks in a distributed system. A deadlock occurs
when a group of processes is waiting for resources held by other processes, forming a cyclic dependency that
prevents further progress.

Distributed Termination Detection


Distributed Termination Detection is the process of detecting when a distributed system has reached a state in
which all processes are "passive" or inactive, meaning they are no longer performing any work or interacting
with each other.

Distributed Debugging
Distributed Debugging refers to the process of identifying and resolving issues in a distributed system, where
multiple independent processes or nodes work together.

Cuts of a distributed computation

 In the space–time diagram of a distributed computation, a zigzag line joining one arbitrary point on each
process line is termed a cut in the computation.
 The set of events in the distributed computation is divided into a PAST and a FUTURE.
 The PAST contains all the events to the left of the cut and the FUTURE contains all the events to the right
of the cut.
Consistent cut

 A consistent global state corresponds to a cut in which every message received in the PAST of the cut was
sent in the PAST of that cut. Such a cut is known as a consistent cut.

Inconsistent cut

 A cut is inconsistent if a message crosses the cut from the FUTURE to the PAST.
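The consistency condition on cuts can be checked mechanically. A minimal sketch, assuming a cut is given as the number of events already executed on each process, and each message records its send and receive event numbers (this representation is our own, chosen for illustration):

```python
# Sketch: checking whether a cut of a distributed computation is consistent.
# cut[p] = number of events of process p in the PAST of the cut.
# A message is a tuple (sp, se, rp, re): sent at event se on process sp,
# received at event re on process rp.
def is_consistent(cut, messages):
    for sp, se, rp, re in messages:
        # Inconsistent: receive in the PAST but send in the FUTURE,
        # i.e. the message crosses the cut from FUTURE to PAST.
        if re <= cut[rp] and se > cut[sp]:
            return False
    return True

msgs = [(0, 2, 1, 1)]                     # p0's event 2 sends to p1's event 1
print(is_consistent({0: 2, 1: 1}, msgs))  # True: send and receive both in PAST
print(is_consistent({0: 1, 1: 1}, msgs))  # False: receive without its send
```

Messages crossing the cut in the other direction (sent in the PAST, received in the FUTURE) are allowed; they are simply recorded as in transit in the corresponding channel state.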
