
Distributed computing

VARSHA SRIVASTAVA
Distributed computing
A distributed system is a collection of independent computers,
interconnected via a network, capable of collaborating on a task.

Distributed computing is computing performed in a distributed system.


The definition of a distributed system addresses two aspects:
• Hardware: the machines linked in a distributed system are autonomous.
• Software: a distributed system gives users the impression that they are dealing with a single system.
Features of Distributed Systems:
• No common physical clock - refers to a system or scenario where
different components (e.g., devices, subsystems, or processors) are
not synchronized by a single, shared clock signal. In other words, each
component or part of the system operates independently, with its
own internal clock or timing mechanism. This is an important
assumption because it introduces the element of “distribution” in the
system and gives rise to the inherent asynchrony amongst the
processors.
• No shared memory - refers to a system or architecture in which
different components (such as processors, cores, or nodes) do not
have access to a common memory space. Each component has its
own private memory, and there is no direct, centralized memory that
can be accessed by multiple components simultaneously. This concept
is relevant in computer architecture, parallel computing, and
distributed systems.
• Autonomy - refers to the ability of a system or component to operate independently and make decisions without relying on a central authority or other components. In a distributed system, autonomy means that each component (such as a node, agent, or processor) can function on its own, without constant coordination or supervision from a central system.
• Heterogeneity - refers to the diversity of components, technologies, or resources within a system. In computing, this usually means the presence of different types of hardware, software, or networks that coexist and interact within the same system. Heterogeneous systems consist of a variety of entities that may have different architectures, operating systems, hardware platforms, or protocols.
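Because there is no common physical clock, distributed algorithms often order events with logical clocks instead. A minimal sketch of Lamport's logical clock rule (the class and method names here are illustrative, not from the text):

```python
# Minimal sketch of a Lamport logical clock: each process keeps its own
# counter; a send attaches a timestamp, a receive takes the max and bumps.
class LamportClock:
    def __init__(self):
        self.time = 0

    def tick(self):                   # any local event
        self.time += 1
        return self.time

    def send(self):                   # timestamp attached to an outgoing message
        return self.tick()

    def receive(self, msg_time):      # merge the sender's timestamp on receipt
        self.time = max(self.time, msg_time) + 1
        return self.time

p1, p2 = LamportClock(), LamportClock()
t = p1.send()        # p1 sends a message with timestamp 1
p2.receive(t)        # p2's clock jumps past the sender's timestamp
print(p1.time, p2.time)  # 1 2
```

The rule guarantees only that a message's receive event is timestamped after its send event; it does not synchronize the physical clocks themselves.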
Issues in distributed systems
• Heterogeneity
• Openness
• Security
• Scalability
• Failure handling
• Concurrency
• Transparency
• Quality of service
QoS parameters (Quality of Service)
• A distributed system must offer the following QoS guarantees:
• Performance
• Reliability
• Availability
• Security
Differences between centralized and
distributed systems
• Centralized Systems
• In centralized systems, all jobs are done on a single central processing unit (CPU).
• They have shared memory and shared variables.
• A common clock is present.
Distributed Systems
• In distributed systems, jobs are distributed among several processors. The processors are interconnected by a computer network.
• They have no global state, i.e., no shared memory and no shared variables.
• There is no global clock.
1.2 Relation to Computer System
Components
• As shown in Fig 1.1, each computer has a memory-processing unit, and the computers are connected by a communication network. Each system in the distributed network hosts distributed software, a middleware layer that drives the distributed system (DS) while preserving its heterogeneity. A computation, or run, in a distributed system is the execution of processes to achieve a common goal.
1.3 Motivation
• Inherently distributed computations: a DS can process computations at geographically remote locations.
• Resource sharing: hardware, databases, and special libraries can be shared between systems without each owning a dedicated copy or replica. This is cost-effective and reliable.
• Access to geographically remote data and resources: as mentioned previously, computations may happen at remote locations. Resources such as centralized servers can also be accessed from distant locations.
• Enhanced reliability: a DS provides enhanced reliability, since it runs on multiple copies of resources. The distribution of resources across distant locations makes them less susceptible to faults. The term reliability comprises:
• 1. Availability: the resource or the service provided by the resource should be accessible at all times.
• 2. Integrity: the value/state of the resource should be correct and consistent.
• 3. Fault tolerance: the ability to recover from system failures.
• Increased performance/cost ratio: the resource sharing and remote access features of a DS naturally increase the performance/cost ratio.
• Scalability: the number of systems operating in a distributed environment can be increased as demand increases.
1.5 MESSAGE-PASSING SYSTEMS
VERSUS SHARED MEMORY SYSTEMS
• Message passing systems:
• Message passing allows multiple processes to read from and write to a message queue without being directly connected to each other.
• Messages are stored in the queue until their recipient retrieves them. Message queues are quite useful for interprocess communication and are provided by most operating systems.
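The queue-based communication described above can be sketched as follows. This is an illustration only: Python threads stand in for separate processes, and the queue stands in for an OS message queue.

```python
import queue
import threading

# Two "processes" (threads here, for illustration) communicate only
# through a message queue -- no shared variables are touched directly.
mailbox = queue.Queue()

def producer():
    for i in range(3):
        mailbox.put(f"msg-{i}")   # messages wait in the queue until retrieved
    mailbox.put(None)             # sentinel: no more messages

def consumer(results):
    while True:
        msg = mailbox.get()       # blocks until a message arrives
        if msg is None:
            break
        results.append(msg)

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['msg-0', 'msg-1', 'msg-2']
```

Note that sender and receiver never share a variable: all coordination happens through the queue, which is the essence of the message passing model.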
Shared memory systems:
• Shared memory is memory that can be simultaneously accessed by multiple processes, so that the processes can communicate with each other.
• Communication among processors takes place through shared data variables, and through control variables for synchronization among the processors.
• Semaphores and monitors are common synchronization mechanisms in shared memory systems.
• When the shared memory model is implemented in a distributed environment, it is termed distributed shared memory.
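By contrast, in the shared memory model the communicating parties update a common variable, guarded by a synchronization mechanism. A minimal sketch, again using Python threads for illustration and a lock in place of a binary semaphore:

```python
import threading

# Shared-memory communication: all workers update one shared variable,
# guarded by a lock (acting like a binary semaphore) for mutual exclusion.
counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread may touch the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

Without the lock, the read-increment-write sequence could interleave across threads and lose updates, which is exactly why shared memory systems need semaphores or monitors.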
a) Message Passing Model b) Shared Memory Model
1.6 PRIMITIVES FOR DISTRIBUTED
COMMUNICATION
• 1.6.1 Blocking / Non-blocking / Synchronous / Asynchronous
Processor Synchrony
• Processor synchrony indicates that all the processors execute in lock-step, with their clocks synchronized.
SYNCHRONOUS VS ASYNCHRONOUS
EXECUTIONS
• The execution of processes in a distributed system may be synchronous or asynchronous.
• Asynchronous execution: communication among processes is considered asynchronous when every communicating process can have a different observation of the order of the messages being exchanged.
• In an asynchronous execution:
• there is no processor synchrony and no bound on the drift rate of processor clocks
• message delays are finite but unbounded
• there is no upper bound on the time taken by a process to execute a step
Synchronous Execution:
• Communication among processes is considered synchronous when every process observes the same order of messages within the system.
• In the same manner, the execution is considered synchronous when every individual process in the system observes the same total order of all the events that happen within it.
• In a synchronous execution:
• processors are synchronized and the clock drift rate between any two processors is bounded
• message delivery times are such that they occur in one logical step or round
• there is an upper bound on the time taken by a process to execute a step
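A synchronous execution can be pictured as proceeding in rounds: every process receives the previous round's messages, computes, and sends, before anyone starts the next round. A toy round-based simulation (the max-propagation task and variable names are illustrative, assuming a fully connected network):

```python
# Toy simulation of synchronous rounds: in each round, every process
# reads all messages sent in the previous round, computes, then sends.
# Here each process broadcasts the largest value it has seen so far.
values = [3, 9, 1, 7]                      # initial value at each process
inbox = [values[:] for _ in values]        # round 0: each process hears everyone

for _ in range(2):                         # a few lock-step rounds
    state = [max(msgs) for msgs in inbox]  # compute phase: take max of received
    inbox = [state[:] for _ in state]      # send phase: broadcast to all

print(state)  # [9, 9, 9, 9] -- all processes agree on the global maximum
```

The key point is the barrier between rounds: no process races ahead, which is exactly what the bounded clock drift and one-round message delivery above make possible.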
DESIGN ISSUES AND CHALLENGES
IN DISTRIBUTED SYSTEMS
• The design of distributed systems poses numerous challenges.
• They can be categorized into:
• Issues related to system and operating system design
• Issues related to algorithm design
• Issues arising from emerging technologies
• These three classes are not mutually exclusive.
Issues related to system and
operating systems design
