Parallel and Distributed Computing

Distributed systems consist of networked computers working together toward a common goal. Parallel and distributed computing overlap, and the same system can have characteristics of both. The main distinction is that in a parallel system all processors share a memory accessible to each of them, while in a distributed system each processor has its own private memory and information is exchanged by passing messages. A distributed system is represented as a network topology in which each node is a computer connected to the others by communication links, whereas in a parallel system each processor has direct access to the shared memory.

Distributed systems are groups of networked computers which have the same goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them.[13] The same system may be characterised both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel.[14] Parallel computing may be seen as a particular tightly coupled form of distributed computing,[15] and distributed computing may be seen as a loosely coupled form of parallel computing.[5] Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:

- In parallel computing, all processors have access to a shared memory. Shared memory can be used to exchange information between processors.[16]
- In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.[17] (A short code sketch contrasting the two models follows at the end of this section.)

The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; as usual, the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another using the available communication links. Figure (c) shows a parallel system in which each processor has direct access to a shared memory.

The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm, which do not quite match the above definitions of parallel and distributed systems; see the section Theoretical foundations below for a more detailed discussion. Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms, while the coordination of a large-scale distributed system uses distributed algorithms.
