Assignment No. 3: Parallel & Distributed Computing
Subject:
Parallel and Distributed Computing
Submitted To:
Sir Manzoor Ahmad
Submitted By:
Ayyaf Ayub
Roll No:
2021-CS-
University:
Mir Chakar Khan Rind University of Technology Dera Ghazi Khan
Date:
08-01-2025
The Difference Between Shared Memory and Distributed Memory
in Parallel and Distributed Computing
In the realm of parallel and distributed computing, efficient memory management is one of the critical factors that determine system performance, scalability, and ease of development.
Parallel and distributed systems involve multiple processors or processing units working together
on a common task, which requires the sharing of data or information between these units. How
these processing units access and share data depends on the memory architecture employed by
the system. Two primary memory architectures dominate this domain: shared memory and
distributed memory.
Understanding the differences between shared and distributed memory is essential for designing
and developing parallel and distributed computing systems. This article explores these two
architectures in depth, comparing their communication models, performance characteristics,
scalability, use cases, and programming paradigms.
Shared Memory Architecture
In a shared memory architecture, all processors access a single, common memory space, typically within one machine. Key characteristics include:
Unified Memory Space: All processors share the same physical memory, enabling direct access to data.
Implicit Communication: Data sharing is implicit because all processors operate on the
same memory space. No explicit messaging is required to exchange information.
Synchronization Requirements: Since multiple processors access the same memory, mechanisms such as locks, semaphores, and barriers are required to ensure consistency and prevent race conditions (a minimal Pthreads sketch follows this list).
Hardware Implementation: Shared memory systems are typically limited to a single
machine where all processors are connected to the same memory bus.
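To make the synchronization requirement concrete, here is a minimal Pthreads sketch in C: several threads increment a shared counter, and a mutex keeps the increments from racing. The thread count, iteration count, and names are illustrative choices, not taken from any particular system.

#include <stdio.h>
#include <pthread.h>

#define NTHREADS 4
#define ITERS    100000

long counter = 0;                                  /* visible to all threads */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects 'counter' */

/* Each thread increments the shared counter; the lock serializes the
   read-modify-write so no update is lost to a race condition. */
void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * ITERS);
    return 0;
}

Without the mutex, the final count would usually fall short of the expected value, which is exactly the kind of race condition the mechanisms above are meant to prevent.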
Examples of shared memory systems include:
Multi-core CPUs: Modern processors with multiple cores, such as the Intel Core i9 or AMD Ryzen.
Symmetric Multiprocessing (SMP): Systems like Oracle SPARC and IBM Power
Systems.
Programming Models and Frameworks: Examples include OpenMP (Open Multi-
Processing) and Pthreads (POSIX Threads).
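As a brief illustration of the shared memory programming model, here is a minimal OpenMP sketch in C; the array size and contents are illustrative. The threads read the same array directly, with no messages exchanged, and the reduction clause synchronizes the per-thread partial sums.

#include <stdio.h>
#include <omp.h>

#define N 1000000

static double data[N];   /* one array in the single, shared address space */

int main(void) {
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = 1.0;

    /* Each thread sums a slice of the same array: sharing is implicit.
       reduction(+:sum) gives each thread a private partial sum and
       combines them safely, avoiding a race on 'sum'. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += data[i];

    printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
    return 0;
}

Built with an OpenMP-capable compiler (for example, gcc -fopenmp), the loop spreads across the available cores with no data movement beyond ordinary memory reads and writes.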
Distributed Memory Architecture
In a distributed memory architecture, each processor has its own private memory that is not
accessible by other processors. Data sharing between processors is achieved through explicit
communication, typically in the form of message passing. Distributed memory systems are
commonly found in large-scale cluster computing, supercomputers, and cloud-based distributed
systems.
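To illustrate explicit communication, here is a minimal sketch in C using MPI (Message Passing Interface), a widely used standard for distributed memory programming; the transferred value is illustrative, and message passing need not be implemented with MPI specifically. Process 0 must explicitly send its data before process 1 can see it, because each process has its own private address space.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;   /* lives only in rank 0's private memory */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("rank 0 sent %d\n", value);
    } else if (rank == 1) {
        /* The only way to obtain the data is an explicit receive. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}

Such a program is typically launched with a command like mpirun -np 2 ./program, with each rank running in its own address space, possibly on a different node of a cluster.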
Key characteristics include:
Independent Memory Spaces: Each processor has its own private memory, and no global memory is shared.
Explicit Communication: Processors exchange data by sending and receiving messages
over a network.
Scalability: Distributed memory systems can scale to thousands or even millions of
processors because memory is not a shared resource.
Hardware Implementation: Distributed memory is used in cluster computing, where
nodes (each with its own processor and memory) are connected via high-speed networks.
Distributed memory offers several advantages:
1. High Scalability: Distributed memory systems can scale effectively by adding more nodes, each with its own memory.
2. Reduced Contention: Since each processor has private memory, there is no contention
for a shared memory resource.
3. Fault Tolerance: Failures in one node's memory do not directly affect the memory of
other nodes, improving system reliability.
4. Aggregate Memory: The total memory available is the sum of the memory in all nodes,
enabling the handling of extremely large datasets.
Key Differences Between Shared and Distributed Memory
The shared memory and distributed memory architectures differ fundamentally in memory organization, communication models, scalability, programming complexity, performance, and fault tolerance. Below is a detailed comparison:
Memory Organization
Shared Memory: All processors access a single, unified memory space. Data written by one processor is immediately visible to others.
Distributed Memory: Each processor has its own private memory. Data sharing is
achieved by explicitly transferring data between processors.
Communication
Shared Memory: Communication is implicit; processors exchange data simply by reading and writing the same memory locations.
Distributed Memory: Communication is explicit; processors exchange data by sending and receiving messages over the network.
Scalability
Shared Memory: Limited to the processors of a single machine; contention on the shared memory bus degrades performance as more processors are added.
Distributed Memory: Scales to thousands or even millions of processors, because each added node brings its own memory and no single memory resource is shared.
Programming Complexity
Shared Memory: Simpler to program, since no explicit messaging is required, although synchronization with locks, semaphores, and barriers must be handled carefully.
Distributed Memory: More complex to program, since every data exchange must be written explicitly as message passing.
Performance
Shared Memory: Faster for small-scale systems due to low-latency access to memory,
but performance degrades with contention as more processors are added.
Distributed Memory: Performance depends on efficient message-passing and load
balancing. Network latency can be a bottleneck, but the architecture handles large-scale
problems better.
Fault Tolerance
Shared Memory: A failure in the shared memory affects all processors, leading to system-wide failures.
Distributed Memory: Failures in one node typically do not impact the memory or
operation of other nodes, improving overall fault tolerance.
Choosing Between Shared and Distributed Memory
The choice between shared memory and distributed memory depends on several factors:
System Size: Shared memory is suitable for small systems, while distributed memory is
necessary for large-scale systems.
Application Requirements: Shared memory is ideal for tasks requiring frequent
communication, while distributed memory is better for tasks with independent
computations.
Scalability Needs: Distributed memory is the preferred option for applications that
require high scalability.
Conclusion
Shared memory and distributed memory represent two fundamental paradigms in parallel and
distributed computing, each with its strengths and limitations. Shared memory simplifies
programming and offers low-latency communication, making it ideal for small-scale systems. In
contrast, distributed memory provides scalability and fault tolerance, making it the architecture
of choice for large-scale systems like supercomputers and clusters.
Hybrid architectures that combine the two paradigms are increasingly popular, enabling systems
to harness the advantages of both approaches. By understanding the differences and trade-offs
between shared and distributed memory, developers can choose the most appropriate architecture
for their applications, ultimately achieving higher performance and efficiency in parallel and
distributed computing.
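As a closing illustration of the hybrid approach, here is a minimal sketch in C that assumes MPI for message passing between nodes and OpenMP for threading within each node; the loop bounds are illustrative. Each MPI process computes a partial sum with shared memory threads, and an explicit MPI_Reduce combines the per-node results.

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
    int provided, rank, size;
    /* MPI_THREAD_FUNNELED: only the main thread makes MPI calls,
       which is all this sketch needs. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = 0.0;
    /* Shared memory level: OpenMP threads sum this process's strided
       slice of the iteration space. */
    #pragma omp parallel for reduction(+:local)
    for (int i = rank; i < 1000000; i += size)
        local += 1.0;

    double total = 0.0;
    /* Distributed memory level: explicit message passing combines
       the per-process partial sums on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %.0f across %d processes\n", total, size);

    MPI_Finalize();
    return 0;
}

This mirrors the structure of many real clusters: shared memory inside each node, distributed memory between nodes.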