This document discusses the differences between shared memory and distributed memory architectures in parallel and distributed computing. It highlights key characteristics, advantages, and limitations of each architecture, along with their scalability, programming complexity, and performance. The conclusion emphasizes the importance of choosing the appropriate memory model based on system size, application requirements, and scalability needs.

Assignment NO: 4

Subject:
Parallel and Distributed Computing

Submitted To:
Sir Manzoor Ahmad

Submitted By:
Ayyaf Ayub

Roll No:
2021-CS-

University:
Mir Chakar Khan Rind University of Technology Dera Ghazi Khan

Date:
08-01-2025
The Difference Between Shared Memory and Distributed Memory
in Parallel and Distributed Computing
In the realm of parallel and distributed computing, efficient memory management is one of the
critical factors that determine system performance, scalability, and ease of development.
Parallel and distributed systems involve multiple processors or processing units working together
on a common task, which requires the sharing of data or information between these units. How
these processing units access and share data depends on the memory architecture employed by
the system. Two primary memory architectures dominate this domain: shared memory and
distributed memory.

Understanding the differences between shared and distributed memory is essential for designing
and developing parallel and distributed computing systems. This article explores these two
architectures in-depth, comparing their communication models, performance characteristics,
scalability, use cases, and programming paradigms.

1. What is Shared Memory?

In a shared memory architecture, multiple processors or threads access a common, unified
memory space. Each processor can directly read from and write to this shared memory, allowing
for seamless communication and data sharing between them. This architecture is commonly
found in multi-core processors, single-node systems, and symmetric multiprocessing (SMP)
systems.

1.1 Key Characteristics of Shared Memory

 Unified Memory Space: All processors share the same physical memory, enabling direct
access to data.
 Implicit Communication: Data sharing is implicit because all processors operate on the
same memory space. No explicit messaging is required to exchange information.
 Synchronization Requirements: Since multiple processors access the same memory,
mechanisms like locks, semaphores, and barriers are required to ensure consistency and
prevent race conditions.
 Hardware Implementation: Shared memory systems are typically limited to a single
machine where all processors are connected to the same memory bus.
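The implicit communication and synchronization points above can be sketched in Python, where threads naturally share one address space. This is an illustrative sketch, not a fixed API: the `work` function, the counter variable, and the thread count are arbitrary choices.

```python
# Shared-memory sketch: four threads update one counter through a shared
# variable; a Lock provides the synchronization the text describes.
import threading

counter = 0                      # lives in the single, shared address space
lock = threading.Lock()          # prevents race conditions on the counter

def work(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # explicit synchronization, implicit communication
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                   # 40000: every update is visible to all threads
```

Note that no data is ever "sent" anywhere: each thread simply reads and writes the same variable, which is exactly the implicit-communication property of shared memory.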

1.2 Advantages of Shared Memory

1. Simpler Programming Model: Communication is achieved by directly accessing shared
variables, making it easier to program.
2. Low Latency: Data resides in the same memory, which eliminates the need for network-
based data transfer.
3. Global View of Data: All processors have a consistent view of the memory, simplifying
the design of algorithms.
1.3 Limitations of Shared Memory

1. Scalability Challenges: As the number of processors increases, memory contention and
bus saturation occur, limiting scalability.
2. Synchronization Overhead: Explicit synchronization (e.g., using locks or barriers) is
required to manage concurrent access, which can degrade performance.
3. Limited to Single-node Systems: Shared memory systems are restricted to a single
physical machine and cannot scale easily across distributed systems.

1.4 Examples of Shared Memory Systems

 Multi-core CPUs: Modern processors with multiple cores, such as Intel Core i9 or AMD
Ryzen.
 Symmetric Multiprocessing (SMP): Systems like Oracle SPARC and IBM Power
Systems.
 Programming Models and Frameworks: Examples include OpenMP (Open Multi-
Processing) and Pthreads (POSIX Threads).

2. What is Distributed Memory?

In a distributed memory architecture, each processor has its own private memory that is not
accessible by other processors. Data sharing between processors is achieved through explicit
communication, typically in the form of message passing. Distributed memory systems are
commonly found in large-scale cluster computing, supercomputers, and cloud-based distributed
systems.

2.1 Key Characteristics of Distributed Memory

 Independent Memory Spaces: Each processor has its own private memory, and no
global memory is shared.
 Explicit Communication: Processors exchange data by sending and receiving messages
over a network.
 Scalability: Distributed memory systems can scale to thousands or even millions of
processors because memory is not a shared resource.
 Hardware Implementation: Distributed memory is used in cluster computing, where
nodes (each with its own processor and memory) are connected via high-speed networks.
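A minimal sketch of explicit message passing, using Python's `multiprocessing` with a `Pipe` standing in for the network link between nodes. The `worker` function, the rank-based payload, and the four-"node" setup are hypothetical illustrations (and the sketch assumes a platform, such as Linux, where child processes can be created from module level).

```python
# Distributed-memory sketch: each worker process has private memory, so data
# moves only through explicit messages over a Pipe (standing in for a network).
from multiprocessing import Process, Pipe

def worker(conn, rank):
    local_data = rank * 100          # private to this process's memory
    conn.send(local_data)            # explicit message passing
    conn.close()

def run_cluster(num_workers):
    results = []
    for rank in range(num_workers):
        parent, child = Pipe()
        p = Process(target=worker, args=(child, rank))
        p.start()
        results.append(parent.recv())    # receive each "node's" message
        p.join()
    return results

results = run_cluster(4)
print(results)                           # [0, 100, 200, 300]
```

Unlike the shared-memory sketch, nothing here is visible across processes until it is explicitly sent and received, which is the defining property of distributed memory.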

2.2 Advantages of Distributed Memory

1. High Scalability: Distributed memory systems can scale effectively by adding more
nodes, each with its own memory.
2. Reduced Contention: Since each processor has private memory, there is no contention
for a shared memory resource.
3. Fault Tolerance: Failures in one node's memory do not directly affect the memory of
other nodes, improving system reliability.
4. Aggregate Memory: The total memory available is the sum of the memory in all nodes,
enabling the handling of extremely large datasets.

2.3 Limitations of Distributed Memory

1. Complex Programming Model: Developers must explicitly manage communication and
data sharing using message-passing libraries (e.g., MPI), which adds complexity.
2. Higher Latency: Communication between processors involves sending messages over a
network, which introduces latency.
3. Load Balancing Challenges: Distributing tasks and data efficiently among processors is
critical to achieving good performance.
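The load-balancing point can be made concrete with a small sketch: a static, even partition of n work items over p workers, the kind of bookkeeping a message-passing program must do explicitly. The `partition` helper is hypothetical, not a library routine.

```python
# Load-balancing sketch: split n work items as evenly as possible across
# p workers, the kind of static partitioning an MPI program does by hand.
def partition(n, p):
    base, extra = divmod(n, p)       # every worker gets 'base' items;
    sizes = [base + (1 if r < extra else 0) for r in range(p)]  # first 'extra' workers get one more
    bounds, start = [], 0
    for size in sizes:
        bounds.append((start, start + size))
        start += size
    return bounds

print(partition(10, 4))              # [(0, 3), (3, 6), (6, 8), (8, 10)]
```

Even this simple static scheme shows the issue: if items cost unequal amounts of work, some workers finish early and idle, which is why dynamic load balancing is an active concern in distributed systems.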

2.4 Examples of Distributed Memory Systems

 Cluster Computing Systems: Beowulf clusters and HPC clusters.
 Supercomputers: Systems like IBM Blue Gene and Cray XC.
 Programming Models and Frameworks: Examples include MPI (Message Passing
Interface), Apache Hadoop, and Apache Spark.

3. Key Differences Between Shared Memory and Distributed Memory

The shared memory and distributed memory architectures differ fundamentally in terms of
memory organization, communication models, scalability, programming complexity, and
performance. Below is a detailed comparison:

3.1 Memory Organization

 Shared Memory: All processors access a single, unified memory space. Data written by
one processor is immediately visible to others.
 Distributed Memory: Each processor has its own private memory. Data sharing is
achieved by explicitly transferring data between processors.

3.2 Communication Model

 Shared Memory: Communication is implicit, as all processors operate on the same
memory. Synchronization mechanisms (e.g., locks) ensure consistency.
 Distributed Memory: Communication is explicit and requires message-passing routines
to send and receive data between processors.

3.3 Scalability

 Shared Memory: Scalability is limited due to memory contention and hardware
constraints. Shared memory systems are typically restricted to tens of processors.
 Distributed Memory: Highly scalable, capable of supporting thousands or millions of
processors in cluster or cloud environments.
3.4 Programming Complexity

 Shared Memory: Easier to program, as data is shared implicitly. However,
synchronization adds complexity.
 Distributed Memory: More challenging to program, as developers must explicitly
manage data distribution and communication.

3.5 Performance

 Shared Memory: Faster for small-scale systems due to low-latency access to memory,
but performance degrades with contention as more processors are added.
 Distributed Memory: Performance depends on efficient message-passing and load
balancing. Network latency can be a bottleneck, but the architecture handles large-scale
problems better.

3.6 Hardware Setup

 Shared Memory: Single-node systems with multi-core processors or SMP setups.
 Distributed Memory: Multi-node systems, such as clusters or supercomputers,
connected by high-speed networks.

3.7 Fault Tolerance

 Shared Memory: A failure in the shared memory affects all processors, leading to
system-wide failures.
 Distributed Memory: Failures in one node typically do not impact the memory or
operation of other nodes, improving overall fault tolerance.
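The fault-isolation contrast can be sketched with OS processes, whose separate address spaces mirror separate node memories; the `worker` function and the simulated failure are hypothetical (and the sketch assumes a platform, such as Linux, where child processes can be created from module level).

```python
# Fault-isolation sketch: one worker process crashes, yet the others finish,
# because each has its own private memory behind an OS process boundary.
from multiprocessing import Process

def worker(rank):
    if rank == 1:
        raise RuntimeError("simulated node failure")
    # ranks 0 and 2 complete normally

procs = [Process(target=worker, args=(r,)) for r in range(3)]
for p in procs:
    p.start()
for p in procs:
    p.join()

print([p.exitcode for p in procs])   # [0, 1, 0]: only "node" 1 failed
```

In a shared-memory program the analogous failure, a thread corrupting shared state, can take down the whole process, which is the asymmetry this section describes.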

3.8 Examples of Applications

 Shared Memory: Ideal for applications running on single-node systems, such as
financial modeling, image processing, and real-time simulations.
 Distributed Memory: Suitable for large-scale applications like weather modeling,
distributed machine learning, and big data processing.

4. Hybrid Architectures: Combining Shared and Distributed Memory

Modern high-performance computing systems often employ hybrid architectures, which
combine shared and distributed memory models. For instance, multi-core processors (shared
memory) are used within a single node, while nodes communicate with one another using a
distributed memory model.
4.1 Benefits of Hybrid Architectures

 Scalability and Performance: Hybrid architectures leverage the scalability of
distributed memory and the low-latency communication of shared memory.
 Efficient Resource Utilization: Shared memory is used for intra-node communication,
while distributed memory handles inter-node communication.

4.2 Programming Paradigms

 MPI + OpenMP: A common hybrid programming model, where MPI handles
communication between nodes and OpenMP is used for multi-threading within a node.
 CUDA + MPI: Used in GPU-accelerated systems to combine distributed memory (across
nodes) and shared memory (on GPUs).
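The MPI + OpenMP pattern can be sketched in Python by nesting threads (shared memory within a node) inside processes (distributed memory between nodes). The two-"node" split, the per-node thread count, and the `node` function are illustrative assumptions, not a real MPI/OpenMP program (and, as above, the sketch assumes fork-style process creation, e.g. on Linux).

```python
# Hybrid sketch mirroring MPI + OpenMP: processes (distributed memory, explicit
# messages) each run several threads (shared memory) over a node-local list.
from multiprocessing import Process, Pipe
import threading

def node(conn, chunk, n_threads):
    partial = [0] * n_threads                 # shared only within this "node"
    def thread_sum(tid):
        partial[tid] = sum(chunk[tid::n_threads])   # threads share 'partial'
    threads = [threading.Thread(target=thread_sum, args=(t,))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    conn.send(sum(partial))                   # inter-node: explicit message

data = list(range(100))
chunks = [data[:50], data[50:]]               # one chunk per "node"
totals = []
for chunk in chunks:
    parent, child = Pipe()
    p = Process(target=node, args=(child, chunk, 2))
    p.start()
    totals.append(parent.recv())
    p.join()
print(sum(totals))                            # 4950 = sum(range(100))
```

Intra-node results flow through the shared `partial` list with no messages at all, while inter-node results must be sent explicitly, exactly the division of labor the MPI + OpenMP model exploits.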

5. Choosing Between Shared and Distributed Memory

The choice between shared memory and distributed memory depends on several factors:

 System Size: Shared memory is suitable for small systems, while distributed memory is
necessary for large-scale systems.
 Application Requirements: Shared memory is ideal for tasks requiring frequent
communication, while distributed memory is better for tasks with independent
computations.
 Scalability Needs: Distributed memory is the preferred option for applications that
require high scalability.

Conclusion

Shared memory and distributed memory represent two fundamental paradigms in parallel and
distributed computing, each with its strengths and limitations. Shared memory simplifies
programming and offers low-latency communication, making it ideal for small-scale systems. In
contrast, distributed memory provides scalability and fault tolerance, making it the architecture
of choice for large-scale systems like supercomputers and clusters.

Hybrid architectures that combine the two paradigms are increasingly popular, enabling systems
to harness the advantages of both approaches. By understanding the differences and trade-offs
between shared and distributed memory, developers can choose the most appropriate architecture
for their applications, ultimately achieving higher performance and efficiency in parallel and
distributed computing.
