
A COMPARATIVE ANALYSIS OF SHARED MEMORY AND DISTRIBUTED MEMORY ARCHITECTURE FOR PARALLELISM

Introduction
Parallelism is a fundamental concept in computing, enabling the simultaneous execution of
multiple tasks to improve performance. The choice of parallel architecture plays a crucial role
in achieving efficient parallel processing.

Why Parallel Architectures?


Parallel computing is a type of computation in which many calculations or processes are carried
out simultaneously. It is based on the principle that large problems can often be divided into
smaller ones, which are then solved concurrently ("in parallel"). The need for an effective
memory architecture in parallel computing arises from the challenges of coordinating and
managing data access among multiple processors. As parallel systems employ multiple
processors working in tandem, optimizing the memory architecture becomes crucial to ensure
seamless communication, efficient data sharing, and synchronization. An effective memory
architecture helps overcome bottlenecks, enhances scalability, and improves overall
performance in parallel computing environments by giving processors efficient ways to access
and share data.
Shared memory and distributed memory are the two fundamental approaches to memory
architecture in parallel computing. The sections below compare them and discuss their
trade-offs.

SHARED MEMORY
Architecture:
In shared memory systems, multiple processors can access all memory as a global address
space; each processor can directly address the entire memory. This provides a natural means
of communication between processors and avoids redundant copies of data.
Multiple processors can operate independently while sharing the same memory resources.
Changes that one processor makes to a memory location are visible to all other processors,
which makes shared memory an efficient means of passing data between programs.
Historically, shared memory machines have been classified as UMA or NUMA, based upon
memory access times.
I. Uniform Memory Access (UMA) (when cache coherent, known as CC-UMA)
• Commonly represented today by Symmetric Multiprocessor (SMP) machines
• Identical processors
• Equal access and access times to memory

II. Non-Uniform Memory Access (NUMA)
• Often made by physically linking two or more SMPs, such that:
o One SMP can directly access the memory of another SMP
o Not all processors have equal access time to all memories
o Memory access across the link is slower
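The shared-address-space model described above can be sketched in Python using threads, which share one address space within a process. The function names, the `state` dictionary, and the parameters here are illustrative choices for this sketch, not part of any standard API:

```python
import threading

def worker(state, lock, n):
    # `state` lives in the one address space all threads share:
    # an update made here is immediately visible to every thread.
    for _ in range(n):
        with lock:  # synchronizing access is the programmer's responsibility
            state["counter"] += 1

def run_shared(nthreads=4, n=1000):
    state = {"counter": 0}  # a single object in the global address space
    lock = threading.Lock()
    threads = [threading.Thread(target=worker, args=(state, lock, n))
               for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["counter"]
```

Without the lock, concurrent increments could interleave and lose updates, which illustrates the synchronization burden that the shared memory model places on the programmer.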

Applications of shared memory:


• Modern Mobile Phones, Laptops, or Desktop PCs: The processors in these devices
share the same memory space, allowing for efficient communication and data transfer.
• Web Applications: Shared memory is used to scale up the performance of web
applications. It allows for efficient data transfer between different parts of the
application, improving the overall performance.
• Databases: Databases often use shared memory for storing and retrieving data. This
allows for fast access to data, improving the performance of database operations.
Advantages:
• Simplicity: Shared memory systems are generally easier to program, as there is no need
for explicit message passing.
• Data Sharing: All processors can access the same data, which can simplify data sharing
among processes.
• Simple synchronization mechanisms.
• High Interconnect Bandwidth: With a switched fabric, each processor's interface can
run at full bandwidth all the time, so the overall fabric bandwidth is much higher than
in shared-bus architectures.

2
Challenges:
• Limited Scalability: As more processors are added, contention for the shared memory
and its interconnect grows, so shared memory systems face scalability limits.
• Contention and Cache Coherence: Performance can suffer from memory contention
and from the overhead of keeping processor caches coherent.
• Synchronization Responsibility: The programmer must correctly synchronize access
to shared data (e.g. with locks or semaphores) to avoid race conditions.

DISTRIBUTED MEMORY
Architecture:
In distributed memory systems, each processor has its own local memory, and processors
communicate by passing messages (data is explicitly sent and received). Memory is not shared
among processors, and each processor operates independently.
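A minimal sketch of this message-passing style is shown below, using Python threads only as stand-in "processors" (a real distributed memory system would use a message-passing library such as MPI). Each worker computes on purely local data and communicates its result only by an explicit send; the function names and parameters are assumptions of this sketch:

```python
import threading
import queue

def worker(rank, local_data, outbox):
    # Each "processor" touches only its own local memory...
    local_sum = sum(local_data)
    outbox.put((rank, local_sum))  # ...and communicates by an explicit send

def run_distributed(data, nworkers=4):
    outbox = queue.Queue()  # stands in for the network between processors
    chunks = [data[i::nworkers] for i in range(nworkers)]  # partition the data
    workers = [threading.Thread(target=worker, args=(r, chunks[r], outbox))
               for r in range(nworkers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    # Explicit receive: collect one message per worker and combine.
    return sum(outbox.get()[1] for _ in range(nworkers))
```

Note that no worker ever reads another worker's data directly; all sharing happens through explicit messages, which is exactly the programming overhead distributed memory introduces.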
A software distributed shared memory (DSM) system in a distributed memory architecture is a
mechanism that allows multiple processors in a parallel computing environment to access a
shared address space, even though they physically have their own separate memories. This
approach provides the convenience of a shared memory programming model without requiring
physically shared memory.
Software DSM systems can be implemented in an operating system or as a programming
library, and can be thought of as extensions of the underlying virtual memory architecture. A
software DSM system creates the illusion of a single, large shared memory space that is
accessible by all processors in the system (virtually shared memory).

Hybrid architecture
A hybrid architecture is a system that combines the features of both shared-memory and
distributed-memory architectures. It is typically composed of several clusters or nodes, each
with multiple processors that share a local memory, and they communicate with other clusters
or nodes through message passing.
This means that they can exploit the benefits of both architectures, such as the ease of
programming within a cluster and the scalability across clusters. The challenge of this
architecture is to balance the trade-off between the communication cost and the computation
efficiency.
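The two-level structure of a hybrid architecture can be sketched as follows, again with Python threads as stand-ins: within a "node", workers update one shared accumulator (shared memory), while nodes communicate results only through explicit messages (distributed memory). All names and parameters here are illustrative assumptions:

```python
import threading
import queue

def node(data, nthreads, outbox):
    # Within a node: threads share one local accumulator (shared memory).
    local = {"sum": 0}
    lock = threading.Lock()

    def worker(chunk):
        partial = sum(chunk)
        with lock:  # intra-node synchronization
            local["sum"] += partial

    chunks = [data[i::nthreads] for i in range(nthreads)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    outbox.put(local["sum"])  # across nodes: explicit message passing

def run_hybrid(data, nnodes=2, nthreads=2):
    outbox = queue.Queue()
    node_chunks = [data[i::nnodes] for i in range(nnodes)]
    nodes = [threading.Thread(target=node, args=(node_chunks[i], nthreads, outbox))
             for i in range(nnodes)]
    for nd in nodes:
        nd.start()
    for nd in nodes:
        nd.join()
    return sum(outbox.get() for _ in range(nnodes))
```

This mirrors the common MPI-plus-threads (or MPI-plus-OpenMP) pattern: cheap sharing inside a node, explicit and costlier communication between nodes.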

Applications of Distributed Memory:
• Online Gaming, Music, YouTube: Entertainment platforms like these use distributed
memory to handle large amounts of data. This allows for efficient data retrieval and
storage, improving the performance of these platforms.

• Amazon, eBay, Online Banking, E-Commerce Websites: These platforms often use
distributed memory architecture for their operations. This allows for efficient data
transfer between different parts of the platform, improving the overall performance.

• Search Engines, Wikipedia, Social Networking, Cloud Computing: These
applications use distributed memory to handle large and complex databases. This allows
for efficient data retrieval and storage, improving the performance of these applications.

Advantages:
• Scalability: Distributed memory systems can scale more easily as the number of
processors increases, as there is no contention for a shared memory space.
• Geographical Distribution: Can be used across multiple machines, making it suitable
for large-scale parallel computing.
Challenges:
• Complexity in Programming: Writing programs for distributed memory systems can
be more complex due to the need for explicit message passing and coordination.
• Data Sharing Overhead: Sharing data between processors requires explicit
communication, which can introduce overhead.
• Potential for Load Imbalance: Workload distribution among processors can be
challenging, and load imbalance may occur.

Trade-offs (Benefit of one over the other)


• Ease of Programming: Shared memory systems are generally easier to program due
to the abstraction of shared data, while distributed memory systems require explicit
communication, making programming more complex.

• Scalability: Distributed memory systems are often more scalable, especially in
large-scale parallel computing, where shared memory systems may face scalability limits.

• Performance: The performance of shared memory systems can suffer from contention
and cache coherence overhead, while distributed memory systems may offer better
scalability and performance in certain scenarios.

Conclusion
In summary, the choice between shared memory and distributed memory architectures depends
on the specific requirements of the application, scalability goals, and the level of programming
complexity developers are willing to handle.

