Lecture 1.2.2

University Institute of Engineering
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
Bachelor of Engineering (Computer Science & Engineering)
Subject Name: Parallel and Distributed Computing
Subject Code: 22CSH-354/22ITH-354
Prepared by: Er. Anupama Jamwal (E14665)

Content

Shared Memory vs. Distributed Memory in Parallel and Distributed Computing


Shared Memory vs. Distributed Memory in Parallel and Distributed Computing


Parallel and distributed computing systems aim to achieve faster computation by
breaking down tasks into smaller sub-tasks and running them simultaneously. The
type of memory architecture significantly impacts how these systems operate and
communicate.

1. Shared Memory Architecture


Key Features:
•Single Memory Space: All processors share a common memory, allowing them
to directly access and modify the same data.
•Communication: Processors communicate through shared variables in memory.
•Synchronization: Mechanisms like mutexes, semaphores, or barriers ensure that
multiple processors access shared resources without conflicts.


Advantages:
•Ease of Programming: No need for explicit message-passing since data is directly
shared.
•Low Communication Latency: Data transfer between processors is faster due to direct
memory access.
•Efficient for Small Systems: Works well for tightly-coupled systems like multicore
processors.

Disadvantages:
•Scalability Issues: Performance degrades as the number of processors increases due to
contention for memory access.
•Limited to Local Nodes: Typically confined to a single machine, making it unsuitable for
large-scale distributed systems.


Programming Models in Shared Memory Architecture

•OpenMP (Open Multi-Processing):


•Widely used for shared memory programming.
•Provides compiler directives (e.g., #pragma omp) to parallelize loops and sections of code.
•Supports dynamic and static scheduling for load balancing.
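
As a concrete illustration of these directives, here is a minimal OpenMP sketch (not from the original slides; the array size and contents are illustrative, and it assumes an OpenMP-enabled compile such as gcc -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        /* Parallelize the loop across threads; reduction gives each
           thread a private partial sum and combines them at the end.
           schedule(static) divides iterations evenly across threads. */
        #pragma omp parallel for reduction(+:sum) schedule(static)
        for (int i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            sum += a[i];
        }

        printf("sum = %f, max threads = %d\n", sum, omp_get_max_threads());
        return 0;
    }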

•Pthreads (POSIX Threads):


•Low-level API for creating and managing threads.
•Requires explicit synchronization (e.g., mutexes, condition variables).
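
A matching Pthreads sketch showing explicit thread creation and mutex synchronization (thread and iteration counts are illustrative; link with -lpthread):

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define ITERS 100000

    static long counter = 0;                                 /* shared data */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* protects it */

    /* Each thread increments the shared counter under the mutex, so
       updates cannot interleave and the final total is exact. */
    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t threads[NUM_THREADS];
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NUM_THREADS; i++)
            pthread_join(threads[i], NULL);
        printf("counter = %ld (expected %d)\n", counter, NUM_THREADS * ITERS);
        return 0;
    }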


Synchronization Challenges in Shared Memory:

•Race Conditions: When multiple threads access shared data simultaneously without proper synchronization.
•Deadlocks: Occur when two or more threads wait indefinitely for resources held by each other.
•False Sharing: Performance degradation caused by multiple threads accessing different variables within the same cache line.
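
To make the race condition concrete, the following sketch (an illustrative example, not from the slides) increments a shared counter from two threads with no lock; the final value is typically below the expected total because counter++ is a non-atomic read-modify-write:

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;   /* shared and unprotected */

    /* counter++ is a load-add-store sequence; two threads can read the
       same old value, so one of the two increments is silently lost. */
    static void *unsafe_worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;         /* data race: no lock */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, unsafe_worker, NULL);
        pthread_create(&t2, NULL, unsafe_worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 200000)\n", counter);
        return 0;
    }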


2. Distributed Memory Architecture


Key Features:
•Independent Memory for Each Processor: Each processor has its own private memory, and processors
communicate by passing messages.
•Communication: Explicit message-passing interfaces like MPI (Message Passing Interface) or PVM
(Parallel Virtual Machine) are used.
•Scalability: Designed to scale across multiple machines, often connected by a network.
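
A minimal MPI sketch of this model (a hypothetical example; run with e.g. mpirun -np 4): each process owns its own variables, and no data is visible across ranks without an explicit message.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes   */

        /* 'local' lives in this process's private memory; other ranks
           cannot read it without an explicit message. */
        int local = rank * 10;
        printf("rank %d of %d: local = %d\n", rank, size, local);

        MPI_Finalize();
        return 0;
    }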

Advantages:
•Scalability: Can handle large-scale systems with thousands of nodes.
•Fault Tolerance: Failure of one node does not affect the entire system.
•Geographical Distribution: Suitable for systems spread across different locations.


Disadvantages:
•Complex Programming: Explicit message-passing requires careful management of data
distribution and communication.
•Higher Latency: Communication between processors is slower due to network overhead.
•Load Balancing Challenges: Ensuring all processors work efficiently without idle time can be
complex.
Usage:
•Shared Memory: Best for small-scale systems, multicore processors, or when ease of programming and low latency are critical.
•Distributed Memory: Suitable for large-scale systems, cluster computing, and applications requiring high scalability and fault tolerance.


Programming Models in Distributed Memory Architecture

•MPI (Message Passing Interface):


•Standard for distributed memory programming.
•Offers functions for point-to-point communication (e.g., MPI_Send, MPI_Recv) and collective operations (e.g., MPI_Bcast, MPI_Reduce); both appear in the sketch below.
•MapReduce:
•Framework for processing large datasets in distributed environments.
•Key-value pair processing with two phases: Map (transforming input into key-value pairs) and Reduce (aggregating values by key).
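
The sketch below illustrates both MPI styles named above: MPI_Send/MPI_Recv for point-to-point transfer and MPI_Reduce for a collective sum (illustrative values; run with at least two processes, e.g. mpirun -np 2):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Point-to-point: rank 1 sends one integer to rank 0. */
        if (rank == 1) {
            int msg = 42;
            MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            int msg;
            MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 0 received %d\n", msg);
        }

        /* Collective: every rank contributes its id; rank 0 gets the sum. */
        int contribution = rank, total = 0;
        MPI_Reduce(&contribution, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum of ranks = %d\n", total);

        MPI_Finalize();
        return 0;
    }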


Synchronization Challenges in Distributed Memory

•Message Latency: Communication delays due to network transmission.
•Fault Recovery: Handling node failures without losing computation progress.
•Load Imbalance: Uneven distribution of work among nodes, leading to bottlenecks.


Comparison Table

Feature                  Shared Memory                  Distributed Memory
Memory Access            Single, shared memory space    Independent memory per processor
Communication            Through shared variables       Via message passing
Scalability              Limited to a single system     Highly scalable
Programming Complexity   Easier                         More complex
Communication Latency    Low                            High
Hardware Requirements    Single machine                 Multiple machines
Fault Tolerance          Limited                        Better


Metric            Shared Memory                                        Distributed Memory
Fault Tolerance   Low (failure of a node halts execution)              High (fault recovery mechanisms)
Scalability       Poor beyond a certain number of cores                Excellent for large-scale systems
Bandwidth         High (local memory buses)                            Limited by network bandwidth
Speedup           Limited by Amdahl's Law (synchronization overhead)   Influenced by communication latency
Latency           Low (local memory access)                            High (network communication)
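
For reference, Amdahl's Law cited in the table bounds the speedup S on N processors when only a fraction p of the program is parallelizable:

    S(N) = 1 / ((1 - p) + p / N)

For example, with p = 0.95 and N = 8, S = 1 / (0.05 + 0.11875) ≈ 5.9, well short of the ideal 8; as N grows without bound, S approaches 1 / (1 - p) = 20.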


Tools and Libraries Required for Both Architectures:

•Shared Memory Tools:


• Intel Threading Building Blocks (TBB)
• OpenMP
• Pthreads

•Distributed Memory Tools:


• MPI (OpenMPI, MPICH)
• Apache Hadoop, Spark
• Kubernetes for distributed container orchestration
