Week 14 Applications of Parallel and Distributed Computing

LEC


Introduction to Parallel and Distributed Computing
Parallel and distributed computing involve harnessing the power of multiple
processors and computers to solve complex problems. This field is crucial for
tackling modern challenges in various domains, including scientific
simulations, data analytics, and artificial intelligence.

by DARWIN VARGAS
Definitions and Characteristics
Parallel computing utilizes multiple processors within a single system to execute tasks simultaneously, while distributed computing involves distributing tasks across multiple independent machines connected through a network.

Parallel Computing
Characterized by shared memory, tightly coupled processors, and high communication speed.

Distributed Computing
Features independent machines, distributed memory, and communication over a network.
Motivations and Driving Factors
The increasing complexity of problems, coupled with the demand for faster processing and analysis, drives the adoption of parallel and distributed computing.

1 Increased Computational Power
Parallel and distributed systems offer significant performance gains, allowing for the efficient handling of large datasets and complex tasks.

2 Scalability and Flexibility
These systems can scale to accommodate larger problems and adapt to changing workloads by adding more resources.

3 Cost-Effectiveness
Utilizing existing hardware resources effectively can reduce costs associated with purchasing new, high-performance systems.
Hardware Architectures
Different hardware architectures are employed in parallel and distributed systems, each with its strengths and weaknesses.

Shared Memory Architectures
Multi-core processors, NUMA systems, and SMP systems.

Distributed Memory Architectures
Clusters, grids, and cloud computing.
Software Paradigms and Models
Various software paradigms and models provide frameworks for developing and executing parallel and distributed applications.

Message Passing Interface (MPI)
A standard for communication between processes.

OpenMP
A directive-based API for shared-memory parallelism.

MapReduce
A programming model for distributed data processing.
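The MapReduce model above can be illustrated in miniature. The sketch below runs the classic word-count example as a single Python process: a map phase, a shuffle that groups by key, and a reduce phase. It shows only the structure of the model, not a distributed framework; the function names `map_phase`, `shuffle`, and `reduce_phase` are illustrative choices, not part of any real MapReduce API.

```python
from functools import reduce
from itertools import chain

# Map phase: each document is independently turned into (word, 1) pairs.
def map_phase(document):
    return [(word, 1) for word in document.split()]

# Shuffle phase: group the emitted pairs by key (the word).
def shuffle(mapped):
    groups = {}
    for word, count in chain.from_iterable(mapped):
        groups.setdefault(word, []).append(count)
    return groups

# Reduce phase: sum the counts collected for each word.
def reduce_phase(groups):
    return {word: reduce(lambda a, b: a + b, counts)
            for word, counts in groups.items()}

documents = ["to be or not to be", "to compute is to be"]
counts = reduce_phase(shuffle(map(map_phase, documents)))
print(counts["to"])  # 4
```

In a real framework such as Hadoop, the map and reduce functions run on different machines and the shuffle happens over the network; the programming model the author describes is exactly this three-phase shape.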
Parallelism: Task, Data, and Pipeline
Parallelism can be achieved in various ways, each suited for different types of problems and data.

Task Parallelism
Dividing a task into multiple sub-tasks that can be executed independently.

Data Parallelism
Applying the same operation to different parts of a dataset.

Pipeline Parallelism
Breaking down a task into a series of stages, where the output of one stage becomes the input for the next.
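Data parallelism, the second pattern above, can be sketched in a few lines: split a dataset into chunks and apply the same operation to each chunk on a separate worker. This sketch uses a thread pool for simplicity; note that for CPU-bound work in CPython, a process pool would be needed for true parallel speedup because of the global interpreter lock.

```python
from concurrent.futures import ThreadPoolExecutor

# Data parallelism: apply the same operation (squaring) to
# independent parts of a dataset, one chunk per worker.
def square_chunk(chunk):
    return [x * x for x in chunk]

data = list(range(8))
chunks = [data[0:4], data[4:8]]  # split the dataset into independent parts

with ThreadPoolExecutor(max_workers=2) as pool:
    results = pool.map(square_chunk, chunks)  # same operation, different data

squared = [x for chunk in results for x in chunk]
print(squared)  # [0, 1, 4, 9, 16, 25, 36, 49]
```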
Distributed Systems: Client-Server, Peer-to-Peer
Distributed systems often employ specific architectures to manage interactions and resource sharing between nodes.

Client-Server
A central server provides services to multiple clients.

Peer-to-Peer
Nodes act as both clients and servers, communicating directly with each other.
Synchronization and Coordination
Synchronization techniques ensure that parallel and distributed processes access shared resources in a controlled and orderly manner.

Locks
Provide exclusive access to a shared resource: only one process or thread may hold the lock at a time.

Semaphores
Limit concurrent access to a shared resource to a fixed number of processes or threads.

Barriers
Synchronization points where all participating processes must arrive before any may continue.
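All three primitives are available in Python's standard `threading` module, which makes for a compact illustration of how they combine: a lock protects a shared counter, a semaphore caps how many workers enter the critical region at once, and a barrier makes every worker wait until all have finished their update.

```python
import threading

counter = 0
lock = threading.Lock()             # lock: exclusive access to the counter
semaphore = threading.Semaphore(2)  # semaphore: at most 2 workers in the region at once
barrier = threading.Barrier(4)      # barrier: all 4 workers rendezvous here

def worker():
    global counter
    with semaphore:        # limited access to the critical region
        with lock:         # exclusive access while updating shared state
            counter += 1
    barrier.wait()         # no worker proceeds until all have arrived

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4
```

Without the lock, the `counter += 1` read-modify-write could interleave between threads and lose updates; the same reasoning applies to processes sharing memory in a parallel system.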
Load Balancing and Resource Management
Load balancing and resource management strategies aim to distribute workloads evenly across available resources, maximizing efficiency and performance.

1 Dynamic Load Balancing
Adjusts workload distribution based on real-time conditions.

2 Static Load Balancing
Pre-defined allocation of tasks based on system configuration.

3 Resource Allocation
Efficiently distributing resources like CPU, memory, and network bandwidth.
Fault Tolerance and Reliability
Fault tolerance ensures the continued operation of a system even in the face of failures, by employing redundancy and error handling mechanisms.

1 Redundancy
Multiple copies of data and resources for failover.

2 Error Detection and Recovery
Mechanisms for detecting and recovering from errors.

3 Checkpointing
Periodic saving of system state for restoration in case of failure.
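Checkpointing can be sketched with nothing more than periodic serialization of the computation's state: a long-running loop saves its state every few steps, and after a crash the work resumes from the last saved checkpoint rather than from scratch. The `checkpoint`/`restore` helpers and checkpoint interval below are illustrative choices, not a standard API.

```python
import os
import pickle
import tempfile

# Checkpointing sketch: periodically save computation state so work
# can resume from the last checkpoint instead of restarting.
def checkpoint(state, path):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore(path):
    with open(path, "rb") as f:
        return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "ckpt.pkl")
state = {"step": 0, "total": 0}

for step in range(1, 11):
    state["step"] = step
    state["total"] += step         # stand-in for a long-running computation
    if step % 5 == 0:              # checkpoint every 5 steps
        checkpoint(state, path)

# Simulated failure and recovery: resume from the last saved state.
recovered = restore(path)
print(recovered["step"], recovered["total"])  # 10 55
```

In a production system the checkpoint would go to redundant, durable storage, tying this mechanism back to the redundancy point above.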
