Assignment
#### Introduction
In computer science, algorithms are step-by-step procedures or formulas for solving problems. As
computational needs grow, the speed and efficiency of these algorithms become increasingly important.
Traditional algorithms run on a single processor, but as multi-core and distributed
computing systems have grown more powerful, parallel algorithms have become essential for improving performance. This
report examines what parallel algorithms are, their importance, the main types and models, how they are designed, representative examples, and the challenges they raise.
#### Importance of Parallel Algorithms
1. **Performance Improvement**: Parallel algorithms can significantly reduce the time required to
execute complex computations, making them crucial for high-performance computing applications.
2. **Scalability**: They enable better utilization of multi-core processors and distributed computing
environments, allowing for scalable solutions that can handle larger data sets and more complex
problems.
3. **Efficiency**: Efficient use of resources in multi-core and distributed systems can lead to more
energy-efficient computations.
#### Types of Parallelism
1. **Data Parallelism**: Subsets of the same data are distributed across multiple processors and the same operation is performed on each subset; vectorized operations in matrix computations are a typical example (see the sketch after this list).
2. **Task Parallelism**: Different tasks or processes are executed concurrently across multiple
processors. Each processor performs a different operation on the same or different sets of data.
3. **Pipeline Parallelism**: A sequence of stages is arranged so that the output of one stage is the input
of the next. Each stage can be processed concurrently by different processors.
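Data parallelism is the easiest type to show in code. The following is a minimal sketch using OpenMP (OpenMP itself appears in the report's references; the array names, size, and values are illustrative assumptions). Compiled with an OpenMP-capable compiler (for example `gcc -fopenmp`), the loop's iterations are split across threads, each applying the same element-wise operation to its own slice of the data.

```c
#include <stdio.h>

#define N 1000000

static double a[N], b[N], c[N];

int main(void) {
    /* Fill the input vectors with illustrative values. */
    for (int i = 0; i < N; i++) {
        a[i] = i * 0.5;
        b[i] = i * 2.0;
    }

    /* Data parallelism: the same operation (element-wise addition) is applied
       to different subsets of the data; OpenMP splits the iteration space
       across the available threads. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        c[i] = a[i] + b[i];
    }

    printf("c[N-1] = %f\n", c[N - 1]);
    return 0;
}
```

Task parallelism, by contrast, would give each thread a different operation to perform (for instance, one thread computing a sum while another computes a maximum), and pipeline parallelism would let each thread handle one stage of a multi-stage computation as data streams through.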
#### Parallel Computing Models
1. **Shared Memory Model**: All processors share a single address space and communicate by reading and writing shared variables; OpenMP is a common example.
2. **Distributed Memory Model**: Each processor has its own local memory, and processors communicate by passing messages. This model is typically used in clusters and supercomputers. An example is MPI (Message Passing Interface); see the sketch after this list.
3. **Hybrid Model**: Combines elements of both shared and distributed memory models, often using
shared memory within nodes of a cluster and message passing between nodes.
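The distributed memory model can be made concrete with a minimal MPI sketch (MPI is named in the text above; the ranks, tag, and value used here are illustrative assumptions). Each process owns its own `local_value`, and because no memory is shared, the data only reaches the other process through an explicit message. It would typically be built with `mpicc` and launched with `mpirun -np 2`.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        /* Rank 0 owns some local data and sends it to rank 1; communication
           is explicit because the processes share no memory. */
        double local_value = 42.0;
        MPI_Send(&local_value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        double received = 0.0;
        MPI_Recv(&received, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %f from rank 0\n", received);
    }

    MPI_Finalize();
    return 0;
}
```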
#### Designing Parallel Algorithms
1. **Decomposition**: The problem is divided into smaller tasks that can be solved concurrently. Tasks
should be as independent as possible to minimize synchronization overhead.
2. **Assignment**: Tasks are distributed among the available processors. The goal is to balance the load so that no processor sits idle while others are overloaded (see the scheduling sketch after this list).
3. **Orchestration**: The communication, synchronization, and data access among tasks are organized so that processors coordinate correctly with as little overhead as possible.
4. **Mapping**: The physical assignment of tasks to processors, taking the architecture of the computing system into account to optimize performance.
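As a small illustration of the assignment step, the OpenMP sketch below (the task cost function, counts, and chunk size are illustrative assumptions) uses a dynamic schedule: iterations with uneven cost are handed out in chunks to whichever thread becomes free, which keeps the load balanced without the programmer assigning work by hand.

```c
#include <stdio.h>
#include <omp.h>

/* Simulated task whose cost varies with its index, so a fixed block
   assignment would leave some threads with far more work than others. */
static double uneven_task(int i) {
    double x = 0.0;
    for (int k = 0; k < (i % 100) * 1000; k++) {
        x += k * 1e-9;
    }
    return x;
}

int main(void) {
    const int n_tasks = 10000;
    double total = 0.0;

    /* Assignment: schedule(dynamic, 16) hands out chunks of 16 iterations
       on demand; reduction(+:total) combines the per-thread partial sums. */
    #pragma omp parallel for schedule(dynamic, 16) reduction(+:total)
    for (int i = 0; i < n_tasks; i++) {
        total += uneven_task(i);
    }

    printf("total = %f (computed by up to %d threads)\n",
           total, omp_get_max_threads());
    return 0;
}
```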
#### Examples of Parallel Algorithms
1. **Parallel Merge Sort**: Divides the array into sub-arrays, sorts them in parallel, and then merges them (see the first sketch after this list).
2. **Parallel Matrix Multiplication**: Distributes rows and columns of the matrices across processors and performs the multiplications concurrently (see the second sketch after this list).
3. **Parallel Dijkstra's Algorithm**: Finds shortest paths by splitting the work of each iteration, such as locating the closest unvisited vertex and relaxing edge distances, across multiple processors.
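The first two examples can be sketched directly with OpenMP (the array sizes, values, and the serial cutoff below are illustrative assumptions, not taken from the report). In the merge sort sketch, each recursive call becomes an OpenMP task, so the two halves of the array are sorted concurrently before being merged; compile with `-fopenmp`.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Merge the sorted halves a[lo..mid) and a[mid..hi) via scratch space tmp. */
static void merge(int *a, int *tmp, int lo, int mid, int hi) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

/* Sort a[lo..hi); large halves are sorted by independent OpenMP tasks. */
static void merge_sort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return;
    int mid = (lo + hi) / 2;
    if (hi - lo < 2048) {               /* small ranges: plain serial recursion */
        merge_sort(a, tmp, lo, mid);
        merge_sort(a, tmp, mid, hi);
    } else {
        #pragma omp task shared(a, tmp)
        merge_sort(a, tmp, lo, mid);
        #pragma omp task shared(a, tmp)
        merge_sort(a, tmp, mid, hi);
        #pragma omp taskwait            /* both halves must be sorted first */
    }
    merge(a, tmp, lo, mid, hi);
}

int main(void) {
    const int n = 1 << 20;
    int *a = malloc(n * sizeof(int));
    int *tmp = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) a[i] = rand();

    #pragma omp parallel   /* create the thread team ...                  */
    #pragma omp single     /* ... but let one thread spawn the root tasks */
    merge_sort(a, tmp, 0, n);

    printf("first = %d, last = %d\n", a[0], a[n - 1]);
    free(a);
    free(tmp);
    return 0;
}
```

Matrix multiplication parallelizes even more directly, because every entry of the result is independent of the others, so the two outer loops can simply be distributed across threads:

```c
#include <stdio.h>

#define N 512

static double A[N][N], B[N][N], C[N][N];

int main(void) {
    /* Fill A and B with illustrative values. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = i + j;
            B[i][j] = i - j;
        }

    /* Each C[i][j] is independent, so the (i, j) iteration space is
       collapsed and split across the available threads. */
    #pragma omp parallel for collapse(2)
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }

    printf("C[0][0] = %f\n", C[0][0]);
    return 0;
}
```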
#### Challenges
1. **Synchronization**: Ensuring that multiple processors do not interfere with each other's tasks can be complex (the sketch after this list shows a typical unsynchronized update and its fix).
2. **Communication Overhead**: In distributed memory systems, the cost of data transfer between
processors can be significant.
3. **Load Balancing**: Distributing tasks evenly among processors to prevent some from being
overworked while others are underutilized.
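The synchronization challenge is easiest to see in a summation. In the sketch below (a deliberately simplified OpenMP example; the array and variable names are illustrative assumptions), the first loop lets every thread update the same shared variable without coordination, so updates can be lost, while the second uses a `reduction` clause to give each thread a private partial sum that is combined safely at the end.

```c
#include <stdio.h>

#define N 1000000

static double x[N];

int main(void) {
    for (int i = 0; i < N; i++) x[i] = 1.0;

    /* Unsynchronized: all threads read and write unsafe_sum concurrently,
       so this is a data race and updates can be lost. */
    double unsafe_sum = 0.0;
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        unsafe_sum += x[i];              /* WRONG: racy shared update */

    /* Synchronized: reduction(+:sum) gives each thread a private partial
       sum and combines the partial sums when the loop ends. */
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i];

    printf("racy result: %f (often less than %d), correct result: %f\n",
           unsafe_sum, N, sum);
    return 0;
}
```

Communication overhead and load balancing show up in the same way: the MPI and dynamic-scheduling sketches earlier in the report trade extra messages or scheduling work for better processor utilization.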
#### Conclusion
Parallel algorithms are essential for leveraging the full potential of modern multi-core and distributed
computing environments. They offer significant performance improvements and scalability but come
with challenges such as synchronization, communication overhead, and load balancing. Understanding
and effectively implementing parallel algorithms are crucial for tackling large-scale and complex
computational problems.
### References
1. Grama, A., Gupta, A., Karypis, G., & Kumar, V. (2003). Introduction to Parallel Computing. Addison-Wesley.
2. Quinn, M. J. (2004). Parallel Programming in C with MPI and OpenMP. McGraw-Hill Education.
3. Wilkinson, B., & Allen, M. (2004). Parallel Programming: Techniques and Applications Using Networked
Workstations and Parallel Computers. Prentice Hall.