Parallel Computing
Overview
- Concepts and Terminology
- Parallel Computer Memory Architectures
- Parallel Programming Models
- Designing Parallel Programs
- Parallel Algorithm Examples
- Conclusion
Granularity
- Coarse: high computation, low communication
- Fine: low computation, high communication

Parallel Overhead
- Synchronizations
- Data communications
- Overhead imposed by compilers, libraries, tools, operating systems, etc.
Automatic Parallelization
The compiler analyzes the code and identifies opportunities for parallelism. The analysis includes attempting to determine whether the parallelism would actually improve performance. Loops are the most frequent target for automatic parallelization.
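A minimal sketch (not from the original slides) of the kind of loop such a compiler looks for, written here with an explicit OpenMP directive:

```c
/* A minimal sketch: the loop has no dependences between iterations,
   so its iterations can safely run on different threads. Compile with
   an OpenMP-enabled compiler (e.g. -fopenmp) for actual parallelism. */
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N], b[N];

    for (int i = 0; i < N; i++)
        b[i] = (double)i;

    /* Every iteration writes a[i] from b[i] only: no loop-carried
       dependence, so the compiler (or this directive) can parallelize. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i];

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}
```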
A Parallelizable Problem:
Calculate the potential energy for each of several thousand independent conformations of a molecule. When done, find the minimum-energy conformation.
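A minimal OpenMP sketch of this pattern; energy() here is a hypothetical stand-in for a real potential-energy model, and the min reduction assumes OpenMP 3.1 or later:

```c
/* A minimal sketch, assuming a hypothetical energy() function. Each
   conformation is evaluated independently; the minimum is then combined
   with an OpenMP reduction. */
#include <float.h>
#include <stdio.h>

#define N 10000   /* several thousand independent conformations */

/* Stand-in for a real potential-energy model. */
static double energy(int conformation) {
    double d = conformation - 4242.0;
    return d * d * 1e-4;
}

int main(void) {
    double min_e = DBL_MAX;

    #pragma omp parallel for reduction(min : min_e)
    for (int i = 0; i < N; i++) {
        double e = energy(i);   /* iterations are independent */
        if (e < min_e)
            min_e = e;
    }

    printf("minimum energy: %f\n", min_e);
    return 0;
}
```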
A Non-Parallelizable Problem:
The Fibonacci series: each term depends on the two preceding terms (F(n) = F(n-1) + F(n-2)), so the terms must be computed in order rather than independently.
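A minimal serial sketch shows the loop-carried dependence:

```c
/* A minimal sketch: each iteration reads the results of the two previous
   iterations, so the iterations cannot run independently on different
   processors the way the array example below can. */
#include <stdio.h>

int main(void) {
    long f[40];
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i < 40; i++)
        f[i] = f[i - 1] + f[i - 2];   /* loop-carried dependence */
    printf("F(39) = %ld\n", f[39]);
    return 0;
}
```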
Serial approach: perform a function on a 2D array; a single processor iterates through every element of the array.
Parallel approach: assign each processor a partition of the array; each process iterates through the elements of its own partition.
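A minimal MPI sketch of the partitioned approach. Assumptions: the number of processes divides the row count evenly, and for brevity every process holds the whole array but updates only its own block of rows; a real program would scatter the data and gather the results:

```c
#include <mpi.h>
#include <stdio.h>

#define ROWS 8
#define COLS 8

static double f(double x) { return x * x; }   /* hypothetical per-element function */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double a[ROWS][COLS];
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] = i * COLS + j;

    /* Each process iterates only over its own partition of rows. */
    int rows_per_proc = ROWS / size;          /* assumes size divides ROWS */
    int first = rank * rows_per_proc;
    for (int i = first; i < first + rows_per_proc; i++)
        for (int j = 0; j < COLS; j++)
            a[i][j] = f(a[i][j]);

    printf("process %d handled rows %d..%d\n", rank, first, first + rows_per_proc - 1);

    MPI_Finalize();
    return 0;
}
```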
Parallel Algorithm Example: Odd-Even Transposition Sort
Worst case scenario: a fully reversed array, which needs n phases to sort n elements. The two phases alternate: phase 1 compare-exchanges the pairs starting at index 1, phase 2 the pairs starting at index 0.

Initial:  6, 5, 4, 3, 2, 1, 0
Phase 1:  6, 4, 5, 2, 3, 0, 1
Phase 2:  4, 6, 2, 5, 0, 3, 1
Phase 1:  4, 2, 6, 0, 5, 1, 3
Phase 2:  2, 4, 0, 6, 1, 5, 3
Phase 1:  2, 0, 4, 1, 6, 3, 5
Phase 2:  0, 2, 1, 4, 3, 6, 5
Phase 1:  0, 1, 2, 3, 4, 5, 6
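A minimal serial sketch of the alternating phases above; in a truly parallel version, each processor holds one element (or a block) and compare-exchanges with a neighbor in each phase:

```c
/* A minimal sketch of odd-even transposition sort. Even-numbered passes
   start the pairs at index 1 ("Phase 1" above), odd-numbered passes at
   index 0 ("Phase 2"). After n phases the array is sorted. */
#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void odd_even_sort(int a[], int n) {
    for (int phase = 0; phase < n; phase++) {
        int start = (phase % 2 == 0) ? 1 : 0;   /* alternate pair boundaries */
        for (int i = start; i + 1 < n; i += 2)
            if (a[i] > a[i + 1])
                swap(&a[i], &a[i + 1]);
    }
}

int main(void) {
    int a[] = {6, 5, 4, 3, 2, 1, 0};            /* the worst case above */
    int n = 7;
    odd_even_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);                    /* prints: 0 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}
```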
Conclusion
Parallel computing can solve large problems far faster than serial computing, provided the problem decomposes well. There are many different approaches and models of parallel computing, each suited to different problems and architectures. Parallel computing is the future of computing.
References
- A Library of Parallel Algorithms, www2.cs.cmu.edu/~scandal/nesl/algorithms.html
- Internet Parallel Computing Archive, wotug.ukc.ac.uk/parallel
- Introduction to Parallel Computing, www.llnl.gov/computing/tutorials/parallel_comp/#Whatis
- Michael J. Quinn, Parallel Programming in C with MPI and OpenMP, McGraw-Hill Higher Education, 2003
- A. K. Dewdney, The New Turing Omnibus, Henry Holt and Company, 1993