Introduction to Parallel Programming
Section 1. Overview of Parallel Computer Systems
Preconditions of Parallel Computing
Types of Parallel Computer Systems
Supercomputers
Supercomputers. BlueGene:
– The system is still under development; the current name of
the system is “BlueGene/L DD2 beta-System”, the first phase
of the complete computer system,
– Peak performance is expected to reach 360 TFlops by the time
the system is put into its final configuration,
– Features of the current variant of the system:
• 32 racks, each with 1024 dual-core 32-bit PowerPC 440 0.7 GHz
processors;
• Peak performance is approximately 180 TFlops;
• The maximum performance demonstrated on the LINPACK benchmark
is 135 TFlops.
Supercomputers. МВС-15000
(Interdepartmental Supercomputer Center of the Russian Academy of Sciences)
– The total number of nodes is 276 (552 processors); each
computational node includes:
• 2 IBM PowerPC 970 2.2 GHz processors with 96 KB L1 cache and
512 KB L2 cache,
• 4 GB RAM per node,
• 40 GB IDE hard disk,
– SuSE Linux Enterprise Server 8 operating system for the x86
and PowerPC platforms,
– Peak performance is 4857.6 GFlops; the maximum performance
demonstrated on the LINPACK benchmark is 3052 GFlops.
[Figure: structure of the МВС-15000 system — the computational nodes (CN) of the decisive field, the network management station (NMS), the instrumental computational node (ICN) and the file server (FS), interconnected through a Myrinet switch and a Gigabit Ethernet switch with access to the Internet.]
Clusters
Clusters. Beowulf
– Nowadays a “Beowulf”-type cluster is a system consisting of a
server node and one or more client nodes connected by Ethernet
or some other network. The system is built of commodity
off-the-shelf components running Linux, with standard Ethernet
adaptors and switches. It contains no custom hardware and can
be easily reproduced.
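Such a cluster is typically programmed with a message-passing library. Below is a minimal sketch (illustrative only, not part of the original slides; it assumes an MPI implementation with the usual mpicc compiler wrapper and mpirun launcher) of the kind of program every node of a Beowulf cluster could run:

/* Minimal MPI sketch; build and run, e.g.:
   mpicc hello.c -o hello && mpirun -np 4 ./hello  (file name is illustrative) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime on every node */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* number of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes in the job */
    printf("Hello from process %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut the runtime down */
    return 0;
}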
Clusters. Beowulf:
– 1994, NASA Goddard Space Flight Center; the cluster was
created under the supervision of Thomas Sterling and Don
Becker:
• 16 computers based on 486DX4 100 MHz processors,
• Each node had 16 MB RAM,
• The nodes were connected by three 10 Mbit/s network
adaptors,
• Linux operating system, GNU compiler and MPI library.
Clusters. Avalon
– 1998, the Avalon system, Los Alamos National Laboratory
(USA), built under the supervision of astrophysicist Michael
Warren:
• 68 Alpha 21164A processors with a clock frequency of 533 MHz
(later expanded to 140 processors),
• 256 MB RAM, a 3 GB HDD and a Fast Ethernet card on each
node,
• Linux operating system,
• Peak performance was 149 GFlops, with 48.6 GFlops
demonstrated on the LINPACK benchmark.
Clusters. Thunder
– 2004, Lawrence Livermore National Laboratory (USA):
• 1024 servers with 4 Intel Itanium 1.4 GHz processors each,
• 8 GB RAM per node,
• Total disk capacity of 150 TB,
• CHAOS 2.0 operating system,
• At present the Thunder cluster, with a peak performance of
22938 GFlops and a maximum of 19940 GFlops demonstrated on the
LINPACK benchmark, holds the 5th position in the Top500 list
(in the summer of 2004 it occupied the 2nd position).
Taxonomy of Parallel Computer Systems
Flynn’s taxonomy
– Flynn's taxonomy is the best-known classification scheme for
computer systems. It classifies systems by the multiplicity of
the instruction and data streams handled by the hardware:
• SISD (Single Instruction, Single Data)
• SIMD (Single Instruction, Multiple Data)
• MISD (Multiple Instruction, Single Data)
• MIMD (Multiple Instruction, Multiple Data)
Multiprocessors (case of shared memory)
[Figure: several processors, each with its own cache, connected to a common shared RAM.]
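The slides show only the hardware diagram; as an illustrative sketch (our assumption: OpenMP as the shared-memory programming interface, compiled with gcc -fopenmp), several threads of one program operate directly on the same data in the common RAM:

#include <stdio.h>
#include <omp.h>

int main(void) {
    const int n = 1000000;
    double sum = 0.0;
    /* every thread reads and writes the shared address space directly;
       the reduction clause prevents a race on the shared variable sum */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1.0);
    printf("Partial harmonic sum: %f (max threads: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}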
Multiprocessors (case of distributed shared memory)
[Figure: several processors, each with its own cache and local RAM module, together forming a single shared address space.]
Multicomputers
– no-remote-memory-access (NORMA) systems,
– each processor of the system can use only its own local
memory,
– access to data located on other processors requires the
explicit execution of message-passing operations.
[Figure: multicomputer — processors, each with its own cache and private RAM, communicating only through an interconnection network.]
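Because there is no shared memory, data held by one processor reaches another only as an explicit message. A sketch of such an exchange (illustrative, not taken from the slides; standard MPI point-to-point calls are assumed):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        value = 42;   /* exists only in the local memory of process 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* process 1 cannot read the memory of process 0 directly;
           the data must be received as a message */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received %d from process 0\n", value);
    }
    MPI_Finalize();
    return 0;
}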
This approach is used in the development of two important
types of multiprocessor computing systems:
– massively parallel processors (MPP), e.g. IBM RS/6000 SP2,
Intel PARAGON, ASCI Red, Parsytec transputer systems,
– clusters, e.g. AC3 Velocity and NCSA NT Supercluster.
Multicomputers. Clusters
Advantages:
– Clusters can be assembled either from separate computers
available to consumers or from standard computer units, which
allows costs to be cut down,
– The growing computational power of individual processors
makes it possible to build clusters from a relatively small
number (several tens) of processors (low-degree parallelism),
– For parallel execution it is sufficient to subdivide the
computational algorithm into only large independent parts
(coarse granularity).
Problems:
– Organizing the interaction of the computational cluster
nodes by means of data transmission usually leads to
considerable time delays,
– This imposes additional restrictions on the types of
parallel algorithms and programs being developed (streams of
data transmission must be of low intensity).
Overview of Interconnection Networks
Characteristics of network topologies (p — the number of processors):

Topology      Diameter    Bisection width    Connectivity    Number of links
Ring          ⌊p/2⌋       2                  2               p
Mesh (N=2)    2⌊√p/2⌋     2√p                4               2p
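As a worked instance (illustrative only; the column names above and the choice p = 16 are our assumptions, read off from the surviving table rows), the same characteristics can be computed directly from the formulas:

/* Illustrative sketch; compile e.g.: gcc topo.c -lm  (file name is hypothetical) */
#include <stdio.h>
#include <math.h>

int main(void) {
    int p = 16;                   /* number of processors (a perfect square for the mesh) */
    int q = (int)sqrt((double)p); /* side of the 2D mesh, q * q == p */

    /* Ring: diameter floor(p/2), bisection width 2, connectivity 2, p links */
    printf("Ring (p=%d): diameter=%d width=%d connectivity=%d links=%d\n",
           p, p / 2, 2, 2, p);

    /* Mesh (N=2): diameter 2*floor(sqrt(p)/2), width 2*sqrt(p), connectivity 4, 2p links */
    printf("Mesh (p=%d): diameter=%d width=%d connectivity=%d links=%d\n",
           p, 2 * (q / 2), 2 * q, 4, 2 * p);
    return 0;
}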
Software System Platforms for High-Performance Clusters
To be added
Summary
Discussions
Exercises
References
Next Section
Author’s Team
Gergel V.P., Professor, Doctor of Science in Engineering, Course Author
Grishagin V.A., Associate Professor, Candidate of Science in Mathematics
Abrosimova O.N., Assistant Professor (chapter 10)
Kurylev A.L., Assistant Professor (learning labs 4, 5)
Labutin D.Y., Assistant Professor (ParaLab system)
Sysoev A.V., Assistant Professor (chapter 1)
Gergel A.V., Post-Graduate Student (chapter 12, learning lab 6)
Labutina A.A., Post-Graduate Student (chapters 7, 8, 9, learning labs 1, 2, 3, ParaLab system)
Senin A.V., Post-Graduate Student (chapter 11, learning labs on Microsoft Compute Cluster)
Liverko S.V., Student (ParaLab system)
About the project
The purpose of the project is to develop a set of educational materials for the
teaching course “Multiprocessor computational systems and parallel programming”.
The course covers the problems of parallel computation stipulated in the
recommendations of the IEEE-CS and ACM Computing Curricula 2001. The educational
materials can be used for teaching/training specialists in the fields of
informatics, computer engineering and information technologies. The curriculum
consists of the training course “Introduction to the methods of parallel
programming” and the computer laboratory training “The methods and technologies
of parallel program development”. These educational materials make it possible
to seamlessly combine fundamental education in computer science with practical
training in methods of developing software for solving complicated,
time-consuming computational problems on high-performance computational systems.
The project was carried out at Nizhny Novgorod State University by the Software
Department of the Computing Mathematics and Cybernetics Faculty
(http://www.software.unn.ac.ru). The project was implemented with the support of
Microsoft Corporation.