
Universidad Alejandro de Humboldt

Faculty of Engineering
Program: Informatics Engineering
Course: English I, Section: 102

Parallel Programming
Team members:
Daniel Gómez C.I.: 29.505.671
Sharon Porchovic C.I.: 29.780.653
Index

 Definition
 History
 Parallel Programming Models
 Languages that make up the paradigm
 Advantages
 Disadvantages
 How does it work?
 Methodologies
Definition
Parallel programming is a response to the continuous demand for greater computational power in areas as important as weather forecasting, biocomputing and astrophysics. Conventional sequential computers have been increasing their speed considerably, although not in proportion to increasingly complex systems that require more computing time. Parallel computing is the use of multiple computational resources to solve a problem. It differs from sequential computing in that several operations can occur simultaneously.
Classic parallelism, or put another way, the classic use of parallelism, is the design of efficient programs in the scientific field. The simulation of scientific problems is an area of great importance; such simulations require a large processing capacity and memory space because of the complex operations that must be performed.
History
In 1958, the British computer scientist Stanley Gill discussed parallel programming and the need for "branching" and "waiting". Also in 1958, the IBM researchers Cocke and Slotnick discussed the use of parallelism in numerical calculations for the first time. The latter proposed SOLOMON, a project to build a supercomputer that was never carried out, but whose design served as the basis for the development of future projects. In 1962, Burroughs Corporation created a 4-processor computer that accessed 16 memory modules. That same year, the ATLAS computer came into operation; it was the first machine to implement the concepts of virtual memory and paging. Then, in 1964, the US Air Force (USAF) funded the design of the first massively parallel computer, the ILLIAC IV (256 processors).
Slotnick was hired to start the project (using contractors such as Texas Instruments). In 1965, Dijkstra described and named the problem of critical sections. That same year, Cooley and Tukey developed the fast Fourier transform algorithm, which would become one of the algorithms requiring the most floating-point operation cycles. Amdahl and Slotnick debated the feasibility of parallel processing in 1967; from these debates emerged Amdahl's law. In 1968, Dijkstra described semaphores as a possible solution to the problem of critical sections. From 1968 to 1976, different projects were developed in the US, Russia, Japan and some European countries; the technology industry and universities were the sectors that invested the most in research on parallelism. The first application ran on the ILLIAC IV only in 1976. For this reason the machine was called "the most infamous of the supercomputers": it was only 25% completed, it took 11 years, and it cost four times the original estimate.
History
Finally, in 1981 the ILLIAC IV project was dismantled by NASA. Although it is regarded as a failure in economic terms, it became the fastest computer of its time, and several important concepts used in the construction of the ILLIAC IV were successfully applied in future projects. In the mid-1980s, a new type of parallel computer was created when Caltech's "Concurrent Computation" project built a supercomputer for scientific applications. The system showed that extreme performance could be achieved using regular, commercially available microprocessors. Starting in the late 1980s, clusters emerged to compete with MPPs.
A cluster is a type of parallel computer built from multiple "off-the-shelf" computers connected by an "off-the-shelf" network. Today, clusters are the dominant architecture in data centers. For MPPs (Massively Parallel Processors) and clusters, the MPI (Message Passing Interface) standard emerged in the mid-1990s, converging from other APIs. For shared-memory multiprocessors, a similar process of convergence took place in the late 1990s, with the emergence of pthreads and OpenMP.
Today, parallel computing is part of everyday life, with multi-core processors present almost by default in most computing devices. Software has been an active part of the evolution of parallel programming. Parallel programs are more difficult to write than sequential programs, as they require communication and synchronization between the tasks that have been parallelized.
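As an illustration of the shared-memory approach mentioned above, the following is a minimal sketch (not from the original slides) of an OpenMP parallel loop in C that sums an array; the array contents and size are arbitrary placeholders, and it assumes a compiler with OpenMP support (for example gcc with -fopenmp).

```c
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double data[N];
    double sum = 0.0;

    /* Fill the array with placeholder values. */
    for (int i = 0; i < N; i++) {
        data[i] = 1.0;
    }

    /* OpenMP splits the loop iterations among the available threads;
       the reduction clause synchronizes the updates to "sum". */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += data[i];
    }

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```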
Parallel Programming Models

 Algorithmic skeletons
 Components
 Distributed objects
 Remote method invocation
 Workflows
 Parallel Random Access Machine
 Stream processing
 Bulk synchronous parallelism
Languages that make up the paradigm

 Currently, there is a variety of programming languages with a multiparadigm approach, which allow flexibility when trying to achieve a goal in terms of programming. Likewise, languages tend to complement this behavior with libraries, APIs or frameworks (whether community or private), which give the programmer ready-made work tools.
 The parallel paradigm is implemented, in most of the currently best-known languages, through these tools. The accompanying image lists specific tools for some of those languages; they are not the only ones that exist.
 Likewise, there are other tools applied to more specific fields that use the parallel paradigm to improve their services, such as CUDA, NVIDIA's platform for programming GPUs. A minimal message-passing sketch follows this list.
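To make the idea of such library support concrete, here is a minimal sketch (not part of the original slides) using MPI, the message-passing standard mentioned in the history section: each process contributes its rank and the root process collects the sum with MPI_Reduce.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id         */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* Each process contributes one value; MPI_Reduce combines them
       on the root process (rank 0) using message passing. */
    int local = rank, total = 0;
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        printf("sum of ranks across %d processes = %d\n", size, total);
    }

    MPI_Finalize();
    return 0;
}
```

Such a program would typically be compiled with an MPI wrapper compiler such as mpicc and launched with mpirun, assuming an MPI implementation like MPICH or Open MPI is installed.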
Advantages

 Solves problems that cannot be handled by a single CPU
 Solves problems, delivering the results in less time
 Allows programs of greater order and complexity to be executed
 Allows more problems to be tackled in general
 Allows a task to be divided into independent parts
 Allows several instructions to be executed simultaneously
 Allows code to run faster
Disadvantages

 Higher energy consumption
 Programs are more difficult to write
 Difficulty achieving good synchronization and communication between tasks
 Delays caused by communication between tasks
 The number of components used is directly proportional to the potential failures
 Multiple processes are in a race condition if their result depends on the order in which they arrive
 If processes that are in a race condition are not properly synchronized, data corruption can occur (see the sketch after this list)
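To illustrate the last two points, here is a minimal pthreads sketch (not from the original slides): two threads increment a shared counter, and a mutex prevents the race condition; the iteration count is an arbitrary placeholder.

```c
#include <stdio.h>
#include <pthread.h>

#define ITERATIONS 1000000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Without the mutex, both threads would read and write "counter"
   concurrently and some updates could be lost (a race condition). */
static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    /* With proper synchronization the result is always 2 * ITERATIONS. */
    printf("counter = %ld\n", counter);
    return 0;
}
```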
How does it work?

When designing a parallel algorithm, it is necessary to take into account:
 The communication times.
 Maximizing the processing at each node or processing unit.
 The costs of implementing the algorithm.
 Planning times.

In this way, the design consists of four stages (a minimal sketch follows this list):
 Partitioning: in the domain of data or functions.
 Communication: carried out through different means or paradigms, such as shared memory or message passing.
 Agglomeration: tasks or data are grouped, taking into account possible dependencies.
 Mapping: groups are assigned to a processing unit.
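The following pthreads sketch (not from the original slides) walks through those stages for a simple array sum: the data is partitioned into contiguous blocks, the blocks are agglomerated into one chunk per thread, each chunk is mapped to a worker thread, and communication happens through the shared results array. The array size and thread count are arbitrary placeholders.

```c
#include <stdio.h>
#include <pthread.h>

#define N        1000000
#define NTHREADS 4

static double data[N];
static double partial[NTHREADS];   /* communication via shared memory */

struct chunk { int start; int end; int id; };

/* Each thread processes one agglomerated chunk of the partitioned data. */
static void *sum_chunk(void *arg) {
    struct chunk *c = arg;
    double s = 0.0;
    for (int i = c->start; i < c->end; i++) {
        s += data[i];
    }
    partial[c->id] = s;
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1.0;   /* placeholder data */

    pthread_t threads[NTHREADS];
    struct chunk chunks[NTHREADS];

    /* Partitioning + agglomeration: split the index range into
       NTHREADS contiguous blocks.  Mapping: one block per thread. */
    int block = N / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].start = t * block;
        chunks[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * block;
        chunks[t].id    = t;
        pthread_create(&threads[t], NULL, sum_chunk, &chunks[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += partial[t];       /* combine the partial results */
    }
    printf("total = %f\n", total);
    return 0;
}
```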
Methodologies
Single Instruction, Single Data (SISD)
 There is one processing element, which has access to a single program and data storage.
 At each step, the processing element loads an instruction and the corresponding data and executes the instruction.
 The result is saved back to the data storage.

Multiple Instruction, Single Data (MISD)
 There are multiple processing elements, each of which has a private program memory but common access to a global data memory.
 At each step, each processing element gets the same data item from the global memory and loads an instruction from its private program memory.
 Then, possibly different instructions are executed in parallel by each unit on the data previously received.
Methodologies
Single Instruction, Multiple Data (SIMD)
 There are multiple processing elements, each of which has private access to the data memory (shared or distributed). However, there is only one program memory, from which a special control unit fetches and dispatches instructions.
 At each step, each processing element gets the same instruction, loads a data item from its private memory, and executes the instruction on that item.
 The instruction is thus applied synchronously, in parallel, by all the processing elements to different data elements (see the sketch after this list).

Multiple Instruction, Multiple Data (MIMD)
 There are multiple processing units, each of which has both separate instructions and separate data.
 Each element executes a different instruction on a different piece of data.
 Processing elements work asynchronously.
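As a concrete illustration of SIMD, here is a minimal C sketch (not from the original slides) using SSE intrinsics on x86: a single add instruction operates on four pairs of floats at once; the array values are arbitrary placeholders, and an x86 processor with SSE is assumed.

```c
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics (x86) */

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};      /* placeholder data */
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    /* One SIMD instruction adds four pairs of floats simultaneously,
       instead of four separate scalar additions. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);
    _mm_storeu_ps(c, vc);

    for (int i = 0; i < 4; i++) {
        printf("c[%d] = %f\n", i, c[i]);
    }
    return 0;
}
```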