Parallel Programming
Faculty of Engineering
Program: Informatics Engineering
Course: English I    Section: 102
Parallel Programming
Team members:
Daniel Gómez    ID: 29.505.671
Sharon Porchovic    ID: 29.780.653
Index
Definition
History
Parallel Programming Models
Languages that make up the paradigm
Advantages
Disadvantages
How does it work?
Methodologies
Definition
Parallel programming arose as an answer to the continuous
demand for greater computational power in areas as important
as weather forecasting, biocomputing, and astrophysics.
Conventional sequential computers have increased their speed
considerably, but not at the pace of increasingly complex
systems that require ever more computing time.
Parallel computing is the use of multiple
computational resources to solve a problem. It
differs from sequential computing in that several
operations can occur simultaneously.
The classic use of parallelism is the design of efficient programs in the scientific
field. The simulation of scientific problems is an area of great importance that
requires large processing capacity and memory space because of the complex
operations that must be performed.
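As a minimal sketch of the idea that several operations can occur simultaneously, the following fragment sums an array with OpenMP so that each thread works on a chunk of the data. It is illustrative only and not part of the original material; it assumes a compiler with OpenMP support (e.g. gcc -fopenmp sum.c -o sum).

/* Illustrative sketch: parallel sum of an array with OpenMP. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double a[N];
    for (int i = 0; i < N; i++)
        a[i] = 1.0;                      /* fill with sample data */

    double total = 0.0;
    /* Each thread sums part of the array; the reduction clause
     * combines the partial sums into one result. */
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < N; i++)
        total += a[i];

    printf("total = %f, max threads = %d\n", total, omp_get_max_threads());
    return 0;
}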
History
In 1958 Stanley Gill discussed parallel programming and the
need for "branching" and "waiting". Also in 1958, the IBM
researchers Cocke and Slotnick discussed the use of parallelism in
numerical calculations for the first time. Slotnick later proposed
SOLOMON, a project to build a supercomputer that was never
carried out, but whose design served as the basis for the
development of future projects. In 1962 Burroughs Corporation
created a 4-processor computer that accessed 16 memory modules.
That same year the ATLAS computer came into operation, the
first machine to implement the concepts of virtual memory and
paging. Then in 1964 the US Air Force (USAF) funded the design of
the first massively parallel computer, ILLIAC IV (256 processors).
Slotnick was hired to start the project, with companies such as Texas Instruments acting as contractors. In 1965
Dijkstra described and named the critical section problem. That same year, Cooley and Tukey
developed the fast Fourier transform algorithm, which would become one of the algorithms demanding
the most floating-point operation cycles. In 1967 Amdahl and Slotnick debated the feasibility of parallel
processing; Amdahl's law emerged from these debates. In 1968 Dijkstra described semaphores
as a possible solution to the critical section problem. From 1968 to 1976, different projects
were developed in the US, Russia, Japan and some European countries; the technology industry and
universities were the sectors investing the most in research on parallelism. The first application ran
on ILLIAC IV in 1976. The machine came to be called "the most infamous of the
supercomputers", since only 25% of it was ever completed, it took 11 years to build, and it cost four times the estimate.
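As an illustration of Dijkstra's idea mentioned above, the sketch below (an assumed example, not taken from the source) uses a POSIX binary semaphore to protect a critical section shared by two threads; compile with gcc and link with -lpthread.

/* Illustrative sketch: a binary semaphore guarding a critical section. */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t gate;           /* binary semaphore: 1 = section free */
static long counter = 0;     /* shared resource */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&gate);     /* P operation: enter the critical section */
        counter++;           /* only one thread updates at a time */
        sem_post(&gate);     /* V operation: leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&gate, 0, 1);               /* initial value 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* expected: 200000 */
    sem_destroy(&gate);
    return 0;
}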
History
Finally, in 1981 the ILLIAC IV project was dismantled by NASA. Although it is often considered a failure in
economic terms, it became the fastest computer of its time, and several important concepts used in its
construction were successfully applied in later projects. In the mid-1980s, a new type of
parallel computer was created when Caltech's "Concurrent Computation" project built a supercomputer for
scientific applications. The system showed that extreme performance could be achieved using regular,
commercially available microprocessors. Starting in the late 1980s, clusters emerged to compete with MPPs.
A cluster is a type of parallel computer built from multiple "off-the-shelf" computers connected by an "off-
the-shelf" network. Today, clusters are the dominant architecture in data centers. For MPPs (Massively Parallel
Processors) and clusters, the MPI (Message Passing Interface) standard emerged in the mid-1990s as a
convergence of earlier APIs. For shared-memory multiprocessors, a similar convergence took place
in the late 1990s, with the emergence of pthreads and OpenMP.
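The following sketch, assumed purely for illustration, shows the message-passing style that MPI standardized for clusters and MPPs: one process sends an integer to another with MPI_Send and MPI_Recv. It assumes an MPI implementation such as Open MPI (compile with mpicc, run with mpirun -np 2).

/* Illustrative sketch: point-to-point message passing with MPI. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 1) {
            int value = 42;
            /* Explicit communication: data moves only via messages. */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 0 received %d from rank 1\n", value);
        }
    }
    MPI_Finalize();
    return 0;
}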
Today, parallel computing is part of everyday life, with multi-core processors present almost by
default in most computing devices. Software has been an active part of the evolution of parallel programming.
Parallel programs are more difficult to write than sequential programs, because they require communication and
synchronization between the tasks that have been parallelized.
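To illustrate the communication and synchronization just mentioned, here is a minimal, assumed sketch in which one pthread produces a value and another waits for it using a mutex and a condition variable (compile with gcc and link with -lpthread).

/* Illustrative sketch: two tasks that must communicate and synchronize. */
#include <stdio.h>
#include <pthread.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int data = 0;
static int has_data = 0;

static void *producer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    data = 123;                   /* communicate a result ...            */
    has_data = 1;
    pthread_cond_signal(&ready);  /* ... and tell the consumer it exists */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!has_data)                       /* wait until the producer is done */
        pthread_cond_wait(&ready, &lock);
    printf("consumer received %d\n", data);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}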
Parallel Programming Models