Parallel Programming
HOMEWORK NUMBER 5
OBJECTIVE:
At the end of this topic, the students should be able to perform:
INSTRUCTIONS:
For each of the questions listed below, please write an essay of not less than 500
words. The essay should be composed of a minimum of three paragraphs that address
the stated question based on the class discussion or lecture.
Message passing is used on a wide range of systems, from desktop equipment to the fastest HPC systems in the world, which offer several hundred thousand processing elements. In message passing communication, each communicating entity has its own message send/receive unit. The message is not stored on the communications link, but rather at the sender and receiver endpoints. In contrast, shared memory communication can be seen as a memory block used as a communication device, in which all the data are stored in the shared communication medium itself. For example, a home control system
has one microcontroller per household device—lamp, thermostat, faucet, appliance, and
so on. The devices must communicate relatively infrequently; furthermore, their physical
separation is large enough that we would not naturally think of them as sharing a central
pool of memory. Passing communication packets among the devices is a natural way to describe coordination between them.

In a parallel system built with MPI, the individual computers -- or multiple processor cores within the same computer -- are called nodes. Each node in the
parallel arrangement typically works on a portion of the overall computing problem. The
challenge then is to synchronize the actions of each parallel node, exchange data
between nodes and provide command and control over the entire parallel cluster. The
Message Passing Interface (MPI) defines a standard suite of functions for these tasks. MPI is
not endorsed as an official standard by any standards organization such as IEEE or ISO,
but it is generally considered to be the industry standard, and it forms the basis for most of the message-passing libraries in use today. The earlier MPI 1.3 standard (dubbed MPI-1) provides over 115 functions. The later MPI 2.2 standard (or
MPI-2) offers over 500 functions and is largely backward compatible with MPI-1.
However, not all MPI libraries provide a full implementation of MPI-2. Today, MPI continues to evolve: ongoing work seeks to improve performance, include multicore and cluster support, and interoperate with more applications.
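As a minimal sketch of what this looks like in practice (an illustration added here, not taken from the lecture material), the following C program passes a single integer from node 0 to node 1 using two of the core MPI functions:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which node am I? */

        if (rank == 0) {
            value = 42;
            /* node 0 sends one int to node 1 (message tag 0) */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* node 1 blocks until the message arrives */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("node 1 received %d\n", value);
        }

        MPI_Finalize();                         /* shut down the MPI runtime */
        return 0;
    }

Compiled with mpicc and launched with something like mpirun -np 2 ./a.out, each copy of the program becomes one node; the message exists only at the sender and receiver endpoints, never in a shared pool of memory, which is exactly the message passing model described above.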
2. What are the advantages and disadvantages of explicit and implicit parallel
programming approaches?
In explicit parallel programming, the parallelism is expressed through dedicated constructs of the programming language, aimed at describing (to a certain degree of detail) the way in which the parallel computation will take place. A wide range of solutions exists within this framework. One extreme is represented by the "ancient" use of basic, low-level communication primitives eventually added to existing programming languages. Although this allows the highest
degree of flexibility (any form of parallel control can be implemented in terms of the basic
low-level primitives), it leaves the additional layer of complexity completely on the shoulders of the programmer, making the task extremely complicated. More sophisticated approaches have been proposed, supplying users with tools for dealing with parallelism at a higher level, ranging from libraries that supply a uniform set of communication primitives to hide the details of the computing environment (e.g., PVM [101] and Linda [39]) to sophisticated languages like PCN [35].
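To make the low-level end of this spectrum concrete, here is a rough C sketch using POSIX threads (the array size and thread count are arbitrary choices for illustration), in which the programmer must explicitly create the workers, partition the data, and collect the results by hand:

    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000   /* array size (illustrative) */
    #define NTHREADS 4         /* chosen to divide N evenly */

    static double data[N];
    static double partial[NTHREADS];

    /* each worker explicitly sums its own slice of the array */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        long lo = id * (N / NTHREADS);
        long hi = lo + (N / NTHREADS);
        double s = 0.0;
        for (long i = lo; i < hi; i++)
            s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];

        for (long i = 0; i < N; i++)
            data[i] = 1.0;

        /* the programmer creates, partitions, and joins by hand */
        for (long t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, worker, (void *)t);

        double total = 0.0;
        for (long t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += partial[t];
        }
        printf("sum = %.0f\n", total);  /* expect 1000000 */
        return 0;
    }

Every detail of task division and synchronization here is the programmer's responsibility; a higher-level library such as PVM, or a language like PCN, would hide some of these steps.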
Explicit parallelism has various advantages and disadvantages. The main advantage is
its considerable flexibility, which allows the programmer to code a wide variety of patterns of execution, giving considerable freedom in the choice of what should be run in parallel and how.
On the other hand, the management of the parallelism--a very complex task--is left to the
programmer. Activities like detecting the components of the parallel execution and coordinating them can be more or less complex depending on the specific application. In short, the advantage of
explicit parallel programming is the absolute programmer control over the parallel
execution. A skilled parallel programmer takes advantage of explicit parallelism to
produce very efficient code. However, programming with explicit parallelism is often
difficult, especially for non-computing specialists, because of the extra work involved in planning the task division and synchronization of concurrent processes.

With implicit parallelism, a programmer who writes implicitly parallel code does not need to worry
about task division or process communication, focusing instead on the problem that his
or her program is intended to solve. Implicit parallelism generally facilitates the design of parallel programs, because the parallelism is expressed by some of the language's constructs. A pure implicitly parallel language does not need special directives, operators, or functions to enable parallel execution, as opposed to explicit parallelism. Declarative languages are a good example: their focus on a high-level description of the problem and their mathematical nature turn into advantages for implicit parallelization:
● they are referentially transparent, which means that variables are seen as mathematical objects that are bound to a value at most once (in contrast with conventional assignment languages);
● their operational semantics are based on some form of non-determinism (e.g., clause selection in logic languages), which lends itself naturally to parallel execution.
These advantages of implicit parallelism (automatic parallelization, no additional effort for the programmer) are balanced by some non-trivial drawbacks:
● the system may fail to control the granularity of the parallel computation, and may exploit very fine-grained parallelism, leading to slow-downs instead of speed-ups;
● the system may attempt to parallelize code which is only apparently parallel, being instead inherently sequential; as the sketch below shows, this introduces large amounts of synchronization points and can slow the program down further.
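A classic instance of "apparently parallel but inherently sequential" code is a loop with a carried dependence. The small C sketch below (illustrative, not from the lecture) looks data-parallel at first glance, yet each iteration needs the result of the previous one:

    #include <stddef.h>

    /* Looks data-parallel, but iteration i reads a[i - 1], the value
       written by iteration i - 1, so the iterations must run in
       order: the loop is inherently sequential. */
    void running_sum(double *a, const double *b, size_t n)
    {
        for (size_t i = 1; i < n; i++)
            a[i] = a[i - 1] + b[i];
    }

An automatic parallelizer that naively split this loop across workers would have to re-serialize it with synchronization points, which is exactly the drawback described in the last bullet above.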
3. Another popular statement is that shared memory programming is easier than message-
passing programming. Is it true? If so, justify your answer with examples.
Shared Memory Process: The shared memory in the shared memory model is the memory that can be simultaneously accessed by multiple processes, so that the processes can communicate with each other. All POSIX systems, as well as Windows operating systems, use shared memory. In this model, a common memory region can be accessed by both Process 1 and Process 2. Communication through shared memory is faster as compared to the message passing model on the same machine. However, the shared memory model may create problems such as synchronization and memory protection that need to be addressed.
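As a rough sketch of this model on a POSIX system (the segment name "/demo_shm" and the sizes are invented for illustration; error checks are omitted for brevity), a parent and child process can communicate through a region created with shm_open and mmap:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* create a named shared memory object and size it */
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, 4096);

        /* map the same physical page into this process */
        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);

        if (fork() == 0) {                 /* child: the "writer" */
            strcpy(buf, "hello from the child");
            return 0;
        }
        wait(NULL);                        /* crude synchronization */
        printf("parent read: %s\n", buf);  /* parent sees child's write */

        shm_unlink("/demo_shm");           /* clean up the name */
        return 0;
    }

Even this toy example needs wait() to order the two accesses; without some form of synchronization the parent could read before the child writes, which is precisely the synchronization problem noted above. On Linux the example links with -lrt.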
Message Passing Process: The message passing model allows multiple processes to read
and write data to the message queue without being connected to each other. Messages
are stored on the queue until their recipient retrieves them. Message queues are quite
useful for interprocess communication and are used by most operating systems. In this model, both processes P1 and P2 can access the message queue to store and retrieve data. An advantage of the message passing model is that it is easier to build parallel hardware around it, because message passing is quite tolerant of higher communication latencies; it is also typically much easier to implement than the shared memory model. However, the message
passing model has slower communication than the shared memory model because the
connection setup takes time.

Again, it's a pretty simple difference. In a shared memory
model, multiple workers all operate on the same data. This opens up a lot of the classic concurrency hazards. A message passing model instead keeps everyone separated, so that workers cannot modify each other's data. By analogy, let's
say we are working with a team on a project together. In one model, we are all crowded
around a table, with all of our papers and data laid out. We can only communicate by
changing things on the table. We have to be careful not to all try to operate on the same
piece of data at once, or it will get confusing and things will get mixed up. In a message
passing model, we all sit at our desks, with our own set of papers. When we want to, we
can pass a paper to someone else as a "message", and that worker can now do what
they want with it. We only ever have access to whatever we have in front of us, so we
never have to worry that someone is going to reach over and change one of the papers we are working on.
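Returning to the message queue mechanism described earlier, here is a hedged POSIX sketch (the queue name "/demo_mq" and the message sizes are invented for illustration; error checks are omitted) in which a child process stores a message on the queue and the parent retrieves it:

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };

        /* create the queue; messages live here until retrieved */
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);

        if (fork() == 0) {                       /* child: sender (P1) */
            const char *msg = "hello via the queue";
            mq_send(mq, msg, strlen(msg) + 1, 0);
            return 0;
        }

        char buf[64];
        mq_receive(mq, buf, sizeof buf, NULL);   /* parent: receiver (P2) */
        printf("received: %s\n", buf);

        wait(NULL);
        mq_unlink("/demo_mq");                   /* remove the queue name */
        return 0;
    }

The data is copied into the queue and back out again, so there is no shared state to protect, but there is extra copying and setup cost compared to shared memory, which matches the trade-off described above.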