
Parallel and Distributed

Computing
COMP3139
Contents
• Granularity
• Fine-grained parallelism
• Coarse-grained parallelism
• Medium-grained parallelism
• Examples
GRANULARITY (PARALLEL COMPUTING)

• In parallel computing, the granularity (or grain size) of a task is a
measure of the amount of work performed by that task.
• Granularity also accounts for the communication overhead between
multiple processors or processing elements.
• It is often defined as the ratio of computation time to communication time,
where computation time is the time required to perform the computation of a
task and communication time is the time required to exchange data
between processors.

3
GRANULARITY (PARALLEL COMPUTING)

• If Tcomp is the computation time and Tcomm is the communication time, then the
granularity G of a task is:

G = Tcomp / Tcomm

• Granularity is usually measured as the number of instructions
executed in a particular task.
• The execution time of a program combines its computation time and its
communication time.

4
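To make the ratio concrete, here is a minimal Python sketch that computes G for a task. The timing values are illustrative assumptions chosen for this example, not measurements from any real system:

```python
def granularity(t_comp, t_comm):
    """Granularity G = computation time / communication time."""
    return t_comp / t_comm

# Hypothetical timings (in clock cycles) for a single task.
g_fine = granularity(t_comp=1, t_comm=4)     # little work per data exchange
g_coarse = granularity(t_comp=50, t_comm=4)  # much work per data exchange

print(g_fine)    # G < 1: communication dominates (fine-grained)
print(g_coarse)  # G > 1: computation dominates (coarse-grained)
```

A small G means the task spends more time communicating than computing, which is the hallmark of fine-grained parallelism; a large G indicates coarse-grained work.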
CATEGORIES
FINE-GRAINED PARALLELISM

• In fine-grained parallelism, a program is broken down into many small
tasks.
• These tasks are assigned individually to many processors.
• The amount of work associated with each parallel task is low.
• The work is evenly distributed among the processors.
• Fine-grained parallelism facilitates load balancing.

5
FINE-GRAINED PARALLELISM

• Because each task processes only a small amount of data, a large number
of processors is required to perform the complete processing.
• Fine-grained parallelism is best exploited in architectures that
support fast communication.
• Shared-memory architectures, which have low communication
overhead, are the most suitable for fine-grained parallelism.

6
FINE-GRAINED PARALLELISM

• It is difficult for programmers to detect fine-grained parallelism in a program;
• it is usually the compiler's responsibility to detect it.
• An example of a fine-grained system (from outside the parallel
computing domain) is the network of neurons in the brain.

7
FINE-GRAINED PARALLELISM (EXAMPLE)

• Assume 100 processors are responsible for processing a 10×10 image.
Ignoring communication overhead, the 100 processors can process the
image in 1 clock cycle: each processor works on 1 pixel and then
communicates its output to the other processors. This is an example of
fine-grained parallelism.

8
FINE-GRAINED PARALLELISM (PSEUDOCODE FOR 100 PROCESSORS)

9
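The pseudocode on this slide did not survive extraction. As a stand-in, here is a minimal Python sketch of the one-task-per-pixel idea from the example above; the thread pool, the pixel values, and the per-pixel operation (inverting an 8-bit intensity) are my own assumptions, not the original slide's code:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH = HEIGHT = 10  # a 10x10 image: 100 pixels, one tiny task each

def process_pixel(pixel):
    # Trivial per-pixel work: invert an 8-bit intensity value.
    return 255 - pixel

# A flattened 10x10 image with made-up intensity values 0..99.
pixels = [(r * WIDTH + c) % 256 for r in range(HEIGHT) for c in range(WIDTH)]

# Fine grain: as many workers as tasks, each task handling 1 pixel.
with ThreadPoolExecutor(max_workers=WIDTH * HEIGHT) as pool:
    result = list(pool.map(process_pixel, pixels))
```

Note that `ThreadPoolExecutor.map` returns results in input order, so `result` is the processed image in the original pixel order; the per-task work here is deliberately tiny, which is exactly why communication (here, task scheduling) overhead dominates.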
COARSE-GRAINED PARALLELISM

• In coarse-grained parallelism, a program is split into large tasks.
• A large amount of computation takes place within each processor.
• Certain tasks may process the bulk of the data while others sit idle.
• Coarse-grained parallelism can fail to exploit the parallelism in the program, as
most of the computation is performed sequentially on a single processor.
• Message-passing architectures, which take a long time to communicate data
among processes, are well suited to coarse-grained parallelism.

10
COARSE-GRAINED PARALLELISM (EXAMPLE)

• Further, if we reduce the number of processors to 2,
then the processing takes 50 clock cycles.
Each processor must process 50 pixels,
which increases the computation time, but
the communication overhead decreases because
fewer processors share data.
This case illustrates coarse-grained parallelism.

11
COARSE-GRAINED PARALLELISM (PSEUDOCODE FOR
2 PROCESSORS)

12
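The pseudocode on this slide is missing from the text. As a stand-in, here is a minimal Python sketch of the two-processor case described in the preceding example; the thread pool, the chunking scheme, and the per-pixel operation are my own assumptions, not the original slide's code:

```python
from concurrent.futures import ThreadPoolExecutor

PIXELS = list(range(100))  # a flattened 10x10 image with made-up values
NUM_WORKERS = 2            # coarse grain: only 2 large tasks

def process_chunk(chunk):
    # Each task processes its 50 pixels sequentially, with no
    # communication until the whole chunk is done.
    return [255 - p for p in chunk]

half = len(PIXELS) // NUM_WORKERS
chunks = [PIXELS[:half], PIXELS[half:]]

with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
    result = [p for part in pool.map(process_chunk, chunks) for p in part]
```

With only two chunks there are just two result hand-offs, so communication overhead is minimal, but each worker's 50-pixel loop dominates the running time, matching the 50-clock-cycle estimate in the example.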
MEDIUM-GRAINED PARALLELISM

• Medium-grained parallelism is defined relative to fine-grained and
coarse-grained parallelism.
• Medium-grained parallelism is a compromise between fine-grained and
coarse-grained parallelism:
• the task size and the communication time are greater than in fine-grained
parallelism but smaller than in coarse-grained parallelism.

13
MEDIUM-GRAINED PARALLELISM
(EXAMPLE)

• Consider 25 processors processing the 10×10 image. The processing
of the image now takes 4 clock cycles, with each processor handling 4
pixels. This is an example of medium-grained parallelism.

14
MEDIUM-GRAINED PARALLELISM
(PSEUDOCODE FOR 25 PROCESSORS)

15
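The pseudocode on this slide is also missing from the text. The following Python sketch covers the 25-processor case from the preceding example; as before, the thread pool, the chunking scheme, and the per-pixel operation are my own assumptions, not the original slide's code:

```python
from concurrent.futures import ThreadPoolExecutor

PIXELS = list(range(100))           # a flattened 10x10 image
NUM_WORKERS = 25                    # medium grain: 25 tasks
CHUNK = len(PIXELS) // NUM_WORKERS  # 4 pixels per task

def process_chunk(chunk):
    return [255 - p for p in chunk]

chunks = [PIXELS[i:i + CHUNK] for i in range(0, len(PIXELS), CHUNK)]

with ThreadPoolExecutor(max_workers=NUM_WORKERS) as pool:
    result = [p for part in pool.map(process_chunk, chunks) for p in part]
```

The chunk size is the granularity knob: 1 pixel per task gives the fine-grained case, 50 pixels per task the coarse-grained case, and 4 pixels per task this medium-grained compromise.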
THANK YOU
