Presented by
SANIA ZAHRA
SAMRA SHAHID
DUR-E-ADAN MASOOD
PARALLEL ALGORITHM
FOR TASK SCHEDULING
OUTLINES:
• Introduction
• Usage & Real Life Examples
• Applications & History
BACKGROUND INFORMATION
• A parallel algorithm is an algorithm that can execute several instructions simultaneously on different
processing devices and then combine all the individual outputs to produce the final result.
• The problem is divided into sub-problems, which are executed in parallel to produce individual outputs. These individual outputs are then combined to get the final desired output.
A parallel algorithm assumes that there are multiple
processors. These processors may communicate with
each other using a shared memory or an
interconnection network.
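As an illustration of the shared-memory style of communication, here is a minimal Python sketch (the worker function and the shared counter are our own example, not part of the slides) in which several processes contribute partial results through a value held in shared memory.

```python
from multiprocessing import Process, Value, Lock

def worker(shared_total, lock, amount):
    """Each process adds its partial contribution to a shared counter."""
    with lock:                      # avoid lost updates from concurrent writes
        shared_total.value += amount

if __name__ == "__main__":
    total = Value("i", 0)           # integer living in shared memory
    lock = Lock()
    procs = [Process(target=worker, args=(total, lock, n)) for n in (1, 2, 3, 4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(total.value)              # 10: individual outputs combined via shared memory
```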
STEPS OF PARALLEL ALGORITHM
SIMULTANEOUS EXECUTION:
• Unlike a serial algorithm, where steps follow a strict order, a parallel algorithm breaks the problem into smaller sub-problems. These sub-problems are then designed to be executed concurrently on different processing units.
COMBINING INDIVIDUAL RESULTS:
• Once the sub-problems are solved independently, the results are combined to reach the final solution for the original problem.
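To make the two steps concrete, here is a minimal Python sketch (our own example, not from the slides): a large sum is split into sub-problems, the partial sums are computed concurrently on separate processes, and the individual results are then combined into the final answer.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Solve one sub-problem: sum a slice of the data."""
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1, 1_000_001))
    n_workers = 4
    step = len(data) // n_workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    # Simultaneous execution: each sub-problem runs on a different process.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(partial_sum, chunks))

    # Combining individual results: merge the partial outputs.
    total = sum(partials)
    print(total)   # same answer a serial sum would give
```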
HISTORY OF PARALLEL ALGORITHM
• The interest in parallel computing dates back to the late 1950s, with advancements surfacing in the form of supercomputers throughout the '60s and '70s. These were shared-memory multiprocessors, with multiple processors working side by side on shared data.
• Parallelism is a computer science concept that is older than Moore's Law. In fact, it first appeared in print in a 1958 IBM research memo in which John Cocke, a mathematician, and Daniel Slotnick, a computer scientist, discussed parallelism in numerical calculations.
WHY PARALLEL ALGORITHM?
To Solve Larger Problems:
Many problems are so large and complex that it is impossible or impractical to solve them on a single computer, especially given limited memory.
WORKING OF PARALLEL TASK SCHEDULING ALGORITHM
Analysis of an algorithm helps us determine whether the algorithm is useful or not. Parallel
algorithms are designed to improve the computation speed of a computer. For analyzing a
parallel algorithm, we normally consider the following parameters:
• Time complexity (Execution Time),
• Total number of processors used, and
• Total cost.
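The standard way these parameters relate (speedup = sequential time / parallel time, efficiency = speedup / number of processors, cost = number of processors × parallel time) can be put into a small helper. This is an illustrative sketch; the figures below are made-up placeholders, not measurements from the slides.

```python
def analyze(sequential_time, parallel_time, processors):
    """Basic metrics used when analyzing a parallel algorithm."""
    speedup = sequential_time / parallel_time
    efficiency = speedup / processors
    cost = processors * parallel_time          # total processor-time consumed
    return speedup, efficiency, cost

# Hypothetical figures for illustration only.
s, e, c = analyze(sequential_time=8.0, parallel_time=2.5, processors=4)
print(f"speedup={s:.2f}, efficiency={e:.2f}, cost={c:.2f} processor-seconds")
```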
TIME COMPLEXITY
• Execution time is measured on the basis of the time taken by the algorithm to solve a problem. The total execution time is calculated from the moment the algorithm starts executing to the moment it stops. If all the processors do not start or end execution at the same time, then the total execution time of the algorithm is measured from the moment the first processor starts its execution to the moment the last processor stops its execution.
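A rough way to measure this in practice is sketched below using Python's standard library (the workload itself is an arbitrary placeholder): each worker records when it starts and stops, and the total execution time is taken from the earliest start to the latest stop.

```python
import time
from concurrent.futures import ProcessPoolExecutor

def timed_task(n):
    """Return this worker's start and stop times around some work."""
    start = time.time()
    total = sum(i * i for i in range(n))       # placeholder workload
    stop = time.time()
    return start, stop

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as pool:
        spans = list(pool.map(timed_task, [2_000_000] * 4))

    first_start = min(s for s, _ in spans)     # first processor starts
    last_stop = max(e for _, e in spans)       # last processor stops
    print(f"total execution time: {last_stop - first_start:.3f} s")
```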
NUMBER OF PROCESSORS USED
The number of processors used is an important factor in analyzing the efficiency of a parallel algorithm. The cost to buy, maintain, and run the computers is calculated. The larger the number of processors an algorithm uses to solve a problem, the more costly the obtained result becomes.
WORKING OF PARALLEL ALGORITHM MODELS
The models of parallel algorithms are developed by considering a strategy for dividing the data and the processing method, and by applying a suitable strategy to reduce interaction.
MASTER-SLAVE MODEL
• One or more master processes generate tasks and assign those tasks to slave processes (a sketch of this model follows the list).
• Tasks are assigned beforehand.
• The master can estimate the number of operations.
• Random assignment of tasks is preferable.
• Slaves are assigned smaller tasks.
• One master can assign tasks to sub-masters, which in turn assign them to slaves.
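Here is a minimal Python sketch of the master-slave idea (the squaring task and all names are our own illustration, not prescribed by the model): the master process generates tasks and places them on a queue, the slaves pull tasks from the queue and solve them, and the master collects the individual results.

```python
from multiprocessing import Process, Queue

def slave(task_queue, result_queue):
    """Slave: repeatedly take a task, solve it, and report the result."""
    while True:
        task = task_queue.get()
        if task is None:            # sentinel from the master: no more work
            break
        result_queue.put(task * task)

if __name__ == "__main__":
    tasks, results = Queue(), Queue()
    n_slaves = 3
    slaves = [Process(target=slave, args=(tasks, results)) for _ in range(n_slaves)]
    for p in slaves:
        p.start()

    # Master: generate tasks and assign them (here, beforehand) to the slaves.
    jobs = list(range(10))
    for job in jobs:
        tasks.put(job)
    for _ in slaves:
        tasks.put(None)             # one stop signal per slave

    # Master: combine the individual outputs.
    print(sorted(results.get() for _ in jobs))
    for p in slaves:
        p.join()
```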
Thanks