PARALLEL ALGORITHM
FOR TASK SCHEDULING

PRESENTED BY:
SANIA ZAHRA
SAMRA SHAHID
DUR-E-ADAN MASOOD
OUTLINES:
• Introduction
• History
• Usage & Real Life Examples
• Applications
BACKGROUND INFORMATION

• A parallel algorithm is an algorithm that can execute several instructions simultaneously on different processing devices and then combine the individual outputs to produce the final result.
• The problem is divided into sub-problems, which are executed in parallel to get individual outputs. Later on, these individual outputs are combined to get the final desired output.
• A parallel algorithm assumes that there are multiple processors. These processors may communicate with each other using a shared memory or an interconnection network.
STEPS OF PARALLEL ALGORITHM
SIMULTANEOUS EXECUTION:
• Unlike a serial algorithm, where steps follow a strict order, a parallel algorithm breaks the problem into smaller sub-problems. These sub-problems are then designed to be executed concurrently on different processing units.

COMBINING INDIVIDUAL RESULTS:
• Once the sub-problems are solved independently, the results are combined to reach the final solution for the original problem.
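The divide, execute, combine steps above can be sketched in Python. This is a minimal illustration, not part of the slides: the helper name `parallel_sum` and the use of a thread pool are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # 1. Divide: split the input into roughly equal sub-problems.
    chunk = max(1, len(data) // workers)
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # 2. Execute: solve each sub-problem concurrently on the pool.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, parts))
    # 3. Combine: merge the individual outputs into the final result.
    return sum(partials)
```

The same three-phase shape applies whatever the sub-problem is; only the per-chunk function and the combining step change.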
BENEFITS OF PARALLEL ALGORITHM

• Reduced Execution Time: By dividing the workload and tackling parts concurrently, parallel algorithms can significantly decrease the overall time to solve a problem. This becomes especially beneficial for computationally intensive tasks.

• Improved Scalability: As you add more processors to a computer system, a well-designed parallel algorithm can effectively utilize the additional resources, further speeding up the solution process.
WHAT IS THE SCHEDULING ALGORITHM IN
PARALLEL COMPUTING?
• Parallel algorithms are given for scheduling problems such as
scheduling to minimize the number of tardy jobs, job sequencing
with deadlines, scheduling to minimize earliness and tardiness
penalties, channel assignment, and minimizing the mean finish
time.
A BRIEF HISTORY OF PARALLEL COMPUTING

• The interest in parallel computing dates back to the late 1950's, with
advancements surfacing in the form of supercomputers throughout the
60's and 70's. These were shared memory multiprocessors, with
multiple processors working side-by-side on shared data.
• Parallelism is a computer science concept that is older than Moore's Law. In
fact, it first appeared in print in a 1958 IBM research memo, in which
John Cocke, a mathematician, and Daniel Slotnick, a computer scientist,
discussed parallelism in numerical calculations.
WHY PARALLEL
ALGORITHM?
 To Solve Larger Problems:
Many problems are so large and complex
that it is impossible or impractical to
solve them on a single computer,
especially given limited memory.
WORKING OF PARALLEL TASK SCHEDULING ALGORITHM

Tasks: Discrete units of work that need to be executed.


Processors: Computing units that perform tasks.
Task Dependencies: Some tasks depend on the completion of others before they can start.
Load Balancing: Ensuring that tasks are evenly distributed across processors to avoid idle
times.
Makespan: The total time required to complete all tasks.
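A simple way to see load balancing and makespan together is greedy list scheduling: always hand the next task to the currently least-loaded processor. This sketch (the function name is illustrative, and task dependencies are ignored for simplicity) returns the makespan:

```python
import heapq

def greedy_schedule(task_times, num_procs):
    """Assign each task to the least-loaded processor and
    return the makespan (finishing time of the busiest one)."""
    # Min-heap of (current load, processor id) makes finding
    # the least-loaded processor cheap.
    loads = [(0, p) for p in range(num_procs)]
    heapq.heapify(loads)
    for t in task_times:
        load, p = heapq.heappop(loads)
        heapq.heappush(loads, (load + t, p))
    # Makespan = the largest finishing time over all processors.
    return max(load for load, _ in loads)
```

For tasks with times [2, 3, 5, 7, 1] on two processors, the greedy rule yields a makespan of 10; a perfectly balanced split would give 9, which illustrates that greedy scheduling is fast but not always optimal.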
ANALYSIS OF PARALLEL ALGORITHM

Analysis of an algorithm helps us determine whether the algorithm is useful or not. Parallel
algorithms are designed to improve the computation speed of a computer. For analyzing a
Parallel Algorithm, we normally consider the following parameters −
• Time complexity (Execution Time),
• Total number of processors used, and
• Total cost.
TIME COMPLEXITY

• Execution time is measured as the time taken by the algorithm to solve a problem. The total execution time runs from the moment the algorithm starts executing to the moment it stops. If the processors do not all start or finish at the same time, the total execution time is measured from the moment the first processor starts executing to the moment the last processor stops.
SPEEDUP OF AN ALGORITHM

The performance of a parallel algorithm is determined by calculating its speedup. Speedup is defined as the ratio of the worst-case execution time of the fastest known sequential algorithm for a particular problem to the worst-case execution time of the parallel algorithm.

Speedup = (worst-case execution time of the fastest known sequential algorithm for the problem) / (worst-case execution time of the parallel algorithm)
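As a quick numeric sketch of the ratio above (the function name is illustrative):

```python
def speedup(t_sequential, t_parallel):
    # Speedup = T_seq (fastest known sequential) / T_par.
    return t_sequential / t_parallel

# Example: a problem that takes 100 s sequentially and 25 s
# in parallel has a speedup of 4.
```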
NUMBER OF PROCESSORS USED

The number of processors used is an important factor in analyzing the efficiency of a parallel algorithm. The cost to buy, maintain, and run the computers is calculated. The larger the number of processors an algorithm uses to solve a problem, the more costly the obtained result becomes.
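The analysis section earlier lists total cost as a third parameter. A common definition, assumed here since the slides do not spell it out, is cost = number of processors × parallel execution time:

```python
def total_cost(num_processors, parallel_time):
    # Assumed definition: cost = processors x parallel time.
    # An algorithm is commonly called cost-optimal when this
    # product matches the best sequential running time.
    return num_processors * parallel_time

# Example: 4 processors running for 25 s give a cost of 100.
```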
WORKING OF PARALLEL ALGORITHM
MODELS
Models of parallel algorithms are developed by choosing a strategy for dividing the data and the processing method, and by applying a suitable strategy to reduce interaction.

 Data parallel Model


 Task Graph Model
 Work Pool Model
 Master Slave Model
DATA PARALLEL MODEL
• Tasks are assigned to processors, and each task performs similar types of operations on different data.
• Data parallelism is achieved by applying the same operation to multiple data items.
• Large problems encourage data parallelism because more processors can be applied to the larger data set.
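A minimal data-parallel sketch in Python: every worker applies the same operation to a different piece of the data. The helper name and the thread-pool choice are assumptions, not part of the slides.

```python
from concurrent.futures import ThreadPoolExecutor

def data_parallel_map(func, data, workers=4):
    # Each worker applies the SAME operation `func` to
    # different elements of `data`, concurrently.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, data))
```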
TASK GRAPH MODEL
• Parallelism is expressed by a task graph, which may be either trivial or non-trivial.
• This model suits problems that have a large amount of data but comparatively few computations/tasks.
• Different problems are divided into different tasks to build the graph.
• Each task is an independent unit but may depend on its predecessors.
• A dependent task starts executing only after its predecessor tasks have finished.
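One way to sketch this model is to group tasks into "waves" by topological order: every task in a wave has all of its predecessors in earlier waves, so the tasks within one wave can run in parallel. The helper name and the input format (a map from task to its predecessor list) are assumptions.

```python
from collections import deque

def topological_waves(tasks, predecessors):
    """Group tasks into waves; each wave can run in parallel
    because all dependencies lie in earlier waves."""
    indegree = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for task, preds in predecessors.items():
        for p in preds:
            indegree[task] += 1
            children[p].append(task)
    # Start with tasks that have no predecessors.
    ready = deque(t for t in tasks if indegree[t] == 0)
    waves = []
    while ready:
        wave = list(ready)
        ready = deque()
        for t in wave:
            for c in children[t]:
                indegree[c] -= 1
                if indegree[c] == 0:
                    ready.append(c)
        waves.append(wave)
    return waves
```

For example, if task "c" depends on "a" and "b", and "d" depends on "c", the waves are ["a", "b"], then ["c"], then ["d"].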
WORK POOL MODEL

• Tasks are distributed among processors to balance the load.
• Any processor can execute any task.
• In this model, the amount of computation is large while the data is small.
• Tasks are not pre-assigned to processors, so assignment can be centralized or decentralized.
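A minimal work-pool sketch using one shared queue: workers pull whatever task is available next, so nothing is pre-assigned to any particular worker. The helper names are illustrative.

```python
import queue
import threading

def work_pool(tasks, func, num_workers=3):
    """Workers repeatedly pull tasks from one shared queue
    until it is empty; no task is pre-assigned."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()   # grab the next available task
            except queue.Empty:
                return               # pool is drained; worker exits
            r = func(t)
            with lock:               # results list is shared
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    # Completion order is nondeterministic, so sort for a stable result.
    return sorted(results)
```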
MASTER SLAVE MODEL

• One or more master processes generate tasks and assign them to slave processors.
• Tasks are assigned beforehand.
• The master can estimate the number of operations.
• Random assignment of tasks is preferable.
• Slaves are assigned smaller tasks.
• A master can assign tasks to sub-masters, which further assign them to slaves.
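A minimal master-slave sketch: the master generates the tasks and hands them to slave workers, which only execute what they are given. The helper names and the thread-pool choice are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def master_slave(num_tasks, slave_fn, num_slaves=4):
    # The master generates the tasks up front...
    tasks = list(range(num_tasks))
    # ...and assigns them to slave workers, which only execute.
    with ThreadPoolExecutor(max_workers=num_slaves) as slaves:
        results = list(slaves.map(slave_fn, tasks))
    return results
```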
Thanks
